```cpp
#include <cctype>
#include <iostream>
using namespace std;

int largest();
void smallest();
char options();

int main()
{
    char choice;

    do {
        choice = options();

        switch (choice) {
            case 'a': largest(); break;
            case 'A': largest(); break;
            case 'b': smallest(); break;
            case 'B': smallest(); break;
        }
    } while ((choice != 'C') && (choice != 'c'));

    return 0;
}

char options()
{
    char choice;

    cout << endl;
    cout << "Enter 'A' to find the largest number with a known quantity of numbers." << endl;
    cout << "Enter 'B' to find the smallest number with an unknown quantity of numbers." << endl;
    cout << "Enter 'C' to Quit." << endl;
    cout << endl;
    cout << "Please enter your choice: ";
    cout << endl;
    cin >> choice;

    return choice;
}

int largest()
{
    int largest = -9999999;
    int numbers = 0, input, C = 0;

    cout << endl;
    cout << "How many numbers would you like to enter? ";
    cin >> numbers;

    for (C = 0; C < numbers; C++) {
        cout << "Please Enter Number " << C + 1 << ": ";
        cin >> input;
        if (input > largest)
            largest = input;
    }

    cout << endl;
    cout << "The Largest number is " << largest << "." << endl;

    return numbers;
}

void smallest()
{
    int smallest = 9999999;
    int input1 = 0, C1 = 1;

    cout << endl;
    cout << "Enter as many numbers as you want." << endl;
    cout << "When you wish to stop entering numbers, enter -99." << endl;
    cout << endl;

    do {
        cout << "Please enter number " << C1 << ": ";
        cin >> input1;
        C1++;
        if ((input1 < smallest) && (input1 != -99))
            smallest = input1;
    } while (input1 != -99);

    cout << endl;
    cout << "The Smallest number is " << smallest << "." << endl;
}
```

My teacher says that I need two functions and parameters in "int largest". I thought I did have these, but apparently I do not. He also said that I need two return functions? I am lost, any thoughts?

Specific guidelines for the homework are: Modify your week 5 assignment to include a function for both A and B. The function for A should have the quantity of numbers passed in as a parameter and needs to return the largest number.
The function for B should have no parameters and return the smallest number. Also: the code for case 'A' and 'B' now needs to be a function call rather than part of the "main" program. You need to accomplish the following: 1) Start with a working week 5 program. 2) Declare the function "largest" at the top of the program. Your week 5 program will stay as is until the point where, once in the switch, the user selects A and enters the numbers he/she wants to enter. At that point you will call largest. Notice that the function that we called "largest" should be declared at the top of the file (your program), but the body of the function is at the end of the file. 3) "The function for B should have no parameters and return the smallest number." Having successfully completed the function for case 'A', this should be a piece of cake. For case 'B' no parameter is passed, so the declaration of the function at the top will look like this: int smallest(); Once the user selects B, you just make the call to smallest and the code that used to be in main gets executed. Please note that there is a solution floating around on the web that uses functions. That solution will not apply to what we are doing this week, because the functions they created are different from what we expect from you here. ------------------------------------------------------------------------------- In advance, thanks for whatever help you can give me. This post has been edited by Vision_Dreamer: 30 June 2008 - 07:55 PM
http://www.dreamincode.net/forums/topic/56322-functions-and-parameters/
String hashing

Reading time: 20 minutes | Coding time: 5 minutes

Hashing is an important technique that converts any object into an integer within a given range. Hashing is the key idea behind hash maps, which provide searching in any dataset in O(1) time complexity. Hashing is widely used in a variety of problems, as we can map any data to an integer upon which we can do arithmetic operations, or use it as an index for data structures. We will take a look at techniques to hash a string, that is, to convert a string to an integer.

Ideally, a hashing technique has the following properties:

- If S is the object and H is the hash function, then the hash of S is denoted by H(S).
- If there are two distinct objects S1 and S2, ideally H(S1) should not be equal to H(S2).
- In some cases H(S1) can be equal to H(S2); this is called a collision, and it can be minimized and handled as well.
- If the range of the hash function H is 0 to M-1, then the computed value is reduced modulo M.

You can consider S to be a string for our case. One of the most common applications of hashing strings is comparing them. Comparing strings of length N takes O(N) time complexity, but comparing integers takes O(1) time complexity. Hence, comparing hashes of strings takes O(1) time complexity.

Hash of a string

For a string of length N, a strong hash function is defined as:

$$ H(S) = ( S[0] + S[1] \cdot P + S[2] \cdot P^{2} + \dots + S[N-1] \cdot P^{N-1} ) \bmod M $$

$$ H(S) = \left( \sum_{i=0}^{N-1} S[i] \cdot P^{i} \right) \bmod M $$

where:

- P is a prime number (say 3)
- S[i] is the ASCII value of the character at index i of S (the string)
- M is the upper limit/range of the hash function

Why does this hash function minimize collisions? Multiplying each character code by a distinct power of a prime P makes characters at different positions contribute distinct terms to the sum, so two different strings rarely produce the same value. Collisions cannot be avoided entirely once the sum is reduced modulo M, but with a prime base and a large modulus they become rare.
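As a quick worked example of the formula above (taking P = 3 and the ASCII values 'a' = 97 and 'b' = 98, with M large enough that the modulus does not apply):

$$ H(\text{"ab"}) = 97 \cdot 3^{0} + 98 \cdot 3^{1} = 97 + 294 = 391 $$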
This hash function is commonly referred to as a polynomial rolling hash function. The probability that two random strings collide is roughly 1/M; hence, M should be a large number.

Reduce collision further

To reduce collisions further, we can compute two hash values using two different values of P (like 3 and 5) and use both for comparison.

Pseudocode

The pseudocode is as follows (note that here each character is mapped to (c - 'a' + 1) rather than its raw ASCII value; any consistent mapping of characters to small positive integers works):

```
long hash(string s) {
    const int p = 3;
    const int m = 1000000009;
    long hash = 0;
    long pow = 1;
    for (each character c in s) {
        hash = (hash + (c - 'a' + 1) * pow) % m;
        pow = (pow * p) % m;
    }
    return hash;
}
```

Complexity

The complexity of the above algorithm is:

- Time complexity: O(N), where N is the length of the string
- Space complexity: O(1)

The powers of the prime can be precomputed to give a further boost.

Implementations

```cpp
#include <iostream>
#include <string>
using namespace std;

long long compute_hash(string s, int p) {
    const int m = 1e9 + 9;
    long long hash = 0;
    long long pow = 1;
    for (char c : s) {
        hash = (hash + (c - 'a' + 1) * pow) % m;
        pow = (pow * p) % m;
    }
    return hash;
}

int main() {
    cout << "Hash of opengenus is: " << compute_hash("opengenus", 3);
    return 0;
}
```

Output:

```
Hash of opengenus is: 183060
```

Applications

There are several benefits of being able to compute hashes of strings. Some of them are:

- Comparing two strings in O(1) time complexity
- Rabin-Karp pattern matching in a string in O(N)
- Calculating the number of distinct substrings of a string in O(N^2 log N)
- Calculating the number of palindromic substrings in a string
https://iq.opengenus.org/string-hashing/
Backbone.Radio

Backbone.Radio provides additional messaging patterns for Backbone applications. Backbone includes an event system, Backbone.Events, which is an implementation of the publish-subscribe pattern. Pub-sub is by far the most common event pattern in client-side applications, and for good reason: it is incredibly useful. It should also be familiar to web developers in particular, because the DOM relies heavily on pub-sub. Consider, for instance, registering a handler on an element's click event. This isn't so much different from listening to a Model's change event, as both of these situations are using pub-sub.

Backbone.Radio adds two additional messaging-related features. The first is Requests, an implementation of the request-reply pattern. Request-reply should also be familiar to web developers, as it's the messaging pattern that backs HTTP communications. The other feature is Channels: explicit namespaces for your communications.

Installation

Clone this repository or install via Bower or npm.

```
bower install backbone.radio
npm install backbone.radio
```

You must also ensure that Backbone.Radio's dependencies, Underscore (or Lodash) and Backbone, are installed.

Documentation

- Getting Started
- API

Getting Started

Backbone.Events

Anyone who has used Backbone should be quite familiar with Backbone.Events. Backbone.Events is what facilitates communications between objects in your application. The quintessential example of this is listening in on a Model's change event.

```js
// Listen in on a model's change event
this.listenTo(someModel, 'change', myCallback);

// Later on, the model triggers a change event when it has been changed
someModel.trigger('change');
```

It goes without saying that Backbone.Events is incredibly useful when you mix it into instances of Classes. But what if you had a standalone Object with an instance of Backbone.Events on it? This gives you a powerful message bus to utilize.
```js
// Create a message bus
var myBus = _.extend({}, Backbone.Events);

// Listen in on the message bus
this.listenTo(myBus, 'some:event', myCallback);

// Trigger an event on the bus
myBus.trigger('some:event');
```

As long as there is an easy way to access this message bus throughout your entire application, you have a central place to store a collection of events. This is the idea behind Channels. But before we go more into that, let's take a look at Requests.

Backbone.Radio.Requests

Requests is similar to Events in that it's another event system, and it has a similar API, too. For this reason, you could mix it into an object.

```js
_.extend(myObject, Backbone.Radio.Requests);
```

Although this works, I wouldn't recommend it. Requests are most useful, I think, when they're used with a Channel. Perhaps the biggest difference between Events and Requests is that Requests have intention. Unlike Events, which notify nothing in particular about an occurrence, Requests are asking for a very specific thing to occur. As a consequence of this, requests are 'one-to-one,' which means that you cannot have multiple 'listeners' to a single request. Let's look at a basic example.

```js
// Set up an object to reply to a request. In this case, whether or not it is visible.
myObject.reply('visible', this.isVisible);

// Get whether it's visible or not.
var isViewVisible = myObject.request('visible');
```

The handler in reply can either return a flat value, like true or false, or a function to be executed. Either way, the value is sent back to the requester. Although the name is 'Requests,' you can just as easily request information as you can request that an action be completed. Just like HTTP, where you can both make GET requests for information, or DELETE requests to order that a resource be deleted, Requests can be used for a variety of purposes. One thing to note is that this pattern is identical to a simple method call.
One can just as easily rewrite the above example as:

```js
// Set up a method...
myObject.isVisible = function() {
  return this.viewIsVisible;
};

// Call that method
var isViewVisible = myObject.isVisible();
```

This is why mixing Requests into something like a View or Model does not make much sense. If you have access to the View or Model, then you might as well just use methods.

Channels

The real draw of Backbone.Radio is Channels. A Channel is simply an object that has Backbone.Events and Radio.Requests mixed into it: it's a standalone message bus comprised of both systems. Getting a handle on a Channel is easy.

```js
// Get a reference to the channel named 'user'
var userChannel = Backbone.Radio.channel('user');
```

Once you've got a channel, you can attach handlers to it.

```js
userChannel.on('some:event', myCallback);
userChannel.reply('some:request', myOtherCallback);
```

You can also use the 'trigger' methods on the Channel.

```js
userChannel.trigger('some:event');
userChannel.request('some:request');
```

You can have as many channels as you'd like:

```js
// Maybe you have a channel for the profile section of your app
var profileChannel = Backbone.Radio.channel('profile');

// And another one for settings
var settingsChannel = Backbone.Radio.channel('settings');
```

The whole point of Channels is that they provide a way to explicitly namespace events in your application, and a means to easily access any of those namespaces.

Using With Marionette

Marionette does not use Radio by default, although it will in the next major release: v3. However, you can use Radio today by including a small shim after you load Marionette, but before you load your application's code. To get the shim, refer to this Gist.

API

Like Backbone.Events, all of the following methods support both the object syntax and the space-separated syntax. For the sake of brevity, I only provide examples for these alternate syntaxes in the most common use cases.

Requests

request( requestName [, args...] )

Make a request for requestName. Optionally pass arguments to send along to the callback. Returns the reply, if one exists. If there is no reply registered then undefined will be returned.
You can make multiple requests at once by using the space-separated syntax.

```js
myChannel.request('some:request some:other:request');
```

When using the space-separated syntax, the responses will be returned to you as an object, where the keys are the names of the requests and the values are the replies.

reply( requestName, callback [, context] )

Register a handler for requestName on this object. callback will be executed whenever the request is made. Optionally pass a context for the callback, defaulting to this. To register a default handler for Requests, use the default requestName. The unhandled requestName will be passed as the first argument.

```js
myChannel.reply('default', function(requestName) { /* ... */ });

// This will be handled by the default reply
myChannel.request('some:unhandled:request');
```

To register multiple requests at once you may also pass in a hash.

```js
// Connect all of the replies at once
myChannel.reply({
  'some:request': myCallback,
  'some:other:request': myOtherCallback
});
```

Returns the instance of Requests.

replyOnce( requestName, callback [, context] )

Register a handler for requestName that will only be called a single time. Like reply, you may also pass a hash of replies to register many at once. Refer to the reply documentation above for an example. Returns the instance of Requests.

stopReplying( [requestName] [, callback] [, context] )

If context is passed, then all replies with that context will be removed from the object. If callback is passed, then all replies with that callback will be removed. If requestName is passed, then this method will remove that reply. If no arguments are passed, then all replies are removed from the object. You may also pass a hash of replies or space-separated replies to remove many at once. Returns the instance of Requests.

Channel

channelName

The name of the channel.

reset()

Destroy all handlers from Backbone.Events and Radio.Requests from the channel. Returns the channel.

Radio

channel( channelName )

Get a reference to a channel by name. If a name is not provided, an Error will be thrown.

```js
var authChannel = Backbone.Radio.channel('auth');
```

DEBUG

This is a Boolean property.
Setting it to true will cause console warnings to be issued whenever you interact with a request that isn't registered. This is useful in development when you want to ensure that you've got your event names in order.

```js
// Turn on debug mode
Backbone.Radio.DEBUG = true;

// This will log a warning to the console if it goes unhandled
myChannel.request('some:unregistered:request');

// Likewise, this will too, helping to prevent memory leaks
myChannel.stopReplying('some:unregistered:request');
```

debugLog( warning, eventName, channelName )

A function executed whenever an unregistered request is interacted with on a Channel. Only called when DEBUG is set to true. By overriding this you could, for instance, make unhandled events throw Errors. The warning is a string describing the type of problem, such as: Attempted to remove the unregistered request, while the eventName and channelName are what you would expect.

tuneIn( channelName )

Tuning into a Channel is another useful tool for debugging. It passes all triggers and requests made on the channel to Radio.log. Returns Backbone.Radio.

```js
Backbone.Radio.tuneIn('calendar');
```

tuneOut( channelName )

Once you're done tuning in, you can call tuneOut to stop the logging. Returns Backbone.Radio.

```js
Backbone.Radio.tuneOut('calendar');
```

log( channelName, eventName [, args...] )

When tuned into a Channel, this method will be called for all activity on the channel. The default implementation is to console.log the following message: '[channelName] "eventName" arg1 arg2 arg3...', where args are all of the arguments passed with the message. It is exposed so that you may overwrite it with your own logging message if you wish.

'Top-level' API

If you'd like to execute a method on a channel, yet you don't need to keep a handle of the channel around, you can do so with the proxy functions directly on the Backbone.Radio object.

```js
// Trigger 'some:event' on the settings channel
Backbone.Radio.trigger('settings', 'some:event');
```

All of the methods for both messaging systems are available from the top-level API.
reset( [channelName] )

You can also reset a single channel, or all Channels, from the Radio object directly. Pass a channelName to reset just that specific channel, or call the method without any arguments to reset every channel.

```js
// Reset all channels
Backbone.Radio.reset();
```
https://preview.npmjs.com/package/backbone.radio
Flex 101 with Flash Builder 4: Part 1

This article is intended for developers who have never used Flex before and would like to work through a "Hello World" kind of tutorial. It does not aim to cover Flex and Flash Builder 4 (FB4) in detail, but rather focuses on the mechanics of FB4 and on getting an application running with minimal effort. For developers familiar with Flex and the predecessor to Flash Builder 4 (Flex Builder 2 or 3), it contains an introduction to FB4 and some differences in the way you go about building Flex applications using FB4. Even if you have not programmed before and are looking at understanding how to make a start in developing applications, this will serve as a good start.

The Flex Ecosystem

The Flex ecosystem is a set of libraries, tools, languages and a deployment runtime that provides an end-to-end framework for designing, developing and deploying RIAs. All of these together are branded as part of the Flash platform. In its latest release, Flex 4, special effort has been put into the designer-to-developer workflow by letting graphic designers address layout, skinning, effects and the general look and feel of your application, with developers then taking over to address the application logic, events, and so on. To understand this at a high level, take a look at the diagram shown below. It is a very simplified diagram, intended to give a 10,000 ft view of the development, compilation and execution process. Let us walk through the diagram now:

- The developer will typically work in the Flash Builder application. Flash Builder is the Integrated Development Environment (IDE) that provides an environment for coding, compiling and running/debugging your Flex-based applications.
- Your Flex application will typically consist of MXML and ActionScript code.
ActionScript is an ECMAScript-compatible, object-oriented language, whereas MXML is an XML-based markup language.

- Using MXML you can define and lay out your visual components like buttons, comboboxes, data grids, and others. Your application logic will typically be coded inside ActionScript classes/methods.
- While coding your Flex application, you will make use of the Flex framework classes that provide most of the core functionality. Additional libraries like the Flex Charting libraries and third-party components can be used in your application too.
- Flash Builder compiles all of this into object byte code that can be executed inside the Flash Player. Flash Player is the runtime host that executes your application.

This is a high-level introduction to the ecosystem, and as we work through the samples later in the article, things will start falling into place.

Flash Builder 4

Flash Builder is the new name for the development IDE previously known as Flex Builder. The latest release is 4, and it is currently in public beta. Flash Builder 4 is based on the Eclipse IDE, so if you are familiar with Eclipse-based tools, you will be able to navigate your way quite easily. Flash Builder 4, like Flex Builder 3 before it, is a commercial product, and you need to purchase a development license. FB4 is currently in public beta and is available as a 30-day evaluation. Through the rest of the article, we will make use of FB4 and will be focused completely on using it to build and run the sample applications. Let us now look at setting up FB4.

Setting up your Development Environment

To set up Flash Builder 4, follow these steps:

- The first step should be installing Flash Player 10 on your system. We will be developing with the Flex 4 SDK that comes along with Flash Builder 4, and it requires Flash Player 10. You can download the latest version of Flash Player from here:
- Download Flash Builder 4 Public Beta from.
The page is shown below: After you download, run the installer program and proceed with the rest of the installation. Launch the Adobe Flash Builder Beta. It will first prompt you with a message that it is a trial version, as shown below: To continue in evaluation mode, select the option highlighted above and click Next. This will launch the Flash Builder IDE.

Let us start coding with the Flash Builder 4 IDE. We will stick to tradition and write the "Hello World" application.

Hello World using Flash Builder 4

In this section, we will develop a basic Hello World application. While the application does not do much, it will help you get comfortable with the Flash Builder IDE. Launch the Flash Builder IDE. We will be creating a Flex Project; Flash Builder will help us create the project that will contain all our files. To create a new Flex Project, click File → New → Flex Project as shown below: This will bring up a dialog in which you will need to specify more details about the Flex Project that you plan to develop. The dialog is shown below: You will need to provide at least the following information:

- Project Name: This is the name of your project. Enter any name that you want here. In our case, we have named our project MyFirstFB4App.
- Application Type: We can develop both a web version and a desktop version of our application using Flash Builder. The web application will run inside a web browser and execute within the Flash Player plug-in. We will go with the Web option here. The desktop application runs inside the Adobe Integrated Runtime environment and can have more desktop-like features. We will skip that option for now.
- We will let the other options remain as they are. We will use the Flex 4.0 SDK, and since we are not currently integrating with any server-side layer, we will leave that option as None/Other.

Click on Finish at this point to create your Flex Project.
This will create a main application file called MyFirstFB4App.mxml, as shown below. We will come back to our coding a little later, but first we must familiarize ourselves with the Flash Builder IDE. Let us first look at the Package Explorer to understand the files that have been created for the Flex Project. The screenshot is shown below:

It consists of the main source file MyFirstFB4App.mxml. This is the main application file, or in other words the bootstrap. All your source files (MXML and ActionScript code, along with assets like images) should go under the src folder. They can optionally be placed in packages too.

The Flex 4.0 framework consists of several libraries that you compile your code against. You will end up using its framework code, components (visual and non-visual) and other classes. These classes are packaged in library files with the extension .swc. A list of library files is shown above. You do not typically need to do anything with them. Optionally, you can also use third-party components written by other companies and developers that are not part of the Flex framework. These libraries are packaged as .swc files too, and they can be placed in the libs folder as shown in the previous screenshot.

The typical workflow is to write and compile your code, that is, build your project. If your build is successful, the object code is generated in the bin-debug folder. When you deploy your application to a web server, you will need to pick up the contents of this folder. We will come to that a little later. The html-template folder contains some boilerplate code, including the container HTML from which your object code will be referenced. It is possible to customize this, but we will not discuss that for now.

Double-click the MyFirstFB4App.mxml file. This is our main application file.
The code listing is given below:

```xml
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx">
</s:Application>
```

As discussed before, you will typically write one or more MXML files that contain your visual components (although there can be non-visual components also). By visual components, we mean controls like buttons, comboboxes, lists, trees, and others. An MXML file can also contain layout components and containers that help you lay out your design as per the application screen design. To view which components you can place on the main application canvas, select the Design view.

In the Properties panel, we can change several key attributes. All controls can be uniquely identified and addressed in your code via the ID attribute. This is a unique name that you need to provide. Go ahead and give it some meaningful name; in our case, we name it btnSayHello. Next, we can change the label so that instead of Button, it displays a message, for example Say Hello. Finally, we want to wire up some code such that if the button is clicked, we can perform some action like displaying a message box saying Hello World. To do that, click the icon next to the On click edit field as shown below. It will present two options; select the option for Generate Event Handler. This will generate the code and switch to the Source view. The code is listed below for your reference:

```xml
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx">
    <fx:Script>
        <![CDATA[
            protected function btnSayHello_clickHandler(event:MouseEvent):void
            {
                // TODO Auto-generated method stub
            }
        ]]>
    </fx:Script>
    <s:Button id="btnSayHello" label="Say Hello"
              click="btnSayHello_clickHandler(event)"/>
</s:Application>
```

There are a few things to note here. As mentioned, most of your application logic will be written in ActionScript, and that is exactly what Flash Builder has generated for you. All such code is typically added inside a scripting block marked with the <fx:Script> tag.
You can place your ActionScript methods here so they can be used by the rest of the application. When we clicked on Generate Event Handler, Flash Builder generated the event handler code for us. This code is in ActionScript and was appropriately placed inside the <fx:Script> block. If you look at the code, you can see that it has added a function that is invoked when the click event is fired on the button. The method is btnSayHello_clickHandler, and for now it has an empty body, that is, no implementation.

Let us run the application now to see what it looks like. To run the application, click on the Run icon in the main toolbar of Flash Builder. This will launch the web application as shown below. Clicking the Say Hello button will not do anything at this point, since there is no code written inside the handler, as we saw above. To display the message box, we add the code shown below (only the Script section is shown):

```xml
<fx:Script>
    <![CDATA[
        import mx.controls.Alert;

        protected function btnSayHello_clickHandler(event:MouseEvent):void
        {
            Alert.show("Hello World");
        }
    ]]>
</fx:Script>
```

We use one of the classes (called Alert) from the Flex framework. Like in any other language, we need to specify which package we are using the class from, so that the compiler can understand it. The Alert class belongs to the mx.controls package, and it has a static method called show() which takes a single parameter of type String. This String parameter is the message to be displayed, in our case "Hello World". To run this, press Ctrl+S to save your file (or File → Save from the main menu) and click on the Run icon in the main toolbar. This will launch the application, and on clicking the Say Hello button, you will see the Hello World alert window as shown below.

Yahoo! News Application

Let us write another application; we will call it the Yahoo! News application. This application shows a list of current news items from Yahoo! that have been classified as "most emailed". Yahoo!
provides this list in the form of an RSS feed. The RSS feed for the same is:. Let us first look at the application when it is ready, so that we can understand what we are trying to build. The snapshot is shown below:

The application is simple. It has a button called Fetch News which, when clicked, will connect to the internet, access the RSS feed and display the news items. It shows the news items using a DataGrid component that can display rows of data comprising several columns. In our case, each row is one news item, and we are currently displaying only one column: the title of the news item.

Let us build this application now. To start off, we will create another application, which we shall call YahooNews. You do not need to create another project; you can stay within the MyFirstFB4App project and create a new MXML application by going to the main menu and selecting File → New → MXML Application. This will bring up a dialog window as shown below. Give the filename as YahooNews and select the layout spark.layouts.VerticalLayout (this will arrange all your components vertically at the main application canvas level). Finally, click on the Finish button. This will generate the same boilerplate code that we saw earlier.

Go to the Design view, and from the Components tab first drag and drop a Button from the Controls tree node. The properties of the button are shown below: Create a click handler for the button as shown below: Go back to the Design view and drop a DataGrid control from the Components tab as shown below. Note that the DataGrid component is present in the Data Controls tree node. The DataGrid component is like a table: visually, it can display rows of data where each row can comprise one or more columns. The Design view is shown below. By default the DataGrid shows 3 sample columns, and we need to restrict that to just one column, in which we wish to show the title. We wish to name the column header "News Title".
You can modify the DataGrid properties by selecting the DataGrid control and viewing the Properties tab. First, let us set the ID to dgNews. Then set the width and height of this control to 100% each. This means that it will occupy the whole application screen even if you resize it. You can view the current columns and modify (add/delete/edit) them by clicking on the Configure Columns button in the Properties tab. That will bring up the current columns, as you can see below: We need only one column, so to delete Column 2 and Column 3, simply select each item and click on the Delete button. Additionally, select Column 1 and change the following two properties:

- Bind to field: This is the attribute of the row object. For example, we are going to assign a list of news items, that is, a collection of news items, to the DataGrid. Each row is a news item object, and each item object consists of several attributes like title, pubDate, category, and others as per the RSS definition. In this case we are interested only in the title attribute or property.
- Header Text: This is the text that is shown in the header column for the DataGrid. We will call it News Title.

Click on OK to commit the changes. The final step is to implement the code in the handler that we auto-generated for when the Fetch News button is clicked. Switch over to the Source view to take a look at the current code generated for you. Let us go through the code:

- Take a look at the Fetch News button code shown below:

```xml
<s:Button id="btnFetchYahooNews" label="Fetch News"
          click="btnFetchYahooNews_clickHandler(event)"/>
```

```actionscript
protected function btnFetchYahooNews_clickHandler(event:MouseEvent):void
{
    // TODO Auto-generated method stub
}
```

- Invoke the RSS Feed:.
- Retrieve the result and assign the RSS news items to the DataGrid.
- To do that, we will need to use the HTTPService from the mx namespace in the Flex framework.
The HTTPService is a non-visual component and hence it is defined under a <fx:Declarations> tag, as given below:

<fx:Declarations>
    <mx:HTTPService id="YahooNewsService"
        url="[the RSS feed URL]"
        method="GET"
        result="YahooNewsService_resultHandler(event)"
        fault="YahooNewsService_faultHandler(event)"/>
</fx:Declarations>

The <fx:Declarations> tag is created directly under the main <s:Application> tag. We first have an id attribute, which is a unique name for the service. Then comes the url attribute, which points to the RSS feed. We are using the GET method for the HTTP operation. Finally, two critical events are handled. One of them is the result event, which is invoked when the HTTP request has been successful and a response is returned. The other is the fault event, which is invoked when the HTTP request could not be completed and there was an error.

Let us now look at the complete code, where we hook up the Fetch News button to the HTTPService and bind the result of the HTTP response to the DataGrid:

<fx:Script>
    <![CDATA[
        import mx.controls.Alert;
        import mx.rpc.events.ResultEvent;
        import mx.rpc.events.FaultEvent;

        //This handler is invoked when the Fetch News button is clicked
        protected function btnFetchYahooNews_clickHandler(event:MouseEvent):void
        {
            //Send the HTTP Request by invoking the send() method
            YahooNewsService.send();
        }

        //When the HTTP Response is received, bind the result to the DataGrid's dataProvider
        protected function YahooNewsService_resultHandler(event:ResultEvent):void
        {
            dgNews.dataProvider = event.result.rss.channel.item;
        }

        protected function YahooNewsService_faultHandler(event:FaultEvent):void
        {
            Alert.show("Error in fetching Yahoo News.");
        }
    ]]>
</fx:Script>

<s:layout>
    <s:VerticalLayout/>
</s:layout>

<s:Button id="btnFetchYahooNews" label="Fetch News"
    click="btnFetchYahooNews_clickHandler(event)"/>

<mx:DataGrid id="dgNews" width="100%" height="100%">
    <mx:columns>
        <mx:DataGridColumn dataField="title" headerText="News Title"/>
    </mx:columns>
</mx:DataGrid>
</s:Application>

This code is straightforward: when the button is clicked, the method btnFetchYahooNews_clickHandler will be invoked. Let us go through the key points:

- The btnFetchYahooNews_clickHandler method is invoked when the Fetch News button is clicked.
In this method, we send the HTTP request by invoking the send() method on the HTTPService instance, YahooNewsService.

- As mentioned, the result event is fired when the HTTP response is available. The response is available in the event parameter that is passed to the result handler. Since this is a standard RSS feed, a collection of items is available under the event.result.rss.channel.item object.
- This collection is then assigned to the DataGrid. To assign it to the DataGrid, we simply pass it to the dataProvider attribute of the grid. Since we have bound the column to only the title attribute, the DataGrid takes care of the column rendering for us.

To run the application, simply click on the Run icon in the main toolbar. A sample output is shown below.

>> Continue Reading Flex 101 with Flash Builder 4: Part 2

About the Author: Romin Irani is a software developer at heart, living and working in Mumbai, India. He has been developing software for 15 years now and still wakes up every morning to learn something new in the ever-changing world of software development. He has been a big fan of all things related to Flex ever since he came across it 3 years back. He has a one-point agenda: harness the power of software by learning, teaching and developing simple solutions. You can follow him on Twitter at.
http://www.packtpub.com/article/flex-101-with-flash-builder-4-part1
Boudewijn and Cameron Argue for Qt

Thomas: Hmmm. Well, maybe ... Suppose I do decide to go cross-platform; shouldn't I use Tkinter?

The IDLE treeview.

Boudewijn: You could. It's quite good ... But maybe just a little old-fashioned. It's simply missing several of the features I know you'll expect. If you want to use a treeview, you have to construct one yourself. Look at the IDLE (IDLE is the Tkinter development environment written in and bundled with the standard Python distribution) treeview, for example -- they're still working the bugs out of that one, and it's currently at about 500 lines of code. I will say this, though: whatever toolkit you choose, buy Grayson's Python and Tkinter Programming. It's a delight to read, and handles all aspects of GUI design.

Thomas: I've looked a bit at the market for Linux, and Gnome sounds good. So shouldn't I develop two versions of my apps, one for Windows and one for Gnome?

Boudewijn: You might like Gnome -- but will your users? Not everyone is drawn to the dark side of interface design. What I mean is that the Gnome people designed for looks and theme-ability before they tried to achieve stability and functionality. Standard Gnome looks cool, but not very businesslike.

Cameron: I like the Gnome guys: they're imaginative and energetic. They're just not done, though. Qt is far more mature. GTK (Gnome's toolkit) copies Qt's signals and slots mechanism, but really doesn't do it as well.

Thomas: What's this stuff about the license? I just read that I've got to pay an enormous amount of money for the Windows version of PyQt! That's not open source!

Boudewijn: That's not entirely correct. To generate a PyQt application for Windows, you do need to buy a developer's license of the main Qt library for Windows. That'll set you back $1550 USD. PyQt itself is free. In fact, if you want to distribute a compiled version of PyQt for Windows with a Qt run-time DLL, there's nothing to stop you.
Be the first to sell a commercial Python/Qt development environment for Windows! As for open source, you're half-right. You can't use Qt for free on Windows, but the Unix/X11 license is certified Open Source TM.

Thomas: Why are we even talking about PyQt? Qt itself is starting to interest me, but PyQt is only version 0.12. That's deep alpha -- as steady as strawberry jelly!

Cameron: Phil (Thompson, the originator and maintainer of PyQt) counts differently than most of us. For him, version 0.10 followed 0.9. Anyway, beyond that, he's careful with the software he puts out. You can trust PyQt. It's stable.

Boudewijn: It's very, very stable. There are bugs, of course, but they are quickly fixed. In over a year of PyQt development, I've had one serious problem that took a while to fix -- and I could still work around that one.

Thomas: KDE doesn't run on Windows, though, right? What's the point of using Qt then?

Cameron: This takes some explanation. "KDE" means about four different things. The acronym is for "desktop environment," which is what Unix people used to call a "window manager," plus more. KDE is also an application development framework and an office application suite. KDE stuff is all built with Qt -- as the KDEers explain their decision, "It's the best toolkit for Unix."

Boudewijn: It is a bit complicated. Qt itself is available both for Windows and for Unix/X11. So's PyQt. Apps written in PyQt run unmodified on both platforms, too. However, if you want nice KDE extensions, like extra-de-luxe widgets, components, or integration with the KDE desktop, you have to use a second set of bindings: PyKDE. PyKDE depends on PyQt.

Cameron: ... which is fine, of course. PyKDE needs the full KDE API (application programming interface), though, so PyKDE only works where you have KDE, which means only for Unix with X11.
Boudewijn: What's nice is that it's quite easy to write an app that integrates with KDE when it's available, but degrades gracefully to pure Qt if KDE isn't present. I can show you how.

#!/usr/bin/env python
import sys, os
from qt import *
try:
    from kdecore import *
    from kdeui import *
    Application = KApplication
    MainWin = KTMainWindow
except ImportError:
    Application = QApplication
    MainWin = QMainWindow

class MainWindow(MainWin):
    def __init__(self, *args):
        apply(MainWin.__init__, (self,) + args)
        # ....

def main(argv):
    app = Application(sys.argv, "kpybrowser")
    appwin = MainWindow()
    appwin.show()
    return app.exec_loop()

if __name__ == "__main__":
    main(sys.argv)
http://www.linuxdevcenter.com/pub/a/network/2000/07/07/magazine/qt-discussion.html?page=2
Thanks for the suggestion. But my original servlet already does that and still has the problem. The sample code I included in my original message was created to try to narrow down the problem. But I ended up narrowing it down to almost nothing and the problem still occurs. Are you saying that you can upload a file larger than 40kb with no problems? Has anyone done that successfully running tomcat locally or with IIS? Is there some configuration setting that I am missing? Steve Steve Fyfe CNI Corporation Milford New Hampshire SFyfe@cnicorp.com (603) 673-6600 -----Original Message----- From: arionyu@stt.com.hk Sent: Monday, July 17, 2000 10:13 PM To: <tomcat-dev@jakarta.apache.org> Subject: Re: [BUG] Network connection reset while uploading large file Hi! I think you may try opening the input stream of the request object, dump out everything till the end, and then give out response. Arion Steve Fyfe wrote: > I have discovered that when I attempt to upload a file larger than 40KB, Tomcat will just reset the network connection. As a result, IE shows an error screen with the message "Server or DNS not available", and Netscape gives an error message box saying "A network error has occurred while sending data (Connection reset by peer)." > > Whenever I send a file that is smaller than 40KB, the response produced by the test servlet is correctly shown in the browser's window. But if I upload a file larger than 40KB using the exact same configuration and servlet, the code in the servlet is executed but the web page produced by the servlet never appears - the browser's error message appears instead. > > This problem occurs both when I run Tomcat locally (Win 98) and when I run it on NT 4 with IIS serving the static pages. I get the same problem using Tomcat 3.1 or 3.2b2 when running locally. I have only tried 3.1 with IIS. I don't believe the problem is in IIS, since this application works using a different servlet container on the same machine. 
> > I have put together a small test case that demonstrates the problem. As you can see from the code below, the servlet is only a few lines long, and it does not even look at the uploaded data. The only thing that is different with each test case is the size of the uploaded file - larger than 40KB fails, but less than 40KB succeeds. > > Here is my test servlet - this is the entire code for the whole thing: > > // start servlet > import java.io.*; > import javax.servlet.*; > import javax.servlet.http.*; > > public class TestUploadServlet extends HttpServlet { > > protected void doPost(HttpServletRequest req, > HttpServletResponse res) > throws ServletException, IOException { > > ////res.sendRedirect("Error.htm"); > > PrintWriter out = res.getWriter(); > out.println("<html><head></head><body>It worked!</body></html>"); > > out.flush(); > res.flushBuffer(); > } > } > // end servlet code > > If I substitute the "sendRedirect" line for all the lines below that, the same problem occurs. > > This is the web page that does the upload: > <html> > <head> > <title>Test Upload Form</title> > </head> > <body> > Test Upload Form<br> > > <form name="upload" method="post" enctype="multipart/form-data" > > > <input type="file" name="UploadedFile" accept="*" size="50"> > > <input type="submit" name="SubmitBtn" value="Upload File Now"> > </form> > </body> > </html> > > Tomcat is still very new to me, so if there is any configuration setting I can change that will resolve this problem, please let me know. I sure hope this can be fixed before 3.2 is final. In its current state I cannot use Tomcat since this web app must be able to accept uploaded files larger than 100 MB (large page full color images or postscript). > > If I can supply anything else that will help to nail this one, please let me know. > > Steve Fyfe > CNI Corporation > Milford New Hampshire > > SFyfe@cnicorp.com > (603) 673-6600
http://mail-archives.apache.org/mod_mbox/tomcat-dev/200007.mbox/%3Cs9757230.056@hawkeye%3E
To disable an extension that comes enabled by default, assign None as its value in the EXTENSIONS setting:

EXTENSIONS = {
    'scrapy.contrib.corestats.CoreStats': None,
}

Writing your own extension¶

Writing your own extension is easy. Each extension is a single Python class which doesn't need to implement any particular method. The main entry point for a Scrapy extension (this also includes middlewares and pipelines) is the from_crawler class method, which receives a Crawler instance — the main object controlling the Scrapy crawler. Through that object you can access settings, signals, stats, and also control the crawler behaviour, if your extension needs such a thing. Typically, extensions connect to signals and perform tasks triggered by them. Finally, if the from_crawler method raises the NotConfigured exception, the extension will be disabled. Otherwise, the extension will be enabled.

Here is a sample extension which logs a message when a spider is opened or closed, and counts scraped items:

from scrapy import signals
from scrapy.exceptions import NotConfigured

class SpiderOpenCloseLogging(object):

    def __init__(self, item_count):
        self.item_count = item_count
        self.items_scraped = 0

    @classmethod
    def from_crawler(cls, crawler):
        # raise NotConfigured if the extension should be disabled
        if not crawler.settings.getbool('MYEXT_ENABLED'):
            raise NotConfigured

        item_count = crawler.settings.getint('MYEXT_ITEMCOUNT', 1000)
        ext = cls(item_count)

        # connect the extension object to signals
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
        return ext

    def spider_opened(self, spider):
        spider.log("opened spider %s" % spider.name)

    def spider_closed(self, spider):
        spider.log("closed spider %s" % spider.name)

    def item_scraped(self, item, spider):
        self.items_scraped += 1
        if self.items_scraped == self.item_count:
            spider.log("scraped %d items, resetting counter" % self.items_scraped)
            self.items_scraped = 0

Built-in extensions reference¶

General purpose extensions¶

Log Stats extension¶

Log basic stats like crawled pages and scraped items.

Core Stats extension¶

Enable the collection of core statistics, provided the stats collection is enabled (see Stats Collection).

Telnet console extension¶

Provides a telnet console for getting into a Python interpreter inside the currently running Scrapy process.

Memory usage extension¶

Note: This extension does not work in Windows.

Monitors the memory used by the Scrapy process that runs the spider and:

1. sends a notification e-mail when it exceeds a certain value,
2. closes the spider when it exceeds another value.

Memory debugger extension¶

An extension for debugging memory usage; it collects information about objects uncollected by the Python garbage collector and libxml2 memory leaks.
http://doc.scrapy.org/en/0.18/topics/extensions.html
How do you get/extract the points that define a shapely polygon?

from shapely.geometry import Polygon

# Create polygon from lists of points
x = [list of x vals]
y = [list of y vals]
polygon = Polygon(list(zip(x, y)))

So, I discovered the trick is to use a combination of the Polygon class methods to achieve this. If you want geodesic coordinates, you then need to transform these back to WGS84 (via pyproj, matplotlib's basemap, or something).

from shapely.geometry import Polygon

# Create polygon from lists of points. Note that Polygon takes a single
# sequence of (x, y) pairs; passing the two lists as separate arguments
# would treat the second one as a list of interior rings (holes).
x = [list of x vals]
y = [list of y vals]
some_poly = Polygon(list(zip(x, y)))

# Extract the point values that define the perimeter of the polygon
x, y = some_poly.exterior.coords.xy
https://codedump.io/share/1JkX79JKkwFa/1/extract-pointscoordinates-from-python-shapely-polygon
-- | Rewrite rules are represented as nested monads: a 'Rule' is a 'Pattern' that returns a 'Rewrite', the latter directly defining the transformation of the graph. The 'Rewrite' itself is expected to return a list of newly created nodes.

-- Data.Maybe (listToMaybe) [Node])

-- rule construction ---------------------------------------------------------

-- | primitive rule construction with the matched nodes of the left hand side as a parameter
rewrite ∷ (Match → Rewrite n [Node]) → Rule n
rewrite r = liftM r history

-- | constructs a rule that deletes all of the matched nodes from the graph
erase ∷ View [Port] n ⇒ Rule n
erase = do
	hist ← history
	return $ do
		mapM_ deleteNode $ nub hist
		return []

-- |
	= do
		hist ← history
		return $ do
			mapM_ mergeEs $ joinEdges ess
			mapM_ deleteNode $ nub hist
			return []

	hist ← history
	when (null hist ∧ not (null vs)) (fail "need at least one matching node to clone new nodes from")
	return $ do
		es ← replicateM n newEdge
		let (vs,ess) = partition es
		ns ← zipWithM copyNode (cycle hist) vs
		mapM_ mergeEs $ joinEdges ess
		mapM_ deleteNode $ nub hist
		return ns

	$ do
		ns1 ← rw1
		ns2 ← apply r2
		return (ns1 ⧺ ns2)

-- | Apply a rule repeatedly as long as it is applicable. Fails if rule cannot be applied at all.
exhaustive ∷ Rule n → Rule n
exhaustive = foldr1 (>>>) . repeat

-- | Apply a rule to all current redexes one by one. Neither new redexes or destroyed redexes are reduced.
everywhere ∷ Rule n → Rule n
everywhere r = do
	ms ← matches r
	exhaustive $ restrictOverlap (\hist future → future ∈ ms) r

-- | Apply rule at an arbitrary position if applicable
apply ∷ Rule n → Rewrite n [Node]
apply r = maybe (return []) snd . listToMaybe =<< liftM (runPattern r) ask
http://hackage.haskell.org/package/graph-rewriting-0.4.8/docs/src/GraphRewriting-Rule.html
Python Types and Objects

About This Book

Explains Python new-style objects:

- what are <type 'type'> and <type 'object'>
- how user defined classes and instances are related to each other and to built-in types
- what are metaclasses

New-style implies Python versions 2.2 up to and including 3.x. There have been some behavioral changes across these versions, but all the concepts covered here are valid. The system described is sometimes called the Python type system, or the object model.

This book is part of a series:

1. Python Types and Objects [you are here]
2. Python Attributes and Methods

Author: shalabh@cafepy.com

Table of Contents

- Before You Begin
- 1. Basic Concepts
- 2. Bring In The Objects
- 3. Wrap Up
- 4. Stuff You Should Have Learnt Elsewhere
- Related Documentation

List of Figures

- 1.1. A Clean Slate
- 2.1. Chicken and Egg
- 2.2. Some Built-in Types
- 2.3. User Built Objects
- 3.1. The Python Objects Map
- 4.1. Relationships
- 4.2. Transitivity of Relationships

List of Examples

- 1.1. Examining an integer object
- 2.1. Examining <type 'object'> and <type 'type'>
- 2.2. There's more to <type 'object'> and <type 'type'>
- 2.3. Examining some built-in types
- 2.4. Creating new objects by subclassing
- 2.5. Creating new objects by instantiating
- 2.6. Specifying a type object while using the class statement
- 3.1. More built-in types
- 3.2. Examining classic classes

Some points you should note:

- This book covers the new-style objects (introduced a long time ago in Python 2.2). Examples are valid for Python 2.5 and all the way to Python 3.x.
- This book is not for absolute beginners. It is for people who already know Python (even a little Python) and want to know more.
- This book provides a background essential for grasping new-style attribute access and other mechanisms (descriptors, properties and the like).
If you are interested in only attribute access, you could go straight to Python Attributes and Methods, after verifying that you understand the Summary of this book. Happy pythoneering!

So what exactly is a Python object? An object is an axiom in our system - it is the notion of some entity. We still define an object by saying it has:

- Identity (i.e. given two names we can say for sure if they refer to one and the same object, or not).
- A value - which may include a bunch of attributes (i.e. we can reach other objects through objectname.attributename).
- A type - every object has exactly one type. For instance, the object 2 has a type int and the object "joe" has a type string.
- One or more bases. Not all objects have bases but some special ones do. A base is similar to a super-class or base-class in object-oriented lingo.

If you are more of the 'I like to know how the bits are laid out' type as opposed to the 'I like the meta abstract ideas' type, it might be useful for you to know that each object also has a specific location in main memory that you can find by calling the id() function.

The type and bases (if they exist) are important because they define special relationships an object has with other objects. Keep in mind that the types and bases of objects are just other objects. This will be re-visited soon.

You might think an object has a name, but the name is not really part of the object. The name exists outside of the object in a namespace (e.g. a function local variable) or as an attribute of another object.

Even a simple object such as the number 2 has a lot more to it than meets the eye.

Example 1.1. Examining an integer object

>>> two = 2
>>> type(two)
<type 'int'>
>>> type(type(two))
<type 'type'>
>>> type(two).__bases__
(<type 'object'>,)
>>> dir(two)
['__abs__', '__add__', ...]

You might say "What does all this mean?" and I might say "Patience!
First, let's go over the first rule."

The built-in int is an object. This doesn't mean that just the numbers such as 2 and 77 are objects (which they are) but also that there is another object called int that is sitting in memory right beside the actual integers. In fact, all integer objects are pointing to int using their __class__ attribute, saying "that guy really knows me". Calling type() on an object just returns the value of the __class__ attribute.

Any classes that we define are objects, and of course, instances of those classes are objects as well. Even the functions and methods we define are objects. Yet, as we will see, all objects are not equal.

We now build the Python object system from scratch. Let us begin at the beginning - with a clean slate. You might be wondering why a clean slate has two grey lines running vertically through it. All will be revealed when you are ready. For now this will help distinguish a slate from another figure.

On this clean slate, we will gradually put different objects, and draw various relationships, till it is left looking quite full. At this point, it helps if any preconceived object oriented notions of classes and objects are set aside, and everything is perceived in terms of objects (our objects) and relationships.

As we introduce many different objects, we use two kinds of relationships to connect them. These are the subclass-superclass relationship (a.k.a. specialization or inheritance, "man is an animal", etc.) and the type-instance relationship (a.k.a. instantiation, "Joe is a man", etc.). If you are familiar with these concepts, all is well and you can proceed; otherwise you might want to take a detour through the section called "Object-Oriented Relationships".

We examine two objects: <type 'object'> and <type 'type'>.

Example 2.1. Examining <type 'object'> and <type 'type'>

Let's make use of our slate and draw what we've seen. These two objects are primitive objects in Python.
We might as well have introduced them one at a time, but that would lead to the chicken and egg problem - which to introduce first? These two objects are interdependent - they cannot stand on their own, since they are defined in terms of each other.

Continuing our Python experimentation:

Example 2.2. There's more to <type 'object'> and <type 'type'>

If the above example proves too confusing, ignore it - it is not much use anyway. Now for a new concept - type objects. Both the objects we introduced are type objects. So what do we mean by type objects? Type objects share the following traits:

- They are used to represent abstract data types in programs. For instance, one (user defined) object called User might represent all users in a system; another one called int might represent all integers.
- They can be subclassed. This means you can create a new object that is somewhat similar to existing type objects. The existing type objects become bases for the new one.
- They can be instantiated. This means you can create a new object that is an instance of the existing type object. The existing type object becomes the __class__ for the new object.
- The type of any type object is <type 'type'>.
- They are lovingly called types by some and classes by others.

Yes, you read that right. Types and classes are really the same in Python (disclaimer: this doesn't apply to old-style classes or pre-2.2 versions of Python. Back then types and classes had their differences, but that was a long time ago and they have since reconciled their differences, so let bygones be bygones, shall we?). No wonder the type() function and the __class__ attribute get you the same thing.

The term class was traditionally used to refer to a class created by the class statement. Built-in types (such as int and string) are not usually referred to as classes, but that's more of a convention thing, and in reality types and classes are exactly the same thing.
In fact, I think this is important enough to put in a rule:

Class is Type is Class: The term type is equivalent to the term class in all versions of Python >= 2.3.

Types and (er.. for lack of a better word) non-types (ugh!) are both objects, but only types can have subclasses. Non-types are concrete values, so it does not make sense for another object to be a subclass. Two good examples of objects that are not types are the integer 2 and the string "hello". Hmm.. what does it mean to be a subclass of 2?

Still confused about what is a type and what is not? Here's a handy rule for you:

Type Or Non-type Test Rule: If an object is an instance of <type 'type'>, then it is a type. Otherwise, it is not a type.

Looking back, you can verify that this is true for all objects we have come across, including <type 'type'>, which is an instance of itself. To summarize:

- <type 'object'> is an instance of <type 'type'>.
- <type 'object'> is a subclass of no object.
- <type 'type'> is an instance of itself.
- <type 'type'> is a subclass of <type 'object'>.

There are only two kinds of objects in Python: to be unambiguous, let's call these types and non-types. Non-types could be called instances, but that term could also refer to a type, since a type is always an instance of another type. Types could also be called classes, and I do call them classes from time to time.

Note that we are drawing arrows on our slate for only the direct relationships, not the implied ones (i.e. only if one object is another's __class__, or in the other's __bases__). This makes economical use of the slate and our mental capacity.

A few built-in types are shown above, and examined below.

Example 2.3. Examining some built-in types

When we create a tuple or a dictionary, they are instances of the respective types. So how can we create an instance of mylist? We cannot. This is because mylist is not a type.

The built-in objects are, well, built into Python. They're there when we start Python, usually there when we finish.
So how can we create new objects? New objects cannot pop out of thin air. They have to be built using existing objects.

Example 2.4. Creating new objects by subclassing

After the above example, C.__bases__ contains <type 'object'>, and MyList.__bases__ contains <type 'list'>. Subclassing is only half the story.

After the above exercise, our slate looks quite full. Note that by just subclassing <type 'object'>, the type C automatically is an instance of <type 'type'>. This can be verified by checking C.__class__. Why this happens is explained in the next section.

We really ended up with a map of different kinds of Python objects in the last chapter. Here we also unravel the mystery of the vertical grey lines. They just segregate objects into three spaces, based on what the common man calls them - metaclasses, classes, or instances.

Various pedantic observations of the diagram above:

- Dashed lines cross spacial boundaries (i.e. go from object to meta-object). The only exception is <type 'type'> (which is good, otherwise we would need another space to the left of it, and another, and another...).
- Solid lines do not cross space boundaries. Again, <type 'type'> -> <type 'object'> is an exception.
- Solid lines are not allowed in the rightmost space. These objects are too concrete to be subclassed.
- Dashed line arrow heads are not allowed in the rightmost space. These objects are too concrete to be instantiated.
- The left two spaces contain types. The rightmost space contains non-types.
- If we created a new object by subclassing <type 'type'>, it would be in the leftmost space, and would also be both a subclass and an instance of <type 'type'>.

Also note that <type 'type'> is indeed a type of all types, and <type 'object'> a superclass of all types (except itself).

There are two kinds of objects in Python:

1. Type objects - can create instances, can be subclassed.
2. Non-type objects - cannot create instances, cannot be subclassed.

<type 'type'> and <type 'object'> are two primitive objects of the system.
objectname.__class__ exists for every object and points to the type of the object.

objectname.__bases__ exists for every type object and points to the superclasses of the object. It is empty only for <type 'object'>.

To create a new object using subclassing, we use the class statement and specify the bases (and, optionally, the type) of the new object. This always creates a type object.

To create a new object using instantiation, we use the call operator ( () ) on the type object we want to use. This may create a type or a non-type object, depending on which type object was used.

Some non-type objects can be created using special Python syntax. For example, [1, 2, 3] creates an instance of <type 'list'>.

Internally, Python always uses a type object to create a new object. The new object created is an instance of the type object used. Python determines the type object from a class statement by looking at the bases specified, and finding their types.

issubclass(A, B) (testing for the superclass-subclass relationship) returns True iff:

- B is in A.__bases__, or
- issubclass(Z, B) is true for any Z in A.__bases__.

isinstance(A, B) (testing for the type-instance relationship) returns True iff:

- B is A.__class__, or
- issubclass(A.__class__, B) is true.

Squasher is really a python. (Okay, that wasn't mentioned before, but now you know.)

The following example shows how to discover and experiment with built-in types.

Example 3.1. More built-in types

>>> import types
>>> types.ListType is list
True
>>> def f():
...     pass
...
>>> f.__class__ is types.FunctionType
True
>>> class MyList(list):
...     pass
...
>>> class MyFunction(types.FunctionType):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: type 'function' is not an acceptable base type
>>> dir(types)
['BooleanType', 'DictProxyType', 'DictType', ..]

So we can create new objects with any relationship we choose, but what does it buy us? The relationships between objects determine how attribute access on the object works. For example, when we say objectname.attributename, which object do we end up with? It all depends on objectname, its type, and its bases (if they exist). Attribute access mechanisms in Python are explained in the second book of this series: Python Attributes and Methods.

This is a note about classic classes in Python. We can create classes of the old (pre-2.2) kind by using a plain class statement.

Example 3.2. Examining classic classes

The types.ClassType object is in some ways an alternative <type 'type'>. Instances of this object (classic classes) are types themselves. The rules of attribute access are different for classic classes and new-style classes. The types.ClassType object exists for backward compatibility and may not exist in future versions of Python. Other sections of this book should not be applied to classic classes.

That's all, folks!

Can Skim Section

This oddly placed section explains the type-instance and supertype-subtype relationships, and can be safely skipped if the reader is already familiar with these OO concepts. Skimming over the rules below might be useful though.

While we introduce many different objects, we only use two kinds of relationships (Figure 4.1, "Relationships"):

is a kind of (solid line): Known to the OO folks as specialization, this relationship exists between two objects when one (the subclass) is a specialized version of the other (the superclass). A snake is a kind of reptile.
It has all the traits of a reptile and some specific traits which identify a snake. Terms used: subclass of, superclass of and superclass-subclass. is an instance of (dashed line): Also known as instantiation, this relationship exists between two objects when one (the instance) is a concrete example of what the other specifies (the type). I have a pet snake named Squasher. Squasher is an instance of a snake. Terms used: instance of, type of, type-instance and class-instance. Note that in plain English, the term 'is a' is used for both of the above relationships. Squasher is a snake and snake is a reptile are both correct. We, however, use specific terms from above to avoid any confusion. We use the solid line for the first relationship because these objects are closer to each other than ones related by the second. To illustrate - if one is asked to list words similar to 'snake', one is likely to come up with 'reptile'. However, when asked to list words similar to 'Squasher', one is unlikely to say 'snake'. It is useful at this point to note the following (independent) properties of relationships: Dashed Arrow Up Rule If X is an instance of A, and A is a subclass of B, then X is an instance of B as well. Dashed Arrow Down Rule If B is an instance of M, and A is a subclass of B, then A is an instance of M as well. In other words, the head end of a dashed arrow can move up a solid arrow, and the tail end can move down (shown as 2a and 2b in Figure 4.2, “Transitivity of Relationships” respectively). These properties can be directly derived from the definition of the superclass-subclass relationship. Applying Dashed Arrow Up Rule, we can derive the second statement from the first: Squasher is an instance of snake (or, the type of Squasher is snake). Squasher is an instance of reptile (or, the type of Squasher is reptile). Earlier we said that an object has exactly one type. So how does Squasher have two? 
Note that although both statements are correct, one is more correct (and in fact subsumes the other). In other words:

Squasher.__class__ is snake. (In Python, the __class__ attribute points to the type of an object.)

Both isinstance(Squasher, snake) and isinstance(Squasher, reptile) are true.

A similar rule exists for the superclass-subclass relationship:

Combine Solid Arrows Rule: If A is a subclass of B, and B is a subclass of C, then A is a subclass of C as well.

A snake is a kind of reptile, and a reptile is a kind of animal. Therefore a snake is a kind of animal. Or, in Pythonese:

snake.__bases__ is (reptile,). (The __bases__ attribute points to a tuple containing the superclasses of an object.)

Both issubclass(snake, reptile) and issubclass(snake, animal) are true.

Note that it is possible for an object to have more than one base.

[descrintro] Unifying types and classes in Python 2.2.
[pep-253] Subclassing Built-in Types.

Colophon

This book was written in DocBook XML. The HTML version was produced using DocBook XSL stylesheets and xsltproc. The PDF version was produced using htmldoc. The diagrams were drawn using OmniGraffle [1]. The process was automated using Paver [2].
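The rules above can be verified directly at the interpreter. The class names below are illustrative stand-ins for the book's snake/reptile/Squasher example, not code from the book:

```python
class Reptile(object):
    pass

class Snake(Reptile):        # a snake "is a kind of" reptile
    pass

squasher = Snake()           # Squasher "is an instance of" snake

# Dashed Arrow Up Rule: an instance of a subclass is also an
# instance of every superclass.
print(isinstance(squasher, Snake))    # True
print(isinstance(squasher, Reptile))  # True

# ...but __class__ points at exactly one type:
print(squasher.__class__ is Snake)    # True

# Combine Solid Arrows Rule, via issubclass and __bases__:
print(issubclass(Snake, Reptile))     # True
print(Snake.__bases__ == (Reptile,))  # True
```

Note that isinstance(squasher, Reptile) is true even though Reptile is not squasher's __class__, which is exactly the "more correct / subsumes" distinction made above.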
http://blog.csdn.net/CPP_CHEN/article/details/17146649
After my tutorial on classes, I thought it was time to post a tutorial on another very important aspect of both C and C++: pointers. I also strongly recommend you have a firm grasp of what classes are before advancing to pointers, because I will use classes and objects to explain pointers. However, even without that knowledge you will probably be able to understand the basics of pointer usage, because it is definitely not limited only to OOP (C has them as well). Pointers can be very confusing to people new to intermediate-level languages. They are also a very rich source of bugs and crashes in a program...

I think that the easiest way of understanding and explaining pointers is not to first explain what they technically are, but to present the problem from which the need for them arises (this is the part with which I and some people I know struggled the most when learning about pointers).

Imagine that you are creating a game. You are using a class called Humanoid to represent an individual in your game. Now, let's say that in your main function, you create two Humanoid objects: a player and an enemy. What would you want your player and your enemy to do? Battle, of course! You need a battle function, which takes in two humanoids as arguments, and makes them fight to the death. That function could also return the winner Humanoid object for convenience. You could do something like this:

Humanoid battle(Humanoid h1, Humanoid h2) {
    Humanoid winner;
    //battle, set the winner
    return winner;
}

However, you wouldn't get far with that. Arguments to functions get copied, which means that if you were to do something like battle(player, enemy);, the function would receive copies of those objects and wouldn't alter the originals (in which you are interested) at all! Your player and your enemy objects would be left unchanged! The solution: (you've guessed it) pass a pointer to the object instead of the object itself.
A pointer variable is just another variable, which can contain a memory address of any other variable or object. So, instead of copying your player and enemy objects, doing something to the copies, and then destroying the copies after the function returns, you could tell your function where in memory the objects you want to change reside, and then it could operate on them directly. Not only is this a more convenient method of treating objects, but it is also a great way to optimize (copying large objects can be expensive).

Now, the details and syntax. For any type (int, char, float, Humanoid, YourClass, etc.), there is an adequate pointer type. A pointer to an integer and a pointer to your Humanoid object do not actually differ in size most of the time, as they both contain addresses, but C++ and C still differentiate between those two types of pointers (and this is great for type safety). Here's some basic code for manipulating pointers (the syntax is explained below):

int x = 42;     //A variable of type int with a value of 42.
int * a;        //A pointer named "a" to a variable of type int. This doesn't yet point to anything (it is null)!
&x;             //The address of the integer x.
a = &x;         //Get the address of the variable x and store it in the integer pointer a.
int y = a + 1;  //Does not work! a is not an integer but a pointer to an integer (a mere memory address)!
int y = *a + 1; //Does work! *a gets the value at the memory address to which a points (which is x, with its value of 42). y is now 43.

Please note that the first * and the second one do not do the same thing at all! On the second line, the first time * is used, it means "a is a pointer which points to an integer". To declare a pointer of any type, just write <type * name>. For example, a pointer to our Humanoid would be written as Humanoid * player;. The second * is the dereferencing operator.
It is completely different from the first one: it means "fetch me the value of the variable/memory address to which this pointer points", in other words: dereference it. If you want to "get to" your actual player object from just that Humanoid pointer named player, you'd need to dereference it with *player, and then you would be able to access all the goodies this object hides. Otherwise you could just pass the pointer around for virtually no cost and let everyone know where your player object actually is in memory.

The last operator is &. It has the meaning "get me the address of". If you created an actual object with Humanoid player;, instead of just a pointer with Humanoid * player;, you could use & in order to get the address of player. The following can clear things up:

Humanoid player(100, 2, "Billy");  //Initializes a player object in memory.
Humanoid * pointer_to_humanoid;    //Creates a pointer of type Humanoid. WARNING: it doesn't point to anything and trying to dereference it would cause an error.
pointer_to_humanoid = &player;     //Gets the address of the player object and stores it into pointer_to_humanoid. You can now pass this pointer around (although you could just pass &player).

if (pointer_to_humanoid) { //Execute this block if the pointer is not null.
    //These checks should be used when a function returns a pointer, for example, and there is a chance that it is uninitialized.
}

Here is the code for that little game that was mentioned through this post:

#include <iostream>

using std::cout;
using std::endl;

class Humanoid {
public:
    int health;
    bool alive;
    int strength;
    const char * name;
    Humanoid(int, int, const char *);
};

Humanoid::Humanoid(int ahealth, int astrength, const char * aname) {
    health = ahealth;
    strength = astrength;
    name = aname;
    alive = true;
}

Humanoid * battle(Humanoid * h1, Humanoid * h2) {
    Humanoid * winner;
    while (h1->alive && h2->alive) {
        if (h1->health < 0) {
            h1->alive = false;
            winner = h2;
        } else if (h2->health < 0) {
            h2->alive = false;
            winner = h1;
        } else {
            h2->health -= h1->strength;
            h1->health -= h2->strength;
        }
    }
    return winner;
}

int main() {
    Humanoid player(100, 10, "Billy");
    Humanoid enemy(50, 5, "Gnawk");

    Humanoid * winner = battle(&player, &enemy);

    cout << "The winner is " << winner->name << "." << endl;
    cout << "The player is " << (player.alive ? "alive" : "dead") << "." << endl;
    cout << "The enemy is " << (enemy.alive ? "alive" : "dead") << "." << endl;

    return 0;
}

In the example above you will notice that I used yet another operator: ->. It is also a dereferencing operator, and is used to simplify the syntax of OOP. Its use is equivalent to (*h1).health, but instead of writing that you can just write h1->health to access the health member.

Hopefully, you now understand what pointers are, how to use them and what to use them for. This would conclude my tutorial on pointers!

Topics I'd like to cover next:
- The heap and stack
- Arrays and pointer arithmetic
- Function pointers

Editor Note: Check out our other tutorial on Pointer!

Edited by Roger, 19 February 2013 - 03:31 PM.
added links
http://forum.codecall.net/topic/73750-pointers/
Opened 5 years ago
Closed 5 years ago

#20621 closed Bug (worksforme)

tutorial 04 imports polls namespace while within polls

Description

Tutorial 4 (for Django 1.5) has updated polls/detail.html to contain a form which posts to {% url 'polls:vote' poll.id %}. However, polls/detail.html is in the polls app/namespace (and so the identifier polls is not defined), resulting in the error message:

u'polls' is not a registered namespace

Removing polls: so that the URL reads {% url 'vote' poll.id %} fixes the issue.

Change History (2)

comment:1 Changed 5 years ago by

I just realized the documentation has the same issues in the definition of vote() in view.py and also polls/results.html.

comment:2 Changed 5 years ago by

Hi,

The namespacing of URLs is introduced at the end of part 3 [1]. Make sure you add namespace='polls' to the include call in the urls.py, and it should work (I've just tried it).

Thanks.

[1]
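For reference, here is a minimal sketch of the wiring the closing comment describes. The file contents are assumed from the Django 1.5 tutorial, not taken from this ticket; the point is only that include() must be given namespace='polls' before {% url 'polls:vote' poll.id %} can resolve:

```python
# mysite/urls.py -- project-level URLconf (Django 1.5-era style, sketch only)
from django.conf.urls import patterns, include, url

urlpatterns = patterns('',
    # Without namespace='polls' here, {% url 'polls:vote' poll.id %}
    # fails with: u'polls' is not a registered namespace
    url(r'^polls/', include('polls.urls', namespace='polls')),
)
```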
https://code.djangoproject.com/ticket/20621
I'm new to Python (2.7) and I'm trying to work on video processing (with the OpenCV module "cv2"). Starting with tutorials, I tried to use the script from this tutorial: paragraph "Saving a video". Everything works fine except that the video I'm saving is empty. I can find output.avi in my directory but its size is 0 KB and, of course, when I run it, no video is displayed. After a few changes here is my code:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# Define the codec and create VideoWriter object
#fourcc = cv2.VideoWriter_fourcc(*'DIVX')
fourcc = cv2.cv.CV_FOURCC(*'DIVX')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        # write the flipped frame
        out.write(frame)

        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if job is finished
cap.release()
out.release()
cv2.destroyAllWindows()

I've never worked with OpenCV, but I bet the problem is in

cap = cv2.VideoCapture(0)

This is a C version of the VideoCapture method. Maybe you can try to do the same. Something like

cap = cv2.VideoCapture(0)
if (not cap.isOpened()):
    print "Error"

EDIT: just downloaded Python and OpenCV and discovered the problem was the codec. Try to change

out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640,480))

for

out = cv2.VideoWriter('output.avi', -1, 20.0, (640,480))

and select the codec by hand.
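One cheap way to catch the 0 KB symptom early is to check the output file's size right after releasing the writer. The helper below is a sketch using only the standard library; the file names and threshold are made up, and with OpenCV you would call it after out.release() (and also check out.isOpened() after constructing the VideoWriter):

```python
import os

def saved_video_ok(path, min_bytes=1):
    """Return True if the output file exists and is non-empty.

    A 0-byte output.avi (as in the question) almost always means the
    writer never actually encoded a frame -- typically a codec
    mismatch -- so a size check is a cheap sanity test. The threshold
    is an assumption; real AVI headers alone are already a few KB.
    """
    return os.path.exists(path) and os.path.getsize(path) >= min_bytes

# Simulate the two outcomes with plain files (no OpenCV needed):
with open("empty.avi", "wb"):
    pass                      # writer opened, but nothing was encoded
with open("written.avi", "wb") as f:
    f.write(b"\x00" * 4096)   # stand-in for real encoded frames

print(saved_video_ok("empty.avi"))    # False
print(saved_video_ok("written.avi"))  # True
```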
https://codedump.io/share/kDrLDCAqk6EW/1/saving-a-video-capture-in-python-with-opencv--empty-video
This is a pure Dart html5 parser. It's a port of html5lib from Python. Since it's 100% Dart you can use it safely from a script or server-side app. Eventually the parse tree API will be compatible with dart:html, so the same code will work on the client and the server. (Formerly known as html5lib.)

Add this to your pubspec.yaml (or create it):

dependencies:
  html: any

Then run the Pub Package Manager (comes with the Dart SDK):

pub install

Parsing HTML is easy!

import 'package:html/parser.dart' show parse;
import 'package:html.

./test/run.sh
https://chromium.googlesource.com/external/github.com/dart-lang/html/
Creating and managing Django projects

What is this tutorial about? It explains how to create and run Django projects with PyCharm, and also covers the Project tool window and navigation.

What is this tutorial not about? It does not teach you Python and Django.

Before you start

Make sure that:
- You are working with PyCharm Professional Edition 3.0 or higher. It can be downloaded here.
- At least one Python interpreter, version 2.4 to 3.3, is properly installed on your computer. You can download an interpreter from this page.

This tutorial has been created with the following assumptions:
- Django 1.6.5
- Default Windows keymap. If you are using another keymap, the keyboard shortcuts will be different.
- The example used in this tutorial is similar to the one used in the Django documentation.

Creating a new project

Actually, all new projects are created the same way: by clicking the Create New Project button in the Quick Start area of the Welcome screen. If you have an already opened project, create a new one by choosing File → New Project....

Then, in the Create New Project dialog, specify the project name, select its type and the Python interpreter to be used for this project (remember, having at least one Python interpreter is one of the prerequisites of this tutorial!):

Click OK - the project per se is ready. It means that the directory with the project name is created in the specified location, and contains an .idea directory with the project settings. For an empty project, the process of creating a new project is over, and you can proceed with developing your pure Python application. As for the other supported frameworks, you are just in the middle...

Depending on the selected project type, PyCharm suggests entering additional, framework-specific settings. In this example, let's create and explore a Django application.
Creating a Django project

So, in the Create New Project dialog we've selected the project type Django - note that PyCharm suggests to install the Django framework if it is missing from the selected interpreter (which, by the way, is rather time-consuming). Next, let's define the Django-specific project settings:

So... click OK, and the stub Django project is ready.

Exploring project structure

As mentioned above, basically, the stub project is ready. It contains framework-specific files and directories. The same happens when you create a project of any supported type, be it Pyramid or Google App Engine. Let's see how the structure of the new project is visible in the Project tool window.

Project view of the Project tool window

This view is displayed by default. It shows the Django-specific project structure: the polls and MyDjangoApp directories; also, you see the manage.py and settings.py files. You cannot see the .idea directory in this view:

Project Files view of the Project tool window

If for some reason you would like to see the contents of the .idea directory, choose the view Project Files: as you see, this view shows the same directories and files, plus the .idea directory, since it is located under the project root. By now, let's return to the Project view.

What do we see in the Project view?
- The untitled directory is a container for your project. In the Project view it is denoted with bold font.
- manage.py: This is a command-line utility that lets you interact with your Django project. Refer to the product documentation for details.
- The nested directory MyDjangoApp is the actual Python package for your project.
- MyDjangoApp/__init__.py: This empty file tells Python that this directory should be considered a Python package.
- MyDjangoApp/settings.py: This file contains configuration for your Django project.
- MyDjangoApp/urls.py: This file contains the URL declarations for your Django project.
- MyDjangoApp/wsgi.py: This file defines an entry-point for WSGI-compatible web servers to serve your project. See How to deploy with WSGI for more details.
- Finally, the nested directory polls contains all the files required for developing a Django application (at this moment, these files are empty):
- Again, polls/__init__.py: tells Python that this directory should be considered a Python package.
- polls/models.py: In this file, we'll create models for our application.
- polls/views.py: In this file, we'll create views.
- The templates directory is by now empty. It should contain the template files.

Note that you can create as many Django applications as needed. To add an application to a project, run the startapp task of the manage.py utility (Tools→Run manage.py task - startapp on the main menu).

Configuring the database

Now, when the project stub is ready, let's do some fine-tuning. Open settings.py for editing. To do it, select the file in the Project tool window, and press F4. The file is opened in its own tab in the editor.

Specify which database you are going to use in your application. For this purpose, find the DATABASES variable: press Ctrl+F, and in the search field start typing the string you are looking for. Then, in the 'ENGINE' line, add the name of your database management system after the dot (you can use any one specified after the comment, but for the beginning we'll start with sqlite3). In the 'NAME' line, enter the name of the desired database, even though it doesn't yet exist.

Launching the Django server

Since we've prudently chosen sqlite3, we don't need to define the other values (user credentials, port and host). Let's now check whether our settings are correct. This can be done most easily: just launch the runserver task of the manage.py utility. Press Ctrl+Alt+R, and enter the task name in the pop-up frame:

Creating models

Next, open the file models.py for editing, and note that the import statement is already there.
Then type the following code:

from django.db import models


class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')


class Choice(models.Model):
    poll = models.ForeignKey(Poll)
    choice = models.CharField(max_length=200)
    votes = models.IntegerField()

Actually, you can just copy-paste, but typing is advisable - it helps you see the powerful PyCharm code completion in action:

Creating database

We have to create tables for the new model. For this purpose, we'll use the magic Ctrl+Alt+R shortcut twice:
- First, select sql from the suggestion list, and choose the desired application name. This command generates SQL statements for both classes of our application:
- Second, select syncdb from the suggestion list to create the tables, and see the following results in the console:

Performing administrative functions

Since we've decided to enable site administration, PyCharm has already uncommented the corresponding lines in the urls.py file. However, we need to enable editing functionality for the admin site. To do that, create an admin.py file in the polls directory (Alt+Ins), and enter the following code (the standard registration code from the Django tutorial):

from django.contrib import admin
from polls.models import Poll

admin.site.register(Poll)

Again pay attention to the code completion:

Preparing run/debug configuration

We are now ready to go to the admin page and create some polls. Sure, it is quite possible to run the Django server, then go to your browser, and type the entire URL in the address bar, but with PyCharm there is an easier way: use the pre-configured Django server run configuration with some slight modifications.
To open this run/debug configuration for editing, on the main toolbar, click the run/debug configurations selector, and then choose Edit Configurations (or choose Run→Edit Configurations on the main menu):

In the Run/Debug Configurations dialog box, give this run/debug configuration a name (here it is myapp), enable running the application in the default browser (select the check box Run browser) and specify the page of our site to be opened by default:

Launching the admin site

Now, to launch the application, press Shift+F10.

Creating views

Next, our application needs pages for "index", "details", "results", and "votes". First of all, we have to add patterns for the new pages to the file urls.py. Open this file for editing (select this file in the Project tool window and press F4), and add the patterns:

(r'^polls/$', 'polls.views.index'),
(r'^polls/(?P<poll_id>\d+)/$', 'polls.views.details'),
(r'^polls/(?P<poll_id>\d+)/results/$', 'polls.views.results'),
(r'^polls/(?P<poll_id>\d+)/vote/$', 'polls.views.vote'),

These patterns refer to views which do not yet exist. You can spend some effort to create the view methods and associated templates manually, but it is much easier to use PyCharm's assistance: as you hover your mouse pointer over an unresolved reference (which is, by the way, highlighted), a yellow light bulb appears, which means that a quick fix is suggested. To show this quick fix, click the bulb, or, with the caret at the view name, just press Alt+Enter:

Clicking the Create Django view method option results in creating a view method in the views.py file, and the corresponding template file in the specified location. What do we see now?
- First, the templates directory is not empty any more. It contains the stub templates we've created.
- Second, the file views.py now contains the stub view methods.

Besides the view methods, PyCharm generates the import statement that enables you to use render_to_response. Note also the icon in the left gutter next to the name of each view method:
you can use it to navigate from a view method to its template.

Having created all the required views and templates by means of the Create template <name> quick fix, let's fill them with some suitable code. For example, we'd like to see the list of available polls. So, make sure that the file views.py is opened for editing, and type the following code:

def index(request):
    poll_list = Poll.objects.all()
    t = loader.get_template('index.html')
    c = Context({
        'poll_list': poll_list,
    })
    return HttpResponse(t.render(c))

PyCharm suggests a quick fix to add the missing import statements:

So at the end you should see the following:

from django.http import Http404, HttpResponse
from django.shortcuts import render_to_response
from django.template import loader, Context
from polls.models import Poll


def index(request):
    poll_list = Poll.objects.all()
    t = loader.get_template('index.html')
    c = Context({
        'poll_list': poll_list,
    })
    return HttpResponse(t.render(c))

Creating templates

Let's fill in one of the templates with some meaningful code. Open the file index.html for editing, and start typing the template code. The first thing you will notice immediately is the automatic pair-brace completion: when you type {%, PyCharm adds the matching closing characters, and the caret rests in the next typing position. Here you can press Ctrl+Space to show the list of suggested keywords:

When it comes to typing HTML tags, PyCharm is ready to help you again:
- Ctrl+Space code completion shows the list of available tags.
- When you type the opening angle bracket, the matching closing bracket is generated automatically.

So, you fill your template step by step, and finally get something like the following example (index.html and details.html), with syntax highlighting:

Here we are! Let's check the list of available polls. Our admin site is already running, and the easiest way to visit the page that contains the list of polls (the index page) is to specify its URL in the address bar of the browser:
instead of /admin/, type /polls/:

Click any poll to view its details:

Summary

This brief tutorial is over. You have successfully created and launched a simple Django application. Let's repeat what we have done with the help of PyCharm:
- created a Django project and application
- launched the Django server
- configured a database
- created models, views and templates

Congrats!
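As a side note on what the index view's t.render(c) call accomplishes, here is a tiny framework-free sketch of the template-plus-context idea using only the standard library. The template string and poll questions are made-up stand-ins; Django's real loader, Context and HttpResponse are not involved:

```python
from string import Template

# A toy "template" standing in for index.html (assumed content, not
# the tutorial's actual file).
page = Template("<ul>$items</ul>")

# The "context": data the view collects, here a plain list instead of
# Poll.objects.all().
poll_list = ["What's new?", "What's up?"]

# "Rendering": merge the context into the template, roughly what
# t.render(c) does for the Django view above.
items = "".join("<li>{}</li>".format(q) for q in poll_list)
html = page.substitute(items=items)
print(html)  # <ul><li>What's new?</li><li>What's up?</li></ul>
```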
http://www.jetbrains.com/pycharm/quickstart/django_guide.html
The descriptor structure that is referred to from st_mysql_plugin.

#include <plugin_audit.h>
https://dev.mysql.com/doc/dev/mysql-server/latest/structst__mysql__audit.html
sleep tight baby oil

£13.00 – £26.00

our gentle, relaxing night oil, designed for babies' sensitive skin

Moisturising argan oil and relaxing lavender combine in our additive-free sleep tight baby oil, creating a soothing oil gentle enough for a. For best results apply our sleep tight baby oil 30 minutes before going to bed. Fantastic for massaging babies, a lovely way to bond with your baby.

ingredients

argania spinosa kernel oil, lavandula angustofolia (lavender) oil, citral, coumarin, geraniol, linalool, d-limonene

important information

This product may not be suitable if your baby has a nut or skin allergy. Avoid getting it into the eyes and wash out thoroughly with water if this occurs.

Anna Smith – Hello Sleep. I started using this on my 2-month-old. I rub it into her temples and every single time I do I get the biggest smile. The first few nights she still woke every hour but was calm and fell back to sleep; after a week she only woke for a feed. I also used it for the dry skin she has had since birth and it has almost completely cleared. I love it and she clearly does too; even if it didn't work I'd probably still use it just for that gorgeous smile!

Sophie Somerset – I bought this after reading good reviews online but it doesn't work. My daughter still woke continuously through the night.

Rebecca Wade – I tried this after I saw Tanya Bardsley's Instagram post and it has really helped. We have gone from several wakes to just a couple a night and I have regained some sanity!!! Thanks for the recommendation Tanya x

Tanya Bardsley (Real Housewives Of Cheshire) – Yayyy it's worked thank you so much Simply Argan xxxxxxx (from Tanya's Instagram account, re-posted by Simply Argan with permission)

Jane – I use this on both my little boy and girl, who are not great sleepers, and there has been a definite improvement and I am getting more sleep. I also use the Night Oil and that works for me too. Must be the lavender in both of them.
Nikki – Smells divine and has really helped my little boy's eczema on his arms. He has always slept well, but since using this oil he is waking up full of energy and is looking so much less tired. It's a great product! Thank you Simply Argan!
https://simplyargan.co.uk/product/sleep-tight-baby-oil
/* ----------------------------------------------------------------------------- * * (c) The GHC Team, 1998-2009 * * The definitions for Thread State Objects. * * ---------------------------------------------------------------------------*/ #ifndef RTS_STORAGE_TSO_H #define RTS_STORAGE_TSO_H /* * PROFILING info in a TSO */ typedef struct { CostCentreStack *cccs; /* thread's current CCS */ } StgTSOProfInfo; /* * There is no TICKY info in a TSO at this time. */ /* * Thread IDs are 32 bits. typedef StgWord32 StgThreadID; #define tsoLocked(tso) ((tso)->flags & TSO_LOCKED) * Type returned after running a thread. Values of this type * include HeapOverflow, StackOverflow etc. See Constants.h for the * full list. typedef unsigned int StgThreadReturnCode; #if defined(mingw32_HOST_OS) /* results from an async I/O request + its request ID. */ typedef struct { unsigned int reqID; int len; int errCode; } StgAsyncIOResult; #endif /* Reason for thread being blocked. See comment above struct StgTso_. */ typedef union { StgClosure *closure; StgTSO *prev; // a back-link when the TSO is on the run queue (NotBlocked) struct MessageBlackHole_ *bh; struct MessageThrowTo_ *throwto; struct MessageWakeup_ *wakeup; StgInt fd; /* StgInt instead of int, so that it's the same size as the ptrs */ StgAsyncIOResult *async_result; #endif #if !defined(THREADED_RTS) StgWord target; // Only for the non-threaded RTS: the target time for a thread // blocked in threadDelay, in units of 1ms. This is a // compromise: we don't want to take up much space in the TSO. If // you want better resolution for threadDelay, use -threaded. #endif } StgTSOBlockInfo; /* * TSOs live on the heap, and therefore look just like heap objects. * Large TSOs will live in their own "block group" allocated by the * storage manager, and won't be copied during garbage collection. */ * Threads may be blocked for several reasons. 
A blocked thread will * have the reason in the why_blocked field of the TSO, and some * further info (such as the closure the thread is blocked on, or the * file descriptor if the thread is waiting on I/O) in the block_info * field. */ typedef struct StgTSO_ { StgHeader header; /* The link field, for linking threads together in lists (e.g. the run queue on a Capability. */ struct StgTSO_* _link; /* Currently used for linking TSOs on: * cap->run_queue_{hd,tl} * (non-THREADED_RTS); the blocked_queue * and pointing to the next chunk for a ThreadOldStack NOTE!!! do not modify _link directly, it is subject to a write barrier for generational GC. Instead use the setTSOLink() function. Exceptions to this rule are: * setting the link field to END_TSO_QUEUE * setting the link field of the currently running TSO, as it will already be dirty. */ struct StgTSO_* global_link; // Links threads on the // generation->threads lists /* * The thread's stack */ struct StgStack_ *stackobj; /* * The tso->dirty flag indicates that this TSO's stack should be * scanned during garbage collection. It also indicates that this * TSO is on the mutable list. * * NB. The dirty flag gets a word to itself, so that it can be set * safely by multiple threads simultaneously (the flags field is * not safe for this purpose; see #3429). It is harmless for the * TSO to be on the mutable list multiple times. * * tso->dirty is set by dirty_TSO(), and unset by the garbage * collector (only). */ StgWord16 what_next; // Values defined in Constants.h StgWord16 why_blocked; // Values defined in Constants.h StgWord32 flags; // Values defined in Constants.h StgTSOBlockInfo block_info; StgThreadID id; StgWord32 saved_errno; StgWord32 dirty; /* non-zero => dirty */ struct InCall_* bound; struct Capability_* cap; struct StgTRecHeader_ * trec; /* STM transaction record */ /* * A list of threads blocked on this TSO waiting to throw exceptions. 
     */
    struct MessageThrowTo_ * blocked_exceptions;

    /*
     * A list of StgBlockingQueue objects, representing threads
     * blocked on thunks that are under evaluation by this thread.
     */
    struct StgBlockingQueue_ *bq;

#ifdef TICKY_TICKY
    /* TICKY-specific stuff would go here. */
#endif
#ifdef PROFILING
    StgTSOProfInfo prof;
#endif
#ifdef mingw32_HOST_OS
    StgWord32 saved_winerror;
#endif

    /*
     * sum of the sizes of all stack chunks (in words), used to decide
     * whether to throw the StackOverflow exception when the stack
     * overflows, or whether to just chain on another stack chunk.
     *
     * Note that this overestimates the real stack size, because each
     * chunk will have a gap at the end, of +RTS -kb<size> words.
     * This means stack overflows are not entirely accurate, because
     * the more gaps there are, the sooner the stack will run into the
     * hard +RTS -K<size> limit.
     */
    StgWord32  tot_stack_size;

} *StgTSOPtr;

typedef struct StgStack_ {
    StgHeader  header;
    StgWord32  stack_size;     // stack size in *words*
    StgWord32  dirty;          // non-zero => dirty
    StgPtr     sp;             // current stack pointer
    StgWord    stack[FLEXIBLE_ARRAY];
} StgStack;

// Calculate SpLim from a TSO (reads tso->stackobj, but no fields from
// the stackobj itself).
INLINE_HEADER StgPtr tso_SpLim (StgTSO* tso)
{
    return tso->stackobj->stack + RESERVED_STACK_WORDS;
}

/* -----------------------------------------------------------------------------
   functions
   -------------------------------------------------------------------------- */

void dirty_TSO  (Capability *cap, StgTSO *tso);
void setTSOLink (Capability *cap, StgTSO *tso, StgTSO *target);
void setTSOPrev (Capability *cap, StgTSO *tso, StgTSO *target);

void dirty_STACK (Capability *cap, StgStack *stack);

/* -----------------------------------------------------------------------------
   Invariants:

   An active thread has the following properties:

      tso->stack < tso->sp < tso->stack+tso->stack_size
      tso->stack_size <= tso->max_stack_size

      RESERVED_STACK_WORDS is large enough for any heap-check or
      stack-check failure.

      The size of the TSO struct plus the stack is either
        (a) smaller than a block, or
        (b) a multiple of BLOCK_SIZE

      tso->why_blocked       tso->block_info       location
      ----------------------------------------------------------------------
      NotBlocked             END_TSO_QUEUE         runnable_queue, or running
      BlockedOnBlackHole     the BLACKHOLE         blackhole_queue
      BlockedOnMVar          the MVAR              the MVAR's queue
      BlockedOnSTM           END_TSO_QUEUE         STM wait queue(s)
      BlockedOnSTM           STM_AWOKEN            run queue
      BlockedOnMsgThrowTo    MessageThrowTo *      TSO->blocked_exception
      BlockedOnRead          NULL                  blocked_queue
      BlockedOnWrite         NULL                  blocked_queue
      BlockedOnDelay         NULL                  blocked_queue
      BlockedOnGA            closure TSO blocks on BQ of that closure
      BlockedOnGA_NoSend     closure TSO blocks on BQ of that closure

      tso->link == END_TSO_QUEUE, if the thread is currently running.

   A zombie thread has the following properties:

      tso->what_next == ThreadComplete or ThreadKilled
      tso->link     ==  (could be on some queue somewhere)
      tso->sp       ==  tso->stack + tso->stack_size - 1 (i.e.
                        top stack word)
      tso->sp[0]    ==  return value of thread, if what_next == ThreadComplete,
                        exception, if what_next == ThreadKilled

      (tso->sp is left pointing at the top word on the stack so that
      the return value or exception will be retained by a GC).

   The 2 cases BlockedOnGA and BlockedOnGA_NoSend are needed in a GUM
   setup only.  They mark a TSO that has entered a FETCH_ME or
   FETCH_ME_BQ closure, respectively; only the first TSO hitting the
   closure will send a Fetch message.
   Currently we have no separate code for blocking on an RBH; we use
   the BlockedOnBlackHole case for that.  -- HWL

   ---------------------------------------------------------------------------- */

/* this is the NIL ptr for a TSO queue (e.g. runnable queue) */
#define END_TSO_QUEUE  ((StgTSO *)(void*)&stg_END_TSO_QUEUE_closure)

#endif /* RTS_STORAGE_TSO_H */
https://gitlab.haskell.org/TDecki/ghc/-/blame/178eb9060f369b216f3f401196e28eab4af5624d/includes/rts/storage/TSO.h
why variable can't add number as suffix like: int 3a

Because it's syntactically illegal -- a C identifier may contain digits, but it cannot begin with one. Check this link...

Hey Dduleep, if you need to use '3a' as a variable, have you considered using a character array to hold the value? I've attached some code that should work in the capacity that I just mentioned, for character variable "var"

char var[3]="3a";

I can't be sure, but maybe you are wanting to carry out character arithmetic. If this is the case then I have an example that you can take a look at:

char ch='a';
//take ch from its char value to its corresponding numerical value
ch-='a';
//multiply ch's numerical value by 3
ch*=3;
//return (3*ch) as an integer
printf("%d", ch);

Or maybe you mean to take an integer 'a' and multiply by three. If that is the case, take this example for consideration:

int a, mult_a;
//receive integer variable 'a'
scanf("%d",&a);
//enact a*3 multiplication and store as separate variable (for clarity)
mult_a=3*a;
//return 3a (3*a)
printf("%d",mult_a);
...
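The character-arithmetic idea above carries over to any language that exposes character codes; here is the same computation in Python, where ord() gives the numeric view of a character (the function name is mine, not from the thread):

```python
def alpha_index_times_3(ch):
    """Mirror the C snippet: subtract 'a' to get a 0-based alphabet
    position, then multiply by 3."""
    return (ord(ch) - ord('a')) * 3

print(alpha_index_times_3('a'))  # 0, just like the C version for 'a'
print(alpha_index_times_3('c'))
```

The subtraction works for the same reason it does in C: lowercase letters occupy consecutive code points.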
https://www.daniweb.com/programming/software-development/threads/326003/c-varible
On 29 December 2014 at 08:18, Markus Sabadello <markus at projectdanube.org> wrote:
> On 12/28/2014 11:51 PM, Melvin Carvalho wrote:
> On 28 December 2014 at 22:45, Markus Sabadello <markus at projectdanube.org> wrote:
>> On today's call we talked about whether Plinth or jwchat should be the
>> start page.
>> And we currently have Owncloud at the path /owncloud.
>>
>> I think this question of "URI namespace layout" will become more
>> important as we add more applications to the box.
>>
>> One pattern I have been experimenting with is creating subdomains for
>> each new application which has a web interface.
>> I think this is more reliable than using folders, since some
>> applications may assume they are installed at the root /.
>>
>> So if my PageKite name is markus.pagekite.me, I could have:
>> - owncloud.markus.pagekite.me
>> - plinth.markus.pagekite.me
>> - jwchat.markus.pagekite.me
>> - radicale.markus.pagekite.me
>> - diaspora.markus.pagekite.me
>> - mailpile.markus.pagekite.me
>> - etc.
>
> I was doing something similar with one of my domains.
>
> It's important in the domain that contains your profile page that the
> document and the person entity are delineated. This will facilitate the
> ability to link to our other properties, and also more easily add
> future-proofed things such as a public key for PKI.
>
> Note: indieweb, owncloud, diaspora do *not* use this pattern. They are
> all neat systems but I suspect will run into scalability issues for this
> reason. I also hope there may be some work in fbx and/or debian to support
> WebID.
>
> The traditional way to do this separation is with the # character.
> Unfortunately in HTTP this char is overloaded to mean many things (anchor,
> linked data subject, media control, hiding device from server) so it can be
> very confusing. I use #me in my profile, but #i is sometimes used, user
> can choose.
> I remember in Cool URIs, the other way of doing it was 303 URIs, but that
> is not the preferred way anymore?
> In general I think support for RWW/LDP/WebID/etc would be great.

> At some point I might want my root domain name (e.g. markus.pagekite.me)
> to support a range of different services, e.g.:
> - When opened in the browser, an IndieWeb-compatible site such as Known (withknown.com)
> - Accessible with LDP protocol backed by gold or rww-play, etc.
> - Smart webfinger service that points to my remoteStorage, OpenID Connect, Mozilla Persona

> Also note that serving up mixed content over different domains, and http
> vs https, is something browsers have enormous problems with. Even something
> as simple as using the web crypto API will be problematic cross origin.
> Same applies to a lesser extent for AJAX mashups.

> But if you install completely separate applications on subdomains such as
> mailpile, owncloud, diaspora, etc. then why would there be mixed content
> across domains?
> Wouldn't it actually be a big security feature rather than a bug if those
> separate applications can't XSS each other?

Do you think an fbx xauth subdomain may be useful here? Just thinking out loud.

> So, while I like subdomains, at least today it poses implementation
> challenges. Possibly best to avoid, unless you're providing an fbx entry
> point for family members and/or friends.

>> These should also work with an "internal" (dnsmasq-provided) domain when
>> I access the box from within my home network, e.g.:
>> - owncloud.freedombox
>> - plinth.freedombox
>> - jwchat.freedombox
>> - radicale.freedombox
>> - diaspora.freedombox
>> - mailpile.freedombox
>> - etc.
>>
>> In Plinth, I may want to have an option to set a "default" one, i.e.
>> which one should show up at markus.pagekite.me
>>
>> When using subdomains rather than folders, we also need different Tor
>> .onion addresses for each application, which is probably preferable
>> anyway.
>>
>> Thoughts?
>> Markus
>>
>> _______________________________________________
>> Freedombox-discuss mailing list
>> Freedombox-discuss at lists.alioth.debian.org
https://lists.debian.org/debian-freedombox/2014/12/msg00037.html
@dskhudia This is the final prediction. Correct?

If yes, you would need to dequantize the final tensor, e.g., using dequantized_y = y.dequantize()

thanks a lot @dskhudia I tried both methods. Final prediction is really bad. Original prediction from FP32 model - [screenshot] Prediction from INT8 model - [screenshot] Not sure where I'm going wrong

You may want to try some quantization accuracy improvement techniques, such as:
- per channel quantization for weights
- Quantization aware training
- Measuring torch.norm between the float model and the quantized model to see where it's off the most.

is there an example for per channel quantization and measuring the torch norm between the 2 models ?

For per channel see and for norm you can use something like the following:

SQNR = []
for i in range(len(ref_output)):
    SQNR.append(20*torch.log10(torch.norm(ref_output[i][0])/torch.norm(ref_output[i][0]-qtz_output[i][0])).numpy())
print('SQNR (dB)', SQNR)

@dskhudia The performance improved slightly after per channel quantization, but it is still very bad. [float vs. quantized screenshots]

In PyTorch there's a way to compare the module level quantization error, which could help to debug and narrow down the issue. I'm working on an example and will post here later.

@Raghav_Gurbaxani, have you tried using the histogram observer for activation? In most cases this could improve the accuracy of the quantized model. You can do:

model.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.default_histogram_observer,
    weight=torch.quantization.default_per_channel_weight_observer)

thanks @hx89 , if you could post that example for comparing module level quantization error - it would be great. In the meantime, I tried the histogram observer and the result is still pretty bad. [screenshot] any other suggestions ?

Have you checked the accuracy of fused_model? By checking the accuracy of fused_model before converting to the int8 model, we can know if the issue is in the preprocessing part or in the quantized model.
If fused_model has good accuracy, the next step is to check the quantization error of the weights. Could you try the following code:

def l2_error(ref_tensor, new_tensor):
    """Compute the l2 error between two tensors.

    Args:
        ref_tensor (numpy array): Reference tensor.
        new_tensor (numpy array): New tensor to compare with.

    Returns:
        abs_error: l2 error
        relative_error: relative l2 error
    """
    assert (
        ref_tensor.shape == new_tensor.shape
    ), "The shape between two tensors is different"

    diff = new_tensor - ref_tensor
    abs_error = np.linalg.norm(diff)
    ref_norm = np.linalg.norm(ref_tensor)
    if ref_norm == 0:
        if np.allclose(ref_tensor, new_tensor):
            relative_error = 0
        else:
            relative_error = np.inf
    else:
        relative_error = np.linalg.norm(diff) / ref_norm
    return abs_error, relative_error

float_model_dbg = fused_model
qmodel_dbg = quantized

for key in float_model_dbg.state_dict().keys():
    float_w = float_model_dbg.state_dict()[key]
    qkey = key
    # Get rid of extra hierarchy of the fused Conv in float model
    if key.endswith('.weight'):
        qkey = key[:-9] + key[-7:]
    if qkey in qmodel_dbg.state_dict():
        q_w = qmodel_dbg.state_dict()[qkey]
        if q_w.dtype == torch.float:
            abs_error, relative_error = l2_error(float_w.numpy(), q_w.detach().numpy())
        else:
            abs_error, relative_error = l2_error(float_w.numpy(), q_w.dequantize().numpy())
        print(key, ', abs error = ', abs_error, ", relative error = ", relative_error)

It should print out the quantization error for each Conv weight such as:

features.0.0.weight , abs error = 0.21341866 , relative error = 0.01703797
features.3.squeeze.0.weight , abs error = 0.095942035 , relative error = 0.012483358
features.3.expand1x1.0.weight , abs error = 0.071949296 , relative error = 0.010309489
features.3.expand3x3.0.weight , abs error = 0.18284422 , relative error = 0.025256516
features.4.squeeze.0.weight , abs error = 0.088713735 , relative error = 0.011313644
features.4.expand1x1.0.weight , abs error = 0.0780085 , relative error = 0.0126931975
...
@hx89 the performance of the fused model is good.

That means there's something wrong on the quantization side, not the fusion side.

Here's the log of the relative norm errors - [log screenshot] Can you suggest what to do next ? Is there any way to reduce these errors ? Apart from QAT of course

Looks like the first Conv basenet.slice1.3.0.weight has the largest error; could you try skipping the quantization of that Conv and keeping it as the float module? We have previously seen that some CV models' first Conv is sensitive to quantization, and skipping it would give better accuracy.

@hx89 actually it seems like all of these have pretty high relative errors - [ basenet.slice1.7.0.weight , basenet.slice1.10.0.weight , basenet.slice2.14.0.weight , basenet.slice2.17.0.weight , basenet.slice3.20.0.weight , basenet.slice3.24.0.weight , basenet.slice3.27.0.weight , basenet.slice4.30.0.weight , basenet.slice4.34.0.weight ]

although that seems like a good idea, keeping a few layers as float while converting the rest to int8. I am not sure how to pass the partial model to torch.quantization.convert() for quantization and then combine the partially quantized model and the unquantized layers together for inference on the image. Could you provide an example ? Thanks a ton

It's actually simpler; to skip the first conv, for example, there are two steps:

Step 1: Move the quant stub after the first conv in the forward function of the module. For example, in the original quantizable module, the quant stub is at the beginning, before conv1:

class QuantizableNet(nn.Module):
    def __init__(self):
        ...
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv1(x)
        x = self.maxpool(x)
        x = self.fc(x)
        x = self.dequant(x)
        return x

To skip the quantization of conv1 we can move self.quant() after conv1:

class QuantizableNet(nn.Module):
    def __init__(self):
        ...
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.conv1(x)
        x = self.quant(x)
        x = self.maxpool(x)
        x = self.fc(x)
        x = self.dequant(x)
        return x

Step 2: Then we need to set the qconfig of conv1 to None after prepare(); this way PyTorch knows we want to keep conv1 as a float module and won't swap it with a quantized module:

model = QuantizableNet()
...
torch.quantization.prepare(model)
model.conv1.qconfig = None

@hx89 Thank you for your advice. I tried placing the QuantStub after slice4 in basenet (line 157) and the DeQuantStub at the end. Also I set the qconfig of slice1-4 as None. But now I get the error

RuntimeError: All dtypes must be the same. (quantized_cat at /Users/distiller/project/conda/conda-bld/pytorch_1570710797334/work/aten/src/ATen/native/quantized/cpu/qconcat.cpp:59)

raised by self.skip_add.cat() (line 88). My guess is it's trying to concat between fp32 and int8 tensors - hence the problem. I tried moving my QuantStub around, but my network has a lot of concat layers, so I always incur this problem. Any ideas on how to deal with this issue ? Thanks again for your help so far

There are a couple of things I noticed in your partial_quantized_craft.py:

In line 103: y=self.basenet.dequant(y), it would be better to define the dequant in the CRAFT class and use it, instead of using the dequant from basenet.

For the error you got, it's because you moved quant() down, so h_relu2_2 became float, for example. You may add a quant stub so that the output is still in int8:

...
h_relu2_2 = h
...
h_relu2_2_int8 = self.quant2(h_relu2_2)
...
out = vgg_outputs(h_fc7, h_relu5_3, h_relu4_3, h_relu3_2, h_relu2_2_int8)

Notice you can't reuse the same quant stub and need to create a new one, since each quant stub will have different quantization parameters.
def forward(self, X):
    X = self.quant(X)
    h = self.slice1(X)
    h_relu2_2 = h
    h = self.slice2(h)
    h_relu3_2 = h
    h = self.slice3(h)
    h_relu4_3 = h
    h = self.slice4(h)
    h = self.quant(h)
    h_relu5_3 = h
    h = self.slice5(h)
    h_fc7 = h
    h_fc7 = self.dequant1(h_fc7)
    h_relu5_3 = self.dequant2(h_relu5_3)
    h_relu4_3 = self.dequant3(h_relu4_3)
    h_relu3_2 = self.dequant4(h_relu3_2)
    h_relu2_2 = self.dequant5(h_relu2_2)
    vgg_outputs = namedtuple("VggOutputs", ['fc7', 'relu5_3', 'relu4_3', 'relu3_2', 'relu2_2'])
    out = vgg_outputs(h_fc7, h_relu5_3, h_relu4_3, h_relu3_2, h_relu2_2)
    return out

Hi Raghav, I see one more error. You are using the same float functional module at multiple locations: [snippets omitted] etc. This will cause the activations to be quantized incorrectly. A float functional module can be used only once, as each module collects statistics on activations. Can you make all of them unique?

@hx89 Thank you so much for your advice. Based on points 1 & 2, I tried several configurations and they worked much better. [result screenshot] The model size reduced from 84 MB to 36 MB (quant() placed after slice 2 in vgg). Here's another result from a model of 75 MB (quant() placed after slice 4 in vgg). [result screenshot] I am still trying other configurations to improve my results. In the meantime, I also want to try your configuration (quantize vgg_bn only); could you explain in your code why we have 2 quant() stubs and 5 dequant() stubs ? What parts of the qconfig must be set to None ? Thanks again for your help
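The SQNR formula quoted in the thread can be packaged as a plain-Python helper for experimenting outside of torch (the function name and the list-based tensor representation are my own simplification):

```python
import math

def sqnr_db(ref, quant):
    """Signal-to-quantization-noise ratio in dB between a reference (float)
    output and its quantized counterpart; higher means less quantization error."""
    noise = math.sqrt(sum((r - q) ** 2 for r, q in zip(ref, quant)))
    signal = math.sqrt(sum(r ** 2 for r in ref))
    if noise == 0.0:
        return float("inf")
    return 20.0 * math.log10(signal / noise)

# An output with a small quantization error scores a higher SQNR than a noisier one.
ref = [0.5, -1.25, 2.0, 0.125]
print(sqnr_db(ref, ref))                       # inf: identical outputs
print(sqnr_db(ref, [r + 0.01 for r in ref]))
print(sqnr_db(ref, [r + 0.5 for r in ref]))
```

Running the helper per layer, as the thread does with torch.norm, points at the modules where the quantized model diverges most from the float one.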
https://discuss.pytorch.org/t/quantization-error-during-concat-runtimeerror-didnt-find-kernel-to-dispatch-to-for-operator-aten-cat/59966/10
27 November 2009 17:28 [Source: ICIS news]

LONDON (ICIS news)--Here is Friday's end of day European oil and chemical market summary from ICIS pricing.

CRUDE: January WTI: $75.52/bbl, down $2.44/bbl. January BRENT: $76.59/bbl, down $0.40/bbl. Prices changed direction to recoup most of the losses posted earlier in the day as fears of the impact of the ...

NAPHTHA: Open spec spot cargoes were assessed in a $682-688/tonne CIF (cost, insurance and freight) NWE (northwest Europe) range, up $15/tonne CIF NWE on the buy side of the range set earlier in the day. December swaps were pegged at $681-683/tonne CIF NWE. A trader bid in the afternoon at $682/tonne CFR

BENZENE: More December benzene deals were done at $845-860/tonne CIF ARA (Amsterdam-Rotterdam-Antwerp)

STYRENE: Two December styrene deals were done at $1,050-1,060/tonne FOB (free on board)

TOLUENE: The toluene market closed at $780-800/tonne FOB

MTBE: One trade was reported for MTBE in the afternoon, for 1,000 tonnes at $890/tonne

XYLENES: The paraxylene market remained quiet throughout the day and was unchanged at $970-1,000
http://www.icis.com/Articles/2009/11/27/9267939/EVENING-SNAPSHOT-Europe-Markets-Summary.html
Distinct Characters May 9, 2017

The quadratic solution uses two nested loops to pair each character to every other character:

(define (distinct? str)
  (call-with-current-continuation
    (lambda (return)
      (let ((cs (string->list str)))
        (do ((cs cs (cdr cs)))
            ((null? cs) #t)
          (do ((ds (cdr cs) (cdr ds)))
              ((null? ds))
            (when (char=? (car cs) (car ds))
              (return #f))))))))

> (distinct? "Programming")
#f
> (distinct? "Praxis")
#t

We use a common Scheme idiom call-with-current-continuation to define return, which performs an early return from the function as soon as it finds a non-distinct character.

The O(n log n) solution sorts the characters in the string, then runs through them to determine if there are any adjacent duplicates:

(define (distinct? str)
  (let loop ((cs (sort char<? (string->list str))))
    (if (null? (cdr cs)) #t
      (if (char=? (car cs) (cadr cs)) #f
        (loop (cdr cs))))))

> (distinct? "Programming")
#f
> (distinct? "Praxis")
#t

The linear solution uses a hash table to store characters, checking before each insertion if the character is already present:

(define (distinct? str)
  (let ((letters (make-eq-hashtable)))
    (let loop ((cs (string->list str)))
      (if (null? cs) #t
        (if (hashtable-ref letters (car cs) #f) #f
          (begin (hashtable-set! letters (car cs) 1)
                 (loop (cdr cs))))))))

> (distinct? "Programming")
#f
> (distinct? "Praxis")
#t

We used R6RS hash tables in the code shown above, but elsewhere we used Guile hash tables, which differ in their calling syntax.

In Python. Added an additional method to Paul's solution in Python. Uses pruning, combined with dictionary add/lookup, which is O(1) on average. Has better performance, depending on the distribution of frequently occurring characters among the different words (i.e. the character 'e' recurs frequently in English). Rutger: I expect the difference between the set object and your emulation of it is just noise. John: order_n_prune is roughly 10-20% faster than order_n for the words in /usr/share/dict/words (linux).
For other datasets this may not hold, i.e. a dataset with words that only have unique characters.

Klong: Not sure how to provide the 3 different solutions, so here's the quickest one:

        str::"Programming Praxis"
"Programming Praxis"
        str=?str
0
        str::"Praxis"
"Praxis"
        str=?str
1

:" ?a returns a list containing unique elements from 'a' in order of appearance."

Not sure how to do the 3 solutions, so here's just one:

DISTCHAR(str) ; Check for distinct characters
 n arr,flg,i s flg=1
 f i=1:1:$l(str) d  q:'flg
 . s char=$e(str,i)
 . i $d(arr(char)) s flg=0
 . e  s arr(char)=""
 q flg

MCL> W $$DISTCHAR^DISTCHAR("Programming Praxis")
0
MCL> W $$DISTCHAR^DISTCHAR("Praxis")
1

Just the fastest solution (O[n] time) in C11:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

bool distinct (char *str);

int main (int argc, char **argv)
{
    char *strings[] = {"Programming", "Praxis", "Cat", "Dog", "Dogg", NULL};
    for (register size_t i = 0; strings[i] != NULL; ++i) {
        if (distinct(strings[i]))
            printf("'%s' is distinct\n", strings[i]);
        else
            printf("'%s' is not distinct\n", strings[i]);
    }
    exit(0);
}

// O(n) solution
bool distinct (char *str)
{
    size_t arr[127] = {0};  // ASCII
    const size_t n = strlen(str);
    for (register size_t i = 0; i < n; ++i)
        if (++arr[str[i]] > 1)
            return false;
    return true;
}
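The comment thread mentions Python set-based solutions whose code didn't survive extraction; a minimal sketch of the O(n) idea using a set (the counterpart of the hash-table Scheme version) looks like this:

```python
def distinct(s):
    """Return True if no character occurs twice in s."""
    seen = set()
    for ch in s:
        if ch in seen:
            return False  # early exit, like the call/cc return in Scheme
        seen.add(ch)
    return True

print(distinct("Programming"))  # False: 'r', 'g' and 'm' repeat
print(distinct("Praxis"))       # True
```

Set membership and insertion are O(1) on average, so the whole scan is linear, matching the hash-table solution above.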
https://programmingpraxis.com/2017/05/09/distinct-characters/2/
Easy Home Surveillance

Introduction: Home surveillance is handy for catching thieves and, mostly, just keeping tabs on your pets throughout the day. There are plenty of free websites where you can "subscribe" and they will host your web cam. But these sites often don't store all the photos taken by your cam -- at least the free ones don't. Most often, these sites simply post the most recent photo and then refresh it -- saving over the old one. In my book, that's just not good enough. What happens if some low-life robber breaks into your place and steals all of your stuff? In order for you to catch someone in the act, you'd have to sit in front of the computer all day long. Otherwise, you won't have any idea of who's broken in.

In this instructable, I'll show you how to set up an inexpensive, yet reliable system to capture photos, upload them to your website, and rename them with a time stamp. This way, ALL of the photos taken are saved on your server, and you can delete them at your convenience. Now, even if your computer is stolen, you'll likely have the photo evidence to catch the low-life thief.

Step 1: Materials List

You will need:
• Webcam
• Computer
• Internet access
• Domain name and hosting account -- I use Godaddy; you can get a domain for 9.99 and three years of hosting for pretty cheap (Google search for hosting promo codes).
• Php enabled server -- ask your provider if you don't know if you have this.
• Some programming knowledge - though I will be providing you with the code you need, so you can really just copy and paste.
• FWink -- a free webcam program you can download here:

Files -- I will give you the scripts for these:
index.php -- a page to display the most recent photo
stamp.php -- a page that renames the most recent photo with a date and time stamp (you'll leave this page up and running when you leave your home)
all.php -- a page to list all of the photos

Optional parts to make your cam pan from left to right:
• Arduino or, better yet, a DIY-Duino ; - )
• Servo motor
• 9 Volt battery
• Duct tape
• Some kind of stand to tape the webcam to, unless you are using your computer's webcam.

Step 2: Set Up the Folders

OK, let's start with setting up your site. In the root directory of your site, create a folder named "_fwink" (note the underscore). Here is where you will save the three files:

index.php
stamp.php
all.php

You'll create these files in the next three steps. Inside the _fwink folder, create another folder called "photos". Here is where all of the photos will be uploaded. You MUST name all of these files and folders EXACTLY like this, or the code will not work.

Step 3: Create the Index.php File

The next three steps will contain the code you need for the three files. The pages I created are pretty simple. If you are keen on code, then you can customize them however you wish, adding colors and the like. For this instructable, I kept the design at the bare minimum.

Open your coding program. I use Dreamweaver. Note, you MUST have php enabled on your server for this instructable to work. Contact your hosting provider to see if you have it enabled.

Create the index.php file, which will show you the most recent photo taken.
Here is the code for the index.php file:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Webcam</title>
<!-- this line refreshes the page every 5 seconds -->
<meta http-equiv="refresh" content="5" />
</head>
<body>
<table width="320" border="0" cellspacing="0" cellpadding="10" align='center'>
<!-- this line gets the most recent uploaded photo, the rand() function allows for any cache that might show old photos -->
<tr><td><?php echo "<img src='photos/recent.jpg?r=".rand(9,9999999)."' width='640' height='480' border='1'>"; ?></td></tr>
<tr><td align="center">
Image refreshes every 5 seconds.
</td></tr>
<tr><td align="center">
<!-- this is a link to the page that will list all of the photos you have saved -->
<a href='all.php'>Show all</a>
</td></tr>
</table>
</body>
</html>
<!-- End of the Code -->

This part of the code:

<meta http-equiv="refresh" content="5" />

refreshes the page every 5 seconds. Save the file as index.php and upload it into the _fwink folder on your server. The image attached is a sample of what you'd hope to see when you have the camera set up -- a messy house, lazy animals and no zombies or robbers. You'll set up the camera later.

Step 4: Create the Stamp.php File

The page stamp.php will need to stay up and running on your computer for everything to work. If you close the window, the photo will NOT be copied and renamed, and you will only have the most recent photo.

What does this file/page do? First, it looks to see if there is a new "webcam.jpg" file in the photos folder. If there is a file, it copies it to "recent.jpg". Then, the original "webcam.jpg" is renamed with the current date and time -- this way, your photo is saved when the Fwink program writes over it with the next photo. And this will be done every 5 seconds.

The page will tell you if it has found a new photo or not with a simple message: "Time stamping most recent photo" or "No new photo" will show on the page.
Here is the code for the stamp.php file:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Time Stamp Photos</title>
<!-- this line refreshes the page every 5 seconds so we can check for new photos and rename them -->
<meta http-equiv="refresh" content="5" />
</head>
<body>
<?php
if(file_exists("photos/webcam.jpg")){
    echo "Time stamping most recent photo<br><br>";
    copy("photos/webcam.jpg","photos/recent.jpg");
    $stamp=date('Y-m-d_h-i-s');
    rename("photos/webcam.jpg","photos/".$stamp.".jpg");
}else{
    echo "No new photo";
}
?>
</body>
</html>
<!-- end of code -->

Again,

<meta http-equiv="refresh" content="5" />

refreshes the page every 5 seconds. Thus, your file is being renamed every 5 seconds.

Step 5: Create the All.php File

The file all.php simply displays all of the photos that have been taken. Here is the code for this file:
• Set the Directory as shown in the photo: _fwink/photos • Set the File Name as webcam.jpg • You may need to check off the "Use Passive FTP" if the upload is not working. Under Timing: • Set "Capture an image every:" to 5 seconds. Step 8: Setting Up the Video Capture Tab Click on the "Video Capture" tab, here is where you will select your webcam. If your webcam is not plugged in, of course it will not be recognised. So plug in your cam. You may need to restart the program for it to be recognised. Under "Video Device", click the "Change Device" button. A windo will pop up with a list of devices that are recognised by Fwink. Select the cam you want to use. At this point, if you have changed your cam, you will need to restart the program. After you restart Fwink, you should see what your cam is seeing on the main menu. Get back to Settings, and the Video Capture tab. Under "Resize Captured Image" select the size of the photo you want to create. Note that larger images will take longer to upload your server. Step 9: Setting Up the Text Effects Tab Click on the "Text Effects" tab. This tab is pretty simple to understand. If you want a time stamp printed on your image, check off the "Date" and "Time" boxes. Add any of the extra effects you want, I just use the simple date and time. Click "OK" to save your changes. Your webcam will begin taking photos every 5 seconds and uploading them to your server. Note, you will obviously need to be connected to the Internet. Step 10: What Is Happening Now? So what is actually happening now? Every 5 seconds, your webcam is taking a photo, naming it "webcam.jpg" and uploading it to your server. The web page you have open: is checking for "webcam.jpg" in the "photos" folder. 
If a "webcam.jpg" is found, it is renamed with the filename "recent.jpg" and also copied to the same folder with the filename as a time stamp -- something like "2011-03-23_09-02-28.jpg". The cam will continue to upload more photos until you exit out of the program and close the internet window with "stamp.php" running.

There are tons of crafty ways you can hide your webcam in a stuffed animal or a host of other clever disguises. If you are using your computer's webcam, or just want to use a fixed webcam, you can stop here. However, if you want the webcam to pan from left to right on a 180-degree axis, stay tuned for the fun part in the next step!

Step 11: Use a Microcontroller to Pan the Camera

For this, you will need:
• An Arduino or DIY-Duino (make one here)
• Servo motor -- regular (180-degrees) NOT full-rotation
• Webcam
• A stand -- to secure the servo and cam
• Duct tape
• Small piece of plexi-glass
• Simple panning code (I'll give this to you)

You can see how I set everything up from the photo in step 1, but I will describe it anyway. For this project, I didn't want to screw the servo directly to the base of the webcam. So I cut out a small piece of square plexi-glass with my Dremel and used that. I drilled holes into the plexi and screwed it to the servo. Then, I used duct tape to secure the servo to the bottom of the webcam.

I duct-taped the servo to the stand (which was the base of an old computer monitor). I also wrapped duct tape around the base to secure the webcam's cord, just so the cord wasn't swinging all over the place when it rotated.

I plugged the output pin of the servo into pin 5 of my DIY-Duino, red to 5V, black to ground. Then I plugged in a 9-volt battery to give it life. Note: Servos suck up a lot of power, so a battery isn't going to last long. A wall plug is your best bet for continuous, all-day power.

Some closing notes:
- Again, you MUST have php enabled on your server for this to work.
- You need to leave the stamp.php page open or the program will just be writing over the old photo and not copying it to a time-stamped filename.

If you have any questions, or something isn't working for you, leave a comment. I'll see what I can do to help.

Step 12: Upload the Sketch

Here is the sketch to rotate the servo. It's easily modified so you can make the turning delay, etc. anything you want. It is commented to let you know what's going on.

// ------- Controlling the webcam with a servo
// ------- Corey Kingsbury
// -------

#include <Servo.h>

Servo myservo;  // create servo object to control a servo

int servoDegrees=0;
int modValue=20;       // sets the degree increment
int centerDelay=10000; // sets the center delay to 10 seconds
int mainDelay=6000;    // set all other position delays to 6 seconds

void setup(){
    myservo.attach(5); // Attach the servo to pin 5
}

void loop(){
    servoDegrees=servoDegrees+modValue; // set the degrees to equal current degrees plus the increment value
    if(servoDegrees>120){ // if the degrees are greater than 120, then begin to count down
        modValue=-20;
    }
    if(servoDegrees<0){ // if the degrees are less than 0, then begin to count up
        modValue=20;
    }
    myservo.write(servoDegrees); // sets servo position according to the degrees in "servoDegrees"
    if(servoDegrees>=90 && servoDegrees<=110){ // sets a longer delay when the camera is facing straight out
        delay(centerDelay);
    }else{
        delay(mainDelay); // the normal delay
    }
}

Really cool stuff. Nice making your own stuff.

Question: does stamp.php need to be open constantly on the server or the client computer?

It needs to be open somewhere. It doesn't have to be on the client computer. It can be open anywhere. This page refreshes every 5 seconds and looks for a new photo. If one is there, it is renamed and saved. If the stamp page is not open, the most recent photo will save over the previous one.

does index.php have to be called index.php?
is it possible to do this: have the photo be displayed on, say, photo.php, if you modified the code to send the picture to photo.php?

that should work if you modify anywhere index is called to be photo instead.

Ok, thanks. Do you also have to specify photo.php in the fwink upload settings?

Anything that was previously index should be changed to photo. on the server end. say your website is http: you would want the location to be hope this helps

oh, thanks. what exactly is the "root directory" of your website? do you access it through the server end of your website, or the client end?
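Stepping back to the panning sketch in step 12: the sweep pattern it produces can be checked without any hardware by simulating the loop. The following is a language-agnostic sketch in Python (not Arduino code) that reproduces the angle sequence written to the servo:

```python
def sweep_angles(steps, mod=20, lower=0, upper=120):
    """Reproduce the angle sequence written by the Arduino loop in step 12."""
    angles = []
    degrees = 0
    for _ in range(steps):
        degrees += mod
        if degrees > upper:      # past the right edge: start counting down
            mod = -abs(mod)
        if degrees < lower:      # past the left edge: start counting up
            mod = abs(mod)
        angles.append(degrees)   # this is the value handed to myservo.write()
    return angles

print(sweep_angles(16))
```

Note that the sequence briefly reaches 140 and -20 before the direction flips, because the bounds check runs after the increment; the written value overshoots the intended 0-120 window by one step at each end.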
http://www.instructables.com/id/Easy-Home-Surveilance/
1. R Function

In this tutorial on R Function, you can read all about an introduction to R and the features of R. This tutorial will help to answer some of the questions that might have come to your mind: how to create a script in R, how to transform the script into an R function, what the functions in R are and how to use them, how to use function objects in R, and the scope of an R function.

2. R Introduction and Features

R is a free software programming language and software environment for statistical computing and graphics. The R language is widely used among statisticians and data miners for developing statistical software and for data analysis and visualization. Follow this link if you want to learn R from the basics.

Let us now look at some key capabilities of R:
- R is easily extensible through functions and extensions.
- It provides effective data handling and manipulation components.
- It provides tools specific to a wide variety of data analyses.
- R has several operators for calculations on arrays and other types of ordered data.
- It provides advanced visualization capabilities through the different tools available.

Let us now learn about scripts and functions in R.

3. Moving from Scripts to R Function

Before starting with R functions, let us revise R scripting. R functions provide 2 major advantages over scripts:
- Functions can work with variable input, so we can use them with different data.
- Functions return the output as an object, so you can work with the result of that function.

a. How to create a Script in R?

> source("sample.R")

This will read the file sample.R in R. To create any script in R, you need to open a new script file in an editor and type the code. Let us see how to create a script that presents fractional numbers as percentages rounded to one decimal digit.
x <- c(0.458, 1.6653, 0.83112)          # the values you want to present as percentages
percent <- round(x * 100, digits = 1)   # round the result to 1 decimal place
result <- paste(percent, "%", sep = "") # paste a percentage sign after the rounded number
print(result)                           # print the desired result

> source('pastePercent.R')

The output gets displayed as below:

[1] "45.8%" "166.5%" "83.1%"

This is how a script is written in R and how an R script is executed.

b. Transforming the Script into R Function

Now that we have seen how to write and run a script in R, we are going to see how to convert an R script into a function in R. First, define a function with a name, so that it becomes easier to call the R function and pass arguments to it as input. The following code defines an R function named addPercent with a single argument named x:

addPercent <- function(x){
  percent <- round(x * 100, digits = 1)
  result <- paste(percent, "%", sep = "")
  return(result)
}

4. Using R Function

After transforming the script into an R function, you need to save it, and you can use the function in R again if required. R does not let you know by itself that it loaded the function, but it is present in the workspace; if you want, you can check this by using the ls() command as below:

> ls()

It will display the complete list as below:

[1] "addPercent" "percent" "result" "x"

Now that we know which functions in R are present in memory, we can use them when required. For example, if you want to create percentages from values again, you can use the addPercent function as below:

> new.numbers <- c(0.8223, 0.02487, 1.62, 0.4)  # insert new numbers
> addPercent(new.numbers)                       # use the addPercent() function

This will give output as:

[1] "82.2%" "2.5%" "162%" "40%"

5. Using the Function Objects in R

In R, a function is also an object, and you can manipulate it as you do other objects.
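As an aside for readers coming from other languages, the addPercent() logic above translates almost one-to-one into Python. This is an illustrative sketch only, not part of the R tutorial:

```python
def add_percent(x):
    """Mimic the R addPercent() function for a list of fractions,
    including the is.numeric() guard that returns NULL (None here)."""
    if not all(isinstance(v, (int, float)) for v in x):
        return None
    # round to 1 decimal, then drop a trailing ".0" the way R's paste() does
    return [f"{round(v * 100, 1):g}%" for v in x]

print(add_percent([0.458, 1.6653, 0.83112]))
```

Calling add_percent([0.8223, 0.02487, 1.62, 0.4]) reproduces the "82.2%" "2.5%" "162%" "40%" output shown above.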
You can assign a function to a new object using the command below:

> ppaste <- addPercent

This copies the function code of addPercent into the new object ppaste. Typing the name of the new object shows the code it holds:

> ppaste
function(x){
  percent <- round(x * 100, digits = 1)   # round the result to one decimal place
  result <- paste(percent, "%", sep = "") # paste a percentage sign after the rounded number
  return(result)                          # return the result
}

Calling ppaste(new.numbers) now gives the same result as addPercent(new.numbers).

6. Reducing the Number of Lines in R

- Returning values by default
- Dropping {}

Let us see the above 2 ways in detail below:

a. Returning Values by Default in R

Until now, in all the code above, we have written the return() function to return output. But in R this can be skipped, because by default R returns the value of the last line of code in the R function body. Now the above code becomes:

addPercent <- function(x){
  percent <- round(x * 100, digits = 1) # round the result to one decimal place
  paste(percent, "%", sep = "")         # the last expression is returned automatically
}

There is no need to use the return statement. You need return if you want to exit the function before the end of the code in the body. For example, you could add a line to the addPercent function that checks whether x is numeric, and if not, returns NULL:

addPercent <- function(x){
  if( !is.numeric(x) ) return(NULL) # check whether x is numeric, and if not, return NULL
  percent <- round(x * 100, digits = 1)
  paste(percent, "%", sep = "")
}

b. Dropping the {}

You can also drop the braces in some cases, even though they form a proverbial wall around the function body:

> odds <- function(x) x / (1-x)

Here no braces are used to write the function.

Any doubt yet in R Function? Please comment.

7.
Scope of R Function

Every object you create ends up in this environment, which is also called the global environment. The workspace, or global environment, is the universe of the R user where everything happens.

There are 2 types of scope for an R function, as explained below:

a. External R Function

If you use an R function, the function first creates a temporary local environment. This local environment is nested within the global environment, which means that, from that local environment, you can also access any object from the global environment. As soon as the function ends, the local environment is destroyed along with all objects in it. This is what external functions are. If R sees any object name, it first searches the local environment. If it finds the object there, it uses that one; otherwise, it searches the global environment for that object.

b. Internal R Function

Using global variables in an R function is not considered good practice. Writing your functions in such a way that they need objects in the global environment is not efficient, because you use functions to avoid dependency on objects in the global environment in the first place. A function that depends only on its own arguments and local variables will always give the same results. This characteristic of R may strike you as odd, but it has its merits. Sometimes you need to repeat some calculations a few times within a function, but these calculations only make sense inside that function. The example below shows the use of an internal function:

calculate.eff <- function(x, y, control){    # defines the calculate.eff() function, which creates a new local environment
  min.base <- function(z) z - mean(control)  # defines the min.base() function inside the local environment of calculate.eff()
  min.base(x) / min.base(y)
}

A sample run of the calculate.eff() function is shown below:

> half <- c(2.23, 3.23, 1.48)
> full <- c(4.85, 4.95, 4.12)
> nothing <- c(0.14, 0.18, 0.56, 0.23)
> calculate.eff(half, full, nothing)

This gives output as below:

[1] 0.4270093 0.6318887 0.3129473

A closer look at the R function definition of min.base() shows that it uses an object control but does not have an argument with that name; control is found in the enclosing local environment of calculate.eff().

8. Finding the Methods behind the Function

It is easy to find out which method a function uses in R. You can just look at the function code of print() by typing its name at the command line:

> print
function (x, ...)
UseMethod("print")
<bytecode: 0x0464f9e4>
<environment: namespace:base>

The output shows the method used, print in this case, along with additional information about the print method.

a. The UseMethod()

b. Calling Functions Inside Code

You can also call the function print.data.frame() yourself. Below is the example for the same:

> small.one <- data.frame(a = 1:2, b = 2:1) # define the small.one data frame
> print.data.frame(small.one)               # call print.data.frame() to print the small.one object

This displays the result as below:

  a b
1 1 2
2 2 1

9. Using Default Methods in R

> print.default(small.one) # the default print method prints the data frame as a list
$a
[1] 1 2

$b
[1] 2 1

attr(,"class")
[1] "data.frame"

If you like this post on R Function and have any query about the functions in R, do let me know by leaving a comment.
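The scoping behavior described above is not unique to R. The calculate.eff()/min.base() example maps directly onto closures in Python, where the inner function also reads control from its enclosing scope rather than from an argument. The following is an illustrative translation, not part of the original tutorial:

```python
from statistics import mean

def calculate_eff(x, y, control):
    """Mirror of the R example: min_base() exists only inside
    calculate_eff() and finds `control` in the enclosing scope."""
    def min_base(z):
        # `control` is not an argument here; it is captured from the
        # enclosing calculate_eff() call, just like in the R version.
        return [v - mean(control) for v in z]
    return [a / b for a, b in zip(min_base(x), min_base(y))]

half = [2.23, 3.23, 1.48]
full = [4.85, 4.95, 4.12]
nothing = [0.14, 0.18, 0.56, 0.23]
print(calculate_eff(half, full, nothing))
```

Rounded to seven decimals, this reproduces the [1] 0.4270093 0.6318887 0.3129473 output shown above.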
https://data-flair.training/blogs/r-functions/
When running this program thread-limit.c on my dedicated debian server, the output says that my system can't create more than around 600 threads. I need to create more threads, and fix my system misconfiguration. Here is some information about my dedicated server:

de801:/# uname -a
Linux de801.ispfr.net 2.6.18-028stab085.5 #1 SMP Thu Apr 14 15:06:33 MSD 2011 x86_64 GNU/Linux

de801:/# java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

de801:/# ldd $(which java)
linux-vdso.so.1 => (0x00007fffbc3fd000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00002af013225000)
libjli.so => /usr/lib/jvm/java-6-sun-1.6.0.26/jre/bin/../lib/amd64/jli/libjli.so (0x00002af013441000)
libdl.so.2 => /lib/libdl.so.2 (0x00002af01354b000)
libc.so.6 => /lib/libc.so.6 (0x00002af013750000)
/lib64/ld-linux-x86-64.so.2 (0x00002af013008000)

de801:/# cat /proc/sys/kernel/threads-max
1589248

de801:/# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 794624
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 128
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Here is the output of the C program:

de801:/test# ./thread-limit
Creating threads ...
Address of c = 1061520 KB
Address of c = 1081300 KB
Address of c = 1080904 KB
Address of c = 1081168 KB
Address of c = 1080508 KB
Address of c = 1080640 KB
Address of c = 1081432 KB
Address of c = 1081036 KB
Address of c = 1080772 KB
100 threads so far ...
200 threads so far ...
300 threads so far ...
400 threads so far ...
500 threads so far ...
600 threads so far ...
Failed with return code 12 creating thread 637.
Any ideas how to fix this? I need to create more than 600 threads at a time.

You only need threads for things you can usefully do at the same time. Your system cannot usefully do more than 600 things at a time. You do not need more threads to do more work.

In any event, you're setting the wrong stack size. You're setting the limiting size of the initial thread stack, the one not created by your code. You're supposed to be setting the size of the stack of the threads your code creates. Change:

printf("Creating threads ...\n");
for (i = 0; i < MAX_THREADS && rc == 0; i++) {
    rc = pthread_create(&(thread[i]), NULL, (void *) &run, NULL);

to (roughly):

printf("Creating threads ...\n");
pthread_attr_t pa;
pthread_attr_init(&pa);
pthread_attr_setstacksize(&pa, 2*1024*1024);
for (i = 0; i < MAX_THREADS && rc == 0; i++) {
    rc = pthread_create(&(thread[i]), &pa, (void *) &run, NULL);
pthread_attr_destroy(&pa);

You can also call pthread_attr_getstacksize to find out your platform's default thread stack size. Note that if a thread exceeds its allocated stack, your process will die.

Even with a 2MB stack, which can be very dangerous, you're still looking at 1.2GB just for stacks to have 600 threads. Whatever your job is, threads are not the right tool for it.

Failed with return code 12 creating thread 640.

pthread_attr_setstacksize(&pa, 128*1024);
Failed with return code 12 creating thread 635.

This is nothing to do with thread limits. Error code 12 doesn't apply to thread limits.

$ perror 12
OS error code 12: Cannot allocate memory

I guess if anything you're running low on stack; if you set it to the Red Hat default of 8MB you're probably going to be able to run more threads. This one is the same as the one linked to, with explicit stack sizes set to 8MB. See what happens when you compile this one.
/* compile with: gcc -lpthread -o thread-limit thread-limit.c */
/* originally from: */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/time.h>
#include <sys/resource.h>

#define MAX_THREADS 10000
int i;

void run(void) {
    char c;
    if (i < 10)
        printf("Address of c = %u KB\n", (unsigned int) &c / 1024);
    sleep(60 * 60);
}

int main(int argc, char *argv[]) {
    struct rlimit limits;
    limits.rlim_cur = 1024*1024*8;
    limits.rlim_max = 1024*1024*8;
    if (setrlimit(RLIMIT_STACK, &limits) == -1) {
        perror("Failed to set stack limit");
        exit(1);
    }
    /* ... */
}

Failed with return code 12 creating thread 637.

Just got the following information: this is a limitation imposed by my host provider. This has nothing to do with programming, or Linux.
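The stack-size arithmetic behind the accepted answer is worth making explicit. The sketch below is back-of-envelope budgeting only (the real blocker in this case turned out to be the hosting provider's limit), but it shows why the per-thread stack size dominates:

```python
MiB = 2 ** 20
GiB = 2 ** 30

def total_stack_bytes(threads, stack_bytes):
    """Address space reserved for thread stacks alone."""
    return threads * stack_bytes

def threads_that_fit(budget_bytes, stack_bytes):
    """Upper bound on thread count if stacks were the only cost."""
    return budget_bytes // stack_bytes

# 600 threads at the answer's suggested 2 MiB each:
print(total_stack_bytes(600, 2 * MiB) / GiB, "GiB")
```

For 600 threads at 2 MiB each this comes to roughly 1.2 GB, matching the answer's estimate; at the 8 MB Red Hat default, 637 threads already need close to 5 GiB of address space for stacks alone.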
http://serverfault.com/questions/333407/613-threads-limit-on-debian/333612
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

"Stephen M. Webb" <stephen@bregmasoft.com> writes:

| On Mon, 15 Oct 2001, Ira Ruben wrote:
| > Sometime in the past, specifically according to gcc/ChangeLog.2
| > (8/29/99), Zack Weinberg added a #define for '_Bool' to stdbool.h to
| > define it as the type 'bool'. This means that now with gcc 3.x you
| > can no longer include both stdbool.h and libstdc++'s type_traits.h in
| > the same compilation because type_traits.h defines a _Bool template.
| > It didn't in 2.95.x days, which is why this wasn't a problem until now
| > with gcc 3.x.
| >
| > Does anyone have a suggested (permanent) fix for this problem? For
| > the time being I've added a #undef _Bool just before the template
| > definition in type_traits.h.

_Bool is a keyword in C99. The right solution is to ditch _Bool from type_traits.h. I began some work some time ago with the file cpp_type_traits.h.

-- Gaby
CodeSourcery, LLC
http://gcc.gnu.org/ml/libstdc++/2001-10/msg00139.html
The code examples here didn't work for me. It returns "No coupon type", and no coupon gets created. Anyone have suggestions on this? This is my code:

Backend:

// Filename: backend/test.jsw (web modules need to have a .jsw extension)
import wixMarketing from 'wix-marketing-backend';

export function createCoupon() {
    let couponData = {
        "name": "My Coupon",
        "code": "myCouponCode",
        "startTime": new Date(),
        "expirationTime": new Date(2020, 12, 31),
        "active": true,
        "scope": {
            "namespace": "stores"
        },
        "moneyOff": 10,
    };
    return wixMarketing.createCoupon(couponData);
}

Page:

import { createCoupon } from 'backend/test';

$w.onReady(function () {
    createCoupon().then(id => {
        console.log(id);
    })
    .catch(err => {
        console.log(err);
    });
});
You'll find a complete list of properties that can be used in CouponInfo. The problem is that the documentation is wrong. It says to use "percentOff" "moneyOff" "fixedPrice" but none of them work. Somehow "info" figured out that moneyOffAmount works, which is nowhere to be found in documentation. Nice catch guys. Thanks for pointing this out. I sent this on to the docs team to sort this out. Thank you! I appreciate the help Thanks again everyone. The docs are now fixed. 🍻 Thank you so much Yisrael! It works perfectly now. Can't believe I didn't guess "percentOffRate" haha. You shouldn't have to guess - that's what docs are for. But then again, we're developers - we don't need no steenking docs.
https://www.wix.com/corvid/forum/community-discussion/wix-marketing-backend-coupons-createcoupon-not-working
Is it possible in a plugin to get the point currently under the cursor? I want to build a plugin that does fuzzy cursor insertion, so that if I hold down a meta key while mousing around, the cursor is moved to be at the word boundary nearest to the current mouse location instead of being in the middle of a word.

To start, I want to have a text command that moves the cursor to the matching point under the cursor, but I can't seem to sort out how to get where the cursor is by asking for it. I don't want to be relegated to on_hover actions. Thanks!

on_hover is the only way to get notified about cursor positions.

By including a mousemap, I can get mouse point information on click and release; is it possible to get updates for drags as well?

One method of getting that information is to create an EventListener and listen for the drag_select command being executed, but there may be another way.

Using on_selection_changed() in a ViewEventListener definitely works better than the text command. But I don't want to have this behavior be enabled all the time, and I don't want it to be modal. Is there any way to get the currently pressed keys, or ways to finagle a mousemap to both call drag_select and also set a global variable?

I've tried making a text command triggered by a mousemap with ctrl + option + button1, having that command set a global variable and then call sublime.run_command('drag_select'), but that doesn't seem to work. No selection ends up happening. Invoking drag_select from the mousemap directly does work, but I don't know how to get at the args being passed into it from my EventListener. Thanks! This is getting closer.

That seems like it should work (having said that I haven't had a chance to poke at it).
Things to check would be: whether want_event returns True (not False), whether the event is actually being passed along, whether you are invoking the command via self.view.run_command(), self.view.window().run_command() or sublime.run_command() (each run_command targets a different context), and what sublime.log_commands(True) shows.

I'm surprised to find that when I call run_command() from inside a command, nothing is logged by sublime.log_commands(True). So far, no combination of self.view[.window()].run_command("drag_select", [event]) has made text selection occur.

Is there a way to get text events triggered by both key down and key up? It's possible for mousemap commands, but I don't see any examples for keymap commands. Then I could fake this by setting a global during key down and looking for it in on_selection_changed().

Indeed, the command log only displays commands executed at the top level. Possibly this is because so many commands build their functionality by executing other commands and it would be hard to track what exactly is happening, but that's just a guess on my part.

I don't think there's any way to detect a key up event, no.

Simplistically, this plugin creates a drop-in replacement for drag_select that lets you know that a drag selection is happening and then invokes the underlying drag_select command to perform the selection:

import sublime
import sublime_plugin

class MyDragSelectCommand(sublime_plugin.TextCommand):
    def run(self, edit, **kwargs):
        print("my drag select", kwargs)
        self.view.run_command("drag_select", kwargs)

    def want_event(self):
        return True

Replacing the drag_select command in the standard mouse map with my_drag_select works as I would expect it to; it outputs its arguments to the console and then the appropriate selection changes.

Note however that it's the drag_select command that performs the drag select, so you don't get notifications of the selection as it changes (except through a selection changed event).
As such, I don't know that this helps you much unless you can figure a way to detect when the drag_select command has finished executing, so you know to stop paying attention to selection changes.

Fabulous! The kwargs do the trick; maybe the event just needed to be wrapped in an array. This has got it working great.

To determine if the drag_select command has finished, I just check instead to see if a new one has started, by checking whether both the left and right anchors for the selection have changed. If one of them is the same then it's probably the same drag event. Hacks on hacks on hacks, but it seems to work well. Thank you for all your help!

If the API allowed it, I would make holding down ctrl-option cause a shadow cursor to appear where it would be inserted on click, so that's my feature request, but otherwise this lets me do everything I want.

There is a similar feature request on the core: clickable ctrl. You can upvote it or open a new feature request for an API to correctly handle this.
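The "fuzzy cursor" idea itself, snapping to the word boundary nearest a given offset, is independent of the Sublime API and easy to unit-test in isolation. A minimal sketch of that core logic:

```python
import re

def nearest_word_boundary(text, offset):
    """Return the word-boundary offset closest to `offset`.

    Boundaries are the zero-width positions where a word starts or
    ends (the regex \\b positions); ties go to the earlier boundary.
    """
    boundaries = [m.start() for m in re.finditer(r"\b", text)] or [0]
    return min(boundaries, key=lambda b: abs(b - offset))

print(nearest_word_boundary("hello world", 3))
```

In a plugin, the mouse event's text point would be fed in as `offset` and the cursor moved to the returned position.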
https://forum.sublimetext.com/t/get-point-under-mouse/29801
Customizing an email template

BMC Remedyforce provides the following out-of-the-box email templates for each object: However, you can clone and customize these email templates to include information that is specific to your business requirements. BMC recommends that you first clone and then edit the out-of-the-box email templates. If you receive an error message when cloning a BMC Remedyforce out-of-the-box email template, see Resolving an error with cloning a BMC Remedyforce email template.

The out-of-the-box email template is available in your organization at Remedyforce Administration > Configure Email > Email Templates. You can customize the fields that are used in the templates to display the information of your choice in the email message. You can include the fields that are available for the objects. Select the object in the Related to Type field while creating an email template. For more information, see Creating an email template.

Use the API Names field to obtain the values of these fields in the email templates. To see a list of fields available for an object, navigate to Setup > Create > Objects or Remedyforce Administration > Manage Objects > Create and Edit Objects. The list of fields that you can use in an email template is displayed in the Custom Fields & Relationships section.

To allow the staff members to add additional information even after selecting an email template, you must customize the email template to include the Additional_email_information field in the body of the email message by using the following syntax:

{!relatedto.BMCServiceDesk__Additional_email_information__c}

To refer to the fields of a related object for which there exists a lookup or Master Detail data type in an object, use the following syntax:

{relatedto.<namespace of organization><Field Name of the lookup field>__r.<API Name of the field of the related object>}

The following table provides examples of the syntax that you can use for different objects.
To add the logo and signature to an email template

You can customize an email template by adding your company logo and signature to it.

- Navigate to Setup > Email > My Templates. If you have enabled the improved Setup user interface in your Salesforce organization, navigate to My Settings > Email > Email Templates. For more information about the improved Setup user interface, see the Salesforce Help.
- In the Email Template Name column, click the template to which you want to add the logo and email signature.
- Click the appropriate option for the type of template:
  - Visualforce template: Click Edit Template.
  - Custom or HTML template: Click Edit HTML version.

Add the URL of the image by using the appropriate HTML tags. For example, if you are adding an image named companylogo.gif, you must add the following code:

<a href="company URL"><img border="0" src="companylogo.gif" width="250" height="42" alt="company name"></img></a><br/><br/>

In this example, you would replace company URL, companylogo.gif, and company name with your company URL, the path to the image file, and your company name. You can replace the image link with an externally hosted image link or the link that you received when you uploaded a document to Salesforce. For more information about how to upload a logo image as a document in Salesforce, see To upload an image as a document in Salesforce.

Add the HTML content of the signature at the bottom of the email template content. For example:
You can modify the alignment, size, position of the text, font, images as required. To upload an image as a document in Salesforce Click the Documents tab. Note If the Documents tab is not visible, click the All Tabs tab (the plus icon) and click Documents. - In the Recent Documents section, click New. - In the 1. Enter details section, in the Document Name field, type a document name. The Document Unique Name field is populated with the value from the Document Name field, and the spaces in the Document Name field are replaced by underscores. If you want to use the file name, leave Document Name field blank. The file name appears automatically when you upload the file. - To make this image available in HTML email templates without requiring a Salesforce.com user name and password, select the Externally Available Image check box. - From the Folder list, select a folder. - (Optional) In the Description field, type a description of the image. - (Optional) In the Keywords field, type the keywords, that can be used as search criteria. - To browse to the location where the image that you want to upload is located, in the 2. Select the File section, click Browse. - Select the required image and click Open. - Click Save. - Right-click the image and click Copy Image URL or Copy Link Location depending on the type of browser. This URL is the unique image ID of the uploaded document. This URL is used to embed the image in the email template. For example, Resolving an error with cloning a BMC Remedyforce email template When cloning a BMC Remedyforce out-of-the-box email template, you might receive the following error message: Error: Invalid Data. Review all error messages below to correct your data. 
Component c:srminputdisplay does not exist (Related field: Markup)

To resolve this issue, edit the BMC Remedyforce email template and replace "c:" in the following string with "BMCServiceDesk:":

Original string: <c:SRMInputDisplay
Modified string: <BMCServiceDesk:SRMInputDisplay

After saving the modifications, you can clone the BMC Remedyforce email template. For information about cloning an email template, see the Salesforce Help.
https://docs.bmc.com/docs/remedyforce/201502/customizing-an-email-template-553328908.html
Created on 2008-12-20 16:42 by dingo, last changed 2009-01-28 21:47 by mark.dickinson. This issue is now closed.

I've been playing around with the newly released Python 3.0, and I'm a bit confused about the built-in round() function. To sum it up in a single example:

Python 3.0 (r30:67503, Dec 7 2008, 04:54:04) [GCC 4.3.2] on linux2
>>> round(25, -1)
30.0

I had expected the result to be the integer 20, because:

1. The documentation on built-in functions says: "values are rounded to the closest multiple of 10 to the power minus n; if two multiples are equally close, rounding is done toward the even choice"
2. Both help(round) and the documentation on built-in functions claim that, if two arguments are given, the return value will be of the same type as the first argument.

Is this unintended behaviour, or am I missing something?

Looks like a bug to me. I get the expected behaviour on my machine.

Python 3.0+ (release30-maint:67878, Dec 20 2008, 17:31:44)
[GCC 4.0.1 (Apple Inc. build 5490)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> round(25, -1)
20.0
>>> round(25.0, -1)
20.0

What system are you on?

Correction: it looks like two bugs to me. I think the wrong-return-type bug is rather serious, possibly a release blocker for 3.0.1. I'll see if I can produce a patch. The incorrect rounding (returning 30.0 instead of 20.0) is less serious, but still needs fixing. For a start, we should eliminate the use of pow() in float_round, and when the second argument to round is negative there should be a division by a power of 10 rather than a multiplication by the reciprocal of a power of 10.

Why do you say the return type is incorrect? It should, and does, return a float.

> Why do you say the return type is incorrect? It should, and does,
> return a float.
The documentation at: says, of round(x[, n]):

"""The return value is an integer if called with one argument, otherwise of the same type as x."""

On the other hand, PEP 3141 (which I didn't check before) seems to say that you're right: the return value should be an instance of Real. So maybe this is just a doc bug?

I have to admit that I don't understand the motivation for round(int, n) returning a float, given that the value will always be integral. It seems that this makes the two-argument version of round less useful to someone who's trying to avoid floats, perhaps working with ints and Decimals, or ints and Rationals, or just implementing fixed-point arithmetic with scaled ints. But given that a behaviour change would be backwards incompatible, and the current behaviour is supported by at least one documentation source, it seems easiest to call this a doc bug. Jeffrey, any opinion on this?

I'm using Arch Linux (32 bit) with kernel 2.6.25 and glibc 2.8 on a Core 2 Duo. The described behaviour occurs in a precompiled binary of Python 3.0, but also in versions I've compiled just now, which includes Python 3.0+ (release30-maint:67879). As far as the rounding itself is concerned, round(x, n) seems to work as documented with n >= 0 on my system, while giving the same results as the Python 2.6 version of round() for n < 0.

Making that change would be slightly backwards-incompatible, but IIRC, int is supposed to be usable everywhere float is, so there should be very few programs it would break. So my vote's to change it, for 3.0.1 if possible. The documentation is a little too strict on __round__ implementations by requiring round(x, y) to return the same type as x, but I think it should still encourage __round__ to return the same type. And, uh, oops for writing those bugs.

Is your guess for round(25.0, -1) == 30.0 that 25.0*(1/10.0) is slightly more than 2.5 on some systems?

The code that hands off long.__round__ to float.__round__ is a train wreck.
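The use case mentioned above (staying exact while mixing ints and Decimals) can be served by routing the rounding through the decimal module instead of floats. round_via_decimal below is an illustrative helper of mine, not an existing API:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_via_decimal(n, ndigits):
    """Round an int to ndigits (possibly negative) exactly, half to even.

    No float is ever created, so even very large ints round correctly
    (widen decimal.getcontext().prec for inputs beyond ~28 digits).
    """
    exp = Decimal(1).scaleb(-ndigits)   # e.g. ndigits=-1 -> Decimal('1E+1')
    return int(Decimal(n).quantize(exp, rounding=ROUND_HALF_EVEN))
```

Because Decimal arithmetic is exact here, the ties-to-even behaviour promised by the documentation actually holds, and large inputs keep their low digits.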
The intended result can be computed directly and exactly for all sizes of long.

> Is your guess for round(25.0, -1) == 30.0 that 25.0*(1/10.0) is slightly more
> than 2.5 on some systems?

Yes, something like that. I don't think it's feasible to make round perfectly correct (for floats) in all cases without implementing some amount of multiple precision code. But I think we can and should rewrite the code in such a way that it has a pretty good chance of returning correct results in common cases. Small powers of 10 can be computed exactly (up to 10**22, I think), in contrast to reciprocals of powers of 10.

I'll work up a patch for round(int, int) -> int, so that at least we have the option of fixing this for 3.0.1 *if* there's general agreement that that's the right way to go. Seems like a python-dev discussion might be necessary to determine that.

In that case, the doc string should be fixed:

round(number[, ndigits]) -> floating point number

unless "floating point number" is supposed to include type int (which I would find fairly confusing).

Wrt. this issue: PEP 3141 specified that round() rounds to even for floats, leaving it explicitly unspecified how ints get rounded. If the result is to be an int, the implementation must not go through floats. It's a problem not only in the border cases, but also for large numbers (which can't get represented correctly even remotely):

py> int(round(10**20+324678,-3))
100000000000000327680
py> int(round(324678,-3))
325000

IMO, having long.__round__ return a float is a landmine, guaranteed to cause someone problems someday (problems that are hard to find).

Here's a first attempt at a patch for round(int, int) -> int.

Correct me if I'm wrong, but I think computing the quotient is not necessary; the remainder is sufficient:

def round(n, i):
    if i >= 0:
        return n
    i = 10**(-i)
    r = n % (2*i)
    if r < i:
        return n - r
    return n - r + 2*i

Martin, that gives some answers like round(51, -2) --> 0 instead of 100.
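The large-number failure quoted above comes from squeezing the int through a C double on the way: a double near 1e20 is only accurate to multiples of 2**14 = 16384, which is exactly where the stray 327680 (= 20 * 16384) comes from. A small demonstration:

```python
n = 10**20 + 324678
f = float(n)                 # nearest representable double to n
print(int(f))                # 100000000000000327680: the low digits are gone
print(int(f) - 10**20)       # 327680 == 20 * 2**14, the double spacing near 1e20
```

Any rounding of ints implemented via float conversion inherits this loss, independent of the rounding rule itself.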
Will review the patch later this evening. Thanks for the submission.

> Martin, that gives some answers like round(51, -2) --> 0 instead of 100.

I see. Here is a version that fixes that.

def round(n, i):
    i = 10**(-i)
    r = n % (2*i)
    o = i/2
    n -= r
    if r <= o:
        return n
    elif r < 3*o:
        return n + i
    else:
        return n + 2*i

However, I now see that it is pointless not to use divrem, since % computes the quotient as a side effect.

Updated patch: fix test_builtin. (Rest of the patch unchanged.)

Hi Mark, I think there's an overflow for ndigits that predates your patch:

>>> round(2, -2**31 +1)
2
>>> round(2, -2**31 +2)
nan

(it looks like these lines above make 2.6 hang :/)

Now, I'm getting a segfault in 3.0 when Ctrl + C-ing during a long running round:

Python 3.1a0 (py3k:67893M, Dec 21 2008, 10:38:30)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> round(2, -2**31 + 1)
2
>>> round(2, -2**31 + 2)  # Press Ctrl + C
Segmentation fault

(backtrace below)

Also, maybe "round(2, -2**31 + 2)" taking long is a bug of its own?

The crash with backtrace:

Starting program: /home/ajaksu/py3k/python
[Thread debugging using libthread_db enabled]
Python 3.1a0 (py3k:67893M, Dec 21 2008, 11:08:29)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
[New Thread 0xb7d2e8c0 (LWP 14925)]
>>> round(2, -2**31 + 1)
2
>>>
>>> round(2, -2**31 + 2)  # Press Ctrl + C

Program received signal SIGSEGV, Segmentation fault.
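Martin's second sketch above writes o = i/2, which yields a float under Python 3 division; a runnable variant with integer arithmetic throughout is below (round_int is my name for an illustration of the round-half-to-even rule, not the patch that was actually committed):

```python
def round_int(n, ndigits):
    """Round int n to the nearest multiple of 10**-ndigits, half to even,
    without ever touching floats."""
    if ndigits >= 0:
        return n
    q = 10 ** (-ndigits)   # target multiple
    half = q // 2
    r = n % (2 * q)        # offset into [n-r, n-r+2q); n-r is an even multiple of q
    n -= r
    if r <= half:          # nearer the even multiple below (ties go even)
        return n
    elif r < 3 * half:     # strictly nearer the odd multiple n+q
        return n + q
    else:                  # nearer (or tied with) the even multiple above
        return n + 2 * q
```

Working modulo 2*q keeps track of whether the nearest multiple is "even" or "odd", which is what lets ties resolve toward the even choice in pure integer arithmetic.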
[Switching to Thread 0xb7d2e8c0 (LWP 14925)]
_PyUnicode_New (length=10) at Objects/unicodeobject.c:359
359  unicode->str[0] = 0;
(gdb) bt
#0  _PyUnicode_New (length=10) at Objects/unicodeobject.c:359
#1  0x080708a5 in PyUnicodeUCS2_DecodeUTF8Stateful (s=0x813d8dc "last_value", size=10, errors=0x0, consumed=0x0) at Objects/unicodeobject.c:2022
#2  0x08072e22 in PyUnicodeUCS2_FromStringAndSize (u=0x813d8dc "last_value", size=10) at Objects/unicodeobject.c:2000
#3  0x08072f82 in PyUnicodeUCS2_FromString (u=0x813d8dc "last_value") at Objects/unicodeobject.c:557
#4  0x0810ddf7 in PyDict_SetItemString (v=0xb7b21714, key=0x813d8dc "last_value", item=0xb7a4e43c) at Objects/dictobject.c:2088
#5  0x080b5fb1 in PySys_SetObject (name=0x813d8dc "last_value", v=0xa) at Python/sysmodule.c:67
#6  0x080afedb in PyErr_PrintEx (set_sys_last_vars=1) at Python/pythonrun.c:1294
#7  0x080b063c in PyRun_InteractiveOneFlags (fp=0xb7e7a440, filename=0x813f509 "<stdin>", flags=0xbf84bd34) at Python/pythonrun.c:1189
#8  0x080b0816 in PyRun_InteractiveLoopFlags (fp=0xb7e7a440, filename=0x813f509 "<stdin>", flags=0xbf84bd34) at Python/pythonrun.c:909
#9  0x080b0fa2 in PyRun_AnyFileExFlags (fp=0xb7e7a440, filename=0x813f509 "<stdin>", closeit=0, flags=0xbf84bd34) at Python/pythonrun.c:878
#10 0x080bc49a in Py_Main (argc=0, argv=0x8192008) at Modules/main.c:611
#11 0x0805a1dc in main (argc=1, argv=0xbf84de24) at ./Modules/python.c:70

I hope this helps :)
Daniel

> >>> round(2, -2**31 + 2)  # Press Ctrl + C
> Segmentation fault
> (backtrace below)

Thanks, Daniel. It looks like I messed up the refcounting in the error-handling section of the code. I'll fix this. I don't think the hang itself should be considered a bug, any more than the hang from "10**(2**31-1)" is a bug.

Cause of segfault was doing Py_XDECREF on a pointer that hadn't been initialised to NULL. Here's a fixed patch.

I still get the instant result:

>>> round(2, -2**31+1)
2

which is a little odd.
It's the correct result, but I can't see how it gets there: under the current algorithm, there should be a 10**(2**31-1) happening somewhere, and that would take a *lot* of time and memory. Will investigate.

Aha. The special result for round(x, 1-2**31) has nothing to do with this particular patch. It's a consequence of:

#define UNDEF_NDIGITS (-0x7fffffff) /* Unlikely ndigits value */

in bltinmodule.c. The same behaviour results for all types, not just ints.

Mark Dickinson <report@bugs.python.org> wrote:
> I don't think the hang itself should be considered a bug, any more
> than the hang from "10**(2**31-1)" is a bug.

Well, besides the fact that you can stop "10**(2**31-1)" with Ctrl+C but not round(2, -2**31+1), the round case may special-case ndigits > number to avoid the slow pow(10, x).

> >>> round(2, -2**31+1)
> 2
>
> which is a little odd. It's the correct result, but I can't see how

Is it correct? The answer for 0 > ndigits > -2**31+1 was nan before the patch, 0 after. Given that "round(2, 2**31)" throws an OverflowError, iff this is wrong, should it OverflowError too?

> it gets there: under the current algorithm, there should be a
> 10**(2**31-1) happening somewhere, and that would take a *lot* of time
> and memory. Will investigate.

That should be optimizable for ndigits > number, and perhaps log10(number) < k * ndigits (for large ndigits), right? But I don't think it's a real-world use case, so dropping this idea for 2.6.

> Aha. The special result for round(x, 1-2**31) has nothing to do
> with this particular patch. It's a consequence of:

Yep, "predates your patch" as I said :)

> [Me]
> which is a little odd. It's the correct result, but I can't see how

[Daniel]
> Is it correct?

No. :-) It should be 0, as you say.

> Given that "round(2, 2**31)" throws an OverflowError

I think this is wrong, too. It should be 2. It's another consequence of the code in bltinmodule.c.
The builtin_round function seems unnecessarily complicated: it converts the second argument from a Python object to a C int, then converts it back again before calling the appropriate __round__ method. Then the first thing the __round__ method typically does for built-in types is to convert to a C int again. As far as I can tell, the first two conversions are unnecessary.

Here's an updated version of the patch that does away with the unnecessary conversions, along with the UNDEF_NDIGITS hack. All tests still pass on my machine, and with this patch I get the results I'd expect:

>>> round(2, 2**31)
2
>>> round(2, 2**100)
2
>>> round(2, -2**100)
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyboardInterrupt
>>> round(2, 1-2**31)
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyboardInterrupt

> That should be optimizable for ndigits > number, and perhaps
> log10(number) < k * ndigits (for large ndigits), right? But I don't
> think it's a real-world use case, so dropping this idea for 2.6.

Agreed. I don't think this optimization is worth it.

I also think that the documentation should be improved for round(number[, ndigits]). The doc/help says ndigits can be negative, but does not really say what it means to be negative. It can be confusing to anyone. Take the following example:

>>> round(26.5, 1)
26.5
>>> round(26.5, -1)
30.0

Clearer title.

Some minor modifications to the last patch:

- fix round docstring: it now reads "round(number[, ndigits]) -> number" instead of "round(number[, ndigits]) -> floating-point number"
- add Misc/NEWS entry
- add extra tests for round(x, n) with n huge and positive

Someone ought to ping Guido on this. He may have some strong thoughts on the signature (having it sometimes return ints and sometimes floats). But in this case, accuracy may be the more important consideration.

Guido, do you have any thoughts on this?
Basically, it's a choice between giving round() a weird signature (sometimes returning floats and sometimes int/longs) versus having accurate roundings of integers (which become unrepresentable when divided by powers of 10.0).

I think it's fine if it returns an int iff the first arg is an int. In other languages this would be overloaded as follows:

round(int) -> int
round(float) -> float
round(int, int) -> int
round(float, int) -> float

I'd prefer round(x, positive_integer) to return float. Returning int is a bit of a lie, except that the decimal module is available to avoid this sort of lie. For non-positive integer roundings I'd like an integer return. In my opinion, we'd benefit from this definition of round:

import numbers

def round(a, p=0, base=10):
    '''
    >>> round(147,-1,5)
    145.0
    >>> round(143,-1,5)
    145.0
    >>> round(142,-1,5)
    140.0
    >>> round(12.345,1,2)
    12.5
    >>> round(12.345,2,2)
    12.25
    >>> round(12.345,-2,2)
    12
    >>> round(12.345,-3,2)
    16
    '''
    # consider using sign transfer for negative a
    if base < 1:
        raise ValueError('base too confusing')
    require_integral_output = (
        (p < 1) and
        isinstance(base, numbers.Integral) and
        isinstance(p, numbers.Integral))
    b = base**p
    result = int(a*b+0.5)/b
    if require_integral_output:
        result = int(0.5+result)
    return result

Well, that would leave a function whose return *type* depends on the *value* of one of the arguments, which I find very ugly. I don't know why you think rounding an int to X (X > 0) digits after the decimal point should return a float -- it's not like the int has any digits behind the point, so nothing is gained.

Why would it be a lie? The value of one of the arguments controls how many digits there are. Certainly if you rounded $10 to the nearest cent you'd expect $10.00. Thus round(10, 2) should be 10.00. Without using the decimal module, the best we can do is produce 10.0. I'd apply a similar argument to convince you that the return value should be integral for a negative "number of digits". Hark! This is python.
We can take this correct and beautiful approach. We are not constrained by function signatures of C++ or FORTRAN 90.

I'm sorry, but I don't see a significant difference between $10 and $10.00. If you want to display a certain number of digits, use "%.2f" % x. And trust me, I'm not doing this for Fortran's sake.

marketdickinson> (2) Accuracy

I see int/float like bytes/characters: it's not a good idea to mix them. If you use float, you know that you lose precision (digits) after some operations, whereas I suppose the operations on int are always exact. I like round(int, int) -> int (same input/output types). To get a float, use round(float(int), int) -> float.

Apologies for the poor formatting in the last comment. Bad cut-and-paste job.

One more reason:

(4) "In the face of ambiguity, refuse the temptation to guess."

Why should round(int, int) be float, rather than Decimal, or Fraction? This was the one argument against the integer division change that I found somewhat compelling. But of course there's a big difference: 1/2 had to have *some* type, and it couldn't be an integer. In contrast, given that round(34, n) is always integral, there's an obvious choice for the return type. I'll shut up now.

It's settled then. Input type dictates output type. No dependency on the actual values.

Committed in r69068 (py3k), r69069 (release30-maint). The original bug with round(float, n) (loss of accuracy arising from intermediate floating-point rounding errors) is still present; I think further discussion of that can go into issue 1869 (which should probably have its priority upgraded).
http://bugs.python.org/issue4707
Achieving Resilient and Efficient Load Balancing in DHT-based P2P Systems

Di Wu, Ye Tian and Kam-Wing Ng
Department of Computer Science & Engineering
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
{dwu, ytian,

Abstract

In DHT-based P2P systems, the technique of virtual server is widely used to achieve load balance. To efficiently handle the workload skewness, virtual servers are allowed to migrate between nodes. Among existing migration-based load balancing strategies, there are two main categories: (1) Rendezvous Directory Strategy (RDS) and (2) Independent Searching Strategy (ISS). However, none of them can achieve resilience and efficiency at the same time. In this paper, we propose a Gossip Dissemination Strategy (GDS) for load balancing in DHT systems, which attempts to achieve the benefits of both RDS and ISS. GDS doesn't rely on a few static rendezvous directories to perform load balancing. Instead, load information is disseminated within the formed groups via a gossip protocol, and each peer has enough information to act as the rendezvous directory and perform load balancing within its group. Besides intra-group balancing, inter-group balancing and emergent balancing are also supported by GDS. To further improve system resilience, the position of the rendezvous directory is randomized in each round. For a better understanding, we also perform analytical studies on GDS in terms of its scalability and efficiency under churn. Finally, the effectiveness of GDS is evaluated by extensive simulation under different workload and churn levels.

1 Introduction

In DHT (Distributed Hash Table)-based P2P systems, previous work [10] has pointed out that the uniformity of hash functions can't guarantee a perfect balance between nodes, and there exists an O(log N) imbalance factor in the number of objects stored at a node.
What is more, due to node heterogeneity or application semantics, it is possible for some nodes to be heavily loaded while others are lightly loaded. Such unfair load distribution among peers will cause performance degradation, and also provide disincentives to participating peers. To date, the importance of load balancing has motivated a number of proposals, e.g., [1], [7], [8], [3], [15], etc. Many of them are based on the concept of virtual server [10]. By shedding an appropriate number of virtual servers from heavy nodes to light nodes, load balancing can be achieved.

According to the difference in load information management, existing migration-based approaches can be categorized into two representative strategies: (1) Rendezvous Directory Strategy (RDS) and (2) Independent Searching Strategy (ISS). In RDS, the load information of each peer is periodically published to a few fixed rendezvous directories, which are responsible for scheduling the load reassignment to achieve load balance. In ISS, a node doesn't publish its load information anywhere else. To achieve load balancing, the nodes should perform searching independently to find other nodes with inverse load characteristics, and then migrate load from the heavy nodes to the light nodes.

Due to information centralization, RDS can conduct the best load reassignment and be much more efficient than ISS. Existing RDS schemes (e.g., [3], [15]) mainly focus on the scalability problem, while little attention is paid to the resilience issue. In their approaches, the positions of rendezvous directories are static and known publicly to all the peers. This makes RDS vulnerable to node-targeted attacks. In case the rendezvous directory is occupied by malicious peers or overwhelmed by DoS traffic, the service of load balancing is halted. On the contrary, ISS is more resilient to node-targeted attacks, for there exists no central entity in the system. But its efficiency greatly depends on the searching scope.
A big scope often incurs huge traffic and becomes unscalable in a large system. In case of a small scope, it is inefficient to achieve the system-wide load balance. Neither RDS nor ISS can achieve resilience and efficiency at the same time. In this paper, we propose a new load balancing strategy for DHT-based P2P systems, called Gossip Dissemination Strategy (GDS), which is scalable and achieves both the
But different from, our proposed is also resilient and scalable. The remainder of this paper is structured as follows. We will first introduce the related work in Sec. 2. The detailed design of Gossip Dissemination Strategy () is presented in Sec. 3. In Sec. 4, we analyze the scalability of and the impact of churn to. In Sec. 5, extensive simulation is performed to verify the performance of. Finally, Sec. 6 summarizes the whole paper. 2 Related Work The topic of load balancing has been well studied in the field of distributed systems, while the characteristics of P2P systems pose new challenges for system designers. In this section, we briefly introduce existing approaches to load balancing in DHT-based P2P systems. In DHT-based P2P systems, much research work is based on namespace balancing. Namespace balancing is trying to balance the load across nodes by ensuring that each node is responsible for a balanced namespace. It is valid only under the assumption of uniform workload distribution and uniform node capacity. The balancing action can be invoked at the time of node join. Before joining the system, a node samples single/multiple points and selects the largest zone to split (e.g, [13], [7], etc). For better balancing effect, Virtual Servers were introduced in [10], by letting each node host multiple virtual servers, the O(logN) imbalance factor between nodes can be mitigated. If node heterogeneity is considered, we can allocate virtual servers proportional to node capacity (e.g., CFS [2]). In case that a node is overloaded, it simply removes some of its virtual servers. Such simple deletion will cause the problem of load thrashing, for the removed virtual servers may make other nodes overloaded. Pure namespace balancing like the above approaches doesn t perform load migration, thus cannot handle workload skewness well. 
A more general approach is the migration-based approach, which is applicable to various kinds of scenarios and able to handle workload skewness. A number of migration-based approaches have been proposed to date (e.g., [8], [3], [15], etc). Most of them are based on the concept of virtual servers. In [8], Rao et al. propose three simple load balancing schemes: one-to-one, one-to-many and many-to-many. Among them, one-to-many and many-to-many belong to the category, while oneto-one belongs to the category. To enable emergent load balancing, Godfrey et al. [3] made a combination of one-to-many and many-to-many, and use them in different scenarios. The scheme proposed in [15] also belongs to the category, but its rendezvous directory is organized as a distributed k-ary tree embedded in the DHT. The effectiveness of and is compared analytically under different scenarios in [14]. In this paper, our proposed Gossip Dissemination Strategy () differs from previous work in that, we take both resilience and efficiency into account. By disseminating the load information and randomizing the rendezvous directory, we can exploit the benefits of both and. Besides, there are some other approaches that try to realize load balancing in P2P systems, such as object balancing based on Power of two choices (e.g., [1], [6]), etc. Nevertheless, all these techniques can be regarded as complementary techniques and may be combined to provide a better load-balancing effect under certain scenarios. 3 System Design In this section, we propose a Gossip Dissemination Strategy () for load balancing in DHT-based P2P systems. The objective of our design is to exploit the efficiency of while improving the resilience and scalability at the 116 3 same time. The basic idea of is to disperse the responsibility of load balancing to all the peers, instead of limiting to only a few ones. 
At the same time, in order to avoid the inefficiency caused by independent searching, load information is disseminated among peers by gossip-like protocols. In each load-balancing round, peers are selected randomly as the rendezvous directories to schedule the reassignments of virtual servers for better balance. In the following, we will introduce the system design in details. 3.1 Load Information Dissemination and Aggregation is built on top of ring-like DHTs, such as Pastry, Chord, etc. Based on the system size, one or more load balancing groups are formed among peers. For systems with only a few thousand peers, only one group is formed; but for systems with over tens of thousands of peers, peers form into multiple groups, each of which corresponds to a continuous region with equal size in the ring. All the peers within the same group share the same prefix, which is referred to as the GroupID. In, any group gossip protocol can be used for load information dissemination within the group. Here, to reduce the message traffic, a gossip tree embedded in the DHT is used for information dissemination. This tree doesn t require explicit maintenance and just expands based on local information. The node who wants to disseminate its load information is the root of the tree. It sends the information to all the peers with the same GroupID in its routing table. Subsequently, when an intermediate node j receives the message from a node i (the one that has already received the information), j only forwards the message to every node k in its routing table satisfying that prefix(i, j) is a prefix of prefix(j, k). Here, prefix(x, y) is defined as the maximum common prefix between node x and node y. The load information of a node i to be published includes: (1) the node s capacity: C i ; (2) the load vector of all the virtual servers hosted by the node: L i =< L i1,..., L im >, where L ik refers the load of the k-th virtual server i k hosted by node i. 
Denote V (i) to be the set of virtual servers hosted by the node i, i k V (i), k =1..m; (3) the node s IP address. In every update period T u,every node should publish its load information once. For a small system with only one group G, the load information of each peer can be delivered quickly to all members within a short period. After that, each peer has the load information of all the other peers, and knows the total capacity and the total load of the whole system by C = i G C i and L = i G i k V (i) L i k respectively. Thus, the system utilization µ can be given by µ = L/C. i A node i s utilization is given by µ i = k V (i) Li k C i. Based on the ratio between µ i and µ, a node can be classified into three types: (1) Heavy node, if µ i >µ; (2) Light node, if µ i <µ; (3) Normal node, if µ i = µ. In case of a large system, there exist multiple groups. Each node i should run at least two virtual servers. Among them, one virtual server i 1 is with the identifier id 1 generated by hashing the node s IP; and another virtual server i 2 is with the identifier id 2 generated by hashing id 1. i 1 is called the primary server of node i; and i 2 is called the secondary server of node i (see Fig. 1). Each node only publishes load information via its primary server. During load reassignment, the primary and secondary server can t be moved to other nodes, but their size can be changed. Given two arbitrary groups S and T, the probability that there exists at least one node i whose primary server i 1 is in group S and secondary server i 2 belongs to group T is given by 1 e c [12], where c is the ratio between the group size and the number of groups. With c =5, the probability is as high as It implies that, for any group, there exists at least one secondary server of its group members in any other group with high probability. It provides us with an opportunity to aggregate load information of other groups and calculate the system utilization. 
The information aggregation is important to achieve the system-wide load balancing. Group S i1 i2 Group T Figure 1. Primary server and secondary server. The aggregation process is as follows: through gossiping average load level and the node number of group T within group S by piggybacking with its own load information. As the secondary servers of the group members in group S exist in almost all other groups, after each node disseminates the group load status, every member in the group S learns the load status of all other groups, and is 117 4 able to estimate the system-wide utilization independently. 3.2 Intra-Group Load Balancing After load information dissemination, every member within a group holds the load information of the full group and the information of system-wide utilization. Each of them has the capability to act as the rendezvous directory to classify the nodes and schedule load reassignment in the group. However, in order to be resilient to the node-targeted attacks, it is better to randomize the position of the rendezvous directory in each load-balancing round. Although some sophisticated randomization algorithms and security mechanisms can be used to make the position selection more secure, we adopt a simple but effective approach. It is based on the assumption that, a node-targeted attack is often costly and the probability to compromise a node within a short period is small. The idea of our approach is as follows: we associate each round of load balancing with a sequence number seq lb, which is known by all group members. seq lb will be increased by 1 after every load balancing period.. After completion of load balancing, the node increases seq lb by one and disseminates the value to all the group members for consistency checking of seq lb. In the next round, another node will be chosen as the rendezvous directory due to the randomness of hashing function. 
Although the position of the rendezvous directory is still publicly known, it becomes random and dynamic. Even if the current rendezvous directory is compromised, the load-balancing service can quickly be recovered. As there is no fixed rendezvous directory in the group, GDS is more resilient than RDS to node-targeted attacks. The problem of computing an optimal reassignment of virtual servers between heavy nodes and light nodes is NP-complete; therefore, a simple greedy matching algorithm similar to [3] is adopted here to perform virtual server reassignments.

3.3 Inter-Group Load Balancing

Inter-group load balancing is performed only when there exist multiple groups in the system. Since the distribution of light nodes and heavy nodes may differ between groups, it is possible that even after intra-group load balancing, some groups still have many light nodes while other groups have many heavy nodes. To handle this situation, inter-group load balancing is allowed. In a group S with many heavy nodes, the average utilization is higher than the average utilization of the whole system. Suppose that node i is the current rendezvous directory of group S. Due to load information dissemination, node i has the full load information of its own group and, with high probability, the load status of all other groups. Node i can select one group whose free capacity is slightly bigger than the required amount to perform inter-group load balancing. The process is as follows: given that inter-group load balancing is to be performed between group S and group T, the rendezvous directory i of group S first finds another node j within group S whose secondary server exists in group T, and then transfers the responsibility to j. Since j has the full load information of both group S and group T, it can perform the best-fit load reassignment among the nodes of the two groups.
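The text only names a greedy matching similar to [3] without reproducing it; the sketch below is our illustration of one best-fit style pass, under simplifying assumptions (each move transfers a whole virtual server, and a destination is acceptable only if the move keeps it at or below the target utilization mu):

```python
def greedy_reassign(nodes, mu):
    """One greedy pass moving virtual servers from heavy to light nodes.

    nodes: dict node -> {"cap": C_i, "vs": [L_ik, ...]}, mutated in place.
    mu: target (system) utilization.  Returns a list of moves
    (server_load, source, destination).  Illustrative sketch only;
    the real algorithm in [3] is more involved.
    """
    moves = []
    load = lambda n: sum(nodes[n]["vs"])
    util = lambda n: load(n) / nodes[n]["cap"]
    for h in sorted(nodes, key=util, reverse=True):     # heaviest first
        if util(h) <= mu:
            break                                        # no heavy nodes left
        for vs in sorted(nodes[h]["vs"], reverse=True):  # biggest servers first
            if util(h) <= mu:
                break
            # best-fit: destination that ends up closest to mu without exceeding it
            cands = [n for n in nodes
                     if n != h and (load(n) + vs) / nodes[n]["cap"] <= mu]
            if not cands:
                continue                                 # try a smaller server
            dst = min(cands,
                      key=lambda n: mu - (load(n) + vs) / nodes[n]["cap"])
            nodes[h]["vs"].remove(vs)
            nodes[dst]["vs"].append(vs)
            moves.append((vs, h, dst))
    return moves
```

The best-fit choice (smallest leftover slack at the destination) mirrors the "free capacity slightly bigger than the required amount" criterion used for selecting a partner group.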
3.4 Emergent Load Balancing

Combined with the mechanisms above, GDS can achieve a system-wide balanced state.

4 Performance Analysis

4.1 Scalability

The main concern is the scalability of GDS, which determines its applicability in P2P systems. For a small system with a single group, g = N, and the message traffic W for a peer per interval is W = O(g/T_u) = O(N). For example, given the update period T_u = 60 seconds, N = 1,000 and a message size of 30 bytes, W is about 0.5k bytes/sec. If a node hosts multiple virtual servers, the traffic increases proportionally to the number of virtual servers. However, the traffic is still rather low and tolerable for normal users. Even for a powerful node with 20 virtual servers, the message traffic is about 10k bytes/sec. To limit the traffic towards a node, each node can maintain a threshold number of virtual servers.

In a large system, multiple groups are formed. Define c to be the ratio between the number of nodes in a group and the number of groups n; then g = cn and, since N = gn, we have g = sqrt(cN). In the case that c << N, the message traffic for a peer per interval is given by W = O(g/T_u) = O(sqrt(N)). Given the update period T_u = 60 seconds, N = 250,000, c = 4 (c << N) and a message size of 30 bytes, W is about 0.5k bytes/sec, which is comparable to the small system. In our approach, c is an important tuning parameter. In order to guarantee efficient information dissemination between groups, c cannot be too small, and normally we set c >= 5. As c = g^2/N, we can guarantee this requirement by keeping the group size g bigger than sqrt(5N). GDS doesn't require global cooperation between nodes: each node knows exactly its own GroupID, and if the GroupID is null, there exists only one group in the system.

4.2 Efficiency under Churn

The phenomenon of peer dynamics is called churn, and it may cause the failure of load reassignments. Some peers may leave right after publishing their load information; if the rendezvous directory doesn't know about their departure, it still treats them as live peers. What we want to know is to what degree churn impacts the efficiency of GDS, and how to tune the parameters to mitigate its impact.
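The bandwidth estimates in the scalability analysis follow directly from W = (g x message size) / T_u, with g = N for a single group and g = sqrt(cN) for multiple groups; a quick numeric check reproduces the two 0.5 kB/s figures quoted above:

```python
import math

def overhead_bytes_per_sec(n_peers, t_u, msg_bytes, c=None):
    """Per-peer gossip traffic W.

    Single group (c is None): g = n_peers, so W = O(N).
    Multiple groups: g = sqrt(c * n_peers), so W = O(sqrt(N)).
    t_u is the update period in seconds, msg_bytes the message size.
    """
    g = n_peers if c is None else math.sqrt(c * n_peers)
    return g * msg_bytes / t_u

# Small system: N = 1,000, T_u = 60 s, 30-byte messages.
small = overhead_bytes_per_sec(1000, 60, 30)
# Large system: N = 250,000, c = 4, so g = sqrt(1,000,000) = 1,000 again.
large = overhead_bytes_per_sec(250_000, 60, 30, c=4)
```

Both cases come out at 500 bytes/sec, i.e. about 0.5k bytes/sec, as the text states.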
Based on the measurement results from real P2P systems [9], we adopt the Pareto lifetime distribution in our analysis, whose CDF (Cumulative Distribution Function) is given by F(x) = 1 - (1 + x/b)^(-a), where a represents the heavy-tailed degree and b is the scale parameter.

Theorem 1. Given the update period T_u, the probability that a peer in the peer set for balancing at the rendezvous directory is a real live peer is

p_real = 1 / [ ((a-1)/b) * T_u + (b / ((2-a) * T_u)) * ((1 + T_u/b)^(2-a) - 1) ]

for the Pareto lifetime distribution with a > 2, b > 0.

Proof. Denote t_i to be the time of the i-th round of load balancing. Within the latest update period [t_i - T_u, t_i], we denote the set of newly joining peers as X. For the peers that leave the system within this period, based on their live time and departure time, we divide them into three sets: (1) Y_1, the set of nodes that are alive at time t_i - T_u but leave the system without making any update in [t_i - T_u, t_i]; (2) Y_2, the set of nodes that are alive at time t_i - T_u and depart before t_i, but make one update before departing; (3) Y_3, the set of nodes that join the system during [t_i - T_u, t_i] but depart again before t_i. During the load balancing action at t_i, the nodes in Y_1 are not considered, for they make no update within [t_i - T_u, t_i]. But for the nodes in Y_2 and Y_3, the rendezvous directory has no knowledge of their departure and still thinks they are alive; the nodes in Y_2 and Y_3 are the faked live peers. As our analysis is under the steady state, the number of arriving peers equals the number of departing peers, thus |X| = |Y_1| + |Y_2| + |Y_3|. Let N be the number of live peers at time t_i; then the number of peers in the peer set for balancing at time t_i is N' = N + |X| - |Y_1| = N + |Y_2| + |Y_3|. At time t_i, among the N' peers, only N peers are real live nodes, so p_real is given by the ratio between N and N'. In order to compute p_real, we need to compute |X| and |Y_1| first. Let lambda be the arrival rate of peers under the steady state.
By applying Little's Law [4], we have lambda = (a-1)/b for the Pareto lifetime distribution. Then the number of newly arriving peers within [t_i - T_u, t_i] is given by |X| = N * lambda * T_u. After getting |X|, we proceed to calculate |Y_1|. Supposing that a peer is alive at time t_i - T_u, then according to [5], given the CDF F(x) of the peer lifetime distribution, the CDF of its residual lifetime R is given by F_R(x) = P(R < x) = (1/E[L]) * Integral_0^x (1 - F(z)) dz. For the Pareto lifetime distribution, we have F_R(x) = 1 - (1 + x/b)^(1-a). Due to randomness, the update time T of a peer since t_i - T_u follows a uniform distribution on [0, T_u], with PDF f(x) = 1/T_u. Define p_Y1 as the probability that the peer's residual lifetime R is less than its first update time T; then p_Y1 = P(R < T) = Integral_0^{T_u} P(R < x) f(x) dx = (1/T_u) * Integral_0^{T_u} F_R(x) dx. Under the Pareto lifetime distribution, we have p_Y1 = 1 - (b / ((2-a) * T_u)) * ((1 + T_u/b)^(2-a) - 1), a > 2. From the above, we deduce |Y_1| = N * p_Y1 and get N' accordingly. After simple reduction, we deduce the theorem from p_real = N/N'.

To get a better understanding, we take the real P2P system Kad as a case study. According to the measurement results in [11], the peer lifetime distribution in Kad follows a heavy-tailed distribution with an expectation of 2.71 hours, which can be approximated by a Pareto distribution with a = 2.1, b = 3.0. Based on Theorem 1, we get the numerical results of p_real under different update periods, as shown in Table 1.

Table 1. Impact of churn.

It can be observed that, by tuning the update period to be smaller, we can keep p_real very high. In a group with 1000 nodes, by tuning T_u = 1/60 hour (i.e., 60 seconds), there are fewer than 4 faked live peers in the balancing set on average. In a real environment, if the update period is properly configured, the impact of churn is not a big issue for GDS.
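The closed form of Theorem 1 is easy to evaluate numerically; a few lines (our illustration) reproduce the Kad case, where a 60-second update period leaves only a handful of faked live peers per 1000-node group:

```python
def p_real(t_u, alpha, beta):
    """Fraction of real live peers in the balancing set (Theorem 1).

    t_u: update period, in the same time unit as beta (hours here).
    Valid for Pareto lifetimes with alpha > 2, beta > 0.
    """
    term1 = (alpha - 1) / beta * t_u
    term2 = beta / ((2 - alpha) * t_u) * ((1 + t_u / beta) ** (2 - alpha) - 1)
    return 1 / (term1 + term2)

# Kad approximation from [11]: alpha = 2.1, beta = 3.0, T_u = 1/60 hour (60 s).
kad = p_real(1 / 60, 2.1, 3.0)
# Expected number of faked live peers in a 1000-node group.
faked = 1000 * (1 - kad)
```

For these parameters p_real comes out just under 0.997, i.e. about 3 faked live peers per 1000, consistent with the "fewer than 4" figure in the text.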
5 Experimental Evaluation

5.1 Methodology

We evaluate the performance of GDS on top of a Chord simulator, which is enhanced to support virtual servers. The whole system consists of 8192 nodes, and the group thresholds g_max and g_min are set to 1024 and 256 respectively. Synthetic traces are generated to simulate system churn. The peer lifetime satisfies a heavy-tailed Pareto distribution with a = 2.1, which is close to real P2P systems (e.g., [9], [11]); b is adjusted to simulate different churn levels. In the experiment, the node capacity follows a Gnutella-like distribution, and the average node capacity is able to serve 100 requests per second. Initially, each node is assigned 5 virtual servers. Similar to [15], we also assume the size of a virtual server satisfies an exponential distribution. The update period and the load balancing period are set to 60 and 180 seconds respectively. GDS is compared with another four strategies: (1) Rendezvous Directory Strategy (RDS): there exists only one powerful rendezvous directory in the system, which can perform the optimal load reassignment; (2) Independent Searching Strategy (ISS): a light node samples the id space for heavy nodes, and the sample size k per interval is set to 1; (3) Proportion Strategy: nodes remove or add virtual servers to make their load proportional to their capacity, similar to the approach adopted in [2]; (4) No Load Balancing: no load balancing is performed in the system.

5.2 Experimental Results

Impact of Workload Level. In this experiment, we examine the effectiveness of the load balancing strategies under different workload levels.

Figure 2. Percentage of ill requests under different degrees of workload: (a) uniform workload; (b) skewed workload (Zipf distribution with parameter 1.2).

The results are shown in Fig. 2. Under the uniform workload, GDS, RDS, and ISS can all guarantee a low percentage of ill requests even with a high request rate. GDS is slightly worse than RDS, but outperforms ISS and the Proportion Strategy.
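The synthetic traces described in the methodology above combine a Pareto lifetime, a Zipf-skewed request destination, and exponentially sized virtual servers; a minimal sampler sketch (the parameter values follow the text, the sampling details are our assumptions) could look like:

```python
import random

random.seed(7)  # reproducible illustration

def pareto_lifetime(alpha=2.1, beta=3.0):
    """Peer lifetime (hours) with CDF F(x) = 1 - (1 + x/beta)^(-alpha)."""
    u = random.random()
    return beta * ((1 - u) ** (-1 / alpha) - 1)   # inverse-CDF sampling

def zipf_destination(n_keys, a=1.2):
    """Pick a destination key with Zipf-like popularity (parameter a)."""
    weights = [1 / (k ** a) for k in range(1, n_keys + 1)]
    return random.choices(range(n_keys), weights=weights)[0]

def vs_size(mean=1.0):
    """Virtual-server size, exponentially distributed as assumed in [15]."""
    return random.expovariate(1 / mean)
```

The inverse-CDF step for the lifetime follows from solving u = F(x) for x; the analytic median of this distribution, beta * (2^(1/alpha) - 1), is about 1.17 hours for the Kad-like parameters.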
Under the skewed workload, all the strategies perform worse, but GDS and RDS can still retain a low percentage of ill requests. The reason is that both GDS and RDS can utilize global load information to achieve a more balanced system state, while ISS and the Proportion Strategy cannot exploit such global information.

Impact of Workload Skewness. In this experiment, we vary the parameter alpha of the Zipf distribution. When alpha = 0, it corresponds to a uniform distribution without skewness.

Figure 3. Percentage of ill requests under different workload skewness.

Fig. 3 plots the percentage of ill requests under different workload skewness. It can be observed that, with the increase of skewness, the effectiveness of all the load balancing strategies is impacted greatly. When the workload is highly skewed (i.e., alpha = 3), even RDS cannot handle the skewed workload well. However, under different degrees of skewness, GDS always has an efficiency similar to RDS.

Impact of Workload Shift. In the above experiments, there is no shift in the workload distribution. It is desirable to know how the load balancing strategies respond in case of workload shifts. We still use the skewed workload (Zipf distribution with alpha = 1.2), but we change the set of destinations in the middle. Fig. 4 shows the impact of workload shift on the four load balancing strategies. The experiment lasts for 120 minutes, and the destination set is changed after 60 minutes.

Figure 4. Percentage of ill requests under workload shift.

We can find that, when the workload shift happens, there is a burst of ill requests due to the workload imbalance. After some time, all four strategies can smooth out the burst, but their response times differ. Compared with RDS and GDS, ISS and the Proportion Strategy need more time to converge to the level before the workload shift. GDS has a short response time, but it is still a bit longer than RDS, because besides intra-group load balancing, GDS also needs to perform inter-group balancing in order to balance the whole system.

Impact of System Evolution. Although the scalability of GDS has been analyzed in Sec. 4, we further validate our results by simulation. In the experiment, the system size is varied upward from 500 nodes, and we measure the average message overhead per virtual server.

Figure 5. Message overhead per virtual server under different system sizes.

From Fig. 5, we can observe that the message overhead per virtual server is less than 0.5k bytes/sec most of the time, even as the system size increases. This is because, when the group size reaches a threshold, the group is split to keep the traffic low. The effect of group splitting can be found in the figure: the message overhead increases continuously with the system size, but drops when the group size reaches the splitting threshold.

Impact of System Churn. We also study how churn impacts the load balancing strategies by varying the average peer lifetime. We run the experiments under both uniform and skewed workload distributions (Zipf distribution with alpha = 1.2).

Figure 6. Percentage of ill requests under different churn rates: (a) uniform workload; (b) skewed workload (Zipf distribution with parameter 1.2).

Fig. 6 shows the percentage of ill requests under different churn rates. We can observe that, under both uniform and skewed workloads, in spite of changes in the churn rate, GDS and RDS can always lead to a more balanced system state and minimize the percentage of ill requests. ISS and the Proportion Strategy perform slightly worse when the churn rate is higher. The reason is that a high churn rate increases the convergence time to the balanced state due to the existence of faked live peers, which increases the number of ill requests accordingly. In summary, we find that GDS can achieve almost the same efficiency as RDS; but different from RDS, GDS also achieves resilience and scalability at the same time.
6 Conclusion

In this paper, we propose a new load balancing strategy, called Gossip Dissemination Strategy (GDS), to realize both the efficiency of the Rendezvous Directory Strategy (RDS) and the resilience of the Independent Searching Strategy (ISS). Instead of reporting load information to a rendezvous directory or searching independently, the load information is disseminated via a gossip protocol. Gossiping makes it hard to stop the load-balancing service by simply overwhelming a few rendezvous directories. In GDS, every peer can perform load scheduling within its group, but the responsibility is randomized to resist node-targeted attacks. Besides intra-group load balancing, GDS also supports inter-group load balancing and emergent load balancing. The performance of GDS is analyzed analytically, and we also perform extensive simulation to evaluate the effectiveness of GDS under different scenarios.

References

[1] J. Byers, J. Considine, and M. Mitzenmacher. Simple load balancing for distributed hash tables. In Proc. IPTPS '03, 2003.
[2] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and I. Stoica. Wide-area cooperative storage with CFS. In Proc. 18th ACM Symposium on Operating Systems Principles (SOSP '01), 2001.
[3] B. Godfrey, K. Lakshminarayanan, S. Surana, R. Karp, and I. Stoica. Load balancing in dynamic structured P2P systems. In Proc. IEEE INFOCOM '04, Hong Kong, Mar. 2004.
[4] B. R. Haverkort. Performance of Computer Communication Systems. John Wiley & Sons, 1998.
[5] D. Leonard, V. Rai, and D. Loguinov. On lifetime-based node failure and stochastic resilience of decentralized peer-to-peer networks. In Proc. ACM SIGMETRICS '05, 2005.
[6] M. Mitzenmacher. The power of two choices in randomized load balancing. Doctoral dissertation, 1996.
[7] M. Naor and U. Wieder. Novel architectures for P2P applications: the continuous-discrete approach. In Proc. ACM SPAA, 2003.
[8] A. Rao, K. Lakshminarayanan, S. Surana, R. Karp, and I. Stoica. Load balancing in structured P2P systems. In Proc. IPTPS '03, Feb. 2003.
[9] S. Saroiu.
Measurement and analysis of Internet content delivery systems. Doctoral dissertation, University of Washington, Dec. 2002.
[10] I. Stoica et al. Chord: A scalable peer-to-peer lookup service for Internet applications. In Proc. ACM SIGCOMM '01, San Diego, CA, Aug. 2001.
[11] D. Stutzbach and R. Rejaie. Characterizing churn in peer-to-peer networks. Technical Report CIS-TR-05-03, University of Oregon, 2005.
[12] C. Tang, M. J. Buco, R. N. Chang, S. Dwarkadas, L. Z. Luan, E. So, and C. Ward. Low traffic overlay networks with large routing tables. In Proc. ACM SIGMETRICS '05, 2005.
[13] X. Wang, Y. Zhang, X. Li, and D. Loguinov. On zone-balancing of peer-to-peer networks: Analysis of random node join. In Proc. ACM SIGMETRICS, 2004.
[14] D. Wu, Y. Tian, and K.-W. Ng. On the effectiveness of migration-based load balancing strategies in DHT systems. In Proc. IEEE ICCCN, 2006.
[15] Y. Zhu and Y. Hu. Efficient, proximity-aware load balancing for DHT-based P2P systems. IEEE Transactions on Parallel and Distributed Systems (TPDS), 16, Apr. 2005.
Siew Moi Khor
Microsoft Corporation

June 2003

Applies to: Microsoft® Office Excel 2003

Summary: Preview the new features and enhancements in the latest version of Microsoft Excel, Excel 2003, such as lists, improved Extensible Markup Language (XML) support, Smart Document solutions, the research library, statistical function improvements, and smart tag enhancements and improvements. (23 printed pages)

Contents: Introduction; Excel List Management; List Integration with SharePoint Products and Technologies; Excel List Object Model; Improved XML Support; XML Programmability; Smart Document Solutions; Smart Tags; Statistical Function Improvements; Document Workspace Sites with Windows SharePoint Services Integration; Research Library; Miscellaneous Features; Conclusion

Introduction

The new features in Microsoft® Office Excel 2003 revolve around making common tasks easier, such as lists for simplified data management, and integration with Microsoft SharePoint™ Products and Technologies for workplace collaboration. There are also added features and improvements that will appeal to Office developers. Excel 2003 has also made some changes to statistical functions to ensure that they are as correct as possible. This article provides a high-level preview of Excel 2003's new features and enhancements. It assumes you are familiar with Excel; if not, you can find more information about Excel on the Microsoft Office Web site on MSDN.

Excel List Management

Excel is commonly used to analyze data, and one of the most common data-related activities in Excel is creating lists. Users will find it easy to view, edit, and update lists in Excel 2003, and there is a new user interface and behavior for list ranges. In Excel 2003, you can create lists to group and act upon related data, using existing data or starting from an empty range. When you specify a range as a list, you can manage and analyze the data independently of other data outside the list.
Excel lists are designed to bring semi-structure to a range of the worksheet where users commonly work with list-like data. With Excel lists, you get database-like functionality in the spreadsheet. Although geared toward end-user scenarios, Excel lists are fully programmable: lists are exposed through Excel's object model and can therefore be leveraged by developers. Additionally, information contained within a list can be shared with others through integration with Microsoft Windows SharePoint Services. For ranges that are designated as lists, Excel users are able to share the list by publishing it. Users are also able to import lists into Excel, or link to them so that changes to the list are shared between the server and the Excel client. This allows you to easily share your list and allows people with the right permissions to view, edit, and update it. You can also synchronize changes with the SharePoint site so other users can see updated data by linking the list. Excel 2003 also provides conflict resolution when updating lists from Excel, and allows lists to be modified offline using Excel's Binary File Format (BIFF).

Figure 1. Creating a list from a range with data

Using the Create List command, you can create a list in an empty range, or in one with data as shown in Figure 1. You can create one or more lists on a single worksheet. The list user interface and a corresponding set of functionality are exposed for ranges that are designated as a list. As can be seen in Figure 2, it is easy to identify and modify the contents of the list with the aid of the list's visual elements and functionality.

Figure 2. A list user interface

The user interface (see Figure 2) shows that there is something special about the range, and exposes common commands for working with lists.

Figure 3. A list with a total row

List Integration with SharePoint Products and Technologies

Excel 2003 lists allow you to share the information contained within a list through seamless integration with SharePoint Products and Technologies.
You can create a SharePoint list based on your Excel list on a SharePoint site by publishing the list, as shown in Figure 4 (to display the Publish List dialog box, on the Data menu, point to List and click Publish List). Figure 5 shows how the published list looks on a SharePoint site.

Figure 4. The Publish List to a SharePoint site dialog box

Figure 5. A published Excel 2003 list on a SharePoint site

If you choose to link the list to the SharePoint site, any changes you make to the list in Excel 2003 are reflected on the SharePoint site when you synchronize the list. You can also use Excel to edit existing SharePoint lists: you can modify a list offline and then synchronize your changes later to update the SharePoint list, as shown in Figure 6, using conflict resolution to resolve any conflicts.

Figure 6. Synchronizing data between Excel 2003 and Windows SharePoint Services

Excel 2003 lists also provide a way for users to collaborate on list data and store it in a way that makes it accessible from anywhere, anytime. You can link a list to a custom SharePoint list, which allows you to easily edit that list offline. You link a list by publishing it, programmatically by using the object model, or by exporting it from a SharePoint site.

Excel List Object Model

You can also programmatically manipulate a list by using the list object model. For example, the ListObjects collection object is a collection of all ListObject objects on a worksheet. The ListObject object is a member of the ListObjects collection. Individual ListObject objects in the ListObjects collection are indexed beginning with 1 for the first object, 2 for the second object, and so forth. Besides the ListObjects collection, there are also ListRows and ListColumns collections, with associated methods and properties that you can use to programmatically manipulate lists.
For example, you can add a new column to a list that is not linked to Windows SharePoint Services as follows:

...
Dim objWksheet As Worksheet
Dim objNewCol As ListColumn

Set objWksheet = ActiveWorkbook.Worksheets("Sheet1")
Set objNewCol = objWksheet.ListObjects(1).ListColumns.Add
...

You can get a list range, a list header row address, a list row range, and so forth using the list Range object and Address property as shown below:

...
Dim objListObj As ListObject
Dim objListRow As ListRow

Set objListObj = ActiveSheet.ListObjects(1)
Set objListRow = objListObj.ListRows(5)

Debug.Print objListObj.Range.Address
Debug.Print objListObj.HeaderRowRange.Address
Debug.Print objListRow.Range.Address
...

The following example retrieves the header row name of the second column of the first list:

...
Dim objListObj As ListObject

Set objListObj = ActiveSheet.ListObjects(1)
Debug.Print objListObj.ListColumns(2).Name
...

You can also import a SharePoint list into Excel programmatically, as opposed to exporting a SharePoint list to Excel 2003 from the SharePoint site itself, as demonstrated by the following subroutine.

Note If you want to import the SharePoint list, you must have permission to use a server running SharePoint Products and Technologies.

Sub ImportSharePointList()
    Dim objMyList As ListObject
    Dim objWksheet As Worksheet
    Dim strSPServer As String

    Const SERVER As String = "mySharePointSite"
    Const LISTNAME As String = "{20B4CF11-ACD8-460B-895E-55213C79FEA6}"
    Const VIEWNAME As String = ""

    'The SharePoint server's Web service entry point.
    strSPServer = "http://" & SERVER & "/_vti_bin"

    Set objWksheet = ActiveWorkbook.Worksheets.Add

    'Add a list linked to the SharePoint list.
    Set objMyList = objWksheet.ListObjects.Add(xlSrcExternal, _
        Array(strSPServer, LISTNAME, VIEWNAME), True, , Range("A2"))

    Set objMyList = Nothing
    Set objWksheet = Nothing
End Sub

The LISTNAME and VIEWNAME constants are the appropriate GUIDs for the SharePoint list. The easiest way to get the list GUID is to click the Modify settings and columns link (see Figure 5) located in the left frame of the SharePoint list. You need at least Web Designer permissions on the SharePoint site to do this.
In the Microsoft Internet Explorer address field, you'll see something like List={20B4CF11-ACD8-460B-895E-55213C79FEA6} at the end of the URL; the list GUID is the part after "List=". The view GUID is optional. You can get the view GUID by using the following procedure. View the SharePoint list in HTML by clicking Show in Standard View (see Figure 5) on the Datasheet toolbar. (Note that if your default view is already an HTML view, you don't need to do this step.) When in HTML view, click Edit in Datasheet (see Figure 7).

Figure 7. An HTML view of a SharePoint list

In the Internet Explorer address field, you'll see something like ?ShowInGrid=True&View=%7BF7B36223%2D487D%2D4550%2D8186%2DB286F1D4698E%7D. The view GUID is the part after "View=". The GUID is URL-encoded and can't be used as is: you need to replace %7B with {, %2D with -, and %7D with } before the GUID can be used in the code.

Improved XML Support

More and more data is being expressed in XML, and many users want to get that data into Excel to view, analyze, and manipulate it. The power of Excel in layout, formatting, calculation, and printing makes it a great tool for creating reports. Many users now want to enhance their spreadsheet publishing with more flexible raw-data manipulation, and are looking to XML technologies to do this. Users also want easy ways to bring this data into Excel, edit it, and write it back out while preserving the original schema. Excel 2003 improves upon its XML support. As with Microsoft Office Word 2003, Excel now supports the use of customer-defined schemas. Developers are no longer limited to using the native Excel XML file format (XMLSS), as was the case with Excel 2002. They can now create applications that are based on business-relevant XML definitions. So, instead of writing cumbersome XSLT files to transform XML data to and from XMLSS, developers can now attach their own schema and interchange data with Excel easily.
Through the use of XML, content becomes free-flowing, unlocked data, which can be manipulated and repurposed in part or in full as needed. Also through XML, the sharing of information between documents, databases, and other applications is simplified. The XML support in Excel 2003 greatly reduces the amount of code that developers need to write and maintain in order to get XML data into Excel. End users don't need to know anything about XML, as they can continue to go about their usual tasks of viewing, editing, and writing the data back out. In short, users can continue using Excel the way they always have; they don't need to know anything about XML to take advantage of its capabilities and power in Excel. Excel 2003's XML support enables a range of new scenarios.

Excel 2003 has the ability to support customer-defined schemas, enabling solutions that map any XML schema within the structure of a spreadsheet. Excel 2003 provides a visual data mapping tool similar to that in Word (as shown in Figure 8). Note, however, that Excel only supports XSD; XDR and DTD aren't supported. If a user doesn't have a schema associated with the XML file, Excel infers one on the fly and stores it with the XLS/XMLSS.

Figure 8. Customer-defined XML mapping in Excel

This tool allows developers to see the XML structure in the task pane and quickly create a structured spreadsheet document. It provides a visual view of your schema, and enables WYSIWYG drag-and-drop mapping of your schema elements into the workbook. Unlike the visual tags that surround the XML elements inline in a Word document, mapped elements in Excel are designated with blue, nonprinting cell borders. Logically recurring XML patterns are integrated with the new list feature; the entire XML feature set is built on top of the list feature. Developers can apply XML schemas to a new workbook or leverage existing ones as appropriate.
For advanced spreadsheet models that require the support of various data structures simultaneously, Excel workbooks also support the mapping of multiple XSD schemas. A single translation or mapping of a schema into a workbook is called a map. You can have several maps in one workbook, as long as they don't have overlapping ranges. The maps can all be from the same schema, or from different schemas. This enables users to import data from one schema into the workbook, do some calculations, and then export the results using another schema. Excel 2003 enables the importing and exporting of XML data of a particular schema into and out of the workbook, according to your layout (see Figure 9).

Figure 9. The menu for XML-related tasks

Excel 2003 solves many of these common issues. Excel-based solutions can be extended to deliver data via XML and recollect it. XML Web services provide an easy transport method for XML, making them an optimal vehicle. Users can still import and export XML through the user interface but, more important, developers have full programmatic control to unlock data from workbooks.

Users are generally very comfortable using Excel to view data, so it makes sense for developers to target Excel for solutions capable of delivering XML-based data. Business intelligence, analytics, charting, and other solutions can benefit from Excel's ease and openness in consuming XML. With Excel 2003, a developer can build an Excel worksheet to include the formulas and layout that support a business need, and point it at an XML source to provide users with the latest available content. Users get the rich Excel environment they are used to, and developers get fast results. Developers commonly receive user requests to make data available inside of Excel so users can perform their own ad hoc analysis, which is something that can't be done natively with browser applications.
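As a hedged illustration (not from the original article; the element names here are hypothetical), the XML data delivered to such a workbook is just ordinary XML and can be produced by any tool — for example, a few lines of Python:

```python
import xml.etree.ElementTree as ET

# Build a small XML document against a hypothetical reporting schema.
root = ET.Element("Report")
for name, value in [("North", 120), ("South", 95)]:
    region = ET.SubElement(root, "Region")
    ET.SubElement(region, "Name").text = name
    ET.SubElement(region, "Sales").text = str(value)

# Serialize to bytes; the result could be saved and imported into a map.
xml_bytes = ET.tostring(root)
```

A file written this way could then be imported into a workbook whose map matches the same structure.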
As investments in delivering XML Web services and XML start to become realities, developers can provide users with the data in Excel and easily map that data into workbooks regardless of the incoming structure. The Excel 2003 XML feature set brings with it a rich VBA object model for programmatically adding schema to the workbook, mapping that schema to a sheet, importing the data, exporting the data, finding data and ranges based on XPath, and so forth. Here are some examples of what you can use the rich object model for.

For example, if you want the corresponding XPath in the XML document for a given range, list, or column, you can query as follows:

    ActiveCell.XPath

or

    ActiveSheet.ListObjects(2).ListColumns(3).XPath

To refresh a binding for a map, you can do the following:

    ActiveWorkbook.XmlMaps(1).DataBinding.Refresh

The subroutine below shows how to retrieve all the XPaths that are mapped in an active workbook. First you set the XmlMap whose mapped ranges you want to enumerate. Then you retrieve the top-level element of the schema contained in the XmlMap. Next you query for the top-level element in the schema and search for everything that is mapped from this schema. Then you enumerate through each mapped range (XPath) contained within the area of the mapped range to find out if it is a repeating element. A message box displays the range value for the mapped range and the XPath value, and tells you whether the XPath contains repeating elements.

    Sub EnumMappedRanges()
        Dim objXmlMap As Excel.XmlMap
        Dim strRootElementXPath As String
        Dim objWkSheet As Excel.Worksheet
        Dim objMappedRanges As Excel.Range
        Dim objMappedXPath As Excel.Range
        'The XmlMap to be enumerated for mapped ranges.
        Set objXmlMap = ActiveWorkbook.XmlMaps(1)
        'RootElementName is the top level element in the schema.
        'Build the XPath of the root element by adding
        'the namespace prefix, if necessary.
        If objXmlMap.RootElementNamespace = "" Then
            strRootElementXPath = "/" & objXmlMap.RootElementName
        Else
            strRootElementXPath = "/" & _
                objXmlMap.RootElementNamespace.Prefix & _
                ":" & objXmlMap.RootElementName
        End If
        For Each objWkSheet In ActiveWorkbook.Sheets
            'Query for the top level element in the
            'schema (using the root element XPath)
            'and search for everything that is mapped from this schema.
            Set objMappedRanges = objWkSheet.XmlMapQuery _
                (strRootElementXPath, , objXmlMap)
            If Not objMappedRanges Is Nothing Then
                'Each Range associated to an XPath is contained within
                'an Area of objMappedRanges.
                For Each objMappedXPath In objMappedRanges.Areas
                    MsgBox "Mapped Range: " & objMappedXPath.Address & _
                        vbCrLf & "XPath: " & objMappedXPath.XPath.Value & _
                        vbCrLf & "Is Repeating: " & _
                        objMappedXPath.XPath.Repeating, , _
                        "Mapped Range Found"
                Next
            Else
                MsgBox "No mapped ranges found in Worksheet: " & _
                    objWkSheet.Name, vbExclamation
            End If
        Next
    End Sub

You can also create a mapped range for a non-repeating element. The next subroutine demonstrates how to create a list that contains a multicolumn (five columns in this example, from column A to E) range for a repeating element at cells A1 through E12. It uses the element in the map named Root contained in the candidates01.xml file. After the list is created, the XML nodes are then mapped onto the stand-alone list. Each of the columns, which by default are respectively named Column1, Column2, and so forth, is then individually mapped and renamed accordingly. Finally, the XML data contained in candidates01.xml is imported into the list and populates it. If the XML data requires more than 12 rows, Excel will automatically add more rows to the list range.

    Sub MapToList()
        Dim objMap As XmlMap
        Dim objList As ListObject
        Dim strXSDPath As String
        Dim strXMLPath As String
        Dim strXPath As String
        ' Select the range for the list to be created.
        Range("A1:E12").Select
        strXSDPath = "resume.xsd"
        strXMLPath = "candidates01.xml"
        ' First add the schema to the workbook.
        Set objMap = ActiveWorkbook.XmlMaps.Add(strXSDPath, "Root")
        ' To map repeating items, create a list first.
        Set objList = ActiveSheet.ListObjects.Add
        ' Map the first column by assigning an XPath to each list column.
        strXPath = "/Root/DocumentInfo/HRContact"
        objList.ListColumns(1).XPath.SetValue objMap, strXPath
        ' Change the column header from the default name "Column1".
        objList.ListColumns(1).Name = "HR Contact"
        strXPath = "/Root/Resume/LastName"
        objList.ListColumns(2).XPath.SetValue objMap, strXPath
        objList.ListColumns(2).Name = "Last Name"
        strXPath = "/Root/Resume/FirstName"
        objList.ListColumns(3).XPath.SetValue objMap, strXPath
        objList.ListColumns(3).Name = "First Name"
        strXPath = "/Root/Resume/Address/Address1"
        objList.ListColumns(4).XPath.SetValue objMap, strXPath
        objList.ListColumns(4).Name = "Primary Address"
        strXPath = "/Root/Resume/Address/Phone"
        objList.ListColumns(5).XPath.SetValue objMap, strXPath
        objList.ListColumns(5).Name = "Phone"
        ' Populate the list by importing data from an XML file.
        objMap.Import strXMLPath
    End Sub

The following example shows how you can map a non-repeating element onto a single cell:

    Sub MapElementToCell()
        Dim objMap As XmlMap
        Dim strXSDPath As String
        Dim strXMLPath As String
        Dim strXPath As String
        strXSDPath = "resume.xsd"
        strXMLPath = "candidates01.xml"
        ' First add the schema to the workbook.
        Set objMap = ActiveWorkbook.XmlMaps.Add(strXSDPath, "Root")
        ' Map the element by assigning an XPath to the cell.
        strXPath = "/Root/DocumentInfo/HRContact"
        Range("H8").XPath.SetValue objMap, strXPath, , False
        ' Give the following cell a value.
        Range("G8").Value = "HRContact :"
        ' Populate the cell by importing data from an XML file.
        objMap.Import strXMLPath
    End Sub

The Using the XML Features of the Microsoft Office Access 2003 and Microsoft Office Excel 2003 Object Models article discusses the Excel XML object model in detail and has more code examples that you can refer to for more information. Additionally, there is also an Excel 2003 XML Content Development Kit (CDK) available to Office 2003 program participants to help developers quickly get up to speed on how to build XML solutions using Excel 2003 as a development platform.

Smart Document technology in Excel 2003 and Word 2003 enables the creation of XML-based applications that provide users with contextual content via the Office task pane. Users benefit from a Smart Document's ability to deliver relevant information and actions through an intuitive task pane that synchronizes content based on the user's current location within the document. The task pane presents users with almost any supporting information, such as data that corresponds to the document, relevant help content, calculation fields, hyperlinks, or any number of controls, an example of which is shown in Figure 10.

Figure 10. An Excel Smart Document with the task pane displayed on the right

Excel 2003 and Word 2003 documents can be designed with an underlying XML structure that ensures users are entering and viewing valid information. At the same time, the XML structure enables developers to build the document with context-specific help and supporting information. Smart Documents build on the concept of smart tags introduced in Microsoft Office XP, and extend it by using a document-based metaphor aimed at simplifying and enhancing the user experience when working with documents. Developers can build upon rich XML-based documents in Word 2003 and Excel 2003 to create Smart Document solutions.
These solutions can be deployed and subsequently updated from a server once the initial document or template has been opened on the client, thus making distribution a non-issue and maintenance easy. I discussed Smart Documents in detail in Microsoft Office Word 2003 Preview (Part 1 of 2), which you can refer to for more information.

Smart tag technology was first introduced with Office XP in Excel 2002 and Word 2002. Smart tags enable the dynamic recognition of terms within documents and spreadsheets. Once a term is recognized, the user can invoke an action from a list of actions associated with that particular smart tag. Examples of possible actions are inserting relevant data, linking to a Web page, database lookup, data conversion, and so forth. You can build custom smart tags by using any programming language that can create a Component Object Model (COM) add-in. In Office XP, to build custom smart tag COM add-ins, you implement the ISmartTagRecognizer and ISmartTagAction interfaces. Without having to write any code, you can also build simple smart tags by using a smart tag XML list.

With the Microsoft Office System, smart tag support has been extended to Microsoft Office PowerPoint® 2003 and Microsoft Office Access 2003. In addition, the Research task pane that is available across multiple Office applications also supports smart tags. Smart tags in Office 2003 have also been enhanced and improved based on feedback from users and developers. In Office 2003, the ISmartTagRecognizer and ISmartTagAction interfaces exist unchanged. However, the smart tag application programming interface (API) library has been extended with two additional interfaces that enable new functionality: ISmartTagRecognizer2 and ISmartTagAction2. The library, which is also backward compatible, is named Microsoft Smart Tags 2.0 Type Library. It should be noted that registering a smart tag DLL using its programmatic identifier is no longer supported in the Microsoft Office System.
You can also create smart tags that are schema aware. For example, you can build a smart tag that recognizes a given tag name, resulting in smart tags that can be much smarter about how and when to surface relevant actions. I've discussed the new smart tag features and enhancements in detail in the Microsoft Office Word 2003 Preview (Part 2 of 2) article, which you can refer to for more information. If you are interested in smart tags in Office 2003, you will also want to read the What's New with Smart Tags in Office 2003 article.

Changes have been made to numerous statistical functions to correct shortcomings that include serious inaccuracies that a user is not likely to spot, inaccuracies that result in absurd answers (for example, negative sums of squares), and failure to return a numerical answer when one should be obtainable. Some aspects of the following statistical functions, including rounding results and precision, have been enhanced. Therefore the results of the following functions may differ from those of previous versions of Excel. The list of statistical function improvements is as follows:

Computation of Summary Statistics: PEARSON, RSQ, SLOPE, STDEV, STEYX, TTEST, and VAR

Multiple Linear Regression: LINEST (there is now improved handling of collinearity)

Continuous Probability Distributions: NORMSDIST, LOGNORMDIST, ERF (an Analysis ToolPak function), and ZTEST

Inverse Functions for Continuous Probability Distributions: CHIINV, FINV, NORMSINV, TINV

Improved Random Number Generator

Discrete Probability Distributions: BINOMDIST, HYPGEOMDIST, POISSON, CRITBINOM, and NEGBINOMDIST

The ANOVA function in the Analysis ToolPak has also seen improvement. The changes in statistical functions may also be reflected in the Analysis ToolPak's Descriptive Statistics feature.
Today, most people do ad hoc collaborative authoring in a variety of ways, for example via e-mail, using authoring applications like Word, Excel, or PowerPoint, groupware tools like Windows SharePoint Services, or real-time collaboration tools like instant messaging and conferencing. In Office 2003, ad hoc collaborative authoring combines the best of these approaches through the new Document Workspace sites: the ease of getting collaborative efforts started using e-mail, Windows SharePoint Services online file management and sharing, and the rich editing functionality found in Office applications. Document Workspace sites capitalize on natural entry points, supplement and integrate with existing tools, and minimize collaboration overhead. In addition, Document Workspace sites make growing the collaborative effort straightforward; for example, from sending a simple shared e-mail file attachment to automatically creating a full-blown SharePoint site. For more information about Document Workspace sites, see Microsoft Office Word 2003 Preview (Part 2 of 2).

The new Research Library feature in Office 2003 makes searching for relevant information and integrating that data into Office documents easier. The Research task pane is a task pane-based feature in Word 2003, Excel 2003, Microsoft Office PowerPoint 2003, Microsoft Office Outlook 2003, Microsoft Office Publisher 2003, Microsoft Office Visio 2003, Internet Explorer, and Microsoft Office OneNote 2003. The Research Library allows Office users to easily access Research Library services while working on Office documents. The research sources that come built in with the Microsoft Office System provide easier access to reference tools like a dictionary, thesaurus, translation, encyclopedia, and some Web sites in multiple languages, as well as sites based on SharePoint Products and Technologies.
It's noteworthy that extending the Research Library allows developers to provide an innovative and intelligent solution that permeates multiple applications in the Microsoft Office System, since the Research Library feature, as mentioned earlier, is supported in many Office applications. In addition, the Research Library's integration of smart tag technology allows developers to create custom actions like transforming, inserting, or grabbing data from live feeds. Smart tag integration in the Research Library feature is supported in Word 2003, Excel 2003, PowerPoint 2003, Outlook 2003, and Visio 2003. For more information about the Research Library, see Microsoft Office Word 2003 Preview (Part 1 of 2). Also, see Build Your Own Research Library with Office 2003 and the Google Web Service API and Customizing the Research Task Pane.

There are quite a number of miscellaneous feature enhancements in Excel 2003, like side-by-side workbook comparison, Tablet PC support, and so forth. Viewing changes made by multiple users can be difficult using one workbook. In Excel 2003 there is a new feature for comparing workbooks side by side. To do this, you use the Compare Side by Side with command on the Window menu, as shown in Figure 11. Comparing workbooks side by side allows you to see the differences between two workbooks more easily, as you don't have to merge all changes into one workbook. You can scroll through both workbooks at the same time to identify differences between them (see Figure 12).

Figure 11. The compare workbooks side by side command

Figure 12. Comparing workbooks side by side without merging content

If you are using a device that supports ink input, such as a Tablet PC, you can use the pen device and take advantage of handwriting in Office documents as you would when using a pen and a hard-copy document. For example, you can make handwritten comments, jot down handwritten content, and mark up documents with handwritten annotations.
Additionally, you can now view task panes horizontally to help you do your work on the Tablet PC if you so prefer.

Microsoft Office Online

Microsoft Office Online is better integrated with all Microsoft Office programs, allowing you to take full advantage of what the site has to offer while you work. You can visit Microsoft Office Online directly using the links provided in various task panes and menus in your Office program to access articles, tips, clip art, templates, online training, downloads, and services to enhance how you work with Office programs. The site is updated regularly with new content based on customer feedback and popular requests from you and others who use the Microsoft Office System.

With Excel lists, you get database-like functionality in the spreadsheet. You can create lists to group and act upon related data, and share a list with others by publishing it on a SharePoint site. The XML enhancements and improvements in Excel 2003 make it easier for Excel to integrate with other systems. Excel data has now become free-flowing, unlocked data that can be manipulated and easily repurposed. The innovative Smart Document technology enables the creation of XML-based applications that provide users with contextual content and relevant help, making users more productive. Document Workspace sites make collaboration an easy undertaking. You will find smart tags even more useful and powerful in Excel 2003 with the added enhancements and improvements. The new Research Library feature enables information search from within an Office application and makes integrating that data into Office documents easy. Developers can build custom research sources that integrate information from a company's back-end database sources and, by extending the Research Library, make business-specific data readily available to users. In Excel 2003, you will find comparing the contents of workbooks easier when using the side-by-side comparison feature.
These are some of the new and exciting features to look forward to in Excel 2003.

Acknowledgement

I would like to thank Chad Rothschiller, Margaret Hudson, James Rivera, Joseph Chirilov, Michael McCormack, Pat King, John Tholen, Keith Mears, and Marise Chan from the Excel 2003 team, and Charles Maxson, an independent consultant, for their contributions and help in writing this article.
http://msdn.microsoft.com/en-us/library/aa203719%28office.11%29.aspx
Last Updated on May 28, 2020

The Keras Python library makes creating deep learning models fast and easy. The sequential API allows you to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs. The functional API in Keras is an alternate way of creating models that offers a lot more flexibility, including creating more complex models.

In this tutorial, you will discover how to use the more flexible functional API in Keras to define deep learning models. After completing this tutorial, you will know:

- The difference between the Sequential and Functional APIs.
- How to define simple Multilayer Perceptron, Convolutional Neural Network, and Recurrent Neural Network models using the functional API.
- How to define more complex models with shared layers and multiple inputs and outputs.

Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

- Update Nov/2017: Added note about hanging dimension for input layers.
- Update Nov/2018: Added missing flatten layer for CNN, thanks Konstantin.
- Update Nov/2018: Added description of the functional API Python syntax.

Tutorial Overview

This tutorial is divided into 7 parts; they are:

- Keras Sequential Models
- Keras Functional Models
- Standard Network Models
- Shared Layers Model
- Multiple Input and Output Models
- Best Practices
- NEW: Note on the Functional API Python Syntax

1. Keras Sequential Models

As a review, Keras provides a Sequential model API. If you are new to Keras or deep learning, see this step-by-step Keras tutorial. The Sequential model API is a way of creating deep learning models where an instance of the Sequential class is created and model layers are created and added to it.
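The two styles described in the following paragraph — passing the layers as an array, or adding them piecewise — can be sketched like this (a minimal sketch using the tf.keras namespace; the layer sizes are illustrative):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense

# Layers defined and passed to the Sequential constructor as a list.
model = Sequential([
    Input(shape=(1,)),
    Dense(2),
    Dense(1),
])

# The same model built piecewise with add().
model2 = Sequential()
model2.add(Input(shape=(1,)))
model2.add(Dense(2))
model2.add(Dense(1))
```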
For example, the layers can be defined and passed to the Sequential constructor as an array, or added piecewise with calls to model.add(). The Sequential model API is great for developing deep learning models in most situations, but it also has some limitations. For example, it is not straightforward to define models that may have multiple different input sources, produce multiple output destinations, or re-use layers.

2. Keras Functional Models

The Keras functional API provides a more flexible way of defining models. It specifically allows you to define multiple input or output models as well as models that share layers. Let’s look at the three unique aspects of the Keras functional API in turn:

1. Defining Input

Unlike the Sequential model, you must create and define a standalone Input layer that specifies the shape of the input data. The input layer takes a shape argument that is a tuple that indicates the dimensionality of the input data. When the input is one-dimensional, the shape tuple is written with a trailing comma, for example: Input(shape=(10,)).

2. Connecting Layers

Let’s make this clear with a short example. We can create the input layer as above, then create a hidden layer as a Dense that receives input only from the input layer. Note the (visible) after the creation of the Dense layer that connects the input layer output as the input to the dense hidden layer. It is this way of connecting layers piece by piece that gives the functional API its flexibility. For example, you can see how easy it would be to start defining ad hoc graphs of layers.

3. Creating the Model

After creating all of your model layers and connecting them together, you must define the model. As with the Sequential API, the model is the thing you can summarize, fit, evaluate, and use to make predictions. Keras provides a Model class that you can use to create a model from your created layers. It requires that you only specify the input and output layers. Now that we know all of the key pieces of the Keras functional API, let’s work through defining a suite of different models and build up some practice with it.
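Putting the three steps together, a minimal functional-model sketch (the variable names mirror those used in this tutorial; the layer sizes are illustrative) looks like this:

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

# 1. Define a standalone input layer; (10,) means 10 input features.
visible = Input(shape=(10,))

# 2. Connect a hidden layer by calling it on the input tensor:
#    the (visible) after Dense(10) wires the two layers together.
hidden = Dense(10)(visible)

# 3. Create the model by specifying the input and output layers.
model = Model(inputs=visible, outputs=hidden)
```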
Each example is executable and prints the structure and creates a diagram of the graph. I recommend doing this for your own models to make it clear what exactly you have defined. My hope is that these examples provide templates for you when you want to define your own models using the functional API in the future.

3. Standard Network Models

When getting started with the functional API, it is a good idea to see how some standard neural network models are defined. In this section, we will look at defining a simple multilayer Perceptron, convolutional neural network, and recurrent neural network. These examples will provide a foundation for understanding the more elaborate examples later.

Multilayer Perceptron

In this section, we define a multilayer Perceptron model for binary classification. The model has 10 inputs, 3 hidden layers with 10, 20, and 10 neurons, and an output layer with 1 output. Rectified linear activation functions are used in each hidden layer and a sigmoid activation function is used in the output layer, for binary classification. Running the example prints the structure of the network. A plot of the model graph is also created and saved to file.

Multilayer Perceptron Network Graph

Convolutional Neural Network

In this section, we will define a convolutional neural network for image classification. The model receives black and white 64×64 images as input, then has a sequence of two convolutional and pooling layers as feature extractors, followed by a fully connected layer to interpret the features and an output layer with a sigmoid activation for two-class predictions. Running the example summarizes the model layers. A plot of the model graph is also created and saved to file.

Convolutional Neural Network Graph

Recurrent Neural Network

In this section, we will define a long short-term memory recurrent neural network for sequence classification. The model expects 100 time steps of one feature as input.
The model has a single LSTM hidden layer to extract features from the sequence, followed by a fully connected layer to interpret the LSTM output, followed by an output layer for making binary predictions. Running the example summarizes the model layers. A plot of the model graph is also created and saved to file.

Recurrent Neural Network Graph

4. Shared Layers Model

Multiple layers can share the output from one layer. For example, there may be multiple different feature extraction layers from an input, or multiple layers used to interpret the output from a feature extraction layer. Let’s look at both of these examples.

Shared Input Layer

In this section, we define multiple convolutional layers with differently sized kernels to interpret an image input. The model takes black and white images with the size 64×64 pixels. There are two CNN feature extraction submodels that share this input; the first has a kernel size of 4 and the second a kernel size of 8. The outputs from these feature extraction submodels are flattened into vectors, concatenated into one long vector, and passed on to a fully connected layer for interpretation before a final output layer makes a binary classification. Running the example summarizes the model layers. A plot of the model graph is also created and saved to file.

Neural Network Graph With Shared Inputs

Shared Feature Extraction Layer

In this section, we will use two parallel submodels to interpret the output of an LSTM feature extractor for sequence classification. The input to the model is 100 time steps of 1 feature. An LSTM layer with 10 memory cells interprets this sequence. The first interpretation model is a shallow single fully connected layer, the second is a deep 3-layer model. The outputs of both interpretation models are concatenated into one long vector that is passed to the output layer used to make a binary prediction. Running the example summarizes the model layers.
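A sketch of the shared feature extraction model just described (the input shape and LSTM size come from the text; the Dense layer sizes in the deep interpretation branch are illustrative):

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, concatenate

# 100 time steps of 1 feature.
visible = Input(shape=(100, 1))

# Shared LSTM feature extraction layer with 10 memory cells.
extract = LSTM(10)(visible)

# First interpretation model: a shallow single fully connected layer.
interp1 = Dense(10, activation='relu')(extract)

# Second interpretation model: a deep 3-layer model.
interp21 = Dense(10, activation='relu')(extract)
interp22 = Dense(20, activation='relu')(interp21)
interp23 = Dense(10, activation='relu')(interp22)

# Concatenate both interpretations and make a binary prediction.
merge = concatenate([interp1, interp23])
output = Dense(1, activation='sigmoid')(merge)
model = Model(inputs=visible, outputs=output)
```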
A plot of the model graph is also created and saved to file.

Neural Network Graph With Shared Feature Extraction Layer

5. Multiple Input and Output Models

The functional API can also be used to develop more complex models with multiple inputs, possibly with different modalities. It can also be used to develop models that produce multiple outputs. We will look at examples of each in this section.

Multiple Input Model

We will develop an image classification model that takes two versions of the image as input, each of a different size. Specifically, a black and white 64×64 version and a color 32×32 version. Separate feature extraction CNN models operate on each, then the results from both models are concatenated for interpretation and ultimate prediction. Note that in the creation of the Model() instance, we define the two input layers as an array. The complete example is listed below. Running the example summarizes the model layers. A plot of the model graph is also created and saved to file.

Neural Network Graph With Multiple Inputs

Multiple Output Model

In this section, we will develop a model that makes two different types of predictions. Given an input sequence of 100 time steps of one feature, the model will both classify the sequence and output a new sequence with the same length. An LSTM layer interprets the input sequence and returns the hidden state for each time step. The first output model creates a stacked LSTM, interprets the features, and makes a binary prediction. The second output model uses the same output layer to make a real-valued prediction for each input time step. Running the example summarizes the model layers. A plot of the model graph is also created and saved to file.

Neural Network Graph With Multiple Outputs

6. Best Practices

In this section, I want to give you some tips to get the most out of the functional API when you are defining your own models.

- Consistent Variable Names.
Use the same variable name for the input (visible) and output layers (output) and perhaps even the hidden layers (hidden1, hidden2). It will help to connect things together correctly.
- Review Layer Summary. Always print the model summary and review the layer outputs to ensure that the model was connected together as you expected.
- Review Graph Plots. Always create a plot of the model graph and review it to ensure that everything was put together as you intended.
- Name the layers. You can assign names to layers that are used when reviewing summaries and plots of the model graph. For example: Dense(1, name='hidden1').
- Separate Submodels. Consider separating out the development of submodels and combining the submodels together at the end.

Do you have your own best practice tips when using the functional API? Let me know in the comments.

7. Note on the Functional API Python Syntax

If you are new or new-ish to Python, the syntax used in the functional API may be confusing. For example, given a line like hidden = Dense(32)(input), what does the double-bracket syntax do? What does it mean? It looks confusing, but it is not a special Python thing, just one line doing two things. The first bracket "(32)" creates the layer via the class constructor, and the second bracket "(input)" is a function with no name implemented via the __call__() function, that when called will connect the layers. The __call__() function is a default function on all Python objects that can be overridden and is used to "call" an instantiated object. Just like the __init__() function is a default function on all objects called just after instantiating an object to initialize it. We can do the same thing in two lines, by first constructing the layer (layer = Dense(32)) and then calling it on the input (hidden = layer(input)). I guess we could also call the __call__() function on the object explicitly (layer.__call__(input)), although I have never tried it.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.
- The Sequential model API
- Getting started with the Keras Sequential model
- Getting started with the Keras functional API
- Model class functional API

Summary

In this tutorial, you discovered how to use the functional API in Keras for defining simple and complex deep learning models. Specifically, you learned:

- The difference between the Sequential and Functional APIs.
- How to define simple Multilayer Perceptron, Convolutional Neural Network, and Recurrent Neural Network models using the functional API.
- How to define more complex models with shared layers and multiple inputs and outputs.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.

Thank you, I have been waiting for this tutorial

Thanks, I hope it helps!

I’m using a keras API and I’m using the shared layers with 2 inputs and one output and have a problem with the fit model model.fit([train_images1, train_images2], batch_size=batch_size, epochs=epochs, validation_data=([test_images1, test_images2])) I have this error: ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.2 , 0.5019608 , 0.8862745 , …, 0.5686275 , 0.5137255 , 0.59607846], [0.24705882, 0.3529412 , 0.31764707, …, 0.5803922 , 0.57254905, 0.6 ],

Perhaps double check the data has the shape expected by each model head.

You’re awesome Jason! I just can’t wait to see more from you on this wonderful blog, where did you hide all of this 😉 Can you please write a book where you implement more of the functional API? I’m sure the book will be a real success Best regards Thabet

I agree with Thabet Ali, need a book on advance functional API part.

What aspects of the functional API do you need more help with Tom?

Thanks! What problems are you having with the functional API?
I would like to know more on how to implement autoencoders on multi-input time series signals with a single-output categorical classification, using the functional API. Thanks.

LSTMs, for example, can take multiple time series directly and don’t require parallel input models. Why do you want to use autoencoders for time series?

Yes, I think it’s because the output sequence is shorter than the input sequence. Like seen in this dataset: Where there are three time series inputs and 5 different classes as output.

Hi Jason, are you asking for a wish list? 🙂 – Autoencoders / Anomaly detection with Keras’s API. Thanks!

Thanks for the suggestion! Yes, totally agree.

Jason, thank you for a very interesting blog. Beautiful article which can open new doors.

Thanks Alexander.

Dr. Brownlee, how do you study the mathematics behind so many algorithms that you implement? Regards, Leon

I read a lot 🙂

Can I consider the initial_state of an LSTM layer as an input branch of the architecture? Say that for each data point I have a sequence s1, s2, s3 and a context feature X. I define an LSTM with 128 neurons; for each batch I want to map X to a 128-dimensional feature through a Dense(128) layer and set it as the initial_state for that batch, meanwhile the sequence s1, s2, s3 is fed to the LSTM as the input sequence.

Sorry Alex, I’m not sure I follow. If you have some ideas, perhaps design an experiment to test them?

Thank you for the awesome tutorial. For anyone who wants to directly visualize in Jupyter Notebooks, use the following lines:
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))

Awesome, thanks for the tip!

Thanks for this blog. It is really helpful and well explained.

Thanks Luis, I’m glad to hear that. I appreciate your support!

Hi Jason – I am wondering if the code should be: visible = Input(image_file, shape=(height, width, color channels)) — if not, I wonder how the code references the image in question….
Surprised no one else asked this… YOU ROCK JASON! In section 1 you write that “the shape tuple is always defined with a hanging last dimension”, but when you define the convolutional network, you define it as such: visible = Input(shape=(64,64,1)) without any hanging last dimension. Am I missing something here? Yes, I was incorrect. This is only the case for 1D input. I have updated the post, thanks. You are still incorrect. There is no “hanging last dimension”, even in the case of 1D input. It’s a “trailing comma” used by Python to disambiguate 0- or 1-tuples from parenthesis. Thanks for your note. Jason, I’ve been reading about keras and AI for some time and I always find your articles more clear and straightforward than all the other stuff on the net 🙂 Thanks! Thanks Franek. Jason, for some reasons I need to know the output tensors of the layers in my model. I’ve tried to experiment with the layer.get_output, get_output_et etc following the keras documentation, but I always fail to get anything sensible. I tried to look for this subject on your blog, but I couldn’t find anything. Are you planning to write a post on this? 🙂 That would really help! Sorry, I don’t have good advice for getting the output tensor. Perhaps post to the keras google group or slack channel? See here: Hi Jason, yet another great post. Using one of your past posts I created an LSTM that, using multiple time series, predicts several step ahead of a specific time series. Currently I have this structure: where: – x_tr has (440, 7, 6) dimensions (440 samples, 7 past time steps, 6 variables/time series) – y_tr has (440, 21) dimensions, that is 440 samples and 21 ahead predicted values. Now, I’d like to extend my network so that it predicts the (multi-step ahead) values of two time series. I tried this code: where y1 and y2 both have (440, 21) dimensions, but I have this error: “Error when checking target: expected dense_4 to have 3 dimensions, but got array with shape (440, 21)”. 
How should I reshape y1 and y2 so that they fit with the network? Sorry, I cannot debug your code for you, perhaps post to stack overflow? Check the write-up on lstms, this will greatly help. The output series must have dimensions as n,2,21 ; n data with 2 timestep and 21 variables. Y.reshape( (440/2),2,21) The x dimension is already in place, change the y dimension to have 3 dimensions. Num. Samples, time step, variables Great tutorial, it really helped me understand Model API better! THANKS! I’m glad to hear that. Hii Jason, this was a great post, specially for beginners to learn the Functional API. Would you mind to write a post to explain the code for a image segmentation problem step by step for beginners? Thanks for the suggestion. Hi Jason, thank you for such a great post. It helped me a lot to understand functional API’s in keras. Could you please explain how we define the model.compile statement in multiple output case where each sub-model has a different objective function. For example, one output might be regression and the other classification as you mentioned in this post. I believe you must have one objective function for the whole model. Splendid tutorial as always! Do you think you could make a tutorial about siamese neural nets in the future? It would be particularly interesting to see how a triplet loss model can be created in keras, one that recognizes faces, for example. The functional API must be the way to go, but I can’t imagine exactly how the layers should be connected. Thanks for the suggestion Harry. Good tutorial Thanks alot Thanks. Thank you very much for your tutorial! He helped me a lot! I have a question about cost functions. I have one request and several documents: 1 relevant and 4 irrelevant. I would like a cost function that both maximizes SCORE(Q, D+) and minimizes SCORE(Q, D-). 
So, I could have Delta = SUM{ Score(Q,D+) – Score(Q,Di-) } for i in (1..4) Using the Hinge Loss cost function, I have L = max(0, 4 – Delta) I wanted to know if taking the 4 documents, calculating their score with the NN and sending everything in the cost function is a good practice? I was wondering if was possible to have two separate layers as inputs to the output layer, without concatenating them in a new layer and then having the concatenated layer project to the output layer. If you can’t do this with Keras, could you suggest another library that allows you to do this? I am new to neural networks, so would prefer a library that is suitable for newbies. There are a host of merge layers to choose from besides concat. Does that help? I have two images, first image and its label is good, second images and its label is bad. I want to pass both images at a time to deep learning model for training. While testing I will have two images (unlabelled) and I want to detect which one is good and which one is bad. Could you please tell how to do it? You will need a model with two inputs, one for each image. Any examples you have to give two inputs to a model Yes, see this caption example for inputting a photo and text: Thanks so much for the tutorial! It is much appreciated. Where I am confused is for a model with multiple inputs and multiple outputs. To make it simple, lets say we have two input layers, some shared layers, and two output layers. I was able to build this in Keras and get a model printout that looks as expected, but when I go to fit the model Keras complains: ValueError: All input arrays (x) should have the same number of samples Is it not possible to feed in inputs of different sizes? Correct. Inputs must be padded to the same length. Hi, In the start of the post, you talked about hanging dimension to entertain the mini batch size. Could you kindly explain this a little. 
My feature Matrix is a numpy N-d Array, in one -hot -encoded form: (6000,200) , and my batch size = 150. Does this mean, I should give shape=(200,) ? * batch size = 50. Thanks! Sorry, it means a numpy array where there is really only 1D of data with the second dimension not specified. For example: Results in: It’s 1D, but looks confusing to beginners. This means I should use shape(200,). Thanks a lot for the prompt reply !!! how to do a case with multi-input and multi-output cases Simply combine some of the examples from this post. Hi Jason, Thank you very much for your blog, it’s easy to understand via some examples, I recognize that learning from some example is one of the fast way to learn new things. In your post, I have a little confuse that in case multi input. If you have 1 image but you want get RGB 32x32x3 version and 64x64x1 gray-scale version for each Conv branch. How can the network know that. Because when we define the network we only said the input_shape, we don’t say which kind of image we want to led into the Conv in branch 1 or branch 2? In fit method we also have to said input and output, not give the detail. And if I want in the gray-scale version is: 32x32x3 (3 because I want to channel-wise, triple gray-scale version). And how can the network recognize the first branch is for gray-scale. Sorry for my question if is there any easy thing I don’t know. Thanks again for your post. I always follow your post. You could run all images through one input and pad the smaller image to the dimensions of the larger image. Or you can use a multiple input model and define two separate input shapes. Hi, When I run this functional API in model for k fold cross validation, the numbers in the naming the dense layer is increasing in the return fitted model of each fold. Like in first fold it’s “dense_2_acc”, then in 2nd fold its “dense_5_acc”. By my model summary shows my model is correct. 
Could you kindly tell why it is changing the names in the fitted model “history” object of each fold? Regards,

Sorry, I have not seen this behavior. Perhaps it’s a fault? You could try posting to the Keras list:

This was a fantastic and concise beginner tutorial for building neural networks with Keras. Great job!

Thanks, I’m glad to hear that.

Thanks for the tutorial. Regarding “visible = Input(shape=(2,))”: I was a bit confused at first after reading these 2 sentences. Regarding the trailing comma: a trailing comma is always required when we have a tuple with a single element. Otherwise (2) returns only the value 2, not a tuple with the value 2. As the shape parameter for Input should be a tuple, we do not have any option other than to add a comma when we have a single element to be passed. So, I’m not able to get the meaning implied in “the shape must explicitly leave room for the shape of the mini-batch size … Therefore, the shape tuple is always defined with a hanging last dimension”.

Hi Jason, thanks so much for such a great post. So, in your shared input layers section, you have the same CNN models for feature extraction, and the outputs can be concatenated since both features produced a binary classification result. But what if we have a separate categorical classification model (for sequence classification) and a regression model (for time series) which rely on the same input data. So is it possible to concatenate a categorical classification model (which produces more than two classes) with a regression model, where the final result after model concatenation is binary classification? Your opinion, in this case, is much appreciated. Thank you.

Not sure I follow. Perhaps try it and as many variations as you can think of, and see.

Hi Jason, thanks for the neat post.

You’re welcome, I’m glad it helped.

Thanks Jason, great and helpful post. Can you go over combining wide and deep models using the functional API?

Thanks for the suggestion.
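The trailing-comma point discussed above can be verified directly in plain Python: without the comma, the parentheses are just grouping, not a tuple.

```python
# (2) is just the integer 2 wrapped in grouping parentheses;
# (2,) is a tuple containing the single element 2.
a = (2)
b = (2,)

print(type(a))  # <class 'int'>
print(type(b))  # <class 'tuple'>
```

This is why shape=(2,) must be written with the comma: Input expects a tuple, and without the trailing comma Python would see a plain integer instead.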
Do you have a specific question or concern with the approach?

Thanks Jason, your articles are the best and the consistency across articles is something to be admired. Can you also explain residual nets using the functional API? Thanks

Thanks, and thanks for the suggestion.

Thank you so much for your great post. Though I have one question: I use the Multiple Input and Output Models with the same network for my inputs. I want to share the weights between them; can you please point out how I should address that?

Copy them between layers or use a wrapper that lets you reuse a layer, e.g. like TimeDistributed.

All ur posts r awesome. God bless u 🙂

Thanks, I’m glad they help.

Really, amazing tutorial. Why don’t u complete it with the testing step “predict”? Thanks again 🙂

Thanks for the suggestion. I explain how to make predictions here:

The “Shared Input Layer” is very interesting. I wonder if the 2 convolutional structures can be replaced by 2 pre-trained models (let’s say VGG16 and Inception). What do u think?

Sure, try it.

This question is with reference to your older post on “Multilayer Perceptron Using the Window Method”: In your code there, you have successfully created a model using a multidimensional array input, without having to flatten it. Is it possible to do this with the Keras functional API as well? Every solution I find seems like it requires flattening of the data; however, I’m trying to do a time series analysis and flattening would lead to loss of information.

The shape of the input is unrelated to the use of the functional API. You can use either API regardless of the shape of the data. Time series data must be transformed into a supervised learning problem:

Hey, thanks for the blog. I want to know how I can extract features from an intermediate layer in an AlexNet model. I am using the functional API.

You can create a new model that ends at the layer of interest, then use a forward propagation (e.g. call predict()) to get the features.
I give an example for VGG in the image captioning tutorial:

Hi Jason! Great blog once again, thank you. I have a question regarding the current research on multi-input models. I’m building a model that combines text sequences and patient characteristics. For this I’m using an LSTM ‘branch’ that I concat with a normal ‘branch’ in a neural network. I was wondering whether you came across some nice papers/articles that go a little deeper into such architectures, possibly giving me some insights in how to optimize this model and understand it thoroughly. With kind regards, Joost Zeeuw

Not off hand. I recommend experimenting a lot with the architecture and seeing what works best for your dataset. I’d love to hear how you go.

Hi Jason! Really great blog! My question is: how to feed this kind of model with a generator? Well, two generators actually, one for test and one for train. I’m trying to do phoneme classification BTW. I have tried something like:

#model
input_data = Input(name='the_input', shape=(None, self.n_feats))
x = Bidirectional(LSTM(20, return_sequences=False, dropout=0.3), merge_mode='sum')(input_data)
y_pred = Dense(39, activation="softmax", name="out")(x)
labels = Input(name='the_labels', shape=[39], dtype='int32') # not sure of this but how to compare labels otherwise??
self.model = Model(inputs=[input_data, labels], outputs=y_pred)
... # I'm gonna omit the optimization and compile steps for simplicity

my generator yields something like this:

return ({'the_input': data_x, 'the_labels': labels}, {'out': np.zeros([batch_size, np.max(seq_lens), num_classes])})

Also, just to be sure: for sequence classification (many-to-one) I should use return_sequences=False in recurrent layers and Dense instead of TimeDistributed, right? Thanks! Isaac

I have an example of using a generator here (under progressive loading):

Hi Jason, Thanks for the blog. It is very interesting.
After reading your blog, I got one doubt if you can help me out in solving that – what if one wants to extract feature from an intermediate layer from a fine-tuned Siamese network which is pre-trained with a feed-forward multi-layer perceptron. Is there any lead that you can provide. It would be very helpful to me. You can get the weights for a network via layer.get_weights() Hi, Thanks for your article. I have one question. What is the more efficient way to combine discrete and continuous features layers? Often an integer encoding, one hot encoding or an embedding layer are effective for categorical variables. Hi, Jason your blog is very good. I want to add custom layer in keras. Can you please explain how can I do? Thanks. I hope to cover that topic in the future. Hi Jason, Thanks for the excellent post. I attempted to implement a 1 hidden layer with 2 neurons followed by an output layer, both dense with sigmoid activation to train on XOR input – classical problem, that of course has a solution. However, without specifying a particular initialisation, I was unable to train this minimal neuron network toward a solution (with high enough number of neurons, I think it is working independent of initialisation). Could you include such a simple example as a test case of Keras machinery and perhaps comment on the pitfalls where presumably the loss function has multiple critical points? Cheers, Matt Thanks for the suggestion. XOR is really only an academic exercise anyway, perhaps focus on some real datasets? Thanks for your excellent tutorials. I am trying to use Keras Functional API for my problem. I have two different sets of input which I am trying to use a two input – one output model. 
My model looks like your “Multiple Input Model” example and as you mentioned I am doing the same thing as : model = Model(inputs=[visible1, visible2], outputs=output) and I am fitting the model with this code: model.fit([XTrain1, XTrain2], [YTrain1, YTrain2], validation_split=0.33, epochs=100, batch_size=150, verbose=2), but I’m receiving error regarding the size mismatching. The output TensorShape has a dimension of 3 and YTrain1 and YTrain2 has also the shape of (–, 3). Do you have any suggestion on how to resolve this error? I would be really thankful. If the model has one output, you only need to specify one yTrain. Hi Thank you for your reply. I have another question which I will be grateful if you could help me with that. In your Multilayer Perceptron example, which the input data is 1-D, if I add a reshape module at the end of the Dense4 to reshape the output into a 2D object, then is it possible to see this 2D feature space as an image? Is there any syntax to plot this 2D tensor object? Thanks If you fit an MLP on an image, the image pixels must be flattened to 1D before being provided as input. Thanks, Jason Can you give me an example of how to combine Conv1D => BiLSTM => Dense I try to do but can’t figure out how to combine them This will help as a start: Thank you so much for quick reply Jason, I read this article, very useful! But when I apply, I face that it has a very strange thing, I don’t know why: Let see my program, it runs normally, but the val_acc, I don’t know why it always .] – ETA: 0s – loss: 0.2195 – acc: 0.8978 Epoch 00046: loss improved from 0.22164 to 0.21951, 40420/40420 [==============================] – 386s – loss: 0.2195 – acc: 0.8978 – val_loss: 5.2004 – val_acc: 0.2399 Epoch 48/100 40416/40420 [============================>.] 
– ETA: 0s – loss: 0.2161 – acc: 0.9010 Epoch 00047: loss improved from 0.21951 to 0.21610, 40420/40420 [==============================] – 390s – loss: 0.2161 – acc: 0.9010 – val_loss: 5.0661 – val_acc: 0.2369 Epoch 49/100 40416/40420 [============================>.] – ETA: 0s – loss: 0.2274 – acc: 0.8965 Epoch 00048: loss did not improve 40420/40420 [==============================] – 393s – loss: 0.2276 – acc: 0.8964 – val_loss: 5.1333 – val_acc: 0.2412 Epoch 50/100 40416/40420 [============================>.] – ETA: 0s – loss: 0.2145 – acc: 0.9028 Epoch 00049: loss improved from 0.21610 to 0.21455, 40420/40420 [==============================] – 395s – loss: 0.2146 – acc: 0.9027 – val_loss: 5.3898 – val_acc: 0.2344 Epoch 51/100 40416/40420 [============================>.] – ETA: 0s – loss: 0.2100 – acc: 0.9051 Epoch 00050: loss improved from 0.21455 to 0.20999, You may need to tune the network to your problem. I tried many times, but even it overfits all database, val_acc still low. I know it overfits all because I use predict program to predict all database, acc high as training acc. Thank you Perhaps try adding some regularization like dropout? Perhaps getting more data? Perhaps try reducing the number of training epochs? Perhaps try reducing the size of the model? thank you, Jason, – I am trying to test by adding some dropout layers, – the number of epochs when training doesn’t need to reduce because I observe it frequently myself, – about the size of the model, I am training 4 programs in parallel to check it. – the last one, getting more data, I will do if all of above have better results Sounds great. hi Jason tnx for this awesome post really helpful when i run this code: l_input = Input(shape=(336, 25)) adense = GRU(256)(l_input) bdense = Dense(64, activation=’relu’)(adense) . . . i’ll get this error: ValueError: Invalid reduction dimension 2 for input with 2 dimensions. 
for ‘model_1/gru_1/Sum’ (op: ‘Sum’) with input shapes: [?,336], [2] and with computed input tensors: input[1] = . i’m really exhausted and i didn’t find the answer anywhere. what should i do? i appreciate your help Sounds like the data and expectations of the model do not match. Perhaps change the data or the model? This is a particularly helpful tutorial, but I cannot begin to use without data source. Thanks. I left a previous reply about needing data sources, I see other readers not having this problem, but seems I am still at the stage where I don’t see what data to input or how to preprocess for these examples. I am also confused, as looks like a png is common source. I am particularly interested in example that takes text and question and returns an answer – where would I find such input and how to fit into your code? Jason, What dataset from your github datasets would be good for this LSTM tutorial? Or is there an online dataset you could recommend. I am interested in both LSTM for text processing (not IMDB) and Keras functional API Not sure I follow what you are trying to achieve? Any chance of a tutorial on this using some real/toy data as a vehicle I have many deep learning tutorials on real dataset, you can get started here: And here: And here: Thank you Mr.Jason, Can you help me to predict solar radiation using kalman filter? Have you a matlab code about kalman filter for solar radiation prediction. Best regards Sorry, I don’t have examples in matlab nor an example of a kalman filter. Hello how are you? Sorry for the inconvenience. I’m following up on his explanations of Keras using neural networks and convolutional neural networks. I’m trying to perform a convolution using a set of images that three channels each image and another set of images that has one channel each image. When I run a CNN with Keras for each type of image, I get a result. So I have two inputs and one output. 
The entries are X_train1 with size of (24484,227,227,1) and X_train2 with size of (24484,227,227,3). So I perform a convolution separately for each input and then I use the “merge” command from KERAS, then I apply the “merge” on a CNN. However, I get the following error: ValueError: could not broadcast input array from shape (24484,227,227,1) into shape (24484,227,227). I already tried to take the number 1 and so stick with the shape (24484,227,227). So it looks like it’s right. But the error happens again in X_train2 with the following warning: ValueError: could not broadcast input array from shape (24484,227,227,3) into shape (24484,227,227). However, I can not delete the number “3”. Could you help me to eliminate this error? My code is: X_train1: shape of (24484,227,227,1) X_train2: shape of (24484,227,227,3) X_val1: shape of (2000,227,227,1) X_val2: shape of (2000,227,227,3) batch_size=64 num_epochs=30 DROPOUT = 0.5 model_input1 = Input(shape = (img_width, img_height, 1)) DM = Convolution2D(filters = 64, kernel_size = (1,1), strides = (1,1), activation = “relu”)(model_input1) DM = Convolution2D(filters = 64, kernel_size = (1,1), strides = (1,1), activation = “relu”)(DM) model_input2 = Input(shape = (img_width, img_height, 3)) RGB = Convolution2D(filters = 64, kernel_size = (1,1), strides = (1,1), activation = “relu”)(model_input2) RGB = Convolution2D(filters = 64, kernel_size = (1,1), strides = (1,1), activation = “relu”)(RGB) merge = concatenate([DM, RGB]) # First convolutional Layer z = Convolution2D(filters = 96, kernel_size = (11,11), strides = (4,4), activation = “relu”)(merge) z = BatchNormalization()(z) z = MaxPooling2D(pool_size = (3,3), strides=(2,2))(z) # Second convolutional Layer z = ZeroPadding2D(padding = (2,2))(z) z = Convolution2D(filters = 256, kernel_size = (5,5), strides = (1,1), activation = “relu”)(z) z = BatchNormalization()(z) z = MaxPooling2D(pool_size = (3,3), strides=(2,2))(z) # Rest 3 convolutional layers z = ZeroPadding2D(padding = 
(1,1))(z) z = Convolution2D(filters = 384, kernel_size = (3,3), strides = (1,1), activation = “relu”)(z) z = ZeroPadding2D(padding = (1,1))(z) z = Convolution2D(filters = 384, kernel_size = (3,3), strides = (1,1), activation = “relu”)(z) z = ZeroPadding2D(padding = (1,1))(z) z = Convolution2D(filters = 256, kernel_size = (3,3), strides = (1,1), activation = “relu”)(z) z = MaxPooling2D(pool_size = (3,3), strides=(2,2))(z) z = Flatten()(z) z = Dense(4096, activation=”relu”)(z) z = Dropout(DROPOUT)(z) z = Dense(4096, activation=”relu”)(z) z = Dropout(DROPOUT)(z) model_output = Dense(num_classes, activation=’softmax’)(z) model = Model([model_input1,model_input2], model_output) model.summary() sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss=’categorical_crossentropy’, optimizer=sgd, metrics=[‘accuracy’]) print(‘RGB_D’) datagen_train = ImageDataGenerator(rescale=1./255) datagen_val = ImageDataGenerator(rescale=1./255) print(“fit_generator”) # Train the model using the training set… Results_Train = model.fit_generator(datagen_train.flow([X_train1,X_train2], [Y_train1,Y_train2], batch_size = batch_size), steps_per_epoch = nb_train_samples//batch_size, epochs = num_epochs, validation_data = datagen_val.flow([X_val1,X_val1], [Y_val1,Y_val2],batch_size = batch_size), shuffle=True, verbose=1) print(Results_Train.history) Looks like a mismatch between your data and your model. You can reshape your data or change the expectations of your model. Thank you for all the informations. Do you have any example with Keras API using shared layers with 2 inputs and one output. I want to knew how to use every input to get it’s : Xtrain, Ytrain and Xtest, Ytest, I think, It will be more simple with an example. Thank you. Perhaps this will help: Hi Jason. Do you know of a way to combine models each with a different loss function? Leeor. Yes, as an ensemble after they are trained. Hi Jason, thank you for your wonderful tutorials! 
I just wonder about the “Convolutional Neural Network” example. Isn’t there a Flatten layer missing between max_pooling2d_2 and dense_1? Something like: pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) flatt = Flatten()(pool2) hidden1 = Dense(10, activation=’relu’)(flatt) Beste regards I think you’re right! Fixed. Hi Jason In Multiple Input Model, How did you naming the layers? ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 64, 64, 1) 0 ____________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 61, 61, 32) 544 input_1[0][0] ____________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 57, 57, 16) 1040 input_1[0][0] ____________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 30, 30, 32) 0 conv2d_1[0][0] ____________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 28, 28, 16) 0 conv2d_2[0][0] ____________________________________________________________________________________________________ flatten_1 (Flatten) (None, 28800) 0 max_pooling2d_1[0][0] ____________________________________________________________________________________________________ flatten_2 (Flatten) (None, 12544) 0 max_pooling2d_2[0][0] ____________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 41344) 0 flatten_1[0][0] flatten_2[0][0] ____________________________________________________________________________________________________ dense_1 (Dense) (None, 10) 413450 concatenate_1[0][0] 
____________________________________________________________________________________________________ dense_2 (Dense) (None, 1) 11 dense_1[0][0] ==================================================================================================== Total params: 415,045 Trainable params: 415,045 Non-trainable params: 0 ____________________________________________________________________________________________________ If the model is built at one time, the default names are fine. If the models are built at different times, I give arbitrary names to each head, like: name = ‘head_1_’ + name Very good insight into Keras. I also read your Deep_Learning_Time_Series_Forcasting and it was very helpful Thanks. It was your email that prompted me to update this post with the Python syntax explanation! Wow thank a lot for all your post, you save me a lot of time in my learning and prototyping experience! I use an LSTM layer and want to use the ouput to feed a Dense layer to get an first predictive value ans insert this new value to the first LSTM output and feed an new LSTM layer. Im stuck with the dimension problem… main_inputs = Input(shape=(train_X.shape[1], train_X.shape[2]), name=’main_inputs’) ly1 = LSTM(100, return_sequences=False)(main_inputs) auxiliary_output = Dense(1, activation=’softmax’, name=’aux_output’)(ly1) merged_input = concatenate([main_inputs, auxiliary_output]) ly2 = LSTM(100, return_sequences=True)(merged_input) main_output = Dense(1, activation= ‘softmax’, name=’main_output’)(ly2) Any suggestion is welcome What’s the problem exactly? I don’t have the capacity to debug your code, perhaps post to stackoverflow? Thanks Jason for the post 🙂 In the multi-input CNN model example, does the two images enter to the model at the same time has the same index? does the two images enter to the model at the same time has the same class? 
In training, does each black image enters to the model many times (with all colored images) or each black image enters to the model one time (with only one colored image)? Thanks.. Both images are provided to the model at the same time. does the two images enter to the model at the same time has the same class? It really depends on the problem that you are solving. Hi Jason, I was looking for the comments hoping someone would ask a similar question. I have images and their corresponding numeric values. If I am to construct a fine-tuned VGG model for images and MLP (or any other) for numeric values and concatenate them just how you did in this post, how do I need to keep the correspondence between them? Is it practically possible to input images (and numeric values) into the model by some criteria, say, names of images? Because my images’ names and one column in my numeric dataset keeps the names for samples. Thanks a lot. You can use a multi-input model, one input for the image, one for the number. The training dataset would be 2 arrays, one array of image one of numbers. The rows would correspond. Does that help? Thank you very much. To clarify for myself, please let me know your feedback for these: 1. Do you mean that I need to convert the images into an array and,based on their names, append to the numeric data file (csv) as a new column ? 2. I haven’t seen such a numeric data where it contains RGB values of images as an array in one column. Could you post some related links/ sources? I appreciate your feedback. Thanks again! No. There would be one array of images and one array of numbers and the rows between them would correspond. e.g. row 0 in the first array would be an image that would relate to the number in row 0 of the second array. This will help you work with arrays if it is new: This will help you load images as arrays: Hi, now I want to use a 1-D data like wave.shape=(360,) as input, and 3-D data like velocity.shape=(560,7986,3) as output. 
I want to ask if this problem can be solved by a multilayer perceptron to train these data? I have tried, but the shape problem is not solved, it shows “ValueError: Error when checking target: expected dense_3 to have 2 dimensions, but got array with shape (560, 7986, 3)” Perhaps, it really comes down to what the data represents. Hello, thank you for sharing the contents 🙂 Is it the same as the ‘Multi-output’ case with ‘multi-task deep learning’? I am trying to build up the multi-task deep learning model and found here. Thank you again. Yes, it can be. Jason, a very modest contribution for now. Just a typo. In, …Shared Feature Extraction Layer In this section, we will two parallel submodels to interpret the output… it looks like we are missing a verb or something in the sentence, it sounds strange. If it sounds OK to you, just disregard, it must be me being tired. I hope to support more substantially in the future this extraordinary site. (You don’t need to post this comment) Regards Antonio Thanks, fixed! (I like to keep these comments in there, to show that everything is a work in progress and getting incrementally better) Jason, thank you very much for your tutorial, it is very helpful! I have a question, how would the ModelCheckpoint callback work with multiple outputs? If I set save_best_only = True what will be saved? Is it the model that yields the best overall result for both outputs, or will there be two models saved? Really good question, I go into this in great detail in this post: Can you put a post on how to make partially connected layers in Keras? With predefined neuron connections (prev layer to the next layer) Thanks for the suggestion. Hi Jason, If we want to write predictions to a separate txt file, what do we have to add at the end of the code? Thanks. You can save the numpy array to a csv file directly with the savetxt() function. Hi Jason, I really loved this article.
I have one application in that data is in a csv file with text data which has four columns and i’m considering the first three columns as input data to predict the fourth column data as my output. I need to take the first three columns as input because the data is dependent. can you guide me using glove how can i train the model? Thanks Perhaps this tutorial will help: Hi Jason Thanks again for a good tutorial, i want to know the difference when concatenating two layers as feature extraction. You use merge = concatenate([interp1, interp13]) Other people use merge = concatenate([interp1, interp13], axis = -1). I want to know is there a difference between the two and how different is it They are the same thing. The axis=-1 is the default and does not need to be specified. Learn more here: Hi Jason Thanks for a good tutorial, i want to use a Multiple Input Model with fit generator
model.fit_generator(generator=fit_generator, steps_per_epoch=steps_per_epoch_fit, epochs=epochs, verbose=1, validation_data=val_generator, validation_steps=steps_per_epoch_val)
but i get ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays
keras code
input1 = Input(shape=(145,53,63,40))
input2 = Input(shape=(145,133560))
# feature 1
conv1 = Conv3D(16, (3,3,3), data_format='channels_last')(input1)
pool1 = MaxPooling3D(pool_size=(2,2,2))(conv1)
batch1 = BatchNormalization()(pool1)
ac1 = Activation('relu')(batch1)
…….
ac3 = Activation('relu')(batch3)
flat1 = Flatten()(ac3)
# feature 2
gr1 = GRU(8, return_sequences=True)(input2)
batch4 = BatchNormalization()(gr1)
ac4 = Activation('relu')(batch4)
flat2 = Flatten()(ac4)
# merge feature extractors
merge = concatenate([flat1, flat2])
output = Dense(2)(merge)
out = BatchNormalization()(output)
out1 = Activation('softmax')(out)
# prediction output
model = Model(inputs=[input1, input2], outputs=out1)
opt2 = optimizers.Adam(lr=learn_rate, decay=decay)
model.compile(loss='categorical_crossentropy', optimizer=opt2, metrics=['accuracy'])
I’m eager to help, but I don’t have the capacity to debug your code, I have some suggestions here: i tried the given section, (5. Multiple Input and Output Models ) please help me to fit the data . i got the error AttributeError: 'NoneType' object has no attribute 'shape' my input : Nice work. Perhaps try some of the suggestions here: I'm new to Machine Learning. After fitting the model using the functional API i got training accuracy and loss. but i don't know how to find test accuracy using the keras functional API. You can evaluate the model on a test dataset by calling model.evaluate() I want to make a CNN + convLSTM model, but, at the last line, a dim error occurred .. how can i fit the shape : Input 0 is incompatible with layer conv_lst_m2d_27: expected ndim=5, found ndim=4
input = Input(shape=(30,30,3), dtype='float32')
conv1 = Convolution2D(kernel_size=1, filters=32, strides=(1,1), activation='selu')(input)
conv2 = Convolution2D(kernel_size=2, filters=64, strides=(2,2), activation='selu')(conv1)
conv3 = Convolution2D(kernel_size=2, filters=128, strides=(2,2), activation='selu')(conv2)
conv4 = Convolution2D(kernel_size=2, filters=256, strides=(2,2), activation='selu')(conv3)
ConvLSTM_1 = ConvLSTM2D(32, 2, strides=(1,1))(conv1)
Perhaps change the input shape to match the expectations of the model or change the model to meet the expectations of the data shape? Thank you so much for your great article.
What should I set up my model with API if my problem is like this: First, classify if the chemicals are toxic or not, then if they are toxic, what toxic scores they are. I can create two models separately. But combine them together is a good regularization that some papers said that. I think it is a multi-input and multi-output problem. I have a dataset which has the same input shape, combineoutput will be y1 = 0 or 1, and y2= numerical scores if y1=1. I don’t know where and how to put if statment in the combined model. Any advice is highly appreciated. Perhaps start here: Maybe I didn’t explain my problem clearly. I can build models for classification and regression separately using Keras without any problems. My problem is how to combine these two into one multi-input, multi-output model with “an IF STATEMENT”. From the link you provided, I couldn’t find the solutions. Could you please make it clear? Many thanks. What would be the inputs and outputs exactly? It is like this: ex x1 x2 x3 x4 y1 y2 1 0.1 0.5 0.7 0.4 1 0.3 2 0.5 0.2 0.4 0.1 0 3 0.7 0.6 0.3 0.2 0 4 0.12 0.33 0.05 0.77 1 0.55 .. .. .. .. .. .. .. Only when y1 = 1, y2 has a value Above is not a real dataset. There may be many ways to approach this problem. Perhaps you could setup the model to always predict something, and only pay attention to y2 when y1 has a specific value? First of all, thank you so much for your fast response. I still didn’t get it. You said there are many ways to approach it, but I don’t know any of them. There are two datasets (X1, y1), (X2, y2). So I think it is a multi-input and multi-output problem. Should the number of samples between two datasets be equal? Yes. Thank you, Jason, for yet another awesome tutorial! You are a very talented teacher! Thanks, I’m glad it helped. Hi Jason, I always follow your blogs and book (Linear algebra for ML) and they are extremely helpful. Do you have post related to LSTM layer followed by 2D CNN/Maxpool?. 
If you have already have a post then please provide a link to it. Actually I have some problem with dimensions after using the direction given in this post. But it did not solve my problem. It will be great if you provide help in this regard. Thank you I don’t I have an LSTM feeding a CNN, I don’t think that makes sense. Why would you want to do that? Hello, Firstly thank you so much for the great webpage you have. It has been a great help from the first day I started to work on deep learning. I have a question, though. What is the corresponding loss function for the model with multiple inputs and one output that you have in subsection Multiple Input Model? I want to know if you have X1 and X2 as inputs and Y as outputs, what would be the mathematical expression for the loss function. I want to do the same. However, I am not sure what I will be minimizing. Another question is about loss weights. If I have, for example, two outputs and their two corresponding loss functions if I set the first loss weights equal to zero, would it deactivate the training process for the part related to the first output? Thank you so much in advance for your help. Regards, The model will have one loss function per output. Y is only one output though, a single output model with 2 inputs X1 and X2. If you have a vector output, it is only one loss function, average across each item in the vector output. On non-trivial problems, you will never hit zero loss unless you overfit the training data. Thank you so much for your reply. But, you misunderstood my question regarding multiple outputs. However, I figured that one out. But my question regarding the network containing two inputs (X1 and X2) and one output is still unanswered. I know it has only one loss, but I am not sure what is the loss. It can be any of the following: 1. [ F1(x1)+F2(x2) – Y ] 2. [ F(x1,x2) – Y ] I am not sure which one will be the loss here. I appreciate if you can help me. Thank you so much. 
The input in a multi-input are aggregated, therefore loss is a function of both inputs. Great Blog! Can you help us with one example code architecture without using Dense layer? Finding it difficult to understand the last part to get rid of Dense layers. Thanks You can use any output layer you wish. Why do you want to get rid of the dense layers? Amazing. Thanks a lot for sharing. Really helpful! You are a wizard… I’m happy it helped. Amazing post! It’s both easy to understand and complex enough to generalize several types of networks. It helps me so much! Thanks!! Thanks, I’m happy that it helped! Hi Jason, You have help me out a lot and i am back to hitting wall. I want molding a multiple output multiple input linear regression model and i cant find anything on the internet. i tried using different key words like multi-target linear regression using keras, multi depended Valarie, multivariate. i cant find anything. Do you have any materials on this? You can specify the number of targets as the number of nodes in the output layer. Yes, I have many examples for time series, you can get started here: Also this: Does that help? Hi Jason, Thanks a lot for the help for the two links, i didn’t know you have your stuff so organized. I am new to machine learning and I think am a bit confuse. lets say i have a bunch of data which are all time average from a time series data. do i really need to use time series? I have 10 columns of input and 5 columns of output. however i am going to be dealing with the time series version of this data set. also the definition of time series = ” from your blog What if my data dont have trends, or patterns? like my data are industrial data like, data from a maybe a engine? i want to predict how it will run at certain time or things like that also i remember reading your post saying that LSTM are not really good for time series forecasting. what different methods can i use ? i dont have image so i cant use CNN. is RNN good? 
sorry for the long read. Update, i just read your blog on CNN for time series molding , I believe you do not require images , i am so sorry for saying “i dont have image so i cant use CNN.” For the link above you seems to only be using sequences, do my data have to be sequences? Yes. To use a 1D CNN, the model assumes a sequence as input. The time series may or may not have useful temporal structure that a model can use. You can choose to try to capture that or not – you’re right. But, you must respect the temporal ordering of observations, e.g. train on past, test on future, never mix the two. Otherwise, your model evaluation will be invalid (optimistic). See this framework for methods to try and the order to try them: Amazing post like always !! thanks Jason. I have two questions about flattening : can we replace flatten with an Bi-LSTM? If i want to use a Bi-LSTM after Embedding, should i flatten the output of Embeddig before go to the Bi-Lstm? Thanks. No need to flatten. Just Awesome Thanks, I’m glad it helped. Great as always! But would you mind if you let me know how to address the loss function in case of MIMO? It should be calculated separately for each input/output I guess. Thank you! No different, if multiple outputs are numbers, mse is a great start. Can functional models applied to predict stock market This is a common question that I answer here: Thanks a lot for the insightful write-up. I am looking for a way to combine RNN (LSTM) and HMM to predict stock prices based on the combined strength of the two paradigms to achieve better result than ordinary RNN (LSTM). Thank you Sounds like a fun project, let me know how you go. Thanks for the response. But I need an insight from you in this regard. Thank you. Sorry, I don’t have any examples of working with HMMs or integrating them with LSTMs. I would recommend developing some prototypes in order to discover what works. 
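Going back to the Embedding/Bi-LSTM question above: the Embedding layer already outputs the 3D (samples, timesteps, features) tensor that an LSTM expects, so it can feed a Bidirectional LSTM directly with no Flatten in between. A minimal sketch (the layer sizes are arbitrary, and the tensorflow.keras import path is an assumption):

```python
# Sketch: Embedding feeding a Bidirectional LSTM directly, no Flatten needed.
from tensorflow.keras.layers import Input, Embedding, Bidirectional, LSTM, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(50,))                             # 50 token ids per sample
emb = Embedding(input_dim=1000, output_dim=32)(inp)  # -> (batch, 50, 32), already 3D
rnn = Bidirectional(LSTM(16))(emb)                   # consumes the 3D tensor as-is
out = Dense(1, activation='sigmoid')(rnn)
model = Model(inputs=inp, outputs=out)
```

A Flatten would only be needed if the sequence output were fed into a Dense layer directly instead of a recurrent layer.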
I have created a two output LSTM model to predict the angular and linear velocity, the loss is low in angular velocity and but loss is high in linear velocity. Please tell me, how to reduce the loss. Epoch 9/10 – 18s – loss: 0.4790 – A_output_loss: 0.0826 – L_output_loss: 0.3964 – A_output_acc: 0.6077 – L_output_acc: 0.4952 – val_loss: 0.6638 – val_A_output_loss: 0.0958 – val_L_output_loss: 0.5680 – val_A_output_acc: 0.6059 – val_L_output_acc: 0.3166 Epoch 10/10 – 18s – loss: 0.4752 – A_output_loss: 0.0821 – L_output_loss: 0.3931 – A_output_acc: 0.6084 – L_output_acc: 0.4996 – val_loss: 0.6503 – val_A_output_loss: 0.0970 – val_L_output_loss: 0.5533 – val_A_output_acc: 0.6052 – val_L_output_acc: 0.3176 Here are some suggestions: In the shared layer CNN example, why does the shape changed from 64 to 61, I understand kernel size is 4, but 64/4 has no remainder. Also, do you know if mxnet has similar methods or tutorial on this? Yes, it comes down to same vs valid padding I believe: Sorry, I don’t know about mxnet. Hi Jason, Thanks for your great website and your great books (we have most of them). I do have a question I hope you can help we with. In the article above you describe a large number of different network structures that you can implement. Are there any rules of thumb that describe which network structure works best with which problem? I do bit of work in time series forecasting and anything that I have read tells me to just try different structures, but given the amount of different structures this is quite unpractical. For example, if you have multiple input sources of data do you concatenate them into a single input for an MLP or do you use a multiple input model? I would love to hear your though. You’re welcome. Not really, you must use controlled experiments to discover what works best for your dataset. 
This might help as a general guide: Hi Jason, I have been following your blog since I started my college project, I got stuck on this page, My problem is I have a dataset of a bioreactor(fermentation process) which has 100 batches of data and each batch has 1000 timesteps and 200 parameters(variables). so 100 * 1000 = 1,00,000 timesteps of 200 variables, I wanted to develop a ‘ Y ‘ like architecture( like MIMO in your post), therefore from one side of ‘Y’ inputs are ‘observed variables’ and from the other side ‘ controlled variables ‘, I want to pass observed variables through a LSTM layer where I am confused with input dimensions and the other is how can I use this model 1. Will I be predicting the next batch given the current batch? 2. Will I be predicting the next time step (t+1) given t? 3. Is it necessary to pass the whole batch size when we wanted to make a prediction bcuz when we were building the model we used dim(1000 * timesteps * 200) My Goal: given the current state of the process(3 or 4 or ‘n’ time steps) I want my model to be predicting n+1 or n+10 time steps also give the controlled variables from other side of ‘Y’. This will help you to better understand the input shape for LSTMs: Thanks a lot, I got 1 more question if you have time to answer and I am sorry to bother you with too many questions. If Xa and Xb are inputs to 2 different networks (Xa-> LSTM) and (Xb-> Dense) and they want to share a common Dense layer in the future to give an output ‘ y ‘. Xa and Xb share the same time index so if I reshape Xa to be 3D(samples, 10, 5) then how should I reshape Xb? You would concat the output of each sub-model. You might need to flatten LSTM output prior to the concat. please dr jason i can’t understand what does None mean and how it is processed from flatten layer to the Dense layer as i read it is (no.of batches proceesed in parallel, no. 
of features) also i want to know how to code Python to visualize this flatten output (batches included and the features) thank you very much None means not specified, so it will process whatever number is provided. Thank you very much! The whole reading was very helpful!! Especially, the last note on Functional API Python Syntax. No one would care to add this to their tutorial of Deep Learning! Thanks, I’m happy to hear it was helpful! Sir your tutorials are great. Thank you so much. You’re welcome! Thanks for your informative tutorial. I have 2 directories containing RGB images. I am going to use 2 data-generators to read them and then feed these 2 generators into a CNN. Question1: How should I combine these 2 data-generators and more importantly how to use function “fit_generator” for multiple generators so that network can train on whole samples (both 2 directories)? Question2: If I merge these 2 datasets manually (copy all files from one directory to another) to form one single dataset and then use 1 single data-generator to read them and then feed it into CNN. In comparison to method 1 (mentioned in queston 1), does it have effect on output? It means does it increase or decrease accuracy, loss or other metrics? Thanks. A separate generator is used for each directory. Probably. Try it and see. Hello Mr Jason, I also have same queries. Readers will be grateful if you can kindly share any reference code. Thanks and regards Thanks for the suggestion. Best explanation of the functional API I’ve found so far. Thanks James! Hi Jason, Thanks for great blog, the links you have provided for further reading don’t work. Could you please update them? Thanks again! Thanks, which links? Hi , Jason. I have a doubt . why keras is called as keras api?.It is a library and how it will become an api? 
A code library is a collection of reusable code: The API is the standard interface for using a library (or any software): Thank you jason , Then all the libraries in python are called api? They are libraries, each offers an API. Thank you jason You’re welcome. This is amazing. Thank you so much for this article. You’re welcome! Thank you, Jason, Such a useful post. For the case of Multiple Inputs and Multiple Outputs, it seems that the number of examples/samples in each input should be the same for training data. I have a single model, with two different sets of inputs, which gives two different sets of outputs. A different loss is applied to each (one loss for each, 2 in total). However, I get this error: “All input arrays (x) should have the same number of samples.” It seems that it is a common issue and no one has a solution for that. Do you have any thoughts? ==========================================================
# number of examples are different for input/output 1 and 2
model = Model(inputs=[input1, input2], outputs=[outputs1, outputs2])
model.compile(loss=[loss1, loss2], optimizer=opt, metrics=['accuracy'])
model.fit({'inputs1': input_data_array1, 'inputs2': input_data_array2}, {'outputs1': y_array1, 'outputs2': y_array2}, epochs=100)
Yes, the number of input samples must match the number of output samples, e.g. for both targets. I will nominate Jason brownlee for australian of the year! Thanks, you are very kind! Hello Jaison, Thanks for the blog. I am new to machine learning and I have the following question. I created a multiple input (CNN, LSTM) and single output model. Since the model is taking image input and text input together, how can I add image augmentation? model.fit([image_train,text_train], label_train, batch_size=BATCH_SIZE, epochs=40, verbose=1, callbacks=callbacks, validation_data=([image_test,text_test], label_test)) You could create augmented images with copies of the text (unaugmented). You might need a custom data generator for this purpose.
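To make the "custom data generator" suggestion above concrete, here is a hedged sketch (the array shapes and the random-flip augmentation are made up for illustration): each batch pairs an augmented image with the unaugmented text row and label at the same index, so the row correspondence is preserved.

```python
import numpy as np

def multi_input_generator(images, texts, labels, batch_size=4, rng=None):
    """Yield ([image_batch, text_batch], label_batch) with rows kept paired.
    Images get a random horizontal flip; text and labels stay unchanged."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(images)
    while True:
        idx = rng.choice(n, size=batch_size, replace=False)
        img_batch = images[idx].copy()
        for i in range(batch_size):
            if rng.random() < 0.5:
                img_batch[i] = img_batch[i][:, ::-1]  # flip width axis only
        yield [img_batch, texts[idx]], labels[idx]

# usage sketch with dummy data: text row i and label i belong to image i
images = np.zeros((10, 8, 8, 3))
texts = np.arange(50).reshape(10, 5)
labels = np.arange(10)
gen = multi_input_generator(images, texts, labels)
(img_b, txt_b), y_b = next(gen)
```

A generator with this output structure can then be passed to fit() of a two-input Keras model.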
Hi Jason, Thanks for the blog. Since I’m a Python beginner, this is probably a question more related to Python syntax rather than Keras. For Keras Functional API example: from keras.layers import Input from keras.layers import Dense visible = Input(shape=(2,)) hidden = Dense(2)(visible) The last syntax similliar to type casting in Java, what’s it called and doing in Python ? No, not casting. It is calling a function on the object returned from the constructor which just so happens to be a function to connect the object’s input to the output of another layer passed as an argument. Terrible to read – I know, but it’s easy to write. So according to your explanation, can I write this way? hidden = visible(Dense(2)) No. It would have a different effect, e.g. visible would take the output of Dense(2) as input and the ref to Dense(2) is now not available. Thank you so much for your great post. If I have two inputs but with different number of samples, is it possible to use multi-input, multi-outputs API to build and train a model? My problem is if I give you an image, first, I want to know whether a person is in the image. If yes, I want to know how old he/she is. So I have to prepare two datesets. For example. one dataset includes 10,000 images, in which 5,000 have a person in it and 5,000 don’t have a person in it. Another dataset has 1,000 images with a person in it and label them with age. These two datasets have different sample numbers, some images maybe appear in both datasets. For this problem, it is a multi-input, multi-output problem, but two inputs have different sample numbers, Can I use the Keras’ API to build a model? If not, any other methods would you like to suggest? Many thanks You’re welcome. No, I believe the number of samples must be the same for each input. Thank you for your replay. If the number of samples are the same for each input, when I prepare these two inputs, do I need to pair them? 
I mean the features of dataset1 and dataset2 have to represent the same sample. Like in the example I gave, the Number 1 image in dataset1 has to match the Number 1 image in dataset2, and so on. Yes, inputs need to be paired. Thank you very much! You’re welcome. Hi, nice post! Gives a quick and clear introduction to the functional API. Thanks! Hi Jason, Thanks for the blog. I’m splitting the image into blocks. What are your suggestions to extract features to discriminate between stego and cover images? You’re welcome. What is “stego and cover image”? Hi Jason, Thanks a lot for this great tutorial! I am trying to implement a model with multiple Input layers using your example “Multiple Input Model”. I experience some issues when calling the fit function regarding the shape of the input data. Could you please provide an example of how to call the fit function on your example? Call fit() as per normal and provide a list of inputs with one element in the list for each input in the model. I give many examples of multi-input models on the blog. Perhaps try searching. Thanks! You’re welcome. Thank you so much for your clear explanation. I'd like to know how to add BatchNormalization and ReLU in that. Actually, I am trying to write code for DnCNN using the functional API. Code for the sequential API:
model = Sequential()
model.add(Conv2D(64, (3,3), padding="same", input_shape=(None,None,1)))
model.add(Activation('relu'))
for layers in range(2, 16+1):
    model.add(Conv2D(64, (3,3), padding="same"))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
model.add(Conv2D(1, (3,3), padding="same"))
I'd like to know how to implement this using the functional API You can add a batch norm layer via the functional API just like any other layer, such as a dense. No special syntax required.
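As a sketch of that answer, the Sequential DnCNN-style model above maps to the functional API like this (the tensorflow.keras import path is an assumption; the structure mirrors the Sequential loop):

```python
# Functional-API version of the Sequential Conv -> BatchNorm -> ReLU stack.
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, Activation
from tensorflow.keras.models import Model

inputs = Input(shape=(None, None, 1))
x = Conv2D(64, (3, 3), padding='same')(inputs)
x = Activation('relu')(x)
# 15 middle blocks, mirroring for layers in range(2, 16+1) above
for _ in range(2, 16 + 1):
    x = Conv2D(64, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)  # batch norm is wired in like any other layer
    x = Activation('relu')(x)
outputs = Conv2D(1, (3, 3), padding='same')(x)
model = Model(inputs=inputs, outputs=outputs)
```

Each layer is just called on the tensor produced by the previous one, so BatchNormalization needs no special handling.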
Red Hat Bugzilla – Bug 1303422 satellite sync hard coded email FROM does not allow customization for email addresses Last modified: 2017-06-21 08:17:18 EDT Description of problem: satellite-sync currently does not support a default_mail_from address similar to the webui. rather the address is hard coded to root@host which can produce issues for customers when the server uses a short name and mail relays require fqdn. A patch will be included for review to use a config variable, default_mail_from, in the satellite.server CFG namespace to overcome this issue. Created attachment 1119915 [details] patch to add default_mail_from support to satsync In the attached patch, if parseaddr fails, then sndr ends up as ''. Can we instead notice this, log an error, and default back to root@host_label? Otherwise, the emails arrive with no From line, and it makes it much harder for the customer to figure out what machine has a cfg-problem. Grant, I'll take a look when I am back in the office but good catch. Will fix it up. Created attachment 1120892 [details] sat sync patch for default mail this patch removes the need for the parse address lib test plan: 1) modify rhn.conf and break the sat sync by modifying parent with a bogus name server.satellite.rhn_parent = ssatellite.rhn.redhat.com 2) Add a new config var for the default email address for satellite.server namespace in rhn.conf server.satellite.default_mail_from = user@fqdn 3) run satsync with email option, tail the /var/log/maillog file satellite-sync -l --email 4) confirm mail from address shows up in /var/log/maillog 5) now comment out the server.satellite.default_mail_from config var. rerun sat sync. 
Confirm the mail from address is now root@hostname in /var/log/maillog spacewalk.github c6369d1f57b352e49b116e677f5d2fbc5831d703 NOTE: use the included patch when applying to Satellite, the pune-to-puny namechange will get in the way of cherry-picking the SW commit The patch can be found here - But according to Comment 6, there may be more changes needed. VERIFIED on spacewalk-backend-2.5.3-137 (SAT5.8_20170529) Reproducer: 1. Add "default_mail_from" into rhn.conf > echo "server.satellite.default_mail_from = tester@example.com" >> /etc/rhn/rhn.conf 2. Sync some channel > cdn-sync -c rhn-tools-rhel-x86_64-server-5 --email 3. Check mailbox > mail ... N 11 tester@example.com Thu Jun 1 07:28 661/46162 "CDN sync. report from host-8-179-109.host.centralc"
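The fallback behavior discussed above (use the configured default_mail_from when it parses cleanly, otherwise fall back to root@host_label) can be sketched in Python. This is an illustration only, not the actual spacewalk-backend code; the function name is made up:

```python
# Sketch of the sender-resolution logic described in the comments above.
from email.utils import parseaddr

def resolve_sender(configured_from, host_label):
    fallback = 'root@' + host_label
    if not configured_from:
        # config var unset or empty: keep the historical default
        return fallback
    _, addr = parseaddr(configured_from)
    if not addr:
        # parseaddr failed; real code should also log an error here
        # so the admin can find the misconfigured host
        return fallback
    return addr
```

This preserves Grant's point from comment 1: a bad config value degrades to root@host_label instead of producing mail with an empty From line.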
Definition of JavaScript every() In JavaScript, every() is a method that helps in checking whether the elements in a given array satisfy a particular condition. If all the elements satisfy the condition, then true will be returned, else false will be returned. There are certain conditions to call this method. They are: - An array should be present. - Used in the case where each and every element in the array should be tested. - To check whether all the elements fulfill a certain condition. In the below sections, we will see the syntax, working and examples of the JavaScript every() method. Syntax: Below is the syntax of the every() method. array.every(callback(currentvalue, ind, arr), thisArg) Here, there are two parameters such as: - Callback: Function which tests the condition on each and every array element. In this function, certain arguments such as currentvalue, ind, arr are used, where currentvalue is the element that is currently getting processed, ind is the index of the present element and arr is the array where we use the every() method. Among these, currentvalue is a required argument and the other two are optional. - thisArg argument: Argument which is optional in the every() method. If this argument is used, this value within the callback function will refer to the thisArg argument. Return value of the every() method is: - True: Callback function returns a true value if every element in the array satisfies the condition. - False: Callback function returns a false value if at least one element in the array does not satisfy the condition. The callback function will stop checking the array if it finds any element that does not satisfy the condition. How does the every() Method Work in JavaScript? - In certain cases, we have to check whether each and every element in the array satisfies a particular condition. Normally what will you do? - Yes. We will use a for loop to iterate and check all the elements whether they satisfy that particular condition.
- For example, let us consider an array of numbers [3, 5, 9]. The task is to check whether the numbers are greater than 1. The sample code will be as follows: let res = true; for (let i = 0; i < numbers.length; i++) { if (numbers[i] <= 1) { res = false; break; } } What is happening in this code? The numbers are iterated and if any element does not satisfy the condition, res will be set to false and the loop gets terminated. Even though the code is simple and straightforward, it is verbose. As a solution for this, JavaScript offers the every() method that permits you to check that each and every element of an array fulfills a condition in a shorter and clearer way. So, how will the code look? Let us see it in the next section. Examples to Implement every() in JavaScript Below are some of the simple JavaScript programs to understand the every() method in a better manner. Example #1 JavaScript program to check whether the input element is greater than the array elements. Code: <!DOCTYPE html> <html> <body> <p>Input the number in the text field below to check whether the elements in the array are greater than it.</p> <p>Number: <input type="number" id="NumToCheck" value="89"></p> <p>Click the button...</p> <button onclick="sample()">Try it</button> <p>Are the numbers in the array above the input number? <span id="demo"></span></p> <script> var nums = [45, 32, 78, 21]; function checkNum(num) { return num >= document.getElementById("NumToCheck").value; } function sample() { document.getElementById("demo").innerHTML = nums.every(checkNum); } </script> </body> </html> Output: On executing the code, you will be asked to input a number in order to check whether it is greater than all the numbers in the array. Here, 89 is given as input and it can be clearly seen from the code that none of the numbers in the array are greater than 89. Therefore, on clicking the button, the function which contains the every() method will be called and false is returned.
In the next case, the input number is given as 10 and as all the numbers are greater than 10, true is returned. Example #2 JavaScript program to check whether the Gender values in the given array are the same. Code: <!DOCTYPE html> <html> <body> <p>Click the below button to check whether the values of the Gender in the array are the same.</p> <button onclick="sample()">Try it</button> <p id="demo"></p> <script> var emp = [ { name: "Anna", Gender: "Female"}, { name: "Iza", Gender: "Female"}, { name: "Norah", Gender: "Female"}, { name: "Adam", Gender: "Male"} ]; function isSameGender(elm,ind,arr) { // since there is no element to compare to, there is no need to check the first element if (ind === 0){ return true; } else { // compare each element with the previous element return (elm.Gender === arr[ind - 1].Gender); } } function sample() { document.getElementById("demo").innerHTML = emp.every(isSameGender); } </script> </body> </html> Output: On executing the code, a button will be displayed similar to example 1. On clicking the button, the function which contains the every() method will be called. Here, the every() method checks whether the value of Gender for all the names is the same. It can be clearly seen from the code that one of the names in the array is adam and the gender is male. Therefore, on clicking the button, false is returned. In the next case, the Name of adam is changed to annamu and the Gender is changed to Female. Since all the values of Gender are the same, true is returned. Example #3 JavaScript program to check whether the numbers in the given array are odd.
Code:

<!DOCTYPE html>
<html>
<body>
<script>
// function used to check for odd numbers
function isodd(element, index, array) {
  return (element % 2 == 1);
}
var a1 = [7, 9, 13, 21, 73];
if (a1.every(isodd) == true)
  document.write("All the numbers in the array a1 are odd numbers<br>");
else
  document.write("Not all the numbers in the array a1 are odd numbers<br>");
</script>
</body>
</html>

Output:

Here, every number in the array is checked for oddness. As you can see, all the numbers are odd, so the print statement corresponding to that condition is printed.

Recommended Articles

This is a guide to JavaScript every(). Here we discussed the definition and how the every() method works in JavaScript, along with different examples and their code implementation. You may also have a look at the following articles to learn more –
https://www.educba.com/javascript-every/
Game Development: TicTacToe game

Flip Alvez, Greenhorn, Posts: 2, posted 10 years ago

Hi everyone! I'm a total Java noob, and I've got to do a Tic-tac-toe game for my Java class. I'm having a lot of difficulties, and since I've been a lurker of this forum for a while now, I decided to post here haha. I've already searched through tons of code for how to do a tic-tac-toe Java game, but I can't understand it very well, and most of the code I've seen is much more complicated than what I want. I just want a simple tic-tac-toe game that runs in the console haha.

Basically the game must have two game modes: SinglePlayer and TwoPlayers. The single-player mode must have three levels of difficulty: Easy, Medium and Hard. I decided to start with the two-player mode, but I'm having issues with how to run the game with the players 'X' and 'O' and put the moves on the game board. I was thinking of writing a method that receives the position where I want to put the symbol and then alternating that symbol with a counter, but I don't know how lol. I'm thinking of writing a BoardGame class that contains the board and the play methods, a Player class that contains the players and the game modes, and a TicTacToe class that just runs the game.

Here's my BoardGame class:

public class BoardGame {
    private char[][] board = new char[3][3];
    private int count;

    public void drawBoard() {
        String middleLine = "---+---+---";
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                board[i][j] = ' ';
                System.out.print(" " + board[i][j] + " ");
                if (j != 2) { System.out.print("|"); }
            }
            System.out.print("\n");
            if (i != 2) { System.out.println(middleLine); }
        }
    }

    public boolean usedPosition(int pos) {
        if (board[(pos - 1) / 3][(pos - 1) % 3] == ' ') {
            return false;
        } else {
            return true;
        }
    }
}

Thanks in advance.
P.S. - Sorry about my English, it's a bit rusty.

Mich Robinson, Ranch Hand, Posts: 275, posted 10 years ago

You want methods to:
- initialise the board and set whose turn it is
- find the next free square
- see if the game is finished (won or drawn)
- swap sides

Obviously this won't play well, but it will allow you to get a program to play. To get it to play slightly better you'll have to prefer the centre and corners before playing the sides. To get it to play much better you'll need to look up MinMax searches and selecting the best move.

PS: why do you use pos (1-9) to input a move but then store it in a 3*3 array? Wouldn't it be simpler to just have a position array of length 9 and store the board in that?

Arcade : Alien Swarm Board : Chess - Checkers - Connect 4 - Othello

Flip Alvez, Greenhorn, posted 10 years ago

Yeah, that's a good idea, it's simpler. Thanks :) I will try to write a method that receives the position that I want to store in the board and the player 'X' or 'O', and then I'll post the results here hehe.

Ashish Schottky, Ranch Hand, Posts: 93, posted 10 years ago

To the original poster: it's generally a bad idea to use magic numbers. What you can do is:

int SIZE = 3;
private char[][] board = new char[SIZE][SIZE];

Maybe you can make some classes:
1) A Main class could control the game flow, the overall management of the game. It might print "selectLevel", then loop while (gameOver == false), switching players and making the moves.
2) An AI class where the details about the AI will go. If you want to make three levels, tie the maximum search depth to the level: say easy maxPly=2, medium maxPly=5, hard maxPly=9.
If your implementation is correct, your code will be unbeatable at the hardest level (conversely, if your AI is beatable on level 9, then there is certainly a bug ;) ). This is just the tip of the advice I can offer. Anyway, if you would like me to explain more, just drop me a PM, and then I might even start to blog about how to write tic-tac-toe. It would surely improve my English writing skills.
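To make the thread's advice concrete, here is a hedged sketch of a minimal two-player board. It uses the 1-D array of length 9 that Mich suggested; the class and method names are my own choices, not from the thread:

```java
import java.util.Arrays;

// A 1-D board of length 9, as suggested in the thread.
class Board {
    private final char[] cells = new char[9];

    Board() { Arrays.fill(cells, ' '); }

    // Places the player's mark at pos (0-8); returns false if taken.
    boolean place(int pos, char player) {
        if (cells[pos] != ' ') return false;
        cells[pos] = player;
        return true;
    }

    // Checks all eight winning lines for three of the same mark.
    boolean hasWon(char p) {
        int[][] lines = {
            {0, 1, 2}, {3, 4, 5}, {6, 7, 8},   // rows
            {0, 3, 6}, {1, 4, 7}, {2, 5, 8},   // columns
            {0, 4, 8}, {2, 4, 6}               // diagonals
        };
        for (int[] l : lines)
            if (cells[l[0]] == p && cells[l[1]] == p && cells[l[2]] == p)
                return true;
        return false;
    }

    // A full board with no winner is a draw.
    boolean isFull() {
        for (char c : cells) if (c == ' ') return false;
        return true;
    }
}
```

Alternating 'X' and 'O' with a counter, as the original poster wanted, then becomes a one-liner: char player = (turn % 2 == 0) ? 'X' : 'O';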
https://www.coderanch.com/t/539935/java/TicTacToe-game
This one could easily become one of the job interview questions we publish here at Dev102.com, but I decided to write a "regular" post about this issue because it is an important concept and not just a puzzle or a brain teaser. Take a look at the following code. Can you tell what the output will be?

public class BaseType
{
    public BaseType()
    {
        Console.WriteLine("Call base ctor.");
        DoSomething();
    }

    public virtual void DoSomething()
    {
        Console.WriteLine("Base DoSomething");
    }
}

public class DerivedType : BaseType
{
    public DerivedType()
    {
        Console.WriteLine("Call derived ctor.");
    }

    public override void DoSomething()
    {
        Console.WriteLine("Derived DoSomething");
    }
}

public class MainClass
{
    public static void Main()
    {
        DerivedType derived = new DerivedType();
        Console.ReadLine();
    }
}

The output of this program is:

Call base ctor.
Derived DoSomething
Call derived ctor.

In C#, a virtual call made from a base-class constructor dispatches to the most derived override, even though the derived constructor body has not yet run.

Did you ever need to convert List(T1) to List(T2)? One example might be when implementing an interface: you might need to expose a collection of other interfaces (or maybe the same interface), but you usually hold the concrete type implementing the interface in the collection. Let's look at the following example:
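The same pitfall exists in Java, where instance methods are virtual by default. A sketch mirroring the C# snippet above (the class names are kept parallel for illustration; this is not part of the original post):

```java
// The base constructor's virtual call dispatches to the derived
// override before the derived constructor body has run.
class BaseType {
    BaseType() {
        System.out.println("Call base ctor.");
        doSomething();  // virtual dispatch, even inside a constructor
    }
    void doSomething() { System.out.println("Base DoSomething"); }
}

class DerivedType extends BaseType {
    DerivedType() { System.out.println("Call derived ctor."); }
    @Override void doSomething() { System.out.println("Derived DoSomething"); }
}

class Demo {
    public static void main(String[] args) {
        new DerivedType();
        // Prints:
        // Call base ctor.
        // Derived DoSomething
        // Call derived ctor.
    }
}
```

The practical warning is the same in both languages: the override runs while the derived object's fields are still uninitialized, which is why calling overridable methods from constructors is widely discouraged.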
http://www.dev102.com/tag/derived/
Bug with simplicial complexes or computing homology in Sage 7.3?

I have the following code that has been running fine in Sage 6.5 through Sage 7.2 on both my Mac OS 10.10.5 and a Unix system. I just tried upgrading to Sage 7.3 on my Mac, and it no longer works correctly. It doesn't crash, it just gives wrong answers. I will copy the code below, but it reminds me of the issue I had when I posted the following question:...

The code below is complicated, but the output of sampler(n) is supposed to be a random Q-acyclic simplicial complex with n vertices. So if I write

S = sampler(12)
S.homology()

I should expect to see something like

{0: 0, 1: 0, 2: 0}

or

{0: 0, 1: C2, 2: 0}

as outputs, but definitely not

{0: 0, 1: Z, 2: Z}

The homology should be trivial or finite groups. But in Sage 7.3 on my Mac, I get Z parts in the homology. I thought my code was broken at first, but copying and pasting it back into Sage 7.2 on my Mac, it seems to work fine.

@cached_function
def f(X):
    return (X.transpose() * X).inverse()

def ortho(X):
    X.set_immutable()
    return X * f(X) * X.transpose()

def TR(M):
    D = M.diagonal()
    X = GeneralDiscreteDistribution(D)
    return X.get_random_element()

def shrink(X, Q, n):
    W = Q[:, n]
    W.set_immutable()
    X2 = X - W * f(W) * W.transpose() * X
    X3 = X2.transpose().echelon_form().transpose()
    return X3[:, 0:X3.ncols() - 1]

def sampler(N):
    S = range(1, N + 1)
    Z = SimplicialComplex(Set(range(1, N + 1)).subsets(3))
    L = list(Z.n_faces(2))
    T = Z.n_skeleton(1)
    C = Z.chain_complex()
    M = C.differential(2).dense_matrix().transpose()
    M2 = M.transpose().echelon_form().transpose()
    R = M2.rank()
    X = M2[:, 0:R]
    Q = ortho(X)
    n = TR(Q)
    T.add_face(list(L[n]))
    j = binomial(N, 2) - N
    for i in range(j):
        X = shrink(X, Q, n)
        Q = ortho(X)
        n = TR(Q)
        T.add_face(list(L[n]))
    return T
https://ask.sagemath.org/question/35057/bug-with-simplicial-complexes-or-computing-homology-in-sage-73/
Joel Pobar's CLR weblog
CLR Program Manager: Reflection, LCG, Generics and the type system...

Good For Nothing Compiler (PDC - TLN410) and other goodies
MSDNArchive, 4 Oct 2005 2:26 PM

Joe Duffy and I were really impressed with the number of people who showed up for the PDC session "Write a Dynamic Language compiler in an hour" at the PDC last month. It confirmed my belief that customers care about details of compiler technologies and the managed libraries that enable them. We promised source download from commnet and our blogs, so here it is:

Thanks Dominic, for your help on the original presentation, much appreciated!

What follows are resources I found useful to get bootstrapped into the world of compiler construction. If you have any other resources you'd like to see appear on this list, drop me an email.

Tools, Languages, Source and more...:

GPPG (The Gardens Point Parser Generator) : A Yacc/Bison-like parser generator that emits C#. This was just released recently and looks pretty solid. If you look at the link, the QUT folk talk about this being built for a "Ruby .NET" project, in the context of parsing the full Ruby grammar. There doesn't seem to be anything mentioned officially about the Ruby .NET project, but given their track record on delivery, I'm happy to get my hopes up. Can't wait to get my hands on it!
PEAPI and PERWAPI : A managed API for reading and writing managed executables. It's a lower-level, Reflection.Emit-like interface that gives you more control over the metadata bits and bytes. I believe the Mono C# compiler uses PERWAPI as its backend. Fast, too.

IronPython : A Python compiler for .NET, incubated in my team, the CLR, fronted by Jim Hugunin. A full dynamic language compiler with source, released under a liberal Shared Source license. We're getting close to a 1.0 release on this one. I think this is a great starting point for anyone looking to write or port a dynamic language. Python has some interesting problems and solutions around interop (language, BCL, etc.), performance (late-bound binding and invocation), code loading, IDE integration and more. It has a very active community.

F# : A functional language with an ML-like flavor from Don Syme at MSR Cambridge. Very stable, fantastic VS.NET integration (both 2003 and 2.0 Beta 2), and the F#-to-BCL interop stuff is awesome. Lately, Don has been expanding the breadth of F# language features and investing in developer compiler interactivity (IDE/REPL, etc.) to make it play great in the .NET ecosystem. I'd love to see an F# book, though; mapping OCaml to F# can be tedious if you've never approached either before.

Rotor (SSCLI) : Subset source to the Common Language Runtime and the C# and JScript.NET compilers, shipped with an academic/hobbyist-friendly license. Runs on multiple platforms and architectures (Windows, FreeBSD, MacOS; x86, PPC). It's great having the source to the CLR, C# and JScript.NET handy. I recommend buying the Rotor book (see below) if you want to ramp up quickly on the source.

ECMA Specification : The submission of the CLI (Common Language Infrastructure) to the ECMA standards organization. More commonly known as "the docs to the product". Has all the execution semantics, rules and metadata bits of the CLR; an excellent resource for compiler writers.
List of .NET Languages : A pretty complete list of languages that run on top of the CLR/.NET platform. Has a lot of links to the respective project websites. Some of the languages come with source too.

DOTNET-LANGUAGE-DEVS : This is the place where commercial, academic and hobbyist compiler writers hang out. The list is light on traffic, but you can usually expect a rapid response to questions.

DOTNET-CLR : A mailing list for CLR geeks. Some of the language people hang out on this list. Great place to ask questions about compiler<-->VM problems you might face when targeting the CLR.

Lambda the Ultimate : A great programming languages weblog.

Technologies:

Reflection.Emit MSDN documentation : Documentation for the Reflection.Emit namespace. Useful if you intend on using Reflection.Emit as the backend for your compiler.

Hello World, Reflection.Emit style : Hello, World!

DynamicMethod (Lightweight Code Generation) : MSDN documentation for LCG. LCG is a Whidbey (2.0) technology for lightweight, GC-reclaimable code generation. It uses familiar Reflection.Emit APIs. Recommended in most scenarios where code generation is required and assembly and type generation is not needed.

Hello World, Lightweight Code Generation (LCG) : Hello, World, this time using LCG as the code generation technology.

Reflection Performance article : An article I wrote a few months back on how to improve the performance of common Reflection scenarios. Has some stuff on LCG and binding.

Under the hood of Dynamic Languages : Part 1 of a blog post I did about how dynamic languages map to the runtime. A lot of what I talk about here can be found in source form in the IronPython compiler. Part 2 is coming soon. ;)

Whidbey Delegates : Describes the relaxed signature matching changes we've done for Whidbey in the delegate space.
What isn't mentioned is the open-instance/closed-static delegates that we've added; the Whidbey delegate docs describe these changes, but I don't believe the latest docs are on the web yet. These small but powerful additions open up a whole new range of calling convention opportunities, especially in the dynamic languages space.

Late-bound invocation : Some notes I wrote up on late-bound invocation using Reflection (if you're going dynamic, then I hope this helps you out). Part 2 here.

Compiler books:

Compiling for the .NET Common Language Runtime, John Gough, 0130622966
The bible for compiler writers targeting the CLR.

Inside Microsoft .NET IL Assembler, Serge Lidin, 0735615470
All you need to know about IL and the ILASM/ILDASM tools that ship with the SDK. If your compiler is going to target textual IL and call on the ILASM compiler to cook you up an exe, this book will be invaluable.

Compilers, Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman, 0201100886
The bible. Also featured in the movie Hackers as a l33t resource for those hacker people.

Modern Compiler Implementation in ML, Andrew W. Appel, 0521582741
I haven't finished reading this book yet, but so far so good. It'd be better if it was "Modern Compiler Implementation in F#", but I'm biased... :) (also comes in C and Java flavors)

Programming Language Pragmatics, Michael L. Scott, 1558604421
I love the chapter on concurrent programming languages towards the end of the book. Lots of really great archaic info on languages like Ada, Fortran and Prolog. Great stuff.

Advanced Compiler Design and Implementation, Steven Muchnick, 1558603204

.NET Platform books (useful to understand what you're targeting):

Essential .NET, Volume 1: The Common Language Runtime, Don Box, 0201734117
Suitably high level to get a good understanding of the CLR. I recommend this one to all CLR newbies.
Shared Source CLI Essentials, David Stutz, Ted Neward, Geoff Shilling, 059600351X
Total ego booster when coupled with the Rotor source. Impress your friends and co-workers with your deep understanding of the CLR. Graduates of this book are prone to be hired by the CLR team. Recommended to compiler writers who want a sharp understanding of what they are targeting.

The Common Language Infrastructure Annotated Standard, Jim Miller, 0321154932
If you find the ECMA spec lacks a little oomph, try this book from CLR Architect (and part-time opera singer) Jim Miller. It annotates the ECMA spec with explanations of design decisions and other runtime facts. Also known to produce CLR new hires.

Applied Microsoft .NET Framework Programming, Jeffrey Richter, 0735614229
Jeff is the man. A broader platform book that goes a little deeper than Essential .NET. Useful to newbie compiler writers for two reasons: you get to learn the platform APIs to write your compiler, and it gives a good insight into how the backend works.

Professional .NET Framework 2.0, Joe Duffy, 0764571354
Where's Joe's face?
http://blogs.msdn.com/b/joelpob/archive/2005/10/04/476965.aspx
GIT(1)                            Git Manual                            GIT(1)

NAME
       git - the stupid content tracker

SYNOPSIS
       git [--version] [--help] [-c <name>=<value>] [--exec-path[=<path>]]
           [--html-path] [--man-path] [--info-path] [-p|--paginate|--no-pager]
           [--no-replace-objects] [--bare] [--git-dir=<path>]
           [--work-tree=<path>] [--namespace=<name>] <command> [<args>]

DESCRIPTION
       Git is a fast, scalable, distributed revision control system with an
       unusually rich command set that provides both high-level operations
       and full access to internals.

       See gittutorial(7) to get started, then see Everyday Git[1] for a
       useful minimum set of commands. The Git User's Manual[2] has a more
       in-depth introduction. A formatted and hyperlinked version of the
       latest Git documentation can be viewed at.

OPTIONS
       --version
           Prints the Git suite version that the git program came from.

       --help
           Prints the synopsis and a list of the most commonly used
           commands.

       --bare
           Treat the repository as a bare repository. If GIT_DIR environment
           is not set, it is set to the current working directory.

       --no-replace-objects
           Do not use replacement refs to replace Git objects. See
           git-replace(1) for more information.

       --literal-pathspecs
           Treat pathspecs literally, rather than as glob patterns. This is
           equivalent to setting the GIT_LITERAL_PATHSPECS environment
           variable to 1.

GIT COMMANDS
       git-checkout(1)
           Checkout a branch or paths to the working tree.

       git-log(1)
           Show commit logs.

       git-merge(1)
           Join two or more development histories together.

       git-mv(1)
           Move or rename a file, a directory, or a symlink.

       git-notes(1)
           Add or inspect object notes.

       git-pull(1)
           Fetch from and integrate with another repository or a local
           branch.

       gitk(1)
           The Git repository browser.

       git-whatchanged(1)
           Show logs with difference each commit introduces.

       gitweb(1)
           Git web interface (web frontend to Git repositories).

       git-diff-index(1)
           Compare a tree to the working tree or index.

       git-sh-i18n(1)
           Git's i18n setup code for shell scripts.

       git-sh-setup(1)
           Common Git shell script setup code.

       git-stripspace(1)
           Remove unnecessary whitespace.

SYMBOLIC IDENTIFIERS
       Any Git command accepting any <object> can also use the following
       symbolic notation:

       HEAD
           indicates the head of the current branch.

       <tag>
           a valid tag name (i.e. a refs/tags/<tag> reference).

       <head>
           a valid head name (i.e. a refs/heads/<head> reference).

       Please see gitglossary(7).

ENVIRONMENT VARIABLES
       Various Git commands use the following environment variables:

       EMAIL
           (see git-commit-tree(1))

       GIT_TRACE_PACK_ACCESS
           If this variable is set to a path, a file will be created at the
           given path logging all accesses to any packs. For each access,
           the pack file name and an offset in the pack is recorded. This
           may be helpful for troubleshooting some pack-related performance
           problems.

       GIT_TRACE_PACKET
           If this variable is set, it shows a trace of all packets coming
           in or out of a given program. This can help with debugging object
           negotiation or other protocol issues. Tracing is turned off at a
           packet starting with "PACK".

REPORTING BUGS
       Report bugs to the Git mailing list <git@vger.kernel.org[6]> where
       the development and maintenance is primarily done. You do not have to
       be subscribed to the list to send a message there.

SEE ALSO
       gittutorial(7), gittutorial-2(7), Everyday Git[1],
       gitcvs-migration(7), gitglossary(7), gitcore-tutorial(7), gitcli(7),
       The Git User's Manual[2], gitworkflows(7)

GIT
       Part of the git(1) suite

NOTES
        1. Everyday Git
        2. Git User's Manual
        3. Git concepts chapter of the user-manual
        4. howto
        5. Git API documentation
        6. git@vger.kernel.org
           mailto:git@vger.kernel.org

Git 1.8.4.1                       09/27/2013                            GIT(1)
http://leaf.dragonflybsd.org/cgi/web-man?command=git&section=1
Unleash awesomeness
Private packages, team management tools, and powerful integrations. Simplify your workflow and supercharge your projects.

Use Orgs for free
npm Orgs is now free for all users to organize and collaborate on public code.

Easy permissions
Use Orgs to manage permissions for multiple team members all at once.

Quick configuration
Simplify package management with security groups and one-click configuration.

Reusable code
Stop reinventing the wheel. Discover and re-use code across projects.

Secure your private code
Publish and manage packages in a private namespace. Control who else can work with your code. Seamlessly mix open source and private dependencies into your projects. Publish and use unlimited packages. Create many small blocks so you can re-use code more easily.

Do more, faster
Powerful integrations and a simple workflow help you build amazing things with one integrated tool.

Easy to use
Find, install, and publish both private and public code with the same workflow.

Everything you need
Harness the power of over 650,000 modules used by over 10 million developers worldwide.

Integrated with your stack
Connect the tools you already use and harness new ways to seamlessly test, secure, and deploy your code.

npm account (Free): Discover, reuse, update, and share code with millions of developers worldwide.
npm Orgs (Free): Control publishing for groups and manage varying permissions for different teams and roles.
Private packages ($7/month): Add restricted access to your packages. Combine private and open source code in the same project.
https://www.npmjs.com/features?utm_source=house&utm_medium=package%20page&utm_term=Unleash%20awesomeness&utm_content=hed&utm_campaign=orgs
Enums, Macros, Unicode and Token-Pasting

Hi, I am Rocky, a developer on the Visual C++ IDE team. I would like to discuss the C++ programming technique of creating macro-generated enums. I recently used this for distinguishing various C++ types such as class, variable, and function. Having the list of types in one file makes it easy to add types and allows for many different kinds of uses. The examples I have below mirror uses in code that I have worked with. I have also seen this technique used in many places in various other source bases, but I have not seen it discussed much in textbooks, so I thought I would highlight it.

Consider this enum:

enum Animal { dog, cat, fish, bird };

Now dog can be used in place of 0, with compiler-enforced type safety that macros do not provide. The VS debugger will also show the friendly value of the enum instead of integers. However, functions that print out enum values need better formatting. This code can help:

wchar_t* AnimalDescription[] = { L"dog", L"cat", L"fish", L"bird" };

With this array, debugging code can now print the friendly value of the enum by using the enum to index into the string array. With macro-generated enums, both the enum and the friendly names can be maintained as one entity. Consider the file animal.inc:

MYENUM(dog)
MYENUM(cat)
MYENUM(fish)
MYENUM(bird)

And the following C++ code:

enum Animal {
#define MYENUM(e) _##e,
#include "animal.inc"
#undef MYENUM
};

wchar_t* AnimalDescription[] = {
#define MYENUM(e) L"_" L#e,
#include "animal.inc"
#undef MYENUM
};

Now editing animal.inc will update both the enum and the friendly text of the enum. In this case I added an underscore in front of the animal names I used before to get the macro to work correctly; the token-pasting operator ## cannot be used as the first token. The stringizing operator # creates a string from its operand. Adding an L right before the stringizing operator makes the resulting string a wide string.
These macro-generated enums can be "debugged" by using the compiler switch /EP or /P. This causes the compiler to output the preprocessed file:

enum Animal {
_dog,
_cat,
_fish,
_bird,
};

wchar_t* AnimalDescription[] = {
L"_" L"dog",
L"_" L"cat",
L"_" L"fish",
L"_" L"bird",
};

C++ allows a comma after the last entry of an enum and of an array initializer. This macro string-replacement technique can be further expanded to produce code. Here is an example of using string replacement to create function prototypes:

#define MYENUM(e) void Order_##e();
#include "animal.inc"
#undef MYENUM

This expands to:

void Order_dog();
void Order_cat();
void Order_fish();
void Order_bird();

You may wish to do some action based on the kind of animal. If you switch on the kind of animal, here is an example of creating case labels and function calls:

#define MYENUM(e) case _##e: Order_##e(); break;
#include "animal.inc"
#undef MYENUM

This expands to:

case _dog: Order_dog(); break;
case _cat: Order_cat(); break;
case _fish: Order_fish(); break;
case _bird: Order_bird(); break;

In this example, function definitions would need to be added for each of Order_dog(), Order_cat(), etc. If you were to add a new animal to animal.inc, you would not need to remember that you also need a new Order_ function definition for it: the linker would give you an error reminding you!

Macro string replacement is a powerful tool that can be leveraged to keep internal data stored in one spot. Keeping this data in one spot reduces the chance of errors, missed cases or mismatched cases.

Join the conversation

Without external includes:

#define ANIMALS(FOO) FOO(dog) FOO(fish) FOO(cat)
#define DO_DESCRIPTION(e) L"_" L#e,
#define DO_ENUM(e) _##e,

wchar_t* AnimalDescription[] = { ANIMALS(DO_DESCRIPTION) };
enum Animal { ANIMALS(DO_ENUM) };

Very interesting.
FYI, the Intel IPP library uses this very same technique to generate a different set of functions for each processor architecture. Nice.

I've used macros in a similar fashion to maintain the mapping between symbolic names in code and strings that are exposed outside the program in various ways. When used appropriately, macros can prevent errors and even allow the compiler to detect errors in these mappings before they get frozen into shipping code forever.

Thanks for the new ideas of how to use macros. Also, I just realized I've recently used a similar technique to ajax16384's. Good stuff.

I don't understand why you're risking conflicts with reserved library identifiers by prefixing the enum values with underscores. "... I added an underscore in front of the animal names I used before to get the macro to work correctly." Why? This works perfectly well:

#define MYENUM(e) e,

Many identifiers that start with an underscore are reserved for the implementation of the compiler and the standard libraries.
Hrm, one of the most ancient tricks in the C hacker’s book that’s for sure. One major downside you need to consider is that this makes the code ugly and difficult to read. I wouldn’t want to maintain a codebase littered with preprocessor statements. Consider writing a simple code generator instead. One that generates readable code for instance. Then you can generate to your heart’s happiness and aren’t limited to the preprocessor’s quirks (and some times compiler-specific quirks). Unfortunately VC++ has had a history of issues dealing with dependencies once you start using code generators (usually invoked via a makefile project). So mileage varies. Sometimes you need build twice to actually build. Sorry, but I find that horrendous. I suppose it might be useful in some situations but I can’t think of a time when I have ever wanted such a thing. The code is ugly and difficult to read. IMO, macros obfuscate code and are easy to write and/or use dangerously with unexpected results. They are best avoided unless there is a very compelling reason to use them, which there isn’t here, IMO. This is especially true in C++ (compared to C) since C++ often provides better, safer and easier-to-read alternatives. I agree that it is good to have a mechanism which keeps enum values and name strings in sync, but that good is far offset by the bad of ugly, unreadable macro code combined with being forced to separate the list of names out into another file. For the switch statement, it would be better if the compiler noticed the switch was on an enum and emitted a warning if all cases were not covered (and there was no default clause, of course). Maybe the compiler already does that; I’m not sure. If it does then this is pointless IMO as you might as well add all the calls to the new Order_XYZ functions when you write the Order_XYZ function itself, and the compiler will tell you all the places you need to do so. 
Sorry if it seems like I’m having a tinkle on the fireworks here; I just have a strong dislike of code like this, especially when it’s coming from Microsoft’s VC++ team! I despise code like this. It’s lazy, ugly, bloated and stupid. No wonder Microsoft software is getting ugly, bloated and slow–I’ll bet it’s chock full of undisciplined crap like this. Wait until you need an enum with dozens of entries, I bet you’ll wanna try using this sort of ‘stupid’ and ‘ugly’ solution too! <i> I despise code like this. It’s lazy, ugly, bloated and stupid. No wonder Microsoft software is getting ugly, bloated and slow–I’ll bet it’s chock full of undisciplined crap like this. </i> Macros is something C++ (especially the Standards Committee) has always tried to avoid. Such use of macros like this will make code unreadable and cause maintainance problems. I think it’s not a good idea to play with such thing. That’s an ugly solution, I would put the values in a XML file and then use a XSL file to generate the code. "That’s an ugly solution, I would put the values in a XML file and then use a XSL file to generate the code." are you kidding me? I would use boost preprocessor library for this, but it is nice to see a way of doing it without any library. Nothing beats having everything in sync automatically. <i>Wait until you need an enum with dozens of entries, I bet you’ll wanna try using this sort of ‘stupid’ and ‘ugly’ solution too!</i> I have. Many times. And I’ve found that it’s better in the long run to do it manually. (I went through a phase where I used macros for a lot of things, including token replacement. I even remember going through my personal class library and ripping many of those out.) I stand by my belief that this is a bad idea. (I do agree that the XML/XSL solution is even worse.) 
I use this kind of technique for the sole purpose of having one single place in the source code where the definition exists (I prefer declaring just MYENUM(bird) and not both tok_bird and "bird"). This guarantees that the enum name and the string description will match, and that *must* be what we strive for.

I don't like how this kind of use of the preprocessor confuses intellisense in the IDE. And when used on code, it confuses the debugger (it can't display the source you're in when you're stepping through a function whose source is generated by the preprocessor). Often the best you can do is get taken to the line that says MYENUM(foo). I do like the idea of using XSLT or text translation (.tt) or any other roll-your-own preprocessor, except that the debugger, again, will take you to code from the generated .cpp file, rather than to your original .xml or .tt file. And intellisense often is baffled as to the true origin of the symbol.

I totally agree that using the leading underscore is wrong. If you are in love with the underscore, put it at the end instead of the beginning: prefer foo_ to _foo. Because, as was pointed out, the leading underscore symbols are reserved by the implementation. I personally prefer putting the enum symbols in a class or namespace to resolve collisions.

I really don't think this is bad style at all. In fact, it shows a maturity of understanding when to use macros and for what purpose. We want to avoid declaring something in two places…the text "bird" would appear in multiple places in the source code for the purposes of symmetric declarations were it not for the pre-processing. And currently, using the built-in preprocessor is the only way we have within the language to accomplish such a thing. It's standard C++, which is good. The only reason to use this code is to generate both a compile-time symbol and a compile-time string literal at the same time.
If the language had the ability to declare something as both a string literal and a symbol in the same breath, we'd use it, but the only way seems to be to use preprocessing of some sort. Please see here:

This discussion plainly shows the limitations of the current single-pass compilation paradigm. Look to multipass macro assemblers such as FASM, or even to the Microsoft C# compiler. It should indeed be possible to generate code and symbols in a single source file, from a variable amount of internal and external information sources, while having correct and logical intellisense and debugging. It's theoretically possible, IMO.

I use the macro list version all the time in my own code (not the include version). It is elegant IMO and standard C++. It's no more ugly than the ? : construct or templates. Before you use them they look scary; after you use them they don't look so scary anymore. Code generation has the problem that it adds another dependency. You try to build on a new machine, and if they don't have your particular code generator installed they are out of luck. I started using this because it solves the specific problem it's trying to solve: keeping lists of items in sync. Before I started using this I had cases where I spent hours debugging something only to find that some table was out of sync somewhere. Since I started using this I never have to worry about that problem again. I switch to code generation if I have to keep lists in sync across two or more languages, but otherwise it's standard and cross platform, and in every case I've used it, it has made my code FAR EASIER to maintain.
https://blogs.msdn.microsoft.com/vcblog/2008/04/30/enums-macros-unicode-and-token-pasting/
The documentation you are viewing is for Dapr v1.4, which is an older version of Dapr. For up-to-date documentation, see the latest version.

Setup & configure mTLS certificates

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true
    workloadCertTTL: "24h"
    allowedClockSkew: "15m"

The file above shows the default daprsystem configuration settings. The examples below show you how to change and apply this configuration. Run the following command to edit it:

kubectl edit configurations/daprsystem --namespace <DAPR_NAMESPACE>

Once the changes are saved, perform a rolling update to the control plane:

kubectl rollout restart deploy/dapr-sentry -n <DAPR_NAMESPACE>
kubectl rollout restart deploy/dapr-operator -n <DAPR_NAMESPACE>
kubectl rollout restart statefulsets/dapr-placement-server -n <DAPR_NAMESPACE>

Note: the sidecar injector does not need to be redeployed.

Disabling mTLS with Helm

kubectl create ns dapr-system
helm install \
  --set global.mtls.enabled=false \
  --namespace dapr-system \
  dapr \
  dapr/dapr

Disabling mTLS with the CLI

dapr init --kubernetes --enable-mtls=false

Viewing logs

In order to view Sentry logs, run the following command:

kubectl logs --selector=app=dapr-sentry --namespace <DAPR_NAMESPACE>

Bringing your own certificates

Using Helm, you can provide the PEM encoded root cert, issuer cert and private key that will be populated into the Kubernetes secret used by Sentry.

Note: This example uses the OpenSSL command line tool. OpenSSL is a widely distributed package, easily installed on Linux via the package manager. On Windows OpenSSL can be installed using Chocolatey. On macOS it can be installed using brew:

brew install openssl

Create config files for generating the certificates; this is necessary for generating v3 certificates with the SAN (Subject Alt Name) extension fields.
First save the following to a file named root.conf:

[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = VA
L = Daprville
O = dapr.io/sentry
OU = dapr.io/sentry
CN = cluster.local
[v3_req]
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = cluster.local

Repeat this for issuer.conf: paste the same contents into the file, but add pathlen:0 to the end of the basicConstraints line, as shown below:

basicConstraints = critical, CA:true, pathlen:0

Run the following to generate the root cert and key:

openssl ecparam -genkey -name prime256v1 | openssl ec -out root.key
openssl req -new -nodes -sha256 -key root.key -out root.csr -config root.conf -extensions v3_req
openssl x509 -req -sha256 -days 365 -in root.csr -signkey root.key -outform PEM -out root.pem -extfile root.conf -extensions v3_req

Next run the following to generate the issuer cert and key:

openssl ecparam -genkey -name prime256v1 | openssl ec -out issuer.key
openssl req -new -sha256 -key issuer.key -out issuer.csr -config issuer.conf -extensions v3_req
openssl x509 -req -in issuer.csr -CA root.pem -CAkey root.key -CAcreateserial -outform PEM -out issuer.pem -days 365 -sha256 -extfile issuer.conf -extensions v3_req

Install Helm and pass the root cert, issuer cert and issuer key to Sentry via configuration:

kubectl create ns dapr-system
helm install \
  --set-file dapr_sentry.tls.issuer.certPEM=issuer.pem \
  --set-file dapr_sentry.tls.issuer.keyPEM=issuer.key \
  --set-file dapr_sentry.tls.root.certPEM=root.pem \
  --namespace dapr-system \
  dapr \
  dapr/dapr

Updating Root or Issuer Certs

If the Root or Issuer certs are about to expire, you can update them and restart the required system services. First, issue new certificates using the steps above in Bringing your own certificates.
Now that you have the new certificates, you can update the Kubernetes secret that holds them. Edit the Kubernetes secret:

kubectl edit secret dapr-trust-bundle -n <DAPR_NAMESPACE>

Replace the ca.crt, issuer.crt and issuer.key keys in the Kubernetes secret with their corresponding values from the new certificates. Note: the values must be base64 encoded.

If you signed the new cert root with a different private key, restart all Dapr-enabled pods. The recommended way to do this is to perform a rollout restart of your deployment:

kubectl rollout restart deploy/myapp

Setting up mTLS with the configuration resource

Dapr instance configuration

When running Dapr in self-hosted mode, mTLS is disabled by default. You can enable it by creating the following configuration file:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true

In addition to the Dapr configuration, you will also need to provide the TLS certificates to each Dapr sidecar instance.
You can do so by setting the following environment variables before running the Dapr instance.

Linux/macOS:

export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
export NAMESPACE=default

Windows PowerShell:

$env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
$env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
$env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
$env:NAMESPACE="default"

If using the Dapr CLI, point Dapr to the config file above to run the Dapr instance with mTLS enabled:

dapr run --app-id myapp --config ./config.yaml node myapp.js

If using daprd directly, use the following flags to enable mTLS:

daprd --app-id myapp --enable-mtls --sentry-address localhost:50001 --config=./config.yaml

Sentry configuration

Here's an example of a configuration for Sentry that changes the workload cert TTL to 25 seconds:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true
    workloadCertTTL: "25s"

In order to start Sentry with a custom config, use the following flag:

./sentry --issuer-credentials $HOME/.dapr/certs --trust-domain cluster.local --config=./config.yaml

Bringing your own certificates

In order to provide your own credentials, create ECDSA PEM encoded root and issuer certificates and place them on the file system. Tell Sentry where to load the certificates from using the --issuer-credentials flag. The next example creates root and issuer certs and loads them with Sentry.

Note: This example uses the step tool to create the certificates. You can install the step tool from here.
Windows binaries are available here.

Create the root certificate:

step certificate create cluster.local ca.crt ca.key --profile root-ca --no-password --insecure

Create the issuer certificate:

step certificate create cluster.local issuer.crt issuer.key --ca ca.crt --ca-key ca.key --profile intermediate-ca --not-after 8760h --no-password --insecure

This creates the root and issuer certs and keys. Place ca.crt, issuer.crt and issuer.key in a desired path ($HOME/.dapr/certs in the example below), and launch Sentry:

./sentry --issuer-credentials $HOME/.dapr/certs --trust-domain cluster.local

Updating Root or Issuer Certs

If the Root or Issuer certs are about to expire, you can update them and restart the required system services. First, issue new certificates using the steps above in Bringing your own certificates. Copy ca.crt, issuer.crt and issuer.key to the filesystem path of every configured system service, and restart the process or container. By default, system services will look for the credentials in /var/run/dapr/credentials.

Note: If you signed the cert root with a different private key, restart the Dapr instances.
https://v1-4.docs.dapr.io/operations/security/mtls/
Running_Python_Code

The following sections document various ways to store and execute Python code in Origin.

The Python Console is ideal if you have just a few lines of Python code to execute. Intellisense is supported in the console, so it is also very useful to explore objects and methods available in the originpro package. Use the Connectivity: Python Console... menu in Origin to open the Python Console window.

You can develop your Python code using Code Builder. Code highlighting, intellisense, and debugging are supported. Use the Connectivity: Open Untitled.py... menu in Origin to open Code Builder. Once you have edited your code, you can run the code by pressing the F5 key. It is not even necessary to save the code to a file prior to executing. The Tools menu in Code Builder has three entries specific for Python:

You can run code saved in a .py file from anywhere in Origin where LabTalk Script is supported, such as the Script Window, using the Run command.

// Run code from file test.py saved in your User-Files Folder
run -pyf test.py;
// Run code from file saved in a specific folder
string str$ = "C:\temp\test.py";
run -pyf %(str$);

You can pass variables to your Python code from LabTalk script. Try the following. Save this Python code as test.py:

import sys
if __name__ == '__main__':
    print(len(sys.argv))
    print(sys.argv[1])
    print(sys.argv[2])

Then run this LabTalk script:

double var1 = 123.456;
string var2$ = 'hello';
run -pyf test.py "$(var1)" "%(var2$)";

The output is:

3
123.456
'hello'

The .py file containing your Python code can be attached to the Origin project. Once it is attached, you can execute the code using the script command:

// Execute Python code stored in file test.py attached to project
run -pyp test.py;
// Can leave out the .py extension
run -pyp test;

To create and attach a file to the project, do the following:

Python code can be stored in a text object placed on a graph, worksheet or matrixsheet.
The code can then be executed from another object, such as a button, using the run command:

// Run the python code stored in text object named 'pycode'
run -pyb pycode;

Try the following:

pip -check scipy pandas;
run -pyb pycode;

Instead of running Python code, you can define Python functions and call the functions from Origin, such as from Column Formula, Import Wizard, and from Fitting Functions.
http://cloud.originlab.com/doc/python/Running_Python_Code
Hi!

As far as I can see, I'm still maintainer of software suspend. That did not stop you from crying "split those patches" when I tried to submit changes to my code, and you were pretty pissed off when I tried to push trivial one liners without contacting maintainers.

And now you pushed ton of crap into Linus' tree, breaking userland interfaces in the stable series (/proc/acpi/sleep), killing copyrights (Andy Grover has copyright on drivers/acpi/sleep/main.c), and rewriting code without even sending diff to maintainer (no, I did not see a mail from you, and you modified swsusp heavily). You did not bother to send code to the lists, so that noone could review it. Great. This way we are going to have stable PM code... in 2056.

Linus, could you make sure Patricks patches are at least reviewed on the lists?

Pavel

--- a/include/linux/pm.h	Fri Aug 15 01:15:23 2003
+++ b/include/linux/pm.h	Mon Aug 18 15:31:58 2003
@@ -186,8 +186,30 @@
 #endif /* CONFIG_PM */
 
+/*
+ * Callbacks for platform drivers to implement.
+ */
 extern void (*pm_idle)(void);
 extern void (*pm_power_off)(void);
+
+enum {
+	PM_SUSPEND_ON,
+	PM_SUSPEND_STANDBY,
+	PM_SUSPEND_MEM,
+	PM_SUSPEND_DISK,
+	PM_SUSPEND_MAX,
+};
+
+extern int (*pm_power_down)(u32 state);

If you defined enum, you should also use it.

@@ -1114,7 +986,8 @@
 static int __init resume_setup(char *str)
 {
-	strncpy( resume_file, str, 255 );
+	if (strlen(str))
+		strncpy(resume_file, str, 255);
 	return 1;
 }

Why are you obfuscating the code?

You changed return type of do_magic() to int, but did not bother to update assembly code, as far as I can see. Did you test those changes?

ugly hack. We were passing suspend level before. Why did you have to break it?
--
http://lkml.org/lkml/2003/8/22/183
Our tutorial here is meant to teach the fundamentals of Java programming, and if you look at my other tutorials on WPF, you will know that we can develop a Windows UI using XAML. The alternative to WPF in Java is JavaFX. If you are new to Java programming, I actually suggest that you start learning JavaFX rather than Java Swing. Swing is still very popular and there are still many programmers using it. However, Oracle has put it in maintenance mode. It will be replaced by JavaFX, and I will work out a tutorial on JavaFX soon. We still have to learn Java Swing if you want to have a good grasp of Java UI programming, and here, let's use the following simple example to illustrate it.

Code Block

package javaapplication3;

import javax.swing.*;

public class JavaApplication3 {
    public static void main(String[] args) {
        JFrame frame = new JFrame("HelloJava");
        JLabel label = new JLabel("Hello, I am a Java Program!", JLabel.CENTER);
        frame.add(label);
        frame.setSize(300, 300);
        frame.setVisible(true);
    }
}

In this program, the window is formed by using the JFrame class. We create an instance, called frame, of the class JFrame. The same goes for the text or the label that will go into the window. This is done by using the JLabel class.
https://codecrawl.com/2014/11/12/java-swing/
Data structure that represents a partial merkle tree. More...

#include <merkleblock.h>

Data structure that represents a partial merkle tree.

It represents a subset of the txid's of a known block, in a way that allows recovery of the list of txid's and the merkle root, in an authenticated way.

The serialization is fixed and provides a hard guarantee about the encoded size:

SIZE <= 10 + ceil(32.25*N)

Where N represents the number of leaf nodes of the partial tree. N itself is bounded by:

N <= total_transactions
N <= 1 + matched_transactions*tree_height

The serialization format:

Definition at line 54 of file merkleblock.h.
the total number of transactions in the block Definition at line 58 of file merkleblock.h. node-is-parent-of-matched-txid bits Definition at line 61 of file merkleblock.h. txids and internal hashes Definition at line 64 of file merkleblock.h.
https://doxygen.bitcoincore.org/class_c_partial_merkle_tree.html
t_getprotaddr - get the protocol addresses

#include <xti.h>

int t_getprotaddr(
    int fd,
    struct t_bind *boundaddr,
    struct t_bind *peeraddr)

The t_getprotaddr() function returns local and remote protocol addresses currently associated with the transport endpoint specified by fd. In boundaddr and peeraddr the user specifies maxlen, which is the maximum size (in bytes) of the address buffer, and buf, which points to the buffer where the address is to be placed. On return, the buf field of boundaddr points to the address, if any, currently bound to fd, and the len field specifies the length of the address. If the transport endpoint is in the T_UNBND state, zero is returned in the len field of boundaddr. The buf field of peeraddr points to the address, if any, currently connected to fd, and the len field specifies the length of the address. If the transport endpoint is not in the T_DATAXFER, T_INREL, T_OUTCON or T_OUTREL states, zero is returned in the len field of peeraddr. If the maxlen field of boundaddr or peeraddr is set to zero, no address is returned.

VALID STATES
ALL - apart from T_UNINIT

ERRORS
On failure, t_errno is set to one of the following:

[TBADF]
The specified file descriptor does not refer to a transport endpoint.

[TBUFOVFLW]
The number of bytes allocated for an incoming argument (maxlen) is greater than 0 but not sufficient to store the value of that argument.

RETURN VALUE
Upon successful completion, a value of zero is returned. Otherwise, a value of -1 is returned and t_errno is set to indicate the error.

SEE ALSO
t_bind().
http://pubs.opengroup.org/onlinepubs/007908799/xns/t_getprotaddr.html
Deploy Commerce Mock Application in the Kyma Runtime

- How to create a Namespace in the Kyma runtime
- How to deploy the Kyma mock application, which includes a Kyma APIRule to expose the API to the Internet

The Kyma mock application contains lightweight substitutes for SAP applications to ease the development and testing of extension and integration scenarios based on Varkes. Together with SAP BTP, Kyma runtime, it allows for efficient implementation of application extensions without the need to access the real SAP applications during development.

- Step 1

The Kyma mock applications can be found in the xf-application-mocks repository. Within the repo you can find each of the mock applications and their Deployment files within the respective folder. The process outlined in the tutorial is the same for each, but focuses on configuring the Commerce mock. Download the code by choosing the green Code button and then choosing one of the options to download the code locally. You can instead run the following command within your CLI at your desired folder location:

git clone

- Step 2

Open the Kyma console and create the dev Namespace by choosing the menu option Namespaces and then choosing the option Create Namespace. Provide the name dev, and then choose Create. Namespaces separate objects inside a Kubernetes cluster. The concept is similar to folders in a file system. Each Kubernetes cluster has a default namespace to begin with. Choosing a different value for the namespace will require adjustments to the provided samples.

Open the dev Namespace by choosing the tile, if it is not already open. Apply the Deployment of the mock application to the dev Namespace by choosing the menu option Overview if not already open. Within the Overview dialog, choose Deploy a new workload -> Create Deployment. Choose the tab YAML.
Copy the contents of the file /xf-application-mocks/commerce-mock/deployment/k8s-deployment.yaml into the Deployment pane, overwriting the preexisting content. Scroll farther down the YAML tab to view the Service pane. Enable the option Expose a separate Service and copy the contents of the file /xf-application-mocks/commerce-mock/deployment/k8s-service.yaml into the Service pane, overwriting the preexisting content. After copying the content of both the k8s-deployment.yaml and k8s-service.yaml into their corresponding panes, choose Create. The service can also be created within the menu option Discovery and Network -> Services. The new deployment and service are represented as declarative YAML objects which describe what you want to run inside your namespace.

Choose the menu option Storage -> Persistent Volume Claims and then choose Persistent Volume Claim. Copy the contents of the file /xf-application-mocks/commerce-mock/deployment/k8s-pvc.yaml into the pane, overwriting the preexisting content. After copying the contents of the k8s-pvc.yaml, choose Create.

Create the APIRule of the mock application in the dev Namespace by choosing the menu option Discovery and Network -> API Rules and then choosing Create API Rule. Provide the name commerce-mock, choose the service commerce-mock and enter commerce for the Subdomain. Enable each of the Methods and choose Create. Even API rules can be created by describing them within YAML files. You can find the YAML definition of the APIRule at /xf-application-mocks/commerce-mock/deployment/kyma.yaml.

- Step 3

Open the APIRules in the Kyma console within the dev Namespace by choosing the Discovery and Network > APIRules menu option. Open the mock application in the browser by choosing the Host value.*******.kyma.ondemand.com. If you receive the error upstream connect..., the application may have not finished starting. Wait for a minute or two and try again. Leave the mock application open in the browser; it will be used in a later step.
- Step 4

In this step, you will create a System in the SAP BTP which will be used to pair the mock application to the Kyma runtime. This step will be performed at the Global account level of your SAP BTP account. Open your global SAP BTP account and choose the System Landscape menu option. Under the tab Systems, choose the Add System option, provide the name commerce-mock, set the type to SAP Commerce Cloud and then choose Add. Choose the option Get Token, copy the Token value and close the window. This value will expire in five minutes and will be needed in a subsequent step. If the token expires before use, you can obtain a new one by choosing the Get Token option shown next to the entry in the Systems list.

- Step 5

In this step, you will create a Formation. A Formation is used to connect one or more Systems created in the SAP BTP to a runtime. This step will be performed at the Global account level of your SAP BTP account. Within your global SAP BTP account, choose the System Landscape menu option. Choose the tab Formations and choose the Create Formation option. Provide a Name and choose your Subaccount where the Kyma runtime is enabled. Choose Create. Choose the option Include System, select commerce-mock for the system and choose Include.

What is the function of a Formation?
- Step 7 Navigate back to the Kyma home workspace by choosing Back to Namespaces. In the Kyma home workspace, choose Integration > Applications. Choose the mp-commerce-mock application by clicking on the name value shown in the list. After choosing the system, you should now see a list of the APIs and events the mock application is exposing. Congratulations! You have successfully configured the Commerce mock application. What status indicates that the application/system is ready?
https://developers.sap.com/tutorials/cp-kyma-mocks.html
I have just started looking at simple cryptography in order to learn Python better. Encryptions and decryptions should be speedy and accurate, so I'm hoping it will improve my programming skills. I recently wrote this quick script for a shift cipher, but I wasn't sure how to be able to define the shift.

import string

m = string.maketrans('abcdefghijklmnopqrstuvwxyz', 'pqrstuvwxyzabcdefghijklmno')
a = 'this should have been fairly easy to crack considering the length of the message'
n = string.maketrans('pqrstuvwxyzabcdefghijklmno', 'abcdefghijklmnopqrstuvwxyz')
x = string.translate(a, m)
print x
y = string.translate(x, n)
print y

For example, if I shifted the letter a over 4 times it would be the letter e. I could make 26 encryption variables and 26 decryption variables, but this isn't really pythonic or efficient. Can anyone point me in the right direction? Thanks for your time.
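One way to parameterize the shift is to slice the alphabet at the shift point, so both translation tables are built from a single integer. The sketch below is Python 3 (where maketrans moved from the string module to the str type, so the poster's Python 2 calls would look slightly different):

```python
import string

def shift_cipher(shift):
    """Build encrypt/decrypt translation tables for a given shift."""
    alphabet = string.ascii_lowercase
    rotated = alphabet[shift % 26:] + alphabet[:shift % 26]
    encrypt = str.maketrans(alphabet, rotated)
    decrypt = str.maketrans(rotated, alphabet)
    return encrypt, decrypt

encrypt, decrypt = shift_cipher(4)        # 'a' -> 'e', as in the example
secret = 'attack at dawn'.translate(encrypt)
print(secret)                             # exxego ex hear
print(secret.translate(decrypt))          # attack at dawn
```

The original script's fixed tables correspond to shift_cipher(15) here, since 'a' maps to 'p'.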
https://www.daniweb.com/programming/software-development/threads/362289/python-shift-cyphers
The Principles Behind TEA-Combine

Attempting to Programmatically Combine Stateful Components in The Elm Architecture

It's been a while since I fiddled with Elm. Though I don't use it regularly, I sometimes take a stroll around it to see how it's doing. This time I stumbled upon tea-combine, a small library made with the purpose of easily gluing together subcomponents of a bigger application. Since I only use Elm for fun, it's unlikely I'll ever really need it. Regardless, I was curious to understand how it worked under the hood, and the author prompted me to write a small tutorial about it, so here we are. This will be an interesting example of how to use the basic type combination offered by a bare functional language. In the first part I will implement manually what tea-combine does automatically; then I will proceed to explain in detail how such combination works, stepping through some functions that achieve this result.

The Use Case

Let's take the library's front page example and try to implement it in vanilla Elm, without fancy combinations. The result will be a barren web page, with just an interactive counter and two checkboxes. I encourage you to know what The Elm Architecture is (and Elm syntax in general) before delving deeper here. In a way, Elm can be considered more of a UI framework than a real programming language, and TEA is the template it uses to build applications.

To build our web page we need three things:

- a Model that embeds the state of the web page (in this case, a number and two booleans)
- an update function that reacts to events and changes the Model accordingly
- a view function that renders our data as html on the web page

The Model should be a custom type that will be handled by update and view. With three elements to keep track of, we can define it as a record:

type alias Model =
    { checkboxState1 : Bool
    , checkboxState2 : Bool
    , counterState : Int
    }

Each widget gets its own field in it.
Then we define the events that will be fired by the html and the update function that codifies the corresponding reaction. There are three Msg variants: two of them simply signal that a checkbox has been toggled and the third mentions that the counter has been changed, carrying the new value to be updated.

type Msg
    = CheckboxToggle1
    | CheckboxToggle2
    | CounterNumber Int

update : Msg -> Model -> Model
update msg model =
    case msg of
        CheckboxToggle1 ->
            { model | checkboxState1 = not model.checkboxState1 }

        CheckboxToggle2 ->
            { model | checkboxState2 = not model.checkboxState2 }

        CounterNumber n ->
            { model | counterState = n }

Brilliant. Finally, we move on to displaying this information with the view function:

view : Model -> Html Msg
view model =
    Html.div []
        [ Html.span []
            [ Html.button [ onClick <| CounterNumber <| model.counterState - 1 ] [ Html.text "-" ]
            , Html.text <| Debug.toString model.counterState
            , Html.button [ onClick <| CounterNumber <| model.counterState + 1 ] [ Html.text "+" ]
            ]
        , Html.input [ checked model.checkboxState1, type_ "checkbox", onClick CheckboxToggle1 ] []
        , Html.input [ checked model.checkboxState2, type_ "checkbox", onClick CheckboxToggle2 ] []
        ]

Notice how Html is not really a type on its own, but a type constructor that needs our Msg to be completed. The view returns a special flavor of Html enhanced with the events that we specified. Patch those three elements together in a Browser.sandbox element and we have a complete Elm app that implements the Simple.elm example for tea-combine. Here's the full version.

module Main exposing (main)

import Browser
import Html exposing (Html)
import Html.Attributes exposing (..)
import Html.Events exposing (..)

main =
    Browser.sandbox
        { init = { checkboxState1 = False, checkboxState2 = False, counterState = 0 }
        , view = view
        , update = update
        }

-- MODEL

type alias Model =
    { checkboxState1 : Bool
    , checkboxState2 : Bool
    , counterState : Int
    }

-- UPDATE

type Msg
    = CheckboxToggle1
    | CheckboxToggle2
    | CounterNumber Int

update : Msg -> Model -> Model
update msg model =
    case msg of
        CheckboxToggle1 ->
            { model | checkboxState1 = not model.checkboxState1 }

        CheckboxToggle2 ->
            { model | checkboxState2 = not model.checkboxState2 }

        CounterNumber n ->
            { model | counterState = n }

-- VIEW

view : Model -> Html Msg
view model =
    Html.div []
        [ Html.span []
            [ Html.button [ onClick <| CounterNumber <| model.counterState - 1 ] [ Html.text "-" ]
            , Html.text <| Debug.toString model.counterState
            , Html.button [ onClick <| CounterNumber <| model.counterState + 1 ] [ Html.text "+" ]
            ]
        , Html.input [ checked model.checkboxState1, type_ "checkbox", onClick CheckboxToggle1 ] []
        , Html.input [ checked model.checkboxState2, type_ "checkbox", onClick CheckboxToggle2 ] []
        ]

Before we proceed further, I am obligated to specify that this is exactly how Elm is supposed to work: encode the information in the Model and handle it yourself (eventually with helper functions). In the next steps I'm going to explain how to make subcomponents for counters and checkboxes even though it is an exceedingly overkill procedure for such simple elements. The Elm philosophy is to avoid this approach at all costs, even though it is inevitable once the application grows beyond a certain point. Here however I'm trying to build a conceptual example, so we're going to ignore politics and good manners for the sake of science.
For example, the rendering of the two checkboxes has a lot of repeated code; defining a helper function would make it more readable. The same goes for the counter, even if there is only one. We might need multiple counters in the future though, so making the code cleaner now is a good idea. The update function and Model could be split as well. It’s not much code now, but if the widget management grew more complex it would become quite bothersome to handle all those cases in full. Let’s add two separate modules, CheckBox.elm and Counter.elm, as smaller MVU (Model-View-Update) patterns. This is exactly what tea-combine does in its example (here and here — take a look). Now that we effectively created two separate components we need to embed them into a bigger application. It’s a matter of replacing the subcomponent piece in every part of our bigger Elm Architecture:

-- MODEL

type alias Model =
    { checkboxState1 : CheckBox.Model
    , checkboxState2 : CheckBox.Model
    , counterState : Counter.Model
    }

-- UPDATE

type Msg
    = CheckboxToggle1 CheckBox.Msg
    | CheckboxToggle2 CheckBox.Msg
    | CounterNumber Counter.Model

update : Msg -> Model -> Model
update msg model =
    case msg of
        CheckboxToggle1 unit ->
            { model | checkboxState1 = CheckBox.update unit model.checkboxState1 }

        CheckboxToggle2 unit ->
            { model | checkboxState2 = CheckBox.update unit model.checkboxState2 }

        CounterNumber counter ->
            { model | counterState = Counter.update counter model.counterState }

-- VIEW

view : Model -> Html Msg
view model =
    Html.div []
        [ Counter.view model.counterState
        , CheckBox.view model.checkboxState1
        , CheckBox.view model.checkboxState2
        ]

Let me once again remind you that this is a purposefully contrived example. The amount of abstraction we are using is not worth it given the simplicity of the application; I’m doing this for the sake of argument. Unfortunately, this does not compile. Running elm make stops with the following error:

Detected errors in 1 module.
-- TYPE MISMATCH -------------------------------------------------- src/Main.elm

The 2nd element of this list does not match all the previous elements:

57|     [ Counter.view model.counterState
58|>    , CheckBox.view model.checkboxState1
59|     , CheckBox.view model.checkboxState2
60|     ]

This `view` call produces:

    Html CheckBox.Msg

But all the previous elements in the list are:

    Html Counter.Msg

Hint: Everything in the list needs to be the same type of value. This way you never run into unexpected values partway through. To mix different types in a single list, create a "union type" as described in: <>

The problem is that the top view function returns Html Msg — with Msg being the main module’s message type. We created the submodule’s view as a completely autonomous Model -> Html Msg function, where Msg is the submodule’s message type, even if it has the same name (namespaces save us here). For now, we fix this error by passing the message to be used to the view function of the widgets. It’s a temporary solution until we find a better way to compose them.

checkBoxView : CheckBox.Model -> msg -> Html msg
checkBoxView model msg =
    Html.input
        [ checked model
        , type_ "checkbox"
        , onClick msg
        ]
        []

counterView : Counter.Model -> Html Msg
counterView model =
    Html.span []
        [ Html.button [ onClick <| CounterNumber <| model - 1 ] [ Html.text "-" ]
        , Html.text <| Debug.toString model
        , Html.button [ onClick <| CounterNumber <| model + 1 ] [ Html.text "+" ]
        ]

view : Model -> Html Msg
view model =
    Html.div []
        [ counterView model.counterState
        , checkBoxView model.checkboxState1 <| CheckboxToggle1 ()
        , checkBoxView model.checkboxState2 <| CheckboxToggle2 ()
        ]

Now it works, and it handles its subcomponents majestically! Are we done?

Boilerplate Code

No, we are not done yet. Although it is true that we defined two separate and (almost) autonomous components, something should still be bugging you.
Whenever you add a new instance of those components you need to write some “glue” code to attach it to your application: a new field in the Model, a new Msg case, a corresponding new branch in the update function. The only part that’s completely reusable is the helper rendering function. Admittedly, it’s not much: adding a new line here and there, just calling a function with a different parameter to differentiate. If you have multiple components of the same type (i.e. 20 checkboxes) you might also create a checkbox List and handle them with an index. It’s not much, but it’s there. This is a common problem in Elm applications. Even in Richard Feldman’s Real World Example we can see this pattern: the Main.elm file is basically just a huge routing switch for all of the website’s pages. It’s a 300-line file of boilerplate code — “glue” code that doesn’t express any significant meaning, and that could be mechanized or abstracted away — over a 4000 LoC project. The official stance of the Elm team is that “it’s not a big deal”. While to an extent this is true, it doesn’t mean we cannot look for a solution.

Lack of Abstraction

This apparently simple problem becomes much harder in a language lacking powerful abstraction tools like Elm. For instance, if we had access to some form of ad hoc polymorphism (which you might know as typeclasses, traits, interfaces or something else depending on your coding background) it would be trivial to define a component as something that is “Updatable” and “Renderable” through self-provided functions. Then, the Model would just contain a list of those components and invoke the provided update and view functions as needed. Unfortunately, we can’t do that in Elm. For the sake of simplicity and performance its creator decided to leave out those tools and, to the best of everyone’s knowledge, there is no plan to introduce them at some point in the future.
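To make the idea concrete, here is a rough sketch of that "Updatable and Renderable component list" in plain JavaScript, where duck typing stands in for the missing ad hoc polymorphism. All names here (makeCounter, makeCheckbox, dispatch, render) are invented for illustration; this is not Elm and not tea-combine, just a picture of what the abstraction would buy us:

```javascript
// Each "component" bundles its own state, update and view.
function makeCheckbox() {
  return {
    state: false,
    update(msg, state) { return msg === 'toggle' ? !state : state; },
    view(state) { return `[${state ? 'x' : ' '}]`; },
  };
}

function makeCounter() {
  return {
    state: 0,
    update(msg, state) { return msg === 'inc' ? state + 1 : state - 1; },
    view(state) { return `(${state})`; },
  };
}

// The "application" just iterates: no per-component glue code needed.
function dispatch(components, index, msg) {
  components[index].state =
    components[index].update(msg, components[index].state);
}

function render(components) {
  return components.map(c => c.view(c.state)).join(' ');
}

const app = [makeCounter(), makeCheckbox(), makeCheckbox()];
dispatch(app, 0, 'inc');     // bump the counter
dispatch(app, 1, 'toggle');  // tick the first checkbox
```

Adding a fourth widget is just one more element in the array — exactly the property Elm's type system makes hard to express directly.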
Again, we shouldn’t be discouraged, as tea-combine provides a very smart solution even without these tools.

Automatically Combine the Components

Let’s see how we can mix together multiple components that expose just their version of an MVU application (a Model type, a Msg type, an update function and a view function), just like we did with Counter and CheckBox, but without having to manually write out the glue code. We start with the Model.

Combining the Model

In our previous example, the top Model type was a record with a field for each component. A record is nice to handle manually but not exactly easy to automate, as we are forced to create new fields ourselves. A list seems more approachable before you realize that it can only contain elements of the same type, and that’s no good with heterogeneous components (for homogeneous content, the list option is available in tea-combine as well). The only structure that remains is the tuple. Indeed, we could define our Model as a tuple containing instances of the subcomponents’ Models, just like this:

type alias Model = (Counter.Model, CheckBox.Model, CheckBox.Model)

This works, but only briefly. If we try to add a fourth element (for example, another checkbox), the compiler stops us:

Detected errors in 1 module.

-- BAD TUPLE ------------------------------------------------------ src/Main.elm

I only accept tuples with two or three items. This has too many:

23| type alias Model = (Counter.Model, CheckBox.Model, CheckBox.Model, CheckBox.Model)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I recommend switching to records. Each item will be named, and you can use the `point.x` syntax to access them.

Note: Read <> for more comprehensive advice on working with large chunks of data in Elm.

You read that correctly: tuples in Elm are only two or three items wide. I wouldn’t call myself an expert, but I like to study different approaches and I fiddle with many programming languages.
While I often marvel at their exotic features, it’s the first time I have stumbled upon an exotic limitation. The rationale behind this is that bigger tuples are rarely needed in practice, and records are better at this job anyway. We briefly stop to thank Elm for knowing what’s better for us and try to work things out differently. There is in fact another way to stuff arbitrary elements in a tuple: nesting tuples. This

type alias Model = (((Counter.Model, CheckBox.Model), CheckBox.Model), CheckBox.Model)

is a tuple containing a tuple containing a tuple containing a counter and a checkbox, and a checkbox, and a checkbox. Quite a mouthful, but allowed nonetheless. This finally brings us to the first piece of tea-combine: the initWith function:

initWith : model2 -> model1 -> Both model1 model2
initWith m2 m1 =
    ( m1, m2 )

This function is polymorphic in its two parameters (any type will do) and simply builds a tuple out of them. Both is just a type alias for tuples of two elements. Take a look at the usage of this function in the Simple.elm example.

Counter.init 0
    |> initWith (CheckBox.init False)
    |> initWith (CheckBox.init False)

This creates a structure exactly like the one we defined before, ((Counter.Model, CheckBox.Model), CheckBox.Model), and it can be arbitrarily nested. If you are somewhat accustomed to type theory, you can see the combined Model is a product type of all the submodules’ Models.

Combining the Update

When working with The Elm Architecture we deal with two main custom types: Model and Msg. You might be tempted to combine the Msg types in the same way as we did with the Model, but that’s not quite right. While the application Model must contain a piece of each component at all times, a Msg comes from only one of the components at any given moment. In other words, it can be Either one of them. The resulting type is a sum (or tagged union) of all those message types. Tea-combine uses Either for this job.
You can imagine an Either type just like a tuple, but only one of the possible elements is present at any given time. Here is how tea-combine does this, with the updateWith function:

updateWith :
    Update model2 msg2
    -> Update model1 msg1
    -> Update (Both model1 model2) (Either msg1 msg2)
updateWith u2 u1 =
    Either.unpack (Tuple.mapFirst << u1) (Tuple.mapSecond << u2)

This is a little more complex than initWith, mostly because it must return a function instead of a normal value. Update model msg is just an alias for msg -> model -> model, which is the specified type for update functions. The main point is the unpack function of the Either module. Its type signature is (a -> c) -> (b -> c) -> Either a b -> c: it takes two functions that return the same type c and builds a function that takes an Either and returns said c. In human words, it’s saying: give me a way to handle both cases of an Either type and I will handle the choice for you; if it’s an a I will apply the first function, if it’s a b the second one, returning a c regardless. The parameters given to unpack are two partially composed functions. I will focus on the first one, and take the second for granted (the principle is identical). The first function is (Tuple.mapFirst << u1), and it’s supposed to fill the place of the (a -> c) argument for unpack. Here a is msg1 and c is the function Both model1 model2 -> Both model1 model2. The << operator stands for function composition; written with lambda functions it would be

\msg -> \( m1, m2 ) -> ( u1 msg m1, m2 )

Which means: take a msg, then take a tuple with m1 as the first element, apply u1 to the message and m1, and leave the second element untouched. The pieces are all there, and the composition operator turns them into a single function that adheres to the requirements for unpack. The second part is the same, only on the second element of the tuple.
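The mechanics of initWith, unpack and updateWith translate quite directly into any language with first-class functions. Here is a rough JavaScript sketch — the names mirror tea-combine's, but pairs are two-element arrays and Either is a tagged object, so treat this as an illustration of the idea rather than the library's API:

```javascript
// A pair is a two-element array; Left/Right tag the two Either cases.
const initWith = m2 => m1 => [m1, m2];
const Left = value => ({ tag: 'Left', value });
const Right = value => ({ tag: 'Right', value });

// Either.unpack: handle each case with its own function.
const unpack = (onLeft, onRight) => either =>
  either.tag === 'Left' ? onLeft(either.value) : onRight(either.value);

// Tuple.mapFirst / Tuple.mapSecond analogues.
const mapFirst = f => ([a, b]) => [f(a), b];
const mapSecond = f => ([a, b]) => [a, f(b)];

// updateWith u2 u1: Left messages update the first slot with u1,
// Right messages update the second slot with u2.
const updateWith = u2 => u1 =>
  unpack(
    msg => mapFirst(m => u1(msg, m)),
    msg => mapSecond(m => u2(msg, m))
  );

// Two toy components: a counter and a checkbox.
const counterUpdate = (msg, n) => (msg === 'inc' ? n + 1 : n - 1);
const checkboxUpdate = (_msg, b) => !b;

let model = initWith(false)(0); // [0, false] — the product of the two models
const update = updateWith(checkboxUpdate)(counterUpdate);

model = update(Left('inc'))(model);     // only the counter changes
model = update(Right('toggle'))(model); // only the checkbox changes
```

After the two dispatches, model is [1, true]: each tagged message reached only its own slot of the product, which is exactly what updateWith guarantees.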
To recap: updateWith takes two update functions on separate models and messages and turns them into a single function on the product of the models and the sum of the messages — all the models together, and one message coming from any of the components. This construction can be nested, and indeed it is when applied several times.

Counter.update
    |> updateWith CheckBox.update
    |> updateWith CheckBox.update

This results in a function with the signature Update (Both (Both Counter.Model CheckBox.Model) CheckBox.Model) (Either (Either Counter.Msg CheckBox.Msg) CheckBox.Msg). Quite a mouthful again, but it’s automatically handed to us. We need not worry about this composition at all, just roll with it.

Composing the View

Combining the view functions is mostly a repetition of the update case, but with a twist. There is a viewBoth function that mashes together two views just like initWith does for Models and updateWith does for updates.

viewBoth :
    View model1 msg1
    -> View model2 msg2
    -> Both model1 model2
    -> Both (Html (Either msg1 msg2)) (Html (Either msg1 msg2))
viewBoth v1 v2 ( m1, m2 ) =
    ( Html.map Left <| v1 m1, Html.map Right <| v2 m2 )

This is nothing new, so I’m not going to unwrap it in detail. The view function usually handles lists of Html elements, so we need to transform this part a bit. There is another function that supports this task, joinViews:

joinViews :
    View model1 msg1
    -> View model2 msg2
    -> Both model1 model2
    -> List (Html (Either msg1 msg2))
joinViews v1 v2 m =
    let
        ( h1, h2 ) =
            viewBoth v1 v2 m
    in
    [ h1, h2 ]

Don’t get distracted by the (as usual) huge type signature, and just look at the body: take two views, combine them and return them as a List. Then, one more piece: once we have a list of two views we need a way to append another one.
This is a job for withView:

withView :
    View model2 msg2
    -> (model1 -> List (Html msg1))
    -> Both model1 model2
    -> List (Html (Either msg1 msg2))
withView v2 hs ( m1, m2 ) =
    List.append
        (List.map (Html.map Left) <| hs m1)
        [ Html.map Right <| v2 m2 ]

This is a little more convoluted, so I’ll explain it step by step. This function takes a simple view and the output from joinViews (a function transforming a Model into a list of Html elements) and returns a combined view that acts on the combination of Model and Msg. It’s important to remember that the functions under scrutiny are almost exclusively polymorphic; a model1 type parameter could be the simple state of a single component or a composition of multiple models. Onto the body: the main purpose of this function is to append an element, so of course it starts by calling List.append. For its first argument (the already-existing list) it applies hs to m1, obtaining a List of Html msg1; then it applies Left to the contents (the messages) of each Html in the list through a double map, turning them into instances of Either. The second argument is the new element to be appended: it is built by invoking v2 on m2; this creates a single Html msg2, which is turned into Html (Either msg1 msg2) with the Right map and then into a list between the two square brackets. Simple, isn’t it? We finally have all the pieces to explain the view function of the Simple.elm example:

Html.div []
    << (joinViews Counter.view CheckBox.view
            |> withView CheckBox.view
       )

joinViews creates the initial list to which CheckBox.view is then appended. Everything is combined with Html.div [] for a final function that will convert a Model (made up by the combination of models) into the final Html rendering.

Conclusion

Phew! Is it over? For this article yes, but tea-combine goes even further. First of all, we never even considered the possibility of adding subscriptions and commands to our components.
Those are handled in a similar fashion by the Effectful module. Then there is the option to manage an array of components of the same family (instead of heterogeneous, arbitrary components) with viewEach, viewSome, updateEach and updateAll: the former two allow you to render all or part of the Model array as HTML, while the latter two update an array of Models with a single update function or a list of updates, respectively. Right now you should be able to define multiple, heterogeneous, stateful components with their own MVU pattern and combine them at leisure in bigger applications. The idea is very simple but clever, and I enjoyed untangling those function signatures like they were puzzles. That being said, there are some downsides to the approach. Mostly it’s about the composition: it’s supposed to be automatic, but there might be cases where you need to look inside the resulting Model. In these situations the only way around it is to know how the tuples are built and extract what you need manually. Remember that even if you don’t add type signatures the Elm compiler can infer them for you, so you don’t need to calculate them yourself. You also need to be careful about the order of composition. In the Simple.elm example the three widgets are composed in the same order for the three fields of the sandbox record; failing to do so (e.g. starting with a CheckBox instead of Counter in the update function) will result in a cryptic compiler error complaining about a type mismatch: this is the result of different pieces having their elements in different order, which makes them incompatible with one another. Although it carries the same information, (Counter.Model, CheckBox.Model) is different from (CheckBox.Model, Counter.Model).
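As a last illustration, the view-side combinators discussed earlier (viewBoth, joinViews, withView) can be sketched in plain JavaScript too. Here an "Html msg" is faked as a small object with a text and an onClick message, and Html.map becomes rewriting the message tag — again an illustration of the shape of the combination, not the library's real API:

```javascript
const Left = value => ({ tag: 'Left', value });
const Right = value => ({ tag: 'Right', value });

// Html.map analogue: rewrite the message carried by a fake Html node.
const mapHtml = f => html => ({ text: html.text, onClick: f(html.onClick) });

// joinViews: two views into a two-element list of tagged Html.
const joinViews = (v1, v2) => ([m1, m2]) => [
  mapHtml(Left)(v1(m1)),
  mapHtml(Right)(v2(m2)),
];

// withView: append one more view to a list-producing view.
const withView = v2 => hs => ([m1, m2]) => [
  ...hs(m1).map(mapHtml(Left)),   // re-tag the existing list with Left
  mapHtml(Right)(v2(m2)),         // the freshly appended element gets Right
];

const counterView = n => ({ text: String(n), onClick: 'inc' });
const checkboxView = b => ({ text: b ? 'on' : 'off', onClick: 'toggle' });

const view = withView(checkboxView)(joinViews(counterView, checkboxView));
const htmls = view([[0, false], true]);
```

The resulting list has three elements whose messages are nested exactly like the types promised: Left(Left(…)), Left(Right(…)) and Right(…) — which also shows why the order of composition matters.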
To sum it up, tea-combine automatically combines MVU components to be reused in bigger applications; the construction is mechanical, but it falls onto the developer to ensure it is identical every time, and it might need to be manually unwrapped to perform certain tasks. A little cumbersome, but certainly not worse than writing all the boilerplate code yourself.
https://maldus512.medium.com/the-principles-behind-tea-combine-49120b9137d0?source=post_internal_links---------5----------------------------
I have a game with a level editor. I hope that users can share these levels. The idea is to generate a URL with an intent that opens the game and the level (or, if the user doesn't have the game installed, opens a Play Store URL). But I have no idea how to do this. I don't know how to open the game from a link in the browser, and I don't know how to "pass" a param to the game to say "open level X". I read about intents, but people are using Android Studio, Java, the manifest and other things that I don't know if I need or not... Has someone done something like this? I'm lost in this area :S

Answer by cronem · Feb 22, 2017 at 09:14 AM

Hello, I've just run into a similar issue, and I could figure out how to open the app from a URL, at least. First, open your .apk build in Android Studio and edit the AndroidManifest file in there. In my case, my app bundle identifier is "com.vizAR.OppaAR" and I wanted to open the app by accessing my site's URL, so I add this info into the AndroidManifest, inside my main activity:

<intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="http" android:host="" />
</intent-filter>

And at my site I created a button with the following HTML code:

<a href="intent://;scheme=http;package=com.vizAR.OppaAR;end;">Open App</a>

So when I click the button, it opens the app if it's installed on the phone, or the Play Store if it isn't. I still haven't figured out how to pass a param to the game, but it's the first step, anyway ;) If you do resolve that, please tell me. Good luck!

Great job! Thanks! I am looking to pass a parameter too :/

Answer by CartageVR · Jun 15, 2017 at 05:23 PM

Hi, here's my solution, working for me, to complete cronem's answer. First, you complete the URL with args, like this for a string in this example:

<a href="intent://;scheme=http;package=com.vizAR.OppaAR;S.namestring=mystring;end;">Open App</a>

The "S" at the beginning of the "namestring" parameter defines it as a String.
Other types can be:

String => 'S'
Boolean => 'B'
Byte => 'b'
Character => 'c'
Double => 'd'
Float => 'f'
Integer => 'i'
Long => 'l'
Short => 's'

Then create a Start script like this example:

public class getExtraName : MonoBehaviour
{
    // Use this for initialization
    void Start()
    {
        string yourname = "";
        AndroidJavaClass UnityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
        AndroidJavaObject currentActivity = UnityPlayer.GetStatic<AndroidJavaObject>("currentActivity");
        AndroidJavaObject intent = currentActivity.Call<AndroidJavaObject>("getIntent");
        bool hasExtra = intent.Call<bool>("hasExtra", "namestring");
        if (hasExtra)
        {
            AndroidJavaObject extras = intent.Call<AndroidJavaObject>("getExtras");
            yourname = extras.Call<string>("getString", "namestring");
        }
    }
}

Hope it's helpful.

Nice, I'll try that! Anyway, the solution I found earlier was to use this plugin, which worked really well. Cheers,

Answer by majaus · Jul 16, 2017 at 03:53 PM

Thanks to both. Finally, I'm using the deep link plugin. In the plugin documentation, it says to use a URL like this: test

But if you want the Play Store to open if the application is not installed, you can write this: test2

I noticed that there is a conflict with manifest.xml if you use the Google Play Services plugin. I'm not using services for the moment, but in the future I need to study how to resolve this issue...
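Putting the two answers together, the link for the web page can be assembled programmatically. The following JavaScript sketch uses placeholder host and extra values, and assumes Android's documented anchor form for intent links (intent://host/#Intent;…;end); adapt it to your own package and parameters:

```javascript
// Prefix letters for typed extras, as listed in the answer above.
const PREFIX = {
  string: 'S', boolean: 'B', byte: 'b', char: 'c', double: 'd',
  float: 'f', int: 'i', long: 'l', short: 's',
};

// One "S.name=value"-style segment for a single extra.
function extraSegment(type, name, value) {
  return `${PREFIX[type]}.${name}=${encodeURIComponent(String(value))}`;
}

// Full intent:// link with scheme, package and any typed extras.
function intentUrl({ host, scheme, pkg, extras = [] }) {
  const extraPart = extras
    .map(([type, name, value]) => extraSegment(type, name, value) + ';')
    .join('');
  return `intent://${host}/#Intent;scheme=${scheme};package=${pkg};${extraPart}end`;
}

const url = intentUrl({
  host: 'example.com',                 // placeholder — use your own site
  scheme: 'http',
  pkg: 'com.vizAR.OppaAR',
  extras: [['string', 'namestring', 'mystring'], ['int', 'level', 3]],
});
```

A link built this way could carry, for example, a level id as an int extra, which the Start script above would then read with getExtras on the Unity side.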
https://answers.unity.com/questions/1283643/open-android-app-from-url.html
Naming Things in Code

June 05, 2009 — code, language

When I’m designing software, I spend a lot of time thinking about names. For me, thinking about names is inseparable from the process of design. To name something is to define it.

In the beginning was the Word, and the Word was with God, and the Word was God.
— The Gospel According to John

One of the ways I know a design has really clicked is when the names feel right. It may take some time for this to happen (I rename things a lot when I’m first putting them down in code), but that’s OK. Good design doesn’t happen fast. Of course, good names alone don’t make a good design, but it’s been my experience that crappy names do prevent one. With that in mind, here are the guidelines I try to follow when naming things. The examples here are in C++, but work more or less for any language.

Types (Classes, Interfaces, Structs)

The name should be a noun phrase.

Bad:  Happy
Good: Happiness

Do not use namespace-like prefixes. That’s what namespaces are for.

Bad:  SystemOnlineMessage
Good: System::Online::Message

Use just enough adjectives to be clear.

Bad:  IAbstractFactoryPatternBase
Good: IFactory

Do not use “Manager” or “Helper” or other null words in a type name. If you need to add “Manager” or “Helper” to a type name, the type is either poorly named or poorly designed. Likely the latter. Types should manage and help themselves.

Bad:  ConnectionManager
      XmlHelper
Good: Connection
      XmlDocument, XmlNode, etc.

If a class doesn’t represent something easily comprehensible, consider a concrete metaphor.

Bad:  IncomingMessageQueue
      CharacterArray
      SpatialOrganizer
Good: Mailbox
      String
      Map

If you use a metaphor, use it consistently.

Bad:  Mailbox, DestinationID
Good: Mailbox, Address

Functions (Methods, Procedures)

Be terse.

Bad:  list.GetNumberOfItems();
Good: list.Count();

Don’t be too terse.

Bad:  list.Verify();
Good: list.ContainsNull();

Avd abbrvtn.

Bad:  list.Srt();
Good: list.Sort();

Name functions that do things using verbs.
Bad:  obj.RefCount();
Good: list.Clear();
      list.Sort();
      obj.AddReference();

Name functions that return a boolean (i.e. predicates) like questions.

Bad:  list.Empty();
Good: list.IsEmpty();
      list.Contains(item);

Name functions that just return a property and don’t change state using nouns.

Bad:  list.GetCount();
Good: list.Count();

(In C#, you’d actually use properties for this.)

Don’t make the name redundant with an argument.

Bad:  list.AddItem(item);
      handler.ReceiveMessage(msg);
Good: list.Add(item);
      handler.Receive(msg);

Don’t make the name redundant with the receiver.

Bad:  list.AddToList(item);
Good: list.Add(item);

Only describe the return in the name if there are identical functions that return different types.

Bad:  list.GetCountInt();
Good: list.GetCount();
      message.GetIntValue();
      message.GetFloatValue();

Don’t use “And” or “Or” in a function name. If you’re using a conjunction in the name, the function is likely doing too much. Break it into smaller pieces and name accordingly. If you want to ensure this is an atomic operation, consider creating a name for that entire operation, or possibly a class that encapsulates it.

Bad:  mail.VerifyAddressAndSendStatus();
Good: mail.VerifyAddress();
      mail.SendStatus();

Does it Matter?

Yes, I firmly believe it does. A module with well-named parts quickly teaches you what it does. By reading only a fraction of the code, you’ll quickly build a complete mental model of the whole system. If it calls something a “Mailbox” you’ll expect to see “Mail” and “Addresses” without having to read the code for them. Well-named code is easier to talk about with other programmers, helping knowledge of the code to spread. No one wants to try to say “ISrvMgrInstanceDescriptorFactory” forty times in a meeting. Over on the other side, poor names create an opaque wall of code, forcing you to painstakingly run the program in your head, observe its behavior and then create your own private nomenclature.
“Oh, DoCheck() looks like it’s iterating through the connections to see if they’re all live. I’ll call that AreConnectionsLive().” Not only is this slow, it’s non-transferable. From the code I’ve seen, there’s a strong correspondence between a cohesive set of names in a module and a cohesive module. When I have trouble naming something, there’s a good chance that what I’m trying to name is poorly designed. Maybe it’s trying to do too many things at once, or is missing a critical piece to make it complete. It’s hard to tell if I’m designing well, but one of the surest signs I’ve found that I’m not doing it well is when the names don’t come easy. When I design now, I try to pay attention to that. Once I’m happy with the names, I’m usually happy with the design.
http://journal.stuffwithstuff.com/2009/06/05/naming-things-in-code/
React Hooks Tutorial — Create a Number Trivia Generator Website

Let's first start by creating a React app. Since our website will be very simple, we will use the create-react-app command. If you don’t have it installed, you can install it using npm. Run the following in your terminal or CMD:

npm i -g create-react-app

This will install create-react-app globally. Now we can create our React app:

create-react-app numbers-trivia

Running this command will create a directory in the working directory with the name you supply for the React app. I named it numbers-trivia but you can call it whatever you want. Inside that directory, it will also install all the packages and files needed to run the website. It will install packages like react, react-dom, react-scripts and more. Once it’s done, change into the newly created directory and start the server:

cd numbers-trivia
npm start

Once you start the server, a web page of your website will open in your favorite browser. You will see a page with just the logo and a link to learn React. Before we start changing the page, if you are not familiar with React, let’s take a look at the structure of our React app. It should look something like this:

In our root directory, we will find a bunch of directories. node_modules holds all the packages React needs to run and any package you might add. public holds the files our server serves. The directory that we will spend most of our time in is the src directory. This directory will hold all of our React components. As you can see there are a bunch of files in that directory. index.js is basically the file that renders our highest React component. We only have one React component right now, which is in App.js. If you open it, you will find the code that renders the logo with the link to learn React that you currently see on the website.
So, in order to see how changing our App component will change the content of the website, let’s modify the code by removing the current content and replacing it with our favorite phrase: “Hello, World!”

import React, { Component } from 'react'
import logo from './logo.svg'
import './App.css'

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <h1>Hello, World!</h1>
        </header>
      </div>
    )
  }
}

export default App

We just replaced the content of the header element. Now, if your server is still running, you will see that your page updated without you refreshing it, and you will see that the previous content is replaced with Hello, World!

So now we know how and where we should edit our React components in order to get the result we want. We can go ahead and start with our objective. What we’ll do for this website is the following:

- Show a welcome message the first time the user opens the website, then replace it with a message prompting the user to try the generator.
- Render a form with text and select inputs. The text input is where the user can enter the number they want to see trivia about, and the select input will provide the user with options related to the trivia.
- On submitting the form, send a request to this API to fetch the trivia we need.
- Render the trivia for the user to see it.

Let’s start by organizing our directory structure first. In React it’s good practice to create a directory inside src holding all the components. We’ll create a directory called components. Once you create the directory, move App.js into there. We will also create a directory called styles and move App.css and index.css into it. When you do that, you will need to change the imports in your files as follows:

1. in index.js:

import React from 'react';
import ReactDOM from 'react-dom';
import '../styles/index.css';
import App from './components/App';
import * as serviceWorker from './serviceWorker';

2.
in App.js:

import React, { Component } from 'react';
import logo from './logo.svg';
import '../styles/App.css';

Our directory structure should look like this now:

We will go ahead now and start building our webpage. The first thing in our objectives list is showing a welcome message when the user first opens the webpage. It will show up for 3 seconds, and then change to another message that will prompt the user to try out the trivia generator. Without hooks, this could be done by using React’s lifecycle method componentDidMount, which runs right after the component first renders. Now, we can use the effect hook instead. It will look something like this:

useEffect(() => {
  //perform something post render
});

The function you pass to useEffect will be executed after every render. This combines the lifecycle methods componentDidMount and componentDidUpdate into one. What if you want to do something just after the first time the component renders, like in componentDidMount? You can do this by passing a second parameter to useEffect. useEffect accepts an array as a second parameter. This parameter is used as a condition on when to perform the passed function. So, let’s say you want to change a counter only after the variable counter changes, you can do it like so:

useEffect(() => document.title = `You have clicked ${counter} times`, [counter]);

This way, the function passed to useEffect will run after render only if the value for counter changes. If we want the function to run only after the first render, we can do that by passing an empty array as the second parameter. Let’s come back now to what we want to do. We want to show a welcome message when the user first opens the page, then change that message after 3 seconds. We can do that by adding the following inside App.js:

import React, { useEffect } from 'react';
import '../styles/App.css';

function App() {
  useEffect(() => {
    setTimeout(() => {
      document.getElementById('welcomeMessage').innerHTML =
        'Try Out Our Trivia Generator!';
    }, 3000);
  }, []);
  return (
    <div className="App">
      <header className="App-header">
        <h1 id="welcomeMessage">Welcome to Numbers Trivia!</h1>
      </header>
    </div>
  );
}

export default App;

Here’s what we’re doing:

- Line 1: We added an import for useEffect
- Line 4: We changed our class component into a function component
- Line 5–10: we added an effect to our function component.
This effect sets a timer after 3 seconds that will change the text in the element with the id welcomeMessage. Because we passed an empty array to useEffect, this effect will only run once.
- Line 11–17: We replaced the previous code in App.js to render an h1 element having the id welcomeMessage, which is our target element.

Okay, now go to our web page and see the changes. At first, the welcome message “Welcome to Numbers Trivia!” will show up, then 3 seconds later it will change into “Try Out Our Trivia Generator!” We just used a React Hook!

Next, let’s create our form input elements. To do that, we will create a new React component called Form. Create in the directory components the file Form.js. For now, it will just include the following:

import React from 'react';

function Form(props){

}

export default Form;

This will create the new React component. We’re just importing React, then we’re creating a function called Form. As we said earlier in the tutorial, with the use of hooks we can now create components as stateful functions rather than classes. And in the last line, we’re exporting Form in order to import it in other files. In the form, we will have a text input and select elements. This is based on the API we’re using. In the API, two parameters can be sent:

- number: the number you want to get the trivia for. It can be an integer, a date of the form month/day, or the keyword random, which will retrieve facts about a random number.
- type: the type of information you want to get. There are a few types: math, date, year, or, the default, trivia.

We will consider the text input element as optional. If the user does not enter a number or a date, we will send the keyword random for the number element.
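The fallback rule just described — empty input becomes the keyword random, and the type defaults to trivia — can be isolated as a small pure function, which also makes it easy to test outside React. The function name and shape here are illustrative, not part of the tutorial's code:

```javascript
// Allowed trivia types, per the API parameters described above.
const TYPES = ['trivia', 'math', 'date', 'year'];

// Turn raw form values into the parameters the request will use:
// an empty number field becomes "random", an unknown type becomes "trivia".
function triviaParams(numberInput, typeInput) {
  const trimmed = numberInput.trim();
  return {
    number: trimmed.length ? trimmed : 'random',
    type: TYPES.includes(typeInput) ? typeInput : 'trivia',
  };
}
```

With this in place, the submit handler only has to call triviaParams(number, type) and build the request from the result.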
Let's add the following code inside the Form function in order to render our form:

```js
function Form(props){
  return (
    <form>
      <div>
        <input type="text" name="number" placeholder="Enter a number (Optional)" />
      </div>
      <div>
        <select name="type">
          <option value="trivia">Trivia</option>
          <option value="math">Math</option>
          <option value="date">Date</option>
          <option value="year">Year</option>
        </select>
      </div>
      <button type="submit">Generate</button>
    </form>
  );
}
```

This will create the form with the text input, select, and button elements. After that, we need to import and render the Form component in our App component:

We have changed the imports to import our Form component, and we added <Form /> to render the form. Let's also add more styles just to make our form look a little better. Add the following at the end of App.css:

```css
form {
  font-size: 15px;
}

form input, form select {
  padding: 5px;
}

form select {
  width: 100%;
}

form div {
  margin-bottom: 8px;
}
```

If you refresh the page now, you will find it has changed and is now showing our form.

Now, we need to add some logic to our form. On submit, we need to get the values of our input elements, then call the API to retrieve the results. To handle form elements and their values, you need to use the state of the component: you make the value of each element equal to a property in the component's state. Before hooks, in order to get a value from the state you would use this.state.value, and to set the state you would call this.setState. Now, you can use the state hook instead.

The state hook is the useState function. It accepts one argument, which is the initial value, and it returns a pair of values: the current state and a function that updates it. You can then access the current state using the first returned value, and update it using the second returned value, which is the function.
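To see why returning a pair is convenient, here is a tiny non-React sketch of the same idea — a value cell that hands back a reader and an updater. This is purely illustrative (the function name makeState is ours, and it is not how React implements the hook; real useState returns the value itself and triggers a re-render on update):

```javascript
// Toy stand-in for the pair-returning pattern useState uses.
// NOT React's implementation - just an illustration of the idea.
function makeState(initialValue) {
  let state = initialValue;
  const getState = () => state;                 // read the current value
  const setState = (next) => { state = next; }; // replace it
  return [getState, setState];
}

const [getCount, setCount] = makeState(0);
console.log(getCount()); // 0
setCount(getCount() + 1);
console.log(getCount()); // 1
```

Array destructuring is what lets us pick our own names (count/setCount, number/setNumber) for the two returned values.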
Here's an example:

```js
const [count, setCount] = useState(0);
```

In this example, we call the useState hook and pass it 0, and we set the returned values equal to count and setCount. This means that we have created a state variable called count. Its initial value is 0, and to change its value we can use setCount.

For our Form component, we need two state variables: one for the text input, which we will call number, and one for the select input, which we will call type. Then, on the change event for these two input elements, we will change the values of number and type using the functions returned by useState. Open our Form component and change it to the following:

```js
import React, { useState } from 'react';

function Form(props){
  let [number, setNumber] = useState("random");
  let [type, setType] = useState("trivia");

  function onNumberChanged(e){
    let value = e.target.value.trim();
    if(!value.length){
      setNumber("random"); //default value
    } else {
      setNumber(value);
    }
  }

  function onTypeChanged(e){
    let value = e.target.value.trim();
    if(!value.length){
      setType("trivia"); //default value
    } else {
      setType(value);
    }
  }

  return (
    <form>
      <div>
        <input type="text" name="number" placeholder="Enter a number (Optional)" value={number} onChange={onNumberChanged} />
      </div>
      <div>
        <select name="type" value={type} onChange={onTypeChanged}>
          <option value="trivia">Trivia</option>
          <option value="math">Math</option>
          <option value="date">Date</option>
          <option value="year">Year</option>
        </select>
      </div>
      <button type="submit">Generate</button>
    </form>
  );
}

export default Form;
```

- Line 1: Add an import for the useState hook.
- Line 3–4: Create two state variables, number and type, using useState. Here we pass random as the initial value for number, and trivia as the initial value for type, because they are the default values for the parameters in the API.
- Line 5–10: Implement input change handler functions for both the text and select inputs, where we change the values of the state variables using the functions returned by useState.
If the value is unset, we automatically change it back to the default value.

- Line 13: Pass the onNumberChanged function to the onChange event for the text input.
- Line 16: Pass the onTypeChanged function to the onChange event for the select input.

In addition, let's go back to our App component to modify it and use state. Instead of modifying our welcome message by changing the innerHTML of the element, we will use a state variable. Our App component should now look like this:

```js
import React, { useEffect, useState } from 'react';
import Form from './Form';
import '../styles/App.css';

function App() {
  const [ welcomeMessage, setWelcomeMessage ] = useState(
    "Welcome to Numbers Trivia!",
  );

  useEffect(() => {
    setTimeout(() => {
      setWelcomeMessage("Try Out Our Trivia Generator!");
    }, 3000);
  }, []);

  return (
    <div className="App">
      <header className="App-header">
        <h1>{welcomeMessage}</h1>
      </header>
      <Form/>
    </div>
  );
}

export default App;
```

Now, we are using useState to declare and initialize our welcome message. It returns welcomeMessage, our state variable, and setWelcomeMessage, which we use to change the value of welcomeMessage after 3 seconds from "Welcome to Numbers Trivia!" to "Try Out Our Trivia Generator!"

What's left now is to add a function to handle the form's onSubmit event. On submit, we will send a request to the API with our parameters, then display the result.
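As an aside, the default-value fallback that both onNumberChanged and onTypeChanged apply can be distilled into one small, testable helper — a sketch of the pattern only; the tutorial inlines this logic in each handler, and the name valueOrDefault is our own:

```javascript
// Return the trimmed input, or a default when the field is empty.
// Mirrors the fallback in onNumberChanged / onTypeChanged;
// the helper name valueOrDefault is ours, not from the tutorial.
function valueOrDefault(rawValue, defaultValue) {
  const value = rawValue.trim();
  return value.length ? value : defaultValue;
}

console.log(valueOrDefault("  42 ", "random")); // "42"
console.log(valueOrDefault("   ", "random"));   // "random"
```

A handler could then simply call setNumber(valueOrDefault(e.target.value, "random")), keeping both handlers symmetrical.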
In order to perform the request, we need to install axios:

```
npm i axios
```

Then, require axios at the beginning of Form.js:

```js
const axios = require('axios');
```

Now, add the following function below onTypeChanged in our Form component:

```js
function onSubmit(e){
  e.preventDefault();
  axios.get('' + number + '/' + type)
    .then(function(response){
      let elm = document.getElementById('result');
      elm.innerHTML = response.data;
    }).catch(function(e){
      console.log("error", e); //simple error handling
    });
}
```

We're just performing a request to the API, passing the number and type, then displaying the result in an element with the id result (which we will add in a minute). In case of an error, we're just logging it to the console as simple error handling.

Now, let's add this function as the handler for the form's onSubmit event in the return statement in Form.js:

```js
<form onSubmit={onSubmit}>
```

The only thing left is to add the #result element. We can add it in App.js before <Form />:

```js
<div id="result" style={{marginBottom: '15px'}}></div>
```

Alright, now go to your website and discover all new trivia about numbers!
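For reference, the URL concatenation inside onSubmit can be isolated and checked on its own. This is a sketch, not code from the article: the baseUrl parameter stands in for the API base URL, which did not survive extraction above, and the helper name buildTriviaUrl is ours:

```javascript
// Build the request path from the form state.
// baseUrl is a parameter because the article's actual API base URL
// was lost in extraction - treat it as an assumption to fill in.
function buildTriviaUrl(baseUrl, number, type) {
  return baseUrl + number + '/' + type;
}

console.log(buildTriviaUrl('/', 'random', 'trivia')); // "/random/trivia"
console.log(buildTriviaUrl('/', '42', 'math'));       // "/42/math"
```

Factoring this out also makes the defaulting behavior easy to verify without firing a real network request.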
https://shahednasser.medium.com/react-hooks-tutorial-create-a-number-trivia-generator-website-32b6b3b52c3e
This tutorial, entitled "Number Pattern 1 using Loop in Java," will teach you how to display a number pattern forming a triangle using a for loop. Please follow all the steps to complete this tutorial.

Looping is a common feature used in almost any programming language, and Java is no exception. A loop is a repetition statement controlled by a condition: the repetition continues while the condition is met and stops once it is not. There are three types of loops: the while loop, the do-while loop, and the for loop.

Number Pattern 1 using Loop in Java Steps

1. Create a new class and name it whatever you want.

2. Create a main method inside your class.

[java]public static void main(String args[]){

}[/java]

3. Insert the code below inside your main method for the looping.

[java]for (int i = 1; i <= 5; i++){
    for (int j = 1; j <= i; j++){
        System.out.print(j);
    }
    System.out.println();
}[/java]

Remember: the looping starts at 1 and ends at 5.

4. Run your program and the output should look like the image below — the triangle 1, 12, 123, 1234, 12345, one row per line.

5. Complete Source Code

[java]public class PatternNumber1 {
    public static void main(String args[]){
        for (int i = 1; i <= 5; i++){
            for (int j = 1; j <= i; j++){
                System.out.print(j);
            }
            System.out.println();
        }
    }
}[/java]

About The Number Pattern 1 using Loop In Java

Suggested Topics you may Like:

How to Check Highest Number in Java
Sum of Two Integers using Java
https://itsourcecode.com/free-projects/java-projects/number-pattern-1-using-loop-java/
Hey guys, I made an ofstream program with the help of this tutorial: It works great, but I wanted to implement a couple lines of code that would return the user back to the main function for them to try to name the file again in the else statement below. What I added (it was highlighted in red in the original post; marked with a comment below) is causing it to loop constantly. Should I include it after the myfile.close statement? Here's my code:

Code:
#include <iostream>
#include <string>
#include <Windows.h>
#include <fstream>
// An ofstream basically writes text into a file, pretty cool

using namespace std;

int main()
{
    ofstream myfile;   // This creates an ofstream variable named myfile
    char filename[50]; // This creates a character array named filename
                       // that will store up to 50 characters that the user enters
    string userInput;  // This string just holds the user's input

    cout << "Please enter the file name you wish to write on: " << endl;
    cin.getline(filename, 50); // Again, this is pretty much the same thing as cin>>,
                               // and it will store 50 characters that the user
                               // inputs into the filename variable

    myfile.open(filename); // This opens the file itself

    if(myfile.is_open()) { // This function checks if the file is open
        cout << "Enter the text you wish to insert into the text file: " << endl;
        getline(cin, userInput); // This will store whatever the user types into
                                 // the variable userInput
        myfile << userInput;     // Basically, you're just filling the file
                                 // with whatever the user just inputted
    }
    else
        cout << "Sorry, an error has occurred. Please wait 3 seconds and try again" << endl;

    // --- the part I added (shown in red in the original post) ---
    Sleep(3000);
    system("CLS");
    return main(); // This return statement is from previous tutorials
                   // and actually ripped straight from my music program.
                   // This refers to the Windows.h library that contains
                   // the sleep command that will force the computer to pause
                   // for 3 seconds and return the user to the main function
                   // so that the user can try again.

    myfile.close(); // This closes the file after successfully completing the if
                    // statement
    system("PAUSE");
    return 0;
}
http://forums.devshed.com/programming/950536-stream-program-errors-last-post.html
Plant Physiol, April 2001, Vol. 125, pp. 1567-1576

Department of Energy Plant Research Laboratory, Michigan State University, East Lansing, Michigan 48824-1312.

The availability of the sequence for the entire genome of Arabidopsis allows a detailed analysis of all the genes involved in a particular biological process, regardless of the plant species in which the system was first identified. One such process is the import of cytoplasmically synthesized precursor proteins into chloroplasts. Most of the current information regarding this process, including the identification of components of the import apparatus that mediates it, has come from biochemical studies in pea (Pisum sativum; Fig. 1; Chen and Schnell, 1999; Keegstra and Cline, 1999; Keegstra and Froehlich, 1999; May and Soll, 1999; Schleiff and Soll, 2000). From these studies it has been determined that nuclear-encoded, chloroplast-localized enzymes are synthesized in the cytoplasm as precursors containing an N-terminal transit peptide not seen in the mature protein within the chloroplast (for review, see Bruce, 2000). A precursor protein initially interacts with a complex located within the outer membrane of the chloroplast envelope that consists of at least three subunits: translocon at the outer envelope membrane of chloroplasts (Toc) 159, Toc75, and Toc34 (Waegemann and Soll, 1991; Hirsch et al., 1994; Kessler et al., 1994; Perry and Keegstra, 1994; Schnell et al., 1994; Seedorf et al., 1995; Tranel et al., 1995). These early events involve the hydrolysis of GTP, presumably by Toc159 and Toc34, which are known to be GTP-binding proteins (Kessler et al., 1994; Seedorf et al., 1995). A recent report by Sohrt and Soll (2000) has also implicated a fourth component, Toc64, as being a member of the outer membrane import machinery.
Hydrolysis of low concentrations of ATP in the cytoplasm or intermembrane space results in the irreversible association of precursor proteins with the translocation machinery of both the outer and inner envelope membranes (Olsen et al., 1989; Olsen and Keegstra, 1992). The import complex of the chloroplastic inner envelope membrane also consists of at least three subunits: translocon at the inner envelope membrane of chloroplasts (Tic) 110, Tic20, and Tic22 (Kessler and Blobel, 1996; Lübeck et al., 1996; Kouranov and Schnell, 1997; Kouranov et al., 1998). Two additional components, Tic55 and Tic40, have also been reported to be a part of this translocon, but their inclusion is more controversial (Wu et al., 1994; Ko et al., 1995; Caliebe et al., 1997; Stahl et al., 1999). Complete translocation of precursor proteins into the chloroplast interior is accomplished via the hydrolysis of ATP within the stroma (Theg et al., 1989). This ATP hydrolysis is presumably mediated by stromal molecular chaperones, at least one of which, heat shock protein (Hsp) 93 (a member of the Hsp100 family of molecular chaperones), has been found to interact with the import complex (Akita et al., 1997; Nielsen et al., 1997). As the precursor enters the chloroplast, the transit peptide is cleaved off by the stromal processing peptidase (SPP) and the mature protein begins the process of folding and assembly (Oblong and Lamppa, 1992; VanderVere et al., 1995; Richter and Lamppa, 1998). Although virtually all of the conclusions described above were derived from work done with pea chloroplasts, expressed sequence tags (ESTs) for homologs of the various import components can be identified in the databases for a variety of monocots and dicots, including maize, tomato, and Arabidopsis. 
More importantly, the recent completion of the Arabidopsis genome sequencing project (The Arabidopsis Genome Initiative, 2000) has made it possible to find, in this species, homologs of those components for which no ESTs exist. In addition to establishing the general significance of the components of the import apparatus, identification of Arabidopsis homologs for the subunits of the pea import complex will allow the use of this species to perform molecular work that is not practical and/or possible with pea, including isolation of "knockout" mutants and generation of transgenic plants expressing sense or antisense copies of the genes encoding one or more of these components. In this paper, we analyze the Arabidopsis genomic, cDNA, and EST information currently available in GenBank concerning each of the known and putative subunits of the chloroplast protein import machinery. All of these components have homologs of high sequence identity within the Arabidopsis genome that are expressed and likely act as functional counterparts to the pea proteins. For several of these translocation components, multiple putative homologs are present in the Arabidopsis genome. However, in most cases, it is unclear whether all copies are expressed, or if they are, whether they are all acting as functional homologs within Arabidopsis chloroplasts. The information revealed by this analysis will allow important new questions to be raised, and further experimental work can then be designed to answer them in the near future.

Outer Envelope Membrane Proteins

Toc159, a GTP-binding protein, is postulated to be the first subunit of the import complex with which an incoming precursor protein interacts, serving as the receptor for transit peptides (Waegemann and Soll, 1991; Hirsch et al., 1994; Kessler et al., 1994; Perry and Keegstra, 1994; Ma et al., 1996).
There are three homologs of this protein in Arabidopsis (Table I) designated AtToc159, AtToc132, and AtToc120 based on their predicted molecular masses (Bauer et al., 2000). All three are expressed, as demonstrated by the presence of at least one Arabidopsis EST for each and by reverse transcriptase-PCR experiments (Bauer et al., 2000). The pea Toc159 protein is composed of three domains: an N-terminal acidic region, a central domain encompassing the GTP-binding motifs, and a C-terminal domain containing the membrane-spanning regions (Chen et al., 2000a). AtToc159 shares approximately 48% identity with the pea protein, most of which is concentrated in the central and C-terminal domains (approximately 69% identity in these regions). Pea Toc159 and AtToc159 are highly acidic, especially in their N-terminal regions (Bölter et al., 1998a; Bauer et al., 2000; Chen et al., 2000a). Approximately 30% and 27%, respectively, of the amino acids in this domain are Asp or Glu (Table II). This is in contrast to the other members of the outer membrane import complex (Toc75, Toc34, and Toc64) in which the percentage of acidic residues ranges from 9% to 11% for the Arabidopsis isoforms. One of the defining features of transit peptides is that they lack acidic amino acids, resulting in an overall basic pI and net positive charge (Keegstra et al., 1989). Thus, it is interesting to speculate that the N-terminal acidic domain of Toc159, which is localized on the cytoplasmic face of the chloroplast, is involved in an electrostatic interaction with positively charged transit peptides, increasing the overall efficiency of precursor protein binding (Bölter et al., 1998a). This is similar to the situation described by the acid chain hypothesis for the early interaction of basic mitochondrial targeting sequences with their acidic receptors (Komiya et al., 1998). 
AtToc132 and AtToc120 show less overall identity with pea Toc159 (approximately 37% and 39%, respectively), the majority of which is again concentrated in the central and C-terminal domains (approximately 50% for each). In addition, their levels of identity to AtToc159 are also relatively low (approximately 37% and 38%, respectively). On the other hand, the two proteins share approximately 70% amino acid identity with each other. This suggests that AtToc132 and AtToc120 share a common ancestor that diverged from AtToc159 before these two proteins diverged from one another. AtToc132 and AtToc120 are also highly acidic in their N-terminal regions (approximately 28% and 26% acidic residues, respectively). In fact, this is the main feature shared between the Arabidopsis homologs at their N-termini. There is very little conservation of primary structure between the three proteins before the GTP-binding domain (Bauer et al., 2000). However, despite a maintenance of the overall percentage of acidic residues within the N-terminal domains, the pI of the N-termini and the whole proteins differs between the three isoforms (Table II). Thus, the question arises of whether these subtle changes in size and overall charge between the Arabidopsis Toc159 homologs reflect differences in the types of precursors with which these proteins interact (Bauer et al., 2000). It is interesting to note that mutant Arabidopsis plants that lack AtToc159 are still able to import some, but not all, chloroplastic proteins, suggesting that some other factor, perhaps AtToc132 and/or AtToc120, is substituting for AtToc159 in the import of some precursors (Bauer et al., 2000). Toc75 has been shown to form a voltage-gated, peptide-sensitive channel in artificial lipid bilayers (Hinnah et al., 1997). 
Thus, it is hypothesized that this protein forms the channel through which precursor proteins cross the outer envelope membrane (Perry and Keegstra, 1994; Schnell et al., 1994; Tranel et al., 1995; Hinnah et al., 1997). Analysis of the Arabidopsis genome sequence reveals at least three coding regions that have strong similarity to pea TOC75: AtTOC75-III, AtTOC75-I, and AtTOC75-IV, named according to their chromosomal location. Only one of these genes, AtTOC75-III, is represented by an EST. More than 10 ESTs for this gene can be found, but none currently exist for the other two homologs. In addition, of the three, AtToc75-III shows the highest levels of identity with the pea protein (approximately 74%). As a consequence, it is likely that AtToc75-III is the major Toc75 isoform in Arabidopsis cells. AtToc75-III and AtToc75-I are quite similar to one another in size and amino acid sequence, sharing >60% identity throughout their lengths. On the other hand, AtToc75-IV displays some striking differences from its two homologs. First of all, the protein encoded by AtTOC75-IV is much smaller at 407 amino acids in length versus 818 amino acids for the protein encoded by AtTOC75-III. Furthermore, the region of similarity between AtToc75-IV and the other two Arabidopsis homologs is confined to the C-termini of the larger proteins. It appears that AtTOC75-IV may represent just the last six exons of AtTOC75-III. In fact, this gene seems to be an extreme case of a more common phenomenon. For a few components, including Toc75 and Toc159, BLAST searches reveal several small regions with high levels of sequence similarity to these subunits throughout the genome. Although these putative open reading frames do show similarity to the import components outside of commonly found motifs (i.e. nucleotide-binding domains), the regions of similarity are not extensive. 
In general, they constitute less than one-quarter of the total length of the queried import component, not enough to really be considered a possible functional homolog. One possible explanation for the occurrence of these presumably unexpressed regions of similarity is that these short open reading frames are examples of the evolutionary process of exon shuffling in progress. In the case of AtToc75-IV, the region of similarity extends for approximately 50% of the length of the larger Toc75 homologs. It is possible that this may be enough for the protein made by AtTOC75-IV to be functional. Future research should address this problem, but some observations suggest that it may indeed be needed in Arabidopsis cells. It is interesting to note that the levels of identity between this coding region and its "parent" are quite high at the amino acid level (approximately 65% with AtToc75-III) and at the nucleotide level. Moreover, the splicing pattern of AtTOC75-IV is identical to that seen in the 3' region of AtTOC75-III, implying that selection pressure on AtTOC75-IV may still be relatively high. Toc34, another GTP-binding protein of the translocation apparatus, is hypothesized to have a regulatory function during precursor import (Kessler et al., 1994; Seedorf et al., 1995; Kouranov and Schnell, 1997). This subunit has two homologs in Arabidopsis named AtToc34 and AtToc33 based on their predicted molecular masses (Jarvis et al., 1998). ESTs are present for both of these homologs within the Arabidopsis database, and their expression has been verified via northern and western-blot analysis (Jarvis et al., 1998; Gutensohn et al., 2000). It appears that the two proteins, which are >60% identical to each other and to the pea protein, can at least partially substitute for one another within plant cells. 
Arabidopsis mutants that lack AtToc33 display a delayed greening phenotype and reduced levels of chloroplast protein import early in their development, but are otherwise normal (Jarvis et al., 1998; Gutensohn et al., 2000). The genes for AtToc34 and AtToc33 provide an example of the evolutionary process of gene duplication. Each coding region consists of six introns and seven exons; five of the seven exons are exactly the same size between the two genes. In addition, in every case, the exon-intron junctions occur at homologous positions within the sequences. Thus, it appears that these two coding regions have diverged from one another only relatively recently after the duplication of a common ancestral gene. A fourth putative subunit of the outer envelope membrane import apparatus, Toc64, was recently isolated (Sohrt and Soll, 2000). The amino acid sequence for this component contains an amidase domain, but the protein itself has no measurable amidase activity (Sohrt and Soll, 2000). In addition, Toc64 contains three tetratricopeptide repeats (TPR), which are hypothesized to be involved in protein-protein interactions with cytosolic factors complexed with a precursor protein and/or with the precursor itself, perhaps serving as a docking site for the incoming protein (Sohrt and Soll, 2000). Within the Arabidopsis genome there are three coding regions that display extensive similarity with the pea protein outside of the amidase domain and/or the TPR motifs. These homologs have been designated AtToc64-III, AtToc64-V, and AtToc64-I. For all three isoforms, cognate ESTs have been isolated. However, only AtToc64-III and AtToc64-V contain regions similar to both the amidase domain and the TPR motifs of pea Toc64 (approximately 67% and 50% identical, respectively). 
Thus, although it is likely that the proteins encoded by AtTOC64-III and AtTOC64-V could serve as functional homologs of pea Toc64 within Arabidopsis cells, further experiments will need to be done to determine whether AtToc64-I, which lacks the TPR motifs, is playing a similar role.

Inner Envelope Membrane Proteins

The first component of the inner membrane import complex to be cloned and characterized was Tic110 (Kessler and Blobel, 1996; Lübeck et al., 1996). This subunit consists of a large globular domain localized in the chloroplast stroma and anchored to the envelope by a membrane-spanning α-helix at the N terminus (Kessler and Blobel, 1996; Jackson et al., 1998). Based on this topology it has been proposed that Tic110 acts as an anchor for stromal molecular chaperones involved in precursor protein import (Kessler and Blobel, 1996; Jackson et al., 1998). Preliminary evidence suggests that Tic110 may physically interact with at least one molecular chaperone (M. Akita and K. Keegstra, unpublished data). BLAST searches on the Arabidopsis genome sequence reveal only one coding region, AtTIC110, similar to the pea gene (Table I). The protein encoded by AtTIC110 is expressed and displays high levels of identity (approximately 68%) to pea Tic110. In addition, it appears to have the same overall structure as the pea protein, with a predicted transmembrane helix at the N terminus followed by a large hydrophilic domain. Thus, it is reasonable to conclude that AtTic110 acts as a functional homolog of pea Tic110 within Arabidopsis cells. The gene structure for AtTIC110 is quite complicated; the coding region consists of 15 exons and 14 introns. Overall, the coding region is 5,261 bp in length, with 42% of this length comprising the introns. This complexity is in contrast to the genes encoding the Arabidopsis Toc159 isoforms. The coding regions for these proteins are also quite long, ranging from 3,270 bp (AtTOC120) to 4,595 bp (AtTOC159) in length.
However, they contain only one small intron (83 bp; AtTOC159) or none at all (AtTOC132 and AtTOC120). This diversity in gene structure is seen for the other components of the import complex as well. The genes encoding the Arabidopsis homologs of Tic20 and Tic55 are relatively simple (two or fewer introns), whereas the genes for the remaining subunits are more complicated, containing between six and 23 introns (Table I). Tic20, an integral protein of the inner envelope membrane, is believed to form at least a portion of the channel through which chloroplast precursors traverse the inner membrane (Kouranov and Schnell, 1997; Kouranov et al., 1998). The Arabidopsis genome contains two genes encoding proteins, AtTic20-I and AtTic20-IV (designated according to the chromosomal locations of the genes), that are similar to pea Tic20. Both of these genes have corresponding ESTs within the Arabidopsis database. AtTic20-I is highly similar to the pea protein, sharing >60% identity with pea Tic20. As a consequence, it is likely to act as the functional counterpart to the pea protein in Arabidopsis chloroplasts. On the other hand, AtTic20-IV is only approximately 33% identical to pea Tic20 and approximately 40% identical to AtTic20-I. Although these levels of identity are relatively high, it is quite low for this system; most of the putative Arabidopsis homologs for the other import components show much higher levels of identity to their pea counterparts and related Arabidopsis isoforms. Thus, it appears that these two Tic20 isoforms may have diverged from one another earlier in evolution than is the case for isoforms of some of the other subunits of the import complex. BLAST searches for Arabidopsis homologs of pea Tic20 reveal a third putative isoform on chromosome II. However, this protein is much smaller (by approximately 70 amino acids) than the other two Arabidopsis homologs. 
More importantly, BLAST searches using this putative isoform as the query sequence fail to detect either AtTic20-I or AtTic20-IV. Thus, it was concluded that this coding region, despite sharing approximately 26% identity with pea Tic20 at the amino acid level, should not be considered an Arabidopsis homolog of the pea protein. Tic22 is localized in the intermembrane space of the chloroplast envelope and appears to be peripherally associated with the inner envelope membrane (Kouranov and Schnell, 1997; Kouranov et al., 1998). Due to its localization, it has been proposed that Tic22 may be involved in the formation of contact sites between the import complexes of the outer and inner membranes (Kouranov and Schnell, 1997; Kouranov et al., 1998). Within the Arabidopsis genome there are at least two coding regions, AtTIC22-IV and AtTIC22-III, of high similarity to pea TIC22. These genes are expressed, as determined by the presence of several ESTs for each in the database. The encoded proteins share approximately 62% and 41% identity, respectively, with pea Tic22. Tic55, an iron-sulfur protein believed to play a regulatory role during chloroplast protein import (Caliebe et al., 1997), and Tic40, which is proposed to recruit chaperones to the site of precursor protein import (Wu et al., 1994; Ko et al., 1995; Stahl et al., 1999), each have one clear homolog of high similarity in Arabidopsis. ESTs exist for both AtTic55 and AtTic40. The proteins display approximately 78% and 52% identity, respectively, with their pea counterparts. Thus, it is likely that they serve as functional homologs to the corresponding pea proteins.

Soluble Factors

It is thought that molecular chaperones within the chloroplast stroma provide the driving force, through the hydrolysis of ATP, for the translocation of precursor proteins into the chloroplast interior (Chen and Schnell, 1999; Keegstra and Cline, 1999; Keegstra and Froehlich, 1999).
At the present time the best candidate for this role is Hsp93, a member of the Hsp100 family of chaperones that is consistently found in import complexes isolated from pea chloroplasts (Akita et al., 1997; Nielsen et al., 1997; Kouranov et al., 1998). This chaperone has at least two homologs (Table I) predicted to be present in Arabidopsis chloroplasts, AtHsp93-V (approximately 88% identity to pea Hsp93) and AtHsp93-III (approximately 83% identity to the pea protein; Nakabayashi et al., 1999). These two proteins, along with pea Hsp93, belong to the caseinolytic protease (Clp) C class of Hsp100 chaperones. Hsp100 proteins of other classes, specifically the ClpB and ClpD classes, that are predicted to be chloroplast-localized can also be detected in the Arabidopsis genome, as can potentially chloroplastic members of the Hsp70 and Hsp60 chaperone families. This diversity of stromally localized chaperones raises the question of whether Hsp93 is the only chaperone that interacts with the protein import complex or whether other types of chaperones could substitute for it in different species. Further work will be needed to confirm that the AtHsp93 homologs directly interact with the import complex in Arabidopsis chloroplasts as Hsp93 does in pea chloroplasts. Although no stromal Hsp70 proteins have been found to interact with import complexes (Akita et al., 1997; Nielsen et al., 1997), there is evidence to suggest that Hsp70 molecules do bind to precursor proteins before and/or during envelope translocation (Schnell et al., 1994; Wu et al., 1994; Kourtz and Ko, 1997; Ivey et al., 2000; May and Soll, 2000). Furthermore, an outer membrane-associated Hsp70 protein, which faces the intermembrane space of the chloroplast envelope, is believed to interact with precursor proteins as they move between the outer and inner membrane translocons (Marshall et al., 1990; Schnell et al., 1994). 
Within the Arabidopsis genome there are several coding regions that encode proteins similar to known Hsp70 molecules from other species. These Arabidopsis Hsp70 proteins can be classified into one of four groups: proteins of approximately 650 residues that likely represent cytosolic Hsp70 molecules, proteins that are 668 or 669 residues long and contain an obvious signal peptide at their N-termini, molecules with clear chloroplastic (two proteins) or mitochondrial (one protein) targeting motifs, and proteins that do not fit into any of the previous three groups. Of the proteins within the last group, only one shows some characteristics of a chloroplast transit peptide at its N terminus. Sequence alignment between this protein and the two obvious chloroplast-targeted Hsp70 molecules is shown in Figure 2. The only known intermembrane space protein that has been cloned is Tic22 (Kouranov et al., 1998). An analysis of the transit peptide for pea Tic22 reveals that it has a relatively high incidence of acidic amino acids: three within the 50 residues of its length (Kouranov et al., 1998). AtTic22 has five acidic residues within the same region. The paradigm for chloroplast transit peptides is that they are deficient in acidic amino acids, having no more than two over their length (Keegstra et al., 1989). Thus, the transit peptides for pea and Arabidopsis Tic22 are somewhat unusual, and this fact may account for why these proteins are targeted to the intermembrane space of the chloroplast envelope rather than the stroma, although this has not been experimentally verified. We analyzed the transit peptides of the possible chloroplastic Hsp70 proteins to see if we could detect, based on what is observed from the transit peptide of Tic22, which one (or ones) might be targeted to the intermembrane space. However, all three of these proteins display a low incidence of acidic amino acids within their presumed transit peptides (Fig. 2). 
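The acidic-residue screen described above is straightforward to reproduce. Below is a sketch in Python; the example sequence is an invented placeholder, not an actual Tic22 or Hsp70 transit peptide.

```python
# Count acidic residues (aspartate D, glutamate E) within the first
# 50 residues of a putative transit peptide. The example sequence is
# a made-up placeholder for illustration only.

def count_acidic(transit_peptide, window=50):
    """Return the number of D and E residues in the first `window` residues."""
    return sum(1 for aa in transit_peptide[:window].upper() if aa in "DE")

example = "MASSMLSSATFVAKDNNAQVSMSLRESGRVKCMQVWPPLGKKKFETLSYL"
print(count_acidic(example))  # 3
```

By this crude tally, pea Tic22 (three acidic residues in 50) and AtTic22 (five) would be flagged as unusual relative to the paradigm of no more than two, while the candidate chloroplastic Hsp70 proteins would appear typical.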
Thus, either the presence of acidic amino acids within the transit peptide is not the determining factor for intermembrane space targeting or Arabidopsis may not contain an intermembrane space-localized Hsp70 protein as has been suggested for pea (Marshall et al., 1990; Schnell et al., 1994). Further experimental work will be needed to differentiate between these possibilities. The SPP (also known as the chloroplast processing enzyme [CPE]) is a metalloendopeptidase that cleaves transit peptides off precursor proteins as they enter the chloroplast stroma (Oblong and Lamppa, 1992; VanderVere et al., 1995; Richter and Lamppa, 1998). This component has one homolog in Arabidopsis, named AtCPE, which shares approximately 75% identity with the pea protein (Richter and Lamppa, 1998). The SPP currently is the only constituent of the import machinery whose molecular function has been studied in enough detail to be unequivocally assigned (Richter and Lamppa, 1998, 1999). Analysis of the Arabidopsis sequence database has revealed that homologs of high sequence similarity can be found for each of the chloroplast protein import components that were originally identified in pea. This suggests that the protein import system is conserved between pea and Arabidopsis, making Arabidopsis a valid model for its study. It is likely that the import complex is conserved in other plant species as well. EST sequences similar to the known import components can be found in many species, including maize, soybean, and rice. In addition, antibody cross-reactivity studies on species as diverse as mosses and tomato have suggested that at least some of the subunits of the import machinery can be found in all chloroplast-containing eukaryotes (J. Davila-Aponte and K. Keegstra, unpublished data). Various lines of evidence have also indicated that cyanobacteria contain homologs of at least some of the import components (Bölter et al., 1998b; Reumann and Keegstra, 1999; Reumann et al., 1999). 
Thus, the chloroplast protein import system is likely to be conserved, at least in part, in all plant (and related) species. For at least seven (Toc159, Toc75, Toc34, Toc64, Tic20, Tic22, and Hsp93) of the 11 known import components, multiple homologs can be found within the Arabidopsis genome. In all but one of these cases it is known that more than one of these homologs is expressed within Arabidopsis cells (Jarvis et al., 1998; Bauer et al., 2000; Gutensohn et al., 2000). This observation immediately suggests that multiple isoforms of the same subunit may be present in the same cells at the same time (Jarvis et al., 1998; Bauer et al., 2000; Chen et al., 2000b; Gutensohn et al., 2000). If this is the case, then one may imagine the existence of multiple types of import complexes within the chloroplast envelope, each with their own particular precursor specificity. For example, if all three Arabidopsis Toc159 homologs are expressed within the same cell, then the chloroplasts within that cell may have a mixture of import complexes: some containing AtToc159, others containing AtToc132, and still others containing AtToc120. However, because the stoichiometry of the subunits within the outer membrane translocon is not known, it is also possible that all three may exist within the same import complex. It is obvious that such questions cannot be answered by sequence analysis alone, and further experiments will be needed to address these issues. The possibility of multiple isoforms for some of the protein import components within Arabidopsis chloroplasts also raises the question of whether the same situation is present in pea plants. Is Arabidopsis "unusual" in having multiple genes for at least some of the subunits of the import complex or is this the case in pea as well? So far, only one isoform has been identified for each component of the pea import apparatus. However, this fact does not mean that additional homologs do not exist within the pea genome. 
Since the pea import components were all initially isolated via biochemical means, it is possible that isoforms not present at high concentrations or at the particular stage of development studied would be missed. At this time there is not enough pea sequence information in GenBank to determine if multiple genes for the import components may indeed also be found in this species. It is interesting to note that none of the coding regions for the Arabidopsis import components are found close to one another within the genome. Even for the components that have multiple putative isoforms, the genes encoding these proteins are located on separate chromosomes (see Table I). This is in contrast to the situation known for several other gene families (Lin et al., 1999; Mayer et al., 1999; The Arabidopsis Genome Initiative, 2000). Often, homologs of a particular coding region can be found nearby in the genome, if not in tandem (Lin et al., 1999; Mayer et al., 1999; The Arabidopsis Genome Initiative, 2000). In the case of the chloroplast protein import complex, however, the genes encoding the various subunits are found scattered throughout the genome. The explanation for this observation is not clear. Perhaps recombination in the areas immediately surrounding the genes for the import components is suppressed due to the essential nature of the import complex genes themselves or other genes in their local environment. Additional work will be needed to test this hypothesis. It has been known for many years that the components of the pea chloroplast protein import complex show little sequence similarity to proteins of known function from other organisms (with the exception of the molecular chaperones and the SPP), including the subunits of the protein import systems of other organelles (Reumann and Keegstra, 1999; Reumann et al., 1999). 
Thus, it has not been possible to use information gained from the genetic study of other protein import systems to learn more about the functions of the individual subunits in the chloroplast import complex. Identification of the Arabidopsis homologs for the pea import components has now made it practical to analyze the functions of these proteins genetically, especially through the use of knockout mutants and antisense technology. Such experiments are already being carried out in several laboratories, and three reports have recently emerged from these investigations (Jarvis et al., 1998; Bauer et al., 2000; Gutensohn et al., 2000). The study of knockout mutants and antisense plants for each of the import components should lead to a better understanding of their molecular functions. Cross-complementation studies in knockout mutants will also be useful in determining whether the putative Arabidopsis import complex isoforms are the functional homologs of the corresponding pea proteins, as is predicted. However, it should be noted that since several of these proteins appear to have multiple isoforms within Arabidopsis cells, double and triple mutants may need to be constructed in some cases before component function can be analyzed in detail. Despite this limitation, the genetic study of chloroplast protein import in Arabidopsis should provide a great deal of information concerning this system in the coming years. All sequence comparisons were done using the BLASTN, BLASTP, and TBLASTN programs (versions 2.0) available from the National Center for Biotechnology Information (; Altschul et al., 1990; Altschul et al., 1997). The weight matrix used was the blosum62 matrix, and no settings were changed from the default. The database searched was the Arabidopsis Database Project, found at The Arabidopsis Information Resource (), which contains genomic and EST sequences. 
This database was checked for the final time between October 30, 2000 and November 5, 2000, just before manuscript submission. During manuscript revision, a recheck of the database between January 11, 2001, and January 18, 2001, found no additional homologs. A sequence was considered a homolog only if the following conditions were met, unless otherwise noted: (a) using the pea (Pisum sativum) sequence as the query, one of the BLAST programs used detected this sequence with an expect value of less than or equal to 0.0001; (b) using the putative Arabidopsis homolog as the query, one of the BLAST programs used detected the pea sequence and other Arabidopsis isoforms with an expect value of less than or equal to 0.0001; (c) the region of similarity between the pea protein and the putative Arabidopsis homolog extended for approximately 50% or more of the sequence lengths; (d) the region of similarity to the pea protein extended beyond common motifs (i.e. nucleotide-binding domains); and (e) the putative Arabidopsis homolog was not already annotated as being more similar to another protein of known function. Levels of identity between different amino acid sequences were calculated with the MegAlign program (Lipman-Pearson algorithm; ktuple = 2, gap penalty = 4, gap length penalty = 12) of the Lasergene software package (DNASTAR, Inc., Madison, WI). Predictions concerning chloroplast targeting were made using the TargetP program (version 1.01), available at (Emanuelsson et al., 2000). We thank K. Bird, Dr. J. Froehlich, Dr. K. Inoue, and Dr. A. Sanderfoot for their helpful comments on this manuscript. Received November 8, 2000; returned for revision January 5, 2001; accepted January 23, 2001. 1 This work was supported in part by the Division of Energy Biosciences at the U.S. Department of Energy (grants to K.K.), by the Cell Biology Program at the National Science Foundation (to K.K.), and by the Graduate Fellowship Program at the National Science Foundation (to D.J.-C.). 
* Corresponding author; e-mail keegstra{at}msu.edu
http://www.plantphysiol.org/cgi/content/full/125/4/1567
Answered by: Problem using XPath to investigate Message Headers. I am having what appears to be a strange problem. My goal was to use XPath to investigate custom headers in a message. I would create an XPathDocument by passing it an XmlReader from the message.Headers.GetReaderAtHeader() method. Now, if I try to use an XPathNavigator and its select methods on a Message object that was created by the WCF stack, it doesn't work. If I create a Message (the same EXACT message) via Message.CreateMessage(...) using a string representation of the SOAP message, the XPath queries work fine. I know why it's not working, but not sure if it's a bug or if there is a reason behind it. The Message instance that is built by the WCF stack is a BufferedMessage instance, while that returned by Message.CreateMessage(...) is a StreamedMessage instance. When debugging, I noticed that the StreamedMessage version successfully parsed out my custom header's prefix and namespace (which is needed for XPath resolution), while the BufferedMessage version simply had tag names and no namespace information. For example, if I have a custom header that looks like this:

<myPrefix:myHeader xmlns:myPrefix="uri:MyUri">
  <myPrefix:anotherTag />
</myPrefix:myHeader>

The StreamedMessage's HeaderInfo classes report the name as "myHeader", prefix "myPrefix", and namespaceURI "uri:MyUri". The BufferedMessage reports blank for everything but name, which is reported as "myPrefix:myHeader". Any comments/suggestions?

All replies:

Hi, and sorry for the late reply. Before I post code, I decided to move to the June CTP to see if it fixes my problem. I had to port some changes, and have one issue that I can't seem to resolve. I was using the MatchAllEndpointBehavior. It was removed. Is there a replacement? Ok, I've made the change to AddressFilterMode and everything is back to how it was.
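For readers unfamiliar with why the blank namespace breaks the queries: namespace-aware XPath-style lookups match on the (namespace URI, local name) pair, not on the prefix text. A stand-alone Python illustration of the same failure mode (an analogy, not WCF code; the URI is taken from the example header above):

```python
# Namespace-aware queries match on (namespace URI, local name). An
# element whose *local name* is literally "myPrefix:anotherTag" with a
# blank namespace can never match a namespaced query.
import xml.etree.ElementTree as ET

# Well-formed header: the prefix is declared, so the parser records the
# real namespace URI for each element.
good = ET.fromstring(
    '<myPrefix:myHeader xmlns:myPrefix="uri:MyUri">'
    '<myPrefix:anotherTag /></myPrefix:myHeader>'
)
ns = {"p": "uri:MyUri"}
hits = good.findall("p:anotherTag", ns)
print(len(hits))  # 1

# Broken header, as the BufferedMessage reported it: the whole
# "prefix:name" string became the local name, namespace is blank.
bad = ET.Element("myPrefix:myHeader")
ET.SubElement(bad, "myPrefix:anotherTag")
print(bad.findall("p:anotherTag", ns))  # []
```

The same logic explains the WCF symptom: once the reader reports "myPrefix:myHeader" as a bare name with no namespace, no namespaced XPath query can resolve against it.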
Here is the text for the BufferedMessage with which I'm having my problem:

> <app> </app:package> <a:To s:urn:EchoService</a:To> </s:Header> <s:Body> <Echo xmlns=""> <echoBack>Testing</echoBack> </Echo> </s:Body> </s:Envelope>

I did narrow the problem down to another problem pre-XPath use. When I execute the following code:

int packageIndex = message.Headers.FindHeader("package","");

I get a result of -1. However, if I execute the following, I get 3:

message = Message.CreateMessage(XmlReader.Create(new StringReader(message.ToString())), 1000000, message.Headers.MessageVersion);
int packageIndex = message.Headers.FindHeader("package","");

Also, if I execute the following on the original message (without executing Message.CreateMessage) I also get 3:

int packageIndex = message.Headers.FindHeader("app:package","");

However, if I try to get a reader using GetReaderAtHeader(packageIndex) at header 3 via this method (not specifying the namespace), and use that to create an XPathDocument, none of my XPath queries resolve because the reader does not have namespace information. If I use the Message.CreateMessage workaround, and create a reader for header 3 from that message, all of my XPath queries resolve. Does that help?

OK. I think we're homing in on the issue. Off the top of my head, I'd say either the custom header's OnWriteXXX implementations have a bug, or the buffering process does. Do you have any custom channels on the stack that add this header? What encoding/transport are you using? Have you tried turning off logging/tracing? Somehow the localname of the header has become "app:package". This could either be a bug in the product (calling Name instead of LocalName on the reader), or a bug in the header implementation. You only get a BufferedMessage if you are using something that needs a copy of the message (like logging), so let's try turning that off and see if we can rule that out. Also check the implementation of your custom header.
The "Name" property is actually the local name, so if you're using a QName for this it could be the problem. Also, if you override OnWriteStartHeader make sure you're calling the WriteStartElement overload with 3 arguments (assuming you want to use a specific prefix). If none of these issues turns out to be the problem, please send me your full binding and any configuration and I'll try to replicate the issue locally.

No custom channels on the stack that add this header. The header is added by the client via a MessageInspector. I'm using Binary Encoding with a Named Pipe Transport. I do not have logging or tracing turned on. Now, since I don't have logging or tracing turned on, is there anything else that would give me a BufferedMessage instance? This could be a separate problem (or not), but if I resolve it this way at least it's a temp fix. What I'm writing is a SOAP intermediary, so I am receiving the message in a blind fashion; the intermediary doesn't care who the message is from or how it's structured. It only cares that certain header information exists. To accomplish this, the service is using a contract that has a catch-all method whose Action is '*'. In addition, the ServiceBehavior has AddressFilterMode set to Any. Although the local name/QName comment could be something, it shouldn't be, since it works in the StreamedMessage case. I do have something that may help, however. If I walk through the SOAP intermediary's execution, and drill down through the BufferedMessage object in the Locals watch, I notice something strange. By going to message.Headers.Non-public Members.headers[3] (my header), I'm able to see that the debugger is catching InvalidCastExceptions at both the MessageHeader and ReadableHeader properties, with the detail of "Unable to cast object of type 'HeaderInfo' to type 'System.ServiceModel.Channels.MessageHeader'." on the MessageHeader property. If this doesn't help, I can send you whatever you need.
Thanks. I did some poking around in the code and the BufferedMessage can also be created when using buffered mode in the transport, but I don't think that's the issue. The reason the StreamedMessage case works is that the Xml isn't technically being created correctly, but winds up being valid. The Xml resulting from msg.ToString() is what you wanted, so reparsing it in CreateMessage creates the message that was intended. Binary is much more permissive than Text. It doesn't validate what isn't needed to parse the received bytes (for performance). "a:b" is a valid localname since the prefix and localname are being sent as separate tokens in the protocol. It doesn't need to be able to split them at a ":". Given this and the behavior you describe, I'm almost positive this is a bug in how the custom header writes itself out. What is your implementation like? Do you use MessageHeader.CreateHeader, or do you have your own subclass? If you're using CreateHeader, set a breakpoint on the call. You should be passing the localname without a prefix. If you have your own subclass, watch the Name property and OnWriteStartHeader method for similar things. As a side note, binary has optimizations in it for when it is allowed to choose its own prefixes. If you don't actually need a particular prefix, we recommend that you don't specify one.

I think you may have nailed it. I'm using "c:d" (I switched to "c:d" because I'm using the letter "a" later on in this post) on my call to CreateHeader specifically because I wanted the prefix on the "package" entity. The object I am passing to CreateHeader is an implementation of IXmlSerializable. In WriteXml, I execute the following: writer.WriteAttributeString("xmlns","app",null,); I do this because I want to define the "app" prefix since I use it in package's sub-elements. I know it's a workaround (look at the WriteAttributeString parameter list).
If I use just "d" rather than "c:d" I do not get the prefix (obviously) but more importantly, during serialization, I end up with two prefix definitions in the package element: one for my "app" prefix because of my WriteAttributeString and one for the prefix "a" (WCF default). If I leave CreateHeader with "c:d" the definition for prefix "a" is left off. Is there any way then to define *just* my prefix and not use "c:d" on CreateHeader? In my case I do need a prefix because of contextual guarantees I must make in the SOAP message.

If you need to use a specific prefix you won't be able to do it with CreateHeader. I'd recommend writing a subclass of MessageHeader (should be simple). In OnWriteStartHeader you'll need to call WriteStartElement as appropriate (and define whatever else you need). In OnWriteHeaderContents just create a serializer and call WriteObjectContent. The CreateHeader methods were intended to be simple helpers for commonly created headers. We considered scenarios where you needed control over prefix selection more advanced.

- Aaron, I switched to extending MessageHeader, and still no luck. I experience the same behavior when trying to get a reader at the header. It says it can't find it. I'm not sure that it matters how the header is being built on the client (for example, it could be a Java client which obviously doesn't have MessageHeader). The only difference I see is that now "package" does not have the prefix, which is fine, and it is defined only in the XMLNS string. Any other suggestions?

This feels like it's related to binary. Not that there's a bug in binary, but that the reduced validation that goes along with binary is causing some other bug to slip through and pop up later. You can try switching to the text encoder and seeing if anything changes. Also, are you ever calling WriteStream? Let's try figuring out what everything looks like.
Using your custom header, set a breakpoint where you do the Find and locate the header in the msg.Headers collection. Post the values of the Name and Namespace properties along with the new msg.ToString(). If you would also paste the implementation of your OnWriteStartHeader method it may give me some insight.

I tried switching to the TextEncoder, and nothing changed. I have the following information for Name & Namespace with a breakpoint at the find: Name = "package" Namespace = "" Please note that Namespace is a blank string, not null. Here is the new msg.ToString():

> </package> <a:To s:urn:EchoService</a:To> </s:Header> <s:Body> <Echo xmlns=""> <echoBack>Testing</echoBack> </Echo> </s:Body> </s:Envelope>

I did not overload OnWriteStartHeader because the writer is in 'Content' state at that point, which does not allow me to modify the xmlns attribute. Instead, I used the following MessageHeader override:

protected override void OnWriteHeaderContents(XmlDictionaryWriter writer, MessageVersion messageVersion)
{
    writer.WriteXmlnsAttribute("app",);
    writer.WriteAttributeString("a", "IsReferenceParameter", null, "true");
    if(subpackage != null)
    {
        subpackage.WriteXml(writer);
    }
}

It's a quite simple implementation. Should I not be using 'WriteXmlnsAttribute'? Let me know if this helps.

Thanks Aaron, it finally works! Your OnWriteStartHeader mod fixed it. I walked through to notice the differences. I had a mistake in my post, but in my first sample message, I said the SOAP message had: <app:package a: but in fact it was: <package a: Now, using your OnWriteStartHeader approach, it adds the prefix and resolves fine. Either way, shouldn't it have worked though? It still seems to only happen with the BufferedMessage, since <package a: is considered valid with the StreamedMessage.
https://social.msdn.microsoft.com/Forums/vstudio/en-US/c2a39df8-3943-4c41-acca-6da8e96f0dff/problem-using-xpath-to-investigate-message-headers?forum=wcf
I’ve been fiddling with deepspeech a bunch of late, trying to improve its accuracy when it listens to me. TL;DR: fine-tune the mozilla model instead of creating your own. If this is in your field of interest, start with this post over on the mozilla discourse: An extraneous list of steps I used is below. More than a few are duplicated from the above. This isn’t definitive, and will certainly need to be redone over time. You do NOT need massive resources to compile a small (<10k clips) model, but the larger your GPU the better. You will need deepspeech, the native_client, kenlm, and python3 installed. I used a lot of bash for loops to do the majority of the bulk processing steps.

First, I built up my library of clips. Mimic recording studio can be used to do this. I had recorded approximately 800 source clips using a combination of common commands I say to mycroft, the top 500 words in the English language, and several papers from arXiv on computing topics. These were recorded in 16-bit, 48 kHz stereo. Start with the best quality you can, it’s easier to make that worse than try and fix bad source material. If you have saved clips from mycroft, or can easily record noisy or bad voice quality clips, then you should do so. Described here is a way to augment your data with lower-quality clips. I have a pair of small diaphragm condensers connected to a usb audio interface. One mic channel is set for -10 dB and oriented approximately 90 degrees from the other in order to make for a slightly different recording on each channel. I used a short shell script to record a sentence twice, and write out the filename and the transcription to a csv file. In the csv file you make, it is vitally important that you limit the amount of odd characters, punctuation, and the like. It also helps to run it through an upper-to-lower step as well.
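That transcript clean-up can be sketched in Python. The exact character class kept here is an assumption for illustration; it should match whatever ends up in your alphabet.txt.

```python
# Normalize a transcript for the metadata csv: lowercase everything and
# keep only a small character set (letters, space, apostrophe, period).
# The kept set is an assumption; match it to your alphabet.txt.
import re

def normalize_transcript(text):
    text = text.lower()
    text = re.sub(r"[^a-z' .]", "", text)   # drop anything outside the set
    return re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace

print(normalize_transcript("Hello, World!  It's 5 o'clock?"))  # hello world it's o'clock
```

Running every transcript through something like this before training avoids the character-related errors DeepSpeech raises when it parses the csv files.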
Super annoying recording script:

#!/bin/bash
if [[ $# -lt 1 ]] ; then
  echo "Usage: $0 sentencesfile"
fi
if [[ $# -eq 2 ]] ; then
  SEC=$2
else
  SEC=5
fi
RED='\033[0;31m'
NC='\033[0m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
C1=1
DS=$(date +%s)
SFN=$(echo $1 | tr -s '\057' '\012' | tail -1)
echo $SFN
if [[ ! -d tmp ]]; then
  mkdir tmp
fi

record() {
  for i in 1 2 ; do
    echo -e "${NC} 2...."
    sleep 1
    echo -e "${BLUE} 1...."
    sleep 1
    echo -e "${RED} Recording for $SEC seconds!"
    iterationname="${DS}-${C1}-${i}"
    #echo -e "${RED} $iterationname"
    arecord -f dat -d $SEC tmp/${iterationname}.wav
    echo -e "${GREEN} done recording. ${YELLOW} "
    echo "${iterationname},${line}" >> tmp/metadata.csv
  done
}

echo "Reading lines from file $1"
sleep 2
while read line ; do
  echo -e "${RED}___________________"
  echo -e "${GREEN}CURRENT SENTENCE: "
  echo " *************** "
  echo -e " ** ${YELLOW}${line} "
  echo -e "${GREEN} *************** "
  echo -e " line $C1${NC}"
  record
  C1=$((C1 + 1))
done < $1
echo -e "${NC}Done!"

After recording, I used webrtcvad to trim the silence (

python3 example.py 1 yourwavefilehere.wav

This sometimes results in two wave files emerging, usually just one if you've recorded cleanly enough. You should watch for multiples and pick the correct one as needed. I then used sox ( to split the files into left and right channel wavs at 16-bit, 16 kHz mono.

$ sox $i l-$i remix 1
$ sox $i r-$i remix 2

Additionally, I recorded a short clip of background noise in my house (hey, it's where I'll use this most). With one channel's wav files, I combined the noise and saved those as an additional set of clips.

$ sox -m $j noise.wav n-$j.wav trim 0 $(soxi -D $j)

After combining, I used the chorus function of sox to make the quality of the clips slightly worse for another set of clips (play with the values to make it work for you).

$ sox $k c-$k chorus .8 .1 25 .4 .1 -t

From the other channel, I recorded a fifth set of clips with bad quality (without extra noise).
Now I have left, right, noisy left, noisy bad quality left, and bad quality right. If you were so motivated, you could also throw in some speed variations on these for good measure. On the training machine (local or cloud), I made a directory (/opt/voice/dsmodel/), and sub-directories within for wavs, test, dev, and train. I placed all my wav files into the wavs directory. The csv file full of names and transcriptions goes in the dsmodel dir. For bonus points, you can run the csv file through shuf to randomize the clip order. For deepspeech, you want to put 10% into test, 20% into dev, and the remainder into train. DS wants a particular format for the csv, so I passed each line through a short script to get the filename, find the size of the file in question, and write the file name, size, and transcription to the relevant csv (train.csv/test.csv/dev.csv, each in their respective directory). You end up with something like:

wav_filename,wav_size,transcript
c-r-1550042834-10-1.wav,86444,we are going to turn before that bridge
c-r-1550042834-13-2.wav,78764,four five six seven eight
c-r-1550042834-15-2.wav,69164,nine ten eleven twelve
c-r-1550042834-17-2.wav,88364,thirteen fourteen fifteen sixteen
c-r-1550042834-20-2.wav,69164,twenty thirty forty fifty

Don't knock my sample sentences til you try 'em. The header line is necessary in each csv. To make the alphabet.txt file, you'll want to grab all the transcriptions and sort into unique characters.

cut -d, -f3 test/test.csv >> charlist ; cut -d, -f3 train/train.csv >> charlist ; cut -d, -f3 dev/dev.csv >> charlist

I found the charparse.cpp* file on the web, and can't find the source now, will edit if I do. Compile that (g++ -o charparse charparse.cpp), then you can do:

charparse < charlist > alphabet.txt

IMPORTANT: review your alphabet.txt and make sure it only has letters and minimal punctuation (I have period and apostrophe, then lower case a-z). End on a blank line. 29 lines total for mine, you should be similar.
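Before training, it's worth confirming that every character in your transcripts is actually covered by alphabet.txt, since DeepSpeech will error out on characters it doesn't know. A small sketch (paths and the example alphabet are assumptions):

```python
# Sanity-check that transcript characters are all present in the
# alphabet. The alphabet below mirrors the one described in the text:
# period, apostrophe, lowercase a-z, plus the space character.
def missing_chars(transcripts, alphabet):
    """Return characters used in transcripts but absent from the alphabet."""
    used = {ch for line in transcripts for ch in line if ch != "\n"}
    return sorted(used - set(alphabet))

alphabet = set("abcdefghijklmnopqrstuvwxyz. '")
print(missing_chars(["turn on the light", "what's the time?"], alphabet))  # ['?']
```

Anything this reports should either be stripped from the transcripts or added to alphabet.txt before you start a training run.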
Non-latin character sets will certainly differ. Now we build files to model with. Not going to pretend I know what each of these is for, feel free to look up yourself. Start in your dsmodel directory and run:

$ lmplz --text vocabulary.txt --arpa words.arpa --o 3
$ build_binary -T -s words.arpa lm.binary
$ generate_trie alphabet.txt lm.binary trie

In your DeepSpeech folder (I used /opt/DeepSpeech), edit your run file. Here's mine for fun:

#!/bin/sh
set -xe
if [ ! -f DeepSpeech.py ]; then
    echo "Please make sure you run this from DeepSpeech's top level directory."
    exit 1
fi;
.1 \
    --estop_std_thresh 0.1 \
    - \
    "$@"

Two things to keep in mind here are batch_size and n_hidden. batch_size basically scales how much of the data to load per training step. I have an 8GB GPU, and on my dataset this worked. On yours it might be bigger or smaller. More data usually means smaller. Try and keep n_hidden even. The larger you can scale n_hidden, the better your model may be. Try powers-of-2-type numbers (256, 512, 1024, etc.). Higher can be better. If you don't keep batch_size even, you may experience a tiresome warning on inference later. The early stop parameters are there to help prevent overfitting. I'd recommend keeping them on for any small training set. The learning rate, dropout rate, and stddev bits you can use if need be; review after the first model completes. After all of that…start a screen session and run your script. In another screen session, set up tensorboard on the output directory. Switch back to your training script and see what error it's popped up. It's fairly good about indicating what it's working on when it errors, i.e., it parses the csv files and will indicate what character it doesn't like in them. Fix anything that comes up, and try again. Depending on your data's size and your compute resources, go to sleep for the night or check back in ten minutes.

mycroft@trainer:~/DeepSpeech$ ./run-me.sh
+ [ ! -f DeepSpeech.py ]
+.05 --estop_std_thresh 0.05 -
Preprocessing ['/opt/voice/dsmodel/train/train.csv']
Preprocessing done
Preprocessing ['/opt/voice/dsmodel/dev/dev.csv']
Preprocessing done
I STARTING Optimization
I Training epoch 0...
 15% (7 of 46) |########################################                   | Elapsed Time: 0:00:04 ETA: 0:00:29

Each epoch took about a minute for me, validation on each epoch about one-tenth that. Pull up tensorboard from the training machine and watch it make cool graphs. And after modeling finished, it looks pretty good:

--------------------------------------------------------------------------------
WER: 0.000000, CER: 0.000000, loss: 0.003964
 - src: "wiki search pineapples"
 - res: "wiki search pineapples"
--------------------------------------------------------------------------------
WER: 0.000000, CER: 0.000000, loss: 0.004020
 - src: "the quick brown fox jumped over the lazy dog"
 - res: "the quick brown fox jumped over the lazy dog"
--------------------------------------------------------------------------------
WER: 0.000000, CER: 0.000000, loss: 0.004198
 - src: "sum three and two"
 - res: "sum three and two"
--------------------------------------------------------------------------------
WER: 0.000000, CER: 0.000000, loss: 0.004204
 - src: "a large fawn jumped quickly over white zinc boxes"
 - res: "a large fawn jumped quickly over white zinc boxes"
--------------------------------------------------------------------------------
I Exporting the model...
I Models exported at /opt/voice/dsmodel/results/model_export/

From here, you can copy your model to your deepspeech server host, and start doing ASR on your voice's content. To make an mmapped model (from

$ convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm

So how does it work? Eh…depends. Largely due to my limited training set, it can work on those lines pretty well. Anything beyond that it tends to get way off course.
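For reference, the WER figure in those reports is word-level edit distance divided by the number of reference words. A minimal sketch of the computation (not DeepSpeech's own implementation):

```python
# Word error rate: minimum number of word substitutions, insertions,
# and deletions to turn the hypothesis into the reference, divided by
# the reference length. Classic dynamic-programming edit distance.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution/match
    return d[len(r)][len(h)] / max(1, len(r))

print(wer("wiki search pineapples", "wiki search pineapples"))  # 0.0
```

A WER of 0.0 across the whole test set, as in the log above, is exactly what you'd expect from a tiny, clean dataset; it says little about performance on unseen speech.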
“could you tell me the weather in San Francisco” resulted in…

    initialize: Initialize(model='/opt/voice/models/output_graph.pb', alphabet='/opt/voice/models/alphabet.txt', lm='/opt/voice/models/lm.binary', trie='/opt/voice/models/trie')
    creating model /opt/voice/models/output_graph.pb /opt/voice/models/alphabet.txt...
    TensorFlow: v1.12.0-14-g943a6c3
    DeepSpeech: v0.5.0-alpha.1-67-g604c015
    Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
    model is ready.
    STT result: i'm able girls able ship water hallway best surface

charparse.cpp, a small utility that prints the distinct non-whitespace characters of its input in lower case (useful when assembling an alphabet file):

    #include <cctype>
    #include <iostream>
    #include <set>

    int main() {
        std::set<char> seen_chars;
        std::set<char>::const_iterator iter;
        char ch;

        /* ignore whitespace and case */
        while (std::cin.get(ch)) {
            if (!isspace(ch)) {
                seen_chars.insert(tolower(ch));
            }
        }
        for (iter = seen_chars.begin(); iter != seen_chars.end(); ++iter) {
            std::cout << *iter << std::endl;
        }
        return 0;
    }
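For what it's worth, the same character-set extraction can be sketched in a few lines of Python (the function name unique_chars is mine, not from the thread):

```python
def unique_chars(text):
    """Sorted distinct characters of text, whitespace dropped, case folded."""
    return sorted({ch.lower() for ch in text if not ch.isspace()})

# e.g. unique_chars(open("vocabulary.txt").read()) lists every
# character that an alphabet file would need to cover
```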
https://community.mycroft.ai/t/customized-speech-models-for-deepspeech/6083
Under IEEE-754, floating-point numbers are represented in binary as:

    number = signbit * mantissa * 2^exponent

There are potentially multiple ways of representing the same number. Using decimal as an example, the number 0.1 could be represented as 1*10^-1, as 0.1*10^0, or even as 0.01*10^1.

Now suppose that the lowest exponent that can be represented is -100. Then the smallest number that can be represented in normal form is 1*10^-100. However, if we relax the constraint that the leading bit be a one, then we can actually represent smaller numbers in the same space. Taking a decimal example, we could represent 0.1*10^-100. This is called a subnormal number. The purpose of having subnormal numbers is to smooth the gap between the smallest normal number and zero.

It is very important to realise that subnormal numbers are represented with less precision than normal numbers. In fact, they trade reduced precision for their smaller size. Hence calculations that use subnormal numbers are not going to have the same precision as calculations on normal numbers. So an application which does significant computation on subnormal numbers is probably worth investigating to see if rescaling (i.e. multiplying the numbers by some scaling factor) would yield fewer subnormals, and more accurate results.

The following program will eventually generate subnormal numbers:

    #include <stdio.h>

    int main(void)
    {
        double d = 1.0;
        while (d > 0) {
            printf("%e\n", d);
            d = d / 2.0;
        }
        return 0;
    }

Compiling and running this program will produce output that looks like:

    $ cc -O ft.c
    $ a.out
    ...
    3.952525e-323
    1.976263e-323
    9.881313e-324
    4.940656e-324

The downside with subnormal numbers is that computation on them is often deferred to software, which is significantly slower. As outlined above, this should not be a problem, since computations on subnormal numbers should be both rare and treated with suspicion.
However, sometimes subnormals come out as artifacts of calculations, for example when subtracting two numbers that should be equal but, due to rounding errors, are just slightly different. In these cases the program might want to flush the subnormal numbers to zero, and eliminate the computation on them. There is a compiler flag, -fns, that needs to be used when building the main routine; it enables the hardware to flush subnormals to zero. Recompiling the above code with this flag yields the following output:

    $ cc -O -fns ft.c
    $ a.out
    ...
    1.780059e-307
    8.900295e-308
    4.450148e-308
    2.225074e-308

Notice that the smallest number when subnormals are flushed to zero is 2e-308, rather than the 5e-324 that is attained when subnormals are enabled.

[Comment] A quick search of IEEE-754 shows that it makes no mention of "subnormals". They do talk about "denormalized" numbers, which seem to be what you are talking about. I think that there are far more subtle effects of using denormals than you included in your blog, and that an article on floating point should include a strong warning to the effect that the use of floating point in general is not for the faint of heart. I suggest thorough study, especially if you are going to consider using denormals.

[Reply] Yes, denormal == subnormal. Yes, the recommended text is "What Every Computer Scientist Should Know About Floating-Point Arithmetic". Thanks! Darryl.
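The same halving experiment is easy to replay in Python, whose floats are IEEE-754 doubles; this is a sketch of mine, not part of the original post:

```python
import sys

d = 1.0
smallest = d
while d > 0.0:
    smallest = d      # remember the last non-zero value
    d = d / 2.0

# the smallest normal double is sys.float_info.min (about 2.225074e-308);
# the loop bottoms out well below that, at the smallest subnormal
print(smallest)                       # 5e-324
print(smallest < sys.float_info.min)  # True
```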
https://blogs.oracle.com/d/subnormal-numbers
On Typelevel and Monix Planning the future is difficult, but can bring clarity and purpose. See Finding Focus in Harsh Times for context. On Typelevel # Typelevel is a great community of builders that want to practice FP in Scala. Its “steering committee” (link / archive) is a group of brilliant and kind people that are doing great work in keeping the community welcoming and inclusive. I’m stepping down from the Typelevel “Steering Committee”. Moderating and leading a community is gruesome work made by unsung heroes, and I can’t be a part of it anymore. For some time now I’ve been absent, with my only contributions to steering having been rants, and frankly I’d rather get back to coding or other contributions to Open Source that I can manage. Typelevel is growing, and you can make a difference. If you feel you’re a fit, get involved, as there’s a call for ‘steering committee’ members. The Future of Monix # Monix has been my love project, but due to events unfolding since 2019, with life and the world going mad, I’ve been on an unplanned hiatus from Open Source contributions. I did contribute monix-newtypes, as contributing a new project, scratching an immediate itch, felt easier 🙂 I’ll be forever grateful to Piotr Gawryś, who helped in maintaining and developing Monix, but eventually development stalled. Development of Monix was stalled primarily because small, incremental improvements are no longer possible. And this happened due to the release of Cats-Effect 3. Cats Effect 3 is an awesome new version of an already good library. But while being a necessary upgrade, it fundamentally changes the concurrency model it was based on. The changes are so profound that it’s arguably an entirely new library, and upgrading Monix isn’t easy, because compatibility means updating everything. This means not just Task, but also Cancelable, CancelableFuture, Observable, Iterant, and I mean everything. When Monix started, it had the goal of having “zero dependencies”. 
Having no dependencies is a virtue, precisely because those dependencies can seriously break compatibility. There is no way for Monix and CE 3 to currently coexist in the same project, due to the JVM's limitations and the decision for Monix to depend directly on Cats-Effect. If Monix were independent of such base dependencies, it could coexist while its maintainers could afford a hiatus.

I'm always reminded of Rich Hickey's thoughts from his Spec-ulation Keynote. TLDR: when you break compatibility, maybe it's better to change the namespace too. Given that in static FP we care about correctness a lot, how to evolve APIs is a really tough problem. I'm still thinking that Monix's major versions should be allowed to coexist by changing the package name (e.g. monix.v4), but few other projects are doing it.

I will be resuming the work on Monix, and will be calling for volunteers. And I hope I won't let people down again. My current plan is:

- Monix will be upgraded to the Cats-Effect 3 model, which will include project-wide changes;
- The dependency on Cats and Cats-Effect 3, however, will probably be separated into different subprojects; while this involves "orphaned instances", this decision is made easier by tooling (modularity, ftw):
  - the Scala compiler supports custom implicit imports via -Yimports; I don't recommend it, but the option is there;
  - Scala 3 automatically suggests possible imports for missing implicits;
  - IntelliJ IDEA, too, automatically suggests imports;
- I'd like to make monix-bio part of the main project;
- I have some new functionality in mind that will make Monix an unbeatable replacement for scala.concurrent, RxJava, Akka Streams, and the middle ground that people need in adopting FP in Scala.

Looking forward to having fun, I'm very excited about it, actually 🤩 If you'd like to help in building the next version of Monix, contact me.
https://alexn.org/blog/2022/04/05/future-monix-typelevel/
PageRank of Linked Open Vocabularies (LOV)

Datasets are easier to reuse if they use standards that are well-established, particularly in a given domain. A first approach is to ask around: ask people with whom you coauthor, people you trust in your field, etc. A follow-on approach is to examine the "graph reputation" of relevant standards, particularly if they may be represented as resources with outbound links. We can use the PageRank algorithm, just like Google uses it to index the web of documents. As an example, here I outline an initial approach to find the "most reputable" of Linked Open Vocabularies' 778 vocabularies. My starting point is having the API responses for each vocabulary, so that lov is a list of dicts, each with keys url: str and api_response: dict.

- Collect all outbound links:

    for entry in lov:
        entry["outbound_links"] = entry.get("outbound_links", set())
        for version in entry["api_response"].get("versions", {}):
            for field, value in version.items():
                if field.startswith("rel") and isinstance(value, list):
                    entry["outbound_links"] |= {v for v in value}

- Prepare a stream of self_link,outbound_link pairs:

    with open("lov-outlinks.csv", 'w') as f:
        for entry in lov:
            url = entry["url"]
            for link_url in entry["outbound_links"]:
                f.write(f"{url},{link_url}\n")

- Run the ranking (the driver of lov_pagerank.py):

    if __name__ == "__main__":  # for `spark-submit`
        sc = SparkContext(appName="LovRankings")
        match_data = sc.textFile("lov-outlinks.csv")
        xs = match_data.map(get_linking).groupByKey().mapValues(initialize_for_voting)
        for i in range(20):
            if i > 0:
                xs = sc.parallelize(zs.items())
            acc = dict(xs.mapValues(empty_ratings).collect())
            zs = xs.aggregate(acc, allocate_points, combine_ratings)
        ratings = [(k, v["rating"]) for k, v in zs.items()]
        for i, (vocab, rating) in enumerate(
            sorted(ratings, key=lambda x: x[1], reverse=True)[:100]
        ):
            print("{:3}\t{:6}\t{}".format(i + 1, round(log2(rating + 1), 1), vocab))

where, above it:

    from math import log2

    from pyspark import SparkContext
    from toolz import assoc

    def get_linking(line):
        return line.split(",")

    def initialize_for_voting(outlinks):
        return {"outlinks": outlinks, "n_outlinks": len(outlinks), "rating": 100}

    def empty_ratings(d):
        return assoc(d, "rating", 0)

    def allocate_points(acc, new):
        _, v = new
        boost = v["rating"] / (v["n_outlinks"] + 0.01)
        for link in v["outlinks"]:
            if link not in acc.keys():
                acc[link] = {"outlinks": [], "n_outlinks": 0}
            link_rating = acc.get(link, {}).get("rating", 0)
            acc[link]["rating"] = link_rating + boost
        return acc

    def combine_ratings(a, b):
        for k, v in b.items():
            try:
                a[k]["rating"] = a[k]["rating"] + b[k]["rating"]
            except KeyError:
                a[k] = v
        return a

And here is the output of spark-submit lov_pagerank.py:

    1    10.6
    2    10.3
    3    10.3
    4     9.0
    5     8.9
    6     6.3
    7     6.3
    8     6.3
    ...

We can see at a glance the "most reputable" vocabularies, and they don't surprise me. What may be more helpful is to collect candidate vocabularies for your domain and focus on their relative scores in order to gauge whether any are "well-established" in a sense. Even more helpful may be to include multiple "types" of resources, with standards linking to and being linked from various databases and policies. FAIRSharing seems like it could eventually support open investigation of the latter kind.
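For readers who want to see the idea without Spark, here is a tiny self-contained PageRank power iteration on a toy three-node graph (the graph and damping factor are made up for illustration):

```python
# toy directed graph: node -> nodes it links to
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
damping = 0.85

ranks = {n: 1.0 for n in links}
for _ in range(50):  # power iteration until (near) convergence
    new_ranks = {n: 1 - damping for n in links}
    for node, outlinks in links.items():
        share = damping * ranks[node] / len(outlinks)
        for target in outlinks:
            new_ranks[target] += share
    ranks = new_ranks

# "c" is linked from both "a" and "b", so it outranks "b"
print(sorted(ranks, key=ranks.get, reverse=True))  # ['c', 'a', 'b']
```

The Spark job above does essentially this at scale, with the allocate_points/combine_ratings pair standing in for the inner accumulation.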
https://donnywinston.com/posts/pagerank-of-linked-open-vocabularies-lov/
C++ SDK specificities

Please make sure to have read the Key concepts section first. Additionally, there are a few things that are C++-specific. One key difference is that there are only generic proxies in the qi framework (yet). The generic proxy has no information about the methods which are bound to these modules. This means that the user must specify the name and parameters of the methods himself: if there is a mistake somewhere, it will raise an exception during execution. This kind of proxy is more error-prone, but very flexible, since it can adapt to any module. For user-created modules that have no specialized proxy, this is your only choice.

    #include <qi/anyobject.hpp>

    const std::string phraseToSay = "Hello world";

    qi::SessionPtr session = qi::makeSession();
    session->connect("tcp://nao.local:9559");
    qi::AnyObject proxy = session->service("ALTextToSpeech");
    proxy.call<void>("say", phraseToSay);

    // Or, if the method returns something, you
    // must use a template parameter
    bool ping = proxy.call<bool>("ping");

Deprecated features

The ancient framework is now deprecated. There is no replacement for specialized proxies yet (though you can still use the old ones with the new framework). ALValue is also deprecated (it can still be used, and most of our API still uses it) and is replaced by real types (vector, map, ...). For more information about the cohabitation of the old and new frameworks, refer to Porting C++ code from NAOqi1 to NAOqi2.

Installation

Please read the C++ SDK - Installation Guide section.

Samples and tutorials

The main tutorial can be found in the Creating a new application outside Choregraphe using the qi Framework section. It is recommended that you use qiBuild to build your projects. Please make sure to also read the Local modules section.
https://developer.softbankrobotics.com/nao6/naoqi-developer-guide/sdks/c-sdk/new-qi-c-sdk
22 July 2011 09:11 [Source: ICIS news]

SINGAPORE (ICIS)--This is the third time in three months that operations have been disrupted at both plants, which had been restarted about two weeks ago after an earlier unplanned outage.

"Production at the methanol plant tripped yesterday which [disrupted operations at] the acetic acid plant as well," the official said. "Production at the plants is expected to be resumed within the next two days."

The latest outage is expected to be brief and is not expected to have any significant impact on the acetic acid market, sources said. Besides the methanol and the acetic acid plants, Jiangsu Sopo also operates a 500,000 tonne/year ethyl acetate plant at the same site. The ethyl acetate plant is operating normally as there is ample acetic acid feedstock inventory, the official added.

Major acetic acid producers.
http://www.icis.com/Articles/2011/07/22/9479271/chinas-jiangsu-sopo-shuts-acetic-acid-plant-on-methanol-outage.html
In this article we will be seeing how to set a hold on documents based on metadata values in SharePoint 2010. In "Shared Documents" I have created the metadata column "Metadata Col". Based on the metadata value, documents will be put on hold. Shared Documents has the following documents, and documents having the "Metadata Col" value "Friday" will be put on hold.

Steps Involved:

    using Microsoft.SharePoint;
    using Microsoft.Office.RecordsManagement.Holds;

    namespace Holds
    {
        class Program
        {
            static void Main(string[] args)
            {
                using (SPSite site = new SPSite(""))
                {
                    using (SPWeb web = site.RootWeb)
                    {
                        SPList list = web.Lists["Shared Documents"];
                        SPListItemCollection itemColl = list.Items;
                        foreach (SPListItem item in itemColl)
                        {
                            string value = item["Metadata Col"].ToString();
                            string[] metadataValues = value.Split('|');
                            if (metadataValues[0] == "Friday")
                            {
                                SPList listHolds = web.Lists["Holds"];
                                SPListItem itemHold = listHolds.Items[0];
                                Hold.SetHold(item, itemHold, "Hold added");
                            }
                        }
                    }
                }
            }
        }
    }
http://www.c-sharpcorner.com/uploadfile/anavijai/set-hold-to-documents-based-on-metadata-value-in-sharepoint-2010/
Hi, my name is Azim and I work on the Big Data Support Team at Microsoft. If you have had a chance to read an earlier post by Dharshana, you may have seen how we can submit a Hive query using the HDInsight PowerShell tools. In this blog, we will cover some basics of the HDInsight PowerShell tools and SDK (aka the HDInsight SDK); hopefully this will help clarify a few things around the HDInsight SDK and get you up and running with it!

Why HDInsight SDK?

If all of us were happy to manage and access Hadoop components by logging on to a cluster node and running jobs manually, we probably wouldn't need any SDK. But that's not the reality! We all love to be able to manage or interact with our services/clusters remotely from our workstations, and we also would like to run jobs or applications programmatically. With the HDInsight SDK, we can access and interact with an HDInsight cluster remotely from our workstations, using tools and technologies that we are all familiar with: the .Net Framework and Windows PowerShell. With the SDK, we can provision or manage a cluster or run a job (MapReduce, Hive, Pig etc.) programmatically, thus allowing it to become part of a rich workflow or other scheduled jobs. Without the SDK, for remote and programmatic access to Hadoop, each of us would have to find our own way of coding and scripting against the native REST API that Hadoop or HDInsight exposes; the SDK hides some of the underlying details from end users and makes things easier. The fact that the HDInsight SDK is based on PowerShell and .Net means we can easily integrate the script/code with existing .Net applications.

What is the HDInsight SDK?

Since we have used a few different names for it, I have seen some confusion around the naming, for example 'Microsoft .Net SDK for Hadoop' vs. 'HDInsight SDK', so let me make an attempt to clarify it :) It all started as the hadoopsdk codeplex project, and is known as Microsoft .Net SDK for Hadoop.
Microsoft .Net SDK for Hadoop is open source and has both Incubator and Released components, as described in the roadmap. With the release of Windows Azure HDInsight, we have marked the following SDK components as 'Released':

- Windows Azure HDInsight PowerShell
- HDInsight .Net SDK
- Cross Platform CLI tools (or Node.js CLI tools)

Together, we can think of the above Released components as the 'HDInsight SDK'. This aligns with the overall HDInsight umbrella; you may also hear the terms HDInsight PowerShell tools and .Net SDK, or HDInsight Tools and SDK. The HDInsight SDK components are now integrated with the Windows Azure tools and SDK. For example, the HDInsight PowerShell tools are integrated with the Windows Azure PowerShell tools, and the HDInsight .Net SDK code is under the Microsoft.WindowsAzure.Management.HDInsight namespace. The above SDK components are also fully tested, production-ready, and supported by Microsoft CSS. Here is a summarized view of the SDK:

Where do I get the HDInsight SDK?

HDInsight PowerShell Tools: Our HDInsight documentation here has detailed steps for installing and configuring the HDInsight PowerShell tools. Here is a quick rundown of the setup/configure steps for the HDInsight PowerShell tools; I will cover the install part in this section and the configure part in the next section. Follow these steps to install:

1. Install the Windows Azure PowerShell tools from here.
2. Install the HDInsight PowerShell tools from here; you may need to restart the machine.

After you install the HDInsight PowerShell tools, open the Windows Azure PowerShell console on the workstation and run the following cmdlet:

    Get-Command -Module *HDInsight* | Format-Table -Property Name

You may get an output like the screenshot below; this is one way to verify that the HDInsight PowerShell tools are installed successfully. But we are not ready to use the HDInsight PowerShell tools yet! The next step is to prepare your workstation; please review the section 'Preparing your workstation to use the HDInsight SDK'.

HDInsight .Net SDK: The HDInsight .Net SDK uses the Nuget distribution model, which means you need to install the HDInsight .Net SDK Nuget packages for every Visual Studio project where you intend to use the .Net SDK during development. When you are ready to deploy the code in production, you distribute the .Net SDK DLLs with the application binaries. Here is a quick view of the setup/configure steps for the HDInsight .Net SDK. To install the HDInsight .Net SDK, follow these steps:

1. Create a new Visual Studio (2013, 2012 or 2010, any edition) project or open an existing project.
2. Go to Tools -> Library Package Manager and select 'Package Manager Console' as shown below.
3. For Visual Studio 2012, the Package Manager Console will appear at the bottom and you can run the following command to install the package:

    Install-Package Microsoft.WindowsAzure.Management.HDInsight

4. After the package is installed successfully, you will see a file called "packages.config" added to your Visual Studio project, with each package added, as shown below. Project references will be updated as well with the related DLLs.
5. The next step is to prepare your workstation; please review the section 'Preparing your workstation to use the HDInsight SDK'.

Cross Platform CLI tools: Our HDInsight documentation here does a great job of describing the steps for installing and configuring the tools; please check it out if you need to access HDInsight from a non-Windows platform like Linux, Mac etc.

Preparing your workstation to use the HDInsight SDK

The HDInsight SDK components (PowerShell tools, .Net SDK or Node.js tools) require your Windows Azure subscription information so that it can be used to manage your services. The HDInsight SDK leverages an Azure Management Certificate to authenticate while accessing subscription resources. There are a few ways you can obtain an Azure Management Certificate:

- Create a self-signed certificate following the steps described in the Windows Azure documentation here.
- Via an Azure PublishSettings file.

This blog explains nicely what Azure PublishSettings is and how it works, but here are some takeaways.

What is a Windows Azure Management Certificate? What is a PublishSettings file and how does it work? A publish settings file is an XML file which contains information about your subscription. It contains information about all subscriptions associated with a user's Live Id (i.e. all subscriptions for which a user is either an administrator or a co-administrator). It also contains a management certificate which can be used to authenticate Windows Azure Service Management API requests. So when we request a publish settings file from Windows Azure, what Windows Azure does is create a new management certificate and attach that certificate to all of your subscriptions. The publish settings file contains the raw data of that certificate and all your subscriptions. Any tool which supports this functionality simply parses this XML file, reads the certificate data, and installs that certificate in your local certificate store (usually Current User/Personal (or My)). Since the Windows Azure Service Management API makes use of certificate-based authentication, and the same certificate is present both in the Windows Azure Management Certificates section for your subscription and in your local computer's certificate store, authentication works seamlessly.

Getting an Azure Management certificate via a PublishSettings file: On each workstation where you plan to use the HDInsight SDK (PowerShell or .Net SDK), you can use the following steps to obtain an Azure management certificate:

1. Sign in to the Windows Azure Management Portal using the credentials for your Windows Azure account.
2. Once the logon to Azure is complete and the Portal is open, run the Windows Azure PowerShell command to get the settings file:

    Get-AzurePublishSettingsFile

The Get-AzurePublishSettingsFile cmdlet opens a web page on the Windows Azure Management Portal from which you can download the subscription information. The information is contained in a .publishsettings file.

3. Import the Azure settings file to be used by the Windows Azure cmdlets, by running the cmdlet:

    Import-AzurePublishSettingsFile '<Folder>\YourSubscriptionName-DownlodDate-credentials.publishsettings'

Here, '<Folder>\YourSubscriptionName-DownlodDate-credentials.publishsettings' is the file you saved in step 2 on your workstation. The Import-AzurePublishSettingsFile cmdlet does two things:

a. It parses this AzurePublishSettingsFile XML file, reads the certificate data, and installs that certificate in your local certificate store (usually Current User/Personal). The certificate has 'Windows Azure Tools' as 'Issued to' and 'Issued By'.
b. It creates a file called 'WindowsAzureProfile.xml' under the folder 'C:\Users\userName\AppData\Roaming\Windows Azure PowerShell'. The file contains the Subscription Name, SubscriptionId, Azure certificate Thumbprint etc.

4. You are now ready to connect to your subscription and use the HDInsight PowerShell tools and .Net SDK. To view your Windows Azure subscription info, run the following Windows Azure cmdlet:

    Get-AzureSubscription

Running PowerShell scripts: You can either run the HDInsight cmdlets directly on the Windows Azure PowerShell console, or save the script as a file with the .ps1 extension and run the script file from the Windows Azure PowerShell console. Before you can run a script, you must run the following command from an elevated command prompt to set the execution policy to RemoteSigned:

    Set-ExecutionPolicy RemoteSigned

What can I do with the HDInsight SDK?

With the current release of the HDInsight SDK, it provides a number of important functionalities around a Windows Azure HDInsight cluster. Our HDInsight documentation here has some great examples of how you can use HDInsight PowerShell or the .Net SDK to provision clusters. More examples of using PowerShell to manage clusters can be found here. For submitting jobs programmatically, you can review the examples here or review the samples here. Here is a simple example of running a wordcount MapReduce job via the HDInsight PowerShell cmdlets:

    # define subscription ID and cluster name
    $subid = Get-AzureSubscription -Current | %{ $_.SubscriptionId }
    $clusterName = "HDInsightClusterName"

    # define the word count MapReduce Job
    $wordCountJobDef = New-AzureHDInsightMapReduceJobDefinition -JarFile "/example/jars/hadoop-examples.jar" -ClassName "wordcount"
    $wordCountJobDef.Arguments.Add("/example/data/gutenberg/davinci.txt")
    $wordCountJobDef.Arguments.Add("/example/output/WordCount")

    # Submit the MapReduce job
    $wordCountJob = Start-AzureHDInsightJob -Cluster $clusterName -Subscription $subid -JobDefinition $wordCountJobDef

    # Wait for the job to complete
    Wait-AzureHDInsightJob -Subscription $subid -Job $wordCountJob -WaitTimeoutInSeconds 3600

    # Get the job standard error output
    Get-AzureHDInsightJobOutput -Cluster $clusterName -Subscription $subid -JobId $wordCountJob.JobId -StandardError

How do I get help on the HDInsight SDK?

The HDInsight PowerShell tools implement the get-help framework of Windows Azure PowerShell, which is kind of nice and helpful for showing what parameters a cmdlet requires or supports. For example, if you wanted to know the usage of the cmdlet New-AzureHDInsightCluster, you would type on a Windows Azure PowerShell console:

    help New-AzureHDInsightCluster

Sample output:

And then I would typically use get-help <cmdlet> -full to see the required parameters. In addition to our Azure HDInsight documentation and samples on the SDK (some of the links mentioned in the previous section), please feel free to review the reference documentation for PowerShell and the .Net SDK, or contact us in CSS. That's it for today. I hope you have enjoyed the post on the HDInsight SDK; looking forward to your comments or feedback :)

@Azim (MSFT)
https://blogs.msdn.microsoft.com/bigdatasupport/2013/11/21/getting-started-with-the-hdinsight-powershell-tools-and-sdk/
Understand Seam Transaction vs EJB transaction

Sean Burns, May 12, 2008 1:52 PM

Hi, my understanding of a Seam Transaction (when you add <transaction:ejb-transaction /> to components.xml) is that it works like an EJB transaction. So if you throw an exception, the transaction does not roll back unless it is a RuntimeException or @ApplicationException(rollback=true). My code works as expected when I have:

    @Stateless
    @Name("loginRequest")
    public class LoginRequestEJB implements LoginRequest {
        public void login() throws LoginException {
            ...
            updateAttemptsLeft();
            throw new LoginException();
            ...
        }
    }

The attempts left gets updated in the DB... but now when I try to use a plain Seam component, org.jboss.seam.util.Work rolls back on all exceptions.

    @Name("loginRequest")
    public class LoginRequest {
        @Transactional
        public void login() throws LoginException {
            ...
            updateAttemptsLeft();
            throw new LoginException();
            ...
        }
    }

Is this a bug or as expected?... and if it is as expected, how do I get the same behaviour as an EJB transaction with a plain Seam component? I am using Seam 2.0.2CR2 with JBoss 4.2.2, and I am using the login component from a Servlet. Thanks, Sean.

1. Re: Understand Seam Transaction vs EJB transaction
Pete Muir, May 12, 2008 7:21 PM (in response to Sean Burns)

Hmm, I think you are right. File an issue in JIRA, I need to think about this a bit.

2. Re: Understand Seam Transaction vs EJB transaction
emrah seren, Sep 1, 2010 6:58 AM (in response to Sean Burns)

Hi, we have a project with customers and users. One user opens a customer account (page) to make changes and write new material about the customer; if another user opens the same customer and makes a change, the first user's changes will be lost. So I need a system that avoids losing changes, for example when one user selects a customer, that customer object gets locked. Is a transaction what I need here? I didn't understand how to use transactions in my project. If you know something (links), can you write it up? :) Thanks all.

3. Re: Understand Seam Transaction vs EJB transaction
Neil Richardson, Sep 1, 2010 1:04 PM (in response to Sean Burns)

This sounds more like locking than transactions. If you're using Hibernate or JPA then have a look at 'optimistic locking' using the @Version annotation. You basically specify one field of the Customer class as a version field:

    private @Version Long version;

Hibernate/JPA will then increment the version field every time it updates the row in the database. When it updates the row it checks that the version in the database is the same as the one it loaded; if it's different, then it knows that the row has been updated by another user/process and throws an exception. It's very efficient and doesn't require any actual locks in the database, but it prevents users from overwriting changes made by other users.
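The version-check-then-increment pattern behind @Version is not tied to Java; it can be sketched in a few lines of Python (the names table, save, and StaleUpdateError are mine; Hibernate's equivalent failure is its own StaleObjectStateException):

```python
class StaleUpdateError(Exception):
    """The row changed since this copy was read."""

# stand-in for a database table: customer id -> row with a version column
table = {42: {"notes": "initial", "version": 0}}

def read(cid):
    return dict(table[cid])        # each user edits a private copy

def save(cid, edited):
    row = table[cid]
    if row["version"] != edited["version"]:
        raise StaleUpdateError(f"customer {cid} was modified concurrently")
    row["notes"] = edited["notes"]
    row["version"] += 1            # what @Version does on every UPDATE

# two users load the same customer ...
a, b = read(42), read(42)
a["notes"] = "edit from user A"
save(42, a)                        # succeeds, version becomes 1
b["notes"] = "edit from user B"
try:
    save(42, b)                    # b still carries version 0
except StaleUpdateError:
    print("stale update rejected")  # B must reload and retry
```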
https://developer.jboss.org/thread/181981
TAO_StreamEndPoint

    #include <AVStreams_i.h>

Inheritance diagram for TAO_StreamEndPoint. Member documentation:

- Constructor.
- [virtual] Destructor.
- Not implemented in the light profile; throws notsupported.
- [protected] Helper methods to implement add_fep().
- Called by StreamCtrl; responder is the peer to connect to.
- Destroy the stream. An empty the_spec means: for all the flows.
- Disconnect the flows.
- Change the transport QoS on a stream. Reimplemented in TAO_StreamEndPoint_A and TAO_StreamEndPoint_B.
- Called by the peer StreamEndPoint. The flow_spec indicates the flows (which contain transport addresses etc.).
- Used to control the flow.
- Used for public key encryption.
- Used to "attach" a negotiator to the endpoint.
- Used to restrict the set of protocols.
- Used to set a unique id for packets sent by this streamendpoint.
- Start the stream. An empty the_spec means: for all the flows.
- Stop the stream. An empty the_spec means: for all the flows.
- Translate from application-level to network-level QoS.
- Hash table for the flownames and their corresponding FlowEndPoint references.
- Count of the number of flows in this streamendpoint, used to generate unique names for the flows.
- Current flow number used for system generation of flow names.
- Sequence of supported flow names.
- Key used for encryption.
- TAO_Forward_FlowSpec_Entry forward_entries_[FLOWSPEC_MAX];
- TAO_Reverse_FlowSpec_Entry reverse_entries_[FLOWSPEC_MAX];
- Our local negotiator for QoS.
- Chosen protocol for this streamendpoint, based on the availableprotocols property.
- Our available list of protocols.
- Source id used for multicast.
http://www.theaceorb.com/1.4a/doxygen/tao/av/classTAO__StreamEndPoint.html
CC-MAIN-2017-51
refinedweb
218
63.96
DBM(3X) DBM(3X)

NAME
dbm, dbminit, dbmclose, fetch, store, delete, firstkey, nextkey - data base subroutines

SYNOPSIS
#include <dbm.h>

typedef struct {
    char *dptr;
    int dsize;
} datum;

dbminit(file)
char *file;

dbmclose()

datum fetch(key)
datum key;

store(key, content)
datum key, content;

delete(key)
datum key;

datum firstkey()

datum nextkey(key)
datum key;

DESCRIPTION
Note: the dbm() library has been superseded by ndbm(3), and is now implemented using ndbm(). These functions maintain key/content pairs in a data base. The functions will handle very large (a billion blocks) databases and will access a keyed item in one or two file system accesses. The functions are obtained with the loader option -ldbm. Keys and contents are described by the datum typedef. A datum specifies a string of dsize bytes pointed to by dptr. firstkey() will return the first key in the database. With any key, nextkey() will return the next key in the database. This code will traverse the data base:

for (key = firstkey(); key.dptr != NULL; key = nextkey(key))

SEE ALSO
ar(1V), cat(1V), cp(1), tar(1), ndbm(3)

DIAGNOSTICS
All functions that return an int indicate errors with negative values. A zero return indicates no error. Routines that return a datum indicate errors with a NULL (0) dptr.

24 November 1987 DBM(3X)
http://modman.unixdev.net/?sektion=3&page=dbmclose&manpath=SunOS-4.1.3
CC-MAIN-2017-30
refinedweb
216
65.62
Question: Is there a programming language which can be programmed entirely in interactive mode, without needing to write files which are interpreted or compiled. Think maybe something like IRB for Ruby, but a system which is designed to let you write the whole program from the command line. Solution:1 I assume you are looking for something similar to how BASIC used to work (boot up to a BASIC prompt and start coding). IPython allows you to do this quite intuitively. Unix shells such as Bash use the same concept, but you cannot re-use and save your work nearly as intuitively as with IPython. Python is also a far better general-purpose language. Edit: I was going to type up some examples and provide some links, but the IPython interactive tutorial seems to do this a lot better than I could. Good starting points for what you are looking for are the sections on source code handling tips and lightweight version control. Note this tutorial doesn't spell out how to do everything you are looking for precisely, but it does provide a jumping off point to understand the interactive features on the IPython shell. Also take a look at the IPython "magic" reference, as it provides a lot of utilities that do things specific to what you want to do, and allows you to easily define your own. This is very "meta", but the example that shows how to create an IPython magic function is probably the most concise example of a "complete application" built in IPython. Solution:2 Smalltalk can be programmed entirely interactively, but I wouldn't call the smalltalk prompt a "command line". Most lisp environments are like this as well. Also postscript (as in printers) if memory serves. Are you saying that you want to write a program while never seeing more code than what fits in the scrollback buffer of your command window? Solution:3 There's always lisp, the original alternative to Smalltalk with this characteristic. 
Solution:4 The only way to avoid writing any files is to move completely to a running interactive environment. When you program this way (that is, interactively, such as in IRB or F# interactive), how do you distribute your programs? When you exit IRB or the F# interactive console, you lose all the code you wrote interactively. Smalltalk (see a modern implementation such as Squeak) solves this, and I'm not aware of any other environment where you could fully avoid files. The solution is that you distribute an image of the running environment (which includes your interactively created program). In Smalltalk, these are called images. Solution:5 Any Unix shell fits what you're asking for. This goes from bash, sh, csh, and ksh to tclsh for TCL or wish for Tk GUI writing. Solution:6 As already mentioned, Python has a few good interactive shells. I would recommend bpython for starters instead of ipython; the advantage of bpython is its support for autocompletion and help dialogs that tell you what arguments a function accepts and what it does (if it has docstrings). Solution:7 This is really a question about implementations, not languages, but Smalltalk (try out the Squeak version) keeps all your work in an "interactive workspace", but it is graphical and not oriented toward the command line. APL, which was first deployed on IBM 360 and 370 systems, was entirely interactive, using a command line on a modified IBM Selectric typewriter! Your APL functions were kept in a "workspace" which did not at all resemble an ordinary file. Many, many language implementations come with pure command-line interactive interpreters, like say Standard ML of New Jersey, but because they don't offer any sort of persistent namespace (i.e., when you exit the program, all your work is lost), I don't think they should really count. Interestingly, the prime movers behind Smalltalk and APL (Kay and Iverson respectively) both won Turing Awards.
(Iverson got his Turing award after being denied tenure at Harvard.) Solution:8 TCL can be programmed entirely interactively, and you can certainly define new tcl procs (or redefine existing ones) without saving to a file. Of course, if you are developing an entire application, at some point you do want to save to a file, else you lose everything. Using TCL's introspective abilities it's relatively easy to dump some or all of the current interpreter state into a tcl file (I've written a proc to make this easier before; however, mostly I would just develop in the file in the first place, and have a function in the application to re-source itself if its source changes). Solution:9 Not sure about that, but this system is impressively interactive: Solution:10 Most variations of Lisp make it easy to save your interactive work product as program files, since code is just data. Charles Simonyi's Intentional Programming concept might be part way there, too, but it's not like you can go and buy that yet. The Intentional Workbench project may be worth exploring. Solution:11 Many Forths can be used like this. Solution:12 Someone already mentioned Forth, but I would like to elaborate a bit on the history of Forth. Traditionally, Forth is a programming language which is its own operating system. The traditional Forth saves the program directly onto disk sectors without using a "real" filesystem. It could afford to do that because it ran directly on the CPU without an operating system, so it didn't need to play nice. Indeed, some implementations have Forth as not only the operating system but also the CPU (a lot of more modern stack-based CPUs are in fact designed as Forth machines). In the original implementation of Forth, code is compiled each time a line is entered and saved on disk. This is feasible because Forth is very easy to compile. You just start the interpreter, play around with Forth, defining functions as necessary, then simply quit the interpreter.
The next time you start the interpreter, all your previous functions are still there. Of course, not all modern implementations of Forth work this way. Solution:13 Clojure It's a functional Lisp on the JVM. You can connect to a REPL server called nREPL, and from there you can start writing code in a text file and loading it up interactively as you go. Clojure gives you something akin to interactive unit testing. I think Clojure is more interactive than other Lisps because of its strong emphasis on the functional paradigm. It's easier to hot-swap functions when they are pure. The best way to try it out is here: ELM ELM is probably the most interactive you can get that I know of. It's a very pure functional language with syntax close to Haskell. What makes it special is that it's designed around a reactive model that allows hot-swapping (modifying running functions or values) of code. The reactive bit means that whenever you change one thing, everything is re-evaluated. Now, ELM is compiled to HTML-CSS-JavaScript, so you won't be able to use it for everything. ELM gives you something akin to interactive integration testing. The best way to try it out is here:
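What every environment mentioned in this thread shares is a namespace that persists across entered lines. That essence can be sketched in a few lines of Python (MiniRepl is a made-up name for illustration; real shells like IPython and bpython add editing, history, and persistence on top of essentially this loop):

```python
class MiniRepl:
    """Toy read-eval loop: one dict survives across entered lines."""

    def __init__(self):
        self.namespace = {}

    def run(self, line):
        try:
            # Expressions echo their value back, like a real REPL prompt.
            return eval(line, self.namespace)
        except SyntaxError:
            # Statements (assignments, defs) mutate the shared namespace.
            exec(line, self.namespace)

repl = MiniRepl()
repl.run("radius = 2")
repl.run("def area(r): return 3.14159 * r * r")
print(repl.run("area(radius)"))  # earlier definitions are still there: 12.56636
```

Quitting the process throws the dict away, which is exactly the "you lose all your work" problem that Smalltalk images and Forth's disk blocks solve.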
http://www.toontricks.com/2019/02/tutorial-interactive-programming.html
CC-MAIN-2019-09
refinedweb
1,240
60.35
Can you define non-changing values?

Having transitioned from other languages, including PHP and JavaScript, constants are engrained in my practice. When I adopted Python, I quickly found myself asking the question, does Python have constants? The answer is kind of, but not really. Let's dig deeper!

What is a Constant?

Before we move on, let's define what a constant is, in case you're unfamiliar. A constant value is similar to a variable, with the exception that it cannot be changed once it is set. Constants have a variety of uses, from setting static values to writing more semantic code.

How to Implement Constants in Python

I said earlier that Python "kind of, but not really" has constants. What does that mean? It means that you can follow some standard conventions to emulate the semantic feel of constants, but Python itself does not support non-changing value assignments in the way other languages that implement constants do. If you're like me and mostly use constants as a way of writing clearer code, then follow these guidelines to quasi-implement constants in your Python code:

- Use all capital letters in the name: First and foremost, you want your constants to stand out from your variables. This is even more critical in Python, as you can technically overwrite values that you set with the intention of being constant.
- Do not overly abbreviate names: The purpose of a constant, really any variable, is to be referenced later. That demands clarity. Avoid using single-letter or generic names such as N or NUM.
- Create a separate constants.py file: To add an element of intentional organization in structure and naming, it's common practice to create a separate file for constants which is then imported.

What does this all look like in practice? We'll create two files, constants.py and app.py, to demonstrate.
First, constants.py:

# constants.py
RATIO_FEET_TO_METERS = 3.281
RATIO_LB_TO_KG = 2.205

Next, app.py: Notice how the constant values are more apparent in the code as well as being defined outside the file itself. Both of these qualities help communicate the distinction between constant and variable. If you do not want to continually type constants. you can import values individually:

from constants import RATIO_LB_TO_KG
from constants import RATIO_FEET_TO_METERS
print(RATIO_LB_TO_KG) # 2.205
print(RATIO_FEET_TO_METERS) # 3.281

Unfortunately, after taking all these steps it's still possible to overwrite the values:

from constants import RATIO_LB_TO_KG
print(RATIO_LB_TO_KG) # 2.205
RATIO_LB_TO_KG = 1
print(RATIO_LB_TO_KG) # 1

Do you miss using true constants in Python? Do you agree with Python's decision to exclude them? Leave your thoughts and experiences below!
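Since the module convention can still be overwritten, one common workaround (my addition, not from the article) is to hang the values on a frozen dataclass, which actually raises when you try to reassign:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Constants:
    RATIO_FEET_TO_METERS: float = 3.281
    RATIO_LB_TO_KG: float = 2.205

CONST = Constants()
print(CONST.RATIO_LB_TO_KG)  # 2.205

try:
    CONST.RATIO_LB_TO_KG = 1  # frozen=True turns this into an error
except FrozenInstanceError as err:
    print("cannot overwrite:", err)
```

This still isn't a language-level constant (you could rebind CONST itself), but accidental reassignment now fails loudly instead of silently.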
https://911weknow.com/does-python-have-constants
CC-MAIN-2021-31
refinedweb
439
60.72
Go programmer Siddhartha Singh explains what makes Go different and when you should consider using it. Using examples of successful Go programs, he shows how easily you can set up the Go environment, build programs, and run them. Rather than just focusing on the syntax, learn the important motivating factors for considering Go as your language of choice.

Getting Ready to 'Go'

Invented at Google, Go was created by the makers of UNIX. The first thing you need to know about Go is that it compiles to machine code. What does this mean? It's ready to replace other compiled languages such as C/C++. To get started using Go, download the Go distribution. To write Go code, you can use command-line tools, or you can download golangide, an open source IDE written in C++. If you know Eclipse, try the Go plug-in GoClipse.

One very nice feature of Go is its documentation. For example, suppose you run a command like this:

godoc -http=:8081

This will run a web server locally on port 8081. Then type http://localhost:8081 in your browser, and the Go documentation is displayed. To see help for the go command itself, run this:

go help

To be able to build local programs and packages, the go tool has three requirements:

- The Go bin directory ($GOROOT/bin or %GOROOT%\bin) must be in the path.
- There must be a directory tree with a src directory where the source code for the local programs and packages resides. For example, if you create a directory go under HOME, create a file as ~/go/src/HelloWorld.go, and so on.
- The directory above the src directory must be in the GOPATH environment variable.

For example, to build the HelloWorld example I just mentioned by using the go tool, do this:

$ export GOPATH=$HOME/go
$ cd $GOPATH/src
$ go build
$ ./HelloWorld
Hello World!
A 'Hello World' Program in Go It's a tradition to start with a Hello World program, so here it is, probably the shortest working Go program you can have: file : HelloWorld.go package main import "fmt" func main() { fmt.Println("Hello, World") } From a command line (terminal in Linux), you run it like this: $ go run HelloWorld.go This command both compiles and runs the program. Now coming back on track. What factors might cause you to switch to writing in a different programming language? Would it be management's decision, or your own? Several factors motivated me, as described in the following section. What Motivated Me to Learn Go? Languages are designed keeping a particular domain in mind, which means one particular language won't fit for all needs. For instance, C/C++ are for systems and/or performance-related programs. PHP is good for building web pages. Java provides platform independence. Similarly, Go is designed for maintaining flexibility when programming for the latest technologies, as well as for writing system programs. Let's examine some of the unique features of Go that help programmers to write scalable, distributed, parallel programs easily—without worrying about installing separate libraries. Factor 1: Pointers Pointers have always fascinated me while writing performance-oriented programs in C/C++. With pointers, you know exactly about memory layouts. Go gives the programmer control over which data structure is a pointer and which isn't. Pointers are important for performance and indispensable if you want to do systems programming, close to the operating system and network. 
Here's a code sample that shows the usage of pointers: //file pointer.go package main import "fmt" func main() { var i1 = 5 fmt.Printf("An integer: %d, its location in memory: %p\n", i1, &i1) var intP *int intP = &i1 fmt.Printf("The value at memory location %p is %d\n", intP, *intP) } Like most other low-level (system) languages, such as C, C++, and D, Go has the concept of pointers. But pointer arithmetic (such as pointer + 2 to go through the bytes of a string or the positions in an array) often leads to erroneous memory access in C, and therefore to fatal program crashes. Go doesn't allow pointer arithmetic, which makes the language memory-safe. Go pointers more resemble the references from languages like Java, C#, and Visual Basic.NET. For example, the following is invalid Go code: c = *p++ A pointed variable also persists in memory for as long as at least one pointer points to it, which means that the pointer's lifetime is independent of the scope in which it was created. Think of a smart pointer class in C++, which is implemented using reference counting methodology. Factor 2: Object-Oriented Programming (OOP) Go is an OO language, but not in the traditional sense, because it doesn't have the concept of classes and inheritance. However, it supports the concept of interfaces, with which a lot of aspects of object orientation can be made available. An interface defines a set of methods that are abstracts (pure virtual in C++), but these methods don't contain code; that is, they're not implemented. Also, an interface cannot contain variables, and therefore has no context. The prototype of an interface is as follows: type Namer interface { Method1(param_list) return_type Method2(param_list) return_type ... } where Namer is an interface type. Three important aspects of OO languages are encapsulation, inheritance, and polymorphism. How are these aspects envisioned in Go? 
- Encapsulation (data hiding): In contrast to other OO languages with four or more access levels, Go simplifies to only two: - Package scope: "object" is only known in its own package if it starts with a lowercase letter. - Exported: "object" is visible outside of its package if it starts with an uppercase letter. - Inheritance: By composition; that is, embedding of one or more types (I discuss types later in this article) with the desired behavior (fields and methods). Multiple inheritance is possible through embedding multiple types. - Polymorphism: By interfaces; that is, a variable of a type can be assigned to a variable of any interface it implements. Types and interfaces are loosely coupled; again, multiple inheritance is possible through implementing multiple interfaces. Go's interfaces are not a variant of Java or C# interfaces. They're independent; in other words, they don't know anything about their hierarchy. A type can only have methods defined in its own package. How does this work? Let's say you want to implement an "object serializer" to write to different kinds of stream (say, to a file and a network). In languages like Java, you declare different interfaces to it and implement them in a class containing data. 
For example, in Java: //File streamer interface interface IFileStreamer { void StreamToFile(); } //Network streamer interface interface INetworkStreamer { void StreamToNetwork(); } // A class StreamWriter to implement these interfaces public class StreamWriter implements IFileStreamer, INetworkStreamer { private String[] mBuf; public void StreamToFile() {} public void StreamToNetwork() {} } In Go, this will be implemented as described below: //Go Step 1: Define your data structures type ByteStream struct { mBuffer string } //Go Step 2: Define an interface that could use the data structure we have //File streamer interface type IFileStreamer interface { StreamToFile() } //Network streamer interface type INetworkStreamer interface { StreamToNetwork() } //Go Step 3: Implement methods to work on data func (byteStream ByteStream) StreamToFile() { WriteToFile(byteStream.mBuffer); } func (byteStream ByteStream) StreamToNetwork() { WriteToNetwork(byteStream.mBuffer); } If you look carefully, it appears to be data-centric; that is, define your data first and build your interface abstractions. Here, hierarchy is kind of built "along the way," without explicitly stating it; depending on the method signatures associated with the type, it's understood as implementing specific interfaces. To this point, it might seem that there isn't much difference. But wait: Consider a case where you get a requirement to add one more stream, called a memory stream. In the case of Java, you would have to declare another interface and implement it in the class. That is, you would have to touch the existing class: interface IMemoryStreamer { void StreamToMemory(); } public class StreamWriter implements IFileStreamer, INetworkStreamer, IMemoryStreamer { ... //Implement the new function void StreamToMemory() {} } Here comes the benefit of Go: You don't have to change the data class. 
Just write one more function with the type ByteStream:

func (byteStream ByteStream) StreamToMemory() {
    WriteToMemory(byteStream.mBuffer);
}

This is a shift in the concept of traditional object orientation.

Factor 3: Built-in Concurrency Support

Go adheres to a paradigm known as Communicating Sequential Processes (CSP, invented by C.A.R. Hoare), also known as the message-passing model, which Erlang already uses. To execute tasks in parallel, you write "goroutines."

Goroutines

An asynchronous function call: just prefix the function call with the word go. No prior declaration is required at the time of the call; return values, if any, are discarded. There is no way to kill spawned goroutines, but they are all killed when main() exits.

Channels

Go uses channels to synchronize goroutines. A channel is a type-safe duplex FIFO, synchronized across goroutines. The syntax looks like this: <-chn to read; chn <- to write. A read or write blocks until the data (command) is available.

Select Statements

Like a switch, except it's for channel I/O. It picks one ready channel I/O operation from several ready/blocked operations. Non-blocking I/O is possible using the default: option. For example:
A separate process in the Go runtime, the garbage collector, takes care of that problem: It searches for variables that are not listed anymore and frees that memory. Functionality for this process can be accessed via the runtime package. Garbage collection can be called explicitly by invoking the function runtime.GC(), but this is only useful in rare cases, such as when memory resources are scarce, a large chunk of memory could immediately be freed at that point in the execution, and the program takes only a momentary decrease in performance (for the garbage collection-process). Factor 5: Error Handling Through Multiple Return Values Go has no exception mechanism, like the try/catch in Java or .NET; you cannot throw exceptions. Instead it has a defer, panic, and recover mechanism. There are two primary paradigms today in native languages for handling errors: return codes, as in C, or exceptions, as in OO alternatives. Of the two, return codes are the more frustrating option, because returning error codes often conflicts with returning other data from the function. Go solves this problem by allowing functions to return multiple values. One value returned is of type error and can be checked anytime a function returns. If you don't care about the error value, you don't check it. In either case, the regular return values of the function are available to you. Factor 6: Web Programming In today's world, we need help from a language to write web-based programs. The trend is to write APIs that are REST-based. HTML Templates Go has extremely good and safe support for HTML templates, which are data-driven and make templates suitable for web programming. The package template (html/template) is used for generating HTML output that's safe against code injection. 
Writing a Standalone Web Server It's easy to write a web server in Go: package main import ( "fmt" "net/http" ) func handler(w http.ResponseWriter, r *http.Request) { fmt.Fprintf(w, "Hurray, how good is : %s!", r.URL.Path[1:]) } func main() { http.HandleFunc("/", handler) http.ListenAndServe(":8000", nil) } When you run this program and type the URL in your browser, you should see this response: Hurray, how good is : Go! Conclusion I think this one article has been sufficient to give you a feeling of how Go works. Many other features are also available—expandable arrays, hash maps, etc., make Go extremely good for writing programs. Enjoy them as you get ready to "go"!
http://www.informit.com/articles/article.aspx?p=2153658
CC-MAIN-2017-17
refinedweb
2,092
55.95
Instead of upgrading and updating all of your code at once (which is incredibly difficult and prone to bugs), the backwards compatibility package enables you to upgrade one component, one hook, and one route at a time by running both v5 and v6 in parallel. Any code you haven't touched is still running the very same code it was before. Once all components are exclusively using the v6 APIs, your app no longer needs the compatibility package and is running on v6. The official guide can be found here. We recommend using the backwards compatibility package to upgrade apps that have more than a few routes. Otherwise, we hope this guide will help you do the upgrade all at once! React Router version 6 introduces several powerful new features, as well as improved compatibility with the latest versions of React. It also introduces a few breaking changes from version 5. This document is a comprehensive guide on how to upgrade your v4/5 app to v6 while hopefully being able to ship as often as possible as you go. The examples in this guide will show code samples of how you might have built something in a v5 app, followed by how you would accomplish the same thing in v6. There will also be an explanation of why we made this change and how it's going to improve both your code and the overall user experience of people who are using your app. In general, the process looks like this:

- Upgrade to React v16.8 or greater
- Upgrade to React Router v5.1
- Remove <Redirect>s inside <Switch>
- Refactor custom <Route>s
- Upgrade to React Router v6

The following is a detailed breakdown of each step that should help you migrate quickly and with confidence to v6.
Upgrade to React Router v5.1

It will be easier to make the switch to React Router v6 if you upgrade to v5.1 first. In v5.1, we released an enhancement to the handling of <Route children> elements that will help smooth the transition to v6. Instead of using <Route component> and <Route render> props, just use regular element <Route children> everywhere and use hooks to access the router's internal state.

// v4 and v5 before 5.1
function User({ id }) {
  // ...
}

function App() {
  return (
    <Switch>
      <Route exact path="/">
        <Home />
      </Route>
      <Route path="/about">
        <About />
      </Route>
      {/* Can also use a named `children` prop */}
      <Route path="/users/:id" children={<User />} />
    </Switch>
  );
}

You can read more about v5.1's hooks API and the rationale behind the move to regular elements on our blog. In general, React Router v5.1 (and v6) favors elements over components (or "element types"). There are a few reasons for this, but we'll discuss more further down when we discuss v6's <Route> API. When you use regular React elements you get to pass the props explicitly. This helps with code readability and maintenance over time. If you were using <Route render> to get a hold of the params, you can just useParams inside your route component instead.

Along with the upgrade to v5.1, you should replace any usage of withRouter with hooks. You should also get rid of any "floating" <Route> elements that are not inside a <Switch>. Again, the blog post about v5.1 explains how to do this in greater detail.

In summary, to upgrade from v4/5 to v5.1, you should:

- Use <Route children> instead of <Route render> and/or <Route component> props
- Use hooks to access router state instead of withRouter
- Replace any <Route>s that are not inside a <Switch> with useRouteMatch, or wrap them in a <Switch>

Remove <Redirect>s inside <Switch>

Remove any <Redirect> elements that are directly inside a <Switch>. If you want to redirect on the initial render, you should move the redirect logic to your server (we wrote more about this here).
If you want to redirect client-side, move your <Redirect> into a <Route render> prop.

// Change this:
<Switch>
  <Redirect from="about" to="about-us" />
</Switch>

// to this:
<Switch>
  <Route path="about" render={() => <Redirect to="about-us" />} />
</Switch>

Normal <Redirect> elements that are not inside a <Switch> are ok to remain. They will become <Navigate> elements in v6.

Refactor custom <Route>s

Replace any elements inside a <Switch> that are not plain <Route> elements with a regular <Route>. This includes any <PrivateRoute>-style custom components. You can read more about the rationale behind this here, including some tips about how to use a <Route render> prop in v5 to achieve the same effect.

Again, once your app is upgraded to v5.1 you should test and deploy it, and pick this guide back up when you're ready to continue.

Upgrade to React Router v6

Heads up: This is the biggest step in the migration and will probably take the most time and effort. For this step, you'll need to install React Router v6. If you're managing dependencies via npm:

$ npm install react-router-dom
# or, for a React Native app
$ npm install react-router-native

You'll also want to remove the history dependency from your package.json. The history library is a direct dependency of v6 (not a peer dep), so you won't ever import or use it directly. Instead, you'll use the useNavigate() hook for all navigation (see below).

Upgrade all <Switch> elements to <Routes>

React Router v6 introduces a Routes component that is kind of like Switch, but a lot more powerful. The main advantages of Routes over Switch are:

- <Route>s and <Link>s inside a <Routes> are relative. This leads to leaner and more predictable code in <Route path> and <Link to>
- Routes are chosen based on the best match instead of being traversed in order. This avoids bugs due to unreachable routes because they were defined later in your <Switch>
- Routes may be nested in one place instead of being spread out in different components. In large apps, you can still load route bundles dynamically via React.lazy

In order to use v6, you'll need to convert all your <Switch> elements to <Routes>. If you already made the upgrade to v5.1, you're halfway there. First, let's talk about relative routes and links in v6. In v5, you had to be very explicit about how you wanted to nest your routes and links.
In both cases, if you wanted nested routes and links you had to build the <Route path> and <Link to> props from the parent route's match.url and match.path properties. Additionally, if you wanted to nest routes, you had to put them in the child route's component.

// This is a React Router v5 app
import {
  BrowserRouter,
  Switch,
  Route,
  Link,
  useRouteMatch,
} from "react-router-dom";

function App() {
  return (
    <BrowserRouter>
      <Switch>
        <Route exact path="/">
          <Home />
        </Route>
        <Route path="/users">
          <Users />
        </Route>
      </Switch>
    </BrowserRouter>
  );
}

function Users() {
  // In v5, nested routes are rendered by the child component, so
  // you have <Switch> elements all over your app for nested UI.
  // You build nested routes and links using match.url and match.path.
  let match = useRouteMatch();
  return (
    <div>
      <nav>
        <Link to={`${match.url}/me`}>My Profile</Link>
      </nav>
      <Switch>
        <Route path={`${match.path}/me`}>
          <OwnUserProfile />
        </Route>
        <Route path={`${match.path}/:id`}>
          <UserProfile />
        </Route>
      </Switch>
    </div>
  );
}

This is the same app in v6:

// This is a React Router v6 app
import {
  BrowserRouter,
  Routes,
  Route,
  Link,
} from "react-router-dom";

function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="users/*" element={<Users />} />
      </Routes>
    </BrowserRouter>
  );
}

function Users() {
  return (
    <div>
      <nav>
        <Link to="me">My Profile</Link>
      </nav>
      <Routes>
        <Route path=":id" element={<UserProfile />} />
        <Route path="me" element={<OwnUserProfile />} />
      </Routes>
    </div>
  );
}

A few important things to notice about v6 in this example:

- <Route path> and <Link to> are relative. This means that they automatically build on the parent route's path and URL so you don't have to manually interpolate match.url or match.path
- <Route exact> is gone.
Instead, routes with descendant routes (defined in other components) use a trailing * in their path to indicate they match deeply. You may have also noticed that all <Route children> from the v5 app changed to <Route element> in v6. Assuming you followed the upgrade steps to v5.1, this should be as simple as moving your route element from the child position to a named element prop.

<Route element>

In the section about upgrading to v5.1, we promised that we'd discuss the advantages of using regular elements instead of components (or element types) for rendering. Let's take a quick break from upgrading and talk about that now.

For starters, we see React itself taking the lead here with the <Suspense fallback={<Spinner />}> API. The fallback prop takes a React element, not a component. This lets you easily pass whatever props you want to your <Spinner> from the component that renders it.

Using elements instead of components means we don't have to provide a passProps-style API so you can get the props you need to your elements. For example, in a component-based API there is no good way to pass props to the <Profile> element that is rendered when <Route path=":userId" component={Profile} /> matches. Most React libraries who take this approach end up with either an API like <Route component={Profile} passProps={{ animate: true }} /> or use a render prop or higher-order component.

Also, in case you didn't notice, in v4 and v5 Route's rendering API became rather large. It went something like this:

// Ah, this is nice and simple!
<Route path=":userId" component={Profile} />

// But wait, how do I pass custom props to the <Profile> element??
// Hmm, maybe we can use a render prop in those situations?
<Route
  path=":userId"
  render={routeProps => (
    <Profile routeProps={routeProps} animate={true} />
  )}
/>

// Ok, now we have two ways to render something with a route. :/

// But wait, what if we want to render something when a route
// *doesn't* match the URL, like a Not Found page? Maybe we
// can use another render prop with slightly different semantics?
<Route
  path=":userId"
  children={({ match }) => (
    match ? (
      <Profile match={match} animate={true} />
    ) : (
      <NotFound />
    )
  )}
/>

// What if I want to get access to the route match, or I need
// to redirect deeper in the tree?
function DeepComponent(routeStuff) {
  // got routeStuff, phew!
}
export default withRouter(DeepComponent);

// Well hey, now at least we've covered all our use cases!
// ... *facepalm*

At least part of the reason for this API sprawl was that React did not provide any way for us to get the information from the <Route> to your route element, so we had to invent clever ways to get both the route data and your own custom props through to your elements: component, render props, passProps, higher-order components ... until hooks came along!

Now, the conversation above goes like this:

// Ah, nice and simple API. And it's just like the <Suspense> API!
// Nothing more to learn here.
<Route path=":userId" element={<Profile />} />

// But wait, how do I pass custom props to the <Profile>
// element? Oh ya, it's just an element. Easy.
<Route path=":userId" element={<Profile animate={true} />} />

// Ok, but how do I access the router's data, like the URL params
// or the current location?
function Profile({ animate }) {
  let params = useParams();
  let location = useLocation();
}

// But what about components deep in the tree?
function DeepComponent() {
  // oh right, same as anywhere else
  let navigate = useNavigate();
}

// Aaaaaaaaand we're done here.

Another important reason for using the element prop in v6 is that <Route children> is reserved for nesting routes. This is one of people's favorite features from v3 and @reach/router, and we're bringing it back in v6.
Taking the code in the previous example one step further, we can hoist all <Route> elements into a single route config:

// This is a React Router v6 app
import {
  BrowserRouter,
  Routes,
  Route,
  Link,
  Outlet,
} from "react-router-dom";

function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="users" element={<Users />}>
          <Route path="me" element={<OwnUserProfile />} />
          <Route path=":id" element={<UserProfile />} />
        </Route>
      </Routes>
    </BrowserRouter>
  );
}

function Users() {
  return (
    <div>
      <nav>
        <Link to="me">My Profile</Link>
      </nav>

      <Outlet />
    </div>
  );
}

This step is optional of course, but it's really nice for small to medium sized apps that don't have thousands of routes. Notice how <Route> elements nest naturally inside a <Routes> element. Nested routes build their path by adding to the parent route's path. We didn't need a trailing * on <Route path="users"> this time because when the routes are defined in one spot the router is able to see all your nested routes. You'll only need the trailing * when there is another <Routes> somewhere in that route's descendant tree. In that case, the descendant <Routes> will match on the portion of the pathname that remains (see the previous example for what this looks like in practice).

When using a nested config, routes with children should render an <Outlet> in order to render their child routes. This makes it easy to render layouts with nested UI.

<Route path> patterns

React Router v6 uses a simplified path format. <Route path> in v6 supports only 2 kinds of placeholders: dynamic :id-style params and * wildcards. A * wildcard may be used only at the end of a path, not in the middle. All of the following are valid route paths in v6:

/groups
/groups/admin
/users/:id
/users/:id/messages
/files/*
/files/:id/*

The following RegExp-style route paths are not valid in v6:

/users/:id?
/tweets/:id(\d+)
/files/*/cat.jpg
/files-*

We added the dependency on path-to-regexp in v4 to enable more advanced pattern matching. In v6 we are using a simpler syntax that allows us to predictably parse the path for ranking purposes. It also means we can stop depending on path-to-regexp, which is nice for bundle size. If you were using any of path-to-regexp's more advanced syntax, you'll have to remove it and simplify your route paths. If you were using the RegExp syntax to do URL param validation (e.g. to ensure an id is all numeric characters) please know that we plan to add some more advanced param validation in v6 at some point. For now, you'll need to move that logic to the component the route renders, and let it branch its rendered tree after you parse the params.

If you were using <Route sensitive> you should move it to its containing <Routes caseSensitive> prop. Either all routes in a <Routes> element are case-sensitive or they are not.

One other thing to notice is that all path matching in v6 ignores the trailing slash on the URL. In fact, <Route strict> has been removed and has no effect in v6. This does not mean that you can't use trailing slashes if you need to. Your app can decide to use trailing slashes or not, you just can't render two different UIs client-side at <Route path="edit"> and <Route path="edit/">. You can still render two different UIs at those URLs (though we wouldn't recommend it), but you'll have to do it server-side.

<Link to> values

In v5, a <Link to> value that does not begin with / was ambiguous; it depends on what the current URL is. For example, if the current URL is /users, a v5 <Link to="me"> would render a <a href="/me">. However, if the current URL has a trailing slash, like /users/, the same <Link to="me"> would render <a href="/users/me">. This makes it difficult to predict how links will behave, so in v5 we recommended that you build links from the root URL (using match.url) and not use relative <Link to> values.
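To see that ambiguity concretely, here is a tiny framework-free sketch (not React Router's code) that resolves a relative link target against the current URL the same way <a href> resolution works; example.com is just a dummy origin used for the resolution.

```javascript
// Resolve a relative link target against the current URL using
// standard <a href>-style URL resolution. This sketches the v5
// behavior described above -- it is not React Router's actual code.
function hrefResolve(currentPath, to) {
  // "http://example.com" is a dummy origin used only for resolution.
  return new URL(to, "http://example.com" + currentPath).pathname;
}

// With no trailing slash, "me" replaces the last segment:
//   hrefResolve("/users", "me")  -> "/me"
// With a trailing slash, "me" is appended:
//   hrefResolve("/users/", "me") -> "/users/me"
```

The same to="me" yields either /me or /users/me depending only on whether the current URL happens to end in a slash.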
React Router v6 fixes this ambiguity. In v6, a <Link to="me"> will always render the same <a href>, regardless of the current URL. For example, a <Link to="me"> that is rendered inside a <Route path="users"> will always render a link to /users/me, regardless of whether or not the current URL has a trailing slash.

When you'd like to link back "up" to parent routes, use a leading .. segment in your <Link to> value, similar to what you'd do in a <a href>.

function App() {
  return (
    <Routes>
      <Route path="users" element={<Users />}>
        <Route path=":id" element={<UserProfile />} />
      </Route>
    </Routes>
  );
}

function Users() {
  return (
    <div>
      <h2>
        {/* This links to /users - the current route */}
        <Link to=".">Users</Link>
      </h2>

      <ul>
        {users.map((user) => (
          <li>
            {/* This links to /users/:id - the child route */}
            <Link to={user.id}>{user.name}</Link>
          </li>
        ))}
      </ul>
    </div>
  );
}

function UserProfile() {
  return (
    <div>
      <h2>
        {/* This links to /users - the parent route */}
        <Link to="..">All Users</Link>
      </h2>
      <h2>
        {/* This links to /users/:id - the current route */}
        <Link to=".">User Profile</Link>
      </h2>
      <h2>
        {/* This links to /users/mj - a "sibling" route */}
        <Link to="../mj">MJ</Link>
      </h2>
    </div>
  );
}

It may help to think about the current URL as if it were a directory path on the filesystem and <Link to> like the cd command line utility.
// If your routes look like this
<Route path="app">
  <Route path="dashboard">
    <Route path="stats" />
  </Route>
</Route>

// and the current URL is /app/dashboard (with or without
// a trailing slash)
<Link to="stats">          => <a href="/app/dashboard/stats">
<Link to="../stats">       => <a href="/app/stats">
<Link to="../../stats">    => <a href="/stats">
<Link to="../../../stats"> => <a href="/stats">

// On the command line, if the current directory is /app/dashboard
cd stats          # pwd is /app/dashboard/stats
cd ../stats       # pwd is /app/stats
cd ../../stats    # pwd is /stats
cd ../../../stats # pwd is /stats

Note: The decision to ignore trailing slashes while matching and creating relative paths was not taken lightly by our team. We consulted with a number of our friends and clients (who are also our friends!) about it. We found that most of us don't even understand how plain HTML relative links are handled with the trailing slash. Most people guessed it worked like cd on the command line (it does not). Also, HTML relative links don't have the concept of nested routes; they only work on the URL, so we had to blaze our own trail here a bit. @reach/router set this precedent and it has worked out well for a couple of years.

In addition to ignoring trailing slashes in the current URL, it is important to note that <Link to=".."> will not always behave like <a href=".."> when your <Route path> matches more than one segment of the URL. Instead of removing just one segment of the URL, it will resolve based upon the parent route's path, essentially removing all path segments specified by that route.

function App() {
  return (
    <Routes>
      <Route path="users">
        <Route
          path=":id/messages"
          element={
            // This links to /users
            <Link to=".." />
          }
        />
      </Route>
    </Routes>
  );
}

This may seem like an odd choice, to make .. operate on routes instead of URL segments, but it's a huge help when working with * routes where an indeterminate number of segments may be matched by the *. In these scenarios, a single ..
segment in your <Link to> value can essentially remove anything matched by the *, which lets you create more predictable links in * routes.

function App() {
  return (
    <Routes>
      <Route path=":userId">
        <Route path="messages" element={<UserMessages />} />
        <Route
          path="files/*"
          element={
            // This links to /:userId/messages, no matter
            // how many segments were matched by the *
            <Link to="../messages" />
          }
        />
      </Route>
    </Routes>
  );
}

useRoutes instead of react-router-config

All of the functionality from v5's react-router-config package has moved into core in v6. If you prefer/need to define your routes as JavaScript objects instead of using React elements, you're going to love this.

function App() {
  let element = useRoutes([
    // These are the same as the props you provide to <Route>
    { path: "/", element: <Home /> },
    { path: "dashboard", element: <Dashboard /> },
    {
      path: "invoices",
      element: <Invoices />,
      // Nested routes use a children property, which is also
      // the same as <Route>
      children: [
        { path: ":id", element: <Invoice /> },
        { path: "sent", element: <SentInvoices /> },
      ],
    },
    // Not found routes work as you'd expect
    { path: "*", element: <NotFound /> },
  ]);

  // The returned element will render the entire element
  // hierarchy with all the appropriate context it needs
  return element;
}

Routes defined in this way follow all of the same semantics as <Routes>. In fact, <Routes> is really just a wrapper around useRoutes. We encourage you to give both <Routes> and useRoutes a shot and decide for yourself which one you prefer to use. Honestly, we like and use them both. If you had cooked up some of your own logic around data fetching and rendering server-side, we have a low-level matchRoutes function available as well, similar to the one we had in react-router-config.

useNavigate instead of useHistory

React Router v6 introduces a new navigation API that is synonymous with <Link> and provides better compatibility with suspense-enabled apps.
We include both imperative and declarative versions of this API depending on your style and needs.

// This is a React Router v5 app
import { useHistory } from "react-router-dom";

function App() {
  let history = useHistory();

  function handleClick() {
    history.push("/home");
  }

  return (
    <div>
      <button onClick={handleClick}>go home</button>
    </div>
  );
}

In v6, this app should be rewritten to use the navigate API. Most of the time this means changing useHistory to useNavigate and changing the history.push or history.replace callsite.

// This is a React Router v6 app
import { useNavigate } from "react-router-dom";

function App() {
  let navigate = useNavigate();

  function handleClick() {
    navigate("/home");
  }

  return (
    <div>
      <button onClick={handleClick}>go home</button>
    </div>
  );
}

If you need to replace the current location instead of pushing a new one onto the history stack, use navigate(to, { replace: true }). If you need state, use navigate(to, { state }). You can think of the first argument to navigate as your <Link to> and the other arguments as the replace and state props.

The Link component in v6 accepts state as a separate prop instead of receiving it as part of the object passed to to, so you'll need to update your Link components if they are using state:

import { Link } from "react-router-dom";

// Change this:
<Link to={{ pathname: "/home", state: state }} />

// to this:
<Link to="/home" state={state} />

If you prefer to use a declarative API for navigation (ala v5's Redirect component), v6 provides a Navigate component. Use it like:

import { Navigate } from "react-router-dom";

function App() {
  return <Navigate to="/home" replace state={state} />;
}

Note: Be aware that the v5 <Redirect /> uses replace logic by default (you may change it via the push prop); the v6 <Navigate />, on the other hand, uses push logic by default and you may change it via the replace prop.
// Change this:
<Redirect to="about" />
<Redirect to="home" push />

// to this:
<Navigate to="about" replace />
<Navigate to="home" />

If you're currently using go, goBack or goForward from useHistory to navigate backwards and forwards, you should also replace these with navigate with a numerical argument indicating where to move the pointer in the history stack. For example, here is some code using v5's useHistory hook:

// This is a React Router v5 app
import { useHistory } from "react-router-dom";

function App() {
  const { go, goBack, goForward } = useHistory();

  return (
    <>
      <button onClick={() => go(-2)}>
        Go 2 pages back
      </button>
      <button onClick={goBack}>Go back</button>
      <button onClick={goForward}>Go forward</button>
      <button onClick={() => go(2)}>
        Go 2 pages forward
      </button>
    </>
  );
}

Here is the equivalent app with v6:

// This is a React Router v6 app
import { useNavigate } from "react-router-dom";

function App() {
  const navigate = useNavigate();

  return (
    <>
      <button onClick={() => navigate(-2)}>
        Go 2 pages back
      </button>
      <button onClick={() => navigate(-1)}>Go back</button>
      <button onClick={() => navigate(1)}>
        Go forward
      </button>
      <button onClick={() => navigate(2)}>
        Go 2 pages forward
      </button>
    </>
  );
}

Again, one of the main reasons we are moving from using the history API directly to the navigate API is to provide better compatibility with React suspense. React Router v6 uses the useNavigation hook at the root of your component hierarchy. This lets us provide a smoother experience when user interaction needs to interrupt a pending route navigation, for example when they click a link to another route while a previously-clicked link is still loading. The navigate API is aware of the internal pending navigation state and will do a REPLACE instead of a PUSH onto the history stack, so the user doesn't end up with pages in their history that never actually loaded.
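To build intuition for the push/replace distinction and the numeric navigate(n) calls above, here is a minimal, framework-free model of a history stack. It is only an illustrative sketch, not React Router's internal history implementation.

```javascript
// Minimal history-stack model: push adds an entry, replace swaps
// the current one, and go(n) moves the pointer like navigate(n).
// An intuition aid only -- not React Router's real history object.
class HistoryStack {
  constructor(initial = "/") {
    this.entries = [initial];
    this.index = 0;
  }
  get location() {
    return this.entries[this.index];
  }
  push(to) {
    // Drop any "forward" entries, then append (like navigate(to)).
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push(to);
    this.index++;
  }
  replace(to) {
    // Swap the current entry (like navigate(to, { replace: true })).
    this.entries[this.index] = to;
  }
  go(n) {
    // Move the pointer (like navigate(-1), navigate(2), ...),
    // clamped to the ends of the stack.
    this.index = Math.min(
      Math.max(this.index + n, 0),
      this.entries.length - 1
    );
  }
}
```

In this model, v5's history.push("/home") corresponds to navigate("/home"), goBack corresponds to navigate(-1), and a replace avoids growing the stack, which is why pending navigations that get interrupted don't leave never-loaded pages behind.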
Note: The <Redirect> element from v5 is no longer supported as part of your route config (inside a <Routes>). This is due to upcoming changes in React that make it unsafe to alter the state of the router during the initial render. If you need to redirect immediately, you can either a) do it on your server (probably the best solution) or b) render a <Navigate> element in your route component. However, recognize that the navigation will happen in a useEffect.

Aside from suspense compatibility, navigate, like Link, supports relative navigation. For example:

// assuming we are at `/stuff`
function SomeForm() {
  let navigate = useNavigate();

  return (
    <form
      onSubmit={async (event) => {
        let newRecord = await saveDataFromForm(
          event.target
        );

        // you can build up the URL yourself
        navigate(`/stuff/${newRecord.id}`);

        // or navigate relative, just like Link
        navigate(`${newRecord.id}`);
      }}
    >
      {/* ... */}
    </form>
  );
}

<Link> component prop

<Link> no longer supports the component prop for overriding the returned anchor tag. There are a few reasons for this. First of all, a <Link> should pretty much always render an <a>. If yours does not, there's a good chance your app has some serious accessibility and usability problems, and that's no good. The browsers give us a lot of nice usability features with <a> and we want your users to get those for free! That being said, maybe your app uses a CSS-in-JS library, or maybe you have a custom, fancy link component already in your design system that you'd like to render instead.
The component prop may have worked well enough in a world before hooks, but now you can create your very own accessible Link component with just a few of our hooks:

import { FancyPantsLink } from "@fancy-pants/design-system";
import {
  useHref,
  useLinkClickHandler,
} from "react-router-dom";

const Link = React.forwardRef(
  (
    { onClick, replace = false, state, target, to, ...rest },
    ref
  ) => {
    let href = useHref(to);
    let handleClick = useLinkClickHandler(to, {
      replace,
      state,
      target,
    });

    return (
      <FancyPantsLink
        {...rest}
        href={href}
        onClick={(event) => {
          onClick?.(event);
          if (!event.defaultPrevented) {
            handleClick(event);
          }
        }}
        ref={ref}
        target={target}
      />
    );
  }
);

If you're using react-router-native, we provide useLinkPressHandler that works basically the same way. Just call that hook's returned function in your Link's onPress handler and you're all set.

<NavLink exact> to <NavLink end>

This is a simple renaming of a prop to better align with the common practices of other libraries in the React ecosystem.

activeClassName and activeStyle props from <NavLink />

As of v6.0.0-beta.3, the activeClassName and activeStyle props have been removed from NavLinkProps. Instead, you can pass a function to either style or className that will allow you to customize the inline styling or the class string based on the component's active state.

<NavLink
  to="/messages"
- style={{ color: 'blue' }}
- activeStyle={{ color: 'green' }}
+ style={({ isActive }) => ({ color: isActive ? 'green' : 'blue' })}
>
  Messages
</NavLink>

<NavLink
  to="/messages"
- className="nav-link"
- activeClassName="activated"
+ className={({ isActive }) => "nav-link" + (isActive ? " activated" : "")}
>
  Messages
</NavLink>

If you prefer to keep the v5 props, you can create your own <NavLink /> as a wrapper component for a smoother upgrade path.

StaticRouter from react-router-dom/server

The StaticRouter component has moved into a new bundle: react-router-dom/server.
// change
import { StaticRouter } from "react-router-dom";
// to
import { StaticRouter } from "react-router-dom/server";

This change was made both to follow more closely the convention established by the react-dom package and to help users understand better what a <StaticRouter> is for and when it should be used (on the server).

useRouteMatch with useMatch

useMatch is very similar to v5's useRouteMatch, with a few key differences:

useRouteMatch({ strict }) is now useMatch({ end })
useRouteMatch({ sensitive }) is now useMatch({ caseSensitive })

To see the exact API of the new useMatch hook and its type declaration, check out our API Reference.

<Prompt> is not currently supported

<Prompt> from v5 (along with usePrompt and useBlocker from the v6 betas) is not included in the current released version of v6. We decided we'd rather ship with what we have than take even more time to nail down a feature that isn't fully baked. We will absolutely be working on adding this back in to v6 at some point in the near future, but not for our first stable release of 6.x.

Despite our best attempts at being thorough, it's very likely that we missed something. If you follow this upgrade guide and find that to be the case, please let us know. We are happy to help you figure out what to do with your v5 code to be able to upgrade and take advantage of all of the cool stuff in v6. Good luck 🤘
https://beta.reactrouter.com/en/dev/upgrading/v5
Parker Sends Message to Heavyweight Division with Savage Knockout of Flores

Joseph Parker produced the statement performance he was looking for in Christchurch on Saturday night, despatching American Alexander Flores (17-2-1) with a savage third round knockout. Flores was left bloodied and unconscious after Parker connected with two thunderous right hands to end the contest. Flores was competitive throughout the first two rounds; however, Parker clearly packed a significant power advantage and always looked likely to catch the Mexican-American with a telling blow.

"It feels good, really good," Parker said. "Obviously you hope the other guy is okay but, as a heavyweight boxer, that's the job you're trying to do."

Parker joked that his first fight with chest hair (he has previously waxed his chest) was probably responsible for the unmistakeable increase in his power. The win now propels the former WBO heavyweight champion firmly back into the mix for another title shot.

WBO Oriental champion Junior Fa made short work of the feature undercard bout, dispatching Argentine journeyman Rogelio Omar Rossi in just 86 seconds. Fa hurt Rossi early with a clubbing right hand and finished the fight with a monster right hand to send Rossi crashing to the canvas.

In other results, Canterbury's Bowyn Morgan stopped Fiji's Sebastian Singh in three rounds to claim the Pro Box NZ super welterweight title. Singh troubled Morgan early in the fight with some nice combinations but soon wilted under a barrage of close range hooks. Morgan dropped Singh mid-way through the third round with a flurry of punches and, although the Fijian beat the count, the fight was stopped shortly thereafter with Singh failing to respond to another ferocious barrage.

Glasgow Commonwealth Games silver medallist David Light continued his impressive run in the cruiserweight division, despatching Lance Bryant in two rounds.
The quickest stoppage of the night came from Manu Vatuvei, with the former Warriors and Kiwis league star marking the start of his professional boxing career in style by knocking out 19-fight veteran David "Brown Buttabean" Letele in just 26 seconds. Letele started the fight on the front foot, landing a solid right hand to Vatuvei's temple. Vatuvei responded with a crushing uppercut followed by a right hand to send the Brown Buttabean crashing to the canvas and into a second (and likely permanent) retirement.

Another rising star, Andrei Mikhalovich, marked his 23rd birthday with an impressive victory over Adrian Taihia. Mikhalovich threatened a stoppage throughout the fight and eventually put his man down with a thumping body shot late in the final round. Taihia beat the count and was rewarded by the sound of the final bell, although a clear decision went to the promising Mikhalovich.

Parker v Flores, presented by Flooring Xtra: Results

Joseph Parker def Alexander Flores KO 3
Junior Fa def Rogelio Omar Rossi KO 1
Bowyn Morgan def Sebastian Singh KO 3
David Light def Lance Bryant KO 2
Manu Vatuvei def David Letele (Brown Buttabean) KO 1
Andrei Mikhalovich def Adrian Taihia (unanimous decision)
Michaela Jenkins def Megyn McLennan (majority decision)
Sam Watt def Alistair Boyd (unanimous decision)
Corporate bout: Bjorn Horrack def Quintin Poole (split decision)
http://www.scoop.co.nz/stories/CU1812/S00125/parker-sends-message-to-heavyweight-division.htm
One of the reasons that Python is so valuable is that there are several packages we can install to extend its capabilities. For example, if we want MATLAB-like functionality for matrices and numerical analysis, we can use numpy, along with optimizers and differential equation solvers. Further, packages like matplotlib help us with plotting, while Pygame helps us develop graphical user interfaces and build diverse games. xlwings, on the other hand, allows us to interface with Excel, and packages like OpenCV support computer vision, among many others.

Python modules allow you to incorporate other people's code into your own. As a result, you won't have to reinvent the wheel every time, saving you a lot of time during development. Python has thousands of modules that can help you save time when programming. Python modules can be installed in two ways: system-wide and in a virtual environment.

A module aids in the logical organization of Python code. When code is separated into modules, it is easier to comprehend and use. A module is a Python object with freely named attributes that can be bound and referenced. A module is nothing more than a Python code file. In a module, you can define functions, classes, and variables. A module can also include executable code. Multiple programs can import a module for use in their application; therefore, a single piece of code can be utilized by various applications to complete their tasks faster and more reliably.

Introduction to modules

There are numerous code packages or code modules available in Python. You don't have to reimplement existing code when using a module, and you can use code written by others. It simplifies development by allowing you to save time on simple operations like scraping a webpage or reading a CSV file. When you look for a solution to an issue, you'll frequently encounter modules that you've never seen before or that aren't installed on your computer.
You can use these modules after installing them on your machine. Importing modules at the start of your code allows you to load them. As an example:

import csv
import requests

Wheels vs. Source Distributions

pip can install from Source Distributions (sdist) or Wheels; however, if both are available on PyPI, pip will prefer a compatible wheel. You can modify pip's default behavior using the --no-binary option, for example. Wheels are a pre-built distribution type that allows for faster installation compared to Source Distributions (sdist), especially when a project incorporates compiled extensions. If pip cannot locate a wheel to install, it will construct one locally and cache it for future installs, instead of rebuilding from the source distribution every time.

With pip, you can install modules and packages. Open a terminal and type pip to install a module system-wide. The module will be installed if you type the code below.

sudo pip install module-name

In addition, you can choose to install to a specific user account, usually the active user. Achieving these isolated packages for the given user will require you to run the following command.

python3 -m pip install --user module-name

It will automatically install a Python module. In most cases, you'll use a virtual environment, or venv, rather than installing modules system-wide. On the other hand, you can run the following command on Windows.

py -m pip install --user module-name

You must have pip installed for this to work. The installation procedure is dependent on the platform you're using.

Installing pip

On Windows, first check whether pip is installed. To see if pip is installed, use the command prompt in Windows to perform the following command:

pip --version

Note: If pip isn't already installed, then you need to install it first.
If the output version is not equal to or greater than 19, execute the following command to update pip:

pip install --upgrade pip wheel

Use the following command to install packages from other resources, such as a Git repository (the URL below is a placeholder):

pip install -e git+<repository-url>#egg=<package-name>

Use the following command to upgrade a package that is already installed:

pip install --upgrade package-name

Use the following command to uninstall a package that has already been installed:

pip uninstall package-name

Installing Python modules on Unix/macOS

Make sure you've got pip installed. In your terminal, use the following command to see if pip is installed.

python3 -m pip --version

While pip alone is adequate for installing from pre-built binary files, you should also have up-to-date versions of the setuptools and wheel projects to ensure that you can install from source archives. Use the following command to update the installed copies of pip, setuptools, and wheel:

python -m pip install --upgrade pip setuptools wheel

Or run the following command if you are on Windows.

py -m pip install --upgrade pip setuptools wheel

To install a module with pip, use the command below.
python3 -m pip install "module-name"

Use the following command to install a specific version of the module:

python3 -m pip install "module-name==2.2"

To install a version of the module between any two numbers, run the following:

python3 -m pip install "module-name>=2,<3"

To install a specific full-length version that is compatible with your computer, run the following command:

python3 -m pip install "module-name~=2.2.3"

Use the following command to upgrade the project version:

python3 -m pip install --upgrade module-name

To install the required modules listed in a text document:

python3 -m pip install -r requirements.txt

To install from directories that are present on the local system, use commands like the following:

python3 -m pip install --no-index --find-links=file:///local/dir/ ProjectName
python3 -m pip install --no-index --find-links=/local/dir/ ProjectName
python3 -m pip install --no-index --find-links=relative/dir/ ProjectName

Installation of Python packages manually

The vast majority of Python packages now support pip. If your package isn't compatible, you'll have to install it manually. Before installing any package, make sure you have a Python installation that includes all of the essential files for installing packages by reading the installation requirements. Download the package and extract it to a local directory. If the package comes with its own set of installation instructions, follow them; if it doesn't, run the following command to install the package manually:

python setup.py install

Using setup.py to Install Python Packages

To install a package that includes a setup.py file, open a command or terminal window, cd to the root directory containing setup.py, and run:

python setup.py install

Setup.py Build Environment

Packages installed with setup.py have build requirements that developers must follow. Some prerequisites, however, are optional.
Make sure you have the most recent version of setuptools installed:

python -m pip install --upgrade setuptools

The setup script should include the install_requires keyword argument. The setuptools setup.py keyword install_requires is used to define minimal package needs. Consider the following example:

install_requires=[''], # Optional keyword

Complete the package build prerequisites before running the setup. PyPA (the Python Packaging Authority) outlines setup.py-based installation in their 'Sample Project.' Sample Project is a template package that includes a setup.py file for manual package installation. The file is annotated with comments for tweaking the script and the whole package build environment. The sample project can be found at [ ]. The Sample Project uses the setuptools package: "A setuptools based setup module." setup.py is the build script for setuptools-based packages.

This tutorial will walk you through the process of downloading and installing Python modules. There are various ways to install external modules, but for the sake of this course, we'll use pip, which is available for both Mac/Linux and Windows. Pip is installed by default in Python 3.8 and newer. However, anyone using an older version of Python will equally benefit from this tutorial because the processes are still quite common.

Modules Introduction

One of the best things about Python is the abundance of excellent code libraries that are publicly and freely available. These libraries can save you a lot of time coding or make a task (such as creating a CSV file or scraping a webpage) much more manageable. When searching for solutions to problems, you'll frequently come across sample code that employs code libraries you've never heard of. Don't be scared off by these!
You can use these libraries after they've been installed on your computer by importing them at the start of your code; you can import as many libraries as you like, for example:

import csv
import requests
import kmlwriter
import pprint

It can be intimidating for new Python users to download and install external modules for the first time. There are a variety of methods for installing Python modules, which adds to the complication. This article covers one of the simplest and most used methods. The idea is to get software on your computer that automatically downloads and installs Python modules. We'll use a tool called pip for this. Note that starting with Python 3.9, pip will be included in the standard installation. There are many reasons why you may not yet have this version, and if you don't, the following instructions should assist you.

Instructions for Mac and Linux

We can obtain a Python script to install pip for us, according to the pip documentation. On a Mac or Linux we can install pip via the command line using the curl command, which downloads the pip installation script:

curl -O

You must run the get-pip.py file with the Python interpreter after downloading it:

python get-pip.py

Run this way, however, the script will very likely fail, because it lacks the necessary rights to update specific folders on your filesystem; these are protected by default to prevent random programs from changing vital files and infecting you with viruses. In this case, and in all circumstances where you need to allow a script that you trust to write to your system folders, you can put the sudo command in front of the python command, like this:

sudo python get-pip.py

Instructions for Windows

As with the other platforms, the quickest approach to installing pip is using the get-pip.py Python application, which you can obtain here.
You might be afraid of the vast mess of code that awaits you when you open this link. Please don't be. Save the page in your browser with the default name of get-pip.py. It's a good idea to save this file in your Python directory so you know where to look for it later. After you've saved the file, you'll need to run it in one of two ways. If you prefer to use your own Python interpreter, right-click on the file get-pip.py and select "Open with," then choose your own Python interpreter. If you prefer to install pip via the command line on Windows, go to the directory where you saved Python and get-pip.py. We'll suppose this directory is called python27 in this example, so we'll type C:\>cd python27. To install pip, navigate to this directory and run the command:

python get-pip.py

Installing Python modules with pip

It's simple to install Python modules now that you have pip, because it handles all the work for you. When you identify a module you wish to use, the documentation or installation instructions will usually contain the pip command you'll need, such as:

pip install requests
pip install beautifulsoup4
pip install simplekml

Remember that you may need to execute pip with sudo for the same reasons as above (on Mac or Linux systems, but not Windows):

sudo pip install requests

You may find it helpful to use the -m flag to assist Python in finding the pip module, especially on Windows:

python -m pip install XXX

Installing Python modules in a virtual environment

We can create a virtual environment that is separate from the operating system. It enables you to use the same Python modules as the other engineers on your team. Use the following command to create a virtual environment:

virtualenv codevirtualenv

Inside codevirtualenv you will now find three directories: bin, include, and lib. To turn on the virtual environment, run the following command.
source codevirtualenv/bin/activate

Then we can install any module, in any version, without having to worry about the operating system. We'll be able to use the same version of modules as other developers this way. pip will now install only into this environment:

pip install <package-name>

Or you can choose to install a specific version of a given Python module by running the following command:

python -m pip install model-name==2.0.5

Alternatively, upgrade the module by running the following command:

python -m pip install --upgrade model-name

When you are finally ready to exit the virtual environment, run the following command:

deactivate

When you use the --user option with python -m pip install, a package will be installed only for the current user, not for all users on the system.

Install Python packages for scientific purposes

A lot of scientific Python packages have complicated binary dependencies and are currently challenging to install through pip. It will often be easier for users to install these packages by other means than attempting to do so with pip at this time.

Working with many Python versions installed at the same time

Use the versioned Python commands with the -m flag to run the appropriate copy of pip on Linux, Mac OS X, and other POSIX systems. Below are examples of how you can go about this.

python2 -m pip install SomePackage # default Python 2
python2.7 -m pip install SomePackage # specifically Python 2.7
python3 -m pip install SomePackage # default Python 3
python3.9 -m pip install SomePackage # specifically Python 3.9

Use the py Python launcher on Windows in conjunction with the -m switch as follows.

py -2 -m pip install SomePackage # default Python 2
py -2.7 -m pip install SomePackage # specifically Python 2.7
py -3 -m pip install SomePackage # default Python 3
py -3.9 -m pip install SomePackage # specifically Python 3.9

pip commands with the appropriate version numbers may also be available.
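A script can also detect whether it is currently running inside one of the virtual environments described above, using only the interpreter's own attributes. This is a small sketch; sys.base_prefix exists on Python 3.3+, and the getattr fallback covers older virtualenv layouts:

```python
import sys

def in_virtual_environment():
    """True when this interpreter was started from a venv/virtualenv.

    A virtual environment points sys.prefix at the environment directory,
    while sys.base_prefix keeps pointing at the original installation.
    """
    base = getattr(sys, "base_prefix", None) or getattr(sys, "real_prefix", sys.prefix)
    return sys.prefix != base

print("inside a virtual environment:", in_virtual_environment())
```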
Additional approaches for installing modules in Python

Using requirements files

When you have a list of modules in a requirements file, for instance requirements.txt, Python has a way of installing the whole list of requirements. Run the following command to achieve this.

In Linux or macOS:

python3 -m pip install -r requirements.txt

In Windows:

py -m pip install -r requirements.txt

VCS installation

It is possible to install a module from a VCS, usually in editable form. Below are the variations for installations on Unix and Windows.

In Unix/macOS:

python3 -m pip install -e git+ # from git
python3 -m pip install -e hg+ # from mercurial
python3 -m pip install -e svn+svn://svn.repo/some_pkg/trunk/#egg=SomeProject # from svn
python3 -m pip install -e git+ # from a branch

In Windows:

py -m pip install -e git+ # from git
py -m pip install -e hg+ # from mercurial
py -m pip install -e svn+svn://svn.repo/some_pkg/trunk/#egg=SomeProject # from svn
py -m pip install -e git+ # from a branch

Using other indexes for installation

An alternate index can come in handy when installing modules. Besides PyPI, you can point pip at an additional index while installing. An example of index installation is as follows.

In Unix/macOS:

python3 -m pip install --index-url model-name

In Windows:

py -m pip install --index-url model-name

Using a local src tree for installation

Installing from local src in development mode means the project appears installed but can still be edited from the src tree.

In Unix/macOS:

python3 -m pip install -e

In Windows:

py -m pip install -e

Additionally, you can install from the src as follows.

Installing from src in Unix/macOS:

python3 -m pip install

Installing from src in Windows:

py -m pip install

Local archives installation

You can additionally install a given source archive file as follows in Unix and Windows operating systems.
In Unix/macOS:

python3 -m pip install ./downloads/model-name-1.0.4.tar.gz

In Windows:

py -m pip install ./downloads/model-name-1.0.4.tar.gz

Further, it is possible to do a local directory installation with archives without checking PyPI at all.

In Unix/macOS:

python3 -m pip install --no-index --find-links= model-name
python3 -m pip install --no-index --find-links=/local/dir/ model-name
python3 -m pip install --no-index --find-links=relative/dir/ model-name

In Windows:

py -m pip install --no-index --find-links= model-name
py -m pip install --no-index --find-links=/local/dir/ model-name
py -m pip install --no-index --find-links=relative/dir/ model-name

Installing from a different location

Create a helper application that delivers the data in a PEP 503 compliant index format, then use the --extra-index-url flag to direct pip to use that index when installing from other data sources (for example, Amazon S3 storage).

./s3helper --port=4488
python -m pip install --extra-index-url model-name

Prerelease installation

Along with stable versions, you'll find pre-release and development versions. Pip searches for stable versions by default; pass --pre to include pre-releases.

In Unix/macOS:

python3 -m pip install --pre model-name

In Windows:

py -m pip install --pre model-name

Setuptools "extras" installation

At this point, the setuptools extras can be installed.

In Unix/macOS:

python3 -m pip install SomePackage[PDF]
python3 -m pip install SomePackage[PDF]==3.0
python3 -m pip install -e .[PDF] # editable project in current directory

In Windows:

py -m pip install SomePackage[PDF]
py -m pip install SomePackage[PDF]==3.0
py -m pip install -e .[PDF] # editable project in the current directory

Example: Installing matplotlib using pip

First, open the terminal if you are on Unix or macOS and run the following command:

python

Then try to import matplotlib. If the import succeeds, the package is already installed. If importing raises an error, it is probably not there and requires installation.
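The "try to import it" check above can also be scripted without actually importing the module, using the standard library's import machinery. This sketch works for any module name:

```python
from importlib.util import find_spec

def is_installed(module_name):
    """Report whether a module could be imported, without importing it."""
    return find_spec(module_name) is not None

print(is_installed("json"))                # True: json ships with Python
print(is_installed("no_such_module_xyz"))  # False
```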
import matplotlib

Installing matplotlib

To install matplotlib, run the following command on the terminal as shown below:

pip install matplotlib

Finally, if you choose to uninstall matplotlib, you can simply run the following command on the terminal or the command-line interface:

pip uninstall matplotlib

Conclusion

Python is a major open-source development project with a vibrant community of contributors and users that make their products available to other Python developers under open-source license terms. It enables Python users to efficiently exchange and interact, taking advantage of the solutions that others have already generated for common (and sometimes even uncommon!) problems, and perhaps contributing their own solutions to the pool of answers.
https://www.codeunderscored.com/how-to-install-python-modules/
"pow" (power) function, discussion in 'Python' started by Russ, Mar 15, 2006.
http://www.thecodingforums.com/threads/pow-power-function.355507/
I don't see any use of this import?

This is the same as

from mlir import ir

I couldn't find anything in the style guide that says which one is preferable. But there are examples of the style I mentioned in the style guide, and I couldn't find an example of the style used here.

Similar to the previous comment, these can be

from mlir.dialects import sparse_tensor as st
from mlir import runtime as rt

This comment is for the import grouping. I couldn't find anything in the Google Python guide, but I found something elsewhere: imports should be grouped in the following order:

1. Standard library imports.
2. Related third party imports.
3. Local application/library specific imports.

You should put a blank line between each group of imports.

from mlir.dialects.linalg.opdsl import lang or from mlir.dialects.linalg.opdsl import lang as dsl?

This is a kitchen sink for registering all passes as a side effect (useful until every pass has its own proper import).

Changed. I was also not sure which one is generally preferred.

I thought about that, but then

def matmul_dsl(
    A=TensorDef(T, S.M, S.K),
    B=TensorDef(T, S.K, S.N),
    C=TensorDef(T, S.M, S.N, output=True)):
  C[D.m, D.n] += A[D.m, D.k] * B[D.k, D.n]

loses a lot of its charm, since T, S, etc. need a prefix. All other examples I could find use import * for this particular one. WDYT?

explicitly name all linalg.opdsl.lang symbols

How about this for mlir.dialects.linalg.opdsl.lang? Just name everything we import, so that the actual DSL def below stays compact.

I understand that you want to import the individual types to simplify the use syntax, but the Python style guide says this: Use import statements for packages and modules only, not for individual classes or functions. Imports from the typing module, typing_extensions module, and the six.moves module are exempt from this rule.

There seem to be various style guides floating around, and they all say slightly different things.
Okay, here is one last attempt, now with just namespace imports and full prefixes.

... names only in imports

I like the concise syntax as well. I am fine with what you have right now, or with adding a comment to this import and using the concise syntax.

from mlir.dialects.linalg.opdsl.lang import *

Yeah, I am on the fence too. But let's do the right thing here. We can always later introduce a style guide rule for our dsl imports ;-)
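The two import spellings debated in this review are equivalent at runtime: both bind the same module object to a local name. Since mlir may not be installed, the stdlib os.path makes a convenient stand-in demonstration:

```python
import os.path as osp   # style 1: import the dotted module, then alias it
from os import path     # style 2: from-import the submodule

# Both names refer to the very same module object; the choice between
# them is purely the readability/style question discussed above.
print(osp is path)  # True
```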
https://reviews.llvm.org/D108055?id=366382
we will use Prepare-MoveRequest; the idea of these steps is to get a healthy Mail Enabled User (MEU) that will be used later to move the mailbox from the source forest to the target forest. After finishing Part II we now have the healthy MEU, and we can check the LDAP properties for the mandatory attributes required for the move to succeed. The following snapshot shows the LDAP attributes:

Now it is time to run New-MoveRequest to migrate the mailbox from the source forest to the target forest. The following snapshot shows the result of running New-MoveRequest:

The error is: Cannot find a recipient that has mailbox GUID

The error is clearly saying that there is no MEU with the mandatory attribute msexchmailboxguid; however, when we check the MEU LDAP property:

The mailbox GUID is there (of course it's there, because Prepare-MoveRequest migrated this attribute; check Part II), so what's the problem?! We have an MEU with the required msexchmailboxguid, and each time we try to migrate the mailbox we will get the same error: Cannot find a recipient that has the mailbox GUID.

The problem is that when the remote forest name implies a child-name relationship, Exchange 2010 will think that this is a child domain, and then the strange error will be returned. In our case the source forest name is egypt.tailspin.com and the target forest name is tailspin.com, so Exchange will think that the target is a child domain of the source forest, and it will fail.

So what's the solution? We have two painful options:

1. Export all mailboxes as PST files from the source forest and then import them in the target forest: this option is based on a big-bang approach where there is no co-existence. This option might be considered in small companies, where we can disconnect the source forest, export the PSTs, and import them into the target forest in reasonable downtime.

2.
Co-Existence: when co-existence is required in enterprise companies with thousands of users, the only option will be creating an Intermediate Forest.

Intermediate Forest:

As you might guess, this will be our option, since co-existence is required. In this option we will do the migration in two steps. First we will need to create a new Active Directory forest with a different name; in our scenario we will use nwtraders.com. This forest will contain an Exchange 2010 server; we can use a single server with HUB/CAS/MBX installed on the same machine. As this forest will be intermediate and will not serve any users, you may decide that high availability is not required. Now the migration will be done in two steps, as follows:

1. Move the mailbox of the user (or batch of users) from the source forest egypt.tailspin.com to the intermediate forest nwtraders.com.

2. Move the mailbox of the user (or batch of users) from the intermediate forest nwtraders.com to the target forest tailspin.com.

After implementing the intermediate forest, it's very important to complete the following tasks before starting the migration:

1. Apply an SSL certificate on the intermediate forest that is trusted and can be validated from the target forest. If the Exchange 2010 server in the target forest can't validate the certificate, moving the mailbox will fail.

2. Enable the MRS Proxy service: this service is responsible for moving mailboxes from/to Exchange 2010. As the intermediate Exchange server will be 2010, moving mailboxes will not work without enabling the MRS Proxy service.

The following section contains the detailed steps required to prepare the intermediate forest:

1. Install SSL Certificate

This certificate must be trusted and validated from the CAS servers in the target forest. The certificate could be generated from an internal Certification Authority trusted by the CAS servers in the Corp forest. The steps to request and install the certificate are as follows (on the Intermediate Forest Exchange Server):

a.
Request certificate:

I. Open Exchange Management Shell.

II. $data = New-ExchangeCertificate -GenerateRequest -DomainName mail.nwtraders.com,autodiscover.nwtraders.com,servername.nwtraders.com -FriendlyName Int-CAS

III. Set-Content -path "C:\CertRequest.req" -Value $Data

b. Import the Certificate:

I. Import-ExchangeCertificate -PrivateKeyExportable:$true -FileData ([Byte[]]$(Get-Content -Path C:\cert.cer -Encoding Byte -ReadCount 0)) | Enable-ExchangeCertificate -Services IIS

2. Enable MRSProxy Service

This step should be completed before moving the mailboxes from the intermediate forest to the target forest.

a. On the Client Access server in the Intermediate Forest (nwtraders.com), open the following file with a text editor such as Notepad:

C:\program files\microsoft\Exchange\V14\ClientAccess\ExchWeb\EWS\web.config

b. Locate the following section in the Web.config file:

<!-- Mailbox Replication Proxy Service configuration -->
<MRSProxyConfiguration IsEnabled="false" MaxMRSConnections="100" DataImportTimeout="00:01:00" />

c. Change the value of IsEnabled to "true".

d. Save and close the Web.config file.

In this part we addressed the second challenge, and now we are ready to start the migration. In the next part we will start by configuring co-existence between the three forests.

Exchange 2010 Cross-Forest Migration Step by Step Guide – Part I
Exchange 2010 Cross-Forest Migration Step by Step Guide – Part II
Exchange 2010 Cross-Forest Migration Step by Step Guide – Part III

Wow! Very useful. Waiting for the next one. When?
Hi, great post. Has the next one been posted yet?

Nice n crisp article series, waiting curiously for part IV.........

dude where's part 4? I really really need the co-existence aspect of the migration!!! :-) Shared namespace, free/busy between the forests, etc... Great work so far, but it seems like with part 3 you focused on a potential "gotcha" that most companies who are migrating (due to an acquisition or merger) won't encounter! I'm sure it helped some folks out there and they really appreciated the help, though! I appreciate your effort on this!

next one pls

Same as others, really need co-existence information and how to do it so migrated mailbox users can still communicate with each other

Well I just found this page a year after it was made but still no sign of any other parts. Can someone else pick it up as it was getting good!

Very useful..can't wait for part IV

It is really a good article, thanks for sharing. Best Regards, Raviprem

A year and a half later and nothing additional. Will this ever be completed? This would be a very useful source of information.

Great write up. I would love to see scenario 1 or even a part 4 here. Thanks for the series...very helpful.

Still no Part IV?

amazing information. still waiting for part IV. Please write for us :)

Thanks for sharing the useful information for cross-forest migration. It's really a very complicated and time-consuming process. It requires proper planning before starting the migration. To make the migration process faster and easier, there are some third-party tools available in the market. You may also take help from them, like us.
We successfully migrated from Exchange 2010 SP2 server to Exchange 2013 server with the help of this migration utility: One of the best aspects of this advance program is that it repairs and rebuilds the corrupted items in information store database during the migration so that all data could be easily available to all users in destination forest. You may also take a look at all its features ! Good Luck ! Ricky
http://blogs.technet.com/b/meamcs/archive/2011/10/25/exchange-2010-cross-forest-migration-step-by-step-guide-part-iii.aspx
I have one Windows Service project, say GatherData.vbproj. I would like to assign the <Assembly: Instrumented()> namespace during installation, so I can install multiple instrumented services from this one project. Can I put this namespace in the app.config file, or pass it as an installutil.exe parameter? I'm thinking the ProjectInstaller can get the namespace, wherever it came from, and assign it during installation. For example, the namespaces might be:

I know I can do multiple projects, but prefer one project and one service Exe. Using InstallUtil.Exe is not necessary; I just thought I would need to go that route.

Hello Lanis Ossman,

This forum is for "Discuss general issues about developing applications for Windows." It is Win32 C++ focused. This seems to be a .NET related issue, so I'll move it to the .NET forum for more professional support.

Best regards, R

Lanis Ossman,

Thank you for posting here. Based on my research, I found two related references about installing multiple instances of the same Windows service. I hope the suggestions in them are helpful.

Best Regards, Xingyu Zhao
https://social.msdn.microsoft.com/Forums/en-US/04fab6cb-ba4e-4001-ab3f-11cb0a5a7226/multiple-instrumented-windows-service-instances-from-one-project?forum=netfxbcl
Anyways, I've hit a major programming dilemma. I've been working on a program that involves deleting a specified character from a string. For instance, if I were to type a string called "berry" and I wanted to remove the letter 'e', the program should then display "brry". The program specifications call for the process to be done in a function. However, due to my troubles, I've relegated the issue to main() until I am able to come up with decent output. Furthermore, for strings that take into account multiple instances of a similar letter, only the first instance of the letter is to be deleted. Here is my code:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
    char input[100], output[100], letter;
    int char_count, let_pos;

    printf("\nEnter a string: ");
    gets(input);
    printf("\nEnter letter to delete: ");
    letter = getchar();

    for (char_count = 0; char_count < strlen(input); char_count++)
    {
        if (letter == input[char_count])
        {
            let_pos = char_count;
            break;
        }
    }

    for (char_count = 0; char_count < strlen(input); char_count++)
    {
        if (char_count == let_pos)
            ;
        else
            strcpy(&output[char_count], &input[char_count]);
    }

    printf("%s", output);
    return 0;
}

While I can easily pinpoint the character or first instance of multiple characters to be deleted, the actual deletion process is iffy to me. I was considering using the null character to help in the deletion, but it would remove all the following characters after the character in 'let_pos'. I've been going around in circles trying to find a coherent solution to the problem, but to no avail. Any help would be appreciated. Thanks.
http://forum.codecall.net/topic/70688-deleting-a-character-from-a-string/
I inherited a half-dozen or so custom modules from a team member who departed. Looking at his code, it seems that he did not really adhere to the style found in the documentation for module developers. For instance, each module has its own Perl module (ex. Cisweb.pm) and therefore its own namespace. He also does 'use strict' in all of his .pm and .cgi files, imports the module via use instead of require, doesn't use the %config hash, uses CGI.pm instead of ReadParse, he's used HTML::Template to separate HTML from logic, and so on.

I'd like to bring this code back in line with more standard Webmin module development practices, and I'm trying to decide the best way to rework it. Some of the things he's done aren't inherently bad, yet they create difficulty in the Webmin environment. I like the idea of using .pm's and therefore private namespaces, but this does complicate matters a bit, because any require '../web-lib.pl' stuff is now going to populate the private namespace instead. I like the idea of doing use strict, but this means I'm then going to need to do use vars qw(%config %in) if I want to do things like ReadParse and init_config(). I like the idea of HTML::Template, and that actually seems to work pretty cleanly as-is. Nevertheless, it's all seeming a bit messy. :-)

So my question is simply whether anyone has developed an alternative set of practices to what's in the module devel docs that incorporates some of these practices. I understand that a lot of Webmin's style is "dated" because of the desire to support as many platforms and environments as possible, but I have full control over the environment I'm running in, and I actually would prefer to do this with modules and perhaps ultimately more of an OO style of coding. And I always prefer to do use strict, and suppose I will even with the use vars qw() extra legwork.
I'm just curious if any of the more seasoned module devels have suggestions about how to bridge webmin's style with what I'm more used to seeing in my perl experience. -- Fran Fabrizio Senior Systems Analyst Department of Computer and Information Sciences University of Alabama at Birmingham 205.934.0653
https://sourceforge.net/p/webadmin/mailman/webadmin-devel/?viewmonth=200805&viewday=30
how can i make a calculator using function and procedure..

1) Open program to write code
2) Write code
3) Compile code
4) If you have compiler errors fix them and go to step 3 else continue
5) Run code and make sure you get what you want else go to step 2

how can i make a calculator using function and procedure..

With all due respect, this is far too broad a question for us to try to answer, even if we were inclined to do the work for you in the first place - which we are not. NathanOliver's answer is about the most any of us can give without a lot more details about what you need to do, and a lot more evidence that you have tried to solve the problem yourself. What specifically do you need help with? If you are having trouble understanding how functions work, there are plenty of older posts on the subject here; a search should turn up dozens.

Edited 3 Years Ago by Schol-R-LEA

Note: There are many ways to design a solution ... Here is just one simple way ... and a shell program that should get you started:

// cal_shell.cpp
//
#include <iostream>
using namespace std;

// your functions go here ...??
double add( double a, double b ) { return a + b; }
double sub( double a, double b ) { return a - b; }
// ... the rest of your functions go here //

int main()
{
    // print out an intro...
    // prompt and take in the op requested ... add, sub, etc...
    cout << "Which op ... Enter + or - or ... ";
    char op = 0;
    cin >> op;

    double a = 0.0, b = 0.0, c = 0.0; // get some doubles
    // now prompt and take in valid doubles for 'a' and 'b'

    switch( op )
    {
        case '+' : c = add( a, b ); break;
        case '-' : c = sub( a, b ); break;
        // ... rest goes here ... //
        default: cout << op << " is NOT implemented here yet ...\n";
    }

    cout << "for the binary op " << op << " on " << a << " and " << b
         << ", the result returned was " << c << endl;

    cout << "All done ... press 'Enter' to continue/exit ... " << flush;
    cin.ignore(); // skip the newline left in the buffer by cin >> op
    cin.get();
}

Edited 3 Years Ago by David W

the code for a full standard calculator.
what function you want @basit do encourage menex's behavior. If they have a question they should ask there own question. Also you shouldn't answer question just asking for code. ...
https://www.daniweb.com/programming/software-development/threads/494453/calculator-with-function
A namespace that contains various CIM classes and enumerations.

- Represents a CIM class.
- Contains members to convert .NET types to CIM types and CIM types to .NET types.
- Represents an exception that has occurred during a CIM operation.
- Represents an instance of a CIM class.
- Represents a method declaration of a CIM class.
- Represents a parameter of a CIM method.
- Represents a parameter declaration.
- Represents a collection of parameters of a CIM method.
- Represents a method's return value and out parameter values.
- Represents a CIM method result base parameter.
- Represents a single item of a streamed out parameter array.
- Represents a CIM property.
- Represents a property declaration of a CIM class.
- Represents a CIM qualifier.
- Represents a client-side connection to a CIM server.
- Represents a CIM subscription result.
- Represents system properties such as namespace, server name and path.
- Specifies CIM flags that are used with the class declarations of instances. They represent CIM meta-types (qualifier scopes) as well as a set of well-known CIM qualifiers. These flags can be combined together, with the exception of a few groups of mutually exclusive flags.
- Differentiates between a push or pull subscription delivery type. This is not supported when using the DCOM protocol.
- Specifies a CIM type, such as integer, string, or datetime.
- Specifies error codes defined by the native MI client API.
https://docs.microsoft.com/en-us/previous-versions/windows/desktop/wmi_v2/mi-managed-api/hh832958(v=vs.85)
Check out the video for connecting the motors to the L298n H-Bridge. Here's some simple code that will run one of the motors for 0.5 seconds. Notice the use of gpio.BOARD for the gpio naming conventions.

import RPi.GPIO as gpio
import time

gpio.setmode(gpio.BOARD)
gpio.setup(7, gpio.OUT)
gpio.setup(11, gpio.OUT)
gpio.setup(13, gpio.OUT)
gpio.setup(15, gpio.OUT)

gpio.output(7, True)
gpio.output(11, True)
gpio.output(13, True)
gpio.output(15, False)
time.sleep(0.5)
gpio.cleanup()

Save this script as robot1.py, then, when everything is hooked up and you're ready to roll (get it!?!?!?), do

sudo python robot1.py

Try swapping around the True and False statements in the outputs, and see what changes it makes. Now that we've got one motor working, let's work on adding the other three, and figuring out how to get them to work together!
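One way to get the pins "working together" is to keep the pin pattern for each motion in a table and apply a whole row at once. The sketch below is purely illustrative: the motion names and True/False patterns are placeholders I made up, not tested wiring for any particular H-Bridge hookup.

```python
# Hypothetical drive table: maps a motion name to the states of the
# four control pins (7, 11, 13, 15 in BOARD numbering).
PINS = (7, 11, 13, 15)

MOTIONS = {
    'forward': (True, True, True, False),
    'reverse': (False, True, False, True),
    'stop':    (False, False, False, False),
}

def pin_states(motion):
    """Return a list of (pin, state) pairs for a named motion."""
    states = MOTIONS[motion]  # raises KeyError for unknown motions
    return list(zip(PINS, states))
```

On the Pi you would then loop over pin_states('forward') and call gpio.output(pin, state) for each pair, so changing a wiring detail means editing one table entry rather than hunting through the script.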
https://pythonprogramming.net/gpio-motor-control-raspberry-pi/
I would like to write a console copy program in C++ that copies a file into another file, like the Linux or Windows copy programs. This program takes the name of a file and copies it into the same folder under the same name with "copy_" prepended. It is a simple program, but I would like to develop its features.

There are some problems with this approach. When you give an executable program's name to this simple copy program, the new "copy_executablefile" can't run. For other files, such as files with a 'pdf' extension, it has no problems, and the copied file (copy_pdfextensionfile) is exactly like the original. But with an executable file I have a problem. After checking and comparing the original file and the copied file byte by byte using the "Hex Editor" program, I discovered that at the end of the copied file one extra byte with the value 00H is appended. After removing this extra byte using Hex Editor, the copied file runs successfully. But I can't find out why this extra byte is appended to the copy. What do you know about this problem? Please help me. Perhaps copying character by character is the reason. What's the solution? You can see this simple program below.

Sincerely
Kaveh Shahhosseini
6/May/2011

Code:
//this prog tested in Ubuntu 10.10 with Gcc compiler and works correctly.
//beginning of copy program.
//=======================
#include <iostream>
#include <fstream>
#include <cstring>
using namespace std;
//---------------------------
int main()
{
    char filename1[256], ch, filename2[262];
    cout << "Enter file name:\n";
    cin >> filename1;
    // open both streams in binary mode so nothing is translated
    ifstream in(filename1, ios::in | ios::binary);
    strcpy(filename2, "copy_");
    strcat(filename2, filename1);
    ofstream out(filename2, ios::out | ios::binary);
    // test the stream state *after* each get(): a loop written as
    // while(!in.eof()) runs one extra time after the last byte and
    // writes a spurious trailing character into the copy
    while (in.get(ch))
        out.put(ch);
    return 0;
    //end
}
//=======================
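The extra 00H byte is the classic read-check ordering bug: a stream only reports end-of-file after a read has failed, so a loop that tests for EOF before reading performs one extra iteration and writes a stale character. The same copy logic, with the loop driven by what the read actually returned, looks like this in Python (for illustration only; the question's program is C++):

```python
def copy_file(src, dst, chunk_size=64 * 1024):
    """Copy src to dst byte-for-byte.

    The loop is driven by what read() returned, so there is no extra
    trailing byte: an empty bytes object means end-of-file, and we
    stop *before* writing anything for that failed read.
    """
    with open(src, 'rb') as fin, open(dst, 'wb') as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:  # b'' => EOF reached
                break
            fout.write(chunk)
```

Opening both files in binary mode also matters on Windows, where text mode would translate line endings; on Linux it makes no difference, which is why the bug shows up only as the single trailing byte.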
https://cboard.cprogramming.com/cplusplus-programming/137774-console-copy-program-cplusplus.html
Exclusively choosing one market-cap category over another may deprive you of opportunities.

By Jayant R Pai

Is it better to invest in large-cap funds for the long run? – Ashutosh Swain

Choose equity mutual funds if you can remain invested for long periods (five years or more). Within these, the choice of category is a matter of individual preference. The NAV of large-cap funds is less volatile than that of their mid- or small-cap peers. However, exclusively choosing one market-cap category over another may deprive you of opportunities. Hence, invest in multi-cap funds, as they offer a combination of stocks across categories (and even sectors and geographies).

Should I invest Rs 10,000 every month in a large-cap mutual fund for 10 years to create an education corpus for my son? —Suryansh Gopal

Yes. It is a credible idea. You could consider a simple index fund investing in either the Sensex or the Nifty 50, as it will provide you with exposure to large-cap stocks at a very low cost. Also, do not deviate from the course; keep investing consistently over all 120 months.

Should I buy physical gold like coins or invest in gold ETFs/bonds? —Navin Kumar

Financial variants of gold (ETFs and Sovereign Gold Bonds) have advantages such as lower storage/security-related costs, standardisation, obviating purity- and authenticity-related concerns, the ability to purchase in smaller denominations, etc. Hence, look at ETFs/bonds.

I am 45 and will retire after 15 years. My monthly salary is around Rs 2 lakh. How much should I save for retirement, factoring in inflation? What corpus should I have at the time of retirement? —Gautam Uppal

The amount required at the time of retirement is not just a function of your income. The amount and duration of your debt burden and your ability to save should be considered too, besides one's psychological orientation towards various asset classes. My assumptions are: you have not taken on any debt; you can save 30% of your monthly income.
You prefer to invest in equity mutual fund schemes, with an expected 'real' return (after factoring in inflation) of 3% per annum. Based on this, you should be able to accumulate a corpus of Rs 1.37 crore. This estimate does not factor in any increase in income (and savings). A financial advisor will be able to provide a more nuanced estimate after procuring more details from you.

(The writer is Head, Products, PPFAS Mutual Fund.)
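The Rs 1.37 crore figure follows from a standard future-value-of-an-annuity calculation under the stated assumptions: saving 30% of Rs 2 lakh (Rs 60,000) every month for 15 years at a 3% annual real return. A quick sketch of that arithmetic:

```python
def retirement_corpus(monthly_saving, annual_real_return, years):
    """Future value of a level monthly annuity at a constant real return."""
    r = annual_real_return / 12  # monthly real rate
    n = years * 12               # number of monthly contributions
    return monthly_saving * ((1 + r) ** n - 1) / r

corpus = retirement_corpus(60_000, 0.03, 15)
# roughly Rs 1.36-1.37 crore in today's rupees
```

The small gap between this raw formula and the quoted Rs 1.37 crore comes down to rounding and compounding conventions; the order of magnitude is the point.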
https://www.financialexpress.com/money/mutual-funds/multi-cap-funds-offer-combination-of-stocks-across-categories/1776291/
In the overabundance of .NET controls, one can probably forgive Microsoft for not having enough properties for certain controls. On the other hand, some properties are quite redundant, but that is just my opinion.

I have always wanted an Opacity property for most .NET controls. It is probably an impractical thought, but it would have saved me a ton of hours manipulating and improving existing .NET controls. Yes, you are able to set the background color of controls to Transparent, but this causes the control to be completely see-through. What if you wanted the control to be only semi-transparent, or what if you wanted to be able to set the control's opacity? I'm sure I'm not alone on this.

To make controls semi-transparent, you would need to create a component that inherits from the desired control. This will cause the newly created component to act and look like the control in question. Then, you would have to override the existing control's properties and methods.

Practical

Create a new Windows Forms project in either C# or VB.NET. Name it anything descriptive. Once the project has been created, add a component to your project by selecting Project, Add Component. Figure 1 shows the dialog that will be displayed. Provide a nice name for your component.

Figure 1: Add Component

Add the necessary namespaces to your component.

C#

using System.ComponentModel;

VB.NET

Imports System.ComponentModel

Ensure that your Panel object inherits from "Panel."

C#

public partial class Panel_C : Panel

VB.NET

Partial Public Class Panel_VB
    Inherits Panel

Add the Transparent Window setting. You will use this setting once you override the window's create parameters.

C#

private const int WS_EX_TRANSPARENT = 0x20;

VB.NET

Private Const WS_EX_TRANSPARENT As Integer = &H20

WS_EX_TRANSPARENT specifies that a window created with this style is to be transparent. A full list of all the available Extended Window Styles can be found here.

Add the Constructors.
C#

public Panel_C()
{
    InitializeComponent();
    SetStyle(ControlStyles.Opaque, true);
}

public Panel_C(IContainer con)
{
    con.Add(this);
    InitializeComponent();
}

VB.NET

Public Sub New()
    SetStyle(ControlStyles.Opaque, True)
End Sub

Public Sub New(con As IContainer)
    con.Add(Me)
End Sub

Add the Opacity property.

C#

private int opacity = 50;

[DefaultValue(50)]
public int Opacity
{
    get
    {
        return this.opacity;
    }
    set
    {
        if (value < 0 || value > 100)
            throw new ArgumentException("value must be between 0 and 100");
        this.opacity = value;
    }
}

VB.NET

Private iopacity As Integer = 50

<DefaultValue(50)>
Public Property Opacity() As Integer
    Get
        Return Me.iopacity
    End Get
    Set
        If Value < 0 OrElse Value > 100 Then
            Throw New ArgumentException("value must be between 0 and 100")
        End If
        Me.iopacity = Value
    End Set
End Property

You now will be able to choose this property from the Properties window.

Override CreateParams.

C#

protected override CreateParams CreateParams
{
    get
    {
        CreateParams cpar = base.CreateParams;
        cpar.ExStyle = cpar.ExStyle | WS_EX_TRANSPARENT;
        return cpar;
    }
}

VB.NET

Protected Overrides ReadOnly Property CreateParams() As CreateParams
    Get
        Dim cpar As CreateParams = MyBase.CreateParams
        cpar.ExStyle = cpar.ExStyle Or WS_EX_TRANSPARENT
        Return cpar
    End Get
End Property

The CreateParams property gets all the required creation parameters when a control handle is created.

Override OnPaint.

C#

protected override void OnPaint(PaintEventArgs e)
{
    using (var brush = new SolidBrush(Color.FromArgb(this.opacity * 255 / 100, this.BackColor)))
    {
        e.Graphics.FillRectangle(brush, this.ClientRectangle);
    }
    base.OnPaint(e);
}

VB.NET

Protected Overrides Sub OnPaint(e As PaintEventArgs)
    Using brush = New SolidBrush(Color.FromArgb(Me.Opacity * 255 \ 100, Me.BackColor))
        e.Graphics.FillRectangle(brush, Me.ClientRectangle)
    End Using
    MyBase.OnPaint(e)
End Sub

Here, you have created the Panel with the appropriate Opacity setting.

Build your project.
After you have built your project, you should notice your component in the Toolbox, as shown in Figure 2. Figure 2: Toolbox Double-click the component in the toolbox and choose a BackColor. In the next image (see Figure 3), I have added a button, a standard panel, and the newly created component. I have set the Component's BackColor to 255; 128; 128, and you can clearly see the button underneath it. Figure 3: Transparent Panel in action The source code for this article is available on GitHub. Conclusion When in need, create a user component. As you can see, components are very versatile and easy to manipulate. Semi-Transparent controls can be quite useful in a user interface and it is not difficult to create them.
https://mobile.codeguru.com/csharp/.net/net_general/creating-a-.net-transparent-panel.html
A statistical hypothesis is a claim about a population parameter such as the mean or a proportion. There are two contradicting statements: a null hypothesis, denoted Ho, and an alternative hypothesis, denoted Ha (not the same as alternative facts). The null hypothesis is the claim that is initially assumed to be true, while the alternative contradicts it. The goal is to reject or fail to reject the null hypothesis. We never test the alternative; rather, we test the null hypothesis and see whether we reject it or fail to reject it. Rejecting the null hypothesis means we favor the alternative; failing to reject it means we keep the null hypothesis as the true claim.

• The null hypothesis should always be phrased as an equality, while the alternative hypothesis can be phrased as an equality or an inequality.

• A test statistic is calculated from the sample data. It takes various forms depending on what you are calculating and how you are calculating it. If you are performing a one-way or two-way ANOVA, your test statistic is the F statistic; if you are calculating the probability of an event using the Central Limit Theorem, you would use the z-score.

• A rejection region, based on the test statistic, is the range of values for which you reject the null hypothesis. The p-value is a tail area under the test statistic's distribution (for a z test, the standard normal bell curve). If the p-value is smaller than the significance level (usually 0.05 if not specified), then we reject the null hypothesis. Otherwise, we fail to reject.
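The reject/fail-to-reject decision at the end of that procedure is a one-line comparison. A small sketch, with the default significance level following the text's 0.05 convention:

```python
def decide(p_value, alpha=0.05):
    """Return the hypothesis-test decision for a given p-value.

    Reject the null when the p-value falls below the significance
    level; otherwise we fail to reject (we never 'accept' the null).
    """
    if p_value < alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"
```

Note the strict inequality: a p-value exactly equal to alpha does not reject under this rule.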
The test statistic for a population mean is as follows:

# p_0 = hypothesized (given) mean
# p_1 = sample mean
# s = sample standard deviation (the point estimate)
def test_statistic_population_mean(data, p_0):
    p_1 = sum(data) / len(data)
    s = 0
    for x in data:
        s += (x - p_1) ** 2              # squared deviation
    s = (s / (len(data) - 1)) ** 0.5     # sample standard deviation
    return (p_1 - p_0) / (s / len(data) ** 0.5)

If your sample size is larger than or equal to 40, we will reference the z-table. Otherwise, we will refer to our t-table, with the caveat that you are assuming the population is normal. Of course, you can always find the p-value using a computer.

# n = sample size
# p_1 = hypothesized proportion, p_0 = observed sample proportion
def test_statistic_population_proportion(p_0, p_1, n):
    return (p_0 - p_1) / (((p_1 * (1 - p_1)) / n) ** 0.5)
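As a quick sanity check of the mean formula: a sample whose mean equals the hypothesized mean should give a t statistic of exactly 0, and shifting the hypothesized mean moves the statistic away from 0. A self-contained restatement of the one-sample t statistic:

```python
def t_statistic(data, mu_0):
    """One-sample t statistic: (sample mean - mu_0) / (s / sqrt(n))."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
    return (mean - mu_0) / ((var ** 0.5) / n ** 0.5)

print(t_statistic([1, 2, 3, 4, 5], 3))  # 0.0: sample mean equals mu_0
print(t_statistic([1, 2, 3, 4, 5], 2))  # positive: sample mean above mu_0
```

For the data 1 through 5, the sample mean is 3 and the sample standard deviation is sqrt(2.5), so testing against mu_0 = 2 gives (3 - 2) / (sqrt(2.5)/sqrt(5)) = sqrt(2).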
https://articlesbycyril.com/statistics/hypothesis-testing.html
Exporting Schemas from Classes To generate XML Schema definition language (XSD) schemas from classes that are used in the data contract model, use the XsdDataContractExporter class. This topic describes the process for creating schemas. The Export Process The schema export process starts with one or more types and produces an XmlSchemaSet that describes the XML projection of these types. The XmlSchemaSet is part of the .NET Framework’s Schema Object Model (SOM) that represents a set of XSD Schema documents. To create XSD documents from an XmlSchemaSet, use the collection of schemas from the Schemas property of the XmlSchemaSet class. Then serialize each XmlSchema object using the XmlSerializer. To export schemas Create an instance of the XsdDataContractExporter. Optional. Pass an XmlSchemaSet in the constructor. In this case, the schema generated during the schema export is added to this XmlSchemaSet instance instead of starting with a blank XmlSchemaSet. Optional. Call one of the CanExport methods. The method determines whether the specified type can be exported. The method has the same overloads as the Export method in the next step. Call one of the Export methods. There are three overloads taking a Type, a List of Type objects, or a List of Assembly objects. In the last case, all types in all the given assemblies are exported. Multiple calls to the Export method results in multiple items being added to the same XmlSchemaSet. A type is not generated into the XmlSchemaSet if it already exists there. Therefore, calling Export multiple times on the same XsdDataContractExporter is preferable to creating multiple instances of the XsdDataContractExporter class. This avoids duplicate schema types from being generated. Access the XmlSchemaSet through the Schemas property. Export Options You can set the Options property of the XsdDataContractExporter to an instance of the ExportOptions class to control various aspects of the export process. 
Specifically, you can set the following options: - KnownTypes. This collection of Type represents the known types for the types being exported. (For more information, see Data Contract Known Types.) These known types are exported on every Export call in addition to the types passed to the Export method. - DataContractSurrogate. An IDataContractSurrogate can be supplied through this property that will customize the export process. For more information, see Data Contract Surrogates. By default, no surrogate is used. Helper Methods In addition to its primary role of exporting schema, the XsdDataContractExporter provides several useful helper methods that provide information about types. These include: - GetRootElementName method. This method takes a Type and returns an XmlQualifiedName that represents the root element name and namespace that would be used if this type were serialized as the root object. - GetSchemaTypeName method. This method takes a Type and returns an XmlQualifiedName that represents the name of the XSD schema type that would be used if this type were exported to the schema. For IXmlSerializable types represented as anonymous types in the schema, this method returns null. - GetSchemaType method. This method works only with IXmlSerializable types that are represented as anonymous types in the schema, and returns null for all other types. For anonymous types, this method returns an XmlSchemaType that represents a given Type. Export options affect all of these methods.
https://msdn.microsoft.com/en-us/library/aa702692(v=vs.90).aspx
tstatd - Logs real-time accounting daemon

SYNOPSIS

tstatd [ options ] plugin [zone1:]wildcard1 .. [zoneN:]wildcardN

OPTIONS

- Aggregate data from all anonymous logs (wildcards without an explicit zone specified) into zone. Default behavior is to create a new zone for each anonymous log from its file name.
- Use file as persistent storage to keep accumulated data across daemon restarts. Default is auto-generated from the daemon name, the specified identity, and a '.db' suffix.
- Use only the base name (excluding directories and suffix) of an anonymous log file for auto-created zones.
- Change the current directory to dir before expanding wildcards.
- Composition of options: --foreground and --log-level=debug.
- Don't detach the daemon from the control terminal, logging to stderr instead of a log file or syslog.
- Use name as the facility for syslog logging (see syslog(3) for the list of available values). Default is 'daemon'.
- Set the minimal logging level to level (see syslog(3) for the list of available values). Default is 'notice'.
- Use logging to file instead of syslog logging (which is the default).
- Re-expand wildcards and check for new and missed logs every num seconds. Default is '60'.
- Print a brief help message about available options.
- Just a string used in the title of the daemon process, the syslog ident (see syslog(3)), --database-file and --pid-file. The idea behind this option: multiple tstatd instances running simultaneously.
- Specify the address and port for TCP listen socket binding. Default is '127.0.0.1:3638'.
- With this option specified, the same log file can be included in several zones (if the log name satisfies several wildcards). Default behavior is to include a log file only in the first satisfied zone.
- Set the number of sliding windows to num. Default is '60'.
- Comma-separated plugin-supported options (like mount(8) options).
- Load the content of file into the plugin package namespace. This is a way to easily customize plugin behavior without creating another plugin.
- Use file to keep the daemon process id.
Default is auto-generated from the daemon name, the specified identity, and a '.pid' suffix.

- Log, at level (see syslog(3) for available values), all unparsed log lines. Hint: use 'none' to ignore such lines. Default is defined by the plugin and is usually 'debug'.
- Use pattern instead of the plugin's default regular expression for matching log lines.
- Load a regular expression from file and use it instead of the plugin's default regular expression for matching log lines.
- Store accumulated data in persistent storage every num seconds. Default is '60'.
- Create a named timer firing every num seconds for zone.
- Change the effective privileges of the daemon process to user.
- Print version information of tstatd and exit.
- Set the size (duration) of a sliding window to num seconds. Default is '10'.
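The sliding-window options describe a ring of fixed-duration buckets: counts accumulate into the current bucket, and totals are read over the last N buckets. A hypothetical sketch of that bookkeeping (the class and method names are mine, not tstatd's; the defaults mirror the man page's 60 windows of 10 seconds):

```python
from collections import deque

class SlidingWindows:
    """Ring of fixed-duration buckets, like tstatd's sliding windows."""

    def __init__(self, num_windows=60, window_size=10):
        self.window_size = window_size
        # deque with maxlen drops the oldest bucket automatically
        self.buckets = deque([0] * num_windows, maxlen=num_windows)
        self.current_start = 0

    def add(self, timestamp, count=1):
        # advance the ring: one fresh (empty) bucket per elapsed window
        while timestamp >= self.current_start + self.window_size:
            self.buckets.append(0)
            self.current_start += self.window_size
        self.buckets[-1] += count

    def total(self):
        """Aggregate count over the whole retained horizon."""
        return sum(self.buckets)
```

With 60 windows of 10 seconds, events older than ten minutes simply fall off the back of the deque, which is the behavior the daemon's defaults imply.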
http://search.cpan.org/dist/Tail-Stat/bin/tstatd
Archived:Qt Carbide.c++ IDE Quick Start

Qt SDKs that can work directly with standalone Symbian SDKs are no longer supplied, and Carbide.c++ does not at the time of writing integrate with the Qt SDK. Developers should instead see:

- Using Symbian C++ in the Qt SDK
- Code Example Compatibility Article

This Quick Start is relevant if you want to create applications with Qt using the Carbide.c++ IDE. It assumes that you have already followed the instructions in Archived:Using Qt with Standalone SDKs to set up your (command line) development environment. The article explains how you configure Carbide for Qt development, create a skeleton application using the Carbide.c++ Qt Wizard, and get it up and running on both the Symbian platform Emulator and on the device.

- Carbide.c++ 2.3.0
- Qt 4.7.0 (note that some of the following screenshots show the Qt 4.6.2 SDK; the behaviour is identical)
- Nokia Smart Installer 1.0.0
- Symbian^1 SDK (any standalone Symbian SDK)

Installing Carbide.c++

Carbide.c++ was installed from the Application Developer Toolkit when you followed the instructions in Archived:Using Qt with Standalone SDKs. In summary:

- Download and install Carbide.c++
- Extract the Carbide.c++ Windows compiler patch file into the \x86Build directory under your Carbide installation, e.g. C:\Symbian\Tools\ADT_1.4\Carbide.c++\x86Build\
- (Optional) Configure the toolchain to allow command-line building for the emulator using the Windows Start button: All Programs | Symbian Foundation ADT v1.4 | Carbide.c++ | Configure environment for WINSCW command line

Starting Carbide.c++

Carbide.c++ is launched from the Windows Start button:

- All Programs | Symbian Foundation ADT v<ADTVersion> | Carbide.c++ | Carbide.c++ v<CarbideVersion>

On start, you will be prompted to select a workspace directory.
The workspace directory contains any projects you've already created in the workspace and their common settings - such as code-formatting options (you can define multiple workspaces in order to separate completely different tasks). If this is the first time you've run Carbide.c++, the workspace will be empty. If you installed the SDK to drive C:\, an example of a correct workspace path is: C:\Symbian\development\.

Once Carbide.c++ has started, close the Welcome tab (by clicking the cross shown circled in red below) to see the default workspace.

Configuring Carbide.c++ for Qt development

Carbide.c++ is "Qt aware", in that it understands how to integrate with the Qt build system and the Qt Designer tool. However, you still need to tell it where Qt is installed. If you have multiple versions of Qt, you may also need to tell Carbide.c++ which one to use to build a particular project.

Qt versions are managed through the Carbide.c++ Qt Preferences (select Carbide menu: Window | Preferences | Qt). If you have a number of Qt SDKs, you can click Default to identify the selected Qt SDK as the default SDK to use for Qt projects. As a general rule you should leave the "Auto update QMAKESPEC..." and "Auto update make command..." checkboxes selected to ensure that changes to the versions automatically update the way your projects are built.

To add a new Qt version, select Add to launch the Add Qt Version dialog.

- Enter the name of the Qt release
- Enter the full path to the Qt \bin directory
- Enter the full path of the Qt \include directory (this will be seeded with the correct value based on your specified \bin)
- Press Finish to save the dialog.

If necessary, you can change the version on a per-project basis through the Project properties settings. Select the project in Project Explorer and either do File | Properties or right-click | Properties.
As you can see above these use the global default preferred Qt version, and specify for qmake to be run if the .pro file is changed (which is usually desirable). Usually that is all the configuration that is required. Occasionally Carbide will not detect SDKs, or will not properly register COM plugins, resulting in the Qt Designer views not loading into Carbide; workarounds to these issues are discussed in the Troubleshooting section. Qt Carbide.c++ Helloworld Now you've set up the development environment you're ready to start creating a basic Helloworld application using the Carbide.c++ Qt Project wizards. This tutorial is not intended as a lesson in Qt development (although you may learn a little along the way)! It is designed to familiarise you with the Carbide.c++ IDE, and how you get your application onto the phone. If the project builds cleanly and installs then this is the best verification possible that your development environment is set up properly. Creating a Project To launch the Carbide.c++ Create New Project Wizard select: File | New | Qt Project. Choose the Qt GUI Main Window application template (in the Qt GUI section). Note that if you select the templates for the dialog or widget, the following steps are the same. The Next page of the wizard is New Qt Symbian OS C++ Project. Define the project name - in this case HelloWorldQt. Keep the default location selected to create the project in your current workspace (note again, this must be on the same drive as the SDK and not contain spaces or other special characters). The Next page of the wizard is Build Targets. Choose the SDK(s) you want to use for building the project from among those installed to your PC (You can add more SDKs to your project later on). This should include a Symbian platform SDK that has been configured for use with Qt. At time of writing the only C++ Application Development SDK is the Symbian^1 SDK (Note: this is a copy of the S60 5th Edition SDK v1.0). 
By default all build configurations/targets are selected. If you click next to the SDK you can select/deselect individual build targets:

- Emulator Debug (WINSCW) builds binaries for the Windows-hosted Symbian platform emulator.
- Phone Debug | Release (GCCE) builds binaries for the device.

Most developers should de-select the ARMV5 options above as shown (the Emulator is needed by all developers, and GCCE is sufficient for most third-party development).

The Next page of the wizard sets the Qt Modules. The Qt core and Qt GUI modules are selected by default; these are all that are needed for this tutorial.

The Next page of the wizard is Basic Settings. This allows you to specify the name of the main window class, and to specify the application unique identifier (UID). Usually you will leave the main window class unchanged.

The UID (actually the SID, but for the moment we can ignore the distinction) defines the private area in the file system in which the application can store its data. Among other things, the UID can also be used to programmatically identify and/or start the application. Carbide.c++ generates a random UID value for you starting with '0xE', which is the range of UIDs reserved for internal development and testing. If you want to release your application to the public, you need to get your own unique UID allocated by Symbian Signed. As we do not intend to release our Hello World application to the public, the generated test-range UID is fine.

Your workspace should look similar to the screenshot below. The application should now build and run, displaying the project name in the status area, and Options and Exit in the softkey area. You can skip ahead to the next section to build and run the application. Alternatively, true to the spirit of "HelloWorld" applications everywhere, below we show how to add a menu option and display a "HelloWorld" dialog when it is selected.

Launching A Hello World Message

First we create a new action and add it to the main window menu bar. The menu action's triggered() signal is connected to the helloPressed() slot.
HelloWorldQt::HelloWorldQt(QWidget *parent)
    : QMainWindow(parent)
{
    ui.setupUi(this);

    // Add menu action to launch the Hello World dialog.
    // Create a new action; the use of tr() means this term is translatable.
    QAction *displayHello = new QAction(tr("Hello"), this);
    // Add the action to the window menu bar (it automatically goes into the "Options" menu).
    menuBar()->addAction(displayHello);
    // Call the slot helloPressed() when the action is triggered.
    connect(displayHello, SIGNAL(triggered()), this, SLOT(helloPressed()));
}

The helloPressed() slot is declared in the class as shown below:

public slots:
    void helloPressed();

The slot implementation to launch the dialog is trivial:

#include <QMessageBox> // for the message box
...
void HelloWorldQt::helloPressed()
{
    // Display an information dialog with title "Hello" and text "World!"
    QMessageBox::information(this, tr("Hello"), tr("World!"));
}

That's it!

Set the active build configuration using the Build Configuration Tool icon in the toolbar, or by selecting menu: Project | Build Configurations | Set Active, and select Emulator Debug.

- Then build the current configuration using the Build Tool icon in the toolbar or through the menu: Project | Build Project (you can also select a particular configuration to build from the Build icon selector).

There may be a number of build warnings (perhaps related to Multiply defined Symbols); these are expected and can be ignored.

You may also need to register your SDK with Nokia Developer - this takes a couple of minutes and can be done online. If you decide to launch the emulator and navigate to your application:

First, open the menu through the menu symbol on the bottom left of the screen. Your own application will be located at the bottom of the Applications folder; use your mouse to navigate in the emulator's menus. The application will appear as shown in the right-hand image when it is launched.
Debugging on the Emulator

The Emulator is the default debug target - you simply click the Debug button. Debugging on the Emulator is not covered further in this tutorial. See Carbide.c++ User Guide > Debugging projects for extensive information on debugging using Carbide.c++.

Targeting the device

The emulator can be used for most of your development work. However, some situations still require a real device - for example, when you want to use the camera or the acceleration sensor. When you've finished development, you'll also want to build a release version, stripping out debug code and symbol information to make your binaries smaller and more efficient.

Specifying the installation package

Symbian platform SIS installation files (.sis) are created from package file (.pkg) definitions. SIS files are often then digitally signed (this is a requirement on most phones) and renamed with the file extension .sisx. Qt package files are created automatically from the content of the Qt project file (.pro). By default two files are created: HelloWorldQt_template.pkg and HelloWorldQt_installer.pkg.

The _template.pkg file contains all files needed by the application except for Qt itself. The installation file produced is not particularly suitable for product release because you can't assume that Qt is already present on the device. It is possible to modify the Qt project file to include the Qt binaries in this package - which makes the SIS file much bigger, but guarantees that Qt is present on the device. An alternative approach is the use of a "smart installer". In this case the delivered SIS file contains both the application SIS and a small smart installer SIS; if Qt is not present on the device it is automatically installed before the application. The HelloWorldQt_installer.pkg defines this wrapper, and must be built after the application SIS.

Symbian platform deployment options are discussed more thoroughly in the article Deploying a Qt Application on Symbian.
To tell Carbide which package files you want it to use as part of the build process:

- Open the Project properties dialog (select the project folder in Project Explorer and then do menu File | Properties, or right-click | Properties)
- Navigate down to Carbide.c++ | Build Configurations | SIS Builder tab
- Check that the "Active Configuration" is the one you want (GCCE or ARMv5, Debug or Release)
- Press the Add button to specify a package file (.pkg) to be built for the current active configuration.

Select the PKG file you want to build from those in the project directory (first HelloWorldQt_template.pkg). The default values for the other settings will create a sis file "HelloWorldQt_template.sis" and a self-signed SIS file "HelloWorldQt_template.sisx" in the current directory. You can change the name of the output SIS and SISX files in the fields provided. Here we have changed the output signed file to HelloWorldQt.sis because that is the filename that the smart installer package file expects.

Press OK to save the SIS builder definition and then add another one for HelloWorldQt_installer.pkg (there is no need to do anything other than select the package file).

Building for the device

To tell the IDE that you want to build for the device, change the active build configuration to a phone-release configuration for GCCE (unless you have the RVCT compiler). As before, use the Build Configuration Tool toolbar icon to select the active build configuration. Next, choose to build the current configuration using the toolbar Build icon (or in the menu: Project | Build Project). This will automatically compile the release project using the GCCE compiler and create any SIS files you specified in your SIS builder specification.

You now need to transfer the HelloWorldQt_installer.sisx file to your phone to install it (see next section).
Note that there is an unsigned version of the file HelloWorldQt_installer.sis created as well - this will usually not be installable, so take care not to select it by accident! Don't forget to switch back to the Emulator Debug build configuration when you continue development!

Installing on the device

You can use the PC Suite or Ovi Suite that came with your phone to install the application on your device:

- Ensure that the suite is installed and running
- Connect your device to the PC via Bluetooth or USB and add the phone to the known devices in the suite (if necessary)
- Double-click the .sisx file in Windows Explorer or the Project Explorer window of Carbide.c++

If the PC Suite is not installed on your PC, you can send the file to the phone via Bluetooth or IrDA (if available):

- Locate the .sisx file in Windows Explorer
- Right-click on it and select Send to | Bluetooth device

You will be prompted to install the application when you open the message. It is also possible that your PC clock is set later than the phone clock, which makes the certificate invalid.

Debugging on the device

Debugging on a production phone is covered in the topic: Carbide.c++ On-device Debugging Quick Start.

Troubleshooting

The majority of users will have followed the above instructions to create and run the HelloWorld application. If you do encounter development environment issues, then you should:

- Review the Qt Known Issues (on Gitorious)

Summary

In this tutorial you learned how to configure Carbide.c++ for Qt development, create a skeleton application using the Carbide.c++ Qt Project wizard, and how to get it up and running on both the Symbian platform emulator and on the device.

Related Information

Further reading:

- Qt Reference Documentation (recommended)
- Qt Developer's Library (Nokia Developer)
- Qt on Samsung Symbian Part 1 and Part 2
- A Video Guide for Setting up Qt development environment for Symbian
- Qt Creator Quick Start
http://developer.nokia.com/Community/Wiki/index.php?title=Archived:Qt_Carbide.c%252B%252B_IDE_Quick_Start&direction=prev&oldid=175321
CC-MAIN-2013-48
refinedweb
2,548
54.73
GETCPU(2)                  Linux Programmer's Manual                 GETCPU(2)

NAME
       getcpu - determine CPU and NUMA node on which the calling thread is
       running

SYNOPSIS
       #include <linux/getcpu.h>

       int getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache);

       Note: There is no glibc wrapper for this system call; see NOTES.

DESCRIPTION
       The getcpu() system call identifies the processor and node on which
       the calling thread or process is currently running and writes them
       into the integers pointed to by the cpu and node arguments. The
       processor is a unique small integer identifying a CPU. The node is a
       unique small identifier identifying a NUMA node. When either cpu or
       node is NULL nothing is written to the respective pointer.

       The third argument to this system call is nowadays unused, and should
       be specified as NULL unless portability to Linux 2.6.23 or earlier is
       required (see NOTES).

       The information placed in cpu is guaranteed to be current only at the
       time of the call: unless the CPU affinity has been fixed using
       sched_setaffinity(2), the kernel might change the CPU at any time.
       (Normally this does not happen because the scheduler tries to minimize
       movements between CPUs to keep caches hot, but it is possible.) The
       caller must allow for the possibility that the information returned in
       cpu and node is no longer current by the time the call returns.

RETURN VALUE
       On success, 0 is returned. On error, -1 is returned, and errno is set
       appropriately.

ERRORS
       EFAULT Arguments point outside the calling process's address space.

VERSIONS
       getcpu() was added in kernel 2.6.19 for x86_64 and i386.

CONFORMING TO
       getcpu() is Linux-specific.

NOTES
       Linux makes a best effort to make this call as fast as possible. The
       intention of getcpu() is to allow programs to make optimizations with
       per-CPU data or for NUMA optimization.

       Glibc does not provide a wrapper for this system call; call it using
       syscall(2); or use sched_getcpu(3) instead.

       The tcache argument is unused since Linux 2.6.24.
       In earlier kernels, if this argument was non-NULL, then it specified a
       pointer to a caller-allocated buffer in thread-local storage that was
       used to provide a caching mechanism for getcpu(). Use of the cache
       could speed getcpu() calls, at the cost that there was a very small
       chance that the returned information would be out of date. The caching
       mechanism was considered to cause problems when migrating threads
       between CPUs, and so the argument is now ignored.

SEE ALSO
       mbind(2), sched_setaffinity(2), set_mempolicy(2), sched_getcpu(3),
       cpuset(7), vdso(7)

COLOPHON
       This page is part of release 4.12 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at.

Linux                              2015-12-28                        GETCPU(2)

Pages that refer to this page: get_mempolicy(2), mbind(2),
sched_setaffinity(2), set_mempolicy(2), syscalls(2), sched_getcpu(3),
cpuset(7)
http://man7.org/linux/man-pages/man2/getcpu.2.html
by Zoran Horvat
Mar 18, 2013

The goal of this exercise is to write code which solves a 9x9 Sudoku board. The program receives the initial Sudoku as a sequence of nine lines of text. On output, the program produces the solved Sudoku.

Example: Suppose that the following sequence of lines is provided on input:

    ...7..3.1
    3..9.....
    .4.31.2..
    .6.4..5..
    .........
    ..1..8.4.
    ..6.21.5.
    .....9..8
    8.5..4...

The program should solve this puzzle and produce the output:

    589742361
    312986475
    647315289
    268493517
    493157826
    751268943
    936821754
    124579638
    875634192

Keywords: Sudoku, search, depth-first search, constraint satisfaction, most constrained first heuristic.

A Sudoku table consists of 9x9 cells, each cell receiving a single digit between 1 and 9, inclusive. Sub-tables consisting of 3x3 cells are specifically grouped, as shown in the picture below. In order to be solved correctly, the Sudoku table must satisfy a seemingly simple criterion: in every row, every column and every 3x3 block, each digit must appear exactly once. The picture shows a starting position and the corresponding Sudoku table when solved (source:).

The problem here is how to solve a given Sudoku so that the final state of the table satisfies the validation criteria. One possible solution is to search through partial solutions in depth-first manner. In each step we pick up one empty cell in the table and try to put in any digit that wouldn't cause a constraint violation. If successful, we simply repeat the operation recursively, on a table which has one empty cell less than before. Should this process come to a dead end, in which there is a cell in which no digit can be placed without causing a violation, we unroll the process back one step and try to replace the last digit with the next viable candidate, if such a digit exists; otherwise, we unroll one step more to try the next candidate one level further up the line. In perspective, we hope to search all possibilities until the final solution to the Sudoku is found.
In the end, we might finish the search without encountering the fully populated table, in which case we simply conclude that the initial position was not valid. But now there appears to be a problem. The Sudoku table contains 81 cells, most of which are initially blank (57 in the example shown in the picture above). How many tables should we consider before the solution is found? With every step deeper down the search tree, the number of tables considered becomes larger and larger - actually it grows exponentially with the depth of the search tree. Going down fifty or so steps into depth sounds quite unfeasible. So what should we do to reduce the number of tables generated by the algorithm?

There is one simple solution which is applicable to many search problems. At every step we should choose the cell in the table which offers the smallest number of candidates, i.e. digits that do not generate a collision immediately. By looking for such heavily constrained cells we hope to provoke collisions early in the search, while the tree is still shallow and the number of nodes reasonably small. This search strategy is called the "most constrained first" heuristic, deriving its name from the fact that the cell with most constraints upon itself is chosen first in each step of the algorithm. When properly applied, this heuristic can cause such a tremendous drop in the number of search steps that a once unfeasible search scheme becomes fast enough to be used in practice.

The algorithm which solves the Sudoku table is now quite simple. It boils down to a function which solves a single cell and then calls itself recursively, trying another candidate each time the recursive call returns a failure condition.
Here is the pseudo-code:

    function Solve(table)
    begin
        c = cell with smallest number of candidates
        if c not found then
            return success (there are no empty cells - puzzle is solved)
        for each d in candidates(c)
        begin
            set value of cell c to d
            status = Solve(table)
            if status = success then
                return success
            else
                clear cell c
        end
        return failure (no solution was found or c had empty candidates set)
    end

The solver algorithm has proven to be very short. But is it efficient enough to actually produce a valid solution to a realistic Sudoku table? We will soon find that out from the implementation that follows. Note that the actual implementation contains functions that load and print the Sudoku table, but it also contains the implementation of a function that extracts the list of candidates for a given table cell. This utility function was skipped in the pseudo-code provided in the previous section, but now we had to implement it in order to make the solution complete.

    using System;

    namespace Sudoku
    {
        class Program
        {
            // Reads nine lines from the console; any character that is not
            // a digit is treated as an empty cell ('.').
            static char[][] LoadTable()
            {
                char[][] table = new char[9][];
                Console.WriteLine("Enter Sudoku table:");
                for (int i = 0; i < 9; i++)
                {
                    string line = Console.ReadLine();
                    table[i] = line.PadRight(9).Substring(0, 9).ToCharArray();
                    for (int j = 0; j < 9; j++)
                        if (table[i][j] < '0' || table[i][j] > '9')
                            table[i][j] = '.';
                }
                return table;
            }

            static void PrintTable(char[][] table, int stepsCount)
            {
                Console.WriteLine();
                Console.WriteLine("Solved table after {0} steps:", stepsCount);
                for (int i = 0; i < 9; i++)
                    Console.WriteLine("{0}", new string(table[i]));
            }

            // Returns the digits that can be placed at (row, col) without
            // clashing with the row, the column or the 3x3 block.
            static char[] GetCandidates(char[][] table, int row, int col)
            {
                string s = "";
                for (char c = '1'; c <= '9'; c++)
                {
                    bool collision = false;
                    for (int i = 0; i < 9; i++)
                    {
                        if (table[row][i] == c ||
                            table[i][col] == c ||
                            table[(row - row % 3) + i / 3][(col - col % 3) + i % 3] == c)
                        {
                            collision = true;
                            break;
                        }
                    }
                    if (!collision)
                        s += c;
                }
                return s.ToCharArray();
            }

            // Depth-first search applying the most constrained first
            // heuristic: the empty cell with fewest candidates is tried first.
            static bool Solve(char[][] table, ref int stepsCount)
            {
                bool solved = false;
                int row = -1;
                int col = -1;
                char[] candidates = null;

                for (int i = 0; i < 9; i++)
                    for (int j = 0; j < 9; j++)
                        if (table[i][j] == '.')
                        {
                            char[] newCandidates = GetCandidates(table, i, j);
                            if (row < 0 || newCandidates.Length < candidates.Length)
                            {
                                row = i;
                                col = j;
                                candidates = newCandidates;
                            }
                        }

                if (row < 0)
                {   // No empty cells remain - the puzzle is solved
                    solved = true;
                }
                else
                {
                    for (int i = 0; i < candidates.Length; i++)
                    {
                        table[row][col] = candidates[i];
                        stepsCount++;
                        if (Solve(table, ref stepsCount))
                        {
                            solved = true;
                            break;
                        }
                        table[row][col] = '.';  // Backtrack
                    }
                }
                return solved;
            }

            static void Main(string[] args)
            {
                while (true)
                {
                    char[][] table = LoadTable();
                    int stepsCount = 0;
                    if (Solve(table, ref stepsCount))
                        PrintTable(table, stepsCount);
                    else
                        Console.WriteLine("Could not solve this Sudoku.");
                    Console.WriteLine();
                    Console.Write("More? (y/n) ");
                    if (Console.ReadLine().ToLower() != "y")
                        break;
                }
            }
        }
    }

When the source code provided above is compiled and run, it lets the user enter an initial Sudoku position and then prints the solved table, if a solution was found. In addition, the total number of different tables produced during the search is printed, in order to prove the efficiency of the most constrained first search heuristic.

    Enter Sudoku table:
    ...7..3.1
    3..9.....
    .4.31.2..
    .6.4..5..
    .........
    ..1..8.4.
    ..6.21.5.
    .....9..8
    8.5..4...

    Solved table after 149 steps:
    589742361
    312986475
    647315289
    268493517
    493157826
    751268943
    936821754
    124579638
    875634192

    More? (y/n) y
    Enter Sudoku table:
    ..19...46
    3.27..51.
    .....4.3.
    528.71...
    ..3.4.6..
    ...39.725
    .5.2.....
    .67..94.3
    83...51..

    Solved table after 46 steps:
    781953246
    342786519
    695124837
    528671394
    973542681
    416398725
    154237968
    267819453
    839465172

    More? (y/n) y
    Enter Sudoku table:
    83.1..6.5
    .......8.
    ...7..9..
    .5..17...
    ..3...2..
    ...34..1.
    ..4..8...
    .9.......
    3.2..6.47

    Solved table after 80 steps:
    837194625
    549623781
    621785934
    256817493
    413569278
    978342516
    164278359
    795431862
    382956147

    More? (y/n) n

Observe how small the number of steps required to solve each of these three examples is: 149, 46 and 80. This looks almost within the reach of a pen-and-paper solver, and it's all thanks to a very efficient search heuristic.
http://codinghelmet.com/exercises/sudoku-solver
02 July 2009 11:00 [Source: ICIS news]

SINGAPORE (ICIS news)--Here is Thursday's end of day Asian oil and chemical market summary from ICIS pricing.

CRUDE: Aug WTI $68.35/bbl, down 96 cents/bbl. Aug BRENT $67.97/bbl, down 82 cents/bbl. Crude futures fell back on Thursday, adding to losses made the previous day. Prices fell after larger-than-expected builds in US gasoline stocks. Downbeat economic data added further downward pressure on prices. At 8:30 GMT on Thursday, the Dubai Mercantile Exchange (DME) September Oman futures contract settled at $68.24/bbl, down $2.10/bbl on the previous day.

NAPHTHA: Asian naphtha prices closed softer Thursday. Second half August price indications were pegged at $608.00-609.00/tonne CFR (cost and freight) Japan, first half September at $603.50-604.50/tonne CFR Japan and second half September at $600.00-601.00/tonne CFR Japan.

BENZENE: Prices firmed further to $845-860/tonne FOB (free on board)

TOLUENE: Prices were hovering at the $760-770
http://www.icis.com/Articles/2009/07/02/9229391/evening-snapshot-asia-markets-summary.html
Agenda

See also: IRC log

<scribe> Scribe: Henry S. Thompson
<scribe> ScribeNick: ht
<alexmilowski> On in just a second ...
<avernet> And joining in 2 seconds here... (sorry)

Andrew, we have regrets from Paul, right?
Nevermind, I've confirmed
come on Alex, we're waiting for you. . .
No known regrets at this time. . .

<MSM> regrets from me for 15 November
<alexmilowski> OK
<alexmilowski> Screaming child prevents me from unmuting...

: [summarizes the 'use syntax' option] ... or issue 70
RT: Let's not get bogged down in details of individual steps
HST: Moving on to comment 9
RT: What is meant by 'namespace aware DTD validation'? ... assume it means namespace-aware parsing
HST: I think this means,]

This is scribe.perl Revision: 1.128 of Date: 2007/02/23 21:38:13
Check for newer version at
Guessing input format: RRSAgent_Text_Format (score 1.00)
Succeeded: s/, Alex/, Alex, Michael (in part)/
Succeeded: s/Issue 18 Comment 6/Issue 13 Comment 6/
Found Scribe: Henry S. Thompson
Found ScribeNick: ht
WARNING: Replacing list of attendees.
Old list: [IPcaller]
New list: Ht avernet Andrew richard Alex_Milowski MSM
Default Present: Ht, avernet, Andrew, richard, Alex_Milowski, MSM
Present: Ht avernet Andrew richard Alex_Milowski MSM
Agenda:
Got date from IRC log name: 1 Nov 2007
Guessing minutes URL:
People with action items: hst nw
[End of scribe.perl diagnostic output]
http://www.w3.org/2007/11/01-xproc-minutes.html
# shapeless-contrib

Interoperability libraries for Shapeless

## Usage

This library is currently available for Scala 2.10, 2.11, and 2.12.

To use the latest version, include the following in your build.sbt:

```scala
libraryDependencies ++= Seq(
  "org.typelevel" %% "shapeless-scalacheck" % "0.6.1",
  "org.typelevel" %% "shapeless-spire" % "0.6.1",
  "org.typelevel" %% "shapeless-scalaz" % "0.6.1"
)
```

## What does this library do?

shapeless-contrib aims to provide smooth interoperability between Shapeless and Spire. The most prominent feature is automatic derivation of type class instances for case classes, but there is more. If you've got a cool idea for more stuff, please open an issue.

## Examples

### Derive type classes

Consider a simple case class with an addition operation:

```scala
case class Vector3(x: Int, y: Int, z: Int) {
  def +(other: Vector3) =
    Vector3(this.x + other.x, this.y + other.y, this.z + other.z)
}
```

If we want to use that in a generic setting, e.g. in an algorithm which requires a Monoid, we can define an instance for spire.algebra.Monoid like so:

```scala
implicit object Vector3Monoid extends Monoid[Vector3] {
  def id = Vector3(0, 0, 0)
  def op(x: Vector3, y: Vector3) = x + y
}
```

This will work nicely for that particular case. However, it requires repetition: addition on Vector3 is just pointwise addition of its elements, and the null vector consists of three zeroes. We do not want to repeat that sort of code for all our case classes, and want to derive that automatically. Luckily, this library provides exactly that:

```scala
import spire.implicits._
import shapeless.contrib.spire._

// Define the `Vector3` case class without any operations
case class Vector3(x: Int, y: Int, z: Int)

// That's it! `Vector3` is an `AdditiveMonoid` now.
Vector3(1, 2, 3) + Vector3(-1, 3, 0)
```
https://index.scala-lang.org/typelevel/shapeless-contrib/shapeless-spire/0.6.1?target=_sjs0.6_2.11
Make the Most of the Internal Rate of Return through Modification

The accounting rate of return is helpful, but it's so simple that it's extremely limited in its ability to provide you with information that's useful in your attempt to manage assets, investments, and projects. For that, you have something called the modified internal rate of return (MIRR).

Modified!? Yeah. The internal rate of return (IRR) is a good equation, but it has some faults that are easily rectified, so no one really even brings it up anymore. The IRR is a calculation that attempts to take the net future cash flows of a project (all the positive and negative cash flows of the project) and the discount rate at which the present value of the net cash flows is zero. Think of it like this: A project is worth $0 at the beginning because it hasn't produced anything. So in order to determine the IRR, you attempt to calculate the rate at which the net present value of future cash flows is 0. That rate is the IRR.

There are a couple of problems with the IRR, though:

- It automatically assumes that all cash flows from the project are reinvested at the IRR rate, which isn't realistic in most cases.
- It has difficulty comparing projects that have differing durations and cash flows.

Otherwise, the IRR can be used to evaluate a single project or single cash flow. The MIRR tends to be more accurate in measuring the costs and profitability of projects, and it is a more robust equation with wider applications.

You use the following equation to calculate the MIRR:

MIRR = (FV(Positive Cash Flows, Reinvestment) / PV(Costs, Rate))^(1/n) - 1

where:

n = number of periods
FV = Future value
PV = Present value
Positive Cash Flows = The revenues/value contributions to revenues from the project
Reinvestment = The rate generated from reinvesting future cash flows
Cost = The investment cost
Rate = The rate of financing the investment
1 = A number

Most of the time, the reinvestment rate of MIRR is set at the corporation's cost of capital.
Of course, that depends a lot on how efficient the corporation is in its financial management, so keep it an open variable based more on evaluations of the corporation's financial performance. Anyway, the following quick example shows you how to calculate the MIRR of a project.

Say that a project lasting only two years with an initial investment cost of $2,000 and a cost of capital of 10 percent will return $2,000 in year 1 and $3,000 in year 2. Reinvested at a 10 percent rate of return, you compute the future value of the positive cash flows as follows: $2,000(1.10) + $3,000 = $5,200 at the end of the project's lifespan of two years.

Now you divide the future value of the cash flows by the present value of the initial cost, which is $2,000, to find the geometric return for two periods:

MIRR = ($5,200 / $2,000)^(1/2) - 1 = 0.612, or about 61.2 percent

Note that this calculation doesn't take a financing cost into account. That's okay, because most corporations can afford $2,000 with no problem. Also note that had you used the IRR instead of the MIRR, the rate of return would have been substantially higher, but also substantially less accurate.
http://www.dummies.com/business/accounting/auditing/make-the-most-of-the-internal-rate-of-return-through-modification/
THE POLITICS OF THE ANCIENT CONSTITUTION

The Politics of the Ancient Constitution
An Introduction to English Political Thought, 1603-1642

GLENN BURGESS

© Glenn Burgess 1992

Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages.

First published 1992 by THE MACMILLAN PRESS LTD
Houndmills, Basingstoke, Hampshire RG21 2XS and London
Companies and representatives throughout the world

ISBN 978-0-333-52746-7
ISBN 978-1-349-22263-6 (eBook)
DOI 10.1007/978-1-349-22263-6

A catalogue record for this book is available from the British Library.

Copy-edited and typeset by Grahame & Grahame Editorial, Brighton

For Mandy

Contents

Preface ix

PART I: EXPLORING THE ANCIENT CONSTITUTION OF ENGLAND 1

1 Ancient Constitutions - Politics and the Past 3
  (a) The Idea of an Ancient Constitution 4
  (b) Politics and the Past 7
  (c) The Ancient Constitution of England in European Perspective 11

2 The Ancient Constitution of England 19
  (a) Introductory: The Problems of Legal Change and Legal Diversity 20
  (b) The Common Law as Reason 37
  (c) The Common Law as Custom 48
  (d) Conclusion: Views of the History of English Law 58

3 Problems and Implications 79
  (a) Two Controversies: Insularity and Conquest Theory 79
  (b) Common Law, Political Theory and Radicalism 86
  (c) The Chronology of English Ancient Constitutionalism: Origins and Collapse 99

PART II: THE COMMON LAW MIND AND THE STRUCTURE OF POLITICAL DEBATE 107

4 Some Historiographical Perspectives 109

5 The Elements of Consensus in Jacobean England 115
  (a) The Basics: Language, Profession, Audience 115
  (b) The Three Languages (I): Common Law 119
  (c) The Three Languages (II): Civil Law 121
  (d) The Three Languages (III): Theology 130

6 Making Consensus Work 139
  (a) Consensus and Linguistic Diversity 139
  (b) An Excursus on the Royal Prerogative 162
  (c) Consensus and Conflict 167

7 Towards Breakdown: 'New Counsels' and the Dissolution of Consensus 179

8 Epilogue: The Crisis of the Common Law 212

Notes 232
Index 286

Preface

The idea of the ancient constitution provided the English political nation of the early-seventeenth century with its most important intellectual tools for the conduct of political debate. The common law, from which the ancient constitution derived, had a near-monopoly when it came to the discussion of such issues as taxation, property rights, and the making of law. Many of its central features were described in 1957 by Professor J. G. A. Pocock, in his book The Ancient Constitution and the Feudal Law. Since that time there have been many attempts to revise some elements of Pocock's account, as well as his own reflections on these attempts published in a new edition of the book in 1987. Part I of the present work is an attempt to survey the state of play on this matter, and to contribute new material to the discussion. The heart of it is contained in Chapter 2, where it is argued that Pocock's account of the ancient constitution, though in many respects as valid as it ever was, over-emphasized the typicality of Sir Edward Coke, and (partly in consequence of this) under-played the role of reason in the thought of the common lawyers. They certainly thought that the ancient constitution was built on custom, but the temptation for us is to conclude from this that - like some later conservative thinkers - they believed the ancient constitution to be good simply because it was old, and (in any case) changing it would be more inconvenient than it was worth. In fact, I argue, custom was always subservient to reason: the ancient constitution was good because it was a rational system. Custom was a tool used to explain its rationality.

It is Part II of this book that justifies its sub-title. The common law was the most important political 'language' of early Stuart England, but it was not the only one. In the second half of the book I examine its relationships with the other 'languages', and thus its place in the overall system of political discourse to be found in the period. The aim of this is to show the basic structures and operation of political debate in the pre-Civil-War period. The account is not in all respects a complete one, and more could certainly be said about divine-right theory than I have said here. Inevitably, my focus is on law. As a result, Part II is effectively a re-examination of what were once considered to be some of the great constitutional high-points on
In the second half of the book I examine its relationships with the other 'languages', and thus its place in the overall system of political discourse to be found in the period. The aim of this is to show the basic structures and operation of political debate in the pre-Civil-War period. The account is not in all respects a complete one, and more could certainly be said about divine-right theory than I have said here. Inevitably, my focus is on law. As a result, Part II is effectively a re-examination of what were once considered to be some of the great constitutional high-points on ix X Preface the road to Civil War. I have attempted to show that they were nothing of the kind: far from showing an articulate and conscious opposition to the theories used by royal spokesmen, the conflicts of the early Stuart period show a pattern of consensus giving way to confusion, fear and doubt before the actions (rather than the theories) of Charles I. By 1640 there was evident a crisis of the common law, characterized by the growth of doubts about whether it really could fill the role that the doctrine of the ancient constitution gave it. This role was to protect the lives, liberties and estates of Englishmen. There was not so much a theoretical challenge to monarchy as a growing realization of the inadequacy of existing theories to cope with a new situation. Men in fact found it extremely difficult in the 1640s to construct for themselves a language with which to criticize and justify resistance to royal misgovernment. I have aimed to address in this book both a student audience, and an audience of colleagues. There are always risks in aiming at more than a single audience, and some comment on the use of this book might help. Chapter 2 is undoubtedly more complex and technical than other parts. It is the place in which I develop my own views about the nature of the ancient constitution. Specialists in the subject will find in it justification for remarks made elsewhere. 
However, those more interested in my views on the general nature of political debate in the early-seventeenth century will find that Part II is to a considerable degree (if not entirely) able to be read on its own. Similarly Chapters 1 and 3 will also give between them a reasonable portrait of ancient constitutionalism, even without Chapter 2. Finally, Part II of the book forms one single argument: its parts do not really stand alone. It is intended to be accessible enough for an undergraduate audience (in part surveying the findings of recent historiography), while presenting a line of interpretation that will be of interest to scholars. Throughout the book I have concentrated on public debate. There may be value in asking what people said in the privacy of their families, or wrote in the privacy of their studies, but these are not questions that I have chosen to address. I wanted to uncover the rules governing the conduct of political debate in the public arena, and - with a few exceptions - the evidence I cite is evidence from the public domain: pamphlets and books, legal trials, parliamentary debates. Whether people thought things that they were unable to express publically is a separate issue (and for what it's worth some recent work has now suggested that censorship did not weigh so Preface xi heavily on the expression of opinion in the period than has hitherto been believed). In any case, the structure of public discourse is a subject in its own right. I have, as a general rule, left quotations as I have found them: punctuation and spelling are unaltered; the same is usually true of capitalization. I have not always followed the italicization and other font styles of the originals, and I have usually modernized the usage of the letters i, j, u, and v. I have, of course, corrected obvious typographical errors on occasion. Throughout, the year is assumed to begin on January 1, though in other repects I have followed normal seventeenth century dating habits. 
This book does not have a bibliography, but I have tried to indicate the most important further reading in my notes. Such notes have been indexed, so those looking for information on particular subjects should consult the index. The most pleasant duty that falls to the writer of prefaces is to recall the names of the friends and colleagues who have helped and inspired his or her work. In many cases we do not know the people personally. The names of those in this category will be found in my footnotes. I am particularly indebted to many of the recent scholars of the early Stuart political world whose work has led me to ask the question: if politics was like this, then what must political discourse have been like? Part II gives my answer to this question. It focuses' on the balance between consensus and conflict, terms central to recent historiographical debate. Prof. Louis Knafla responded generously to my request for help in tracking down some of his own work. Dr Richard Tuck gave of his time and knowledge while I worked on the thesis from which some of this book derives, and Dr Mark Goldie made valuable comments on the completed thesis. During this period I was kept buoyant by the conversations and moral support of friends, notably Dr Jonathan Scott and Mr. Howard Moss. Both have contributed more to my conclusions than they possibly realize. To Dr John Morrill, supervisor of my PhD thesis, and Prof. Colin Davis I have accumulated, and continue to accumulate, debts which I shall never be able to repay. Their friendship and advice at all stages of this project both made it more pleasant and improved the quality of its product. Ideas alone do not make books. In the process of producing the finished product one gathers a further set of friends and, in a wide sense, creditors. Mrs Dawn Hack and Mrs Judy Robertson converted my MS into typescript. 
Financial aid was provided, at various points, by Trinity College, Cambridge; the Cambridge xii Preface Commonwealth Trust; a Cla11de McCarthy Fellowship from the N.Z. University Grants Committee; and by research grants from the University of Canterbury and its Department of History. I am grateful also to the staff of the Rare Books Room of the Cambridge University Library, where much of the research for this book was carried out, and to the excellent interloans staff at the University of Canterbury Library. My thanks to all of them, and to my publishers, Vanessa Graham in particular, for patience with my dilatoriness. I have been lucky: my parents have always done their utmost to enable me to fulfil my wish to become an historian (however bizarre they may have thought such a wish). Without them, it would not have been possible. My final thanks are reserved for my wife, to whom the book is dedicated. She did not type the manuscript or compile the index; but without her willingness to value the writing of this book as highly as I did myself, its progress would have been much slower. What more can one ask for? GLENN BURGESS Part I Exploring the Ancient Constitution of England 1 Ancient Constitutions Politics and the Past In the past people thought differently. The point is a deceptively obvious one. It is always tempting to interpret ideas from the past in ways that make them more like our own than they really were. No matter how frequently we remind ourselves of the differences between the past and the present, our very habits of mind encourage us to abridge them. The study of past ideas must always be in part a process of defamiliarization. In early-seventeenth century England there were two common ways of legitimating the political rules and arrangements of the present,1 and both of them are liable to look bizarre to modem eyes. One of these ways involved the employment of the concept of custom, the other the concept of grace. 
Things were legitimate either because they were customary, or because they were the product of God's grace. In the hyper-rationalistic twentieth century neither of these justifications seems particularly persuasive, and a considerable degree of effort is required for us to rethink the thoughts of people for whom these concepts were of such legitimating power.

This book will consider in detail the thinking that underlay the concept of custom. Many have argued that this forms the basic language for political debate in the early Stuart period,2 and this is probably so. The language of custom was derived from the practices and attitudes of the common law, and most of the English ruling elite had at least a passing acquaintance with this. In law courts, in pamphlets, in parliamentary debate, the preconceptions of 'the common-law mind'3 were fundamental to the ways in which political matters were discussed. These preconceptions together formed the concept of the ancient constitution. This concept is an inherently ambiguous one, so we must begin with a careful examination of its possible meanings.

(A) THE IDEA OF AN ANCIENT CONSTITUTION

The ideas that make up the theory of the ancient constitution can be resolved into three basic elements: custom, continuity, and balance. Let us examine each in turn. Professor Pocock has summarized his understanding of the idea of the ancient constitution in these words:

The relations of government and governed in England were assumed to be regulated by law; the law in force in England was assumed to be the common law; all common law was assumed to be custom, elaborated, summarized and enforced by statute; and all custom was assumed to be immemorial, in the sense that any declaration or even change of custom ...
presupposed a custom already ancient and not necessarily recorded at the time of writing.4

The 'ancient constitution' of England was identified with the common law because that law regulated the relations of government and governed. Even the maker of statute law itself, the institution of King-in-parliament, was also the High Court of Parliament and the highest common-law court in the land. This common law, then, constituted the English polity. Further, the common law, and consequently the ancient constitution itself, were customary. By this was meant two things: first, that English common law was unwritten (lex non scripta), not written as the Roman law was. This raised the problem of how it could be law at all (the Roman law term lex meant written law), and this problem Bracton resolved by making English common law a customary law. It partook of the features of Roman leges (it was general, not local), and of consuetudines (it was unwritten).5 Thus common law became seen as the national law of England, yet was unusual in being (in origin at least) unwritten. So, where did it come from? How was it known? The answer to this provided the second feature of the customary common law: it was immemorial. This indeed was the defining feature of custom - some action performed or rule followed 'time out of mind'. This phrase 'time out of mind' did not mean simply old. It meant what it said: the practice had existed for as long as could be remembered, or (in other words) no proof could be had of a time when it had not existed. There was no interest in the origin of customs. Quite the contrary, for if their origin could be discovered they would not be customs. Origin presupposed a time before.

Because the ancient constitution was customary in this sense, believers in it were led to the doctrine of continuity. The ancient constitution was still in place in the seventeenth century, and it had existed 'time out of mind'.
This implied that English legal and constitutional history must be a continuous one. If there were any evidence of breaks or sudden innovations then there would also be evidence of a time before current political arrangements were in force - a time when things were radically different. Therefore the idea of the ancient constitution presupposed that English history could be traced back step by step to its earliest documents, with no intervening ruptures. Professor Pocock has argued that this need for continuity committed people to a denial of the reality of conquest (and above all the Norman Conquest of 1066) in English history. This is a matter of some complexity to which we shall return in later chapters.

Finally, it was agreed that the essential political characteristic of the ancient constitution was balance. The fundamental laws of the English polity, as James I remarked early in his reign, gave the king his prerogatives, and gave the subjects security in their liberties and property.6 They guaranteed the balance of prerogatives and liberties. The point could be formulated in a number of different ways. It could be stated as a balance of prerogative and law (rather than prerogative and liberties), as in Sir Francis Bacon's statement that 'so the laws, without the king's power, are dead; the king's power, except the laws be corroborated, will never move constantly, but be full of staggering and trepidation'.7 Prerogative and law were mutually enhancing, not contradictory. Where James I balanced prerogatives and liberties, and Bacon balanced prerogative and law, Sir Edward Coke balanced allegiance and protection:

for as the subject oweth to the king his true and faithfull ligeance and obedience, so the sovereign is to govern and protect his subjects. Between a sovereign and a subject there is 'duplex et reciprocum ligamen'.8

What is common to all of these variations is the theme of balance, the idea that the king's authority, while it remained unchallengeable, was to work in harmony with the needs of the subject as expressed in the common law. Balance was harmonious, and so in England the prerogatives of the monarch and the liberties of the subject co-existed in stable perfection. Thomas Hedley summarized the matter in a way that might look self-contradictory in our eyes, but was not: 'This kingdom enjoyeth the blessings and benefits of an absolute monarchy and of a free estate .... Therefore let no man think liberty and sovereignty incompatible, ... rather like twins ... they have such concordance and coalescence, that the one can hardly long subsist without the other'.9

Though these three components are all essential to the concept of the ancient constitution, there are a variety of ways in which they can be made to cohere. These can be reduced to two. Each may be linked to pre-existing approaches to the common law which can be found in fifteenth- and early-sixteenth-century writings. Later, we shall have to confront the question of the origins of the 'common-law mind', but it can be said in anticipation that at least part of the answer to this question is provided by features inherent to the English common law tradition itself. Nevertheless that tradition points in divergent directions, and each of the directions has a quite different implication for the idea of an ancient constitution.

The first consists of the argument that the English constitution and the common law were, in essentials, unchanging. Custom and continuity resulted in stasis, with institutions and laws passed on from generation to generation essentially unaltered. The view may be found expressed in Fortescue's De Laudibus Legum Anglie (written c. 1470).
Fortescue's argument was summarized by Chief Justice Popham in 1607:

that the laws of England, had continued as a rock without alteration in all the varieties of people that had possessed this land, namely the Romans, Brittons, Danes, Saxons, Normans, and English, which he imputed to the integrity and justice of these laws, every people taking a liking to them, and desirous to continue them and live by them, for which he cited Fortescues book of the laws of England.10

Or, as Fortescue himself put the matter, through all changes of rule, 'the realm has been continuously ruled by the same customs as it is now.'11

But this is not the only reading that can be given to the concepts of custom and continuity.12 The alternative reading saw English constitutional history not as stasis but as continual change. Customs were constantly being refined and modified to fit changed circumstances. In this view English law was a body of rules in constant unbroken evolution, and at any given time was in perfect accord with its context. The classic expression of this reading was given by John Selden, appropriately in his commentary on Fortescue.13 Selden argued that the customs of the present were not literally identical to those of the past yet could be said to be the same, just as

the ship, that by often mending had no piece of the first materialls, or as the house that's so often repaired, ut nihil ex pristina materia supersit [i.e., that nothing of the original material survives], which yet (by the Civill law) is to be accounted the same still.14

This view of the meaning of custom and continuity, though considerably elaborated by Selden, was not really alien to English legal traditions. A basis for it may be found in St German's Doctor and Student (1523), which was (with Fortescue) one of the most widely read and cited of legal books.
It has been said of this book that it argued that, though law was grounded in the principles of reason, nevertheless those principles 'are not necessarily universal and abstract, but may be drawn from historical growth'.15 The Doctor and Student provided an account of custom that both avoided Fortescue's assertion that English customs had existed unchanged since pre-Roman times, and implied that customs were not all of the same age. St German, particularly in the way in which he linked together reason, custom and maxims so that they became almost identical,16 was clearly aware of the diverse sources from which English law had gradually been accreted over time. For him, custom was a term of art, and the immemoriality of custom not so much a recognition of long immutable existence as of simple ignorance of origin. It was on such a basis as this that an account of legal history such as Selden's could be constructed.

Though we must eventually attempt to decide whether the path provided by Fortescue, or that provided by St German, was the one more commonly followed, we need first to explore some of the general contexts in which the doctrine of the ancient constitution can be seen.

(B) POLITICS AND THE PAST

Modern attitudes toward the role of the past in settling the political questions of the present are generally coloured by the acceptance of a principle given classic formulation by Hume and Kant in the eighteenth century, the fact-value distinction.17 This asserts that questions of value (that is, questions that require moral decisions or commitments) cannot be answered factually. Questions about the best sort of political organization, or about the rightness of political policies or courses of action, in so far as they are moral questions, cannot be answered by saying that in the past things were done in some particular way, and that this ought to be continued.
No matter how frequently people have done something, this cannot be considered proof of its rightness. In short, questions of value can only be answered by rational proof of the rightness of the value one wishes to recommend, not by appeal to experience or the past.

But in the period between the Italian Renaissance and the Enlightenment conventional attitudes were quite different from these modern ones. They differed in two ways: first, the distinction between facts and values was blurred, perhaps even non-existent; and, second, the study of the past (or history) was conceived of in ways that made its nature and purpose different from that of modern historical inquiry. For the early modern thinker, values themselves were a sort of 'fact' - they had an objective status and existed, in some sense, even if no one accepted them. Some values were good, some bad, and the judgement between the two was not a subjective one. Rather, God had created the world, and endowed human beings with reason, in such a way that objective values were part of the world and human reason could give us access to them. The laws of nature, therefore, not only laid down such things as the principles whereby trees grew and human beings reproduced; they laid down also a basic moral code that was immutable and valid throughout the world. For people who thought in this way values were rather like facts: they were something to be discovered (even discovered empirically), not something of emotional or psychological origin within each subject's mind.

Even more important for understanding the thinking that lay behind the doctrine of the ancient constitution is the conventional attitude to history found during the early modern period. During this period the primary justification for the study of history was not so much the need to seek the truth about the past as the need to seek truths that would be valid in the present.
Perhaps, indeed, seek is the wrong word; history was more a storehouse of examples that demonstrated, from the record of human experiences, truths already known. One scholar has aptly named this understanding of history 'the exemplar theory of history' because it was, in the famous formulation derived from Dionysius of Halicarnassus, philosophy teaching by examples.18 Such a view prevailed (at least amongst historians if not among antiquarian scholars) until the advent of the attitudes to historical inquiry associated with historicism. 'Historicism' was a position that, when adopted by historians, tended to result in the idea that the historian should try to understand the past in its own terms (an aim fraught with difficulty and ambiguity), and should place the disinterested search for truth and accuracy above the wish to create a past usable for present purposes. The process by which historicist attitudes came to dominate the modern historical profession is a long and complicated one, spanning the period from the mid-eighteenth to the early-twentieth century.19

Against such a background as this the idea of the ancient constitution does not look as silly as it might at first sight appear.20 Virtually everyone in the sixteenth and early-seventeenth centuries believed that the past contained lessons of direct relevance to the present. William Blundeville, in 1574, expressed the platitudes of an age.
He believed that three subjects were central to political knowledge (peace, sedition and war) and that truth in these matters

is partly taught by the Philosophers in generall precepts and rules, but the Historiographers doe teache it much more playnlye by perticular examples and experiences.21

Amongst the purposes of reading history Blundeville numbered 'that by the examples of the wise, we maye learne wisedome wysely to behave our selves in all our actions, as well private as publique', and 'that we maye be stirred by example of the good to followe the good, and by the example of the evill to flee the evill'.22 History was to provide moral lessons for the present, and that was its basic function.

But one should not automatically conclude from this that early modern historical scholarship was based on an attitude to the past similar to that held by the makers of modern television commercials, or the writers of politicians' speeches. The past was used to construct general lessons for the present, but this was not thought to detract from the truth of the knowledge acquired from a study of the past. Rather, past and present existed in a continuum, so that lessons from the past were automatically transferrable to the present. In short, present-mindedness was not viewed as an obstacle to historical understanding. The enormous energy that early modern antiquarians poured into the examination of the past was in part motivated by the belief that this would be of use in the present - though this need not detract from the view that they were also motivated by a genuine curiosity about the past.

But mention of antiquarianism introduces a complication. In good part English ancient constitutionalism was the product of legal antiquarianism. In the early modern period antiquarianism and history were rather different things. The historian often had no direct acquaintance with the surviving documentation from the periods he wrote about.
His aim - as expressed by Blundeville - was to retell stories from the past with as much literary elegance as possible and in such a way as to bring out the moral lessons to be derived from them. History was not so much a branch of scholarship as a branch of literature, and it did not necessarily involve detailed research. The antiquarian, on the other hand, was a scholar concerned with collecting, preserving and arranging the primary sources for the study of the past. However, the antiquarian did share many of the historian's general preconceptions about the past.23 Antiquarians also believed in the direct value of the past for present-day political concerns.24 Indeed, it may be argued that it was the fruitful impact of present-day concerns that diverted the attention of antiquarians from classical antiquities to the antiquities of England, a process evident from the Elizabethan period onwards. The end result was that antiquarianism became - in the hands, for example, of John Selden - a type of history. It was characterized by an evolutionary model of historical change (perhaps the product of the much greater sense of historical continuity, and much weaker sense of anachronism, evident in the seventeenth century as compared to our own). In the words of one historian, which probably go just a little too far, 'antiquarianism became ... social history'; but unlike normal humanist historiography it was not just 'the narrative of political action' but dealt with the structural evolution of societies and polities.25

Thus the set of attitudes and assumptions that we call ancient constitutionalism arose, in a general sense, from two preconceptions with which early modern intellectual life was deeply imbued. The first was that the past could speak directly to the present, and the historian's job was to enable it to do so.
For the traditional political historian this might involve the recounting of the wicked deeds of wicked men (and with any luck the evil consequences of the deeds) as a warning against repeating such actions. A classic example of this was the growing popularity (from the 1590s) of Tacitus as a model for historians to emulate - a popularity that had a lot to do with the belief that Tacitus could give political lessons appropriate to the circumstances of late Elizabethan England.26 For the antiquarian, on the other hand, it might mean showing that in times past the structure of political action took a particular form, with the implication that this form should be maintained and not corrupted. The past was a storehouse of moral knowledge waiting to be raided.

The second preconception, very strong amongst the legal antiquarians most responsible for the doctrine of the ancient constitution, was that the past and present existed in an evolutionary continuum. It was this that both made the past relevant to the present, and formed the basis for one of the central components of ancient constitutionalism, the idea of continuity. There were ambiguities, which we have touched on already and will explore again, in this idea. Were past and present continuous in the sense of being the same? Were they linked in a continuous evolutionary process? Or was the continuity more complicated still? Perhaps past and present were continuous in the sense of being governed by the same principles, as in Machiavelli's cycle of construction and decay, or in Vico's more complex theory of historical cycles.

The fact that the idea of the ancient constitution was deeply embedded in the preconceptions of its age raises one more background question about it. To what extent was it a peculiarly English phenomenon, and to what extent was it but the English version of a general European pattern of ideas?
(C) THE ANCIENT CONSTITUTION OF ENGLAND IN EUROPEAN PERSPECTIVE

In The Ancient Constitution and the Feudal Law, Professor Pocock began with a chapter on French historical and legal thought in the sixteenth century. In it he recognized the important role that 'constitutional antiquarianism' played in the political thinking of most early modern European countries.27 In this sense English ancient constitutionalism was not a unique thing. But Pocock also argued that the English case was, in other senses, unique. Continental theorists were generally exposed to both Roman and customary law, but in England there was a total monopoly of the latter (in the form of the common law). This meant that English legal antiquarians and constitutional historians lacked any comparative perspective on the English past. They tended as a consequence to be trapped within their documents, which disguised change by employing various legal fictions.28 Their view of the past was thus blind to change, at least of some types, and they were encouraged 'to interpret the past as if it had been governed by the law of their own day ... to read existing law into the remote past'.29 The English scholars tended to remain unreceptive to the principles of legal humanism that developed amongst Roman law scholars in Continental Europe (France above all). Legal humanists were able to recognize that the laws of the present were the product of historical change and discontinuity, possibly even the result of clashes between rival law codes (especially Germanic customary law and the civil law of Rome), while English lawyers and historians remained committed - at least when they considered their own country - to simple and unhistorical notions of immemoriality.
If this is true, then the resemblances between the English idea of an ancient constitution and the myths of ancient French, German and Dutch constitutions that can be found in the sixteenth and seventeenth centuries must be largely illusory. Certainly, such resemblances would have to be balanced against enormous differences.30 In particular, English ancient constitutionalism would need to be seen as being based on a very different attitude to history from that found on the Continent. Sixteenth-century historical thought is now widely credited with preempting the insights of historicism, developing a 'new historical relativism', 'a new awareness of change'. Laws, it was thought, 'had to be understood, as they had once existed, in the context of the society which created them'.31 But, in contrast to this attitude found in various forms in French scholars like Budé, Hotman and Bodin, the English remained insular. The 'common-law mind' has been described as possessing a fundamentally insular outlook, seeing English history only in English terms, ignoring outside influences, and ignoring change to such an extent that the past came to look very much like the present.32

This idea of the insularity of English common law thought has proved highly controversial.33 This has been in part because the notion of insularity is slightly ambiguous, and has been taken to mean a number of separate things. Consequently, the question of whether or not 'the common-law mind' was insular actually conflates several quite distinct issues. These might be separated by posing a number of more precise questions. The important ones might include:

1. Did English common-law scholars of the late-sixteenth and early-seventeenth centuries know any civil law, and were they acquainted with the methods of legal humanism?
2. Did they apply this knowledge and these methods in their study of English law?
3. Did they end up seeing the history of English law as possessing a unique pattern, or one that exemplified general European historical evolution?
4. Were the attitudes to and theories of history held by common-law scholars peculiar to themselves or shared with Continental scholars?

It is this last question that we need to answer here. The first three will be considered later, when the common-law attitude to history has been examined in detail. It is clear that Pocock and Kelley do consider English ancient constitutionalism to have been insular in this fourth sense. It was out of touch with the most up-to-date thought on legal and historical science being generated by Roman law scholarship in other parts of Europe. However, there are grounds for believing that the insularity of the English common-law mind has in this matter been considerably exaggerated. Partly this has been because the 'modernity' of the historical attitudes of legal humanism has been overplayed; partly it is because the lack of 'modernity' in common law thought has also been excessively magnified. The former of these points will be dealt with here; the latter in Chapter 2.

In a series of recent articles Zachary Sayre Schiffman has critically examined the concept of 'Renaissance historicism'.34 He argues that those who have found the birth of historicism in sixteenth-century France (Pocock, Kelley, Huppert, Franklin) 'have mistaken developments in scholarship for the emergence of' a new historical consciousness.35 The view of history that underlay the scholarship of sixteenth-century humanists was very different from that of nineteenth- and twentieth-century historicists. The humanist scholars had little sense of historical development. Instead, they tended to see history as a teleological process in which potentialities inherent in an entity were gradually made visible over time.
The historical process was not an evolution (or revolutionary leap) from one state to another that was completely different from it. Everything that happened to a nation, let us say, was inherent or potential in that nation from the beginning. Thus historical 'change' was not change in the sense of moving away from an initial starting point; it was, rather, a process by which an entity (such as a nation) became more itself over time. All that in the beginning existed in an entity only potentially gradually became actual. Thus Schiffman is able to remark, with regard to La Popelinière's history of historical writing, that it 'was less concerned with describing the evolution of that body of literature than with discovering its eternal, unchanging essence'.36 Essences became more visible as time progressed.

This attitude was expressed neatly by Thomas Woods, who translated Hugo Grotius's account of the ancient constitution of Batavia into English in 1649. In Grotius's account, Woods said, it could be discovered

that that Commonwealth which is at the present among us, hath not had its beginning now of late, but that the very same Commonwealth that in former times hath been, is now made more manifest and appeareth more clear and evidenter than ever before.37

If Schiffman is right in believing that this form of teleological historical consciousness underlay sixteenth-century legal humanism, then there would be a reasonable basis for believing also that English ancient constitutionalism was not 'insular' in the sense under consideration here. The ancient constitution of England was an object that remained essentially the same, even though it underwent changes of a sort. Schiffman's model of sixteenth-century historical consciousness provides, at the least, one possible way of making sense of this fact.
It would explain how the 'ancient constitution' could both serve as a normative paradigm to which the present was supposed to adhere, and yet still be an historical object undergoing evolutionary change. Indeed, going a little further, it would explain why the present could not but adhere to the forms of the ancient constitution, and hence automatically conformed to its normative requirements. In short, the ancient constitution was forever changing yet always the same. This, as we shall see, captures exactly the central features of English ancient constitutionalism.

There is, thus, some reason for recognizing that the idea of the ancient constitution of England, and the historical consciousness that underlay it, was not dissimilar to ideas that could be found in other European countries in the early modern era. Since 1957, when Pocock first uncovered the English common-law mind,38 a number of scholars have gradually put together a European context for it. As well as the English, the Dutch, the French, the Scots, and the inhabitants of the Holy Roman Empire had an 'ancient constitution' constructed for them. (No doubt this is a far from exhaustive list.) Though the historical consciousness that underlay these various constructions may have been broadly the same, the European ancient constitutions did differ from one another in other ways. In general, they can be divided into two categories. The basis for such a categorization lies not in differences of historical understanding but in differences of ideological purpose.

One type of ancient constitution was developed by, or on behalf of, groups engaged in political struggle of one sort or another. It was a form of ideologically slanted political propaganda. This type of theory was found in France, Scotland and the Netherlands. English ancient constitutionalism, however, was of a second type, to which the theory of an ancient German constitution may also have belonged.
The key to understanding the difference between the two types is to be found in a consideration of the political circumstances in which the various theories were developed. French, Scottish and Dutch ancient constitutionalism was essentially the work of Calvinist rebels. The major purpose behind it was to demonstrate that though Calvinists might be rebels, they were not innovators. The actions they were following were defended as being in accord with the essential nature of the polity, and this in turn was revealed through historical analysis. What distinguished this form of ancient constitutionalism from the English variety is that it used the past to create a highly partisan account of the present. It formed a critique of the present, as it had to if it were to justify resistance to constituted authority. The classic writers in this tradition of ancient constitutionalism all had specific ideological axes to grind.

François Hotman, who first turned the 'humanist investigation of the French ancient constitution' into a 'revolutionary ideology',39 is a case in point. Hotman's Francogallia (first published in 1573, during the crucial decade of the French Wars of Religion) was in essence a defence of the power of the Estates General (and hence the French aristocracy) over the crown. The public council of the realm was, according to Hotman, constituted for 'the appointing and deposing of kings; next, matters concerning war and peace; the public laws; the highest honours, offices and regencies belonging to the commonwealth', and other matters. Indeed, its consent was needed in all 'affairs of state', for there was 'no right for any part of the commonwealth to be dealt with except in the council of estates or orders'.40 Indeed, the fundamental laws of the kingdom limited the power of French kings.
They were 'restrained by defined laws', the chief of which decreed that 'they should preserve the authority of the public council as something holy and inviolate', and 'that it is not lawful for the king to determine anything that affects the condition of the commonwealth as a whole without the authority of the public council'.41 Hotman's work, in short, was an attempt to construct an ancient French constitution that provided institutional and legal checks on the monarchy - checks that could be exploited by Huguenot rebels. During the late sixteenth century Dutch and Scottish Calvinists, as well as French, frequently found themselves acting in conflict with civil authority. They too developed ancient constitutionalist theories to defend themselves in this conflict. In Scotland George Buchanan found a 'revolutionary tradition' within Scottish history. Hereditary monarchy was not the essential nature of the Scottish polity. It began as an elective monarchy, and this elective principle was kept alive by occasional depositions. Scottish kings could always, in the last resort, be brought to account.42 So, of course, could Scottish queens: one of Buchanan's major works was written in defence of the deposition of Mary Stuart in 1567, the De jure Regni apud Scotos (1579).43 Resistance theory also played its part in Dutch ancient constitutionalism, though it was of a rather different character from that of Buchanan. The Dutch fashioned their theories to defend a national resistance against Spanish rule after 1568, and so needed to do more than others to construct the very image of themselves as a nation. 44 Paramount amongst those Dutch writings that constructed an ancient Batavian constitution, to serve as a precursor and exemplar for the modem Dutch, was Hugo Grotius's Liber de Antiquitate Reipublicae Batavicae (1610). 
45 Grotius wrote to defend a particular form of constitution for the new Dutch republic, a form which left power in the hands of the provinces rather than in any central authority. It was aimed against absolute monarchy, but it was also aimed at defending a preeminent role for the ruling elite of Holland in the affairs of the United Provinces as a whole. Grotius portrayed the Hollanders as heirs to the Batavians, and was thus (when he had shown that the Batavians had possessed a government of the sort he was recommending) able to argue that experience and antiquity - 'so many hundred years' - attested to the fitness of this form of government for the Dutch (though he was happy to admit that other peoples might suit different forms of government).46 Schaffer has indicated how Grotius's approach can be seen as an extension of traditional humanist historiographical practice: 'he expanded his exemplum [i.e. the object being put forward as an example to emulate] beyond the traditional humanist approach, which had usually been restricted to persons and virtues in a rather vague manner, to the whole structure of a society'.47 The remark applies, mutatis mutandis, to all examples of this first type of ancient constitutionalism. It was fundamentally normative in character, setting up a model of the way a particular political community ought to be organized and criticizing the present in terms of that model. Ancient constitutionalism of this sort was generally engaged with the issues of its day. It was thus ideological in character, the intellectual weapon of particular groups and individuals. English ancient constitutionalism was different in character from this. It corresponded to a second type of early modern European ancient constitutionalist thinking.
The closest parallel to the English model seems to have been found in the German Empire.48 This second type of theory was primarily descriptive in character, and was concerned less with criticizing the present and more with explaining how the present, whatever form it took, was to be justified, and why it was to be accepted. It was not the ideology of a party but the shared language of an entire political nation (at least in the English case) - a mentalité perhaps. The key feature of this variety of ancient constitutionalist theory was possession of an evolutionary theory of history. It did not assert the identity of past and present, but it did assert that a continuous process had transformed the former into the latter in such a way that they were in essence the same. This process of 'change' was characteristically seen as a gradual refinement whereby the customs and laws of a nation remained always in perfect accord with their environment (i.e. the needs of the nation that they served). The English ancient constitution was not a state to which the English ought to return (as the French was, in a sense, to Hotman); it was a state in which they still lived. Thus the English version of ancient constitutionalism was of a very peculiar character. It looked like a glorification of the ancient past, but it was in fact a glorification (and a justification) of the present. The key to explaining its peculiar ideological nature lies in the fact that it developed in different circumstances to most continental ancient constitutionalist theories. The French version, for example, was developed as a form of resistance theory, but English ancient constitutionalism took shape during the late sixteenth century when the paramount political need was for a defence of the status quo in church and state. Thus it developed, in sharp contrast to most other parts of Western Europe, into a form of political conformism.
English ancient constitutionalism explained why the current shape of the English polity was automatically the best that could be achieved. It was thus an antidote to Calvinist resistance theory, not a form of it. There is a very real sense in which the theory of the ancient constitution of England was but the secular portion of the theory contained in Richard Hooker's great defence of the Elizabethan church, Of the Laws of Ecclesiastical Polity (Books I-V, 1593-97; later Books not published until 1648 and 1661). Exactly what that sense was we must now discover.

2 The Ancient Constitution of England

'The ancient constitution' was not a constitution of the past; it was the present constitution, the constitution of the seventeenth century. This is to say no more than that the ancient constitution was a collection of laws and institutions that had evolved in a continuous process whose beginnings were lost to human memory (including, that is, written records, which were a form of collective memory). In short, an ancient constitution was a modern constitution that had ancient foundations. A study of the ancient constitution of England will, then, be a study of the relations between the past and the present. What sort of a process transformed the Saxon polity into that of seventeenth-century England? Even more important than this was the question of why men ever believed that the past could legitimate the present, or (a more answerable question) how they believed it was able to do so.2 For seventeenth-century common lawyers the answer to this question was that they conceived of the process linking past and present in such a way that it was able to explain to them why their law was reasonable, why it was a law of reason. The law of England was good law not because it was old but because it was rational. The theory of the ancient constitution was an explanation for this rationality, and consequently it was also a justification of the law and the constitution.
The rational was ipso facto the good. Thus our pursuit of the ancient constitution of England will have to be a study of what Pocock called 'the common-law mind'. It will be a study of the way in which early Stuart lawyers conceived of the rational and historical basis of their own law, the common law of England. This study of the ancient constitution is a study of a process, not of an event.

(A) INTRODUCTORY: THE PROBLEMS OF LEGAL CHANGE AND LEGAL DIVERSITY

Reason, it can truly be said, was in the eyes of the common lawyers the fabric from which the laws were cut. But to produce from this fabric the finished garments of a legal code was not a simple process. Two concepts were developed by common lawyers to explain why the common law of England was a rational system of law. The first of those concepts was artificial reason; the second, custom. When these concepts have been examined we shall find that the early Stuart lawyers believed in two principles which, on the face of it, do not look to be readily compatible with one another. They believed, firstly, that the law of England was a law of reason; and they believed also that the law of England had undergone a long process of (evolutionary) change. We shall then be able to see how these principles combined to produce a particular attitude to history amongst common lawyers, and a particular view of the nature of legal change. 'History' and 'legal change' might seem odd terms to use of such a reputedly unhistorical group of minds as those of the early Stuart common lawyers. It has been an (unintentional)3 consequence of J. G. A. Pocock's work on The Ancient Constitution and the Feudal Law that it has fostered the view, at least amongst textbook writers, that the common lawyers believed the law to have remained literally unchanged since the time of the ancient (pre-Roman) Britons.
Beyond the odd isolated passage in Coke (or Saltern) there is little support for such an interpretation, as we shall see. Furthermore, Pocock's work has also fostered the idea - again this might plausibly be said to be a misreading of the text of The Ancient Constitution and the Feudal Law - that early Stuart common lawyers thought not in terms of reason but in terms of history and custom. They believed English law to be good law more because it was old and of long continuance than because it was rational or reasonable. Behind this view lurks an assumed dichotomy between an attitude that sees laws and institutions justified by abstract reason, and one that sees them justified by history and experience. But this dichotomy can only mislead in the present case, for (we shall discover) early Stuart legal thinkers did not see reason and custom/history as alternatives. Rather, custom was one of the means by which the rational essence of the law was made apparent. It was a mode with which reason revealed itself. On both of these matters (immutability versus legal change; reason versus history) we shall find that an extreme position was taken by Sir Edward Coke.4 This must raise considerable doubt about Pocock's decision to make Coke the paradigm example of his 'common-law mind'. By the end of this chapter it should be clear that Coke was in fact an eccentric, and sometimes a confused, thinker. The majority of the common law scholars of the early seventeenth century possessed an attitude to the past and to the law that was closer to the opinions of John Selden, or even Sir Francis Bacon, than to those of Coke. But even Coke was not totally committed to the idea of an immemorially unchanging law. Like most of his contemporaries, he recognized (at least on occasion) that the common law had undergone change of a sort. At the outset we need to be clear about the attitude that the early Stuart common lawyers had to legal change.
It is necessary initially to distinguish between two propositions. First, the proposition that the law ought not to be changed, and that no good would be likely to come from any such change. And, second, the proposition that any alteration to the law is either impossible or utterly illegitimate. Some seventeenth-century common lawyers certainly came very close to asserting the former proposition in its strongest possible sense, but none of them asserted the second. Possibly one of the most interesting of passages stating the extreme version of the first principle was Sir John Davies's brief remark about the relationship of parliamentary statute to the common law. The common law, he says doth far excell our written Laws, namely our Statutes or Acts of Parliament: which is manifest in this, that when our Parliaments have altered or changed any fundamentall points of the Common Law, those alterations have been found by experimence [sic] to be so inconvenient for the Commonwealth, as that the Common Law hath in effect been restored again, in the same points, by other Acts of Parliament in succeeding Ages.5 Coke seems to have held views similar to these.6 Yet the interesting thing about such remarks is not what they say but what they have carefully refrained from saying. Neither Davies nor Coke seems to have particularly liked the idea that parliament could alter the common law by statute, but they were aware that there was no doubt that it could do so. As a consequence their remarks tend to be a little curmudgeonly in tone. Hence Coke's assertion that statutes (and above all Magna Carta) 'were, for the most part, but declaratories of the ancient Common Laws of England',7 and that innovation in law is a bad thing, seldom to be recommended.8 However, though Davies and Coke might do their utmost to convey the impression that changing the law is a bad thing they did not suggest that it was not possible or allowable.
Indeed the very tone of their remarks shows their awareness of the impossibility of denying that the common law could be altered. In a case where the process outlined by Davies, of change and restoration, is incomplete, so that dangerous innovations remain, Coke can do nothing but accept the legitimacy of the change, albeit with poor grace and in a tone of spluttering irritation. Here he was talking about whether or not inferior judicial officers ought to take money from other people, and remarked: But after that [Act of 3 Ed. I] this rule of the Common Law was altered, and that the Sheriff, Coroner, Gaoler, and other the Kings ministers, might in some case take of the Subject, it is not credible what extortions, and oppressions have thereupon ensued. So dangerous a thing it is, to shake or alter any of the rules or fundamentall points of the Common Law, which in truth are the main pillars, and supporters of the fabrick of the Commonwealth, as elsewhere I have noted more at large ... 9 One might wonder why it should be that Coke and Davies so disliked the idea of legal change, even though they were forced to recognize it as a fact (and, indeed, in other places their jurisprudence required an acceptance of legal change). The first thing to note is that Davies and Coke were untypical in the extent to which they detested innovation. Some common lawyers accepted the theoretical possibility of radical change to the law without the signs of discomfort we have seen in Coke and Davies. Bacon is a case in point, though he was perhaps untypical at the other extreme: if the parliament should enact in the nature of the ancient lex regia, that there should be no more parliaments held, but that the king should have the authority of the parliament, or, e converso, if the King by Parliament were to enact to alter the state, and to translate it from a monarchy to any other form; both these acts were good.
10 And, of course, those lawyers strongly influenced by legal humanism were in no doubt that laws could be altered, by authority, and did not think this a matter for regret. Selden, startlingly, asserted that it could be made law that anyone rising before nine o'clock should be put to death.11 Lord Chancellor Ellesmere's view, while much less trenchant than the remarks of Bacon or Selden, was that the law was in continual change. All laws were but leges temporis and became obsolete or were subtly changed as circumstances changed.12 One should note also that many of the scholars involved with the Elizabethan Society of Antiquaries were frank about their recognition of the realities of legal change. One anonymous contributor to their proceedings said of parliament that the King called it together 'upon occasion of interpreting, or abrogating old laws, and making of new'.13 In general, then, there seems to be no problem in accepting that law is mutable. But this merely leaves the position of Davies and Coke looking even more wayward. Why were they so recognizably ill at ease with the idea of legal change? Now, in part, the answer to this question is that contrasts between Coke and Davies on the one hand, and Selden, Bacon, Ellesmere and Spelman on the other, have been overdrawn. While all of this latter group were frank in their acceptance of legal change they could also give voice to remarks rather similar to those of Coke. The reason for this is quite simple: the difficulty and inadvisability of altering laws was one of the most tired of political platitudes in early modern Europe, and derived ultimately from Aristotle. It was a matter on which there was almost universal agreement. Sir John Doddridge, in a contribution to the debate on Anglo-Scottish Union that took place in the first decade of the reign of James I, wrote the following: But lawes were never in any kingedome totallie altered without great danger of the evercion of the whole state.
And therefore it is well said by the interpretors of Aristotle that lawes are not to [be] changed but with these cautions and circumspeccions: 1. Raro, ne incommodum; 2. In melius, ne periculum; 3. Prudenter et censim ne reipublicae naufragium ex innovacione sequatur. Lawes are to be changed: 1. Seldome lest suche change prove to the danger of the State; 2. For the better, lest it breede danger to the State; 3. Warilie, and by little and little, lest the shipwreck of the commonwelthe and the totall evercion of all be occasioned by such innovacion.14 The passage of Aristotle from which these precepts ultimately derived was Politics, 1268b-1269a. It is of considerable complexity and subtlety, but one can readily see how it might be read as containing the advice which Doddridge provided. Aristotle pointed out that because circumstances are always changing customs must be altered gradually in order to cope with these changes. And, as regards written laws, he argued that because they must initially be framed in general terms (given that their initial formulator cannot foresee all the particular cases with which they will have to deal) then much fine-tuning will later become necessary. But, he went on to say, even given all this, we should not forget that law depends for its observance upon habits, i.e. upon being customarily observed. Consequently it is dangerous to interfere too lightly with such settled habits. Changes to the law should therefore be made with great circumspection, and in some cases abuses were best left unremedied, for the evil consequences of alteration would inevitably outweigh any minor benefits that might come about.15 Almost identical remarks are found frequently in subsequent thinkers: Aquinas, Bodin and Machiavelli are all cases in point.16 But a more useful example for us to consider is Francis Bacon.
We have already seen that Bacon took rather a high view of the capacity of statute to alter even the most fundamental laws of the land, and indeed this accords with the traditional interpretations of Bacon's thought which tend to stress his impatience with lawyers and the law and his willingness to allow strict legality to be overridden in the interests of the state.17 Yet, whatever his theoretical appreciation of the possibilities and legitimacy of legal change, Bacon shared with Coke, and just about everybody else, the view that such change is often dangerous. In his essay 'Of Seditions and Troubles' he listed 'alteration of laws and customs' as one of 'the causes and motives of seditions'. The essay 'Of Empire' warned rulers that meddling with the customs of the 'commons' is like to bring trouble.18 But more interesting was the short essay 'Of Innovations'. It did not mention law at all, but much of the essay reads like an attempt to address the same question that Aristotle grappled with in the passage cited above. Indeed Bacon gives an account of the matter similarly complex and balanced. Underlying Bacon's view was an almost Machiavellian awareness of the inevitable alterations brought by time itself - 'time is the greatest innovator' - which alterations often meant that customs were becoming or would become redundant, or even pernicious. So we must remember 'that a froward retention of custom is as turbulent a thing as an innovation'. Even so, 'what is settled by custom, though it be not good, yet at least it is fit'. Great care must, therefore, be taken when introducing alteration. Bacon summarized his position with the following (rather platitudinous) advice: It were good therefore that men in their innovations would follow the example of time itself, which indeed innovateth greatly, but quietly and by degrees scarce to be perceived ... discover what is the straight and right way, and so to walk in it.
19 Bacon, like Doddridge, was a contributor to the debate on Anglo-Scottish Union, and in his advice to King James on the matter he made points very similar to those made by Doddridge. Experience shows us 'that patrius mos is dear to all men', and any legal changes and innovations cause deep bitterness and discord. Therefore proceed slowly and gradually.20 The points were reiterated endlessly during the debate, and not merely by common lawyers: the civilian John Hayward made very similar points, and provided two general rules to govern legal change. That change be not great, and that it be gradual.21 The matter then is clear enough: virtually everyone, common lawyers included, was of the opinion that laws could be changed (and in some cases should be). But they were equally aware that such innovations were made only at some price: and that price entailed running the risk of instability and subversion. It was never an easy thing to alter laws, and often one might be better off tolerating the existence of inadequate ones. The matter was summarized very neatly, in rather humanistic terms, by another contributor to the debates on Union, Sir Henry Spelman: But some will say a Parliament can do anything. I say it may quickly change the lawe but not the myndes of the people whom in this union we must seek to content.22 The point was rather like Aristotle's: it is quite possible for the relevant authority to declare that the law is altered, but it is another matter to alter those customary habits of the people on which their obedience to the law depended. Given this agreed background, what can we now make of Davies and Coke? The first point to emphasize is that this background is not in itself sufficient to account for what they said.
For while Aristotle, and all the others whom we have been considering, recognized that in many cases it was necessary for the law to be changed, that legal innovation (if we ignore its practical difficulties) was sometimes a desirable thing, this recognition seems lacking in Coke and Davies. Legal change, for them, was no doubt possible, but it could not be said to be a desirable or even (by and large) a useful thing. It is possible to lay down the outlines at least for an explanation of this situation. It must initially be recognized that we are doing Sir John Davies a considerable disservice in linking him with Coke.23 Davies viewed the common law as essentially customary, and it was of the essence of his theory of custom that customs changed and evolved over time, so that they gradually came to fit the needs of the nation like a hand fits a glove. In this sense is the law 'connatural to the Nation'.24 There is no trace, in Davies, of the Fortescuean idea that the laws have been continually and unchangingly in existence since the time of the Britons. Therefore, his statement (quoted above) on legal change should be read as saying no more than that in practice alterations to custom have often been found to be harmful. He thus fits neatly into the Aristotelian context analysed above, albeit at one extreme of the possible spectrum of interpretations of the Aristotelian wisdom. Coke, however, is a different matter.25 As we shall see, he combined ideas similar to those of Davies on the customary nature of the law with a view of the literal immemorial antiquity of the law derived from Fortescue. The two positions were radically incompatible, and have subsequently caused much confusion to the expositor of Coke's writings. The logic of Coke's position forced him to attempt to reconcile the idea of evolving customs with legal immutability, and part of this attempted resolution is found in the passages we have discussed above.
Coke tried, half-heartedly and with a notable lack of success, to pretend that statutes were declaratory of pre-existent law, or at best restored law to its pristine purity. Essentially he was saying that while in theory statutes could alter law and make new law, in practice (as experience and history tells us) they had not done so. But it must be stressed that even Coke did not believe that laws could not be changed, and as we shall soon see he could at times adopt theories that required a belief in legal mutability. Davies's point, that when 'fundamentall points of the Common Law' have been altered this has proved 'inconvenient for the Commonwealth' and the common law eventually has been restored, was not an assertion of the essential immutability of law (even at face value one should note the restriction of its scope to the fundamental points of the law). Rather it was derived from a widely-shared Aristotelian maxim of political prudence. Only if we assume beforehand that Davies held simple Fortescuean beliefs about the antiquity of the law does it look like anything else. Coke, who did gesture towards such beliefs, was a different matter, and seems (at times) to have wanted to convey the impression that English law had, in some ways, remained unaltered time out of mind. The remainder of this chapter will flesh out the consequences of these points. It will stress Coke's eccentricity and untypicality and demonstrate that the jurisprudence of the common lawyers involved a view of the history of law incompatible with that of Fortescue. And because Coke had a foot in both these camps we will be able to see that his thought was deeply self-contradictory. The acceptance by the common lawyers (however reluctantly) of legal change poses another broader problem for us. If the lawyers believed that the common law, like all codes of positive law, was a law of reason (as they did), then how could they also have believed that it changed over time?
Equally problematic was the question of how laws could differ from place to place. Reason was immutable, both synchronically and diachronically; but law clearly was mutable. It changed over time; it varied from place to place. The problem, in short, is this - how could the common law (mutable) be said to be a law of reason (immutable)? Professor Pocock captured this problem when he talked of the paradox inherent in the common law view of custom (hereafter referred to as Pocock's paradox): If the idea that law is custom implies anything, it is that law is in constant change and adaptation, altered to meet each new experience in the life of the people ... yet the fact is that the common lawyers ... came to believe that the common law, and with it the constitution, had always been exactly what they were now, that they were immemorial.26 The question of immutable reason versus mutable law can, then, be restated as the question of the law as reason versus the law as custom. If it is one, how can it also be the other? This chapter will build up a picture of the assumptions upon which English ancient constitutionalism rested, but it will - in doing so - be an extended answer to this question. Of considerable help in this is a conceptual distinction inherent in traditional (medieval) natural law theory. This particular formulation comes from St. Thomas Aquinas, but this should not be taken to imply that common lawyers themselves took the idea from this source. In his discussion of the relationship between positive and natural law in the Summa Theologiae, Aquinas wrote: But it should be noted that there are two ways in which anything may derive from natural law. First, as a conclusion from more general principles. Secondly, as a determination of certain general features. The former is similar to the method of the sciences in which demonstrative conclusions are drawn from first principles.
The second way is like to that of the arts in which some common form is determined to a particular instance: as, for example, when an architect, starting from the general idea of a house, then goes on to design the particular plan of this or that house. So, therefore, some derivations are made from the natural law by way of formal conclusion: as the conclusion, 'Do not murder', derives from the precept, 'Do harm to no man.' Other conclusions are arrived at as determinations of particular cases. So the natural law establishes that whoever transgresses shall be punished. But that a man should be punished by a specific penalty is a particular determination of the natural law. Both types of derivation are to be found in human law. But those which are arrived at in the first way are sanctioned not only by human law, but by the natural law also; while those arrived at by the second method have the validity of human law alone.27 Some human laws were the formal, demonstrative conclusion of natural laws. Such human laws, as Aquinas indicated, bound with the full force of natural law, and like natural law would be everywhere valid and everywhere the same. Customs and statutes, however, bound only as human laws, but this did not mean they were completely adrift and unrelated to natural law or justice. Rather, they were derived from nature, in the second of the ways indicated by Aquinas, as particular determinations of matters left indifferent (though bounded) by natural laws. Laws of this sort did not have the force of natural law; and though, so long as they did not actually conflict with the injunctions of natural law, all such laws were equally legitimate, they need not be seen as equally desirable. Some laws, then, could be demonstrated to derive from the rational principles of natural law, as simple logical deductions.
But other laws were rational in a different sense: they determined a particular matter from a range of possible choices left open by reason itself. But what guarantee was there that these determinations were in all cases rational and not arbitrary or erroneous? For most laws there could be no simple demonstration of their rationality. Natural reason alone could neither explain nor justify why it was rational for land to be inherited by primogeniture in most of England, but by partible inheritance in Kent (under the customary provisions known as gavelkind). Two concepts were employed by common lawyers to link particular positive laws to the law of reason. The first, which we will soon be looking at in some detail, was artificial reason. Natural reason, the sort of reason with which all human beings were endowed naturally, might not reveal the link between the detailed content of the common law and the law of reason (or nature), but there might be an artificial reason which would do so. The second concept was that of custom. The idea that the common law was customary in nature, far from being an alternative to the view that positive law was a law of reason, was in fact parasitic on the idea of the rationality of law. Custom explained how laws were both rational and mutable. There were at least two possible directions in which the concept of custom could have been taken by early Stuart legal commentators. Each of these directions had historical roots in fifteenth and sixteenth century legal and political thought. Fortescue28 gave some support to the view that because English law - or at least its basic principles or maxims - was rational, then it must be unchanging. Its customary nature, therefore, might imply that it had been the same since time immemorial. Its origins were lost in the mists of time, but throughout recorded history the law had shown no fundamental alteration.
(Whether this is an accurate account of what Fortescue meant to say is a matter we can afford to leave aside.) Other thinkers, however, such as Christopher St German and Richard Hooker,29 explored the opposite side of Pocock's paradox: for them the customary nature of law meant its continuing evolution not its immobility in time. Pocock's paradox will turn out, upon examination, to be an illusion. It was created by Coke's attempt to combine a theory of custom similar to that found in St German with ideas of legal immutability drawn from Fortescue. But most common-law thinkers did not attempt to follow the Fortescuean path, and so most did not fall into this paradox (a paradox that comes remarkably close to self-contradiction). Indeed, Pocock's attribution of this paradoxical position to 'the common-law mind' in general shows once again the danger of taking Edward Coke as typical. However, before we can fully comprehend the peculiarities of Coke's position, we need to examine one more preliminary matter: the view of custom found in Christopher St German and its difference from Fortescue's thought. St German believed English law to be a rational law. It contained within itself the principles of reason rather than being subject to external measures of its rationality. Human law (according to St German) 'is deryuyed by reason as a thinge whiche is necessaryly & probably folowying of the lawe of reason / & of the lawe of god'. Thus in all true human laws there 'is somewhat of the lawe of reason', though to 'discerne ... the lawe of reason from the lawe posytyue is very harde'.30 Of course, if natural reason was to be used to discriminate between positive laws it was first necessary that the dictates of such reason be clearly distinguished from, and knowable independently of, the positive law.
When the student in St German's dialogue came - in chapter v - to examine natural law as one of the six grounds of the law of England (the others being the laws of God, general customs, maxims, particular customs and statutes) he made it clear that this condition was not fulfilled. He said on the matter of the law of reason/nature that:

The Ancient Constitution of England 31

It is not vsed amonge them that be lernyd in the lawes of Englande to reason what thynge is commaundyd or prohybyt by the lawe of nature and what not: but all the resonynge in that behalfe is under this maner: as when anythyng is groundyd upon the lawe of nature: they say that reason wyll that suche a thyng be don/ and yf it be prohybyte by the lawe of nature. They say it is agaynst reason or that reason wyll not suffre that it be don.31

Though it is not immediately apparent, what this passage meant was that the law of England contained in itself rules for determining what was and what was not in accord with natural law. To say that a particular thing was not allowable by reason was to say that it was not allowed by a particular category of rules of English law. It was not to say that extra-legal rules prohibited such a thing. Thus Chrimes can say of St German that he 'seems to have known nothing of a law of reason which was not to be found embodied in or rationally derivable from the positive law of the land'.32 St German, in other words, simply used the terminology of the law of reason to provide categories into which certain parts of English law could be divided. The law of reason primary provided the grounds for certain rules of law necessary to make human society possible: the prohibition of murder, perjury, deceit and the breaking of peace. The law of reason secondary general was a category containing the rules outlining the laws of property which are common to all nations. (For most earlier thinkers these two categories would together form the jus gentium.)
And the law of reason secondary particular included all laws which the English thought rational but which were not observed in other countries. It was made plain that it was only through an examination of English law itself that we could know what particular laws were included in each of these three categories.33 There was no rational law - at least not for the lawyer - independent of positive law. Thus the entire body of English law was in one way or another the law of reason, since St German's definition of the law of reason secondary particular seems to cover any law not covered by the two other categories. St German appeared to deny this in the English (but not the Latin) versions of his work where he said that it could not be proved 'that all the lawe of the realme is the lawe of reason', but in fact he was arguing (along lines already outlined) that not all law could be demonstrated to be derived from the law of reason. This was largely for practical reasons, or because the particular forms in which rational laws were enforced were not in all their detail necessary logical deductions from rational principles. In other words - as the Latin edition makes plain - English law was all rational but it was not necessarily all provable by deductive reason.34

So it is clear that St German believed the laws of England to be laws of reason. Could he also have believed them to be customs, or customary laws? Given what has already been said of custom it will come as no surprise to learn that he could.35 St German believed all laws to be reasonable and in that sense laws of reason. But he did not think that reason alone could tell us why some particular body of laws was in force in a particular place and time. To do this he needed some more specific grounds for the English common law. These grounds were provided by general customs, and by maxims.
This inclusion of maxims amongst the grounds of English law raised some interesting questions. Maxims were, after all, rational principles or axioms that were not in need of demonstration, so were they the same as or different from customs? The answer to this would appear to be that they formed a special sub-class of customs which for convenience sake St German treated separately. It was the close linking of maxims and customs that revealed most clearly St German's belief that reason, and nothing else, was at the core of the law. St German introduced his discussion of customs in these terms:

The thyrde grounde36 of the lawe of Englande standeth vpon dyuerse generall Customes of old tyme used through all the realme: which have been accepted and approuyd by our soveraygne lorde the kynge and his progenytours and all theyr subgettes. And bycause the sayd customes be neyther agaynst the lawe of god/nor the lawe of reason/ & have ben alwaye taken to be good and necessarye for the common welth of all the realme. Therefore they have optayned the strengthe of a lawe in so moche that he that doth agaynst them doth agaynst Iustyce and law. And these be tho customes that proprely be called the common lawe ... And of these general customes and of certayne pryncyples that be called maxymes which also take effecte by the olde custome of the realme ... dependyth moste parte of the lawe of this realme.37

The remainder of the chapter on general customs indicated that all the major common law courts and all the basic principles of the land law (especially inheritance rules) depended on custom and had no other immediate grounding. But we have just been told that maxims 'take effect' by these customs. This is ambiguous, but fortunately St German dealt with the complex relations between customs, maxims and reason near the end of the chapter and in the two following chapters in some detail.
He raised the crucial question of whether or not customs were demonstrably rational.

And of this that is sayd byfore it apperyth that the customes aforesayd nor other lyke unto them/whereof be very many in the lawes of Englande can not be prouyd to have the strength of a lawe only by reason: for how may it be prouyd by reason that the eldest sone shall onlye enheryte his father & the yonger to have no parte/and that all the daughters together shall share the land if there be no son or that the husbonde shall have the hole of his wife's lande for terme of his lyfe as tenaunt by the curtesye in such a manner as byfore apperyth.38

The lesson from this argument was put thus:

All these and suche other can not be prouyd oonly by reason that it shuld be so and no otherwyse although they be reasonable/ & that with the custome therein vsed suffyseth in the lawe. And so a statute made agaynst such general customes is perfectly valid and ought to be obseruid as law bycause they be not merely the lawe of reason.39

That last phrase 'merely the law of reason' is an interesting one, and linked St German in one direction to Aquinas and in another to Hooker. Aquinas, as we have seen, thought that all laws derived from natural law, but that they could do so in two ways: either they were logical deductions from natural law, or they were determinations of matters on which the rules of natural law gave only very general instruction. When Hooker took over this idea he expressed it by dividing human law into two categories: mixed human law and merely human law. The first category covered those laws that are also laws of nature (i.e. the deductive conclusions of natural law); the second covered those laws that commanded things which were not in themselves commanded by nature.40 St German appears to be making a closely related point, but he was looking at things from the opposite direction to Hooker.
Customs were not 'mere' laws of reason because reason by itself was insufficient to demonstrate their force. So besides being reasonable, there must be some other element involved to explain why they were in force. This other element would appear to be long continuance and the presumption of their goodness and necessity that this established. In his two chapters on maxims St German took the discussion a little further. Maxims were rational axioms like those of geometry and were not themselves in need of proof. They were rather the starting-point for legal proofs and arguments.41 Yet these rational principles could be seen as a sort of custom, and this fact should give us cause for much reflection. If maxims were a category of custom, then clearly at least some customs were rational principles customarily observed. In fact, St German's words tended to imply that customs as such are simply rational principles. Therefore there could be no conflict between saying that all law is reason and that all law is custom. Here are St German's words on the relationship of customs and maxims:

And though all those maxims might be conveniently numbered among the said general customs of the realm, since the generall custom of the Realme be the strength and waraunte of the sayd maxymes: as they be of the general customes of the realme/yet bycause the sayd generall customes be in maner diffused throughout the realm of England and knowen through the realme as well to them that be vnlernyd as lernyd/and may lyghtly be had and knowen and that with little stodye in English law. And the sayde maxymes be onlye knowen in the kynges courtes or amonge them that take great studye in the lawe of the realme/and amonge fewe other persones (nor can they be known easily). Therfore they be set in this wrytnge for severall groundes and he that lysteth may so accompte them/ or yf he wyll he may take them for one grounde after his pleasure/ and in that case by his reckoning only five grounds of the law of England ought to be assigned.42

The distinction, made in this passage, between two types of custom is an extremely interesting one. It has recently received an interesting discussion from the pen of Professor Pocock, when he talks of 'the problem of determining in what community usus et consuetudo were said to operate'.43 Pocock distinguishes two possible solutions to the problem. Customs could be either the practices of the people in general, codified into law by some gradual process; or they could be rules customarily applied in the courts. In this latter sense customs are the same as maxims, and both become terms of art. St German, whom Pocock does not cite, adds support to Pocock's distinction, but also helps us to solve some of the subsidiary problems that troubled Pocock. In particular, Pocock had some difficulty in deciding what Sir John Davies might have meant in saying that customary law was recorded only 'in the memory of the people', given that he cannot have meant that English lawyers ought to conduct sociological investigations to determine what the content of the law really was.44 However, we can see from St German's words that the distinction can be read as one between customs known to laymen and those known only in the courts and amongst trained lawyers. St German's examples of the former category consist primarily of rules of land tenure and inheritance, and there is no difficulty in seeing what he might have meant by saying that knowledge of these customs was widely diffused outside the courts.
They were rules that governed the day-to-day practices and behaviour of ordinary people and as such would have been customarily observed without, in general, the need for recourse to legal enforcement. They were the practices that held society together as a functioning whole. Maxims, on the other hand, which form the second class of customs, were more specialized principles not known to or customarily observed by the people at large, but instead used as the basis for deciding cases in courts. This formulation of the problem has the distinct advantage of avoiding all awkward questions (such as those asked by Pocock) as to how it can be that popular practices can become laws. It also forms a useful grounding for the two concepts that we shall soon examine. To the first category of custom can be related the concept of custom found in early-seventeenth-century legal writers and scholars; to the second category (i.e. maxims) can be related the concept of artificial reason. This was a form of reasoning or of rationality discernible only to those learned in the law.

In the following chapter St German elaborated on these points. The Doctor asked the Student how it was that customs ought not to be denied when they cannot be proved to be rational. If they cannot be proved by reason then surely it is a matter of indifference whether they be accepted or denied, unless they are backed up by statute or some other sufficient authority. The Student's reply to this question took an interesting form. It maintained the previously-established division of custom into two types, and made plain that the basis of this distinction lay in the ways in which the customs came to be known.
Many of the customes & maxymes of the lawes of Englande can be knowen by the use and custome of the realme so apparently that it nedeth not to have any lawe wrytten therof/for what nedyth it to have any lawe wrytten that the eldest sone shall enheryte his father/or that all the daughters shall enheryte togyther as one heyre ... The other maxymes and customes of the lawe that be not so openly knowen amonge the people may be knowen partly by the lawe of reason: & partly by the bokes of the lawis of Englande called yeres of termes/& partly by dyuers recordis remaynynge in the kynges courtes & in his tresorye. And specyally by a boke that is called the regestre/ & also by dyuers statutis wherein many of the sayd customes and maxymes be ofte resyted/ as to a dylygent sercher wyll euydently appere.45

Thus, there were some customs whose observation was second nature to the people, and the universality of the assent given to them was sufficient in itself to provide knowledge of them: they were, therefore, jus non scriptum in its purest form. But the other more detailed and more arcane customs and maxims were accessible only to learned experts. Here we reach the subject of artificial reason. For, as St German has clearly stated, natural reason alone was unable to give knowledge of such customs. Therefore to know what maxims were customarily enforced in English courts one required an artificial reason that could be gained only from a study of the written records of the English courts. These would tell you what had been enforced as law in the past, and therefore what will continue to be enforced as law in the present and future. Customs (in the form of customary maxims) thus played in St German's thought a role closely linked to that of what later lawyers will term artificial reason. This was so because custom was not to be understood in his work as having overtones of reactionary romanticism or irrationalistic conservatism (as it does in modern, i.e.
post-Enlightenment, conservative thought). Custom and artificial reason both were concepts developed to explain how positive law could be both rational and yet independent of the control of a person or institution drawing upon natural reason alone. These concepts also helped to explain how the mutable law could be said to be rational. St German's notion of custom, unlike Fortescue's, had the potentiality to serve as the basis for a complex historical approach to the law. The difference between Fortescue and St German lay in the fact that the former linked custom to a set of immutable maxims, while the latter linked maxims to the mutability of custom. As a result the entire body of positive law (except for those parts of it which were the law of nature) became mutable and, in theory, capable of patterned historical development. It was only to be expected then that there was nothing in St German's works at all similar to the assertion of the immutability of law made by Fortescue: England, claimed the latter, had been ruled successively by Britons, Romans, Saxons, Danes and Normans, yet 'throughout the period of these nations and their kings, the realm had been continuously ruled by the same customs as it is now'. This proved that these customs must have been, from the beginning, the best, otherwise they would have been altered.46

The common lawyers of early Stuart England - with the partial exception of Sir Edward Coke - followed the jurisprudence of St German rather than that of Fortescue. They considerably extended the concepts of artificial reason and of custom (that were implicit in St German's work) in ways that we must now consider.

(B) THE COMMON LAW AS REASON47

'Limited law of nature is the law now used in every State'.48 This dictum by John Selden would have been acceptable to every common lawyer in early-seventeenth-century England.
It occurred in a passage in which he was commenting on Sir John Fortescue's contention - made in his De Laudibus Legum Angliae (written c.1470)49 - that the laws of England were older than the laws of other nations (and this included Roman law). Selden did not agree with Fortescue, though he tried his utmost to avoid saying so too clearly. Fortescue's view, that the English had been ruled by the same customs continuously since the time of the pre-Roman Britons,50 was one that Selden preferred to deal with by creatively reinterpreting rather than by contradicting it. At one point, after he had corrected Fortescue, he concluded his remarks with the statement: 'By this well considered, That of the laws of the realm being never changed will be better understood'.51 Yet these corrections, ostensibly provided to help us better understand Fortescue's utterances, consisted of the argument that conquests (including the Norman Conquest) lead to the alteration of laws. Strictly speaking, there was no contradiction between Selden and Fortescue at this point: the latter merely said that ancient customs remained continuously in force; the former did not deny this but pointed out that new laws and customs were over time added to the original ones. But elsewhere Selden made clear his own view that ancient laws had been frequently abrogated.52

There were two basic reasons for Selden's inability or unwillingness to agree with Fortescue on the antiquity of English law. One was precisely this matter of conquest. England had been successively conquered by various peoples and they had all, except the Romans, contributed to English law. 'But questionless', he wrote, 'the Saxons made a mixture of the British customes with their own; the Danes with old British, the Saxon and their own; and the Normans the like'.53

The common law as it existed in Selden's own day was a mongrel law, and consequently the tenor of Selden's remarks clearly diverged sharply from the tenor of Fortescue's. The differing tones of the two positions reflected two incompatible legal temperaments. The nature of this incompatibility is better revealed by the second of Selden's reasons for disagreeing with Fortescue. English law, he argued, cannot be the most ancient, for 'all laws in general are originally equally ancient' since they were all 'grounded upon nature ... and nature being the same in all, the beginning of all laws must be the same'.54 That is to say that all law codes were equally grounded upon reason. All legal systems were derivations from natural law, and could therefore be traced back, through whatever mutations, to the time at which natural law was first promulgated to human beings.55 Therefore all legal systems were of an equal age, and all of them were laws of reason. This, in short, was Selden's critique of Fortescue.

There are a number of important implications to be derived from such an argument. Attention should be given to that small word, all. 'All nations were grounded upon nature, and no Nation was, that out of it took not their grounds'.56 This meant not only that all legal codes were equally ancient; it meant that they were also all equally valid, equally rational. If one surveyed the legal systems of all contemporary nations one found diversity and variation - often outright conflict - and yet it cannot be said that any one of these systems is better than the others. Diversity did not alter the fact that codes of positive law, including the English common law, were all 'limited law of nature', and thus were all laws of reason. All law codes were also, however, of only local validity because they were all adaptations of the essence of natural law to local circumstance.
Selden put the matter at length:

As soon as Italy was peopled, this beginning of laws was there, and upon it was grounded the Roman laws, which could not have that distinct name indeed till Rome was built, yet remained always that they were at first, saving that additions and interpretations, in succeeding ages increased, and somewhat altered them, by making a Determination juris naturalis, which is nothing but the Civil Law of any Nation. For although the law of nature be truly said Immutable, yet its as true that its limitable, and limited law of nature is the law now used in every State. All the same may be affirmed of our British laws, or English, or other whatsoever. But the divers opinions of interpreters proceeding from the weakness of mans reason, and the several conveniences of divers States, have made those limitations, which the law of Nature hath suffered, very different. And hence is it that those customs which have come all out of one fountain, Nature, thus vary from and cross one another in several Commonwealths.57

Of particular significance in this passage is the mention of 'the weakness of mans reason'. Human beings, Selden was arguing, may be mistaken in their understanding of the natural law, and they may be mistaken in a variety of ways. Consequently, when human beings - operating to the best of their understanding - set about the process of constructing systems of positive law on the initial base of natural laws (reason) they will construct a considerable variety of such systems. This, as well as environmental influence, explained the variety of positive laws. But, most striking of all, Selden did not say that positive-law systems based on fallacious reasoning were any less valid than other systems. It seems to be implied that there were no criteria by which one could distinguish a valid from an invalid understanding of natural law. Another noteworthy fe
13 replies on 1 page. Most recent reply: Dec 19, 2008 11:54 AM by James Iry

2008 has seen a lot of activity around Scala. All major IDEs now have working Scala plugins. A complete Scala tutorial and reference book was published and several others are in the pipeline. Scala is used in popular environments and frameworks, and is being adopted by more and more professional programmers in organizations like Twitter, Sony Imageworks, and Nature, along with many others. It's also a sign of growing momentum that well-known commentators like Ted Neward, Daniel Spiewak and the JavaPosse have covered Scala in depth. Other bloggers have argued more from the outside, without really getting into Scala all that deeply. I find the latter also useful, because it shows how Scala is perceived in other parts of the programming community. But sometimes, initial misconceptions can create myths which stand in the way of deeper understanding. So, as an effort of engaging in the debate, let me address in a series of blog posts some of the myths that have sprung up in the last months around and about Scala.

I'll start with a post by Steve Yegge. Like many of Steve's blogs, this one is quite funny, but that does not make it true. Steve (in his own words) "disses" Scala as "Frankenstein's Monster" because "there are type types, and type type types". (Don't ask me what that means!) In fact, it seems he took a look at the Scala Language Specification, and found its systematic use of types intimidating. I can sympathize with that. A specification is not a tutorial. Its purpose is to give compiler writers some common ground on which to base their implementations. This is not just a theoretical nicety: the implementation of JetBrains' Scala compiler for IntelliJ makes essential use of the language specification; that's how they can match our standard Scala compiler pretty closely.
The other purpose of a specification is that "language lawyers" - people who know the language deeply - can resolve issues of differing interpretations. Both groups of people value precision over extensive prose. Scala's way of achieving precision in the spec is to be rather formal and to express all aspects of compile-time knowledge as types. That's a notational trick which let us keep the Scala spec within 150 pages - compared to, for instance, the 600 pages of the Java Language Specification. The way a spec is written has nothing to do with the experience of programmers. Programmers generally find Scala's type system helpful in a sophisticated way. It's smart enough to infer many type annotations that Java programmers have to write. It's flexible enough to let them express APIs any way they like. And it's pragmatic enough to offer easy access to dynamic typing where required. In fact Scala's types get out of the way so much that Scala was invited to be a contender in last JavaOne's Script Bowl, which is normally a shootout for scripting languages. But Steve's rant was not really about technical issues anyway. He makes it clear that this is just one move in a match between statically and dynamically typed languages. He seems to see this as something like a rugby match (or rather football match in the US), where to score you need just one player who makes it to the ground line. Steve's on the dynamically typed side, pushing Rhino as the Next Big Language. Java is the 800-Pound Gorilla on the statically typed side. Steve thinks he has Java covered, because its type system makes it an easy target. Then out of left field comes another statically typed language that's much more nimble and agile. So he needs to do a quick dash to block it. If he can't get at the programming experience, it must be the complexity of the specification. 
This is a bit like saying you should stay away from cars with antilock braking systems because the internal workings of such systems are sophisticated! Not quite surprisingly, a number of other bloggers have taken Steve's rather lighthearted jokes as the truth without bothering to check the details too much and then have added their own myths to it. I'll write about some of them in the next weeks.

Code fragments quoted in the thread's replies:

def method = "def" ~> name ~ formals ^^ { (n, f) => ...}
def method: Parser[Unit] = "def" ~> name ~ formals ^^ { case n ~ f => ... }
"a*b".r
"\\d*b".r
"""\d*b""".r
implicit def flatten2 [A, B, C](f : (A, B) => C) : (~[A, B]) => C
Function1[String, Int]
String => Int
val a: String => Int = something
super
SingletonObject.type
>: <: <%
Enumeration
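As an illustration of the type-inference point made above, and of the `String => Int` / `Function1[String, Int]` equivalence quoted in the replies, here is a minimal sketch (my own example, not from the original post; the names `InferenceDemo`, `xs`, and `len` are arbitrary):

```scala
object InferenceDemo {
  def main(args: Array[String]): Unit = {
    // The compiler infers List[Int]; no type annotation is needed,
    // unlike the equivalent Java declaration.
    val xs = List(1, 2, 3)
    val doubled = xs.map(_ * 2)

    // "String => Int" is syntactic sugar for Function1[String, Int];
    // both annotations below denote exactly the same type.
    val len: String => Int = s => s.length
    val lenExplicit: Function1[String, Int] = len

    assert(doubled == List(2, 4, 6))
    assert(len("Scala") == 5 && lenExplicit("Scala") == 5)
    println(doubled.mkString(","))
  }
}
```

The point is that the inference happens silently: the programmer writes annotations only where they add documentation value (typically at API boundaries), which is why everyday Scala code reads far more lightly than the specification might suggest.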