text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Some time ago I developed a tutorial showing how to create a password strength meter in an AngularJS application using the zxcvbn JavaScript library by Dropbox. Since React has become the most widely used JavaScript framework in the last few years, I thought it would be helpful to develop a similar tutorial for React applications. If you are interested in the AngularJS article, you can take a look at: Password Strength Meter in AngularJS. In this tutorial, we will create a simple form with fields for fullname, email and password. We will perform some lightweight form validation and also use the zxcvbn library to estimate the strength of the password in the form while providing visual feedback. Here is a demo of what we would have created by the end of this tutorial. Table of Contents Why Measure Password Strength? Passwords are commonly used for user authentication in most web applications and as such, it is required that passwords be stored in a safe way. Over the years, techniques such as one-way password hashing - which involves salting most of the times, have been employed to hide the real representation of passwords being stored in a database. Although password hashing is a great step in securing password, the user still poses a major challenge to password security. A user who uses a very common word as password makes the effort of hashing fruitless, since a bruteforce attack can crack such password in a very short time. Infact, if highly sophisticated infrastructure is used for the attack, it may even take split milliseconds, depending on the password complexity or length. Many web applications today such as Google, Twitter, Dropbox, etc insist on users having considerably strong passwords either by ensuring a minimum password length or some combination of alphanumeric characters and maybe symbols in the password. How then is password strength measured? Dropbox developed an algorithm for a realistic password strength estimator inspired by password crackers. This algorithm is packaged in a JavaScript library called zxcvbn. In addition, the package contains a dictionary of commonly used English words, names and passwords. Getting Started Pre-requisites Before we begin, ensure that you have a recent version of Node installed on your system. We will use yarn to run all our NPM scripts and to install dependencies for our project, so also ensure that you have Yarn installed. You can follow this Yarn installation guide to install yarn on your system. Also, we will use the popular create-react-app package to generate our new React application. Run the following command to install create-react-app on your system if you haven't installed it already. npm install -g create-react-app Create new Application Start a new React application using the following command. You can name the application however you desire. create-react-app react-password-strength NPM >= 5.2 If you are using npmversion 5.2or higher, it ships with an additional npxbinary. Using the npxbinary, you don't need to install create-react-appglobally on your system. You can start a new React application with this simple command: npx create-react-app react-password-strength Install Dependencies Next, we will install the dependencies we need for our application. Run the following command to install the required dependencies. yarn add bootstrap zxcvbn isemail prop-types for Default Styling As you might have noticed, we installed the bootstrap package as a dependency for our application to get some default styling. To include Bootstrap in the application, edit the src/index.js file and add the following line before every other import statement. import "bootstrap/dist/css/bootstrap.min.css"; Start the Application yarn start The application is now started and development can begin. Notice that a browser tab has been opened for you with live reloading functionality to keep in sync with changes in the application as you develop. At this point, your application view should look like the following screenshot: Building the Components Remember we intend to create a simple form with fields for fullname, email and password and also perform some lightweight form validation on the fields. We will create the following React components to keep things as simple as possible. FormField- Wraps a form input field with its attributes and change event handler. FormFieldand adds email validation logic to it. PasswordField- Wraps the password FormFieldand adds the password validation logic to it. Also attaches password strength meter and some other visual cues to the field. JoinForm- The fictitious Join Support Team form that houses the form fields. Go ahead and create a components directory inside the src directory of the application to house all ours in this component. Let's try to break it down a little bit: Input State First, we initialized state for the form field component to keep track of the current valueof the input field, the dirtystatus of the field, and any existing validation errors. A field becomes dirty the moment its value first changes and remains dirty. Handle Input Change Next, we added the hasChanged(e)event handler to update the state valueto the current input value on every change to the input. In the handler, we also resolve the dirtystate of the field. We check if the field is a requiredfield based on props, and add a validation error to the state errorsarray if the value is empty. However, if the field is not a required field or is required but not empty, then we delegate to the validation function passed in the optional validatorprop, calling it with the current input value, and adding the thrown validation error to the state errorsarray (if there is any error). Finally, we update the state and pass a callback function to be called after the update. The callback function simply calls the function passed in the optional onStateChangedprop, passing the updated state as its argument. This will become handy for propagating state changes outside the component. Rendering and Props Here we are simply rendering the input field and its label. We also conditionally render the first error in the state errorsarray (if there are any errors). Notice how we dynamically set the classes for the input field to show validation status using built-in classes from Bootstrap. We also render any children nodes contained in the component. As seen in the component's propTypes, the required props for this component are: type( 'text'or 'password'), label, placeholder. We are using the validate() method of the isemail package for the email validation.; About zxcvbn We finally got to use the zxcvbnJavaScript password strength estimator package in this component. The package exports a zxcvbn()function that takes a (password: string)as its first argument and returns an object with several properties for the password strength estimation. In this tutorial, we would be concerned only with the scoreproperty, which is an integer from 0 - 4(useful for implementing a strength bar). - 0- too guessable - 1- very guessable - 2- somewhat guessable - 3- safely unguessable - 4- very unguessable console.log(zxcvbn('password').score); // 0 See the following video on testing the zxcvbn()method on the browser's console. Here is a breakdown of what is going on in the PasswordField component. Initialization In the constructor(), we created two instance properties: thresholdLangthand minStrengthfrom their corresponding prop passed to the component. The thresholdLengthis the minimum password length before it can be considered considerably long. It defaults to 7and cannot be lower. The minStrengthis the minimum zxcvbnscore before the password is considered to be strong enough. Its value ranges from 0-4. It defaults to 3if not specified appropriately. We also initialized the internal state of the password field to store the current passwordand password strength. Handling Password Changes We defined a password validation function that will be passed to the validatorprop of the underlying FormFieldcomponent. The function ensures that the password length is longer than the thresholdLengthand also has a min zxcvbn()score of the specified minStrength. We also defined a stateChanged()function which will be passed to the onStateChangedprop of the FormFieldcomponent. This function retrieves the updated state of the FormFieldcomponent and uses it to compute and update the new internal state of the PasswordFieldcomponent. A callback function to be called after the internal state update. The callback function simply calls the function passed in the optional onStateChangedprop of the PasswordFieldcomponent, passing the updated FormFieldstate as its argument. Rendering and Props Here we simply rendered the underlying FormFieldcomponent alongside some elements for input hint, password strength meter and password length counter. The password strength meter indicates the strengthof the current passwordbased on the state and is configured to be dynamically invisibleif the password length is 0. The meter will indicate different colors for different strength levels. The password length counter indicates when the password is long enough. It shows the password length if the password is not longer than the thresholdLength, otherwise it shows the thresholdLengthfollowed by a plus(+). The PasswordFieldcomponent accepts two additional optional fields: minStrengthand thresholdLengthname" placeholder="Enter Fullname" initially - that is invalid. We also defined state change watch functions for each field to update the form state accordingly. The watch function checks if there are no errors in a field and updates the form internal state for that field to true - that is valid. These watch functions are then assigned to the onStateChanged prop of each form field component to monitor state changes. Finally, we rendered the form. Notice that we added a validation function to the fullname field to ensure that at least two names separated by a space and containing only alphabet chars are provided. The App Component Up till this point, the browser still renders the boilerplate React application. We will go ahead to modify the App.js file in the src directory to render the JoinForm inside the AppComponent. The App.js file should look like the following snippet: import React from 'react'; import JoinForm from './components/JoinForm'; import './App.css'; const App = () => ( <div className="main-container d-table position-absolute m-auto"> <JoinForm /> </div> ); export default App; Levelling up with SASS We are one step away from the final look and feel of our application. At the moment, everything seems to be a little out of place. We will go ahead to define some style rules in the src/App.scss file to spice up the form. The App.scss file should look like the following snippet: /*); } } } We have succeeded in adding the styles required by our application. Notice the use of generated CSS content in the .strength-meter:before and .strength-meter:after pseudo-elements to add gaps to the password strength meter. We also used the Sass @for directive to dynamically generate fill colors for the strength meter at different password strength levels. The final app screen should look like this: With validation errors, the screen should look like this: And without any errors - that is all fields are valid, the screen should look like this: Conclusion In this tutorial, we have been able to create a password strength meter based on the zxcvbn JavaScript library in.
https://scotch.io/tutorials/password-strength-meter-in-react
CC-MAIN-2018-22
en
refinedweb
In the second part of this series, you saw how to collect the commit information from the git logs and send review requests to random developers selected from the project members list. In this part, you'll see how to save the code review information to follow up each time the code scheduler is run. You'll also see how to read emails to check if the reviewer has responded to the review request. Getting Started Start by cloning the source code from the second part of the tutorial series. git clone CodeReviewer Modify the config.json file to include some relevant email addresses, keeping the royagasthyan@gmail.com email address. It's because the git has commits related to the particular email address which is required for the code to execute as expected. Modify the SMTP credentials in the schedule.py file: FROM_EMAIL = "your_email_address@gmail.com" FROM_PWD = "your_password" Navigate to the project directory CodeReviewer and try to execute the following command in the terminal. python scheduler.py -n 20 -p "project_x" It should send the code review request to random developers for review. Keeping the Review Request Information To follow up on the review request information, you need to keep it somewhere for reference. You can select where you want to keep the code review request information. It can be any database or may be a file. For the sake of this tutorial, we'll keep the review request information inside a reviewer.json file. Each time the scheduler is run, it'll check the info file to follow up on the review requests that haven't been responded to. Create a method called save_review_info which will save the review request information inside a file. Inside the save_review_info method, create an info object with the reviewer, subject, and a unique Id. def save_review_info(reviewer, subject): info = {'reviewer':reviewer,'subject':subject,'id':str(uuid.uuid4()),'sendDate':str(datetime.date.today())} For a unique Id, import the uuid Python module. import uuid You also need the datetime Python module to get the current date. Import the datetime Python module. import datetime You need to initialize the reviewer.json file when the program starts if it doesn't already exist. if not os.path.exists('reviewer.json'): with open('reviewer.json','w+') as outfile: json.dump([],outfile) If the file doesn't exist, you need to create a file called reviewer.json and fill it with an empty JSON array as seen in the above code. This method will be called each time a review request is sent. So, inside the save_review_info method, open the reviewer.json file in read mode and read the contents. Append the new content information into the existing content and write it back to the reviewer.json file. Here is how the code would look: def save_review_info(reviewer, subject): info = {'reviewer':reviewer,'subject':subject,'id':str(uuid.uuid4()),'sendDate':str(datetime.date.today())} with open('reviewer.json','r') as infile: review_data = json.load(infile) review_data.append(info) with open('reviewer.json','w') as outfile: json.dump(review_data,outfile) Inside the schedule_review_request method, before sending the code review request mail, call the save_review_info method to save the review information.) save_review_info(reviewer,subject); send_email(reviewer,subject,body) Save the above changes and execute the scheduler program. Once the scheduler has been run, you should be able to view the reviewer.json file inside the project directory with the code review request information. Here is how it would look: [{ "reviewer": "samson1987@gmail.com", "id": "8ca7da84-9da7-4a17-9843-be293ea8202c", "sendDate": "2017-02-24", "subject": "2017-02-24 Code Review [commit:16393106c944981f57b2b48a9180a33e217faacc]" }, { "reviewer": "roshanjames@gmail.com", "id": "68765291-1891-4b50-886e-e30ab41a8810", "sendDate": "2017-02-24", "subject": "2017-02-24 Code Review [commit:04d11e21fb625215c5e672a93d955f4a176e16e4]" }] Reading the Email Data You have collected all the code review request information and saved it in the reviewer.json file. Now, each time the scheduler is run, you need to check your mail inbox to see if the reviewer has responded to the code review request. So first you need to define a method to read your Gmail inbox. Create a method called read_email which takes the number of days to check the inbox as a parameter. You'll make use of the imaplib Python module to read the email inbox. Import the imaplib Python module: import imaplib To read the email using the imaplib module, you first need to create the server. Log in to the server using the email address and password: Once logged in, select the inbox to read the emails: You'll be reading the emails for the past n number of days since the code review request was sent. Import the timedelta Python module. import timedelta Create the email date as shown: Using the formatted_date, search the email server for emails. typ, data = email_server.search(None, '(SINCE "' + formatted_date + '")') It will return the unique IDs for each email, and using the unique IDs you can get the email details. ids = data[0] id_list = ids.split() first_email_id = int(id_list[0]) last_email_id = int(id_list[-1]) Now you'll make use of the first_email_id and the last_email_id to iterate through the emails and fetch the subject and the "from" address of the emails. for i in range(last_email_id,first_email_id, -1): typ, data = email_server.fetch(i, '(RFC822)' ) data will contain the email content, so iterate the data part and check for a tuple. You'll be making use of the email Python module to extract the details. So import the import email You can extract the email subject and the "from" address as shown: for response_part in data: if isinstance(response_part, tuple): msg = email.message_from_string(response_part[1]) print 'From: ' + msg['from'] print '\n' print 'Subject: ' + msg['subject'] print '\n' print '------------------------------------------------' Here is the complete read_email method: def read_email(num_days): try:]) print 'From: ' + msg['from'] print '\n' print 'Subject: ' + msg['subject'] print '\n' print '------------------------------------------------' except Exception, e: print str(e) Save the above changes and try running the above read_email method: read_email(1) It should print the email subject and "from" address on the terminal. Now let's collect the "from" address and subject into an Instead of printing the subject and the "from" address, append the data to the Here is the modified read_email method: def read_email(num_days): try: email_info = []]) email_info.append({'From':msg['from'],'Subject':msg['subject'].replace("\r\n","")}) except Exception, e: print str(e) return email_info Adding Logging for Error Handling Error handling is an important aspect of software development. It's really useful during the debugging phase to trace bugs. If you have no error handling, then it gets really difficult to track the error. Since you're growing with a couple of new methods, I think it's the right time to add error handling to the scheduler code. To get started with error handling, you'll be needing the logging Python module and the RotatingFileHandler class. Import them as shown: import logging from logging.handlers import RotatingFileHandler Once you have the required imports, initialize the logger as shown: logger = logging.getLogger("Code Review Log") logger.setLevel(logging.INFO) In the above code, you initialized the logger and set the log level to INFO. Create a rotating file log handler which will create a new file each time the log file has reached a maximum size. logHandler = RotatingFileHandler('app.log',maxBytes=3000,backupCount=2) Attach the logHandler to the logger object. logger.addHandler(logHandler) Let's add the error logger to log errors when an exception is caught. In the read_email method's exception part, add the following code: logger.error(str(datetime.datetime.now()) + " - Error while reading mail : " + str(e) + "\n") logger.exception(str(e)) The first line logs the error message with the current date and time to the log file. The second line logs the stack trace to the error. Similarly, you can add the error handling to the main part of the code. Here is how the code with error handling would look: try: commits = process_commits() if len(commits) == 0: print 'No commits found ' else: schedule_review_request(commits) except Exception,e: print 'Error occurred. Check log for details.' logger.error(str(datetime.datetime.now()) + " - Error while reading mail : " + str(e) + "\n") logger.exception(str(e)) Wrapping It Up In this part of the series, you shelved the review request information in the reviewer.json file. You also created a method to read the emails. You'll be using both of these functions to follow up on the code review requests in the final part of this series. Additionally, don’t hesitate to see what we have available for sale and for study in the marketplace, and don't hesitate to ask any questions and provide your valuable feedback using the feed below. Source code from this tutorial is available on GitHub. Do let us know your thoughts and suggestions in the comments below. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/building-a-python-code-review-scheduler-keeping-the-review-info--cms-28316
CC-MAIN-2018-22
en
refinedweb
A Tutorial for Constructing a Plug-in Viewer Jesper Lind and Scott Oveson Microsoft Corporation August 2004 Applies To: Microsoft SQL Server 2005 Analysis Services Summary: This article steps through the process of implementing a plug-in viewer and integrating the viewer into Analysis Services. Provides stub code to enable viewer developers to quickly write a plug-in viewer into Analysis Services. (21 printed pages) Contents Overview Setting Up the Project Implementing IMiningModelViewerControl Assigning a Strong Name to the Assembly Mechanisms to Discover Data Mining Models Update the Plug-in Algorithm to Use the Plug-in Viewer Adding UI Code to the Viewer Overview Microsoft Analysis Services Data Mining Viewer Plug-in Framework defines the necessary interfaces and registry information that third parties can implement, so that their algorithm viewers will be seamlessly integrated in the SQL Server Business Intelligence Development Studio Designer. DM Algorithm Viewer Plug-in Framework is to allow third party algorithm providers to use their own viewers displaying the discovered patterns. Viewers are .NET WinForms UserControls which display patterns of the model. Viewers based on the Plug-in Framework will be integrated seamlessly in the DM Designer in both Project and Immediate modes. Microsoft Analysis Services Data Mining Plug-in Framework supports both Algorithms plug-ins and Viewer plug-ins. A viewer may be used to view more than one type of algorithm. Likewise, several algorithms could use the same viewer. In this tutorial we will describe how to write a viewer, and we will modify the Pairwise_Linerar_Regression model (see algorithm tutorial) to work with the new viewer. Although a viewer could be implemented using a Winforms UserControl and any .NET language, or as an ActiveX control wrapped with a Winforms user control wrapper, the most convenient way to implement a viewer is using Winforms and the C# language. The language is perfectly suited for writing UI components and we will only give the reader of this tutorial a hint of the capabilities. The rest is left to creative UI people and imagination! We render the output of the model as simple text strings in a RichTextBox object, but one can imagine the usage of more graphics. Throughout the tutorial there are snippets of code. This code may be pasted directly into Microsoft Visual Studio to avoid typing (and typos). In some cases you will have to type in a custom string such as a public key token. Setting Up the Project - Create a new C# project. Start by creating a new C# project. Under the folder C# choose Class Library as the project type. In this document we will refer use MyCompanyPluginViewers as the name of the solution. We call the project PairwiseLinearRegressionViewer. - Add a UserControl class. In the solution explorer, right-click on Class1 and select Delete. Then right-click the project and select Add Class. Select User Control as the class type, and give the class a name. We choose to call the class PairwiseLinearRegressionViewer. Your .cs file should now look like this. using System; using System.Collections; using System.ComponentModel; using System.Drawing; using System.Data; using System.Data.OleDb; using System.Windows.Forms; namespace PairwiseLinearRegressionViewer { /// <summary> /// Summary description for PairwiseLinearRegressionViewer. /// </summary> public class PairwiseLinearRegressionViewer : System.Windows.Forms.UserControl { /// <summary> /// Required designer variable. /// </summary> private System.ComponentModel.Container components = null; public PairwiseLinearRegressionViewer() { // This call is required by the Windows.Forms Form // Designer. InitializeComponent(); // TODO: Add any initialization after the // InitializeComponent call } /// <summary> /// Clean up any resources being used. /// </summary>() { components = new System.ComponentModel.Container(); } #endregion } } Implementing IMiningModelViewerControl The next step in the process is to implement IMiningModelViewerControl. This is the interface that implements the viewer. Here is the interface that we need to implement for the object to be a plug-in viewer. namespace Microsoft.DataWarehouse.Interfaces { public interface IMiningModelViewerControl { IServiceProvider ServiceProvider { get; set; } string ConnectionString { get; set; } string MiningModelName { get; set; } /// <summary> /// Initialize viewer with mining model content /// </summary> void LoadViewerData(object context); /// <summary> /// Notify viewer whether it's being activated or deactivated /// </summary> void ViewerActivated(bool isActivated); } } In addition there is a property called DisplayName that, if exposed, will provide the UI with a localizable string with which to populate the Viewer dropdown box. Note that if this string is not provided, the UI will still load, but the string will be populated using a registry string rather than the property of the plug-in viewer. We will describe the registry settings later in this document. - Add a reference to Microsoft.DataWarehouse.Interfaces. You must be running the 2005 release of Microsoft Visual Studio (codenamed "Whidbey."):Right click on References and select Microsoft.Datawarehouse.Interfaces from the .NET tab. From the same .NET tab, also select System.Drawing.dll, System.Windows.Forms.dll, and System.Data. These references will be necessary as we start adding user controls to the project. Add the following using statements. This will avoid unnecessary typing later on. - Implement IminingModelViewerControl. Update the class definition to implement the IMiningModelViewerControl interface. - Implement DisplayName. - Add the following class members. - Implement ServiceProvider. - Implement ConnectionString. The system will call the viewer with the appropriate connection string! This specifies the connection string to the provider that has access to the Data Mining backend. - Implement MiningModelName. Prior to asking the viewer to render itself, it will be called with a mining model name. This enables us to create an OleDbConnection with which we will connect to the mining model. - Implement LoadViewerData. LoadViewerData will be called when the viewer is asked to render the mining model. In this method we will: - Connect to the server. - Load the content. - Render the content. For now we will just implement step a, and we will revisit this step later after we have registered the plug-in. public void LoadViewerData(object context) { // Open a new connection to the server. connection = new OleDbConnection(this.connectionString); connection.Open(); // Check the status of our connection. if (dataReader != null) { if (!dataReader.IsClosed) dataReader.Close(); } // Create the OleDbCommand. string commandText = string.Format("SELECT * FROM [{0}].CONTENT", this.MiningModelName); if (command == null) { command = new OleDbCommand(); } command.Connection = connection; command.CommandText = commandText; // Execute the command dataReader = command.ExecuteReader(); // Add code to extract information from the schema here. // Close the connection connection.Close(); } - Implement ViewerActived. If the viewer has another window open such as a legend, it may want to bring that window to the user's attention when the view is activated (that is, clicked on, or selected using the tabs in the viewer or by selecting it using the dropdown box). We won't use this feature in this plug-in. - You should now be able to compile the project with no errors. Assigning a Strong Name to the Assembly In this section we will update the viewer with a Strong Name. The first step in this process is to create a public, private key pair. Then sign the Assembly by updating the AssemblyInfo.cs file in the project. - Locate gacutil.exe. Locate gacutil.exe and sn.exe and make sure you can execute them. - Create a key pair. Create a key pair by running sn –k sgKey.snk, which creates the file sgKey.snk containing both a private and a public key. Copy sgKey.snk to the project directory if it's not already located there. Here are the comments in the default AssemblyInfo.cs created for your project. // (*)")] - Update the AssemblyInfo.cs file. Update the AssemblyInfo.cs file with the following line. - You should again be able to build the project with no errors. Mechanisms to Discover Data Mining Models At the time the DM Tools first fetches information about which algorithms are supported by querying the Mining_Services rowset (this may happen when the user first runs the DM Wizard or the first time they click on the Viewers view in the DM designer), it'll retrieve the information from the server about the suggested viewers associated with each algorithm and cache this metadata. When the user selects the viewer tab, a one time initialization occurs to walk the registry collecting information for the registered DM viewers on the client by inspecting the keys under the appropriate VS package. Additionally, on a unique-connection string basis, data structures are built up to merge the list of registered client viewer with the meta data from the server to come up with a list that prioritizes the list of viewers for a given algorithm by the order suggested by the server (for the given connection) if those viewers are registered, and next any other valid viewers for each algorithm. If an assembly is not found the entry will not show up in the list of viewers. For each viewer, reflection is used to inspect the class provided to find a public static property called DisplayName, which allows the viewer assembly to provide a localized display name for the UI. Note: at this point no viewer has been instantiated. When there are multiple viewers associated with an algorithm, the default viewer (first in the dropdown) is the first one specified in the Mining_Service schema rowset that is registered on the client. The viewers are not listed alphabetically, instead they are listed in order of the list suggested in the mining services schema rowset first, followed then by those found on the clients that support the specific algorithm (in no specific order), and lastly by viewers supporting all algorithms via the wildcard (*) like the Generic Content Grid Viewer. When we need to load a new viewer for a mining model (users click on the Viewers view in the Data Mining editor for the first time and by default the first model will be viewed, or they select a new mining model in the dropdown at the top of the viewer), the algorithm associated with the model is retrieved. A lookup against the cached algorithm\viewer information that is retrieved above is performed and the first viewer in the list that is registered on the client is invoked by instantiating the class in the assembly name provided using reflection. This class must be a WindowsForm UserControl and implement the IMiningModelViewerControl interface as defined in Microsoft.DataWarehouse.Interfaces. Note: the viewer may be implemented as an ActiveX control as long as it is wrapped with a WindowsForm UserControl. If the class fails to instantiate, an error is shown, and the viewer is removed from the cached list so no future attempt to instantiate this viewer will be made for the remainder of the application session. Note that this is only in the case where the class could not be instantiated, not in cases where some error occurs after instantiating the class (like loading the data into the viewer). If the class failed to instantiate, then an attempt to instantiate the next viewer in the list is made. All algorithms will have one entry in the Viewers dropdown next to the Mining Model combo box in the Viewers view of the DM editor that will let users see the Generic Content Viewer. Once the viewer class has successfully instantiated, methods on the IMiningModelViewerControl interface are used to instruct the viewer to load the data, and provide services that are useful to the viewer as described in the following section. All viewers (including Microsoft Viewers) are responsible for having a setup program or similar piece of code that creates the necessary registry keys and entries for the viewer in the client machine registry. This may be done manually using regedt32, using simple .reg files, or using a more sophisticated setup program. The keys should be created under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\BIShell\Packages\{4a0c6509-bf90-43da-abee-0aba3a8527f1}\Settings\Designers\DataMiningViewers key if you want the viewer registered for the BI Development Studio, and under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\Shell\Packages\{4a0c6509-bf90-43da-abee-0aba3a8527f1}\Settings\Designers\DataMiningViewers key if you want them registered for use in the SQL Management Studio. This is a consistent granularity with how designers are registered (by package for a given AppID). It allows different viewers to be registered, for example, in the SQL Server Management Studio for administrators than in the BI Developement Studio shell. Under the key mentioned above there is a key created for each new viewer, and under the key, several registry entries describe the information necessary to instantiate the viewer class. - Update the Registry. Run regedt32.exe, and navigate to the path described above. Add new string values with the following information: Algorithms = MyCompany_Pairwise_Linear_Regression Assembly = MyCompanyPluginViewers, PublicKeyToken=ca6cde3098d5be29 Class = MyCompanyPluginViewers.PairwiseLinearRegressionViewer DisplayName = MyCompany Pairwise Linear Regression Viewer Fallback Name The regedt32.exe tool should look something like picture below. - Update the PublicKeyToken. After you have updated the AssemblyInfo.cs file and recompiled you may extract the token from the dll by executing Update the Assembly field in the registry with the Public key token corresponding to your Assembly. Figure 1. The regedt32.exe tool - Copy Assembly to the location of the viewer Assemblies and register it. The viewer assemblies are located in the following directory. Copy MyCompanyPluginViewers.dll - Register the Assembly. After copying the Assembly to the directory above, go to the same directory and register the Assembly by executing the command Update the Plug-in Algorithm to Use the Plug-in Viewer Before we can check whether the viewer shows up we must update the plug-in algorithm. Each Mining Algorithm specifies a list of viewers with which it's compatible. - Modify the plug-in algorithm to use the new viewer. Update the Plug-in algorithm to use the new viewer. Several viewers may be supported by listing them all using a semicolon separated list. This is done in ALGORITHM.IDMAlgorithmMetatdata.cpp: ALGORITHM::GetViewerType. - Copy MyCompanyPluginViewers.dll Copy MyCompanyPluginViewers.dll to C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\. Run gacutil /i MyCompanyPluginViewers.dll From now on when you update the plug-in you will need to copy the .dll to the directory above for the Assembly to be picked up by Analysis Services. - Start Business Intelligence Development Studio. - Verify that the viewer is properly registered. Create a project using the Plug-in algorithm, and go to viewers. If all the steps prior to this worked as expected you should see your viewer in the dropdown box. You may select it, but there won't be anything showing as we have not written any UI code yet. That is the purpose of the next and final section. If your viewer does not show up, then go back and double check every step. It will not work unless all the steps are completed properly. Adding UI Code to the Viewer In this last section we will update the viewer to show the regression formulas generated by the server. There are two ways of doing this. The first is to connect directly to the server and issue a SELECT * FROM [MiningModelName].CONTENT statement to the server. (You don't have to use * if you do not wish to receive the entire schema.) The other way (which is more flexible) is to write one or more stored procedures that you will call to retrieve information from the server. These may be designed and implemented to return exactly what you need from the server. Writing a stored procedure will allow you to have more control over the data you return to the client, and in some cases this may provide a speedup. For example, the select statement above will return all rows from the server. You may implement a stored procedure that calculates the information to be displayed in the viewer and returns this as a simple table to the viewer. In most cases that means less processing to be done on the client, and less data to be transferred as well. - Add a RichTextBox. Add a RichRextBox to the client UI. In the solution explorer, right-click on the PairWiseLinReg.cs and select View Designer. Click on the Toolbox to the left to expand the Toolbox menu. Click on the RichTextBox and insert one into the design area. Modify the size so that the text box occupies most of the area. The designer should now look something like this: Figure 2. The Visual Studio designer Right click on PairwiseLinReg.cs again in the designer and select View Code. There should now have been some code generated for you. Rename richTextBox1 to richTextBox. - Initialize the member variables. - Add code to retrieve the information from the content. Until now we have not written any algorithm-specific code. Now we're about to retrieve the mining model content. This step is probably the most crucial for a successful viewer implementation. Note that the code will have to match the schema of the mining model in order to operate correctly. First add the following code to the class member variables. - Implement LoadViewerData. Next, paste the following code into LoadViewerData() after the comment. The first part of the code that enumerated the schema may be omitted, but it's included for debugging purposes and to show how to access the metadata of the rowset. During development it may be useful to look at the actual data in the schema produced by the server. This is made easy by using the built-in GridViewer that by default supports all mining algorithms. this.richTextBox.Text = string.Empty; // Enumerate the schema – commented out – uncomment // to print the scema to the richTextBox. DataTable schema = dataReader.GetSchemaTable(); int columnCount = schema.Columns.Count; int rowCount = schema.Rows.Count; for (int i = 0; i < rowCount; i++) { DataRow row = schema.Rows[i]; for (int j = 0; j < columnCount; j++) { string columnName = row[j].ToString(); // Print the column name // this.richTextBox.Text += columnName; } } this.richTextBox.Text += "\n\n"; this.richTextBox.Text += "Model output\n------------------------------\n"; // Enumerate the data while(dataReader.Read()) { try { columnCount = dataReader.FieldCount; for (int i = 0; i < columnCount; i++) { if (i == NODE_DISTRIBUTION) { string formula = this.ExtractRegressionFormulaFromDist( (OleDbDataReader)dataReader.GetValue(i), false); // The first node generates an empty string. if (formula != string.Empty) { this.richTextBox.Text += formula + "\n"; } } string dataTypeName = dataReader.GetDataTypeName(i); object dataValue = dataReader.GetValue(i); } } catch (Exception e) { Console.WriteLine(e.Message); } } - Implement ExtractRegressionFormulaFromDist. Finally add, to the end of the file, the function ExtractRegressionFormulaFromDist that is used above. The function enumerates the Distribution column and builds a string representing a pair-wise linear regression formula. //------------------------------------------------ // Helper functions //------------------------------------------------ private string ExtractRegressionFormulaFromDist( OleDbDataReader distReader, bool fCorrCoeff) { string targetAttribute = string.Empty; string regressionFormula = string.Empty; double val = 0.0; // Enumerate the data while (distReader.Read()) { try { // Get the value type object dataValue = distReader.GetValue(VALUE_TYPE); if( dataValue.ToString() == TARGET_ATTRIBUTE ) { // Get the target attribute name dataValue = distReader.GetValue(ATTRIBUTE_NAME); targetAttribute = dataValue.ToString(); regressionFormula = targetAttribute + " = " + regressionFormula; } else if (dataValue.ToString() == Y_INTERSECT ) { dataValue = distReader.GetValue(ATTRIBUTE_NAME); string attName = dataValue.ToString(); if( attName == targetAttribute ) { // This is the sample average of the // attribute. dataValue = distReader.GetValue(ATTRIBUTE_VALUE); val = Convert.ToDouble( dataValue ); regressionFormula += val.ToString(".###"); } else if( attName == "" ) { // Do nothing } else { // Do nothing } } else if( dataValue.ToString() == COEFFICIENT ) { dataValue = distReader.GetValue(ATTRIBUTE_VALUE); if( Convert.ToDouble( dataValue ) > 0 ) regressionFormula += " + "; else regressionFormula += " "; val = Convert.ToDouble( dataValue ); regressionFormula += val.ToString(".###"); regressionFormula += " * "; dataValue = distReader.GetValue(ATTRIBUTE_NAME); regressionFormula += dataValue.ToString(); } else { // Do nothing } } catch (Exception e) { regressionFormula = e.Message; } } regressionFormula = regressionFormula.Trim(); return regressionFormula; } - Verify that the viewer works. You should now be able to build the project and use the new viewer. Don't forget that you will have to copy the Assembly to the location where the other shell files reside (typically %Program Files%\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE). The BI Development Studio UI should look similar to the image below. Figure 3. The BI Development Studio Conclusion Hopefully this article has given you a good jumping-off point for implementing a custom algorithm viewer for use in SQL Server Business Intelligence Development Studio Designer. Good luck and enjoy using Microsoft Analysis Services.
https://technet.microsoft.com/en-US/library/ms345129(v=sql.90).aspx
CC-MAIN-2018-22
en
refinedweb
I want to store objects in an array, where objects are weak, and conforms to a protocol. But when I try to loop it, I get a compiler error: public class Weak<T: AnyObject> { public weak var value : T? public init (value: T) { self.value = value } } public protocol ClassWithReloadFRC: class { func reloadFRC() } public var objectWithReloadFRC = [Weak<ClassWithReloadFRC>]() for owrfrc in objectWithReloadFRC { //If I comment this line here, it will able to compile. //if not I get error see below owrfrc.value!.reloadFRC() } Bitcast requires types of same width %.asSubstituted = bitcast i64 %35 to i128, !dbg !5442 LLVM ERROR: Broken function found, compilation aborted! Generics don't do protocol inheritance of their resolving type in the way that you seem to imagine. Your Weak<ClassWithReloadFRC> type is going to be generally useless. For example, you can't make one, let alone load up an array of them. class Thing : ClassWithReloadFRC { func reloadFRC(){} } let weaky = Weak(value:Thing()) // so far so good; it's a Weak<Thing> let weaky2 = weaky as Weak<ClassWithReloadFRC> // compile error I think the thing to ask yourself is what you are really trying to do. For example, if you are after an array of weakly referenced objects, there are built-in Cocoa ways to do that.
https://codedump.io/share/WXXhd2bYltpr/1/iterate-array-of-weak-references-where-objects-conform-to-a-protocol-in-swift
CC-MAIN-2018-22
en
refinedweb
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. I have 2 custom fields with the same values in both. I'd like to compare the values and show in a 3rd (scripted) field what are the values that I have checked in field #1 and don't have checked in field #2. Moreover, if would help to know how can a loop over the multicheckbox field and access each value there. Thanks def customFieldManager = ComponentAccessor.getCustomFieldManager() def field1Vals = issue.getCustomFieldValue(customFieldManager.getCustomFieldObjectByName("First Field")) as List<Option> field1Vals*.value The last line gives you the string values of field 1, eg \["Yes, "No", "Maybe"] Do the same for field two... leftFields.intersect(rightFields) will give you the common ones, so you can just loop through and render the differences nicely. Thank you Jamie. Writing below some more code I used, maybe will help future user as well. {code:} import com.atlassian.jira.component.ComponentAccessor; def customFieldManager = ComponentAccessor.getCustomFieldManager(); List field1 = issue.getCustomFieldValue(customFieldManager.getCustomFieldObjectByName("field1")) as List<String>; List field1 = issue.getCustomFieldValue(customFieldManager.getCustomFieldObjectByName("field1")) as List<String>; field1*.value; field1*.value; // this how I got to know if the checkbox value (label) is indeed checked (looped me lists for this): if (field1*.value.contains("label1") == true) .
https://community.atlassian.com/t5/Jira-questions/Is-there-a-way-to-show-the-difference-between-2-different-multi/qaq-p/160556
CC-MAIN-2018-22
en
refinedweb
Opened 7 years ago Closed 7 years ago #15066 closed (duplicate) GzipMiddleware does not work with generator-created responses Description With a response like def gen(): yield "something" return HttpResponse(gen()) and a HTTP 200 status code, the GzipMiddleware will always return an empty response because it accesses the response.content property twice. That means the _get_content getter applies ''.join twice to the generator, but a generator can only be iterated once. See patch proposal. Attachments (1) Change History (2) Changed 7 years ago by comment:1 Changed 7 years ago by Note: See TracTickets for help on using tickets. Simple patch
https://code.djangoproject.com/ticket/15066
CC-MAIN-2018-22
en
refinedweb
Strange error, possibly related to var class CTest(object): def __init__(self,arg=None): var('x,y') if type(arg) is list: print [x in ZZ for x in arg] elif arg in PolynomialRing(ZZ,2,(x,y)): pass a=CTest() Traceback (most recent call last): ... File "", line 6, in __init__ UnboundLocalError: local variable 'x' referenced before assignment Replacing var('x,y') with x,y=var('x,y') eliminates the error. So does replacing PolynomialRing(ZZ,2,(x,y)) with [1,2,3]. Or changing the x in [x in ZZ for x in arg] to z. I believe this is Python local versus global variable thing. `var('x y')` injects `x` and `y` into the "global namespace", but Python considers all variables talked about inside a function to be "local", even if they have the same names. (This is good or bad, depending on whom you ask.) But I couldn't isolate the exact problem either, frustrating. Maybe this will help anyway?
https://ask.sagemath.org/question/8854/strange-error-possibly-related-to-var/
CC-MAIN-2017-04
en
refinedweb
We look at this example where we construct an instance from a subclass. If you run the code, you will find that the parent class constructor is also called. This parent constructor is called by the compiler. This is because when constructing a subclass, the member fields in the parent constructor would need to be initialized too. package javaapplication34; public class Music { public class Rock { String r = "Oasis"; public Rock() { System.out.println(r); } } public static class Metal extends Rock { String m = "Metallica"; public Metal() { System.out.println(m); } } public static void main(String[] args) { Metal s = new Metal(); } } When the subclass Metal is constructed, the member field r would be initialized allowing it to be printed out.
http://codecrawl.com/2014/12/04/java-constructing-subclass/
CC-MAIN-2017-04
en
refinedweb
Method Overriding is concept of overriding a same method in the child class. This is fall under the dynamic polymorphism or run time polymorphism. A method defined in the parent class is considered as the common behaviour and the child classes are expected to override the common behaviour with own implementation. There are multiple restrictions enforced while overriding a method in the child class. - When you override a method, the method signature must be same as in the parent class. The method name and parameters has to exactly match for both the methods. The order also should be same. - The return type of the method has one exception, if a parent class method returns an object, child class can return same object or its sub class. Otherwise the code will not compile. In the below example, if parent class returns Employee object, child class can return Manager or Admin object. This feature is introduced from Java 5.0 release. - Static methods can not be overridden. Since method overriding is related to the object binding, static methods can not use this concept. Because static method is bound with class whereas instance method is bound with object. With the above explanations, look at the below example for more details on how to implement method overriding. Employee.java package javabeat.net.core; public class Employee { public void print(String name){ System.out.println("Employee Name : " + name); } //Override with different return type public Employee getInstance(){ return new Employee(); } } Manager.java package javabeat.net.core; public class Manager extends Employee{ public void print(String name){ System.out.println("Manager Name : " + name); } public Manager getInstance(){ return new Manager(); } } Admin.java package javabeat.net.core; public class Admin extends Employee{ public void print(String name){ System.out.println("Admin Name : " + name); } public Admin getInstance(){ return new Admin(); } } OverrideDemo.java package javabeat.net.core; public class OverrideDemo { public static void main (String args[]){ Employee employee = new Employee(); Manager manager = new Manager(); Admin admin = new Admin(); //Directly calling methods employee.print("Nakulan"); manager.print("Nakulan"); admin.print("Nakulan"); //Assign child class object to parent class reference employee = manager; employee.print("Nakulan"); employee = admin; employee.print("Nakulan"); } } Output of the above example will be: Employee Name : Nakulan Manager Name : Nakulan Admin Name : Nakulan Manager Name : Nakulan Admin Name : Nakulan
http://javabeat.net/method-overriding-java/
CC-MAIN-2017-04
en
refinedweb
Download presentation Presentation is loading. Please wait. Published byGriselda Wright Modified about 1 year ago 1 Problem Solving 2 Topics Problem Solving Searching Methods Game Playing 3 Introduction Problem solving is mostly based on searching. Every search process can be viewed as a traversal of a directed graph in which each node represents a problem state and each arc represents a relationship between the states represented by the nodes it connects. The search process must find a path through the graph, starting at an initial state and ending in one or more final states. The graph is constructed from the rules that define the allowable moves in the search space. Most search programs represent the graph implicitly in the rules to avoid combinatorial explosion and generate explicitly only those parts that they decide to explore. 4 Introduction Goal : a description of a desired solution (may be a state (8-puzzle) or a path (traveling salesman)). Search space: set of possible steps leading from initial conditions to a goal. State: a snapshot of the problem at one stage of the solution. The idea is to find a sequence of operators that can be applied to a starting state until a goal state is reached. A state space: the directed graph whose nodes are states and whose arcs are the operators that lead from one state to another. Problem solving is carried out by searching through the space of possible solutions for ones that satisfy a goal. 5 Example: Water jug problem Given two jugs one 4 gallons and the other 3 gallons. The goal is to get 2 gallons in 4 gallon jug. Assumptions: You can fill the jug from the pump You can pour water out of the jug onto the ground You can pour water from one jug to another 6 Example: Water jug problem State space representation 7 Search Five important issues that arise in search techniques are: The direction of the search The topology of the search process Representation of the nodes Selecting applicable rules Using heuristic function to guide the search 8 The Direction of the Search Forward : Data directed search. Start search from the initial state. To reason forward, the left sides (the preconditions) are matched against the current state and the right sides (the results) are used to generate new nodes until the goal is reached Backward: Goal directed search. Start search from the goal state. To reason backward the right sides are matched against the current node and the left sides are used to generate new nodes representing new goal states to be achieved. 9 The Direction of the Search Factors influencing the choice between forward vs. backward chaining are: relative number of goal states to start states – move from the smaller set of states to the larger branching factor – move in the direction with the lower branching factor explanation of reasoning – proceed in the direction that corresponds more closely with the way the user thinks. 10 The Direction of the Search Examples Branching factor In theorem proving goal state is the theorem to be proved and the initial state is the set of axioms. From small set of axioms large number of theorems can be proved. This large set of theorems must go back to the small set of axioms. Branching factor is greater going forward from axioms to theorems. Backward reasoning is more appropriate If the branching factor is same in both directions then relative number of start states to goal states determine the direction of search. Bi-directional search, start from both ends and meet somewhere in between. The disadvantage of this technique is search may bypass each other. 11 Explanation of reasoning MYCIN program that diagnoses infectious diseases uses backward reasoning to determine the cause of patient's illness. A doctor may reason as follows: If an organism has a set of properties (lab results) then it is likely that the organism is X. Even though the evidence is most likely documented in the reverse direction (IF (ORGANISM X) (PROPERTIES Y)) CF The rules justify why certain tests should be performed. 12 Trees The Topology of the Search 13 Graphs The Topology of the Search 14 The topology of the search 1.Check if the generated node already exists 2.If not, add the node 3.If exists, then do: a)Set the node that is being expanded to point to the already existing node corresponding to its successor, rather than to the new one. The new one can be thrown away. b)If looking for the best path, check if the new path is better. If worse do nothing. If better record the new path as the correct path to use to get to the node, and propagate the corresponding change in cost down through successor nodes as necessary. Disadvantage of this topology is that cycles may occur and there is no guarantee for termination 15 Representation of the nodes Arrays Ordered pairs Predicates 16 Representation of the nodes State: location of 8 number tiles Operators: blank moves left, right, left or down Goal test: state matches the configuration on the right Path cost: each step cost 1, i.e. path length for search tree depth. 17 Representation of the nodes Possible state representations in LISP (0 is the blank) ( ) ((0 2 3) (1 8 4) (7 6 5)) ((0 1 7) (2 8 6) (3 4 5)) The representation depends on: how easy to compare,operate on, and store (size). 18 Goal Test >(defvar *goal-state* ‘( )) >(equal *goal-state* ‘( )) t 19 Operators Functions from state to subset of states drive to neighboring city place piece on chess board add person to meeting schedule slide a tile in 8-puzzle Matching Conflict resolution: order (priority) recency Indexing 20 Using heuristic function to guide the search It is frequently possible to find rules which will increase the chance of success. Such rules are termed heuristics and a search involving them is termed a heuristic search. A heuristic function is a function that maps from problem state description to measure of desirability Heuristics for the 8-puzzle problem could be: the number of displaced tiles distance of displaced tiles 21 Implementing heuristic evaluation functions Goal more accurate fails to distinguish (1) tiles out of place (2) sum of distance out of place (3) 2*number of direct tile reversals (1)(2)(3) Start example: 8-puzzle 22 Evaluation of Search Strategies Time complexity: how many nodes expanded so far? Space complexity: how many nodes must be stored in node-list at any given time? Completeness: if solution exists, guaranteed to be found? Optimality: guaranteed to find the best solution? 23 Components of Implicit State-Space Graphs There are three basic components to an implicit representation of a state-space graph. 1.A description with which to label the start node. This description is some data structure modeling the initial state of the environment. 2.Functions that transform a state description representing one state of the environment into one that represents the state resulting after an action. These functions are usually called operators. When an operator is applied to a node, it generates one of that node’s successor’s. 3.A goal condition, which can be either a True-False valued function on state descriptions or a list of actual instances of state descriptions that correspond to goal states. 24 Types of Search There are three broad classes of search processes: 1)Uninformed- Blind Search- –There is no specific reason to prefer one part of the search space to any other, in finding a path from initial state to goal state. –systematic, exhaustive search depth-first-search Breadth-first-search 25 Types of Search 2)Informed – Heuristic search - there is specific information to focus the search. –Hill climbing –Branch and bound –Best first –A* 3)Game playing – there are at least two partners opposing to each other. –Minimax ( pruning) –Means ends analysis 26 Search Algorithms Task –find solution path thro’ problem space –keep track of paths from start to goal nodes –define optimal path if > 1 solution (circumstances) –avoid loops (prevent reaching goal) 27 Depth-first search Uses generate and test strategy. Nodes are generated by applying the applicable rules. Then, each generated node is tested if it is the goal. Nodes are generated in a systematic form. It is an exhaustive search of the problem space. front of the queue. 3.If the goal node has been found announce success, otherwise announce failure. 28 Depth-first search - lists: keep track of progress through state space - open states generated but children not examined - closed states already examined left end of open /queue end end; return (failure) /no states left end. 29 Depth-first search Node visit order: Queuing function: enqueue at left 30 Depth-first search Evolution of the closed and open lists 1.[1] – [] 2.[2 3] – [1] 3.[4 5 3] – [1 2 ] 4.[ ] – [1 2 4] ………………. 31 Depth-first Evaluation Branching factor b, depth of solutions d, max depth m: Incomplete: may wonder down the wrong path. Bad for deep and infinite depth state space Time: b m nodes expanded (worst case) Space: bm (just along the current path) Does not guarantee the shortest path. Good when there are many shallow goals. 32 Breadth-first search It will first explore all paths of length one, then two and if a solution exists it will find it at the exploration of the paths of length N. There is a guarantee of finding a solution if one exists. It will find the shortest path from the solution, it may not be the best one. 33 Breadth-first search The algorithm: 1. Form back of the queue. 3. If the goal node has been found announce success, otherwise announce failure. 34 Breadth-first search Breadth-first search procedure right end of open /queue end end; return (failure) /no states left end. 35 Breadth-first search Node visit order (goal test): Queuing function: enqueue at end ( add expanded node at the end of the list) 36 Breadth-first search Evolution of the open and closed lists 1.[1] – [ ] 2.[2 3 ] – [ 1 ] 3.[3 4 5 ] – [1 2 ] 4.[ ] – [1 2 3 ] ………….. 37 Implementing Breadth-First and Depth- First Search The lisp implementation of breadth first search maintains the open list as a first in first out (FIFO) structure. (defun breadth-first () (cond ((null *open*) nil) (t (let ((state (car *open*))) (cond ((equal state *goal*) ‘success) (t (setq *closed* (cons state *closed*)) (setq *open* (append (cdr *open*) (generate- descendants state *moves*))) ;*moves*:list of the funcs that generate the moves. (breadth-first))))))) 38 Implementing Breadth-First and Depth- First Search ( defun run-breadth (start goal) (setq *open* (list start)) (setq *closed* nil) (setq *qoal* goal) (breadth-first) 39 Implementing Breadth-First and Depth- First Search generate-descendants takes a state and returns a list of its children. (defun generate-descendants (state moves) (cond ((null moves) nil) (t (let (child (funcall (car moves) state)) (rest (generate-descendants. 40 Breadth-first Evaluation Branching factor b, depth of solution d: Complete: it will find the solution if it exists Time. 1 + b + b 2 + …+ b d Space: b k where k is the current depth Space is more problem than time in most cases Time is also a major problem nonetheless 41 Heuristic Search Reasons for heuristics - impossible for exact solution, heuristics lead to promising path - no exact solution but an acceptable one - fallible due to limited information Intelligence for a system with limited processing resources consists in making wise choices of what to do next Heuristics = Search Algorithm + Measure 42 Hill climbing Hill climbing is depth first search with a heuristic measurement that orders choices as nodes are expanded.The algorithm is the same only 2b differs slightly., sort the first elements children, if any by estimated remaining distance, and add the first element's children if any to the front of the queue. 3. If the goal node has been found announce success, otherwise announce failure. 43 Hill climbing 44 Problems that may arise: A local maximum, is a state that is better than all its neighbors, but is not better than some other states farther away. At a local maximum, all moves appear to make things worse. A plateau, A whole set of neighboring states have the same value.It is not possible to determine the best direction. A ridge,. Higher than surrounding area but can not be traversed by single move in any one direction. 45 Hill climbing Some ways of dealing with these: Backtrack: local maximum Make a big jump in one direction to try to get to new section of search space (plateau) Apply two or more rules before doing the test. This corresponds to moving in several directions at once (ridges). 46 Best-first Search Best-first search is a combination of depth-first and breadth- first search algorithms. Forward motion is from the best open node (most promising) so far, no matter where it is on the partially developed tree. The second step of the algorithm changes as: queue and sort the entire queue by estimated remaining distance.. 47 Best-First Search procedure best_first_search; begin open := [Start]; closed = []; while open <> [] do remove leftmost state from open, call it X; if X = goal then return path from Start to X else begin generate children of X; for each child of X do case the child is not on open or closed: begin assign child heuristic value; add child to open end; the child is already on open: if the child reached by shorter path then give state on open shorter path the child is already on closed: if child reached shorter path then begin remove state from closed; add child to open end; end case; put X on closed; re-order states on open by heuristic method end; return failure end. 48 Example of best-first search B-4C-4 D-6 E-5F-5G-4 H-3 IJ LM O-2 N P-3 Q R T A-5 K S 1. open = [A5]; closed = [] 2. eval A5; open = [B4, C4, D6]; closed = [A5] 3. eval B4; open = [C4, E5, F5, D6]; closed = [B4, A5] 4. eval C4; open = [H3, G4, E5, F5, D6] closed = [C4, B4, A5] 5. eval H3; open = [O2, P3, G4, E5, F5, D6]; closed = [H3, C4, B4, A5] 6. eval O2; open = [P3, G4, E5, F5, D6] closed = [O2, H3, C4, B4, A5] 7. eval P3; the solution is found! 49 Branch and Bound Search Shortest path is always chosen for expansion. The path first reaching the destination is optimal In order to be certain that supposed solution is not longer than one or more incomplete paths, instead of terminating when a path is found, terminate when the shortest incomplete path is longer than the shortest complete path. 50 Branch and Bound Search To conduct a branch and bound search: 1. Form a queue of partial paths. Let the initial queue consist of the zero length, zero step path from the root node to no where. 2. Until the queue is empty or the goal has been reached, determine if the first path in the queue reaches the goal node. a) If the first path reaches the goal do nothing. b) If the first path does not reach the goal node, i) remove the first path from the queue ii) form new paths from the removed path by extending one step iii) add the new paths to the queue iv) sort the queue by cost accumulated so far, with least cost paths in front 3. If the goal node has been found announce success, otherwise announce failure. 51 Branch and Bound Search 52 53 Adding underestimates improves efficiency c(total length) = d(already traveled) + e(distance remaining) If the guesses are not perfect, and a bad overestimate somewhere along the true optimal path may cause us to wonder off that optimal path permanently. But underestimates cannot cause the right path to be overlooked. An underestimate of the distance remaining yields an underestimate of the total path, u(total path length). u(total path length) = d(already traveled) + u(distance remaining) 54 Branch and Bound Search If a total path is found by extending the path with the smallest underestimate repeatedly, no further work need be done once all incomplete path distance estimates are longer than some complete path distance. This is true, because a real distance along a completed path can not be shorter than an underestimate of the distance. To conduct a branch and bound search with underestimates: 2b4) sort the queue by the sum of cost accumulated so far and a lower bound estimate of the cost remaining, with the least cost paths in front. 55 A* Search Dynamic-programming principal holds that when looking for the best path from S to G, all paths from S to any intermediate node, I, other than the minimum length path from S to I, can be ignored. The A* procedure is branch and bound search in a graph space with an estimate of remaining distance, combined with dynamic programming principle. If one can show that h(n) never overestimates the cost to reach the goal, then it can be shown that the A* algorithm is both complete and optimal. 56 A* Search To do A* search with lower bound estimates: 2b4) sort the queue by the sum of the cost accumulated so far and a lower bound estimate of the cost remaining, with least cost paths in front. 2b5) If two or more paths reach a common node, delete all those paths except for one that reaches the common node with the minimum cost. 57 Recursive Search in Prolog move(1, 6) move(3, 4) move(6, 7) move(1, 8) move(3, 8) move(6, 1) move(2,7) move(4, 3) move(7, 6) move(2, 9) move(4, 9) move(7, 2) move(8,3) move(9,4) move(8,1) move(9,2) 3X3 knight’s tour problem 58 Recursive Search in Prolog predicates path(integer, integer, integer*) clauses path(Z, Z, L). path(X, Y, L):- move(X, Z), not(member(Z, L), path(Z, Y, [Z|L]) /*x is the member of the list if X is the head of the list or x is a member of the tail */ member(X, [X|T]). member(X, [Y|T]):- member(X, T). goal path(1, 3, [1]). 59 Farmer-Wolf-Cabbage Problem A Farmer with his wolf, goat and cabbage come to the edge of a river they wish to cross. There is a boat at the river’s edge, but of course only the farmer can raw it. The boat also can carry only two things. If the wolf is ever left alone with the goat, the wolf will eat the goat. If the goat is ever left alone with the cabbage, the goat will eat the cabbage. Devise a sequence of crossings of the river so that all four characters arrive safely on the other side of the river. The problem implementation.implementation 60 Search Algorithms in LISP Example: Farmer, wolf, goat and cabbage problem. uses depth first search states are represented as list of four elements. eg: (w e w e) represents the farmer and the goat on the west bank, and wolf and the cabbage on the east bank. make-state takes as arguments the locations of the farmer, wolf, goat and cabbage and returns a state and four access functions, farmer-side, wolf-side, goat-side, and cabbage- side, which take a state and return the location of an individual. 61 (defun make-state (f w g c)(list f w g c)) (defun farmer-side (state) (nth 0 state)) (defun wolf-side (state) (nth 1 state)) (defun goat-side (state) (nth 2 state)) (defun cabbage-side (state) (nth 3 state)) 62 (defun farmer-takes-self(state) (make-state(opposite (farmer-side state)) (wolf-side state) (goat-side state) (cabbage-side state))) In the above procedure a new state is returned regardless of its being safe or not. 63 A safe function should be defined so that it returns nil if a state is not safe. >(safe ‘(w w w w));safe state, return unchanged >(safe ‘(e w w e)); wolf eats goat, return nil. (defun safe(sate) (cond((and (equal(goat-side state) (wolf-side state)) (not(equal(farmer-side state) (wolf-side state))) nil); wolf eats goat ((and(equal(goat-side state) (cabbage-side state)) (not(equal(farmer-side state) (goat-side state))) nil); goat eats cabba (t state))) 64 ;return nil for unsafe states ;filter the unsafe states (defun farmer-takes-self(state) (safe (make-state(opposite (farmer-side state)) (wolf-side state) (goat-side state) (cabbage-side state)))) (defun opposite( side) (cond ((equal side ‘e) ‘w) ((equal side ‘w) ‘e))) 65 (defun farmer-takes-wolf(state) (cond((equal (farmer-side state) (wolf-side state)) (safe (make-state (opposite (farmer-side state)) (oppsite (wolf-side state)) (goat-side state) (cabbage-side state)))) (t nil))) 66 (defun farmer-takes-goat(state) (cond((equal (farmer-side state) (goat-side state)); farmer and on the same side (safe (make-state (opposite (farmer-side state)) (wolf-side state) (oppsite (goat-side state)) (cabbage-side state)))) (t nil))) 67 (defun farmer-takes-cabbage(state) (cond((equal (farmer-side state) (cabbage-side state)) (safe (make-state (opposite (farmer-side state)) (wolf-side state) (goat-side state) (oppsite (cabbage-side state))))) (t nil))) 68 (defun path(state goal) (cond((equal state goal) ‘success) (t (or (path (farmer-takes-self state) goal) (path (farmer-takes-wolf state) goal) (path (farmer-takes-goat state) goal) (path (farmer-takes-cabbage state) goal))))) To prevent path from attempting to generate the children of a nil state, it must first check whether the created state is nil. If it is, the path should return nil. In this definition there is the probability of going into a loop, repeating the same states over and over again. Third parameter, been-list, which keeps track of these visited states, is passed to path. member predicate is used to make sure that the current state is not the member of the been-list. 69 (defun path(state goal been-list) (cond ((null state) nil) ((equal state goal)(reverse (cons state been-list))) ((not (member state been-list :test ‘equal)) (or (path (farmer-takes-self state) goal (cons state been-list)) (path (farmer-takes-wolf state) goal (cons state been-list)) (path (farmer-takes-goat state) goal (cons state been-list)) (path (farmer-takes-cabbage state) goal (cons state been- list)))))) 70 *moves* is a list of functions that generate the moves. In the farmer, goat and cabbage problem *moves* would be defined by (setq *moves* ‘(farmer-takes-self farmer-takes-wolf farmer-takes-goat farmer-takes-cabbage)) (defun run-breadth (start goal) (setq *open* (list start)) (setq *closed* nil) (setq *qoal* goal) (bradth-first) 71 generate-descendants takes a state and returns a list of its children. It also disallows duplicates in the list of children and eliminates any children that are already in the open or closed list. (defun generate-descendants (state moves) (cond ((null moves) nil) (t (let (child (funcall (car moves) state)) (rest (generate-descendat. 72 Breadth-First and Depth-First Search The lisp implementation of breadth first search maintains the open list as a first in first out (FIFO) structure. Open, closed and goal are defined as global variables. (defun breadth-first () (cond ((null *open*) nil) (t (let ((state (car *open*))) (cond ((equal state *goal*) ‘success) (t (setq *closed* (cons state *closed*)) (setq *open* (append (cdr *open*) (generate-descendants state *moves*))) (breadth-first))))))) 73 References Nilsson, N.J. Artificial Intelligence: A new Synthesis, Morgan Kaufmann, 1998 Similar presentations © 2017 SlidePlayer.com Inc.
http://slideplayer.com/slide/4318624/
CC-MAIN-2017-04
en
refinedweb
. Forget about virtual servers. Remember that 60 hour job with a 24 hour deadline? Built on a PaaS platform and equipped with a couple hundred dollars, you won’t even be staying late today. Today’s post is going to share highlights from a basic file processing application. Something you would find (hopefully better written) in any random enterprise IT shop or SaaS company. It offers a web page that lets you upload files, a button to process files, a basic and poorly written list of the processed and unprocessed files, and an unattended worker. The trick is that this application was written on top of Windows Azure, so I can play tricks with time just by twisting the dial from one file processor to twenty. The Basic File Processor The file processing in the sample application is intended to be a sample workload. It consists of reading files completely into memory and passing them around, spinning through them one character at a time, replacing each character in the line with it’s upper case variant. Very critical stuff, very performant. In addition to running the process via the website, I also need an unattended application that will can run the same processing function. If I owned the server, this would be a scheduled task or service. As an Azure Worker the code will be remarkably similar. Architecture of the Processor The two front-ends access common logic in the Core library, which is responsible for both the processing logic and interacting with storage resources. This being sample code, it is certified as working on my machine and is definitely not production ready. That being said, I did write this in a few evenings, so writing a production-ready service doesn’t have to take that long in normal workdays. The Web Site The website has a single MVC controller with 3 actions: - ~/Home/Index: Displays the list of processed and unprocessed items and buttons for upload and processing - ~/Home/AddFile: The post address for file uploads - ~/Home/ProcessNextItem: An action to process the next queued file public class HomeController : Controller { IStorageLocator _storageLocator; public HomeController() { _storageLocator = new StorageManager("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"); } public ActionResult Index() { var store = new ItemStore(_storageLocator); var model = new StatusViewModel() { ProcessedItems = store.GetProcessedList(), UnprocessedItems = store.GetUnprocessedList() }; ViewData["file"] = TempData["file"]; return View(model); } [HttpPost] public ActionResult AddFile(HttpPostedFileBase file) { if (file != null && file.ContentLength > 0) { var item = new FullItem() { ResourceId = Guid.NewGuid(), Received = DateTime.Now.ToUniversalTime(), IsProcessed = false, FileName = file.FileName }; item.ReadFileFromStream(file.InputStream); new ItemStore(_storageLocator).AddNewItem(item); TempData["file"] = file.FileName + " uploaded and queued for processing."; } else { TempData["file"] = "Processor ignores empty files, sorry."; } return RedirectToAction("Index"); } [HttpGet] public ActionResult ProcessNextItem() { var store = new ItemStore(_storageLocator); new ItemProcessor().ProcessNextItem(store); return RedirectToAction("Index"); } } The key to all of these methods is the ItemStore and ItemProcessor class, all of the rest of the logic is basic presentation layer logic. The Worker Role The worker role consists of a roughly 6 line while(true) statement that asks the ItemStore to process the next item in the queue, then sleeps for 1 second. public override void Run() { while (true) { var storageLocator = new StorageManager("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"); var store = new ItemStore(storageLocator); new ItemProcessor().ProcessNextItem(store); Thread.Sleep(1000); Trace.WriteLine("Working", "Information"); } } Once again, the magic happens in the ItemStore and ItemProcessor instances. The ItemStore The ItemStore class exposes the basic methods we need execute our process: public void AddNewItem(FullItem item) { /* More Code */ } public FullItem RetrieveForProcessing() { /* More Code */ } public void StoreFinishedItem(FullItem item) { /* More Code */ } And a pair of methods for visibility: public IEnumerable<ItemBase> GetUnprocessedList() { /* More Code */ } public IEnumerable<ItemBase> GetProcessedList() { /* More Code */ } Windows Azure offers a number of storage options, each with their own benefits and constraints. For this process I decided to use table storage to track the summary level information about each file processing job, blob storage to store the actual file, and the queue service for managing task execution. public class ItemStore { public static string RAW_BLOB_NAME = "RawItems"; public static string FINISHED_BLOB_NAME = "FinishedItems"; public static string QUEUE_NAME = "ToBeProcessed"; public static string TABLE_NAME = "Items"; ITableStore _table; IBlobStore _rawBlob, _finishedBlob; IQueueStore _queue; public ItemStore(IStorageLocator storageLocator) { _table = storageLocator.GetTable(TABLE_NAME); _rawBlob = storageLocator.GetBlob(RAW_BLOB_NAME); _finishedBlob = storageLocator.GetBlob(FINISHED_BLOB_NAME); _queue = storageLocator.GetQueue(QUEUE_NAME); } public void AddNewItem(FullItem item) { _rawBlob.Create(item.ResourceId, item.File); _queue.Enqueue(item.AsSummary()); _table.Create(item.AsSummary()); } public IEnumerable<ItemBase> GetUnprocessedList() { return _table.GetUnprocessedItems().ToList(); } public IEnumerable<ItemBase> GetProcessedList() { // ? return _table.GetProcessedItems().ToList(); } public FullItem RetrieveForProcessing() { FullItem rawItem = null; var item = _queue.Dequeue(); if (item != null) { rawItem = new FullItem(item); rawItem.File = _rawBlob.Retrieve(item.ResourceId); } return rawItem; } public void StoreFinishedItem(FullItem item) { _finishedBlob.Create(item.ResourceId, item.File); _rawBlob.Delete(item.ResourceId); _table.Update(item.AsSummary()); } } The ItemStore class is built to interact with interfaces for each of these resources, using a single IStorageLocator interface to get instances of those resource interfaces. The class (and application) was driven by the small set of unit tests that helped me define how i wanted the process to work and interact with the resources above. Configurations With all of the pieces defined, we use a pair of configurations to tell Azure how we want to deploy everything. The first configuration defines the services we intend to package and deploy as well as the instance size and any endpoints: <ServiceDefinition name="CloudFileProcessorService" xmlns=""> <WebRole name="Processor_WebRole" vmsize="Small"> <Sites> <Site name="Web"> <Bindings> <Binding name="Endpoint1" endpointName="Endpoint1" /> </Bindings> </Site> </Sites> <Endpoints> <InputEndpoint name="Endpoint1" protocol="http" port="80" /> </Endpoints> <Imports> <Import moduleName="Diagnostics" /> </Imports> </WebRole> <WorkerRole name="Processor_WorkerRole" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> </Imports> </WorkerRole> </ServiceDefinition> The second configuration is applied when we deploy the instances above and tells Azure that I want to deploy 1 Processor_WebRole instance and 2 Processor_WorkerRole instances: <ServiceConfiguration serviceName="CloudFileProcessorService" xmlns="" osFamily="1" osVersion="*"> <Role name="Processor_WebRole"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="Processor_WorkerRole"> <Instances count="2" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Note that I’m telling it to use the local development storage, which is supported by a local storage emulator. In a production configuration I would enter the service location and a generated token. So Where’s the Magic? So where’s the magic that makes this a distribute application instead of 3 days of overtime? It’s sprinkled throughout the system. The architecture of this system would work just as well outside of Azure, provided I offered it stand-ins for the 3 storage resources and deployed the instances and any necessary settings accordingly. Instead of worrying about how to manage deployments and what to use for centralized queueing and storage, I can focus on building an application that simply assumes those resources are available. Is there headroom for performance improvements? Sure, but I can also choose to throw another $15/month server at it, push data to CDNs and blob storage, add caching, or even a SQL Azure instance. This application may be fairly basic, but nothing stops us from following this same pattern for much larger applications. PaaS has removed some of the constraints we take for granted. Even applications that have to run in-house in order to standardize against a database can now consider uploading a subset of that lookup data to a table store, performing most of the heavy lifting in the cloud, then produce a few files to import back into the on-premise system. The total execution time would be longer, but being able to scale part of the job across numerous parallel instances means the actual elapsed time can actually be much shorter. And it means when we have a 60 hour task that’s due in 24, it’s no longer an impossible situation. The source code is available on github along with requirements and links for setting up the emulators locally.
http://blogs.lessthandot.com/index.php/DesktopDev/MSTech/the-cloud-as-a-make/
CC-MAIN-2017-04
en
refinedweb
Hello, can you post a .wxg file that shows the problem? I'm not able to reproduce it... If you add extracode in a widget but there is no extracode for top-level class, the extracode will be ignored. (re py_codegen.py line 658...) Workaround check the extracode checkbox in the top-level container, add an empty line. Hello, can you post a .wxg file that shows the problem? I'm not able to reproduce it... Here it is Config : xp sp3 Python 2.5 wxPython 2.8 wxGlade version from depot as wxglade-d1260418a762.zip To reproduce: Create a new wxGlade application Add a custom widget name: wx.lib.filebrowsebutton.DirBrowseButton Add extra code for the widget: << import wx.lib.filebrowsebutton Thanks for the file test. The current release added proper extra code: import wx.lib.filebrowsebutton Feel free to reopen the bug if it still occurs using a current wxGlade release. Log in to post a comment.
https://sourceforge.net/p/wxglade/bugs/143/
CC-MAIN-2017-04
en
refinedweb
Not sure if this is your problem, but since methods don't have namespaces, I'm not sure why your code could not be reduced to: <dtml-if <dtml-var index.html> <dtml-else> <dtml-var chunk_editFrameset> </dtml-if> The namespace of index_html right now is 'edit' (or another folder that acquires it) ...and... if the namespace of index_html is 'edit' than the namespace of index.html will also be, and ergo, the namespace of chunk_dspPrimaryCol will also be edit (index.html doesn't have a method unto itself), changing your code to: <dtml-if <dtml-var chunk_dspPrimaryColPublic> <dtml-else> <dtml-var chunk_dspPrimaryColEdit> </dtml-if> Just keep in mind, that any object with a namespace (folders, documents, certain class instances) that is in the acquisition path can have that namespace explicitly invokes with <dtml-with name_of_object>. This only works for object with a namespace; methods are not acquirable objects, and though you might call them objects (in a loose sense) they are not objects in a Zopish sense. I don't know if my sugestion will work for you, but hopefully it will help. Sean ========================= Sean Upton Senior Programmer/Analyst SignOnSanDiego.com The San Diego Union-Tribune 619.718.5241 [EMAIL PROTECTED] ========================= -----Original Message----- From: Geoffrey L. Wright [mailto:[EMAIL PROTECTED]] Sent: Wednesday, January 03, 2001 3:16 PM To: [EMAIL PROTECTED] Subject: [Zope] Poor Procedural Programmer Needs OOPish Enlightenment or... "A Cry for Namespace Help" I seem to have run into one of those Zope namespace issued that's starting to make me dizzy. I have a index_html method that displays content conditionally depending on where it's called. It looks like this: <dtml-if <dtml-var index.html> <dtml-else> <dtml-var chunk_editFrameset> </dtml-if> The chunk_editFrameset method also displays the same index.html file in one of two frames. This works like a champ. If I'm in a directory called edit when this is called, it displays the frameset. Otherwise, it displays the index.html directly. The index.html method is contains a bunch 'o static html plus another method called chunk_dspPrimaryCol that also conditionally displays information based on where it's called. chunk_dspPrimaryCol looks like this: <dtml-if <dtml-var chunk_dspPrimaryColPublic> <dtml-else> <dtml-var chunk_dspPrimaryColEdit> </dtml-if> This doesn't work like I'd hoped, since I have to move back up the namespace stack to find index.html, and by the time I do I'm no longer in the edit folder. So I _always_ end up displaying the chunk_dspPrimaryColPublic method, even if I call the index_html method from within the edit folder. What I need (I think) is a way to keep all of this activity in the edit Folder. Or I need some other way of solving this problem. Any thoughts? I hope my description was clear enough... -- Geoffrey L. Wright Developer / Systems Administrator (907) 563-2721 ex. 4900 _______________________________________________ Zope maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists - ) _______________________________________________ Zope maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists - ) RE: [Zope] Poor Procedural Programmer Needs OOPish Enlightenment sean . upton Wed, 03 Jan 2001 18:50:57 -0800 - [Zope] Poor Procedural Programmer Needs OOPish Enlighte... Geoffrey L. Wright - Re: [Zope] Poor Procedural Programmer Needs OOPish... sean . upton - Re: [Zope] Poor Procedural Programmer Needs OO... Geoffrey L. Wright
https://www.mail-archive.com/zope@zope.org/msg13201.html
CC-MAIN-2017-04
en
refinedweb
Xamarin System Requirements This tutorial has been tested with the following: - Microsoft Visual Studio 2015 - Xamarin for Visual Studio 4.2 This tutorial explains how to integrate Auth0 with a Xamarin application. The Xamarin.Auth0Client helps you authenticate users with any Auth0 supported identity provider via the OpenId Connect protocol built on top of OAuth2. The library is cross-platform, so this information can be applied to either iOS or Android. NOTE: An Objective-C Binding Library for Lock iOS implementations is available at: Lock.Xamarin. Install Xamarin.Auth0Client Component. For more information, see: How to include a Component in a Xamarin Project. Set up the Auth0 Callback URL Go to the Application Settings section in the Auth0 dashboard and make sure that Allowed Callback URLs contains the following value: Integration There are three options for implementing the integration: - Use the Auth0 Lock inside a Web View - the simplest option with only a few lines of code required. - Create your own UI - more work, but higher control over the UI. - Use a specific username and password. Option 1: Auth0 Lock Lock is the recommended option. Here is a snippet of code to paste into your project: using Auth0.SDK; var auth0 = new Auth0Client( "YOUR_AUTH0_DOMAIN", "YOUR_CLIENT_ID"); // 'this' could be a Context object (Android) or UIViewController, UIView, UIBarButtonItem (iOS) var user = await auth0.LoginAsync(this); /* - get user email => user.Profile["email"].ToString() - get Windows Azure AD groups => user.Profile["groups"] - etc. */ Component info Xamarin.Auth0Client is built on top of the WebRedirectAuthenticator in the Xamarin.Auth component. All rules for standard authenticators apply regarding how the UI will be displayed. Option 2: Custom User Interface If you know which identity provider you want to use, you can add the connection parameter and the user will be directed to the specified connection: var user = await auth0.LoginAsync(this, "google-oauth2"); // connection name here NOTE: Connection names can be found on Auth0 dashboard (e.g. saml-protocol-connection). Option 3: Specific Username and Password var user = await auth0.LoginAsync( "sql-azure-database", // connection name here "jdoe@foobar.com", // user name "1234"); // password Access User Information The Auth0User has the following properties: Profile: returns a Newtonsoft.Json.Linq.JObjectobject from Json.NET component containing all available user attributes (e.g.: user.Profile["email"].ToString()). IdToken: a JSON Web Token (JWT) containing all of the user attributes and signed with your client secret. Auth0AccessToken: the access_tokenthat can be used to call the Auth0 APIs. For example, you could use this token to Link Accounts. Download samples Android and iOS samples are available on GitHub at: Xamarin.Auth0Client.
https://auth0.com/docs/quickstart/native/xamarin
CC-MAIN-2017-04
en
refinedweb
This patch fixes a regression introduced by:commit bb29ab26863c022743143f27956cc0ca362f258cAuthor: Ingo Molnar <mingo@elte.hu>Date: Mon Jul 9 18:51:59 2007 +0200This caused the jiffies counter to leap back and forth on cpufreq changeson my x86 box. I'd say that we can't always assume that TSC does "smallerrors" only, when marked unstable. On cpufreq changes these errors can behuge.The original bug report can be found here:: Stefano Brivio <stefano.brivio@polimi.it>---diff --git a/arch/x86/kernel/tsc_32.c b/arch/x86/kernel/tsc_32.cindex 9ebc0da..d29cd9c 100644--- a/arch/x86/kernel/tsc_32.c+++ b/arch/x86/kernel/tsc_32.c@@ -98,13 +98,8 @@ unsigned long long native_sched_clock(void) /* * Fall back to jiffies if there's no TSC available:- * ( But note that we still use it if the TSC is marked- * unstable. We do this because unlike Time Of Day,- * the scheduler clock tolerates small errors and it's- * very important for it to be as fast as the platform- * can achive it. ) */- if (unlikely(!tsc_enabled && !tsc_unstable))+ if (unlikely(!tsc_enabled)) /* No locking but a rare wrong value is not a big deal: */ return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ); -- CiaoStefano
http://lkml.org/lkml/2007/12/6/377
CC-MAIN-2017-04
en
refinedweb
Language - Flash AS3 Flash ActionScript specifically. Flash ActionScript is capable of using Phidgets only over the Phidget WebService, and it is unlike the majority of the other programming languages we support where the device can be used without the Phidget WebService. The complete Phidget API, including events are supported. We also provide example code in Flash ActionScript for all Phidget devices. Flash ActionScript can be developed with Windows and OS X. Only ActionScript 3 is supported. Interaction with Phidgets is made possible as the library uses web sockets to communicate with Phidgets over the PhidgetWebService. You can compare Flash ActionScript with our other supported languages. Quick Downloads Just need the Flash ActionScript documentation, drivers, libraries, and examples? Here they are: Documentation Example Code Libraries and Drivers - ActionScript Libraries (same file as Examples above) - 32-bit Windows Drivers Installer - 64-bit Windows Drivers Installer - Windows Driver and Library Files (Zipped) - OS X Drivers Installer Getting started with Flash ActionS Instructions are divided up by operating system. Choose: Windows (XP/Vista/7/8) Description of Library Files Flash ActionScript on Windows depend on the following files and folders. The installers in the Quick Downloads section put only the phidget21.dll and PhidgetWebservice21.exe into your system. You will need to manually put the com folder into your system. phidget21.dllcontains the actual Phidget library, which is used at run-time. This needs to be installed on the computer that the Phidget is connected. By default, it is placed in C:\Windows\System32. PhidgetWebservice21.exeallows for controlling Phidgets remotely across the network. This needs to be installed on the computer that the Phidget is connected. comfolder is the Phidget ActionScript library. The computer that is used for Flash development will need this folder. It is to be manually placed in the same directory as your project root. Unlike the majority of the programming languages we support (where applications can directly connect to the Phidgets), Flash can only connect to the Phidgets over the PhidgetWebService. There are potentially three roles that a computer can act as: host, developer, and an end user. It is possible for a single computer to act as more than one of these roles at the same time: - Host: The computer that the Phidget is attached to, and can broadcast device information to any computer over the network. The phidget21.dlland PhidgetWebservice21.exemust be installed on the host. The host must also have the PhidgetWebService started in order for it and other computers to connect to the Phidgets attached to the host. - Developer: The computer that is used to develop Flash applications. This computer needs the comfolder in the root directory of your project. The phidget21.dlland PhidgetWebservice21.exeare only needed if the Phidget is directly attached to the computer. - End user: The computer that is used to run the compiled flash application (i.e., .swf).The phidget21.dlland PhidgetWebservice21.exeare only needed if the Phidget is directly connected to the computer. If the computer is used for developing Flash applications, then it will need the comfolder in the root directory of your project. Here is a table summarizing what files/folders are needed for each computer role: Please see the Phidget WebService page for a high-level introduction to our WebService. If you do not want to use our installer on Windows, you can download the phidget21.dll and manually install them where you want; refer to our Manual Installation Instructions. Flash Professional Adobe Flash professional allows you to develop in ActionScript and control Phidgets over the WebService. We support ActionScript 3.0. Use Our Examples This section will assume that the device is plugged into the host computer, and that the development computer has Flash Professional installed. As the Flash ActionScript library only supports communication with Phidgets through the PhidgetWebService, begin by starting the WebService on the host computer with the default port (5001). To run the examples on a development computer, download the Flash examples and unpack them into a folder. Here, you will find a HelloWorld example which is very basic but which will run with any Phidget. You will also find more in-depth example programs for all devices. The source file will be named the same as the software object for your device. If you are not sure what the software object for your device is, find your Phidget on our webpage, and then check the API documentation for it. When you have found your example, open that .fla file in the Adobe Professional Flash environment. The only thing left to do is to run the examples! Click on Control → Test Movie. Once you have the Flash ActionScript examples running, we have a teaching section below to help you follow them. You may also run the examples by navigating to Control → Test Scene. If you are running the examples with Debug → Debug Movie, you will have to change the Flash Global Security Settings in order for the example to run. More information will be provided about the Flash Global Security Settings in the Running Compiled Code section. Write Your Own Code When you are building a project from scratch, or adding Phidget function calls to an existing project, you'll need to configure your development environment to properly link the Phidget ActionScript library. To begin: 1. Place a copy of the com folder in the root directory of your Flash project. 2. Generate a new ActionScript 3 Flash file. 3. Then, in your code, you will need to include the Phidget ActionScript library. Navigate to Window → Actions to bring up the Actions window and enter in the following: import com.phidgets.*; import com.phidgets.events.*; The project now has access to the Phidget function calls and you are ready to begin coding. The same teaching section which describes the examples also has further resources for programming your Phidget. Running Compiled Code Running a compiled .swf application on an end user computer will prompt the Flash player to display a dialog box mentioning that the application will block all communications with the Internet. 1. Click on the Settings button to bring up the Flash Global Security Settings Manager in your default web browser. Alternatively, you can access the manager with the following URL:. 2. In the Global Security Settings tab, navigate to Edit locations ... → Add locations. 3. Then, browse and add the application or the folder containing the application. This will allow the Flash Player to allow the application to accept any communication with the Internet. OS X Flash ActionScript has excellent support on OS X over the PhidgetWebService. The first step in using Flash ActionScript on Mac is to install Adobe Flash Professional. Once you have the Flash environment installed, setting up a project is exactly the same as on Widows. Please refer to the Windows section for more information on this subject. ActionScript code will be our ActionScript API information, with syntax for all of our functions: - ActionScript C/C++. Code Snippets Specific calls in ActionS ActionScript syntax. However, many additional concepts are covered on the General Phidget Programming page on a high level, such as using multiple Phidgets, handling errors, and different styles of programming. Remember that Actionscript cannot open Phidgets directly - rather, it must use a form of remote open to use the WebService. Step One: Initialize and Open Before you can use the Phidget, you must include a reference to the library in the action frame (Window | Actions). In Actionscript 3.0, the inclusion code would look like this: import com.phidgets.PhidgetInterfaceKit; import com.phidgets.events.*; Now you are ready to declare, initialize, and open your Phidget. in the API manual. Every type of Phidget (Interface Kit, Temperature Sensor, Spatial, etc.) also inherits functionality from the Phidget base class. The Open function will continuously try to connect to a Phidget, based on the parameters given, even trying to reconnect if it gets disconnected. The Phidget WebService as used by ActionScript allows a single Phidget to be opened by multiple applications - this is something that cannot be done with the regular, direct interface. The API manual lists all of the available modes that open provides. In Flash, the parameters can be used to open the first Phidget of a type it can find or based on its serial number. Step Two: Wait for Attachment (plugging in) of the Phidget Simply calling open does not guarantee you can use the Phidget immediately. It needs to be plugged in (attached). If it becomes unplugged, it will be 'detached'. We can handle this by using event driven programming and tracking the AttachEvents and DetachEvents, or checking the isAttached property and waiting until it is true. Our examples provide code snippets for attach event functions and how to hook them in to the Phidget for use. Step Three: Do Things with the Phidget We recommend the use of event driven programming when working with Phidgets. In Actionscript 3.0, we hook an event handler with the following code: phid.addEventListener(PhidgetDataEvent.SENSOR_CHANGE, onSensorChange); function onSensorChange(evt:PhidgetDataEvent):void{ trace (evt.Data); //Echo } With this method, the code inside the onSensorChange function (which you also need to define - check out our examples for ways to do this) will get executed every time the PhidgetInterfaceKit reports a change on one of its analog inputs. The values from the report can be accessed from the PhidgetDataEvent object properties. Some events such as Attach and Detach as discussed above belong to the base Phidget object and thus are common to all types of Phidgets. Others, like this one for the analog sensor change on the Interface Kit, are specific to the Phidget board. Please refer to the API manual for a full list of events and their usage. Some values can be directly read and set on the Phidget and used as an alternative to event driven programming. Simply use the instance properties or call member functions such as getSensorValue(index: int) or setOutputState(index: int, val: Boolean) for Phidget Interface Kits, for example. Step Four: Close and Just like the open call from Step One, you can close the Phidget when you are finished with it in your code. Flash Security Settings During debugging or after publishing the project, you may encounter some difficulties with Flash network security settings either inside or outside of the development environment with Phidgets. Permissions for your project folder can be added through the settings manager at, under “Always trust files in these locations” → “Edit locations...” → “Add location...”. More How-To's The General Phidget Programming page gives more information about: - Using Multiple Phidgets (or a Phidget other than the Interface Kit) - Catching exceptions and errors and using logging - Event catching versus direct polling - And more.... Common Problems and Solutions/Workarounds Problem: My compiled application is experiencing the following security error upon launching: "SecurityError: Error #2010: Local-with-filesystem SWF files are not permitted to use sockets". Solution: The symptom of this problem is similar to the one that is discussed in steps 1 - 3 of the Running Compiled Code section. Please see that section for a remedy. To access the Flash Global Security Settings Manager, go to.
http://www.phidgets.com/docs/Language_-_Flash_AS3
CC-MAIN-2017-04
en
refinedweb
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project. Re: [PATCH]Â Fix =thread-created not showing up when detaching Re: [1/3] [PATCH] value_optimized_out and value_fetch_lazy Re: [2/3] [PATCH] value_optimized_out and value_fetch_lazy Re: [3/3] [PATCH] value_optimized_out and value_fetch_lazy Re: [COMMIT PATCH] Move pagination_enabled declaration to a proper place. [COMMIT PATCH] Use allocate_optimized_out_value instead of set_value_optimized_out. [COMMIT PATCH] value_bits_valid: Fix latent bug. [commit/obv] Remove trailing space in gdb.ada/small_reg_param.exp [commit/obvious] Fix call to get_raw_print_options on mt-tdep.c (was: Re: [patch] Rename "prettyprint" to "prettyformat") [commit] [patch] testsuite build regression on CentOS-5 [Re: [PATCH 1/7] test suite update - gdb.base/[ab]] [commit] change missing DWO complaint to a warning [commit] delete togglelist [commit] delete unnecessary init of *list variables [commit] dwarf2read.c (process_queue): Add type signature to debug output. [commit] Fix FAIL: gdb.ada/small_reg_param.exp: continue to call_me Re: [commit] Improved linker-debugger interface [commit] linux-fork.h: delete detach_fork [commit] main.c (captured_command_loop): Tweak comment. [commit] native-stdio-gdbserver.exp: pass "--" to switch [commit] Re: Runtime regression for gdb.base/reread.exp & co. [Re: [RFA] Remove target_section.bfd] [commit] symtab.c (iterate_over_some_symtabs): Add comment. [commit] symtab.c (iterate_over_some_symtabs): Fix indentation. [commit] target.c (target_async_permitted_1): Fix comment. [committed] micromips_deal_with_atomic_sequence formatting fix [PATCH 0/2] Minor dwarf2read cleanups [PATCH 0/2] New option --skip-unavailable to -stack-list-XXX commands [PATCH 0/3 V3] Test mingw32 GDB in cygwin Re: [PATCH 0/3] a few more fixes from the cleanup checker Re: [PATCH 0/3] add a few new warning options [PATCH 0/3] Fixes for mingw testing in remote host Re: [PATCH 0/3] more simple test suite fixes Re: [PATCH 0/3] Select the current frame in command tdump. [PATCH 0/3] Test mingw32 GDB in cygwin [PATCH 0/4] introduce test suite "parallel" mode [PATCH 0/4] Remove deprecated_throw_reason [PATCH 0/5] AndesTech nds32 port [PATCH 0/7] Implement gdbarch_gdb_signal_{to,from}_target [PATCH 0/8] enable target-async by default Re: [PATCH 00/09] Import unistd and pathmax gnulib modules. Re: [PATCH 00/16] clean up remote.c state [PATCH 00/17] Implement gdbarch_gdb_signal_to_target [PATCH 01/17] Implement the gdbarch.{sh,c,h} bits. [PATCH 02/17] Linux kernel generic support Re: [PATCH 03/16] Add new_remote_state [PATCH 03/17] Alpha support [PATCH 04/17] x86_64 support [PATCH 05/17] ARM support [PATCH 06/17] AVR support [PATCH 07/17] Cris support [PATCH 08/17] h8300 support [PATCH 09/17] i386 support [PATCH 1/2] cleanup: rename is_ref_attr to attr_form_is_ref Re: [patch 1/2] Code cleanup: remote.c *->{async,sync}* [PATCH 1/2] Use mi_getopt in mi_cmd_stack_list_locals and mi_cmd_stack_list_variables [PATCH 1/3] Copy set_unbuffered_mode_saved.o on remote host [PATCH 1/3] Detect GDB is in cygwin [PATCH 1/3] Fix ppc64 single step over atomic sequence testcase [PATCH 1/3] New option --cygwin-tty. Re: [PATCH 1/3] New test case for PR12929. Re: [PATCH 1/3] Tweak gdb.trace/backtrace.exp [PATCH 1/4] more uses of standard_output_file [PATCH 1/4] Remove deprecated_throw_reason from internal_verror. [PATCH 1/5] Code for nds32 target [PATCH 1/5] config support for powerpc64-aix Re: [PATCH 1/5] powerpc64-aix- Processing XLC generated line tables Re: [PATCH 1/5] powerpc64-aix- Processing XLC generated line tables + CHANGELOG Re: [PATCH 1/5] Share 'enum target_hw_bp_type' in GDB and GDBserver. Re: [PATCH 1/7] gdbserver, common: conditionally include the unistd.h [PATCH 1/7] Implement the gdbarch.{sh,c,h} bits. [PATCH 1/8] fix latent bugs in ui-out.c [PATCH 10/17] IA-64 support Re: [PATCH 11/16] move some statics from remote_read_qxfer into struct remote_state [PATCH 11/17] m32r support [PATCH 12/17] m68klinux support [PATCH 13/17] mn10300 support Re: [PATCH 14/16] move async_client_callback and async_client_context into remote_state [PATCH 14/17] s390 support [PATCH 15/17] SPARC support [PATCH 16/17] Xtensa support [PATCH 17/17] MIPS support [PATCH 18/18] AArch64 support [PATCH 2/2] Add options to skip unavailable locals [PATCH 2/2] cleanup: constify argument passed to dwarf form predicates Re: [PATCH 2/3] add -Wold-style-declaration Re: [PATCH 2/3] Don't force interpreter sync mode in execute_gdb_command. Re: [PATCH 2/3] Make test result of gdb.trace/backtrace.exp unique [PATCH 2/3] Remove the directory of DEST in proc gdb_compile_shlib [PATCH 2/3] Support up to 3 conditional branches in an atomic sequence [PATCH 2/3] Unbuffer stdout and stderr in cygwin [PATCH 2/3] Unbuffer stdout and stderr on windows [PATCH 2/4] introduce parallel mode [PATCH 2/4] Remove deprecated_throw_reason from mips_error. [PATCH 2/5] gdbserver for nds32 Re: [PATCH 2/5] Include asm/ptrace.h in mips-linux-nat.c [PATCH 2/5] powerpc64-aix processing xlC generated line table [PATCH 2/5] powerpc64-aix- xcoffread patch [PATCH 2/7] Linux kernel generic support [PATCH 2/8] add target method delegation [PATCH 3/3] Add multiple branches to single step through atomic sequence testcase Re: [PATCH 3/3] Match output in async mode. [PATCH 3/3] native mingw32 gdb, eol format Re: [PATCH 3/3] Select the current frame in command tdump. [PATCH 3/3] Set stdin/stdout/stderr to binary mode in cygwin. [PATCH 3/3] Use the tail name as the output name of compile. [PATCH 3/4] add standard_temp_file [PATCH 3/4] Remove remaining uses of deprecated_throw_reason. [PATCH 3/5] powerpc64-aix config patch [PATCH 3/5] powerpc64-aix inf-ptrace patch Re: [PATCH 3/5] Refactor in mips-linux-nat.c [PATCH 3/5] testsuite for nds32 [PATCH 3/7] Alpha support [PATCH 3/8] PR gdb/13860: make -interpreter-exec console "list" behave more like "list". [PATCH 4/4] add caching procs to test suite [PATCH 4/4] Remove deprecated_throw_reason. Re: [PATCH 4/5] CHANGELOG Re: [PATCH 4/5] Move mips hardware watchpoint stuff to common/ [PATCH 4/5] powerpc64-aix inf-ptrace patch [wrongly sent as PATCH 3/5 earlier] [PATCH 4/5] powerpc64-aix ptrace64 when defined. [PATCH 4/5] Simulator for nds32 [PATCH 4/7] AVR support Re: [PATCH 4/7] gdbserver: conditionally include sys/param.h and sys/time.h [PATCH 4/8] PR gdb/13860: make "-exec-foo"'s MI output equal to "foo"'s MI output. [PATCH 5/5] make calls to ptrace64 in aix-thread.c when defined Re: [PATCH 5/5] MIPS GDBserver watchpoint [PATCH 5/5] powerpc64-aix aix-thread patch [PATCH 5/5] testsuite for nds32 simulator [PATCH 5/7] SPARC support [PATCH 5/8] PR gdb/13860: don't lose '-interpreter-exec console EXECUTION_COMMAND''s output in async mode. Re: [PATCH 6/7] common: add an alternative implementation for xstrvprintf [PATCH 6/7] Xtensa support [PATCH 6/8] make dprintf.exp pass in always-async mode [PATCH 7/7] MIPS support [PATCH 7/8] fix py-finish-breakpoint.exp with always-async [PATCH 8/8] enable target-async [PATCH OB] Remove obsolete comments in board files. [PATCH OB] Remove unused parameter in i386_linux_core_read_xcr0 and i386_in_stack_tramp_p [PATCH PR gdb/15715] 'set history filename' to by immediately converted to absolute path. FW: [PATCH v11 0/5] remove-symbol-file & add-symbol-file Re: [PATCH v11 2/5] Test adding and removing a symbol file at runtime. [PATCH v12 0/5] remove-symbol-file & add-symbol-file [PATCH v12 1/5] New remove-symbol-file command. [PATCH v12 2/5] Documentation for the remove-symbol-file command. [PATCH v12 3/5] 'add-symbol-file' should update the current target sections. [PATCH v12 4/5] Function is_elf_target. [PATCH v12 5/5] Test adding and removing a symbol file at runtime. [PATCH v14 0/5] remove-symbol-file & add-symbol-file [PATCH v14 1/5] New remove-symbol-file command. [PATCH v14 2/5] Documentation for the remove-symbol-file command. [PATCH v14 3/5] 'add-symbol-file' should update the current target sections. [PATCH v14 4/5] Function is_known_elf_target. [PATCH v14 5/5] Test adding and removing a symbol file at runtime. [PATCH v2 0/4] increase the portability of the gdbserver code Re: [PATCH v2 0/5] mips hardware watchpoint support in gdbserver [PATCH v2 0/9] enable target-async by default Re: [PATCH v2 00/16] [PATCH v2 1/4] gdbserver, common: convert some variadic macros to C99 [PATCH v2 1/9] fix latent bugs in ui-out.c [PATCH v2 2/4] gdbserver: avoid empty structs [PATCH v2 2/9] add "this" pointers to more target APIs [PATCH v2 3/4] gdbserver, win32: fix some function typedefs [PATCH v2 3/9] add target method delegation [PATCH v2 4/9] PR gdb/13860: make -interpreter-exec console "list" behave more like "list". [PATCH v2 5/9] PR gdb/13860: make "-exec-foo"'s MI output equal to "foo"'s MI output. [PATCH v2 6/9] PR gdb/13860: don't lose '-interpreter-exec console EXECUTION_COMMAND''s output in async mode. [PATCH v2 7/9] make dprintf.exp pass in always-async mode [PATCH v2 8/9] fix py-finish-breakpoint.exp with always-async [PATCH v2 9/9] enable target-async Re: [PATCH v2] Add convenience variable $_exitsignal Re: [PATCH v3] gdbserver: fix the standalone build [patch v4 00/24] record-btrace: reverse [patch v4 01/24] gdbarch: add instruction predicate methods [patch v4 02/24] record: upcase record_print_flag enumeration constants [patch v4 03/24] btrace: change branch trace data structure [patch v4 04/24] record-btrace: fix insn range in function call history [patch v4 05/24] record-btrace: start counting at one [patch v4 06/24] btrace: increase buffer size [patch v4 07/24] record-btrace: optionally indent function call history [patch v4 08/24] record-btrace: make ranges include begin and end [patch v4 09/24] btrace: add replay position to btrace thread info [patch v4 10/24] target: add ops parameter to to_prepare_to_store method [patch v4 11/24] record-btrace: supply register target methods [patch v4 12/24] frame, backtrace: allow targets to supply a frame unwinder [patch v4 13/24] record-btrace, frame: supply target-specific unwinder [patch v4 14/24] record-btrace: provide xfer_partial target method [patch v4 15/24] record-btrace: add to_wait and to_resume target methods. [patch v4 16/24] record-btrace: provide target_find_new_threads method [patch v4 17/24] record-btrace: add record goto target methods [patch v4 18/24] record-btrace: extend unwinder [patch v4 19/24] btrace, linux: fix memory leak when reading branch trace [patch v4 20/24] btrace, gdbserver: read branch trace incrementally [patch v4 21/24] record-btrace: show trace from enable location [patch v4 22/24] infrun: reverse stepping from unknown functions [patch v4 23/24] record-btrace: add (reverse-)stepping support [patch v4 24/24] record-btrace: skip tail calls in back trace [PATCH with testcase] Bug 11568 - delete thread-specific breakpoint on the thread exit Re: [PATCH, e500] Fix store.exp failures [patch, sim, mips] Implement unlink, lseek, and stat for MIPS [PATCH, testsuite] Don't run SREC, IHEX and TEKHEX tests for MIPS N64. Re: [PATCH, testsuite] Fix failures in gdb.mi/gdb2549.exp when register 0 doesn't have a name [PATCH/AARCH64] Fix hardware break points Re: [PATCH/v2] fix Bug 15180 Agent style dprintf does not respect conditions [PATCH3/5] 64 bit support in xcoffread + reading auxillary entries correctly [PATCH] [1/2] Add new 'z' format for print command [PATCH] [1/2] value_fetch_lazy - ensure parent is not lazy before accessing. [PATCH] [2/2] Don't raise an error for optimized out sub-fields. [PATCH] [2/2] Resue 'z' formatter from mi register display code [PATCH] [OBV] Look for gdb_prompt in py-explore.exp [patch] [python] Add two different initialization checks for frame filters [PATCH] ada-lang.c:coerce_unspec_val_to_type: Preserve laziness. [PATCH] Catch up with OpenBSD/hppa ptrace(2) changes Re: [PATCH] change gdb to use BFD's "dwz" functions [PATCH] cleanup: constify "struct attribute" function parameter [PATCH] Copy file to host if it is remote [PATCH] Don't call strchr with the NULL character. [PATCH] Enable hw watchpoint with longer ranges using DAWR on Power [PATCH] fix Bug 11568 - delete thread-specific breakpoint on the thread exit Re: [PATCH] Fix bug 15433 - GDB crashes when using agent dprintf, %s format, and an in-line string RE: [patch] Fix cleanup in finish_command [PATCH] Fix for PR15117 [PATCH] Fix PR 12702 - gdb can hang waiting for thread group leader (gdbserver) Re: [PATCH] fix PR 15180 "May only run agent-printf on the target" [PATCH] Fix PR 15692 -dprintf-insert does not accept double quotes [PATCH] Fix PR 15693 - Extra *running event when call-style dprintf hits [PATCH] fix PR symtab/15719 Re: [PATCH] fix remote host test failures in testsuite [patch] Fix SIGTERM signal safety (PR gdb/15358) Re: [PATCH] Fix up msymbol type of dll trampoline to mst_solib_trampoline Re: [PATCH] gdb/testsuite/gdb.base/gnu-ifunc-lib.c: Use %function syntax. Re: [PATCH] gdb/testsuite/gdb.dwarf2: Replaces @ with % sign to allow tests stay compatible with both arm and x86 assembly [PATCH] gdb/testsuite/gdb.threads: Ensure TLS tests link against pthreads. Re: [PATCH] gdb/testsuite/gdb.threads: Make sure TLS tests link against pthreads. [PATCH] Implement way of checking if probe interface can evaluate arguments [PATCH] Improve performance of large restore commands Re: [PATCH] Link GDBserver with -lmcheck on mainline/development too. [PATCH] make default_print_one_register_info print_hex_chars Re: [PATCH] Make file transfer commands work with all (native) targets. [PATCH] MIPS: Define descriptive names for GNU attribute values [PATCH] native mingw32 gdb, eol format Re: [PATCH] NULL dwarf2_per_objfile in dwarf2_per_objfile_free [PATCH] optimized out registers in mi [PATCH] Pass address around within ada-valprint.c [patch] PR 15695, add missing check_typedef's Re: [PATCH] print '--with{,out}-babeltrace' in 'gdb --configuration' Re: [PATCH] Rely on beneath to do partial xfer from tfile target [PATCH] Remove error_pre_print and quit_pre_print [PATCH] remove msymbol_objfile [PATCH] Remove parameter lsal from 'create_breakpoints_sal' in 'struct breakpoint_ops' [patch] Rename "prettyprint" to "prettyformat" Re: [PATCH] Rename 'booke' ptrace interface in ppc-linux-nat.c Re: [PATCH] Revised display-linkage-name [PATCH] Share gdbservre setting for board files native-*gdbserver.exp [PATCH] Share more common target structures between gdb and gdbserver [PATCH] Share ptrace options discovery/linux native code between GDB and gdbserver [patch] testsuite build regression on CentOS-5 [Re: [PATCH 1/7] test suite update - gdb.base/[ab]] [PATCH] testsuite/gdb.base: Enable disp-step-syscall.exp tests for arm targets [PATCH] testsuite/gdb.dwarf2: Enable dw2-error.exp tests for arm targets [PATCH] testsuite/gdb.dwarf2: Fix for dw2-dos-drive failure on ARM [PATCH] testsuite/gdb.dwarf2: Fix for dw2-ifort-parameter failure on ARM [PATCH] Unbuffer stdout and stderr on windows [PATCH] Update pattern to match when value is missing [PATCH] Use DWARF2 CFI unwinder on OpenBSD/hppa [PATCH] wp-replication: Fix test case loop Re: [patch][python] 0 of 5 - Frame filters and Wrappers RE: [patchv2 1/11] Remove xfullpath (use gdb_realpath instead) Re: [patchv2 2/2] Fix CTRL-C for remote.c (PR remote/15297) [PING (docs)] Re: [PATCH] [1/2] Add new 'z' format for print command [ping 2] [RFA][PATCH v4 0/5] Add TDB regset support [ping 2]: [PATCH 0/3] Select the current frame in command tdump. [ping] [RFA][PATCH v4 0/5] Add TDB regset support [PING] Re: [PATCH] [1/2] value_fetch_lazy - ensure parent is not lazy before accessing. [ping]: [PATCH 0/2] New option --skip-unavailable to -stack-list-XXX commands Re: [ping][PATCH 00/17] Implement gdbarch_gdb_signal_to_target [RFA 0/14] Remove quadratic behaviour from probes linker interface [RFA 1/14] Changes to solib.c [RFA 10/14] Changes to solib-som.c [RFA 11/14] Changes to solib-spu.c [RFA 12/14] Changes to solib-sunos.c [RFA 13/14] Changes to solib-target.c [RFA 14/14] Changes to solib-svr4.c [RFA 2/14] Changes to solib-aix.c [RFA 3/14] Changes to solib-darwin.c [RFA 4/14] Changes to solib-dsbt.c [RFA 5/14] Changes to solib-frv.c [RFA 6/14] Changes to solib-ia64-hpux.c [RFA 7/14] Changes to solib-irix.c [RFA 8/14] Changes to solib-osf.c [RFA 9/14] Changes to solib-pa64.c Re: [RFA, doc RFA] set print frame-arguments-raw on|off Re: [RFA, doc RFA] work around gold/15646 [RFA] bad VER in src-release causes snapshot failure Re: [RFA] buildsym.c cleanup Re: [RFA] dwarf2read.c: fix computing list of included symtabs [RFA] Fix mi-var-child-f.exp failures [RFA] Fix namespace aliases (c++/7539, c++/10541) [RFA] Fix varobj/15166 [RFA] nto_find_and_open_solib: Fix setting temp_pathname on failure. Re: [RFA] remove duplicates in search_symbols [RFA] Remove target_section.bfd [RFA] Windows x64 SEH unwinder (v2) [RFA][PATCH v4 0/5] Add TDB regset support [RFA][PATCH v4 1/5] S/390 regmap rework [RFA][PATCH v4 2/5] S/390: Add TDB regset [RFA][PATCH v4 3/5] Dynamic core regset sections support [RFA][PATCH v4 4/5] S/390: Exploit dynamic core regset sections [RFA][PATCH v4 5/5] PowerPC: Exploit dynamic core regset sections Re: [RFC/PATCH] Add new internal variable $_signo Re: [RFC/PATCH] New convenience variable $_exitsignal Re: [rfc] Add help text to start-up text Re: [RFC] Catch exception after stepped over watchpoint. Re: [RFC] Debug Methods in GDB Python [RFC] Support for dynamic core file register notes Re: [RFC][PATCH] GDB kills itself instead of interrupting inferior [RFC][PATCH] Preliminary `catch syscall' support for ARM Linux. [v13 0/5] remove-symbol-file & add-symbol-file [v13 1/5] New remove-symbol-file command. [v13 2/5] Documentation for the remove-symbol-file command. [v13 3/5] 'add-symbol-file' should update the current target sections. [v13 4/5] Function is_known_elf_target. [v13 5/5] Test adding and removing a symbol file at runtime. Build regression on CentOS-5 [Re: [PATCH 3/3] add -Wold-style-definition] Build regression with --enable-targets=all [Re: [RFA] Remove target_section.bfd] Re: Events when inferior is modified RE: fix ARI for version.in change FYI: [testsuite/Ada] Add testing of access to packed arrays. FYI: GDB nightly snapshots still down FYI: minor comment fixes in ptid.h Fw: gdb-7.6 patches for powerpc64-aix gdb-patches Library cleaner Book sterilizer for soleagent each countries Patch contribution Regression for fission-reread.exp and pr13961.exp [Re: [PATCH 2/3] fix init_cutu_and_read_dies] Regression for implptr.exp and pieces.exp [Re: [COMMIT PATCH] value_bits_valid: Fix latent bug.] Re: Regression for implptr.exp and pieces.exp [Re: [COMMIT PATCH] value_bits_valid: Fix latent bug.] Re: regroup --help text in --help) Relocation test fix for target=i686-mingw32 and host=i686-pc-linux RFA: remove mention of "target nrom" RFC: don't call add_target for thread_db_ops RFC: fix ARI for version.in change Re: RFC: fix src-release for version.in move Re: RFC: introduce common.m4 Re: RFC: introduce scoped cleanups RFC: remove pop_target Runtime regression for gdb.base/reread.exp & co. [Re: [RFA] Remove target_section.bfd] Re: Runtime regression for gdb.base/reread.exp & co. [Re: [RFA] Remove target_section.bfd] Setting parity for remote serial Re: Updated patch for Bug 13217 - thread apply all detach throws a SEGFAULT
https://sourceware.org/ml/gdb-patches/2013-07/subjects.html
CC-MAIN-2017-04
en
refinedweb
HARMONY-1884 was created. Thanks, 2006/10/16, Alexei Zakharov <alexei.zakharov@gmail.com>: > I am ok with this. :) Will file a JIRA soon. > > Regards, > > 2006/10/16, Tim Ellison <t.p.ellison@gmail.com>: > > I'm fine with marking it as a non-bug difference, with the option to fix > > it if we find some compelling application that relies on this non-spec > > behavior. Is that weasely enough? > > > > Regards, > > Tim > > > > Alexei Zakharov wrote: > > > Hi Tim, > > > > > > <-- persuasion starts here > > > > > > Let me cite the spec describing design patterns for properties, > > > JavaBeans spec v1.01-A (Aug 8, 1997), page 55: > > > > > > --- > > > 8.3 Design Patterns for Properties > > > > > > 8.3.1 Simple properties > > > By default, we use design patterns to locate properties by looking for > > > methods of the form: > > > > > > public <PropertyType> get<PropertyName>(); > > > public void set<PropertyName>(<PropertyType> a); > > > > > > 8.3.2 Boolean properties > > > In addition, for boolean properties, we allow a > > > > > > public boolean is<PropertyName>(); > > > > > > 8.3.3 Indexed properties > > > If we find a property whose type is an array "<PropertyElement>[]", > > > then we also look for methods of the form: > > > > > > public <PropertyElement> get<PropertyName>(int a); > > > public void set<PropertyName>(int a, <PropertyElement> b); > > > --- > > > > > > So we have only three design patterns specified for properties. That's > > > all. I didn't found any mentioning about any extra design patterns and > > > I've never heard anything about setDefaults() or smth. like it. > > > > > > On the other hand, if I understand things correctly the Introspector > > > class should be the decision-making center for such type of things. > > > I.e. if Introspector says there is no properties then there should be > > > no properties. RI doesn't seem to be using Introspector in the example > > > I've described ealier. Thus I still think it looks like RI bug. > > > > > > <-- end of persuasion > > > > > > Thanks and regards, > > > > > > 2006/10/14, Tim Ellison <t.p.ellison@gmail.com>: > > >> That is strange behavior, since as you point out it does not set a > > >> parametrized value, however, I wonder if there is some assumption that > > >> the setFoo() method may be a mutator anyway, e.g. setDefaults() or > > >> something like that? Just guessing. > > >> > > >> In this case it may be safer to follow the RI -- but I'm open to > > >> persuasion. > > >> > > >> Regards, > > >> Tim > > >> > > >> Alexei Zakharov wrote: > > >> > Hi all, > > >> > > > >> > Let me disturb you with another boring "RI inconsistency in beans" > > >> > –type of message. :) It seems I found a bug in RI. In > > >> > java.beans.EventHandler. I think RI incorrectly determines properties > > >> > here. According to spec, common sense and even the RI's implementation > > >> > of java.beans.Introspector the following bean should not contain any > > >> > properties: > > >> > > > >> > public static class MyBean { > > >> > public void setProp1() {} > > >> > } > > >> > > > >> > because "setProp1()" is not a valid setter method – it does not > > >> > contain a new value to set. > > >> > However, the following test fails on RI: > > >> > > > >> > <--- > > >> > import java.beans.*; > > >> > > > >> > public class TestBeanInfo1 { > > >> > public static class MyBean { > > >> > public void setProp1() {} > > >> > } > > >> > > > >> > public static void main(String argv[]) throws Exception { > > >> > MyBean bean = new MyBean(); > > >> > // "prop1" is neither the name of writeable property nor the > > >> > name of any public method > > >> > Object proxy = EventHandler.create( > > >> > PropertyChangeListener.class, bean, "prop1"); > > >> > > > >> > // just to show that Introspector doesn't see the property > > >> > with name "prop1" > > >> > PropertyDescriptor[] pds = > > >> Introspector.getBeanInfo(MyBean.class, > > >> > Introspector.USE_ALL_BEANINFO).getPropertyDescriptors(); > > >> > for (int i = 0; i < pds.length; i++) { > > >> > System.out.println("Property found: " + pds[i].getName()); > > >> > } > > >> > > > >> > // should throw exception > > >> > try { > > >> > ((PropertyChangeListener) proxy).propertyChange( > > >> > new PropertyChangeEvent(bean, "prop1", "1", "2")); > > >> > System.out.println("FAIL"); > > >> > } catch (Throwable t) { > > >> > System.out.println("PASS"); > > >> > } > > >> > } > > >> > } > > >> > <--- > > >> > > > >> > So it determines "prop1" as a valid property. IMHO this behavior is > > >> > inconsistent and we should not follow RI. But I like to hear opinions > > >> > from the rest of the community. > > >> > > > >> > Thanks, > > > -- > Alexei Zakharov, > Intel Enterprise Solutions Software Division, Russia > -- Alexei Zakharov, Intel Enterprise Solutions Software Division, Russia --------------------------------------------------------------------- To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org For additional commands, e-mail: harmony-dev-help@incubator.apache.org
http://mail-archives.apache.org/mod_mbox/harmony-dev/200610.mbox/%3C2c9597b90610161018ve4f3c4es9a41b68b64ba50c1@mail.gmail.com%3E
CC-MAIN-2017-04
en
refinedweb
Python provides extensive support in its standard library for working with email (and newsgroup) messages. There are three general aspects to working with email, each supported by one or more Python modules. Communicating with network servers to actually transmit and receive messages. The modules poplib, imaplib, smtplib, and nntplib each address the protocol contained in its name. These tasks do not have a lot to do with text processing per se, but are often important for applications that deal with email. The discussion of each of these modules is incomplete, addressing only those methods necessary to conduct basic transactions in the case of the first three modules/protocols. The module nntplib is not documented here under the assumption that email is more likely to be automatically processed than are Usenet articles. Indeed, robot newsgroup posters are almost always frowned upon, while automated mailing is frequently desirable (within limits). Examining the contents of message folders. Various email and news clients store messages in a variety of formats, many providing hierarchical and structured folders. The module mailbox provides a uniform API for reading the messages stored in all the most popular folder formats. In a way, imaplib serves an overlapping purpose, insofar as an IMAP4 server can also structure folders, but folder manipulation with IMAP4 is discussed only cursorily?that topic also falls afield of text processing. However, local mailbox folders are definitely text formats, and mailbox makes manipulating them a lot easier. The core text processing task in working with email is parsing, modifying, and creating the actual messages. RFC-822 describes a format for email messages and is the lingua franca for Internet communication. Not every Mail User Agent (MUA) and Mail Transport Agent (MTA) strictly conforms to the RFC-822 (and superset/clarification RFC-2822) standard?but they all generally try to do so. The newer email package and the older rfc822, rfc1822, mimify, mimetools, MimeWriter, and multifile modules all deal with parsing and processing email messages. Although existing applications are likely to use rfc822, mimify, mimetools, MimeWriter, and multifile, the package email contains more up-to-date and better-designed implementations of the same capabilities. The former modules are discussed only in synopsis while the various subpackages of email are documented in detail. There is one aspect of working with email that all good-hearted people wish was unnecessary. Unfortunately, in the real-world, a large percentage of email is spam, viruses, and frauds; any application that works with collections of messages practically demands a way to filter out the junk messages. While this topic generally falls outside the scope of this discussion, readers might benefit from my article, "Spam Filtering Techniques," at: <> A flexible Python project for statistical analysis of message corpora, based on naive Bayesian and related models, is SpamBayes: <> Without repeating the whole of RFC-2822, it is worth mentioning the basic structure of an email or newsgroup message. Messages may themselves be stored in larger text files that impose larger-level structure, but here we are concerned with the structure of a single message. An RFC-2822 message, like most Internet protocols, has a textual format, often restricted to true 7-bit ASCII. A message consists of a header and a body. A body in turn can contain one or more "payloads." In fact, MIME multipart/* type payloads can themselves contain nested payloads, but such nesting is comparatively unusual in practice. In textual terms, each payload in a body is divided by a simple, but fairly long, delimiter; however, the delimiter is pseudo-random, and you need to examine the header to find it. A given payload can either contain text or binary data using base64, quoted printable, or another ASCII encoding (even 8-bit, which is not generally safe across the Internet). Text payloads may either have MIME type text/* or compose the whole of a message body (without any payload delimiter). An RFC-2822 header consists of a series of fields. Each field name begins at the beginning of a line and is followed by a colon and a space. The field value comes after the field name, starting on the same line, but potentially spanning subsequence lines. A continued field value cannot be left aligned, but must instead be indented with at least one space or tab. There are some moderately complicated rules about when field contents can split between lines, often dependent upon the particular type of value a field holds. Most field names occur only once in a header (or not at all), and in those cases their order of occurrence is not important to email or news applications. However, a few field names?notably Received?typically occur multiple times and in a significant order. Complicating headers further, field values can contain encoded strings from outside the ASCII character set. The most important element of the email package is the class email.Message.Message, whose instances provide a data structure and convenience methods suited to the generic structure of RFC-2822 messages. Various capabilities for dealing with different parts of a message, and for parsing a whole message into an email.Message.Message object, are contained in subpackages of the email package. Some of the most common facilities are wrapped in convenience functions in the top-level namespace. A version of the email package was introduced into the standard library with Python 2.1. However, email has been independently upgraded and developed between Python releases. At the time this chapter was written, the current release of email was 2.4.3, and this discussion reflects that version (and those API details that the author thinks are most likely to remain consistent in later versions). I recommend that, rather than simply use the version accompanying your Python installation, you download the latest version of the email package from <> if you intend to use this package. The current (and expected future) version of the email package is directly compatible with Python versions back to 2.1. See this book's Web site, <>, for instructions on using email with Python 2.0. The package is incompatible with versions of Python before 2.0. Several children of email.Message.Message allow you to easily construct message objects with special properties and convenient initialization arguments. Each such class is technically contained in a module named in the same way as the class rather than directly in the email namespace, but each is very similar to the others. Construct a message object with a Content-Type header already built. Generally this class is used only as a parent for further subclasses, but you may use it directly if you wish: >>> mess = email.MIMEBase.MIMEBase('text','html',charset='us-ascii') >>> print mess From nobody Tue Nov 12 03:32:33 2002 Content-Type: text/html; charset="us-ascii" MIME-Version: 1.0 Child of email.MIMEBase.MIMEBase, but raises MultipartConversionError on calls to .attach(). Generally this class is used for further subclassing. Construct a multipart message object with subtype subtype. You may optionally specify a boundary with the argument boundary, but specifying None will cause a unique boundary to be calculated. If you wish to populate the message with payload object, specify them as additional arguments. Keyword arguments are taken as parameters to the Content-Type header. >>> from email.MIMEBase import MIMEBase >>> from email.MIMEMultipart import MIMEMultipart >>> mess = MIMEBase('audio','midi') >>> combo = MIMEMultipart('mixed', None, mess, charset='utf-8') >>> print combo From nobody Tue Nov 12 03:50:50 2002 Content-Type: multipart/mixed; charset="utf-8"; boundary="===============5954819931142521==" MIME-Version: 1.0 --===============5954819931142521== Content-Type: audio/midi MIME-Version: 1.0 --===============5954819931142521==-- Construct a single part message object that holds audio data. The audio data stream is specified as a string in the argument audiodata. The Python standard library module sndhdr is used to detect the signature of the audio subtype, but you may explicitly specify the argument subtype instead. An encoder other than base64 may be specified with the encoder argument (but usually should not be). Keyword arguments are taken as parameters to the Content-Type header. >>> from email.MIMEAudio import MIMEAudio >>> mess = MIMEAudio(open('melody.midi').read()) SEE ALSO: sndhdr 397; Construct a single part message object that holds image data. The image data is specified as a string in the argument imagedata. The Python standard library module imghdr is used to detect the signature of the image subtype, but you may explicitly specify the argument subtype instead. An encoder other than base64 may be specified with the encoder argument (but usually should not be). Keyword arguments are taken as parameters to the Content-Type header. >>> from email.MIMEImage import MIMEImage >>> mess = MIMEImage(open('landscape.png').read()) SEE ALSO: imghdr 396; Construct a single part message object that holds text data. The data is specified as a string in the argument text. A character set may be specified in the charset argument: >>> from email.MIMEText import MIMEText >>> mess = MIMEText(open('TPiP.tex').read(),'latex') Return a message object based on the message text contained in the file-like object file. This function call is exactly equivalent to: SEE ALSO: email.Parser.Parser.parse() 363; Return a message object based on the message text contained in the string s. This function call is exactly equivalent to: SEE ALSO: email.Parser.Parser.parsestr() 363; The module email.Encoder contains several functions to encode message bodies of single part message objects. Each of these functions sets the Content-Transfer-Encoding header to an appropriate value after encoding the body. The decode argument of the .get_payload() message method can be used to retrieve unencoded text bodies. Encode the message body of message object mess using quoted printable encoding. Also sets the header Content-Transfer-Encoding. Encode the message body of message object mess using base64 encoding. Also sets the header Content-Transfer-Encoding. Set the Content-Transfer-Encoding to 7bit or 8bit based on the message payload; does not modify the payload itself. If message mess already has a Content-Transfer-Encoding header, calling this will create a second one?it is probably best to delete the old one before calling this function. SEE ALSO: email.Message.Message.get_payload() 360; quopri 162; base64 158; Exceptions within the email package will raise specific errors and may be caught at the desired level of generality. The exception hierarchy of email.Errors is shown in Figure 5.1. SEE ALSO: exceptions 44; The module email.Generator provides support for the serialization of email.Message.Message objects. In principle, you could create other tools to output message objects to specialized formats?for example, you might use the fields of an email.Message.Message object to store values to an XML format or to an RDBMS. But in practice, you almost always want to write message objects to standards-compliant RFC-2822 message texts. Several of the methods of email.Message.Message automatically utilize email.Generator. Construct a generator instance that writes to the file-like object file. If the argument mangle_from_ is specified as a true value, any occurrence of a line in the body that begins with the string From followed by a space is prepended with >. This (non-reversible) transformation prevents BSD mailboxes from being parsed incorrectly. The argument maxheaderlen specifies where long headers will be split into multiple lines (if such is possible). Construct a generator instance that writes RFC-2822 messages. This class has the same initializers as its parent email.Generator.Generator, with the addition of an optional argument fmt. The class email.Generator.DecodedGenerator only writes out the contents of text/* parts of a multipart message payload. Nontext parts are replaced with the string fmt, which may contain keyword replacement values. For example, the default value of fmt is: [Non-text (%(type)s) part of message omitted, filename %(filename)s] Any of the keywords type, maintype, subtype, filename, description, or encoding may be used as keyword replacements in the string fmt. If any of these values is undefined by the payload, a simple description of its unavailability is substituted. Return a copy of the instance with the same options. Write an RFC-2822 serialization of message object mess to the file-like object the instance was initialized with. If the argument unixfrom is specified as a true value, the BSD mailbox From_ header is included in the serialization. Write the string s to the file-like object the instance was initialized with. This lets a generator object itself act in a file-like manner, as an implementation convenience. SEE ALSO: email.Message 355; mailbox 372; The module email.Charset provides fine-tuned capabilities for managing character set conversions and maintaining a character set registry. The much higher-level interface provided by email.Header provides all the capabilities that almost all users need in a friendlier form. The basic reason why you might want to use the email.Header module is because you want to encode multinational (or at least non-US) strings in email headers. Message bodies are somewhat more lenient than headers, but RFC-2822 headers are still restricted to using only 7-bit ASCII to encode other character sets. The module email.Header provides a single class and two convenience functions. The encoding of non-ASCII characters in email headers is described in a number of RFCs, including RFC-2045, RFC-2046, RFC-2047, and most directly RFC-2231. Construct an object that holds the string or Unicode string s. You may specify an optional charset to use in encoding s; absent any argument, either us-ascii or utf-8 will be used, as needed. Since the encoded string is intended to be used as an email header, it may be desirable to wrap the string to multiple lines (depending on its length). The argument maxlinelen specifies where the wrapping will occur; header_name is the name of the header you anticipate using the encoded string with?it is significant only for its length. Without a specified header_name, no width is set aside for the header field itself. The argument continuation_ws specified what whitespace string should be used to indent continuation lines; it must be a combination of spaces and tabs. Instances of the class email.Header.Header implement a .__str__() method and therefore respond to the built-in str() function and the print command. Normally the built-in techniques are more natural, but the method email.Header.Header.encode() performs an identical action. As an example, let us first build a non-ASCII string: >>> from unicodedata import lookup >>> lquot = lookup("LEFT-POINTING DOUBLE ANGLE QUOTATION MARK") >>> rquot = lookup("RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK") >>> s = lquot + "Euro-style" + rquot + " quotation" >>> s u'\xabEuro-style\xbb quotation' >>> print s.encode('iso-8859-1') Euro-style quotation Using the string s, let us encode it for an RFC-2822 header: >>> from email.Header import Header >>> print Header(s) =?utf-8?q?=C2=ABEuro-style=C2=BB_quotation?= >>> print Header(s,'iso-8859-1') =?iso-8859-1?q?=ABEuro-style=BB_quotation?= >>> print Header(s, 'utf-16') =?utf-16?b?/v8AqwBFAHUAcgBvACOAcwBOAHkAbABl?= =?utf-16?b?/v8AuwAgAHEAdQBvAHQAYQBOAGkAbwBu?= >>> print Header(s,'us-ascii') =?utf-8?q?=C2=ABEuro-style=C2=BB_quotation?= Notice that in the last case, the email.Header.Header initializer did not take too seriously my request for an ASCII character set, since it was not adequate to represent the string. However, the class is happy to skip the encoding strings where they are not needed: >>> print Header('"US-style" quotation') "US-style" quotation >>> print Header('"US-style" quotation','utf-8') =?utf-8?q?=22US-style=22_quotation?= >>> print Header('"US-style" quotation','us-ascii') "US-style" quotation Add the string or Unicode string s to the end of the current instance content, using character set charset. Note that the charset of the added text need not be the same as that of the existing content. >>> subj = Header(s,'latin-1',65) >>> print subj =?iso-8859-1?q?=ABEuro-style=BB_quotation?= >>> unicodedata.name(omega), unicodedata.name(Omega) ('GREEK SMALL LETTER OMEGA', 'GREEK CAPITAL LETTER OMEGA') >>> subj.append(', Greek: ', 'us-ascii') >>> subj.append(Omega, 'utf-8') >>> subj.append(omega, 'utf-16') >>> print subj =?iso-8859-1?q?=ABEuro-style=BB_quotation?=, Greek: =?utf-8?b?zqk=?= =?utf-16?b?/v8DyQ==?= >>> unicode(subj) u'\xabEuro-style\xbb quotation, Greek: \u03a9\u03c9' Return an ASCII string representation of the instance content. Return a list of pairs describing the components of the RFC-2231 string held in the header object header. Each pair in the list contains a Python string (not Unicode) and an encoding name. >>> email.Header.decode_header(Header('spam and eggs')) [('spam and eggs', None)] >>> print subj =?iso-8859-1?q?=ABEuro-style=BB_quotation?=, Greek: =?utf-8?b?zqk=?= =?utf-16?b?/v8DyQ==?= >>> for tup in email.Header.decode_header(subj): print tup ... ('\xabEuro-style\xbb quotation', 'iso-8859-1') (', Greek:', None) ('\xce\xa9', 'utf-8') ('\xfe\xff\x03\xc9', 'utf-16') These pairs may be used to construct Unicode strings using the built-in unicode() function. However, plain ASCII strings show an encoding of None, which is not acceptable to the unicode() function. >>> for s,enc in email.Header.decode_header(subj): ... enc = enc or 'us-ascii' ... print `unicode(s, enc)' ... u'\xabEuro-style\xbb quotation' u', Greek:' u'\u03a9' u'\u03c9' SEE ALSO: unicode() 423; email.Header.make_header() 354; Construct a header object from a list of pairs of the type returned by the function email.Header.decode-header(). You may also, of course, easily construct the list decoded_seq manually, or by other means. The three arguments maxlinelen, header_name, and continuation_ws are the same as with the email.Header.Header class. SEE ALSO: email.Header.decode_header() 353; email.Header.Header 351; The module email.Iterators provides several convenience functions to walk through messages in ways different from email.Message.Message.get_payload() or email.Message.Message.walk(). Return a generator object that iterates through each content line of the message object mess. The entire body that would be produced by str(mess) is reached, regardless of the content types and nesting of parts. But any MIME delimiters are omitted from the returned lines. >>> import email.MIMEText, email.Iterators >>> mess1 = email.MIMEText.MIMEText('message one') >>> mess2 = email.MIMEText.MIMEText('message two') >>> combo = email.Message.Message() >>> combo.set_type('multipart/mixed') >>> combo.attach(mess1) >>> combo.attach(mess2) >>> for line in email.Iterators.body_line_iterator(combo): ... print line ... message one message two Return a generator object that iterates through each subpart of message whose type matches maintype. If a subtype subtype is specified, the match is further restricted to maintype/subtype. Write a "pretty-printed" representation of the structure of the body of message mess. Output to the file-like object file. SEE ALSO: email.Message.Message.get_payload() 360; email.Message.Message.walk() 362; A message object that utilizes the email.Message module provides a large number of syntactic conveniences and support methods for manipulating an email or news message. The class email.Message.Message is a very good example of a customized datatype. The built-in str() function?and therefore also the print command?cause a message object to produce its RFC-2822 serialization. In many ways, a message object is dictionary-like. The appropriate magic methods are implemented in it to support keyed indexing and assignment, the built-in len() function, containment testing with the in keyword, and key deletion. Moreover, the methods one expects to find in a Python dict are all implemented by email.Message.Message:has_key(), .keys(), .values (), .items(), and .get(). Some usage examples are helpful: >>> import mailbox, email, email.Parser >>> mbox = mailbox.PortableUnixMailbox(open('mbox'), ... email.Parser.Parser().parse) >>> mess = mbox.next() >>> len(mess) # number of headers 16 >>> 'X-Status' in mess # membership testing 1 >>> mess.has_key('X-AGENT') # also membership test 0 >>> mess['x-agent'] = "Python Mail Agent" >>> print mess['X-AGENT'] # access by key Python Mail Agent >>> del mess['X-Agent'] # delete key/val pair >>> print mess['X-AGENT'] None >>> [fld for (fld,val) in mess.items() if fld=='Received'] ['Received', 'Received', 'Received', 'Received', 'Received'] This is dictionary-like behavior, but only to an extent. Keys are case-insensitive to match email header rules. Moreover, a given key may correspond to multiple values?indexing by key will return only the first such value, but methods like .keys(), .items(), or .get_all() will return a list of all the entries. In some other ways, an email.Message.Message object is more like a list of tuples, chiefly in guaranteeing to retain a specific order to header fields. A few more details of keyed indexing should be mentioned. Assigning to a keyed field will add an additional header, rather than replace an existing one. In this respect, the operation is more like a list.append() method. Deleting a keyed field, however, deletes every matching header. If you want to replace a header completely, delete first, then assign. The special syntax defined by the email.Message.Message class is all for manipulating headers. But a message object will typically also have a body with one or more payloads. If the Content-Type header contains the value multipart/*, the body should consist of zero or more payloads, each one itself a message object. For single part content types (including where none is explicitly specified), the body should contain a string, perhaps an encoded one. The message instance method .get_payload(), therefore, can return either a list of message objects or a string. Use the method .is_multipart() to determine which return type is expected. As the epigram to this chapter suggests, you should strictly follow content typing rules in messages you construct yourself. But in real-world situations, you are likely to encounter messages with badly mismatched headers and bodies. Single part messages might claim to be multipart, and vice versa. Moreover, the MIME type claimed by headers is only a loose indication of what payloads actually contain. Part of the mismatch comes from spammers and virus writers trying to exploit the poor standards compliance and lax security of Microsoft applications?a malicious payload can pose as an innocuous type, and Windows will typically launch apps based on filenames instead of MIME types. But other problems arise not out of malice, but simply out of application and transport errors. Depending on the source of your processed messages, you might want to be lenient about the allowable structure and headers of messages. SEE ALSO: UserDict 24; UserList 28; Construct a message object. The class accepts no initialization arguments. Add a header to the message headers. The header field is field, and its value is value.The effect is the same as keyed assignment to the object, but you may optionally include parameters using Python keyword arguments. >>> import email.Message >>> msg = email.Message.Message() >>> msg['Subject'] = "Report attachment" >>> msg.add_header('Content-Disposition','attachment', ... filename='report17.txt') >>> print msg From nobody Mon Nov 11 15:11:43 2002 Subject: Report attachment Content-Disposition: attachment; filename="report17.txt" Serialize the message to an RFC-2822-compliant text string. If the unixfrom argument is specified with a true value, include the BSD mailbox "From_" envelope header. Serialization with str() or print includes the "From_" envelope header. Add a payload to a message. The argument mess must specify an email.Message.Message object. After this call, the payload of the message will be a list of message objects (perhaps of length one, if this is the first object added). Even though calling this method causes the method .is_multipart () to return a true value, you still need to separately set a correct multipart/* content type for the message to serialize the object. >>> mess = email.Message.Message() >>> mess.is_multipart() 0 >>> mess.attach(email.Message.Message()) >>> mess. is_multipart () 1 >>> mess.get_payload() [<email.Message.Message instance at 0x3b2ab0>] >>> mess.get_content_type() 'text/plain' >>> mess.set_type('multipart/mixed') >>> mess.get_content_type() 'multipart/mixed' If you wish to create a single part payload for a message object, use the method email.Message.Message.set-payload(). SEE ALSO: email.Message.Message.set_payload() 362; Remove the parameter param from a header. If the parameter does not exist, no action is taken, but also no exception is raised. Usually you are interested in the Content-Type header, but you may specify a different header argument to work with another one. The argument requote controls whether the parameter value is quoted (a good idea that does no harm). >>> mess = email.Message.Message() >>> mess.set_type('text/plain') >>> mess.set_param('charset','us-ascii') >>> print mess From nobody Mon Nov 11 16:12:38 2002 MIME-Version: 1.0 Content-Type: text/plain;>> mess.del_param('charset') >>> print mess From nobody Mon Nov 11 16:13:11 2002 MIME-Version: 1.0 content-type: text/plain Message bodies that contain MIME content delimiters can also have text that falls outside the area between the first and final delimiter. Any text at the very end of the body is stored in email.Message.Message.epilogue. SEE ALSO: email.Message.Message.preamble 361; Return a list of all the headers with the field name field. If no matches exist, return the value specified in argument failobj. In most cases, header fields occur just once (or not at all), but a few fields such as Received typically occur multiple times. The default nonmatch return value of None is probably not the most useful choice. Returning an empty list will let you use this method in both if tests and iteration context: >>> for rcv in mess.get_all('Received',[]): ... print rcv ... About that time A little earlier >>> if mess.get_all('Foo',[]): ... print "Has Foo header(s)" Return the MIME message boundary delimiter for the message. Return failobj if no boundary is defined; this should always be the case if the message is not multipart. Return a list of string descriptions of contained character sets. Return a string description of the message character set. For message mess, equivalent to mess.get_content_type().split ("/") [0]. For message mess, equivalent to mess.get_content_type().split ("/") [1]. Return the MIME content type of the message object. The return string is normalized to lowercase and contains both the type and subtype, separated by a /. >>> msg_photo.get_content_type() 'image/png' >>> msg_combo.get_content_type() 'multipart/mixed' >>> msg_simple.get_content_type() 'text/plain' Return the current default type of the message. The default type will be used in decoding payloads that are not accompanied by an explicit Content-Type header. Return the filename parameter of the Content-Disposition header. If no such parameter exists (perhaps because no such header exists), failobj is returned instead. Return the parameter param of the header header. By default, use the Content-Type header. If the parameter does not exist, return failobj. If the argument unquote is specified as a true value, the quote marks are removed from the parameter. >>> print mess.get_param('charset',unquote=l) us-ascii >>> print mess.get_param('charset',unquote=0) "us-ascii" SEE ALSO: email.Message.Message.set_param() 362; Return all the parameters of the header header. By default, examine the Content-Type header. If the header does not exist, return failobj instead. The return value consists of a list of key/val pairs. The argument unquote removes extra quotes from values. >>> print mess.get_params(header="To") [('<mertz@gnosis.cx>', '')] >>> print mess.get_params(unquote=0) [('text/plain', ''), ('charset', '"us-ascii"')] Return the message payload. If the message method is_multipart() returns true, this method returns a list of component message objects. Otherwise, this method returns a string with the message body. Note that if the message object was created using email.Parser.HeaderParser, then the body is treated as single part, even if it contains MIME delimiters. Assuming that the message is multipart, you may specify the i argument to retrieve only the indexed component. Specifying the i argument is equivalent to indexing on the returned list without specifying i. If decode is specified as a true value, and the payload is single part, the returned payload is decoded (i.e., from quoted printable or base64). I find that dealing with a payload that may be either a list or a text is somewhat awkward. Frequently, you would like to simply loop over all the parts of a message body, whether or not MIME multiparts are contained in it. A wrapper function can provide uniformity: #!/usr/bin/env python "Write payload list to separate files" import email, sys def get_payload_list(msg, decode=l): payload = msg.get_payload(decode=decode) if type(payload) in [type(""), type(u"")]: return [payload] else: return payload mess = email.message_from_file(sys.stdin) for part,num in zip(get_payload_list(mess),range(1000)): file = open('%s.%d' % (sys.argv[1], num), 'w') print >> file, part SEE ALSO: email.Parser 363; email.Message.Message.is_multipart() 361; email.Message.Message.walk() 362; Return the BSD mailbox "From_" envelope header, or None if none exists. SEE ALSO: mailbox 372; Return a true value if the message is multipart. Notice that the criterion for being multipart is having multiple message objects in the payload; the Content-Type header is not guaranteed to be multipart/* when this method returns a true value (but if all is well, it should be). SEE ALSO: email.Message.Message.get_payload() 360; Message bodies that contain MIME content delimiters can also have text that falls outside the area between the first and final delimiter. Any text at the very beginning of the body is stored in email.Message.Message.preamble. SEE ALSO: email.Message.Message.epilogue 358; Replaces the first occurrence of the header with the name field with the value value. If no matching header is found, raise KeyError. Set the boundary parameter of the Content-Type header to s. If the message does not have a Content-Type header, raise HeaderParserError. There is generally no reason to create a boundary manually, since the email module creates good unique boundaries on it own for multipart messages. Set the current default type of the message to ctype. The default type will be used in decoding payloads that are not accompanied by an explicit Content-Type header. Set the parameter param of the header header to the value value. If the argument requote is specified as a true value, the parameter is quoted. The arguments charset and language may be used to encode the parameter according to RFC-2231. Set the message payload to a string or to a list of message objects. This method overwrites any existing payload the message has. For messages with single part content, you must use this method to configure the message body (or use a convenience message subclass to construct the message in the first place). SEE ALSO: email.Message.Message.attach() 357; email.MIMEText.MIMEText 348; email.MIMEImage.MIMEImage 348; email.MIMEAudio.MIMEAudio 347; Set the content type of the message to ctype, leaving any parameters to the header as is. If the argument requote is specified as a true value, the parameter is quoted. You may also specify an alternative header to write the content type to, but for the life of me, I cannot think of any reason you would want to. Set the BSD mailbox envelope header. The argument s should include the word From and a space, usually followed by a name and a date. SEE ALSO: mailbox 372; Recursively traverse all message parts and subparts of the message. The returned iterator will yield each nested message object in depth-first order. >>> for part in mess.walk(): ... print part.get_content_type() multipart/mixed text/html audio/midi SEE ALSO: email.Message.Message.get_payload() 360; There are two parsers provided by the email.Parser module: email.Parser.Parser and its child email.Parser.HeaderParser. For general usage, the former is preferred, but the latter allows you to treat the body of an RFC-2822 message as an unparsed block. Skipping the parsing of message bodies can be much faster and is also more tolerant of improperly formatted message bodies (something one sees frequently, albeit mostly in spam messages that lack any content value as well). The parsing methods of both classes accept an optional headersonly argument. Specifying headersonly has a stronger effect than using the email.Parser.HeaderParser class. If headersonly is specified in the parsing methods of either class, the message body is skipped altogether?the message object created has an entirely empty body. On the other hand, if email.Parser.HeaderParser is used as the parser class, but headersonly is specified as false (the default), the body is always read as a single part text, even if its content type is multipart/*. Construct a parser instance that uses the class _class as the message object constructor. There is normally no reason to specify a different message object type. Specifying strict parsing with the strict option will cause exceptions to be raised for messages that fail to conform fully to the RFC-2822 specification. In practice, "lax" parsing is much more useful. Construct a parser instance that is the same as an instance of email.Parser.Parser except that multipart messages are parsed as if they were single part. Return a message object based on the message text found in the file-like object file. If the optional argument headersonly is given a true value, the body of the message is discarded. Return a message object based on the message text found in the string s. If the optional argument headersonly is given a true value, the body of the message is discarded. The module email.Utils contains a variety of convenience functions, mostly for working with special header fields. Return a decoded string for RFC-2231 encoded string s: >>> Omega = unicodedata.lookup("GREEK CAPITAL LETTER OMEGA") >>> print email.Utils.encode_rfc2231(Omega+'-man@gnosis.cx') %3A9-man%40gnosis.cx >>> email.Utils.decode_rfc2231("utf-8"%3A9-man%40gnosis.cx") ('utf-8', '', ':9-man@gnosis.cx') Return an RFC-2231-encoded string from the string s. A charset and language may optionally be specified. Return a formatted address from pair (realname,addr): Return an RFC-2822-formatted date based on a time value as returned by time.localtime(). If the argument localtime is specified with a true value, use the local timezone rather than UTC. With no options, use the current time. Return a list of pairs (realname,addr) based on the list of compound addresses in argument addresses. >>> addrs = ['"Joe" <jdoe@nowhere.lan>','Jane <jroe@other.net>'] >>> email.Utils.getaddresses(addrs) [('Joe', 'jdoe@nowhere.lan'), ('Jane', 'jroe@other.net')] Return a unique string suitable for a Message-ID header. If the argument seed is given, incorporate that string into the returned value; typically a seed is the sender's domain name or other identifying information. Return a timestamp based on an email.Utils.parsedate_tz() style tuple. >>> email.Utils.mktime_tz((2001, 1, 11, 14, 49, 2, 0, 0, 0, 0)) 979224542.0 Parse a compound address into the pair (realname,addr). Return a date tuple based on an RFC-2822 date string. >>> email.Utils.parsedate('11 Jan 2001 14:49:02 -0000') (2001, 1, 11, 14, 49, 2, 0, 0, 0) SEE ALSO: time 86; Return a date tuple based on an RFC-2822 date string. Same as email.Utils.parsedate(), but adds a tenth tuple field for offset from UTC (or None if not determinable). Return a string with backslashes and double quotes escaped. >>> print email.Utils.quote(r'"MyPath" is d:\this\that') \"MYPath\" is d:\\this\\that Return a string with surrounding double quotes or angle brackets removed. >>> print email.Utils.unquote('<mertz@gnosis.cx>') mertz@gnosis.cx >>> print email.Utils.unquote('"us-ascii"') us-ascii The module imaplib supports implementing custom IMAP clients. This protocol is detailed in RFC-1730 and RFC-2060. As with the discussion of other protocol libraries, this documentation aims only to cover the basics of communicating with an IMAP server?many methods and functions are omitted here. In particular, of interest here is merely being able to retrieve messages?creating new mailboxes and messages is outside the scope of this book. The Python Library Reference describes the POP3 protocol as obsolescent and recommends the use of IMAP4 if your server supports it. While this advice is not incorrect technically?IMAP indeed has some advantages?in my experience, support for POP3 is far more widespread among both clients and servers than is support for IMAP4. Obviously, your specific requirements will dictate the choice of an appropriate support library. Aside from using a more efficient transmission strategy (POP3 is line-by-line, IMAP4 sends whole messages), IMAP4 maintains multiple mailboxes on a server and also automates filtering messages by criteria. A typical (simple) IMAP4 client application might look like the one below. To illustrate a few methods, this application will print all the promising subject lines, after deleting any that look like spam. The example does not itself retrieve regular messages, only their headers. #!/usr/bin/env python import imaplib, sys if len(sys.argv) == 4: sys.argv.append('INBOX') (host, user, passwd, mbox) = sys.argv[1:] i = imaplib.IMAP4(host, port=143) i.login(user, passwd) resp = i.select(mbox) if r[0] <> 'OK': sys.stderr.write("Could not select %s\n" % mbox) sys.exit() # delete some spam messages typ, spamlist = i.search(None, '(SUBJECT) "URGENT"') i.store(','.join(spamlist.split()),'+FLAGS.SILENT','\deleted') i.expunge() typ, messnums = i.search(None,'ALL').split() for mess in messnums: typ, header = i.fetch(mess, 'RFC822.HEADER') for line in header[0].split('\n'): if string.upper(line[:9]) == 'SUBJECT: ': print line[9:] i.close() i.logout() There is a bit more work to this than in the POP3 example, but you can also see some additional capabilities. Unfortunately, much of the use of the imaplib module depends on passing strings with flags and commands, none of which are well-documented in the Python Library Reference or in the source to the module. A separate text on the IMAP protocol is probably necessary for complex client development. Create an IMAP instance object to manage a host connection. Close the currently selected mailbox, and delete any messages marked for deletion. The method imaplib.IMAP4.logout() is used to actually disconnect from the server. Permanently delete any messages marked for deletion in the currently selected mailbox. Return a pair (typ,datalist). The first field typ is either OK or NO, indicating the status. The second field datalist is a list of returned strings from the fetch request. The argument message_set is a comma-separated list of message numbers to retrieve. The message_parts describe the components of the messages retrieved?header, body, date, and so on. Return a (typ,datalist) tuple of all the mailboxes in directory dirname that match the glob-style pattern pattern. datalist contains a list of string names of mailb
http://etutorials.org/Programming/Python.+Text+processing/Chapter+5.+Internet+Tools+and+Techniques/5.1+Working+with+Email+and+Newsgroups/
CC-MAIN-2017-04
en
refinedweb
wolfram@schlich.org wrote:> Dan Hollis <goemon@anime.net> :-)>>> > it has happened to me. dual athlon mp 1.2ghz w/ crucial reg. ddr sdram.> after setting CONFIG_MK7 to CONFIG_MK6 all works fine.The following worked for me (Duron@800, Epox 8KTA2L)root@WormHole:/usr/src# diff -u linux-2.4.9/arch/i386/lib/mmx.c{~,}--- linux-2.4.9/arch/i386/lib/mmx.c~ Tue May 22 19:23:16 2001+++ linux-2.4.9/arch/i386/lib/mmx.c Fri Aug 31 15:51:58 2001@@ -97,7 +97,7 @@ return p; }-#ifdef CONFIG_MK7+#if 0 /* * The K7 has streaming cache bypass load/store. The Cyrix III, K6 and-- Radu-Adrian Feurdeanmailto: raf@chez.com------------------------------------------------------------------------------+#if defined(__alpha__) && defined(CONFIG_PCI)+ /*+ * The meaning of life, the universe, and everything. Plus+ * this makes the year come out right.+ */+ year -= 42;+#endif -- From the patch for 1.3.2: (kernel/time.c), submitted by Marcus Meissner-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2001/8/31/61
CC-MAIN-2017-04
en
refinedweb
- Cookie policy - Advertise with us © Future Publishing Limited, Quay House, The Ambury, Bath BA1 1UA. All rights reserved. England and Wales company registration number 2008885. The last few projects have been relatively easy, but with your skills racing forward as they are it's definitely time you tackled something much bigger. How does 360 lines of code sound to you? Scary? We're going to produce a game that's simple and fun to play, and along the way you'll learn some valuable new coding skills. The game we're going to make is called Bang! and the idea is simple: as different-coloured fireworks launch into the air, the player needs to use their mouse to select the fireworks with the same colour, then press Space to explode them. The more they select of the same colour, the more points they get. Just to make things a bit more lively, I'm going to introduce you to something called a particle system, which is a very common special effects technique for games that simulates explosions, fire, smoke and much more. Sound good? Sound worth the effort of writing almost 400 lines of code? Let's go! Create a new command-line project in MonoDevelop and call it Bang. As with Project Four: Fish Feast and Project Six: Bubble Trouble you'll need to copy the SdlDotNet.dll and Tao.Sdl.dll files into your project directory then add them as references in your project. While you're adding references, add one for System.Drawing. You'll also need to change the output directory for your program so that it points to the root directory of your project - again, see project four for how to do that. In the source code for this project I've provided a "media" directory containing pictures for use in the game; copy that into your project directory so that it's in the same place as the Main.cs file that MonoDevelop made for you. That's all the media in place; now we just need to create the three class definitions that be used to power the game. Right-click on the Bang project (not "Solution Bang" but the "Bang" that's beneath it) and choose Add > New File, then choose Empty Class from the window that appears. Name it "Bang". Repeat the procedure to create classes called Explosion and Firework, which gives us the three classes that will power our game. In Bang.cs, Explosion.cs and Firework.cs, set the "using" statements to these: using SdlDotNet.Core; using SdlDotNet.Graphics; using SdlDotNet.Graphics.Primitives; using SdlDotNet.Input; using System; using System.Collections.Generic; using System.Drawing; While you're in each of those files, you'll see that MonoDevelop "helpfully" created a constructor method for each class - just delete them, because we won't be using them. As a reminder, constructor methods get called when an object of a class is created, and they look something like this: public Bang() { } Like I said, just delete them. Now open up Main.cs and replace the Console.WriteLine() call with this: Bang game = new Bang(); game.Run(); That will create a new instance of our game and start it running - or at least it will once we make the Run() method! Open up Bang.cs and put these variable declarations inside the Bang class: Surface sfcGameWindow; Surface sfcBackground = new Surface("media/background.jpg"); Random Rand = new Random(); const int GameWidth = 640; const int GameHeight = 480; const int GameHalfWidth = GameWidth / 2; That gives us just enough data to fire up a basic game screen and show the background. But first we need to drop in a few basic method to handle running a simple, skeleton version of our game. First up, the Run() method. this needs to create the game window, hook up methods for the events we'll be using, then hand control over to SDL. Which events will we be using? Well, quite a few, actually: we want to read mouse movement, mouse button clicks, keyboard presses, game ticks and the quit signal, so we need to hook them all up to empty methods that we'll fill out later. Here's how the first-draft Run() method should look - put this in Bang.cs: public void Run() { sfcGameWindow = Video.SetVideoMode(GameWidth, GameHeight); Events.MouseMotion += new EventHandler<MouseMotionEventArgs>(Events_MouseMotion); Events.MouseButtonDown += new EventHandler<MouseButtonEventArgs>(Events_MouseButtonDown); Events.KeyboardDown += new EventHandler<KeyboardEventArgs>(Events_KeyboardDown); Events.Tick += new EventHandler<TickEventArgs>(Events_Tick); Events.Quit += new EventHandler<QuitEventArgs>(Events_Quit); Events.Run(); } You'll need to provide skeleton methods for Events_MouseMotion(), Events_MouseButtonDown(), Events_KeyboardDown(), Events_Tick() and Events_Quit(). They can be pretty much empty for now, with the exception of Events_Tick() and Events_Quit() which will use the same calls to Update()/Draw() and Events.QuitApplication() as seen in project six. If you missed that project, go back and start there first because I'm not going to explain it all again here! So, here are the skeleton methods for you to put into Bang.cs: void Events_MouseMotion(object sender, MouseMotionEventArgs e) { } void Events_MouseButtonDown(object sender, MouseButtonEventArgs e) { } void Events_KeyboardDown(object sender, KeyboardEventArgs e) { } void Events_Tick(object sender, TickEventArgs e) { Update(e.SecondsElapsed); Draw(); } private void Draw() { sfcGameWindow.Blit(sfcBackground); sfcGameWindow.Update(); } private void Update(float elapsed) { } void Events_Quit(object sender, QuitEventArgs e) { Events.QuitApplication(); } As you can see, the Draw() method just draws the background image then tells SDL to update the screen. Finally, copy in the PointOverRect method that we used in previous games to detect whether a mouse click is over a rectangle: bool PointOverRect(float x1, float y1, float x2, float y2, int width, int height) { if (x1 >= x2 && x1 <= x2 + width) { if (y1 >= y2 && y1 <= y2 + height) { return true; } } return false; } Phew! That's all the setup code out of the way: if you run the "game" now you'll see it brings up a window that shows the background. Nothing happens - after all that work! As you can imagine, it's fairly common practice to take a snapshot of your project right now, then squirrel that away to use to kickstart any future game projects. The first thing we're going to do is define what we need to store about a firework to make it functional in the game: The combination of the angle and colour information is enough to let us know which graphic should be used to draw the firework. Here's how the Firework class looks in C#: class Firework { public int Angle; public Color Colour; public float X; public float Y; public float XSpeed; public float YSpeed; public bool IsSelected; } As you should recall from project six, using floats for position rather than integers allows us to store the position of things more accurately by making frame-independent movement possible. More on this later! And now comes the first tricky bit: we need to load all the firework graphics into the game. This might sound easy, but it's actually quite hard because we need to make sure it's all carefully organised by direction and type. As far back as project two we looked at the List generic data type, which is an array that holds data of a specific type. That works well for storing data in a given order: you add items to the list, then read them back out simply by referring to their position in the list. In this tutorial I want to introduce you to a new data type called a Dictionary, which lets you add items to an array at a specific location rather than just at the next available slot. What can the location be? Well, just about anything - if you want to store something at location "fish", that's fine. If you want to store something at location 90, that's fine - even if locations 0 through 89 don't exist. Unlike lists, dictionaries store their items in any positions you want - you can even define what kind of data the position is. As with a List, you can store anything you want in a Dictionary, and this is where things can become a bit tricky. You see, we're going to use a Dictionary to store Dictionaries, and each dictionary inside will store SDL Surfaces for the fireworks we'll be drawing to the screen. If that doesn't make sense, it will once you see the code that actually loads the fireworks. First, add this code just before the Run() method: List<Firework> Fireworks = new List<Firework>(); Dictionary<int, Dictionary<Color, Surface>> FireworkTypes = new Dictionary<int, Dictionary<Color, Surface>>(); Technically it's "bad style" to have a dictionary storing other dictionaries, but it's a really quick way of solving our problem! Regardless, it's that last line that's likely to trip you up. Let me show you a simpler example: a dictionary designed to hold people's names and ages: Dictionary<string, int> People = new Dictionary<string, int>(); Like I said earlier, a dictionary store any value at any location. In the People dictionary above, we're telling Mono that the value type we want to store will be an int, and the location - known as the "key" - will be specified as a string. Using that example, we could add items to the dictionary like this: People.Add("Paul Hudson", 29); People.Add("Ildiko Hudson", 30); People.Add("Nick Veitch", 59); We can then read values back out from the dictionary like this: Console.WriteLine("Paul's age is " + People["Paul Hudson"]); There is one proviso, though, which is that you shouldn't add two values to a dictionary using the same key. That is, code like this will cause an error: People.Add("Paul Hudson", 29); People.Add("Ildiko Hudson", 30); People.Add("Nick Veitch", 59); People.Add("Paul Hudson", 17); Let's go back to the dictionary we'll be using in this project: Dictionary<int, Dictionary<Color, Surface>> FireworkTypes = new Dictionary<int, Dictionary<Color, Surface>>(); What that means is that our key will be an integer, and the value is a dictionary with Colours as the keys and SDL Surfaces as the values. If you look in the media directory for this game you'll see there are three types of firework: blue, green and red. And for each colour, we also have the same firework at three different angles. So what we're going to do is add each colour and firework image for a given angle to a dictionary, then add that list to the dictionary at the specified angle. That means we can retrieve the correct picture for a firework by knowing its angle and colour. So, let's start by loading all the firework images that point upwards. Add this to the start of your Run() method: Dictionary<Color, Surface> fwtypes = new Dictionary<Color, Surface>(); fwtypes.Add(Color.Blue, new Surface("media/firework_blue.png")); fwtypes.Add(Color.Green, new Surface("media/firework_green.png")); fwtypes.Add(Color.Red, new Surface("media/firework_red.png")); FireworkTypes.Add(90, fwtypes); First we create a new dictionary with Color for the key and Surface for the value. We then add the three firework images for this angle, each time using the correct colour value for it. Once that's done, we add that dictionary to the FireworkTypes parent dictionary with the key 90 - an angle of 0 is pointing to the left, so 90 is pointing directly upwards. Using this method, if you wanted to read the SDL Surface for the blue firework at 90 degrees, you would use FireworkTypes[90][Color.Blue], which is nice and easy to read. We need to load all the other firework images as well, so add this code beneath the code from above: fwtypes = new Dictionary<Color, Surface>(); fwtypes.Add(Color.Blue, new Surface("media/firework_blue_135.png")); fwtypes.Add(Color.Green, new Surface("media/firework_green_135.png")); fwtypes.Add(Color.Red, new Surface("media/firework_red_135.png")); FireworkTypes.Add(135, fwtypes); fwtypes = new Dictionary<Color, Surface>(); fwtypes.Add(Color.Blue, new Surface("media/firework_blue_45.png")); fwtypes.Add(Color.Green, new Surface("media/firework_green_45.png")); fwtypes.Add(Color.Red, new Surface("media/firework_red_45.png")); FireworkTypes.Add(45, fwtypes); And now for a little bit of SDL magic: if you look at the firework pictures in the media directory, you'll notice they all have a magenta background. We can tell SDL that we want magenta to be drawn as transparent on the screen, which will cause all those magenta parts of the pictures to be invisible. To do that, we just need to loop through every angle and every picture, setting its TransparentColor and Transparent values. Put this just after the previous code: foreach(Dictionary<Color, Surface> direction in FireworkTypes.Values) { foreach (Surface firework in direction.Values) { firework.TransparentColor = Color.Magenta; firework.Transparent = true; } } We've used the foreach loop a lot in previous projects, but this is the first time we've used it to loop over a dictionary, and also the first time we've used it to read a complex data type. So: as you might have guessed, when looping over a dictionary, you can choose to read either the keys (FireworkTypes.Keys) or the values (FireworkTypes.Values). Each firework is available at three angles, but all of them have a magenta background. We can knock that out with SDL. You also need to specify the exact kind of data you're getting back - this is nothing new, as we've been using things like "foreach (string str in somestrings)" for a while, but the difference here is that when you read a generic data type back in, such as a dictionary, you need to tell Mono exactly what kind of dictionary you're working with. Yes, this can be a bit of a pain, but if you remember all the way back to project one you can use the "var" keyword in place of data types, so if you wanted to you could be a bit lazy and write this: foreach(var direction in FireworkTypes.Values) { foreach (Surface firework in direction.Values) { firework.TransparentColor = Color.Magenta; firework.Transparent = true; } } Once we've set the TransparentColor and Transparent properties, those magenta parts will no longer appear - SDL will ensure they are automatically invisible. Now that all the firework images are loaded into RAM, let's make the game launch some fireworks so we can make sure everything is working. This task can be broken down into four small ones that we'll tackle individually: We'll tackle them in order of difficulty, starting with tracking the last time a firework was launched. Add this to the collection of variables in Bang.cs, just above the Run() method: int LastLaunchTime; Then put this line somewhere in the Run() method: LastLaunchTime = Timer.TicksElapsed; That tells starts the game off with a zero counter, meaning that it will wait a few seconds before launching the first firework. Task number one solved! Next, drawing all the fireworks on the screen. As discussed earlier, Firework objects use floats for their X and Y position, whereas SDL needs integers for drawing to the screen. So, inside the Draw() method we need to loop over each firework, force its position floats into integers, then draw them at the screen. I also told you that to pull out the blue firework at 90 degrees we need to use this code: FireworkTypes[90][Color.Blue]. So, to pull out any given surface for a firework, we just need to use its Angle and Colour values. Put this code into your Draw() method, after drawing the background and before updating the game window: foreach (Firework firework in Fireworks) { sfcGameWindow.Blit(FireworkTypes[firework.Angle][firework.Colour], new Point((int)firework.X, (int)firework.Y)); } Next up, we need to move each of the active fireworks. We're also going to take the opportunity here to remove any fireworks that have moved off the screen. If you think back to the definition of the Firework class, you'll remember that each firework has an X and Y position, but also that it has X and Y speeds, so to move a firework across the screen we just need to subtract their X and Y speeds from their X and Y positions. So, put this code into your Update() method: for (int i = Fireworks.Count - 1; i >= 0; --i) { Firework firework = Fireworks[i]; firework.X -= firework.XSpeed; firework.Y -= firework.YSpeed; if (firework.Y < -300) { Fireworks.RemoveAt(i); } } Note that we need to loop backwards over the array because we're potentially removing one or more fireworks and it would cause problems if we remove a firework during a loop - see project four if that all seems a bit hazy. Why do you think I'm using -300 rather than just looking for the height of the firework? The answer is simply a game-play one: if a firework is just off the screen and the player wants to explode it, we need to give them a bit of leeway because it's likely the explosion will just about make it onto the screen. The last task is the most complicated one: we need to launch a new firework every few seconds. In principle, this is as simple as adding code like this to the Update() method: if (LastLaunchTime + 4000 < Timer.TicksElapsed) { LaunchFireworks(); } ...but that's just shifting the work to a new method! So, add that code into Update(), and let's take a look at what a LaunchFireworks() method should do. First, it needs to reset the LastLaunchTime variable to the current time, so that the game waits another four seconds before launching more fireworks. Then it needs to decide randomly whether it should launch fireworks from the left, from the right or from the bottom. And then it needs to create five fireworks, each with a random colour. Each direction (from the left, right, or bottom) has to create five fireworks at different positions on that side of the screen, so there are fifteen ways to create a firework. Hopefully that should be setting off alarm bells in your head: this is something that definitely calls for a method all of its own. Let's start there: let's create a method that launches precisely one firework at a given angle, and at a specific X/Y position: void CreateFirework(int angle, int xpos, int ypos) { Firework firework = new Firework(); firework.X = xpos; firework.Y = ypos; firework.XSpeed = (float)(Math.Cos(angle * Math.PI / 180)) * 5; firework.YSpeed = (float)(Math.Sin(angle * Math.PI / 180)) * 5; switch (Rand.Next(0, 3)) { case 0: firework.Colour = Color.Blue; break; case 1: firework.Colour = Color.Green; break; case 2: firework.Colour = Color.Red; break; } firework.Angle = angle; Fireworks.Add(firework); } There's nothing really new in there - the two long-ish lines that call Math.Cos() and Math.Sin() have both been used and explained in project six, with the exception that I've put "* 5" on the end to make them all move a bit faster! As each firework is created, we add it to the list of active fireworks (the Fireworks list) so that it can be moved and drawn correctly. That CreateFirework() method launches a firework at an angle and position, so all we need to do is write the LaunchFireworks() method that calls CreateFirework() once for each firework it wants to create. Put this somewhere into Bang.cs: void LaunchFireworks() { // reset the timer LastLaunchTime = Timer.TicksElapsed; // pick a random direction for the fireworks switch (Rand.Next(0, 3)) { // fire your fireworks here } } You can have as many firework launch patterns as you want, but for the sake of this simple tutorial we're only going to have three: up, from the left and from the right. Put this code where I've marked "// fire your fireworks here": case 0: // fire five, straight up CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width, GameHeight); CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width - 50, GameHeight); CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width - 100, GameHeight); CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width + 50, GameHeight); CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width + 100, GameHeight); break; case 1: // fire five, from the left to the right CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 300); CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 250); CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 200); CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 150); CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 100); break; case 2: // fire five, from the right to the left CreateFirework(45, GameWidth, GameHeight - 300); CreateFirework(45, GameWidth, GameHeight - 250); CreateFirework(45, GameWidth, GameHeight - 200); CreateFirework(45, GameWidth, GameHeight - 150); CreateFirework(45, GameWidth, GameHeight - 100); break; That's quite a lot of code, but it's really dull stuff - it's just repetitive calls to CreateFirework(), varying the angle, X and Y positions of the parameters. For example, when firing fireworks up from the bottom of the screen, we fire one in the middle, two off to the left and two off to the right. Firing from the left or right, we keep the X position the same, and vary the height. There is one minor point I ought to clear up, and that's the use of Color.Blue for reading the width values. The reason this is used is because we need to offset the fireworks' positions by their width, so they appear off the screen. As all our fireworks are the same size, we can read any colour and it will be correct, so I just used the first one, Color.Blue. With those four tasks complete, the game is starting to come together a little bit: if you build and run the code, you'll see the same background, but now fireworks will fly across the screen. Of course, you can't actually do anything with them yet, but then again we're only half way through this project! Our game now has fireworks that fly up the screen in various directions - it's a bit dull, but at least things are starting to come together now. For the player to be able to score points, we need to let them a) choose which fireworks should be exploded, then b) explode them. Choosing which fireworks should be exploded is pretty straightforward: when the mouse button is clicked, or if the mouse is moved when the mouse button is held down, we need to see whether the cursor is over any fireworks. If it is, then we need to select it. However, here's the catch that makes the game more difficult: the player can select only one colour at a time, which means if they have a green firework selected then click a red firework, we need to deselect the green firework. On the other hand, if they have a green firework selected and then click another green firework, they both become selected. Put this method, CheckSelectFirework() into Bang.cs: void CheckSelectFirework(Point point) { // loop over every active firework foreach (Firework firework in Fireworks) { if (PointOverRect(point.X, point.Y, firework.X, firework.Y, FireworkTypes[firework.Angle][firework.Colour].Width, FireworkTypes[firework.Angle][firework.Colour].Height)) { // a firework was selected! foreach (Firework firework2 in Fireworks) { // now loop over every other firework if (firework2.IsSelected && firework2.Colour != firework.Colour) { // deselect any other fireworks that aren't this colour firework2.IsSelected = false; } } // finally, select the new firework firework.IsSelected = true; } } } That's all very simple in code, but it makes the game a lot harder to play! Now to run that method whenever the mouse is moved with the mouse button down or when the mouse button is pressed, we just need to modify Events_MouseButtonDown() and Events_MouseMotion(). Both of these methods receive the mouse position as part of their parameters, so we just need to pass that onto CheckSelectFirework() so that it can check whether any fireworks were selected. Modify your code to this: void Events_MouseButtonDown(object sender, MouseButtonEventArgs e) { CheckSelectFirework(e.Position); } void Events_MouseMotion(object sender, MouseMotionEventArgs e) { if (Mouse.IsButtonPressed(MouseButton.PrimaryButton)) { CheckSelectFirework(e.Position); } } Finally, we need to make it easier for players to see which fireworks are currently selected, because selecting a different colour will unselect previously selected fireworks! So, go into your Draw() method and modify the Fireworks foreach loop to this: foreach (Firework firework in Fireworks) { if (firework.IsSelected) { short box_left = (short)(firework.X - 3); short box_top = (short)(firework.Y - 3); short box_right = (short)(FireworkTypes[firework.Angle][Color.Blue].Width + firework.X + 3); short box_bottom = (short)(FireworkTypes[firework.Angle][Color.Blue].Height + firework.Y + 3); sfcGameWindow.Draw(new Box(box_left, box_top, box_right, box_bottom), Color.White); } sfcGameWindow.Blit(FireworkTypes[firework.Angle][firework.Colour], new Point((int)firework.X, (int)firework.Y)); } With that in place, when the fireworks are drawn a box is drawn behind them if they are selected. One of the (many!) advantages of SDL is that it makes it very easy to draw shapes on the screen such as boxes, circles and lines. In this code, we define the four corners of the box: it's left and top edges are the firework's X and Y positions, and the right and bottom edges are the position plus the firework width. In both cases I've added or subtracted 3 to make the box a little bigger than the firework to make it look better. If you run the game now, you'll be able to select fireworks by clicking on them. Having the option to hold down the mouse button and just wave the cursor over fireworks to select them makes the game a little easier, so it's good to have both. Clicking on fireworks selects them, showing a box to make it clear they are highlighted. And now it's time for the main event. Or at least it's time to start working towards the main event: we need to make it possible for players to explode the fireworks they've selected. First, add a new variable just under the declaration of "int LastLaunchTime" so that we can track the player's score: int Score; Second, create a new method that will update the window's title with the score whenever we call it: void SetWindowCaption() { Video.WindowCaption = "Bang! Score: " + Score; } At the start of the game, we should update the window title with an empty score, so add this at the start of the Run() method: SetWindowCaption(); And now all we need to do is make pressing the space key detonate any selected fireworks. This takes just under 40 lines of code, so you might be forgiven for thinking this is quite hard to do. But the truth is that it's really easy: Updating the score actually takes up almost half the code for the method, because we want to add more score for exploding multiple fireworks - ie, exploding two fireworks together should give the player more points than exploding both fireworks individually. All this is done inside the Events_KeyboardDown() method, and should only happen when the player presses space. So, modify your method so that it looks like this: void Events_KeyboardDown(object sender, KeyboardEventArgs e) { if (e.Key == Key.Space) { int numexploded = 0; for (int i = Fireworks.Count - 1; i >= 0; --i) { Firework firework = Fireworks[i]; if (firework.IsSelected) { // destroy this firework! Fireworks.RemoveAt(i); ++numexploded; } } // how much score should be added? switch (numexploded) { case 0: // nothing; rubbish! break; case 1: Score += 200; break; case 2: Score += 500; break; case 3: Score += 1500; break; case 4: Score += 2500; break; case 5: Score += 4000; break; } SetWindowCaption(); } } Go ahead and run the game now: fireworks fly up, you can select them with your mouse, then press space to destroy them. Is the game finished? Well, yes. But what it really misses is some sort of interest: making fireworks disappear is very dull, hardly worthy of a game called "Bang!" And, of course, I did promise you a particle system, right? We're going to add colourful explosions to the game to make it look a little nicer. To do that, we need to make a couple of changes to Bang.cs: All those changes add up to just 12 lines of code, at which point we can crack on with making the Explosion class. First, add this line up in the variable declaration section, just beneath the FireworkTypes dictionary: public List<Explosion> Explosions = new List<Explosion>(); Next, we need to call a non-yet-created method, ExplodeFirework(), for each firework that is being detonated because the player pressed space. This needs to pass in the firework that is being destroyed, so add this line into your Events_KeyboardDown() method, just beneath the comment "// destroy this firework!": ExplodeFirework(firework); That ExplodeFirework() method isn't terribly complicated, but it's worth keeping it as a separate method in case we ever need to explode fireworks from anywhere outside the Events_KeyboardDown() method - you might add a smart bomb power up for example. What ExplodeFirework() needs to do is create a new instance of the Explosion class, passing in the position that the explosion should be, how fast the particles should move, how long each particle should live, and also what colour should be used to draw it all. Once that's done, the new explosion should be added to the Explosions list so that we can keep track of it. Turning all that into C#, we get this: void ExplodeFirework(Firework firework) { // find the horizontal centre of the firework float xpos = firework.X + FireworkTypes[firework.Angle][firework.Colour].Width / 2; // create the explosion at that centre in the firework's colour Explosions.Add(new Explosion(xpos, firework.Y, 200, 2000, firework.Colour)); } The final change we need to make in Bang.cs is to draw any active explosions, and remove any ones that are finished. Put this code just before sfcGameWindow.Update() in your Draw() method: for (int i = Explosions.Count - 1; i >= 0; --i) { if (Explosions[i].Particles.Count == 0) { // this explosion is done, remove it Explosions.RemoveAt(i); } else { // it's alive, draw it Explosions[i].Render(sfcGameWindow); } } As with any other loop where we might remove elements, we need to loop through this one backwards. Otherwise all that code is pretty straightforward. What's that you say? Where did Particles and Render() come from? How can we create an Explosion and pass in all those variables without a constructor? Simple: we can't. And that's why we still have one more thing to do: we need to create the Explosion class. This really is the last thing we have to do in this project, but I've saved the best - and hardest - to last. As per usual, before we jump into the code, let's spec out how explosions should work. Each explosion needs: Going a step further, we need to make a class for the particles as well, because each particle needs to store some information: So, the first step to creating our particle system is to create the basic classes for the explosion and its particles - we'll call the classes Explosion and ExplosionDot. Go into the Explosion.cs file and put these two classses in there: public class ExplosionDot { public float X; public float Y; public float XSpeed; public float YSpeed; public int TimeCreated; } public class Explosion { float X; float Y; Random Rand = new Random(); Color Colour; int Speed; int Lifespan; List<ExplosionDot> ExplosionDots = new List<ExplosionDot>(); } The only real code we need inside the Explosion class is a constructor and a Render() method, the first of which is very straightforward: we need to copy all the parameters into the explosion for later reference, then create 100 particles. Here it is: public Explosion(float x, float y, int speed, int lifespan, Color colour) { // copy all the variables into the Explosion object X = x; Y = y; Colour = colour; Speed = speed; Lifespan = lifespan; // now create 100 particles for (int i = 0; i < 100; ++i) { ExplosionDot dot = new ExplosionDot(); dot.X = X; dot.Y = Y; // one of the parameters passed in is the speed to make particles move // we use that as a range from -speed to +speed, making each particle // move at a different speed. dot.XSpeed = Rand.Next(-Speed, Speed); dot.YSpeed = Rand.Next(-Speed, Speed); dot.TimeCreated = Timer.TicksElapsed; Particles.Add(dot); } } And that just leaves the Render() method for explosions. To make the code a bit easier to read in Bang.cs, I've made this Render() method both update the particle positions and draw the particles - it's not ideal because it means we can't draw the explosions without making them move, which rules out things like a pause option, but it's OK for now and something you can tackle for homework. The best way to explain this code is to scatter comments through it, so here goes: public void Render(Surface sfc) { // loop backwards through the list of particles because we might remove some for (int i = Particles.Count - 1; i >= 0; --i) { ExplosionDot dot = Particles[i]; // update the particle's position dot.X += dot.XSpeed / Events.Fps; dot.Y += dot.YSpeed / Events.Fps; // if this dot has outlived its lifespan, remove it if (dot.TimeCreated + Lifespan < Timer.TicksElapsed) { Particles.RemoveAt(i); continue; } // figure out how much time has passed since this particle was created int timepassed = Timer.TicksElapsed - dot.TimeCreated; // ... then use it to calculate the alpha value for this particle int alpha = (int)Math.Round(255 - (255 * ((float)timepassed / Lifespan))); // if the particle is basically invisible, don't draw it if (alpha < 1) continue; // otherwise, create a colour for it based on the original colour // and the alpha for this particle Color thiscol = Color.FromArgb(alpha, Colour.R, Colour.G, Colour.B); // now draw the particle short left = (short)Math.Round(dot.X); short top = (short)Math.Round(dot.Y); short right = (short)(Math.Round(dot.X) + 2); short bottom = (short)(Math.Round(dot.Y) + 2); sfc.Draw(new Box(left, top, right, bottom), thiscol); } } There are two points of interest in that code: the code to create an alpha value for the particle, then the call to Color.FromArgb() to turn that alpha value into a colour that can be drawn. If particles have a lifespan, as they do in this example, then the best way to draw them is usually to fade them out slowly so that they are 100% opaque when created, and 0% opaque when they are about to be destroyed due to old age. In SDL, alpha values are expressed as values between 0 (wholly transparent) and 255 (wholly opaque), so what we need to do is what value a given particle should have by comparing its creation time against the current time and its lifespan. For the purpose of a really simple example, let's say a particle was created at time 0, has a lifespan of 200, and the current time is 100, so the particle is exactly half way through its life. What the code does is to figure out the amount of time that has passed by subtracting the current time from the created time. In our example, the current time is 100, so we subtract from that the created time, which is 0, giving 100 - 100 milliseconds have passed since the particle was created. Next it takes that value and divides it by the lifespan of particles, which is 200, giving 0.5. It then takes that value and multiplies it by 255, giving 127.5. Finally, it subtracts that from 255, giving, again, 127.5. Finally, that number gets rounded to an integer, giving 128, so the final alpha value for this pixel is 128. If you're wondering the we need to subtract the value from 255, it will become clear if you take different input values. For example, if the particle was created at time 0, the current time was 100, but the lifespan of particles was 110, you get this: 100 - 0 = 100; 100 / 110 = 0.909090909; 0.909090909 * 255 = 231.818181818. As you can see, the particle has gotten closer to its end of life, its alpha value has gone up rather than down - it's going from transparent to opaque! We want the exact opposite of that, so we just take the finished value and subtract it from 255 so that over time the particles get more transparent until eventually becoming invisible. Once the alpha value is calculated, we can create a new Mono Color object from it by using the Color.FromArgb() method. This takes colour values in the order Alpha, Red, Green, Blue, so we just specify the new alpha value as the first parameter then use the colour value that was specified when the explosion was created for the other parameters. And with that the game is finished: if you run it now you'll see pretty explosions when fireworks are detonated, and it only looks nicer when you destroy multiple at once! This is the effect our particle system gives us: a hundred or so little particles flying outwards from the centre, fading away as they get older. This has been the biggest project to date, and I think the finished product is something you can be really proud of - it's a fun game, it's easy to play, and you've learned a lot of neat stuff along the way! Up until this project, all the little classes we've defined as part of our projects have basically been glorified data stores - they haven't had any intelligence of their own. But in this project, the Explosion class has two methods of its own: the functionality required to make it work is encapsulated inside the class, so you can copy it around to other projects fairly easily. You've also learned how to use the Dictionary data type for when a simple List isn't enough, how to draw boxes using SDL, and, most impressive of all, how a simple particle system works. And actually it's in particle systems that you can have the most fun extending this project - it's easy to add gravity (just modify each particle's YSpeed value each update) or wind, to make particle systems that keep firing new particles rather than expelling them all at once, and so on. Play around and see what you can achieve! It's been a lot of work, but the payoff is another big batch of learning, plus another finished project. four coding problems; the first three are required, and the last one is for coders who want a bit more of a challenge. If you're having trouble trying to figure out the third problem, you should look at the Math.Sin() and Math.Cos() code for an example of how to do it. Oh!".
http://www.tuxradar.com/content/hudzilla-coding-academy-project-ten
CC-MAIN-2017-04
en
refinedweb
Hi all, I'm debugging a particularly nasty problem in a moderately complex bit of Python, and I really need something that I can use to just print out the state of an object. I also do a lot of Perl, and that's available in Perl with the Data::Dumper module. I know Python has pprint, but that stops when it encounters an object ie. it only dumps sequences and hashes, which is less than what I want. As I like to illustrate my examples, here's a simple class in Python: #!/usr/bin/env python class RequestHeader: def __init__(self, _service_context, _request_id, _response_expected, _object_key, _operation, _requesting_principal): self.service_context = _service_context self.request_id = _request_id self.response_expected = _response_expected self.object_key = _object_key self.operation = _operation self.requesting_principal = _requesting_principal return def get_request_id(self): return self.request_id # ... other methods ... Now, if I attempt to use pprint to "dump" the object to a stream as follows ... #!/usr/bin/env python from requestheader import RequestHeader import pprint rh1 = RequestHeader('some context', 13, 1, 'an object key', 'op1name', 'principal1') rh2 = RequestHeader('some other context', 14, 2, 'different object key', 'opname2', 'principal2') rh_list = [ rh1, rh2 ] pprint.pprint(rh_list) ... I get: [<requestheader.RequestHeader instance at 0x81150d4>, <requestheader.RequestHeader instance at 0x8127b8c>] Now, to show you what I actually want, here it is in Perl. First the class definition: #!/usr/bin/env perl use warnings; use strict; package RequestHeader; use English; # (Perl uses packages for class namespaces) sub new { # Get the constructor arguments my ($class, $service_context, $request_id, $response_expected, $object_key, $operation, $requesting_principal) = @ARG; # We will represent instances of this class with a hash table my $self = { service_context => $service_context, request_id => $request_id, response_expected => $response_expected, object_key => $object_key, operation => $operation, requesting_principal => $requesting_principal }; # Bless the reference to the hash so that method calls know which # class (or package) this object belongs to bless $self, $class; # Return the new object reference return $self; } sub get_request_id { my ($self) = @ARG; return $self->{request_id}; } # ... other methods ... return 1; # Return module load success Now, here's the code to "dump" some of those objects ... #!/usr/bin/env perl use warnings; use strict; use RequestHeader; use Data::Dumper; my $rh1 = RequestHeader->new('some context', 13, 1, 'an object key', 'op1name', 'principal2'); my $rh2 = RequestHeader->new('some other context', 14, 2, 'different object key', 'op2name', 'principal2'); my $rh_list = [ $rh1, $rh2 ]; print Dumper($rh_list); When I run this, I get: $VAR1 = [ bless( { 'requesting_principal' => 'principal2', 'object_key' => 'an object key', 'service_context' => 'some context', 'response_expected' => 1, 'operation' => 'op1name', 'request_id' => 13 }, 'RequestHeader' ), bless( { 'requesting_principal' => 'principal2', 'object_key' => 'different object key', 'service_context' => 'some other context', 'response_expected' => 2, 'operation' => 'op2name', 'request_id' => 14 }, 'RequestHeader' ) ]; ... which is what I need for debugging purposes. Now, I know I can add methods to the RequestHeader class for printing, but I don't want to do that for every single class I want to dump. Now, as I'm constantly harping on, the Python and Perl object models are quite similar, so a generic "object dumper" should be possible. Does one exist? Thanks, Derek.
https://mail.python.org/pipermail/python-list/2003-February/236977.html
CC-MAIN-2017-04
en
refinedweb
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, do you have any idea what could be missing? David Nielsen schrieb: > tor, 08 11 2007 kl. 22:37 +0100, skrev Christoph Höger: >> Hi, >> >> I could not build your 0.16 package. It fails with >> >> ./src/Mono.Addins.Gui/Mono.Addins.Gui/AddinInstallerDialog.cs(40,35): >> error CS0246: The type or namespace name `PackageCollection' could not >> be found. Are you missing a using directive or an assembly reference? >> ./src/Mono.Addins.Gui/Mono.Addins.Gui/AddinManagerDialog.cs(39,30): >> error CS0246: The type or namespace name `SetupService' could not be >> found. Are you missing a using directive or an assembly reference? >> >> although all buildrequirements are fulfilled. > > hrmm that did not used to happen.. I promise once gmcs gets unfucked on > x86_64 I'll get right back to my Fedora duties and I'll have a look at > that. > > Pain makes the world feel real.. bring it! > > - David > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: Using GnuPG with Fedora - iD8DBQFHNKzohMBO4cVSGS8RAhRlAJ9FjIDDtXqyDD3mpfgNAoyVa92MwQCeLrl4 fmVPi76m4rmdNOtGP/vXGRg= =qdMo -----END PGP SIGNATURE-----
https://www.redhat.com/archives/fedora-devel-list/2007-November/msg00630.html
CC-MAIN-2014-15
en
refinedweb
An interface which all classes which wish to visit a Property should implement. More... #include <rtt/base/PropertyIntrospection.hpp> An interface which all classes which wish to visit a Property should implement. When you call PropertyBase::identify( PropertyIntrospection* ), the object will call one of below methods to expose its type to the caller. Definition at line 60 of file PropertyIntrospection introspect(), and RTT::base::PropertyBagVisitor::introspectAndDecompose(). Referenced by introspect_T().
http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/classRTT_1_1base_1_1PropertyIntrospection.html
CC-MAIN-2014-15
en
refinedweb
Groovy supports regular expressions natively using the ~"..." expression. Plus Groovy supports the =~ (create Matcher) and ==~ (matches regex) operators. e.g.Error rendering macro 'code': Invalid value specified for parameter 'lang' import java.util.regex.Matcher import java.util.regex.Pattern assert "cheesecheese" =\~ "cheese" // lets create a regex Pattern pattern = \~" // group demo matcher = "[abc]" =\~ "\\[(.*)\\]" matcher.matches(); // must be invoked assert matcher.group(1) == "abc" // is one, not zero would be nice to supply other Perl amenities, such as !~ and in-place edits of string variables. This is a job for someone familiar with the range of use cases, and their expressions in terms of Java Matchers. – John Rose.)
http://docs.codehaus.org/pages/viewpage.action?pageId=22171
CC-MAIN-2014-15
en
refinedweb
Stream.Read Method Assembly: mscorlib (in mscorlib.dll) Parameters - buffer An array of bytes. When this method returns, the buffer contains the specified byte array with the values between offset and (offset + count - 1) replaced by the bytes read from the current source. - offset The zero-based byte offset in buffer at which to begin storing the data read from the current stream. - count The maximum number of bytes to be read from the current stream. Return ValueThe total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.. The following example shows how to use Read to read a block of data. using System; using System.IO; public class Block { public static void Main() { Stream s = new MemoryStream(); for (int i=0; i<100; i++) s.WriteByte((byte)i); s.Position = 0; // Now read s into a byte buffer. byte[] bytes = new byte[s.Length]; int numBytesToRead = (int) s.Length; int numBytesRead = 0; while (numBytesToRead > 0) { // Read may return anything from 0 to numBytesToRead. int n = s.Read(bytes, numBytesRead, numBytesToRead); // The end of the file is reached. if (n==0) break; numBytesRead += n; numBytesToRead -= n; } s.Close(); // numBytesToRead should be 0 now, and numBytesRead should // equal 100. Console.WriteLine("number of bytes read: "+numBytes.
http://msdn.microsoft.com/en-us/library/29tb55d8(v=vs.85)
CC-MAIN-2014-15
en
refinedweb
Hello, After installing VS2010 RTM, I get the following error after selecting "Use Intel C++" on any C++ project and trying to compile it: C:\\Program Files (x86)\\MSBuild\\Microsoft.Cpp\\v4.0\\Platforms\\Win32\\PlatformToolsets\\Intel Parallel Composer 2011\\Microsoft.Cpp.Win32.Intel Parallel Composer 2011.targets(37,5): error MSB4062: The "ICMessage" task could not be loaded from the assembly Intel.Build.ICLTasks.ICMsgTask, Version=12.0.0.0, Culture=neutral, PublicKeyToken=3c0c138f5bbab72f. Could not load file or assembly 'Intel.Build.ICLTasks.ICMsgTask, Version=12.0.0.0, Culture=neutral, PublicKeyToken=3c0c138f5bbab72f' or one of its dependencies. The system cannot find the file specified. Confirm that the declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. Any clues about what could be causing the problem? Thank you, Ricardo.
https://software.intel.com/pt-br/forums/topic/265804
CC-MAIN-2014-15
en
refinedweb
iGraphics3D Struct ReferenceThis is the standard 3D graphics interface. More... #include <ivideo/graph3d.h> Detailed DescriptionThis 701 of file graph3d.h. Member Function Documentation Activate all given buffers. Activate the buffers in the default buffer holder. Start a new frame (see CSDRAW_XXX bit flags).)!. Enables offsetting of Z values.. If this is 0 then the target will be the main screen. Otherwise it is a texture. After calling g3d->FinishDraw() the target will automatically be reset to 0 (main screen). Note that on some implementions rendering on a texture will overwrite the screen. So you should only do this BEFORE you start rendering your frame. - Parameters: - Controls shadow drawing. Activate or deactivate all given textures depending on the value of 'textures' for that unit (i.e. deactivate if 0).. The documentation for this struct was generated from the following file: Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/structiGraphics3D.html
CC-MAIN-2014-15
en
refinedweb
1. dialog application, which launches the property page dialog. Below is the screen shot of hosting dialog: The below screen shot is the property page: Note the sample has two pages and this will sufficient for the reader to add more on their property page dialog. When you click Settings… button in the main dialog, the property page dialog will be opened. Once you change any one of the default value from the displayed dialog, the apply button will be enabled. Clicking the apply button will make you change permanent not considering whether you cancel the dialog or click ok. You can also save the changes by clicking the OK button also. Then what is the use of apply button? In real world if you want to show the changes visually, the button is very useful and the user of the application will look at the visual changes and tune their settings further. Let us go ahead and start creating the sample. 3. How do we Create Property Page Dialog? The below skeleton diagram explains how do we create the property page dialog. First, we should create property pages. Then these property pages are attached to the property sheet, which provides the buttons required for property page dialog. OK and Cancel buttons are common for a dialog, and the Apply button is specially provided for property page dialogs. Creating the property pages is almost equal to creating the dialog boxes. In the resource, you can ask for property page and you will get a borderless dialog. In this dialog, you should drop the controls that you want for your property page. In the above skeleton picture, first we will create property page1 and page2. Then the required controls are dropped into page1 and pagw2. Finally, through the source code we will add these pages to the property sheet created at runtime. 4. Create Property Pages How do you create a dialog? Property page also created similar to that. The below given video shows creating first page of the property dialog. Steps 1) From the Resource file add the Property Page 2) Then provide a meaningful ID Name for it 3) Open the Property page visual studio editor 4) From the Toolbox 3 radio buttons are added to it. So that’s all we do for creating the pages of the property sheet that create a page template, drop the controls on it. Repeat the same process for all the pages. Once the pages are ready you should create associated class for it. The video provided below shows how do we create a class for the Property page added in the previous video: Steps 1) The Property page template is opened in visual studio 2) Add class menu option is invoked from the context menu of the Property page template (By Right click) 3) In the class dialog a class name is chosen, and base class is set to CPropertyPage 4) Created class is shown in the class view The Second page of sample is created property page 1 way as shown in video1 and video2. Now we have page1 and pag2 for the property dialog is ready. The design of second property page is shown below: 5. Add Control Variables Now the Color and Font property page templates are ready. Now we will associate a variable to the controls in these property page templates. First a variable is associated to the radio buttons. For all the three radio buttons, only one variable is associated and we treat these radio buttons as single group. First we should make sure that the tab order (Format->tab Order or Ctrl+d when the dialog is opened in the editor) for all the radio buttons goes consecutively. Then for the first radio button in the tab order, set the group property to true. The below specified video shows adding a control variable for the Radio buttons: Steps 1) From the resource view, Property page for the font is opened 2) Make sure Group property is set to true. If not set it to true 3) Add variable dialog is opened for first radio button 4) Variable category is changed from control to variable 5) A variable of type BOOL is added (Later we will change this as int through the code) Likewise we add three more value type variables for each text box control in the property page two. The below screen shot shows an int value variable m_edit_val_Red added for the first edit box. For blue and green also variables are added as shown in the below screen shot. 6. OnApply Message Map To follow the code explanation with me, search for the comment //Sample in the solution and in the search result follow the Order 01,02,03 etc ON_MESSAGE_VOID is a nice handler for dealing with custom messages that does not require passing any arguments. In out sample we are going to use this handler for dealing with WM_APPLY user define message. Below is the code change required for the dialog-based project. 1) First a required header is included in the dialog class header file //Sample 01: Include the header required for OnMessageVoid #include <afxpriv.h> 2) In the same header file declaration for the void message handler is given. //Sample 02: Declare the Message Handler function afx_msg void OnApply(); 3) Next in the CPP file, ON_MESSAGE_VOID Macro is added in between Begin Message Map and End Message Map. The OnApply is not yet defined, so you will get a compiler error when you compile the program at present. To avoid this provide a dummy implementation for OnApply like void CPropPageSampleDlg::OnApply() {} //Sample 03: Provide Message map entry for the Apply button click ON_MESSAGE_VOID(WM_APPLY, OnApply) 4) WM_APPLY is not yet defined. So declare that user defined massage in the stdAfx.h. WM_USER macro is useful to define a user defines message in a safe way. That is the WM_APPLY does not clash with any existing user defined message as we use it safely like WM_USER+1 //Sample 04: Define the user defined message #define WM_APPLY WM_USER + 1 7. Change Radio Button Variable In video 3, we added a Boolean type variable for the radio buttons group. It will be very useful if we change this variable type from BOOL to integer type. When user makes a radio button selection, the data exchange mechanism will automatically set the variable to denote the currently selected radio button. You will get more clarity when we write the code for radio check state later. For now we will just change Boolean variable type to integer. 1) In the PropPageFont.h file, the variable type is changed from Boolean to Integer //Sample 05: Change the variable type to Int int m_ctrl_val_radio_font; 2) Next in the constructor of the CPropPageFont, variable is initialized to –1. This value denotes that none of the radio button is initilially the class CPropPageSampleDlg is created by the application wizard. Moreover, we are going to launch the Property page dialog from this dialog as a child dialog. The CPropPageSampleDlg will take the settings from the property page and caches it. When the property page is opened for next time, the settings cached by this parent dialog are supplied back to the property pages. 1) First the variables required to cache settings are declared in the class declaration, which is in the header file //Sample 07: Add Member variables to keep track of settings private: int m_selected_font; int m_blue_val; int m_red_val; int m_green_val; 2) Next in the OnInitDialog, these variables are initialized based on what the property page should show on very first display. //Sample 08: Initialize the member variables m_selected_font = -1; m_red_val = 0; m_green_val = 0; m_blue_val = 0; 9. Create Property Dialog and Display it From the dialog class the property page dialog is created and displayed as modal dialog. Once this property page dialog is closed by the user, the settings set by him/her is read back and cached inside the parent dialog. 1) In the button click handler, first we create a property sheet); 2) Next we create the property pages in the heap. First we declare the variables in the header file of the dialog class, then we declare the required variables in the class with private scope //Sample 9.2: Include Property pages #include "PropPageFont.h" #include "PropPageColor.h" //Sample 07: Add Member variables to keep track of settings private: int m_selected_font; int m_blue_val; int m_red_val; int m_green_val; CPropPageFont* m_page1_font; CPropPageColor* m_page2_color; 3) In the implementation file (Look at step 1), after creating the property sheet with title settings, we create both the property pages (i.e.) Font and Color pages. //Sample 9.3: Create Property Pages m_page1_font = new CPropPageFont(); m_page2_color = new CPropPageColor(); 4);); 6) When the property dialog is closed, we check the return value and cache (Copy) the settings provided in the pages to the calling dialog’s member variables. These variables are used to initialize the property page dialog when it is opened for next time. Note that during the button click, we create the pages on heap, copy the dialog members to the pages, add the pages to sheet and display it as modal dialog and when it closed before deleting the pages from heap we copy the settings into the local members. / enables when the UI elements in the pages are changed. Say for example typing the new red value in the text box will enable the apply button. Once you click the apply button, the changes are informed to the parent. In our case we send the data entered or changed by the user so for, to the parent dialog that launched this property page. In real world, the apply button will immediately apply the settings to the application. So before clicking OK, user can observe the effect of the changed settings just by clicking the apply button. So now,. The below video shows providing the handler for the Radio button click: Steps 1) FONT property page is opened 2) First Radio button in the group is clicked 3) In the properties pane, navigation moved to control events 4) BN_CLICKED event is double clicked (You will enter the code editor) 5) The process is repeated for other two radio buttons. The same way the EN_CHANGED event for all the three text boxes is provided. The screen shot below shows the request for the event handler for the control event EN_CHANGED:: //Sample 11: we will implement that now. The property page will send the notification to this dialog when the user clicks the apply button of the property page. Have a look at the implementation below: //Sample 12: a new instances of property pages are created when we display it. Now refer the code at section 9.4, you will get an idea of how the data flow of the settings will happen. 1) When the Parent about to display the property page it copies the cached data to the property pages 2) When user clicks the OK button, this OnApply is called. Refer section 9.6 3) When user clicks the Apply button, WM_APPLY user message is sent to the CPropPageSampleDlg The below code will send the WM_APPLY message to the parent dialog: BOOL CPropPageFont::OnApply() { //Sample 13: Set the Modified flag to false, and send message to dialog class user clicks the apply button. As we are just going to send the message to the parent dialog of the property page when Apply button is clicked by the user, providing the overridden version of function in either Font or Color page is sufficient. The below video shows adding the OnApply override: Steps 1) Property page for CPropPageFont is opened 2) In the Property Page Overrides toolbar icon is selected 3) Then, OnApply Override is added to the source code. The above video shows the sample in Action.
http://cppandmfc.blogspot.com/2012/08/mfc-creating-and-using-property-page.html
CC-MAIN-2014-15
en
refinedweb
The Function class is an abstract base class that, given a set of coordinates, will compute a value and possibly a gradient and hessian at that point. More... #include <math/optimize/function.h> The Function class is an abstract base class that, given a set of coordinates, will compute a value and possibly a gradient and hessian at that point. The keyval constructor reads the following keywords: matrixkit Gives a SCMatrixKit object. If it is not specified, a default SCMatrixKit is selected. value_accuracy Sets the accuracy to which values are computed. The default is the machine accuracy. gradient_accuracy Sets the accuracy to which gradients are computed. The default is the machine accuracy. hessian_accuracy Sets the accuracy to which hessians are computed. The default is the machine accuracy. throw_if_tolerance_exceeded If this is true, then an exception will be thrown if a result cannot be computed to the desired accuracy. The default is true. in sc::MolecularEnergy. Change the coordinate system and apply the given transform to intermediates matrices and vectors. Reimplemented in sc::UnionShape, sc::Uncapped5SphereExclusionShape, sc::MolecularEnergy, sc::ReentrantUncappedTorusHoleShape, sc::NonreentrantUncappedTorusHoleShape, sc::UncappedTorusHoleShape, and sc::SphereShape. Reimplemented in sc::MolecularEnergy. Return the SCMatrixKit used to construct vectors and matrices.::MolecularEnergy,. Set the SCMatrixKit that should be used to construct the requisite vectors and matrices. Reimplemented in sc::LMP2, sc::PT2R12, sc::MBPT2, sc::SumMolecularEnergy, sc::HCoreWfn, sc::MBPT2_R12, sc::CCR12, sc::HSOSKS, sc::SpinOrbitalPT2R12, sc::UKS, sc::MolecularEnergyCCA, sc::CLKS, sc::TaylorMolecularEnergy, sc::ExtendedHuckelWfn, sc::ExternPT2R12, sc::Shape, sc::UHF, sc::CLHF, sc::HSOSHF, sc::OSSHF, and sc::TCHF.
http://www.mpqc.org/mpqc-snapshot-html/classsc_1_1Function.html
CC-MAIN-2014-15
en
refinedweb
On Tue, 2004-09-28 at 13:38, Mike Waychison wrote:> -----BEGIN PGP SIGNED MESSAGE-----> Hash: SHA1> > John McCutchan wrote:> |> | --Why Not dnotify and Why inotify (By Robert Love)--> |> > | * inotify has an event that says "the filesystem that the item you were> | watching is on was unmounted" (this is particularly cool).> > | +++ linux/fs/super.c 2004-09-18 02:24:33.000000000 -0400> | @@ -36,6 +36,7 @@> | #include <linux/writeback.h> /* for the emergency remount stuff */> | #include <linux/idr.h>> | #include <asm/uaccess.h>> | +#include <linux/inotify.h>> |> |> | void get_filesystem(struct file_system_type *fs);> | @@ -204,6 +205,7 @@> |> | if (root) {> | sb->s_root = NULL;> | + inotify_super_block_umount (sb);> | shrink_dcache_parent(root);> | shrink_dcache_anon(&sb->s_anon);> | dput(root);> > This doesn't seem right. generic_shutdown_super is only called when the> last instance of a super is released. If a system were to have a> filesystem mounted in two locations (for instance, by creating a new> namespace), then the umount and ignore would not get propagated when one> is unmounted.> > How about an approach that somehow referenced vfsmounts (without having> a reference count proper)? That way you could queue messages in> umount_tree and do_umount..I was not aware of this subtlety. You are right, we should make sureevents are sent for every unmount, not just the last.John-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2004/9/28/154
CC-MAIN-2014-15
en
refinedweb
Problem with proxy settings in Internet Explorer Hello, I use HttpClient 3.1 on Windows XP + Java 1.6_10. I have following problem I can't figure out: Connections are always created with Internet Explorer browser proxy settings, even if I don't set up proxy with HttpClient. I tried to disable using browser settings by these switches: <br /> System.getProperties().put("proxySet", "false");<br /> System.getProperties().put("proxyHost", "");<br /> System.getProperties().put("http.proxySet", "false");<br /> System.getProperties().put("http.proxyHost", "");<br /> System.getProperties().put("http.proxyPort", "");<br /> System.getProperties().put("socksProxyHost", "");<br /> + their remove() variants<br /> but it doesn't help. They are simply ignored. The problem is that connection created with browser settings does not work at all (connection timeout - although it works fine in the browser). I don't see proxyHost property in my System properties either (although the system proxy is used). It completely looks like JDK bug to me. When I disable proxy settings in IE while my app is running, I can connect succesfully => it's loaded from registry on the fly. Does anyone know how to get rid off browser settings in the code? Thanks for your advices -Vity Yeah, that's it. I explored code of the ProxySelector (a class I've never heard...), it pointed to [code]sun.net.spi.DefaultProxySelector[/code], where I found out this property: [code]System.setProperty("java.net.useSystemProxies", "false");[/code] which solves my problem. Thank you Solarix Hi, Try this: import java.net.ProxySelector; ProxySelector.setDefault(null);
https://www.java.net/node/686071
CC-MAIN-2014-15
en
refinedweb
In this tutorial you create an ASP.NET web page that binds LINQ queries to entities using the Entity Framework mapping. If you have not already done so, install the World example database prior to attempting this tutorial. See the tutorial Section 5 File, New, Web Site.... From the Visual Studio installed templates select ASP.NET Web Site. Click OK. 5 method.(); } } ... Note that code. Copyright © 2004, 2014, Oracle and/or its affiliates. All rights reserved. Legal Notices Here's an example of DataBinding to a WinForms DataDridView. I spent about a day getting this working... too long for something so simple, IMHO... so I thought I'd share my learnings. <code lang="C#" target="NET3.5"> using System; using System.Data; using System.Windows.Forms; using MySql.Data.MySqlClient; using System.Data.Odbc; using System.Globalization; using System.Data.Common; namespace MySqlConnection { public partial class Form1 : Form { // I'm using MySql 5.0 and Visual C# Express on Windows XP SP3, but this // should work (more or less the same) for Visual Studio 2005 onwards, // and for any later versions of MySql and/or MySql-Connector // // 1. Download and install BOTH MySql Connectors from: // ODBC: dev.mysql.com/downloads/connector/odbc/ // NET: dev.mysql.com/downloads/connector/net/ // // 2. Create a ODBC Connection Definition // Control Panel -> Admin Tools -> Data Sources // !!! You need to define a SYSTEM (not a user) DSN !!! // ODBC dev.mysql.com/doc/refman/5.0/en/connector-odbc-configuration-dsn-windows.html#connector-odbc-configuration-dsn-windows-5-1 // // 3. Create a new project and add References // 1) Create a new Windows-Forms Application. // 2) Add a reference to (presuming default install dir): // C:\Program Files\MySQL\MySQL Connector Net 6.4.4\Assemblies\v2.0\MySql.Data.dll // (Connector/ODBC doesn't require a reference, apparently) // 3) Add a DataGridView to Form1, leaving it with the default name: dataGridView1. // 4) Copy-paste this code over the default Form1.cs // // References. The Connection String // (ODBC) dev.mysql.com/doc/refman/5.0/en/connector-odbc-configuration-connection-parameters.html // (Net) dev.mysql.com/doc/refman/5.0/en/connector-net-connection-options.html // Examples: private static readonly CultureInfo MY_CULTURE = CultureInfo.InvariantCulture; public Form1() { InitializeComponent(); } private enum Connector { NET, ODBC }; private void Form2_Load(object sender, EventArgs e) { try { var sql = "SELECT * FROM customer"; // Connector/NET or Connector/ODBC? DbDataAdapter dataAdapter = false ? QueryViaConnectorNet(sql) : QueryViaConnectorOdbc(sql); PopulateTheDataGridView(dataAdapter); } catch ( Exception ex ) { ErrMsgBox(this, ex); this.Close(); // no point continuing!!! } } private void PopulateTheDataGridView(DbDataAdapter dataAdapter) { var dataTable = new DataTable() { Locale = MY_CULTURE }; dataAdapter.Fill(dataTable); dataGridView1.DataSource = dataTable; } private DbDataAdapter QueryViaConnectorNet(string sql) { this.Text = "Data Binding to MySql 5.0 via Connector/NET"; var connectionString = "SERVER=localhost;" + "DATABASE=test;" + "USER=root;" return new MySqlDataAdapter(sql, connectionString); } private DbDataAdapter QueryViaConnectorOdbc(string sql) { this.Text = "Data Binding to MySql 5.0 via Connector/ODBC 5.1"; var connectionString = "DRIVER={MySQL ODBC 5.1 Driver};" + "DSN=MySql_Test;" // System DSN (Data Source Name) + "SERVER=localhost;" + "DATABASE=test;" + "USER=root;" + "OPTION=3"; return new OdbcDataAdapter(sql, connectionString); } #region From public static class MsgBox public static String APP_NAME = System.Reflection.Assembly.GetExecutingAssembly().GetName().Name; private static void ErrMsgBox(IWin32Window owner, Exception ex) { ErrMsgBox(owner, ex.GetType().Name + ": " + ex.Message); } public static void ErrMsgBox(IWin32Window owner, String msg) { Clipboard.SetText(msg); // just handy!!! MessageBox.Show( owner , msg , APP_NAME + " Error" , MessageBoxButtons.OK , MessageBoxIcon.Exclamation ); } #endregion } } </code>
http://dev.mysql.com/doc/connector-net/en/connector-net-tutorials-entity-framework-databinding-linq-entities.html
CC-MAIN-2014-15
en
refinedweb
How to: Define a Custom Modeling Toolbox Item To make it easy to create an element or group of elements according to a pattern that you often use, you can add new tools to the toolbox of modeling diagrams in Visual Studio Ultimate. You can distribute these toolbox items to other Visual Studio Ultimate users. A custom tool creates one or more new elements in a diagram. You cannot create custom connection tools. For example, you could make a custom tool to create elements such as these: A package linked to the .NET profile, and a class with the .NET stereotype. A pair of classes linked by an association to represent the Observer pattern. You can use this method to create element tools. That is, you can create tools that you drag from the toolbox onto a diagram. You cannot create connector tools. To define a custom modeling tool Create a UML diagram that contains an element or group of elements. These elements can have relationships between them, and can have subsidiary elements such as ports, attributes, operations or pins. Save the diagram using the name that you want to give the new tool. On the File menu, use Save…As. Using Windows Explorer, copy the two diagram files to the following folder or any subfolder: YourDocuments \Visual Studio 2012\Architecture Tools\Custom Toolbox Items Create this folder if it does not already exist. You might have to create both Architecture Tools and Custom Toolbox Items. Copy both diagram files, one with a name that ends "…diagram" and the other with a name that ends "…diagram.layout" You can make as many custom tools as you like. Use one diagram for each tool. (Optional) Create a .tbxinfo file as described in How to Define the Properties of Custom Tools, and add it to the same directory. This allows you to define a toolbox icon, tooltip, and so on. A single .tbxinfo file can be used to define several tools. It can refer to diagram files that are in subfolders. Restart Visual Studio. The additional tool will appear in the toolbox for the appropriate type of diagram. A custom tool will replicate most of the features of the source diagram: Names. When an item is created from the toolbox, a number is added to the end of the name if necessary to avoid duplicate names in the same namespace. Colors, sizes and shapes Stereotypes and package profiles Property values such as Is Abstract Linked work items Multiplicities and other properties of relationships The relative positions of shapes. The following features will not be preserved in a custom tool: Simple shapes. These are shapes that are not related to model elements, that you can draw on some kinds of diagrams. Connector routing. If you route connectors manually, the routing will not be preserved when your tool is used. The positions of some nested shapes, such as Ports, are not preserved relative to their owners. A toolbox information (.tbxinfo) file allows you to specify a toolbox name, icon, tooltip, tab, and help keyword for one or more custom tools. Give it any name, such as MyTools.tbxinfo. The general form of the file is as follows: <?xml version="1.0" encoding="utf-8" ?> <customToolboxItems xmlns=""> <customToolboxItem fileName="MyObserverTool.classdiagram"> <displayName> <value>Observer Pattern</value> </displayName> <tabName> <value>UML Class Diagram</value> </tabName>  <f1Keyword> <value>ObserverPatternHelp</value> </f1Keyword> <tooltip> <value>Create a pair of classes</value> </tooltip> </customToolboxItem> </customToolboxItems> The value of each item can be either: As shown in the example, <bmp fileName="…"/> for the toolbox icon and <value>string</value> for the other items. - or - <resource fileName="Resources.dll" baseName="Observer.resources" id="Observer.tabname" /> In this case, you supply a compiled assembly in which the string values have been compiled as resources. Add a <customToolboxItem> node for each toolbox item you want to define. The nodes in the .tbxinfo file are as follows. There is a default value for each node. You can edit the bitmap file in Visual Studio, and set its height and width to 16 in the Properties window. You can distribute toolbox items to other Visual Studio users by packaging them into a Visual Studio Extension (VSIX). You can package commands, profiles, and other extensions into the same VSIX file. For more information, see Deploying Visual Studio Extensions. The usual way to build a Visual Studio extension is to use the VSIX project template. To do this, you must have installed Visual Studio SDK. To add a Toolbox Item to a Visual Studio extension Create and test one or more custom tools. Create a .tbxinfo file that references the tools. Open an existing Visual Studio extension project. - or - Define a new Visual Studio extension project. On the File menu, choose New, Project. In the New Project dialog box, under Installed Templates, choose Visual C#, Extensibility, VSIX project. Add your toolbox definitions to the project. Include the .tbxinfo file, the diagram files, bitmap files, and any resource files, and make sure that they are included in the VSIX. In Solution Explorer, on the shortcut menu of the VSIX project, choose Add, Existing Item. In the dialog box, set Objects of Type: All Files. Locate the files, select them all, and then choose Add. Set the following properties of all the files that you have just added. You can set their properties at the same time by selecting them all in Solution Explorer. Be careful not to change the properties of the other files in the project. Copy to Output Directory = Copy Always Build Action = Content Include in VSIX = true Open source.extension.vsixmanifest. It opens in the extension manifest editor. Under Metadata, add a description for the custom tools. Under Assets, choose New and then set the fields in the dialog as follows: Type = Custom Extension Type Type = Microsoft.VisualStudio.ArchitectureTools.CustomToolboxItems Source = File on filesystem. Path = your .tbxinfo file, for example MyTools.tbxinfo Build the project. To verify that the extension works, press F5. The experimental instance of Visual Studio starts. In the experimental instance, create or open a UML diagram of the relevant type. Verify that your new tool appears in the toolbox and that it creates elements correctly. To obtain a VSIX file for deployment: In Windows Explorer, open the folder .\bin\Debug or .\bin\Release to find the .vsix file. This is a Visual Studio Extension file. It can be installed on your computer, and also sent to other Visual Studio users. To install custom tools from a Visual Studio Extension Open the .vsix file in Windows Explorer or in Visual Studio. Choose Install in the dialog box that appears. To uninstall or temporarily disable the extension, open Extension Manager from the Tools menu. You can make an extension that, when it is installed on another computer, will display tool names and tooltips in the language of the target computer. To provide versions of the tool in more than one language Create a Visual Studio Extension project that contains one or more custom tools. In the .tbxinfo file, use the resource file method to define the tool's displayName, toolbox tabName, and the tooltip. Create a resource file in which these strings are defined, compile it into an assembly, and refer to it from the tbxinfo file. Create additional assemblies that contain resource files with strings in other languages. Place each additional assembly in a folder whose name is the culture code for the language. For example, place a French version of the assembly inside a folder that is named fr. You should use a neutral culture code, typically two letters, not a specific culture such as fr-CA. For more information about culture codes, see CultureInfo.GetCultures method, which provides a complete list of culture codes. Build the Visual Studio Extension, and distribute it. When the extension is installed on another computer, the version of the resource file for the user's local culture will be automatically loaded. If you have not provided a version for the user's culture, the default resources will be used. You cannot use this method to install different versions of the prototype diagram. The names of elements and connectors will be the same in every installation. Ordinarily, in Visual Studio, you can personalize the toolbox by renaming tools, moving them to different toolbox tabs, and deleting them. But these changes do not persist for custom modeling tools created with the procedures that are described in this topic. When you restart Visual Studio, custom tools will reappear with their defined names and toolbox locations. Furthermore, your custom tools will disappear if you perform the Reset Toolbox command. However, they will reappear when you restart Visual Studio.
http://msdn.microsoft.com/en-us/library/vstudio/ee292090.aspx
CC-MAIN-2014-15
en
refinedweb
Using Namespaces XML documents support namespaces. For example, if you add the following namespace to the XML file in Listing One (right after the <xml> tag -- both are shown), you need to include the namespace in both your LINQ-to-XML and your XPath queries. <?xml version="1.0" encoding="utf-8"?> <jack:Blackjack xmlns: The XML in Listing Two shows the proper placement of the namespace jack added to the XML from Listing One. The code in Listing Three incorporates the namespace in the LINQ-to-XML to obtain the net amount won (or lost) from the XML file in Listing One. The second half of the listing uses the XPathSelectElement method and an XPath query to obtain the same value. <?xml version="1.0" encoding="utf-8"?> <jack:Blackjack xmlns: <jack:Player <jack:Statistics> <jack:AverageAmountLost>-28.125</jack:AverageAmountLost> <jack:AverageAmountWon>30.681818181818183</jack:AverageAmountWon> <jack:Blackjacks>1</jack:Blackjacks> <jack:Losses>8</jack:Losses>44 <jack:NetAverageWinLoss>5.9210526315789478</jack:NetAverageWinLoss> <jack:NetWinLoss>112.5</jack:NetWinLoss> <jack:PercentageOfBlackJacks>0.041666666666666664</jack: PercentageOfBlackJacks> <jack:PercentageOfLosses>33.333333333333329</jack:PercentageOfLosses> <jack:PercentageOfPushes>16.666666666666664</jack:PercentageOfPushes> <jack:PercentageOfWins>45.833333333333329</jack:PercentageOfWins> <jack:Pushes>4</jack:Pushes> <jack:Surrenders>1</jack:Surrenders> <jack:TotalAmountLost>-225</jack:TotalAmountLost> <jack:TotalAmountWon>337.5</jack:TotalAmountWon> <jack:Wins>11</jack:Wins> </jack:Statistics> </jack:Player> </jack:Blackjack> using System.Xml; using System.Xml.Linq; using System.Xml.XPath; private static void UseNamespace() { const string filename = "..\\..\\CurrentStatsWithNamespace.xml"; XDocument doc = XDocument.Load(filename); XNamespace jack = ""; XElement winLoss1 = doc.Element(jack + "Blackjack") .Element(jack + "Player").Element ( jack + "Statistics").Element(jack + "NetWinLoss"); Console.WriteLine(winLoss1); Console.ReadLine(); XmlReader reader = XmlReader.Create(filename); XElement root = XElement.Load(reader); XmlNameTable table = reader.NameTable; XmlNamespaceManager manager = new XmlNamespaceManager(table); manager.AddNamespace("jack", ""); XElement winLoss2 = doc.XPathSelectElement( "./jack:Blackjack/jack:Player/jack:Statistics/jack:NetWinLoss", manager); Console.WriteLine(winLoss2); Console.ReadLine(); } In the example, an XmlReader was created from the XML file. The root XElement was obtained from the reader, followed by the Nametable. The Nametable is an instance of the System.Xml.Nametable class, and it contains the atomized names of the elements and attributes of the XML document. If a name appears multiple times in an XML document, it is stored only once in a Nametable, as a Common Language Runtime (CLR) object. Such storage permits object comparisons on these elements and attributes rather than a much more expensive string comparison. (This is managed for you.) Next, the table is used to create an XmlNamespaceManager and the desired XML namespace string is added to the manager. Finally, the XmlNamespaceManager is passed as an argument to the XPathSelectElement method. The XPath query is "./jack:Blackjack/jack:Player/jack:Statistics/jack:NetWinLoss". The subpath "jack:" demonstrates how to incorporate the namespace in the XPath query. Our examples use the XPath support provided by LINQ-to-XML in the System.Xml.Linq namespace. XPath support is provided in System.Xml.XPath too, and you would use different classes and behaviors if you were to use that approach. As an exercise, if you are interested, you can experiment by implementing the equivalent behaviors using the capabilities of the XPath namespace.
http://www.drdobbs.com/windows/comparing-linq-to-xml-with-xpath/209904294?pgno=2
CC-MAIN-2014-15
en
refinedweb
Provided by: mono-xsp4_4.2-2.1_all NAME XSP - Mono ASP.NET Web Server (xsp4 and xsp42) SYNOPSIS xsp4 [options] or mod-mono-server [options] or fastcgi-mono-server [options] DESCRIPTION XSP, mod-mono-server and fastcgi-mono-server are hosts for ASP.NET-based applications. If run as `xsp4',. OPTIONS -. --no-hidden.. --help Shows the list of options and exits. --verbose Prints extra messages. Useful for debugging. --pidfile FILE Writes the xsp4 PID to the specified file. MONO RUNTIME OPTIONS The format of the .webapp files used for --appconfigfile and --appconfigdir is: . AUTHORS The Mono XSP server was written by Gonzalo Paniagua Javier (gonzalo@ximian.com). Fastcgi- mono-server was written by Brian Nickel <>. ENVIRONMENT VARIABLES MONO_ASPNET_NODELETE If set to any value, temporary source files generated by ASP.NET support classes will not be removed. They will be kept in the user's temporary directory. FILES Web.config, web.config ASP.NET applications are configured through these files, the configuration is done on a per-directory basis. For more information on this subject see the page. SEE ALSO mono(1),dbsessmgr(1),asp-state(1),mod_mono(8),makecert(1) For more information on creating certificates, see: System.Web, System.Web.Hosting namespaces. is Microsoft's official site for ASP.NET MORE INFORMATION The Mono project () is a collaborative effort led by Novell () to implement an open source version of the .NET Framework. MAILING LISTS Mailing lists are listed at the
https://manpages.ubuntu.com/manpages/bionic/en/man1/xsp.1.html
CC-MAIN-2022-33
en
refinedweb
At this point in the course we've gone over routing to pages, but we can also route to endpoints. Endpoints are server-side routes, so They provide "backend" functionality within the SvelteKit application providing a great place to, for example, make an external API request. Endpoints are modules written in .js or .ts files that export functions corresponding to HTTP methods. These endpoint files become API routes in our application. For example, our [name].svelte page might request data from /product/sticker, which could be represented by the endpoint src/routes/product/sticker.js . Go ahead and create this file within our product folder called /product/sticker.js. This will trigger the creation of an endpoint which is basically a server-side route for our app. Since this is technically a route, we keep our endpoints in our routes folder. The first thing you probably notice is the interesting file name we gave our endpoint. Firstly, rather than ending in .svelte like our previous page files, our endpoint ends in .js. This is because, as already mentioned, endpoints are written in .js or .ts files. Now, if we wanted, we could also add a file extension before the .js like this, /product/sticker.json.js, which just tells us the type of data that will be returned from the endpoint. All together, our file name is saying that we have an endpoint at /producst/sticker , because it follows the same routing structure as our site, that is written in a .js file and will return json data. This is optional, so I'm going to chose not to add it for this example, and you'll see why in a bit. As mentioned earlier, endpoints export request handler functions corresponding to HTTP methods. Request handlers make it possible to read and write data that is only available on the server. For example, within this file we can export a GET, PATCH, DELETE ... or any valid HTTP method. Endpoints also have access to fetch in case you need to request data from external APIs. For example, within our endpoint file, we can make a GET request on this route by exporting an async function called get like this. export async function GET() {} Now, we can reach out to a database and retrieve some data. Since we do not have a database set up, let's instead return an object with some hard coded data representing the response. In our example, let's declare our value product, and return the product in our body like this. export async function GET({ params }) { const product = { name: 'sticker' src: '' price: '$10' }; return { body: { product } }; } Now, this is the basics of what might be returned by an endpoint. Remember, this is server side, so it will not be available to the client where you typically don't want to use any sensitive information for security reasons. As previously mentioned, endpoints are serverless routes. Endpoints follow the same routing structure as our site, so Just like with our page components, our endpoint routes are determined based on the name of the file. If we route to /product/sticker.js, we can visit this endpoint page just like we would a normal route, and we can view the json data being returned by this endpoint. Again, this is generated server-side, so we can reach out directly to a database and use sensitive information here. Currently our endpoint is only returning our sticker data, but we want it to use a dynamic parameter, just like our product page does, to dynamically fetch data associated with whatever product page we are currently on. We are able to use dynamic parameters with endpoints the same way we use them with pages. Let's update our endpoint name to /product/[name].js, where name is the dynamic parameter, so it will once again be within square brackets. Our method accepts an object of parameters, so in this case we can access our dynamic name by passing in params, and using it like this. export async function GET({ params }) { const name = params.name; } Now we could use this to reach out to a database and fetch data associated with this specific product. We don't have a database set up, but for the sake of example that might look something like this. import db from database; export async function GET({ params }) { var product = db.collection.find(params.name) return { body: { { product } } } } I'm going add a hard coded array of products, and use our dynamic param to search this array and return the correct product to sort of mock a database. const products = [ { name: 'cup', price: '$10', quantity: 1, src: '', }, { name: 'shirt', price: '$10', quantity: 1, src: '', }, { name: 'jacket', src: '', price: '$80.00', quantity: 1, }, { name: 'sticker', src: '', price: '$8.00', quantity: 1, }, ]; export async function GET({ params }) { let product = products.find((product) => product.name === params.name); return { body: { product }, }; } Now that we've created our endpoint, let's use this endpoint in our page to display the data returned from the endpoint. Notice the name of our endpoint is the same as the page that we are fetching its data in. Anytime an endpoint's filename is the same as a page (except for the extension), the page gets its props from the endpoint. We call these page endpoints, and they allow you to pass in props directly from the endpoint. If you remember earlier in this module when we created out endpoint, I decided not to add the .json file name extension. This is because page endpoints cannot use this. We will learn how to use this with standalone endpoint in the next module. Looking back at our endpoint, we are returning our body which contains a single value, product. This body is what allows us to have props passed directly into the page. Back in our /products/[name].svelte page, we can write export let product within our <script> tag. Now our product data is coming directly from our endpoint, so the info being displayed on the page will change with the dynamic param, name. As you can see, page endpoints allow you to run code server side in svelte by exporting an async get function. Now, what if we want to hit this same endpoint from a different page that has a different name than the endpoint? Well, to do that we will need to use the load function which we will go over in the next module.
https://vercel.com/docs/beginner-sveltekit/endpoints
CC-MAIN-2022-33
en
refinedweb
table of contents NAME¶ msync - synchronize a file with a memory map SYNOPSIS¶ #include <sys/mman.h> int msync(void *addr, size_t length, int flags); DESCRIPTION¶ msync() flushes changes made to the in-core copy of a file that was mapped into memory using mmap(2) back to the filesystem. Without use of this call, there is no guarantee that changes are written back before munmap(2) is called. To be more precise, the part of the file that corresponds to the memory area starting at addr and having length length is updated. The flags argument should specify exactly one of MS_ASYNC and MS_SYNC, and may additionally include the MS_INVALIDATE bit. These bits have the following meanings: - MS_ASYNC - Specifies that an update be scheduled, but the call returns immediately. - MS_SYNC - Requests an update and waits for it to complete. - MS_INVALIDATE - Asks to invalidate other mappings of the same file (so that they can be updated with the fresh values just written). RETURN VALUE¶ On success, zero is returned. On error, -1 is returned, and errno is set appropriately. ERRORS¶ - TO¶ POSIX.1-2001, POSIX.1-2008.).) NOTES¶ According.
https://manpages.debian.org/bullseye/manpages-dev/msync.2.en.html
CC-MAIN-2022-33
en
refinedweb
These release notes provide information about new features, notable technical changes, features in technology preview, bug fixes, known issues, and related advisories for Red Hat build of Quarkus 2.7. Information about upgrading and backward compatibility is also provided to help you transition from an earlier. 1. About Red Hat build of Quarkus Red Hat build of Quarkus is a Kubernetes-native Java stack that is optimized for use with containers and Red Hat OpenShift Container Platform. Quarkus is designed to work with popular Java standards, frameworks, and libraries such as Eclipse MicroProfile, Eclipse Vert.x, Apache Camel, Apache Kafka, Hibernate ORM (JPA), and RESTEasy (JAX-RS). As a developer, you can choose the Java frameworks you want for your Java applications, which you can run in Java Virtual Machine (JVM) mode or compile and run in native mode. Quarkus provides a container-first approach to building Java applications. The container-first approach facilitates the containerization and efficient execution of microservices and functions. For this reason, Quarkus applications have a smaller memory footprint and faster startup times. In addition, Quarkus optimizes the application development process with capabilities, such as unified configuration, automatic provisioning of unconfigured services, live coding, and continuous testing that allows you to get instant feedback on your code changes. For information about the differences between the Quarkus community version and Red Hat build of Quarkus, see Differences between the Quarkus community version and Red Hat build of Quarkus. 2. Differences between the Quarkus community version and Red Hat build of Quarkus As an application developer, you can access two different versions of Quarkus: the Quarkus community version and the productized version, Red Hat build of Quarkus (RHBQ). The following table describes the differences between the Quarkus community version and RHBQ. Red Hat build of Quarkus supports the building of native Linux executables by using the Red Hat build of Quarkus Native builder, which is a productized distribution of Mandrel. For more information, see Compiling your Quarkus applications to native executables. Building native executables by using Oracle GraalVM Community Edition (CE), Mandrel community edition, or any other distributions of GraalVM is not supported for Red Hat build of Quarkus. 3. New features, enhancements, and technical changes This section provides an overview of the new features, enhancements, and technical changes introduced in Red Hat build of Quarkus 2.7. 3.1. Cloud 3.1.1. Support for Service Binding CR Red Hat build of Quarkus 2.7 supports the generation of Service Binding custom resources (CR), which means you can now define custom service bindings by using Red Hat build of Quarkus configuration. For more information, see the new Service Binding guide. 3.2. Core 3.2.1. Increased support for Java 17 in JVM mode Red Hat build of Quarkus 2.7 includes improved developer tools for using Java 17. With these improved developer tools, you can now generate projects based on Java 17 without manually configuring the version. Quarkus 2.7 also includes a Technology Preview for Java 17 in native mode. By default, the code.quarkus.redhat.com starter code targets Java 11. If you want an application to target Java 17, ensure to set the expected Java version in the Configure your application section of. This is especially relevant when you are building applications for containers because the base images differ depending on the Java versions. For more information, see Deploying your Quarkus applications to OpenShift. For more information about supported Java and OpenJDK versions, log in to our Customer Portal and see Red Hat build of Quarkus Supported Configurations. 3.3. Data 3.3.1. Dev Services for Infinispan Red Hat build of Quarkus 2.7 supports Dev Services for the Infinispan in-memory data grid through the quarkus-infinispan-client extension. Now you no longer need to set up and configure a local Infinispan server to test your applications. When your Quarkus application includes the quarkus-infinispan-client extension, in either dev or test mode, Quarkus automatically starts an Infinispan server instance and configures your application. If Docker is detected, the container starts, and the connection to the server is automatically stabilized. You can configure the quarkus-infinispan-client through the following application.properties file attributes: For more information about Dev Services, see the following resources: 3.3.2. Hibernate Search integration Red Hat build of Quarkus now integrates the Hibernate Search 6.1 component ( hibernate-search-orm-elasticsearch) providing powerful indexing and full-text search capabilities to your Quarkus application. Hibernate Search integrates with Hibernate Object Relational Mapper (ORM) to automatically extract data from entities into remote indexes. Hibernate Search also provides a Java API, which you can use to run search queries against those indexes and retrieve the resulting managed entities. Through the hibernate-search-orm-elasticsearch extension, Red Hat build of Quarkus 2.7 provides support for the following remote indexes: OpenSearch Elasticsearch Quarkus also builds the Hibernate Search metamodel at compilation time, enabling your application to start and run faster. You can use the hibernate-search-orm-elasticsearch extension with RESTEasy Classic and RESTEasy Reactive. Integration with Hibernate Reactive is not supported. For more information about the hibernate-search-orm-elasticsearch extension, see the Hibernate Search guide. 3.3.3. JPA entity listeners Red Hat build of Quarkus 2.7 now supports Java Persistence API (JPA) entity listeners for JVM and native modes. JPA entity listeners enable you to receive notifications about a particular lifecycle event of an entity. Using Quarkus, you can now create an entity listener with an annotated callback method for any of the following entity lifecycle events: @PrePersist @PostPersist @PostLoad @PreUpdate @PostUpdate @PreRemove @PostRemove For more information about JPA callback annotations, see the 'Interceptors and events' section of the Hibernate ORM user guide. 3.3.4. Programmatic API for caching Red Hat build of Quarkus 2.7 enhances the existing annotations support for caching by introducing a programmatic API, which you can use to store, retrieve, or delete values from any cache that is declared by using the annotations API. All operations from the programmatic API are non-blocking and rely on Mutiny. To use the programmatic API to access cached data, you must first retrieve an instance of io.quarkus.cache.Cache. You cannot use the programmatic API to create a new cache. You can use the programmatic API only to access a cache that was created with the annotations API. For more information, see the 'Caching using the programmatic API' section of the Application Data Caching guide. 3.3.5. Support for Hibernate ORM interceptors Red Hat build of Quarkus 2.7 introduces support for Hibernate Object Relational Mapper (ORM) interceptors, which you can use to enable advanced auditing within your applications. The org.hibernate.Interceptor interface provides callbacks from the session, enabling your Red Hat build of Quarkus application to inspect and manipulate properties of a persistent object before it is saved, updated, deleted, or loaded. To ensure that Red Hat build of Quarkus automatically configures Hibernate ORM to use the interceptor, you can annotate org.hibernate.Interceptor with the @PersistenceUnitExtension qualifier. For named persistence units, use @PersistenceUnitExtension("nameOfYourPU"). For more information, see Using Hibernate ORM and JPA. 3.3.6. Hibernate ORM SQL load script enhancements In Red Hat build of Quarkus 2.7, the Hibernate Object Relational Mapper (ORM) is enhanced to allow multiple SQL files in the SQL load script. Now, you can specify multiple comma-separated SQL files in the quarkus.hibernate-orm.sql-load-script configuration property, which runs when Hibernate ORM starts. You can use an SQL load script to perform various actions when creating database tables from scratch. For example, with the SQL load script, you can run CREATE DATABASE and CREATE TABLE actions or run database update actions, such as INSERT and UPDATE. The SQL load script runs when the Quarkus application starts. Example: quarkus.hibernate-orm.sql-load-script=import-1.sql,import-2.sql 3.3.7. Support for Oracle data sources You can now use Oracle Database as a data source for your applications. Red Hat build of Quarkus 2.7 introduces support for the Oracle JDBC driver extension, quarkus-jdbc-oracle, which you can use while developing applications in JVM and native modes. Red Hat build of Quarkus 2.7 now also provides Dev Services for Oracle Database, which means that you can start your Oracle database automatically in dev and test modes. These Red Hat build of Quarkus Dev Services use a container image of Oracle Database Express Edition (XE). 3.3.8. Reactive Microsoft SQL Server client encryption In Red Hat build of Quarkus 2.7, the Reactive Microsoft SQL Server client now supports SSL/TLS encryption. You can now secure connections from your Quarkus applications to an SSL/TLS enabled Microsoft SQL Server. The Reactive client supports the following Microsoft SQL Server encryption levels: none, login packet only, and entire connection. Use the following client encryption configuration properties of Quarkus to enable SSL/TLS encryption when connecting to a Microsoft SQL Server instance. When your Reactive Microsoft SQL Server client connects, the encryption level is negotiated through the combined configuration of the Quarkus Reactive Microsoft SQL Server client encryption properties and the Microsoft SQL Server SSL support state, as outlined in the following table: The encryption negotiation fails when you enable SSL/TLS in the Quarkus client and the Microsoft SQL Server instance that the client is connecting to does not support encryption. In this case, the client terminates the connection. 3.4. Installation and upgrade 3.4.1. Hibernate ORM upgraded to version 5.6.5 In Red Hat build of Quarkus 2.7, Hibernate Object Relational Mapper (ORM) is upgraded to version 5.6.5, which includes feature enhancements and several fixes. For more information about the Hibernate ORM updates, see the New features, enhancements, and technical changes and Changes that affect backwards compatibility sections. 3.4.2. Quarkus Apache Kafka components upgraded to version 3.1.0 In Red Hat build of Quarkus 2.7, components that are based on Apache Kafka are upgraded to Apache Kafka version 3.1.0. 3.4.3. Quarkus SmallRye reactive messaging components upgraded to version 3.13.0 In Red Hat build of Quarkus 2.7, SmallRye reactive messaging components are upgraded to version 3.13.0 to interact with Apache Kafka. You can upgrade the SmallRye reactive messaging components and take advantage of the additional features, such as: Improved functionality with message converters on injected channels, Apache Kafka Consumer streams, failure handlers for Apache Kafka serialization, and merged configuration implementation Upgraded the components to Vert.x 4.2.1, Vert.x Mutiny Bindings 2.15.0, and vertx-stack-depchain4.1.5 Reduced the log levels for Apache Kafka messages Apache Kafka connector option to consume messages in batches Enabled channels listing inside the KafkaClientServicecomponent Updated MicroProfile reactive messaging to version 2.0.1 3.5. Logging 3.5.1. Simplified logging with Panache Red Hat build of Quarkus 2.7 provides simplified logging, which is sometimes referred to as logging with Panache. When you are developing applications, you no longer need to declare a logger in your class or inject it. Instead, you can call static methods from the io.quarkus.logging.Log class. Quarkus automatically declares a logger in each class that uses the Log API and redirects the static method calls to this logger. Simplified logging is provided by default, so you do not need to configure or enable the feature. Simplified logging with Panache works directly in Quarkus application classes and not in external dependencies therefore, you cannot use this logging feature in your external libraries. Static method calls on the Log class that are not processed by Quarkus at build time, throw an exception. Rather than declaring a logger field, use the simplified logging API, as shown in the following example. package com.example; import io.quarkus.logging.Log; class MyService { public void doSomething() { Log.info("Simple!"); } } The fully qualified name of the class is used as the logger category, so in the simplified logging API example, the logger category is com.example.MyService. 3.6. Messaging 3.6.1. Batch message processing on Quarkus Apache Kafka Quarkus provides support for Apache Kafka through the SmallyRye reactive messaging framework. With this framework, you can send and receive messages by using the Apache Kafka connector component. With Red Hat build of Quarkus 2.7, the Apache Kafka connector component is enhanced to enable batch processing of incoming messages, which means that the Quarkus application can receive and process multiple Apache Kafka messages at once. By processing messages in batches, you can write multiple messages to a database at the same time, thereby improving performance. By default, incoming channels receive each Kafka message individually, however, in the background, the Kafka consumer client polls the broker continuously and receives the messages in batches. Quarkus 2.7 receives the batch of messages that is returned by the poll all at once. You do not need to configure batch mode on your Quarkus application manually. Instead, if an incoming method receives a Kafka parameter type that is compatible with batch mode, the Quarkus application detects this and configures it for you. Batches are not formed by grouping messages according to a specific criteria. Instead, the Kafka consumer groups messages according to the polled records. For more information about how you can interact with Kafka by using Quarkus, see the Apache Kafka reference guide. 3.7. Scalability and performance 3.7.1. UPX compression for native executables The Ultimate Packer for eXecutables (UPX) is used as a compression tool to reduce the size of executables. UPX compression increases the build time, when you use high-compression levels and increases the memory usage of the application. For more information about the UPX compression, see Compress Native Executables with UPX. 3.7.2. Base images for native executables The following base images are used to make the containerization of native executables easier by providing the requirements to run these native executables: The registry.access.redhat.com/ubi8/ubi-minimal:8.5image provides a complete image with a package manager and a shell. You can use this image if the image size is not the primary constraint. The quay.io/quarkus/quarkus-micro-image:1.0image is a smaller base image, providing the native executable requirements, such as glibc, libstdc++, and zlib. Also, both base images support UPX compressed executables. quay.io/quarkus/quarkus-micro-image:1.0image is for development use only and not supported in a production environment. For more information about the base images, see Quarkus base runtime image. 3.7.3. Ability to test multi-module projects In Red Hat build of Quarkus 2.7, you can now perform continuous testing for tests available in any module of a project in dev mode. To perform continuous testing for tests available in a particular module that is running from quarkus:dev mode, you can set the following property to true in application.properties file: quarkus-core_quarkus.test.only-test-application-module=true To control which modules are tested, you can use quarkus.test.include-module-pattern and quarkus.test.exclude-module-pattern properties. 3.8. Web 3.8.1. REST Client Reactive enhancements The REST Client Reactive component that is integrated with Red Hat build of Quarkus 2.7 is improved with the following enhancements: Configuration: In addition to the standard MicroProfile configuration properties, you can now configure your clients with properties consistent with other Quarkus extensions, namely properties with the quarkus.rest-clientprefix. Dev UI: You can view a list of available client interfaces from the Quarkus Dev UI. You can also open available client interfaces within your IDE. Multipart messages: You can send multipart/form-datarequests and parse multipart/form-dataresponses. ParamConverter: You can register ParamConverterfor clients and servers. In previous versions, you could only register ParamConverterfor servers. Proxy support: The integrated REST Client Reactive component of Quarkus supports sending requests through a proxy. The following enhancements to the proxy are also introduced: Proxy authentication is supported. You can configure local proxy settings specific to a client in an application. You configure global proxy settings so that all clients in an application observe and propagate settings from a global configuration. You can configure your application to observe and propagate proxy settings from your JVM properties configuration. Recursive sub-resources: Quarkus supports recursive sub-resources so you can build deeper hierarchies of sub-resources. 4. Support and compatibility You can find detailed information about the supported configurations and artifacts that are compatible with Red Hat build of Quarkus 2.7 and the high-level support life cycle policy on the Red Hat Customer Support portal as follows: For a list of supported configurations, OpenJDK versions, and tested integrations, see Red Hat build of Quarkus Supported Configurations. For a list of the supported Maven artifacts, extensions, and BOMs for Red Hat build of Quarkus, see Red Hat build of Quarkus Component Details. For general availability, full support, and maintenance support dates for all Red Hat products, see Red Hat Application Services Product Update and Support Policy. 4.1. Product updates and support life cycle policy In Red Hat build of Quarkus, a feature release can be either a major or a minor release that introduces new features or support. Red Hat build of Quarkus release version numbers are directly aligned with Quarkus community project release versions. The version numbering of a Red Hat build of Quarkus feature release matches the community version that it is based on. Red Hat does not release a productized version of Quarkus for every version the community releases. The cadence of the Red Hat build of Quarkus feature releases is about every six months. Red Hat build of Quarkus provides full support for a feature release right up until the release of a subsequent version. When a feature release is superseded by a new version, Red Hat continues to provide a further six months of maintenance support for the release, as outlined in the following support life cycle chart [Fig. 1]. During the full support phase and maintenance support phase of a release, Red Hat also provides 'service-pack (SP)' updates and 'micro' releases to fix bugs and Common Vulnerabilities and Exposures (CVE). New features in subsequent feature releases of Red Hat build of Quarkus can introduce enhancements, innovations, and changes to dependencies in the underlying technologies or platforms. For a detailed summary of what is new or changed in a successive feature release, see New features, enhancements, and technical changes. While most of the features of Red Hat build of Quarkus continue to work as expected after you upgrade to the latest release, there might be some specific scenarios where you need to change your existing applications or do some extra configuration to your environment or dependencies. Therefore, before upgrading Red Hat build of Quarkus to the latest release, always review the Changes that affect backwards compatibility and Deprecated components and features sections of the release notes. 4.2. Tested and verified environments Red Hat build of Quarkus 2.7 is available on the following versions of Red Hat OpenShift Container Platform and Red Hat Enterprise Linux 8, with the listed supported installation container images. Please note the Tested and Supported columns for each CPU architecture. For versions in the following table, the deployment environments of which are not tested, see this Support of Red Hat Middleware products and components on Red Hat OpenShift knowledge-based article to get the information about their support status. For a list of supported configurations, see the Red Hat build of Quarkus Supported Configurations page (login required). 4.3. Development support Red Hat provides development support for the following Red Hat build of Quarkus features, plug-ins, extensions, and dependencies: Continuous Testing Dev Services Dev UI Local development mode Remote development mode Maven Protocol Buffers Plugin 5. Deprecated components and features The components and features listed in this section are deprecated with Red Hat build of Quarkus 2.7. They are included and supported in this release, however, no enhancements will be made to these components and features and they might be removed in the future. For a list of the components and features that are deprecated in this release, log in to the Red Hat Customer Portal and view the Red Hat build of Quarkus Component Details page. 5.1. Removal of deprecated methods and classes in Mutiny The following Mutiny methods and classes have been removed: method MultiCollect<T> Multi<T>::collectItems() method MultiGroup<T> Multi<T>::groupItems() method MultiTransform<T> Multi<T>::transform() method Uni<T> Uni<T>::cache() method Multi<T> MultiOverflow<T>::drop(Consumer<T>) class MultiTransform<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiGlobalSpy<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiOnCancellationSpy<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiOnCompletionSpy<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiOnFailureSpy<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiOnItemSpy<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiOnOverflowSpy<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiOnRequestSpy<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiOnSubscribeSpy<T> method MultiTransform<T> AbstractMulti<T>::transform() @ MultiOnTerminationSpy<T> 6. Technology Previews This section lists features and extensions that are available as a Technology Preview in Red Hat build of Quarkus 2.7. The features and components listed in this section are provided as Technology Preview. about Red Hat Technology Preview features, see Technology Preview Features Scope. 6.1. Support for Hibernate Reactive In Red Hat build of Quarkus 2.7, Hibernate Reactive is provided as a Technology Preview feature. Hibernate Reactive is a reactive API for Hibernate Object Relational Mapper (ORM). Hibernate Reactive supports non-blocking database drivers, which means your applications can interact with relational databases in a reactive way. 6.2. Support for Java 17 in native mode In Quarkus 2.7, building native executables by using Java 17 is provided as a Technology Preview feature. To build native executables for production deployments, use Java 11. 6.3. Technology Preview extensions and dependencies For a list of extensions and dependencies available as Technology Preview in Red Hat build of Quarkus 2.7, see the Red Hat build of Quarkus Component Details page (login required). Quarkus RestEasy Reactive Qute Quarkus Reactive Rest Client (Mutiny) Quarkus RestEasy Reactive MicroProfile REST Client Quarkus Kubernetes Service Binding Quarkus Reactive Microsoft SQL Server Quarkus Oracle JDBC Driver 7. Changes that affect backwards compatibility This section describes changes that affect the backwards compatibility of Red Hat build of Quarkus 2.7 with applications based on earlier releases. You must address these changes when upgrading your applications to Red Hat build of Quarkus 2.7 to ensure that your applications continue to function after the upgrade. 7.1. quarkus-vertx-web is now quarkus-reactive-routes The artifact quarkus-vertx-web is renamed as quarkus-reactive-routes. If you use quarkus-vertx-web as the artifact name, Red Hat build of Quarkus 2.7 displays a warning message while building your application. Therefore, you can switch to the new artifact coordinates, io.quarkus:quarkus-reactive-routes. 7.2. A change in RESTEasy Reactive return type on a JAX-RS method for multipart responses A backward-incompatible change in RESTEasy Reactive was incorporated between product versions 2.7.5 and 2.7.6. This change doesn’t allow the use of javax.ws.rs.core.Response as a return type on a JAX-RS method for multipart responses, and org.jboss.resteasy.reactive.RestResponse must be used instead. 7.3. The quarkus-spring-web extension decoupled from RESTEasy Classic In Red Hat build of Quarkus 2.7, the quarkus-spring-web extension works with both RESTEasy Classic, which is the quarkus-resteasy extension, and RESTEasy Reactive, which is the quarkus-resteasy-reactive extension. quarkus-spring-web no longer has a hard dependency on quarkus-resteasy-jackson. If you are using quarkus-spring-web, explicitly add quarkus-resteasy-jackson or quarkus-resteasy-reactive-jackson to your applications. 7.4. The rest-data-panache extension decoupled from RESTEasy Classic In Red Hat build of Quarkus 2.7, the extensions quarkus-hibernate-orm-rest-data-panache, quarkus-spring-data-rest, and quarkus-mongodb-rest-data-panache can work with both RESTEasy Classic, which is the quarkus-resteasy extension and RESTEasy Reactive, which is the quarkus-resteasy-reactive extension. rest-data-panache no longer has a hard dependency on quarkus-resteasy. If you are using the panache extensions, explicitly add quarkus-resteasy or quarkus-resteasy-reactive to your applications. 7.5. Hibernate ORM default database creation mode changed when Dev Services are in use In Red Hat build of Quarkus 2.7, when Dev Services are in use and no other extensions that manage the schema are present, the default database creation mode for Hibernate Object/Relational Mapping (Hibernate ORM) is drop-and-create. This change ensures that Dev Services always start from a clean state. This change only affects Dev Services. When Dev Services are not in use, the default database creation mode continues to be none, like in the previous versions of Red Hat build of Quarkus. 7.6. JNDI is disabled by default In Red Hat build of Quarkus 2.7, the Java Naming and Directory Interface (JNDI) is disabled by default. To enable the JNDI, you can use the quarkus.naming.enable-jndi=true property. 7.7. gRPC - Client Interceptors require @GlobalInterceptor annotation Global gRPC client interceptors must be annotated with @io.quarkus.grpc.GlobalInterceptor since version 2.7. Before Quarkus 2.7, all the CDI beans implementing io.grpc.ClientInterceptor were considered as global interceptors, i.e., applied to all injected gRPC clients. Since 2.7, it is possible to make a client-specific interceptor by annotating the injection point of a gRPC client with the @io.quarkus.grpc.RegisterClientInterceptor annotation. 7.8. gRPC server interceptors require @GlobalInterceptor annotation Starting from Red Hat build of Quarkus 2.7, you must annotate the global server interceptors using the @io.quarkus.grpc.GlobalInterceptor annotation. Also, in Red Hat build of Quarkus 2.7, you can implement a non-global interceptor and add it to a gRPC service in the implementation class. For example, you can use the following code to implement your interceptor: @ApplicationScoped public class MyInterceptor implements ServerInterceptor { @Override public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(ServerCall<ReqT, RespT> serverCall, Metadata metadata, ServerCallHandler<ReqT, RespT> serverCallHandler) { // ... } } After implementing your interceptor, you can use the io.quarkus.grpc.RegisterInterceptor annotation in the service implementation as shown in the following example: @GrpcService @RegisterInterceptor(MyInterceptor.class) public class MyServiceImpl implements MyService { // ... } 7.9. gRPC - Metrics are now handled by Micrometer gRPC metrics are now handled by the quarkus-micrometer extension. gRPC server and client metrics are automatically enabled when your application depends on this extension. If you do not want to collect these metrics, disable them using: quarkus.micrometer.binder.grpc-client.enabled=false quarkus.micrometer.binder.grpc-server.enabled=false The quarkus.grpc.metrics.enabled property has no effect anymore. 7.10. HTTP endpoints with gRPC services using extensions Starting from Red Hat build of Quarkus 2.7, quarkus-grpc does not pull quarkus-http extension. Therefore, to serve the HTTP endpoints along with gRPC services, you can add any of the following extensions to your Quarkus project: io.quarkus:quarkus-http io.quarkus:quarkus-resteasy io.quarkus:quarkus-resteasy-reactive In case you are already using one of the previous extensions in your Quarkus project, then no actions are required. 7.11. Reactive Routes - Changes in how produces work with Multi The produces attribute of the @Route annotation was only used for content negotiation. Starting with Quarkus 2.7, it is also used to indicate how objects produced by the Multi need to be serialized in the HTTP response. When a route returns a Multi<T>, it is possible to: send the item one by one, without any modification (raw stream) wrap the Multi as a JSON Array, where each item is sent one by one produce a server-sent-event stream produce JSON (also named ND-JSON) stream Before Quarkus 2.7, to express the three last possibilities, you had to wrap the produced Multi using ReactiveRoutes.asJsonArray, ReactiveRoutes.asEventStream, and ReactiveRoutes.asJsonStream. Unfortunately, this approach does not work when Quarkus security is enabled. To workaround that problem, in Quarkus 2.7, indicate the serialization you need using the produces attribute of @Route. If the first value of the produced array is application/json, text/event-stream, application/x-ndjson or application/stream+json, the associated serialization is used. So, instead of: returning ReactiveRoutes.asJsonArray(multi), return multi directly and set produces="application/json" returning ReactiveRoutes.asEventStream(multi), return multi directly and set produces="text/event-stream" returning ReactiveRoutes.asJsonStream(multi), return multi directly and set produces="application/x-ndjson" 7.12. Reactive Routes - Change of the JSON response format The JSON response format produced when a constraint violation occurs was changed in an incompatible way. If a reactive route method parameter violates the rules defined by Bean Validation, then an HTTP 400 response is produced. If the request accepts the JSON payload, the response should follow the Problem format. However, before version 2.7, the produced JSON included an incorrect property: details. The payload now follows the format (the property is called detail). 7.13. Removal of deprecated methods from OIDC OIDC TokenConfigResolver methods deprecated in 2.2, and TokenStateManager methods deprecated in 2.3 have now been removed. If any, it should have a minimal impact since only TokenConfigResolver and TokenStateManager methods returning Uni can work without blocking the IO thread. Therefore, they should be used in real-world applications. 7.14. Removal of extensions from Red Hat build of Quarkus The following extensions are removed from Red Hat build of Quarkus 2.7 and moved to the Quarkiverse Hub repository: Apache ActiveMQ Artemis Quarkus Amazon Alexa Quarkus Amazon Services, which includes the following set of extensions: DynamoDB Identity and Access Management (IAM) Key Management Service (KMS) S3 Secrets Manager Simple Email Service (SES) Simple Notification Service (SNS) Simple Queue Service (SQS) Systems Manager (SSM) Quarkus Consul Config Quarkus JGit Quarkus JSch Quarkus Logging Sentry Quarkus Neo4j Quarkus Reactive Messaging HTTP Quarkus Apache Tika Quarkus File Vault For a list of extensions hosted in the Quarkiverse Hub, see Quarkiverse Hub. 7.15. Prefixes for iteration metadata inside a loop In Red Hat build of Quarkus 2.7, you cannot directly use the keys to access the iteration metadata inside a loop. Instead, a prefix is used to avoid possible collisions with variables from the outer scope. By default, the alias of an iterated element suffixed with an underscore is used as a prefix. For example, the hasNext key must be prefixed with it_ inside an {#each} section: {it_hasNext}. It must also be in the form of {item_hasNext} inside a {#for} section with the item element alias. You can configure the prefix using either EngineBuilder.iterationMetadataPrefix() for standalone Qute or using the quarkus.qute.iteration-metadata-prefix configuration property in a Quarkus application. You can provide a custom string value or use one of the following special constants: <alias_>: The alias of an iterated element suffixed with an underscore. This is the default prefix. For example, {item_hasNext} <alias?>: The alias of an iterated element suffixed with a question mark. For example, {item?hasNext} <none>: No prefix. This is the default behavior in the previous versions. For example, {hasNext} 7.16. Smallrye Reactive Messaging packages renamed The following table shows the list of SmallRye Reactive Messaging packages in the smallrye-reactive-messaging-provider module that are renamed in Red Hat build of Quarkus 2.7: 7.17. Aligning with Eclipse MicroProfile Config Specification Based on the Eclipse MicroProfile Config Specification, ConfigProperties injection points that target already annotated org.eclipse.microprofile.config.inject.ConfigProperties beans needs to be annotated with the same qualifier too. 7.18. Hibernate ORM: SQLServer2016Dialect is now the default dialect for MSSQL databases In Red Hat build of Quarkus 2.7, the default MSSQL dialect for Hibernate ORM is SQLServer2016Dialect. If you use Microsoft SQL Server 2012, set the configuration property quarkus.hibernate-orm.dialect to org.hibernate.dialect.SQLServer2012Dialect. 7.19. Narayana LRA extension requirements In Red Hat build of Quarkus 2.7, for the quarkus-narayana-lra extension to work you need to implement a server JAX-RS and a REST Client. This means that you must have any one of the following dependencies in your Quarkus application: The set of quarkus-resteasy-jacksonand quarkus-rest-client The set of quarkus-resteasy-reactive-jacksonand quarkus-rest-client-reactive 7.20. Change in default key encryption algorithm for quarkus-smallrye-jwt-build In Red Hat build of Quarkus 2.7, the default key encryption algorithm is changed in quarkus-smallrye-jwt-build from RSA-OAEP-256 to RSA-OAEP. You can configure the RSA-OAEP-256 algorithm using the smallrye.jwt.new-token.key-encryption-algorithm property as shown in the following example: smallrye.jwt.new-token.key-encryption-algorithm=RSA-OAEP-256 7.21. Default working directory changed in development mode In Red Hat build of Quarkus 2.7, the default working directory in development mode is changed from the target directory to the project directory that you are currently working in. If your Quarkus application is unable to resolve file paths to the current working directory, you might need to append target to the path of the file you want to access. For example, for a project in the code-with-quarkus directory, the working directory in development mode is changed from code-with-quarkus/target to code-with-quarkus. 8. Fixes Quarkus 2.7 provides increased stability and includes a number of fixes for issues that were identified in earlier releases. 8.1. Security fixes 8.1.1. Quarkus 2.7.6 QUARKUS-2076 CVE-2021-3520 LZ4: memory corruption due to an integer overflow bug caused by the memmoveargument QUARKUS-1969 CVE-2020-36518 Jackson-databind: denial of service caused by a large depth of nested objects 8.1.2. Quarkus 2.7.5 QUARKUS-1970 CVE-2021-43797 Netty: control chars in header names may lead to HTTP request smuggling QUARKUS-1902 CVE-2022-0981 Quarkus: privilege escalation vulnerability with RestEasy Reactive scope leakage in Quarkus QUARKUS-1842 CVE-2022-21724 PostgreSQL: jdbc-postgresql: Unchecked Class Instantiation when providing Plug-in Classes QUARKUS-1833 CVE-2021-22569 protobuf-java: potential DoS in the parsing procedure for binary data QUARKUS-1832 CVE-2022-21363 MySQL-connector-java: Difficult to exploit vulnerability allows a high privileged attacker with network access by using multiple protocols to compromise MySQL Connectors QUARKUS-1372 CVE-2021-3914 Smallrye-health-ui: persistent cross-site scripting in endpoint QUARKUS-1029 CVE-2021-29429 Gradle: information disclosure through temporary directory permissions QUARKUS-993 CVE-2021-29428 Gradle: local privilege escalation through system temporary directory QUARKUS-992 CVE-2021-29427 Gradle: repository content filters do not work in Settings pluginManagement QUARKUS-800 CVE-2020-13949 libthrift: potential DoS when processing untrusted payloads 8.2. Resolved issues 8.2.1. Quarkus 2.7.6 QUARKUS-2197 Use -H:PrintAnalysisCallTreeType=CSV to generate CSV call tree files QUARKUS-2196 Provide contextualized response in dev and test when Jackson deserialization fails QUARKUS-2195 Ease restriction on the native-sources type and container image extension QUARKUS-2194 Fix NPE when setting the logger level to null QUARKUS-2193 Ensure that connection issues can be handled in Reactive REST Client QUARKUS-2192 Update SmallRye Config to 2.9.2 QUARKUS-2191 Set proper content type for DevUI template rendering QUARKUS-2190 Make some additional address resolver options configurable QUARKUS-2189 Update SmallRye OpenAPI to 2.1.22 QUARKUS-2188 Set content-length to 0 when the Reactive REST Client sends empty POST or PUT QUARKUS-2187 Qute message bundles - ignore loop metadata during validation QUARKUS-2185 Respect custom ConfigRoot.name in ConfigInstantiator & fix minLogLevel handling QUARKUS-2184 Support large file downloads with Reactive REST Client QUARKUS-2182 Support large file uploads with Reactive REST Client QUARKUS-2181 Execute test plan with the deployment classloader as the TCCL QUARKUS-2180 Improve usability of RESTEasy Reactive server and client multipart handling QUARKUS-2179 Support Uni and CompletionStage results for multipart responses in REST Client Reactive QUARKUS-2178 Panache EntityBase should flush on the right entity manager QUARKUS-2176 Qute - fix validation of "cdi:" namespace expressions QUARKUS-2175 Support date headers correctly in event server QUARKUS-2174 Allow reflective access to Jaeger DTO classes' fields QUARKUS-2173 Verify that modules are compiled in the correct order in dev mode QUARKUS-2172 Register Protobuf enums in the native executable QUARKUS-2169 Fix to enable non @QuarkusTest unit tests to run in parallel QUARKUS-2165 Disable async Vert.x DNS resolver when for Spring Cloud Config QUARKUS-2164 Resolve config folder with user.dir for YAML source QUARKUS-2163 Apply some small optimizations to the generated Reactive REST Client QUARKUS-2161 Add all existing project source directory in application model QUARKUS-2160 Add closeBootstrappedApp mojo param to QuarkusBootstrapMojo QUARKUS-2157 Improve dynamic resolution of MessageBodyWriter providers QUARKUS-2155 Skip Keycloak DevService if quarkus.oidc.provider is set QUARKUS-2154 Update Infinispan to 13.0.10 QUARKUS-2153 Fix WebSockets codestart generated package name QUARKUS-2152 Maven 3.8.5 compatibility, update maven-invoker to 3.1.0 QUARKUS-2151 Spring API JAR version is updated to version 5.2.0.SP6 QUARKUS-2150 Support modules with non-empty default classifier QUARKUS-2148 Enable passing of query parameters as a Map to the REST Client Reactive so advance knowledge of parameters is not needed QUARKUS-2105 Upgrade to Hibernate Search 6.1.5.Final 8.2.2. Quarkus 2.7.5 QUARKUS-1382 JDK 17 applications deployed to OpenShift using S2I do not fail at start OCPBUGSM-33066 CoreDNS now resolves the domain name for github.com in OpenShift 4.8 on OpenStack QUARKUS-1544 @ServerExceptionMapper Exception type injection was allowed for use-cases where users simultaneously handle multiple exceptions QUARKUS-1225 REST Client Reactive supports JSON serialization in multipart forms QUARKUS-1547 Graadle and Maven multi-module projects that use the @MessageBundle annotated class now run tests successfully. QUARKUS-1417 Quarkus native compilations now work in the FIPS-compliant environment QUARKUS-1752 Missing dependencies for quarkus-universe-bom have been added QUARKUS-1724 Deployments of a serverless application into OCP are now stable 9. Known issues This section lists known issues with Red Hat build of Quarkus 2.7. 9.1. quarkus.openshift.arguments does not work on OpenShift with native mode enabled Setting quarkus.openshift.arguments in native mode when the entrypoint is not set nor overridden by quarkus.openshift.command causes the following CreateContainerError error: `runc create failed: unable to start container process: exec: \" ARG1 \ ": executable file not found in $ PATH`. ARG1 can be set up using quarkus.openshift.arguments This problem occurs because the manifests of Kubernetes and OpenShift containers have two optional arguments: command and args. If these arguments are omitted, the container runs with the entry point defined in the container image. However, problems arise when only the args argument is specified. When specifying 'args', specify command too. If not, args will be treated as the actual command, which results in Pod failure in most cases. To work around this problem, set the entry point with quarkus.openshift.command manually. For the 'quarkus.native.builder-image' parameter not defined by the user (which is therefore set to its default value), change the entry point to 'quarkus.openshift.command=/home/quarkus/application'. - 9.2. Inability of the Reactive REST client to upload files bigger than 2 Gb Quarkus REST Reactive client fails with the OutOfMemory error when uploading files bigger than 2 Gb. 9.3. Reactive REST client downloads only the first 2 Gb of file Downloading a file bigger than 2 Gb causes an issue where the Reactive REST client trims the file at 2044 MiBs. OutOfMemory log exceptions accompany the event. 9.4. Reactive REST client with NPE fails to download a multipart data that contains java.io.File The Reactive REST client fails with the NullPointerException (NPE) processing error when downloading multipart data that contains an object of the java.io.File type. 9.5. Reactive REST client duplicates paths to sub-resources When the Reactive REST client is used to access sub-resources of some HTTP resource, it duplicates the part of the address which identifies these sub-resources. @Path("/resource") public interface Client { @Path("/sub") SubClient getSubResource(); interface SubClient { @GET @Path("/something") String getSomething(); } } The call client.getSubResource().getSomething(); maps to a request to `/resource/sub/sub/something, instead of /resource/sub/something. 9.6. HTTP backlog wrong default value when using native epoll transport When quarkus.http.accept-backlog is not configured and the epoll native transport (from Netty) is enabled simultaneously, Quarkus can not handle more than x requests, where x is the cardinality of CPU cores. Using the application.properties (or from the command line), configure the quarkus.http.accept-backlog to -1: java `-Dquarkus.http.accept-backlog=-1` -jar .... 10. Advisories related to this release Before you start using and deploying Red Hat build of Quarkus 2.7, review the advisories about enhancements, bug fixes, and CVE fixes for other technologies and services related to the release. 10.1. Quarkus 2.7.6 10.2. Quarkus 2.7.5 11. Quarkus metering labels for Red Hat OpenShift You can add metering labels to your Quarkus pods and check Red Hat subscription details with the OpenShift Metering Operator. Do not add metering labels to any pods that an operator or a template deploys and manages. You can apply labels to pods using the Metering Operator on OpenShift Container Platform version 4.8 and earlier. From version 4.9 onward, the Metering Operator is no longer available without a direct replacement. Quarkus can use the following metering labels: com.company: Red_Hat rht.prod_name: Red_Hat_Runtimes rht.prod_ver: YYYY-Q1 rht.comp: "Quarkus" rht.comp_ver: 2.7.6 rht.subcomp: {sub-component-name} rht.subcomp_t: application Revised on 2022-07-27 07:47:21 UTC
https://access.redhat.com/documentation/en-us/red_hat_build_of_quarkus/quarkus-2-7/guide/5cbab82b-042a-4a68-ac5e-54901a9cc222
CC-MAIN-2022-33
en
refinedweb
Sensor partner integration This article provides information about the Azure FarmBeats Translator component, which enables sensor partner integration. Using this component, partners can integrate with FarmBeats using FarmBeats Datahub APIs and send customer device data and telemetry to FarmBeats Datahub. Once the data is available in FarmBeats, it is visualized using the FarmBeats Accelerator and can be used for data fusion and for building machine learning/artificial intelligence models. Before you start To develop the Translator component, you will need the following credentials that will enable access to the FarmBeats APIs. - API Endpoint - Tenant ID - Client ID - Client Secret - EventHub Connection String See this section for getting the above credentials: Enable Device Integration Translator development REST API-based integration Sensor data integration capabilities of FarmBeats are exposed via the REST API. Capabilities include metadata definition, device and sensor provisioning, and device and sensor management. Telemetry ingestion The telemetry data is mapped to a canonical message that's published on Azure Event Hubs for processing. Azure Event Hubs is a service that enables real-time data (telemetry) ingestion from connected devices and applications. API development The APIs contain Swagger technical documentation. For more information on the APIs and their corresponding requests or responses, see Swagger. Authentication FarmBeats uses Microsoft Azure Active Directory authentication. Azure App Service provides built-in authentication and authorization support. For more information, see Azure Active Directory. FarmBeats Datahub uses bearer authentication, which needs the following credentials: - Client ID - Client secret - Tenant ID Using these credentials, the caller can request an access token. The token needs to be sent in the subsequent API requests, in the header section, as follows: headers = {"Authorization": "Bearer " + access_token, …} The following sample Python code gives the access token, which can be used for subsequent API calls to FarmBeats. import requests import json import msal # Your service principal App ID CLIENT_ID = "<CLIENT_ID>" # Your service principal password CLIENT_SECRET = "<CLIENT_SECRET>" # Tenant ID for your Azure subscription TENANT_ID = "<TENANT_ID>" AUTHORITY_HOST = '' AUTHORITY = AUTHORITY_HOST + '/' + TENANT_ID ENDPOINT = "https://<yourfarmbeatswebsitename-api>.azurewebsites.net" SCOPE = ENDPOINT + "/.default" context = msal.ConfidentialClientApplication(CLIENT_ID, authority=AUTHORITY, client_credential=CLIENT_SECRET) token_response = context.acquire_token_for_client(SCOPE) # We should get an access token here access_token = token_response.get('access_token') HTTP request headers Here are the most common request headers that need to be specified when you make an API call to FarmBeats Datahub. API requests To make a REST API request, you combine the HTTP (GET, POST, or PUT) method, the URL to the API service, the Uniform Resource Identifier (URI) to a resource to query, submit data to, update, or delete, and one or more HTTP request headers. The URL to the API service is the API endpoint you provide. Here's a sample: https://\<yourdatahub-website-name>.azurewebsites.net Optionally, you can include query parameters on GET calls to filter, limit the size of, and sort the data in the responses. The following sample request is to get the list of devices. curl -X GET "" -H "Content-Type: application/json" -H "Authorization: Bearer <Access-Token>" Most GET, POST, and PUT calls require a JSON request body. The following sample request is to create a device. (This sample has an input JSON with the request body.) curl -X POST "" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer <Access-Token>" -d "{ \"deviceModelId\": \"ID123\", \"hardwareId\": \"MHDN123\", \"reportingInterval\": 900, \"name\": \"Device123\", \"description\": \"Test Device 123\",}" Data format JSON is a common language-independent data format that provides a simple text representation of arbitrary data structures. For more information, see json.org. Metadata specifications FarmBeats Datahub has the following APIs that enable device partners to create and manage device or sensor metadata. /DeviceModel: DeviceModel corresponds to the metadata of the device, such as the manufacturer and the type of device, which is either gateway or node. /Device: Device corresponds to a physical device present on the farm. /SensorModel: SensorModel corresponds to the metadata of the sensor, such as the manufacturer, the type of sensor, which is either analog or digital, and the sensor measure, such as ambient temperature and pressure. /Sensor: Sensor corresponds to a physical sensor that records values. A sensor is typically connected to a device with a device ID. For information on each of the objects and their properties, see Swagger. Note The APIs return unique IDs for each instance created. This ID needs to be retained by the Translator for device management and metadata sync. Metadata sync The Translator should send updates on the metadata. For example, update scenarios are change of device or sensor name and change of device or sensor location. The Translator should have the ability to add new devices or sensors that were installed by the user post linking of FarmBeats. Similarly, if a device or sensor was updated by the user, the same should be updated in FarmBeats for the corresponding device or sensor. Typical scenarios that require updating a device or sensor are a change in a device location or the addition of sensors in a node. Note Delete isn't supported for device or sensor metadata. To update metadata, it's mandatory to call /Get/{id} on the device or sensor, update the changed properties, and then do a /Put/{id} so that any properties set by the user aren't lost. Add new types and units FarmBeats supports adding new sensor measure types and units. For more information about the /ExtendedType API, see Swagger. Telemetry specifications The telemetry data is mapped to a canonical message that's published on Azure Event Hubs for processing. Azure Event Hubs is a service that enables real-time data (telemetry) ingestion from connected devices and applications. Send telemetry data to FarmBeats To send telemetry data to FarmBeats, create a client that sends messages to an event hub in FarmBeats. For more information about telemetry data, see Sending telemetry to an event hub. Here's a sample Python code that sends telemetry as a client to a specified event hub. import azure from azure.eventhub import EventHubClient, Sender, EventData, Receiver, Offset EVENTHUBCONNECTIONSTRING = "<EventHub Connection String provided by customer>" EVENTHUBNAME = "<EventHub Name provided by customer>" write_client = EventHubClient.from_connection_string(EVENTHUBCONNECTIONSTRING, eventhub=EVENTHUBNAME, debug=False) sender = write_client.add_sender(partition="0") write_client.run() for i in range(5): telemetry = "<Canonical Telemetry message>" print("Sending telemetry: " + telemetry) sender.send(EventData(telemetry)) write_client.stop() The canonical message format is as follows: { "deviceid": "<id of the Device created>", "timestamp": "<timestamp in ISO 8601 format>", "version" : "1", "sensors": [ { "id": "<id of the sensor created>", "sensordata": [ { "timestamp": "< timestamp in ISO 8601 format >", "<sensor measure name (as defined in the Sensor Model)>": <value> }, { "timestamp": "<timestamp in ISO 8601 format>", "<sensor measure name (as defined in the Sensor Model)>": <value> } ] } ] } All key names in the telemetry JSON should be lowercase. Examples are deviceid and sensordata. For example, here's a telemetry message: { "deviceid": "7f9b4b92-ba45-4a1d-a6ae-c6eda3a5bd12", "timestamp": "2019-06-22T06:55:02.7279559Z", "version" : "1", "sensors": [ { "id": "d8e7beb4-72de-4eed-9e00-45e09043a0b3", "sensordata": [ { "timestamp": "2019-06-22T06:55:02.7279559Z", "hum_max": 15 }, { "timestamp": "2019-06-22T06:55:02.7279559Z", "hum_min": 42 } ] }, { "id": "d8e7beb4-72de-4eed-9e00-45e09043a0b3", "sensordata": [ { "timestamp": "2019-06-22T06:55:02.7279559Z", "hum_max": 20 }, { "timestamp": "2019-06-22T06:55:02.7279559Z", "hum_min": 89 } ] } ] } Note The following sections are related to other changes (eg. UI, error management etc.) that the sensor partner can refer to in developing the Translator component. Link a FarmBeats account After customers have purchased and deployed devices or sensors, they can access the device data and telemetry on the device partners' software as a service (SaaS) portal. Device partners can enable customers to link their account to their FarmBeats instance on Azure by providing a way to input the following credentials: - Display name (an optional field for users to define a name for this integration) - API endpoint - Tenant ID - Client ID - Client secret - EventHub connection string - Start date Note The start date enables the historical data feed, that is, the data from the date specified by the user. Unlink FarmBeats Device partners can enable customers to unlink an existing FarmBeats integration. Unlinking FarmBeats shouldn't delete any device or sensor metadata that was created in FarmBeats Datahub. Unlinking does the following: - Stops telemetry flow. - Deletes and erases the integration credentials on the device partner. Edit FarmBeats integration Device partners can enable customers to edit the FarmBeats integration settings if the client secret or connection string changes. In this case, only the following fields are editable: - Display name (if applicable) - Client secret (should be displayed in "2x8***********" format or the Show/Hide feature rather than clear text) - Connection string (should be displayed in "2x8***********" format or Show/Hide feature rather than clear text) View the last telemetry sent Device partners can enable customers to view the timestamp of the last telemetry that was sent, which is found under Telemetry Sent. This is the time at which the latest telemetry was successfully sent to FarmBeats. Troubleshooting and error management Troubleshoot option or support If customer is unable to receive device data or telemetry in the FarmBeats instance specified, the device partner should provide support and a mechanism for troubleshooting. Telemetry data retention The telemetry data should also be retained for a predefined time period so that it can be useful in debugging or resending the telemetry if an error or data loss occurs. Error management or error notification If an error affects the device or sensor metadata or the data integration or telemetry data flow in the device partner system, customer should receive a notification. A mechanism to resolve any errors should also be designed and implemented. Connection checklist Device manufacturers or partners can use the following checklist to ensure that the credentials provided by the customer are accurate: - Check to see whether an access token is received with the credentials that were provided. - Check to see whether an API call succeeds with the access token that was received. - Check to see whether the EventHub client connection is established. Next steps For more information about the REST API, see REST API. Feedback Submit and view feedback for
https://docs.microsoft.com/en-us/azure/industry/agriculture/sensor-partner-integration-in-azure-farmbeats
CC-MAIN-2022-33
en
refinedweb
Does Call by reference Java supports? This is a very basic question asked in every interview and should known to each programmer also. Everyone says "no" as Java does not support pointers. No one actually see the address of the object in Java. No one can dream of call by reference without pointers with their experience of C/C++. Of course, they are very correct. In Java the method parameters can be primitive data types or object references. Both are passed as value only but small tricky explanation is here for object references. I repeat, when primitive data types are passed as method parameters, they are passed by value (a copy of the value) but incase of object references, the reference is copied (of course, here also a copy only) and passed to the called method. That is, object reference is passed as a value. So, original reference and the parameter copy both will refer the same Java object. As both refer the same object, if the calling method changes the value of its reference (passed from calling method), the original object value itself changes. Note that object itself is not passed, but it’s references is passed. Object reference is a handle to a Java object. Do not compare this reference term with the used in C/C++. In C/C++, the reference directly points to the memory address of a variable and can be used for pointer manipulations. Finally to say, in Java, everyone is passed by value. But with objecct, the value of the reference is passed. Let us see two programs on call by value and call by reference. Case 1: Call-by-value or Pass-by-value In the following program, a data type int is passed as parameter to a method call. public class CallByValue { public void display(int y) { y = 20; } public static void main(String args[]) { CallByValue cbv = new CallByValue(); int x = 10; cbv.display(x); System.out.println(x); // prints 10 and not 20 } } The variable x value 10 is passed to parameter of y of display() method. As copy of x is passed, changing y does not change x value in main() method. Case 2: Call by reference Java or Pass-by-reference Here, for display() method the reference of StringBuffer object is passed. public class CallByReference { public void display(StringBuffer sb2) { sb2.append("World"); System.out.println(sb2); // prints HelloWorld } public static void main(String args[]) { CallByReference cbr = new CallByReference(); StringBuffer sb1 = new StringBuffer("Hello"); cbr.display(sb1); System.out.println(sb1); // prints HelloWorld } } The value of object reference sb1 is passed to sb2. sb1 literal value "Hello" is passed to sb2. Now sb1 reference and sb2 reference refer the same object. Changing sb2 affects sb1 also. Both sb1 and sb2 prints "HelloWorld". View All for Java Differences on 90 Topics 6 thoughts on “Call by value and Call by reference Java Tutorial” What do you think about this program? Is it call by value or reference as per your post? class Test { public static void swap(Integer i, Integer j) { Integer temp = new Integer(i); i = j; j = temp; } public static void main(String[] args) { Integer i = new Integer(10); Integer j = new Integer(20); swap(i, j); System.out.println(“i = ” + i + “, j = ” + j); } } public class Test { int x=12, y=25; public static void main() { int x=5, z=10; String s= “Java is Fun”; System.out.println(x+” “+ y + “ “ + z); Function1(x,z); System.out.println(x+ “ “+y + “ “+z); System.out.println(s); Function2(s); System.out.println(s); }// end of main public static void Function2( String s) { System.out.println(x+” “+ y ); s=”Java is Platform Independent”; System.out.println(s); return; } // end of Function2 public static void Function1(int x, int z) { System.out.println(x+ “ “+y + “ “+z); x++; z++; y++; System.out.println(x+ “ “+y + “ “+z); return; } // end of Function1 }// end of class 1. Identify the calling and the called methods. 2. Which function is called using CBV and which is called using CBR? 3. Identify the local variables in each method. 4. Identify the global variables. 5. Identify the composite datatypes in the above code 6. Give the output of the above code class CallByReference { public void display(StringBuffer sb2) { sb2.append(“World”); System.out.println(sb2); // prints HelloWorld } public void setData(Integer i) { ++i; } public static void main(String args[]) { CallByReference cbr = new CallByReference(); StringBuffer sb1 = new StringBuffer(“Hello”); cbr.display(sb1); System.out.println(sb1); // prints HelloWorld Integer a = new Integer(19); cbr.setData(a); //It should append the value of a System.out.println(a); //It should print 20 as per the code which you explained but its giving 19 } } thank u sir but there is little confusion in pass by reference .. plz try it in more easy way how can we use call by reference in numeric data……? You cannot use on numeric data. If required, convert int to Integer and then use.
https://way2java.com/java-general/call-by-value-and-call-by-reference-in-java-tutorial/
CC-MAIN-2022-33
en
refinedweb
A method in Java, like a function in C/C++ (in fact, a function of C/C++ is called a methods in Java), embeds a few statements. A method is delimited (separated) from the remaining part of the code by a pair braces. Method increases reusability. When the method is called number of times, all the statements are executed repeatedly. Java comes with static methods, final methods and abstract methods. All the three varies in their functionalities. This "Java Method Example" tutorial gives with normal methods. Other methods you can refer this site later. Java Method Example public class Demo { public void display() // a method without any parameters { System.out.println("Hello World"); } public void calculate(int length, int height) // a method with parameters { System.out.println("Rectangle Area: " + length*height); } public double show(double radius) // a method with parameter and return value { System.out.println("Circle Area: " + Math.PI*radius*radius); return 2*Math.PI*radius; } public static void main(String args[]) { Demo d1 = new Demo(); // create object to call the methods d1.display(); d1.calculate(10, 20); double perimeter = d1.show(5.6); System.out.println("Circle Perimeter: " + perimeter); } } Three methods are given with different variations and called from main() method. To call a method, an object is required. An object d1 of Demo class is created and called all the methods. 1. How to write methods, how to call variables from methods is discussed with good notes in Using Variables from Methods. 2. More in depth study is available at Using Methods and Method Overloading .
https://way2java.com/uncategorized/java-method-examples/
CC-MAIN-2022-33
en
refinedweb
go to bug id or search bugs for New Comment: Description: ------------ I am using XHProf with Drupal (see drupal.org) to Profile my website. The problem I ran into is that Drupal (Devel Module) uses the Site's Full Name as the XHProf Namespace for the XHProf reports, which often results in the namespace containing spaces in the filenames for each report (e.g.- "4da346cf71423.Example Website Name"). This causes the Profiler Report UI to not be able to find the correct file, since it gets the namespace ($_GET["space"]) from the $_GET variables without url decoding it, and uses it as the file extension (e.g.- it looks for "4da346cf71423.Example%20Website%20Name" instead of "4da346cf71423.Example Website Name"). I propose the following fix, which solved my problem and seems to be working fine. Modify the "xhprof_get_param_helper()" function in "xhprof_lib.php" to call urldecode() on the value of all $_GET variables. This will make sure that encoded spaces (and other encoded characters) are converted to real spaces. I've included the patch under "Reproduce Code". Reproduce code: --------------- --- /xhprof/xhprof_lib/utils/xhprof_lib.php 2011-04-11 10:45:44.000000000 -0700 +++ /xhprof/xhprof_lib/utils/xhprof_lib.php 2011-04-11 11:29:22.000000000 -0700 @@ -666,7 +666,7 @@ function xhprof_get_param_helper($param) { $val = null; if (isset($_GET[$param])) - $val = $_GET[$param]; + $val = urldecode($_GET[$param]); else if (isset($_POST[$param])) { $val = $_POST[$param]; } Add a Patch Add a Pull Request On second thought, this would be better classified as a bug.
https://bugs.php.net/bug.php?id=59709&edit=2
CC-MAIN-2022-33
en
refinedweb
Closed Bug 281988 Opened 18 years ago Closed 17 years ago Stop sharing DOM object wrappers between content and chrome Categories (Core :: DOM: Core & HTML, defect, P1) Tracking () mozilla1.8beta2 People (Reporter: jst, Assigned: brendan) References Details (Whiteboard: extension buster? needed for b2) Attachments (22 files, 14 obsolete files) Since the the beginning of time we've always shared JS wrapper for our native DOM objects between content and chrome, and this has always caused security problems, some we've found, others I'm sure we haven't. Our approach in tackling these problems so far has been to use Caillon's XPCNativeWrapper() helper in JS to reach the actual DOM properties from chrome and not properties the page set. But that means that developers need to be aware of this problem, and lots are, but not all, and we're all human so occatinally we forget to use the wrapper and we introduce potential security problems. Not sharing JS wrappers between content and chrome is fairly easy, but it has its consequences. Like XBL bindings must not be attached more than once just because a content DOM object is accessed from chrome. And not only that, but XBL properties etc that were attached to the content DOM object won't be visible to chrome. I imagine that it's ok for us (at least from a technical point of view) to make content XBL unreachable from chrome, and that's what my patch does, and it doesn't appear to break anything obvious (that I've seen so far), but it may break other apps, who knows... I'll attach a patch for others to look at that makes us not share JS wrappers between content and chrome (but we'll still share among chrome windows, and among content windows, of course). Please comment and share thoughts on this issue. I believe that at some point in the future we're going to have to make a change like this, and the sooner we do that the easier it'll be for us. But yeah, it would've been even easier to do this 3 years ago... We could, though it won't be easy, provide a mechanism for chrome code to specify on a per JS stack frame basis (or whatever) that it wants to temporarily share JS wrappers, but I don't know yet that we'll need to do that. Thoughts? Oops, that can't be the right patch -- it looks like a s3kr3t plugindir one I reviewed recently! ;-) /be The change, as described, seems pretty reasonable to me. The XBL thing may bite some extensions, but the really popular ones that I can think of that bind XBL to content (flashblock, specifically) should be ok... Yeah, duh, that was the right patch filename, but wrong directory. This is what I meant. Attachment #174106 - Attachment is obsolete: true Does this include things like frame names reflected into the window object? This might break a few odds and ends such as the XUL directory listing. If chrome accesses an object and sets properties on the wrapper, can content later access those properties? I can't quite tell from the patch, but the code in the patch seems asymmetric wrt chrome and content... What about things like DOMI? The JS object browser in DOMI would be seriously affected by this, correct?. --BDS If it turns out that we do need to be able to access the content-wrapper from chrome, maybe one option would be to have some XPC method that allowed you to manually fetch the content-wrapper given a chrome one. Rather then doing it per window or some such which seems like it's easy to forget that you've done for a certain window. To clarify comment 6, if content can access the chrome-wrapper (say chrome ends up doing JS access first, then content does JS access), then we still have a problem... (In reply to comment #6) > If chrome accesses an object and sets properties on the wrapper, can content > later access those properties? I can't quite tell from the patch, but the code > in the patch seems asymmetric wrt chrome and content... Yeah, the code is asymmetric, but we want that. We only ever want to "prevent" chrome from seeing JS wrapper changes that were done in content, the other way around we don't want to prevent anything, but in reality all access is prevented the other way around since content can generally never access chrome. In the odd case where content code has requested UniversalBrowserRead/Write then IMO content code *should* be able to see chrome wrappers n' all, since there's nothing we want to hide going that way, IMO. (In reply to comment #7) > What about things like DOMI? The JS object browser in DOMI would be > seriously affected by this, correct? Correct, this screws DOMI in a big way. Need to think about what to do there... >. I don't think so. This is not about protecting one domain from seeing or not seeing something in another domain, this is strictly about making chrome development safer. And no matter what wrapper is created for any given domain, there's much more to protect than what's on the wrapper, most of what we're protecting is in the underlying object that's the same no matter what wrapper is used. i gave bz a list of things i expected to be screwed when i first saw this bug reported, (i didn't list domi, i presumed someone else would cover it for me), domi-sidebar, xul error pages (bz said biesi would check on it), my company's product, my previous company's product, loading chrome in a tab (chrome://navigator/content/, chrome://messenger/content/, chrome://chatzilla/content/, or ...), loading *any* chrome url in winEmbed/mfcEmbed/gtkEmb .... :) timeless, I think you have the wrong bug... This isn't the bug about dropping chrome privs for chrome:// uris in a content docshell. So I thought some more about how to do this w/o breaking DOM Inspector etc, and how to provide a way for chrome to access the content wrappers in the rare case when it really *needs* to, which AFAIK won't ever happen in our apps (except in the DOMI case). Ideally we'd have a privilege-like mechanism that we could use in chrome to enable wrapper sharing only in a given scope, but unfortunately that'll be *really* hard to do w/o a lot of suck. We'd need to make XPConnect aware of this and make it check for this *every* time a wrapper is accessed from JS, even in the case where the wrapper already exists (which needs to be as fast as possible, now it's basically just a locked hash lookup). I don't want to mess with that code, it's performance critical. And approaches like providing a method that returns the content wrapper to chrome only goes so far, since any code (read XBL methods/getters/setters) on the content wrapper won't work as expected when accessed in the content scope, even if it's accessed through the right wrapper. So the alternative to me looks like a two-sided approach. 1: For apps like DOMI (and Venkman?) we'll add a global function on chrome windows that it can set that will from then on enable XPConnect wrapper sharing (of wrappers that have not been created thus far) between this chrome window and content windows. This function can only enable sharing, disabling it gets tricky (since XPConnect is caching the already created wrappers etc, so a predictable on/off is not easily doable). 2: For trancient access to a content wrapper from chrome code we'd introduce a new eval()-like method that would evaluate an expression in the given scope, and on the JS context of the given scope (i.e. pass it a content window and the code will be run on the content windows context, just as if it was accessed from the content window). The JS that is run in this method runs with the priveleges of the given scope, i.e. no elevated rights for the code that runs from within this new uber-eval(). Maybe this should be evalInContext(), or something like that, and I think we need a way to not only eval a string of JS, but also a way to call methods (and access properties) on objects from chrome, that means we'd need a way to pass arguments here. Anyone see anything obviously wrong with an approach like this? Or is there an easier approach here that I'm just not seeing? I'll start hacking on this, yell if I should stop. i'd argue domi and venkman should probably fall into #2. i don't want someone to write a page that waits to attack a domi or venkman user. the code should be careful. and i do believe i meant this bug when i commented in it, i hadn't read or heard about the other bug at the time. and my implementations of all these things has always involved exiting the local <browser/> region and sticking things back into the <xul:window/> or vice versa. (In reply to comment #11) [...] > xul error pages (bz said biesi would check on it), I can't imagine how this would break XUL error pages... [...] > loading chrome in a tab (chrome://navigator/content/, > chrome://messenger/content/, chrome://chatzilla/content/, or ...), nor would I expect any of these to break. > loading *any* chrome url in winEmbed/mfcEmbed/gtkEmbed/..., I can't see how this would change things in any way for embedd This shouldn't change anything for embedders, we're only changing how *chrome* JS sees content JS objects, embedders are in no way impacted by that that I can see. > .... :) I guess I'm not getting what you're saying here... (In reply to comment #14) > i'd argue domi and venkman should probably fall into #2. i don't want someone to > write a page that waits to attack a domi or venkman user. the code should be > careful. The whole point of DOMI and Venkman is to see what's on the webpage, what's on the JS objects etc, this change won't open up any new security exploits that we don't already have, so noone gets more vulnerable due to this change. And this is merely about preventing content from shadowing DOM properties from chrome, no matter what a webpage does should ever give content code any elevated priveleges, and if there's ever a bug that does that, this won't save us. I'd be all for only doing #2, but I don't have (nor has anyone else I know) the resources to make DOMI and Venkman use it, so #1 makes a whole lot of sense to me. my version of xul error pages reached from a chrome page into a content page to play with the pretty error. (In reply to comment #17) > my version of xul error pages reached from a chrome page into a content page to > play with the pretty error. Well then your version of XUL error pages is free to use either of these approaches here to get around any possible problems. And the only such problems would be if your code relied on XBL properties/methods in the content page (which seems rather far-stretched to me). I claim we want this for the 1.8b2 milestone, to iron out any problems so it ships in Firefox 1.1. /be Flags: blocking1.8b2+ Flags: blocking-aviary1.1+ While testing my extensions with this (for related tracking bug 289231), I noticed something unusual. My extensions broke, but *not* when they were triggered by the first page load in a new tab. Reloading a tab, or visiting some other site first in the new tab did result in breakage. Since breakage is probably the (unfortunately) expected result of this bug, I wonder if the current patch is missing some special case for a new tab. FYI, whoever is testing this. Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.7) Gecko/20050405 Firefox/1.0.3 So, just to be clear: . If chrome-code stores a property/object on a contentWindow, will the property + wrapper be lost as soon as the chrome-function returns? . Presently, Adblock (among others) is broken, because it stores a per-page list of all blocking-metadata in each webpage's scope. It does this by adding an array on contentWindow._AdblockObjects, from chrome. The array now "disappears". If this is expected breakage, with no possible workaround, then I have to ask: what kind of security problems, besides "occatinally we forget to use the wrapper" is this patch really addressing? (In reply to comment #22) > If this is expected breakage, with no possible workaround, [...] The patch is incomplete and there would most definitely be a workaround before this was landed for real. But since no existing code is written to use the workaround it's valid to test for breakage with this half of the patch. And boy, have we seen breakage! Far more than I expected, though we knew it was risky. > then I have to ask: what kind of security problems, besides "occatinally > we forget to use the wrapper" is this patch really addressing? Just that. The current model is the chrome author needs to remember to get content properties safely each time. The intent is to switch to a safe default and require the places that really mean it to explicitly code "get me the javascript property". In that model the downside of forgetting is you get nothing, as opposed to the current downside of forgetting being a potential security issue depending on how the data is used. Obviously this isn't going to fly on the branch. It'll have a bumpy landing on the trunk, too :-( I vote to kill this patch entirely, before it does more harm. . Breaking the Intuitive + useful scoping between chrome and content, just because a few people "forget to use the wrapper", is a very poor choice. Perhaps borderline absurd. It: a) throws numerous extensions + descriptions + tutorials out the window; b) breaks a statistical majority of code against the future minority gain; c) inreases the chrome/extension-dev learning curve; d) doesn't appear qualified by any real security issues (unless forgetfulness counts). . Caillon's wrapper works. So why not leave well-enough alone? My viewpoint is always about keeping the bar low for people trying to learn the scripting layer around Mozilla, so I put my "struggling learner" hat on while reading this. I vote against this fix because it just makes Mozilla-the-platform harder to learn and harder to use. Can anyone point to a fixed and a not-fixed script example for me to munch on, even if it's old code? - N. This patch makes it easier, not harder, to write good code for the Mozilla platform. At the moment it is very easy to write highly insecure code, and learning how to work around it is hard. This patch makes it easy to write secure code, and it would only be hard to learn how to write potentially insecure code. (In reply to comment #24) > just because a few people "forget to use the wrapper" You clearly have no idea what the scope of the problem is. I'd estimate that the wrapper is used in no more than 30-40% of the places it _should_ be used in in well-audited Firefox code (all of which postdates it). In most extensions it's not used at all. In SeaMonkey code that predates the existence of the wrapper it's rarely used. > a) throws numerous extensions + descriptions + tutorials out the window Yes, this is a problem. But looking at the code that is broken by this wrapper (various extensions mostly), it's broken because it's susceptible to being exploited by the web page. In other words, this patch is doing exactly what it should be -- preventing people from writing extensions with security holes, which is what they are doing now. > b) breaks a statistical majority of code against the future minority gain; It would fix about half a dozen existing open bugs that I know of. It would allow vast simplification of the code in both Firefox and extensions that care about security. It would keep extensions that don't care about security from being exploited. I'm not sure where you got the idea that there is "future minority gain". There is a very distinct gain here, if nothing else in a safer browser for our users and a lot less engineering time spent on whacking this issue in every single place it shows up. If it's of any interest to you, I saw another dozen or two places in Firefox context menu code that aren't using the wrapper and should be while I was looking up the code example for Nigel. This is code that's been audited to death already and uses the wrapper stuff extensively. > c) inreases the chrome/extension-dev learning curve; Frankly, no. What it does do is make it a lot harder to write extensions or chrome that have security holes by design. It makes it _easier_ to write safe code. That's the whole point, in fact. > d) doesn't appear qualified by any real security issues (unless forgetfulness > counts). Forgetfulness, just like any other human factor, is a real security issue. Having security policies in place that are easy to follow is a much better security architecture than having hard to follow ones (the latter is what we have now). (In reply to comment #25) > I vote against this fix because it just makes Mozilla-the-platform harder to > learn and harder to use. This seems to be a common misconception. Can you, or someone else, give a reasonable example that is: 1) Harder to do with this proposal. 2) Doesn't open one up to security holes. ? > Can anyone point to a fixed and a not-fixed script example for me to > munch on, even if it's old code? I'm not sure what you mean by "fixed" and "not-fixed". I can point you to an example of safe code as it has to be written now and code that would become safe with the proposed patch. The relevant code is in the contentAreaClick in browser/base/content/browser.js Current code (I left out some parts not needed to illustrate the point; the density of XPCNativeWrapper calls is not quite this high in the original code, because there's other logic in between them): wrapper = new XPCNativeWrapper(linkNode, "href", "getAttribute()", "ownerDocument"); target = wrapper.getAttribute("target"); if (!target || target == "_content" || target == "_main") { var docWrapper = new XPCNativeWrapper(wrapper.ownerDocument, "location"); var locWrapper = new XPCNativeWrapper(docWrapper.location, "href"); if (!webPanelSecurityCheck(locWrapper.href, wrapper.href)) return false; } Code as it could be written: var target = linkNode.getAttribute("target"); if (!target || target == "_content" || target == "_main") { if (!webPanelSecurityCheck(linkNode.ownerDocument.location.href, linkNode.href)) return false; } Note the vastly improved clarity and ease of authoring of the code. Stupid question. Looking at the "broken" extensions, so we know if there are other ways to do what they are trying to accomplish? Or are some of these extensions relying on things that will never work? See comment 13. Boris: Unfortunately, since you did mention comment13: . Toggling wrapper-sharing at the window-level will only result in the popular extensions turning it on, making things "insecure" again. . As for evalInContext(), throwing away calling-privileges would kill access to functionality like: setting an in-page eventlistener that runs privileged code. Or: clobbering a native deliberately, from chrome, with an intelligent getter that returns the native code for all chrome-calls (but something else, for in-page calls). . After considering the contentAreaClick example: . Why not flag everything set by in-page code, at the time it's set. Whenever a flagged / potentially-clobbered value is accessed: a) the code in charge of retrieval would check for a "preferNative" flag in the calling scope-chain -- allowing window-level or isolated toggling; b) if the flag is found, the retrieval code would lookup the native value (if any). This would permit existing extensions to continue unbroken (most don't need to access in-page clobbers), while securing the greater areas of concern. The DOMI would simply need the "preferNative" flag, at window-level; something a scripted-overlay could externally lend. make that: "preferLocal", sorry. Native-lookup would be default. > Toggling wrapper-sharing at the window-level will only result in the popular > extensions turning it on, making things "insecure" again. That's the choice of those extension authors. They're free to write insecure code if they really want to (and others are free to publicise this fact, of course)... Note that the window-level thing is there for cases when explicitly annotating every access would be too hard. It's NOT the preferred method of doing things. > throwing away calling-privileges would kill access to functionality jst, would the proposal actually remove that functionality? I'm not up enough on my XPConnect and JSeng to tell.... For similar reasons, deferring to brendan and jst on the counterproposal in comment 30. My proposal to jst this morning is this: Give chrome and content separate wrappers, but make a one-way binding from chrome to content whereby any property set on a chrome wrapper for native instance x sets the same property on x's content wrapper -- but not vice versa (content sets a property on its wrapper for x, nothing happens to chrome's wrapper for x). Comment 30 describes something in terms that are hard to implement. Where is this flag, and how does trusted code find the "native" value that was clobbered? In the prototype object? That too could be overwritten. You end up needing two wrappers for each native instance x, in order to avoid a pigeon-hole problem. Wherefore my counterproposal. /be Brendan: Since your proposal likewise solves the breakage, it sounds good. . Per your question: Caillon's XPCNativeWrapper uses Components.lookupMethod(), to determine what a wrapped-native's original property/method was. See xpccomponents.cpp#2004 -- I was thinking my proposal would do the same. The flag itself could just be a variable (__preferLocal__), which would make checking up the chain extremely simple; all top-level contexts (window) would have it default-defined on their prototype: chrome:false, content:true. . Also: with(){..} statements seem the direct equivalent of jst's evalInContext(), from comment13. Could you clarify why usage is more recently deprecated? It's a very useful thing. .append (for completeness): The other flag -- "potentially clobbered / in-page" -- would be set by the backend, at parse-time, and not script-accessible. Member lookup would always check for it. )? i was explicitly asked to determine whether jst's proposed patch breaks our app, the answer is yes. the spider is 100% hosed since it starts out as chrome and then pokes through looking at the web page content to get things like document.links. (and yes, we're aware of the security concerns, for the most part it isn't an issue for our app, although we do use xpcnativewrapper and were screwed when it was moved.) i haven't tested the rest of our code, i decided to take a detour and see how i can work around this security measure (it's possible). i'm 99.9% certain the code will also break our other modules, as they all behave the same (some use xpcnativewrapper more than others, but they all start out as chrome and spend their time interacting with content). So a Web page could trick your spider into running arbitrary code? That seems like something you _should_ be concerned about. what i'm concerned about is whatever my manager and his manager tells me to be concerned about. note that as we're a commercial product, we don't license people to spider random web pages, only their own apps, as such, if their own apps decide to attack mozilla, well... fine then. and no, it's not ideal, yes we're working on it, but our app is not a generic web browser, it has a special domain where the rules and realm work differently. rue: we're working on something sort of like what you sketched, but not exactly. Regarding with statements, they are deprecated because in function f(o) { with (o) return x; return null; } f({x:42}) returns 42, while f({}) returns null -- but setting a global x = 42 before calling f({}) also returns 42. Only at runtime do we know what names bind to which properties of what object. One way to keep with (for utmost compatibility) is to extend JS (a la VB, IIRC) to allow with users to specify that they mean "in the object named by this with statement": function f(o) { with (o) return .x; return null; } (Obviously a contrived example, since it's so short.) With is not the same as evalInContext, however, since it extends the scope chain whereas evalInContext replaces the scope chain. /be ok, fwiw domi has the ability to violate the new model jst created in the preceding patches, i can make my code use that approach. (In reply to comment #36) > )? It depends on where they were compiled. If evalInContext is like eval, you pass it a string containing JS program source. If that contains the listener function it will be compiled in the content context, and have content privs. Adding a chrome-loaded (therefore chrome-privileged) function as a listener would work, but would not use evalInContext by itself since there would be no property path to the function. The chrome script would have to set the function as the value of a content (wrapper) property first, then evalInContext("thatContentNode.addEventListener(thatContentNode.listenerRef)") or some such. Ugly. We're trying now to avoid separate wrappers, to preserve compatibility while restoring by-default security, more or less along the lines rue sketched. The devil is in the details, though. /be (In reply to comment #37) > i was explicitly asked to determine whether jst's proposed patch breaks our > app, the answer is yes. the spider is 100% hosed since it starts out as chrome > and then pokes through looking at the web page content to get things like > document.links. document.links should work just fine even with separate wrappers so I don't understand why that should break your spider. The only thing that is affected is *js* set properties on native objects. So the only way it'd be affected is if the webpages you're crawling overrides document.links and makes it point to some custom js-object. Is that really what you are doing? try it yourself, use domi, pick any page with at least one link. 0. open navigator to a page w/ at least one link [perhaps verify that it works, javascript:void(alert(document.links.length))] 1. open domi 2. in domi, file>inspect window>navigator 3. in the left pane, select document-dom nodes 4. expand this path: #document/window/hbox/vbox[2]/hbox/tabbrowser/xul:tabbox/xul:tabpanels/xul:browser 5. in the right pane, select object-javascript object 6. expand this path: Subject/contentDocument/links/length with the patch from this bug, the result i get is 0. note that there is a way to get domi to give you the other answer, and if necessary i will use it and complain when someone breaks it. (fwiw links.length is the only thing that's broken, contentDocument.documentElement.innerHTML pretends the document is empty, among other properties that are relatively unhappy but willing to lie.) i was starting to investigate what it would take to do what domi did, but venkman crash :) Comment 44 needs investigation, but it doesn't prove a general problem (note that we are not trying to make jst's separate-wrappers patch in this bug land for 1.0.3, so don't fret). In fact it explicitly says something odd is going on with document.links.length. Timeless: are you gonna try to debug again? /be Actually, I have a related question. In our proposed setup, how do array-like properties on node lists work? So if I do document.getElementsByTagName("head")[1], say? Those aren't on interfaces, really... bz: [] is a shorthand for .item(...). the same had to apply to XPCNativeWrapper before it. Thanks bz: I see how retaining simple-but-currently-insecure syntax plus a fix yields simple-secure-syntax at the cost of less access to content js properties and/or more complex syntax with no cost. Re comment 33: if s/set/set or get/ applies, what happens if you watch() a content property from the chrome? More generally: I don't believe content-set js properties are interesting only to the special cases of DOM Inspector, spiders and compiled apps. Chrome scripting is useful as a general purpose macro-like language when it manipulates living, running Web applications, just as VBA-for-Excel does for spreadsheets containing formulae. That large class of chrome uses shouldn't be made obscure or impossible to do. - N. jst just checked in branch patches for bug 289074 that implement what Brendan was talking about yesterday in this bug. Initial testing seems to show the extensions broken by attachment 179766 [details] [diff] [review] are not broken by the new patch (AdBlock, chatzilla, dom inspector,...). Both Firefox and Suite 1.7.7 nightlies are available for testing, look for today's evening builds. How much more do we expect here for 1.8b2? We're coming into the end game for b2 so we need to make quick work of remaining issues or push them off to 1.8b3. We need to fix this for 1.8b2. Nigel's right, we shouldn't break getting content-set, non-overriding DOM properties from chrome -- we just need to sandbox them rigorously. To do that without adding the cost of a thunk per content-set getter, setter, and method, we should use jst's separate wrappers approach, with a magic mirror between the chrome and content wrappers, such that content can't affect chrome's wrapper, and chrome runs content-defined getters/setters/methods in the content scope, on the content context (necessary for native method invocation to result in the content principal being found by caps code). The magic mirror will need to thunk methods, but only when chrome script gets a content-set native method on the content wrapper. Content-set scripted methods carry their principals in their script if uncloned function objects, or in their immutable scope chain if cloned function objects. More tomorrow. /be > content-set, non-overriding DOM properties from chrome -- we just need to > sandbox them rigorously. Just to be crystal clear: we must bypass, not sandbox, content-set DOM-overriding properties. /be Work in progress, does not compile even (nsTHashtable.h and friends are not so friendly to opaque struct types). But the idea is not only to wrap content native method getter/setter/values safely, but (this part is not even started here) to set content property foo when the corresponding chrome wrapper for the same DOM native content object has foo set. /be (In reply to comment #53) This seems reasonable to me, and with the code to set properties across scopes I think this could go in with a really low risk of breaking extensions. As for the hash usage, you could just use pldhash directly as there's already other code in nsDOMClassInfo.cpp that does that... Assignee: jst → brendan Status: NEW → ASSIGNED Priority: -- → P1 Target Milestone: --- → mozilla1.8beta2 Whiteboard: extension buster? needed for b2 And testing. Please help. jst, I see that nsGlobalWindow::OnFinalize assertion. Do I need your Aviary version, or some part of it? /be Attachment #181807 - Attachment is obsolete: true This patch picks up the old JS_SetObjectPrincipalsFinder API I introduced in late 2003 when working with caillon on eval-based native function exploits, and revises it incompatibly to locate the findObjectPrincipals callback in the JS runtime, not in a particular JSContext. Besides economizing and matching the other principals callback (for XDR), this guarantees coverage: I am worried about a JS component loading and running on a "safe context" that doesn't have a findObjectPrincipals callback configured. This patch beefs up the belt-and-braces checks in obj_eval and script_exec to throw an invalid indirect call error in any case where the eval function object, or the script object, has object principals different from its calling script's subject principals. But the main change here is to nsScriptSecurityManager::GetFunctionObjectPrincipals, so it no longer skips native frames when walking the stack to find subject principals. Skipping natives is a very old design decision (Netscape 4 at least, possibly 3), which assumes that natives are trusted code that have no trust identity, as scripts (with their codebase or certificate principals) do. This assumption is flawed, as the eval and script object exploits recently fixed demonstrate. Evil script can store a native function object reference somewhere (in the DOM, or in a JS object created to masquerade as a DOM subtree), knowing that the native will be called in a certain way, and run with chrome scope and principals. We've been patching the symptoms of this general flaw in caps for 1.0.x, but this fix addresses the root of the problem. I'm attaching optimistically, still recompiling and testing, but an earlier version of this patch that didn't touch jsapi.h fixed the known testcases. /be Attachment #182036 - Attachment is obsolete: true Attachment #182537 - Flags: superreview?(jst) Attachment #182537 - Flags: review?(shaver) Attachment #182537 - Flags: approval1.8b2+ Attachment #182537 - Attachment description: no shared wrappers, but general native function security (prerequisite to any future patch) → no unshared wrappers, but general native function security (prerequisite to any future patch) Comment on attachment 182537 [details] [diff] [review] no unshared wrappers, but general native function security (prerequisite to any future patch) This appears pretty sane, and has some portions pretty similar to a patch that I think I worked on with you a while ago (not sure why that fell on the floor). r=caillon Attachment #182537 - Flags: review+ Comment on attachment 182537 [details] [diff] [review] no unshared wrappers, but general native function security (prerequisite to any future patch) sr=jst Attachment #182537 - Flags: superreview?(jst) → superreview+ Comment on attachment 182537 [details] [diff] [review] no unshared wrappers, but general native function security (prerequisite to any future patch) >Index: caps/include/nsScriptSecurityManager.h > // Returns null if a principal cannot be found. Note that rv can be NS_OK > // when this happens -- this means that there was no script associated > // with the function object. Callers MUST pass in a non-null rv here. > static nsIPrincipal* >- GetFunctionObjectPrincipal(JSContext* cx, JSObject* obj, nsresult* rv); >+ GetFunctionObjectPrincipal(JSContext* cx, JSObject* obj, JSStackFrame *fp, >+ nsresult* rv); Can you document what fp is used for, if it can be null, etc.? >+ // No chaining to a pre-existing callback here, we own this problem space. >+ ::JS_SetObjectPrincipalsFinder(sRuntime, ObjectPrincipalFinder); Should we warn if we find a pre-existing one? (And if we exit with one that's not us, perhaps?) > * All eval-like methods must use JS_EvalFramePrincipals to acquire a weak > * reference to the correct principals for the eval call to be secure, given > * an embedding that calls JS_SetObjectPrincipalsFinder (see jsapi.h). > */ > extern JS_PUBLIC_API(JSPrincipals *) > JS_EvalFramePrincipals(JSContext *cx, JSStackFrame *fp, JSStackFrame *caller); I'd like a Get in that name to keep me from reading "Eval" as a verb, but that's not what this patch is about. r=shaver on the JSAPI stuff. Attachment #182537 - Flags: review?(shaver) → review+ I checked in attachment 182537 [details] [diff] [review]. /be Brendan, sorry, but this Checkin have caused a serious Regression. I am using Windows-CREATURE-Tinderbox-Build 20050400 at moment. First Regression: Unable to install *.xpi with this Build. JS-Console throws: Error: uncaught exception: Permission denied to create wrapper for object of class UnnamedClass and XP-Install is aborted without installation. Second Regression: At Mozilla Startup JS-Console throws: Error: uncaught exception: Permission denied to get property UnnamedClass.Constructor without doing anything exept starting JS-Console. Third Regression: MailNews is unuseable at Moment, Thredpane is complete grey. Startup MailNews give a lot of Errors in JS-Console, e.g.: Error: uncaught exception: Permission denied to get property UnnamedClass.Constructor and using Mnenhy installed in Profile with older Build: Error: goMnenhy is not defined Source File: chrome://mnenhy-headers/content/mnenhy-headers-msgHdrViewOverlay-loader.js Line: 79 Error: goMnenhy is not defined Source File: chrome://mnenhy-headers/content/mnenhy-headers-msgHdrViewOverlay-loader.js Line: 49 and some more. Also Error: Error: uncaught exception: Permission denied to get property HTMLDivElement.nodeType was shown up. Formerly I have used Windows-CREATURE-Tinderbox-Build 2005050322 without this Patch, and it worked fine. Can you please backout this or fix it soon? TIA. Add Screenshot from broken (regressed) Threadpane in MailNews Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8b2) Gecko/20050504 Firefox/1.0+ 02:16pdt this checkin is causing serious trouble If you try to install any extension FF locks up the moment the filecheck starts I do not know if all possible regressions should be reported here. Opening external links in a new tab is also broke, it opens a blank new tab. (Options -> tabs -> Open links from other applications in -> A new tab in most recent window) Apparently some of our UI needs to call a content native from chrome, yet have chrome privileges. I haven't debugged to see why. This means we can't use object principals for content natives, for now. It also says we have to change *something* -- we can't have it both ways. /be I checked in a change to nsScriptSecurityManager.cpp to make it match the branch patch (which already landed on AVIARY_1_0_1...BRANCH and MOZILLA_1_7_BRANCH). I changed nsScriptSecurityManager::GetFunctionObjectPrincipals to skip native frames again. I also re-ordered the tests to avoid preferring cloned function object principals to eval or new Script principals (reviewers take note). That was an independent bug in yesterday's checkin, but not (I believe -- need to test more) relevant to the regression. Testing more today should show why our chrome counts on calling content-scoped natives with chrome privileges. XBL seems implicated. Builds should be restaged, I'll talk to people who can help do that in a bit. /be Ok, the profile manager works again. Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8b2) Gecko/20050504 Firefox/1.0+ 10:02pdt build Adblock still causes Error: [Exception... "'adblockEMLoad: TypeError: adblockEntry has no properties' when calling method: [nsIDOMEventListener::handleEvent]" nsresult: "0x8057001e (NS_ERROR_XPC_JS_THREW_STRING)" location: "<unknown>" data: no] but appears functional. everything else seems to work again again comment 68 is an unrelated problem caused by changes in Extension Manager's DOM. The exception appears in earlier builds as well. The initial checkin caused bug 292864, bug 292860. (In reply to comment #70) > The initial checkin caused bug 292864, bug 292860. And bug 292871. Also bug 292902 I guess. jst should look too. In thinking about native function hazards more deeply, it's clear that if we have only one bit (0 for scripted, 1 for native), we don't have enough information to decide whether to use the native function object's scope to find its principals. (We do have enough bits to decide how to handle the scripted case, of course; that is easy because scripts carry their own compiler-sealed principals, and cloned function objects have scope-sealed principals that override their prototype's script's principals.) But if the bit is 1, i.e., we have a native frame, we can't just skip it as we've done for ages -- that leaves us open to attacks involving dangerous natives. Nor can we use the native function object's scope-sealed principals, either -- that obvious broke a bunch of XBL-ish stuff yesterday. We need another bit. That bit is the condition (native function is a bound method and this frame is a call of it on a different initial |this| parameter than the native function object's parent [to which it is bound by native code using the JS API]). Both LiveConnect and XPConnect use the JSFUN_BOUND_METHOD flag to force |this| to bind to a reflected method's parent. That's good, it makes methods able to know the type of their |this| (obj in the JS API JSNative signature) parameter statically. But references to dangerous methods, though the methods are |this|-bound to their immutable parent scope objects, may nevertheless be extracted and called on a different |this| param. The JS engine will override that nominal param with the method's parent, and do the call. This patch adds a frame flag, JSFRAME_REBOUND_METHOD, that the JS invocation code sets in this case, and only this case (whether the function object is scripted or native, but that's just an aside). This patch then modifies nsScriptSecurityManager::GetFunctionObjectPrincipal to test, when about to skip a native frame, that the frame is not for a rebound call to a bound method. If it is, then the object principals for the method are returned, instead of the frame being skipped in search of a scripted caller. This defeats all native attacks using evil bound methods. Unbound methods (JS native functions by default are unbound) must be individually protected from misuse. I've done that in the last patch, on the trunk, for eval and new Script. I don't know of other dangerous unbound native methods, but welcome comment. After this patch is tested and goes in (I expect it will not break anything but exploits), we still have to consider the shared vs. split wrapper issue about which this bug is nominally concerned. /be Attachment #182713 - Flags: superreview?(shaver) Attachment #182713 - Flags: review?(caillon) Attachment #182713 - Flags: approval1.8b2+ Comment on attachment 182713 [details] [diff] [review] distinguish rebound method calls from unbound and default-bound calls jst pointed out a flaw that can be fixed -- not sure why my testing didn't show the same hole he saw... new patch in a minute. /be Attachment #182713 - Attachment is obsolete: true Attachment #182713 - Flags: superreview?(shaver) Attachment #182713 - Flags: review?(caillon) Attachment #182713 - Flags: approval1.8b2+ Two comments, of which the last one is mostly me brainstorming and might not be right at all :) Using the native function object's scope-sealed principal really feels like the best solution so it sucks that we apparently can't use that. Would be good to investigate what exactly is hindering this. What I feel a bit uneasy about is that we detect the case that the |this| pointer was changed. When in reality the danger isn't that the |this| pointer is tampered with but that we're tricked into calling a native function without knowing. Could we make it so that when a native function is set as a member we flag the _function_ as REBOUND and then always give the frame calling that function the treatment you are now no matter if the parent was changed or not. Ideally we should really flag the member rather then the function, but I'm not sure if that's possible. shaver can do second sr, I'd be delighted. jst's point was that one of the attacks rebinds the bound method in the same object that it is bound to already, so my parent != thisp check was not enough. This patch checks for a name change in the case where the this parameter is not being changed (is already the bound method's parent). The interdiff -w output is easy to read. I'll attach that next. /be Attachment #182729 - Flags: superreview?(jst) Attachment #182729 - Flags: review?(caillon) Comment on attachment 182729 [details] [diff] [review] fixed version of last patch Fixed, but gdb is misleading me, and for some reason my tests are passing. jst helped a ton and we now see that this patch can't work. The idea is sound, but XPConnect (unlike LiveConnect) does *not* make bound methods for each wrapped native instance. We need a different approach in detail, but the same in general. In the mean time, I believe this is good for LiveConnect safety. I'll test that in another bug (and seek testing help, since FC3 gdb is pretty damn broken). /be Attachment #182729 - Flags: superreview?(jst) Attachment #182729 - Flags: review?(caillon) jst has taken XPCNativeWrapper and rewritten it in C++ with a backward-compatible API, but with extra smarts: it wraps deeply (lazily), and if the second argument is not a string, it is taken as a constructor with which to do an instanceof test (so you can make sure you're getting what you expect). After talking at length today, he and I agreed to move his C++ version into XPConnect. We intend to automate it as a wrapper around XPCWrappedNatives, when chrome is operating on content, with auto-unwrapping when flowing back into a content native method, and with identity/equality ops in JS working as before. More on that soon. /be This isn't done yet, but it works fairly well. This replaces the current XPCNativeWrapper that's written in JS in a backwards compatible way. It does that by looking at the arguments the constructor gets and if all arguments following the first one are strings then it defaults to working like the current one, only it's all dynamic and doesn't care what the string arguments to the constructor are. This new XPCNativeWrapper implementation exposes the wrapped object through a "wrappedJSObject" property (same name that's used with double wrapping in XPConnect today). In addition to that this also does automatic wrapping and unwrapping for XPCNativeWrapper, that is, you can pass an XPCNativeWrapper to any XPCOM method that expects an interface pointer and we'll get the wrapped object and pass it in stead of the XPCNativeWrapper, and also when chrome is accessing content code this patch does automatic wrapping with XPCNativeWrapper. The part that's really lame in this patch is the detection code that figures out if chrome is accessing a content object so that it knows to wrap the result in an XPCNativeWrapper. Brendan's got some ideas on that, the code is in xpcconvert.cpp. The other part that's lame is that the implementation of this new XPCNativeWrapper is far from ideal. It doesn't use getters and setters, and for every wrapping operation if always creates a new wrapper when it could re-use existing ones. We'd need a hash from XPCWrappedNative to XPCNativeWrapper to make that work. Should be pretty easy to do. I'll be out of town (and out of reach, mostly) for the coming week, so this is pretty much where I leave this for others to look at and hopefully continue working on. Oh, and I forgot to mention that I intentionally left a bunch of printf()'s in the code to possibly help show this work or not work. The main point here is to get people trying the patch out. I think I persuaded dveditz to try it. It's working for me so far. /be Attachment #183880 - Flags: review?(shaver) benjamin, can you attach that followup patch we crave to set xpcnativewrappers=yes for Firefox's five (or so) chrome URI prefixes? Thanks again, /be My design notes on the patch: -- someone suggest a place where this should live for good. /be With the patch applied I crash if I open (suite) browser and then mail from the window->mail menu or via ctrl+2. Starting mail from the commandline works fine. The other things missing from the first attempt, besides the crash fix, are: 1. Code to configure the five chrome packages in firefox with bsmedberg's fine new xpcnativewrappers=yes option. 2. A call or calls to JS_FlagSystemObject to mark chrome windows and their descendants as "system". Bed soon, more tomorrow. /be Comment on attachment 183880 [details] [diff] [review] jst's + bsmedberg's + my patch to optimize the former and enable the latter >Index: browser/base/content/browser.js >+ // content.wrappedJSObject.defa...? Nix the comment? >@@ -4411,18 +4407,17 @@ nsContextMenu.prototype = { >- var wrapper = new XPCNativeWrapper(this.link, "href", "baseURI", >- "getAttributeNS()"); >+ var wrapper = this.link; Fix indent? >Index: chrome/src/nsChromeRegistry.cpp >+static PRBool >+CheckFlag(const nsSubstring& aFlag, const nsSubstring& aData, PRBool& aResult) Document what aResult is and what the return value is? And maybe what the function does? > nsChromeRegistry::ProcessManifestBuffer(char *buf, PRInt32 This only changes "content" packages, right? But can't all non-"skin" packages can execute script (looking at nsChromeRegistry::AllowScriptsForPackage here)? More precisely, would it make sense to set all non-"content" stuff in a package to unconditionally require wrapping? >+ xpc->WantXPCNativeWrappers(urlp.get()); If this fails, we should bail out or something. We don't want to be starting if things that expect to be security-checked are not. >Index: js/src/jsdbgapi.c >+JS_GetScriptedCallerFilenameFlags(JSContext *cx, JSStackFrame >+ if (!fp) >+ fp = cx->fp; fp can still be null here. >Index: js/src/jsdbgapi.h Could we document these methods somewhere? I know we don't do it in the JS headers usually; do we have any in-tree API docs on JS? >Index: js/src/xpconnect/src/xpcconvert.cpp >+ printf("Content accessed from chrome, wrapping wrapper in XPCNativeWrapper\n"); Let's stick #ifdef DEBUG_XPCNativeWrapper around these? This is as far as I got so far; more tomorrow. after applying the patches firefox seems to run fine, dhtml tests pass, normal browsing. the dom abuse seems stopped at first sight, but it is still possible chrome to call content when a javascript object is directly passed to chrome - as in sidebar.getInterfaces(m) where "m" is a luser js object. bug 289074 testcases attachment 179946 [details] and attachment 180189 [details] (both abuse navigator.preference) are not fixed by this patch. The other testcases appear to be fixed. At least two of my extensions (Web Developer 0.9.3 and ConQuery 1.5.4) are broken, and break the browser. Most of the window comes up, but bookmarks aren't loaded and various tabbrowser things like filling in the location bar and the security UI don't work. The __proto__ change to browser.js also needs to be made to tabbrowser.js and nsContextMenu.js. At startup I get an error about a redeclaration of XPCNativeWrapper. Is that expected since it's now implemented in xpconnect itself? "JavaScript error: chrome://global/content/XPCNativeWrapper.js, line 56: redeclaration of const XPCNativeWrapper" Similar to the changes made to browser.js, I think we want this. Apply on top of previous patches. The earlier nsChromeRegistry patch only reads the xpcnativewrappers from app-chrome.manifest, but doesn't put that notation there. You can't even hand-edit for testing because that file gets blown away every start (in a debug build). This bit adds contents.rdf processing to the chrome registry, and xpcNativeWrappers="true" to most of the Firefox content rdf-manifest files. More will have to be done to cover the suite. Adding this does not fix the navigator.preference exploit. I'm not awake enough to decide if that means this patch isn't working or if that's just a different vulnerability. Please do not use "the rest of the chrome registry patch"... I have the *.manifest build automation in my tree and am testing now. This patch is ready-to-land by itself; it registers all the ffox content packages using .manifests instead of contents.rdf and fixes a logic error in my last patch. Please make sure to NOT check in the xpfe changes from the "get rid of more __proto__" patch. The chrome registry changes need to be ported to suite first; at the moment (as of last patches posted in this bug) suite is NOT doing auto-wrapping. (In reply to comment #95) > at the moment ... suite is NOT doing auto-wrapping. It really ought to, eh? We don't want to ship 1.8b2 with the mfsa2005-41 holes anymore than the Deer Park alpha. If we're actually doing a suite release based on Gecko 1.8b2 (which it's not clear to me that we are, given the current suite situation), then yes, we need to make similar chrome registry changes there. Neil, do you think you could do that? Or know someone who could? fwiw, with comment 82 and comment 87 on winxp, the jsshell crashes on every test in the js library, firefox won't start with a specified profile via -P, and it appears to not be able to load chrome urls from the command line. Comment on attachment 183880 [details] [diff] [review] jst's + bsmedberg's + my patch to optimize the former and enable the latter >Index: js/src/xpconnect/src/XPCNativeWrapper.cpp >+ReWrapIfDeepWrapper(JSContext *cx, JSObject *obj, jsval v, jsval >+ // Re-wrap non-primitiv values if this is a deep wrapper (deep "non-primitive" >+ if (!rvalWrapper) { >+ return ThrowException(NS_ERROR_UNEXPECTED, cx); NS_ERROR_OUT_OF_MEMORY seems to make more sense. >+XPC_NW_GetOrSetProperty(JSContext *cx, JSObject *obj, jsval id, >+ // Be paranoid, don't let people use this as another objects "object's" >+ printf("Mapping wrapper[%d] to wrapper.item(%d)\n", JSVAL_TO_INT(id), #ifdef DEBUG_XPCNativeWrapper >+ printf("Calling setter for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); Same. >+ printf("Calling getter for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); Same. >+XPC_NW_NewResolve(JSContext *cx, JSObject *obj, jsval id, uintN >+ // Be paranoid, don't let people use this as another objects "object's" Probably file a followup on the "XXX make sure this doesn't get collected" comment? >+ printf("Wrapping function object for for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); DEBUG_XPCNativeWrapper >+ // member. This new functions parent will be the method to call from "function's" >+XPC_NW_Construct(JSContext *cx, JSObject *obj, uintN argc, jsval >+ printf("Wrapping already wrapped object\n"); DEBUG_.... >+ printf(" %s\n", ::JS_GetStringBytes(JSVAL_TO_STRING(argv[i]))); Same. >+XPC_NW_toString(JSContext *cx, JSObject *obj, uintN argc, jsval *argv, >+ // Be paranoid, don't let people use this as another objects "object's" >+ resultString.Append(NS_REINTERPRET_CAST(jschar *, I'm pretty sure you need to cast to PRUnichar* here to avoid build bustage on some platforms... >+XPCNativeWrapper::AttachNewConstructorObject(XPCCallContext &ccx, >+ JSObject *class_obj = >+ ::JS_InitClass(ccx, aGlobalObject, nsnull, ... >+ NS_ASSERTION(class_obj, "Can't initialize XPCNW class."); That could be null on OOM or whatever. Change to warning, esp. since we bail out next line if it's null. With those utter nits picked and the chrome registry changes I asked for, sr=bzbarsky Attachment #183880 - Flags: superreview?(bzbarsky) → superreview+ Comment on attachment 183880 [details] [diff] [review] jst's + bsmedberg's + my patch to optimize the former and enable the latter >- reference = focusedWindow.__proto__.getSelection.call(focusedWindow); >+ reference = focusedWindow.getSelection.call(focusedWindow); Why not just reference = focusedWindow.getSelection(); ? >+ /* Objects always require "deep locking", i.e., rooting by value. */ >+ if (lock || type == GCX_OBJECT) { >+ if (lock == 0 || type != GCX_OBJECT) { So in this case, if I have a non-object which is already locked, and I lock it again, I will have |lock| but type != GCX_OBJECT. So I'll fall into this code, which nets out to: >+ if (!rt->gcLocksHash) { >+ rt->gcLocksHash = >+ JS_NewDHashTable(JS_DHashGetStubOps(), NULL, >+ sizeof(JSGCLockHashEntry), >+ GC_ROOTS_SIZE); >+ if (!rt->gcLocksHash) > goto error; > } else { >+#ifdef DEBUG >+ JSDHashEntryHdr *hdr = > JS_DHashTableOperate(rt->gcLocksHash, thing, > JS_DHASH_LOOKUP); >+ JS_ASSERT(JS_DHASH_ENTRY_IS_FREE(hdr)); >+#endif > } >+ lhe = (JSGCLockHashEntry *) >+ JS_DHashTableOperate(rt->gcLocksHash, thing, JS_DHASH_ADD); >+ if (!lhe) >+ goto error; >+ lhe->thing = thing; >+ lhe->count = 1; > } else { and end up with a count of 1 on the lhe. Then if we do a single Unlock on the same non-object, we'll hit a count of 0 and remove it from the hash table, when we should still have an entry for it. Or we'll blow the JS_DHASH_ENTRY_IS_FREE(hdr) assertion instead if the gcLocksHash is already allocated and we're locking a second time. I must be missing something here. Spell it out for me? > JSBool > js_UnlockGCThingRT(JSRuntime *rt, void *thing) > { >+ uint8 *flagp, flags; > JSGCLockHashEntry *lhe; > > if (!thing) > return JS_TRUE; > >+ flagp = js_GetGCThingFlags(thing); > JS_LOCK_GC(rt); >+ flags = *flagp; > >+ if (flags & GCF_LOCK) { >+ lhe = (JSGCLockHashEntry *) >+ JS_DHashTableOperate(rt->gcLocksHash, thing, JS_DHASH_LOOKUP); Are we guaranteed that gcLocksHash is allocated by this point? If we only did one lock on a non-object, we'll just have flagged it as GCF_LOCK, and never had to allocate the hash, I think. (Update: mconnor just crashed here, I think.) >+ * Try to inherit flags by prefix. We assume there won't be more than a >+ * few (dozen! ;-) prefixes, so linear search is tolerable. >+ * XXXbe every time I've assumed that in the JS engine, I've been wrong! I didn't mentally execute the code, so forgive me if this is a naive question, but: what keeps us from having a prefix entry per web-loaded script? Is it simply that we will tend to have one script prefix per loaded page, and trust our other limiting factors to keep us to a few dozen? Do we care about the DOS attack that comes from a bunch of <script src="data:a = <random>"></script> entries? (It's always been fun to watch you recover from those O(small-enough) assumptions, though!) >+/* void wantXPCNativeWrappers (in string filenamePrefix); */ >+NS_IMETHODIMP >+nsXPConnect::WantXPCNativeWrappers(const char *aFilenamePrefix) This function seems strangely named to me, in that we want wrappers when manipulating objects that _don't_ come from a system prefix, and Want here makes me think that we want wrappers "for" the provided prefix, rather than that the script running in this prefix wants (Expects?) to get wrappers when reaching out. The IDL doc comment doesn't help much either, since it doesn't really explain which side of the line we're "matching" for. nsXPConnect::addSystemPrefix ? >+ if (wrapper->GetScope() != xpcscope) >+ { >+ // Cross scope access detected. Check if chrome code >+ // is accessing non-chrome objects, and if so, wrap >+ // the XPCWrappedNative with a XPCNativeWrapper to >+ // prevent userdefined properties from shadowing DOM >+ // properties from chrome code. >+ >+ uintN flags = JS_GetScriptedCallerFilenameFlags(ccx, nsnull); >+ if((flags & JSFILENAME_SYSTEM) && >+ !JS_IsSystemObject(ccx, wrapper->GetFlatJSObject())) This looks nice, but I can't figure out where the system flag gets set on objects. I thought it might come from being created while a system-prefixed script was running, but I don't see that flag inheritance, and I don't see any calls to FlagSystemObject anywhere else. >+ printf("Calling setter for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); >+ printf("Calling getter for %s\n", >+ ::JS_GetStringBytes(JSVAL_TO_STRING(id))); #ifdef DEBUG_xpcwrappednative or some such, pls. >+ // Be paranoid, don't let people use this as another objects "object's" >+ } else if (member->IsAttribute()) { >+ // An attribute is being resolved. Define the property, the value >+ // will be dealt with in the get/set hooks. >+ >+ // XXX: We should really just have getters and setters for >+ // properties and not do it the hard and expensive way. Agreed; is there a bug on file? >+ // XXX: make sure this doesn't get collected before it's hooked in >+ // someplace where it's kept around. Mmm, yes. File a bug? r-, I think the gcLockHash issues are probably real pain, and I'd like to understand the GCF_SYSTEM stuff better, even if it isn't really a bug. Attachment #183880 - Flags: superreview+ Attachment #183880 - Flags: review?(shaver) Attachment #183880 - Flags: review- Comment on attachment 183905 [details] [diff] [review] The rest of the chrome registry patch bsmedberg says not to use this one. Attachment #183905 - Attachment is obsolete: true bz: I've fixed those nits, mostly jsts unique possessive unpunctuation style ;-). shaver: thanks, I was sleepy during that lock bit reclamation. But you then got sleepy too -- the prefix list is not one per script filename, but one per call to the new JS_FlagScriptFilenamePrefix API (see the wiki), and that is governed by chrome -- so there's no DOS-from-web-content hazard. Still, the list could get large if extensions go crazy, but that would be a signal to flip the default to xpcnativewrappers=yes. I'll have a better all-in-one patch shortly. /be It turns out that attachment 183912 [details] [diff] [review] messes up Thunderbird more than a little bit because thunderbird repackages chrome in its own inimitable way and uses a hand-written installed-chrome.txt. I've been planning on getting rid of that chrome repackaging for a while, and I've finally done it. win32 installer continues to work correctly as well. Scott, you don't need to review the browser/* or chrome/* bits, just the mail/* and various xpfe/* changes which affect tbird. Attachment #183912 - Attachment is obsolete: true Attachment #183946 - Flags: review?(mscott) (In reply to comment #97) >Neil, do you think you could do that? Or know someone who could? It wouldn't be easy to use with the provided xpconnect model. Toolkit converts any contents.rdf files to .manifest files; these are then parsed on every launch and any xpcnativewrapper flag is used to notify xpconnect. Suite on the other hand simply maintains two RDF data sources, one for the install data source and one for the current profile data source. It would have to manually scrape the data sources for xpcnativewrapper flags on every profile switch. Fortunately we can probably hack something in at the end of AddToCompositeDataSource(). Note that there's no way to turn the flag off, so if you switch to a profile with an installation of an extension that needs them turned off you're out of luck... (In reply to comment #96) > (In reply to comment #95) > > at the moment ... suite is NOT doing auto-wrapping. > > It really ought to, eh? We don't want to ship 1.8b2 with the mfsa2005-41 holes > anymore than the Deer Park alpha. Who is "we"? Suite releases are being done by volunteers now, not by mozilla.org staff or MF employees. My brain hurts enough without having to worry about the suite, so I am not going to worry about it. It's someone else's turn... they're "it" :-/. /be I give up on doing an all-in-one patch. I'll attach a patch to the infrastructure and let bsmedberg track the shaver-induced nsIXPConnect method name change, and attach a follow-on patch. I'm also not going to worry about rolling up dveditz' __proto__-policing patch, although I've applied it locally. /be This does include bsmedberg's chrome code changes, but not the manifest changes all over the place (but I've got those in my tree, I think). I can't debug -- XPCOM autoreg runs *every* time and horks gdb badly. Anyone else see this re-registration bug? What's the bug # tracking it? Mainly this needs testing and debugging help. Boris is gonna do that, but more are welcome to join in. /be Attachment #183880 - Attachment is obsolete: true Attachment #183887 - Attachment is obsolete: true Patch in comment 103 makes a suite build die with: error: file '../../../mozilla/toolkit/components/cookie/content/contents.rdf' doesn't exist at ../../../mozilla/config/make-jars.pl line 428, <STDIN> line 207. Given that some of our tinderbox tests only run on suite tinderboxen, we might want to not check that in as-is... bsmedberg: bz and I both see XPCOM registration every single time, in our debug firefox builds. I also crash trying to run with -profileManager. No time to debug that. I use -P test (and my test profile has worked well for trunk builds in general), but when darin said what he was using, -profile <name>, I tried that and got WARNING: NS_ENSURE_TRUE(NS_SUCCEEDED(rv)) failed, file nsAppRunner.cpp, line 1325 followed by app exit. Is there a bug filed on any of this? /be Firefox starts up fine, and this latest patch fixes the navigator.preference testcase again. Don't have extensions in this profile, will have to quit and try those separately. Web Developer and ConQuery extensions still mess up browser chrome. Web Developer actually appears to work (haven't tried all features), but having it enabled seems to interrupt browser initialization. Bookmarks aren't loaded, securityUI isn't initialized, doesn't load the start page, etc. Menus and toolbar are there, though. The very first time I ran with this patch I got a PR_assert in the JS_AQUIRE_LOCK() in js_SaveScriptFilenameRT(), ultimately called from bsmedberg's new chrome stuff but everything looked kosher on his end. Haven't been able to reproduce that. This deals with the zero-contexts-in-runtime condition bz saw during the double (!) XPCOM component registration madness that afflicts both his and my build at startup, every time. For him, this led to prefix losses that were not covered up by later chrome re-registration. Something about his .mozconfig differs from mine because I didn't note any prefix lossage. Anyway, this patch keeps script filename prefixes added by the new jsdbgapi.h entry point JS_FlagScriptFilenamePrefix around till runtime destruction. If we need to unload extensions, switch profiles, or otherwise unflag prefixes, we can add the obvious counterpart API. /be OK, I tracked down the issue I was seeing. In SaveScriptFilename, when we're handling the |flags != 0| case, we screw up any time a filename that is strictly shorter than all already-inserted filenames is inserted, as long as it's the third or later one. Proof: sfp will get set to non-null any time head->next != head (so we have more than one thing in the list already). The only time it's set to null after this point is if |sfp->length <= length|, which will never happen if the new filename is shorter than all existing ones. So if we had > 1 filenames inserted and insert a new short one, we'll end up not actually adding it to the list and instead just munging the flags of whatever the last thing in the list was some more. I added an |sfp = NULL;| at the very end of the loop, and now I actually get XPCNativeWrappers created when I open the context menu. On a related note, it seems to me that if there is only one thing in the list we'll never enter the loop, so if someone adds it again we'll have two entries for it in the list. (In reply to comment #113) > sfp will get set to non-null any time head->next != head (so we have more > than one thing in the list already). I pointed out that this is a list with one or more entries, not two or more. Then I realized bz and I were looking at a dump of the circular list where the list head (a JSCList) was being mis-cast to ScriptFilenamePrefix. So we were chasing a phantom. The real bug here, which we're hunting now, is that bz gets no prefix for chrome://browser/ -- but I do. /be (In reply to comment #114) > The real bug here, which we're hunting now, is that bz gets no prefix Duh, bz pointed out the real bug -- terminating that for loop by reaching the end of the circular list (wrapping to the header) without nulling sfp. This patch fixes that bug, and optimizes/cleans-up XPCNativeWrapper.cpp slightly. Getting close. Interdiff of last patch and this one next. /be For bc and others following the bouncing patch-ball. /be JS_FlagScriptFilenamePrefix(JSRuntime * 0x0215a348, const char * 0x0012ee64, unsigned long 1) line 1299 + 17 bytes nsXPConnect::FlagSystemFilenamePrefix(nsXPConnect * const 0x020f3690, const char * 0x0012ee64) line 1287 + 16 bytes nsChromeRegistry::ProcessManifestBuffer(char * 0x02150830, int 162, nsILocalFile * 0x020f25e8, int 0) line 1920 nsChromeRegistry::ProcessManifest(nsILocalFile * 0x020f25e8, int 0) line 1780 + 24 bytes nsChromeRegistry::CheckForNewChrome(nsChromeRegistry * const 0x020ef4c8) line 1141 + 19 bytes nsChromeRegistry::Init() line 541 nsChromeRegistryConstructor(nsISupports * 0x00000000, const nsID & {...}, void * * 0x0012f404) line 50 + 128 bytes nsGenericFactory::CreateInstance(nsGenericFactory * const 0x020ef480, nsISupports * 0x00000000, const nsID & {...}, void * * 0x0012f404) line 82 + 21 bytes nsComponentManagerImpl::CreateInstanceByContractID(nsComponentManagerImpl * const 0x020d1ac0, const char * 0x013cc9a4, nsISupports * 0x00000000, const nsID & {...}, void * * 0x0012f404) line 1987 + 24 bytes nsComponentManagerImpl::GetServiceByContractID(nsComponentManagerImpl * const 0x020d1ac4, const char * 0x013cc9a4, const nsID & {...}, void * * 0x0012f470) line 2414 + 50 bytes CallGetService(const char * 0x013cc9a4, const nsID & {...}, void * * 0x0012f470) line 95 nsGetServiceByContractID::operator()(const nsID & {...}, void * * 0x0012f470) line 278 + 19 bytes nsCOMPtr<nsIToolkitChromeRegistry>::assign_from_gs_contractid(nsGetServiceByContractID {...}, const nsID & {...}) line 1272 + 17 bytes nsCOMPtr<nsIToolkitChromeRegistry>::nsCOMPtr<nsIToolkitChromeRegistry>(nsGetServiceByContractID {...}) line 678 ScopedXPCOMStartup::SetWindowCreator(nsINativeAppSupport * 0x01aaf6f8) line 651 ProfileLockedDialog(nsILocalFile * 0x01aa7618, nsILocalFile * 0x01aa7618, nsIProfileUnlocker * 0x00000000, nsINativeAppSupport * 0x01aaf6f8, nsIProfileLock * * 0x0012fac8) line 1095 + 12 bytes SelectProfile(nsIProfileLock * * 0x0012fac8, nsINativeAppSupport * 0x01aaf6f8, int * 0x0012fab4) line 1472 + 40 bytes XRE_main(int 1, char * * 0x01aa7028, const nsXREAppData * 0x0123901c kAppData) line 1823 + 51 bytes main(int 1, char * * 0x01aa7028) line 61 + 18 bytes mainCRTStartup() line 338 + 17 bytes KERNEL32! 7c816d4f() in PR_Lock: + lock 0x00000000 me->flags 136 in js_SaveScriptFilenameRT: + filename 0x0012ee64 "chrome://browser/" flags 1 + rt 0x0215a348 rt->scriptFilenameTableLock 0x00000000 - sfe 0x0012ee64 - next 0x6f726863 next CXX0030: Error: expression cannot be evaluated keyHash CXX0030: Error: expression cannot be evaluated key CXX0030: Error: expression cannot be evaluated value CXX0030: Error: expression cannot be evaluated keyHash 792356205 key 0x6f72622f flags 1919251319 mark 47 '/' + filename 0x0012ee75 "" FWIW, the flags are -P <name> or -profile <path>... and I'm pretty sure you have to create the <path> folder before you call -profile <path>. (In reply to comment #117) > Thanks, dveditz saw this too (bz and I do not). Perils of early init. Fixing for the next patch. Easy inline interdiff preview below. /be ----- cut here ----- diff -u js/src/jsscript.c js/src/jsscript.c --- js/src/jsscript.c 19 May 2005 05:56:38 -0000 +++ js/src/jsscript.c 19 May 2005 16:45:51 -0000 @@ -1112,11 +1112,16 @@ { ScriptFilenameEntry *sfe; + /* This may be called very early, via the jsdbgapi.h entry point. */ + if (!rt->scriptFilenameTable && !js_InitRuntimeScriptState(rt)) + return NULL; + JS_ACQUIRE_LOCK(rt->scriptFilenameTableLock); sfe = SaveScriptFilename(rt, filename, flags); JS_RELEASE_LOCK(rt->scriptFilenameTableLock); if (!sfe) return NULL; + return sfe->filename; } diff -u js/src/xpconnect/src/xpcprivate.h js/src/xpconnect/src/xpcprivate.h --- js/src/xpconnect/src/xpcprivate.h 19 May 2005 05:56:47 -0000 +++ js/src/xpconnect/src/xpcprivate.h 19 May 2005 16:46:01 -0000 @@ -139,7 +139,7 @@ #define DEBUG_xpc_hacker #endif -#if defined(DEBUG_brendan) || defined(DEBUG_jst) +#if defined(DEBUG_brendan) || defined(DEBUG_bzbarsky) || defined(DEBUG_jst) #define DEBUG_XPCNativeWrapper 1 #endif I looked into the web developer extension issues. First off, the security UI issue is not caused by this patch. It's caused by bug 294815 and has been around on tip for a while now (since April 11). That exception is what breaks every single other thing listed as going wrong in comment 111 (as in, when I commented out the throwing line in browser.js I got bookmarks, start page, etc). The line in question is and since it only runs when the securityUI is null (due to bug 294815), it's just obviously wrong.... mconnor says we have a bug on that already somewhere, with patch even. Brendan, the reason we saw wrapping with the webdeveloper extension is that it gets its documents like so: const documentList = webdeveloper_getDocuments(getBrowser(). browsers[mainTabBox.selectedIndex].contentWindow, new Array()); The thing is, getting the contentWindow property of a <xul:browser> goes through XBL, so ccx.mJSContext->fp->down->script->filename is "chrome://global/content/bindings/browser.xml" (and ccx.mJSContext->fp->script is null as usual). If I look at ccx.mJSContext->fp->down->down->script->filename then it's "chrome://webdeveloper/content/css.js" as expected, but that doesn't help us, of course. This is likely to bite other extensions too, I would bet, since getting the window for the "current tab" is pretty common. Perhaps the answer is to not set the "want wrappers" flag on chrome://global/ for now? What lives in that package anyway? The issue we were seeing with webdeveloper.js failing on "headElementList[0] has no properties" is due to the following code in XPC_NW_GetOrSetProperty: if (!member->IsAttribute()) { // Getting the value of a method. Just return and let the value // from XPC_NW_NewResolve() be used. return JS_TRUE; } The problem is that in this case |member| is "item()", since |id| is an integer. And "item()" is a function, not an attribute. So we bailed. At the same time, XPC_NW_NewResolve expects us to handle the "|id| is an integer" case in this code. Changing this block to say: if (!member->IsAttribute() && methodName == id) { // Getting the value of a method. Just return and let the value // from XPC_NW_NewResolve() be used. Note that if methodName != id // then we fully expect that |member| is not an attribute and we need // to keep going and handle this case below. return JS_TRUE; } makes things happy over here. > Perhaps the answer is to not set the "want wrappers" flag on > chrome://global/ for now? What lives in that package anyway? tabbrowser.xml and contentAreaUtils.js do, and both have had security issues in the past. ViewSource and PrintPreview are also, dunno if that's any kind of problem (if so it's probably currently a problem). One other option if a lot of extensions break is to hack the contentWindow (and contentDocument?) props on these bindings (browser/tabbrowser) to return an unwrapped object. I just tried that, and it does work (give the unwrapped object to the webdeveloper extension), but even code that wants wrapping gets an unwrapped object (since we don't go back out of JS when returning here). So we'd need to audit all calling code for these two getters and use XPCWrappedNative manually for them in Mozilla code... Shouldn't be too bad, really, if we have to do this. One interesting thing I ran into -- the context menu never sees the content wrappers for some reason, just chrome wrappers for the nodes. Not sure why, really. But in any case, when the context menu comes up this.target.ownerDocument is not an XPCNativeWrapper and isn't equal to the wrappedJSObject of getBrowser().contentWindow.document (which _is_ wrapped). (In reply to comment #119) > > Thanks, dveditz saw this too (bz and I do not). Perils of early init. Fixing > for the next patch. Easy inline interdiff preview below. Ok, I start up without crashing now. On the first run with firefox -P Debug and set MOZ_NO_REMOTE=1 it says that -P is not recognized but appeared to select the profile anyway. On subsequent starts, the -P message does not appear. So I talked with bz about the problem in comment 121 (.contentWindow etc being a wrapper when accessed by an extension). I wonder if making <browser>s have an idl-interface with the contentWindow and contentDocument as attributes would solve the problem. Then we should unwrap into a native object and then rewrap without the XPCNativeWrapper when extensions access it. The interface would still be implemented in xbl-js of course, we'd just add it to the 'implements' list. (In reply to comment #120) > The line in question is > and > since it only runs when the securityUI is null (due to bug 294815), it's just > obviously wrong.... mconnor says we have a bug on that already somewhere, with > patch even. bug 292604 I thought about it some more, and I don't think the XBL-idl approach will work. The basic problem is that the call from extension JS into the XBL JS will just be a JS call. No XPConnect involved unless the XBL-bound thing is passed to a native method or something like that.). Does that sound reasonable? If so, I think I'll try implementing it; I'm having no other bright ideas. (In reply to comment #128) > > mconnor says we have a bug on that already somewhere, with patch even. > bug 292604 I applied that patch and the securityUI exception went away but my symptoms didn't clear up. Still no bookmarks or start page if Web Developer is enabled. Separate error: Clicking on an error link from the Javascript console opens viewsource but does not take you to the line containing the error. You get the error "pre has no properties" from chrome://global/content/viewSource.js line 292 Will it really be a js-to-js call even if it's declared as an interface? Won't it just look like any other interface to XPConnect, which once we call it happen to call into a XPConnect-created vtable? Or are we 'clever' enough to notice when js calls a js-implemented interface on an XPCOM object and optimize that? (In reply to comment #131) > Will it really be a js-to-js call even if it's declared as an interface? JS-to-JS calls do not involve natives (whether wrapped or double-wrapped by XPCNativeWrapper around the usual XPCWrappedNative [names suck, don't change them]). (In reply to comment #129) >). Hmm, why wouldn't we make XPCNativeWrapper do this automagically, all the time? When it is called from "system" chrome, it does its thing, but when called from non-"system" chrome, it simply forwards get/set property calls and other hooks to its wrappedJSObject. /be > Hmm, why wouldn't we make XPCNativeWrapper do this automagically, all the time? Depends on how expensive the "is caller system?" check is. I was trying to minimize the number of such checks, I guess. I agree that if this is not an issue, then it's simpler to just do this in XPCNativeWrapper. (In reply to comment #130) > I applied that patch and the securityUI exception went away but my symptoms > didn't clear up. Hmm... odd... they cleared up for me when I did basically that... > Separate error: Clicking on an error link from the Javascript console This is fixed by the change described in comment 122. This is looking good for b2/fx1.1a1. /be Attachment #183982 - Attachment is obsolete: true Attachment #184058 - Flags: review?(bzbarsky) Attachment #184058 - Flags: approval1.8b2+ Comment on attachment 184058 [details] [diff] [review] infrastructure patch, v5 >Index: browser/base/content/browser.js > function checkForDirectoryListing() >- content.defaultCharacterset = getMarkupDocumentViewer().defaultCharacterSet; >+ content.wrappedJSObject.defaultCharacterset = >+ getMarkupDocumentViewer().defaultCharacterSet; This change is good. The rest of the changes in this file shouldn't go in until we're flagging our UI as wanting wrappers. >Index: chrome/src/nsChromeRegistry.cpp I already made some comments on this part; I'd really like them to be addressed, but Brendan's not the one to do it, seems to me. Benjamin, could you fix that stuff up? >Index: js/src/xpconnect/src/xpcwrappednative.cpp Do we need to worry about updating the "system" flag when changing scopes? What about wrapper reparenting? Probably followup-bug fodder here. Other remaining followups: 1) Sort out whether we need to do something for contentDocument/Window. Need to test adblock and friends. 2) Sort out the scope thing with context menus I ran into (working on this). r=bzbarsky Attachment #184058 - Flags: review?(bzbarsky) → review+ I checked in just the js and dom changes, plus the gutting of the XPCNativeWrapper.js files. It's up to bsmedberg to land the rest, to make this patch actually auto-wrap content when accessed from app chrome. /be Brief summary of problem: To find the right XPC scope to look for a wrapper in, we get the parent object and get the XPC scope from that. But in this case the parent got a XPCNativeWrapper stuck on it, so we were coming our with the wrong scope. So we created wrappers for the target node, etc all in the XPC scope of the chrome window instead of in the scope of the content window. All the patch does is fix up the "get the XPC scope from that" step to know about XPCNativeWrapper Attachment #184070 - Attachment is obsolete: true Attachment #184071 - Flags: superreview?(brendan) Attachment #184071 - Flags: review?(brendan) One other comment on the chrome reg changes, in addition to my other ones. What is + entry->flags |= PackageEntry::XPCNATIVEWRAPPERS; actually doing? I'm not seeing that flag tested for anywhere... Comment on attachment 184071 [details] [diff] [review] Same, but with right include. r/sr/a=me, thanks -- bz is a tough patch's best friend. /be Attachment #184071 - Flags: superreview?(brendan) Attachment #184071 - Flags: superreview+ Attachment #184071 - Flags: review?(brendan) Attachment #184071 - Flags: review+ Attachment #184071 - Flags: approval1.8b2+ Comment on attachment 183946 [details] [diff] [review] Stop repackaging tbird chrome, and use the same flat manifests as the rest of the world uses [checked in] r=mscott for the mail, mailnews and toolkit changes. I'm doing this under duress as I think it will take a couple days at least to shake these changes out for Thunderbird which means I won't be able to release alpha one for 1.1 when Firefox is ready (if it is ready tomorrow). Also, make sure you tweak the installer to remove qute.jar and messenger.jar. Also, I'm assuming you'll back me up when I start adding jar.mn ifdefs to xpfe, editor and toolkit to stop packaging files that my repackaging work was avoiding for me. Once this lands I'll start going through the JAR files by hand to identify the new files that we are building with that we weren't before so e can take them back out. I"d like to do that before we land the patch but I understand the pressures to get this in for Firefox supercede that. Attachment #183946 - Flags: review?(mscott) → review+ I wanted to say that if we need a day or three to get this right, that's ok -- the state of the trunk is such that we can't predict Firefox and Thunderbird alpha 1 will be on the same day, although I agree they should have about the same versions of common code. I also wanted to back mscott's position that we should support "minimal linking" in the sense that good compiled code linkers do, but for chrome packaging. We shouldn't require Thunderbird to pick up pieces it doesn't want. But if there is a "what's in the common code platform" issue underlying the surface conflict here, let's have that out in a separate venue. /be (In reply to comment #144) > Ah, I noticed that too, and filed bug 294893 for it. Are you / is anyone sure that this checkin is causing it? Comment on attachment 183946 [details] [diff] [review] Stop repackaging tbird chrome, and use the same flat manifests as the rest of the world uses [checked in] This is checked in with an additional fix of other-licenses/branding and with some additional comments to match bz's review. Attachment #183946 - Attachment description: Stop repackaging tbird chrome, and use the same flat manifests as the rest of the world uses. → Stop repackaging tbird chrome, and use the same flat manifests as the rest of the world uses [checked in] Attachment #184112 - Flags: superreview?(bzbarsky) Attachment #184112 - Flags: review?(bzbarsky) Attachment #184112 - Flags: approval1.8b2- This is probably already reviewed, I'm just looking for a sanity check. /be Attachment #184113 - Flags: superreview?(bzbarsky) Attachment #184113 - Flags: review?(benjamin) Attachment #184113 - Flags: approval1.8b2+ Attachment #184112 - Flags: superreview?(bzbarsky) Attachment #184112 - Flags: superreview+ Attachment #184112 - Flags: review?(bzbarsky) Attachment #184112 - Flags: review+ Comment on attachment 184113 [details] [diff] [review] cleanup/fixup patch for various chrome files sr=bzbarsky, but I'm going to debug why this stuff broke things, exactly. All these changes should be no-ops except the first hunk. Attachment #184113 - Flags: superreview?(bzbarsky) → superreview+ Attachment #184112 - Attachment description: Move xmlprettyprint to the "global" package, rev. 1 → Move xmlprettyprint to the "global" package, rev. 1 [checked in] Attachment #184112 - Flags: approval1.8b2? Ah, the following line breaks: var searchStr = focusedWindow.__proto__.getSelection.call(focusedWindow); since there proto of the XPCNativeWrapper has no getSelection or anything on it. I guess that's more or less expected. Two more issues that we found: 1) We can end up with an XPCWrappedNative as the __parent__ of an XPCNativeWrapper. This is wrong. I'll post a patch to fix.. Brendan, I think this is much cleaner than fixing every single PreCreate method... Attachment #184115 - Flags: superreview?(brendan) Attachment #184115 - Flags: review?(brendan) Comment on attachment 184115 [details] [diff] [review] Fix parenting r+sr+a=me with conforming brace and if( style. /be Attachment #184115 - Flags: superreview?(brendan) Attachment #184115 - Flags: superreview+ Attachment #184115 - Flags: review?(brendan) Attachment #184115 - Flags: review+ Attachment #184115 - Flags: approval1.8b2+ (In reply to comment #151) > Two more issues that we found: > > 1) We can end up with an XPCWrappedNative as the __parent__ of an > XPCNativeWrapper. This is wrong. I'll post a patch to fix. It's easy to do, and here bz did indeed transpose XPCWrappedNative and XPCNativeWrapper in this sentence. Let's say it again, with shorter names: a wn should never have an nw as its parent. >. I believe the nw linkage up the tree, via __parent__ (and parentNode, but that's automated for the deep nw case) should mirror the wn up-linkage. The "down" linkage is isomorphic already for the deep nw case, which is the main point of this automation layer. At the top of the __parent__-linked ancestor line is a nw wrapping a content window, and its object principal should be the same as the content window's. To make this work, we either (a) teach caps about XPCNativeWrappers; or (b) make XPCNativeWrapper objects have private nsISupports that can QI in the way that caps requires to find object principals. Thoughts on the last choice? /be After building with these changes, I am no longer able to: 1) Open the account Manager 2) Open the Filter Dialog 3) Open the Junk Mail Dialog I'm not seeing any JS errors or console messages. Could it be related to the security fix? I'll keep digging. I also removed some ThrowException calls that were redundant (when a fallible JS API entry point returns false or null, an exception has already been thrown, or an OOM error reported). /be Attachment #184129 - Flags: superreview?(bzbarsky) Attachment #184129 - Flags: review?(bzbarsky) Attachment #184129 - Flags: approval1.8b2+ Comment on attachment 184129 [details] [diff] [review] XPCNativeWrapper.cpp patch to make deep wrapper __parent__ also deep >Index: XPCNativeWrapper.cpp >+ // JS_NewObject already thread (or reported OOM). s/thread/threw/ r+sr=bzbarsky Attachment #184129 - Flags: superreview?(bzbarsky) Attachment #184129 - Flags: superreview+ Attachment #184129 - Flags: review?(bzbarsky) Attachment #184129 - Flags: review+ My patches at attachment 184113 [details] [diff] [review] and attachment 184129 [details] [diff] [review] are checked in. /be Since the private of our XPCNativeWrapper's JSObject is an XPCWrappedNative anyway, we may as well flag ourselves as JSCLASS_PRIVATE_IS_NSISUPPORTS. That removes the need for the GetScopeOfObject() hack, fixes object principals, and likely some other places in the code that look for the natives for a JSObject. Attachment #184130 - Flags: superreview?(brendan) Attachment #184130 - Flags: review?(brendan) Comment on attachment 184130 [details] [diff] [review] Object principal fixup BRILLIANT! (best Basil Fawlty voice ;-). /be Attachment #184130 - Flags: superreview?(brendan) Attachment #184130 - Flags: superreview+ Attachment #184130 - Flags: review?(brendan) Attachment #184130 - Flags: review+ Attachment #184130 - Flags: approval1.8b2+ I'm also no longer able to switch folders in the folder pane. I keep getting a JS error saying "Component is not Available" when we call: GetMessagePaneFrame().location Got as far as listing the URL prefixs that want wrappers. Also needs chrome:xpcNativeWrappers="true" added to a few contents.rdf files. the original change to mail\base\jar.mn removed (accidentally?) three files that Thunderbird needs to run. I've added those back in and I also needed to register an additional chrome package to avoid errors complaining about not being able to find contextHelp.js. This fixes the problems with: 1) dialogs in thunderbird not opening. Still having problems with: 1) start page no longer loads in the message pane's iframe. 2) unable to switch folders, JS exception saying Component is not Available. Attachment #184139 - Attachment is obsolete: true Attachment #184151 - Flags: superreview?(brendan) Attachment #184151 - Flags: review?(benjamin) Neil, don't we need to flag xpfe/communicator as needing wrappers too? Also, I'd use NS_ARRAY_LENGTH() instead of sizeof() - 1. (In reply to comment #166) > Also, I'd use NS_ARRAY_LENGTH() instead of sizeof() - 1. these two have different meanings... Comment on attachment 184151 [details] [diff] [review] suite changes checked in including communicator sr=bzbarsky if the communicator package is also marked as wanting wrappers. Attachment #184151 - Flags: superreview?(brendan) → superreview+ Comment on attachment 184151 [details] [diff] [review] suite changes checked in including communicator Gets this security stuff working in Suite builds. Local version of patch contains update for review comments. Attachment #184151 - Flags: approval1.8b2? Attachment #184151 - Attachment description: suite changes → suite changes checked in including communicator Sorry Neil, but your last Checkin for this Bug has caused another regression. When I try to compose a Message (Posting in Newsgroup) Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b2) Gecko/20050523 Mnenhy/0.7.2.10001 {Build ID: 2005052302} crashes imediatly and reproduceable almost for me. Mozilla Build 2005052221 without this patch works fine. (In reply to comment #170) >Sorry Neil, but your last Checkin for this Bug has caused another regression. Nah, it just detected a previously existing problem (filed as bug 295200). If you want a workaround, then exit Mozilla, edit dist/bin/chrome/chrome.rdf and add a line chrome:xpcNativeWrappers="true" under urn:mozilla:package:editor. (In reply to comment #172) >. > > I am seeing the same thing. It works if you click back and forth to another folder a couple of times. That, or click on another message, then back to the folder you want. Scott, I don't see a bug on the start page issue. Please file one, and file bugs for any other issues you run into? This bug is far too unwieldy to put more patches or discussion here... Flags: blocking1.8b2+ (In reply to comment #172) > With a clean clobber build from this morning, I am still unable to do the following: > > 1) Switch mail folders (make sure you've loaded a message then try to switch) I filed a separate bug 295222 before I found this discussion. I suppose it could be marked a duplicate. Trying to wade through all my backed up bugmail. So is this bug fixed and much of this just fall out that should have been filed as separate bugs, or is this issue still being addressed? The last couple of patches here were just fallout, yes. This bug is fixed. So are all the regressions we know about. ;) Status: ASSIGNED → RESOLVED Closed: 17 years ago Resolution: --- → FIXED But we still track dependencies such as bug 295937 here. Good way to be sure all followup-bugs are fixed. /be Please remove the following entries from allmakefiles. configure log message: creating toolkit/components/passwordmgr/resources/content/contents.rdf sed: ./toolkit/components/passwordmgr/resources/content/contents.rdf.in: No such file or directory creating toolkit/content/buildconfig.html creating toolkit/components/passwordmgr/resources/content/contents.rdf sed: ./toolkit/components/passwordmgr/resources/content/contents.rdf.in: No such file or directory creating gfx/gfx-config.h Comment 179 seems to be unrelated to this bug... crot0@infoseek.jp, did you get the wrong bug number? (In reply to comment #180) > Comment 179 seems to be unrelated to this bug... crot0@infoseek.jp, did you get > the wrong bug number? In the chekin log, it is bug281988... Oh, that part... Please file a separate bug on that, make it block this one, assign it to Benjamin. Component: DOM → DOM: Core & HTML
https://bugzilla.mozilla.org/show_bug.cgi?id=281988
CC-MAIN-2022-33
en
refinedweb
#include <GeomInt_IntSS.hxx> performs general intersection of two surfaces just now creates 2D-curve on given surface from given 3D-curve general intersection of two surfaces general intersection using a starting point intersection of adapted surfaces intersection of adapted surfaces using a starting point converts RLine to Geom(2d)_Curve. puts into theArrayOfParameters the parameters of intersection points of given theC2d1 and theC2d2 curves with the boundaries of the source surface.
https://dev.opencascade.org/doc/refman/html/class_geom_int___int_s_s.html
CC-MAIN-2022-33
en
refinedweb
18 of file TGroupButton.h. #include <TGroupButton.h> GroupButton default constructor. Definition at line 45 of file TGroupButton.cxx. GroupButton normal constructor. Definition at line 53 of file TGroupButton.cxx. GroupButton default destructor. Definition at line 63 of file TGroupButton.cxx. Display Color Table in an attribute canvas. Definition at line 70 of file TGroupButton.cxx. Execute action of this button. If an object has been selected before executing the APPLY button in the control canvas, The member function and its parameters for this object is executed via the interpreter. Definition at line 103 of file TGroupButton.cxx. Execute action corresponding to one event. This member function is called when a Button object is clicked. Reimplemented from TButton. Definition at line 152 of file TGroupButton.cxx. Save primitive as a C++ statement(s) on output stream out. Reimplemented from TButton. Definition at line 221 of file TGroupButton.cxx.
https://root.cern.ch/doc/v614/classTGroupButton.html
CC-MAIN-2022-33
en
refinedweb
README complexjscomplexjs This is a library for complex numbers calculations in javascript using plain objects immutably. It has been designed to be used for geometry applications, for which a little API has been included. Complex numbers APIComplex numbers API Any object with the properties re or im (or r and arg) qualifies as a complex number for this API, so there are no creation methods for them. Geometry APIGeometry API Complex number calculations examplesComplex number calculations examples Creating a complex number is as simple as: // Cartesian form const c_cartesian = { re: 1, im; 0 }; // Polar form const c_polar = { r: 1, arg: Math.PI / 2 }; Both forms can be mixed, and the form of the first number will be preserved: import {csum} from 'complexjs'; csum(c_cartesian, c_polar); // => {re: 1, im: 1} csum(c_polar, c_cartesian); // => {r: 1.414, arg: 0.785} All the basic functions are provided. All of them are pure functions: import { cmul, cdiv, cmod, conjugate } from 'complexjs'; cmul(c_cartesian, c_polar); // => {re: 0, im: 1} cdiv(c_cartesian, c_polar); // => {re: 0, im: -1} cmod(c_cartesian); // => 1 conjugate(c_polar); // => {r: 1, arg: - 1.5707} Using plain objectsUsing plain objects The API is designed so that any object can be passed to the functions, and any property other than re and im (or r and arg) will remain unchanged. This can be useful to compose plain objects. const c_object = { rgb: [255, 255, 255], re: 1, im: 1 }; const c_number = { re: 5, im: -1 }; csum(c_object, c_number) // => {rgb: [255, 255, 255], re: 6, im: 0} In case both objects have properties, they will be merged. The first parameter will override the second one if the have a property with the same name. Geometry applications examplesGeometry applications examples Complex numbers serve as an elegant representation of 2d geometry. To make its use even simpler, some methods have been created to wrap the algebra to match its geometric meaning. vector(1, 2) // => {re: 1, im: 2} const square = [ vector(0, 0), vector(1, 0), vector(1, 1), vector(0, 1) ]; const scaledSquare = square.map(c => scale(c, 2)); const translatedSquare = scaledSquare.map(c => translate(c, vector(-1, -1))); const rotatedSquare = translatedSquare.map(c => rotate(c, Math.PI / 2)); // => [ // vector(1, -1), // vector(1, 1), // vector(-1, 1), // vector(-1, -1) // ];
https://www.skypack.dev/view/complexjs
CC-MAIN-2022-33
en
refinedweb
@Generated(value="OracleSDKGenerator", comments="API Version: 20190415") public class StartDbSystemRequest extends BmcRequest<Void> getBody$, getInvocationCallback, getRetryConfiguration, setInvocationCallback, setRetryConfiguration, supportsExpect100Continue clone, finalize, getClass, notify, notifyAll, wait, wait, wait public StartDbSystemRequest() public String getDbSystem StartDbSystemRequest.Builder toBuilder() Return an instance of StartDbSystemRequest.Builder that allows you to modify request properties. StartDbSystemRequest.Builderthat allows you to modify request properties. public static StartDbSystem>
https://docs.oracle.com/en-us/iaas/tools/java/2.38.0/com/oracle/bmc/mysql/requests/StartDbSystemRequest.html
CC-MAIN-2022-33
en
refinedweb
sd_bus_set_sender - Man Page Configure default sender for outgoing messages Synopsis #include <systemd/sd-bus.h> int sd_bus_set_sender(sd_bus *bus, const char* name); int sd_bus_get_sender(sd_bus *bus, const char** name); Description. Notes These APIs are implemented as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file. See Also systemd(1), sd-bus(3), sd_bus_message_set_sender(3) Referenced By sd-bus(3), sd_bus_message_set_destination(3), systemd.directives(7), systemd.index(7). The man page sd_bus_get_sender(3) is an alias of sd_bus_set_sender(3).
https://www.mankier.com/3/sd_bus_set_sender
CC-MAIN-2022-33
en
refinedweb
I have been asked a couple of times how you can integrate multiple Revit add-ins from different sources together into a single ribbon panel. This topic has also been discussed on the web, e.g. towards the end of whether to install to the add-ins tab or make new ribbon tab, and at Autodesk University, where Jose Guia presented CP3766 – Tying All of Your Revit Add-ins into a Pretty Little Ribbon. Unfortunately, none of these offer any very useful solution, or show or share any code. I have a very simple approach to suggest, though, which has been around for several years already: The RvtSamples SDK application reads a text file listing any number of .NET assemblies defining any number of external commands and generates a ribbon panel populated with buttons to launch them all. Each entry in that file occupies seven lines specifying a group, menu text, description, large image, small image, .NET assembly path and full name of the external command to launch. Here is the first entry in that file: Analysis EnergyAnalysis Model Demonstrates how to use EnergyAnalysisModel API. LargeImage: Image: Z:\...\bin\Debug\EnergyAnalysisModel.dll Revit.SDK.Samples.EnergyAnalysisModel.CS.Command Only the two last lines of the entry are really relevant. Al the rest is decorative, and hopefully informative as well. In fact, that is the one and only method that I use myself to launch both SDK sample external commands and all The Building Coder sample code. Furthermore, the text file read by RvtSamples includes support for include files that I implemented back in 2008, similar to the standard C #include pre-processor directive. The only important information required to populate the RvtSamples text file is the .NET assembly filename of the DLL implementing the external command, and the full external command implementation class name, including its namespace prefix. Here is the complete EnergyAnalysisModel Revit SDK sample add-in manifest, showing the information corresponding to the RvtSamples.txt entry above: <?xml version="1.0" encoding="utf-8"?> <RevitAddIns> <AddIn Type="Command"> <Assembly>EnergyAnalysisModel.dll</Assembly> <ClientId>6f559488-4285-40b7-bfca-043bb69ea0a7</ClientId> <FullClassName>Revit.SDK.Samples.EnergyAnalysisModel.CS.Command</FullClassName> <Text>EnergyAnalysis Model</Text> <Description>Demonstrates how to use EnergyAnalysisModel API.</Description> <VisibilityMode>AlwaysVisible</VisibilityMode> <VendorId>ADSK</VendorId> <VendorDescription>Autodesk,</VendorDescription> </AddIn> </RevitAddIns> As you can see, the content of the Assembly and FullClassName tags correspond exactly to the two last lines specified in RvtSamples.txt. Where can this information be obtained? Well, the simplest and most direct source would the add-in manifest, if one is available. To load an external command on its own, the assembly path and implementation class name are listed in the Assembly and FullClassName add-in manifest tags. Unfortunately, if the add-in you wish to integrate defines an external application to create a custom panel, it may not list all its command names in the manifest file. Where can they be obtained from then? Well, several tools exist which can read .NET assemblies and display their contents. I talked about Reflector way back in the early days of the blog. It since became commercial. Victor Chekalin mentioned using dotPeek, and my colleague Adam added that he uses the ILSpy .NET decompiler and is perfectly happy with that. Basically, the information is made accessible via the .NET Reflection namespace functionality. To make things really simple for you non-programmer guys, I went and implemented a little Revit add-in external command which does nothing but list the full class names of all other external commands defined in any assembly you care to point it at. It can be run in Revit without even opening a document, in zero document state. It prompts you to select a DLL file, opens it as a .NET assembly, and uses reflection to determine all the classes defined in it derived from the IExternalCommand interface. These are listed in a read-only dynamically generated resizable form. Here is the result of pointing it at the simpler DockableDialog sample I published last week: The code is very simple. The ExternalCommandLister class is instantiated with a .NET assembly filename and extracts all external command definitions from it like this: class ExternalCommandLister { string _assembly_filename; string[] _external_commmand_class_names; /// <summary> /// Display error message /// </summary> /// <param name="msg">Message to display</param> void ErrorMsg( string msg ) { Debug.WriteLine( "External Command Lister: " + msg ); TaskDialog.Show( "External Command Lister", msg ); } public ExternalCommandLister( string assembly_filename ) { _assembly_filename = assembly_filename; _external_commmand_class_names = null; if( !File.Exists( assembly_filename ) ) { throw new ArgumentOutOfRangeException( "assembly_filename", "file not found" ); } try { // No need to load the Revit API assemblies, // because we are ourselves a Revit API add-in // inside of Revit, so they are guaranteed to // be present. //Assembly revit = Assembly.LoadFrom( "C:/Program Files/Autodesk/Revit Architecture 2014/RevitAPI.dll" ); //string root = "C:/Program Files/Autodesk Revit Architecture 2014/"; //Assembly adWindows = Assembly.LoadFrom( root + "AdWindows.dll" ); //Assembly uiFramework = Assembly.LoadFrom( root + "UIFramework.dll" ); //Assembly revit = Assembly.LoadFrom( root + "RevitAPI.dll" ); // Load the selected assembly into // the current application domain: Assembly asm = Assembly.LoadFrom( assembly_filename ); if( null == asm ) { ErrorMsg( string.Format( "Unable to load assembly '{0}'", assembly_filename ) ); } else { IEnumerable<Type> types = asm.GetTypes() .Where<Type>( t => null != t.GetInterface( "IExternalCommand" ) ); _external_commmand_class_names = types .Select<Type,string>( t => t.FullName ) .ToArray(); } } catch( Exception ex ) { ErrorMsg( string.Format( "Exception '{0}' processing assembly '{1}'", ex.Message, assembly_filename ) ); } } public string AssemblyFilename { get { return Path.GetFileName( _assembly_filename ); } } public string[] CommandClassnames { get { return _external_commmand_class_names; } } } The one single important line is really just IEnumerable<Type> types = asm.GetTypes() .Where<Type>( t => null != t.GetInterface( "IExternalCommand" ) ); It asks the assembly for all the types it defines and extracts the ones derived from IExternalCommand, i.e. the external command implementation classes. The external command mainline Execute implementation prompts the user to select a DLL file, instantiates an ExternalCommandLister instance, queries the command names and displays them in a form created on the fly like this: [Transaction( TransactionMode.Manual )] public class Command : IExternalCommand { /// <summary> /// Define the initial .NET assembly folder. /// </summary> const stringInitial folder.</param> /// <param name="filename">Selected filename on success.</param> /// <returns>Return true if a file was successfully selected.</returns> static bool FileSelect( string folder, out string filename ) { OpenFileDialog dlg = new OpenFileDialog(); dlg.Title = "Select .NET Assembly or Cancel to Exit"; dlg.CheckFileExists = true; dlg.CheckPathExists = true; dlg.InitialDirectory = folder; dlg.Filter = ".NET Assembly DLL Files (*.dll)|*.dll"; bool rc = ( DialogResult.OK == dlg.ShowDialog() ); filename = dlg.FileName; return rc; } void DisplayExternalCommands( string filename, IWin32Window owner ) { ExternalCommandLister lister = new ExternalCommandLister( filename ); string[] a = lister.CommandClassnames; int n = a.Length; System.Windows.Forms.Form form = new System.Windows.Forms.Form(); form.Size = new Size( 400, 150 ); form.Text = string.Format( "{0} defines {1} external command{2}", lister.AssemblyFilename, n, ( 1 == n ? "" : "s" ) ); form.FormBorderStyle = FormBorderStyle.SizableToolWindow; System.Windows.Forms.TextBox tb = new System.Windows.Forms.TextBox(); tb.Dock = System.Windows.Forms.DockStyle.Fill; tb.Location = new System.Drawing.Point( 0, 0 ); tb.Multiline = true; tb.TabIndex = 0; tb.WordWrap = false; tb.ReadOnly = true; tb.Text = string.Join( "\r\n", lister.CommandClassnames ); form.Controls.Add( tb ); form.ShowDialog( owner ); } public Result Execute( ExternalCommandData commandData, ref string message, ElementSet elements ) { IWin32Window revit_window = new JtWindowHandle( ComponentManager.ApplicationWindow ); string filename; while( FileSelect( _assembly_folder_name, out filename ) ) { DisplayExternalCommands( filename, revit_window ); } return Result.Succeeded; } } The reason I implemented this as a Revit command instead of a stand-alone command-line console application was simply to be sure that the Revit API assemblies are already present before I try to load the add-in assemblies. I implemented such a stand-alone console application in the past that loaded RevitAPI.dll itself, but that was a long time ago. Now the number of Revit API assemblies is larger, and other restrictions may have been added as well. A complex external command may obviously depend on additional dependencies in addition to the Revit API assemblies. In that case, it might be harder or impossible to simply load it as shown above. One option then might be to load the other add-in into Revit first using its own loading mechanism, and then try to access its assembly data. Another caveat is the ClientId tag in the external add-in manifest. For the simple add-ins that I create, it is hardly used, except for extensible storage access. It might be important for other applications as well, though. Here is JtExternalCommandLister.zip containing the complete source code, Visual Studio solution and add-in manifest of this external command. I hope you find this useful and that it helps resolves discussions such as the one pointed to above. Addendum by Rudolf Honke of Mensch und Maschine acadGraph GmbH: If different add-ins create custom ribbon panels in the same tab, whichever one of them arrives last will obviously run into a collision attempting to create a tab that already exists. Worse still: the Revit API provides no method to query the existence of a specific tab. It may be possible using the .NET UI Automation library. However, you can always attempt to retrieve a tab with a specific name using GetRibbonPanels(tabName). If the tab does not exist, this method will throw an exception. The two add-ins could therefore safely add their panels to the same tab using the following approach: public Autodesk.Revit.UI.Result OnStartup( UIControlledApplication application ) { string tabName = "TBC"; string panelName = "TBC"; try { List<RibbonPanel> panels = application .GetRibbonPanels( tabName ); } catch { // Tab "TBC" does not yet exist, // so create new application.CreateRibbonTab( tabName ); } RibbonPanel panel = application .CreateRibbonPanel( tabName, panelName ); // Add your buttons here return Result.Succeeded; } In principle, this approach may be regarded as bad coding style, because exceptions are and should remain exceptional. Since there is no other way to obtain the required information in this case, though, one is left no choice. Actually an alternative might exist using the .NET UIAutomation library, but such an approach may be awfully slow.
https://thebuildingcoder.typepad.com/blog/2013/05/external-command-lister-and-adding-ribbon-commands.html
CC-MAIN-2022-33
en
refinedweb
Creating Laravel Packages. For Dummies. A complete tutorial to Laravel Packages, from version 5.1 to 8+ Unfortunately, Laravel's documentation on how to create a package is an introduction, at best. It’s important to read it - to understand the concept - but it’s not a step-by-step guide to creating your first package. This is. Last updated in March 2020 — used it to create a tutorial for Backpack for Laravel add-ons. Step 1. Install Laravel For this, check out Laravel’s docs. As an alternative, use an existing Laravel application. Don’t forget you need to: composer install and chmod -R o+w storage chmod -R o+w vendor Step 2. Create your package We’re going to use this CLI tool to generate a new package. Follow the instructions in the Installation chapter there to create a new package. If you already have it, run: php artisan packager:new myvendor mypackage If unsure, use your github username for myvendor. For example, Jeffrey Way uses “way”. This will create a /packages/ folder in your root directory, where your package will be stored so you can build it. It will also pull a very basic package template, created by thephpleague. Everything you have right now is in packages/myvendor/mypackage. Now let’s customise it and add some boilerplate code, everything that most Laravel Packages will need. - Replace everything you need in composer.json, CHANGELOG.md, CONTRIBUTING.md, LICENSE.md, README.md. Make it yours. -, so let’s create some empty files for that. cd /src/ mkdir Http mkdir Http/Controllers echo "<?php " >Http/routes.php mkdir config echo "<?php " >config/config.php mkdir resources mkdir resources/views/ You use the routes, config and controller files just like you use the ones in your application. Nothing changes there. But remember that the controller should have the package’s namespace: namespace MyVendor\MyPackage\Http\Controllers; 4. Check that your service provider is in your app’s /config/app.php If not, add it: "MyVendor\MyPackage\MyPackageServiceProvider", 5. Check that you autoload your package in composer.json: "autoload" : { "psr-4": { “Domain\\PackageName\\”: “packages/Domain/PackageName/src” } }, 6. Let’s recreate the autoload cd ../../../.. composer dump-autoload 7. If you have a config file to publish, do: php artisan vendor:publish 8. Test it. If you are having problems, start by doing a dd(‘testing) in your service provider’s boot() function. If your package is working fine, you should make it available for others to use. Step 3. Put it on GitHub cd packages/domain/packagename/ git init git add . git commit -m “first commit” Create a new GitHub repository. git remote add origin git@github.com:yourusername/yourrepository.git git push -u origin master git tag -a 1.0.0 -m ‘First version’ git push --tags The tags are the way you will version your package, so it’s important you do it. Step 4. Put it on Packagist On Packagist.org, submit a new package. Enter you package’s GitHub URL and click Check. If any errors occur, follow the onscreen instructions. When you’re done, you’re taken to your package’s have a working package online, you can now require it in composer. Step 5. Keep working on it from your vendor folder If the application where you’ve developed the package had this sole purpose — to help you develop the package, you’re done. But if you’ve developed the package in a bigger project, where you are now requiring it, you have a problem — your composer.json has both: "require": { "domain/package-name": "^1.0" } and "autoload" : { "psr-4": { “Domain\\PackageName\\”: “packages/Domain/PackageName/src” } }, Plus, the same files are in /packages/myvendor/mypackage AND in /vendor/myvendor/mypackage. Let’s solve this: 1. Delete the /packages/myvendor/mypackage folder. 2. Delete the psr-4 mention in your root composer.json. Done. Now you only have it in /vendor/ and that’s where your application is using it from. But now you can’t push updates, because /vendor/myvendor/mypackage/ doesn’t have your git repository, only the files. Let’s fix that. 3. Delete /vendor/myvendor/mypackage: cd ../../.. rm -rf vendor/myvendor/mypackage 4. Run composer with the — prefer-source flag, so it clones the repo: composer install --prefer-source That’s it, you can now cd to /vendor/myvendor/mypackage, make your changes and push them, just like any other git repository.
https://tabacitu.medium.com/creating-laravel-5-packages-for-dummies-ec6a4ded2e93
CC-MAIN-2022-33
en
refinedweb
Modify Gatsby's GraphQL data types using createSchemaCustomization - React - Gatsby - JavaScript - GraphQL Hi friends, yesterday I published a little post about how to Add data to Gatsby's GraphQL layer using sourceNodes, this post will be expanding on the topic of data management but this time i'm going to hone in on how to modify GraphQL's inferred data types so that you can use the new upgraded gatsby-plugin-image with remotely sourced images. If you'd prefer to jump ahead here's a demo repo: ... and a live demo can be seen here: The Problem The "problem" with remotely sourced images is that they are usually returned by API's as url's, E.g When GraphQL see's this it correctly infers the data type as a String. However, in order for gatsby-plugin-image to process the image there are two requirements. - The GraphQL node must be of type File - The image needs to have been downloaded and exist on your local filesystem In this post i'll explain how you can satisfy both of these requirements using a combination of createRemoteFileNode from gatsby-source-filesystem and createSchemaCustomization which is utility function available in the Gatsby Node API The code i'll be explaining below expands on yesterday's post: Add data to Gatsby's GraphQL layer using sourceNodes so i'd advise you have a read of that before diving in. Pre-Flight Checks You'll need to have all the following dependencies installed and configured in gatsby-config.js The docs can be found here: gatsby-plugin-image yarn add gatsby-plugin-image gatsby-plugin-sharp gatsby-source-filesystem gatsby-transformer-sharp # npm install gatsby-plugin-image gatsby-plugin-sharp gatsby-source-filesystem gatsby-transformer-sharp --save To use the approach I'll be using below you'll also need to have gatsby-source-filesystem installed, but don't worry about adding it to gatsby-config.js yarn add gatsby-source-filesystem # npm install gatsby-source-filesystem --save If you don't already have one, you'll need a gatsby-config.js at the root of you project: ...srcgatsby-config.jspackage.json And finally you'll need to add the following to gatsby-config.js // gatsby-config.jsmodule.exports = {plugins: [`gatsby-plugin-image`,`gatsby-transformer-sharp`,{resolve: `gatsby-plugin-sharp`,options: {defaults: {quality: 70,formats: ['auto', 'webp', 'avif'],placeholder: 'blurred'}}}]}; I prefer to set the defaults for gatsby-plugin-sharp in my gatsby-config.js, this is optional, but i'd advise it. 1. Source The Image The following code can all be written in gatsby-node.js I use onCreateNode which is a Gatsby function called each time a new node is created. By using an if condition I'm able to only call createRemoteFileNode if the node.internal.type equals apod, which is the new node I created in yesterday's post. // gatsby-node.jsconst { createRemoteFileNode } = require('gatsby-source-filesystem');exports.onCreateNode = async ({ node, actions: { createNode }, createNodeId, cache, store }) => {if (node.internal.type === 'apod') {node.image = await createRemoteFileNode({url: node.url,parentNodeId: node.id,createNode,createNodeId,cache,store});}}; Starting from the top I destructure createRemoteFileNode from gatsby-source-filesystem, more on that in a moment. Next I define and export onCreateNode. onCreateNode can be an async function and accepts a number of parameters including but not limited to the following. - node - actions - createNodeId, - cache - store node This is the new node type sourced from the NASA API and already exists in Gatsby's data layer actions Actions are the equivalent to actions bound with bindActionCreators in Redux.. cache Cache is the .cache directory Gatsby creates in/on your local filesystem store This is Gatsby's Data layer / The Redux state object To the best of my knowledge all of the above are required. You might not see errors if you don't include cache or store when creating a remote file node but I have experienced odd behavior if I failed to included them. The next bit deals with sourcing the image using the image url returned by the NASA API. node.image I create a new object on the node and call it image which will be the response from createRemoteFileNode. createRemoteFileNode | params This function comes from gatsby-source-filesystem and accepts the following parameters - url - parentNodeId - createNode - createNodeId - cache - store url The source url of the remote file parentNodeId The id of the parent node (i.e. the node to which the new remote File node will be linked to. createNode The action used to create nodes, I covered this in more detail in yesterday's post #actions-createNode createNodeId A helper function for creating node ids, I covered this in more detail in yesterday's post #createNodeId cache As above store As Above With all of the above in place you should now be able to query the new image node in the GraphiQL explorer. Visit to investigate. {apod {urlimage {relativePath}}} Which should give you a response similar to the below {"data": {"apod": {"url": "","image": {"relativePath": ".cache/caches/default-site-plugin/bcd18c3c0f372d1ad0d180fa82cde702/AR2835_20210701_W2x1024.jpg"}}},} You can see the new image node and by querying the relativePath you can see that the file exists on disc in the .cache/caches directory. Compare this to the url which has remained as it was, a remote url. This satisfies one of the two requirements I mentioned above, but GraphQL still thinks the data type is a String... but we know it's now actually a File 2. Modify the GraphQL type To see GraphQL's inferred types Gatsby have exposed an additional action called printTypeDefinitions which can be called from Gatsby Node using this function: createSchemaCustomization // gatsby-node.jsexports.createSchemaCustomization = ({ actions: { createTypes, printTypeDefinitions } }) => {printTypeDefinitions({ path: './typeDefs.txt' });}; If you've added the above to gatsby-node.js you can now run gatsby build and you should see a file pop up in your filesystem called typeDefs.txt. Open it and scroll to the bottom, i've removed quite a lot from the below snippet for brevity, but the main thing to notice is that GraphQL has inferred that the new image node has a child node called url and is typed as a String 👎 type apod implements Node @derivedTypes @dontInfer {...title: Stringurl: Stringimage: apodImage}type apodImage @derivedTypes {...url: String} To correct this you can manually override GraphQL's type inference and provide your own type definitions. You can do this by using createTypes from actions exports.createSchemaCustomization = ({actions: { createTypes, printTypeDefinitions }}) => {+ createTypes(`+ type apod implements Node {+ image: apodImage+ }+ type apodImage @dontInfer {+ url: File @link(by: "url")+ }+ `);printTypeDefinitions({ path: './typeDefs.txt' });}; This looks a little peculiar if you're new to GraphQL and being honest I found this really difficult so here's my best attempt to explain what's going on. type apodImage apodImage first needs to be set to @dontInfer. This is a way to tell GraphQL that I know best and i'll handle the types so don't worry about inferring the data type. url Finally it's here where I tell GraphQL that the image.url is of type File and I link it to the url defined by the url parameter from createRemoteFileNode 🥵 If you delete the typeDefs.txt file from your local filesystem and run gatsby build again and investigate the types you should now see the following. type apod implements Node @dontInfer {...title: Stringurl: Stringimage: apodImage}type apodImage {url: File @link(by: "url")} And now GraphQL correctly understands that image.url is of type File -- Hooray! 🎉 This now satisfies both of the above mentioned requirements! If you see any weird looking errors in your terminal it might be best to run gatsby clean before running gatsby build since we're messing with a few low level things Using GatsbyImage gatsby-plugin-image exports two components, <StaticImage /> and <GatsbyImage /> I won't explain why we need to use <GatsbyImage /> but there's a good explanation in the docs: Using the Gatsby Image components With the type now set as File you can now query the image.url using childImageSharp.gatsbyImageData. The query i've used in index.js looks a little something like this {apod {iddateexplanationmedia_typeservice_versiontitleurlimage {url {childImageSharp {gatsbyImageData}}}}} Which should return something similar to the below. You should be able to see the various image data objects, placeholder, images, and sources. All of this can be passed on to the <GatsbyImage /> component. {"data": {"apod": {"id": "63a09eef-2a28-5632-94c6-50061b62a0bf","date": "2021-07-02","explanation": .","media_type": "image","service_version": "v1","title": "AR2835: Islands in the Photosphere","url": "","image": {"url": {"childImageSharp": {"gatsbyImageData": {"layout": "constrained","placeholder": {"fallback": "token plain">},"images": {"fallback": {"src": "/static/8574311e0c9d3b7520b2714c8baa995e/862d2/AR2835_20210701_W2x1024.jpg","srcSet": "/static/8574311e0c9d3b7520b2714c8baa995e/ac769/AR2835_20210701_W2x1024.jpg 256w,\n/static/8574311e0c9d3b7520b2714c8baa995e/0e233/AR2835_20210701_W2x1024.jpg 512w,\n/static/8574311e0c9d3b7520b2714c8baa995e/862d2/AR2835_20210701_W2x1024.jpg 1024w","sizes": "(min-width: 1024px) 1024px, 100vw"},"sources": [{"srcSet": "/static/8574311e0c9d3b7520b2714c8baa995e/c4e41/AR2835_20210701_W2x1024.avif 256w,\n/static/8574311e0c9d3b7520b2714c8baa995e/542bf/AR2835_20210701_W2x1024.avif 512w,\n/static/8574311e0c9d3b7520b2714c8baa995e/59a35/AR2835_20210701_W2x1024.avif 1024w","type": "image/avif","sizes": "(min-width: 1024px) 1024px, 100vw"},{"srcSet": "/static/8574311e0c9d3b7520b2714c8baa995e/053d8/AR2835_20210701_W2x1024.webp 256w,\n/static/8574311e0c9d3b7520b2714c8baa995e/93623/AR2835_20210701_W2x1024.webp 512w,\n/static/8574311e0c9d3b7520b2714c8baa995e/41185/AR2835_20210701_W2x1024.webp 1024w","type": "image/webp","sizes": "(min-width: 1024px) 1024px, 100vw"}]},"width": 1024,"height": 683}}}}}},} Jsx To return the above image data you can use <GatsbyImage /> with the getImage helper to pass the data on to <GatsbyImage /> via the image prop import React from 'react';import { useStaticQuery, graphql } from 'gatsby';import { GatsbyImage, getImage } from 'gatsby-plugin-image';const IndexPage = () => {const {apod: { id, date, explanation, media_type, service_version, title, image }} = useStaticQuery(graphql`query {apod {iddateexplanationmedia_typeservice_versiontitleimage {url {childImageSharp {gatsbyImageData}}}}}`);return (<main><p>{date}</p><h1>{title}</h1><p>{explanation}</p><GatsbyImage alt={title} image={getImage(image.url)} /> // oh hai!<p>{`id: ${id}`}</p><p>{`media_type: ${media_type}`}</p><p>{`service_version: ${service_version}`}</p></main>);};export default IndexPage; ... and there you have it, modifying GraphQL's data types for remotely sourced images! I've used this approach many times in various projects and covered it quite conclusively with Benedicte Raae on our pokey internet show Gatsby Deep Dives with Queen Raae and the Nattermobs Pirates If you're looking for a similar solution when working with remote images in Markdown or MDX I wrote a post that can be found on the Gatsby blog: MDX Embedded Images with the All-New Gatsby Image Plugin How am I doing? Hey! Lemme know if you found this helpful by leaving a reaction.
https://paulie.dev/posts/2021/07/gatsby-create-schema-customization/
CC-MAIN-2022-33
en
refinedweb
Mapping private/protected/… properties in Entity Framework 4.x Code First More than a year ago I was blogging about how to map private/protected/… properties in Code First CTP4 for the time being. Well it has been a long time and a lot of changed. The code there isn’t absolutely up to date with current Entity Framework 4.3. Although you can go raw and create the expression tree from i.e. string yourself (Mono.Linq.Expressions can be quite handy) it’s not nice and more importantly it’s not strongly typed. If you don’t want to use data annotations or couple your configuration classes into entity classes you’re still not lost. In the above linked post in comments Drew Jones came with a nice idea. You entity classes will be partial classes and somewhere else you’ll create second part of that class with the expressions needed to express the properties. Let’s take a look at example (you can adjust access modifiers according to your needs and structure). public partial class FooBar { private int ID { get; set; } private string Something { get; set; } } public partial class FooBar { public class PropertyAccessExpressions { public static readonly Expression<Func<FooBar, int>> ID = x => x.ID; public static readonly Expression<Func<FooBar, string>> Something = x => x.Something; } } And now in mapping you’ll just use the expressions. .Property(FooBar.PropertyAccessExpressions.ID); .Property(FooBar.PropertyAccessExpressions.Something); Nice isn’t it? It’s separated (kind of – you can’t have entity classes in one assembly and the expressions in other) and it’s strongly typed.
https://www.tabsoverspaces.com/232731-mapping-private-protected-properties-in-entity-framework-4-x-code-first
CC-MAIN-2022-33
en
refinedweb
Hi, I use sheet.Cells[1, 2].Style.Number = 4; to style my format in excel as number. It works only if the number i use as input have fraction, and if its a whole number i get it in excel as a string. example: the number i try to put in the cell is 15,42 then i got the formation as a number in excel but if the number is 15,00 u get 15 as a string and not 15,00 as number Regards, Tjiper Hi, Hi Tjiper, I don’t find the issue using your scenario a bit. To get the value from a cell in the sheet, you may use Cell.StringValue attribute. If you still could not figure it out, kindly give us your sample code with the template file(s), we will check it soon. Thank you. i am trying to write a number that is formated as a number in excel. and i am getting it as a string in excel if its a a whole number. Hi,<?xml:namespace prefix = o Thank you for considering Aspose. Well, I have tried to implement your scenario and it works fine. Please see my sample code, Sample Code: //Instantiating a Workbook object Workbook workbook = new Workbook(); //Adding a new worksheet to the Workbook object workbook.Worksheets.Add(); //Obtaining the reference of the newly added worksheet by passing its sheet index Worksheet worksheet = workbook.Worksheets[0]; //Adding the current system date to "A1" cell worksheet.Cells[2,1].PutValue(15.00); //Setting the display format of the cell to number 4 worksheet.Cells[2,1].Style.Number = 4; //Saving the Excel file workbook.Save("C:\\book1.xls", FileFormatType.Default); If you still face any confusion, please share your code and template file (As suggested by Amjad), we will look into it. Thank You & Best Regards, thanks for your support. i had the same code above, but i didnt convert the number i had to a double before i place it in the cell, but now since i convert it to a double it works. regards Tjiper
https://forum.aspose.com/t/i-dont-get-my-number-formated-as-numbers/81294
CC-MAIN-2022-33
en
refinedweb
Dear all, <?xml:namespace prefix = o Please find attached a sample excel file containing a long data table (worksheet 2). The worksheet has a defined header. The first 7 rows should be printed out on every page needed to show the table content. There is also a footer that provides additional information (page number, department, date…). If I save the workbook as pdf, there are two issues. - The content of the table interleaves the footer area for all pages > 1 - The bookmark “detail” is linked to the last generated page instead to the first page, but in the code it is set to Cell[0,0] Please find also the resulting pdf document as well as the sample code attached. Thanks in advance Erik
https://forum.aspose.com/t/page-break-converting-long-data-tables-to-pdf/67542
CC-MAIN-2022-33
en
refinedweb
Easy ways to make your python code more Concise and efficient! I just wanted to share some tricks to make code more concise and more efficient that are really simple to implement F-Strings F-Strings come in really handy when you need a clean way to use strings with variables, and to format them! If you want to make your strings and prints look much cleaner, and perform better, F-strings are by far the easiest way, lets say we have some variables: name='bob' address='123 Park Place' Hours=(9,17) A lot of people would print it like this: import random print('Hello '+name+',\nwe have a package to deliver to you at '+address+', If it is alright with you, we will be dropping your package off at '+str(random.choice(list(set(range(24))-set(range(*Hours)))))+' oclock') While that approach certainly works, It looks quite a bit cleaner to use F-Strings An F string is declared by putting an f before quotations like f'' or f"", all you have to do to get your variables in your string, is to wrap them in brackets within those quotations, I.E f'Hello, {name}', If we were to do what we did above with F-strings this is how it would appear: print(f'Hello {name},\nwe have a package to deliver to you at {address}, if it is alright with you, we will be dropping youre package off at {random.choice(list(set(range(24))-set(range(*Hours))))} oclock') And lets say we are just making a program that says hi, name=input() print('hello, '+name+' how are you doing') print(f'hello, {name} how are you doing') The f-string looks quite a bit cleaner, and also allows you to use formatting to do things like this: print(f'{"hello":->10} {name},\n how are you doing') The :->10 just means, ( : means this is going to format) ( - Can be any character you want to be used as padding (or none at all)) ( > implies which direction, you can also use ^ or <) ( 10 is how much to pad it by, doing a higher number would move it further) Lambdas, Maps, Listcomps, Zips, and Filter, and Enumerate (tools for iteration) Listcomps and maps One of the easiest ways to make your code more precise, and potentially faster is with the use of These handy functions and loop syntax, Lets say that we are trying to remove all odd numbers from a list of numbers that have been converted to strings, The long way to accomplish this would probably be something along the lines of this list1 = ['0', '7', '21', '3', '0', '8', '13', '10', '21', '8', '19', '17', '3', '5', '2', '18', '18', '2', '15', '13', '20', '1', '22'] #list for the converted strings list2=[] for num in list1: #check if even if int(num)%2==0: #add to final list list2.append(int(num)) That method is definitely Great, but it can be vastly improved upon with something called a list comprehension, a list comprehension is essentially the concise version of the code above. A list comprehension is always enclosed in brackets, always iterates over something and always returns a list, here are a few examples of the syntax in different scenarios: If you want an if else, this is the syntax: newlist = [<output> if <condition> else <output> for item in iterable] #an example of this in use evenodd=['even' if number%2==0 else 'odd' for number in range(30)] What is happening here is that it checks the conditional every time it iterates over an item, and the output is what will appear in the position it was on in the old list. The next syntax style is when you only want items if a certain condition is true, I.E We only want to keep the item if it has a 'b' in it l=['bad','bass','oops','fish','salt','bin'] onlybs=[text for text in l if 'b' in text] #^the code above basically goes through these steps #for every text in the list #if there is the letter b in text #add that text to the new list #That listcomp is equivelant to this: newl=[] for text in l: if 'b' in text: newl.append(text) Now that we now how to use listcomps,our adjusted odd remover would like this: list2=[x for x in list1 if int(x)%2==0] In that specific case, converting to an int before checking if it was even was fine, but in some situations, youl end up needing to convert the whole list before you iterate, which is where maps come in. ap is a function that needs a function, and an iterable, and all it does is apply that function to every iterable. There is one caveat to maps though, the first one being that it doesnt actually store all the values until converted to a list, or iterated over, and the second one being that due to its unique implementation, it is quite a bit faster then listcomps when given a builtin method I.E #map(function,iterable) strin=map(str,range(50)) print(strin) print(list(strin)) what we did there was apply str() to every item in range(50), which is a concise and easy way to convert large amounts of items types. zip, enumerate and filter enumerate A lot of times when iterating, knowing which index you are on is useful, and thats exactly what enumerate is for. Simply all it does is simplify counting which index you are on. c=0 for x in range(50): print(x) c+=1 #this can be done with enumerate like this for index,item in enumerate(range(50)): print(f'{item} is at {index}th index') What enumerate does is take an iterable, and create a new one that is a list of tuples where the first item is the index, and the second item is the actual part of the itarable. when iterating over an enumerate, it is best to unpack it into two variables. Her is a real world use case for enumerate: def findall(iterable,target): res=[] for index,item in enumerate(iterable): if item==target: res.append(index) return res in listcomp form: def findall(iterable,target): return [count for count,item in enumerate(iterable) if item==target] filter lets say we want to remove all digits from a string, this could be done with a listcomp or a normal loop, But filter can do it much more concisely lol='eufu803fnhanus830j' rmdigit=''.join([x for x in lol if x.isdigit()==False]) rd=''.join(filter(str.isalpha,lol)) Just like maps, filter is only better than a listcomp performance-wise if it is using a builtin function Zips lets say we have a list of cars, a list of their colors,a list of their model numbers, and a list of their years, and we want to have all of them together, the best way to do this is with the zip function, instad of iterating over all the lists. name=['corola','tokoma','aventador','focus'] numb=[334,556,7778,3321] colors=['green','beige','grey','pink'] years=[2020,2019,2021,2013] cars=zip(name,numb,colors,years) #as apposed to cars2=[] for x in range(len(name)-1): cars2.append(name[x],numb[x],colors[x],years[x]) Although the zip doesn't seem much shorter, than the naive method, it does a lot more in cases where lists could be uneven. It automatically accounts for the lengths of the iterates which means it wont error if one is longer than the other. Another great use of zips is in portioning out an iterable i.e: foo=[1,4,5,7,83,3,13,56,6] #we want to divid it into sections of 3 bar=zip(foo[::3],foo[1::3],foo[2::3]) Lambdas Though lambdas dont really have anything to do with iterables, they are quite often used within maps filters and other things. Basically a lambda is a mini one line function Here is how you define one: =lambda : I.E printdown=lambda s:print('ew') if 'q' in s else print('\n'.join(list(s))) Using sets for better SPEED, and more concise functions sets are a unique type of data structure, they cant have duplicates, dont have an order, are slow to iterate over but are huge speed boosts in certain scenarios. Detecting duplicates One of the most useful things that sets can help you with is detecting/removing duplicates, because a set conversion removes duplicates, it will be shorter than what it was converting from which allows you to use stuff like isdupe=lambda i:len(set(i))==len(i) instead of def isdupe2(i): for item in i: if i.count(item)!=1: return False return True That second example actually brings us to another efficiency trick with set Making count more effecient This one is pretty simple, you dont to need more counts of items than there are items so reducing a list to just unique items makes getting the count of a given item much faster def isdupe2(i): for item in set(i): if i.count(item)!=1: return False return True or the very barebones count dictionary (using a dictcomp :D) frequency = lambda t:{l:t.count(l) for l in set(t)} Super set,Subset,difference Finding wheather all items in one thing are countained in another can best be done by converting iterables to sets,and then using the built issubset and issuperset methods subset =lambda x,y:set(x).issubset(y) In speed One of the most important thing about sets is that checking if an item is contained is significantly faster with a set than with a list or a tuple i.e l=[7,33,3282801,83] g=set(l) s=7 in l u=7 in g Removing duplicates, set and dict.fromkeys If you want to remove all duplicates from a list and dont care about order, its as simple as this: nodupes=lambda t:list(set(t)) If you care about order: nodupes2=lambda r:list(dict.fromkeys(r)) more info shorthand if-else One of the easiest ways to make code look cleaner is to use shorthand if else if else i.e: def iseven(i): return True if i%2==0 else False #^shorthand def iseven2(i): if i%2==0: return True else: return False Dataclasses This one is pretty simple, The example on their docs pretty much explains why its so great from dataclasses import dataclass @dataclass class InventoryItem: name: str unit_price: float quantity_on_hand: int = 0 #looks a lot cleaner than class item2: def __init__(self, name: str, unit_price: float, quantity_on_hand: int=0): self.name = name self.unit_price = unit_price self.quantity_on_hand = quantity_on_hand it makes classes meant for storing data a lot a easier Conclusion I hope this was helpful and that you had fun reading, and learned a lot! :) p.s if youre wondering about the performcance differences between map and listcomp, here ya go The essay typer includes a number of customer testimonials from satisfied customers who have achieved their goals using this writing service. The consumer experience with this service is full of good reviews that ensure sterling market reputation and popularity. As a writer, I'm working on an article that can describe this easily, currently I'm working in vector art company near me, with highly professional designer nad writers of the USA. Nice tutorial! Just a tip, add the pyextention at the end of the three ticks to get syntax highlighting, like this:
https://replit.com/talk/learn/Easy-ways-to-make-your-python-code-more-Concise-and-efficient/133554
CC-MAIN-2022-33
en
refinedweb
AG <div id="myGrid" style="height: 150px; width: 600px" class="ag-theme-alpine"></div> import { Grid } from 'ag-grid-community'; import 'ag-grid-community/dist/styles/ag-grid.css'; import 'ag-grid-community/dist/styles/ag-theme-alpine.css'; var gridOptions = { columnDefs: [ { headerName: 'Make', field: 'make' }, { headerName: 'Model', field: 'model' }, { headerName: 'Price', field: 'price' } ], rowData: [ { make: 'Toyota', model: 'Celica', price: 35000 }, { make: 'Ford', model: 'Mondeo', price: 32000 }, { make: 'Porsche', model: 'Boxter', price: 72000 } ] }; tag..
https://awesomeopensource.com/project/ag-grid/ag-grid
CC-MAIN-2022-33
en
refinedweb
Migrating to the Scripted Object Map: Common conversion problems The Squish IDE offers a basic conversion wizard to convert test suites from the text based Object Map to the new script based Object Map. You can find this functionality in the Object Map section of the Test Suite Settings as well as in the top right corner of the Object Map Editor. The conversion is a very basic best-effort operation and is not guaranteed to produce a working Test Suite. Following are some common conversion problems. Real Names that contain Symbolic Names When test scripts use Real Names that contain Symbolic Names, the conversion will replace the Symbolic Names inside the Real Names with Scripted Symbolic Names, which will break the test script execution. This is an example of code that would not get converted correctly: clickButton(waitForObject("{text='New' type='QToolButton' unnamed='1' visible='1' window=':Address Book_MainWindow'}")) This is what the conversion would produce: clickButton(waitForObject("{text='New' type='QToolButton' unnamed='1' visible='1' window=names.address_Book_MainWindow}")) To solve this issue the old Real Name can be converted to a Scripted Real Name manually: clickButton(waitForObject({"text": "New", "type": "QToolButton", "unnamed": "1", "visible": "1", "window": names.address_Book_MainWindow})) Custom functions that take Symbolic Names as parameters Existing test frameworks might already be using custom script objects to identify gui elements (e.g. when using page objects) and there might be convenience functions inside the test scripts that work with both Symbolic Names and custom script objects. Since the conversion will simply replace all Symbolic Names with Scripted Symbolic Names all custom functions that are used with Symbolic Names also need to support Scripted Symbolic Names or the test scripts will not execute correctly. This is an example of code that might not get converted correctly: def customVerification(objectOrName): if isinstance(objectOrName,str): #do something with the symbolic name else: #do something else def main(): startApplication("addressbook") customVerification(":Address Book.New_QToolButton") This is what the conversion would produce: def customVerification(objectOrName): if isinstance(objectOrName,str): #do something with the symbolic name else: #do something else def main(): startApplication("addressbook") customVerification(names.address_Book_New_QToolButton) To solve this issue the function code needs to be adapted manually to be compatible with dictionaries that squish uses as scripted real names. Symbolic Names that use characters that need to be escaped in the script language When Symbolic Names contain characters that need to be escaped in the script language but not in the object map itself the name will simply not be replaced (e.g. the @ in Perl needs to be escaped, but not inside the object map). This Perl code will not get converted because the object map entry will be "John.Doe@froglogic.com_Item". test::compare(waitForObject(":John.Doe\@froglogic.com_Item")->text, "John.Doe\@froglogic.com") To solve this issue the names will have to be converted manually.
https://doc-snapshots.qt.io/squish/migrating-to-the-scripted-object-map-common-conversion-problems.html
CC-MAIN-2022-33
en
refinedweb
JAX Overview & References JAX is a Google research project built upon native Python and NumPy functions to improve machine research learning. The official JAX page describes the core of the project as "an extensible system for composable function transformations," which means that JAX takes the dynamic form of Python functions and converts them to JAX-based functions that work with gradients, backpropogation, just-in-time compiling, and other JAX augmentations. Basic Deep Learning: JAX Edition Google lists the following code at the top of their JAX page: import jax.numpy as jnp from jax import grad, jit, vmap def predict(params, inputs): for W, b in params: outputs = jnp.dot(inputs, W) + b inputs = jnp.tanh(outputs) # inputs to the next layer return outputs # no activation on last layer def loss(params, inputs, targets): preds = predict(params, inputs) return jnp.sum((preds - targets)**2) grad_loss = jit(grad(loss)) # compiled gradient evaluation function perex_grads = jit(vmap(grad_loss, in_axes=(None, 0, 0))) # fast per-example grads This short example provides the two main functions of a deep learning algorithm, predict and loss, adapted for JAX functionality. We’ll break down the code segment as an entry analysis of both JAX and deep learning: jax.numpyis JAX’s adapted version of the NumPy API, created to prevent standard NumPy functionality from breaking JAX functions when the two packages differ. Make sure to use jax.numpyfunctions instead of regular numpyfunctions. jaxis the main library, from which important functions like grad, jit, vmap, and pmapare used. predictsimulates the neural network’s predictions based on the dot product of the weights and activation values added to the biases, all of which are given in the paramsparameter. The next layer of neurons is then calculated using the current layer, eventually returning the last layer when paramsis fully processed. lossuses standard mean-squared error loss calculation, using the current predictionsand comparing them with targetsthat the user defines. This mirrors standard NumPy deep learning very closely, but JAX shortens the runtime in very important ways which we soon describe. Runtime Optimization jit Autograd and XLA are the two fundamental components of JAX, with XLA (accelerated linear algebra) handling the runtime and compiling aspects of JAX. Take the following example, adapted from the JAX page: def slow_f(x): # Element-wise ops see a large benefit from fusion return x * x * x + x * 2.0 * x + x x = jnp.ones((2000, 2000)) fast_f = jit(slow_f) %timeit -n10 -r3 fast_f(x) %timeit -n10 -r3 slow_f(x) 3.97 ms ± 2.53 ms per loop (mean ± std. dev. of 3 runs, 10 loops each) 52.1 ms ± 1.83 ms per loop (mean ± std. dev. of 3 runs, 10 loops each) JAX is designed to work with CPUs, GPUs, and TPUs, each a quicker processor than the last. THe example output comes from the most basic CPU setup, and JAX’s jit function still ran significantly faster than the native Python function. The discussion around compile times and runtimes seems like an arbitrary conversation when we’re dealing with small datasets — who cares if my code executes in 5 milliseconds instead of 15? This optimization, however, is vital for neural networks. Consider a simple deep learning task of identifying a lowercase letter from an image with 36x36 pixel resolution. The input layer would have 36 * 36 = 1296 neurons and the output layer would have 26 neurons, one for every letter. Without any hidden layers, we’re already over 33,000 connections, and in reality, we’d need hidden layers for determining tiny parts to letters, patterns, or some other method for transitioning between image and output. A program that might take an hour on a standard system might now take 30 seconds using TPUs and jit compiling — now the conversation is not arbitrary. vmap vmap is a function that provides "auto-vectorization" for whatever batch you have. Batches are essentially variably-sized samples of your population of training data used in one iteration, after which the model is updated. Imagine the simple solution of looping through every image in your batch, resulting in a vector with the activation values of the image. This vector is then multiplied by the model matrix, resulting in a different matrix. This process works, but it is incredibly slow, as a different intermediate matrix is created with each iteration. By using vmap, loops are pushed to the most primitive level possible. This speeds up compilation time as iterating over simple elements is quicker than the same with complex elements. For our purposes, this means that the activation vectors are compiled as an activation matrix — as Google puts it, "at every layer, we’re doing matrix-matrix multiplication rather than matrix-vector multiplication." The code for this has a unique format. Pay close attention to the following implementation: from jax import vmap predictions = vmap(partial(predict, params))(input_batch) # or, alternatively predictions = vmap(predict, in_axes=(None, 0))(params, input_batch) vmap wraps the predict function in parentheses, then takes the parameters and/or input batch wrapped in another set of parentheses. Autodifferentiation If you recall the XLA-Autograd duo that composed JAX, autodifferentiation comes from Autograd and shares its API. JAX uses grad for calculating gradients, which allows for differentiation to any order. We’ll recontextualize why this matters for machine learning. The goal of any good model is to reduce the error present — we obviously want the model to be good at predicting things, otherwise there’s no point. The gradient of a function, in this case the error, will indicate the direction to move to minimize the function. In other words, in any-dimensional space, the gradient will tell us which weights in the model need adjusting. Once you understand the importance of gradients, the function implementation becomes trivial — it just takes a number as a parameter to evaluate the gradient at that point. Google gives the example of the hyperbolic tangent function, and we get the following results after using grad: def tanh(x): # Define a function y = jnp.exp(-2.0 * x) return (1.0 - y) / (1.0 + y) grad_tanh = grad(tanh) print(grad_tanh(2.0)) 0.07065082 And that’s it! Combining all of the features we’ve shown will give you a great leap into your machine learning project, and it’s all streamlined to make the code easier to follow.
https://the-examples-book.com/programming-languages/python/jax
CC-MAIN-2022-33
en
refinedweb
Does Application.OpenURL("mailto:address@domain.com") work for popping up a native email prompt on mobile? doco Application.OpenURL("mailto:address@domain.com") It's well worth noting that if you just want to send a very simple one with NO dynamic text, no body, no unusual characters, just replace the spaces with %20 manually like so: public void LinkHelpEMAIL() { string t = "mailto:blah@blah.com?subject=Question%20on%20Awesome%20Game"; Application.OpenURL(t); } exactly like that (do not escape the at symbol). saves typing, tested on both. much related: @ina This can also help you > Opening URL and send Email from Unity For complex actions, Cross Platform Native Plugins allows option to send nicely formatted HT$$anonymous$$L body and attachments. This plugin is free to use and supports iOS and Android. Answer by cregox · Dec 04, 2013 at 06:42 PM I'll just organize in one post what's already written around... Yes, it does work! But it has to be sanitized or else it will crash silently. void SendEmail () { string email = "me@example.com"; string subject = MyEscapeURL("My Subject"); string body = MyEscapeURL("My Body\r\nFull of non-escaped chars"); Application.OpenURL("mailto:" + email + "?subject=" + subject + "&body=" + body); } string MyEscapeURL (string url) { return("+","%20"); } This works both on iOS and Android. (PS I "think" you can also use Uri.EscapeDataString(body); .. not sure) PS : you can leav string email = ""; empty and the user will decide later string email = ""; Thank you. It is working for Android and iOS both. In case someone wants to make function as tiny as possible: public static void SendEmail(string email, string subject, string body) => Application.OpenURL("mailto:{0}?subject={1}&body={2}".F(email, subject.$$anonymous$$yEscapeURL(), body.$$anonymous$$yEscapeURL())); static string $$anonymous$$yEscapeURL(this string url) => UnityWebRequest.EscapeURL(url).Replace("+", "%20"); And then you can call it like ExternalCommunication.SendEmail("somebody@gmail.com", "About stuff", "Something to talk about"); 2019: It seems it doesn't work on the latest unity 2019 1 4 1 Have tried on iOS and Android. The xcode log writes Filtering mail sheet accounts for bundle ID: com.xxx.xxx, source account management: 1 2019-05-31 20:56:42.158166+0300 appname[3637:1425152] [$$anonymous$$ailShare] can send mail: 1 mail send And nothing happend Answer by David · May 06, 2011 at 05:14 PM Note, this explanatory example code, does not literally work, as you must escape certain strings. Teaching code only. Yes. Using Application.OpenURL() you could open safari, maps, phone, sms, mail, youtube, itunes, app store and books store. You could find all the url schemes you could need here These are for Objective-C but if you get the idea of the url scheme it would be really easy to do it on Unity. Here you have an example of a simple script to send a mail with address, subject and body. C# mail example: public string email; public string subject; public string body; // Use this for initialisation .. EXAMPLE ONLY, DOES NOT WORK void Start () { Application.OpenURL("mailto:" + email + "?subject:" + subject + "&body:" + body); } There is nothing iPhone-specific about these schemes. They work fine with Android (and web, and standalone). hey Application.OpenURL("mailto:" + email + "?subject=" + subject + "&body=" + body); } not :, correct is = It's not a very well written answer and looks even more like it should be a comment to david's answer, but it's correct ;) This answer worked for me: Basically you use, then you replace "+" with "%20". Worked like a charm! $$anonymous$$y application went to suspend mode. Once i send the email, native email setup should off! How it is possible to restrict to Go suspend mode ? Answer by slake_it · Oct 29, 2015 at 09:55 AM this is the same code of @cawas created as a class with static functions for easy usage using UnityEngine; /// <summary> /// - opens the mail app on android, ios & windows (tested on windows) /// </summary> public class SendMailCrossPlatform { /// <param name="email"> myMail@something.com</param> /// <param name="subject">my subject</param> /// <param name="body">my body</param> public static void SendEmail (string email,string subject,string body) { subject = MyEscapeURL(subject); body = MyEscapeURL(body); Application.OpenURL("mailto:" + email + "?subject=" + subject + "&body=" + body); } public static string MyEscapeURL (string url) { return("+","%20"); } } Don't take my word for it, but something about making "static functions for easy usage" rubs me the wrong way and re$$anonymous$$ds me of this: $$anonymous$$aybe, and just maybe, you rather use the Toolbox pattern design for doing easy to use global access things like this: I think it's just important to be aware of what static means, what it should be used for and I'm pretty confident using it as globals is not it. That being said, do go ahead and make whatever works! ;P static Answer by Yuriy-Ivanov · Jun 18, 2017 at 06:56 PM Please note that there is no reliable way to add any attachments with mailto, you'll have to use underlying platforms email API with native plugins to achieve that. We've recently published a new asset UTMail - Email Composition and Sending Plugin, which supports composing emails with attachments and also sending them directly with SMTP. It works on multiple platforms and provides a convenient API. Best regards, Yuriy, Universal Tools. Sending Mobile Email via Unity 2 Answers How to get current battery life on mobile device 3 Answers Fur on Characters for Unity Mobile Android / iPhone 1 Answer Exit function for iPhone or Android app 2 Answers Native list views on Mobile? 0 Answers EnterpriseSocial Q&A
https://answers.unity.com/questions/61669/applicationopenurl-for-email-on-mobile.html?sort=oldest
CC-MAIN-2022-33
en
refinedweb
This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project. Tom Tromey writes: > Here's what happens: configure decides that iconv() exists in libc > (which is true). Then the code includes <iconv.h>, which in your case > defines things like this: > > #define iconv_open libiconv_open It is wrong for configure to look at a symbol called 'iconv' or 'libiconv' without including <iconv.h>. Because you have to include <iconv.h> in order to use iconv_t and the functions, and you have to accept the iconv.h file that you get, depending on $CC, $CFLAGS, $CPPFLAGS. Using the Solaris function with the GNU libiconv header, or the GNU libiconv function with the Solaris header, will lead to trouble. The #define iconv_open libiconv_open is there to let the trouble happen at link time, rather than at run time. (Really, the use of the FreeBSD iconv header with the GNU libiconv function would lead to core dumps.) Therefore the best strategy is to 1. include <iconv.h> and try to link without -liconv (because on glibc systems, or when -liconv is already in the LDFLAGS, you don't need it), then 2. include <iconv.h> and try to link with -liconv. The following macro, being used in gettext and fileutils for a year, works in all cases. Also note that the use of ICONV_CONST in gcc/java/lex.c could avoid warnings on systems whose iconv function takes a second argument of type 'char **', not 'const char **'. The macro defines @LIBICONV@ for use in Makefiles; its value is either empty or "-liconv". Bruno ============================= iconv.m4 ============================== #serial AM2 dnl From Bruno Haible. AC_DEFUN([AM_ICONV], [ dnl Some systems have iconv in libc, some have it in libiconv (OSF/1 and dnl those with the standalone portable GNU libiconv installed). AC_ARG_WITH([libiconv-prefix], [ --with-libiconv-prefix=DIR search for libiconv in DIR/include and DIR/lib], [ for dir in `echo "$withval" | tr : ' '`; do if test -d $dir/include; then CPPFLAGS="$CPPFLAGS -I$dir/include"; fi if test -d $dir/lib; then LDFLAGS="$LDFLAGS -L$dir/lib"; fi done ]) AC_CACHE_CHECK(for iconv, am_cv_func_iconv, [ am_cv_func_iconv="no, consider installing GNU libiconv" am_cv_lib_iconv=no AC_TRY_LINK([#include <stdlib.h> #include <iconv.h>], [iconv_t cd = iconv_open("",""); iconv(cd,NULL,NULL,NULL,NULL); iconv_close(cd);], am_cv_func_iconv=yes) if test "$am_cv_func_iconv" != yes; then am_save_LIBS="$LIBS" LIBS="$LIBS -liconv" AC_TRY_LINK([#include <stdlib.h> #include <iconv.h>], [iconv_t cd = iconv_open("",""); iconv(cd,NULL,NULL,NULL,NULL); iconv_close(cd);], am_cv_lib_iconv=yes am_cv_func_iconv=yes) LIBS="$am_save_LIBS" fi ]) if test "$am_cv_func_iconv" = yes; then AC_DEFINE(HAVE_ICONV, 1, [Define if you have the iconv() function.]) AC_MSG_CHECKING([for iconv declaration]) AC_CACHE_VAL(am_cv_proto_iconv, [ AC_TRY_COMPILE([ #include <stdlib.h> #include <iconv.h> extern #ifdef __cplusplus "C" #endif #if defined(__STDC__) || defined(__cplusplus) size_t iconv (iconv_t cd, char * *inbuf, size_t *inbytesleft, char * *outbuf, size_t *outbytesleft); #else size_t iconv(); #endif ], [], am_cv_proto_iconv_arg1="", am_cv_proto_iconv_arg1="const") am_cv_proto_iconv="extern size_t iconv (iconv_t cd, $am_cv_proto_iconv_arg1 char * *inbuf, size_t *inbytesleft, char * *outbuf, size_t *outbytesleft);"]) am_cv_proto_iconv=`echo "[$]am_cv_proto_iconv" | tr -s ' ' | sed -e 's/( /(/'` AC_MSG_RESULT([$]{ac_t:- }[$]am_cv_proto_iconv) AC_DEFINE_UNQUOTED(ICONV_CONST, $am_cv_proto_iconv_arg1, [Define as const if the declaration of iconv() needs const.]) fi LIBICONV= if test "$am_cv_lib_iconv" = yes; then LIBICONV="-liconv" fi AC_SUBST(LIBICONV) ]) ===========================================================================
https://gcc.gnu.org/legacy-ml/gcc-bugs/2001-06/msg01398.html
CC-MAIN-2022-33
en
refinedweb
OS module is a python module that allows you to interact with the operating systems. It uses various functions to interact with the operating system. Using it you can automatically tell the python interpreter to know which operating system you are running the code. But while using this module function sometimes you get AttributeError. The AttributeError: module ‘os’ has no attribute ‘uname’ is one of them. In this entire tutorial, you will learn how to solve the issue of module ‘os’ has no attribute ‘uname’ easily. The root cause of the module ‘os’ has no attribute ‘uname’ Error The root cause of this attributeError is that you must be using the uname() function wrongly. The import part of the os module is right but the way of using the uname() is wrong. If you will use os.uname() on your Windows OS then you will get the error. import os print(os.uname()) Output Solution of the module ‘os’ has no attribute ‘uname’ The solution of the module ‘os’ has no attribute ‘uname’ is very simple. You have to properly use the uname() method. If your operating system is Unix then its okay to use os.uname(). But in case you are using the Windows operating system then import platform instead of import os. In addition call platform.uname() instead of os.uname(). You will not get the error when you will run the below lines of code. import platform print(platform.uname()) Output Conclusion OS module is very useful if you want to know the system information. But there are some functions that lead to attributerror as that function may not support the current Operating system. If you are getting the ‘os’ has no attribute ‘uname’ error then the above method will solve your error. I hope you have liked this tutorial. If you have any queries then you can contact us for help. You can also give suggestions on this tutorial. Join our list Subscribe to our mailing list and get interesting stuff and updates to your email inbox.
https://gmailemail-login.email/module-os-has-no-attribute-uname-solved/
CC-MAIN-2022-33
en
refinedweb
I have a requirement to support a query syntax on my resources like so: I can hook that up so that the people controller receives that action via map.connect like so: map.connect ‘:people_query_regexp’, :controller => ‘people’, :index => ‘index’, :requirements => { :people_query_regexp => /people.*/ } But people is a RESTful resource, so I want to do the same thing using map.resources instead of map.connect. If I change the routing to: map.resources ‘:parties_query_regexp’, :controller => ‘parties’, :requirements => { :parties_query_regexp => / parties.*/ } …then the server fails on startup like so: lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_controller/routing/ route_set.rb:141:in define_hash_access': (eval):3:indefine_hash_access’: compile error (SyntaxError) (eval):1: syntax error, unexpected ‘:’, expecting ‘\n’ or ‘;’ def hash_for_:people_query_regexp_index_pat… It chokes on a syntax error because there’s a colon in the middle of the method name it’s trying to define. So is there a way around this? What am I doing wrong? This is Rails 2.2.2, btw. Thanks. –mark
https://www.ruby-forum.com/t/routing-via-regexp-with-map-resources/171623
CC-MAIN-2022-33
en
refinedweb
A Remotsy lib for use the Restfull API Project description Remotsy python library Remotsy is an infrared blaster device, is cloud controlled, this a Python library to control the Remotsy device via Rest API. Installation $ pip install remotsylib3 Example from remotsylib3.api_async import (API, run_remotsy_api_call) if __name__ == "__main__": client = API() #Do the login and get the token token = run_remotsy_api_call(client.login(args.username, args.password)) #Get the list of the controls lst_ctl = run_remotsy_api_call(client.list_controls()) for ctl in lst_ctl: print("id %s Name %s" % (ctl["_id"], ctl['name'])) Authentication You can use your remotsy username and password, but for security is recomended to generate a application password, logon in and use the option App Passwords. Documentation API The API documentation and links to additional resources are available at Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages. Source Distribution remotsylib3-0.0.1.tar.gz (4.0 kB view hashes)
https://pypi.org/project/remotsylib3/0.0.1/
CC-MAIN-2022-33
en
refinedweb
copy Class: mlreportgen.report.Reporter Package: mlreportgen.report Create copy of reporter object and make deep copies of certain property values Syntax reporterObj2 = copy(reporterObj1) Description returns a copy of the specified reporter object. The returned copy contains a deep copy of any property value of reporterObj2 = copy( reporterObj1) reporterObj1 that references a reporter, DOM object, or mlreportgen.report.ReporterLayout object. As a result, the corresponding property value in reporterObj2 refers to a new, independent object. You can modify the properties of the original or new object without affecting the other object. Input Arguments reportObj1 — Reporter to copy reporter object Reporter to copy, specified as an object of a reporter class. Output Arguments reportObj2 — Copy of reporter reporter object Copy of reporter, returned as an object of a reporter class. Examples Copy a Reporter Object This example copies a MATLABVariable reporter to show the effect of a deep copy operation on a reporter property. Modifying a property of the Text object in the TextFormatter property of the copy of the MATLABVariable object does not affect the original MATLABVariable object. import mlreportgen.report.* obj1 = MATLABVariable; The Bold property of the Text object referenced by the TextFormatter property is empty. obj1.TextFormatter.Bold ans = [] Copy the MATLABVariable object. In the copy, set the Bold property of the Text object referenced by the TextFormatter property to true. obj2 = copy(obj1); obj2.TextFormatter.Bold = true; In the original MATLABVariable object, the Bold property of the object referenced by the TextFormatter property is still empty. obj1.TextFormatter.Bold ans = [] More About reporter class A reporter class is a Report API class that is a subclass of the mlreportgen.report.ReporterBase class, which is an undocumented, internal class. deep copy To make a deep copy of a handle object, the copy operation recursively copies property values that are handles to objects so that all of the underlying data is copied. By contrast, with a shallow copy, the copy operation copies the handle. The underlying data is not copied. When you copy a reporter, the copy operation makes a deep copy of any property value that is a reporter object, an mlreportgen.report.ReporterLayout object, or a DOM object.
https://www.mathworks.com/help/rptgen/ug/mlreportgen.report.reporter.copy.html
CC-MAIN-2022-33
en
refinedweb
>> apply a 2D transposed convolution operation in PyTorch? We can apply a 2D transposed convolution operation over an input image composed of several input planes using the torch.nn.ConvTranspose2d() module. This module can be seen as the gradient of Conv2d with respect to its input. The input to a 2D transpose convolution layer must be of size [N,C,H,W] where N is the batch size, C is the number of channels, H and W are the height and width of the input image, respectively. Generally a 2D transposed convolution operation is applied on the image tensors. For a RGB image, the number of channels is 3. The main feature of a transpose convolution operation is the filter or kernel size and stride. This module supports TensorFloat32. Syntax torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size) Parameters in_channels – Number of channels in input image. out_channels – Number of channels produced by transpose convolution operation. kernel_size – Size of the convolving kernel. Along with the above three parameters, there are some optional parameters also such as stride, padding, dilation, etc. We will take examples of these parameters in detail In the following Python example,. Steps You could use the following steps to apply a 2D transpose convolution operation − Import the required library. In all the following examples, the required Python library is torch. Make sure you have already installed it. To apply 2D transpose convolution operation on images we need torchvision and Pillow as well. import torch import torchvision from PIL import Image Define input tensor or read the input image. If an input is an image, then we first convert it into a torch tensor. Define in_channels, out_channels, kernel_size, and other parameters. Next define a transpose convolution operation convt by passing the above-defined parameters to torch.nn.ConvTranspose2d() convt = nn.ConvTranspose2d(in_channels, out_channels, kernel_size) Apply the transpose convolution operation convt on the input tensor or the image tensor. output = convt(input) Next print the tensor after the transpose convolution operation. If the input was an image tensor, then to visualize the image, we first convert the tensor obtained after transpose convolution operation to a PIL image and then visualize the image. Let's have a look at some examples for more clear understanding. Input Image We will use the following image as the input file in Example 2. Example 1 In the following Python example, we perform 2D transpose convolution operation on input tensor. We apply different combinations of kernel_size, stride, padding, and dilation. # Python 3 program to perform 2D transpose convolution operation import torch import torch.nn as nn '''torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0) ''' in_channels = 2 out_channels = 3 kernel_size = 2 convt = nn.ConvTranspose2d(in_channels, out_channels, kernel_size) # conv = nn.ConvTranspose2d(3, 6, 2) '''input of size [N,C,H, W] N==>batch size, C==> number of channels, H==> height of input planes in pixels, W==> width in pixels. ''' # define the input with below info N=1 C=2 H=4 W=4 input = torch.empty(N,C,H,W).random_(256) # input = torch.randn(2,3,32,64) print("Input Tensor:\n", input) print("Input Size:",input.size()) # Perform transpose convolution operation output = convt(input) print("Output Tensor:\n", output) print("Output Size:",output.size()) # With square kernels (3,3) and equal stride convt = nn.ConvTranspose2d(2, 3, 3, stride=2) output = convt(input) print("Output Size:",output.size()) # non-square kernels and unequal stride and with padding convt = nn.ConvTranspose2d(2, 3, (3, 5), stride=(2, 1), padding=(4, 2)) output = convt(input) print("Output Size:",output.size()) # non-square kernels and unequal stride and with padding and dilation convt = nn.ConvTranspose2d(2, 3, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1)) output = convt(input) print("Output Size:",output.size()) Output Input Tensor: tensor([[[[115., 76., 102., 6.], [221., 173., 23., 205.], [123., 23., 112., 18.], [189., 178., 167., 143.]], [[239., 180., 226., 88.], [224., 30., 196., 224.], [ 57., 222., 47., 84.], [ 25., 255., 201., 114.]]]]) Input Size: torch.Size([1, 2, 4, 4]) Output Tensor: tensor([[[[ 48.1156, 64.6112, 64.9630, 47.2604, 3.9925], [74.9169, 80.7055, 138.8992, 82.8471, 54.3722], [20.0938, 49.5610, 30.2914, 93.3563, 3.1597], [-27.1410, 118.8138, 92.8670, 50.6170, 37.5564], [-27.7676, 6.5762, 33.6408, 6.7176, -8.8372]], [[ -18.2188, -56.5362, -49.8063, -43.3336, -16.8645], [ -23.4012, -6.1607, 40.5064, -17.4547, -25.1738], [ -5.7752, 53.6838, -27.9412, 36.7660, 44.0866], [ -23.5205, 1.1443, -29.0826, -34.7213, -4.1535], [ 5.6746, 38.4026, 72.8414, 59.2990, 34.9241]], [[ -35.0380, -31.4031, -38.0059, -19.3247, -5.6272], [-109.2401, -12.9763, -62.2776, -31.0825, 19.2766], [ -93.6596, -18.5403, -67.5457, -61.8533, 32.3005], [ -27.7020, -71.3938, -18.9532, -26.8304, 20.0184], [ -29.2334, -85.8179, -35.4292, -16.4065, 19.0788]]]], grad_fn=<SlowConvTranspose2DBackward>) Output Size: torch.Size([1, 3, 5, 5]) Output Size: torch.Size([1, 3, 9, 9]) Output Size: torch.Size([1, 3, 1, 4]) Output Size: torch.Size([1, 3, 5, 4]) Example 2 In the following Python example, we perform 2D transpose convolution operation on an input image. To apply 2D transpose convolution, we first convert the image to a torch tensor and after transpose convolution, again convert it to a PIL image for visualization. # Python program to perform 2D transpose convolution operation # Import the required libraries import torch import torchvision from PIL import Image import torchvision.transforms as T # Read input image img = Image.open('car.jpg') # convert the input image to torch tensor img = T.ToTensor()(img) print("Input image size:", img.size()) # size = [3, 464, 700] # unsqueeze the image to make it 4D tensor img = img.unsqueeze(0) # image size = [1, 3, 464, 700] # define transpose convolution layer # convt = nn.ConvTranspose2d(in_channels, out_channels, kernel_size) convt = torch.nn.ConvTranspose2d(3, 3, 2) # apply transpose convolution operation on image img = convt(img) # squeeze image to make it 3D img = img.squeeze(0) # now image is again 3D print("Output image size:",img.size()) # convert image to PIL image img = T.ToPILImage()(img) # display the image after convolution img.show() ''' Note: You may get different output image after the convolution operation because the weights initialized may be different at different runs. ''' Output Input image size: torch.Size([3, 464, 700]) Output image size: torch.Size([3, 465, 701]) Note that you may see some changes in the image obtained after each run because of the initialization of weights and biases. - Related Questions & Answers - How to apply a 2D convolution operation in PyTorch? - How to apply a 2D Max Pooling in PyTorch? - How to apply a 2D Average Pooling in PyTorch? - How to perform a permute operation in PyTorch? - How to apply a 3×3 convolution matrix using imageconvolution() in PHP? - How to perform an expand operation in PyTorch? - How to apply linear transformation to the input data in PyTorch? - How to apply rectified linear unit function element-wise in PyTorch? - How to apply a 2D or 3D transformation to an element with CSS - How to apply a 2D or 3D transformation to an element with JavaScript? - Explain the Union operation on 2D shapes in JavaFX - Explain the Intersect operation on 2D shapes in JavaFX - Explain the Subtract operation on 2D shapes in JavaFX - How to store a 2d Array in another 2d Array in java? - Apply a mathematical operation on rows of an R data frame stored in a list.
https://www.tutorialspoint.com/how-to-apply-a-2d-transposed-convolution-operation-in-pytorch
CC-MAIN-2022-33
en
refinedweb
.3.0.3.0-bin.tar.gz). Untar it and copy groosh-0.3.0.jar to your $GROOVY_HOME/lib. Try the following script to check if it worked: def gsh = new groosh.Groosh(); gsh.ls().toStdOut(); Have a look at the examples directory for more examples how to use groosh. Documentation The following example shows Groosh in action: def gsh = new groosh.Groosh(); gsh.cat('test_scripts/blah.txt').toStdOut(); Another example : def gsh = new groosh.Groosh(); def f = gsh.find('.', '-name', '*.java', '-ls'); def total = 0; def lines = gsh.grid { values,w | def x = values[2,4,6,10]; def s = x.join(' '); w.println(s); def total += Integer.parseInt(values[6]); }; f.pipeTo(lines); lines.toStdOut(); println "Total: " + tota); Sometimes the name of a shell command conflicts with a Groovy method (for example ''grep''). This means that gsh.grep(...) does not execute the shell command, but the Groovy method grep(...). As a workaround for that you may prefix any shell command with _ this means the example above becomes gsh._grep(...) The following example shows a more elaborate example. It uploads photos to a flickr account using the command line tool flickcurl. A photo set of this images is created and named after the current directory. import static groosh.Groosh.groosh import static org.codehaus.groovy.groosh.stream.DevNull.devnull ids = [:] shell = groosh() //get all images in this folder and upload it to flickr //remember the photo id we get from flickr shell.ls().grep(~/.*jpg/).each { println "Uploading file $it to flickr" flickcurl = shell.flickcurl("upload",it,"friend","family").useError(true) id = flickcurl.grep(~/.*Photo ID.*/)[0].split(":")[1].trim() ids[it] = id println "Photo ID is: $id" } //we need to know the first photo id firstKey = ids.keySet().toList()[0] //create a set with the name of the directory we are in right now //use the id of the first photo as set cover setName = shell.pwd().text.split("/")[-1] println "Creating set: $setName" flickcurl = shell.flickcurl("photosets.create",setName,setName,ids[firstKey]).useError(true) id = flickcurl.grep(~/.*Photoset.*/)[0].split(" ")[2].trim() println "Photoset ID is: $id" //make a backup of the ids in a file for later reference println "Writing ids to a file" file = new File(shell.pwd().text.trim() + "/.flickrset") file << "Photoset:" << id << "\n" ids.each { file << it.key << ":" << it.value << "\n" } //the first photo is already part of the photo set so lets remove it ids.remove(firstKey) //add the remaining photos to the photo set ids.each { println "Adding photo to set at flickr: $it" shell.flickcurl("photosets.addPhoto",id,it.value) | devnull() } println "DONE" Developers Source Control The Groosh source code is available. Community Mailing List(s)
http://groovy.codehaus.org/Groosh
crawl-002
en
refinedweb
Migrate4j - Database Migration Tool for Java Migrate4j is a migration tool for java, similar to Ruby's db:migrate task. Unlike other Java based migration tools, database schema changes are defined in Java, not SQL. This means your migrations can be applied to different database engines without worrying about whether your DDL statements will still work. Schema changes are defined in Migration classes, which define "up" and "down" methods - "up" is called when a Migration is being applied, while "down" is called when it is being rolled back. A simple Migration, which simply adds a table to a database, is written as: package db.migrations; import static com.eroi.migrate.Define.*; import static com.eroi.migrate.Define.DataTypes.*; import static com.eroi.migrate.Execute.*; import com.eroi.migrate.Migration; public class Migration_1 implements Migration { public void up() { createTable( table("simple_table", column("id", INTEGER, primaryKey(), notnull()), column("desc", VARCHAR, length(50), defaultValue("NA")))); } public void down() { dropTable("simple_table"); } } adds improved usability (simplified syntax), additional schema changes and support for more database products. While migrate4j does not yet have support for all database products, we are actively seeking developers interested in helping fix this situation. Visit for more information on how migrate4j can simplify synchronizing your databases. To obtain migrate4j, go to and download the latest release. For questions or to help with future development of migrate4j, email us at migrate4j-users AT lists.sourceforge.net (replacing the AT with the "at symbol"). - Login or register to post comments - 5966 reads - Printer-friendly version (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) Jon Chase replied on Mon, 2008/04/28 - 1:17pm Looks nice, but does this offer anything over and above what Liquibase () does (other than sql defined in Java instead of XML)? I've been using Liquibase for automatic database upgrades during app deployments and it's been great - it's so nice not to have to run those scripts manually anymore! I'm glad to see another serious project in this market - it shows that there is definitely interest, and this is something that the agile Java community has needed for a long time. ------------------- Jon Chase SendAlong.com - Securely email large files to anyone todd replied on Mon, 2008/04/28 - 1:47pm Jon, From a feature standpoint, migrate4j is intended to be very similar to Liquibase. However, our main focus is to provide a tool that uses Java to define schema changes. If you're already using Liquibase, db:migrate, or some other migration tool, you'll probably want to stick with that. However, for Java projects that are not using a migration tool, migrate4j is an option to consider. Jon Chase replied on Mon, 2008/04/28 - 1:53pm Todd, Thanks for the clarification - I think Migrage4j is a good option for teams that want to stick to Java and stay out of XML. Jon Stefan replied on Mon, 2008/04/28 - 11:22pm The sourceforge project page indicates that migrate4j is licenced under GPL. This means many open source projects that are using a more liberal license (e.g. Apache) as well as closed source projects won't be able to use it. Is that by intention? todd replied on Tue, 2008/04/29 - 9:14am in response to: srt Stefan, The reason for going GPL right now is to make sure improvements get shared. As much as we'd love migrate4j to be adopted by other projects, we want all work being done to be donated back. Once our codebase is more complete, offering other licensing options is certainly a possibility. Stefan replied on Tue, 2008/04/29 - 10:18am in response to: toddrun I guess you could also achieve this with LGPL which would still make sure enhancements to the library are passed back while allowing more liberal usage of the library in other projects. LGPL is also used by LiquidBase - probably your strongest "competitor" ;) =Stefan
http://java.dzone.com/announcements/migrate4j-database-migration-t
crawl-002
en
refinedweb
Created on 2008-04-04 14:43 by jerome.chabod, last changed 2008-06-23 15:23 by draghuram. shutil generate a NameError (WindowsError) exception when moving a directory from an ext3 to a fat32 under linux To reproduce it: under linux, current path on an ext3 filesytem, try to enter following commands: mkdir toto #on an ext3 partition python import shutil shutil.move("toto", "/media/fat32/toto") # /media/fat32 is mounted on a fat32 filesystem You will produce following error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.5/shutil.py", line 196, in move copytree(src, dst, symlinks=True) File "/usr/lib/python2.5/shutil.py", line 132, in copytree except WindowsError: NameError: global name 'WindowsError' is not defined Tested on ubuntu Feisty and a newly installed Hardy-beta. This problem has been noticed as part of issue1545 and a patch with the fix has been proposed but has a small problem with it. Do you want to take a look? This is duplicate of 3134. I posted a patch there.
http://bugs.python.org/issue2549
crawl-002
en
refinedweb
. <java classname="org.codehaus.gram.Gram" fork="true"> <classpath refid="tool.classpath"/> <!-- the directory where the source code lives --> <arg value="src/java"/> <!-- the groovy script to run to generate stuff --> <arg value="src/script/MyGram.groovy"/> </java> Example script Here's a simple example which just lists all the hibernate peristent classes in your source code def persistentClasses = classes.findAll { it.getAnnotation("hibernate.class") != null } println "Found ${persistentClasses.size()} instances out of ${classes.size()}" persistentClasses.each { c -> println c.simpleName for (p in c.properties) { println " property: ${p.simpleName}" } } Jar Dependencies Gram depends on: Articles You might find some more documentation in the form of blog posts by Andres Almiray:
http://groovy.codehaus.org/Gram
crawl-002
en
refinedweb
Hi Dimitre, > 1. 'mapping operator' must mean "a map function, which is used in an > infix notation". > > So instead of writing: > > map fun list > > we now write: > > list `map` fun (let's ignore that this must be the other way around). > > However, the above defined mapping operator stil operates with > ***any function*** defined on lists. > > At the same time the mapping operator as you define it will operate > only on valid XPath expressions. This is quite limited compared to > the full mapping operator, therefore it would be best if the name of > the 'limited mapping operator' properly reflected this difference in > scope. Absolutely. I suggested a simple mapping operator as a replacement for the more complex for expression that is currently defined in XPath 2.0. I fully recognise the fact that a simple mapping operator gains in simplicity, but loses in completeness - it cannot achieve everything that the for expression can achieve (it can't do joins). I'm sure that you're right that it also cannot achieve everything that a full mapping operator would achieve (perhaps you can give some examples?). (As I don't have your expertise, I cannot tell whether the for expression as currently defined can do everything a full mapping operator can do - perhaps you could comment on that?) My thinking in suggesting a simple mapping operator was that it would meet over 80% of the practical requirements for a mapping operator that is used in XPath; the remainder can be handled in XSLT. > 2. The example you provide of "piping" is actually a functional > composition of two map functions. It can be re-written like this: > > > (map (if (position() mod 2) then . + 50 else .)) ¡Ñ (map (. * 2)) $coordinates > > Here I use the special symbol ⊙ for the functional composition operator. > > So, the piping can be expressed as: > > (map f1) > ¡Ñ (map f2) > ............ > ¡Ñ (map fN) $sequence > > It can easily be proven that the above is equal to: > > map (f1 ¡Ñ f2 ... ¡Ñ fN) $sequence > > While your mapping operator will perform a series of mappings, each > producing an intermediate sequence and may require too much memory, > the last function applies the map function only once. The > composition of all functions is applied on every element of > $sequence and the resulting sequence is produced. No additional > memory for intermediate sequences is necessary. That's a *very* interesting point :) As I understand it, you're saying that it would be more efficient if the two-step mapping was carried out in one step, and showing that you can compose the two steps: $coordinates -> (. * 2) -> if (position() mod 2) then . + 50 else . into one: $coordinates -> if (position() mod 2) then (. * 2) + 50 else (. * 2) Presumably the same observation applies to the equivalent for expression? for $c in (for $d in $coordinates return $d * 2) return if (position() mod 2) then $c + 50 else $c This raises two questions/observations for me. The first question arises because of the fact that in XPath 2.0 sequences cannot contain sequences. This means that the result of a mapping operation does not necessarily have the same cardinality as the source of the mapping operation. As an example, say that the coordinates that we're dealing with are actually x,y,z coordinates, but we're only interested in the x,y coordinates. We could get rid of the z coordinates in one step, as follows: $coordinates -> if (position() mod 3) then . else () -> (. * 2) -> if (position() mod 2) then . + 50 else . Which I think in your syntax is: (map (if (position() mod 2) then . + 50 else .)) ¡Ñ (map (. * 2)) ¡Ñ (map (if (position() mod 3) then . else ())) $coordinates Using the same algorithm as before to compose these steps, I think the expression is: if (position() mod 2) then ((if (position() mod 3) then . else ()) * 2) + 50 else ((if position() mod 3) then . else ()) * 2) But this doesn't work - it adds 50 to the even coordinates from the complete x,y,z coordinate list and operates on them. So if you had a line defined as: (0, 0, 0, 50, 50, 50) then you would get (empty sequences inserted to demonstrate the mapping): (50, 0, (), 100, 150, ()) => (50, 0, 100, 150) whereas the expected result from the individual steps was: (50, 0, 150, 100) Now I might have gone wrong in understanding how functions are composed, which is why I ask the question - can you do function composition in XPath 2.0, given that the cardinality of the sequence can change when a map operation is used and that the position of an item in the sequence should be accessible within the function? My second question is, whether the above issue is manageable or not, whether implementations could be clever enough to spot where the use of simple mapping operators is equivalent to functional composition, in order to optimise these expressions? Perhaps an implementation could recognise that given: $coordinates -> (. * 2) -> if (position() mod 2) then . + 50 else . it could compose the two steps on the right hand side of the first -> into a single operation, and use that to give added efficiency? I also wonder if this can apply to steps in path expressions as well, and whether any XSLT processors currently take advantage of that. You could imagine that as well as keeping a record of the nodes reachable along a particular axis for each node, they might also keep information about the composition of those axes - when they see: row[following-sibling::row/@cust = @cust] they might optimise by providing a quick access to each row's following sibling rows' 'cust' attributes when they initially construct the in-memory representation of the node tree. Mike, if you're listening, does Saxon do that? Have you considered it? > This shows that it is better to have a map() function and a > composition operator for expressions (in case XPath 2.0 will not > fully support higher-order functions). I think that you also need to have functions/expressions as objects in their own right to get a map() function to work as a function, don't you? As I see it, without functions/expressions as objects, we're stuck with using operators or expressions and unfortunately can't express the map() function in functional syntax. > An operator for functional composition has an added benefit that it > can be used with ***any functions***, not only with functions that > operate on sequences. Thus, it can be used to express an arbitrary > sequence of processing (piping) applied on a value of any datatype. I agree - the ability to apply a series of operations to values, expressing those operations as individual steps rather than a single complex operation is really very powerful. For example, start with an element, get its string value (default to 'FOO' if it's empty), normalize it, change it to upper case, and check whether the result equals "FOO". As a composed function this would be: upper-case(normalize-space(if-empty(., 'FOO'))) = 'FOO' Expressed as a series of steps this would be: if-empty(., 'FOO') -> normalize-space(.) -> upper-case(.) -> (. = 'FOO') Fortunately, with the simple mapping operator this works, because in XPath 2.0 individual values are treated exactly the same as single-item sequences. For interest, the equivalent for expression is as follows: for $a in (for $b in (for $c in if-empty(., 'FOO') return normalize-space($c)) return upper-case($b)) return ($a = 'FOO') I *think* that if you view the series of operations on the right hand side of the simple mapping operator as being functionally composed, then you have the functional composition operator that you're looking for? Cheers, Jeni --- Jeni Tennison XSL-List info and archive:
http://aspn.activestate.com/ASPN/Mail/Message/xsl-list/968103
crawl-002
en
refinedweb
. Next,, and, as such, it takes an HttpRequest object as its first parameter. Each view function takes an HttpRequest object as its first parameter. In this case, we call that parameter request. Note that the name of the view function doesn’t matter; Django doesn’t care what it’s called, and. (How does Django find this function, then? We’ll get to that in a moment.).” (A note to the HTML purists: Yes, we know we’re missing a DOCTYPE, and a <head>, and all that stuff. We’re trying to keep it simple.) Finally, the view returns an HttpResponse object that contains the generated HTML. Each view function is responsible for returning an HttpResponse object. (There are exceptions, but we’ll get to those later.) So, to recap, this view function returns an HTML page that includes the current date and time. But where should this code live, how do we tell Django to use this code? The answer to the first question is: This code can live anywhere you want, as long as it’s on your Python path. There’s no other requirement — no “magic,” so to speak. For the sake of putting it somewhere, let’s create a file called views.py, copy this view code into that file and save it into the mysite directory you created in the previous chapter. Your Python path The Python path is the list of directories on your system where Python looks when you use the Python import statement. For example, let’s say your Python path is set to ['', '/usr/lib/python2.4/site-packages', '/home/my/my.) How do we tell Django to use this view see here is the variable urlpatterns. This'^now/$', current_datetime), ) We made two changes here. First, we imported the current_datetime view from its module (mysite/views.py, which translates into mysite.views in Python import syntax). Next, we added the line (r'^now/$', /now/? There’s no need to add a slash at the beginning of the '^now/$' expression in order to match /now/. Django automatically puts a slash before every expression. '^now/' (without a dollar sign at the end), then any URL that starts with now/ would match — such as /now/foo and /now/bar, not just /now/. Similarly, if we had left off the initial caret character ('now/$'), Django would match any URL that ends with now/ — e.g., /foo/bar/now/. Thus, we use both the caret and dollar sign to ensure that only the URL /now/ matches. Nothing more, nothing less. To test our changes to the URLconf, start the Django development server, as you did in Chapter — and you should see the output of your Django view. Hooray! You’ve made your first Django-powered Web page. We should point out several things about what just happened. Here’s the nitty-gritty of what goes on when you run the Django development server and make requests to Web pages: The command python manage.py runserver looks for a file called settings.py. This file contains all sorts of optional configuration for this particular Django instance, but one of the most important settings is one called ROOT_URLCONF. The ROOT_URLCONF setting tells Django which Python module should be used as the URLconf for this Web site. Remember when django-admin.py startproject created the files settings.py and urls.py? Well, the auto-generated settings.py has a ROOT_URLCONF that points to the auto-generated urls.py. Convenient. When a request comes in — say, a request to the URL /now/ — a HttpRequest object as the first parameter to the function. (More on HttpRequest later.) The view function is responsible for returning an HttpResponse object. With this knowledge, you know the basics of how to make Django-powered pages. It’s quite simple, really — just write view functions and map them to URLs via URLconfs. Now’s a good time to point out a key philosophy behind URLconfs, and behind Django in general: the principle of loose coupling. Simply put, loose coupling is a software-development approach that values the importance of making pieces interchangeable. If two pieces of code are “loosely coupled,” then making changes to one of the pieces will have little basic PHP (), for example, the URL of your application is designated by where you place the code on your filesystem. In the CherryPy Python Web framework (), the URL of your application corresponds to the name of the method in which your code lives. This may seem like a convenient shortcut in the short term, but it can get unmanageable in the long run. For example, consider the view function we wrote above, which displays the current date and time. If we wanted to change the URL for the application — say, move it from /now/. And we’ll continue to point out examples of this important philosophy throughout this book. In our URLconf thus far, we’ve only defined a single URLpattern — the one that handles requests to the URL /now/. What happens when a different URL is requested? To find out, try running the Django development server and hitting a page such as or, or even (the site “root”). You should see a “Page not found” message. (Pretty, isn’t it? We Django people sure do like our pastel colors.) Django displays this message because you requested a URL that’s not defined in your URLconf. automatically when you start it. In our first view example, the contents of the page — the current date/time — were dynamic, but the URL (“/now/”) was static. In most dynamic Web applications, though, a URL contains parameters that influence the output of the page. As another (slightly contrived) example, let’s create a second view, which displays the current date and time offset by a certain number of hours. The goal is to craft a site in such a way that the page /now/plus1hour/ displays the date/time one hour into the future, the page /now/plus2hours/ displays the date/time two hours into the future, the page /now/plus3hours/ displays the date/time three hours into the future, and so on. A novice might think to code a separate view function for each hour offset, which might result in a URLconf that looked like this: urlpatterns = patterns('', (r'^now/$', current_datetime), (r'^now/plus1hour/$', one_hour_ahead), (r'^now/plus2hours/$', two_hours_ahead), (r'^now/plus3hours/$', three_hours_ahead), (r'^now/plus4hours/$', four_hours_ahead), ) Clearly, this line of thought is flawed. Not only would this result in redundant view functions, but!” That’d be something like /now /now/plus3hours/ above, a URLpattern is a regular expression, and, hence, we can use the regular expression pattern \d+ to match one or more digits: from django.conf.urls.defaults import * from mysite.views import current_datetime, hours_ahead urlpatterns = patterns('', (r'^now/$', current_datetime), (r'^now/plus\d+hours/$', hours_ahead), ) This URLpattern will match any URL such as /now/plus2hours/, /now/plus25hours/ or even /now/plus100000000000hours/. Come to think of it, let’s limit it so that the maximum allowed offset is 99 hours. That means we want to allow either one- or two-digit numbers; in regular expression syntax, that translates into \d{1,2}: (r'^now/plus\d{1,2}hours/$', hours_ahead), (When building Web applications, it’s always important to consider the most outlandish data input possible, and decide whether the application should support that input or not. We’ve curtailed the outlandishness here by limiting the offset to 99 hours. And, by the way, The Outlandishness Curtailers would be a fantastic, if verbose, band name.) Regular expressions Regular expressions (or “regexes”) are a compact way of specifying patterns in text. While Django URLconfs allow arbitrary regexes for powerful URL-matching capability, you’ll probably only use a few regex patterns in practice. Here’s a small selection of common patterns: For more on regular expressions, see Appendix XXX, Regular Expressions.'^now/plus(\d{1,2})hours/$','^now/$', current_datetime), (r'^now/plus(\d{1,2})hours/$', hours_ahead), ) With that taken care of, let’s write the hours_ahead view. ..admonition:: Coding order In this case, we wrote the URLpattern first and the view second, but in the previous example, we wrote the view first, then the URLpattern. Which technique is better? Well, every developer is different. If you’re a big-picture type of person, it may make most sense to you to write all of the URLpatterns for your application at the same time, at the start of your project, then coding up the views. This has the advantage of giving you a clear to-do list, and it essentially defines the parameter requirements for the view functions you’ll need to write. If you’re more of a bottom-up developer, you might prefer to write the views first, then anchor them to URLs afterward. That’s OK, too. In the end, it comes down to what fits your brain the best. Either approach is valid. hours_ahead is very similar to the current_datetime view we wrote earlier, with a key difference: it takes an extra argument, the number of hours of offset. Here it is: from django.http import HttpResponse import datetime /now/plus3hours/, then offset would be the string '3'. If the requested URL were /now/plus21hours/, above.…and break it! Let’s) Now load up the development server and navigate to /now/plus3hours/.. Some highlights: At the top of the page, you get the key information about the exception: the type of exception, any parameters to the exception (e.g., the "unsupported type" message in this case), the file in which the exception was raised and the offending line number. Under that, just. If this information seems like gibberish to you at the moment, don’t fret — we’ll explain it later in this book. Below, the “Settings” section lists all of the settings for this particular Django installation. Again, we’ll explain settings later in this book. just that — above, work the same way.) Here are a few exercises that will solidify some of the things you learned in this chapter. (Hint: Even if you think you understood everything, at least give these exercises, and their respective answers, a read. We introduce a couple of new tricks here.) Here’s one implementation of the hours_behind view: def hours_behind(request, offset): offset = int(offset) dt = datetime.datetime.now() - datetime.timedelta(hours=offset) html = "<html><body>%s hour(s) ago, it was %s.</body></html>" % (offset, dt) return HttpResponse(html) Not much is different between this view and hours_ahead — only the calculation of dt and the text within the HTML. The URLpattern would look like this: (r'^now/minus(\d{1,2})hours/$', hours_behind), Here’s one implementation of the hour_offset view: def hour_offset(request, plus_or_minus, offset): offset = int(offset) if plus_or_minus == 'plus': dt = datetime.datetime.now() + datetime.timedelta(hours=offset) html = 'In %s hour(s), it will be %s.' % (offset, dt) else: dt = datetime.datetime.now() - datetime.timedelta(hours=offset) html = '%s hour(s) ago, it was %s.' % (offset, dt) html = '<html><body>%s</body></html>' % html return HttpResponse(html) The URLpattern would look like this: (r'^now/(plus|minus)(\d{1,2})hours/$', hour_offset), In this implementation, we capture two values from the URL — the offset, as we did before, but also the string that designates whether the offset should be positive or negative. They’re passed to the view function in the order in which they’re captured. Inside the view code, the variable plus_or_minus will be either the string 'plus' or the string 'minus'. We test that to determine how to calculate the offset — either by adding or subtracting a datetime.timedelta. If you’re particularly anal, you may find it inelegant that the view code is “aware” of the URL, having to test for the string 'plus' or 'minus' rather than some other variable that has been abstracted from the URL. There’s no way around that; Django does not include any sort of “middleman” layer that converts captured URL parameters to abstracted data structures, for simplicity’s sake. To accomplish this, we wouldn’t have to change the hour_offset view at all. We’d just need to edit the URLconf slightly. Here’s one way to do it, by using two URLpatterns: (r'^now/(plus|minus)(1)hour/$', hour_offset), (r'^now/(plus|minus)([2-9]|\d\d)hours/$', hour_offset), More than one URLpattern can point to the same view; Django processes the patterns in order and doesn’t care how many times a certain view is referenced. In this case, the first pattern matches the URLs /now/plus1hour/ and /now/minus1hour/. The (1) is a neat little trick — it passes the value '1' as the captured value, without allowing any sort of wildcard. The second pattern is more complex, as it uses a slightly tricky regular expression. The key part is ([2-9]|\d\d). The pipe character ('|') means “or,” so the pattern in full means “match either the pattern [2-9] or \d\d.” In other words, that matches any one-digit number from 2 through 9, or any two-digit number. Here’s a basic way of accomplishing this. Alter the hour_offset function like so: def hour_offset(request, plus_or_minus, offset): offset = int(offset) if offset == 1: hours = 'hour' else: hours = 'hours' if plus_or_minus == 'plus': dt = datetime.datetime.now() + datetime.timedelta(hours=offset) output = 'In %s %s, it will be %s.' % (offset, hours, dt) else: dt = datetime.datetime.now() - datetime.timedelta(hours=offset) output = '%s %s ago, it was %s.' % (offset, hours, dt) output = '<html><body>%s</body></html>' % output return HttpResponse(output) Ideally, though, we wouldn’t have to edit Python code to make small presentation-related changes like this. Wouldn’t it be nice if we could separate presentation from Python logic? Ah, foreshadowing….
http://djangobook.com/en/beta/chapter03/
crawl-002
en
refinedweb
The. A few years ago, the Computer Sciences Department converted most of its classes from C++ to Java as the principal language for programming projects. CS 537 was the first course to make the switch, Fall term, 1996. At that time virtually all the students had heard of Java and none had used it. Over the next few years more and more of our courses were converted to Java until, by 1998-99, the introductory programming prerequisites for this course, CS 302 and CS 367, were taught in Java. The department now offers a "C++ for Java Programmers" course, CS 368. The remainder of these notes provide some advise on programming and style that may be helpful to 537 students. In particular, we describe threads and synchronized methods, Java features that you probably haven't seen before. Note that some of the examples assume that you are using Java version 1.5 or later. The Java language (API stands for "Application Programming Interface"). At last count, there were over 166 packages in the Platform API, but you will probably only use classes from three of them: Case is significant in identifiers in Java, so if and If are considered to be quite different. The language has a small set of reserved words such as if, while, etc. They are all sequences of lower-case letters. The Java language places no restrictions on what names you use for functions, variables, classes, etc. However, there is a standard naming convention, which all the standard Java libraries follow, and which you must follow in this class. Simple class definitions in Java look rather like class definitions in C++ (although, as we shall see later, there are important differences). public. You can compile this class with the command javac Pair.javaAssuming there are no errors, you will get a file named Pair.class. There are exceptions to the rule that requires a separate source file for each class but you should ignore them. In particular, class definitions may be nested. However, this is an advanced feature of Java, and you should not nest class definitions unless you know what you're doing. There is a large set of predefined classes, grouped into packages. The full name of one of these predefined classes includes the name of the package as prefix. For example, the library class java.util.Random is in package java.util, and a program may use the class with code like this: java.util.Random r = new java.util.Random();The import statement allows you to omit the package name from one of these classes. A Java program that includes the line import java.util.Random;can abbreviate the use of Random to Random r = new Random();You can import all the classes in a package at once with a notation like import java.util.*;The package java.lang is special; every program behaves as if it started with import java.lang.*;whether it does or not. You can define your own packages, but defining packages is an advanced topic beyond the scope of what's required for this course. The import statement doesn't really "import" anything. It just introduces a convenient abbreviation for a fully-qualified class name. When a class needs to use another class, all it has to do is use it. The Java compiler will know that it is supposed to be a class by the way it is used, will import the appropriate .class file, and will even compile a .java file if necessary. (That's why it's important for the name of the file to match the name of the class). For example, here is a simple program that uses two classes: public class HelloTest { public static void main(String[] args) { Hello greeter = new Hello(); greeter.speak(); } } public class Hello { void speak() { System.out.println("Hello World!"); } }Put each class in a separate file (HelloTest.java and Hello.java). Then try this: javac HelloTest.java java HelloTestYou should see a cheery greeting. If you type ls you will see that you have both HelloTest.class and Hello.class even though you only asked to compile HelloTest.java. The Java compiler figured out that class HelloTest uses class Hello and automatically compiled it. Try this to learn more about what's going on: rm -f *.class javac -verbose HelloTest.java java HelloTest. There are exactly eight primitive types in Java, boolean, char, byte, short, int, long, float, and double.. There are four integer types, each of which represents a signed integer with a specific number of bits. The types float and double represent 32-bit and 64-bit floating point values. Objects are instances of classes. They are created by the new operator. Each object is an instance of a unique class, which is itself an object. Class objects are automatically created whenever you refer the class; there is no need to use new. Each object "knows" what class it is an instance of. Pair p = new Pair(); Class c = p.getClass(); System.out.println(c); // prints "class Pair"Each object has a set of fields and methods, collectively called members. (Fields and methods correspond to data members and function members in C++). Like variables, each field can hold either a primitive value (a boolean, int, etc.) or a reference, which is either null or points to another object. When a new object is created, its fields are initialized to zero, null or false as appropriate, but a constructor (a method with the same name as the class) can supply different initial values (see below). By contrast, variables are not automatically initialized. It is a compile-time error to use a variable that has not been initialized. The compiler may complain if it's not "obvious" that a variable is initialized before use. You can always make it "obvious" by initializing the variable when it is declared: int i = 0;You'll probably miss reference parameters most in situations where you want a procedure to return more than one value. As a work-around you can return an object or array or pass in a pointer to an object or array. See Section 2.6.4 on page 62 of the Java book for more information. New objects are create // This is C++ code p = new Pair(); // ... q = p; // ... much later delete p; q -> x = 5; // oops!while deleting them too late (or not at all) can lead to garbage, also known as a storage leak. Each field or method of a class has an access, which is one of public, protected, private, or package. The first three of these are specified by preceding the field or method declaration by one of the words public, protected, or private. Package access is specified by omitting these words. public class Example { public int a; protected int b; private int c; int d; // has package access }It is a design flaw of Java that the default access is "package". For this course, all fields and methods must be declared with one of the words public, protected, or private. As a general rule only methods should be declared public; fields are normally protected or private. Private members can only be accessed from inside the bodies of methods (function members) of the class, not "from the outside." Thus if x is an instance of C, x.i is not legal, but i can be accessed from the body of x.f(). (protected access is discussed further below). The keyword static does not mean "unmoving" as it does in common English usage. Instead is means something like "class" or "unique". Ordinary members have one copy per instance, whereas a static member has only one copy, which is shared by all instances. Ordinary (non-static) fields are sometimes called "instance variables". In effect, a static member lives in the class object itself, rather than instances. public class C { public int x = 1; public static int y = 1; public void f(int n) { x += n; } public static int g() { return ++y; } } // ... elsewhere ... C p = new C(); C q = new C(); p.f(3); q.f(5); System.out.println(p.x); // prints 4 System.out.println(q.x); // prints 6 System.out.println(C.y); // prints 1 System.out.println(p.y); // means the same thing as C.y; prints 1 System.out.println(C.g());// prints 2 System.out.println(q.g());// prints 3 C.x; // invalid; which instance of x? C.f(); // invalid; which instance of f? Static members are often used instead of global variables and functions, which do not exist in Java. For example, Math.tan(x); // tan is a static method of class Math Math.PI; // a static "field" of class Math with value 3.14159... Integer.parseInt("123"); // converts a string of digits into a numberStarting with Java 1.5, the word static can also be used in an import statement to import all the static members of a class. import static java.lang.Math.*; ... double theta = tan(y / x); double area = PI * r * r;This feature is particularly handy for System.out. import static java.lang.System.*; ... out.println(p.x); // same as System.out.println(q.x), but shorterFrom now on, we will assume that every Java file starts with import static java.lang.System.*; The keyword final means that a field or variable may only be assigned a value once. It is often used in conjunction with static to defined named constants. public class Card { public int suit = CLUBS; // default public final static int CLUBS = 1; public final static int DIAMONDS = 2; public final static int HEARTS = 3; public final static int SPADES = 4; } // ... elsewhere ... Card c = new Card(); out.println("suit " + c.suit); c.suit = Card.SPADES; out.println("suit " + c.suit);Each Card has its own suit. The value CLUBS is shared by all instances of Card so it only needs to be stored once, but since it's final, it doesn't need to be stored at all. Java 1.5 introduced enums, which are the preferable way of declaring a variable that can only take on a specified set of distinct values. Using enums, the example becomes enum Suit { CLUBS, DIAMONDS, HEARTS, SPADES }; class Card { public Suit suit = Suit.CLUBS; // default } // ... elsewhere ... Card c = new Card(); out.println("suit " + c.suit); c.suit = Card.SPADES; out.println("suit " + c.suit);One advantage of this version is that it produces the user-friendly output suit CLUBS suit SPADESrather than suit 1 suit 4with no extra work on the part of the programmer. In Java, arrays are objects. Like all objects in Java, you can only point to them. Unlike a C++ array: a[0] ... a[a.length-1]. Once you create an array (using new), you can't change its size. If you need more space, you have to create a new (larger) array and copy over the elements (but see the List library classes below). int[] arrayOne; // a pointer to an array object; initially null int arrayTwo[]; // allowed for compatibility with C; don't use this! arrayOne = new int[10]; // now arrayOne points to an array object arrayOne[3] = 17; // accesses one of the slots in the array arrayOne = new int[5]; // assigns a different array to arrayOne // the old array is inaccessible (and so // is garbage-collected) out.println(arrayOne.length); // prints 5 int[] alias = arrayOne; // arrayOne and alias share the same array object // Careful! This could cause surprises alias[3] = 17; // Changes an element of the array pointed to by // alias, which is the same as arrayOne out.println(arrayOne[3]); // prints 17 all classes you define. This is great for debugging. String s = "hello"; String t = "world"; out.println(s + ", " + t); // prints "hello, world" out.println(s + "1234"); // "hello1234" out.println(s + (12*100 + 34)); // "hello1234" out.println(s + 12*100 + 34); // "hello120034" (why?) out.println("The value of x is " + x); // will work for any x out.println("System.out = " + System.out); // "System.out = java.io.PrintStream@80455198" String numbers = ""; for (int i=0; i<5; i++) { numbers += " " + i; // correct but slow }and more. You can't modify a string, but you can make a string variable point to a new string (as in numbers += " " + i;). The example above is not good way to build up a string a little at a time. Each iteration of numbers += " " + i; creates a new string and makes the old value of numbers into a garbage object, which requires time to garbage-collect. Try running this code public class Test { public static void main(String[] args) { String numbers = ""; for (int i = 0; i < 10000; i++) { numbers += " " + i; // correct but slow } out.println(numbers.length()); } }A better way to do this is with a StringBuffer, which is a sort of "updatable String". public class Test { public static void main(String[] args) { StringBuffer sb = new StringBuffer(); for (int i = 0; i < 10000; i++) { sb.append(" " + i); } String numbers = sb.toString(); out.println(numbers.length()); } } A constructor is method with the same name as the class. It does not have any return type, not even void. operator overloading is not supported.! } } Java supports two kinds of inheritance, which are sometimes called interface inheritance or sub-typing, and method inheritance. is Runnable if it has a method named run that is public1 them (via extends) or define them itself. class Words extends StringTokenizer implements Enumeration, Runnable { public void run() { for (;;) { String s = nextToken(); if (s == null) { return; }. Interfaces can also be used to declare variables. Enumeration e = new StringTokenizer("hello there");Enumeration is an interface, not a class, so it does not have any instances. However, class StringTokenizer is consistent with (implements) Enumeration, so e can point to an object of type StringTokenizer In this case, the variable e has type Enumeration, but it is pointing to an instance of class StringTokenizer. A cast in Java is a type name in parentheses preceding an expression. A cast can be applied to primitive types. double pi = Math.PI; int three = (int) pi; // throws away the fractionA cast can also be used to convert an object reference to a super class or subclass. For example, Words w = new Words("this is a test"); Object o = w.nextElement(); String s = (String) o;. If we were wrong about the type of o we would) { err.println("Oops: " + e); , as well as a call trace. WARNING Never write an empty catch clause. If you do, you will regret it. Maybe not today, but tomorrow and for the rest of your life. king of class. You can define and throw your own exceptions. class SytaxError extends Exception { int lineNumber; Sytax) { err.println(e); } } }Each function must declare in its header (with the keyword throws) all the exceptions that may be thrown by it or any function it calls. It doesn't have to declare exceptions it catches. Some exceptions, such as IndexOutOfBoundsException, are so common that Java makes an exception for them (sorry about that pun) and doesn't require that they be declared. This rule applies to RuntimeException and its subclasses. You shouldThe constructor for the built-in class Thread takes one argument, which is any object that has a method called run. This requirement is specified by requiring that command implement the Runnable interface described earlier. (More precisely, command must be an instance of a class that implements Runnable). The way a thread "runs" a command is simply by calling its run() method. It's as simple as that! In project 1, you are supposed to run each command in a separate thread. Thus you might declare something like this: class Command implements Runnable { String commandLine; Command(String commandLine) { this.commandLine = commandLine; } public void run() { // Do what commandLine says to do } }You can parse the command string either in the constructor or at the start of the run() method. The main program loop reads a command line, breaks it up into commands, runs all of the commands concurrently (each in a separate thread), and waits for them to all finish before issuing the next prompt. In outline, it may look like this. for (;;) { out.print("% "); out.flush(); String line = inputStream.readLine(); int numberOfCommands = // count how many commands(); } }This main loop is in the main() method of your main class. It is not necessary for that class to implement Runnable. Although you won't need it for project 1, the next project will require to to synchronize threads with each other. There are two reasons why you need to do this: to prevent threads from interfering2 and List queue = new ArrayList(); public synchronized void put(Object o) { queue.add(o); notify(); } public synchronized Object get() { while (queue.isEmpty()) { wait(); } return queue.remove(0); } }This class solves the so-call "producer-consumer" problem. (The class ArrayList and interface List are part of the java.util package.) print it: class Buffer { private List queue = new ArrayList(); public synchronized void put(Object o) { queue.add(o); notify(); } public synchronized Object get() { while (queue.isEmpty()) { try { wait(); } catch (InterruptedException e) { e.printStackTrace(); } } return queue.remove(0); } (queue.isEmpty()) rather than if (queue. Input/Output, as described in Chapter 20 of the Java book, is not as complicated as it looks. You can get pretty far just writing to System.out (which is of type PrintStream ) with methods print , println , and printf . The method print simply writes converts its argument to a String and writes it to the output stream. The method println is similar, but adds a newline so that following output starts on a new line. The method printf is new in Java 1.5. It expects a String as its first argument and zero or more additional arguments. Each '%' in the first argument indicates a request to print one of the other arguments. The details are spelled out by one or more characters following the '%'. For example, out.printf("pair(%d,%d)%n", pair.x, pair.y);produces exactly the same thing as out.println("pair(" + pair.x + "," + pair.y + ")");but is much easier to read, and to write. The characters %d are replaced by the result of converting the next argument to a decimal integer. Similarly, "%f" looks for a float or double, "%x" looks for an integer and prints it in hexadecimal, and "%s" looks for a string. Fancier formatting is supported; for example, "%6.2f" prints a float or double with exactly two digits following the decimal point, padding with leading spaces as necessary to make the result at least 6 character long. For input, you probably want to wrap the standard input System.in in a BufferedReader , which provides the handy method readLine() BufferedReader input = new BufferedReader(new InputStreamReader(System.in)); for(;;) { String line = input.readLine(); if (line == null) { break; } // do something with the next line } If you want to read from a file, rather than from the keyboard (standard input), you can use FileReader, probably wrapped in a BufferedReader. BufferedReader input = new BufferedReader(new FileReader("somefile")); for (;;) { String line = input.readLine(); if (line == null) { break; } // do something with the next line }Similarly, you can use new PrintWriter(new FileOutputStream("whatever")) to write to a file. The library of pre-defined classes has several other handy tools. See the online manual , particularly java.lang and java.util for more details. int i; Integer ii; ii = 3; // Same as ii = new Integer(3); i = ii; // Same as i = ii.intValue();These classes also serve as convenient places to define utility functions for manipulating value of the given types, often as static methods or defined constants. int i = Integer.MAX_VALUE; // 2147483648, the largest possible int int j =' A List is like an array, but it grows as necessary to allow you to add as many elements as you like. The elements can be any kind of Object, but they cannot be primitive values such as integers. When you take objects out of a List, you have to use a cast to recover the original type. Use the method add(Object o) to add an object to the end of the list and get(int i) to remove the ith element from the list. Use iterator() to get an Iterator for running through all the elements in order.3 List is an interface, not a class, so you cannot create a new list with new. Instead, you have to decide whether you want a LinkedList or an ArrayList. The two implementations have different preformance characteristics. List list = new ArrayList(); // an empty list for (int i = 0; i < 100; i++) { list.add(new Integer(i)); } // now it contains 100 Integer objects // print their squares for (int i = 0; i < 100; i++) { Integer member = (Integer) list.get(i); int n = member.intValue(); out.println(n*n); } // another way to do that for (Iterator i = list.iterator(); i.hasNext(); ) { int n = ((Integer) i.next()).intValue(); out.println(n*n); } list.set(5, "hello"); // like list[5] = "hello" Object o = list.get(3); // like o = list[3]; list.add(6, "world"); // set list[6] = "world" after first shifting // element list[7], list[8], ... to the right // to make room list.remove(3); // remove list[3] and shift list[4], ... to the // left to fill in the gapElements of a List must be objects, not values. That means you can put a String or an instance of a user-defined class into a Vector, but if you want to put an integer, floating-point number, or character into List, you have to wrap it: list.add(new Integer(47)); // or list.add(47), using Java 1.5 autoboxing sum += ((Integer) list.get(i)).intValue();The class ArrayList is implemented using an ordinary array that is generally only partially filled. As its name implies LinkedList is implemented as a doubly-linked list. Don't forget to import java.util.List; import java.util.ArrayList; or import java.util.*; . Lists and other similar classes are even easier to use with the introduction of generic types in Java 1.5. Instead of List l which declares l to be a list of Objects of unspecified type, use List List<Integer> list = new ArrayList<Integer>(); // an empty list for (int i = 0; i < 100; i++) { list.add(i); } // now it contains 100 Integer objects // print their squares for (Iterator<Integer> i = list.iterator(); i.hasNext(); ) { int n = i.next(); out.println(n*n); } // or even simpler for (int n : list) { out.println(n*n); } List<String> strList = new ArrayList<String>(); for (int i = 0; i < 100; i++) { strList.add("value " + i); } strList.set(5, "hello"); // like strList[5] = "hello" String s = strList.get(3); // like o = strList[3]; strList.add(6, "world"); // set strList[6] = "world" after first shifting // element strList[7], strList[8], ... to the // right to make room strList.remove(3); // remove strList[3] and shift strList[4], ... to // the left to fill in the gap The interface Map4 represents a table mapping keys to values. It is sort of like an array or List, except that the "subscripts" can be any objects, rather than non-negative integers. Since Map is an interface rather than a class you cannot create instances of it, but you can create instances of the class HashMap, which implements Map using a hash table or TreeMap which implements it as a binary search tree (a "red/black" tree). Map<String,Integer> table // a mapping from Strings to Integers = new HashMap<String,Integer>(); table.put("seven", new Integer(7)); // key is the String "seven"; // value is an Integer object table.put("four", 4); // similar, using autoboxing Object o = table.put("seven", 70); // binds "seven" to a different object // (a mistake?) and returns the // previous value int n = ((Integer)o).intValue(); out.printf("n = %d\n", n); // prints 7 n = table.put("seven", 7); // fixes the mistake out.printf("n = %d\n", n); // prints 70 out.println(table.containsKey("seven")); // true out.println(table.containsKey("twelve")); // false // print out the contents of the table for (String key : table.keySet()) { out.printf("%s -> %d\n", key, table.get(key)); } n = table.get("seven"); // get value bound to "seven" n = table.remove("seven"); // get binding and remove it out.println(table.containsKey("seven")); // false table.clear(); // remove all bindings out.println(table.containsKey("four")); // false out.println(table.get("four")); // nullSometimes, you only care whether a particular key is present, not what it's mapped to. You could always use the same object as a value (or use null), but it would be more efficient (and, more importantly, clearer) to use a Set. out.println("What are your favorite colors?"); BufferedReader input = new BufferedReader(new InputStreamReader(in)); Set<String> favorites = new HashSet<String>(); try { for (;;) { String color = input.readLine(); if (color == null) { break; } if (!favorites.add(color)) { out.println("you already told me that"); } } } catch (IOException e) { e.printStackTrace(); } int n = favorites.size(); if (n == 1) { out.printf("your favorite color is"); } else { out.printf("your %d favorite colors are:", n); } for (String s : favorites) { out.printf(" %s", s); } out.println(); forgottenThe second argument to the constructor is a String containing the characters that language itself (which is not a surprise, considering that the Java compiler is written in Java). See Chapters 16 and 17 of the Java book for information about other handy classes. 1All the members of an Interface are implicitly public. You can explicitly declare them to be public, but you don't have to, and you shouldn't. 2In particular, it won't necessarily be the one that has been sleeping the longest. 3Interface Iterator was introduced with Java 1.2. It is a somewhat more convenient version of the older interface Enumeration discussed earlier. 4Interfaces Map and Set were introduced with Java 1.2. Earlier versions of the API contained only Hashtable, which is similar to
http://pages.cs.wisc.edu/~solomon/cs537-old/last/java-tutorial.html
crawl-002
en
refinedweb
Abstract. Critics usually point to inefficient garbage collection (GC) as a reason why these types of applications cannot perform well if developed using Java technology. This paper challenges that accusation, and asserts that developers can improve performance significantly, and describes analytical and modeling strategies that will result in acceptable application performance.. The information obtained from the mathematical model enables developers to find the optimum operating conditions for an application, to discover how to determine the optimal size for young and old-generation heaps, and to use an advanced concurrent collector to all but eliminate old-generation collection pause times from an application. Table of Contents 1. Garbage Collection in Java Applications The Java programming language is inherently object-oriented [3], and includes automatic garbage collection. Garbage collection is the process of reclaiming memory taken up by unreferenced objects. Unreferenced objects are ones the application can no longer reach because all references to them have gone out of extent. Traditional languages like C and C++ do not have automatic garbage collection. Developers must both allocate and free memory manually, by using the malloc() and free() functions in C for instance, or the new and delete operators in C++. In Java applications, heap memory is allocated by the new operator, but developers need not free this memory explicitly. Instead, the runtime system routinely determines which objects still have valid references (live objects) and which are unreferenced (dead objects), and automatically frees up the memory allocated to the dead objects. This process is aptly called garbage collection. public class GarbageCollectionExample { public void showGarbageCollection() { String gcObject = new String("Hello, World\n"); // Step 3 return; // Step 4 } public static void main(String args[]) { GarbageCollectionExample gce = new GarbageCollectionExample(); // Step 1 gce.showGarbageCollection(); // Step 2 //... } } This code snippet shows how an object becomes unreferenced in a Java program. gce showGarbageCollection() gcObject showGarbageCollection() main() At this point, we no longer have a reference to the String we knew as gcObject. It is now an unreferenced, "dead" object, and can be collected. The object gce is still referenced in the main method, and is not collectable. String main 2. The Garbage Collector Note that the discussions of garbage collectors, the various GC algorithms, and modeling in the following sections focus on Sun's implementations of the Java Virtual Machine. The Java runtime system's garbage collector runs as a separate thread. As the application allocates more and more objects, it depletes the amount of heap memory available. When this drops below a threshold percentage, the garbage collection process begins. The garbage collector stops all other runtime threads. It marks each object as live or dead, and reclaims the space taken up by the dead objects. There are many different types of garbage collectors. Each is based on a different algorithm and exhibits different behavior. They range from simple reference-counting collectors to very advanced generational collectors. Both the algorithm chosen and the implementation can affect the behavior of the garbage collector. Here are some terms that refer to the implementation of a particular collector – note these are not mutually exclusive: A few of the traditional collectors are discussed below. 2.1 Copying Collector A copying collector employs two or more storage areas to create and destroy objects. If it uses only two storage areas, they are called "semi-spaces." One semi-space (the "from-space") is used to create objects, and once it is full, the live objects are copied to the second semi-space (the "to-space"). Memory is compacted at no cost because only live objects are copied, and they are stored linearly. The semi-spaces now switch roles: new objects are created in the second one until it is full, at which point live objects are copied to the first. The semi-spaces exchange the from-space and to-space roles again and again. Dead objects are not freed explicitly; they're simply overwritten by new objects. In a JVM, a copying collector is a stop-the-world collector. Nevertheless, it is extremely efficient because it traverses the object list and copies the objects in a single cycle, and thus simultaneously collects the garbage and compacts the heap. The time to collect the semi-space, the "pause duration," is directly proportional to the total size of live objects. 2.2 Mark and Compact Collector In a mark-and-compact (more briefly, "mark-compact") collector, objects are created in a contiguous space, the heap. Once the free space falls below a threshold, the collector begins a marking phase, in which it traverses all objects from the "roots," marking each as either live or dead. After the marking phase ends, the compacting phase begins, in which the collector compacts the heap by copying live objects to a new contiguous area. In a JVM, a mark-compact collector is a stop-the-world collector. It suspends all of the application threads until collection is complete and the memory is reorganized, and then restarts them. 2.3 Mark and Sweep Collector The marking phase of a mark-and-sweep ("mark-sweep") collector is the same as that of a mark-compact collector. When the marking phase is complete, the space each dead object occupies is added to a free list. Contiguous space occupied by dead objects is combined to make larger segments of free memory, but because a mark-sweep collector does not compact the heap, memory has a tendency to fragment over time. 2.4 Incremental Collector An incremental collector divides the heap into small fixed-size blocks and allocates the data among them. It runs only for brief periods of time, leaving more processor time available for the application's use. It collects the garbage in only one block at a time, using the train algorithm [7]. The train algorithm organizes the blocks into train-cars and trains. In each collection cycle, the collector checks the cars and trains. If the train to which a car belongs contains only garbage, then the GC collects the entire train. If a car has references from other cars, then the GC copies objects to the respective cars, and, if the destination cars are full, it creates new cars as needed. Because the incremental collector pauses the application threads for only brief periods of time, the net effect is near-pauseless application execution. 2.5 Generational Garbage Collection In most object-oriented languages, including the Java programming language, most objects have very short lifetimes, while a small percentage of them live much longer [6]. Of all newly allocated heap objects, 80-98 percent die within a few million instructions [6]. A large percentage of the surviving objects continue to survive for many collection cycles, however, and must be analyzed and copied at each cycle. Hence, the garbage collector spends most of its time analyzing and copying the same old objects repeatedly, needlessly, and expensively. To avoid this repeated copying, a generational GC divides the heap into multiple areas, called generations, and segregates objects into these areas by age. In younger generations objects die more quickly and the GC collects them more frequently, while in older generations objects are collected less often. Once an object survives a few collection cycles, the GC moves it to an older generation, where it will be analyzed and copied less often. This generational copying reduces GC costs. The GC may employ different collection algorithms in different generations. In younger generations, objects are ephemeral, and both space requirements and numbers of objects needing copying tend to be small, so a copying collector is extremely efficient. In older generations, objects tend to be more numerous and longer-lived, and copying costs make copying collectors (2.2) prohibitively expensive, hence mark-compact (2.1) or mark-sweep (2.3) collectors are preferred. 2.5.1 Generational Garbage Collection in Java Applications Generational collection was introduced to the JVM in v1.2. The heap was divided into two generations, a young generation that used two semi-spaces and a copying collector, and an old generation that used a mark-compact or mark-sweep collector. It also offered an advanced collector, the concurrent, incremental mark-and-sweep ("concurrent inc-mark-sweep") collector, (see 2.6 Advanced Collectors in Java). In the 1.3 and later JVMs [4], the heap is again divided into generations – by default, two generations. The young generation uses a copying collector, while the old generation uses a mark-compact collector. The 1.3 JVM also offers an incremental collector, introducing an optional intermediate generation between the young and old generations. Advanced collectors, like the concurrent inc-mark-sweep collector, are not available in 1.3 but are available in 1.2.2_07, and from the 1.4.1 JVM onwards. More information about the collectors available from Sun can be found at:. 2.6 Advanced Collectors in Java Applications The Java platform provides an advanced collector, the concurrent inc-mark-sweep collector. A concurrent collector [1] takes advantage of its dedicated thread to enable both garbage collection and object allocation/modification to happen at the same time. It uses external bitmaps [1] to manage older-generation memory, and "card tables" [1][2] to interact with the younger-generation collector. The bitmaps are used to scan the heap and mark live objects concurrently, in a "marking phase." Once the marking is complete, the unmarked objects are deallocated in a concurrent "sweeping phase." The collector does most of its work concurrently, suspending application execution only briefly. Note: If "the rate of creation" of objects is too high, and the concurrent collector is not able to keep up with the concurrent collection, it falls back to the traditional mark-sweep collector. 2.7 The Sun Exact VM The research for this paper was done using the 1.2.2_08 version of the JDK (Java 2 SDK, Standard Edition), on Solaris, with the advanced concurrent inc-mark-sweep collector. The Exact VM [15] is a high-performance Java virtual machine developed by Sun Microsystems. It features high-performance, exact memory management. The memory system is separated from the rest of the VM by a well-defined GC interface. This interface allows various garbage collectors to be "plugged in". It also features a framework to employ generational collectors. More information about VMs for Solaris can be obtained from. Collectors and Usage in JDK 1.2.2_08 JDK 1.2.2_08 supports generational collection. Two generations are available, young and old. The young generation employs a copying collector, while the old generation uses the mark-compact collector. The old-generation collector may be replaced with others. 2.7.1.1 JDK 1.2.2_08 Default Usage java program By default the young generation uses 2 MB for each semi-space, for a total of 4 MB. The old generation defaults to 2 MB of initial heap and 24 MB of maximum heap. These defaults may be overridden as described below. 2.7.1.2 Using the Xms and Xms Switches The old generation's default heap size can be overridden by using the -Xms and -Xmx switches to specify the initial and maximum sizes respectively: -Xms -Xmx java -Xms <initial size> -Xmx <maximum size> program For example: java -Xms64m -Xmx128m program 2.7.1.3 Using the genconfig Switch The genconfig switch can be used to specify the heap sizes for the young and old generations explicitly. It also allows an old-generation collector to be specified. (The genconfig option makes the -Xms and -Xmx options unnecessary, but if used with genconfig option -Xmx is still honored as a means to limit the old-generation heap size.) The syntax for genconfig is as follows: genconfig java -Xgenconfig :<initial young size>, <max young size>, semispaces[,promoteall] :<initial old size>, <max old size> [,<collector>] program java -Xgenconfig:8m,8m,semispaces:128m,128m program In this example :8m,8m,semispaces sets the initial and maximum sizes of the young generation's semi-spaces to 8 MB each, and :128m,128m specifies the initial and maximum sizes of the old heap to be 128 MB. :8m,8m,semispaces :128m,128m The young generation always uses a copying collector, but the default mark-compact collector for the old generation can be overridden, for example with an incremental mark-sweep collector, thus: java -Xgenconfig:8m,8m,semispaces:128m,128m, incmarksweep program By default, the young-generation copying collector copies live objects from one semi-space to the other (in a process called "tenuring"1) before copying them into the old heap (called "promoting"). The promoteall option overrides this default behavior: the young collector will immediately promote all live objects into the old heap during each collection. This variation can be useful for applications in which objects that live through a single young GC are likely to live through many collections. An example of its use: promoteall 1Tenuring is a well-known term, meaning "aging." It usually refers to objects being promoted to the older generation, but in our case it has been used to describe aging taking place in the younger generation. This was done to differentiate the two types of garbage collectors in the younger generation: a copy GC, which copies objects to the "to" semispace, and a promotion GC which copies objects to the older generation. java -Xgenconfig :8m,8m,semispace,promoteall :128m,128m,incmarksweep program 3. Problems with Java Garbage Collection Garbage collection makes memory management easier for application developers by eliminating the need to deallocate memory explicitly. This in turn eliminates the possibility of whole classes of errors, including memory leaks and attempts to release or use memory that has not been allocated. Freeing the developer from explicit heap handling increases productivity while making applications robust, but garbage collection comes at a cost. It makes application behavior non-deterministic by introducing latency [1]. It also affects throughput, as collection cycles slow end-to-end performance. To refine an earlier description: A generational garbage collector runs as a separate thread in the Java runtime system. The collector gets activated when the free threshold in either the young generation or the old generation drops below a certain percentage. The young generation's copying collector stops all application threads while it performs a collection. This stop-the-world behavior introduces pauses into an application run, and the duration of the pause is directly proportional to the size of the heap, and the number of live objects. The less sophisticated collectors for old generations, such as mark-compact and mark-sweep, are also stop-the-world collectors. The duration of the pause is proportional to the "occupied" size of the old heap. Because the old heap is typically much larger than the young heap, its pauses are longer but less frequent. The duration and frequency of these pauses is difficult to predict because they are affected by many factors, among them: The pauses garbage collection imposes introduce serial behavior into applications that might otherwise be neatly concurrent. The stop-the-world garbage collector in the JVM uses only a single CPU while performing GC, and it inevitably reduces an application's scalability in a multi-processor environment. Using one of the advanced collectors can mitigate this problem. An incremental collector, which introduces many, very short pauses, can reduce pause duration. The pauses' brevity makes the application seem to run pauselessly, but they may actually degrade overall application performance because each pause still stops-the-world, and, even though it decreases the duration, increasing the frequency usually results in a higher total cost. The advanced, mostly concurrent, inc-mark-sweep collector greatly reduces the serialization problem, by allowing the application threads to run concurrently with a collection cycle. The pauses introduced by the concurrent collector are very small because the majority of the work is done concurrently. While the concurrent GC is running, though, it makes use of CPU cycles and thus reduces application throughput somewhat –– especially that of compute-bound applications. Garbage collection in young generations remains one of the main obstacles to scalability. No matter which collector is chosen for the old generation, the young generation still uses a single-threaded, stop-the-world copying collector. A multi-threaded collector for the young generation is expected to mitigate this problem in the near future. 4. How GC Pauses Affect Scalability and Execution Efficiency Linear scalability of an application running on an SMP(Symmetrical Multi Processing)-based machine is directly limited by the serial parts of the application. The serial parts of a Java application are: The percentage of time spent in the serial portions determines the maximum scalability that can be achieved on a multi-processor machine and in turn the execution efficiency of an application. Because the young– and old-generation collectors all have at least some single-threaded stop-the-world behavior, GC will be a limiting factor to scalability even when the rest of the application can run completely in parallel. Using the concurrent inc-mark-sweep collector will help reduce this effect, but will not completely remove it. The scalability and execution efficiency of an application can be calculated using Amdahl's law. 4.1 Amdahl's Law & Efficiency calculations If S is the percentage of time spent (by one processor) on serial parts of the program and P is the percentage of time spent (by one processor) on parts of the program that could be done in parallel [10], then: Speedup = (S + P) / (S + P / N) -> (1) where N is the number of processors. This can be reduced to: Speedup = 1 / (F + (1 - F) / N) -> (2) where F is the percentage of time spent in the serial parts of the application and N is the number of processors. Speedup = 1 / F -> (3) when N is very large. So if F = 0 (i.e., no serial part), then speedup = N (the ideal value). If F = 1 (i.e., completely serial), then speedup = 1 no matter how many processors are used. The effect of the serial fraction F on the speedup produced for N = 10: 4.1.1 Scaled Speedup Assuming that the problem size is fixed, then Amdahl's Law can predict the speedup on a parallel machine: Speedup = (S + P) / (S + P / N) -> (4) Speedup = 1 / (F + (1 - F) / N) -> (5) More details on Amdahl's Law and scaled speedup can be obtained from [10][11][19][20] . 4.1.2 Efficiency Execution efficiency is defined as: E = Speedup / N -> (6) Execution efficiency translates directly to the CPU percentage used by an application. For a linear speedup this is 1, or 100%. The higher this number is the better, because it translates to higher application efficiency. 5. A SIP Server, the Benchmark Application This section provides background on the application that was used to do the research for this paper. It also supplies background information about why this research was performed. 5.1 Problem Overview Session Initiation protocol (SIP) [18] is a protocol defined by the Internet Engineering Task Force (IETF), used to set up a session between two users in real time. It has many applications, but the one focused on here is setting up a call between users. After a call setup is completed, the SIP portion is complete. The users are then free to pass real-time media (such as voice) between the two endpoints. Note that this portion, the call, does not involve SIP or the servers that are routing SIP call setups. If this protocol is still unfamiliar, it might help to think of it as akin to Hyper Text Transfer Protocol (HTTP) ) [21]. It is a similar, text-based protocol founded on the request/response paradigm. One of the key differences is that SIP supports unreliable transport protocols, such as UDP. When using UDP, SIP applications are responsible for ensuring that packets reach their destination. This is accomplished by retransmitting requests until a response is received. One problem that arises from the model of application-level reliability is retransmission storms. If an application does not respond quickly enough (within 500 ms by default in the specification), the request will be retransmitted. This retransmission continues until a response is received. Each retransmit makes more work for the receiving server. Hence, retransmissions can cause more retransmissions, and thrashing can ensue. A GC cycle that takes longer than 500 ms will cause retransmissions. One that takes several seconds will ensure many retransmissions. These, in conjunction with the new requests that arrive at the server during garbage collection, will make even more work for the server and it will fall further behind. Even absent the overburdening problems just described, other constraints make garbage collection pauses unacceptable. There are carrier grade requirements that state acceptable latencies from the moment an initiating side begins a call to the moment it receives a response from the terminating side. These are typically sub-second times, so a multiple-second pause in a SIP server for GC is unacceptable. 5.2 Call Setups "Call setup" [17] is a concept used throughout this paper. A call setup is simply the combination of an originating side stating that it wishes to initiate a session (a call in this case) and a terminating side responding that it is interested as well. In SIP, there is also the concept of an acknowledgement being sent back to the terminating side after it accepts. After a call setup is complete, the server must maintain call setup state for 32 seconds in order to handle properly any application-level retransmissions that might occur. This specification is the reason the value of 32 seconds is used as an active duration throughout the paper. 5.3 Improvements The original SIP server was able to achieve an execution efficiency of only about 65% on a four-way machine. By using the GC analyzer and making code changes appropriately, its developers boosted execution efficiency to about 90%. Perhaps more importantly, the server now has much more consistent and predictable behavior, and is much faster as well. Latencies have been reduced to the point that they are no longer an issue. Long gone are the days of the monolithic mark-compact collector taking more than 10 seconds to complete the collection of a 768-MB heap. Now maximum stop-the-world GC times are measured below 200 ms – perfect for a near-real-time application like a SIP server. 6. Modeling Application Behavior to Predict GC Pauses Modeling Java applications makes it possible for developers to remove unpredictability attributed to the frequency and duration of pauses for GC. The frequency and pause duration of GC are directly related to the following factors: Developers can construct a theoretical model to show application behavior for various call-setup rates, numbers of objects created, object sizes, object lifetimes, and heap sizes. A theoretical model reflects the extremities of the application behavior for the best conditions and the worst. Once developers know these extremities, they can model a real-life scenario that helps predict the maximum call-setup rate that can be achieved with acceptable pauses, and that shows what can happen when call-setup rates exceed the maximum and application performance deteriorates. 6.1 Best-Case Scenario Assumptions: = (semi-space size / total size of each call setup's objects) = 5 MB / 50 KB = 102.4 call setups / semi-space -> frequency = (call setups * call-setup rate) = 102.4 * 10 = 1,024 ms period = (call setups * time between call setups) = 102.4 * 10 = 1,024 ms = (seconds per hour * (GC pause in ms / 1000)) = 3600 * (50 / 1000) = 180 seconds, or 3 minutes = (Minutes in a hour) – (time lost to young generation collection) = 60 – 3 = 57 minutes. = (call setups per second) * (seconds per minute) * (number of valid minutes in a hour) = 100 * 60 * 57 = 342,000 call setups / hour on 1 CPU = (call setups per second) * (seconds per minute) * (number of minutes available to the application in an hour) = 100 * 60 * 57 = 342,000 call setups / hour on 1 CPU = 3 / 57 = 0.0526 Speedup = 1 / (F + ((1 -F) / N) <- From Amdahl's law Speedup = 1 / (0.0526 + (1 – 0.0526) / 4) Speedup = 1 / 0.289 = 3.45 Efficiency = Speedup / N Efficiency = (3.45 / 4) = 86.37% <- (342,000 * 0.8637) * 4 = 1,181,542 call setups / hour 6.2 Worst-Case Scenario From (#10), we should have received about 102.4 calls frequency = (cps * call-setup rate) = 102.4 * 10 = 1,024 ms period = (cps * time between call setups) = 102.4 * 10 = 1,024 ms The old heap should fill up in about: = (heap size) / (size of objects being promoted) = 512 MB / 5 MB = 536,870,912 / 5,242,880 = 102 promotions total pause duration = (number of promotions * pause duration) = 102 * 200 = 20,400 ms = 20.4 seconds time for promotions = (number of promotions * (frequency + pause duration)) = 102 * (1,024 + 200) = 124,848 ms = 2.08 minutes time for promotions = (number of promotions * (periodicity + pause duration)) = 102 * (1,024 + 200) = 124,848 ms = 2.08 minutes = ((total pause duration * 100) / (60 * (total time for promotions)) = 20.4 * 100 / (60 * 2.08) = 16.34% Speedup = 1 / (.1634 + (1 – .1634) / 4) = 2.673 Execution Efficiency = (2.673 / 4) * 100 = 66.93% = (#23) * (#26) = (2.08 * 0.6693) = 83.52 seconds 6.3 A Real Scenario with Each Call Setup Active for 32 Seconds (Calculated for Concurrent GC) From (10), 102.4 call setups = (call setups received * call-setup rate) = 102.4 * 10 = 1,024 ms = (call setups received * time between call setups) = 102.4 * 10 = 1,024 ms = 2.5 MB of live data <- assumption (30) = 2,621,440 Bytes = (active duration of call setup / frequency) = 32,000 / 1,024 = 31 promotions = (active duration of call setup / periodicity) = 32,000 / 1,024 = 31 promotions the first call setup will be released at the end of the 32,000 ms so in 32,000 ms the number of call setups that can be received: = (active duration of a call setup / callsetup rate) = 32,000 / 10 = 320 calls the first call setup will be released at the end of the 32,000 ms so in 32,000 ms the number of call setups that can be received: = (active duration of a call setup / time between call setups) = 32,000 / 10 = 3,200 calls = 2,621,440 Bytes <- from (32), = (#32) * (#33) = 2,621,440 * 31 = 81,264,640 or 79.36 MB = (32% * 512 MB) = 163.84 MB free = 512 MB – 163.84 MB = 348.16 MB used = (68% * 512 MB) = 348.16 MB used = (initial mark size in MB / promotion size in MB) * frequency of promotion = (348.16 MB / 2.5 MB) * 1,024 ms = 142,606 ms = (initial mark size in MB / promotion size in MB) * periodicity of promotion = (348.16 MB / 2.5 MB) * 1,024 ms = 142,606 ms = ((100% – 28%) * 512 MB) = 368.64 MB used = (remark size in MB / promotion size in MB) * frequency of promotion = (368.64 MB / 2.5 MB) * 1,024 = 150,995 ms = (remark size in MB / promotion size in MB) * periodicity of promotion = (368.64 MB / 2.5 MB) * 1,024 = 150,995 ms = ((#38) / (#35)) = 368.64 / 79.36 = 4.6 = (#33) * (#31) = 31 * 1,024 = 31,744 ms = 31,744 * 4.6 = 146,022 ms = (#39) = 150,995 ms = ((#42) / active duration of a call setup) - (adjust factor for current active-duration segment) = (150,995 / 32,000) – 1 = 3.72 Active-duration segments = ((#42) / active duration of a call setup) – (adjust factor for current active-duration segment) = (150,995 / 32,000) – 1 = 3.72 Active-duration segments = (#43) * (#35) = 3.72 * 79.36 = 295.22 MB 7. Modeling Real-time Application Behavior Using "verbose:gc" Logs The model above is theoretical and relies on a lot of assumptions; but developers can use the "verbose:gc" log from an actual application run to construct a real-world model. The model will allow one to visualize the runtime behavior of both the application and the garbage collector. The verbose:gc logs contain valuable information about garbage-collection times, the frequency of collections, application run times, number of objects created and destroyed, the rate of object creation, the total size of objects per call, and so on. This information can be analyzed on a time scale, and the behavior of the application can be depicted in graphs and tables that chart the different relationships among pause duration, frequency of pauses, and object creation rate, and suggest how these can affect application performance and scalability. Analysis of this information can enable developers to tune an application's performance, optimizing GC frequency and collection times by specifying the best heap sizes for a given call-setup rate (see 14. Sizing the Young and Old Generations' Heaps). 7.1 Java "verbose:gc" Logs Logs containing detailed GC information are generated when a Java application is run with the -verbose:gc flag turned on. For the 1.2.2_08 VM on Solaris, specifying this switch on the command line twice results in even more verbose logs – the level of output that the GC analyzer expects as input: -verbose:gc java -Xgenconfig:8m,8m,semispaces:512M,512M,incmarksweep -verbose:gc -verbose:gc program 7.2 Snapshot of a GC Log The snapshot below is of a verbose:gc log entry generated by the JDK 1.2.2_08 VM when the -verbose:gc switch was specified twice on the command line. Highlighted phrases with superscript numbers are footnoted below. verbose:gc java version "1.2.2" Solaris VM (build Solaris_JDK_1.2.2_08, native threads, sunwjit) semispaces csp0 : data = fac00000 : limit = fb000000: reserve = fb000000 csp1 : data = fb000000 : limit = fb400000: reserve = fb400000 Starting GC at Mon Jun 4 11:42:03. (from-space) space[0]: size=4096kb(100% overhead), free=0kb, maxAlloc=0kb. (to-space) space[1]: size=4096kb, free=3370kb, maxAlloc=3370kb. GC[0]7 in 19ms8: (48Mb, 87% free) -> (48Mb, 94% free)9 [application 179 ms10 requested 52 words] Total GC time: 410 ms11 ++ GC added 0 finalizers++ Pending finalizers = 012 8. Using the GC Analyzer Script to Analyze "verbose:gc" Logs The GC analyzer is a script that analyzes the verbose:gc log on a time scale (wall clock time), and builds a mathematical model. The script provides: Here is snapshot of its output: Application info: Application run time = 235,000 ms Memory = 272 MB Semispace = 16,384 = 187 Average number of objects promoted = 66,761 Average Objects size promoted = 4,436,045 bytes Periodicity of promoted GC = 1,107.3 ms Promotion time = 142.3 ms Percent of app. Time = 11.89% Young GC info: Total number of young GCs = 187 Average GC pause = 149.4 ms Copy/promotion time = 142.3 ms Overhead (suspend, restart threads) time = 7.1 ms Periodicity of GCs = 1,107.3 ms Percent of app. Time = 11.89% Old concurrent GC info: Total GC time = 1,710 ms Total number of GCs = 46 Average Pause = 37.2 ms Periodicity of GC = 5,108.7 ms Old mark-sweep GC info: Total GC time = 0 ms Total number of GCs = 0 Average pause = 0 ms Total old (concurrent + ms) GC info: Cost of concurrent GC = 68,000 ms Percent of app. Time = 33.11% Total GC time = 29,643 ms Average Pause = 127.2 ms Call control info: Call setups per second (CPS) = 90 Call rate, 1 call every = 11.11 ms Number of call setups / young GC = 99.7 Total call-setup throughput = 18,636.03 Total size of objects / call setup = 168,348 bytes Total size of short lived objects / call setup = 123,835 bytes Total size of long live objects / call setup = 44,513 bytes Total size of objects created per young gen GC = 16,777,216 bytes Average number of Objects promoted / call = 670 Execution efficiency of application: GC Serial portion of application = 12.61% Speedup = 2.90 Execution efficiency = 0.7255 CPU Utilization = 72.55% The detailed output is shown in Appendix A. The information generated by the analyzer can be used to tune application performance in the following ways: 9. Reducing Garbage Collection Times Java applications have two types of collections, young-generation and old-generation. 9.1 Young-Generation Collection Times A young-generation collection occurs when a semi-space is full. A copying collector is used to copy (or tenure) the live objects of one semi-space to the other semi-space. When that semi-space becomes full, the live objects could either be copied ("tenured") back to the original semi-space or promoted to the old generation, if they have already aged sufficiently. When developers inspect the GC logs they will find two types of young-generation GCs, called tenure GCs and promotion GCs. 9.1.1 Snippet from a Tenure tenuring (aging) of long-term objects. No promotions are done in a tenure GC. Live objects are copied back and forth between the two semi-spaces, allowing the short-term objects more time to die. 9.1.2 Snippet from a Promotion promotion of long-term objects. Live objects, which survive tenuring, are copied to the old-generation heap space. For a promotion GC, the line "promoted xxxx obj's" will report the number of objects, and the total size of the objects promoted. The snippet also shows evidence of tenuring. Objects still alive when young GC occurs have been tenured, while older objects, about 6%, have been promoted. The young-generation collection time is directly proportional to the size of the semi-spaces, the number of objects created, and the lifetimes of these objects. A collection, and hence a pause, occurs when a semi-space is full. Live objects are tenured, or promoted if the object has aged sufficiently. The cost of a tenure GC is the time to copy live objects to the second semi-space, while the cost of a promotion GC is the time to promote or copy live objects to the old generation. The cost of promoting is a little higher than that of tenuring, because the young collector has to interact with the old collector's heap. Decreasing the semi-space size increases the frequency of young-generation collection and decreases the collection duration, as less live data has accumulated. Similarly, increasing the semi-space size decreases the frequency of young-generation collection and increases the duration of each collection, as more live data accumulates. Note, though, that less frequent collection also provides more time for the short-term data to die. It also partly offsets system overhead like thread scheduling, done at every collection cycle. The net result is a slightly longer collection but at a decreased frequency. To tune the young generation collection time, developers must determine the right combination of frequency, collection time, and heap size. These three parameters are directly affected by the number of objects created per call setup and the lifetimes of these objects. So in addition to sizing the heap, the code may need to change, to reduce the number of objects created per call setup, and also to reduce the lifetimes of some objects. The following output from the GC analyzer helps with this task: See Appendix A for the detailed output. 9.1.3 Using This Information to Tune an Application 9.1.3.1 Using the promoteall Modifier Three categories of object lifetime are important to the decision whether to use the promoteall modifier: Temporary objects are never tenured or promoted because they die almost instantly. Long-term objects are always promoted, because they live through multiple young GCs. Only intermediate objects benefit from tenuring: After the first time they are tenured, they usually die and are collected in the next cycle, sparing the collector the cost of promoting them. If the application has few intermediate objects, using the promoteall modifier decreases the amount of time that the application spends in young GC. This saving comes from not copying long-term objects from one semi-space to the other before promoting them to the old heap anyway. Intermediate objects that are promoted take slightly more time to promote than to tenure. Also, the old-heap collector collects these objects, so it again uses slightly more time to collect them when they do die. However, the old-heap collection happens concurrently rather than in the young collector's stop-the-world fashion, so scalability should improve. 9.1.3.2 promoteall Modifier Usage java -Xgenconfig:8m,8m,semispace,promoteall :128m,128m,incmarksweep program 9.1.3.3 Snippet of a Promotion GC with the promoteall Modifier Starting GC at Fri Jun 8 15:16:16 2001; suspending threads. Gen[0](semi-spaces): size=12Mb(50% overhead), free=0kb, maxAlloc=0kb. space[0]: size=6144kb, free=0kb, maxAlloc=0kb. space[1]: size=6144kb(100% overhead), free=0kb, maxAlloc=0kb. Gen01(semi-spaces)-GC #8234 tenure-thresh=02 481ms3 0%->94% free, promoted 50805 obj's4/2731kb5 Gen[0](semi-spaces): size=12Mb(50% overhead), free=5760kb, maxAlloc=5760kb. space[0]: size=6144kb(100% overhead), free=0kb, maxAlloc=0kb. space[1]: size=6144kb, free=5760kb, maxAlloc=5760kb. GC[0] in 486 ms: (518Mb, 20% free) -> (518Mb, 20% free) [application 264 ms requested 36 words] Total GC time: 980190 ms The above snippet shows all live objects being promoted, so tenure-thresh=0. 9.1.3.4 Tracking Young GCs that Tenure Objects A review of the numbers in "9.1.1 Snippet from a Tenure GC" through "9.1.3.3 Snippet of a Promotion GC with the promoteall Modifier" will reveal how the tenure percentage changes from young GC to young GC. Without the promoteall modifier, this percentage reduces until there are more live objects in the semi-space than are allowed. At that time, these objects are promoted to the old heap, and the threshold is reset to its starting value. There are times when tenuring can help application performance. When intermediate objects are given enough time that they become temporary objects, they are collected before they are ever promoted to the old heap. It is for this reason that tenuring is used in the first place. This strategy works well for many classes of application. Determining whether an application will benefit from tenuring, or from promoting all live objects to the old heap, will help developers configure applications for optimal behavior. 9.1.3.5 Reducing the Size of Promoted or Tenured Objects Depending on the application's behavior, merely increasing the size of the semi-spaces may help by allowing long-term objects to become intermediate objects, and intermediate objects to become temporary objects (see 9.1.3.4 Tracking Young GCs that Tenure Objects). Increasing the semi-spaces may help, but it does increase the time that each young collection takes. The next step is code inspection. There are a few ways to reduce the time it takes to tenure or promote objects. The easiest is to find objects that are kept alive longer than necessary, and to stop referring to them as soon as they are no longer needed. Sometimes, this is as simple as setting the last reference to null as soon as the object is no longer needed. null Also look for objects that are unnecessary in the first place, and simply don't create them. If, as is typical, these are temporary objects, avoiding their creation will help indirectly, by reducing the frequency of young GCs. If they are long-term or intermediate objects, the benefit is more direct: they need not be tenured or promoted. Sometimes it is not as simple as just not creating an object. Developers may be able to reduce object sizes by making non-static variables static, or by combining two objects into one, thus reducing tenure or promotion time. 9.1.3.6 Disadvantages of Pooling Objects In general, object pooling is not usually a good idea. It is one of those concepts in programming that is too often used without really weighing the costs against the benefits. Object reuse can introduce errors into an application that are very hard to debug. Furthermore, retrieving an object from the pool and putting it back can be more expensive than creating the object in the first place, especially on a multi-processor machine, because such operations must be synchronized. Pooling could violate principles of object-oriented design (OOD). It could also turn out to be expensive to maintain, in exchange for benefits that diminish over time, as garbage collectors become more and more efficient and collection costs keep decreasing. The cost of object creation will also decrease as newer technologies go into VMs. Because pooled objects are in the old generation, there is an additional cost of re-use: the time required to clean the pooled objects so they are ready for re-use. Also, any temporary objects created to hold newer data are created in the younger generation with a reference to the older generation, adding to the cost of young GCs [1]. Additionally, the older collector must inspect pooled objects during every collection cycle, adding constant overhead to every collection. 9.1.3.6 Advantages of Pooling Objects The main benefit of pooling is that once these objects are created and promoted to the old generation, they are not created again, not only saving creation time but reducing the frequency of young-generation GCs. There are three specific sets of factors that may encourage adoption of a pooling strategy. The first is the obvious one: pooling of objects that take a long time to create or use up a lot of memory, such as threads and database connections. Another is the use of static objects. If no state must be maintained on a per-object basis, this is a clear win. A good general rule is to make as many of the application's objects and member variables static as possible. When it is not possible to make an object static, imposing a policy of one object per thread can work just fine. Such cases are good opportunities to take advantage of a static ThreadLocal variable. These are objects that enable each thread to know about its own instance of a particular class. ThreadLocal Note that in JDK 1.2 the ThreadLocal implementation has some synchronization and object creation associated with it. This flaw was resolved in v1.3 and beyond. Developers using Java 1.2 may find it beneficial to implement subclasses of Thread and ThreadLocal that work like the 1.3 versions of these classes. Thread 9.1.3.7 Seeking Statelessness An application can achieve its maximum performance if the objects are all or mostly short-lived, and die while still in the young generation. To achieve such short lifetimes, the application must be essentially stateless, or at least maintain state for only a brief period. Minimizing statefulness helps greatly because the young generation's copying collector is very efficient at collecting dead objects. It expends no effort, just skips over them and goes on to the next object. By contrast any object that must be tenured or promoted imposes the cost of copying to a new area of memory. 9.1.3.8 Watching for References that Span Heaps Developers should avoid creating ephemeral or temporary objects after objects are promoted to the old generation. If the application creates temporary objects that have references from or to objects in the old generation , then the cost of scanning them is greater than the cost of scanning objects whose only references are within the young generation. 9.1.3.9 Flattening Objects Keeping objects flat, avoiding complex object structures, spares the collector the task of traversing all the references in them. The fewer references there are, the faster the collector can work – but note that trade-offs between good OOD and high performance will arise. 9.1.3.10 Watching for Back References Look for unnecessary back references to the root object, as these can turn a temporary object into a long-lived one – and can lead to a memory leak, as the reference may never go away. 9.2 Old-Generation Collection Times While the young-generation collector is a stop-the-world collector, the old generation's concurrent inc-mark-sweep collector runs concurrently with the application. It does, however, stop the application briefly to take snapshots of the heap. These snapshots are taken in three stages [1]: 9.2.1 Initial Mark Phase When the free memory in the old heap drops below a certain percentage, usually between 40% and 32%, the old-generation collector starts an "initial mark" phase. The initial mark phase takes a snapshot of the heap, and starts traversing the object graph to distinguish the referenced and unreferenced objects. Marking, in a nutshell, is the process of tagging each object that can still be reached [1], so that it can be preserved when unmarked objects are collected. 9.2.1.1 Snippet of an Initial Mark Phase Starting GC at Fri Jun 8 15:12:23 2001; suspending threads. GC[1]1: initial mark2] in 25 ms3: (518Mb, 32% free4) -> (518Mb, 32% free) [application 2 ms requested 0 words] Total GC time: 877635 ms 9.2.2 Remark Phase Once the concurrent mark phase is complete, the "remark" phase begins. The collector re-walks parts of the the object graph that may have changed since they were scanned intially. 9.2.2.1 Snippet of a Remark Phase Starting GC at Fri Jun 8 15:12:33 2001; suspending threads. GC[11: remark2] in 68 ms3: (518Mb, 27% free)4 -> (518Mb, 27% free) [application 190 ms requested 0 words] Total GC time: 882274 ms 9.2.3 Resize Phase The "resize" GC phase follows the remark phase. In this phase the collector checks to see whether there is enough space. If there is not, it resizes the heap. 9.2.3.1 Snippet of Resize Phase Starting GC at Fri Jun 8 15:12:39 2001; suspending threads. GC[11: resize heap2] in 1 ms3: (518Mb, 78% free)4 -> (518Mb, 78% free) [application 225 ms requested 0 words] Total GC time: 884199 ms Warning: If the object creation rate is too high, the concurrent collector will be unable to keep up with the collection. The free threshold may drop below 5%, and the old generation, instead of using the concurrent inc-mark-sweep collector, will employ the traditional mark-sweep stop-the-world collector to collect the older heap. 9.2.4 Snippet of Traditional Mark-Sweep GC Starting GC at Fri Jun 8 15:16:54 2001; suspending threads. Gen[0](semi-spaces): size=12Mb(50% overhead), free=2kb, maxAlloc=2kb. space[0]: size=6144kb, free=2kb, maxAlloc=2kb. space[1]: size=6144kb(100% overhead), free=0kb, maxAlloc=0kb. Gen0(semi-spaces)-GC #8266 tenure-thresh=0 748ms 0%->94% free, promoted 50616 obj's/2960kb Gen[0](semi-spaces): size=12Mb(50% overhead), free=5760kb, maxAlloc=5760kb. space[0]: size=6144kb(100% overhead), free=0kb, maxAlloc=0kb. space[1]: size=6144kb, free=5760kb, maxAlloc=5760kb. Gen[1]1 (inc-mark-sweep2): size=512Mb, free=16kb, maxAlloc=16kb. Gen1(inc-mark-sweep)-GC #71 6123ms3 5%4->38%5 free Gen[1](inc-mark-sweep): size=512Mb, free=19kb, maxAlloc=19kb. GC[1] in 68766 ms: (518Mb, 5% free) -> (518Mb, 38% free) [application 321 ms requested 1028 words] Total GC time: 1010530 ms 9.2.5 Observations Regarding Old-Generation Collections 9.2.5.1 Undersized Old Heaps After each resize phase, the size of the heap collected should be equal to or greater than the size of an active duration. Depending on the heap size, more than one active-duration segment could be in the old generation. Based on the time of the active duration, and the frequency of the old-generation GC, all of the objects in the active duration could be collectable. The GC analyzer provides this ratio, (size of active durations / resize heap space). This value should be less than 1. A value greater than 1 indicates that, when an old generation GC takes place, there are active-duration segments still alive. With the ratio greater than 1, the call rate could be putting pressure on the older generation, forcing on it more frequent collections. If the ratio is too high, it can actually force the old generation to employ the traditional mark-sweep collector. Simply increasing the size of the old-generation heap can alleviate this pressure. Note, though, that too great an increase could lead to a smear effect (14.2.4 Locality of Reference Problems with an Oversized Heap). The GC analyzer is able to determine the pressure on the older collector, as is described in "11. Detection of Memory Leaks." 9.2.5.2 Checks and Balances At the end of an active duration, the amount of heap resized should be equal to the size of the objects promoted. If there are lingering references or if the timers are not very sensitive to the actual duration, then a memory leak may result. Memory leaks will fill up the old heap, degrading the old collector's performance and increasing the frequency of the old-generation GC. The GC analyzer reports such behavior. For a final verification, at the end of the application run, the total size of the old generation objects collected should be compared to the total size of the objects promoted. These should be approximately equal. See "11. Detection of Memory Leaks" for more details. 10. Reducing the Frequency of Young and Old GCs The frequency of the young GC is directly related to: 10.1 The Size of the Semi-space Increasing the semi-space reduces young-GC frequency but increases the time each collection takes, because the promotion size could be higher. On the other hand, a ratio of temporary to long-lived objects that is very high could decrease the collection time, as more intermediate objects could die before a tenure GC or a promotion GC occurs. Changing the semi-space size entails a trade-off between decreasing frequency and increasing collection times. See "9.1 Young-Generation Collection Times" and "9.2 Old-Generation Collection Times" for details on the effects of a change in the semi-space size on frequency and collection time. 10.2 The Call-Setup Rate and the Rate of Object Creation Frequency of GC is directly proportional to the call-setup rate and to the size of the objects created per call setup. The GC analyzer reports, per call setup, the total size of objects created, and of the temporary and long-term data created, and the average object size. This information can be used to reduce unnecessary object creation and optimize the application code. 10.3 Object Lifetimes The frequency of young GC is also dependent on the lifetimes of the objects created. Long lifetimes lead to unnecessary tenuring in the young generation. Using the promoteall modifier could alleviate this problem, by shifting to the old generation's collector the burden of collecting, sooner or later, all objects alive at the beginning of any young GC. For details on using the promoteall modifier, see "9.1.3.2 promoteall Modifier Usage". The lifetimes of objects have a big impact on application performance, as they affect GC frequency, collection times, and the young and old heaps. If objects never die, then at some time there will be no more heap space available and an out-of-memory exception will arise. Increased object lifetimes reduce old-generation memory available and lead to more frequent old-generation GCs. Object lifetimes also affect the young generation, because the promotion size increases as lifetimes increase, and in turn increases collection times. For maximum performance, references to objects should be nulled when they are no longer needed. 11. Detection of Memory Leaks The analyzer determines the number of active call setups currently alive in the old heap. It does so by taking into account the active duration of call setups and the call-setup rate. If the application is well behaved, the entire active-duration segment should be freed at the end of the live time of the call setups. The analyzer verifies this by calculating the size of the heap that is freed and correlating this information with the size of the active-duration segments that should have been freed. Some of the ratios are shown below; for example:) From the ratios above, the number of active durations that should ideally be freed would be: = (periodicity of old GC / active duration of each call setup) = ((#44) / (#40)) = (166,000 / 32,000) = 5.185 -> (45) From above, the number of active durations freed (#42) is 6.28, which indicates call setups are being freed, and old-heap free memory is in good shape. If the ratio (#42) / (#45) is very low, less than 1, then the old-heap sizing should be inspected, or object references are lingering even after the call setup that generated them is no longer active. Note: A developer should size the heap to hold at least two or more active durations, so that when an old GC takes place, at least 1 or more active durations are freed. From the model above, (#42 = #43 / #41), 6.3 active durations were dead at the end of 160,000 ms. Comparing this to the theoretical or ideal number that could be dead at the time of old GC (#45) reveals that the application is freeing its references properly. A good way to detect memory leaks is to compare the total size of the objects promoted to the total size of the objects reclaimed for the entire test. Any substantial difference signals a leak. Memory leaks lead to various problems: Ratios from the GC analyzer for memory verification: Total size of objects promoted: = 1,661,646 KB -> (51) Total size of objects fully GC collected in application run: = 1,307,750 KB -> (52) Total size of objects promoted: = 1,661,646 KB -> (51) Total size of objects fully GC collected in application run: = 1,307,750 KB -> (52) The difference should be one or more active-duration segments: = (#51) – (#52) = 1,661,646 – 1,307,750 = 353,896 KB -> (53) = (#53) / ((#42) / (#41) = (353,896 * 1,024) / (687,152,824 / 5.22) = 3.31 active durations 12. Finding the Optimal Call-Setup Rate by Using the Rates of Creation and Destruction Using the call-setup rate as input, the analyzer provides various call-setup rate calculations, which can be used to determine the following: If the application is creating too many objects, either temporary or long-lived, developers should devise a strategy to maintain an optimum ratio of temporary to long-lived objects. This tuning will help reduce the young GC collection time and the frequency of collection. Based on the size and number of objects promoted to the old generation, developers can adopt a strategy of sizing the heap to accommodate the desired maximum call-setup rate. A case for pooling could also be made, if the average number of the long-lived objects is large, and it takes many computational cycles to create the objects for each call setup. Call setups per second (CPS) = 200 Call-setup rate, 1 call every = 5 ms Call setup before young gen GC = 80.26 Size of objects / call setup = 78,390 bytes Size of short-lived objects / call = 61,307 bytes Size of long-lived objects / call setup = 17,084 bytes Ratio of young GC (short/long) objects / call setup = 3.59 Average number of objects promoted / call setup = 367 13. Learning the Actual Object Lifetimes It is very important to know the lifetime of an object, whether it is temporary or long-lived. Because objects are associated with call setups, knowing the real lifetime will make it possible to size both the young– and the old-generation heaps properly. If this could be exactly calculated, then the young-generation semi-space could be sized to destroy as many short-lived objects as possible, and promote only the necessarily long-lived objects. The promoteall modifier could be used if the ratio of temporary objects to long-lived objects is low. Knowing the real lifetime also helps developers size the old-generation heap optimally, to accommodate the desired maximum call-setup rate. It also helps detect memory leaks by identifying objects that linger beyond their necessary lifetime. The GC analyzer can currently detect active-duration segments but not individual object lifetimes. The JVMPI (Java Virtual Machine Profiler Interface) [22] can be used to report the lifetime of individual objects, and this information can then be integrated with the output of the GC analyzer to verify heap usage accurately. 14. Sizing the Young and Old Generations' Heaps Heap sizing is one of the most important aspects of tuning an application, because so many things are affected by the sizes of the young– and old-generation heaps: 14.1 Sizing the Young Generation The young generation's copying collector does not have fragmentation problems, but could have locality problems, depending on the size of the generation. The young heap must be sized to maximize the collection of short-term objects, which would reduce the number of long-term objects that are promoted or tenured. Frequency of the collection cycle is also determined by heap size, so the young heap must be sized for optimal collection frequency as well. Basically, finding the optimal size for the young generation is pretty easy. The rule of thumb is to make it about as large as possible, given acceptable collection times. There is a certain amount of fixed overhead with each collection, so their frequency should be minimized. 14.2 Sizing the Old Generation The concurrent inc-mark-sweep collector manages the old-generation heap so it needs to be carefully sized, taking into account the call-setup rate and active duration of call setups. An undersized heap will lead to fragmentation, increased collection cycles, and possibly a stop-the-world traditional mark-sweep collection. An oversized heap will lead to increased collection times and smear problems (14.2.4 Locality of Reference Problems with an Oversized Heap). If the heap does not fit into physical memory, it will magnify these problems. The GC analyzer helps developers find the optimal heap size by providing the following information: Reducing the percentage of time spent in young and old GC will increase application efficiency. 14.2.1 An Undersized Heap and a High Call-Setup Rate As the call-setup rate increases, the rate of promotion increases. The old heap fills faster, and the number of active-duration segments rises. Frequency of old-generation GCs increases because the older generation fills up faster. If the pressure on the old collector is heavy enough, it can force the old collector to revert to the traditional mark-sweep collector. If the old collector is still unable to keep up, the system can begin to thrash, and finally, throw an out-of-memory exception. Increasing the old-generation heap size can alleviate this pressure on the old collector. The heap should be enlarged to the point that a GC collects from one to three active-duration segments. Use the following information from the GC analyzer to determine whether the heap size is configured properly: 14.2.2 An Undersized Heap Causes Fragmentation An undersized heap can lead to fragmentation because the old-generation collector is a mark-sweep collector, and hence never compacts the heap. When the collector frees objects, it combines adjacent free spaces into a single larger space, so that in can be optimally allocated for future objects. Over time, an undersized heap may begin to fragment, making it difficult to find space to allocate an object. If space cannot be found, a collection takes place prematurely, and the concurrent collector cannot be used, because that object must be allocated immediately. The traditional mark-sweep collector runs, causing the application to pause during the collection. Again, simply increasing the heap size, without making it too large, could prevent these conditions from arising. If the heap does fragment, and the call-setup rate slows, then the fragmentation problem will resolve itself. As more and more objects die and are collected, space in the heap becomes available again. Take the extreme example, where there are no call setups for a period of time greater than an active duration. Now all call-setup objects will be collected at the next GC, and the heap will no longer be fragmented. 14.2.3 An Oversized Heap Increases Collection Times When using the concurrent inc-mark-sweep collector, the cost of each collection is directly proportional to the size of the heap. So an oversized heap is more costly to the application. The pause times associated with the collections will still not be that bad, but the time the collector spends collecting will increase. This does not pause the application, but does take valuable CPU time away from the application. To aid its interaction with the young collector, the old heap is managed using a card table, in which each card represents a subregion of the heap. Increasing the heap size increases the number of cards and the total size of the data structures associated with managing the free lists in the old collector. An increased heap size and a high call rate will add to the pressure on the old collector, forcing it to search more data structures, and thus increasing collection times. 14.2.4 Locality of Reference Problems with an Oversized Heap Another problem with an oversized heap is "locality of reference." This is related to the operating system, which manages memory. If the heap is larger than physical memory, parts of the heap will reside in virtual memory. Because the concurrent inc-mark-sweep collector looks at the heap as one contiguous chunk of memory, the objects allocated and deallocated could reside anywhere in the heap. That some objects will be in virtual memory leads to paging, translation-lookahead-buffer (TLB) misses, and cache-memory misses. Some of these problems are solved by reducing the heap size so that it fits entirely in physical memory, but that still does not eliminate the TLB misses or cache-memory misses, because object references in a large heap may still be very far away from each other. This problem with fragmentation and locality of reference is called a "spread," or "smear," problem. Optimizing heap sizes will help avoid this problem. 14.3 Sizing the Heap for Peak and Burst Configuration Developers should test the application under expected sustained peak and burst configuration loads, to discover whether collection times and frequency are acceptable at maximum loads. The heap size should be set to enable maximum throughput under these most demanding conditions. Far better to discover the application's limitations before it encounters peak and burst loads during actual operation. 15. Determining the Scalability and Execution Efficiency of a Java Application As was discussed in (4. How GC Pauses Affect Scalability and Execution Efficiency), the serial parts of an application greatly reduce its scalability on a multi-processor machine. One of the principal serial components of a Java application is its garbage collection. The concurrent inc-mark-sweep collection greatly reduces the serial nature of old-generation collection, but young-generation collection remains serial. The GC analyzer calculates the effects of young- and old-generation GCs. It uses this information to determine the execution efficiency of the application. This, in turn, can be used to reduce the serial percentage and increase scalability. Output from the GC analyzer also helps developers evaluate other solutions, such as running multiple JVMs or using the concurrent collector. 15.1 GC Analyzer Snippet Showing the Cost of Concurrent GC and Execution Efficiency Total old (concurrent + stop-the-world) GC info: Cost of concurrent GC = 68,000 ms Percent of application time = 33.11% Execution efficiency of the application: GC Serial portion of application = 12.61% Speedup = 2.90 Execution efficiency = 0.7255 CPU Utilization = 72.55% 16. Other Ways to Improve Performance Other means to improve application performance include using the Solaris RT (real-time) scheduling class or using the alternate thread library (/usr/lib/lwp), which is the default thread library on Solaris 9. Experimenting with the JVM options that are available, and re-testing with various combinations until an optimal configuration is found, is one way to change application behavior and get more performance without making any code changes. 17. On the Horizon JDK 1.4 does not include the advanced concurrent inc-mark-sweep collector. This collector and perhaps more advanced garbage-collection features should become available from JDK 1.4.1 onwards. The performance-tuning techniques discussed in this paper are applicable across JDK 1.3 and JDK 1.4. The behavior of the concurrent collector in JDK 1.4.1 should be similar to that seen in JDK 1.2.2_08. The verbose:gc log format will be different, so the model needs to be constructed using the new information. 18. Conclusion Insight into how an application behaves is critical to optimizing its performance. A Java application's performance can be tuned and improved by a variety of means related to garbage collection. GC times can be reduced by employing the concurrent collector, and by using properly sized young- and old-generation heaps. Garbage collection times can severely limit the scalability of a Java application. The concurrent inc-mark-sweep collector can be used improve scalability – but the constraints of the young generation's stop-the-world copying collector remain. In the future, parallel young-generation collectors may be available to ameliorate this problem. For now, running multiple JVMs can help take advantage of all the CPU power available on the machine. Using the GC analyzer will enable developers to size the young– and old-generation heaps optimally. It will also point out what types of optimization might most fully enhance an application's performance. By analyzing, optimizing, and re-analyzing, developers can see which variations improve application performance. This is a continuous, iterative process that should eventually yield optimal performance from each application. Using advanced garbage collectors can greatly enhance application behavior. It can even improve scalability on multiprocessor machines. 19. Acknowledgments We would like to thank the Java VM, GC team, (Y.S. Ramakrishna, Ross Knippel), and Sun Labs (Steve Heller) for helping us with the internals of the concurrent GC collector. This work and the analysis would not have been possible without significant help from them. We would also like to thank our colleagues in Market Development Engineering (IPE) at Sun. From dynamicsoft, we would like to thank Jon Schlegel, Allan Andersen and the Java User Agent (UA) engineering team for contributing to this effort. Finally, we would like to thank Brian Christeson for polishing this paper with some sharp proofing and edits. 20. References Appendix A A1. GC Analyzer Usage Usage: gc_analyze.pl [-d] <gclogfile> <CPS> <active_call_duration> <CPUs> <application_run_time_in_ms> A2. GC Analyzer Output with –d Option Running the GC analyzer with the –d option generates detailed output, which includes a summary. The detailed output provides in-depth insight into application behavior. solaris% gc_analyze.pl -d logs/data/re_gc_fi_200_4p_768_RT.log 200 32000 4 Processing logs/data/re_gc_fi_200_4p_768_RT.log ... Call rate = 200 cps ... Active call setup duration = 32,000 ms Number of CPUs = 4 ---- GC Analyzer Summary: logs/data/re_gc_fi_200_4p_768_RT.log ---- Application info: Application run time = 498,000 ms Memory = 774 MB Semispace = 6,144 = 1,243 Average number of Objects promoted = 29,529 Average objects size promoted = 1,377,229 bytes Periodicity of promoted GC = 325.46 ms Promotion time = 74.14 ms Percent of app. time = 18.77% Young GC info: Total number of young GCs = 1,243 Average GC pause = 75.18 ms Copy/promotion time = 74.14 ms Overhead (suspend, restart threads) time = 1.04 ms Periodicity of GCs = 325.46 ms Percent of app. time = 18.77% Old concurrent GC info: Total GC time = 286.00 ms Total number of GCs = 9 Average pause = 31.78 ms Periodicity of GC = 55,333.33 ms Old traditional mark-sweep GC info: Total GC time = 0 ms Total number of GCs = 0 Average pause = 0.00 ms Total old (concurrent + ms) GC info: Cost of concurrent GC = 36,000.00 ms Percent of app. time = 8.91% Total (young and old) GC info: Total GC time = 93,736.00 ms Average pause = 74.87 ms Call control info: Call setups per second (CPS) = 200 Call rate, 1 call every = 5 ms Number# call setups / young GC = 65 Total call throughput = 80,910.00 Total size of objects / call setup = 96,654 bytes Total size of short lived objects / call setup = 75,496 bytes Total size of long live objects / call setup = 21,158 bytes Total size of objects created per young gen GC = 6,291,456 bytes Average number# of Objects promoted / call = 453 Execution efficiency of application: GC Serial portion of application = 18.82% Speedup = 2.56 Execution efficiency = 0.64 CPU Utilization = 63.91% --- GC Analyzer End Summary ---------------- #--- Detailed and possibly confusing calculations; dig into this for more info about what is happening above ---- ---- GC Log stats ... Totals GC0: GCs=#1341, #young_tenure_GCs=0, #young_promoted_GCs=1243 Tenure avgs: thresh=0, time=0, free=0 Promoted avgs: thresh=0, time=74.14, free=100, objects=29529, size=1344.95 Promoted totals: size_total=1671774 Totals GC1: GCs=9, #initmark_GCs=3, #remark_GCs=3 #resize_GCs=3 Init-Mark avgs: time=17, totalmem=774, %=54, app_time=4.33 Remark avgs: time=78, totalmem=774, %=29.66, app_time=208 Resize avgs: time=0.33, totalmem=774, %=84.66, app_time=143.33 Totals ms GC1: GCs=0 ms avgs: time=0, %=0, app_time=0 ---- Young generation calcs ... Average young gen dead objects size / GC = 4,914,226.25 bytes Average young gen live objects size / GC cycle = 1,377,229.74 bytes Ratio of short lived / long lived for young GC = 3.56 Average young gen object size promoted = 1,377,229.74 bytes Average number# of Objects promoted = 29,529.49 Total object promoted times = 93,450 ms Average object promoted times = 74.14 ms Total object promoted GCs = 1243 Periodicity of object promoted GCs = 325.46 ms Total tenure times = 0 ms Total tenure GCs = 0 Average tenure GC time = 0 ms Periodicity of tenure GCs = 0 ms Total number# of young GCs = 1243 Total time of young GC = 93,450 ms Average young GC pause = 75.18 ms Periodicity of young GCs = 325.46 ms --- Old generation calcs ... Total concurrent old gen times = 286 ms Total number# of old gen GCs = 9 Total number# of old gen pauses with 0 ms = 3 Total number# of old gen GCs = 9 Total old gen GCs = 3 cycles -> 1 full GC = 3 Average old gen pauses = 31.77 ms Actual average old gen pauses = 31.77 ms Periodicity of old gen GC = 55,333.33 ms Actual Periodicity of old gen GC = 55,333.33 --- Traditional MS calcs ... Total number# mark sweep old GCs = 0 Total mark sweep old gen time = 0 ms Average mark sweep pauses = 0 ms Average free threshold = 0% Total mark sweep old gen application time = 0 ms Average mark sweep apps time = 0 ms ---- GC as a whole ... Total GC time = 93,736 Average GC pause = 74.86 Actual average GC pause = 74.86 --- Memory or Heap calcs ... Total memory = 774 MB Resize phase thresh = 84.66% Remark phase thresh = 29.66% Initial Mark phase GC thresh (init mark)= 54% Live objects per old GC = 118.68 MB Dead objects per old GC = 425.70 MB Ratio of (short/long) lived objects per old GC = 3.58 --- Memory leak verification ... Total size of objects promoted = 1,671,774 KB Total size of objects full GC collected throughout app. run = 1,307,750.40 KB --- Active duration calcs ... Active duration of each call = 32,000 ms Number# number of calls in active duration = 6,400 Number# of promotions in active duration = 98 Long-lived objects (promoted objects) / active duration = 135,411,421.16 bytes Short-lived objects (tenured or not promoted) / active duration = 483,174,548 bytes Total objects created / active duration = 618,585,969.23 bytes Percent% long-lived in active duration = 21.89% Percent% short-lived in active duration = 78.10% Number# of active durations freed by old GC = 5.07 Ratio of live to freed data = 0.83 Average resized memory size = 687,152,824.32 Time when init GC might take place = 88,225.34 ms Time when remark phase might take place = 134,895.28 ms Periodicity of initial old GC = 166,000 ms Periodicity of old GC = 162,385.78 ms Periodicity of resize phase = 162,385.78 ms --- Application run times calcs ... Total application run times during young GC = 403,519 ms Total application run times during old GC = 1,067 ms Total application run time = 404,586 ms Calculated or specified app run time = 498,000 ms Ratio of young (gc_time / gc_app_time) = 0.23 Ratio of young (gc_time / app_run_time) = 0.18 Ratio of old (gc_time / gc_app_time) = 0.26 Ratio of old (gc_time / app_run_time) = 0.00 Ratio of total (gc_time / gc_app_time) = 0.23 Ratio of total (gc_time / app_run_time) = 0.18 A2. GC Analyzer Download GC Analyzer Download If you have other comments or ideas for future technical tips, please type them here: Have a question about Java programming? Use Java Online Support.
http://developers.sun.com/mobility/midp/articles/garbage/
crawl-002
en
refinedweb
This question seems related, but is sadly unanswered and also seems to be Ionic+Angular: How to make multiple side menus working in a mobile browser My app uses a navigation menu that allows switching between different pages. And depending on the page there’s also a context menu. I can’t figure out how to make these two IonMenus coexist. The goal is to have two IonMenuButtons in the page header. One on the left, for the navigation menu. The other on the right for the context menu. To create a simple example, I set up a blank project with ionic start and added [two of the example menus from the docs page][1] next to the router. Now <App/> looks like this: const App: React.FC = () => ( <IonApp> {/* */} <IonMenu side="start" menuId="first" id='first' contentId="router-outlet"> <IonHeader> <IonToolbar color="primary"> <IonTitle>Start Menu</IonTitle> </IonToolbar> </IonHeader> <IonContent> <IonList> <IonItem>Menu Item</IonItem> </IonList> </IonContent> </IonMenu> {/* */} <IonMenu side="start" menuId="custom" id="custom" contentId="router-outlet"> <IonHeader> <IonToolbar color="tertiary"> <IonTitle>Custom Menu</IonTitle> </IonToolbar> </IonHeader> <IonContent> <IonList> <IonItem>Menu Item</IonItem> </IonList> </IonContent> </IonMenu> {/* */} <IonReactRouter> <IonRouterOutlet id="router-outlet"> <Route path="/home" component={Home} exact={true} /> <Route exact} /> </IonRouterOutlet> </IonReactRouter> </IonApp> ); As you can see, the router has only a single route. In the header of that page, I would like to place two IonMenuButtons. Since that doesn’t work, I’m replacing one of them with a simple button: const Home: React.FC = () => { return ( <IonPage> <IonHeader> <IonToolbar> <IonButtons slot="start"> {/* <IonMenuButton menu="custom" autoHide={false} /> */} <IonButton onClick={() => { (document.getElementById("custom") as any).open(); }} > <IonIcon slot="icon-only" icon={star} /> </IonButton> </IonButtons> <IonTitle>Double menu</IonTitle> <IonButtons slot="end"> <IonMenuButton menu="first" autoHide={false} /> </IonButtons> </IonToolbar> </IonHeader> <IonContent></IonContent> </IonPage> ); }; Now, this code has a variety of problems: - it is not possible to have two IonMenuButtonsin the same header (in Home.tsx). Only one of them is rendered, the second one disappears. - it is not possible to have two Menus, only the second menu becomes available. In App.tsx I’ve placed the Menuwith menuID firstbefore the Menuwith menuID custom. Only this second menu with id customcan be opened. Both the IonMenuButton and my hacky workaround button only work when pointing to custom. Switching the order in App.tsx makes only firstwork. Link to SO question:
https://forum.ionicframework.com/t/allow-multiple-menus/189934
CC-MAIN-2020-24
en
refinedweb
NOTE: Conversational context is currently only available with the Adapt Intent Parser, and is not yet available for Padatious How tall is John Cleese? "John Cleese is 196 centimeters" Where's he from? "He's from England" Context is added manually by the Skill creator using either the self.set_context() method or the @adds_context() decorator. Consider the following intent handlers: @intent_handler(IntentBuilder().require('PythonPerson').require('Length'))def handle_length(self, message):python = message.data.get('PythonPerson')self.speak('{} is {} cm tall'.format(python, length_dict[python]))@intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))def handle_from(self, message):python = message.data.get('PythonPerson')self.speak('{} is from {}'.format(python, from_dict[python])) To interact with the above handlers the user would need to say User: How tall is John Cleese?Mycroft: John Cleese is 196 centimetersUser: Where is John Cleese from?Mycroft: He's from England To get a more natural response the functions can be changed to let Mycroft know which PythonPerson we're talking about by using the self.set_context() method to give context: @intent_handler(IntentBuilder().require('PythonPerson').require('Length'))def handle_length(self, message):# PythonPerson can be any of the Monty Python memberspython = message.data.get('PythonPerson')self.speak('{} is {} cm tall'.format(python, length_dict[python]))self.set_context('PythonPerson', python)@intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))def handle_from(self, message):# PythonPerson can be any of the Monty Python memberspython = message.data.get('PythonPerson')self.speak('He is from {}'.format(from_dict[python]))self.set_context('PythonPerson', python) When either of the methods are called the PythonPerson keyword is added to Mycroft's context, which means that if there is a match with Length but PythonPerson is missing Mycroft will assume the last mention of that keyword. The interaction can now become the one described at the top of the page. User: How tall is John Cleese? Mycroft detects the Length keyword and the PythonPerson keyword Mycroft: 196 centimeters John Cleese is added to the current context User: Where's he from? Mycroft detects the WhereFrom keyword but not any PythonPerson keyword. The Context Manager is activated and returns the latest entry of PythonPerson which is John Cleese Mycroft: He's from England The context isn't limited by the keywords provided by the current Skill. For example @intent_handler(IntentBuilder().require(PythonPerson).require(WhereFrom))def handle_from(self, message):# PythonPerson can be any of the Monty Python memberspython = message.data.get('PythonPerson')self.speak('He is from {}'.format(from_dict[python]))self.set_context('PythonPerson', python)self.set_context('Location', from_dict[python]) Enables conversations with other Skills as well. User: Where is John Cleese from?Mycroft: He's from EnglandUser: What's the weather like over there?Mycroft: Raining and 14 degrees... To make sure certain Intents can't be triggered unless some previous stage in aconversation has occured. Context can be used to create "bubbles" of available intent handlers. User: Hey Mycroft, bring me some TeaMycroft: Of course, would you like Milk with that?User: NoMycroft: How about some Honey?User: All right thenMycroft: Here you go, here's your Tea with Honey from mycroft.skills.context import adds_context, removes_contextclass TeaSkill(MycroftSkill):@intent_handler(IntentBuilder('TeaIntent').require("TeaKeyword"))@adds_context('MilkContext')def handle_tea_intent(self, message):self.milk = Falseself.speak('Of course, would you like Milk with that?',expect_response=True)@intent_handler(IntentBuilder('NoMilkIntent').require("NoKeyword").require('MilkContext').build())@removes_context('MilkContext')@adds_context('HoneyContext')def handle_no_milk_intent(self, message):self.speak('all right, any Honey?', expect_response=True)@intent_handler(IntentBuilder('YesMilkIntent').require("YesKeyword").require('MilkContext').build())@removes_context('MilkContext')@adds_context('HoneyContext')def handle_yes_milk_intent(self, message):self.milk = Trueself.speak('What about Honey?', expect_response=True)@intent_handler(IntentBuilder('NoHoneyIntent').require("NoKeyword").require('HoneyContext').build())@removes_context('HoneyContext')def handle_no_honey_intent(self, message):if self.milk:self.speak('Heres your Tea with a dash of Milk')else:self.speak('Heres your Tea, straight up')@intent_handler(IntentBuilder('YesHoneyIntent').require("YesKeyword").require('HoneyContext').build())@removes_context('HoneyContext')def handle_yes_honey_intent(self, message):if self.milk:self.speak('Heres your Tea with Milk and Honey')else:self.speak('Heres your Tea with Honey') When starting up only the TeaIntent will be available. When that has been triggered and MilkContext is added the MilkYesIntent and MilkNoIntent are available since the MilkContext is set. when a yes or no is received the MilkContext is removed and can't be accessed. In it's place the HoneyContext is added making the YesHoneyIntent and NoHoneyIntent available. As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree.
https://mycroft-ai.gitbook.io/docs/skill-development/user-interaction/conversational-context
CC-MAIN-2020-24
en
refinedweb
Bindings to the parma polyhedra library, allowing to use double description from Python Project description These are Python bindings to the Parma Polyhedra Library. They were extracted from the sagemath project, in order to be used in non-sage projects. This is GPL-licensed, as is Sagemath. To build it you need to have both the ppl and gmp libraries installed in a place where distutils can find it. Then, python setup.py build && python setup.py install If you have trouble, try adding the desired paths to library_dirs in setup.py as a keyword argument to the Extension constructor. To use it, simply import the module, create a matrix of Fractions or integers, and compute the double description ! from pyparma import Polyhedron import numpy as np from fractions import Fraction fractionize = np.vectorize(lambda x: Fraction(str(x))) A = fractionize(np.random.rand(50,3)) poly = Polyhedron(hrep=A) print poly.hrep() Both H-representation and V-representation follow the CDD format i.e.: - H_rep = [b | A] where the polyhedron is defined by b + A x >= 0 - V_rep = [t | V] where V are the stacked vertices (Horizontal vectors) and t is the type: 1 for points, 0 for rays/lines. To run the tests, simply run: nosetests From the top-level directory. To run the tests, you need to have the CDD library installed. I assume that you installed the version that comes with the pycddlib bindings. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pyparma/
CC-MAIN-2020-24
en
refinedweb
Hi There, I am trying to make use of the Textblob package within a Dataiku recipe. More specifically I'm trying to create a python recipe which translates a column "Description" from Russian to English using this package. I'm basing myself on the script which I found here in the context of a Kaggle competition: I wanted to have a try to to see how I could incorporate this into a Dataiku Recipe (I took out the references to the progres bar part, which I don't need here). -------------------------------------------------------------- My input is "translate_2" which consists out of two columns -"ID": Integers -"Description": Russian words with a few missings My output is "output" ---------------------------------------------------------------------- I have reworked the code into the result below to integrate it into Dataiku: # -*- coding: utf-8 -*- import dataiku import pandas as pd, numpy as np from dataiku import pandasutils as pdu import sys import textblob # Read recipe inputs train_Raw_filtered = dataiku.Dataset("translate_2") x = train_Raw_filtered.get_dataframe() #Takes data frame as input, then searches and fills missing description with недостающий (russian for "missing") def desc_missing(x): if x['Description'].isnull().sum()>0: x['Description'].fillna("недостающий",inplace=True) return x else: return x x=desc_missing(x) #Translate def translate(x): try: return textblob.TextBlob(x).translate(to="en") except: return x x=translate(x) #Map to new column def map_translate(x): x['en_desc']=x['Description'] return x x=map_translate(x) # Write recipe outputs to dataiku train_Raw_Translated = dataiku.Dataset("output") train_Raw_Translated.write_with_schema(x) The code runs without error. It does impute the "missing" value, but I do not seem to succeed to write the actual translation into the Dataiku recipe output. It just inherits the original values: When I take a look at the logs I find this line which I don't know how to interpret at this point: Bottom line: Any help would be appreciated. Thanks a million. Kind Regards, Tim
https://community.dataiku.com/t5/Using-Dataiku-DSS/Translating-column-Russian-to-English-with-Textblob-using-Python/m-p/2483/translating-column-russian-english-textblob-python-dataiku
CC-MAIN-2020-24
en
refinedweb
I'm working on my first WPF MVVM application. I created a database and wrote queries to fetch album names and song names. Now I want to fill a list in my View with the album names and a second list with corresponding songs. I'm new to c# and WPF. I'd like to know how a view model would like for my controller would look like. My controller: public class BandManagerController { private bandkramEntities _context; public BandManagerController() { _context = new bandkramEntities(); } public List<AlbumData> GetAlbumList() { return _context.albums .Select(a => new AlbumData { AlbumID = a.AlbumID, AlbumName = a.AlbumName, }) .ToList(); } public List<SongData> GetSongList(int albumID) { return _context.songs .Where(s => s.AlbumID == albumID) .Select(s => new SongData { SongID = s.SongID, SongName = s.SongName }) .ToList(); } } I created a helper class with the NotifyOfPropertyChange class and a song and album data class: AlbumData.cs public class AlbumData { public string AlbumName { get; set; } public int AlbumID { get; set; } } SongData.cs public class SongData { public string SongName { get; set; } public int SongID { get; set; } } For a better overview I want to split my Viewmodel into 4 main parts. public class SongViewModel : NotifyOfPropertyChange { public SongViewModel() { } public string SongName { get; set; } public int SongID { get; set; } } public class AlbumViewModel : NotifyOfPropertyChange { public AlbumViewModel() { } public string AlbumName { get; set; } public int AlbumID { get; set; } } I would like to know how 3. and 4. would have to look like to fill the album list with the album names and show corresponding songs in a second list. With MVVM you would lose the controller and just have a ViewModel. There you would have your lists as such(including a notify property changed) private ObservableCollection<SongData> _songList; public ObservableCollection<SongData> SongList { get { return _songList; } set { SetProperty(ref _songList, value, () => SongList); } } Then you would load this list at some point public void LoadSongData(int albumID) { Using(YourContext _context = new YourContext) { SongList = new ObservableCollection(_context.songs .Where(s => s.AlbumID == albumID) .Select(s => new SongData { SongID = s.SongID, SongName = s.SongName }) .ToList()); } }
https://entityframeworkcore.com/knowledge-base/58375128/create-viewmodel-from-controller---model--populate-lists-from-database
CC-MAIN-2020-40
en
refinedweb
Thanks KurtE, sound advice. Looks like its just the examples that have been updated as you say. Thanks KurtE, sound advice. Looks like its just the examples that have been updated as you say. But I am assuming it is like all other linux releases. That once you download it, you need to mark the file as executable and then run it. I usually do it one of two ways. Command line: chmod +x TeensyduinoInstall.linux64 and then run it. Or I bring up a folders window on the download directory and then I do a properties on the file, and go to the permissions page, and again depending on which linux you are running, Like my secondary test machine has Ubunutu 18.04, so on this one you go to permissions tab and click the Allow executing file as program. Then double click on it. EDIT: @mjs513 beat me to this, but I think maybe comments about file attributes should be mentioned probably in same location as talking about udev rules. Which is another hint, if this is the first time you have run Teensyduino on a Linux machine, you need to download and install the udev rules. Hello, I've just tried arduino 1.8.13, teensyduino 1.53b1 and TeensytimerTool v1.0.9. From an old project that work well with 1.8.12/1.5.2, with the new environment I have an error when I try to compile => This is with Teensy 4.0 board.This is with Teensy 4.0 board.Code:D:\Mes documents\Arduino\libraries\TeensyTimerTool\src\Teensy\TCK\TCK.cpp: In function 'void yield()': D:\Mes documents\Arduino\libraries\TeensyTimerTool\src\Teensy\TCK\TCK.cpp:72:67: error: 'processSerialEvents' is not a member of 'HardwareSerial' if (HardwareSerial::serial_event_handlers_active) HardwareSerial::processSerialEvents(); When using Teensy 3.2 all is OK. Thank you, Manu Last edited by Manu; 06-18-2020 at 06:49 PM. Reason: More details Using 1.53b1 with Raspbian Stretch and Arduino 1.8.13; no problems so far.. Is there any compelling reason to keep Adafruit_RA8875 when we have the faster RA8875 library? It's been years, but my recollection is early versions of Adafruit_RA8875 only worked on slow AVR boards with certain font settings, which is why we are bundling a Teensy specific copy... I agree, also if issue with adafruit version we should fix and issue PR. They have been good at picking them up. Actually they would like it if we added some of our speed up code back to gfx/spitft code. Also should compare their canvas stuff versus our framebuffer stuff, but that is longer term. Question also if we have our own version of ra8875 library,does it make sense to also include ra8876 codebase? Probably not for this release. I'm also leaning towards removing the ST7565 library. Those displays are obsolete and seem to be long gone from the market. Maybe OpenGLCD can go too? @KurtE Think the RA8876 library needs some clean up before adding it Teensyduino, so i agree it may be too soon for 1.53 release. @PaulStoffregen Just did a little search for ST7565 they are still out there, Adafruit and EastRising but didn't see anything on Amazon. Think 5years ago was the last time that Adafruit made a change to their library. As for OpenGLCD, never used it, last update to the library was in 2016 just for reference. I would say remove it from Teensyduino as long as theres no uniqueness for Teensies. Does this make sense? Could you give me commit access to RA8875? Or I could send some pull requests - just minor cleanup like compiler warnings and comments in examples... Just sent the invitation to the RA8875 lib There are times like this, it would be good to have a setup using the library manager, such that if anyone actually needs these libraries, they can simply download it independent of Teensyduino. But my guess is it would not be bad to go ahead and prune some of these out of the basic install. Hello, It's maybe the moment to update fastled. The Teensyduino version is 3.3.1 while the current release is 3.3.3 Also, including TeensyTimerTool from Luni could be a good choice since it's a really Teensy library. Thank you, Manu Actually it is quite simple to add libraries to the library manager: it is quite simple to add libraries to the library manager: are times like this, it would be good to have a setup using the library manager I have TeensyStep and the TeensyTimerTool listed there. Output:Output:Code:#include <map> #include <vector> #include <string> #include <functional> using namespace std; void setup() { while (!Serial); Serial.println("Testing std::vector ------------"); vector<string> myVector; myVector.push_back("First string"); myVector.push_back("Last string"); myVector.insert(myVector.begin() + 1, "Second string"); for (string s : myVector) { Serial.println(s.c_str()); } Serial.println("\nTesting std::map--------------"); std::map<string, unsigned> myMap; // use fully qualified name otherwise it clashes with arduino map function myMap["zero"] = 0; myMap["answer"] = 42; Serial.println(myMap["zero"]); Serial.println(myMap["answer"]); Serial.println("\nTesting std::function----------"); function<unsigned(string)> myFunction; myFunction = testFunction; string s = "some string"; Serial.printf("The string '%s' has %u characters\n", s.c_str(), myFunction(s)); } void loop() { } unsigned testFunction(string s) { return s.length(); } Code:Testing std::vector ------------ First string Second string Last string Testing std::map-------------- 0 42 Testing std::function---------- The string 'some string' has 11 characters I updated MIDI and FastLED, and deleted some of the older libs like OpenGLCD. I'm slowly going through the many remaining compiler errors and warnings in various examples.... Is a potential change to an abstract base File still on the table for 1.53 as mentioned here? How do you deal with larger features like that? I'd be willing to help a bit if I can, but I don't see any branches in any of the GitHub repos and I'm guessing that's something you'd be doing the initial changes for to keep it all consistent?!Code:~/Downloads $ file ./TeensyduinoInstall.linux64 ./TeensyduinoInstall.linux64: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, stripped
https://forum.pjrc.com/threads/61388-Teensyduino-1-53-Beta-1/page2?s=bc6851f8320fc5172367141d219b3650
CC-MAIN-2020-40
en
refinedweb
MultiLineEditText Documentation - blastframe last edited by blastframe Hello, I'm using Python Syntax Highlighting with a GeDialog MultiLineEditText. In order to make this work, I have to pass c4d.DR_MULTILINE_PYTHON | c4d.DR_MULTILINE_SYNTAXCOLORas the style's Symbol IDs. This was not clear from the documentation which describes DR_MULTILINE_SYNTAXCOLORas C.O.F.F.E.E. syntax highlighting. It took me some time to figure out that DR_MULTILINE_PYTHONdoes not work on its own and that DR_MULTILINE_SYNTAXCOLORis not strictly for C.O.F.F.E.E.. Could you please explain this better in the documentation? Thank you. Sorry for the late reply, I thought I answered while not. The next documentation will be fixed: DR_MULTILINE_SYNTAXCOLORenables syntax color, coffee, or Python. Since coffee is removed it only helps for the Python Highlighting. DR_MULTILINE_PYTHONenables specific python line return handling e.g. writing def Something():, pressing enter will put the caret (text cursor) to a new line and indent it to match python syntax rule. Cheers, Maxime. - blastframe last edited by
https://plugincafe.maxon.net/topic/12810/multilineedittext-documentation
CC-MAIN-2020-40
en
refinedweb
Crop a part of the image with Python, Pillow (trimming) In Image module of the image processing library Pillow (PIL) of Python, crop() for cutting out a partial area of an image is provided. Here, the following cases will be described with sample code. - Normal crop - Specify outside area - Crop the center of the image - Crop the largest square from the rectangle Please refer to the following post for the installation and basic usage of Pillow (PIL). If you want to create a transparent image by cutting out a shape other than a rectangle (such as a circle), use putalpha(). Import Image from PIL and open the target image. from PIL import Image im = Image.open('data/src/lena.jpg') Normal crop Set the cropping area with box=(left, upper, right, lower). The top left coordinates correspond to (x, y) = (left, upper), and the bottom right coordinates correspond to (x, y) = (right, lower). The area to be cropped is left <= x <right and upper <= y <lower, and the pixels of x = right and y = lower are not included. Be careful not to forget that box requires (). im_crop = im.crop((100, 75, 300, 150)) im_crop.save('data/dst/lena_pillow_crop.jpg', quality=95) If you just want to save the cropped image without using it for other processing, you can write in one line. im.crop((100, 75, 300, 150)).save('data/dst/lena_pillow_crop.jpg', quality=95) Specify outside area If the outside of the image is set in the cropping area, an error does not occur and the image is displayed in black. im_crop_outside = im.crop((100, 175, 300, 250)) im_crop_outside.save('data/dst/lena_pillow_crop_outside.jpg', quality=95) Crop the center of the image If you want to crop the center of the image to any size, it is convenient to define the following function. def crop_center(pil_img, crop_width, crop_height): img_width, img_height = pil_img.size return pil_img.crop(((img_width - crop_width) // 2, (img_height - crop_height) // 2, (img_width + crop_width) // 2, (img_height + crop_height) // 2)) An example of using this function is as follows. im_new = crop_center(im, 150, 200) im_new.save('data/dst/lena_crop_center.jpg', quality=95) Crop the largest square from the rectangle When creating a thumbnail image, for example, it may be desirable to trim a square as large as possible from the rectangular image. Defines a function that crop a square of short side length from the center of the rectangular image. def crop_max_square(pil_img): return crop_center(pil_img, min(pil_img.size), min(pil_img.size)) An example of using this function is as follows. im_new = crop_max_square(im) im_new.save('data/dst/lena_crop_max_square.jpg', quality=95)
https://note.nkmk.me/en/python-pillow-image-crop-trimming/
CC-MAIN-2020-40
en
refinedweb
It is often useful to store information relevant to a user of the app for the duration of that usage session. For example the user may choose to want to save a option or be remembered as logged in. This information can either be stored client side or server side and Quart provides a system to store the information client side via Secure Cookie Sessions. Secure Cookie Sessions store the session information on the Cookie in plain text with a signature to ensure that the information is not altered by the client. They can be used in Quart so long as the secret_key is set to a secret value. secret_key An example usage to store a users colour preference would be, from quart import session ... @app.route('/') async def index(): return await render_template( 'index.html', colour=session.get('colour', 'black'), ) @app.route('/colour/', methods=['POST']) async def set_colour(): ... session['colour'] = colour return redirect(url_for('index')) Sessions can be used with WebSockets with an important caveat about cookies. A cookie can only be set on a HTTP response, and an accepted WebSocket connection cannot return a HTTP response. Therefore the default implementation, being based on cookies, will lose any modifications made during an accepted WebSocket connection.
https://pgjones.gitlab.io/quart/how_to_guides/session_storage.html
CC-MAIN-2020-40
en
refinedweb
Name: jk109818 Date: 07/09/2002 FULL PRODUCT VERSION : java version "1.4.1-beta" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1-beta-b14) Java HotSpot(TM) Client VM (build 1.4.1-beta-b14, mixed mode) FULL OPERATING SYSTEM VERSION : Windows 98 [Version 4.10.2222] A DESCRIPTION OF THE PROBLEM : Using the file chooser, browse to a directory with many files, the dialog takes a long time to show. After it shows, it will takes a long time to show the files. The folder I tested with has: 1545 files and 49 folders. Note that Notepad.exe open this instantly. There is a related bug that is closed: 4621272 It says that it fixes the bug in hopper, unfortunately, I have no way of knowing if j2sdk1.4.1 beta is "hopper" or not. Also, note that the 2 bugs are different. The related bug show slowness when select many files. This bug shows slowness by merely open a directory. Also note that with j2sdk1.4.1 beta, the previous bug is still there, and select 1 file, the select another file, there is a long pause between (inconsistently though, and only happen in a directory with many files - note: just select 1 file, not many files). STEPS TO FOLLOW TO REPRODUCE THE PROBLEM : 1) Run the application 1) Open a JFileChooser 2) Browse to a directory that has many files (it's even better to have the first directory of JFileChooser to have many files to show the effect of slowness). EXPECTED VERSUS ACTUAL BEHAVIOR : Things work faster. REPRODUCIBILITY : This bug can be reproduced always. ---------- BEGIN SOURCE ---------- import javax.swing.*; import java.io.*; public class BugDemonstration { public static void main(String args[]) { final JFrame frame = new JFrame("The Frame"); frame.setDefaultCloseOperation (JFrame.EXIT_ON_CLOSE); frame.setSize(200,200); frame.setVisible(true); SwingUtilities.invokeLater(new Runnable() { public void run() { JFileChooser file = new JFileChooser(); file.setMultiSelectionEnabled(true); file.setDialogTitle ("Select lots of files..." ); file.showDialog(frame.getContentPane(), "Demonstrate problem"); File [] selected = file.getSelectedFiles(); System.out.println (selected.length+" files selected."); } }); } } ---------- END SOURCE ---------- (Review ID: 158588) ======================================================================
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4712307
CC-MAIN-2018-05
en
refinedweb
0 I need to convert a number (in decimal form) that is between 1 to 4999 to Roman Numeral Form. However, though the code I have is kinda working, its only outputting the thousand digit, not anything less or any part thats less. def int2roman(number): numerals={1:"I", 4:"IV", 5:"V", 9: "IX", 10:"X", 40:"XL", 50:"L", 90:"XC", 100:"C", 400:"CD", 500:"D", 900:"CM", 1000:"M"} result="" for value, numeral in sorted(numerals.items(), reverse=True): while number >= value: result += numeral number -= value return result print int2roman(input("Enter a number (1 to 4999) in decimal form: ")) if I input 1994, I get M instead of the MCMXCIV I should be getting. Any corrections and explanations to my code? Thanks :)
https://www.daniweb.com/programming/software-development/threads/178804/decimal-to-roman-numeral
CC-MAIN-2018-05
en
refinedweb
al_draw_vertex_buffer man page al_draw_vertex_buffer — Allegro 5 API Synopsis #include <allegro5/allegro_primitives.h> int al_draw_vertex_buffer(ALLEGRO_VERTEX_BUFFER* vertex_buffer, ALLEGRO_BITMAP* texture, int start, int end, int type) Description Draws a subset of the passed vertex buffer. The vertex buffer must not be locked. Additionally, to draw onto memory bitmaps or with memory bitmap textures the vertex buffer must support reading (i.e. it must be created with the ALLEGRO_PRIM_BUFFER_READWRITE). Parameters: - vertex_buffer - Vertex buffer to draw - texture - Texture to use, pass NULL to use only color shaded primitves - start - Start index of the subset of the vertex buffer to draw - end - One past the last index of the subset of the vertex buffer to draw - type - A member of the ALLEGRO_PRIM_TYPE(3) enumeration, specifying what kind of primitive to draw Returns: Number of primitives drawn Since 5.1.3 See Also ALLEGRO_VERTEX_BUFFER(3), ALLEGRO_PRIM_TYPE(3) Info Allegro reference manual
https://www.mankier.com/3/al_draw_vertex_buffer
CC-MAIN-2018-05
en
refinedweb
Press Release: May 1, 2003 A New Guide to Design and Deployment of Microsoft's Active Directory: O'Reilly Releases "Active Directory, Second Edition" Sebastopol, CA--When Microsoft introduced Windows 2000, the most important change was the inclusion of Active Directory. While it offers many great benefits, Active Directory has also proved to be a huge headache for network and system administrators to design, implement, and support. "To truly understand Active Directory, one needs to understand several technologies including LDAP, DNS, multi-master replication, the Schema, and GPOs, to name a few," says Robbie Allen, coauthor with Alistair G. Lowe-Norris of Active Directory, Second Edition (O'Reilly, US $44.95). "Although the MS documentation has gotten significantly better, it is more of a reference while our book is intended to be a guide to help the curious or weary understand the big picture." The first edition of this book, "Windows 2000 Active Directory," helped many understand this technology. Now titled "Active Directory, Second Edition," the new version provides system and network administrators, IT professionals, technical project managers, and programmers with a clear, detailed look at Active Directory for both Windows 2000 and Windows Server 2003. The upgraded Active Directory that ships with Windows Server 2003 has more than one hundred new and enhanced features. In addition to the technical details for implementing Active Directory, several new chapters describe the features that have been updated or added in Windows Server 2003 along with coverage of new programmatic interfaces that are available to manage it. "All of the existing chapters have been brought up-to-date with Windows Server 2003, and eight additional chapters have been added to explain new features or concepts not covered in the first edition," explain Allen and Lowe-Norris. has been divided into three sections. Part I introduces in general terms how Active Directory works, giving the reader a thorough grounding in its concepts. Part II covers the issues around properly designing the directory infrastructure, including designing the namespace, creating a site topology, designing group policies for locking down client settings, auditing, permissions, backup and recovery, and a look at Microsoft's future direction with Directory Services. Part III is about managing Active Directory via automation with Active Directory Service Interfaces (ADSI), ActiveX Data Objects (ADO), and Windows Management Instrumentation (WMI). "Active Directory, Second Edition" will help system and network administrators navigate their way through the maze of concepts, design issues, and scripting options in Active Directory, enabling them to get the get the most out of their deployment. Additional Resources: - Chapter 14, "Upgrading to Windows Server 2003" - More information about the book, including Table of Contents, index, author bios, and samples - A cover graphic in JPEG format Active Directory, Second Edition Robbie Allen and Alistair G. Lowe-Norris ISBN 0-596-00466-4, 686 pages, $44.95 (US), $69.95 (CAN),
http://www.oreilly.com/pub/pr/1043
CC-MAIN-2018-05
en
refinedweb
Java and Kotlin are strongly typed languages. It’s not necessary to cast types when working up an object graph. For example public void sort(Collection col){ //todo } sort(new ArrayList()); sort(new HashSet()); This is an example of polymorphism in Java. ArrayList and HashSet are both Collections so it’s acceptable to pass either types to the example sort method. Keep in mind this is not a two way street. This code would not compile. public void sort(List list){ //todo } Collection col = new ArrayList(); sort(col); //Compile error! sort((List) col); //OK Even though col points at an ArrayList and ArrayList implements List, Java forbids you to pass col to sort without a cast. This is because the compiler has no idea that col is pointing at an ArrayList. Keep in mind this is true of Kotlin also. Although we can get our code to compile with a cast, it’s still dangerous code. Let’s tweak it a little big and have col point at a HashSet instead of ArrayList. public void sort(List list){ //todo } Collection col = new HashSet(); //Compiles but throws //ClassCastException sort((List) col); Now the code compiles, but it will fail at run time. There is no way to cast HashSet to a List. HashSet does not implement List in anyway so when the code attempts to make the cast, the code will fail. We have to use the instanceof operator to make sure the cast is safe first. public void sort(List list){ //todo } Collection col = new HashSet(); if (col instanceof List){ //Now it's safe sort((List) col); } This code is now safe. It will check if the runtime type of col is a List first. If the object is a List, it will make the cast. Otherwise, the cast will not get made. Tutorial This portion of the Kotlin Koans tutorial shows off how Kotlin handles casting compared to Java. Here is the Java code that needs to get rewrote in Kotlin. public class JavaCode8 extends JavaCode { public int eval(Expr expr) { if (expr instanceof Num) { return ((Num) expr).getValue(); } if (expr instanceof Sum) { Sum sum = (Sum) expr; return eval(sum.getLeft()) + eval(sum.getRight()); } throw new IllegalArgumentException("Unknown expression"); } } Kotlin has a when keyword that is used for casting. Here is the equivalent Kotlin code. fun todoTask8(expr: Expr): Int { when (expr) { is Num -> return expr.value is Sum -> return todoTask8(expr.left) + todoTask8(expr.right) else -> throw IllegalArgumentException("Unknown expression") } } As usual, Kotlin is more concise than Java. The when block starts with the when followed by the variable in question. You can have any number of is clauses in this statement followed by the type. The variable is automatically cast to the specified type on the right hand side of the -> operator. You can click here to see Part 8 One thought on “Kotlin Koans—Part 9”
https://stonesoupprogramming.com/2017/06/08/kotlin-koans-part-9/
CC-MAIN-2018-05
en
refinedweb
.usecases;19 20 import org.apache.activemq.ActiveMQConnectionFactory;21 22 import javax.jms.JMSException ;23 24 /**25 * @version $Revision: 1.1.1.1 $26 */27 public class TwoMulticastDiscoveryBrokerTopicSendReceiveTest extends TwoBrokerTopicSendReceiveTest {28 29 protected ActiveMQConnectionFactory createReceiverConnectionFactory() throws JMSException {30 return createConnectionFactory("org/apache/activemq/usecases/receiver-discovery.xml", "receiver", "vm://receiver");31 }32 33 protected ActiveMQConnectionFactory createSenderConnectionFactory() throws JMSException {34 return createConnectionFactory("org/apache/activemq/usecases/sender-discovery.xml", "sender", "vm://sender");35 }36 }37 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/activemq/usecases/TwoMulticastDiscoveryBrokerTopicSendReceiveTest.java.htm
CC-MAIN-2018-05
en
refinedweb
import org.apache.commons.discovery.resource.ClassLoaders;61 import org.apache.commons.discovery.resource.classes.*;62 63 64 /**65 * @author Richard A. Sitze66 */67 public interface ResourceClassDiscover68 {69 /**70 * Specify set of class loaders to be used in searching.71 */72 public void setClassLoaders(ClassLoaders loaders);73 74 /**75 * Specify a new class loader to be used in searching.76 * The order of loaders determines the order of the result.77 * It is recommended to add the most specific loaders first.78 */79 public void addClassLoader(ClassLoader loader);80 81 82 public void setListener(ResourceClassListener listener);83 84 85 /**86 * Find named class resources that are loadable by a class loader.87 * Listener is notified of each resource found.88 * 89 * @return FALSE if listener terminates discovery prematurely by90 * returning false, otherwise TRUE.91 */92 public boolean find(String className);93 }94 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/commons/discovery/ResourceClassDiscover.java.htm
CC-MAIN-2018-05
en
refinedweb
Minor bug in default template Create a new site and open up the "Simple" template. Line 33-35: {$ if nonblank .headline $} <p><em>{$ .about $}</em></p> {$ endif $} I suppose line 33 should be: {$ if nonblank .about $} Henrik Jernevad Wednesday, August 27, 2003 It looks weird but it's actually on purpose. That's a little trick we used so that the Index article itself (which doesn't have a headline) can use the same template as all the articles do, without having any "about the author" section. Joel Spolsky Wednesday, August 27, 2003 Oh.. cool. =) Although, it really should check that both headline and about are nonblank, shouldn't it? But that's not possible in CityScript right now, perhaps. Henrik Jernevad Wednesday, August 27, 2003 That's definitely possible... {$ if nonblank .headline $} {$ if nonblank .about $} <p><em>{$ .about $}</em></p> {$ endif $} {$ endif $} since its nested the html will only appear if both are nonblank Michael H. Pryor Wednesday, August 27, 2003 Ohh.. once again, cool.. =) I thought nested conditionals wasn't allowed. But perhaps it's only foreach:es that aren't possible to nest. (Preparing to say "Oh, cool" once again, if I'm proven wrong one more time ;) ) forEach's can nest, too. You just can't use the result of the outer forEach to decide which articles to include on the inner forEach, which makes this feature rather less useful. Recent Topics Fog Creek Home
http://discuss.fogcreek.com/CityDesk/default.asp?cmd=show&ixPost=9182&ixReplies=5
CC-MAIN-2018-05
en
refinedweb
[1]::cin and cout always go on the left-hand side of the statement. - std::cout is used to output a value (cout = character output) - std::cin is used to get an input value (cin = character input) - << is used with std::cout, and shows the direction that data is moving from the r-value to the console. std::cout << 4 moves the value of 4 to the console - >> is used with std::cin, and shows the direction that data is moving from the console into the variable. std::cin >> x moves the value from the console into x (Admin note: Discussion of the std::namespace and using statements has been moved to lesson 1.8a -- Naming conflicts and the std namespace [2])
http://www.learncpp.com/cpp-tutorial/1-3a-a-first-look-at-cout-cin-endl/print/
CC-MAIN-2018-05
en
refinedweb
Fix NameNotFoundException: Wrong Mapping in Deployment Plan In previous posts, I wrote about two common causes of javax.naming.NameNotFoundException: incorrect lookup name and reference not declared. This post covers a third cause: wrong mapping of EJB/resource in appserver-specific deployment plans. This sample project consists of a simple EJB3 stateless session bean HelloEJBBean, and an application client with main class hello.Main. A reference to HelloEJBBean's remote business interface is injected into the client main class. package hello.ejb;Application client main class: import javax.ejb.Remote; @Remote public interface HelloEJBRemote { void hello(); } package hello.ejb; import javax.ejb.Stateless; @Stateless public class HelloEJBBean implements HelloEJBRemote { public void hello() { } } package hello;For the above sample app to work, we do not need any deployment descriptors or deployment plan. Why? because all metadata have been provided with annotations, or have defaults, or can be figured out by appserver one way or another. import hello.ejb.HelloEJBRemote; import javax.ejb.EJB; public class Main { @EJB(beanName="HelloEJBBean") private static HelloEJBRemote helloEJB; public static void main(String[] args) { helloEJB.hello(); } } But some IDEs still generate unnecessary deployment descriptors and deployment plans, which may have wrong mapping info. This happens without you knowing it. If you delete these unnecessary and wrong deployment plans, the next time you rebuild project, they will be regenerated. For instance, NetBeans 5.5 beta generates the following sun-application-client-jar.xml, which is unnecessary and contains the wrong mapping data: <?xml version="1.0" encoding="UTF-8"?>Both the ejb reference name and ejb JNDI name are wrong. If the IDE really thinks a sun-application-client.xml is helpful for whatever reason, the ejb-ref element should be: <!DOCTYPE sun-application-client PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Application Client 5.0//EN" ""> <sun-application-client> <ejb-ref> <ejb-ref-name>helloEJB</ejb-ref-name> <jndi-name>ejb/helloEJB</jndi-name> </ejb-ref> </sun-application-client> <ejb-ref>because the default ejb reference name for an injected ejb is the of the format: <fully-qualified-class-name of the injection target class>/field-name. The default ejb JNDI name depends on appserver implementation, and for EJB3 in JavaEE SDK 5/Glassfish/Sun Java System Application Server, it's the fully qualified class name of the remote business interface. <ejb-ref-name>hello.Main/helloEJB</ejb-ref-name> <jndi-name>hello.ejb.HelloEJBRemote</jndi-name> </ejb-ref> IDE's best effort to generate deployment artifacts is still not good enough. With wrong mapping info in deployment plan, your app may fail to deploy, if the appserver validates the ejb reference at deployment time. Or even if it is deployed, it will fail at request time. All posts in this series for NameNotFoundException: Fix NameNotFoundException: Incorrect Lookup Name Fix NameNotFoundException: Reference Not Declared Fix NameNotFoundException: Wrong Mapping in Deployment Plan 2 comments: You can additionally browse/view the jndi tree on the app server to see under which 'name' your object is deployed. Great and Useful Article. Online Java Training Java Online Training India Java Online Course Java EE course Java EE training Best Recommended books for Spring framework Java Interview Questions Java Course in Chennai Java Online Training India
http://javahowto.blogspot.com/2006/07/fix-namenotfoundexception-wrong.html
CC-MAIN-2018-47
en
refinedweb
Time for a challenge, so I’m going to try 12 blogs of Christmas. The aim is to write 12 blog entries in December (or at least by 5th January which is the 12th day of Christmas). That is one blog entry every 3 days. It’s a catchy title for a challenge (always helps, think Movember) which could be used for any challenge and I’ve twisted my ankle so I doubt I’ll be running; the 12 runs of Christmas does sound nice anyway. Yes it is the 4th already so not a good start. After the last post I’ve been thinking of other examples of where a generator would be useful that was more in keeping with the theme of this blog (sys administration with Python in case you’ve forgotten). Iterating through system calls or an API would be a good candidate but I’ve not been using anything recently that fitted the bill. Another case that sprang to mind was file searching. A reasonable way to do this would be to create a list but why use the memory to create the list if the caller is unlikely to need a list and they can use list comprehension to create a list anyway. So this should make a good generator example. Some of the work is done already by os.walk; this will iterate through each directory giving you a list of files and folders. Normally when you looking for files you would specify a wildcard pattern so I’m going to use regular expressions and return any file that matches using yield. I’ve covered regular expressions a few times before so I’ll skip any explanation and just present the code which takes a directory and a file pattern and returns all the matching files. import os, re</pre> <pre>def filesearch (root, pattern, exact=True): searchre = re.compile(pattern) for parent, dirs, files in os.walk(root): for filename in files: if exact: res = searchre.match(filename) else res = searchre.search(filename) if res: yield os.path.join(parent,filename) for filename in filesearch(r"C:\Temp",r".*\.exe"): print("%s has size %d" % (filename,os.path.getsize(filename))) The only thing to note is I added a third option so you can do a match (the regular expression must match the whole filename) or a search (the regular expression only needs to match part of the filename). This defaults to true which is an exact match. The example should find any executables in the C:\temp folder. Regular expressions are very powerful but not quite as simple using *.exe. Instead the asterisk becomes .* (match any character 0 or more times) and the dot has to be escaped as it is a special character. I’ve just printed the filename and size out but you could equally delete the file if it was bigger than a certain size etc. And that’s my first post of 12 blogs of Christmas. Lets see if I can get all 12 done in time.
https://quackajack.wordpress.com/tag/directory/
CC-MAIN-2018-47
en
refinedweb
Refactoring Extract Method Problem You have a code fragment that can be grouped together. Solution Move this code to a separate new method (or function) and replace the old code with a call to the method. void printOwing() { printBanner(); //print details System.out.println("name: " + name); System.out.println("amount: " + getOutstanding()); } void printOwing() { printBanner(); printDetails(getOutstanding()); } void printDetails(double outstanding) { System.out.println("name: " + name); System.out.println("amount: " + outstanding); } void PrintOwing() { PrintBanner(); //print details Console.WriteLine("name: " + name); Console.WriteLine("amount: " + GetOutstanding()); } void PrintOwing() { PrintBanner(); PrintDetails(GetOutstanding()); } void PrintDetails(double outstanding) { Console.WriteLine("name: " + name); Console.WriteLine("amount: " + outstanding); } function printOwing() { $this->printBanner(); //print details print("name: " . $this->name); print("amount " . $this->getOutstanding()); } function printOwing() { $this->printBanner(); $this->printDetails($this->getOutstanding()); } function printDetails ($outstanding) { print("name: " . $this->name); print("amount " . $outstanding); } def printOwing(self): self.printBanner() # print details print("name:", self.name) print("amount:", self.getOutstanding()) def printOwing(self): self.printBanner() self.printDetails(self.getOutstanding()) def printDetails(self, outstanding): print("name:", self.name) print("amount:", outstanding) printOwing(): void { printBanner(); //print details console.log("name: " + name); console.log("amount: " + getOutstanding()); } printOwing(): void { printBanner(); printDetails(getOutstanding()); } printDetails(outstanding: number): void { console.log("name: " + name); console.log("amount: " + outstanding); } Why Refactor The more lines found in a method, the harder it are declared inside the fragment and not used outside of it, simply leave them unchanged – they will become local variables for the new method. If the variables are declared prior to the code that you are extracting, you will need to pass these variables to the parameters of your new method in order to use the values previously contained in them. Sometimes it.
https://refactoring.guru/extract-method
CC-MAIN-2018-47
en
refinedweb
[SOLVED] Arduino Dock 2: Onion Library still required for Omega2(+) ? - canochordo Hi, I am using an Omega2+ with the Arduino Dock 2. I would like to communicate with the ATmega via I2C. While reading the docs I stumbled upon this article:. Does this still apply to the new Omega2 and Arduino Dock 2 ? I can't find anything related in the Omega2 docs: Onion Arduino Library: - Michael Xie48 Hi @canochordo, You don't need to use the Onion Library for the Omega2 and Arduino Dock2. The most simple way to communicate through I2C: - On the Omega2 side use the i2cget or i2cset command for I2C port 0 and ATmega address 0x08. For example, i2cget -y 0 0x08 (reading from ATmega) i2cset -y 0 0x08 0x01 (sending 0x01 to ATmega) - On the Atmega side, flash the chip wirelessly using the following guide and use the wire library: - Michael Xie48 Another method from the Omega side is to install python and use the I2C python module: - canochordo Thanks @Michael-Xie48 for your quick response I am now able to switch the build-in LED on / off via I2C: Arduino / ATmega #include <Wire.h> void setup() { pinMode(LED_BUILTIN, OUTPUT); digitalWrite(LED_BUILTIN, LOW); Wire.begin(0x08); Wire.onReceive(i2c_on_receive_handler); } void loop() { } void i2c_on_receive_handler(int bytes_num) { digitalWrite(LED_BUILTIN, Wire.read()); } Omega2 root@Omega:~# i2cset -y 0 0x08 0x01 # switch on root@Omega:~# i2cset -y 0 0x08 0x00 # switch off
http://community.onion.io/topic/1352/solved-arduino-dock-2-onion-library-still-required-for-omega2
CC-MAIN-2018-47
en
refinedweb
From Documentation The Story Think of an extremely ordinary scenario, where you want to clear all the texts in a form by clicking a button. While you see the screen shot, you may have already come up with an implementation in mind. For example: public class SomeFormController extends GenericForwardComposer { Textbox usenameTb; Textbox passwordTb; Textbox retypepwTb; // ... // ... Textbox memoTb; public void onClick$clearBtn(Event event) { usenameTb.setValue(""); passwordTb.setValue(""); retypepwTb.setValue(""); // ... // ... memoTb.setValue(""); } } But wait, no. The unordinary part of the story is here: this feature is actually implemented by just(""); } } These are what we foresee in ZK 6: leveraging Annotation power from Java 1.5, and introduction to some new techniques. In this Small Talk we are going to reveal two new weapons: Selector and SelectorComposer. The jQuery/CSS3-like Component Selector In the previous example, Selector is shown as a part of the parameters in Annotation @Wire and @Listen. @Wire("textbox, intbox, decimalbox, datebox") @Listen("onClick = button[label='Clear']") The concept is simple: Selector is a pattern string that matches nodes in a Component tree. In other words, by giving a Selector string, you can specify a collection of Components from a ZUL file. // Collects all the textboxes, intboxes, decimalboxes, and dateboxes as a List and wire to inputs @Wire("textbox, intbox, decimalbox, datebox") List<InputElement> inputs; // Collects all the buttons whose label is "Clear", and adds EventListeners for them @Listen("onClick = button[label='Clear']") public void onClear(MouseEvent event) { // ... } If you know jQuery or CSS selector, this is exactly their counterpart on server side. Syntax The syntax of Selector is closely analogous to CSS3 selector. Component type, class, attribute, pseudo class are used to describe properties of a component. For example: // Matches any Button component "button" // Matches any Component with ID "btn" "#btn" // Matches any Button with ID "btn" "button#btn" // Matches any Button whose label is "Submit" "button[label='Submit']" Combinators are used to describe relations between components. For example: // Matches any Button who has a Window ancestor "window button" // Matches any Button whose parent is a Window "window > button" // Matches any Button whose previous sibling is a Window "window + button" // Matches any Button who has a Window as a senior sibling "window ~ button" // Matches any Button whose parent is a Div and grandparent is a Window "window > div > button" SelectorComposer SelectorComposer is analogous to GenericForwardComposer. But instead of wiring variables by naming convention, the new composer wires them by annotation and specifies the Components by Selector. public class MyComposer extends SelectorComposer { // If the field is a Collection, the composer will wire all Components matched by the selector @Wire("label") private List<Label> labelList; // Same for Array @Wire("label") private Label[] labelArray; // If the field is not a Collection or Array, the first matched Component is wired to the field @Wire("label[value='zk']") private Label label1; // If selector string is not given, it will attempt to wire implicit objects by name or fellows by ID. @Wire private Desktop desktop; @Wire private Button clearBtn; } Event listening is handled in a similar way. However, instead of forwarding the events, it adds the method to the EventListener of the target Components directly. public class MyComposer extends SelectorComposer { // Like auto-forwarding, methods annotated with @Listen will be added to the event listeners of the components described by selector. @Listen("onClick = button#btn") public void onPressButton(Event event) { // The event here will be the original event, not ForwardEvent! } // The event listener will be added to ALL the components that match the selector, not just the first match @Listen("onClick = #myGrid > rows > row") public void onClickAnyRow(MouseEvent event) { // ... } // You can specify multiple event types @Listen("onClick = button[label='Submit']; onOK = textbox#password") public void onSubmit(Event event) { // ... } } More about Selector You can also use Selector independently. For example: Window win; // returns a list of components, containing all labels in the page Selectors.find(page, "label"); // returns all components with id "myId" under the Window win. (including itself) Selectors.find(win, "#myId"); // returns all components whose .getLabel() value is "zk" (if applicable) Selectors.find(page, "[label='zk']"); // returns all captions whose parent is a window Selectors.find(win, "window > caption"); // returns all buttons and toolbarbuttons Selectors.find(page, "button, toolbarbutton"); // you can assemble the criteria: // returns all labels, whose parent is a window of id "win", and whose value is "zk" Selectors.find(page, "window#win > label[value='zk']"); Comparison with CSS3 Selector
https://www.zkoss.org/wiki/Small%20Talks/2011/January/Envisage%20ZK%206:%20An%20Annotation%20Based%20Composer%20For%20MVC
CC-MAIN-2018-47
en
refinedweb
[] An array of rectangles containing the UV coordinates in the atlas for each input texture, or null if packing fails. Packs multiple Textures into a texture atlas. This function will replace the current texture with the atlas made from the supplied textures. The size, format and mipmaps of any of the textures can change after packing. The resulting texture atlas will be as large as needed to fit all input textures but only up to maximumAtlasSize in each dimension. If the input textures can't all fit into a texture atlas of the desired size then they will be scaled down to fit. The atlas will have DXT1 format if all input textures are DXT1 compressed. If all input textures are compressed in DXT1 or DXT5 formats then the atlas will be in DXT5 format. If any input texture is not compressed then the atlas will be in RGBA32 uncompressed format. If none of the input textures have mipmaps then the atlas will also have no mipmaps. If you use non-zero padding and the atlas is compressed and has mipmaps then the lower-level mipmaps might not be exactly the same as in the original texture due to compression restrictions. If makeNoLongerReadable is true then the texture will be marked as no longer readable and memory will be freed after uploading to the GPU. By default makeNoLongerReadable is set to false. using UnityEngine; public class Example : MonoBehaviour { // Source textures. Texture2D[] atlasTextures; // Rectangles for individual atlas textures. Rect[] rects; void Start() { // Pack the individual textures into the smallest possible space, // while leaving a two pixel gap between their edges. Texture2D atlas = new Texture2D(8192, 8192); rects = atlas.PackTextures(atlasTextures, 2, 8192); } } Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/ScriptReference/Texture2D.PackTextures.html
CC-MAIN-2018-47
en
refinedweb
Data Points - A New Option for Creating OData: Web API By Julie Lerman | June 2013 | Get the Code: C# VB Microsoft .NET developers have been able to create OData feeds since before there was even an OData spec. By using WCF Data Services, you could expose an Entity Data Model (EDM) over the Web using Representational State Transfer (REST). In other words, you could consume these services through HTTP calls: GET, PUT, DELETE and so on. As the framework for creating these services evolved (and was renamed a few times along the way), the output evolved as well and eventually became encapsulated in the OData specification (odata.org). Now there’s a great variety of client APIs for consuming OData—from .NET, from PHP, from JavaScript and from many other clients as well. But until recently, the only easy way to create a service was with WCF Data Services. WCF Data Services is a .NET technology that simply wraps your EDM (.edmx, or a model defined via Code First) and then exposes that model for querying and updating through HTTP. Because the calls are URIs (such as), you can even query from a Web browser or a tool such as Fiddler. To create a WCF Data Service, Visual Studio provides an item template for building a data service using a set of APIs. Now there’s another way to create OData feeds—using an ASP.NET Web API. In this article I want to provide a high-level look at some of the differences between these two approaches and why you’d choose one over the other. I’ll also look at some of the ways creating an OData API differs from creating a Web API. API vs. Data Service at a High Level A WCF Data Service is a System.Data.Services.DataService that wraps around an ObjectContext or DbContext you’ve already defined. When you declare the service class, it’s a generic DataService of your context (that is, DataService<MyDbContext>). Because it starts out completely locked down, you set access permissions in the constructor to the DbSets from your context that you want the service to expose. That’s all you need to do. The underlying DataService API takes care of the rest: interacting directly with your context, and querying and updating the database in response to your client application’s HTTP requests to the service. It’s also possible to add some customizations to the service, overriding some of its query or update logic. But for the most part, the point is to let the DataService take care of most of the interaction with the context. With a Web API, on the other hand, you define the context interaction in response to the HTTP requests (PUT, GET and the like). The API exposes methods and you define the logic of the methods. You don’t necessarily have to be interacting with Entity Framework or even a database. You could have in-memory objects that the client is requesting or sending. The access points won’t just be magically created like they are with the WCF Data Service; instead, you control what’s happening in response to those calls. This is the defining factor for choosing a service over an API to expose your OData. If most of what you want to expose is simple Create, Read, Update, Delete (CRUD) without the need for a lot of customization, then a data service will be your best choice. If you’ll need to customize a good deal of the behavior, a Web API makes more sense. I like the way Microsoft Integration MVP Matt Milner put it at a recent gathering: “WCF Data Services is for when you’re starting with the data and model and just want to expose it. Web API is better when you’re starting with the API and want to define what it should expose.” Setting the Stage with a Standard Web API For those with limited experience with Web API, prior to looking at the new OData support I find it helpful to get a feel for the Web API basics and then see how they relate to creating a Web API that exposes OData. I’ll do that here—first creating a simple Web API that uses Entity Framework as its data layer, and then converting it to provide its results as OData. One use for a Web API is as an alternative to a standard controller in a Model-View-Controller (MVC) application, and you can create it as part of an ASP.NET MVC 4 project. If you don’t want the front end, you can start with an empty ASP.NET Web application and add Web API controllers. However, for the sake of newbies, I’ll start with an ASP.NET MVC 4 template because that provides scaffolding that will spit out some starter code. Once you understand how all the pieces go together, starting with an empty project is the right way to go. So I’ll create a new ASP.NET MVC 4 application and then, when prompted, choose the Empty template (not the Web API template, which is designed for a more robust app that uses views and is overkill for my purposes). This results in a project structured for an MVC app with empty folders for Models, Views and Controllers. Figure 1 compares the results of the Empty template to the Web API template. You can see that an Empty template results in a much simpler structure and all I need to do is delete a few folders. Figure 1 ASP.NET MVC 4 Projects from Empty Template and Web API Template I also won’t need the Models folder because I’m using an existing set of domain classes and a DbContext in separate projects to provide the model. I’ll then use the Visual Studio tooling to create my first controller, which will be a Web API controller to interact with my DbContext and domain classes that I’ve referenced from my MVC project. My model contains classes for Airline, Passengers, Flights and some additional types for airline-related data. Because I used the Empty template, I’ll need to add some references in order to call into the DbContext—one to System.Data.Entity.dll and one to EntityFramework.dll. You can add both of these references by installing the EntityFramework NuGet package. You can create a new Web API controller in the same manner as creating a standard MVC controller: right-click the Controllers folder in the solution and choose Add, then Controller. As you can see in Figure 2, you now have a template for creating an API controller with EF read and write actions. There’s also an Empty API controller. Let’s start with the EF read/write actions for a point of comparison to the controller we want for OData that will also use Entity Framework. Figure 2 A Template for Creating an API Controller with Pre-Populated Actions If you’ve created MVC controllers before, you’ll see that the resulting class is similar, but instead of a set of view-related action methods, such as Index, Add and Edit, this controller has a set of HTTP actions. For example, there are two Get methods, as shown in Figure 3. The first, GetAirlines, has a signature that takes no parameters and uses an instance of the AirlineContext (which the template scaffolding has named db) to return a set of Airline instances in an Enumerable. The other, GetAirline, takes an integer and uses that to find and return a particular airline. public class AirlineController : ApiController { private AirlineContext db = new AirlineContext2(); // GET api/Airline public IEnumerable<Airline> GetAirlines() { return db.Airlines.AsEnumerable(); } // GET api/Airline/5 public Airline GetAirline(int id) { Airline airline = db.Airlines.Find(id); if (airline == null) { throw new HttpResponseException (Request.CreateResponse(HttpStatusCode.NotFound)); } return airline; } The template adds comments to demonstrate how you’d use these methods. After providing some configurations to my Web API, I can check it out directly in a browser using the example syntax on the port my app has assigned:. This is the default HTTP GET call and is therefore routed by the application to execute the GetAirlines method. Web API uses content negotiation to determine how the result set should be formatted. I’m using Google Chrome as my default browser, which triggered the results to be formatted as XML. The request from the client controls the format of the results. Internet Explorer, for example, sends no specific header information with respect to what format it accepts, so Web API will default to returning JSON. My XML results are shown in Figure 4. <ArrayOfAirline xmlns:i= <Airline> <Id>1</Id> <Legs/> <ModifiedDate>2013-02-22T00:00:00</ModifiedDate> <Name>Vermont Balloon Transporters</Name> </Airline> <Airline> <Id>2</Id> <Legs/> <ModifiedDate>2013-02-22T00:00:00</ModifiedDate> <Name>Olympic Airways</Name> </Airline> <Airline> <Id>3</Id> <Legs/> <ModifiedDate>2013-02-22T00:00:00</ModifiedDate> <Name>Salt Lake Flyer</Name> </Airline> </ArrayOfAirline> If, following the guidance of the GetAirline method, I were to add an integer parameter to the request——then only the single airline whose key (Id) is equal to 3 would be returned: If I were to use Internet Explorer, or a tool such as Fiddler where I could explicitly control the request to the API to ensure I get JSON, the result of the request for Airline with the Id 3 would be returned as JSON: These responses contain simple representations of the airline type with elements for each property: Id, Legs, ModifiedDate and Name. The controller also contains a PutAirline method that Web API will call in response to a PUT HTTP request. PutAirline contains code for using the AirlineContext to update an airline. There’s also a PostAirline method for inserts and a DeleteAirline method for deleting. These can’t be demonstrated in a browser URL but you can find plenty of getting-started content for Web API on MSDN, Pluralsight and elsewhere, so I’ll move on to converting this to output its result according to the OData spec. Turning Your Web API into an OData Provider Now that you have a basic understanding of how Web API can be used to expose data using the Entity Framework, let’s look at the special use of Web API to create an OData provider from your data model. You can force your Web API to return data formatted as OData by turning your controller into an OData controller—using a class that’s available in the ASP.NET and Web Tools 2012.2 package—and then overriding its OData-specific methods. With this new type of controller, you won’t even need the methods that were created by the template. In fact, a more efficient path for creating an OData controller is to choose the Empty Web API scaffolding template rather than the one that created the CRUD operations. There are four steps I’ll need to perform for this transition: - Make the controller a type of ODataController and implement its HTTP methods. I’ll use a shortcut for this. - Define the available EntitySets in the project’s WebAPIConfig file. - Configure the routing in WebAPIConfig. - Pluralize the name of the controller class to align with OData conventions. Creating an ODataController Rather than inherit from ODataController directly, I’ll use EntitySetController, which derives from ODataController and provides higher-level support by way of a number of virtual CRUD methods. I used NuGet to install the Microsoft ASP.NET Web API OData package for the proper assemblies that contain both of these controller classes. Here’s the beginning of my class, now inheriting from EntitySetController and specifying that the controller is for the Airline type: I’ve fleshed out the override for the Get method, which will return db.Airlines. Notice that I’m not calling ToList or AsEnumerable on the Airlines DbSet. The Get method needs to return an IQueryable of Airline and that’s what db.Airlines does. This way, the consumer of the OData can define queries over this set, which will then get executed on the database, rather than pulling all of the Airlines into memory and then querying over them. The HTTP methods you can override and add logic to are GET, POST (for inserts), PUT (for updates), PATCH (for merging updates) and DELETE. But for updates you’ll actually use the virtual method CreateEntity to override the logic called for a POST, the UpdateEntity for logic invoked with PUT and PatchEntity for logic needed for the PATCH HTTP call. Additional virtual methods that can be part of this OData provider are: CreateLink, DeleteLink and GetEntityByKey. In WCF Data Services, you control which CRUD actions are allowed per EntitySet by configuring the SetEntitySetAccessRule. But with Web API, you simply add the methods you want to support and leave out the methods you don’t want consumers to access. Specifying EntitySets for the API The Web API needs to know which EntitySets should be available to consumers. This confused me at first. I expected it to discover this by reading the AirlineContext. But as I thought about it more, I realized it’s similar to using the SetEntitySetAccessRule in WCF Data Services. In WCF Data Services, you define which CRUD operations are allowed at the same time you expose a particular set. But with the Web API, you start by modifying the WebApiConfig.Register method to specify which sets will be part of the API and then use the methods in the controller to expose the particular CRUD operations. You specify the sets using the ODataModelBuilder—similar to the DbContext.ModelBuilder you may have used with Code First. Here’s the code in the Register method of the WebApiConfig file to let my OData feed expose Airlines and Legs: Defining a Route to Find the OData Next, the Register method needs a route that points to this model so that when you call into the Web API, it will provide access to the EntitySets you defined: You’ll see that many demos use “odata” for the RoutePrefix parameter, which defines the URL prefix for your API methods. While this is a good standard, you can name it whatever you like. So I’ll change it just to prove my point: Renaming the Controller Class The application template generates code that uses a singular naming convention for controllers, such as AirlineController and LegController. However, the focus of OData is on the EntitySets, which are typically named using the plural form of the entity name. And because my EntitySets are indeed plural, I need to change the name of my controller class to AirlinesController to align with the Airlines EntitySet. Consuming the OData Now I can work with the API using familiar OData query syntax. I’ll start by requesting a listing of what’s available using the request:. The results are shown in Figure 5. <service xmlns="" xmlns: <workspace> <atom:titleDefault</atom:title> <collection href="Airlines"> <atom:titleAirlines</atom:title> </collection> <collection href="Legs"> <atom:titleLegs</atom:title> </collection> </workspace> </service> The results show me that the service exposes Airlines and Legs. Next, I’ll ask for a list of the Airlines as OData with. OData can be returned as XML or JSON. The default for Web API results is the JSON format: { "odata.metadata": "","value":[ { "Id":1,"Name":"Vermont Balloons","ModifiedDate":"2013-02-26T00:00:00" },{ "Id":2,"Name":"Olympic Airways","ModifiedDate":"2013-02-26T00:00:00" },{ "Id":3,"Name":"Salt Lake Flyer","ModifiedDate":"2013-02-26T00:00:00" } ] } One of the many OData URI features is querying. By default, the Web API doesn’t enable querying, as that imposes an extra load on the server. So you won’t be able to use these querying features with your Web API until you add the Queryable annotation to the appropriate methods. For example, here I’ve added Queryable to the Get method: Now you can use the $filter, $inlinecount, $orderby, $sort and $top methods. Here’s a query using the OData filter method: The ODataController allows you to constrain the queries so that consumers don’t cause performance problems on your server. For example, you can limit the number of records that are returned in a single response. See the Web API-specific “OData Security Guidance” article at bit.ly/X0hyv3 for more details. Just Scratching the Surface I’ve looked at only a part of the querying capabilities you can provide with the Web API OData support. You can also use the virtual methods of the EntitySetController to allow updating to the database. An interesting addition to PUT, POST, and DELETE is PATCH, which lets you send an explicit and efficient request for an update when only a small number of fields have been changed, rather than sending the full entity for a POST. But the logic within your PATCH method needs to handle a proper update, which, if you’re using Entity Framework, most likely means retrieving the current object from the database and updating it with the new values. How you implement that logic depends on knowing at what point in the workflow you want to pay the price of pushing data over the wire. It’s also important to be aware that this release (with the ASP.NET and Web Tools 2012.2 package) supports only a subset of OData features. That means not all of the API calls you can make into an OData feed will work with an OData provider created with the Web API. The release notes for the ASP.NET and Web Tools 2012.2 package list which features are supported. There’s a lot more to learn than I can share in the limited space of this column. I recommend Mike Wasson’s excellent series on OData in the official Web API documentation at bit.ly/14cfHIm. You’ll learn about building all of the CRUD methods, using PATCH, and even using annotations to limit what types of filtering are allowed in your OData APIs and working with relationships. Keep in mind that many of the other Web API features apply to the OData API, such as how to use authorization to limit who can access which operations. Also, the .NET Web Development and Tools Blog (blogs.msdn.com/webdev) has a number of detailed blog posts about OData support in the Web API. “Programming Entity Framework” books from O’Reilly Media and numerous online courses at Pluralsight.com. Follow her on Twitter at twitter.com/julielerman. Thanks to the following technical experts for reviewing this article: Jon Galloway (Microsoft) and Mike Wasson (Microsoft) Jon Galloway (Jon.Galloway@microsoft.com) is a Technical Evangelist on the Windows Azure evangelism team, focused on ASP.NET MVC and ASP.NET Web API. He speaks at conferences and international Web Camps from Istanbul to Bangalore to Buenos Aires. He's a co-author on the Wrox Professional ASP.NET MVC book series, and is a co-host on the Herding Code podcast. Mike Wasson (mwasson@microsoft.com) is a programmer-writer at Microsoft. For many years he documented the Win32 multimedia APIs. He currently writes about ASP.NET, focusing on Web API. Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
https://msdn.microsoft.com/en-us/magazine/dn201742.aspx
CC-MAIN-2018-34
en
refinedweb
Created on 2014-01-06 15:30 by torsten, last changed 2015-03-05 17:53 by davin. This issue is now closed. The behaviour of multiprocessing.Queue surprised me today in that Queue.get() may raise an exception even if an item is immediately available. I tried to flush entries without blocking by using the timeout=0 keyword argument: $ /opt/python3/bin/python3 Python 3.4.0b1 (default:247f12fecf2b, Jan 6 2014, 14:50:23) [GCC 4.6.3] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from multiprocessing import Queue >>> q = Queue() >>> q.put("hi") >>> q.get(timeout=0) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/python3/lib/python3.4/multiprocessing/queues.py", line 107, in get raise Empty queue.Empty Actually even passing a small non-zero timeout will not give me my queue entry: >>> q.get(timeout=1e-6) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/torsten/opensrc/cpython/Lib/multiprocessing/queues.py", line 107, in get raise Empty queue.Empty Expected behaviour for me would be to return the item that is in the queue. I know that there is a kwarg *block* which gives me the desired behaviour: >>> q.get(block=False) 'hi' In my case the get call is embedded in my own module which does not currently expose the block parameter. My local solution is of course to update the wrapper: if timeout == 0: timeout = None block = False However I see a few smells here in the python standard library. First, everything else seems to accept timeout=0 as nonblocking: >>> import threading >>> lock = threading.Lock() >>> lock.acquire(timeout=0) True >>> from queue import Queue >>> q = Queue() >>> q.put("hi") >>> q.get(timeout=0) 'hi' Of special note is that queue.Queue behaves as I would have expected. IMHO it should be consistent with multiprocessing.Queue. Also note that queue.Queue.get() and queue.Queue.put() name their blocking flag "block", while everybody else uses "blocking". As a side note, I think the current approach is flawed in computing the deadline. Basically it does the following: deadline = time.time() + timeout if not self._rlock.acquire(block, timeout): raise Empty timeout = deadline - time.time() if timeout < 0 or not self._poll(timeout): raise Empty On my system, just taking the time twice and computing the delta takes 2 microseconds: >>> import time >>> t0 = time.time(); time.time() - t0 2.384185791015625e-06 Therefore calling Queue.get(block, timeout) with 0 < timeout < 2e-6 will never return anything from the queue even though Queue.get(block=False) would do that. This contradicts the idea that Queue.get(block=False) will return faster than with block=True with any timeout > 0. Apart from that, as Python does not currently support waiting on multiple sources, we currently often check a queue with a small timeout concurrently with doing other stuff. In case the system get really loaded, I would expect this to cause problems because the updated timeout may fall below zero. Suggested patch attached. We have a similar bug with Queue.get(). Queue.get(False) raises an exception Queue.Empty in the case when the queue is actually not empty! An example of the code is attached and is listed below just in case: ---------------------- import multiprocessing import Queue class TestWorker(multiprocessing.Process): def __init__(self, inQueue): multiprocessing.Process.__init__(self) self.inQueue = inQueue def run(self): while True: try: task = self.inQueue.get(False) except Queue.Empty: # I suppose that Queue.Empty exception is about empty queue # and self.inQueue.empty() must be true in this case # try to check it using assert assert self.inQueue.empty() break def runTest(): queue = multiprocessing.Queue() for _ in xrange(10**5): queue.put(1) workers = [TestWorker(queue) for _ in xrange(4)] map(lambda w: w.start(), workers) map(lambda w: w.join(), workers) if __name__ == "__main__": runTest() ---------------------- Hi! Are there any updates on the issue? This same issue came up recently in issue23582. Really, it should have been addressed in this issue here first and issue23582 marked as a duplicate of this one but these things don't always happen in a synchronous or apparently-linear fashion. Adding to what is captured in issue23582, specifically referring to the points raised here in this issue: 1. A call to put does not mean that the data put on the queue is instantly/atomically available for retrieval via get. Situations where a call to put is immediately followed by a non-blocking call to get are asking for a race-condition -- this is a principal reason for having blocking calls with timeouts. 2. A call to get resulting in an Empty exception of course does not mean that the queue is forevermore empty, only that the queue is empty at the moment the call to get was made -- the facility for trapping the Empty and trying again to get more data off the queue provides welcome flexibility on top of the use of blocking/non-blocking calls with/without timeouts. 3. A call to empty is, as indicated in the documentation, not to be considered reliable because of the semantics in coordinating the queue's state and data between processes/threads. 4. Alexei's contributions to this issue are very nearly identical to what is discussed in issue23582 and are addressed well there. 5. As to using a timeout value too small to be effective (i.e. < 2e-6), really this is one example of the larger concern of choosing an appropriate timeout value. In the proposed patch, ensuring that a call to self._poll is made no matter what might potentially buy additional time for the data to be synced and made available (admittedly a happy result, but a fragile, inadvertent win) but it does not address the rest of how get, put, and the others work nor will it necessarily solve the issue being raised here. In Alexei's example, changing the call to get from a non-blocking call to a blocking call with a reasonably small timeout will reliably ensure that everything put on the queue can and will be gotten back by the rest of that code. In multiprocessing, we have queues to help us make data available to and across processes and threads alike -- we must recognize that coordinating data across distinct processes (especially) takes a non-zero amount of time -- hence we have the tools of blocking as well as non-blocking calls both with or without timeouts to properly implement robust code in these situations.
https://bugs.python.org/issue20147
CC-MAIN-2018-34
en
refinedweb