text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Technology news How to Use Sentry and GitLab to Capture React Errors Technology news James Walker | 5 min read Sentry is an error-tracking platform that lets you monitor issues in your production deployments. It supports most popular programming languages and frameworks. GitLab is a Git-based DevOps platform to manage the entire software development lifecycle. GitLab can integrate with Sentry to display captured errors. In this article, we’ll use the two services to stay ahead of issues in a React application. Getting Set up GitLab and Sentry both have self-hosted and SaaS options. The steps in this guide apply to both variants. We’ll assume that you’ve already got a React project ready to use in your GitLab instance. Log in to Sentry and click the “Create Project” button in the top-right corner. Click “React” under the “Choose a platform” heading. This lets Sentry tailor example code snippets to your project. Choose when to receive alerts using the options beneath “Set your default alert settings.” Select “Alert me on every new issue” to get an email each time an error is logged. The “When there are more than” option filters out noise created by duplicate events in a given time window. Give your project a name in the “Project name” field. Click “Create Project” to finish your setup. Adding Sentry to Your Codebase Now, you need to integrate Sentry with your React code. Add the Sentry library to your project’s dependencies using npm: npm install @sentry/react You’ll need to initialize Sentry as soon as possible in your app’s JavaScript. This gives Sentry visibility into errors that occur early in the React lifecycle. Add Sentry’s bootstrap script before your first ReactDOM.render() call. This is typically in index.js: import App from "./App.js"; import React from "react"; import ReactDOM from "react-dom"; import * as Sentry from "@sentry/react"; Sentry.init({ dsn: "my-dsn" }); ReactDOM.render(App />, document.getElementById("react")); Replace my-dsn with the DSN that Sentry displays on your project creation screen. The DSN uniquely identifies your project so that the service can attribute events correctly. Capturing Errors Sentry will automatically capture and report unhandled JavaScript errors. Although it can’t prevent the crash, it lets you know that something’s gone wrong before the user report arrives. Here’s an example App.js: import React from "react"; export default () => { const data = null; return data.map((val, key) => { h1 key={key}>{val}h1>; }); }; This code is broken— data is set to null, so the map property will be undefined. We try to call data.map() regardless so that the app will crash. You should see an issue show up in Sentry. Sentry issues include as much data about the error as possible. You can see the page URL as well as information about the user’s device. Sentry will automatically combine duplicate issues together. This helps you see whether an event was a one-off or a regular occurrence that’s impacting multiple users. Sentry automatically fetches JavaScript source maps when they’re available. If you’re using create-react-app, source maps are automatically generated by npm run build. Make sure that you copy them to your web server so that Sentry can find them. You’ll see pretty stack traces from the original source code instead of the obfuscated stack produced by the minified build output. You can mark Sentry errors as Resolved or Ignored once they’ve been dealt with. You’ll find these buttons below the issue’s title and on the Issues overview page. Use Resolved once you’re confident that an issue has been fixed. Ignored is for cases where you don’t intend to address the root cause. In React sites, this might be the case for errors caused by old browser versions. Error Boundaries React error boundaries let you render a fallback UI when an error is thrown within a component. Sentry provides its own error boundary wrapper. This renders a fallback UI and logs the caught error to Sentry. import * as Sentry from "sentry"; export default () => { const data = null; return ( Sentry.ErrorBoundary fallback={h1>Something went wrong.h1>}> { data.map((val, key) => { h1 key={key}>{val}h1>; }); } Sentry.ErrorBoundary> ); }; Now, you can display a warning to users when an error occurs. You’ll still receive the error report in your Sentry project. Adding GitLab Integration There are two sides to integrating GitLab and Sentry. First, GitLab projects have an “Error Tracking” feature that displays your Sentry error list. You can mark errors as Resolved or Ignored from within GitLab. The second part involves connecting Sentry to GitLab. This lets Sentry automatically create GitLab issues when a new error is logged. Let’s look at GitLab’s Error Tracking screen first. You’ll need to create a Sentry API key. Click your username in the top left of your Sentry UI, and then the API Keys in the menu. Click “Create New Token” in the top-right corner. Add the following token scopes: alerts:read alerts:write event:admin event:read event:write project:read This allows GitLab to read and update your Sentry errors. Next, head to your GitLab project. Click Settings in the side menu, and then Operations. Expand the “Error tracking” section. Paste your Sentry authentication token into the “Auth Token” field and press “Connect.” If you’re using a self-hosted Sentry instance, you’ll also need to adjust the “Sentry API URI” field to match your server’s URI. The “Project” dropdown will populate with a list of your Sentry projects. Select the correct project and press “Save changes.” You’re now ready to use Error Tracking in GitLab. Click Operations> Error Tracking in the left sidebar. You’ll see your Sentry error list. It’s filtered to Unresolved issues by default. This can be changed using the dropdowns in the top-right corner. Click an error to see its detailed stack trace without leaving GitLab. There are buttons to ignore, resolve, and convert to a GitLab issue. Once you’ve opened a GitLab issue, you can assign that item to a team member so that the bug gets resolved. Now, you can add the second integration component—a link from Sentry back to GitLab. Click Settings in your Sentry sidebar, and then Integrations. Find GitLab in the list and click the purple “Add Installation” button in the top-right corner. Click “Next” to see the setup information. Back on GitLab, click your user icon in the top-right corner, followed by “Preferences.” Click “Applications” in the left side menu and add a new application. Use the details shown by Sentry in the installation setup pop-up. GitLab will display an Application ID and Secret Key. Return to the Sentry pop-up and enter these values. Add your GitLab server URL ( gitlab.com for GitLab SaaS) and enter the relative URL path to your GitLab group (e.g. my-group). The integration doesn’t work with personal projects. Click the purple Submit button to create the integration. Sentry will now be able to display GitLab information next to your errors. This includes the commit that introduced the error, and stack traces that link back to GitLab files. Sentry users on paid plans can associate GitLab and Sentry issues with each other. Disabling Sentry in Development You won’t necessarily want to use Sentry when running your app locally in development. Don’t call Sentry.init() if you want to run with Sentry disabled. You can check for the presence of a local environment variable and disable Sentry if it’s set. if (process.env.NODE_ENV === "production") { Sentry.init({ dsn: "my-dsn" }); } NODE_ENV is set automatically by create-react-app. Production builds hardcode the variable to production. You can use this to selectively enable Sentry. Enabling Performance Profiling Sentry can also profile your app’s browser performance. Although this isn’t the main focus of this article, you can set up tracing with a few extra lines in your Sentry library initialization: npm install @sentry/tracing import {Integrations} from "@sentry/tracing"; Sentry.init({ dsn: "my-dsn", integrations: [new Integrations.BrowserTracing()], tracesSampleRate: 1.0 }); Now, you’ll be able to see performance data in your Sentry project. This can help you identify slow-running code in production. Conclusion Sentry lets you find and fix errors before users report them. You can get real-time alerts as problems arise in production. Stack traces and browser data are displayed inline in each issue, giving you an immediate starting point for resolution. Combining Sentry with GitLab provides even tighter integration with the software development process. If you’re already using GitLab for project management, adding the Sentry integration lets you manage alerts within GitLab and create GitLab issues for new Sentry errors.
https://ihomenews.com/technology-news-how-to-use-sentry-and-gitlab-to-capture-react-errors/
CC-MAIN-2022-05
en
refinedweb
Opened 8 years ago Closed 8 years ago #2650 closed Bug (Fixed) _FileListToArrayRec returns an extra row in array with number of Files/Folders returned Description Simple script: #include <File.au3> #include <Array.au3> $f = "D:\Torrents\DP_WLAN_12.03_NT6\" $aFileList =_FileListToArrayRec ($f, "*.inf", 1+4+8, 1, 1, 2 ) _ArrayDisplay($aFileList) This returns: Row|Col 0 [0]|133 [1]|133 [2]|D:\Torrents\DP_WLAN_12.03_NT6\x64\All\W\Atheros\1\netathrx.inf [3]|D:\Torrents\DP_WLAN_12.03_NT6\x64\All\W\Atheros\2\netathrx.inf [4]|D:\Torrents\DP_WLAN_12.03_NT6\x64\All\W\Broadcom\1\bcmwl6.inf [5]|D:\Torrents\DP_WLAN_12.03_NT6\x64\All\W\Broadcom\2\bcmwl6.inf [6]|D:\Torrents\DP_WLAN_12.03_NT6\x64\All\W\Broadcom\3\bcmwl6.inf ...... Note row 1, it's a duplicate of row 0. Without sorting ($f, "*.inf", 1+4+8, 1, 0, 2 ) result is fine. Result of dir /s /b is here: Contents of DP_WLAN_12.03_NT6 are these two files extracted in it: Tested with _FileListToArrayRec from the latest beta 3.3.11.3- same result. Attachments (0) Change History (5) comment:1 Changed 8 years ago by Melba23 comment:2 Changed 8 years ago by guinness I am not getting any issue either. comment:3 Changed 8 years ago by itaushanov@… You are right, issue seems fixed in 3.3.11.3. comment:4 Changed 8 years ago by Melba23 Good. M23 comment:5 Changed 8 years ago by Melba. Using that file structure I can reproduce the problem with 3.3.10.2, but 3.3.11.3 gives me the correct result as the bug which caused the problem was fixed (by me) in revision 9156. I have just downloaded the 3.3.11.3 zip and the fix is definitely in place in File.au3 so are you sure that you are actually running 3.3.11.3? M23
https://www.autoitscript.com/trac/autoit/ticket/2650
CC-MAIN-2022-05
en
refinedweb
Python Connector Libraries for SAP ERP Data Connectivity. Integrate SAP ERP with popular Python tools like Pandas, SQLAlchemy, Dash & petl. Easy-to-use Python Database API (DB-API) Modules connect SAP data with Python and any Python-based applications. Features - SQL-92 access to SAP R/3, NetWeaver, and ERP / ECC 6.0 data - Compatible with distributed SAP systems and dedicated or custom application servers - Connect to live SAP NetWeaver SAP . - Write SQL, get SAP NetWeaver data. Access SAP through standard Python Database Connectivity. - Integration with popular Python tools like Pandas, SQLAlchemy, Dash & petl. - Simple command-line based SAP data exploration with easy access to SAP RFCs. - Full Unicode support for data, parameter, & metadata. CData Python Connectors in Action! Watch the video overview for a first hand-look at the powerful data integration capabilities included in the CData Python Connectors.WATCH THE PYTHON CONNECTOR VIDEO OVERVIEW Python Connectivity with SAP NetWeaver Full-featured and consistent SQL access to any supported data source through Python - Universal Python SAP Connectivity Easily connect to SAP SAP SAP Connector includes a library of 50 plus functions that can manipulate column values into the desired result. Popular examples include Regex, JSON, and XML processing functions. - Collaborative Query Processing Our Python Connector enhances the capabilities of SAP with additional client-side processing, when needed, to enable analytic summaries of data such as SUM, AVG, MAX, MIN, etc. - Easily Customizable and Configurable The data model exposed by our SAP SAP with Python CData Python Connectors leverage the Database API (DB-API) interface to make it easy to work with SAP from a wide range of standard Python data tools. Connecting to and working with your data in Python follows a basic pattern, regardless of data source: - Configure the connection properties to SAP - Query SAP to retrieve or update data - Connect your SAP data with Python data tools. Connecting to SAP in Python To connect to your data from Python, import the extension and create a connection: import cdata.saperp as mod conn = mod.connect("User=user@domain.com; Password=password;") #Create cursor and iterate over results cur = conn.cursor() cur.execute("SELECT * FROM MARA") rs = cur.fetchall() for row in rs: print(row) Once you import the extension, you can work with all of your enterprise data using the python modules and toolkits that you already know and love, quickly building apps that help you drive business. Visualize SAP Data with pandas The data-centric interfaces of the SAP Python Connector make it easy to integrate with popular tools like pandas and SQLAlchemy to visualize data in real-time. engine = create_engine("saperp///Password=password&User=user") df = pandas.read_sql("SELECT * FROM MARA", engine) df.plot() plt.show()
https://www.cdata.com/drivers/sap/python/
CC-MAIN-2022-05
en
refinedweb
import mplhep.pyplot as pltwhich would work just like matplotlib .pyplotexcept it would include our histplot functions ?
https://gitter.im/HSF/mpl-hep?at=5ca389f70aad635019180ed5
CC-MAIN-2022-05
en
refinedweb
React State and Props In this lesson we’re going to explore React State and Props, but just before we do, let’s perform a small refactor on our Tweet List component. We’ve currently got a little duplication in our UI, where we have two hardcoded tweets: <div className="card mb-2"> <p className="card-body"> This is a tweet </p> </div> Knowing React treats everything as components, it seems logical we could take this “Tweet” markup and turn it into its own component and we can! Create a new javascript file and call it Tweet.js, then add the standard React component markup we saw earlier… import React from 'react'; export default class Tweet extends React.Component { render() { return () } } Now cut this html from List.js… <div className="card mb-2"> <p className="card-body"> This is a tweet </p> </div> And go ahead and paste it into the render method in Tweet.js… You should end up with this in Tweet.js: import React from 'react'; export default class Tweet extends React.Component { render() { return ( <div className="card mb-2"> <p className="card-body"> This is a tweet </p> </div> ) } } And finally, back in List.js, we can render our new Tweet component as many times as we like… import React from 'react'; import Tweet from './Tweet' export default class List extends React.Component { render() { return ( <> <h3>Tweets</h3> <Tweet /> <Tweet /> </>); } } Nice, we have two tweets. But I’m guessing we don’t want to keep modifying this markup every time we want to post a new tweet! Let’s make this a little more dynamic. What we really want is a way to loop over a collection of tweets and render our Tweet component for each one. UI state in your components To this end we can use “state” in our React components. State takes the form of a javascript object which is private to (and controlled by) a component. This makes it a great place to store any data you want your UI to “react” to. Want to show a list of people? Add an array of “person” javascript objects to state. Want to show a list of tweets? Add an array of “tweet” objects instead… Let’s give it a go; head over to List.js and add this state to the top of the class (just before the render method)… export default class List extends React.Component { state = { tweets: [ "One tweet", "Two tweets", "Three tweets", "Four" ] } // existing render method render() { In this case tweets is an array of strings. It’s worth noting this could just as easily be an array of objects… state = { tweets: [ { contents: "One tweet" }, { contents: "Two tweets" }, { contents: "Three tweets" }, { contents: "Four" }, ] } But for now, we’ll stick to our list of strings. Now to render our Tweet component for each item in the Tweets array. Replace the existing render method in List.js with this one: render() { return ( <> <h3>Tweets</h3> { this.state.tweets.map(tweet => <Tweet />) } </>); } Here we’re using map, which is a javascript function for looping over items in an array and doing something with them. In this case we’re using it to loop over our array of tweets (strings) in order to render our Tweet component for each one. Props for passing data between components Run this in the browser and you’ll see four identical Tweet cards, but what about the text we want each one to display? (“One tweet”, “Two tweets” etc.) To show each string from our array we need a way to pass it along to our Tweet component. Something like this… render() { return ( <> <h3>Tweets</h3> {this.state.tweets.map(tweet => <Tweet text={tweet}/>) } </>); } Here we’ve “forwarded” the array item, aliased as tweet (a string in our case) to our Tweet component via a property called text. This is very similar to lambdas in C#. If you’ve ever written x=>x. in C#, this is pretty much the same thing, so I could have called tweet => anything I fancied and it would still work. But our Tweet component still hasn’t the foggiest what to do with this text property! So let’s rectify that… Head over to Tweet.js and replace the hardcoded “This is a tweet” with {this.props.text}. render() { return ( <div className="card mb-2"> <p className="card-body"> {this.props.text} </p> </div> ) } React gives us access to any properties set on our components, using the props object. In this case we named our property text and so we can access this in our Tweet component via this.props.text. Check it out in the browser now and you should see four distinctly different tweets! Next time we’ll look at populating our tweets list with data from the server (our API).
https://jonhilton.net/react/state/
CC-MAIN-2022-05
en
refinedweb
A. 1. Why Unit Tests Anyway? Our main focus when writing software is building new features, fixing bugs. Of course, we need to test what we built, but we get the most joyful moment when our new developed feature works. Next step is to write unit tests… But, we already know it is working, so why spend much effort not breaking anything when changing the code (assuming the unit tests are well written and provide enough code coverage). It is therefore also common practice to run the unit tests as part of your CI/CD pipeline. When you do not like writing unit tests afterwards, you can also consider writing your unit tests first, let 🙂 . 2. Create Your First Unit Test We will build upon the sources of the Jira time report generator. We are using Python 3.7 and PyCharm as IDE. First, let’s create a test directory and right-click the directory in PyCharm. Choose New - Python File and Python unit test. This creates the following default file: import unittest class MyTestCase(unittest.TestCase): def test_something(self): self.assertEqual(True, False) if __name__ == '__main__': unittest.main() Running this unit test obviously fails (True does not equals False), but we do have set up the basics for writing our own unit tests now. 3. Mocking a Rest API We want to unit test the get_updated_issues function and this provides us a first challenge: the get_updated_issues function contains a call to the Jira Rest API. We do not want our unit test to be dependent of a third party service and therefore we need a way to mock the Rest API. There are several options to mock a Rest API, but we will make use of the requests-mock Python library which fits our needs. Install the requests-mock Python library: pip install requests_mock 3.1 Test Single Page Response The get_updated_issues function will request the issues which are updated in a certain time period. In our unit test, we will verify the behavior when one page with results is retrieved (the Rest API supports pagination, but that is something for a next unit test):. In line 5, we define the expected result with variable expected_result when the function get_updated_issues returns. At lines 11 up. 3.2 which. 3.3 Test Failed The unit tests above all pass. But do they fail when something is wrong? We can therefore change e.g. the following line in the get_updated_issues function: issues_json.extend(response_json['issues']) Change it with:: AssertionError: Lists differ 3.4 Test Multiple URI’s The Jira work logs are to be retrieved per issue. We therefore")) 4.. Your Way Of Explanation was good. You add some valuable points. Here I am sharing some information about python. Python software engineers are called as Pythonists. Along these lines, trying Pythonists, we should plunge into an ocean of learning and gain proficiency with the significant highlights of Python programming language. Highlights of Python Programming Language: Coherence: Python has a basic and exquisite sentence structure. Lucidness keeps up a huge number of lines of code and you compose less code when contrasted with other programming dialects like C, Java, or C++. Free and Open Source: Python is unreservedly accessible on practically all mainstream Operating Systems. On the off chance that you are utilizing Mac OS, Windows, or any Unix-based OS, for example, Linux or Ubuntu you have a Python mediator in your Operating System as a matter of course. This is only one language that has power over any other language. It is called as All in one language. As python is much simpler and easy to code language than any other. python training institutes in Hyderabad.
https://mydeveloperplanet.com/2020/03/11/how-to-mock-a-rest-api-in-python/?like_comment=19758&_wpnonce=5226efc264
CC-MAIN-2022-05
en
refinedweb
CreateAttestationStatementCommand ClassNamespace: Yubico.YubiKey.Piv.Commands Assembly: Yubico.YubiKey.dll Build an attestation statement for a private key in a specified slot. C# public sealed class CreateAttestationStatementCommand : Object, IYubiKeyCommand<CreateAttestationStatementResponse> Inheritance System.Object CreateAttestationStatementCommand Implements Remarks An attestation statement is an X.509 certificate. It verifies that a private key has been generated by the YubiKey. If the private key in a slot was imported, this command will not work. The create attestation statement command is available on YubiKey version 4.3 and later. The partner Response class is CreateAttestationStatementResponse. It is possible to build attestation statements for keys in slots 9A, 9C, 9D, and 9E. The certificate created will conatin the public key partner to the private key in the cert, along with the YubiKey's serial number. The cert will be signed by the attestation key, the private key in slot F9. Example: using System.Security.Cryptography.X509Certificates; IYubiKeyConnection connection = key.Connect(YubiKeyApplication.Piv); var createAttestationStatementCommand = new CreateAttestationStatementCommand (0x9A); CreateAttestationStatementResponse createAttestationStatementResponse = connection.SendCommand(createAttestationStatementCommand); if (createAttestationStatementResponse.Status != ResponseStatus.Success) { // Handle error } X509Certificate2 attestationStatement = createAttestationStatementResponse.GetData();
https://docs.yubico.com/yesdk/yubikey-api/Yubico.YubiKey.Piv.Commands.CreateAttestationStatementCommand.html
CC-MAIN-2022-05
en
refinedweb
SPH-DEM Examples Poiseuille FlowThe Poiseuille flow is used to build the tutorial of the SPH-DEM coupling. This laminar flow is able to show that the velocity profile and the tangential force given on the walls correspond to the results produced by the analytical expressions. The whole simulation is described in the following code, which will be explained in detail: // MechSys #include <mechsys/sph/Domain.h> #include <iostream> #include <fstream> using std::cout; using std::endl; int main(int argc, char **argv) try { size_t Nproc = omp_get_max_threads(); if (argc>1) Nproc = atoi(argv[1]); SPH::Domain dom; dom.Nproc = Nproc; dom.Dimension = 2; dom.Scheme = 1; dom.Gravity = 1.e-4,0.0,0.0; dom.KernelType = 0; dom.GradientType= 0; dom.VisEq = 0; dom.BC.Periodic[0] = true; dom.XSPH = 0.5; dom.DeltaSPH = 0.15; //Simulation time double Tf = 200.0; int numfiles = 200; double dtoutput = Tf/numfiles; //size of domain and parameters double H = 0.1; double L = H; int n = 60; double dx = H/n; double h = dx*1.3; dom.InitialDist = dx; double rho = 1000.0; double Mu = 1.0e-1; double Cs = 0.07; double P0 = 0.0; double CFL = 0.1; double timestep = (CFL*h/(Cs)); double Re = (rho*H*1.23e-3)/Mu; //////////////////////////////////////// DECLARING SPH PARTICLES /////////////////////////////////////// dom.AddBoxLength(/*Tag*/1, /*Position*/Vec3_t (0.0, 0.0, 0.0), /*Length*/L, /*Height*/H, /*Width*/0.0, /*Radius*/dx/2.0, /*Density*/rho, /*Smoothing*/h, /*Packing*/1, /*Rotation*/0, /*Random*/false, /*Fixed*/false); for (size_t i=0; i <dom.Particles.Size(); i++){ dom.Particles[i]->Material = 1; dom.Particles[i]->Constitutive= 11; dom.Particles[i]->Mu = Mu; dom.Particles[i]->PresEq = 0; dom.Particles[i]->P0 = P0; dom.Particles[i]->Cs = Cs; dom.Particles[i]->Alpha = 0.05; dom.Particles[i]->Beta = 0.05; dom.Particles[i]->TI = 0.3; dom.Particles[i]->TIn = 4.0; dom.Particles[i]->TIInitDist = dx; dom.Particles[i]->Shepard = true; } //////////////////////////////////////// DEM BOUNDARY CONDITIONS /////////////////////////////////////// dom.AddSegment(/*Tag*/ -2,/*PositionL*/ Vec3_t(-0.1*L,H,0.0),/*PositionR*/Vec3_t(1.1*L,H,0.0),/*Thickness*/dx,/*Density*/rho); dom.AddSegment(/*Tag*/ -1,/*PositionA*/ Vec3_t(-0.1*L,0.0,0.0),/*PositionB*/Vec3_t(1.1*L,0.0,0.0),/*Thickness*/dx,/*Density*/rho); for (size_t j = 0;j<dom.DEMParticles.Size();j++){ dom.DEMParticles[j]->FixVeloc(); dom.DEMParticles[j]->Props.Mu = 0.0; dom.DEMParticles[j]->Props.eps = 0.0;; } } cout << "Cs = " << Cs << endl; cout << "h = " << h << endl; cout << "dx = " << dx << endl; cout << "Re = " << Re << endl; ////////////////////////////////////////// SOLVING THE SYSTEM ////////////////////////////////////////// dom.Solve(/*tf*/Tf,/*dt*/timestep,/*dtOut*/dtoutput,"test06",numfiles); return 0; } MECHSYS_CATCH Initialisation MechSys/SPH-DEM simulations are prepared with a C++ source code. The first step is to include the required MechSys libraries. In this case, the file domain.h contains the SPH domain definitions: #include <mechsys/sph/domain.h> Simulation script structure The simulation script structure is: #include ... int main(int argc, char **argv) try { ... } MECHSYS_CATCH After including the required MechSys' libraries (and C++ standard libraries, if necessary), the main function is declared. For MechSys, it is essential to add the try keyword just after the int main() and before the first opening brace {. Then, after the last closing brace, the MECHSYS_CATCH is added. Declaring the SPH Domain The following line returns an upper bound on the number of threads that could be used: size_t Nproc = omp_get_max_threads(); If the first argument is introduced by following the execution command, the number of threads will be reassigned: if (argc>1) Nproc = atoi(argv[1]); The line, SPH::Domain dom; declares the container dom of the class SPH::Domain. This class defines the SPH universe and has some functions to include SPH particles and DEM particles, some of which will be used in this tutorial. The SPH model has many options to solve the equations. Thus, once the domain is named (e.g., dom), the functions and their options are selected using such an extension. For instance, the dimensionality of the problem can be selected using an integer, 2 or 3: dom.Dimension = 2; There are two explicit schemes to solve the temporal derivatives: - 0 → Modified Verlet (Ghadimi, 2012) - 1 → Leapfrog (Braune & Lewiner, 2013) And it can be called as follows: dom.Scheme = 1; The gravity is a vector defined in the domain, which directions are x, y and z. dom.Gravity = 1.e-4,0.0,0.0; Many kernel functions can be found in the literature. Mechsys has three options (Korzani et al., 2017): - 0 → Cubic kernel - 1 → Quintic - 2 → Quintic Spline dom.KernelType = 0; Remark: The option called Quintic is known as Wendland C2 or Wendland Quintic kernel function. Wendland kernel avoids pairing (or clumping) instability that occurs where there are more neighbouring SPH particles in the influence domain that a kernel function can accommodate (Dehnen & Aly, 2012; Bui & Nguyen, 2020). Two options are available to discretise the gradient of the pressure or the stress tensor depending on the form of the density as a denominator (Monaghan, 2005): - 0 → Squared density - 1 → Multiplied density dom.GradientType = 0; If the system to be solved is a Newtonian fluid, there are four options to discretise the second order derivative term: - 0 → Morris et al. (1997). - 1 → Shao & Lo (2003). - 2 → Real viscosity for incompressible fluids. - 3 → Real viscosity for compressible fluids (Takeda et al., 1994). Then, it can be chosen as: dom.VisEq = 0; In this case, periodic boundary conditions are used. To indicate the direction in which the periodic BC will be activated (x=0, y=1 and z=2) an integer is assigned as shown in the following command: dom.BC.Periodic[0] = true; Thus, option 0 is indicating that the periodic BC is on the x-direction. Also, it is possible to use some correction expressions to improve the solution given by SPH. XSPH is a widely implemented term to correct the position of the particles. This can be activated by calling the following function and assigning a value between 0 and 1. A typical value is 0.5: dom.XSPH = 0.5; The basic SPH structure produces a checkerboard pattern so that a filter to smooth the pressure field is necessary. Thus, Delta-SPH is a diffusive term added in the mass conservation equation that can be activated by calling the following function and assigning a value between 0 and 1, depending on the case. dom.DeltaSPH = 0.15; Then Tf defines the final time of the simulation, numfiles the number of frames to produce for visualization and analysis and dtoutput defines the time-step to print the files, but it is not related to the real time-step and stability condition. double Tf = 200.0; int numfiles = 200; double dtoutput = Tf/numfiles; Some parameters are defined to set up the size of the problem. Thus, height H and length L of a box that will contain the SPH particles are defined. Also, the number of particles n and the spacing among them dx is defined as well as the smoothing length double h = dx*1.3. double H = 0.1; double L = H; int n = 60; double dx = H/n; double h = dx*1.3; The value 1.3 is a scaling factor of the spread of the smoothing function, and it can be from 0.6 to 2.0; however, values between 1.1 and 1.3 are highly recommended. Also, it is necessary to assign the distance between points using the following function: dom.InitialDist = dx; Then, some of the variables or parameters that can influence the time-step and the stability are defined. Density rho, dynamic viscosity Mu, speed of the sound Cs, a background pressure P0, and the stability condition directly CFL : double rho = 1000.0; double Mu = 1.0e-1; double Cs = 0.07; double P0 = 0.0; double CFL = 0.1; double timestep = (CFL*h/(Cs)); The time-step is defined as a function of the CFL that typically involves choosing a value lower than 0.1. The commonly chosen sound speed for simulations differs from the physical value to make the solution terminable without affecting the results; otherwise, the simulation may take too much computation time. Morris (1997) shows some definitions regarding the speed of the sound and stability condition. On the other hand, Poiseuille case requires that the regime of the flow be laminar; thus, it is verified by the Reynolds number as follows: double Re = (rho*H*1.23e-3)/Mu; Declaring the SPH Particles A function that will introduce the SPH particles as an array into the domain is called. Three functions can be used for such a purpose; however, AddBoxLength is implemented in this example. Such a function requires some inputs: dom.AddBoxLength(/*Tag*/1, /*Position*/Vec3_t (0.0, 0.0, 0.0), /*Length*/L, /*Height*/H, /*Width*/0.0, /*Radius*/dx/2.0, /*Density*/rho, /*Smoothing*/h, /*Packing*/1, /*Rotation*/0, /*Random*/false, /*Fixed*/false); The first parameter is a "Tag" to distinguish different subgroups of particles. The vector Vec3_t(0.0, 0.0, 0.0) indicates the origin of the box with the SPH particles, which coordinates can be seen as Vec3_t(Left, Bottom, Front). Then, "Length", "Height", and "Width" must be defined by double type numbers. The "Radius" is typically defined as dx/2. Then, two parameters are called "Density" and "smoothing length" which were defined above. "Packing": this option gets an integer value to define particles packing arrangement. "Rotation": this option gets an integer value to define the direction in Hexagonal Close Packing. "Random": it is a boolean variable to add randomness in the position of each particle proportional to its radius. "Fixed": it is a boolean variable for defining if the particle is fixed or free, which is useful for boundary conditions. Remark: rectangle and cube are the primary shapes discretised by the SPH points, in 2D and 3D, respectively. Then, the rectangle and cube can be reshaped by deleting some particles placed in unwanted zones using basic functions. For example, the equation of the straight line, plane, circle, sphere, etc. It can be done by changing the initial tag of the particles through the variable called dom.Particles[i]->ID = .... Then, the particles with that new ID can be eliminated using a command from the code called dom.DelParticles(ID). A clear example of this procedure can be found in a case called "test08.cpp" content into the folder "tsph" from Mechsys. After calling the function to create the SPH points, it is necessary to assign the parameters to each particle that define the type and behaviour of the simulated material. This will be done by using a loop for, which structure is written as: for (size_t i=0; i <dom.Particles.Size(); i++){ ... } It is necessary to use the following structure to assign or define certain properties: dom.Particles[i]->parameter = value. Thus, the first parameter refers to the type of material, where there are three options that can be selected by an integer: - 1 → Fluid - 2 → Solid - 3 → Soil dom.Particles[a]->Material = 1; There are also some options to choose the constitutive model. There are five options for fluids, one for solids and two for soils: - 11 → Linear (Newton, 1687). - 12 → Bingham regularised (Papanastasiou, 1987). - 13 → Bingham-Bilinear (Imran et al., 2001). - 14 → Cross model (Shao & Lo 2003). - 15 → Herschel-Bulkley (Ké & Turcotte, 1980). - 2 → Elastic Perfect-Plastic with Von-Mises failure criteria. - 31 → Elastic Perfect-Plastic with Drucker-Praguer failure criteria (Bui et al., 2008). - 32 → Hypoplastic model (Peng et al., 2015). Then, the selection is made by the following command: dom.Particles[a]->Constitutive = 11; Once the constitutive model for fluid is chosen, some parameters must be modified by a double value, the dynamic viscosity Mu for the Newtonian fluid. When the fluid is non-Newtonian, other parameters must be modified, such as reference dynamic viscosity MuRef and the yield stress T0, which are used in all the non-Newtonian fluid models. The regularisation parameter m used in Bingham and Herschel-Bulkley models, which value is 300.0 by default; and time parameter t0mu used in Herschel-Bulkley. Thus, following the syntaxis here used, the parameter can be modified depending on the selected model as shown below: dom.Particles[a]->Mu = 1.0e-3; dom.Particles[a]->MuRef = 2.0; dom.Particles[a]->T0 = 5.0; dom.Particles[a]->m = 500.0; dom.Particles[a]->t0mu = 5.0e-4; Such values depend on the experimental measurement. For further details regarding the models, review the papers cited above. Since the system is assumed to be weakly compressible, an equation of state (EOS) must be used, and three options are given. - 0 → Nearly incompressible fluids (Cole, 1965, p. 38-44). - 1 → Artificial EOS (Morris et al., 2000). - 2 → (Morris et al., 1997). Thus, the EOS is selected as: dom.Particles[a]->PresEq = 0; Also, EOS 0 and 1 has a background pressure that is 0.0 by default, and can be modified using the following command: dom.Particles[a]->P0 = P0; The speed of the sound has an essential role since it influences the stability of the solution and the CPU time. Although a value is assigned in this example, some expressions can be found in the literature to compute the initial Cs, as shown in Morris et al., (1997). Typically, the value used is not the physical one. This is done to obtain results in a reasonable computational time, ensuring that it will not affect the quality of the solution. dom.Particles[a]->Cs = Cs; In order to diminish excessive oscillations caused by the shock fronts, an artificial viscous term is introduced. The filter can be used by the following commands: dom.Particles[a]->Alpha = 0.1; dom.Particles[a]->Beta = 0.1; Alpha and Beta are real numbers whose values depend on the material and case. Their value can be between 0.0 and 1.0. To activate this viscosity, either Alpha or Beta should be greater than zero. It has been found that 1.0 is an appropriate value for Alpha and Beta when simulating soils and close to 0.05 for fluid simulations. SPH particles are prone to agglomerate in an unphysical way when particles have negative pressure coming from the equation of state. Monaghan (2000) introduce a term in the momentum equation to mitigate such a problem (known as tensile instability in the literature), which works as an artificial pressure to give repulsion between particles. It can be implemented in code using the following lines: dom.Particles[a]->TI = 0.3; dom.Particles[a]->TIn = 4.0; dom.Particles[a]->TIInitDist = dx; By default, TIn is 4.0 in the code as recommended by Gray et al. (2001) for solid materials. To activate this option in the code, TI should be greater than zero. Generally, TI and TIn are recommended to be 0.3 and 4.0 for solids and fluids (Gray et al., 2001; Monaghan 2000), 0.5 and 2.55 for soils with cohesion (Bui et al., 2008). When the particle in concern is near the boundary or to the free surface, the kernel function suffers from a lack of particles. One way to correct such a lack is using the Shepard filter so that the density of the particles can be reinitialised. This filter is applied every M time-steps (usually M is on the order of 30 time-steps) (Crespo 2008). The following command must be used to activate this filter in the code: dom.Particles[a]->Shepard = true; If the number of neighbouring particles is not enough (using the number density concept), the Shepard filter will be skipped for that particle. Declaring the DEM Particles There are three kinds of DEM particles that can be used so far in the SPH-DEM coupling. Segments in 2D cases, Planes in 3D cases, and Spheres that can be implemented in both 2D and 3D cases. Two DEM segments will be inserted to set up the fixed boundary conditions of the Poiseuille case, one on the top and the other one on the bottom. The following lines are used to declare the two DEM segments: dom.AddSegment(/*Tag*/ -2,/*PositionL*/ Vec3_t(-0.1*L,H,0.0),/*PositionR*/Vec3_t(1.1*L,H,0.0),/*Thickness*/dx,/*Density*/rho); dom.AddSegment(/*Tag*/ -1,/*PositionL*/ Vec3_t(-0.1*L,0.0,0.0),/*PositionR*/ Vec3_t(1.1*L,0.0,0.0),/*Thickness*/dx,/*Density*/rho); The first parameter is a "Tag" to distinguish different subgroups of particles or even single particles. The vector Vec3_t(-0.1*L,0.0,0.0) indicates the "Position L" of the left extreme of the DEM segment with coordinates Vec3_t(x, y, z). The vector Vec3_t(1.1*L,0.0,0.0) indicates the "Position R" of the right extreme of the DEM segment, which coordinates are Vec3_t(x, y, z). "Thickness" is a number that defines the projection of the 2D segment into the third direction, z. However, this does not affect the simulation; a value of dx is recommended. Then, one more parameter, "Density", is assigned; the same density used for the SPH particles is suggested. When more materials are declared into the domain, the maximum density of one of the components is suggested to set up the boundary conditions. After declaring the DEM particles, which are segments, in this case, it is necessary to assign some parameters to each DEM particle that define their role, to say, if they are boundary particles or free-moving objects. This will be done by using a loop for, which structure is written as: for (size_t j = 0;j<dom.DEMParticles.Size();j++){ ... } To set up the DEM particles as the boundary conditions, they have to be fixed. Hence, the following command is used for such a purpose: dom.DEMParticles[j]->FixVeloc(); Some of the properties of the DEM particles can be modified using Props. For example, if the SPH particle corresponds to the soil, then a value greater than zero should be used in the coefficient of friction dom.DEMParticles->Props.Mu of the DEM particles. Nevertheless, a fluid is declared in this case so that the coefficient is zero. dom.DEMParticles[j]->Props.Mu = 0.0; SPH particles do not have a physical radius, whereas DEM particles have. Besides, DEM particles have a halo to allow SPH particles to penetrate a certain distance into such a halo but no DEM particles per se (Trujillo-Vela et al., 2020). First, the thickness of the halo is defined as zero: dom.DEMParticles[j]->Props.eps = 0.0; Then, using a loop, the thickness of the halo of each DEM particle is modified based on the distance between the SPH and DEM particles as follows:; } When the DEM segments have an inclination and the SPH particles are placed at different distances from the DEM particles, the minimum distance can be used to define the thickness of the halo. Starting the simulation The following lines were introduced in order to check the value of some variables on the terminal: cout << "Cs = " << Cs << endl; cout << "h = " << h << endl; cout << "dx = " << dx << endl; cout << "Re = " << Re << endl; The simulation can be started with the command Solve, so that: dom.Solve(/*finaltime*/Tf,/*dt*/timestep,/*dtOut*/dtoutput,"test06",numfiles); return 0; The system will evolve in this case up to Tfinal = 2.0 seconds of simulation time. The next argument is the integration time step, timestep. The smaller, the more accurate the simulation is, and the more time-consuming. The 3rd argument is the report time-step, in other words, the time span between movie frames, previously defined as dtoutput. The next argument is the name of the output files, file_name. Finally, the last argument is the number of total files numfiles. These variables were defined above. The way to execute the compiled file on the terminal is ./test_poiseuille 4 &. Where 4 indicates the number of processors when required to change it from the maximum as it was explained before. Simulation visualisation MechSys uses VisIt to visualise simulation results. A full tutorial can be found here. After following the instructions and choosing a Pseudocolor->SPHCenter->Velocity_magnitude data type to be visualised, the dynamics of the system should resemble the following video: Publications - Korzani, M. G., Galindo Torres, S., Scheuermann, A., & Williams, D. J. (2016). Smoothed particle hydrodynamics into the fluid dynamics of classical problems. In Applied Mechanics and Materials (Vol. 846, pp. 73-78). Trans Tech Publications Ltd. - Korzani, M. G., Galindo-Torres, S. A., Scheuermann, A., & Williams, D. J. (2017). Parametric study on smoothed particle hydrodynamics for accurate determination of drag coefficient for a circular cylinder. Water Science and Engineering, 10(2), 143-153. - Korzani, M. G., Galindo-Torres, S. A., Scheuermann, A., & Williams, D. J. (2018). Smoothed Particle Hydrodynamics for investigating hydraulic and mechanical behaviour of an embankment under the action of flooding and overburden loads. Computers and Geotechnics, 94, 31-45. - Korzani, M. G., Galindo-Torres, S. A., Scheuermann, A., & Williams, D. J. (2018). SPH approach for simulating hydro-mechanical processes with large deformations and variable permeabilities. Acta Geotechnica, 13(2), 303-316. - Trujillo-Vela, M. G., Galindo-Torres, S. A., Zhang, X., Ramos-Cañón, A. M., & Escobar-Vargas, J. A. (2020). Smooth particle hydrodynamics and discrete element method coupling scheme for the simulation of debris flows. Computers and Geotechnics, 125, 103669. References - Braune, L. & Lewiner, T. (2013). An initiation to sph. Rio de Janeiro: PUC-Rio, pages 1–7. - Bui, H. H., & Nguyen, G. D. Smoothed particle hydrodynamics (SPH) and its applications in geomechanics. ALERT Doctoral School 2020 Point-based numerical methods in geomechanics, 3. - Bui, H. H., Fukagawa, R., Sako, K., & Ohno, S. (2008). Lagrangian meshfree particles method (SPH) for large deformation and failure flows of geomaterial using elastic-plastic soil constitutive model. International journal for numerical and analytical methods in geomechanics, 32(12), 1537-1570. - Cole, R. H., & Weller, R. (1948). Underwater explosions. PhT, 1(6), 35. - Dehnen, W., & Aly, H. (2012). Improving convergence in smoothed particle hydrodynamics simulations without pairing instability. Monthly Notices of the Royal Astronomical Society, 425(2), 1068-1082. - Ghadimi, P., Farsi, M., and Dashtimanesh, A. (2012). Study of various numerical aspects of 3d-sph for simulation of the dam break problem. Journal of the Brazilian Society of Mechanical Sciences and Engineering, 34(4):486–491. - Gray, J. P., Monaghan, J. J., & Swift, R. P. (2001). SPH elastic dynamics. Computer methods in applied mechanics and engineering, 190(49-50), 6641-6662. - Ké, D. D., & Turcotte, G. (1980). Viscosity of biomaterials. Chemical Engineering Communications, 6(4-5), 273-282. - Monaghan, J. J. (2000). Sph without a tensile instability. Journal of Computational Physics, 159(2):290–311. - Monaghan, J. J. (2005). Smoothed particle hydrodynamics. Reports on progress in physics, 68(8):1703–1759. - Morris, J. P., Fox, P. J., & Zhu, Y. (1997). Modeling low Reynolds number incompressible flows using SPH. Journal of computational physics, 136(1), 214-226. - Morris, J. P. (2000). Simulating surface tension with smoothed particle hydrodynamics. International journal for numerical methods in fluids, 33(3), 333-353. - Imran, J., Parker, G., Locat, J., & Lee, H. (2001). 1D numerical model of muddy subaqueous and subaerial debris flows. Journal of hydraulic engineering, 127(11), 959-968. - Papanastasiou, T. C. (1987). Flows of materials with yield. Journal of Rheology, 31(5), 385-404. - Peng, C., Wu, W., Yu, H. S., & Wang, C. (2015). A SPH approach for large deformation analysis with hypoplastic constitutive model. Acta Geotechnica, 10(6), 703-717. - Shao, S., & Lo, E. Y. (2003). Incompressible SPH method for simulating Newtonian and non-Newtonian flows with a free surface. Advances in water resources, 26(7), 787-800. - Takeda, H., Miyama, S. M., & Sekiya, M. (1994). Numerical simulation of viscous flow by smoothed particle hydrodynamics. Progress of theoretical physics, 92(5), 939-960.
https://mechsys.nongnu.org/SPHexample01.html
CC-MAIN-2022-05
en
refinedweb
Intelligently pretty-print HTML/XML with inline tags. Project description prettierfier While I love Beautiful Soup as a parser, BeautifulSoup.prettify() adds a linebreak between every tag. This results in unwanted white space between tags that should be inline, like <sup>, <a>, <span>, etc: <p>Introducing GitHub<sup>®</sup></p> Introducing GitHub® vs. <p> Introducing GitHub <sup> ® </sup> </p> Introducing GitHub ® This module parses HTML/XML as a raw string to more intelligently format tags. Installation You have two options: pip install prettierfierin your command line - Copy the contents of prettierfier.py to your own module. This module is built with just the Python Standard Library and contains no external third-party dependencies. Functions prettify_xml(xml_string, indent=2, debug=False) - Can be used with no prior formatting. Args: xml_string (str): XML text to prettify. indent (int, optional): Set size of XML tag indents. Test-only args: debug (bool, optional): Show results of each regexp application. Returns: str: Prettified XML. prettify_html(html_string, debug=False) - Originally created to process BeautifulSoup.prettify()output. - Does not add or remove regular line breaks. Can be used with regular HTML if it already has the newlines you want to keep. Args: html_string (str): HTML string to parse. Test-only args: debug (bool, optional): Show results of each regexp application. Returns: str: Prettified HTML. Example import prettierfier ugly_html = """<p> Introducing GitHub <sup> ® </sup> </p>""" pretty_html = prettierfier.prettify_html(ugly_html) print(pretty_html) # Output >>> <p>Introducing GitHub<sup>®</sup></p> Links Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/prettierfier/
CC-MAIN-2022-05
en
refinedweb
A Soft Introduction to React So, you've heard about this thing all the cool kids are using nowadays and you're wondering how it works? Well this is the guide for you! I'm going to gently introduce React to someone who might know some web dev stuff, but who reaches for jQuery whenever they have to do something complicated. To make this introduction as soft as possible, you can play along using codesandbox - but first, let's look at a simple thing you might want to do. By the end of the article, we should have a counter that looks something like this: 1. The old-fashioned way Say you want a button that shows an alert popup when you click it. How do you do this? The jQuery approach If you had the following HTML file, you'd need to first create the button, add an event listener to it, then insert it in the <div>. What a pain! <html><head><title>jQuery Example</title></head><body><div id="btn-goes-here"></div></body></html> Now comes the jQuery. Notice how we're just repeatedly doing stuff to the same variable? $(document).ready(function() {var button = $('<button></button>');button.text('Click Me!');button.on('click', function () {alert('You clicked the button'!);});$('#btn-goes-here').append(button);}) And it only gets worse, as nested elements get increasingly verbose and complicated. Luckily, there is another way... The React approach import React from "react";export const AlertButton = () => {return (<button onClick={() => alert("You clicked the button!")}>Click Me!</button>);}; Here's that exact button here: I'm guessing you're wondering if that's HTML in the JavaScript, and the answer is kind of. This is actually JSX, a flavor of JavaScript which lets you put HTML in the middle of JavaScript, and then this is transpiled into real JavaScript that can be ran in the browser (but that's a story for another time). More importantly, there's a big difference to the approach of the two methods. React uses a declarative programming style, and traditional JavaScript/jQuery use an imperative programming style. Declarative vs Imperative programming To understand the difference, you need to imagine you want a painting. Using imperative painting, you would just go paint it yourself - paint the background, paint the flowers etc one by one until the painting is complete. If you were to paint it declaratively, you would declare what you want your painting to look like (blue background, 3 red flowers in a vase) and then get your friend to paint it for you. In this metaphor, your friend is React - it handles all the painting for you (modifying values in the DOM, keeping track of changes) leaving you free to fulfil your artistic vision. 2. A Quick Breakdown There are 3 important concepts to understand in React - components, props, and state. Now's the time to play along! Click this link and select the React template. Components and Nesting React is based around components, which in JS terms are functions (or classes). For example, AlertButton is a function. The real power of this is that you can nest these functions. Here is how we could put the alert button on a page: import React from "react";import { AlertButton } from "./alertbutton";export const App = () => {return (<div><h1>Alert Button Example</h1><AlertButton /></div>);}; We could then nest App in something else, and nest that something else in another component. The possibilities are endless! But how can components interact with each other? Props The answer is the next fundamental part of React - props. Props (short for properties, I assume) allows you to pass data down into child components. To demonstrate this, we're going to modify <AlertButton /> so that it takes a prop specifying the message. export const AlertButton = ({ message }) => {// the props are destructured here// alternatively you could take a props parameter// and use props.message insteadreturn <button onClick={() => alert(message)}>Click Me!</button>;};export const App = () => {return (<div><h1>Alert Button Example</h1><AlertButton message="I'm a prop!" /></div>);}; You may notice that we're passing a function to the onClick property of the button. This is because normal HTML elements take some props much like they do in real HTML - such as id. There are some important differences - they all* are now in Pascal case (i.e. onClick), and the most glaring change is class is now className, since "class" is a protected keyword in JavaScript. Also, we're passing a function - this is important. You can pass anything to a component (arrays, objects, functions) and the component will just receive them as a prop. Passing functions is especially useful, as we will see when we come to talk about state in just a moment. One final thing is that you can also receive the children of a component - they are simply found in props.children. For example, if you wanted to make a wrapper component for adding a dollar sign: export const Dollars = ({ children }) => {return <p>${children}</p>;};// <Dollars>20</Dollars> -> $20 Conditional Rendering You remember how I said there's 3 important concepts? Well conditional rendering is pretty important too. The idea is for a component to return different things depending on its props and/or its state. This allows you to build components that actually do things when data changes. You can't really fit a whole if statement into the JSX, so the common solution is to use boolean expressions and the ternary operator. A quick recap - in JavaScript (and most programming languages) you can do something called short-circuit evaluations. These allow you to do very concise comparisons, because it checks the first value in the expression, and then only evaluates the second value if the first value is true (because if it's false, it wouldn't matter what the second one turns out to be). If the second value is a function, or even better a React component, then it will only be executed if the first value is true. Here, I made some examples of this in 'practice': // Okay, this component might not be 'actually useful'. Whatever.export const DayNight = ({ isDay }) => {// we're taking a boolean, isDay, and// we want to return either a sun or a moonreturn <p>{isDay ? "☀️" : "🌙"}</p>;};// Here, we only want to provide a link to the admin section// if the current user is an adminexport const Application = ({ user }) => {return (<div><NavBar>{user.isAdmin && <AdminButton />}</NavBar><PageContent user={user} /></div>);}; State State is by far the most important part of React. It's what makes it reactive, so to speak. So far, our components take in data from the top, from their props, and then they pass it down to their children. But what if we want the component contain it's own data? This is where state comes in. An easy example of using state is a counter. Here's what we could make with what we know so far: const Counter = ({ count }) => {return (<div><button>+</button><p>Count: {count}</p></div>);}; The question is, how do we link the counter to the button? A naïve approach might just be to use a normal JavaScript variable. // This doesn't work!let count = 0;const Counter = () => {// This "add" function will be ran// when you click the button - React handles// the event listening stuffconst add = () => {count += 1;};return (<div><button onClick={add}>+</button><p>Count: {count}</p></div>);}; You can try this but it won't work - the problem is, React has the concept of rendering wherein it executes the "Counter" function to get the latest result from the component. When you click the button, count will be updated, but the component will not re-render. This is because React doesn't know that the value of count has changed. In order to inform the component that the value of count has changed, we need to use the useState function. This is what is known in React terminology as a hook. const Counter = () => {const [count, setCount] = useState(0);const add = () => {setCount(count + 1);};return (<div><button onClick={add}>+</button><p>Count: {count}</p></div>);}; useState takes in a initial value and returns an array of two values, which we destructure (because it looks neat). The first value is the value itself, and the second value is a function that allows you to update the value. What makes makes this different from a plain variable is that when we setCount to a new value, React will re-render the component. This means that count is kept in sync with the HTML output at all times! React is smart, and will only re-render if the new value is different. This unlocks the enormous power of React. You can use state to keep track of things, and you can use it to make your components more dynamic. You can pass data this data to other components, and from this construct complex applications. Hooks need to obey two major rules: - Only call them at the top level, and don't call them conditionally or in loops React uses the order in which they are called in order to figure out which hook is which - Only call them in React components However, there is one more big piece of the puzzle we need to tackle. I know I said there were 3 major concepts, but sometimes life isn't fair. 3. Effects Ok, now we have stateful components. However, sometimes the code runs too much - you might have noticed if you put something in the body of the component, it will run every time the component is rendered. This is called side effects. const Counter = ({ globalCount }) => {const [count, setCount] = useState(0);console.log(count);return (<div><button onClick={() => setCount(count + 1)}>+</button><p>Count: {count}</p><p>Global count: {globalCount}</p></div>);}; This will log the count every time the component is rendered. However, imagine that we had some prop or another piece of state that is also triggering renders independantly of count. This would be annoying, since now it's logging count not just when count changes, but when anything changes! The is a solution - we can use the useEffect function. This function is similar to useState, but it's used to choose if we want to run certain code when the component is rendered. useEffect takes in a function and an array of values, and it will only run this function when the component when the values in the array change. const Counter = ({ globalCount }) => {const [count, setCount] = useState(0);useEffect(() => {console.log(count);}, [count]);return (<div><button onClick={() => setCount(count + 1)}>+</button><p>Count: {count}</p><p>Global count: {globalCount}</p></div>);}; Now, count will only be logged when the count itself changes, and not just when the component is rendered. useEffect is extremely powerful - here's a few examples: Running something once, when the component renders Simply use useEffect with an empty array. This will run the function when the component is rendered, and no other time. // `data` will be undefined until the fetch resolvesconst [data, setData] = useState();useEffect(() => {fetch("/api/data").then((res) => res.json()).then((data) => setData(data));}, []); Running stuff conditionally Say we have 2 values, and want to calculate a third value based on these two. We can use useEffect to calculate this value whenever either a or b changes. This is useful for deriving state from other state. const [a, setA] = useState(0);const [b, setB] = useState(0);const [c, setC] = useState(0);// calculate cuseEffect(() => {setC(a + b);}, [a, b]); Event listeners Something I haven't mentioned yet - if we return a function from the function we pass to useEffect, it will be called when the component is unmounted. This is useful for unsubscribing from things and cleaning up event listeners. Here, we add an event listener to window to find the width of the window and keep it updated when the window size changes. const [width, setWidth] = useState(window.innerWidth);useEffect(() => {const findWidth = () => setWidth(window.innerWidth);window.addEventListener("resize", findWidth);return () => window.removeEventListener("resize", findWidth);}, []); Note that if we have something in the dependency array that causes the effect to run, it will execute the cleanup function before it runs the effect again with the new values. 4. Custom hooks Hooks are not magical - they're just other functions! We can write new hooks - they just also need to obey the rules of hooks. This can be helpful for abstracting logic and keeping things DRY. As an example, we can rewrite our previous example that measures the width of the window to be a simple hook. // the goal: `width` is always equal to the width of the windowconst width = useWindowWidth();return <p>Your window is {width}px wide</p>; // useWindowWidth.jsconst useWindowWidth = () => {const [width, setWidth] = useState(window.innerWidth);useEffect(() => {const findWidth = () => setWidth(window.innerWidth);window.addEventListener("resize", findWidth);return () => window.removeEventListener("resize", findWidth);}, []);return width;}; We can now reuse useWindowWidth across your application! Here it is here: Your window is 0px wide Change your window size, and it'll update. 5. Conclusion React is a powerful tool, and this post should hopefully have given you a tiny taste of what you can do with it. If you're anything like me, you'll never want to go back to doing things the old way. The new React docs has a tutorial that explains things better than I ever could, so you should check it out if you want to learn more. I hope you enjoyed this post, and let me know if you have any questions or comments!
https://samuel.felixnewman.com/blog/a-soft-intro-to-react
CC-MAIN-2022-05
en
refinedweb
Kubernetes provides us a feature to port forward an application which is running in the internal network through the Kubernetes client host. By default it binds to the 127.0.0.1 and it won’t accept requests from external hosts. The syntax of port forwarding command is given below. kubectl port-forward svc/[service-name] -n [namespace] [external-port]:[internal-port] I am giving an example of port forward command below. The below command will port forward the service running in port 443 to the port 8080 in the kubectl client host. kubectl port-forward svc/argocd-server -n argocd 8080:443 The screenshot after executing the command is given below. When I try to open this URL from another machine using the client machine’s IP Address and Port, it is not working. The reason is because the 8080 port is bound to 127.0.0.1. To make it working, we have to make this port listen to 0.0.0.0. This can be achieved easily by adding an additional parameter to the command. The syntax is given below. kubectl port-forward svc/[service-name] -n [namespace] [external-port]:[internal-port] --address='0.0.0.0' Now it will accept requests from all host. The sample screenshot is given below. I hope this article is useful. Feel free to comment if you have any questions. Simple article yet very much needed; I had been struggling with this all morning. This article got me over the hump. Cheers
https://amalgjose.com/2021/06/08/how-to-configure-kubernetes-port-forward-bind-to-0-0-0-0-instead-of-default-127-0-0-1/?replytocom=6442
CC-MAIN-2022-05
en
refinedweb
DEM Examples Collision of 2 particlesThis tutorial aims to build the simulation for 2 bodies colliding in an elastic way. The user declares a sphero-cube and a sphero-tetrahedron in a collision course. The energy, liner and angular momenta can be measured to check these conservation laws. The whole simulation is described in the following code which will be explained in detail: // MechSys #include <mechsys/dem/domain.h> int main(int argc, char **argv) try { DEM::Domain dom; // add cube Vec3_t x(-10,0,0); Vec3_t v(1.,0,0); Vec3_t w(0,M_PI/5,0); dom.AddCube (/*Tag*/-1, /*Position*/x,/*SpheroRadius*/0.3,/*Length*/3.,/*Density*/1.); dom.Particles[0]->v = v; dom.Particles[0]->w = w; // add tetrahedron x = 10.0, 0 , 0; w = 0.0, 0 , M_PI/10.; v = -1.0, 0 , 0; dom.AddTetra (/*Tag*/-1, /*Position*/x,/*SpheroRadius*/0.5,/*Length*/5.,/*Density*/1.); dom.Particles[1]->v = v; dom.Particles[1]->w = w; // particle parameters Dict D; D.Set(-1,"Gn Gt Mu",0.0,0.0,0.0); dom.SetProps(D); // initial constants Vec3_t l0; Vec3_t p0; double Ek0,Ep0,E0; dom.LinearMomentum (p0); dom.AngularMomentum (l0); E0 = d.CalcEnergy (Ek0,Ep0); // solve dom.CamPos = 0.0,30.0,0.0; dom.Solve(/*tf*/30.0, /*dt*/1.0e-4, /*dtOut*/0.3, NULL, NULL, /*filekey*/"test_dynamics"); // final constants Vec3_t l1; // initial linear momentum Vec3_t p1; // initial angular momentum double Ek1,Ep1,E1; // initial energy dom.LinearMomentum (p1); dom.AngularMomentum (l1); E1 = dom.CalcEnergy (Ek1,Ep1); // check; // results if (error > tol) return 1; else return 0; } MECHSYS_CATCH Initialization MechSys/DEM simulations are prepared with a C++ source code. The first step is to include the required MechSys libraries. In this case the file domain.h contains the DEM domain definitions: #include <mechsys/dem/domain.h> Simulation script structure The simulation script structure is: #include ... int main(int argc, char **argv) try { ... } MECHSYS_CATCH After including the required MechSys' libraries (and C++ standard libraries, if necessary), the main function is declared. For MechSys, it is important to add the try keyword just after the int main() and before the first opening brace {. Then, after the last closing brace the MECHSYS_CATCH is added. Declaring the DEM Domain and Particles The line, DEM::Domain d; declares the container dom of the class DEM::Domain. This class defines the DEM universe and also has many functions to include particles, some of which will be used in this tutorial. To declare the cube, some 3D vectors are needed. The Vec3_t data type is used for fast vectorial operations: Vec3_t x(-10,0,0); Vec3_t v(1.,0,0); Vec3_t w(0,M_PI/5,0); The vectors x v w define respectively the cube center position, its linear velocity and its angular velocity. The declared angular velocity ensures that the cube will spin around the y-axis counter clockwise. Once these three vectors are declared, the cube is introduced into the domain: dom.AddCube (/*Tag*/-1, /*Position*/x,/*SpheroRadius*/0.3,/*Length*/3.,/*Density*/1.); This function creates a cube into the memory. The first argument is a Tag which will be useful later on. The second is the position vector declared above. Next is the Sphero-Radius which is the radius of the rounded corners, and is required for the collision law. Ideally this spheroradius should be large enough so the overlapping between the colliding particles is small compared to the particles dimensions. The 4th argument is the length of the cube's edges and finally the density of the cube is also declared. The method, however, initializes the particle at rest and without rotation. Therefore, to add the vector velocities, internal variables must be accessed: dom.Particles[0]->v = v; dom.Particles[0]->w = w; The object dom.Particles is an array that contains the memory pointers to all the particles created in the domain dom. To access the first particle, which is the cube, the index 0 must be introduced. Therefore dom.Particles[0] is a pointer to the cube. The internal vector variables v and w are the cube's linear and angular velocity vectors. In a similar way the tetrahedron is included at the right side of the cube moving towards it: x = 10.0, 0 , 0; w = 0.0, 0 , M_PI/10.; v = -1.0, 0 , 0; dom.AddTetra (/*Tag*/-1, /*Position*/x,/*SpheroRadius*/0.5,/*Length*/5.,/*Density*/1.); dom.Particles[1]->v = v; dom.Particles[1]->w = w; With both particles properly defined, some collision parameters must be set. MechSys uses the spheropolyhedra method to model the collision of solid objects. The collision law is defined by two spring constants Kn and Kt; two dissipative constants Gn and Gt; and a friction coefficient μ. Since this simulation deals with an elastic collision, the two dissipative constants and the friction coefficient must be set to zero. In MechSys this is achieved by the use of dictionaries as follows: Dict D; D.Set(-1,"Gn Gt Mu",0.0,0.0,0.0); dom.SetProps(D); Where the dictionary D contains the user information on the new parameters. The first argument of the function D.Set is the particle's tag. Hence different groups of particles can be associated with different collision parameters. The elastic parameters Kn and Kt (as Kn and Kt in the dictionary string) can also be included into the dictionary, however for this example we will work with their default values. Starting the simulation To check the conservation laws some values must be calculated before the simulation starts: Vec3_t l0; Vec3_t p0; double Ek0,Ep0,E0; dom.LinearMomentum (p0); dom.AngularMomentum (l0); E0 = d.CalcEnergy (Ek0,Ep0); The vectors l0 and p0 will store the linear and angular momenta given by the functions dom.LinearMomentum and dom.AngularMomentum. The three numbers Ek0, Ep0 and E0 will be respectively the total kinetic energy, the total potential energy and the total energy as given by the function dom.CalcEnergy. This variable will be kept in memory to be compared with the situation after the simulation. With the DEM universe defined, the simulation will begin. First, for future visualization, is useful to define the position of a camera: dom.CamPos = 0.0,30.0,0.0; Then the system evolves for as long as defined by the user by means of the dom.Solve: dom.Solve(/*tf*/30.0, /*dt*/1.0e-4, /*dtOut*/0.3, NULL, NULL, /*filekey*/"test_dynamics"); The system will evolve in this case up to tf = 30.0 seconds of simulation time. The next argument is the integration time step dt, the smaller the more accurate the simulation is but also the more time consuming. The 3rd argument is the report time step, in other words, the time span between movie frames. The next two arguments are pointers to functions which are not explained in this simple example. This functions may be called from inside the Solve function, and hence offer some control over the evolution of the system from outside. Finally, the last argument is a file key which is used to identify the report and animation files form this simulation. At the end of the simulation, the same variables that were measured before it are measured again: Vec3_t l1; Vec3_t p1; double Ek1,Ep1,E1; dom.LinearMomentum (p1); dom.AngularMomentum (l1); E1 = d.CalcEnergy (Ek1,Ep1); And compared with their initial values:; After compiling and running, the simulation should give this results Error in energy = 0.128839 Error in angular momentum = 0.000122599 Error in linear momentum = 8.30147e-10 Unfortunately the conservation laws are not exactly met in the DEM and therefore there is a difference between the initial and final values. As mentioned before this error can be tuned down with the integration time step Simulation visualization MechSys uses VisIt to visualize simulation results. A full tutorial can be found here. After following the instructions the dynamics of the system should resemble the following video:
https://mechsys.nongnu.org/DEMexample01.html
CC-MAIN-2022-05
en
refinedweb
Classes. Using class members Objects have members consisting of functions and data (methods and instance variables, respectively). When you call a method, you invoke it on an object: the method has access to that object’s functions and data. Use a dot ( .) to refer to an instance variable or method: var p = Point(2, 2); // Set the value of the instance variable y. p.y = 3; // Get the value of y. assert(p.y == 3); // Invoke distanceTo() on p. num distance = p.distanceTo(Point(4, 4)); Use ?. instead of . to avoid an exception when the leftmost operand is null: // If p is non-null, set its y value to 4. p?.y = 4; Using constructors You can create an object using a constructor. Constructor names can be either ClassName or ClassName.identifier. For example, the following code creates Point objects using the Point() and Point.fromJson() constructors: var p1 = Point(2, 2); var p2 = Point.fromJson({'x': 1, 'y': 2}); The following code has the same effect, but uses the optional new keyword before the constructor name: var p1 = new Point(2, 2); var p2 = new Point.fromJson({'x': 1, 'y': 2}); Version note: The new keyword became optional in Dart 2. Some classes provide constant constructors. To create a compile-time constant using a constant constructor, put the const keyword before the constructor name: var p = const ImmutablePoint(2, 2); Constructing two identical compile-time constants results in a single, canonical instance: var a = const ImmutablePoint(1, 1); var b = const ImmutablePoint(1, 1); assert(identical(a, b)); // They are the same instance! Within a constant context, you can omit the const before a constructor or literal. For example, look at this code, which creates a const map: // Lots of const keywords here. const pointAndLine = const { 'point': const [const ImmutablePoint(0, 0)], 'line': const [const ImmutablePoint(1, 10), const ImmutablePoint(-2, 11)], }; If a constant constructor is outside of a constant context and is invoked without const, it creates a non-constant object: var a = const ImmutablePoint(1, 1); // Creates a constant var b = ImmutablePoint(1, 1); // Does NOT create a constant assert(!identical(a, b)); // NOT the same instance! Version note: The constkeyword became optional within a constant context in Dart 2. Getting an object’s type To get an object’s type at runtime, you can use Object’s runtimeType property, which returns a Type object. print('The type of a is ${a.runtimeType}'); Up to here, you’ve seen how to use classes. The rest of this section shows how to implement classes.. class Point { num x; num y; } void main() { var point =only. Constructors aren’t inherited Subclasses don’t inherit constructors from their superclass. A subclass that declares no constructors has only the default (no argument, no name) constructor.). In the following example, the constructor for the Employee class calls the named constructor for its superclass, Person. class Person { String firstName; Person.fromJson(Map data) { print('in Person'); } } class Employee extends Person { // Person does not have a default constructor; you must call super.fromJson(data). Employee.fromJson(Map data) : super.fromJson(data) { print('in Employee'); } } main() { var emp = new Employee.fromJson({}); // Prints: // in Person // in Employee if (emp is Person) { // Type check emp.firstName = 'Bob'; } (emp as Person).firstName = 'Bob'; } Because the arguments to the superclass constructor are evaluated before invoking the constructor, an argument can be an expression such as a function call: class Employee extends Person { Employee() : super.fromJson(defaultData); // ··· }. The following example initializes three final fields in an initializer list. class Point { final num x; final num y; final num distanceFromOrigin; Point(x, y) : x = x, y = y, distanceFromOrigin = sqrt(x * x + y * y); } main() { var p = new Point(2, 3); print(p.distanceFromOrigin); }); } Constant constructors don’t always create constants.) { return _cache.putIfAbsent( name, () => Logger._internal(name)); } Logger._internal(this.name); void log(String msg) { if (!mute) print(msg); } } Note: Factory constructors have no access to this. Invoke a factory constructor just like you would any other constructor: var logger = Logger('UI'); logger.log('Button clicked'); Methods Methods are functions that provide behavior for an object. Instance methods Instance methods on objects can access instance variables and this. The distanceTo() method in the following sample is an example of an instance method: import 'dart:math'; class Point { num x, y; Point(this.x, this.y); num distanceTo(Point other) { var dx = x - other.x; var dy = y - other.y; return sqrt(dx * dx + dy * dy); } }... } } Abstract classes Use the abstract modifier to define an abstract class — a class that can’t be instantiated. Abstract classes are useful for defining interfaces, often with some implementation. If you want your abstract class to appear to be instantiable, define a factory constructor. Abstract classes often have abstract methods. Here’s an example of declaring an abstract class that has an abstract method: // This class is declared abstract and thus can't be instantiated. abstract class AbstractContainer { // Define constructors, fields, methods... void updateChildren(); // Abstract method. }(Person('Kathy'))); print(greetBob(Impostor())); } Here’s an example of specifying that a class implements multiple interfaces: class Point implements Comparable, Location {...} } To narrow the type of a method parameter or instance variable in code that is type safe, you can use the covariant keyword. Overridable operators You can override the operators shown in the following table. For example, if you define a Vector class, you might define a + method to add two vectors. < > <= >= - + / ~/ * % | ^ & << >> [] []= ~ == Note: You may have noticed that != is not an overridable operator. The expression e1 != e2 is just syntactic sugar for !(e1 == e2). Here’s an example of a class that overrides the + and - operators: class Vector { final int x, y; Vector(this.x, this.y); Vector operator +(Vector v) => Vector(x + v.x, y + v.y); Vector operator -(Vector v) => Vector(x - v.x, y - v.y); // Operator == and hashCode not shown. } void main() { final v = Vector(2, 3); final w = Vector(2, 2); assert(v + w == Vector(4, 5)); assert(v - w == Vector(0, 1)); } If you override ==, you should also override Object’s hashCode getter.. Adding features to a class: mixins Mixins are a way of reusing a class’s code in multiple class hierarchies. To use a mixin, use the with keyword followed by one or more mixin names. The following example shows two classes that use mixins: class Musician extends Performer with Musical { // ··· } class Maestro extends Person with Musical, Aggressive, Demented { Maestro(String maestroName) { name = maestroName; canConduct = true; } } To implement a mixin, create a class that extends Object and declares no constructors. Unless you want your mixin to be usable as a regular class, use the mixin keyword instead of class. For example: mixin Musical { bool canPlayPiano = false; bool canCompose = false; bool canConduct = false; void entertainMe() { if (canPlayPiano) { print('Playing piano'); } else if (canConduct) { print('Waving hands'); } else { print('Humming to self'); } } } To specify that only certain types can use the mixin — for example, so your mixin can invoke a method that it doesn’t define — use on to specify the required superclass: mixin MusicalPerformer on Musician { // ··· } Version note: Support for the mixin keyword was introduced in Dart 2.1. Code in earlier releases usually used abstract class instead. Class variables and methods Use the static keyword to implement class-wide variables and methods. Static variables Static variables (class variables) are useful for class-wide state and constants: class Queue { static const initialCapacity = 16; // ··· } void main() { assert(Queue.initialCapacity == 16); } Static variables aren’t initialized until they’re used. Static methods Static methods (class methods) do not operate on an instance, and thus do not have access to this. For example: class Point { num x, y; Point(this.x, this.y); static num distanceBetween(Point a, Point b) { var dx = a.x - b.x; var dy = a.y - b.y; return sqrt(dx * dx + dy * dy); } } void main() { var a = Point(2, 2); var b =.
http://semantic-portal.net/dart-tour-classes
CC-MAIN-2022-05
en
refinedweb
Database Tools Changelog¶ On this page 100.5.1 Changelog¶ Released 2021-10-12 This release fixes an issue where certain config collections which should generally be ignored were included by mongodump and mongorestore. This release also ensures that any operations on these collections will not be applied during the oplog replay phase of mongorestore. Bug¶ - TOOLS-2952 Filter config collections in dump/restore 100.5.0 Changelog¶ Released 2021-08-10 This release includes support for the loadbalanced URI option, which provides compatibility with MongoDB Atlas Serverless. Build Failure¶ - TOOLS-2938 Re-add Ubuntu 16.04 PowerPC platform Release¶ - TOOLS-2880 Release Database Tools 100.5.0 Bug¶ - TOOLS-2863 cs.AuthMechanismProperties is not initialized when mechanism set by --authenticationMechanism New Feature¶ - TOOLS-2937 Set loadbalanced option in db.configureClient() Task¶ - TOOLS-2932 Upgrade to Go Driver 1.7.1 100.4.1 Changelog¶ Released 2021-07-23 This patch fixes a bug (:issue:` TOOLS-2931`) that was introduced in version 100.4.0 which causes mongodump to skip any document that contains an empty field name (e.g. { "": "foo" }). Documents with empty field names were not skipped by default if the --query or --queryFile options were specified. No tools other than mongodump were affected. It is highly recommended to upgrade to 100.4.1 if it is possible that your database contains documents with empty field names. Build Failure¶ - TOOLS-2927 Clean up the platforms list inside platform.go Release¶ - TOOLS-2929 Release Database Tools 100.4.1 Bug¶ - TOOLS-2931 mongodump skips documents with empty field names Task¶ - TOOLS-2926 Run release on 'test' and 'development' linux repo separately. 100.4.0 Changelog¶ Released 2021-07-19 This release includes MongoDB Server 5.0 support, including dump/restoring of timeseries collections. Build Failure¶ - TOOLS-2892 aws-auth tests failing on all variants - TOOLS-2893 legacy-js-tests 4.4 and 5.0 failing on all variants Release¶ - TOOLS-2845 Release Database Tools 100.4.0 Bug¶ - TOOLS-2041 Mongorestore should handle duplicate key errors during oplog replay - TOOLS-2833 Creating an index with partialFilterExpression during oplogReplay fails - TOOLS-2925 RPM packages are only signed with the 4.4 auth token New Feature¶ - TOOLS-2857 Dump timeseries collections - TOOLS-2858 Mongodump can query timeseries collections by metadata - TOOLS-2859 Restore timeseries collections - TOOLS-2860 Include/Exclude/Rename timeseries collections in mongorestore Task¶ - TOOLS-2719 Add Enterprise RHEL 8 zSeries to Tools - TOOLS-2721 Add RHEL8 ARM to Tools - TOOLS-2777 Generate Full JSON variant should not be running on every commit - TOOLS-2823 Build with go 1.16 - TOOLS-2824 Add static analysis task that runs "evergreen validate" - TOOLS-2849 Mongodump should fail during resharding - TOOLS-2850 Mongorestore should fail when restoring geoHaystack indexes to 4.9.0 - TOOLS-2851 importCollection command should cause mongodump to fail - TOOLS-2853 Hide deprecated --slaveOk option - TOOLS-2866 Drop support for zSeries platforms - TOOLS-2873 Run full test suite on all supported distros in evergreen - TOOLS-2881 Push tools releases to 4.9+ linux repos - TOOLS-2921 Upgrade to Go Driver 1.6 100.3.1 Changelog¶ Released 2021-03-17 This release includes various bug fixes. Particularly notable is TOOLS-2783, where we reverted a change from 100.2.1 (TOOLS-1856: use a memory pool in mongorestore) after discovering that it was causing memory usage issues. Build Failure¶ - TOOLS-2796 mongotop_sharded.js failing on all versions of the qa-tests - TOOLS-2815 Development build artifacts accidentally uploaded for versioned release Release¶ - TOOLS-2791 Release Database Tools 100.3.1 Bug¶ - TOOLS-2584 Restoring single BSON file should use db set in URI - TOOLS-2783 Mongorestore uses huge amount of RAM Task¶ - TOOLS-704 Remove system.indexes collection dumping from mongodump - TOOLS-2801 Migrate from dep to Go modules and update README - TOOLS-2802 Make mongo-tools-common a subpackage of mongo-tools - TOOLS-2805 Add mod tidy static analysis check for Go modules - TOOLS-2806 Migrate mongo-tools-common unit tests to mongo-tools - TOOLS-2807 Migrate mongo-tools-common integration tests to mongo-tools - TOOLS-2808 Migrate mongo-tools-common IAM auth tests to mongo-tools 100.3.0 Changelog¶ Released 2021-02-04 This release includes support for PKCS8-encrypted client private keys, support for providing secrets in a config file instead of on the command line, and a few small bug fixes. Build Failure¶ - TOOLS-2795 Tools failing to build on SUSE15-sp2 - TOOLS-2800 RPM creation failing on amazon linux 1 Release¶ - TOOLS-2790 Release Database Tools 100.3.0 Investigation¶ - TOOLS-2771 SSL connection problems mongodump Bug¶ - TOOLS-2751 Deferred query EstimatedDocumentCount helper incorrect with filter - TOOLS-2760 rpm package should not obsolete itself - TOOLS-2775 --local does not work with multi-file get or get_regex New Feature¶ - TOOLS-2779 Add --config option for password values Task¶ - TOOLS-2013 Support PKCS8 encrypted client private keys - TOOLS-2707 Build mongo-tools and mongo-tools-common with go 1.15 - TOOLS-2780 Add warning when password value appears on command line - TOOLS-2798 Add Amazon Linux 2 Arm64 to Tools 100.2.1 Changelog¶ Released 2020-11-13 This release includes a mongorestore performance improvement, a fix for a bug affecting highly parallel mongorestore instances, and an observability improvement to mongodump and mongoexport, in addition to a number of internal build/release changes. Build Failure¶ - TOOLS-2767 Windows 64 dist task fails Release¶ - TOOLS-2741 Release Database Tools 100.2.1 Bug¶ - TOOLS-2744 mongorestore not scaling due to unnecessary incremental sleep time New Feature¶ - TOOLS-2750 Log before getting collection counts Task¶ - TOOLS-1856 use a memory pool in mongorestore - TOOLS-2651 Simplify build scripts - TOOLS-2687 Add archived releases JSON feed for Database Tools - TOOLS-2735 Move server vendoring instructions to a README in the repo - TOOLS-2748 Add a String() to OpTime - TOOLS-2758 Bump Go driver to 1.4.2 100.2.0 Changelog¶ Released 2020-10-15 This release deprecates the --sslAllowInvalidHostnames and --sslAllowInvalidCertificates flags in favor of a new --tlsInsecure flag. The mongofiles put and mongofiles get commands can now accept a list of file names. There is a new mongofiles get_regex command to retrieve all files matching a regex pattern. The 100.2.0 release also contains fixes for several bugs. It fixes a bug introduced in version 100.1.0 that made it impossible to connect to clusters with an SRV connection string (TOOLS-2711). Build Failure¶ - TOOLS-2693 Most tasks failing on race detector variant - TOOLS-2737 Fix TLS tests on Mac and Windows - TOOLS-2747 Git tag release process does not work Release¶ - TOOLS-2704 Release Database Tools 100.2.0 Bug¶ - TOOLS-2587 sslAllowInvalidHostnames bypass ssl/tls server certification validation entirely - TOOLS-2688 mongodumpdoes not handle EOF when passing in the password as STDIN - TOOLS-2706 tar: implausibly old time stamp error on Amazon Linux/RHEL - TOOLS-2708 Atlas recommended connection string for mongostatdoesn't work - TOOLS-2710 Non-zero index key values are not preserved in ConvertLegacyIndexes - TOOLS-2711 Tools fail with "a direct connection cannot be made if multiple hosts are specified" if mongodb+srv URI or a legacy uri containing multiple mongosis specified - TOOLS-2716 mongodb-database-toolspackage should break older versions of mongodb-*-tools New Feature¶ - TOOLS-2667 Support list of files for putand getsubcommands in mongofiles - TOOLS-2668 Create regex interface for getting files from remote FS in mongofiles Task¶ - TOOLS-2674 Clarify contribution guidelines - TOOLS-2700 Use git tags for triggering release versions - TOOLS-2701 Log target linux repo in push task 100.1.1 Changelog¶ Released 2020-07-31 This release contains a fix for a linux packaging bug and a mongorestore bug related to the --convertLegacyIndexes flag. Release¶ - TOOLS-2685 Release Database Tools 100.1.1 Bug¶ - TOOLS-2645 Check for duplicate index keys after converting legacy index definitions - TOOLS-2683 Ubuntu 16.04 DB Tools 100.1.0 DEB depends on libcom-err2, should be libcomerr2 100.1.0 Changelog¶ Released 2020-07-24 This release officially adds support for MongoDB 4.4. In addition to various bug fixes, it adds support for MongoDB 4.4's new MONGODB-AWS authentication mechanism. The full list of changes is below: Build Failure¶ - TOOLS-2604 integration-4.4-cluster is failing on master - TOOLS-2638 Test-case failure for mongorestore - TOOLS-2643 New linux distros missing from repo-config.yaml Release¶ - TOOLS-2630 Release Database Tools 100.1.0 Bug¶ - TOOLS-2287 URI parser incorrectly prints unsupported parameter warnings - TOOLS-2337 nsInclude does not work with percent encoded namespaces - TOOLS-2366 ^C isn't handled by mongodump - TOOLS-2494 mongorestore thorw error "panic: close of closed channel" - TOOLS-2531 mongorestore hung if restoring views with --preserveUUID --drop options - TOOLS-2596 DBTools --help links to old Manual doc pages - TOOLS-2597 swallows errors from URI parsing - TOOLS-2609 Detached signatures incorrectly appearing in download JSON - TOOLS-2622 Tools do not build following README instructions - TOOLS-2669 macOS zip archive structure incorrect - TOOLS-2670 Troubleshoot IAM auth options errors New Feature¶ - TOOLS-2369 IAM Role-based authentication Task¶ - TOOLS-2363 Update warning message for "mongorestore" - TOOLS-2476 Notarize builds for macOS catalina - TOOLS-2505 Add missing 4.4 Platforms - TOOLS-2534 Ignore startIndexBuild and abortIndexBuild oplog entries in oplog replay - TOOLS-2535 commitIndexBuild and createIndexes oplog entries should build indexes with the createIndexes command during oplog replay - TOOLS-2554 Remove ReplSetTest file dependencies from repo - TOOLS-2569 Update tools to go driver 1.4.0 - TOOLS-2618 Refactor AWS IAM auth testing code - TOOLS-2628 Add 3.4 tests to evg - TOOLS-2644 Update barque authentication - TOOLS-2650 Create changelog for tools releases 100.0.2 Changelog¶ This release contains several bugfixes. It also adds support for dumping and restoring collections with long names since the 120 byte name limit will be raised to 255 bytes in MongoDB version 4.4. The full list of changes is below: Bug¶ - TOOLS-1785 Typo in mongodump help - TOOLS-2495 Oplog replay can't handle entries > 16 MB - TOOLS-2498 Nil pointer error mongodump - TOOLS-2559 Error on uninstalling database-tools 99.0.1-1 RPM - TOOLS-2575 mongorestore panic during convertLegacyIndexes from 4.4 mongodump - TOOLS-2593 Fix special handling of $admin filenames Task¶ - TOOLS-2446 Add MMAPV1 testing to Tools tests - TOOLS-2469 Accept multiple certs in CA - TOOLS-2530 Mongorestore can restore from new mongodump format - TOOLS-2537 Ignore config.system.indexBuilds namespace - TOOLS-2544 Add 4.4 tests to Evergreen - TOOLS-2551 Split release uploading into per-distro tasks - TOOLS-2555 Support directConnection option - TOOLS-2561 Sign mongodb-tools tarballs - TOOLS-2605 Cut 100.0.2 release 100.0.1 Changelog¶ This release was a test of our new release infrastructure and contains no changes from 100.0.0. Task¶ - TOOLS-2493 Cut tools 100.0.0 and 100.0.1 GA releases 100.0.0 Changelog¶ This is the first separate release of the Database Tools from the Server. We decided to move to a separate release so we can ship new features and bugfixes more frequently. The new separate release version starts from 100.0.0 to make it clear the versioning is separate from the Server. You can read more about this on the MongoDB blog. This release contains bugfixes, some new command-line options, and quality of life improvements. A full list can be found below, but here are some highlights: - There are no longer restrictions on using --uriwith other connection options such as --portand --passwordas long as the URI and the explicit option don't provide conflicting information. Connection strings can now be specified as a positional argument without the --urioption. - The new --useArrayIndexFieldsflag for mongoimportinterprets natural numbers in fields as array indexes when importing csv or tsv files. - The new --convertLegacyIndexesflag for mongorestoreremoves any invalid index options specified in the corresponding mongodumpoutput, and rewrites any legacy index key values to use valid values. - A new delete modefor mongoimport. With --mode delete, mongoimportdeletes existing documents in the database that match a document in the import file. Build Failure¶ - TOOLS-2489 format-go task failing on master Bug¶ - TOOLS-1493 Tools crash running help when terminal width is low - TOOLS-1786 mongodump does not create metadata.json file for views dumped as collections - TOOLS-1826 mongorestore panic in archive mode when replay oplog failed - TOOLS-1909 mongoimport does not report that it supports the decimal type - TOOLS-2275 autoIndexId:false is not supported in 4.0 - TOOLS-2334 Skip system collections during oplog replay - TOOLS-2336 Wrong deprecation error message printed when restoring from stdin - TOOLS-2346 mongodump --archive to stdout corrupts archive when prompting for password - TOOLS-2379 mongodump/mongorestore error if source database has an invalid index option - TOOLS-2380 mongodump fails against hidden node with authentication enabled - TOOLS-2381 Restore no socket timeout behavior - TOOLS-2395 Incorrect message for oplog overflow - TOOLS-2403 mongorestore hang while replaying last oplog failed in archive mode - TOOLS-2422 admin.tempusers is not dropped by mongorestore - TOOLS-2423 mongorestore does not drop admin.tempusers if it exists in the dump - TOOLS-2455 mongorestore hangs on invalid archive - TOOLS-2462 Password prompt does not work on windows - TOOLS-2497 mongorestore may incorrectly validate index name length before calling createIndexes - TOOLS-2513 Creating client options results in connection string validation error - TOOLS-2520 Fix options parsing for SSL options - TOOLS-2547 Installing database tools fails on rhel 7.0 - TOOLS-2548 Installing database tools fails on SLES 15 New Feature¶ - TOOLS-1954 Support roundtrip of mongoexport array notation in mongoimport - TOOLS-2268 Add remove mode to mongoimport - TOOLS-2412 Strip unsupported legacy index options - TOOLS-2430 mongorestore: in dotted index keys, replace "hashed" with "1" - TOOLS-2459 Allow --uri to be used with other connection string options - TOOLS-2460 A connection string can be set as a positional argument - TOOLS-2521 Add support for the tlsDisableOCSPEndpointCheck URI option - TOOLS-2529 Mongodump outputs new file format for long collection names Task¶ - TOOLS-2418 Remove mongoreplay from mongo-tools - TOOLS-2421 Maintain test coverage after moving tools tests from server - TOOLS-2438 Create MSI installer in dist task - TOOLS-2439 Tools formula included in homebrew tap - TOOLS-2440 Sign MSI installer - TOOLS-2441 Update release process documentation - TOOLS-2442 Automate release uploads - TOOLS-2443 Generate tarball archive in dist task - TOOLS-2444 Generate deb packages in dist task - TOOLS-2449 Create sign task - TOOLS-2464 Update platform support - TOOLS-2470 Sign linux packages - TOOLS-2471 Automate JSON download feed generation - TOOLS-2472 Automate linux package publishing - TOOLS-2473 Consolidate community and enterprise buildvariants - TOOLS-2475 Manually verify tools release - TOOLS-2480 Generate rpm packages in dist task - TOOLS-2488 Update package naming and versioning - TOOLS-2493 Cut tools 100.0.0 and 100.0.1 GA releases - TOOLS-2506 Update maintainer in linux packages - TOOLS-2523 Remove Ubuntu 12.04 and Debian 7.1 variants - TOOLS-2536 ignoreUnknownIndexOptions option in the createIndexes command for servers >4.1.9 - TOOLS-2538 Move convertLegacyIndexKeys() from mongorestore to mongo-tools-common - TOOLS-2539 Publish linux packages to curator with correct names - TOOLS-2549 Push GA releases to server testing repo - TOOLS-2550 Push GA releases to the 4.4 repo - TOOLS-2551 Split release uploading into per-distro tasks
https://docs.mongodb.com/database-tools/release-notes/database-tools-changelog/
CC-MAIN-2022-05
en
refinedweb
NAMEqb_atomic_pointer_compare_and_exchange - Compares oldval with the pointer pointed to by atomic and if they are equal, atomically exchanges *atomic with newval. SYNOPSIS #include <qb/qbatomic.h> int32_t qb_atomic_pointer_compare_and_exchange( volatile void *QB_GNUC_MAY_ALIAS *atomic, void *oldval, void *newval ); PARAMSatomic a pointer to a void* oldval the assumed old value of *atomic newval the new value of *atomic
https://man.archlinux.org/man/community/libqb/qb_atomic_pointer_compare_and_exchange.3.en
CC-MAIN-2022-05
en
refinedweb
Encrypted Secrets(Credentials) in Rails 6, Rails 5.1/5.2, older versions and non-Rails applications How to manage encrypted keys for different environments There are two most popular ways to manage secrets in your application. - Encrypted file with secrets. Best choice for a single monolith application. There’s no need for additional software, just keep your encrypted data in the app repository and move decrypt key under git ignore. - Centralized storage. For large and complex systems it’s better to use dedicated storage for all services and provide an interface for them. Vault is a cool example of this kind of solution. the first time you run: rails credentials:edit If you don’t have a master key, that will be created too. Applications after Rails 5.2 automatically have a basic credentials file generated that already contains the secret_key_base. Here is an example of config/credentials.yml.enc aws: access_key_id: 123 secret_access_key: 345secret_key_base: xxx And this is how to get value from credentials: Rails.application.credentials.aws[:secret_access_key] => 345 Below I’ll list three cases of managing encrypted variables: Rails 6, Rails 5.1/5.2, older versions and non-Rails applications. Rails 5.1.x and 5.2.x: Single file for all environments This was the main problem. One single file with all credentials and one single key for them. There is no way to share access between developers for a specific environment. Though you can easily hack a naming problem in two ways: - Adding dev, staging, productionkeys and put all needed under the related key. - Override credentialsmethod on Rails application. Top-level Hash as an environment key. development: aws: access_key_id: 123 secret_access_key: 345 production aws: access_key_id: 321 secret_access_key: 543 Fetching value in the code will be through a call Rails.env on Rails.application.credentials Rails.application.credentials.send(Rails.env)[:aws][:secret_access_key] Override credentials method on Rails application. module AppModule class Application < Rails::Application def credentials if Rails.env.production? super else encrypted( "config/credentials.#{Rails.env}.yml.enc", key_path: "config/#{Rails.env}.key" ) end end end end This uses the default credentials file config/credentials.yml.enc if the Rails environment is production. With this solution decrypt key can be specific for each environment. Rails 6: Specify And Manage credentials file for each environment So, now Rails 6 supports Multi Environment Credentials. The credentials command supports passing an --environment option to create an environment-specific override file with variables. That override will take precedence over the global config/credentials.yml.env file when running in that environment. Let’s create an example for the development environment: rails credentials:edit --environment development This task will create config/credentials/development.yml.enc with the new encryption key in config/credentials/development.key Let’s add our keys here: aws: access_key_id: 123 To get the value just use Rails.application.credentials Rails.application.credentials.aws[:access_key_id] => 123 Encrypted secrets for non-Rails applications Gem sekrets is the most flexible solution I’ve worked with. It works just fine for any kind of Ruby app. Even rails encrypted secrets were based on this. Encrypted secrets with Rails older than 5.0 Add to Gemfile of Rails project: gem 'sekrets' Then create an encrypted config file (for each needed environment) ruby -r yaml -e'puts({:some_key => 000}.to_yaml)' | sekrets write config/sekrets.yml.development.enc --key yoursecretkey Keep your secret key into .sekrets.key file: echo yoursecretkey > .sekrets.key Now you can edit them with the following task: sekrets edit config/sekrets.yml.development.enc Add this line to config/application.rb which will load secrets for the current environment require_relative 'boot' require 'rails/all' Bundler.require(*Rails.groups)module Rails5Cred class Application < Rails::Application config.sekrets = Sekrets.settings_for(Rails.root.join('config', "sekrets.yml.#{Rails.env}.enc")) end end To get the desired value use Rails.configuration.sekrets Rails.configuration.sekrets['some_key'] => 0 Encrypted secrets with pure Ruby apps (non-Rails) Add to Gemfile of Ruby project: gem 'sekrets' Here is an example of a secret reader class for a rack-based env variable RACK_ENV . Just replace it, if you have another environment id. class Secret def self.[](key) root = Pathname.new('./').expand_path sec_key = File.read(root.join('.sekrets.key')).strip Sekrets.settings_for("./config/sekrets.yml.#{ENV['RACK_ENV']}.enc", key: sec_key)[key] end end The process of creating and editing a file with variables is identical for Rails older than 5.x. To receive values just call Secret class. But first of all this file should be required globally or in a used location. Secret['some_key'] => 0 Conclusions - Separate encryption key for each environment. Do not create a single key for all environments. It is safer to have separate keys for CI, development, and production - Encrypted secrets make easier deploys. Variables can now be shipped with the code. You only need to upload the key to the server once. - This solution can be used for any kind of Ruby or Rails application.
https://medium.com/@kirill_shevch/encrypted-secrets-credentials-in-rails-6-rails-5-1-5-2-f470accd62fc
CC-MAIN-2020-05
en
refinedweb
I am trying to generate 2KHz square wave via a MCP4725 on RPi3. I need to vary the voltage somewhere between 0 to 5Vpp, so I cannot use the digital GPIO pins and I get this MCP4725 from Adafruit. I connect the MCP4275 and I can see it on the I2C bus. I copy the Adafuit example and modify it as little bit, however when I run the following Python code it does not produce the 2 KHz square wave. The square wave produced is only about 800 Hz. If I reduce the sleep to 0.00001, it gives about 2 KHz but it is not stable but oscillates from 1 KHz to 2 KHz. This is unacceptable for my application. Code: Select all import time # Import the MCP4725 module. import Adafruit_MCP4725 # Create a DAC instance. dac = Adafruit_MCP4725.MCP4725() # Loop forever alternating through different voltage outputs. print('Press Ctrl-C to quit...') while True: dac.set_voltage(0) time.sleep(0.00025) dac.set_voltage(4095) time.sleep(0.00025) I have took a movie on this and I wonder if it is a software issue? I am aware of the following possibilities: 1) I2C speed default to RPi3 is too low, I should change to 3400000 (max 3.4 Mbps according to MCP4725) 2) bad cable but my cable is short and only about 10 cm 3) Adafruit python lib is slow, so I should change to pigpio python lib instead. But I have no idea on how to use those on MCP4725, Thank you! Rolly
https://www.raspberrypi.org/forums/viewtopic.php?f=44&t=190134&p=1195068
CC-MAIN-2020-05
en
refinedweb
The QSourceLocation class identifies a location in a resource by URI, line, and column. More... #include <QSourceLocation> Note: All functions in this class are reentrant. This class was introduced in Qt 4. Destructor. Returns the current column number. The column number refers to the count of characters, not bytes. The first column is column 1, not 0. The default value is -1, indicating the column number is unknown.. Sets the column number to newColumn. 0 is an invalid column number. The first column number is 1. Sets the line number to newLine. 0 is an invalid line number. The first line number is 1. Sets the URI to newUri. Returns the resource that this QSourceLocation refers to. For example, the resource could be a file in the local file system, if the URI scheme is file. Returns the opposite of applying operator==() for this QXmlName and other. Assigns this QSourceLocation instance to other. Returns true if this QSourceLocation is identical to other. Two QSourceLocation instances are equal if their uri(), line() and column() are equal. QSourceLocation instances for which isNull() returns true are considered equal. Computes a hash key for the QSourceLocation location. This function was introduced in Qt 4.4. Prints sourceLocation to the debug stream debug. This function was introduced in Qt 4.4.
https://doc-snapshots.qt.io/4.8/qsourcelocation.html
CC-MAIN-2020-05
en
refinedweb
This API supports the .NET Framework infrastructure and is not intended to be used directly from your code. public ref class MMC_PSO public class MMC_PSO type MMC_PSO = class Public Class MMC_PSO Determines whether the specified object is equal to the current object. Serves as the default hash function. Gets the Type of the current instance. Creates a shallow copy of the current Object. Returns a string that represents the current object. Thank you.
https://docs.microsoft.com/en-us/dotnet/api/microsoft.clradmin.mmc_pso?view=netframework-1.1
CC-MAIN-2020-05
en
refinedweb
App Center Push Note Google announced it is migrating from the Google Cloud Messaging (GSM) platform to Firebase Cloud Messaging (FCM). For Android developers, the Firebase SDK is required to use Push Notifications. For additional information, please refer to the SDK migration guide. Please follow the Get started section if you haven't set up and started the SDK in your application, yet. 1. Add the App Center Push module The App Center SDK is designed with a modular approach – a developer only needs to integrate the modules of the services that they're interested in. Modify the Android project level build.gradle file: buildscript { repositories { // Add google line if missing before jcenter google() jcenter() } dependencies { // Add this line classpath 'com.google.gms:google-services:4.0.1' } } allprojects { repositories { // Add google line if missing before jcenter google() jcenter() } } Note Google introduced the google()repository with Gradle v4. If your Gradle version is lower than v4, then you need to use maven { url '' }instead of google(). Modify the app level build.gradle file: dependencies { // Add App Center Push module dependency def appCenterSdkVersion = '2.5.1' implementation "com.microsoft.appcenter:appcenter-push:${appCenterSdkVersion}" } // Add this line at the bottom apply plugin: 'com.google.gms.google-services' Note If the version of your Android Gradle plugin is lower than 3.0.0, then you need to replace the word implementation by compile. Make sure to trigger a Gradle sync in Android Studio. 2. Start App Center Push To use App Center capabilities in your application, the app must opt-in to the module(s) it wants to use. By default no modules are started and the app must explicitly call each of them when starting the SDK. Add the Push class to the app's call to the AppCenter.start() method to start App Center Push. AppCenter.start(getApplication(), "{Your App Secret}", Push.class); AppCenter.start(application, "{Your App Secret}", Push::class.java) Replace {Your App Secret} in the sample with the App Secret for the App Center project associated with this application. Refer to the Get started section if the SDK isn't configured in the application yet. Android Studio automatically suggests the required import statement once you add Push to the start() method, but if you see an error that the class names are not recognized, add the following lines to the import statements in your activity class: import com.microsoft.appcenter.AppCenter; import com.microsoft.appcenter.push.Push; import com.microsoft.appcenter.AppCenter import com.microsoft.appcenter.push.Push Intercept push notifications Set up a listener to be notified whenever a push notification is received in foreground or a background push notification has been clicked by the user. Note The device does not generate a notification when an application receives a push notification when the app is in the foreground. Note If the push is received in background, the event is NOT triggered at receive time. The event is triggered when you click on the notification. Note The background notification click callback does NOT expose title and message. Title and message are only available in foreground pushes. The app must register the listener before calling AppCenter.start as shown in the following example: Push.setListener(new MyPushListener()); AppCenter.start(...); Push.setListener(MyPushListener()) AppCenter.start(...) If the app's launcher activity uses a launchMode of singleTop, singleInstance or singleTask, add the following code in the activity onNewIntent method: @Override protected void onNewIntent(Intent intent) { super.onNewIntent(intent); Push.checkLaunchedFromNotification(this, intent); } override fun onNewIntent(intent: Intent?) { super.onNewIntent(intent) Push.checkLaunchedFromNotification(this, intent) } Here is an example of the listener implementation that displays an alert dialog if the message is received in foreground or a toast if a background push has been clicked: public class MyPushListener implements PushListener { @Override public void onPushNotificationReceived(Activity activity, PushNotification pushNotification) { /* The following notification properties are available. */ String title = pushNotification.getTitle(); String message = pushNotification.getMessage(); Map<String, String> customData = pushNotification.getCustomData(); /* * Message and title cannot be read from a background notification object. * Message being a mandatory field, you can use that to check foreground vs background. */ if (message != null) { /* Display an alert for foreground push. */ AlertDialog.Builder dialog = new" } } } class MyPushListener : PushListener { override fun onPushNotificationReceived(activity: Activity, pushNotification: PushNotification) { /* The following notification properties are available. */ val title = pushNotification.getTitle() val message = pushNotification.getMessage() val customData = pushNotification.getCustomData() /* * Message and title cannot be read from a background notification object. * Message being a mandatory field, you can use that to check foreground vs background. */ if (message != null) { /* Display an alert for foreground push. */ val dialog =" } } } Custom data in your notifications You can send optional custom data as part of the push payload. The data will be sent in the key-value format. This custom data can be intercepted in the app through Push SDK callback. There are few reserved keywords that can be set via custom data. You can customize your notifications by setting custom color, icon or sound. Note Android 5.0 and later uses a silhouette (only alpha channel) of your icon for notifications. See Android 5.0 Behavior Changes for details. Reserved keywords in Android platform - color: The notification's icon color, expressed in #rrggbbformat. Will be applied only on devices with Android 5.0 and later. - icon: The notification's icon. You should specify name of the icon resource. Supports drawableand mipmaptypes. If this value isn't specified application icon will be used. - sound: Add this key when you want the to play a sound when the device receives the notification. Supports defaultor the filename of a sound resource bundled in the app. Sound files must reside in /res/raw/. This is effective only for devices running or targeting an Android version lower than 8. Sound is set by default on Android 8 and user can change notification settings for the group of notifications coming from AppCenter. Configure notification's default values You can specify custom defaults for the icon and color that gets applied when it isn't set in the push payload. The lines below should be added to AndroidManifest.xml inside the application tag: <!-- Set default notification icon and color. --> <meta-data android: <meta-data android: App Center displays the application icon if a custom default icon and an icon are not set in the push payload. Existing Firebase Analytics users App Center Push SDK automatically disables Firebase Analytics. If you are a Firebase customer and want to keep reporting analytics data to Firebase, you must call the following method before AppCenter.start: Push.enableFirebaseAnalytics(getApplication()); AppCenter.start(getApplication(), "{Your App Secret}", Push.class);. Push.setEnabled(false); Push.setEnabled(false) To enable App Center Push again, use the same API but pass true as a parameter. Push.setEnabled(true); Push.setEnabled(true) The state is persisted in the device's storage across application launches. This API is asynchronous, you can read more about that in our App Center Asynchronous APIs guide. Note This method must only be used after Push has been started. Check if App Center Push is enabled You can also check if App Center Push is enabled or not: Push.isEnabled(); Push.isEnabled() This API is asynchronous, you can read more about that in our App Center Asynchronous APIs guide. Note This method must only be used after Push has been started, it will always return false before start. Comentarios Cargando comentarios...
https://docs.microsoft.com/es-es/appcenter/sdk/push/android
CC-MAIN-2020-05
en
refinedweb
codable 1.0.0 codable # A library for encoding and decoding dynamic data into Dart objects. Basic Usage # Data objects extend Coding: class Person extends Coding { String name; @override void decode(KeyedArchive object) { // must call super super.decode(object); name = object.decode("name"); } @override void encode(KeyedArchive object) { object.encode("name", name); } } An object that extends Coding can be read from JSON: final json = json.decode(...); final archive = KeyedArchive.unarchive(json); final person = Person()..decode(archive); Objects that extend Coding may also be written to JSON: final person = Person()..name = "Bob"; final archive = KeyedArchive.archive(person); final json = json.encode(archive); Coding objects can encode or decode other Coding objects, including lists of Coding objects and maps where Coding objects are values. You must provide a closure that instantiates the Coding object being decoded. class Team extends Coding { List<Person> members; Person manager; @override void decode(KeyedArchive object) { super.decode(object); // must call super members = object.decodeObjects("members", () => Person()); manager = object.decodeObject("manager", () => Person()); } @override void encode(KeyedArchive object) { object.encodeObject("manager", manager); object.encodeObjects("members", members); } } Dynamic Type Casting # Types with primitive type arguments (e.g., List<String> or Map<String, int>) are a particular pain point when decoding. Override castMap in Coding to perform type coercion. You must import package:codable/cast.dart as cast and prefix type names with cast. import 'package:codable/cast.dart' as cast; class Container extends Coding { List<String> things; @override Map<String, cast.Cast<dynamic>> get castMap => { "things": cast.List(cast.String) }; @override void decode(KeyedArchive object) { super.decode(object); things = object.decode("things"); } @override void encode(KeyedArchive object) { object.encode("things", things); } } Document References # Coding objects may be referred to multiple times in a document without duplicating their structure. An object is referenced with the $key key. For example, consider the following JSON: { "components": { "thing": { "name": "The Thing" } }, "data": { "$ref": "#/components/thing" } } In the above, the decoded value of data inherits all properties from /components/thing: { "$ref": "#/components/thing", "name": "The Thing" } You may create references in your in-memory data structures through the Coding.referenceURI. final person = Person()..referenceURI = Uri(path: "/teams/engineering/manager"); The above person is encoded as: { "$ref": "#/teams/engineering/manager" } You may have cyclical references. See the specification for JSON Schema and the $ref keyword for more details. 1.0.0 # - Initial version Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: codable: :codable/codable.dart'; We analyzed this package on Jan 17, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.7.0 - pana: 0.13.4 Health suggestions Fix lib/src/keyed_archive.dart. (-7.71 points) Analysis of lib/src/keyed_archive.dart reported 16 hints, including: line 36 col 81: Use = to separate a named parameter from its default value. line 37 col 21: Unnecessary new keyword. line 39 col 30: Unnecessary new keyword. line 52 col 73: Use = to separate a named parameter from its default value. line 53 col 21: Unnecessary new keyword. Fix lib/cast.dart. (-4.41 points) Analysis of lib/cast.dart reported 9 hints, including: line 63 col 15: Unnecessary new keyword. line 71 col 15: Unnecessary new keyword. line 79 col 15: Unnecessary new keyword. line 87 col 15: Unnecessary new keyword. line 105 col 18: Unnecessary new keyword. Format lib/codable.dart. Run dartfmt to format lib/codable.dart. Fix additional 4 files with analysis or formatting issues. Additional issues in the following files: lib/src/codable.dart(Run dartfmtto format lib/src/codable.dart.) lib/src/coding.dart(Run dartfmtto format lib/src/coding.dart.) lib/src/list.dart(Run dartfmtto format lib/src/list.dart.) lib/src/resolver.dart(Run dartfmtto format lib/src/resolver.dart.) Maintenance suggestions Package is getting outdated. (-44.66 points) The package was last published 75 weeks ago. Maintain an example. (-10 points) Create a short demo in the example/ directory to show how to use this package. Common filename patterns include main.dart, example.dart, and codable.dart. Packages with multiple examples should provide example/README.md. For more information see the pub package layout conventions.
https://pub.dev/packages/codable
CC-MAIN-2020-05
en
refinedweb
5. public class Name { private String first; private String last; public Name(String theFirst, String theLast) { first = theFirst; last = theLast; } public void setFirst(String theFirst) { first = theFirst; } public void setLast(String theLast) { last = theLast; } } - Determines the amount of space needed for an object and creates the object - The object is already created before the constructor is called but the constructor initializes the instance variables. - Names the new object - Constructors do not name the object. - Return to free storage all the memory used by this instance of the class. - Constructors do not free any memory. In Java the freeing of memory is done when the object is no longer referenced. - Initialize the instance variables in the object - A constructor initializes the instance variables to their default values or in the case of a parameterized constructor, to the values passed in to the constructor. 5-2-2: What best describes the purpose of a class’s constructor?. 5.2.2.. 5.2.3. AP Practice¶ Cat c = new Cat (“Oliver”, 7); The age 7 is less than 10, so this cat would not be considered a senior cat. Cat c = new Cat (“Max”, “15”); An integer should be passed in as the second parameter, not a string. Cat c = new Cat (“Spots”, true); An integer should be passed in as the second parameter, not a boolean. Cat c = new Cat (“Whiskers”, 10); Correct! Cat c = new Cat (“Bella”, isSenior); An integer should be passed in as the second parameter and isSenior would be undefined outside of the class. 5-2-3: Consider the definition of the Cat class below. The class uses the instance variable isSenior to indicate whether a cat is old enough to be considered a senior cat or not. public class Cat { private String name; private int age; private boolean isSenior; public Cat(String n, int a) { name = n; age = a; if (age >= 10) { isSenior = true; } else { isSenior = false; } } } Which of the following statements will create a Cat object that represents a cat that is considered a senior cat? - I only - Option III can also create a correct Cat instance. - II only - Option II will create a cat that is 0 years old with 5 kittens. - III only - Option I can also create a correct Cat instance. - I and III only - Good job! - I, II and III - Option II will create a cat that is 0 years old with 5 kittens. 5-2-4: Consider the following class definition. Each object of the class Cat will store the cat’s name as name, the cat’s age as age, and the number of kittens the cat has as kittens. Which of the following code segments, found in a class other than Cat, can be used to create a cat that is 5 years old with no kittens? public class Cat { private String name; private int age; private int kittens; public Cat(String n, int a, int k) { name = n; age = a; kittens = k; } public Cat(String n, int a) { name = n; age = a; kittens = 0; } /* Other methods not shown */ } I. Cat c = new Cat("Sprinkles", 5, 0); II. Cat c = new Cat("Lucy", 0, 5); III. Cat c = new Cat("Luna", 5); public Cat(String c, boolean h) { c = "black"; h = true; } The constructor should be changing the instance variables, not the local variables. public Cat(String c, boolean h) { c = "black"; h = "true"; } The constructor should be changing the instance variables, not the local variables. public Cat(String c, boolean h) { c = color; h = isHungry; } The constructor should be changing the instance variables, not the local variables. public Cat(String c, boolean h) { color = black; isHungry = true; } The constructor should be using the local variables to set the instance variables. public Cat(String c, boolean h) { color = c; isHungry = h; } Correct! 5-2-5: Consider the following class definition. public class Cat { private String color; private boolean isHungry; /* missing constructor */ } The following statement appears in a method in a class other than Cat. It is intended to create a new Cat object c with its attributes set to “black” and true. Cat c = new Cat("black", true); Which of the following can be used to replace /* missing constructor */ so that the object c is correctly created?
https://runestone.academy/runestone/books/published/csawesome/Unit5-Writing-Classes/topic-5-2-writing-constructors.html
CC-MAIN-2020-05
en
refinedweb
5.4. Accessor Methods¶ Since the instance variables in a class are usually marked as private to the class, programmers provide public methods that allow safe access to the instance variable values in a class. Accessor methods, also called get methods or getters, allow a way to get the value of each instance variable from outside of the class. In the next lesson, we will see mutator methods, also called set methods or setters, that allow a way to change the values of the instance variables. In Unit 2, we also used set/get methods with the Turtle class to get the Turtle object’s width, height, xPosition, etc. or to change them. If you used a language like App Inventor in an AP CSP class, you may have used setter and getter blocks. In App Inventor, you cannot make your own classes, but you can declare UI objects like Button1, Button2 from the Button class and use their get/set methods for any property like below. Java programmers write get methods for each instance variable that look like the following. Notice that the get method returns the instance variable’s value and it has a return type that is the same type as the variable that it is returning. class ExampleTemplate { //Instance variable declaration private typeOfVar varName; // Accessor (getter) method template public typeOfVar getVarName() { return varName; } } Here’s an example of an accessor method called getName() for the Student class which also demonstrates how to call getName() using a Student object: class Student { //Instance variable name private String name; /** getName() example * @return name */ public String getName() { return name; } public static void main(String[] args) { // To call a get method, use objectName.getVarName() Student s = new Student(); System.out.println("Name: " + s.getName() ); } Note Some common errors with methods that return values are: Forgetting a return type like int before the method name. Forgetting to use the return keyword to return a value at the end of the method. Forgetting to do something with the value returned from a method (like saving it into a variable or printing it out). Try the following code. Note that this active code window has 2 classes! The main method is in a separate Tester or Driver class. It does not have access to the private instance variables in the other Student class. Note that when you use multiple classes in an IDE, you usually put them in separate files, and you give the files the same name as the public class in them. In active code and IDEs, you can put 2 classes in 1 file, as demonstrated here, but only 1 of them can be public and have a main method in it. You can also view the fixed code in the Java visualizer. Try the following code. Note that it has a bug! It tries to access the private instance variable email from outside the class Student. Change the main method in Tester class so that it uses the appropriate public accessor method (get method) to access the email value instead. There is a subtle difference in methods that return primitive types versus reference/object types. If the method is returning a primitive type like int, it returns a copy of the value. This is called return by value. This means the original value is not changed and it is a safe way to access the instance variables. However, object variables really hold a reference to the object in memory. This is not the actual value, but its address in memory. So, if the method is returning an object like String, Java returns a copy of the object reference, not the value itself. Java was especially designed this way because objects tend to be large and we want to avoid copying large objects, so we just pass around references to the objects (their addresses in memory). So, when we call getName(), we actually get back a reference to the String for the name in memory. 5.4.1. toString()¶ Another common method that returns a value is the toString() method which returns a String description of the instance variables of the object. This method is called automatically to try to convert an object to a String when it is needed, for example in a print statement. Here is the Student class again, but this time with a toString() method. Note that when we call System.out.println(s1); it will automatically call the toString() method to cast the object into a String. The toString() method will return a String that is then printed out. Watch how the control moves to the toString() method and then comes back to main in the Java visualizer. 5.4.2. Programming Challenge : Class Pet¶ You’ve been hired to create a software system for the Awesome Animal Clinic! They would like to keep track of their animal patients. Here are some attributes of the pets that they would like to track: Name Age Weight Type (dog, cat, lizard, etc.) Breed Create a class that keeps track of the attributes above for pet records at the animal clinic. Decide what instance variables are needed and their data types. Make sure you use int, double, and String data types. Make the instance variables private. Create 2 constructors, one with no parameters and one with many parameters to initialize all the instance variables. Create Accessor (get) methods for each of the instance variables. Create a toString() method that returns all the information in a pet record. In the main method below, create 3 pet objects and call their constructors, accessor methods, and toString methods to test all of your methods. Make sure you use good commenting! 5.4.3. Summary¶ An accessor method allows other objects to obtain the value of instance variables or static variables. A non-void method returns a single value. Its header includes the return type in place of the keyword void. Accessor methods that return primitive types use “return by value” where a copy of the value is returned. When the return expression is a reference to an object, a copy of that reference is returned, not a copy of the object. The return keyword is used to return the flow of control to the point immediately following where the method or constructor was called. The toString method is an overridden method that is included in classes to provide a description of a specific object. It generally includes what values are stored in the instance data of the object. If System.out.print or System.out.println is passed an object, that object’s toString method is called, and the returned string is printed. 5.4.4. AP Practice¶ - The getNumOfPeople method should be declared as public. - Correct, accessor methods should be public so they can be accessed from outside the class. - The return type of the getNumOfPeople method should be void. - The method return type should stay as int. - The getNumOfPeople method should have at least one parameter. - This method should not have any parameters - The variable numOfPeople is not declared inside the getNumOfPeople method. - This is an instance variable and should be declared outside. - The instance variable num should be returned instead of numOfPeople, which is local to the constructor. - The numOfPeople variable is correctly returned. 5-4-1: Consider the following Party class. The getNumOfPeople method is intended to allow methods in other classes to access a Party object’s numOfPeople instance variable value; however, it does not work as intended. Which of the following best explains why the getNumOfPeople method does NOT work as intended? public class Party { private int numOfPeople; public Party(int num) { numOfPeople = num; } private int getNumOfPeople() { return numOfPeople; } } The id instance variable should be public. Instance variables should be private. The getId method should be declared as private. Accessor methods should be public methods. The getId method requires a parameter. Accessor methods usually do not require parameters. The return type of the getId method needs to be defined as void. void is not the correct return type. The return type of the getId method needs to be defined as int. Correct! Accessor methods have a return type of the instance variable they are returning. 5-4-2: Consider the following class definition. The class does not compile. public class Student { private int id; public getId() { return id; } // Constructor not shown } The accessor method getId is intended to return the id of a Student object. Which of the following best explains why the class does not compile?
https://runestone.academy/runestone/books/published/csawesome/Unit5-Writing-Classes/topic-5-4-accessor-methods.html
CC-MAIN-2020-05
en
refinedweb
Opened 7 years ago Closed 5 years ago #1739 closed defect (fixed) Language switch on wxGUI doesn't affect all strings Description Switch the language to any other (Settings > Pref > Appearance tab) Open GRASS again wxGUI is in the selected language, but error messages are not (try to find one). Change History (7) comment:1 Changed 7 years ago by comment:2 Changed 7 years ago by comment:3 Changed 7 years ago by I will test again on the Mac when I get back to town in a couple days. But as of a week or 2 ago, this did not work at all on the Mac for 6.4.3. Michael comment:4 Changed 7 years ago by I found more examples where translated strings are ignored: - Map Display buttons flags - Dropdown list "Element list:" at Preferences -> Settings -> Appearance tab comment:5 Changed 7 years ago by I have completely changed the gettext usage in wxGUI. As reported by mmetz and MilenaN, the translation was active only in the files where gettext.install function was called. Adding gettext.install function call to the file enabled translation for the particular file, although according to the documentation one and only one call of gettext.install function should be enough (Python gettext localizing your application). The reason why it is not working is not known. However, gettext.install function is adding the underscore ( _("...")) function into the __builtins__ module. This makes life of tools such as pylint and pep8 (and maybe also doctest) harder and thus usage of gettext.install function is not considered as a good practice by some most notably Django project, moreover there might by also some problems with unicode which are not clear to me (see this blog post). Now I've implemented better different practice of using gettext which is supposed to be more general since it is also usable for Python modules (in the sense of libraries). The underscore function is no longer build-in (in global namespace) and must by defined (or imported) explicitly in every file. Code to enable translation for the file and define undescore function: import gettext _ = gettext.translation('grasswxpy', os.path.join(os.getenv("GISBASE"), 'locale')).ugettext In the case when the translation is not available and exception is thrown (e.g., during compilation, maybe only with different locale). For this case I added the null translation underscore function (there is some way using gettext, but this should be enough). No error is reported in this case (I don't like it, but I don't know how to report it and not introduce some strange message during compilation). The full code for enabling translation: try: import gettext _ = gettext.translation('grasswxpy', os.path.join(os.getenv("GISBASE"), 'locale')).ugettext except IOError: # using no translation silently def null_gettext(string): return string _ = null_gettext It is possible to just import the underscore function and since the code is long the function is exposed by ( gui/wxpython/) core.utils module and can be imported by others: from core.utils import _ Some modules cannot use the import since they are imported by core.utils. These modules have to use the larger code above. (This could be changed in the future by introducing the core.translations module which needs to be on the were top of the import tree). This was implemented in the r57219 and r57220 (first changesets contained bug, that's why committed in parts). These changesets (for trunk) fix the issue specified in the comment:4 by MilenaN. (Both changesets are hopefully easily revertable if necessary). However, there is still issue with strings in lib/python as specified in comment:2 by marisn. For GRASS 6 the string are no even generated as noted in comment:2. For GRASS 7 the strings are generated and I'm getting the translation. However, grass.script modules are using gettext.install which could be OK (but I thing it isn't) for GRASS Python modules but it is wrong when grass.script and friends are used from wxGUI. So this should be probably changed too (very bad practice, against manual and this method did not work for wxGUI), but this means to change not only all the files. I'm not sure about GRASS Python modules and other libraries such as temporal or pygrass. However, the modules seems to work, so there is no pressure for the change. comment:6 follow-up: 7 Changed. comment:7 Changed. No complains since then, so closing as fixed. Feel free open a new ticket if you encounter any other problems. Related/unresolved: If I understood correctly from non-helpful bug report, issue is with strings reported by Python parser preprocessor (or whatever it is). Issue is twofold: First part can be solved by (diff for 6.4 svn): Still even after applying supplied patch, translated strings are ignored in module error dialogs. My guess - something with passing LANG to child processes. Somebody with better understanding of Python+parser magic should look into it. Steps to reproduce issue:
https://trac.osgeo.org/grass/ticket/1739
CC-MAIN-2020-05
en
refinedweb
First of all: I've already ask this is another forum and basically just copied my post, so you will find this, if you search for it with google in the dreamincode forums. I've asked this a couple of weeks ago and it seems like no one could (or wanted) to help me. I've also removed all the imports, to make it the code a bit smaller here but there are no errors in my code currently. ------------------------------- I'm currently making a text based adventure game and everything is nearly finished but somehow there seems to be a problem with a XmlElementWrapper. I have a class called Inventory, with a ArrayList, which holds the items for the rooms and for the creatures. Here's how the XML File looks rougly. I removed a lot of sutff, because the XML File is quite huge and the other stuff most likely doesn't matter. I've also got a lot more rooms and so on, but just a example XML File: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <map> <rooms> <room id="1"> <description>The Entrance of the map/description> <neighbour> <direction>ahead</direction> <ref>2</ref> </neighbour> </room> <room id="2"> <description>Room 2</description> <neighbour> <direction>right</direction> <ref>3</ref> </neighbour> <inventory> <weapon> <name>Sword</name> <description>An old sword</description> <damage>15</damage> </weapon> <consumable> <name>Health Potion</name> <description>Regenerated 10 health points</description> <reg>10</reg> </consumable> </inventory> </room> <room id="3"> <description>Room 3</description> </room> </rooms> </map> This is my Item class: @XmlSeeAlso({Consumable.class, Weapon.class}) public abstract class Item { private String name; private String description; public Item() { } public Item(String name, String description) { this.name = name; this.description = description; } public String getName() { return name; } public String getDescription() { return description; } @XmlElement(name = "name") public void setName(String name) { this.name = name; } @XmlElement(name = "description") public void setDescription(String description) { this.description = description; } public boolean isWeapon() { return this.getClass() == Weapon.class; } public boolean isConsumable() { return this.getClass() == Consumable.class; } } and this one of the subclasses, just here one for example: (The Weapon class) @XmlRootElement(name = "weapon")@XmlAccessorType(XmlAccessType.NONE) public class Weapon extends Item { private int damage; public Weapon() { } public Weapon(String name, String description, int damage) { super(name, description); this.damage = damage; } public int getDamage() { return damage; } @XmlElement(name = "damage") public void setDamage(int damage) { this.damage = damage; } } and this is the inventory class: @XmlRootElement(name = "inventory")@XmlAccessorType(XmlAccessType.NONE) public class Inventory { @XmlElementWrapper(name = "inventory")@XmlElement(name = "item") private ArrayList<Item> inventory; public Inventory() { System.out.println("Inventory Constructor has been used!"); inventory = new ArrayList<Item>(); } public String getAll() { String returnString = ""; for(Item item : inventory) { returnString += " " + item.getName(); } return returnString; } public Item get(String name) { return get(name); } public void add(Item item) { inventory.add(item); } public Item remove(String name) { Item item = this.get(name); inventory.remove(this.get(item)); return item; } public void removeAll() { if(inventory.size() > 0) { inventory.clear(); System.out.println("All items of have been removed of your inventory!"); } else { System.out.println("There are no items in your inventory!"); } } } So, i've also added the System.out.println to the constructor, so that i can see if the constructor has been used and i get as many printlines as i've got rooms, so the constructor is used for each room. But there are always no items in the inventory, like it seems. Here's the room class: @XmlRootElement(name = "room")@XmlAccessorType(XmlAccessType.NONE) public class Room { @XmlID @XmlAttribute private String id; private String description; private Inventory inventory; @XmlElementWrapper(name = "neighbours")@XmlElement(name = "neighbour") private Neighbour[] neighbours; private static final int directionNum = Direction.values().length; public Room() { neighbours = new Neighbour[directionNum]; } public String geDescription() { return "Description of the Room: " + description + ".\n" + getNeighbours() + "\nItems in this room: " + inventory.getAll(); } @XmlElement(name = "description") public void setDescription(String description) { this.description = description; } public void addItem(Item item) { inventory.add(item); } public Item getItem(String name) { return (Item) inventory.get(name); } public Item removeItem(String name) { return (Item) inventory.remove(name); } }[ So, what am i doing wrong? I don't get it. I always use the getAll() method of the room class to get every item in that room, if i change the room but there are always no items in the ArrayList, like it seems. Another question btw: I also would like to have two items of the same type in the ArrayList, like two health potions, one that gives back 10 health and one that gives 20 health back. How can i ensure then, if i use the get method, that i get the right item, or if use the remove method, that i remove the right item and not the false one? Edited by Crusher, 11 November 2015 - 10:46 PM.
http://forum.codecall.net/topic/80659-jaxb-api-xmlelementwrapper-no-items-in-the-array/
CC-MAIN-2020-05
en
refinedweb
Regular. Table – Regular Expression Characters In Python The match Function It matches RE pattern to string with optional flags. Syntax re.match(pattern, string, flags=0) Where pattern is a regular expression to be matched, 2nd parameter is a string that will be searched to match pattern at the starting of the string. e.g. it is the address at which it was created. import re print re.match("b", "intellipaat") Output None Special Sequence Characters The six most important sequence characters are: - \d: Matches any decimal digit. This is really the same as writing [0-9], but is done so often that it has its own shortcut sequence. - \D: Matches any non-decimal digit. This is the set of all characters that are not in [0-9] and can be written as [^0-9] - \s: Matches any white space character. White space is normally defined as a space, carriage return, tab, and non-printable character. Basically, white space is what separates words in a given sentence. - \S: Matches any non white space character. This is simply the inverse of the \s sequence above. - \w: Matches any alphanumeric character. This is the set of all letters and numbers in both lower- and uppercase. - \W: Matches any non-alphanumeric character. This is the inverse of the \w sequence above. Search Function It searches for primary occurrence of RE pattern within string with optional flags. Syntax re.search(pattern, string, flags=0) e.g. m = re.search('\bopen\b', 'please open the door') print m Output None This ouput is occurred because the ‘\b’ escape sequence is treated as a special backspace character. Meta characters are those characters which include /. >>> import re >>> m = re.search('\\bopen\\b', "please open the door") >>> print m Output <_sre.SRE_Match object at 0x00A3F058> Regular Expression Modifiers (Option Flags)
https://intellipaat.com/blog/tutorial/python-regular-expressions/
CC-MAIN-2020-05
en
refinedweb
More Videos Streaming is available in most browsers, and in the WWDC app. ongoing work with emerging network technologies. 1 - Build Better Apps with CloudKit Dashboard - Writing Energy Efficient Apps - Your Apps and Evolving Network Security Standards Tech Talks WWDC 2016 WWDC 2015 - Download Jeff Tu: Good afternoon, everyone. I'd like to welcome to you part two to Advances in Networking, a continuation of the session from the past hour. My name is Jeff Tu, and I'll be taking you through the first topic. In this session we'll discuss new URLSession developer API and enhancements, networking best practices, and other important technology areas in networking. Our first topic is new URLSession API. But before that, I'd like to review the underlying API we'll be talking about, which is URLSession. URLSession is an easy-to-use API for networking introduced in iOS 7 and OS X Mavericks. URLSession supports networking protocols like HTTP/2; HTTP/1.1; FTP; and custom streams with an overall emphasis on URL loading. If you provide it an HTTPS URL, it also automatically provides the encryption and decryption of data between you and the web server. Last year we deprecated NSURLConnection API. So we encourage any new app development to occur with URLSession. For more information on URLSession, I encourage you to review past WWDC sessions and other online documentation. Recall that there are different kinds of URLSession objects that you can create. The basic object you can create is a default configuration URLSession object. Default sessions have a behavior where a task either fetches a URL immediately; or if the device can't connect to the web server, fails immediately. URL loads can fail because the device isn't connected to the Internet or if the server you're trying to reach happens to be down. Those are just a couple of examples. Background URLSession objects, on the other hand, don't have this immediate fetch or fail behavior but are scheduled out of process and continually monitored for network connectivity to the server. There are more examples of a URLSession task failing because of bad connectivity. You might have no Internet connection, you might be in a theatre with your device in Airplane Mode or it should be in Airplane Mode. Perhaps you have a session object where you've disallowed cell usage but the user only has cell connectivity. Or the server might only be are polling every set period of time or depending on the user to tap or drag to refresh the UI. The problem is that these approaches add complexity to your apps and aren't always effective. SCNetworkReachability only tells you that you might be able to reach the server, not that you will. You, our developers, have been asking for an easier solution. Wouldn't it be easier to say, then, "Please fetch me this resource when the network is available"? We're happy to tell you about a new feature that lets you do this. We call this the URLSession Adaptable Connectivity API. This -- This API is available now on all platforms. By opting into this API, you tell URLSession that in the event that the task would fail because of a lack of connectivity that it should wait for a connection to the server instead of failing. How do you opt in? There's a new boolean property called waitsForConnectivity. Set this to true, and then you get the new behavior. I'd like to repeat what this property does. You go from the default behavior of load it now or fail now if I can't connect to load it now, but if I can't and would have failed because of a lack of connectivity, try again when I get a real chance to talk to the server. The API also waits when it encounters DNS failures as well since one network's DNS service might fail to resolve but another one may not. Please note that this boolean is a no op for background sessions, as background URLSession objects get this behavior automatically. We'll tell you later in this hour more about the differences. You may be wondering, "Can my code get a notification if it's in this waiting state?" You might want to have the app present other behavior while it's waiting to connect. For example, having an offline browsing mode or a mode that operates when the user is only on cell. If you would like to know when your app is in this waiting state, you can optionally implement the URLSession taskIsWaitingForConnectivity delegate method. Note that this delegate method is only called if you've opted into the waitsForConnectivity property with a true value. If you've done this, the delegate method itself will be called only one time or not at all if the task never had to wait. We recommend that your apps always opt into the waitsForConnectivity property. This is because even when you opt in, the task will still try to run immediately. The task will only wait if it can't connect to the server. There are rare exceptions to opting into the property, though. For example, if you had a URLSession task whose purpose was to buy stock at market price, you'd want that to run now or fail now and not wait until you had an Internet connection. I'd also like to mention that when you opt into waitsForConnectivity, the timeout interval for request timer starts only after you've connected to the server. Timeout interval for resource, however, is always respected. Let's summarize how we would use the API and then go through a code example. The main thing is to opt into the waitsForConnectivity property. You would create and resume the URLSessionTask as before. If the device can't connect to the server, we'd call a delegate callback if you implemented it and only once. All other URLSession methods are still called same as before. Remember, though, that this API only has an effect for non-background sessions. Let's go through a sample code. First create a session configuration object and make one for default session type. Opt into the waitsForConnectivity property. Create the session object and set the URL you want to load. Use the session object to create a task object. And finally, resume the task to get it started. Even with adaptable connectivity, your request may still fail for other reasons. For example, you could connect to the server, but a new data center employee might unplug a server, cause the network connection to drop, and all your apps on your phone might disappear. Or your device connects to the server and sends an HTTP request, but there's so much traffic that the request times out. For situations like these, we'd like you to consult online resources that go into more detail on what you can do. Retrying network loads in a tight loop, though, is almost always a bad idea. You asked for a better way to load network resources. Better than polling for network connectivity to the server and better than reachability API that won't guarantee a connection to the server. Let URLSession do the work for you. Opt into the waitsForConnectivity Adaptable Connectivity API. If you opt in, the request will still run immediately with no performance penalty and only wait if you can't connect to the server. Once it can connect, your URLSession task behaves just like it did before. Continuing our theme of what's new, I'd like to pass the mic to my colleague, Jeff Jenkins. Jeff Jenkins: Thanks, Jeff. Well, good afternoon. Hope you guys are having a great WWDC. And I'm exciting to be here and thrilled to talk to you a little bit more about some enhancements we've made to the URLSessionTask API. Now, first I want to spend a little bit of time talking about background URLSession. We haven't talked a whole lot about it, so let me give you a little bit of background on that. The background session URLSession API allows your application to perform networking even if your process, your application isn't running. We monitor the system conditions, CPU, battery, all sorts of things to really find of right time to do your networking tasks. Now, of course, if you implement various delegate methods, we're going to wake up your app and call those delegate callbacks so that you can handle that information. And, of course, we're going to make sure your app is running when your task completes so that you can then process that data. Now, one of the great use cases for background URLSession is taking advantage of another feature on the system, which is background app refresh. Now, what this really does is allows your application to have the most current, the freshest data, right? There's nothing more frustrating than pulling your device out, launching an app, and the first thing you're greeted with is some sort of spinner, right? You're waiting for this application to start pulling down data. You want that data right away. You want to be able to get that data to your user so your user is excited and happy to use your app. Background app refresh is a way to do this. It's a way to tell the system that, "Hey, in the future I want to be able to be launched so that I can refresh my data so I have the most important information," maybe stock information, or weather forecast, other important things that your app does. Now, this applies to applications, as well as watchOS complications. And if you want to learn a little bit more in depth about background app refresh, you can go back to 2013 WWDC, as well as last year's 2016 WWDC and look at these sessions for more details. So let's look at background app refresh in action; what is it really doing? And to do that, we kind of need to look at the state of your application. We're interested in three states here: A running state, suspended state, or a background state. Now, with your app running, you're going to opt into background app refresh. You're going to say, "In the future, run my app, make sure my app runs so that I can get the latest information." And then your process could be suspended. And in the future your process is now running, your app is now going to be able to ask for new data. And like good developers, this app is using URLSession API. In fact, it uses a background URLSession. It creates a URLSession task and schedules this task to run and grab the data that your application needs. Now, your process could go away at this point, but then at some point URLSession is going to run your task and it's going to run it to completion hopefully if everything goes well. And you're going to get the data. So we're going to background launch your application and allow you to process that completed task and process that data that we've fetched for you. And then at some point the user is going to launch your app, it's going to come foreground, and boom, they've got the freshest data there. So this is great. But we looked at this flow and said, "Hmm, maybe there's something we can do to help our developers improve their applications on our platforms." And we think we can do something for you. The first problem that we want to solve is we noticed there's an extra background launch that had to happen just for you to create the URLSession task. And as we all know, anytime your process is launched, what does that do? It impacts battery life, requires CPU burden. So that's not necessarily great for the device if we're doing extraneous work, and we really don't need to be doing that. The other problem we'd like to solve are stale network requests, right? You're asking URLSession to do work. And at some point in the future, that work is going to complete. Well, what happens between when you ask for the work to be done and when it actually got done? Maybe there was some change in context and that original request doesn't make sense anymore. So we need to give you an opportunity to really, if there's a context change, let us know about that and get rid of these stale network requests. Because there's nothing worse than getting data and going, "I can't do anything with it," and throw it away. And the last problem we think we could help you with is helping us know how to best schedule your URLSession tasks. When is the most optimal, best time in the system to be able to run your task so that we can get your data in the most efficient way for you to display that so that your users are excited and delighted by that data? Let's look at what we did. We're introducing the URLSessionTask scheduling API. Now, this is available across all of our platforms. It's available in the beta builds that you have received here at WWDC. And we encourage you to take a deep look at this. Now, what we've done first is we provided a new property. This is a property on URLSessionTask object. It is called earliestBeginDate. And what you're going to do here is provide a date to us in the future when you want your task to be eligible for running. I use that word eligible because it's important. It doesn't mean that this is the point in time when your task will run, it will do networking; it's just telling the system, "I would like my task to be eligible so that it can run." And we're still bound by system policies as to when we can make the networking happen for this task. It's only applicable to background URLSessions and tasks built off of background URLSession. Let's take a look at how this property in conjunction with other existing properties really allows you to do some fine-grain scheduling. So you'll create a URLSessionTask and, of course, you'll call resume on it so that we know that this task can now be put into the queue so that work can happen. You'll -- and at this point the task will be in a waiting state. We're waiting for the earliestBeginDate to happen. And as soon as that is hit, that task becomes eligible for running. Now, you can use the existing timeoutIntervalForResource to really control how long your app is willing to wait for at that resource to get loaded, right? You might set some amount of time to say, "After this point in time, this resource isn't interesting to me anymore." And that interval of time covers from resume to when that timeout happens based on the value you place in timeoutIntervalForResource. Now, I want to go back to the original background app refresh workflow that we looked at earlier. Right? We noticed there was a couple of background launches that occurred. But with this new API we're able to get rid of one of those. So the way that your app will work is while your running, you're going to create a URLSessionTask; you'll opt into our new scheduling API by setting an earliestBeginDate; then your process can go to sleep. We're going to complete the work for you. And when that work is available, we're going to background launch you that one time and allow you to process the resulting data. And then when the user brings your app to the foreground, boom, it's got the freshest, most current data. And we've been able to solve that one problem of that additional background app launch. And so it's better performing on the system, and we think that's great. So that's problem number one solved. Let's look at problem number two, the stale network fetches. We want to give you an opportunity to alter future requests. So you might have given us a request, but the context might change. We've introduced a new delegate callback on a URLSessionTaskDelegate titled willBeginDelayedRequest. With this delegate, you'll be able to be called at the moment when your task is about to start networking. So it is you've told this that the task is eligible and the system now has decided yes, this is the right time to do the networking. We're going to call this delegate method, if you implement it, and allow you to make some decisions about this task. Now, this will only be called one, if you implement it; also, if you opt into the scheduling API by setting an earliestBeginDate. And, again, this is only available on background URLSessions. And as I mentioned, this is an optional delegate. And I want to take a second here to have you really think about this because it's important, this delegate method. As with all delegate methods, they're all opt into. But this one will cause some interesting side effect that I'll show you in a minute. You really need to think about, "Can my application determine context, the viability of a request in the future?" Now, there's a completion handler that's passed to this delegate method. And you need to give a disposition to URLSession. You need to tell us does the original request, does it still make sense? Go ahead and proceed. Or maybe the context has changed enough and you need to make some modifications, maybe a different URL or maybe a header value's different and you want to go ahead and modify that request at this point in time right before the networking happens. Or you might make the decision this request is just useless at this point, cancel. We don't want to do stale requests. So now if we go back to this workflow and we go back to my comment about really thinking about this delegate method, you will see that we're kind of back to that original workflow where there's two background launches in order to satisfy this URL task. Right? But we have to stop and think about that. What is more expensive, performing a stale network load or a mere application background launch? It is way more expensive to the system to do stale loads, get all this data, and then decide I don't need it and pitch it. Okay? So we want you to really think about this new delegate method and whether your application has the ability to really understand the viability of your requests in the future. Hopefully that make sense to you. Now, the third problem we want to solve is how do we schedule your request in a most optimal, most intelligent way in our system? There's some information that in URLSession we just don't know about. So we're providing a little bit of change to our API to allow you to explain to us some information about your requests and also about your responses. We're giving you two properties, the first one is countOfBytes ClientExpectsToSend, and the second one is countOfBytes ClientExpectsToReceive. We think you know more about your requests. Maybe you have a stream body that you want to attach to a request, we don't know about that. You probably know the size of that. We don't know about your servers and the size of data your servers' shipping back. We believe you have some insight to that. And that will give us hints as how we can in a most optimal, intelligent way schedule your tasks. If you don't know, well, then you can always specify NSURLSessionTransferSizeUnknown. So that solves the third problem. Let's take a look at how this new API works in code. It's very easy to use. First thing we're going to do is create a URLSession background configuration. We're then going to create a session based on that configuration. Once we have that, we're going to now generate a URLRequest, specify the URL we want to go to, maybe set a header value, something that makes sense for your task. Again, this is just an example. And now we're going to create a task that encapsulates that request on that session. And we're going to opt into the new scheduling API by setting the earliestBeginDate property and give us a date. In this example we say two hours from now I want this task to be eligible to be run. And I'm also going to give some hints to URLSession and say, "This is a small request, there's no body, I've just set one header, maybe 80 bytes." And then my server probably is going to send about a 2K response to this. And with all URLSession tasks, make sure you call resume. Now, how does the new delegate work? Well, we decided I know context. I can make some intelligent decisions about my networking tasks in the future. So I've implemented willBeginDelayedRequest. So in our example here what I've decided to do is to modify the request. I'm going to take the original request, create a new updatedRequest. I'm going to maybe change a value in the header that makes more sense now that this task is actually going to do some networking. Time has passed, I have new information. I put that information on that task. And then I'm going to call the completionHandler and use a disposition of useNewRequest and passed it that new request. If you take a look at our header file, you can see other dispositions available to you in this completionHandler call. So let me recap the scheduling API that we're introducing here. Background URLSession is an awesome API for doing networking that allows your application to not even be running and have this networking happen for you. Our new scheduling API will allow you to delay your requests so that they can, you know, obtain and pull down the freshest information for your application. And it's really we give you an opportunity to alter those things based on the context and the time at when the networking is actually going to happen. The other part of this API change is to allow you to give hints to us so that we can be super intelligent and make these tasks run at the most optimal time on these devices. Now, I'd like to turn the time over to Stuart Cheshire, an Apple distinguished engineer. And thank you for your time. Stuart Cheshire: Thank you, Jeff. Now we're going to talk about enhancements in URLSession. We have four things to cover, let's move through them. Often you want to show a progress bar to indicate to the users how progress is being made. And right now this is a little bit cumbersome. There are four variables that you need to monitor with Key-value Observing. And sometimes the countOfBytesExpectedToReceive or Send is not always available. The good news now in iOS 11 is URLSessionTask has adopted the ProgressReporting protocol. You can get a progress object from the URLSessionTask, and that gives you a variable fractionCompleted, which is a number in the range zero to one. You can also provide strings to give more detail about what the operation is. You can attach that progress object to a UIProgressView or an NSProgressIndicator to get an automatic progress bar. You can also combine multiple progress objects into a parent progress object when you're performing multiple tasks, such as downloading a file, decompressing a file, and then handling the data. So that makes your progress reporting much simpler. The binding between a URLSessionTask and the progress object is bidirectional. So if you suspend a URLSessionTask, that is the same as pausing the progress object. If you pause the progress object, that is the same as suspending the URLSessionTask. We now have support for the Brotli compression algorithm. In tests this compresses about 15% better than gzip, which results in faster network access. Like other new compression schemes, this is only used over encrypted connections to avoid confusing middle boxes that might not recognize this compression. Because Safari uses URLSession, that's also means Safari gets the benefit of this new Brotli compression algorithm. And many major websites have already announced support for Brotli in their web service. Our next topic is the Public Suffix List. The Public Suffix List is sometimes called the effective top-level domain list. And this is important for determining where administrative boundaries occur in the namespace of the Internet. One thing we don't want to allow is for a website to set a cookie on the com domain, which is then accessible to any other dot com company. So you might be tempted to make a rule that you can't set cookies on top level domains, only on second level and lower. But domains are named differently in different parts of the world. In America, Apple.com and FileMaker.com are different companies. But in Australia many, many companies are under com.au, and that doesn't make them all the same company. So the Public Suffix List is a file of rules and patterns that tells software how to judge where administrative boundaries occur. This is used for partitioning cookies, and it's used by the URLSession APIs. And if you use the HTTPCookieStorage APIs directly, it's supported there, too. We used to update this in software updates, but now with the more rapid progress in creating top level domains, we've changed to doing this over the air. We could push a new list every two weeks if we wanted to. URLSessionStreamTask is the API you would use if you just want a byte stream. If you're not doing HTTP Style Gets but say you want to write a mail client, URLSessionStreamTask gives you a simple byte stream. It supports upgrading to TLS with the STARTTLS option. If you have existing code that is written using the old NSInputStream and NSOutputStream APIs, you can extract those objects from a URLSessionStreamTask to use your old code. But for any new code you're writing, we strongly recommend that you use the new native URLSessionStreamTask APIs. We announced this a couple of years ago at WWDC 2015. What we have new for you now is automatic navigation of authenticating proxies. If the proxy requires credentials, then we will automatically extract those from the key chain or promptly the user on your behalf. So we've covered the enhancements for URLSession, let's move on. Thank you. Tips and hints that we've learned from our years helping developers. Number one rule: Don't use BSD Sockets. And by the same token, we encourage you not to embed libraries that are based on BSD Sockets. Because we do lots of work, as you've been hearing today, to provide benefits to your applications. We provide Wi-Fi Assist so that your application succeeds instead of failing when Wi-Fi isn't working. We provide techniques to minimize CPU use and minimize battery use to give users longer battery life. We have the ability to do tasks in the background when your application isn't even running. And the third-party libraries just can't do anything when they're not in memory running. And final bit of advice: Always try to use connect-by-name APIs as opposed to APIs where you resolve a name to an IP address and then connect to the address. We talked earlier about the requirement for IPv6 support. And the reason that almost all of your apps worked perfectly is because when you use connect-by-name APIs, you don't get involved with the IP addresses. And if you're not involved with the IP address, you don't need to care whether it's v4 or v6, it just works. Another question we often get is about the timeout values. So I want to recap that. The timeoutIntervalForResource is the time limit for fetching the entire resource. By default, this is seven days. If the entire resource has not been fetched by that time, it will fail. timeoutIntervalForRequest is a timer that only starts once the transfer starts. Once it starts, if your transfer stalls and ceases making progress for that time-out value, that when that timer will fire. We have seen developers that take their old NSURLConnection code and convert it to the new URLSession code by mechanically making a URLSession for every old NSURLConnection they used to have. This is very inefficient and wasteful. For almost all of your apps what you want to have is just one URLSession, which can then have as many tasks as you want. The only time you would want more than one URLSession is when you have groups of different operations that have radically different requirements. And in that case you might create two different configuration objects and create two different URLSessions using those two configuration objects. One example is private browsing in Safari where each private browsing window is its own separate URLSession so that it doesn't share cookies and other states with the other sessions. Most apps can just have one statically-allocated URLSession, and that's fine. But if you do allocate URLSessions dynamically, remember to clean up afterwards. Either finish tasks and invalidate or invalidate and cancel. But if you don't clean up, you'll leak memory. We get developers asking us about convenience methods and delegate callbacks. Delegate callbacks give you detailed step-by-step progress information on the state of your task. The convenience methods, like the name suggests, are a quick and easy way of using the API. With convenience methods you don't get all the intermediate delegate callbacks, you just get the final result reported to the completionHandler. Don't mix and match both on the same URLSession, pick one style and be consistent. If you're using the completionHandler, you will not get the delegate callbacks with two exceptions. If networking is not currently available and the task is waiting for connectivity, you'll be notified of that in case you want to show some indication in your UI. The other delegate method you may get notified is the didReceive AuthenticationChallenge. So here's a summary of the options available to you. Doing URLSessionTasks in your process with waits for connectivity as we recommend, the task will start immediately if it can; or if it can't, it will start at the first possible opportunity. You also have the option of doing tasks in the background. And you can do background discretionary tasks, which will wait until the best time in terms of battery power and Wi-Fi networking. Now I have a couple of ongoing developments to talk about. I'm sure many people in this room have heard about TLS 1.3. TLS, Transport Layer Security is the protocol that encrypts your data on the network to prevent eavesdroppers from seeing it and perhaps as importantly to make sure that you have connected to the server you intended to connect to. TLS 1.2 is very old at this stage. It has a number of problems that have been discovered. And TLS 1.3 is almost finished. That standard is not quite finalized. Apple is participating in that IETF working group, and we expect that to be finished by the end of this year. In the meantime, we do have a draft implementation if you want to experiment with it right now. And if you check out the security session from this Apple Developer Conference, you can learn how to experiment with that. Another thing you may have heard of is QUIC. QUIC is a new transport protocol designed to experiment with new ideas that are hard to do with TCP. QUIC started out as an experiment by some Google engineers, and it was a very successful experiment. They learned a lot. Some ideas were good, some turned out not to work as well as they hoped. Those engineers have taken those lessons they learned to the IETF. We have formed a new working group to develop the IETF standard QUIC protocol. Apple is also participating in that working group. That is not nearly as far long as TLS is, but that is also making good progress. Before we finish, one other thing we should talk about, Bonjour. Fifteen year ago at this very convention center in San Jose, Steve Jobs announced Bonjour to the world. And I got the opportunity to tell you all how it worked. A lot has happened since then. Since we launched it in 2004, we brought out Bonjour for Windows, for Linux. We had Java APIs. The next year Mac OS X 10.4 introduced wide-area Bonjour to complement the local multicast-based Bonjour that was in the original Mac OS 10.2 launch. The same year the Linux community came out with a completely independent GPL-licensed implementation of Bonjour called Avahi. A couple of years after that, Apple shipped Back to My Mac, which is built on the wide-area Bonjour capabilities introduced in 10.4. And in 2009 we brought out the Bonjour Sleep Proxy, which let you get Back to Your Mac at Home, even when it was asleep to save power. In the years since then, Android adopted with Bonjour with their own native APIs in 2012. That was in API Level 16 for those of you paying attention. And a couple of years ago, Windows 10 added their own native Bonjour support. Now, I know a lot of people in this room are well aware of the history. We know about the major OS vendors adopting Bonjour. But something else happened that surprised even me: Bonjour started showing up in a lot of other places. And I want to illustrate this with just a little personal anecdote. I recently bought a new house. And as part of the process of buying a new house, you often end up buying a bunch of new stuff. And I started adding things to my home and connecting things to the network. And I started finding a bunch of stuff showing up in Bonjour. Now, I bought a new printer; it had Bonjour. I bought some Access Network security cameras, they had Bonjour. That didn't surprise me because we know printers and network cameras were among the first devices to adopt Bonjour. But then I got a surround sound amplifier and it had Wi-Fi, and it had an embedded web server with Bonjour. Now, you can set up the amplifier with the TV and the remote control, but naming the inputs with up, down, left, right on the remote control one character at a time is really tedious. Being able to do this on my laptop or on my 27-inch iMac with a keyboard and a mouse is such a nicer way to set up a new piece of equipment. I bought another amplifier from different company, it also had Bonjour. I got solar panels on the roof of the house to save on the electricity bill, the inverter has Wi-Fi with an embedded web server advertised with Bonjour. So now with one click, I can see a graph of how much power I've produced in the day. My most recent purchase was an irrigation controller to control the sprinklers that water my lawn. It has Wi-Fi with an embedded web server advertised with Bonjour. Compared to trying to program your garden sprinkles with a two-digital LCD display and the plus minus buttons, this is such a glorious experience to see it all on my big iMac screen at the same time. So thank you to all you device makers who are making these wonderful products. For the app developers in the room, how does this affect you? The IETF DNS Service Discovery Working Group continues to make progress. We have new enhancements to do serve discovery on enterprise networks where multicast is not efficient and on new mesh network technologies that like Thread that don't support multicast well. The good news for app developers is this is all completely transparent to your apps. The APIs haven't changed because we anticipated these things even 15 years ago. The only thing to remember is when you do a browse call and you get back a name, type, and domain, pay attention to all three. You may be used to see the domain always being local, but now it may not be local. So when you call resolve, make sure to pass the name, type, and domain you got from the browse call. And for the device makers out there, don't forget to support link-local addressing. Link-local addressing is the most reliable way to get to a device on the local network because if you can't configure it, you can't misconfigure it. So to wrap up, in part one we talked about ongoing progress in ECN. It's now supported in clients and servers, the stage is set. Any ISP can now see an immediate benefit for their customers by turning on ECN at the key bottleneck links. Continue testing your apps on NAT64. Mostly known use there, we're very happy everything is going smoothly. We have a move to user space networking, which also doesn't change the APIs. But you may notice when you're debugging and looking at stack traces, you may see symbols in the stack trace you're not used to. You may see differences in CPU usage. We wanted to you to be aware of that so it didn't surprise you. We have new capabilities in the network extension framework. And the big news, we have multipath TCP as used by Siri now available for your apps to use as well. Thank you. In part two, we covered some enhancements in URLSession, especially the waitsForConnectivity, which is really networking APIs done the way they always should have done. When you ask us to do something, we should just do it, not bother you with silly error messages that it can't be done right now. You ask us, we will do it when we can. I gave some tips about best practices and news about ongoing developments. You can get more information about this session on the web. We have some other sessions we recommend you hear that you'll probably find interesting. Thank you. Looking for something specific? Enter a topic above and jump straight to the good stuff. An error occurred when submitting your query. Please check your Internet connection and try again.
https://developer.apple.com/videos/play/wwdc2017/709
CC-MAIN-2020-05
en
refinedweb
A plugin that renders the Scheduler’s button that is used to navigate to the today’s date Use the following statement to import a plugin with embedded theme components: import { TodayButton } from '@devexpress/dx-react-scheduler-material-ui'; You can import the themeless plugin if you want to use custom components: import { TodayButton } from '@devexpress/dx-react-scheduler'; Properties passed to the component that renders the today button. Additional properties are added to the component’s root element.
https://devexpress.github.io/devextreme-reactive/react/scheduler/docs/reference/today-button/
CC-MAIN-2020-05
en
refinedweb
- 04 Apr, 2014 1 commit - Jonathon Duerig authored ...bringing assign kicking and screaming into the century of the fruit). - 10 Nov, 2010 1 commit - 19 Aug, 2010 1 commit. - 13 May, 2010 1 commit Big commit with a lot of changes to add the rspec parser base class as well as for version 1 and version 2. These changes are not throughly tested yet, and the extension support hasn't yet been integrated into the main parser. The v1 and v2 support has. - 20 May, 2009 1 commit - 20 Oct, 2008 1 commit - Ryan Jackson authored If any of these changes break something, please fix it and let me know. - 01 Mar, 2007 1 commit 4.x . Also, set things up so I only have to write the conditional in one place, port. - 11 Aug, 2004 1 commit branch. - 08 Jun, 2004 1 commit disambiguate from boost's random() function. But, for some reason, with 3.x, random() doesn't live in the std:: namespace. Macro-ize it. - 03 Jun, 2004 1 commit. - 15 Apr, 2004 1 commit internal errors. - 08 Mar, 2004 1 commit. */ - 28 Jan, 2004 1 commit Do some scoring, not just violations, for stateful features and desires - this does a better job of nudging assign towards good solutions. - 10 Oct, 2003 1 commit they mean. - 04 Sep, 2003 1 commit. - 10 Jul, 2003 1 commit - 26 Jun, 2003 1 commit turns out that getting iterators to the STL hash_* data structures is really slow, so for some that won't be very big, use the non-hash version. Buys something like a 30% speedup for large topologies. - 20 Jun, 2003 1 commit some independant functionality off into new files, and reduce its use of globals, which can be very confusing to follow. I didn't get as far as I had hoped, but it's a good start. -. -. -. - 10 Jan, 2003 1 commit - -. - 24 Apr, 2001 1 commit - 25 Aug, 2000 1 commit
https://gitlab.flux.utah.edu/emulab/emulab-devel/commits/03eaec1ddae687d408f1470ec6a28766a349a99d/assign/common.h
CC-MAIN-2020-05
en
refinedweb
This is a playground to test code. It runs a full Node.js environment and already has all of npm’s 400,000 packages pre-installed, including glob-interceptor with all npm packages installed. Try it out: require()any package directly from npm awaitany promise instead of using callbacks (example) This service is provided by RunKit and is not affiliated with npm, Inc or the package authors. Bring your own file system for globbing by proxying the cache This library uses some implementation detail of the glob package by adding Proxies to specific cache options. Since the cache is always checked before executing a certain file system operation, this package completely bypasses the underlying fs module. By its very nature a cache is synchronous. Therefore all proxied file system operations need to return synchronously. That's why this package only makes sense for glob.sync. It also works for the async counterpart, but since file system access is sync behind the scenes, this is rather useless. You can create an interceptor by calling createGlobInterceptor(fs) and providing an object that implements the following interface for file system access: /** A minimal abstraction of FileSystem operations needed to provide a cache proxy for 'glob'. None of the methods are expected to throw. */ export interface GlobFileSystem { /** Returns `true` if the specified `path` is a directory, `undefined` if it doesn't exist and `false` otherwise. */ isDirectory(path: string): boolean | undefined; /** Returns `true` if the specified `path` is a symlink, `false` in all other cases. */ isSymbolicLink(path: string): boolean; /** Get the entries of a directory as string array. Will only be called on paths where `isDirectory` returns `true`*/ readDirectory(dir: string): string[]; /** Get the realpath of a given `path` by resolving all symlinks in the path. */ realpath(path: string): string; } For convenience there is a utility function fromNodeLikeFileSystem, that returns an instance of GlobFileSystem for a given file system compatible with Node's fs module. import {Volume} from "memfs"; import {fromNodeLikeFileSystem, createGlobInterceptor} from "glob-interceptor"; const interceptor = createGlobInterceptor(fromNodeLikeFileSystem(Volume.fromJSON({/* your in-memory files go here*/}))); With the previously created interceptor you can now invoke glob.sync to intercept all file system interaction: let result = glob.sync('**' /* any pattern you want */, {nodir: true /* any options you like */, ...interceptor}); // or if you are targeting a runtime without object spread, you can use `Object.assign` instead result = glob.sync('**' /* any pattern you want */, Object.assign({nodir: true /* any options you like */}, interceptor)); You can reuse an interceptor as often as you want or need. Note that interceptor contains the following properties: cache, statCache, realpathCache, symlinks. If you add one of these properties to your options object, they will be overridden by the ones from interceptor. If you explicitly override any of these properties with your own, the interceptor will not work as expected. Unless your implementation of GlobFileSystem does some caching, it will always execute the underlying binding. There's a utility function memoizeFileSystem to add caching to your GlobFileSystem: import {fromNodeLikeFileSystem, createGlobInterceptor, memoizeFileSystem} from "glob-interceptor"; import fs from "fs"; const interceptor = createGlobInterceptor(memoizeFileSystem(fromNodeLikeFileSystem(fs))); MIT © Klaus Meinhardt
https://npm.runkit.com/glob-interceptor
CC-MAIN-2020-05
en
refinedweb
Has anyone written a script to trim fastq files? I read in another post somewhere that these scripts are a dime a dozen. So, could we share them to prevent people from eternally reinventing the wheel? Here's mine so far, which is a modified version of a modified version of Richard Mott's trimming algorithm used in CLC Genomics Workbench: def mott(dna, **kwargs): ''' Trims a sequence using modified Richard Mott algorithm ''' try: limit = kwargs['limit'] except KeyError: limit = 0.05 tseq = [] S = 0 pos = 0 for q, n in zip(dna.qual, dna.seq): pos += 1 #if q == 'B': continue Q = ord(q) - 64 p = 10**(Q/-10.0) s = S S += limit - p if S < 0: S = 0 #print '%-3s %-3s %-3s %-3s %4.3f %6.3f %7.3f' % (pos, n, q, Q, p, limit - p, S), if s > S: break else: tseq.append(n) dna.seq = ''.join(tseq) dna.qual = dna.qual[0:len(dna.seq)] return dna I should mention that dna is an object I've created which has some properties: seq is the sequence and qual is the quality scores (their ascii representation). This algorithm only works for Illumina 1.5 PHRED scores. I'm using it in a pipeline for metagenome analysis of Illumina data. I'm also writing a median sliding window algorithm to see if that works better. what's wrong with the stuff in fastx toolkit? I'd also suggest that whilst the code golf is fine, and posting code to ask for help/suggestions is fine, dumping code snippets here when there are a dozen implementations elsewhere is not what BioStar should be used for. Stick it on github. I don't know who Richard Mott is but it would be helpful if you describe algorithms in English, e.g. "this trims all base pairs to the right of the first position where quality drops more than 'limit'"
https://www.biostars.org/p/1923/
CC-MAIN-2020-05
en
refinedweb
Evolution of a Python Programmer import operator f = lambda n: reduce(operator.mul, range(1,n+1))...Read more » This is a tool that for writing AST-based refactorings for large Python codebases. It uses lib2to3 to convert source code to an AST, run a visitor over it that modifies the tree, and convert the tree back into source code. Read more import operator f = lambda n: reduce(operator.mul, range(1,n+1))...Read more » With so many photos likely to be taken of the solar eclipse, it will be a challenge to align them to each other to compare them. Python & SunPy to the rescue! » sanic-graphql-example - Sanic using Graphsql + SQLAlchemy example... (more…)Read more » This notebook covers the basics of probability theory, with Python 3 implementations. (You should have some background in probability and Python.)... (more…)Read more »
https://fullstackfeed.com/a-tool-for-writing-python-ast-based-refactorings/
CC-MAIN-2020-05
en
refinedweb
First solution in Clear category for The Stones by tigerhu3180 def stones(pile, moves): results = [True] + [False] * pile for i in range(1, pile + 1): leave = [0 if i - j <= 0 else i - j for j in moves] if all([results[s] for s in leave]): # for the next player #The current player loses if all of the results for the next player are True. #The current player can win if there is at least one choice that is False. results[i] = False # for the current player else: results[i] = True # for the current player return 1 if results[pile] == True else 2 if __name__ == '__main__': print("Example:") print(stones(17, [1, 3, 4])) #These "asserts" using only for self-checking and not necessary for auto-testing assert stones(17, [1, 3, 4]) == 2 assert stones(17, [1, 3, 4, 6, 9]) == 1 assert stones(99, [1]) == 2 print("Coding complete? Click 'Check' to earn cool rewards!") Sept. 27, 2018 Forum Price Global Activity ClassRoom Manager Leaderboard Coding games Python programming for beginners
https://py.checkio.org/mission/the-stones/publications/tigerhu3180/python-3/first/share/ad46b5a584dedc245171459a59072dd5/
CC-MAIN-2020-05
en
refinedweb
Facade in Python Facade is a structural design pattern that provides a simplified (but limited) interface to a complex system of classes, library or framework. While Facade decreases the overall complexity of the application, it also helps to move unwanted dependencies to one place. Usage of the pattern in Python Complexity: Popularity: Usage examples: The Facade pattern is commonly used in apps written in Python. design pattern. It focuses on answering these questions: - What classes does it consist of? - What roles do these classes play? - In what way the elements of the pattern are related? main.py: Conceptual Example from __future__ import annotations class Facade: """. """ def __init__(self, subsystem1: Subsystem1, subsystem2: Subsystem2) -> None: """ Depending on your application's needs, you can provide the Facade with existing subsystem objects or force the Facade to create them on its own. """ self._subsystem1 = subsystem1 or Subsystem1() self._subsystem2 = subsystem2 or Subsystem2() def operation(self) -> str: """ The Facade's methods are convenient shortcuts to the sophisticated functionality of the subsystems. However, clients get only to a fraction of a subsystem's capabilities. """ results = [] results.append("Facade initializes subsystems:") results.append(self._subsystem1.operation1()) results.append(self._subsystem2.operation1()) results.append("Facade orders subsystems to perform the action:") results.append(self._subsystem1.operation_n()) results.append(self._subsystem2.operation_z()) return "\n".join(results) class Subsystem1: """ The Subsystem can accept requests either from the facade or client directly. In any case, to the Subsystem, the Facade is yet another client, and it's not a part of the Subsystem. """ def operation1(self) -> str: return "Subsystem1: Ready!" # ... def operation_n(self) -> str: return "Subsystem1: Go!" class Subsystem2: """ Some facades can work with multiple subsystems at the same time. """ def operation1(self) -> str: return "Subsystem2: Get ready!" # ... def operation_z(self) -> str: return "Subsystem2: Fire!" def client_code(facade: Facade) -> None: """ The client code works with complex subsystems through a simple interface provided by the Facade. When a facade manages the lifecycle of the subsystem, the client might not even know about the existence of the subsystem. This approach lets you keep the complexity under control. """ print(facade.operation(), end="") if __name__ == "__main__": # The client code may have some of the subsystem's objects already created. # In this case, it might be worthwhile to initialize the Facade with these # objects instead of letting the Facade create new instances. subsystem1 = Subsystem1() subsystem2 = Subsystem2() facade = Facade(subsystem1, subsystem2) client_code(facade) Output.txt: Execution result Facade initializes subsystems: Subsystem1: Ready! Subsystem2: Get ready! Facade orders subsystems to perform the action: Subsystem1: Go! Subsystem2: Fire!
https://refactoring.guru/design-patterns/facade/python/example
CC-MAIN-2020-05
en
refinedweb
On-the-fly code quality analysis is available in C#, VB.NET, XAML, ASP.NET, JavaScript, TypeScript, CSS, HTML, and XML. ReSharper will let you know if your code can be improved and suggest automatic quick-fixes. Multiple code editing helpers are available, such as extended IntelliSense, hundreds of instant code transformations, auto-importing namespaces, rearranging code and displaying documentation. You don't have to write properties, overloads, implementations, and comparers by hand: use code generation actions to handle boilerplate code faster. Instant fixes help eliminate errors and code smells. Not only does ReSharper warn you when there are problems in your code but it provides quick-fixes to solve them automatically. Apply solution-wide refactorings or smaller code transformations to safely change your code base. Whether you need to revitalize legacy code or put your project structure in order, you can lean on ReSharper. Use code formatting and cleanup to get rid of unused code and ensure compliance to coding standards. Navigation features help you instantly traverse your entire solution. You can jump to any file, type, or member in your code base in no time, or navigate from a specific symbol to its usages, base and derived symbols, or implementations. Other ReSharper features include a powerful unit test runner, various kinds of code templates, debugging assistance, a project dependency viewer, internationalization assistance, as well as language-specific features for ASP.NET/ASP.NET MVC, XAML and other technologies. All keyboard shortcuts provided in the "Features" section are taken from 'Visual Studio' keyboard scheme. For details on ReSharper's two default keymaps, see Documentation and Demos.
https://www.jetbrains.com/resharper/features/index.html?linklogos
CC-MAIN-2020-05
en
refinedweb
Asynchronous parallel SSH library Project description Asynchronous parallel SSH client library. Run SSH commands over many - hundreds/hundreds of thousands - number of servers asynchronously and with minimal system load on the client host. Native code based client with extremely high performance - based on libssh2 C library. Contents Installation pip install parallel-ssh Usage Example See documentation on read the docs for more complete examples. Run uname on two remote hosts in parallel with sudo. from __future__ import print_function from pssh.clients import ParallelSSHClient hosts = ['myhost1', 'myhost2'] client = ParallelSSHClient(hosts) output = client.run_command('uname') for host, host_output in output.items(): for line in host_output.stdout: print(line) Native client. The new client will become the default and will replace the current pssh.pssh_client in a new major version of the library - 2.0.0. The paramiko based client will become an optional install via pip extras, available under pssh.clients.miko. For example: from pprint import pprint from pssh.clients.native import ParallelSSHClient hosts = ['myhost1', 'myhost2'] client = ParallelSSHClient(hosts) output = client.run_command('uname') for host, host_output in output.items(): for line in host_output.stdout: print(line) See documentation for a feature comparison of the two clients. Native Code Client Features - Highest performance and least overhead of any Python SSH libraries - Thread safe - makes use of native threads for blocking calls like authentication - Natively non-blocking utilising libssh2 via ssh2-python - no monkey patching of the Python standard library - Significantly reduced overhead in CPU and memory usage Exit codes Once either standard output is iterated on to completion, or client.join(output) is called, exit codes become available in host output. Iteration ends only when remote command has completed, though it may be interrupted and resumed at any point. for host in output: print(output[host].exit_code) The client’s join function can be used to wait for all commands in output object to finish: client.join(output) Similarly, output and exit codes are available after client.join is called: from pprint import pprint output = client.run_command('exit 0') # Wait for commands to complete and gather exit codes. # Output is updated in-place. client.join(output) pprint(output.values()[0].exit_code) # Output remains available in output generators for host, host_output in output.items(): for line in host_output.stdout: pprint(line) There is also a built in host logger that can be enabled to log output from remote hosts..join(client.run_command('uname'), consume_output=True) SFTP SFTP is supported natively. suffixed with the host’s name with the copy_remote_file function. Directory recursion is supported in both cases via the recurse parameter - defaults to off. See SFTP documentation for more examples. Design And Goals parallel-ssh’s design goals and motivation are to provide a library for running non-blocking asynchronous SSH commands in parallel with little to no load induced on the system by doing so with the intended usage being completely programmatic and non-interactive. To meet these goals, API driven solutions are preferred first and foremost. This frees up developers to drive the library via any method desired, be that environment variables, CI driven tasks, command line tools, existing OpenSSH or new configuration files, from within an application et al. Comparison With Alternatives There are not many alternatives for SSH libraries in Python. Of the few that do exist, here is how they compare with parallel-ssh. As always, it is best to use a tool that is suited to the task at hand. parallel-ssh is a library for programmatic and non-interactive use - see Design And Goals. If requirements do not match what it provides then it best not be used. Same applies for the tools described below. Paramiko The default SSH client library in parallel-ssh 1.x.x series. Pure Python code, while having native extensions as dependencies, with poor performance and numerous bugs compared to both OpenSSH binaries and the libssh2 based native clients in parallel-ssh 1.2.x and above. Recent versions have regressed in performance and have blocker issues. It does not support non-blocking mode, so to make it non-blocking monkey patching must be used which affects all other uses of the Python standard library. However, some functionality like Kerberos (GSS-API) authentication is not provided by other libraries. asyncssh Python 3 only asyncio framework using client library. License (EPL) is not compatible with GPL, BSD or other open source licenses and combined works cannot be distributed. Therefore unsuitable for use in many projects, including parallel-ssh. Fabric Port of Capistrano from Ruby to Python. Intended for command line use and is heavily systems administration oriented rather than non-interactive library. Same maintainer as Paramiko. Uses Paramiko and suffers from the same limitations. More over, uses threads for parallelisation, while not being thread safe, and exhibits very poor performance and extremely high CPU usage even for limited number of hosts - 1 to 10 - with scaling limited to one core. Library API is non-standard, poorly documented and with numerous issues as API use is not intended. Ansible A configuration management and automation tool that makes use of SSH remote commands. Uses, in parts, both Paramiko and OpenSSH binaries. Similarly to Fabric, uses threads for parallelisation and suffers from the poor scaling that this model offers. See The State of Python SSH Libraries for what to expect from scaling SSH with threads, as compared to non-blocking I/O with parallel-ssh. Again similar to Fabric, its intended and documented use is interactive via command line rather than library API based. It may, however, be an option if Ansible is already being used for automation purposes with existing playbooks, the number of hosts is small, and when the use case is interactive via command line. parallel-ssh is, on the other hand, a suitable option for Ansible as an SSH client that would improve its parallel SSH performance significantly. ssh2-python Wrapper to libssh2 C library. Used by parallel-ssh as of 1.2.0 and is by same author. Does not do parallelisation out of the box but can be made parallel via Python’s threading library relatively easily and as it is a wrapper to a native library that releases Python’s GIL, can scale to multiple cores. parallel-ssh uses ssh2-python in its native non-blocking mode with event loop and co-operative sockets provided by gevent for an extremely high performance library without the side-effects of monkey patching - see benchmarks. In addition, parallel-ssh uses native threads to offload CPU blocked tasks like authentication in order to scale to multiple cores while still remaining non-blocking for network I/O. pssh.clients.native.SSHClient is a single host natively non-blocking client for users that do not need parallel capabilities but still want a non-blocking client with native code performance. Out of all the available Python SSH libraries, libssh2 and ssh2-python have been shown, see benchmarks above, to perform the best with the least resource utilisation and ironically for a native code extension the least amount of dependencies. Only libssh2 C library and its dependencies which are included in binary wheels. However, it lacks support for some SSH features present elsewhere like ECDSA keys (PR pending), agent forwarding (PR also pending) and Kerberos authentication - see feature comparison. Scaling Some guide lines on scaling parallel-ssh and pool size numbers. In general, long lived commands with little or no output gathering will scale better. Pool sizes in the multiple thousands have been used successfully with little CPU overhead in the single thread running them in these use cases. Conversely, many short lived commands with output gathering will not scale as well. In this use case, smaller pool sizes in the hundreds are likely to perform better with regards to CPU overhead in the event loop. Multiple Python native threads, each of which can get its own event loop, may be used to scale this use case further as number of CPU cores allows. Note that parallel-ssh imports must be done within the target function of the newly started thread for it to receive its own event loop. gevent.get_hub() may be used to confirm that the worker thread event loop differs from the main thread. Gathering is highlighted here as output generation does not affect scaling. Only when output is gathered either over multiple still running commands, or while more commands are being triggered, is overhead increased. Technical Details To understand why this is, consider that in co-operative multi tasking, which is being used in this project via the gevent library, a co-routine (greenlet) needs to yield the event loop to allow others to execute - co-operation. When one co-routine is constantly grabbing the event loop in order to gather output, or when co-routines are constantly trying to start new short-lived commands, it causes contention with other co-routines that also want to use the event loop. This manifests itself as increased CPU usage in the process running the event loop and reduced performance with regards to scaling improvements from increasing pool size. On the other end of the spectrum, long lived remote commands that generate no output only need the event loop at the start, when they are establishing connections, and at the end, when they are finished and need to gather exit codes, which results in practically zero CPU overhead at any time other than start or end of command execution. Output generation is done remotely and has no effect on the event loop until output is gathered - output buffers are iterated on. Only at that point does the event loop need to be held. User’s group There is a public ParallelSSH Google group setup for this purpose - both posting and viewing are open to the public. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/parallel-ssh/1.8.0/
CC-MAIN-2018-30
en
refinedweb
If you have worked a lot with MOSS you probably know how to make new page layouts. But if you create new page layouts you might sometimes wonder that how could I add some common functionalities to my page layout pages. One example could be localization. You have decided that Variations isn't the way to go in your case, but you still want to have different site structures for different languages... and of course you want to have texts localized. Or you want to change your master page for some reason on the fly... one example could be for printing reasons. Or even wilder... you want to change you page layout to another! You could do this kind of stuff pretty easily if you create your own PublishingLayoutPage class that has support your new functionalities. I'm going to explain how you can do that with SharePoint Designer and Visual Studio. Create new class that will extend the functionality of PublishingLayoutPage I started my journey by creating new Class Library project. I named it "Microsoft.MCS.Common" (since I work in MCS inside Microsoft... cool naming right :-). I added new class and named it PublishingLayoutPageEx. I inherited that from PublishingLayoutPage which is class behind page layouts. Where did I got that class name? Well I just opened ArticleLeft.aspx with SharePoint Designer and checked the first line: <%@ Page language="C#" Inherits="Microsoft.SharePoint.Publishing.PublishingLayoutPage, Microsoft.SharePoint.Publishing, Version=12.0.0.0,Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> So it was pretty obvious that if I want to extend the functionality of the basic publishing page, I needed to inherit from it. At this point my code looked like this (not much since we just started): } And now I'm ready to test my new class in action. I just added strong name key, compiled and put it in the GAC. And then I changed the ArticleLeft.aspx to use my new class: <%@ Page language="C#" Inherits="Microsoft.MCS.Common.PublishingLayoutPageEx, Microsoft.MCS.Common,Version=1.0.0.0, Culture=neutral,PublicKeyToken=b1e9400215c03709" %> <small sidetrack to .NET Reflector> If you're interestested in the stuff that's implemented in PublishingLayoutPage, then you can play around with incredible tool: Lutz Roeder's NET Reflector: In just few clicks we can see that there is some MasterPageFile retrieving in OnPreInit: </small sidetrack to .NET Reflector> If you now try your new PublishingLayoutPageEx in action you'll get this kind of error message:: Server Error in '/' Application. Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: The base type 'Microsoft.MCS.Common.PublishingLayoutPageEx' is not allowed for this page. The type is not registered as safe. Source Error: Source File: /_catalogs/masterpage/ArticleLeft.aspx Line: 1:<SafeControl Assembly="Microsoft.MCS.Common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b1e9400215c03709" Namespace="Microsoft.MCS.Common" TypeName="PublishingLayoutPageEx" Safe="True" AllowRemoteDesigner="true" /> And then hit F5 in your browser and you should be all set. Now you have base what we're going to extend in next. Add localization support to your pages If you haven't played with ASP.NET Resource files, then you should take small detour into localization quickstart. So now you know about .RESX files 🙂 I created Example.resx, Example.en-US.resx and Example.fi-FI. I have added only two words to the resource files: - House: - en-US: House - fi-FI: Talo - You: - en-US: You - fi-FI: Sinä I copied those resource files to my applications App_GlobalResouces folder: C:\Inetpub\wwwroot\wss\VirtualDirectories\80\App_GlobalResources Now I modified my default_Janne.master page so that it would receive text from my resource files. I added following line just before </body> in master page. <asp:Literal <-> <asp:Literal We have now added resource files and modified master page so that it will take text from our resource file. Let's just add code to our new class so that we could change the language on the fly. protected override void OnPreInit() 20 { 21 base.OnPreInit(); 22 this.InitializeCulture(); 23 } 24 25 protected override void InitializeCulture() 26 { 27 if (Request["mylang"] != null) 28 { 29 Thread.CurrentThread.CurrentCulture = new CultureInfo(Request"mylang"].ToString()); 30 Thread.CurrentThread.CurrentUICulture = new CultureInfo(Request["mylang"].ToString()); 31 } 32 33 base.InitializeCulture(); 34 } 35 } 36 } And now we can change the language from URL: Here is result without the mylang parameter: Of course you might not want to change your language by url parameter 🙂 This is just sample that you CAN do that. Maybe it would be much wiser to use some kind of site structure for localization. But I'll leave that to you... Change master page on the fly Now we want to make something fancier... like changing the master page on the fly. You could want to use this for print layouts, smaller screen, mobile etc. But anyway.. You just might want to do that sometimes 🙂 So let's throw some code in here and see what happens: ... 1 protected override void OnPreInit() 2 { 3 base.OnPreInit(); 4 if (Request["Print"] != null) 5 { 6 this.MasterPageFile = "/_catalogs/masterpage/BlueBand.master"; 7 } 8 } ... On lines 4 to 6 we have just check that if there is mysterious Print parameter set. If that is set, we'll change the master page to nicely hardcode one. Let's see what happens on our browser: So the result is quite easy to see... our master page changed from default_janne.master to BlueBand.master. Change the page layout Before I start... I'm going to give credit of this idea to Vesa Juvonen (colleague of mine at MCS who also works with SharePoint). He said that this would be interesting thing to checkout. And since I happened to have some code ready we tried this stuff on my environment. But he's going to create full solution of this page layout change and publish it in his blog. So you probably want to check that place out too. Okay.. but let's get back to the subject. This might sound a bit strange but still... sometimes you might want to change page layout after the page has been created. Consider the Article Page content type which is OOB content type in SharePoint. It has 4 different kind of page layouts. User could have selected Image on right layout and has filled the page with data. After a while you want to change it to another layout.... BUT there isn't easy way to do that... unless we'll extend our nice class again. Idea is take (again) some nice url parameter that tells the destination page layout. In this example I'll just take integer which is used to get the correct page layout from array of possible page layouts of this content type. And yes... I know that this code sample has a lot to improve... It just gives you ideas. Let's throw some code in here and see what happens: ... 1 protected override void OnPreInit() 2 { 3 SPContext current = SPContext.Current; 4 if (current != null && 5 Request["changepagelayout"] != null && 6 Request["done"] == null) 7 { 8 SPWeb web = current.Web; 9 // We need to allow unsafe updates in order to do this: 10 web.AllowUnsafeUpdates = true; 11 web.Update(); 12 13 PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web); 14 PageLayout[] layouts = publishingWeb.GetAvailablePageLayouts(current.ListItem.ContentType.Parent.Id); 15 PublishingPage publishingPage = PublishingPage.GetPublishingPage(current.ListItem); 16 publishingPage.CheckOut(); 17 // This is the magic: 18 publishingPage.Layout = layouts[Convert.ToInt32(Request["changepagelayout"])]; 19 publishingPage.Update(); 20 publishingPage.CheckIn("We have changed page layout"); 21 22 SPFile file = current.ListItem.File; 23 file.Publish("Publishing after page layout change"); 24 // We have content approval on: 25 file.Approve("Approving the page layout change"); 26 Response.Redirect(Request.Url + "&done=true"); 27 } 28 base.OnPreInit(e); 29 } ... And you can right away see from code that there isn't any checks or any error handling. So this code is only for demonstration purposes and you shouldn't take it any other way.... but here we can see the results after user has type in parameter changepagelayout=1 (Image on left): And here is page if parameter is 2 (Image on right). If you look at the code on line 14 where page layouts are retrieved... I'm using Parent of the current content type. You might ask why... But the reason is simple since your content type from the list actually inherits the Article Page content type from Site collection level. So if you would use ListItem.ContentType you wouldn't get those 4 page layouts of Article Page. Insted you need to get the parent of the content type in the list and then you get 4 different page layouts. Makes sense if you think how inheritance in SharePoint works. If you wonder that done parameter I'm using... It is just helper to avoid recursive page layout change 😉 Note: If you look at the code you probably already noticed that it's not changing the layout for this rendering... it has changed the page layout permanently. Of course you can change it back if you want to. Summary You can use a lot of stuff from ASP.NET right in your SharePoint page layouts. I can't even imagine all the capabilities of this but I'm just going to give you brief list what I can think of now: 1) Localization: - This one is obvious 🙂 I live in Finland and we need to deal with this in every project. 2) Change the master page - Print layouts - Mobile UIs 3) Change page layout - If you had created page but you later on want to change to more suitable one... here's how you can do it I hope you got the idea of this post. I know I could improve those samples a lot, but I just wanted to share my idea and give you the opportunity to make it much better than I did. Anyways... happy hacking! J PingBack from Hello Janne, Thanks for that great article. But I’m still strugling with a couple of things :(. I inherited my page from the custom PublishingLayoutPageEx and it wend fine, All sharePoint words became in the right language exept the webparts?. All webparts stay in english despite what culture you using, is that normal? Or is a webparts rendered differently than the Page? Hi koen! Web parts don’t differ in that sense that if their made using resources, then they work fine. Here’s an example: writer.Write("Resource: " + HttpContext.GetGlobalResourceObject("Example","House").ToString() + "<br/>"); If you add that to you web part then you get example same text than in your page. What is the exact web part your having problems with? It could be that it’s using XSL Style Sheets and there is hardcoded text. Is that web part your own or OOB web part? I hope my answer helped a little bit. Feel free to ping me if you didn’t get the idea. Anyways… Happy hacking! J Hi janne, First of all, thanks for the quick response :). I’m not using a custom webpart. Instead I inserted a view of the common task list on the page and this view will always stay in english, nomatter what culture you set your thread to. Really strange. If you go and look at the structure of the task list (feature at C:Program FilesCommon FilesMicrosoft Sharedweb server extensions12TEMPLATEFEATURESTasksListTasksSchema.xml) you can see it uses resources stored in C:Program FilesCommon FilesMicrosoft Sharedweb server extensions12Resources. I have installed multiple language packs on my SHarepoint server so that every resource file is there in different language but for some strange reason it refuses to use it :). If I use variations, it will translate the tasklist in different languages but the problem is that I can use only one tasklist and that’s wy I used your solution. Hi Koen! Thanks for asking because it was really nice to dig on this one. I like to solve problems and this was definately a nice one 🙂 Okay.. I’ll explain what I did. 1) I first verified your problem by creating same situation. And you’re absolutely right! Web Parts don’t actually change texts 🙁 2) Started digging with .resx files and DLLs with Reflector. 3) Noticed that behind the scenes there is CoreResources class that handles translations. And if you look at the following code, does this ring a bell on you: static CoreResource() { _aIntl = Assembly.Load("Microsoft.SharePoint.intl, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"); _SPRM = new ResourceManager("Microsoft.SharePoint", _aIntl); _WebPartPageRM = new ResourceManager("Microsoft.SharePoint.WebPartPages.strings", _aIntl); } 4) You start wondering why there is _WebPartPageRM (and is different from the other texts) and where are those text then. 5) Take Reflector and open "Microsoft.SharePoint.intl, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c". Under it there is Resources and of course: Microsoft.SharePoint.WebPartPages.strings.resources. 6) Search text that you know some web part page uses 7) You have found the place for your text. Okay.. so now we have this "neutral" dll where are resources. But I don’t have access to systems that has multiple languages installed, so I don’t know that is that DLL available for different cultures. I hope that this gives you the answer your looking for… If you can dig your system too and you could give me feedback that you have verified my idea. Anyways… Happy hacking! J I just wanted to add a note for my previous comment: "Web Parts don’t actually change texts". But of course your own web parts change languages if you use resources 🙂 Just to make this clear. J Janne, You are right, I have found the CoreResource en the resource DLL’s. In a multilingual environment, you have a dll for each language: Microsoft.SharePoint.intl.dll –> Microsoft.SharePoint.resources Microsoft.SharePoint.WebPartPages.strings.resources microsoft.sharepoint.intl.resources.dll Language Dutch) –> Microsoft.SharePoint.nl.resources Microsoft.SharePoint.WebPartPages.strings.nl.resources microsoft.sharepoint.intl.resources.dll Language French) –> Microsoft.SharePoint.fr.resources Microsoft.SharePoint.WebPartPages.strings.fr.resources When I look at the resource string of a common task list, for instance the Priority field, you have following schema: <Field ID="{a8eb573e-9e11-481a-a8c9-1104a54b2fbd}" Type="Choice" Name="Priority" DisplayName="$Resources:core,Priority;" SourceID="" StaticName="Priority"> <!– _locID@DisplayName="camlidT2" _locComment=" " –> <CHOICES> <CHOICE>$Resources:core,Priority_High;</CHOICE> <CHOICE>$Resources:core,Priority_Normal;</CHOICE> <CHOICE>$Resources:core,Priority_Low;</CHOICE> </CHOICES> <MAPPINGS> <MAPPING Value="1">$Resources:core,Priority_High;</MAPPING> <MAPPING Value="2">$Resources:core,Priority_Normal;</MAPPING> <MAPPING Value="3">$Resources:core,Priority_Low;</MAPPING> </MAPPINGS> <Default>$Resources:core,Priority_Normal;</Default> </Field> So, when I search for Priority_High, Priority_Normal, Priority_Low, I found nothing in the resource files. I can found it back in the resource file C:Program FilesCommon FilesMicrosoft Sharedweb server extensions12Resourcescore.resx C:Program FilesCommon FilesMicrosoft Sharedweb server extensions12Resourcescorecore.nl-nl.resx C:Program FilesCommon FilesMicrosoft Sharedweb server extensions12Resourcescorecore.fr-fr.resx Also, I have seen that, like you said, there is a clear difference between plain SharePoint and a sharepoint WebPartPage internal static string GetString(CultureInfo culture, ResourceGroup rg, string name, params object[] values) { string text = null; StringBuilder builder; int num3; switch (rg) { case ResourceGroup.SharePoint: text = _SPRM.GetString(name, culture); case ResourceGroup.WebPartPage: text = _WebPartPageRM.GetString(name, culture); } So, I followed the trace from above in the listviewWebpart but so far I don’t see it yet. I keep you posted, I hope you will do it also Regards, Koen Hi Janne I’m having a few problems with this. Firstly I’m at the point where it’s time to register in the GAC. Is it possible to register this Class Library in the bin folder of the web application (site collection) rather than in the GAC? I’ve tried placing the dll (which is strong named) into InetpubwwwrootwssVirtualDirectoriesMyAppbin folder and putting the entry in the web.config with the correct details (including the PublicKeyToken) but I get the Parser Error you show above still. We have been able to get our webparts working by doing the above. I thought I’d try registering it in the GAC on the server, but my first hurdle was that the gacutil was for version 1.1 only and would not accept it. The admin control panels also only have the 1.1 .net configuration control panel. Obviously .net 2 is on there (as SP is running) but I’ve double checked IIS which confirms it is using 2.0. I’ve copied up gacutil for 2.0 and it says it is registering it, but does not appear in the list (and I still get this parser error). Any ideas? Preferably I’d like to get this class running from the web bin. I’ve been able to get it into the GAC now, but I release the error I am getting is different to the one on this page. I get this error in all cases… Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Could not load the assembly "…" Hi Andy! So now your’re having the "Could not load the assembly" error? I think you get the error because it’s not probably installed to the GAC. Did you use Visual Studio Command prompt and GACUTIL /i your_assembly_here? And did you verify that version number of gacutil is "Version 2.0.50727.42"? Or how did you install it to gac? Because if it’s installed to GAC properly and that "Inherits=…" part is also correctly set in page layout then this should be fine. Please give comment if that didn’t solve your problem. Anyways… happy hacking! J Hi Janne, the server I am installing to was missing the v 2.0.5… gacutil but I copied a version of it to the server and used: gacutil -i MyCompany.PublishingLayoutPageEx.dll which reported it was OK. I did gacutil -l MyCompany.PublishingLayoutPageEx and it showed up. In my layout page (I made a copy of Article Left and modified that) I changed the top to read: <%@ Page language="C#" Inherits="MyCompany.PublishingLayoutPageEx,MyCompany,Version=1.0.0.0,Culture=neutral,PublicKeyToken=1294058909b4416a" %> The PublicKeyToken is correct (checked with Net Reflector). In VS2005 I have this: using System; using System.Collections.Generic; using System.Text; using System.Web.UI; using Microsoft.SharePoint.Publishing; using Microsoft.SharePoint; namespace MyCompany { public class PublishingLayoutPageEx : PublishingLayoutPage { public PublishingLayoutPageEx() : base() { } } } And I have added references to System.Web and the two sharepoint dll’s. My AssemblyInfo.cs properties show Assembly name: MyCompany.PublishingLayoutPageEx and Default namespace as MyCompany. I am new to both SharePoint and .NET but the instructions on your blog are clear enough and I’m not sure what I might be missing or doing wrong. Even after an iisreset I get that error with "Could not load the assembly". Hi Andy! It seems that there is conflict with these two: MyCompany.PublishingLayoutPageEx.dll and Inherits="MyCompany.PublishingLayoutPageEx,MyCompany,Version=1.0.0.0,… I think the Inherits should be: MyCompany.PublishingLayoutPageEx,MyCompany.PublishingLayoutPageEx,… (just add ".PublishingLayoutPageEx") That looks like the problem… But you’re close the finish line already 🙂 Anyways.. happy hacking! J Thank you so much Janne I’m getting closer. I’m now getting "Parser Error Message: Could not load type" so at least it is being recognised now and I can go back over my previous steps to check its all set up correctly. Got it more or less sorted now. For information on my last problem, that was caused by an error in case (Mycompany v MyCompany). Thank you so much for the help, I wonder if I could ask one more question. In SharePoint designer, the copy of ArticleLeft.aspx I created is stating "Master Page error – The page has controls that require a Master Page reference, but none is specified. Attach a Master Page, or correct the problem in Code View." Attach a Master Page is a link but does not do anything and Code View is a link and highlights my <@ Page line at the top. I should add that currently my DLL sits in the sites bin folder and not the GAC as I originally wanted it just for my site collection and not anything else on the server, is the the problem or should I be looking elsewhere? Hi Andy! You can define the master page at the Page directive like this: … MasterPageFile="~/_layouts/default.master" … After that SPD should be fine with it. I think you can just deploy to your bin. I don’t see any problems with that. Anyways… happy hacking! J Dear Janne, I want to publish the News articles without text formatting. I mean I want to modify ArticleLeft.aspx PageLayout to display Left aligned image and text only. MOSS users can make news entries with Image on the left template. They may copy some news and texts from Internet and paste them into the content of news entry. But there would be text formatting at that entries. What I need to do is to display only Image with text without formatting. Is it possible to create new PageLayout to display like that? Can you give me some ideas how to do? Please kindly advice me. I hope your reply. Myo This is powerful stuff, and thanks to this I can do a lot more powerful MOSS WCM custom development. I was wondering if you had tried to do the masterpage change using a custom HttpModule? I have been trying to do it using the following code: public class Hook : IHttpModule { public void Init(HttpApplication context) { context.PreRequestHandlerExecute += new EventHandler(context_PreRequestHandlerExecute); } void context_PreRequestHandlerExecute(object sender, EventArgs e) { Page page; IHttpHandler handler = HttpContext.Current.Handler; page = handler as Page; if (page != null) page.PreInit += new EventHandler(page_PreInit); } void page_PreInit(object sender, EventArgs e) { Page page = sender as Page; if (page != null) { string strUrl = "/Docs/_catalogs/masterpage/"; SPWeb web = SPControl.GetContextSite(HttpContext.Current).OpenWeb(); SPUser user = web.CurrentUser; if (user == null) strUrl = ""; else strUrl += "BlueBand.master"; if (strUrl.Length > 0) page.MasterPageFile = strUrl; } } public void Dispose() { } } I works like a charm on wss 3.0, but MOSS publishing pages just ignores it. The preInit event hook is set on the page, but it is never called. I’m all out of ideas, any comments would be greatly appreciated. Hello Janne, I am interested in the code to change the PageLayout associated with a PublishingPage. To my mind, the PublishingPage represents the data, and the PageLayout represents the view on that data. Consequently the view should be selectable dynamically. I would expect to be able to place webparts on the PageLayout aspx page which connected to Field Values from the PublishingPage via FieldControls. Thus the overall content displayed on a Publishing page would include PublishingPage columns, and adhoc data from else where, via connected web parts. I would expect to control rights to the edit the web parts discretely from the rights to edit the columns of a PublishingPage via a PageLayout page (ie someone may be able to edit web parts, but not the PublishingPage data, and vice versa). However I have two concerns over whether I am off track with these thoughts: 1) I not sure how to select the PageLayout dynamically at render time. The closest I have got is to derive from the TemplateRedirectionPage, and set the PublishingPage.Layout property in it’s PreInit event (somehow). 2) An article "How to Upgrade an Area based on a Custom Site Definition" at says "Webpart associations are also based on the actual publishing page" I am not sure of the consequences of this. I would be very interested to have your response to this. Many Thanks Martin Kristian, did u find a solution to the problem? I am having the same issue I found a great article Janne Mattila. discuses that if the layout page just inherit from Microsoft.SharePoint.Publishing.PublishingLayoutPage… great article…..helped med a lot namespace Microsoft.MCS.Common { public class PublishingLayoutPageEx : PublishingLayoutPage { public PublishingLayoutPageEx() : base() { } } } I am getting bellow error " : base() " in this line Could not load file or assembly ‘Microsoft.SharePoint.Library, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c’ or one of its dependencies. The system cannot find the file specified. PublishingLayoutPage vs. WebPartPage U cannot always inherit from PublishingLayoutPage, If you are not using Default. Master Read this article Thanks for the tip on PublishingLayoutPage changing the MasterPageFile on OnPreInit(). See how I changed it back in my Page scope OnPreInit() override. Thanks for this article. very helpful. Sorry, I have a basic question — where do i get the class library project template from? do you mean any regular class? I am trying to understand, what would i need to start working on this. Thanks in advance! Overview The National Native Title Tribunal of Australia assists people to resolve native title issues i couldnt get your Print trick to work. For one, the page (either master page or page layout) says that there is no suitable method to override for OnPreInit so I tried Page_PreInit instead. This didnt work on the master page and while it executed on the page layout it did not change my master page. Is there an update call missing? did you test your code? oops – it works, I just didnt read the first half of the article and jumped to the code :-/ sorry! Hi, Please correct me if I am wrong. What I under stand from above is that this would work for custom webparts but not for OOB webparts.
https://blogs.msdn.microsoft.com/jannemattila/2007/04/13/adding-functionalities-to-pages-by-inheriting-publishinglayoutpage/?replytocom=303
CC-MAIN-2018-30
en
refinedweb
Intro Machine learning has been growing by leaps and bounds in recent years, and with libraries like TensorFlow, it seems like almost anything is possible. One interesting application of neural networks is in classification of handwritten characters – in this case digits. This article will go through the fundamentals of creating and using a specific kind of network in TensorFlow: a convolutional neural network. Convolutional neural networks are specialized networks used for image recognition, that perform much better than a vanilla deep neural network. Concepts Before diving into this project, we will need to review some concepts. TensorFlow TensorFlow is more than just a machine learning library, it is actually a library for creating distributed computation graphs, whose execution can be deferred until needed, and stored when not needed. TensorFlow works by the creation of calculation graphs. These graphs are stored and executed later, within a "session". By storing neural network connection weights as matrices, TensorFlow can be used to create computation graphs which are effectively neural networks. This is the primary use of TensorFlow today, and how we'll be using it in this article. Convolutional Neural Networks Convolutional neural networks are networks based on the physical qualities of the human eye. Information is received as a "block" of data, like an image, and filters are applied across the entire image, which transform the image and reveal features which can be used for classification. For instance, one filter might find round edges, which could indicate a five or a six. Other filters might find straight lines, indicating a one or a seven. The weight of these filters are learned as the model receives data, and thus it gets better and better at predicting images, by getting better and better at coaxing features out using its filters. There is much more than this to a convolutional neural network, but this will suffice for this article. The Data How do we get the data we'll need to train this network? No problem; TensorFlow provides us some easy methods to fetch the MNIST dataset, a common machine learning dataset used to classify handwritten digits. Simply import the input_data method from the TensorFlow MNIST tutorial namespace as below. You will need to reshape the data into a square of 28 by 28, since the original dataset is a flat list of 784 numbers per image. from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data") test_imgs = mnist.test.images.reshape(-1, 28, 28, 1) test_lbls = mnist.test.labels train_imgs = mnist.train.images.reshape(-1, 28, 28, 1) train_lbls = mnist.train.labels The Network So how might we build such a network? Where do we start? Well lucky for us, TensorFlow provides this functionality out of the box, so there's no need to reinvent the wheel. The first thing that must be defined are our input and output variables. For this, we'll use placeholders. X = tf.placeholder(tf.float32, shape=(None, 28, 28, 1)) y = tf.placeholder(tf.int64, shape=(None), name="y") Next, we need to define our initial filters. In order to avoid dying/exploding gradients, a truncated normal distribution is recommended for initialization. In our case, we will have two lists of filters for our two convolutional layers. filters = tf.Variable(tf.truncated_normal((5,5,1,32), stddev=0.1)) filters_2 = tf.Variable(tf.truncated_normal((5,5,32,64), stddev=0.1)) Finally, we need to create our actual convolutional layers. This is done using TensorFlow's tf.nn.conv2d method. We also use a name scope to keep things organized. Note the max pooling layers between convolutional layers. The max pool layers aggregate the image data from each filter using a predefined method, and are not trained. They simply help reduce the complexity of the data by squashing the many layers produced by our filters. with tf.name_scope("dnn"): convolution = tf.nn.conv2d(X, filters, strides=[1,2,2,1], padding="SAME") max_pool = tf.nn.max_pool(convolution, ksize=[1,2,2,1], strides=[1,2,2,1], padding="VALID") convolution_2 = tf.nn.conv2d(max_pool, filters_2, strides=[1,2,2,1], padding="SAME") max_pool_2 = tf.nn.max_pool(convolution_2, ksize=[1,2,2,1], strides=[1,2,2,1], padding="VALID") flatten = tf.reshape(max_pool_2, [-1, 2 * 2 * 64]) predict = fully_connected(flatten, 1024, scope="predict") keep_prob = tf.placeholder(tf.float32) dropout = tf.nn.dropout(predict, keep_prob) logits = fully_connected(dropout, n_outputs, scope="outputs", activation_fn=None) Also note that before our prediction layer, we have to squash down the final max pool output to make predictions at our fully connected layer. You can get the shapes of the various layers as shown below, to figure out what size your various layers need to be. print("conv", convolution.get_shape()) print("max", max_pool.get_shape()) print("conv2", convolution_2.get_shape()) print("max2", max_pool_2.get_shape()) print("flat", flatten.get_shape()) print("predict", predict.get_shape()) print("dropout", dropout.get_shape()) print("logits", logits.get_shape()) print("logits guess", logits_guess.get_shape()) print("correct", correct.get_shape()) print("accuracy", accuracy.get_shape()) We also apply dropout to avoid overfitting, and do not apply an activation function to our outputs. We will instead calculate the entropy manually at each training step, which improves performance. Now to create our training and evaluation layers. We will also namespace these like the previous layers, to make things easier to understand when they are viewed in a visualization tool like TensorBoard. Our loss is the average of the cross-entropy between the expected outputs and the output of our logits, this much should make sense. For training, we use an Adam optimizer, which is almost always recommended. The learning rate used in this article is 1e-4. This is the same learning rate that is used for TensorFlow's own "expert" tutorial on MNIST. Our evaluation is a little more complicated. Since we are training with batches, we need to get the output for each item in the batch. We do this by applying tf.argmax to every output list using tf.map_fn. Then, we compare the guesses to the actual values using tf.equal. Our accuracy is the average number of correct predictions (i.e., the percentage of numbers we classified correctly). with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("train"): optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(loss) with tf.name_scope("eval"): logits_guess = tf.cast(tf.map_fn(tf.argmax, logits, dtype=tf.int64), tf.int64) correct = tf.equal(logits_guess, y) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() To actually train the network, we will need to run through the data several times, running a batch at every iteration. In this case, we will aim for 20,000 iterations. To calculate how many epochs we will need for our batch size, we use the following code. keep_prob_num = 0.5 batch_size = 50 goal_iterations = 20000 iterations = mnist.train.num_examples // batch_size epochs = int(goal_iterations / iterations) # so that total iterations ends up being around goal_iterations Now to actually run the training operation on our graph. with tf.Session() as sess: sess.run(init) for i in range(epochs): for iteration in range(iterations): X_batch, y_batch = mnist.train.next_batch(batch_size) X_batch_shaped = X_batch.reshape(X_batch.shape[0], 28, 28, 1) sess.run(training_op, feed_dict = {X: X_batch_shaped, y: y_batch, keep_prob: keep_prob_num}) print("epoch:",i) print("iteration:", iteration) It's also recommended that you save the model and evaluate the accuracy at every epoch. You can accomplish this with the following code. Evaluating accuracy_val = sess.run(accuracy, feed_dict = {X: train_imgs, y: train_lbls, keep_prob: 1.0}) print("accuracy:", accuracy_val) Saving saver = tf.train.Saver() saver.save(sess, save_path) After running this model through all epochs and iterations, your accuracy should be around 99.2%. Let's check that. with tf.Session() as sess: saver.restore(sess, save_path) #assume you've saved model, but could run in same session immediately after training accuracy_val = sess.run(accuracy, feed_dict = {X: test_imgs, y: test_lbls, keep_prob: 1.0}) # test accuracy t_accuracy_val = sess.run(accuracy, feed_dict = {X: train_imgs, y: train_lbls, keep_prob: 1.0}) # training accuracy print("accuracy:", accuracy_val) print("train accuracy:", t_accuracy_val) Of course, in the above, the test accuracy is what's most important, as we want our model to generalize to new data. Improvements There are several steps you can take to improve on this model. One step is to apply affine transformations to the images, creating additional images similar but slightly different than the originals. This helps account for handwriting with various "tilts" and other tendencies. You can also train several of the same network, and have them make the final prediction together, averaging the predictions or choosing the prediction with the highest confidence. Conclusion TensorFlow makes digit classification easier than ever. Machine learning is no longer the domain of specialists, but rather should be a tool in the belt of every programmer, to help solve complex optimization, classification, and regression problems for which there is no obvious or cost-effective solution, and for programs which must respond to new information. Machine learning is the way of the future for many problems, and as has been said in another blogger's post: it's unreasonably effective.
http://www.aisoftwarellc.com/blog/post/digit-classification-with-tensorflow-and-the-mnist-dataset/2039
CC-MAIN-2018-30
en
refinedweb
Syntax Highlighting¶ Regex-based Highlighting¶ Syntax highlighting in Builder is performed by the GtkSourceView project. By providing an XML description of the syntax, GtkSourceView can automatically highlight the language of your choice. Thankfully, GtkSourceView already supports a large number of languages so the chances you need to add a new language is low. However, if you do, we suggest that you work with GtkSourceView to ensure that all applications, such as Gedit, benefit from your work. Chances are you can find existing language syntax files on your system in /usr/share/gtksourceview-3.0/language-specs/. These language-spec files serve as a great example of how to make your own. If it is not there, chances are there is already a .lang file created but it has not yet been merged upstream. Bundling Language Specs¶ Should you need to bundle your own language-spec, consider using GResources to embed the language-spec within your plugin. Then append the directory path of your language-specs to the GtkSource.LanguageManager so it knows where to locate them. from gi.repository import GtkSource manager = GtkSource.LanguageManager.get_default() paths = manager.get_search_path() paths.append('resources:///org/gnome/builder/plugins/my-plugin/language-specs/') manager.set_search_path(paths) Symantic Highlighting¶ If the language you are using provides an AST you may want to highlight additional information not easily decernable by a regex-based highlighter. To simplify this, Builder provides the Ide.HighlightEngine and Ide.Highlighter abstractions. The Ide.HighlightEngine provides background updating of the document so that your Ide.Highlighter implementation can focus on highlighting without dealing with performance impacts. Out of simplicity, most Ide.Highlighter implementations in Builder today use a simple word index and highlight based on the word. However, this is not required if you prefer to do something more technical such as matching ranges to the AST.
http://builder.readthedocs.io/en/latest/plugins/editor/highlighting.html
CC-MAIN-2018-30
en
refinedweb
Highlighting Border - Online Code Description This is a code which produces Borders which focusses on a specific element when user Clicks on it. Source Code import java.awt.BorderLayout; import java.awt.Component; import java.awt.Container; import java.util.logging.Level; import java.util.logging.Logger; import javax.swing.DefaultListCellRenderer; import javax.swing.JFrame... (login or register to view full code) To view full code, you must Login or Register, its FREE. Hey, registering yourself just takes less than a minute and opens up a whole new GetGyan experience.
http://www.getgyan.com/show/969/Highlighting_Border
CC-MAIN-2018-30
en
refinedweb
IAM, Enterprise Directories and Shibboleth (oh my!) - Ernest Melton - 2 years ago - Views: Transcription 1 IAM, Enterprise Directories and Shibboleth (oh my!) Gary Windham Senior Enterprise Systems Architect University Information Technology Services 2 What is IAM? Identity and Access Management (IAM) is a framework consisting of technical, policy, and governance components that allows an organization to: identify individuals link identities with roles, responsibilities and affiliations assign privileges, access, and entitlements based on identity and associations IAM permits data stewards and service providers to control access to information and/or services, according to an individual's identity, roles and responsibilities 3 What is IAM? (cont) A middleware layer, used by many services Comprised of four main areas: Credentialing (assignment of an unique token to an entity needing access to resources) Authentication (act of validating proof of identity) Authorization (act of affording access to only appropriate resources and functions) Accountability (ensuring against illegitimate utilization of an entity s authority flows from the first 3 functions) 4 IAM Functions Consolidates information about a person and their roles (identities) across an organization or organizations. Makes this information available, in appropriate and policy-guided ways, to services and applications. Allows for integration of services and authority management which can grant, change, or rescind access based on status or affiliation with the organization. Provides the mechanism for appropriate, auditable access to services, critical for security architectures and to ensure compliance. 5 IAM Example Scenario Hi! I m Lisa. (Identity) and here s my NetID / password to prove it. (Authentication) I want to open the Portal to check my . (Authorization : Allowing Lisa to use the services for which she s authorized) And I want to change my grade in last semester s Physics course. (Denial of Authorization : Preventing her from doing things she s not supposed to do) Source: Keith Hazelton UW-Madison/Internet2 MACE 6 Functional View of IAM environment Source: Internet2 Middleware Initiative 7 The State of UA IAM Today The UA NetID service was designed as an authentication mechanism, not authorization. Together, the UA NetID authentication service and WebAuth (UA s Web Single Sign-On Environment) provide a solid middleware foundation for authentication but more is needed for a full-fledged IAM environment Some campus applications/services rely on NetID authentication as implicit authorization. The move to a permanent NetID will only exacerbate this problem Applications/services requiring more granular authorization typically use one (or more) of the following approaches: Run queries (canned or custom) against UIS Create and maintain local, application-specific repositories of authorization data Query the LDAP phonebook directory (inclusion in which is not guaranteed) 8 Enterprise Directory Service (EDS) 9 What is an Enterprise Directory? A lightweight directory services (LDAP) repository containing core bio/demo data for students, staff, faculty, and other University affiliates Groups, roles, etc, can be easily represented as well Contains key institutional and person data reflected from systems of record, represented as name-value pairs that can be easily retrieved and utilized by a variety of applications The enterprise directory does not replace the databases supporting the institutional systems of record Rather, it provides a unified view of selected subsets of these records or other information maintained by departments at the institution Optimized for reads can service hundreds of requests per second 10 What does the EDS contain? The EDS contains a select subset of bio/demo attributes related to employees, students and other affiliates EDS also provides a subset of attributes specified by the eduperson LDAP schema. EduPerson specifies a common set of attributes related to affiliates of higher education institutions, and was developed by EDUCAUSE and Internet2 edupersonaffiliation and edupersonprimaryaffiliation are of particular interest Many attributes, particularly those carrying term-specific or position/title-specific data are multi-valued, meaning that one attribute name may be associated with multiple values Some attributes group multiple, co-related pieces of information into a single, token-delimited string reduces attribute bloat resulting from several pieces of information which need to be represented multiple times (e.g. term-specific data, employee position information, etc) examples of such attributes include studentacademicprogram, studenttermstatus and employeeincumbentrecord 11 Where EDS fits in 12 Provisioning and Consumption of EDS data EDS is currently provisioned with SIS and PSOS data via a daily UIS batch feed The future data provisioning process will incorporate daily feeds from EPM in conjunction with real-time event notifications from PeopleSoft to reflect status changes (e.g. a student record transitions from admitted to enrolled ) Employee- and student-specific codes (e.g. PSOS employee type/status codes, SIS majors, colleges, etc) go into EDS unfiltered assumes that the consuming application understands meaning of codes See the EDS attribute documentation () for full details Because the EDS attribute names are (mostly) divorced from SIS/PSOS/ UIS schemas and nomenclature, populating them with PeopleSoft data should not be a huge migration effort SIS- and PSOS-specific codes that change during the PeopleSoft migration will obviously change in EDS as well Attributes composed of multiple, co-related data elements (e.g. employeeincumbentposition, which contains the PCN, budget department and position end-date) may incur format changes depending on how PeopleSoft models these elements 13 Inclusion in EDS Data for the following populations are included in the EDS directory: Students (must meet one of the following criteria) admitted for a future term registered for a future term orientation session enrolled in a current/future term has been enrolled within the past academic year Employees incumbent in a budgeted position within the last 100 days Departmental Sponsored Visitors currently sponsored, non-expired 14 EDS Schema and Attributes Entries in the EDS consist of attributes belonging to the following LDAP object classes: person inetorgperson eduperson arizonaeduperson arizonaedustudent arizonaeduemployee All entries will contain the first 4 object classes listed above these contain base information about the person and his/her affiliation with the University If a person is active in a student and/or employee role, attributes from the relevant object classes will be present as well A complete list of attribute names and descriptions is available on the EDS documentation site: 15 EDS Directory Structure and Naming The EDS consists of a flat namespace for person data: ou=people,dc=eds,dc=arizona,dc=edu Entries are uniquely identified via the uaid attribute This attribute will most likely contain the PeopleSoft EMPLID value after the migration to Mosaic Group data (coming soon) will occupy a separate branch: ou=groups,dc=eds,dc=arizona,dc=edu The group branch may incorporate sub-branches in order to reflect organization structure and permit delegation of group membership and management functions 16 Sample EDS data (Employee) 17 Sample EDS data (Student) 18 Sample EDS data (Student & Employee) 19 EDS Access Mechanisms REST/DSML Data returned in an XML format (specifically, DSMLv1) Takes a simple HTTP GET request as input Caller can user any standard identifier (NetID, SID/EID or UA_ID) to retrieve attribute values for the desired individual Each person is treated as a discrete resource with a globally unique URI endpoint. An example request, for the fictitious NetID johndoe, would look like this: Access to this interface requires authentication, using the application username and password provided obtained during the registration process Username/password are transmitted to the REST service via standard HTTP Basic authentication Credentials and data are encrypted (in transit) via HTTPS 20 REST/DSML Output Example 21 EDS Access Mechanisms (cont) LDAP The Enterprise Directory Service is based on an LDAPv3-compliant directory Can be accessed via the LDAPS (LDAP-over-SSL) protocol Access to the directory server provided via common registration process used for both REST/DSML and LDAP access uses the same access credentials (username/password) required to access the REST interface Attributes provided via the LDAP interface are identical to those provided via the REST/DSML interface However, results not in XML format BER encoding format requires use of LDAP API client library LDAP interface offers more flexibility to application programmers at the cost of increased complexity Can perform searches based on combinations of different attributes (search filters) Can retrieve multiple entries in a result, rather than only a single person entry For connection details and programming examples, please refer to the EDS documentation: 22 EDS REST/DSML code sample 23 EDS LDAP code sample 24 Availability and Registration EDS has been generally available since mid-february FERPA training is a prerequisite for requesting EDS access FERPA training verified against UIS table during registration process EDS access is granted to applications, which are associated with one-or-more departmental points of contact EDS access is requested via a self-service registration application, available at Access credentials expire, and must be renewed annually Registered points of contact will receive notification well in advance of credential expiration 25 Shibboleth 26 What is Shibboleth? An open software system for web single sign-on Developed by Internet2 Enables web applications deployed in most typical web server environments to authenticate and authorize users via a single protocol Facilitates federated identity Enables fine-grained assertion of identity data to federated and external partners privacy and security are key elements 27 Where Shibboleth fits in 28 Key Concept #1: Federated Identity Federated identity supplies user information to applications offered by different organizations, enabling: single sign-on one identity for common access across applications and organizations provisioning of authoritative data Identity information can include anything from the user's full identity, role information, academic or employment information to simply the fact that the user has successfully authenticated, leaving the user anonymous There are several major advantages of federated identity: 1. It delivers authoritative user attributes directly from the institution responsible for the credentials 2. Organizations do not have to maintain credentials for inter-institutional affiliates in order to provision application access 3. User data is protected. Storage at a single, hardened location and stringent release policies minimize the chance of privacy violation 4. Users across organizations and institutions can utilize their local authentication mechanisms to access remote resources enhances end-user experience and scales easily to new participants 29 Key Concept #2: Attributes The "currency" of the Shibboleth software is attributes. named set of values about an authenticated user values are typically strings, but can be more complex XML-based data. When a user logs into your service provider software, Shibboleth obtains a set of attributes for that user and maps them (based on rules you create) into environment variables and/or HTTP headers for your application to consume Attributes not stored within Shibboleth itself pulled from other sources (e.g. LDAP directory or database) Attribute data retrieved from sources can be enriched/transformed by both identity and service providers Shibboleth is capable of using arbitrary, different attribute names for each interface, decoupling the name in any protocol from all other systems Identity providers and consumers have unlimited flexibility in choosing what attributes to provide and consume 30 How does it work? Shibboleth is an implementation of the SAML (Security Assertion Markup Language) specifications for web single sign-on and attribute exchange adds additional layers of public-key trust management and configuration features specifically designed for web-based deployment designed to interoperate with other open and proprietary implementations of SAML and with applications, portals, etc that offer SAML support 31 How does it work (cont)? Shibboleth is comprised of two major components: Identity Provider (IdP) supplies information about users to services Service Provider (SP) gathers information about users to protect resources (static content, application functionality, etc) Interaction between the IdP and SP are governed by the Shibboleth and SAML specifications IdP s are typically centrally managed by the institution s IT organization SP s are installed in a service s web application container Apache and Microsoft IIS environments are supported SP runtime environment consists of both an Apache module (or ISAPI filter) and a standalone daemon (or service, in Windows environments) Java application servers (e.g. Tomcat, Jboss, Weblogic, etc) can be accomodated by front-ending with Apache and mod_jk, mod_wl (or similar) 32 IdP and SP components While the IdP and SP software are typically implemented as discrete, monolithic services, internally they are composed of multiple services some are externally addressable by distinct URI endpoints, others are internal components that handle discrete phases of the SAML authentication/attribute exchange process IdP Authentication Authority Attribute Authority SSO Artifact Resolution Service SP Attribute Requester Assertion Consumer Service Resource Manager WAYF Where Are you From service (more on this later) 33 1. User requests WebAuth The Shibboleth 2. You are Protocol not (intra-institutional resource use case) authenticated, redirect to IdP SSO 7b 1a 3b Client Web Browser 4a 3c 3a Credentials 3. I don t know you. Authenticate using WebAuth 4. I know you now. Send client (via form POST) to resource s ACS 2 Web Resource Assertion Consumer Service (ACS) 2 SSO Service 7a Resource Manager (RM) 1b 4c Handle Handle 4b EDS Attributes 6b Attribute Requester Resource Provider Web Site 7. Based on attribute values, allow access to resource Attributes 5 6a Handle 5. I don t know your attributes. Ask the attribute authority Source: Kathryn Huxtable, Internet2 Attribute Authority (AA) Identity Provider Web Site 6. Return the attributes allowed by release policy 33 34 The 2a. You Shibboleth are not Protocol (federated use case) authenticated, redirect to federation WAYF 2b. Where are you from? 7b 1a 1. User requests resource Client Web Browser 2b 2c 3b 4a 3c WebSSO 3. I don t know you. Authenticate using org s WebSSO 3a Credentials 2c. Redirect to your home institution s IdP 4. I know you now. Send client to resource s ACS Web Resource Assertion Consumer Service (ACS) 2a WAYF 2c SSO Service 7a Resource Manager (RM) 1b 4c Handle Handle 4b EDS Attributes 6b Attribute Requester Resource Provider Web Site 7. Based on attribute values, allow access to resource Attributes 5. I don t know your attributes. Ask the attribute authority 5 6a Source: Kathryn Huxtable, Internet2 Handle Attribute Authority (AA) Identity Provider Web Site 6. Return the attributes allowed by release policy 34 35 Where Are You From Service (WAYF) 36 Wow, that s all really complex what does it mean to me? The complexity of the protocol is handled transparently by the IdP and SP software components Shibboleth security is applied declaratively to resources within the web application container (e.g. via Apache <Location> directives or IIS virtual URLs) Complex access control rules involving multiple attributes can be declaratively configured via XML The Shibboleth SP facilitates direct access to the attributes in the SAML assertion via the application environment 37 Show me the attributes! Attributes released to a particular SP depend on an attribute release policy (ARP) maintained by the IdP ARPs usually are written to release the same set of attibutes to all members of a particular federation (identified via federation metadata) UA maintains an internal federation consisting of campus service providers who wish to utilize Shibboleth for intra-campus authentication and authorization The ARP for this federation releases all the attributes contained in the EDS Membership in this federation is governed by the same policies as EDS access UA is also a member of the InCommon federation an identity federation made up of North American higher education institutions and partners The ARP for this federation releases very basic information by default: edupersonaffiliation, edupersonprimaryaffiliation and edupersontargetedid 38 Really show me the attributes Attributes are provisioned to consuming applications, by the SP software, via environment variables (or HTTP headers in some cases) In most cases, retrieving an attribute is as simple as referencing an environment variable or an object property. Examples for retrieving the employeeprimarydept attribute in a few different environments follow: Apache/PHP $_SERVER( Shib-employeePrimaryDept ) Java HttpServletRequest.getHeader("Shib-employeePrimaryDept ) ASP Request.ServerVariables("HTTP_SHIB_EMPLOYEEPRIMARYDEPT ) Cold Fusion CGI.SHIB_EMPLOYEEPRIMARYDEPT 39 Really show me the attributes (cont) The EDS attribute list represents, for the most part, the set of attributes available via Shibboleth UITS provides an UA-specific SP attribute-map.xml file, which provisions these attributes using the same names as described in the EDS attribute list but prefixed with the string "Shib-" (in order to avoid any potential HTTP environment/request header namespace collisions) some exceptions, due to attributes that don t map one-to-one with EDS attributes; see the documentation site for details Federated use cases require coordination between the SP and the IdP organizations to ascertain what attributes will be released by the IdP and how they should be mapped at the SP 40 Attribute example 41 Lazy Sessions Application developers who wish to delay the Shibboleth SSO process can utilize an advanced feature called lazy sessions The Shibboleth SP registers a series of virtual URIs with the underlying web application container Application developers can simply issue an HTTP Redirect to the Shibboleth session initiator URI (typically, /Shibboleth.sso/Login), with query-string parameters indicating the return point, in order to launch the Shibboleth SSO process 42 InCommon Federation UA is an InCommon federation member InCommon is a higher education federation of identity and service providers Provides common policies, practices and framework for sharing identity information across institutions Enables collaboration and enhances trust The de-facto standard for identity federation in higher ed 43 InCommon Members 44 Availability and Registration Shibboleth has been generally available since mid-february FERPA training is a prerequisite for requesting Shibboleth access FERPA training verified against UIS table during registration process Shibboleth access is granted to applications, which are associated with one-or-more departmental points of contact Shibboleth access is requested via a self-service registration application, available at Upon registration, the point-of-contact will receive instructions to complete the set-up process (including certificate generation, inclusion in UA metadata, etc) Federated SP deployments require additional set-up please contact if you have a need to establish federated service with InCommon members or external entities 45 Resources This presentation available online at: 46 Q & A INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES 1. Federation Participant Information 1.1 The InCommon Participant Operational Practices information below is for: InCommon Participant organization Perceptive Experience Single Sign-On Solutions Perceptive Experience Single Sign-On Solutions Technical Guide Version: 2.x Written by: Product Knowledge, R&D Date: January 2016 2016 Lexmark International Technology, S.A. All rights reserved. Lexmark INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES Participation in InCommon Federation ( Federation ) enables the participant to use Shibboleth identity attribute sharing technologies to manage INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES Participation in the InCommon Federation ( Federation ) enables a federation participating organization ("Participant") to use Shibboleth identity INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES Participation in the InCommon Federation ( Federation ) enables a federation participating organization ("Participant") to use Shibboleth identity, Authentication Integration Authentication Integration VoiceThread provides multiple authentication frameworks allowing your organization to choose the optimal method to implement. This document details the various available authentication INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES Participation in the InCommon Federation ( Federation ) enables a federation participating organization ("Participant") to use Shibboleth identity WebNow Single Sign-On Solutions WebNow Single Sign-On Solutions Technical Guide ImageNow Version: 6.7. x Written by: Product Documentation, R&D Date: June 2015 2012 Perceptive Software. All rights reserved CaptureNow, ImageNow, Interact, Flexible Identity Federation Flexible Identity Federation Quick start guide version 1.0.1 Publication history Date Description Revision 2015.09.23 initial release 1.0.0 2015.12.11 minor updates 1.0.1 Copyright Orange Business, Federated Identity Management Solutions Federated Identity Management Solutions Jyri Kallela Helsinki University of Technology jkallela@cc.hut.fi Abstract Federated identity management allows users to access multiple services based on a single Authentication Methods Authentication Methods Overview In addition to the OU Campus-managed authentication system, OU Campus supports LDAP, CAS, and Shibboleth authentication methods. LDAP users can be configured through the Getting Started with Single Sign-On Getting Started with Single Sign-On I. Introduction Your institution is considering or has already purchased Collaboratory from Treetop Commons, LLC. One benefit provided to member institutions is Single) Participant Name: University of Victoria Canadian Access Federation: Trust Assertion Document (TAD) 1. Purpose A fundamental requirement of Participants in the Canadian Access Federation is that they assert Implementation Guide SAP NetWeaver Identity Management Identity Provider Implementation Guide SAP NetWeaver Identity Management Identity Provider Target Audience Technology Consultants System Administrators PUBLIC Document version: 1.10 2011-07-18 Document History CAUTION Before Symplified I: Windows User Identity. Matthew McNew and Lex Hubbard Symplified I: Windows User Identity Matthew McNew and Lex Hubbard Table of Contents Abstract 1 Introduction to the Project 2 Project Description 2 Requirements Specification 2 Functional Requirements 2 CA Performance Center CA Performance Center Single Sign-On User Guide 2.4 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation ) is Remote Authentication and Single Sign-on Support in Tk20 Remote Authentication and Single Sign-on Support in Tk20 1 Table of content Introduction:... 3 Architecture... 3 Single Sign-on... 5 Remote Authentication... 6 Request for Information... 8 Testing Procedure... Building Secure Applications. James Tedrick Building Secure Applications James Tedrick What We re Covering Today: Accessing ArcGIS Resources ArcGIS Web App Topics covered: Using Token endpoints Using OAuth/SAML User login App login Portal ArcGIS Canadian Access Federation: Trust Assertion Document (TAD) Participant Name: Canadian Access Federation: Trust Assertion Document (TAD) 1. Purpose A fundamental requirement of Participants in the Canadian Access Federation is that they assert authoritative and Business and Process Requirements Business Requirements mapped to downstream Process Requirements. IAM UC Davis Business and Process Requirements Business Requirements mapped to downstream Process Requirements IAM UC Davis IAM-REQ-1 Authorization Capabilities The system shall enable authorization capabilities that... Canadian Access Federation: Trust Assertion Document (TAD) Participant Name: McGill University Canadian Access Federation: Trust Assertion Document (TAD) 1. Purpose A fundamental requirement of Participants in the Canadian Access Federation is that they assert Authentication and Single Sign On Contents 1. Introduction 2. Fronter Authentication 2.1 Passwords in Fronter 2.2 Secure Sockets Layer 2.3 Fronter remote authentication 3. External authentication through remote LDAP 3.1 Regular LDAP authentication, White Paper Cybercom & Axiomatics Joint Identity & Access Management (R)evolution White Paper Cybercom & Axiomatics Joint Identity & Access Management (R)evolution Federation and Attribute Based Access Control Page 2 Realization of the IAM (R)evolution Executive Summary Many organizations TIBCO Spotfire Platform IT Brief Platform IT Brief This IT brief outlines features of the system: Communication security, load balancing and failover, authentication options, and recommended practices for licenses and access. It primarily The increasing popularity of mobile devices is rapidly changing how and where we Mobile Security BACKGROUND The increasing popularity of mobile devices is rapidly changing how and where we consume business related content. Mobile workforce expectations are forcing organizations to GSA Valve Security Framework Introduction Google Enterprise EMEA GSA Valve Security Framework Introduction Google Enterprise EMEA Strategic and secure information sources are naturally becoming key repositories that customers want to make searchable. Since search is IGI Portal architecture and interaction with a CA- online IGI Portal architecture and interaction with a CA- online Abstract In the framework of the Italian Grid Infrastructure, we are designing a web portal for the grid and cloud services provisioning. In following Canadian Access Federation: Trust Assertion Document (TAD) Participant Name: Royal Roads University_ Canadian Access Federation: Trust Assertion Document (TAD) 1. Purpose A fundamental requirement of Participants in the Canadian Access Federation is that they Getting Started with Single Sign-On Getting Started with Single Sign-On I. Introduction NobleHour sets out to incentivize civic engagement by enabling users within companies, educational institutions, and organizations to conduct and coordinate USING FEDERATED AUTHENTICATION WITH M-FILES M-FILES CORPORATION USING FEDERATED AUTHENTICATION WITH M-FILES VERSION 1.0 Abstract This article provides an overview of federated identity management and an introduction on using federated authentication Agenda. How to configure dlaw@esri.com Agenda Strongly Recommend: Knowledge of ArcGIS Server and Portal for ArcGIS Security in the context of ArcGIS Server/Portal for ArcGIS Access Authentication Authorization: securing web services PARTNER INTEGRATION GUIDE. Edition 1.0 PARTNER INTEGRATION GUIDE Edition 1.0 Last Revised December 11, 2014 Overview This document provides standards and guidance for USAA partners when considering integration with USAA. It is an overview of Integrating VMware Horizon Workspace and VMware Horizon View TECHNICAL WHITE PAPER Integrating VMware Horizon Workspace and VMware Horizon View TECHNICAL WHITE PAPER Table of Contents Introduction.... 3 Requirements.... 3 Horizon Workspace Components.... 3 SAML 2.0 Standard.... 3 Authentication Canadian Access Federation: Trust Assertion Document (TAD) Purpose A fundamental requirement of Participants in the Canadian Access Federation is that they assert authoritative and accurate identity attributes to resources being accessed, and that Participants API Architecture. for the Data Interoperability at OSU initiative API Architecture for the Data Interoperability at OSU initiative Introduction Principles and Standards OSU s current approach to data interoperability consists of low level access and custom data models ABOUT TOOLS4EVER ABOUT DELOITTE RISK SERVICES CONTENTS About Tools4ever... 3 About Deloitte Risk Services... 3 HelloID... 4 Microsoft Azure... 5 HelloID Security Architecture... 6 Scenarios... 8 SAML Identity Provider (IDP)... 8 Service Provider SAML 1 OTM and SOA Mark Hagan Principal Software Engineer Oracle Product Development Content What is SOA? What is Web Services Security? Web Services Security in OTM Futures 3 PARADIGM 4 Content What is SOA? Understanding Mediasite security. Technical planner: TP-03 Understanding Mediasite security Technical planner: TP-03 2010 Sonic Foundry, Inc. All rights reserved. No part of this document may be copied and/or redistributed without the consent of Sonic Foundry, Single Sign-On Implementation Guide Single Sign-On Implementation Guide Salesforce, Winter 16 @salesforcedocs Last updated: November 4, 2015 Copyright 2000 2015 salesforce.com, inc. All rights reserved. Salesforce is a registered trademark Secure the Web: OpenSSO Secure the Web: OpenSSO Sang Shin, Technology Architect Sun Microsystems, Inc. javapassion.com Pat Patterson, Principal Engineer Sun Microsystems, Inc. blogs.sun.com/superpat 1 Agenda Need for identity-based CA Adapter. Installation and Configuration Guide for Windows. r2.2.9 CA Adapter Installation and Configuration Guide for Windows r2.2.9 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation Single Sign On at Colorado State. Ron Splittgerber Single Sign On at Colorado State Ron Splittgerber Agenda Identity Management Authentication Authorization The Problem The Solution: Federation Trust Between Institutions Trust Between Institution and Federal Crawl Proxy Installation and Configuration Guide Crawl Proxy Installation and Configuration Guide Google Enterprise EMEA Google Search Appliance is able to natively crawl secure content coming from multiple sources using for instance the following main Canadian Access Federation: Trust Assertion Document (TAD) Participant Name: University of Lethbridge 1. Purpose A fundamental requirement of Participants in the Canadian Access Federation is that they assert authoritative and accurate identity attributes to resources EXTENDING SINGLE SIGN-ON TO AMAZON WEB SERVICES pingidentity.com EXTENDING SINGLE SIGN-ON TO AMAZON WEB SERVICES Best practices for identity federation in AWS Table of Contents Executive Overview 3 Introduction: Identity and Access Management in Am: How SSO Works in WhosOnLocation About Single Sign-on By default, your administrators and users are authenticated and logged in using WhosOnLocation s user authentication. You can however bypass this INTEGRATION GUIDE. DIGIPASS Authentication for Salesforce using IDENTIKEY Federation Server INTEGRATION GUIDE DIGIPASS Authentication for Salesforce using IDENTIKEY Federation Server Disclaimer Disclaimer of Warranties and Limitation of Liabilities All information contained in this document is Microsoft Office 365 Using SAML Integration Guide Microsoft Office 365 Using SAML Integration Guide Revision A Copyright 2013 SafeNet, Inc. All rights reserved. All attempts have been made to make the information in this document complete and accurate., Identity Implementation Guide Identity Implementation Guide Version 35.0, Winter 16 @salesforcedocs Last updated: October 27, 2015 Copyright 2000 2015 salesforce.com, inc. All rights reserved. Salesforce is a registered trademark of Single Sign-On Implementation Guide Single Sign-On Implementation Guide Salesforce, Summer 15 @salesforcedocs Last updated: July 1, 2015 Copyright 2000 2015 salesforce.com, inc. All rights reserved. Salesforce is a registered trademark of Last Updated: July 2011. STATISTICA Enterprise Server Security Last Updated: July 2011 STATISTICA Enterprise Server Security STATISTICA Enterprise Server Security Page 2 of 10 Table of Contents Executive Summary... 3 Introduction to STATISTICA Enterprise Server... StreamServe Persuasion SP5 StreamStudio StreamServe Persuasion SP5 StreamStudio Administrator s Guide Rev B StreamServe Persuasion SP5 StreamStudio Administrator s Guide Rev B OPEN TEXT CORPORATION ALL RIGHTS RESERVED United States and other ADFS Integration Guidelines ADFS Integration Guidelines Version 1.6 updated March 13 th 2014 Table of contents About This Guide 3 Requirements 3 Part 1 Configure Marcombox in the ADFS Environment 4 Part 2 Add Relying Party in ADFS Oracle White Paper January 2013. Integrating Oracle Application Express with Oracle Access Manager. Revision 1 An Oracle White Paper January 2013 Integrating Oracle Application Express with Oracle Access Manager Revision 1 Disclaimer The following is intended to outline our general product direction. It is intended Managing trust relationships with multiple business identity providers (basics) 55091A; 3 Days Lincoln Land Community College Capital City Training Center 130 West Mason Springfield, IL 62702 217-782-7436 Managing trust relationships with multiple business identity providers (basics) tibbr Now, the Information Finds You. tibbr Now, the Information Finds You. - tibbr Integration 1 tibbr Integration: Get More from Your Existing Enterprise Systems and Improve Business Process tibbr empowers IT to integrate the enterprise CA Nimsoft Service Desk CA Nimsoft Service Desk Single Sign-On Configuration Guide 6.2.6 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation OpenHRE Security Architecture. (DRAFT v0.5) OpenHRE Security Architecture (DRAFT v0.5) Table of Contents Introduction -----------------------------------------------------------------------------------------------------------------------2 Assumptions----------------------------------------------------------------------------------------------------------------------2 HP Software as a Service HP Software as a Service Software Version: 6.1 Federated SSO Document Release Date: August 2013 Legal Notices Warranty The only warranties for HP products and services are set forth in the express warranty Feide Integration Guide. Integrating a service provider with Feide Feide Integration Guide Integrating a service provider with Feide May 2015 Document History Version Date Initials Comments 1.0 Nov 2009 HV First version of this document 1.1 Dec 2009 HV Updated URLs and CA Single Sign-On r12.x (CA SiteMinder) Implementation Proven Professional Exam CA Single Sign-On r12.x (CA SiteMinder) Implementation Proven Professional Exam (CAT-140) Version 1.4 - PROPRIETARY AND CONFIDENTIAL INFORMATION - These educational materials (hereinafter referred to as IT@Intel. Improving Security and Productivity through Federation and Single Sign-on White Paper Intel Information Technology Computer Manufacturing Security Improving Security and Productivity through Federation and Single Sign-on Intel IT has developed a strategy and process for providing
http://docplayer.net/17720174-Iam-enterprise-directories-and-shibboleth-oh-my.html
CC-MAIN-2018-30
en
refinedweb
marks do not work for unittest style test cases With py.test 1.3.4 on python 2.5 (Debian Lenny): {{{ !python import py import unittest class TestSimple: pytestmark = py.test.mark.simple def test_answer(self): assert 41 + 1 == 42 def test_answer2(self): assert 41 + 1 == 41 class UnittestTestCase(unittest.TestCase): pytestmark = py.test.mark.unittest def test01(self): pass def test02(self): self.fail() }}} py.test => 2 failed, 2 passed (as expected) py.test -k simple => 1 failed, 1 passed, 2 deselected (as expected) BUT py.test -k unittest => 4 deselected (expected 1 failed, 1 passed, 2 deselected) thanks for the report. I think i fixed this in the ongoing development branch (which has better unittest support). could you try with and then run with your test file? Also am curious: do you plan to actually mix pytest and unittest-style tests like you did in your example? If not how do you plan to "mix"? cheers, holger
https://bitbucket.org/hpk42/py-trunk/issues/135/marks-do-not-work-for-unittest-style-test
CC-MAIN-2018-30
en
refinedweb
This restaurant is closedFind similar restaurants Own this business? Learn more about offering online ordering to your diners. Gloria's Cafe Monday lentil soup Tuesday vegetable soup Wednesday green plantains soup Thursday tortilla soup Friday beef rib soup Saturday beef tripe soup Sunday hen stew soup carne asada, chicharron, huevo, aguacate, frijoles, chorizo typical platter grilled beef, pork skin, sausage, egg, avocado, beans carne, chicharron, huevo, arroz, frijoles y maduros small typical platter beef, fried pork skin, egg, rice, beans and sweet plantains carne molida, chicharron, huevo, maduros, frijoles, aguacate y arroz country platter ground beef, pork skin, egg, sweet plantains, beans, avocado and rice papa, arroz y frijoles top flank in creole sauce, potato, rice and beans maduros, arroz frijoles top flank grilled sweet plantains, rice and beans maduros, papa, arroz y frijoles tongue in sauce sweet plantains, potato, rice and beans maduros, arroz y frijoles top round steak with onions sweet plantains, rice and beans arroz, frijoles y huevo frito top round steak with fried egg rice and beans frijoles, maduros y arroz grilled top round steak beans, sweet plantains and rice arroz, frijoles y ensalada grilled pork chop rice, beans and salad ensalada, arroz, frijoles y maduros breaded pork loin salad, sweet plantains, rice and beans arroz, frijoles, maduros y ensalada grilled pork loin rice, beans, sweet plantains and salad arroz, frijoles, maduros y ensalada breaded steak rice, beans, sweet plantains and salad carne de res, chicharron, chorizo, carne de cerdo, patacon, yuca y arepa beef platter beef steak, pork skin, sausage, pork steak, green plantain, cassava and corn cake papa criolla, ensalada y chimichurri skirt steak small yellow potatoes, salad and chimichurri sauce papa frita, chicken with rice French fries arroz, frijoles y ensalada grilled chicken breast, rice, beans and salad arroz, frijoles, maduros y ensalada chicken breast in creole sauce, rice, beans, sweet plantains and salad arroz, frijoles y ensalada, breaded chicken breast, rice, beans and salad arroz, frijoles, maduros y ensalada, grilled chicken breast with onion, rice, beans, sweet plantains and salad tostones y ensalada, rice with seafood green plantain and salad steamed red snapper fried red snapper fried red tilapia steamed red tilapia seafood casserole shrimp with garlic sauce shrimp with hot sauce breaded shrimp breaded tilapia filet beef patties sausage with corn cake fried pork skin with corn cake boiled corn in milk guava paste. chips and salsa oatmeal sweet corn cake chicken pie green plantain fried sweet plantain French fries white rice red beans order of avocado order of salad avocado salad champagne, apple, orange, grape, pineapple, pony malta domestic beer import beer three, milks flan figs with cheese ice cream cake chicken or beef, served with red beans, pico de gallo and sour cream. beef or chicken sauteed with onions, beans and cheese inside. Also comes with pico de gallo, sour cream, red beans, and rice. soft or crispy corn tortilla. Beef, chicken, or chorizo. Served with pico de gallo, red beans, and rice. chicken, cheese, and roasted salsa, wrapped in a flour tortilla and fried to perfection. Drizzled with chipotle mayonnaise and barbeque sauce. Served with pico de gallo, sour cream, and rice and beans. lightly breaded or grilled. Served with rice and red beans or tostones and salad fried pork skin, carne asada, sweet plantains and served with an egg, rice, and red beans served with rice, red beans and a choice of tostones or sweet plantains chicken, beef, and shrimp ask your server for details. ask your server for details. ask your server for details. served with the meat special, soup of the day, rice, salad and sweet plantains. brown sugar water shakes and smoothies custard flan Menu for Gloria.
https://www.allmenus.com/fl/orlando/34300-glorias-cafe/menu/
CC-MAIN-2018-30
en
refinedweb
olm datei in pst konvertieren olm in pst konvertieren Converter eml Para pst Converter pst Para dbx join multiple pst files into single pst pst recovery software repair outlook pst files Ansi pst File Unicode pst Outlook pst convert to pst 2gb limitation pst File Split pst File Split olm to mbox mac import olm to mac mail olm to ics olm to ical converter OST File to PST File Tool is a unique tool for Outlook user because this software recovers emails in Offline Mode than mean. when Exchange Server does corrupt due to any external or internal issue as well as technical problem. With OST to PST conversion process recover emails in Offline. OST Recovery software is a dependable and thus one of the most excellent for Recover Damaged OST File tool that can you rapidly develop to implement the most exceptional OST data into PST File process possible. OST file cannot release in Outlook then require to OST File to PST Tool to OST Data into PST File. You can obtain online full licensed version our software OST File to PST Tool only at $49 for personal and $199 for business. . ost file to pst tool , scan ost file to pst , solve damage ost file , repair damage ost file , damage ost data into pst , recover damage ost file , ost recovery software , ost data into pst file Utilize Best OST File Conversiom software which helps to convert OST into PST as every outlook user is aware of the fact that sometimes the Microsoft OST file does not open in MS outlook because of the synchronization problems with the ms exchange server and thus files get corrupted and cannot be opened or accessed in outlook. And if you are an outlook user and willing to restore deleted data within OST file rather if you are willing to extract the data from the deleted OST file that can be done by converting OST file to PST and Best OST File Conversion software come handy in that case as it convert the OST file into PST and it easily opens OST file as PST file in MS Outlook. . ost file conversion , best ost file conversion tool , ost file conversion software , conversion ost file , ost file conversion utility By using this tool any non-technical user can also repair corrupt OST file as well as convenient conversion of inaccessible OST file to PST file on all edition of Windows OS (Windows XP, 2003, Vista, Win 7, Win 8. microsoft ost recovery , microsoft outlook ost recovery , recover ost file , ost file recovery Advance free OST to PST Converter perform recovery process in two steps one is quick scan and other is advance scan. Free OST to PST converter tool to convert OST files of any exchange version to PST files of any Outlook version in easy way. By download free OST to pst converter software user can easily repair and save first 25 OST files per folder. OST2PST freeware enhanced with advanced features and this will generate the report for source OST file(s) and resultant PST file(s) locations in CSV format after export. You can get Free OST to PST at affordable price. freeosttopst. . free ost to pst converter , convert ost to pst free , convert outlook ost2pst , ost2pst freeware , free ost repair , ost to pst migration Powerful Microsoft OST Converter is a best OST to PST Conversion tool which repairs unreadable orphan OST file on cache mode of Exchange Server and easily restore emails from unworkable OST file. User can repair and restore emails from OST file manually ? it is technical process and consumes more time but using third party Microsoft OST to PST Converter software to easily restore from OST file without any technical skill and time consuming. OST to PST Conversion software maintain the folder structure after recovery emails from OST file. Microsoft OST Conversion has multiple export option to easily export OST to PST. EML or MSG file. OST to PST freeware tool get complete export report including all properties of emails after OST to PST conversion. Mail filter is one of the best features of OST to PST Converter software which gives permission to Outlook client easily restore emails according to date. Best OST to PST Conversion tool successfully runs with entire Windows Platform as well as support whole MS Outlook Edition to easily Convert OST to PST files including all email properties. . best ost to pst conversion tool , microsoft ost to pst , convert ost to pst , ost to pst converter , microsoft ost converter , ost to pst conversion PST. PST. Other Files: PCT. PST. data recovery , file recovery , recover lost files , recover deleted files How to convert OST PST freeware in new computer? Then try this advance convert OST PST freeware Tool which has a multi functional tool to recover & convert OST PST. EML and MSG file format. After moving OST file into PST file format. you can simply convert OST to new computer. This OST PST convert software to convert exchange mailbox items including email. task etc. The OST PST converter software supports all the advance version of Exchange server and MS Outlook. Download the free demo version which is free to show the converted items and after get satisfied with this product you can buy the full version of OST PST Convert from our website only $99 to convert your whole the OST data into PST file with all the attachments. . convert ost pst freeware tool , convert ost pst freeware , ost pst convert , convert ost to outlook , ost pst freeware Convert OST to PST have most essential role to save Exchange OST file from corruption. To perform this task you need to try Exchange OST to PST converter tool which is an expert to convert Outlook OST to PST. EML and MSG formats. With this tool you can convert Outlook data file to PST with entire email. task etc. Exchange OST to PST converter tool to convert an OST to PST? Because OST file creates on Offline where network is disconnected during the sending email by the Outlook user such that time email store in Offline Exchange Server as OST file but it is unable to open in Outlook but email is very important for personal or business purpose so converting Outlook user OST file to PST with help of Convert an OST to a PST. you can purchase it online without any hesitation. You want to save converted email and their items or properties for any important use or further use then purchase full version of Convert OST to PST software. . convert an ost to pst , convert ost to pst format , convert an ost to a pst free , convert an ost to a pst freeware , convert an ost to a pst outlook 2010 , ost converter software Purchase OST to PST utility restores original formatting of Plain Text. virus attack or other regions in this situation you can use Buy Convert OST to PST Tool which recovers or restore all OST data items in offline modes. Purchase OST to PST software works on all MS Exchange Server (. OST to PST Buy tool recovers or restore all inaccessible or orphaned OST File and remove all OST corruptions. Purchase OST to PST software supports all Microsoft Outlook versions such as 2016. Vista and Win7 or Win8 (both of 32-Bit and 64-Bit versions). Download the OST to PST conversion software and checked it’s functionality if you fulfilled software recovery process. then purchase full version of our software OST to PST. . purchase ost to pst , ost to pst purchase , ost to pst buy , buy convert ost to pst , ost to pst conversion , ost converter shareware Exchange OST to PST Converter application is the tool that has been designed for serving Outlook users who wish to Exchange OST to PST. The software is the OST to PST Conversion expertise which Exchange OST to PST with much of success that no other software reaches with so much perfection. OST to PST Converter software easily convert and recover your mail messages. Date and time etc. from OST into PST file format. Try the freeware version of Exchange OST to PST converter tool and recover first 25 items free. If you satisfied this software email recovery process then purchase full version of our software Exchange OST to PST Converter just at $49. . exchange ost to pst converter , ost to pst conversion , exchange ost to pst , exchange ost to outlook , ost to pst converter , ost to pst , convert exchange ost to pst Repair OST database and convert into a fresh PST file completely unchanged. OST File Repair software is an excellent recovery and conversion tool because it can convert OST file to PST database. Useful options is that you can save the OST file into three file formats such as: MSG /PST /EML email file format and can save converted OST file according to your requirement. Repair OST to PST simply remove all types of OST corruptions after converting OST file save in Microsoft Outlook PST file. . ost file repair , ost repair , outlook ost file repair , ost repair tool , repair ost , repair ost file , repair ost to pst , repair ost download , ost file repair 2007 , ost to pst repair If you have identical queries like how to convert OST to PST?. then try OST converter software which is tally as a best OST to PST converter tool converts OST to PST & recover corrupt. appointments. After converting OST file into PST. EML you can access each file singly. This best OST file converter tool can convert OST to PST. and EML. This OST PST converter tool support entire advance version of MS Outlook as well as compatible with entire earlier version of Win OS including Win10. ost converter software , ost converter , ost file converter , free ost converter , ost to pst converter , ost pst converter , convert ost to pst , how to convert ost to pst Exchange OST to PST Tool presents an exceptional platform for MS Outlook customer to easily recover deleted entire emails from MS Exchange Server database because. now use OST Recovery to convert OST as PST then open in MS Outlook. . exchange ost to pst software , exchange ost to pst recovery software , ost to pst , convert exchange ost pst , recover exchange ost file , exchange ost recovery , repair ost , extract ost , fix ost , ost to pst , move ost as pst , exchange ost to pst converter How to convert OST to PST free Outlook 2007? Then Use OST2PST converter software that permit you most professional solution to convert OST PST free Outlook 2007. 2010 format. This OST to PST conversion software can recover & repair corrupt OST file then convert OST file to Outlook PST. EML and MSG file formats. This advance OST PST converter application smoothly convert OST file to Outlook 2003. 2010 and 2013 (both of 32-bit and 64-bit versions). Download demo version of OST2PST converter to observe how this advance OST2PT converter software works. . convert ost pst free outlook 2007 , convert ost to pst free , ost2pst converter , ost to pst conversion , repair corrupt ost file Exchange OST Recovery software solution utilize inaccessible OST file as PST file with all data like email messages. notes etc. User want to recover OST to PST & restore deleted emails with help of Outlook OST Recover software and convert OST file to PST and access important emails. This OST file recovery software to recover OST file & convert OST file to PST file easily along with all objects like email messages. notes etc. OST to PST software supports all editions of Microsoft Outlook 97. recover ost to pst , recover ost pst , recover data from ost file , recover ost file , recover ost file to pst , ost file recovery , recover ost , recover ost to outlook , ost file recover , outlook ost recover , ost to pst , exchange ost recovery The OST Repair tool simply repair OST file and recovers or repair emails from inaccessible OST file and saves them to working PST file for easy access with MS Outlook. repair ost file , ost to pst repair , repair ost to pst , repair ost in pst , ost file repair , ost repair , repair ost file software Convert OST PST Outlook Free is quick idea to save OST data into another email application as well as electronic gadgets like window address book. Outlook Express etc. This OST PST convert free software is most excellent design into OST recovery lab which allows converting from OST to PST in Outlook. Free OST PST convert software easily support an entire version of MS Outlook such as Outlook 2016. this free demo for evaluation convert all data from OST to PST & OST to MSG/EML format. If you hold heavy database to convert then you have to purchase full version of OST PST Conversion Software. . convert ost pst outlook free , ost pst convert free , free ost pst convert , ost pst converter , ost pst conversion , how to convert ost pst free OST PST Convert tool is a quick solution for converting OST to PST and get the most powerful features to convert OST to PST. convert OST to MSG/EML. Most effective and efficient conversing ability of OST to PST and convert all exchange OST items like emails. and journal etc. OST file convert tool can convert OST to PST. This OST file PST convert tool for 30 day trial period in which. you can recover OST file to PST and view the recovered items but you cannot save them. You can purchase the OST to PST converter tool for enjoying its full functionality. However. to save recovered OST items to PST Outlook. ost pst convert tool , ost file pst convert , ost convert , ost file convert , ost to pst convert With this Microsoft OST to Outlook converter software, you can quickly recover OST file and convert OST file into three different extensions such as Outlook PST, EML and MSG. how to convert ost to outlook , convert ost to pst , ost to outlook converter , microsoft ost to outlook converter tool , ost converter , ost conversion application Use this tool to convert OST to PST file format. OST to PST file converter software. immense idea of SysTools developers provided for how to convert OST file to PST file allied problems. Users can use this OST to PST converter free software without any much technical knowledge and any expertise skills. Convert OST to PST software can easily restore OST file thereby performing a through scan of OST file. OST to PST converter tool displays the recovered data including email messages. journals and notes etc in a tree-like structure. The recovered data is displayed so that verify the recovered email data and save them separately in the PST file at a preferred location. You can invest in full licensed version of OST2PST Software to convert OST file. . tool to convert ost to pst , tool ost to pst converter , tool ost to pst , tool ost converter , ost converter tool , ost to pst converter , ost to pst converter tool , ost to pst file converter , convert ost to pst , convert ost file to pst file Filter: All / Freeware only / Title OS: Mac / Mobile / Linux Sort by: Download / Rating / Update
http://freedownloadsapps.com/mobile-olm-para-pst.html
CC-MAIN-2018-30
en
refinedweb
#include <nnet-am-decodable-simple.h> Definition at line 249 of file nnet-am-decodable-simple.h. This constructor takes features as input, and you can either supply a single iVector input, estimated in batch-mode ('ivector'), or 'online' iVectors ('online_ivectors' and 'online_ivector_period', or none at all. Note: it stores references to all arguments to the constructor, so don't delete them till this goes out of scope. Definition at line 58 of file nnet-am-decodable-simple.cc. Returns true if this is the last frame. Frames are zero-based, so the first frame is zero. IsLastFrame(-1) will return false, unless the file is empty (which is a case that I'm not sure all the code will handle, so be careful). Caution: the behavior of this function in an online setting is being changed somewhat. In future it may return false in cases where we haven't yet decided to terminate decoding, but later true if we decide to terminate decoding. The plan in future is to rely more on NumFramesReady(), and in future, IsLastFrame() would always return false in an online-decoding setting, and would only return true in a decoding-from-matrix setting where we want to allow the last delta or LDA features to be flushed out for compatibility with the baseline setup. Implements DecodableInterface. Definition at line 303 of file nnet-am-decodable-simple.h. References KALDI_ASSERT, and KALDI_DISALLOW_COPY_AND_ASSIGN. Returns the log likelihood, which will be negated in the decoder. The "frame" starts from zero. You should verify that IsLastFrame(frame-1) returns false before calling this. Implements DecodableInterface. Definition at line 78 of file nnet-am-decodable-simple.cc. References DecodableAmNnetSimple::decodable_nnet_, DecodableNnetSimple::GetOutput(), DecodableAmNnetSimple::trans_model_, and TransitionModel::TransitionIdToPdf(). The call NumFramesReady() will return the number of frames currently available for this decodable object. This is for use in setups where you don't want the decoder to block while waiting for input. This is newly added as of Jan 2014, and I hope, going forward, to rely on this mechanism more than IsLastFrame to know when to stop decoding. Reimplemented from DecodableInterface. Definition at line 297 of file nnet-am-decodable-simple.h. Returns the number of states in the acoustic model (they will be indexed one-based, i.e. from 1 to NumIndices(); this is for compatibility with OpenFst). Implements DecodableInterface. Definition at line 301 of file nnet-am-decodable-simple.h. Definition at line 312 of file nnet-am-decodable-simple.h. Referenced by DecodableNnetSimple::DoNnetComputation(). Definition at line 313 of file nnet-am-decodable-simple.h. Referenced by DecodableAmNnetSimple::LogLikelihood(). Definition at line 314 of file nnet-am-decodable-simple.h. Referenced by DecodableAmNnetSimple::LogLikelihood().
http://kaldi-asr.org/doc/classkaldi_1_1nnet3_1_1DecodableAmNnetSimple.html
CC-MAIN-2018-30
en
refinedweb
The namespace identified by the URI is defined in the Service Modeling Language Interchange Format Version 1.1. This document contains a directory of links to resources related to that namespace. This namespace is normatively defined by the current version of the specification Service Modeling Language Interchange Format. Comments on this document may be sent to the public public-sml@w3.org mailing list (public archive).
http://www.w3.org/ns/sml-if
CC-MAIN-2016-18
en
refinedweb
This th...DWHEELER/Text-Markup-0.23 - 21 May 2015 06:24:15 GMT - Search in distribution - Text::Markup::HTML - HTML parser for Text::Markup - Text::Markup::Pod - Pod parser for Text::Markup - Text::Markup::None - Turn a file with no known markup into HTML - 9 more results from Text-Markup » Text::Markup::Any is Common Lightweight Markup Language Interface. Currently supported modules are Text::Markdown, Text::MultiMarkdown, Text::Markdown::Discount, Text::Markdown::GitHubAPI, Text::Markdown::Hoedown, Text::Xatena and Text::Textile....SONGMU/Text-Markup-Any-0.04 - 23 Nov 2013 06:32:02 GMT - Search in distribution TEI XML is a wonderful thing. The elements defined therein allow a transcriber to record and represent just about any feature of a text that he or she encounters. The problem is the transcription itself. When I am transcribing a manuscript, especiall...AURUM/Text-TEI-Markup-1.9 - 16 May 2014 14:48:17 GMT - Search in distribution MetaMarkup was inspired by POD, Wiki and PerlMonks. I created it because I wanted a simple format to write documents for my site quickly. A document consists of paragraphs. Paragraphs are separated by blank lines, which may contain whitespace. A para...JUERD/Text-MetaMarkup-0.01 - 13 Jun 2003 07:40:24 GMT - Search in distribution - Text::MetaMarkup::HTML - MM-to-HTML conversion - Text::MetaMarkup::AddOn::Perl - Add-on for MM to support embedded Perl - Text::MetaMarkup::AddOn::Raw - Add-on for MM to support raw code - 2 more results from Text-MetaMarkup » Provides formatting to HTML for the *Caffeinated Markup Language*. Implemented using the Text::CaffeinatedMarkup::PullParser. For details on the syntax that CML implements, please see the Github wiki < - 04 Jan 2014 22:55:24 GMT - Search in distribution TODO...ABW/Kite-0.4 - 28 Feb 2001 15:12:52 GMT - Search in distribution - Kite - collection of modules useful in Kite design and construction. - Kite::XML::Parser - XML parser for kite related markup Blatte is a very powerful text markup and transformation language with a very simple syntax. A Blatte document can be translated into a Perl program that, when executed, produces a transformed version of the input document. This module itself contain...BOBG/Blatte-0.9.4 - 28 Jul 2001 21:05:53 - Text::Smart::HTML - Smart text outputter for HTML This module simply strips HTML-like markup from text rapidly and brutally. It could easily be used to strip XML or SGML markup instead; but as removing HTML is a much more common problem, this module lives in the HTML:: namespace. It is written in XS...KILINRAX/HTML-Strip-2.10 4.5 (3 reviews) - 22 Apr 2016 11:21:38 GMT - Search in distribution ZOUL/Text-FindLinks-0.04 - 27 Sep 2009 07:45:44 GMT - Search in distribution This module is a thin wrapper for John Gruber's SmartyPants plugin for various CMSs. SmartyPants is a web publishing utility that translates plain ASCII punctuation characters into "smart" typographic punctuation HTML entities. SmartyPants can perfor...TSIBLEY/Text-Typography-0.01 5 (1 review) - 10 Jan 2006 04:33:49 GMT - Search in distribution Provides a simple means of parsing XML to return a selection of information based on a markup profile describing the XML structure and how the structure relates to a logical grouping of information ( a dataset )....SPURIN/XML-Dataset-0.006 - 04 Apr 2014 15:22 Spork lets you create HTML slideshow presentations easily. It comes with a sample slideshow. All you need is a text editor, a browser and a topic. Spork allows you create an entire slideshow by editing a single file called "Spork.slides" (by default)...INGY/Spork-0.21 - 10 Jun 2011 16:29:05 GMT - Search in distribution This class represents an XPC request or response. It uses XML::Parser to parse XML passed to its constructor....GREGOR/XPC-0.2 - 13 Apr 2001 11:35:13 GMT - Search in distribution As these items are completed, move them down into Recently Completed Items, make sure to date and initial. When we have a version release, all of the recently completed items should be moved into changelog.pod....TPEDERSE/WordNet-Similarity-2.07 - 04 Oct 2015 16:19:03 GMT - Search in distribution This script uses "Pod::POM" to convert a Pod document into text, HTML, back into Pod (e.g. to normalise a document to fix any markup errors), or any other format for which you have a view module. If the viewer is not one of the viewers bundled with "...NEILB/Pod-POM-2.01 3 (2 reviews) - 07 Nov 2015 21:05:42 3.5 (10 reviews) - 18 Apr 2015 15:04:42 GMT - Search in distribution DBR/App-PDoc-0.10.0 - 21 Mar 2013 18:09:39 GMT - Search in distribution
https://metacpan.org/search?q=Text-Markup
CC-MAIN-2016-18
en
refinedweb
minidumper - Python crash dumps on Windows If you're writing software on Windows, you've likely come across minidumps. They're a great help when your project encounters a crashing scenario, as they record varying levels of information to help you reproduce the problem. The main product I work on at my day job, a server written in C++, has had minidump functionality since the beginning. We keep PDBs around for our releases, then when customers encounter a crash, we grab the minidump, match it up with the binaries and PDBs, then try to figure out what the scenario was. I think that's fairly standard operating procedure, and it tends to work alright. Release crash dumps are obviously less helpful than debug dumps, but you can still get enough out of them to get started in the right direction. So while one part of my job has that, the other part - the Python part - has had me wishing for it. So I wrote it. The extension modules I maintain internally for our server's APIs occasionally come crashing down during our test automation. That's fairly alarming at first since the tests just drop out and you don't get much of an indication of why. Was it the extension? The underlying C++ API? Python itself? The unittest logs are all we have to go off of, so then it's a matter of piecing together what was happening at the time, then either manually re-running it from the REPL and/or attaching the Visual Studio debugger to catch the problem. In comes minidumper. By importing minidumper and enabling it, you can receive crash dump files whenever your Python process goes down. It's there for you. import minidumper minidumper.enable(name="example") Now if you do some crazy stuff and cause a crash in your extension code... void divide_by_zero() { int x = 1; int b = x / 0; } ...you'll get a crash dump that will tell you exactly what just happened. In my case, I got example_20110929-071529.mdmp. Now if you open that up in Visual Studio, ideally the one that Python was compiled with, you'll get a look into what happened once you hit F5 (or Debug > Start Debugging). The first thing you'll see is a popup telling you what the problem was and where it occurred, then Visual Studio will show you exactly where in the code the issue lies. As we all know, division by zero is a no-no, and it crashed. If you hit the break button, you can poke around in a ton of information that was gathered from your crashed process. Depending on what value you gave to the type parameter of minidumper.enable(type=...), which defaults to MiniDumpNormal and has a full list of options here, you'll have different amounts of information. You can walk around the call stack and see what functions were called with what values, and from there you can inspect variables within a function by hovering over them with the cursor. The Debug > Windows menu contains a whole bunch of other pieces of information, including memory, disassembly, value watching, and more. As far as examples and tests go, I only have some of the basics down, although I plan on bulking those areas up and coming up with more useful and interesting code to prove this extension's worth. I just threw the source up on, but I'm going to wait on getting it on PyPI until I figure out the best way to organize and distribute it. If you're looking for more info on minidumps, and were helpful sites, as well as the various MSDN documentation. The following setup steps are what I do to get started, using the CPython default branch, aka, CPython 3.3. Also note that I'm using a debug-built Python, and telling the minidumper extension to do a debug build as it's what I usually use at work, as well as when I'm working on CPython. - hg clone minidumper-dev - C:python-devcpython-mainPCbuildpython_d.exe setup.py build --debug install - C:python-devcpython-mainPCbuildpython_d.exe -m tests Running the tests will build a tester extension, which contains two crashing functions. Right now, the few tests just call the crash functions with different minidumper.enable settings in order to make sure the right dumps are being created in the right places. Hope it helps. Note: Until I fix, the crash windows asking you to debug or close the program will stay around until you click something. Ideally I'll be able to add functionality to temporarily disable Windows Error Reporting for the module, as it currently requires manual intervention while running the CPython test suite on Windows, as :code:`minidumper` does.
http://blog.briancurtin.com/posts/20110929minidumper-python-crash-dumps-on-windows.html
CC-MAIN-2016-18
en
refinedweb
Procedures need to send the data to the server, so wrote a servlet to do simple data acceptance testing, but reported the following exception: java.net.ConnectException: localhost/127.0.0.1: 8080 - Connection refused Error code in this section URL From: Problem: Few days back we were working with Apache web server and were using it for proxy. We want to use HTTPD for directing requests to 8080 port where Apache tomcat was running. We configured the proxy sett. Original Source: It is said that HAProxy can do load balancing, server health monitoring but can also have automatic stop down distribution of machines, when the server is Original This article describes how to configure Tomcat and Terracotta server will be common Web application deployment to the cluster, the nodes across Tomcat session replication to achieve post, in essence, the purpose of proceedings sent to a sign an agreement for the post string as follows: 01 POST / Purpose program HTTP/1.1 02 Accept: */* 03 Referer: http: // 04 Accept-Language: zh-cn,en-us;q=0.5 05 Content-Type: appl ORIGINAL: Said HAProxy can do load balancing, server health monitoring can also have distribution down the automatic stop when the server is normal, then automatically bal 目前的项目需要涉及到与服务器的交互,就把大概的代码贴出来交流一下. 大概的内容就是android端用post方法与服务器连接,然后在服务器端用servlet去根据用户名去查找数据库,如果用户名存在那么登录成功并查找存取的数据. android端的内容: final String uriConnection = ""; //用户名及其密码 String username; S <! - [If gte mso 9]> <xml> <w:WordDocument> <w:View> Normal </ w: View> <w:Zoom> 0 </ w: Zoom> <w:TrackMoves/> < w: TrackFormatting /> <w:PunctuationKerning/> <w:DrawingGridVerticalSpacing> Compiled under Linux install mysql-5.0.45.tar.gz Reproduced <! - Body start -> (1) ------------- ---------- Preparatory work 1: If the downloaded file is named: mysql-5.0.45.tar.gz 2: If the copy to the / home under the 3: groupadd mysql # add the m A few days ago to see have nginx_upstream_jvm_route project, read the introduction, very excited, because it is written by a Chinese patch to address the session of sync problem, but he is not shared, nor is it synchronization, Radio receivers - BroadcastReceiver 1. Overview of broadcasting is divided into two different types: "ordinary radio (Normal broadcasts)" and "orderly Broadcasting (Ordered broadcasts)". General broadcast is completely asynchronous, an CentOS(Linux)下的apache服务器配置与管理方法分享,需要的朋友可以参考下. 一.WEB服务器与Apache 1.web服务器与网址 2.Apache的历史 3.补充可以查看apache服务器的市场占有率 同时必须注意的是ngnix,正处于强势增长的上升时期,大有和apache一争天下的感觉,真是后生可畏~~~ 二.Apache服务器的管理命令 1.命令启动:service httpd start/stop/restart/relo 对于程序员来说,掌握一些基本的Linux命令是必不可少的,即使现在用不到,在不久的将来也应该会用到.由于Linux有很多命令,每个命令基本可以用一篇文章介绍,所以本文仅总结一些常用命令的常用用法,如有明显的遗漏或错误,请各位帮忙指出,谢谢! 以下内容基于测试环境:Red Hat 4.5/5 一.服务器硬件配置 1.查看硬盘及分区情况 # fdisk -l 2.查看分区空间使用情况 可以查看各分区大小.已使用.可用.已使用百分比.挂载情况 1)默认单位为K # df 2)可读性更好的显示,如单位M 注:是我在/etc/hosts文件中注册的域名,方面直观测试.可以换成nginx服务器ip 1 #user nobody; 2 worker_processes 2; 3 4 #error_log logs/error.log; 5 #error_log logs/error.log notice; 6 #error_log logs/error.log info; 7 8 #pid logs/nginx.pid; 9 10 11 events { 12 worker_c 编辑nginx配置文件: upstream test_backend { server 127.0.0.1:8080 fail_timeout=0; server 192.168.5.100:8080 fail_timeout=0; server 192.168.5.102:8080 fail_timeout=0; } server { listen 80; server_name; root /home/ubuntu/workspace/group_demo/publ 多线程下载,大概意思就是说,本地先用RandomAccessFile创建一个文件,设置好大小.把服务器端的文件分成多个块(分成几块就是几个线程),每个块对应的开启一个线程下载,形成一个文件.好处嘛,大家都知道..速度快了呗~ 老规矩,先把核心代码弄上来 线程主体内容 public void run() { try { URL url = new URL(path); HttpURLConnection conn = (HttpURLConnection) url.openConnection() For the team, the establishment of a unified development environment is a necessity maven well assist the establishment of a unified environment. Here's how to introduce more effective unified configuration. Preparation work: Download the necessary s. package xzt.rs.tools; imp maven2 start I believe everyone maven1 already very familiar with the specific maven can do, is not explained in detail. Personally feel that the open-source projects using maven or rel maven2 start I believe everyone maven1 already very familiar with the specific maven can do, is not explained in detail. Personally feel that the open-source projects using maven or relatively more within the company, it is not clear. I've used the c package example.regularexpressions; import java.util.regex.MatchResult; import java.util.regex.Matcher; import java.util.regex.Pattern; import junit.framework.TestCase; public class Basics extends TestCase { /** * Pattern Class: * Pattern Compile the Original Source: A very detailed description of the HTTP protocol to explain. Author: Jeffrey Introduction HTTP is a part of object-oriented application layer protocol, because of its simple First, prepare the environment In this article, all procedures are completed in Linux, under development, were able to function properly. In the development process, we need to use gSOAP, you can download from the following Web site: [url] From: Here I will explain: Many articles from the selections from the other, I have written the source, the other good things in addition to the article, you can through this web s Concurrent connections by looking Nginx, we can be more clear to know the site load. Nginx complicated view there are two ways (This is so because I only know of two), one is through the web interface is through the command, web view than the command android use ksoap2 connect webservice (2010-04-16 16:36:25) reproduced Tags: androidksoap2webserviceit Categories: Android Use of J2SE's ksoap2 standards, I also do a version of the android cottage connect webservice. Because the relationship between android use ksoap2 connect webservice (2010-04-16 16:36:25) reproduced Tags: androidksoap2webserviceit Categories: Android Use of J2SE's ksoap2 standards, I also do a cottage version of the android to connect webservice. Because the relationship betw On stress testing A prospective customer to do a system performance test, talked in front of phone calls, he metalink, bbs, etc. understand the insert rac lower than single-node performance, and that can help to adjust. First of all, he felt a proble 1. Requirements of Apache access logs to find out which number 100 in front of the ip number. Log about 78M or so. What is the apache log file Excerpts (including log access.log): 218.107.27.137 ..... 202.112.32.22 ....[ 26/Mar/2006: 23:59:55 +0800 ] These days with a spring-flex to be a able to receive offline messages Flex applications, the Internet is a lot of information, but nobody mentioned how off-message, and finally the development of documentation in blazeds find how to make blazeds mes These days to do with the spring-flex able to receive offline messages a Flex application, the Internet is a lot of information, but nobody mentioned how off-message, and finally the development of documentation in blazeds find how to make blazeds me LVS lvs project website: lvs linux operating system is based on the virtual server to achieve load balancing between the service node. It is based on linux kernel implementation, 2.6.X kernel modules by default inte 1. Using the system comes with VideoView play MP4, 3pg video files, etc. <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="" android:orientation="verti Distributed Denial of Service attack (DDoS) is often used and difficult to guard against hackers attack means. This paper introduces the concept of the beginning of this attack, hackers focused on describing how to organize and launch the DDoS attack What is the http connection? A page is loading, it is the picture is the style, the script, the request for these things is to share a single connection or multiple connections? Some say the Internet in order to save the number of connections, should Introduction HTTP is an application layer is object-oriented protocol, because of its simple, quick way to apply to distributed hypermedia information systems. It made in 1990, after years of use and development are constantly improving and expanding HTTP protocol description Article Category: Web Front End Introduction HTTP is an application layer is object-oriented protocol, because of its simple, quick way to apply to distributed hypermedia information systems. It made in 1990, after years of CodeWeblog.com 版权所有 黔ICP备15002463号-1 processed in 0.087 (s). 10 q(s)
http://www.codeweblog.com/stag/www-192-168-0-102-8080/
CC-MAIN-2016-18
en
refinedweb
Help please! I've been trying to make this work for a while. The problem is such: Write a program that allows the user to enter eight judges' scores and then outputs the points received by the contestant. Format your output with two decimal places. A judge awards points between 1 and 10, with 1 being the lowest and 10 being the highest. For example, if the scores are 9.2, 9.3, 9.0, 9.9, 9.5, 9.5, 9.6, and 9.8, the contestant receives a total of 56.90. I believe that I have nearly completed everything correctly. What I can't figure out is how to calculate the sum of the remaining 6 numbers of the Array which is assigned the variable total in the calculateScore method and retrieve the variable. My last print statement is supposed to read ("The score of "+contestant+" is "+total+":"). Not sure how to correctly call the method calculateScore or where to put the print message without receiving the error:cannot find symbol - variable contestant Code : import java.util.Scanner; import javax.swing.*; public class Judging { public static void main(String[]args)//void, returns nothing { Scanner console = new Scanner (System.in); String contestant; double []scores = new double[8]; int n = 8; do { System.out.println("What is the name of the contestant? "); contestant= console.nextLine(); getData(scores,n,console); } while(true); } private static void getData(double []scores, int n, Scanner console) { double judge; for(int i=0;i<n;i++) { do { System.out.print("Give the score for the judge number "+(i+1)+": "); judge = console.nextDouble(); if (judge < 1 || judge > 10) System.out.println("Sorry! The score must be between 1 and 10"); else scores[i] = judge; } while (judge < 1 || judge > 10); } console.nextLine(); } private static double calculateScore(double []scores, int n) { double minValue = scores[0]; double maxValue = scores[0]; double total = 0.00; if (scores[n] < minValue) { minValue = scores[n]; } if (scores[n] > maxValue) { maxValue = scores[n]; } for (int i = 0; i < 8; i++) { if (scores[n] != minValue && scores[n] != maxValue) { total = total + scores[n]; //System.out.println("The score of "+contestant+"is "+total+":");????? } } return total; } }
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/4088-judging-printingthethread.html
CC-MAIN-2016-18
en
refinedweb
Please refer to the errata for this document, which may include normative corrections. See also translations. Copyright © 2007 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply. SOAP Version 1.2 Part 0: Primer (Second Edition). This second edition includes additional material on the SOAP Message Transmission Optimization Mechanism (MTOM), the XML-binary Optimized Packaging (XOP) and the Resource Representation SOAP Header Block (RRSHB) produced by the XML Protocol Working Group, which is part of the Web Services Activity. This second edition is not a new version of the SOAP1.2 Primer. Rather, as a convenience to readers, it incorporates the changes reflected in the accumulated errata to the original Recommendation. Additionally, it incorporates changes to incorporate an overview of the XML-binary Optimized Packaging, SOAP Message Transmission Optimization Mechanism and Resource Representation SOAP Header Block specifications and their usage. Changes between these two versions are described in a diff document. report errors in this document to the public mailing list xmlp-comments@w3.org (archive). It is inappropriate to send discussion email to this address. Since the primary purpose of this specification is to present a set of SOAP Version 1.2 specifications and fonctionalities, no implementation report is provided. However, the SOAP 1.2 Implementation Report can be found at and the SOAP MTOM/XOP/RRSHB Implementation/Interop Summary can be found at. 2. Basic Usage Scenarios 3. SOAP Processing Model 4. Using Various Protocol Bindings 5. Advanced Usage Scenarios 6. Changes Between SOAP 1.1 and SOAP 1.2 7. References A. Acknowledgements (Non-Normative) 1. Introduction 1.1 Overview 1.2 Notational Conventions 2. Basic Usage Scenarios 2.1 SOAP Messages 2.2 SOAP Message Exchange 2.2.1 Conversational Message Exchanges 2.2.2 Remote Procedure Calls 2.3 Fault Scenarios 3. SOAP Processing Model 3.1 The "role" Attribute 3.2 The "mustUnderstand" Attribute 3.3 The "relay" Attribute 4. Using Various Protocol Bindings 4.1 The SOAP HTTP Binding 4.1.1 SOAP HTTP GET Usage 4.1.2 SOAP HTTP POST Usage 4.1.3 Web Architecture Compatible SOAP Usage 4.2 SOAP Over Email 5. Advanced Usage Scenarios 5.1 Using SOAP Intermediaries 5.2 Using Other Encoding Schemes 5.3 Optimized serialization of SOAP messages 5.3.1 The Abstract SOAP Transmission Optimization Feature 5.3.2 The Optimized Transmission Serialization Format 5.3.3 Using the Resource Representation SOAP Header Block 6. Changes Between SOAP 1.1 and SOAP 1.2 7. References A. Acknowledgements (Non-Normative) SOAP Version 1.2 Part 0: Primer (Second Edition) is a non-normative document intended to provide an easily understandable tutorial on the features of the SOAP Version 1.2 specifications. Its purpose is to help a technically competent person understand how SOAP may be used, by describing representative SOAP message structures and message exchange patterns. In particular, this primer describes the features of SOAP through various usage scenarios, and is intended to complement the normative text contained in SOAP Version 1.2 Part 1: Messaging Framework (hereafter [SOAP Part1]), SOAP Version 1.2 Part 2: Adjuncts (hereafter [SOAP Part2]), the SOAP Message Transmission Optimization Mechanism (MTOM) (hereafter [MTOM]), XML-binary Optimized Packaging [XOP] and the Resource Representation SOAP Header Block [ResRep] specifications. It is expected that the reader has some familiarity with the basic syntax of XML, including the use of XML namespaces and infosets, and Web concepts such as URIs and HTTP. It is intended primarily for users of SOAP, such as application designers, rather than implementors of the SOAP specifications, although the latter may derive some benefit. This primer aims at highlighting the essential features of SOAP Version 1.2, not at completeness in describing every nuance or edge case. Therefore, there is no substitute for the main specifications to obtain a fuller understanding of SOAP. To that end, this primer provides extensive links to the main specifications wherever new concepts are introduced or used. [SOAP Part1] defines the SOAP envelope, which is a construct that defines an overall framework for representing the contents of a SOAP message, identifying who should deal with all or part of it, and whether handling such parts are optional or mandatory. It also defines a protocol binding framework, which describes how the specification for a binding of SOAP onto another underlying protocol may be written. [SOAP Part2] defines a data model for SOAP, a particular encoding scheme for data types which may be used for conveying remote procedure calls (RPC), as well as one concrete realization of the underlying protocol binding framework defined in [SOAP Part1]. This binding allows the exchange of SOAP messages either as payload of a HTTP POST request and response, or as a SOAP message in the response to a HTTP GET. [MTOM] describes an abstract feature for optimizing the wire format of a SOAP message for certain type of content, as well as a concrete implementation of it realized in an HTTP binding, while still maintaining the modeling of a SOAP message as a single XML Infoset. [XOP]defines a convention for serializing more efficiently an XML Infoset that has binary content. [MTOM] makes use of the [XOP] format for optimizing the transmission of SOAP messages. [ResRep] specifies a SOAP header block which carries a representation of a Web resource, which is needed for processing a SOAP message but which a receiver would prefer not to or cannot obtain by dereferencing the URI for the resource carried within the message. This document (the primer) is not normative, which means that it does not provide the definitive specification of SOAP Version 1.2 or the other specifications cited above. The examples provided here are intended to complement the formal specifications, and in any question of interpretation the formal specifications naturally take precedence. The examples shown here provide a subset of the uses expected for SOAP. In actual usage scenarios, SOAP will most likely be a part of an overall solution, and there will no doubt be other application-specific requirements which are not captured in these examples. SOAP Version 1.2 provides the definition of the XML-based information which can be used for exchanging structured and typed information between peers in a decentralized, distributed environment. [SOAP Part1] explains that a SOAP message is formally specified as an XML Information Set [XML Infoset] (henceforth often simply infoset), which provides an abstract description of its contents. Infosets can have different on-the-wire representations (aka serializations), one common example of which is as an XML 1.0 [XML 1.0] document. However, other serializations are also possible, and [MTOM] using the [XOP] format offers one mechanism for doing so for the cases where there is a need to optimize the processing and size of the transmitted message. SOAP. Section 2 of this document provides an introduction to the basic features of SOAP starting with the simplest usage scenarios, namely a one-way SOAP message, followed by various request-response type exchanges, including RPCs. Fault situations are also described. Section 3 provides an overview of the SOAP processing model, which describes the rules for initial construction of a message, rules by which messages are processed when received at an intermediary or ultimate destination, and rules by which portions of the message can be inserted, deleted or modified by the actions of an intermediary. Section 4 of this document describes the ways in which SOAP messages may be transported to realize various usage scenarios. It describes the SOAP HTTP binding specified in [SOAP Part2], as well as an example of how SOAP messages may be conveyed in email messages. As a part of the HTTP binding, it introduces two message exchange patterns which are available to an application, one of which uses the HTTP POST method, while the other uses HTTP GET. Examples are also provided on how RPCs, in particular those that represent "safe" information retrieval, may be represented in SOAP message exchanges in a manner that is compatible with the architectural principles of the World Wide Web . Section 5 of this document provides a treatment of various aspects of SOAP that can be used in more complex usage scenarios. These include the extensibility mechanism offered through the use of header elements, which may be targeted at specific intermediate SOAP nodes to provide value-added services to communicating applications, using various encoding schemes to serialize application-specific data in SOAP messages, and the means to provide a more optimized serialization of a SOAP message under certain circumstances. Section 6 of this document describes the changes from SOAP Version 1.1 [SOAP 1.1]. Section 7 of this document provides references. For ease of reference, terms and concepts used in this primer are hyper-linked to their definition in the main specifications. Throughout this primer, sample SOAP envelopes and messages are shown as [XML 1.0] documents. [SOAP Part1] explains that a SOAP message is formally specified as an [XML InfoSet], which is an abstract description of its contents. The distinction between the SOAP message infosets and their representation as XML documents is unlikely to be of interest to those using this primer as an introduction to SOAP; those who do care (typically those who port SOAP to new protocol bindings where the messages may have alternative representations) should understand these examples as referring to the corresponding XML infosets. Further elaboration of this point is provided in Section 4 of this document. The namespace prefixes "env", "enc", "rpc", "rep", "xop" and "xmime" used in the prose sections of this document are associated with the namespace names "" , "", "", "", "" and "" respectively. The namespace prefixes "xs" and "xsi" used in the prose sections of this document are associated with the namespace names "" and "" respectively, both of which are defined in the XML Schema specifications [XML Schema Part1], [XML Schema Part2]. Note that the choice of any other namespace prefix is arbitrary and not semantically significant. Namespace URIs of the general form "..." and "..." represent an application-dependent or context-dependent URI [RFC 3986]. A SOAP message is fundamentally a one-way transmission between SOAP nodes, from a SOAP sender to a SOAP receiver, but SOAP messages are expected to be combined by applications to implement more complex interaction patterns ranging from request/response to multiple, back-and-forth "conversational" exchanges. The primer starts by exposing the structure of a SOAP message and its exchange in some simple usage scenarios based on a travel reservation application. Various aspects of this application scenario will be used throughout the primer. In this scenario, the travel reservation application for an employee of a company negotiates a travel reservation with a travel booking service for a planned trip. The information exchanged between the travel reservation application and the travel service application is in the form of SOAP messages. The ultimate recipient of a SOAP message sent from the travel reservation application is the travel service application, but it is possible that the SOAP message may be "routed" through one or more SOAP intermediaries which act in some way on the message. Some simple examples of such SOAP intermediaries might be ones that log, audit or, possibly, amend each travel request. Examples, and a more detailed discussion of the behavior and role of SOAP intermediaries, is postponed to section 5.1. Section 2.1 describes a travel reservation request expressed as a SOAP message, which offers the opportunity to describe the various "parts" of a SOAP message. Section 2.2.1 continues the same scenario to show a response from the travel service in the form of another SOAP message, which forms a part of a conversational message exchange as the various choices meeting the constraints of the travel request are negotiated. Section 2.2.2 assumes that the various parameters of the travel reservation have been accepted by the traveller, and an exchange - modelled as a remote procedure call (RPC) - between the travel reservation and the travel service applications confirms the payment for the reservation. Section 2.3 shows examples of fault handling. Example 1 shows data for a travel reservation expressed in a SOAP message. < SOAP message in Example 1 contains two SOAP-specific sub-elements within the overall env:Envelope, namely an env:Header and an env:Body. The contents of these elements are application defined and not a part of the SOAP specifications, although the latter do have something to say about how such elements must be handled. A SOAP header element is optional, but it has been included in the example to explain certain features of SOAP. A SOAP header is an extension mechanism that provides a way to pass information in SOAP messages that is not application payload. Such "control" information includes, for example, passing directives or contextual information related to the processing of the message. This allows a SOAP message to be extended in an application-specific manner. The immediate child elements of the env:Header element are called header blocks, and represent a logical grouping of data which, as shown later, can individually be targeted at SOAP nodes that might be encountered in the path of a message from a sender to an ultimate receiver. SOAP headers have been designed in anticipation of various uses for SOAP, many of which will involve the participation of other SOAP processing nodes - called SOAP intermediaries - along a message's path from an initial SOAP sender to an ultimate SOAP receiver. This allows SOAP intermediaries to provide value-added services. Headers, as shown later, may be inspected, inserted, deleted or forwarded by SOAP nodes encountered along a SOAP message path. (It should be kept in mind, though, that the SOAP specifications do not deal with what the contents of header elements are, or how SOAP messages are routed between nodes, or the manner by which the route is determined and so forth. These are a part of the overall application, and could be the subject of other specifications.) The SOAP body is the mandatory element within the SOAP env:Envelope, which implies that this is where the main end-to-end information conveyed in a SOAP message must be carried. A pictorial representation of the SOAP message in Example 1 is as follows. Figure 1: SOAP message structure In Example 1, the header contains two header blocks, each of which is defined in its own XML namespace and which represent some aspect pertaining to the overall processing of the body of the SOAP message. For this travel reservation application, such "meta" information pertaining to the overall request is a reservation header block which provides a reference and time stamp for this instance of a reservation, and the traveller's identity in the passenger block. The header blocks reservation and passenger must be processed by the next SOAP intermediary encountered in the message path or, if there is no intermediary, by the ultimate recipient of the message. The fact that it is targeted at the next SOAP node encountered en route is indicated by the presence of the attribute env:role with the value "" (hereafter simply "next"), which is a role that all SOAP nodes must be willing to play. The presence of an env:mustUnderstand attribute with value "true" indicates that the node(s) processing the header must absolutely process these header blocks in a manner consistent with their specifications, or else not process the message at all and throw a fault. Note that whenever a header block is processed, either because it is marked env:mustUnderstand="true" or for another reason, the block must be processed in accordance with the specifications for that block. Such header block specifications are application defined and not a part of SOAP. Section 3 will elaborate further on SOAP message processing based on the values of these attributes. The choices of what data is placed in a header block and what goes in the SOAP body are decisions made at the time of application design. The main point to keep in mind is that header blocks may be targeted at various nodes that might be encountered along a message's path from a sender to the ultimate recipient. Such intermediate SOAP nodes may provide value-added services based on data in such headers. In Example 1, the passenger data is placed in a header block to illustrate the use of this data at a SOAP intermediary to do some additional processing. For example, as shown later in section 5.1, the outbound message is altered by the SOAP intermediary by having the travel policies pertaining to this passenger appended to the message as another header block. The env:Body element and its associated child elements, itinerary and lodging, are intended for exchange of information between the initial SOAP sender and the SOAP node which assumes the role of the ultimate SOAP receiver in the message path, which is the travel service application. Therefore, the env:Body and its contents are implicitly targeted and are expected to be understood by the ultimate receiver. The means by which a SOAP node assumes such a role is not defined by the SOAP specification, and is determined as a part of the overall application semantics and associated message flow. Note that a SOAP intermediary may decide to play the role of the ultimate SOAP receiver for a given message transfer, and thus process the env:Body. However, even though this sort of a behavior cannot be prevented, it is not something that should be done lightly as it may pervert the intentions of the message's sender, and have undesirable side effects (such as not processing header blocks that might be targeted at intermediaries further along the message path). A SOAP message such as that in Example 1 may be transferred by different underlying protocols and used in a variety of message exchange patterns. For example, for a Web-based access to a travel service application, it could be placed in the body of a HTTP POST request. In another protocol binding, it might be sent in an email message (see section 4.2). Section 4 will describe how SOAP messages may be conveyed by a variety of underlying protocols. For the time being, it is assumed that a mechanism exists for message transfer and the remainder of this section concentrates on the details of the SOAP messages and their processing. SOAP Version 1.2 is a simple messaging framework for transferring information specified in the form of an XML infoset between an initial SOAP sender and an ultimate SOAP receiver. The more interesting scenarios typically involve multiple message exchanges between these two nodes. The simplest such exchange is a request-response pattern. Some early uses of [SOAP 1.1] emphasized the use of this pattern as means for conveying remote procedure calls (RPC), but it is important to note that not all SOAP request-response exchanges can or need to be modelled as RPCs. The latter is used when there is a need to model a certain programmatic behavior, with the exchanged messages conforming to a pre-defined description of the remote call and its return. A much larger set of usage scenarios than that covered by the request-response pattern can be modeled simply as XML-based content exchanged in SOAP messages to form a back-and-forth "conversation", where the semantics are at the level of the sending and receiving applications. Section 2.2.1 covers the case of XML-based content exchanged in SOAP messages between the travel reservation application and the travel service application in a conversational pattern, while section 2.2.2 provides an example of an exchange modeled as an RPC. Continuing with the travel request scenario, Example 2 shows a SOAP message returned from the travel service in response to the reservation request message in Example 1. This response seeks to refine some information in the request, namely the choice of airports in the departingClarification xmlns: <p:departure> <p:departing> <p:airportChoices> JFK LGA EWR </p:airportChoices> </p:departing> </p:departure> <p:return> <p:arriving> <p:airportChoices> JFK LGA EWR </p:airportChoices> </p:arriving> </p:return> </p:itineraryClarification> </env:Body> </env:Envelope> As described earlier, the env:Body contains the primary content of the message, which in this example includes a list of the various alternatives for the airport, conforming to a schema definition in the XML namespace. In this example, the header blocks from Example 1 are returned (with some sub-element values altered) in the response. This could allow message correlation at the SOAP level, but such headers are very likely to also have other application-specific uses. The message exchanges in Examples 1 and 2 are cases where XML-based contents conforming to some application-defined schema are exchanged via SOAP messages. Once again, a discussion of the means by which such messages are transferred is deferred to section 4. It is easy enough to see how such exchanges can build up to a multiple back-and-forth "conversational" message exchange pattern. Example 3 shows a SOAP message sent by the travel reservation application in response to that in Example 2 choosing one from the list of available airports. The header block reservation with the same value of the reference sub-element accompanies each message in this conversation, thereby offering a way, should it be needed, to correlate the messages exchanged between them at the application level. <:36:50.000-05:00</m:dateAndTime> </m:reservation> <n:passenger xmlns: <n:name>Åke Jógvan Øyvind</n:name> </n:passenger> </env:Header> <env:Body> <p:itinerary xmlns: <p:departure> <p:departing>LGA</p:departing> </p:departure> <p:return> <p:arriving>EWR</p:arriving> </p:return> </p:itinerary> </env:Body> </env:Envelope> One of the design goals of SOAP Version 1.2 is to encapsulate remote procedure call functionality using the extensibility and flexibility of XML. SOAP Part 2 section 4 has defined a uniform representation for RPC invocations and responses carried in SOAP messages. This section continues with the travel reservation scenario to illustrate the use of SOAP messages to convey remote procedure calls and their return. To that end, the next example shows the payment for the trip using a credit card. (It is assumed that the conversational exchanges described in section 2.2.1 have resulted in a confirmed itinerary.) Here, it is further assumed that the payment happens in the context of an overall transaction where the credit card is charged only when the travel and the lodging (not shown in any example, but presumably reserved in a similar manner) are both confirmed. The travel reservation application provides credit card information and the successful completion of the different activities results in the card being charged and a reservation code returned. This reserve-and-charge interaction between the travel reservation application and the travel service application is modeled as a SOAP RPC. To invoke a SOAP RPC, the following information is needed: Such information may be expressed by a variety of means, including formal Interface Definition Languages (IDL). Note that SOAP does not provide any IDL, formal or informal. Note also that the above information differs in subtle ways from information generally needed to invoke other, non-SOAP RPCs. Regarding Item 1 above, there is, from a SOAP perspective, a SOAP node which "contains" or "supports" the target of the RPC. It is the SOAP node which (appropriately) adopts the role of the ultimate SOAP receiver. As required by Item 1, the ultimate recipient can identify the target of the named procedure or method by looking for its URI. The manner in which the target URI is made available depends on the underlying protocol binding. One possibility is that the URI identifying the target is carried in a SOAP header block. Some protocol bindings, such as the SOAP HTTP binding defined in [SOAP Part2], offer a mechanism for carrying the URI outside the SOAP message. In general, one of the properties of a protocol binding specification must be a description of how the target URI is carried as a part of the binding. Section 4.1 provides some concrete examples of how the URI is carried in the case of the standardized SOAP protocol binding to HTTP. Item 4 and Item 5 above are required to ensure that RPC applications that employ SOAP can do so in a manner which is compatible with the architectural principles of the World Wide Web. Section 4.1.3 discusses how the information provided by items 4 and 5 are utilized. For the remainder of this section, it is assumed that the RPC conveyed in a SOAP message as shown in Example 4 is appropriately targeted and dispatched. The purpose of this section is to highlight the syntactical aspects of RPC requests and returns carried within a SOAP message. <> The RPC itself is carried as a child of the env:Body element, and is modelled as a struct which takes the name of the procedure or method, in this case chargeReservation. (A struct is a concept from the SOAP Data Model defined in [SOAP Part2] that models a structure or record type that occurs in some common programming languages.) The design of the RPC in the example (whose formal description has not been explicitly provided) takes two input (or "in") parameters, the reservation corresponding to the planned trip identified by the reservation code, and the creditCard information. The latter is also a struct, which takes three elements, the card holder's name, the card number and an expiration date. In this example, the env:encodingStyle attribute with the value shows that the contents of the chargeReservation structure have been serialized according to the SOAP encoding rules, i.e., the particular rules defined in SOAP Part 2 section 3. Even though SOAP specifies this particular encoding scheme, its use is optional and the specification makes clear that other encoding schemes may be used for application-specific data within a SOAP message. It is for this purpose that it provides the env:encodingStyle attribute to qualify header blocks and body sub-elements. The choice of the value for this attribute is an application-specific decision and the ability of a caller and callee to interoperate is assumed to have been settled "out-of-band". Section 5.2 shows an example of using another encoding scheme. As noted in Item 6 above, RPCs may also require additional information to be carried, which can be important for the processing of the call in a distributed environment, but which are not a part of the formal procedure or method description. (Note, however, that providing such additional contextual information is not specific to RPCs, but may be required in general for the processing of any distributed application.) In the example, the RPC is carried out in the context of an overall transaction which involves several activities which must all complete successfully before the RPC returns successfully. Example 4 shows how a header block transaction directed at the ultimate recipient (implied by the absence of the env:role attribute) is used to carry such information. (The value "5" is some transaction identifier set by and meaningful to the application. No further elaboration of the application-specific semantics of this header are provided here, as it is not germane to the discussion of the syntactical aspects of SOAP RPC messages.) Let us assume that the RPC in the charging example has been designed to have the procedure description which indicates that there are two output (or "out") parameters, one providing the reference code for the reservation and the other a URL where the details of the reservation may be viewed. The RPC response is returned in the env:Body element of a SOAP message, which is modeled as a struct taking the procedure name chargeReservation and, as a convention, the word "Response" appended. The two output (or "out") parameters accompanying the response are the alphanumeric code identifying the reservation in question, and a URI for the location, viewAt, from where the reservation may be retrieved. This is shown in Example 5a, where the header again: <m:code>FT35ZBQ</m:code> <m:viewAt> </m:viewAt> </m:chargeReservationResponse> </env:Body> </env:Envelope> RPCs often have descriptions where a particular output parameter is distinguished, the so-called "return" value. The SOAP RPC convention offers a way to distinguish this "return" value from the other output parameters in the procedure description. To show this, the charging example is modified to have an RPC description that is almost the same as that for Example 5a, i.e, with the same two "out" parameters, but in addition it also has a "return" value, which is an enumeration with potential values of "confirmed" and "pending". The RPC response conforming to this description is shown in Example 5b, where the SOAP header, as before,: <rpc:result>m:status</rpc:result> <m:status>confirmed</m:status> <m:code>FT35ZBQ</m:code> <m:viewAt> </m:viewAt> </m:chargeReservationResponse> </env:Body> </env:Envelope> In Example 5b, the return value is identified by the element rpc:result, and contains the XML Qualified Name (of type xs:QName) of another element within the struct which is m:status. This, in turn, contains the actual return value, "confirmed". This technique allows the actual return value to be strongly typed according to some schema. If the rpc:result element is absent, as is the case in Example 5a, the return value is not present or is of the type void. While, in principle, using SOAP for RPC is independent of the decision to use a particular means for transferring the RPC call and its return, certain protocol bindings that support the SOAP Request-Response message exchange pattern may be more naturally suited for such purposes. A protocol binding supporting this message exchange pattern can provide the correlation between a request and a response. Of course, the designer of an RPC-based application could choose to put a correlation ID relating a call and its return in a SOAP header, thereby making the RPC independent of any underlying transfer mechanism. In any case, application designers have to be aware of all the characteristics of the particular protocols chosen for transferring SOAP RPCs, such as latency, synchrony, etc. In the commonly used case, standardized in SOAP Part 2 section 7, of using HTTP as the underlying transfer protocol, an RPC invocation maps naturally to the HTTP request and an RPC response maps to the HTTP response. Section 4.1 provides examples of carrying RPCs using the HTTP binding. However, it is worth keeping in mind that even though most examples of SOAP for RPC use the HTTP protocol binding, it is not limited to that means alone. SOAP provides a model for handling situations when faults arise in the processing of a message. SOAP distinguishes between the conditions that result in a fault, and the ability to signal that fault to the originator of the faulty message or another node. The ability to signal the fault depends on the message transfer mechanism used, and one aspect of the binding specification of SOAP onto an underlying protocol is to specify how faults are signalled, if at all. The remainder of this section assumes that a transfer mechanism is available for signalling faults generated while processing received messages, and concentrates on the structure of the SOAP fault message. The SOAP env:Body element has another distinguished role in that it is the place where such fault information is placed. The SOAP fault model (see SOAP Part 1, section 2.6) requires that all SOAP-specific and application-specific faults be reported using a single distinguished element, env:Fault, carried within the env:Body element. The env:Fault element contains two mandatory sub-elements, env:Code and env:Reason, and (optionally) application-specific information in the env:Detail sub-element. Another optional sub-element, env:Node, identifies via a URI the SOAP node which generated the fault, its absence implying that it was the ultimate recipient of the message which did so. There is yet another optional sub-element, env:Role, which identifies the role being played by the node which generated the fault. The env:Code sub-element of env:Fault is itself made up of a mandatory env:Value sub-element, whose content is specified in the SOAP specification (see SOAP Part 1 section 5.4.6) as well as an optional env:Subcode sub-element. Example 6a shows a SOAP message returned in response to the RPC request in Example 4, and indicating a failure to process the RPC. <?xml version='1.0' ?> <env:Envelope xmlns:env="" xmlns: > In Example 6a, the top-level env:Value uses a standardized XML Qualified Name (of type xs:QName) to identify that it is an env:Sender fault, which indicates that it is related to some syntactical error or inappropriate information in the message. (When a env:Sender fault is received by the sender, it is expected that some corrective action is taken before a similar message is sent again.) The env:Subcode element is optional, and, if present, as it is in this example, qualifies the parent value further. In Example 6a, the env:Subcode denotes that an RPC specific fault, rpc:BadArguments, defined in SOAP Part 2 section 4.4, is the cause of the failure to process the request. The structure of the env:Subcode element has been chosen to be hierarchical - each child env:Subcode element has a mandatory env:Value and an optional env:Subcode sub-element - to allow application-specific codes to be carried. This hierarchical structure of the env:Code element allows for an uniform mechanism for conveying multiple level of fault codes. The top-level env:Value is a base fault that is specified in the SOAP Version 1.2 specifications (see SOAP Part 1 section 5.4.6) and must be understood by all SOAP nodes. Nested env:Values are application-specific, and represent further elaboration or refinement of the base fault from an application perspective. Some of these values may well be standardized, such as the RPC codes standardized in SOAP 1.2 (see SOAP Part 2 section 4.4), or in some other standards that use SOAP as an encapsulation protocol. The only requirement for defining such application-specific subcode values is that they be namespace qualified using any namespace other than the SOAP env namespace which defines the main classifications for SOAP faults. There is no requirement from a SOAP perspective that applications need to understand, or even look at all levels of the subcode values. The env:Reason sub-element is not meant for algorithmic processing, but rather for human understanding; so, even though this is a mandatory item, the chosen value need not be standardized. Therefore all that is required is that it reasonably accurately describe the fault situation..) The absence of a env:Node sub-element within env:Fault in Example 6a implies that it is generated by the ultimate receiver of the call. The contents of env:Detail, as shown in the example, are application-specific. During the processing of a SOAP message, a fault may also be generated if a mandatory header element is not understood or the information contained in it cannot be processed. Errors in processing a header block are also signalled using a env:Fault element within the env:Body, but with a particular distinguished header block, env:NotUnderstood, that identifies the offending header block. Example 6b shows an example of a response to the RPC in Example 4 indicating a failure to process the t:transaction header block. Note the presence of the env:MustUnderstand fault code in the env:Body, and the identification of the header not understood using an (unqualified) attribute, qname, in the special (empty) header block env:NotUnderstood. <?xml version='1.0' ?> <env:Envelope xmlns: <env:Header> <env:NotUnderstood </env:Header> <env:Body> <env:Fault> <env:Code> <env:Value>env:MustUnderstand</env:Value> </env:Code> <env:Reason> <env:Text xml:Header not understood</env:Text> <env:Text xml:En-tête non compris</env:Text> </env:Reason> </env:Fault> </env:Body> </env:Envelope> If there were several mandatory header blocks that were not understood, then each could be identified by its qname attribute in a series of such env:NotUnderstood header blocks. Having established the various syntactical aspects of a SOAP message as well as some basic message exchange patterns, this section provides a general overview of the SOAP processing model (specified in SOAP Part 1, section 2). The SOAP processing model describes the actions taken by a SOAP node on receiving a SOAP message. Example 7a shows a SOAP message with several header blocks (with their contents omitted for brevity). Variations of this will be used in the remainder of this section to illustrate various aspects of the processing model. <?xml version="1.0" ?> <env:Envelope xmlns: <env:Header> <p:oneBlock xmlns: ... ... </p:oneBlock> <q:anotherBlock xmlns: ... ... </q:anotherBlock> <r:aThirdBlock xmlns: ... ... </r:aThirdBlock> </env:Header> <env:Body > ... ... </env:Body> </env:Envelope> The SOAP processing model describes the (logical) actions taken by a SOAP node on receiving a SOAP message. There is a requirement for the node to analyze those parts of a message that are SOAP-specific, namely those elements in the SOAP "env" namespace. Such elements are the envelope itself, the header element and the body element. A first step is, of course, the overall check that the SOAP message is syntactically correct. That is, it conforms to the SOAP XML infoset subject to the restrictions on the use of certain XML constructs - Processing Instructions and Document Type Definitions - as defined in SOAP Part 1, section 5. Further processing of header blocks and the body depend on the role(s) assumed by the SOAP node for the processing of a given message. SOAP defines the (optional) env:role attribute - syntactically, xs:anyURI - that may be present in a header block, which identifies the role played by the intended target of that header block. A SOAP node is required to process a header block if it assumes the role identified by the value of the URI. How a SOAP node assumes a particular role is not a part of the SOAP specifications. Three standardized roles have been defined (see SOAP Part 1, section 2.2), which are In Example 7a, the header block oneBlock is targeted at any SOAP node that plays the application-defined role defined by the URI. For purposes of illustration, it is assumed that the specification for such a header block requires that any SOAP node adopting this role log the entire Every SOAP node receiving a message with a header block that has a env:role attribute of "next" must be capable of processing the contents of the element, as this is a standardized role that every SOAP node must be willing to assume. A header block thus attributed is one which is expected to be examined and (possibly) processed by the next SOAP node along the path of a message, assuming that such a header has not been removed as a result of processing at some node earlier in the message path. In Example 7a, the header block anotherBlock is targeted at the next node in the message path. In this case, the SOAP message received by the node playing the application-defined role of "", must also be willing to play the SOAP-defined role of "next". This is also true for the node which is the ultimate recipient of the message, as it obviously (and implicitly) also plays the "next" role by virtue of being next in the message path. The third header block, aThirdBlock, in Example 7a does not have the env:role attribute. It is targeted at a SOAP node which assumes the "ultimateReceiver" role. The "ultimateReceiver" role (which can be explicitly declared or is implicit if the env:role attribute is absent in a header block) is played by a SOAP node that assumes the role of the ultimate recipient of a particular SOAP message. The absence of a env:role attribute in the aThirdBlock header block means that this header element is targeted at the SOAP node that assumes the "ultimateReceiver" role. Note that the env:Body element does not have a env:role attribute. The body element is always targeted at the SOAP node that assumes the "ultimateReceiver" role. In that sense, the body element is just like a header block targeted at the ultimate receiver, but it has been distinguished to allow for SOAP nodes (typically SOAP intermediaries) to skip over it if they assume roles other than that of the ultimate receiver. SOAP does not prescribe any structure for the env:Body element, except that it recommends that any sub-elements be XML namespace qualified. Some applications, such as that in Example 1, may choose to organize the sub-elements of env:Body in blocks, but this is not of concern to the SOAP processing model. The other distinguished role for the env:Body element, as the container where information on SOAP-specific faults, i.e., failure to process elements of a SOAP message, is placed has been described previously in section 2.3. If a header element has the standardized env:role attribute with value "none", it means that no SOAP node should process the contents, although a node may need to examine it if the content are data referenced by another header element that is targeted at the particular SOAP node. for the protocol binding to establish a base URI, possibly by reference to the encapsulating protocol in which the SOAP message is embedded for transport. (In fact, when SOAP messages are transported using HTTP, SOAP Part 2 section 7.1.2 defines the base URI as the Request-URI of the HTTP request, or the value of the HTTP Content-Location header.) The following table summarizes the applicable standardized roles that may be assumed at various SOAP nodes. ("Yes" and "No" means that the corresponding node does or does not, respectively, play the named role.) Example 7b augments the previous example by introducing another (optional) attribute for header blocks, the env:mustUnderstand attribute. <?xml version="1.0" ?> <env:Envelope xmlns: <env:Header> <p:oneBlock xmlns: ... ... </p:oneBlock> <q:anotherBlock xmlns: ... ... </q:anotherBlock> <r:aThirdBlock xmlns: ... ... </r:aThirdBlock> </env:Header> <env:Body > ... ... </env:Body> </env:Envelope> After a SOAP node has correctly identified the header blocks (and possibly the body) targeted at itself using the env:role attribute, the additional attribute, env:mustUnderstand, in the header elements determines further processing actions that have to be taken. In order to ensure that SOAP nodes do not ignore header blocks which are important to the overall purpose of the application, SOAP header blocks also provide for the additional optional attribute, env:mustUnderstand, which, if "true", means that the targeted SOAP node must process the block according to the specification of that block. Such a block is colloquially referred to as a mandatory header block. In fact, processing of the SOAP message must not even start until the node has identified all the mandatory header blocks targeted at itself, and "understood" them. Understanding a header means that the node must be prepared to do whatever is described in the specification of that block. (Keep in mind that the specifications of header blocks are not a part of the SOAP specifications.) In Example 7b, the header block oneBlock is marked with a env:mustUnderstand value set to "true", which means that it is mandatory to process this block if the SOAP node plays the role identified by "". The other two header blocks are not so marked, which means that SOAP node at which these blocks are targeted need not process them. (Presumably the specifications for these blocks allow for this.) A env:mustUnderstand value of "true" means that the SOAP node must process the header with the semantics described in that header's specification, or else generate a SOAP fault. Processing the header appropriately may include removing the header from any generated SOAP message, reinserting the header with the same or altered value, or inserting a new header. The inability to process a mandatory header requires that all further processing of the SOAP message cease, and a SOAP fault be generated. The message is not forwarded any further. The env:Body element has no env:mustUnderstand attribute but it must be processed by the ultimate recipient. In Example 7b, the ultimate recipient of the message - the SOAP node which plays the "ultimateReceiver" role - must process the env:Body and may process the header block aThirdBlock. It may also process the header block anotherBlock, as it is targeted at it (in the role of "next") but it is not mandatory to do so if the specifications for processing the blocks do not demand it. (If the specification for anotherBlock demanded that it must be processed at the next recipient, it would have required that it be marked with a env:mustUnderstand="true".) The role(s) a SOAP node plays when processing a SOAP message can be determined by many factors. The role could be known a priori, or set by some out-of-band means, or a node can inspect all parts of a received message to determine which roles it will assume before processing the message. An interesting case arises when a SOAP node, during the course of processing a message, decides that there are additional roles that it needs to adopt. No matter when this determination is made, externally it must appear as though the processing model has been adhered to. That is, it must appear as though the role had been known from the start of the processing of the message. In particular, the external appearance must be that the env:mustUnderstand checking of any headers with those additional roles assumed was performed before any processing began. Also, if a SOAP node assumes such additional roles, it must ensure that it is prepared to do everything that the specifications for those roles require. The following table summarizes how the processing actions for a header block are qualified by the env:mustUnderstand attribute with respect to a node that has been appropriately targeted (via the env:role attribute). As a result of processing a SOAP message, a SOAP node may generate a single SOAP fault if it fails to process a message, or, depending on the application, generate additional SOAP messages for consumption at other SOAP nodes. SOAP Part 1 section 5.4 describes the structure of the fault message while the SOAP processing model defines the conditions under which it is generated. As illustrated previously in section 2.3, a SOAP fault is a SOAP message with a standardized env:Body sub-element named env:Fault. SOAP makes a distinction between generating a fault and ensuring that the fault is returned to the originator of the message or to another appropriate node which can benefit from this information. However, whether a generated fault can be propagated appropriately depends on the underlying protocol binding chosen for the SOAP message message exchange. The specification does not define what happens if faults are generated during the propagation of one-way messages. The only normative underlying protocol binding, which is the SOAP HTTP binding, offers the HTTP response as a means for reporting a fault in the incoming SOAP message. (See Section 4 for more details on SOAP protocol bindings.) SOAP Version 1.2 defines another optional attribute for header blocks, env:relay of type xs:boolean, which indicates if a header block targeted at a SOAP intermediary must be relayed if it is not processed. Note that if a header block is processed, the SOAP processing rules (see SOAP Part 1 section 2.7.2) requires that it be removed from the outbound message. (It may, however, be reinserted, either unchanged or with its contents altered, if the processing of other header blocks determines that the header block be retained in the forwarded message.) The default behavior for an unprocessed header block targeted at a role played by a SOAP intermediary is that it must be removed before the message is relayed. The reason for this choice of default is to lean on the side of safety by ensuring that a SOAP intermediary make no assumptions about the survivability past itself of a header block targeted at a role it assumes, and representing some value-added feature, particularly if it chooses not to process the header block, very likely because it does not "understand" it. That is because certain header blocks represent hop-by-hop features, and it may not make sense to unknowingly propagate it end-to-end. As an intermediary may not be in a position to make this determination, it was thought that it would be safer if unprocessed header blocks were removed before the message was relayed. However, there are instances when an application designer would like to introduce a new feature, manifested through a SOAP header block, targeted at any capable intermediary which might be encountered in the SOAP message path. Such a header block would be available to those intermediaries that "understood" it, but ignored and relayed onwards by those that did not. Being a new feature, the processing software for this header block may be implemented, at least initially, in some but not all SOAP nodes. Marking such a header block with env:mustUnderstand = "false" is obviously needed, so that intermediaries that have not implemented the feature do not generate a fault. To circumvent the default rule of the processing model, marking a header block with the additional attribute env:relay with the value "true" allows the intermediary to forward the header block targeted at itself in the event that it chooses not to process. Example 7c shows the use of the env:relay attribute. <?xml version="1.0" ?> <env:Envelope xmlns: <env:Header> <p:oneBlock xmlns: ... ... </p:oneBlock> <q:anotherBlock xmlns: ... ... </q:anotherBlock> <r:aThirdBlock xmlns: ... ... </r:aThirdBlock> </env:Header> <env:Body > ... ... </env:Body> </env:Envelope> The header block q:anotherBlock, targeted at the "next" node in the message path, has the additional attribute env:relay="true". A SOAP node receiving this message may process this header block if it "understands" it, but if it does so the processing rules require that this header block be removed before forwarding. However, if the SOAP node chooses to ignore this header block, which it can because it is not mandatory to process it, as indicated by the absence of the env:mustUnderstand attribute, then it must forward it. Processing the header block p:oneBlock is mandatory and the SOAP processing rules require that it not be relayed, unless the processing of some other header block requires that it be present in the outbound message. The header block r:aThirdBlock does not have an env:relay attribute, which is equivalent to having it with the value of env:relay = "false". Hence, this header is not forwarded if it is not processed. SOAP 1.2 Part 1 Table 3 summarizes the conditions which determine when a SOAP intermediary assuming a given role is allowed to forward unprocessed header blocks. SOAP messages may be exchanged using a variety of "underlying" protocols, including other application layer protocols. The specification of how SOAP messages may be passed from one SOAP node to another using an underlying protocol is called a SOAP binding. [SOAP Part1] defines a SOAP message in the form of an [XML Infoset], i.e., in terms of element and attribute information items of an abstract "document" called the env:Envelope (see SOAP Part 1, section 5). Any SOAP env:Envelope infoset representation will be made concrete through a protocol binding, whose task, among other things, it is to provide a serialized representation of the infoset that can be conveyed to the next SOAP node in the message path in a manner such that the original infoset can be reconstructed without loss of information. In typical examples of SOAP messages, and certainly in all the examples in this primer, the serialization shown is that of a well-formed [XML 1.0] document. However, there may be other protocol bindings - for example a protocol binding between two SOAP nodes over a limited bandwidth interface - where an alternative, compressed serialization of the same infoset may be chosen. Another binding, chosen for a different purpose, may provide a serialization which is an encrypted structure representing the same infoset. The [MTOM] specification provides a SOAP binding to HTTP that allows for an optimized serialization of the SOAP message infoset under certain circumstances. A more detailed discussion of this binding is deferred to Section 5.3. In addition to providing a concrete realization of a SOAP infoset between adjacent SOAP nodes along a SOAP message path, a protocol binding provides the mechanisms to support features that are needed by a SOAP application. A feature is a specification of a certain functionality required in the interactions between two SOAP nodes, which may be provided by a binding. A feature description is identified by a URI, so that all applications referencing it are assured of the same semantics. Features are qualified by properties, which provide additional information that help in the implementation of the feature. For example, a typical usage scenario might require many concurrent request-response exchanges between adjacent SOAP nodes, in which case the feature that is required is the ability to correlate a request with a response. The abstract property associated with this feature is a "correlation ID". Other examples includes "an encrypted channel" feature, or a "reliable delivery channel" feature, or a particular SOAP message exchange pattern feature. In particular, the [MTOM] specification defines an Abstract SOAP Transmission Optimization feature which may be used by SOAP bindings to optimize the serialization of selected element information items of a SOAP message infoset. (See section 5.3.1 for details). A SOAP binding specification (see SOAP Part 1 section 4) describes, among other things, which (if any) features it provides. Some features may be provided natively by the underlying protocol. If the feature is not available through the binding, it may be implemented within the SOAP envelope, using SOAP header blocks. The specification of a feature implemented using SOAP header blocks is called a SOAP module. For example, if SOAP message exchanges were being transported directly over a datagram protocol like UDP, obviously the message correlation feature mentioned earlier would have to be provided by other means, either directly by the application or more likely as a part of the SOAP infosets being exchanged. In the latter case, the message correlation feature has a binding-specific expression within the SOAP envelope, i.e., as a SOAP header block, defined in a "Request-Response Correlation" module identified by a URI. However, if the SOAP infosets were being exchanged using an underlying protocol that was itself request/response, the application could implicitly "inherit" this feature provided by the binding, and no further support need be provided at the application or the SOAP level. (In fact, the HTTP binding for SOAP takes advantage of just this feature of HTTP.) The Abstract SOAP Transmission Optimization feature defined in [MTOM] is similarly implemented as a part of an augmented SOAP HTTP binding, by serializing particular nodes of a SOAP message infoset in binary format together with a modified SOAP Envelope, which are then carried in separate parts of a MIME Multipart/Related [RFC 2387] package (see section 5.3.2 for details). However, a SOAP message may travel over several hops between a sender and the ultimate receiver, where each hop may be a different protocol binding. In other words, a feature (e.g., message correlation, reliability etc.) that is supported by the protocol binding in one hop may not be supported by another along the message path. SOAP itself does not provide any mechanism for hiding the differences in features provided by different underlying protocols. However, any end-to-end or multi-hop feature that is required by a particular application, but which may not be available in the underlying infrastructure along the anticipated message path, can be compensated for by being carried as a part of the SOAP message infoset, i.e., as a SOAP header block specified in some module. Thus it is apparent that there are a number of issues that have to be tackled by an application designer to accomplish particular application semantics, including how to take advantage of the native features of underlying protocols that are available for use in the chosen environment. SOAP Part 1 section 4.2 provides a general framework for describing how SOAP-based applications may choose to use the features provided by an underlying protocol binding to accomplish particular application semantics. It is intended to provide guidelines for writing interoperable protocol binding specifications for exchanging SOAP messages. Among other things, a binding specification must define one particular feature, namely the message exchange pattern(s) that it supports. [SOAP Part2] defines two such message exchange patterns, namely a SOAP Request-Response message exchange pattern where one SOAP message is exchanged in each direction between two adjacent SOAP nodes, and a SOAP Response message exchange pattern which consists of a non-SOAP message acting as a request followed by a SOAP message included as a part of the response. [SOAP Part2] also offers the application designer a general feature called the SOAP Web Method feature that allows applications full control over the choice of the so-called "Web method" - one of GET, POST, PUT, DELETE whose semantics are as defined in the [HTTP 1.1] specifications - that may be used over the binding. This feature is defined to ensure that applications using SOAP can do so in a manner which is compatible with the architectural principles of the World Wide Web. (Very briefly, the simplicity and scalability of the Web is largely due to the fact that there are a few "generic" methods (GET, POST, PUT, DELETE) which can be used to interact with any resource made available on the Web via a URI.) The SOAP Web Method feature is supported by the SOAP HTTP binding, although, in principle, it is available to all SOAP underlying protocol bindings. SOAP Part 2 section 7 specifies one standardized protocol binding using the binding framework of [SOAP Part1], namely how SOAP is used in conjunction with HTTP as the underlying protocol. SOAP Version 1.2 restricts itself to the definition of a HTTP binding allowing only the use of the POST method in conjunction with the Request-Response message exchange pattern and the GET method with the SOAP Response message exchange pattern. Other specifications in future could define SOAP bindings to HTTP or other transports that make use of the other Web methods (i.e., PUT, DELETE). The next sections show examples of two underlying protocol bindings for SOAP, namely those to [HTTP 1.1] and email. It should be emphasized again that the only normative binding for SOAP 1.2 messages is to [HTTP 1.1]. The examples in section 4.2 showing email as a transport mechanism for SOAP is simply meant to suggest that other choices for the transfer of SOAP messages are possible, although not standardized at this time. A W3C Note [SOAP Email Binding] offers an application of the SOAP protocol binding framework of [SOAP Part1] by describing an experimental binding of SOAP to email transport, specifically [RFC 2822]-based message transport. The discussion of [MTOM] and its concrete realization in an HTTP binding is provided in section 5.3. HTTP has a well-known connection model and a message exchange pattern. The client identifies the server via a URI, connects to it using the underlying TCP/IP network, issues a HTTP request message and receives a HTTP response message over the same TCP connection. HTTP implicitly correlates its request message with its response message; therefore, an application using this binding can chose to infer a correlation between a SOAP message sent in the body of a HTTP request message and a SOAP message returned in the HTTP response. Similarly, HTTP identifies the server endpoint via a URI, the Request-URI, which can also serve as the identification of a SOAP node at the server. HTTP allows for multiple intermediaries between the initial client and the origin server identified by the Request-URI, in which case the request/response model is a series of such pairs. Note, however, that HTTP intermediaries are distinct from SOAP intermediaries. The HTTP binding in [SOAP Part2] makes use of theSOAP Web Method feature to allow applications to choose the so-called Web method - restricting it to one of GET or POST - to use over the HTTP message exchange. In addition, it makes use of two message exchange patterns that offer applications two ways of exchanging SOAP messages via HTTP: 1) the use of the HTTP POST method for conveying SOAP messages in the bodies of HTTP request and response messages, and 2) the use of the HTTP GET method in a HTTP request to return a SOAP message in the body of a HTTP response. The first usage pattern is the HTTP-specific instantiation of a binding feature called the SOAP request-response message exchange pattern, while the second uses a feature called the SOAP response message exchange pattern. The purpose of providing these two types of usages is to accommodate the two interaction paradigms which are well established on the World Wide Web. The first type of interaction allows for the use of data within the body of a HTTP POST to create or modify the state of a resource identified by the URI to which the HTTP request is destined. The second type of interaction pattern offers the ability to use a HTTP GET request to obtain a representation of a resource without altering its state in any way. In the first case, the SOAP-specific aspect of concern is that the body of the HTTP POST request is a SOAP message which has to be processed (per the SOAP processing model) as a part of the application-specific processing required to conform to the POST semantics. In the second case, the typical usage that is forseen is the case where the representation of the resource that is being requested is returned not as a HTML, or indeed a generic XML document, but as a SOAP message. That is, the HTTP content type header of the response message identifies it as being of media type "application/soap+xml" [RFC 3902]. Presumably, there will be publishers of resources on the Web who determine that such resources are best retrieved and made available in the form of SOAP messages. Note, however, that resources can, in general, be made available in multiple representations, and the desired or preferred representation is indicated by the requesting application using the HTTP Accept header. One further aspect of the SOAP HTTP binding is the question of how an application determines which of these two types of message exchange patterns to use. [SOAP Part2] offers guidance on circumstances when applications may use one of the two specified message exchange patterns. (It is guidance - albeit a strong one - as it is phrased in the form of a "SHOULD" in the specifications rather than an absolute requirement identified by the word "MUST", where these words are interpreted as defined in the IETF [RFC 2119].) The SOAP response message exchange pattern with the HTTP GET method is used when an application is assured that the message exchange is for the purposes of information retrieval, where the information resource is "untouched" as a result of the interaction. Such interactions are referred to as safe and idempotent in the HTTP specification. As the HTTP SOAP GET usage does not allow for a SOAP message in the request, applications that need features in the outbound interaction that can only be supported by a binding-specific expression within the SOAP infoset (i.e., as SOAP header blocks) obviously cannot make use of this message exchange pattern. Note that the HTTP POST binding is available for use in all cases. The following subsections provide examples of the use of these two message exchange patterns defined for the HTTP binding. Using the HTTP binding with the SOAP Response message exchange pattern is restricted to the HTTP GET method. This means that the response to a HTTP GET request from a requesting SOAP node is a SOAP message in the HTTP response. Example 8a shows a HTTP GET directed by the traveller's application (in the continuing travel reservation scenario) at the URI >where the traveler's itinerary may be viewed. (How this URL was made available can be seen in Example 5a.) GET /travelcompany.example.org/reservations?code=FT35ZBQ HTTP/1.1 Host: travelcompany.example.org Accept: text/html;q=0.5, application/soap+xml. Example 8b shows the HTTP response to the GET in Example 8a. The body of the HTTP response contains a SOAP message showing the travel details. A discussion of the contents of the SOAP message is postponed until section 5.2 , as it is not relevant, at this point, to understanding the HTTP GET binding usage. HTTP/1.1 200 OK Content-Type: application/soap+xml; charset="utf-8" Content-Length: nnnn <> Note that the reservation details could well have been returned as an (X)HTML document, but this example wanted to show a case where the reservation application is returning the state of the resource (the reservation) in a data-centric media form (a SOAP message) which can be machine processed, instead of (X)HTML which would be processed by a browser. Indeed, in the most likely anticipated uses of SOAP, the consuming application will not be a browser. Also, as shown in the example, the use of SOAP in the HTTP response body offers the possibility of expressing some application-specific feature through the use of SOAP headers. By using SOAP, the application is provided with a useful and consistent framework and processing model for expressing such features. Using the HTTP binding with the SOAP Request-Response message exchange pattern is restricted to the HTTP POST method. Note that the use of this message exchange pattern in the SOAP HTTP binding is available to all applications, whether they involve the exchange of general XML data or RPCs (as in the following examples) encapsulated in SOAP messages. Examples 9 and 10 show an example of a HTTP binding using the SOAP Request-Response message exchange pattern, using the same scenario as that for Example 4 and Example 5a, respectively, namely conveying an RPC and its return in the body of a SOAP message. The examples and discussion in this section only concentrate on the HTTP headers and their role. POST /Reservations> Example 9 shows an RPC request directed at the travel service application. The SOAP message is sent in the body of a HTTP POST method directed at the URI identifying the 3986]. When placing SOAP messages in HTTP bodies, the HTTP Content-type header must be chosen as "application/soap+xml" [RFC 3902]. (The optional charset parameter, which can take the value of "utf-8" or "utf-16", is shown in this example, but if it is absent the character set rules for freestanding [XML 1.0] apply to the body of the HTTP request.) Example 10 shows the RPC return (with details omitted) sent by the travel service application in the corresponding HTTP response to the request from Example 5a. SOAP, using HTTP transport, follows the semantics of the HTTP status codes for communicating status information in HTTP. For example, the 2xx series of HTTP status codes indicate that the client's request (including the SOAP component) was successfully received, understood, and accepted etc. HTTP/1.1 200 OK Content-Type: application/soap+xml; <env:Header> ... ... </env:Header> <env:Body> ... ... </env:Body> </env:Envelope> If an error occurs processing the request, the HTTP binding specification requires that a HTTP 500 "Internal Server Error" be used with an embedded SOAP message containing a SOAP fault indicating the server-side processing error. Example 11 is the same SOAP fault message as Example 6a, but this time with the HTTP headers added. HTTP/1.1 500 Internal Server Error Content-Type: application/soap+xml; > SOAP Part 2 Table 16 provides detailed behavior for handling the various possible HTTP response codes, i.e., the 2xx (successful), 3xx (redirection), 4xx (client error) and 5xx (server error). One of the most central concepts of the World Wide Web is that of a URI as a resource identifier. SOAP services that use the HTTP binding and wish to interoperate with other Web software should use URIs to address all important resources in their service. For example, a very important - indeed predominant - use of the World Wide Web is pure information retrieval, where the representation of an available resource, identified by a URI, is fetched using a HTTP GET request without affecting the resource in any way. (This is called a safe and idempotent method in HTTP terminology.) The key point is that the publisher of a resource makes available its URI, which consumers may "GET". There are many instances when SOAP messages are designed for uses which are purely for information retrieval, such as when the state of some resource (or object, in programming terms) is requested, as opposed to uses that perform resource manipulation. In such instances, the use of a SOAP body to carry the request for the state, with an element of the body representing the object in question, is seen as counter to the spirit of the Web because the resource is not identified by the Request-URI of the HTTP GET. (In some SOAP/RPC implementations, the HTTP Request-URI is often not the identifier of the resource itself but some intermediate entity which has to evaluate the SOAP message to identify the resource.) To highlight the changes needed, Example 12a shows the way that is not recommended for doing safe information retrieval on the Web. This is an example of an RPC carried in a SOAP message, again using the travel reservation theme, where the request is to retrieve the itinerary for a particular reservation identified by one of the parameters, reservationCode, of the RPC. (For purposes of this discussion, it is assumed that the application using this RPC request does not need features which require the use of SOAP headers.) POST /Reservations HTTP/1.1 Host: travelcompany.example.org Content-Type: application/soap+xml; <env:Body> <m:retrieveItinerary env: <m:reservationCode>FT35ZBQ</m:reservationCode> </m:retrieveItinerary> </env:Body> </env:Envelope> Note that the resource to be retrieved is not identified by the target URI in the HTTP request but has to be obtained by looking within the SOAP envelope. Thus, it is not possible, as would be the case with other "gettable" URIs on the Web, to make this available via HTTP alone to consumers on the World Wide Web. SOAP Part 2 section 4.1 offers recommendations on how RPCs that constitute safe and idempotent information retrievals may be defined in a Web-friendly manner. It does so by distinguishing aspects of the method and specific parameters in an RPC definition that serve to identify resources from those that serve other purposes. In Example 12a, the resource to be retrieved is identified by two things: the first is that it is an itinerary (part of the method name), and the second is the reference to a specific instance (a parameter to the method). In such a case, the recommendation is that these resource-identifying parts be made available in the HTTP Request-URI identifying the resource, as for example, as follows:. Furthermore, when an RPC definition is such that all parts of its method description can be described as resource-identifying, the entire target of the RPC may be identified by a URI. In this case, if the supplier of the resource can also assure that a retrieval request is safe, then SOAP Version 1.2 recommends that the choice of the Web method property of GET and the use of the SOAP Response message exchange pattern be used as described in section 4.1.1. This will ensure that the SOAP RPC is performed in a Web architecture compatible manner. Example 12b shows the preferred way for a SOAP node to request the safe retrieval of a resource. GET /Reservations/itinerary?reservationCode=FT35ZBQ HTTP/1.1 Host: travelcompany.example.org Accept: application/soap+xml It should be noted that SOAP Version 1.2 does not specify any algorithm on how to compute a URI from the definition of an RPC which has been determined to represent pure information retrieval. Note, however, that if the application requires the use of features that can only have a binding-specific expression within the SOAP infoset, i.e., using SOAP header blocks, then the application must choose HTTP POST method with a SOAP message in the request body. It also requires the use of the SOAP Request-Response message exchange pattern implemented via a HTTP POST if the RPC description includes data (parameters) which are not resource-identifying. Even in this case, the HTTP POST with a SOAP message can be represented in a Web-friendly manner. As with the use of the GET, [SOAP Part2] recommends for the general case that any part of the SOAP message that serves to identify the resource to which the request is POSTed be identified in the HTTP Request-URI. The same parameters may, of course, be retained in the SOAP env:Body element. (The parameters must be retained in the Body in the case of a SOAP-based RPC as these are related to the procedure/method description expected by the receiving application.) Example 13 is the same as that in Example 9, except that the HTTP Request-URI has been modified to include the reservation code, which serves to identify the resource (the reservation in question, which is being confirmed and paid for). POST /Reservations?code=FT35ZBQ. In other words, as seen from the above examples, the recommendation in the SOAP specifications is to use URIs in a Web-architecture compatible way - that is, as resource identifiers - whether or not it is GET or POST that is used. Application developers can use the Internet email infrastructure to move SOAP messages as either email text or attachments. The examples shown below offer one way to carry SOAP messages, and should not be construed as being the standard way of doing so. The SOAP Version 1.2 specifications do not specify such a binding. However, there is a non-normative W3C Note [SOAP Email Binding] describing an email binding for SOAP, its main purpose being to demonstrate the application of the general SOAP Protocol Binding Framework described in [SOAP Part 1]. Example 14 shows the travel reservation request message from Example 1 carried as an email message between a sending and receiving mail user agent. It is implied that the receiver node has SOAP capabilities, to which the body of the email is delivered for processing. (It is assumed that the sending node also has SOAP capabilities so as to be able to process any SOAP faults received in response, or to correlate any SOAP messages received in response to this one.) From: a.oyvind@mycompany.example.com To: reservations@travelcompany.example.org Subject: Travel to LA Date: Thu, 29 Nov 2001 13:20:00 EST Message-Id: header in Example 14 is in the standard form [RFC 2822] for email messages. Although an email is a one-way message exchange, and no guarantee of delivery is provided, email infrastructures like the Simple Mail Transport Protocol (SMTP) specification [SMTP] offer a delivery notification mechanism which, in the case of SMTP, are called Delivery Status Notification (DSN) and Message Disposition Notification (MDN). These notifications take the form of email messages sent to the email address specified in the mail header. Applications, as well as email end users, can use these mechanisms to provide the status of an email transmission, but these, if delivered, are notifications at the SMTP level. The application developer must fully understand the capabilities and limitations of these delivery notifications or risk assuming a successful data delivery when none occurred. SMTP delivery status messages are separate from message processing at the SOAP layer. Resulting SOAP responses to the contained SOAP data will be returned through a new email message which may or may not have a link to the original requesting email at the SMTP level. The use of the [RFC 2822] In-reply-to: header can achieve a correlation at the SMTP level, but does not necessarily offer a correlation at the SOAP level. Example 15 is exactly the same scenario as described for Example 2, which shows the SOAP message (body details omitted for brevity) sent from the travel service application to the travel reservation application seeking clarification on some reservation details, except that it is carried as an email message. In this example, the original email's Message-Id is carried in the additional email header In-reply-to:, which correlates email messages at the SMTP level, but cannot provide a SOAP-specific correlation. In this example, the application relies on the reservation header block to correlate SOAP messages. Again, how such correlation is achieved is application-specific, and is not within the scope of SOAP. From: reservations@travelcompany.example.org To: a.oyvind@mycompany.example.com Subject: Which NY airport? Date: Thu, 29 Nov 2001 13:35:11 EST Message-Id: <200109251753.NAA10655@travelcompany.example.org> In-reply-to: xmlns: <p:itineraryClarifications> ... ... </p:itineraryClarifications> </p:itinerary> </env:Body> </env:Envelope> The travel reservation scenario used throughout the primer offers an opportunity to expose some uses of SOAP intermediaries. Recall that the basic exchange was the exchange of a travel reservation request between a travel reservation application and a travel service application. SOAP does not specify how such a message path is determined and followed. That is outside the scope of the SOAP specification. It does describe, though, how a SOAP node should behave if it receives a SOAP message for which it is not the ultimate receiver. SOAP Version 1.2 describes two types of intermediaries: forwarding intermediaries and active intermediaries. A forwarding intermediary is a SOAP node which, based on the semantics of a header block in a received SOAP message or based on the message exchange pattern in use, forwards the SOAP message to another SOAP node. For example, processing a "routing" header block describing a message path feature in an incoming SOAP message may dictate that the SOAP message be forwarded to another SOAP node identified by data in that header block. The format of the SOAP header of the outbound SOAP message, i.e., the placement of inserted or reinserted header blocks, is determined by the overall processing at this forwarding intermediary based on the semantics of the processed header blocks. An active intermediary is one that does additional processing on an incoming SOAP message before forwarding the message using criteria that are not described by incoming SOAP header blocks, or by the message exchange pattern in use. Some examples of such active intervention at a SOAP node could be, for instance, encrypting some parts of a SOAP message and providing the information on the cipher key in a header block, or including some additional information in a new header block in the outbound message providing a timestamp or an annotation, for example, for interpretation by appropriately targeted nodes downstream.. In the following example, a SOAP node is introduced in the message path between the travel reservation and travel service applications, which intercepts the message shown in Example 1. An example of such a SOAP node is one which logs all travel requests for off-line review by a corporate travel office. Note that the header blocks reservation and passenger in that example are intended for the node(s) that assume the role "next", which means that it is targeted at the next SOAP node in the message path that receives the message. The header blocks are mandatory (the mustUnderstand attribute is set to "true"), which means that the node must have knowledge (through an external specification of the header blocks' semantics) of what to do. A logging specification for such header blocks might simply require that various details of the message be recorded at every node that receives such a message, and that the message be relayed along the message path unchanged. (Note that the specifications of the header blocks must require that the same header blocks be reinserted in the outbound message, because otherwise, the SOAP processing model would require that they be removed.) In this case, the SOAP node acts as a forwarding intermediary. A more complex scenario is one where the received SOAP message is amended in some way not anticipated by the initial sender. In the following example, it is assumed that a corporate travel application at the SOAP intermediary attaches a header block to the SOAP message from Example 1 before relaying it along its message path towards the travel service application - the ultimate recipient. The header block contains the constraints imposed by a travel policy for this requested trip. The specification of such a header block might require that the ultimate recipient (and only the ultimate recipient, as implied by the absence of the role attribute) make use of the information conveyed by it when processing the body of the message. Example 16 shows an active intermediary inserting an additional header block, travelPolicy, intended for the ultimate recipient which includes information that qualifies the application-level processing of this travel request. <> <z:travelPolicy xmlns: <z:class>economy</z:class> <z:fareBasis>non-refundable<z:fareBasis> <z:exceptions>none</z:exceptions> </z:travelPolicy> <> Even though SOAP Version 1.2 defines a particular encoding scheme (see SOAP Part 2 section 3), its use is optional and the specification makes clear that other encoding schemes may be used for application-specific data within a SOAP message. For this purpose it provides the attribute env:encodingStyle, of type xs:anyURI, to qualify header blocks, any child elements of the SOAP env:Body, and any child elements of the env:Detail element and their descendants. It signals a serialization scheme for the nested contents, or at least the one in place until another element is encountered which indicates another encoding style for its nested contents. The choice of the value for the env:encodingStyle attribute is an application-specific decision and the ability to interoperate is assumed to have been settled "out-of-band". If this attribute is not present, then no claims are being made about the encoding being used. The use of an alternative encoding scheme is illustrated in Example 17. Continuing with the travel reservation theme, this example shows a SOAP message which is sent to the passenger from the travel service after the reservation is confirmed, showing the travel details. (The same message was used in Example 8b in another context.) <> Bodyelement In Example 17, the body of the SOAP message contains a description of the itinerary using the encoding of a graph of resources and their properties using the syntax of the Resource Description Framework (RDF) [RDF]. (Very briefly, as RDF syntax or usage is not the subject of this primer, an RDF graph relates resources - such as the travel reservation resource available at - to other resources (or values) via properties, such as the passenger, the outbound and return dates of travel. The RDF encoding for the itinerary might have been chosen, for example, to allow the passenger's travel application to store it in an RDF-capable calendar application, which could then be queried in complex ways.) env:Envelope, is to transform the binary data into a character representation of type xs:base64Binaryusing the Base64 content-transfer-encoding scheme defined in [RFC 2045]. The disadvantages of this approach are that there is a significant increase in message size, as well as a potential processing overhead in encoding/decoding the binary data to/from its character representation, which may create throughput problems in case of message transmission over low bandwidth links or SOAP nodes with low processing power. The [MTOM] specification provides a mechanism to support such use cases. It should be noted, though, that the specification does not address the general problem of handling the inclusion of non-XML content in arbitrary XML documents, but confines itself to the specific case of SOAP message transmission optimization for certain type of content. In order to allow for independence from the underlying protocol binding, so that the optimization mechanism may be available over a variety of transports, as well as to retain the principal SOAP binding paradigm - that the SOAP message infoset, however serialized, be transmitted unchanged between adjacent nodes - [MTOM] defines an Abstract SOAP Transmission Optimization feature, of which one implementation is provided for the particular case of HTTP-based transmission of an optimized SOAP message in a MIME Multipart/Related [RFC 2387] package. This makes use of the [XOP] format (on which more in Section 5.3.2) which is an alternative serialization of an XML infoset geared towards a more eficient processing and representation of Base64-encoded content. The Abstract SOAP Transmission Optimization feature is defined for certain element information item in the SOAP message infoset which are identified as potential candidates for optimization. XML infosets identify the structure and content of XML documents, but not the data type of the contents of elements and attributes. One way to identify these would require schema validation of the infoset, something which is not a requirement for SOAP. A more likely possibility is that the sending application already "knows" the type of data - a binary stream, and perhaps also the nature of the media type that it represents - that it wishes to transmit because that is the way in which the data is already available to it. The Abstract SOAP Transmission Optimization feature assumes that the type information for those element information items which are potential candidates for optimization are somehow available to the sender of a SOAP message. This feature is restricted to the optimization of character information items of any element information item in the SOAP message infoset which is known to be of type xs:base64Binary in its canonical lexical form (see [Schema Part2] Section 3.2.16 base64Binary). (The rationale for the restriction to the canonical form of xs:base64Binary is provided at the end of this section.) To motivate the need for [MTOM], consider the example of a SOAP message sent in response to the request for the travel itinerary in Example 12b. The travel reservation application may wish to send, in addition to the information which can readily be represented in XML, a corporate logo, a map of the destination and other such information which is available in binary format (e.g., image files). If there were only a small amount of non-XML data, it may be possible to convert such data to its base64 encoding and convey the result in a SOAP message sent in the HTTP response as shown in Example 18 (with irrelevant content indicated by ellipses for brevity, and line breaks added for clarity). HTTP/1.1 200 OK Content-Type: application/soap+xml; <env:Header> <m:reservation xmlns: ::: </m:reservation> <n:passenger xmlns: ::: </n:passenger> </env:Header> <env:Body> <o:travelAgencyLogo xmlns:HlkR4cT YvAQLAcOAAAAAAQCAAgEAA0GAAEJAAYLAAoNAA8PAAAAJAQCJAgEJA0GJAEJJAYLJAoNJ A8PJAAASAQCSAgESA0GSAEJSAYLSAoNSA8PSAAQbAQSbAgUbA0WbAEZbAYbbAodbA8fbAAQkAQSk AgUkA0WkAEZkAYbkAodkA8fkAAgtAQitAgktA0mtAEptAYrtAottA8vtAAg2AQi2Agk2A0m2AEp==</o:travelAgencyLogo> <p:itinerary xmlns: <p:departure> ::: </p:departure> <p:return> ::: </p:return> </p:itinerary> <q:lodging xmlns: <hotel> ::: </hotel> <r:areaMap xmlns:HlkR4kTYMCgoAQMAAAA AA8///ru6q/8zPzMzM/7v/CLsw6pne+4jPCIgACHcwFWYhBFUQBEQABDM AEBAsAAAAAAjAIKAAUw/gRRCFlmnopqrFHJvvNyyLw2Ji56789+/ADKcHia3OiMpSEoEobgKHDnS ::: w83hCNkr0gECT1bgEaJigpDmwFEvYOkDCgYQLMus1QDgFUYOoGGN+gtPdYYgOMDZLhwyA+BHyDMR qweAJoAgAcYHvzTQktoAsODhl4LYBIDMevgtHPUDiAmg5gSQTUB8ETxO1HKAJRj4OI70AMeKgriF LOECAAwO=</r:areaMap> </q:lodging> </env:Body> </env:Envelope> Example 18 highlights two elements contained within the SOAP env:Body>, namely o:travelAgencyLogo and r:areaMap, containing the base64 encoding of binary data corresponding to a corporate logo and an area map. While small amounts of binary data can be placed in a SOAP message using the base64 encoding without incurring the performance overheads noted earlier, binary data anticipated in typical use cases is typically quite large, often many orders of magnitude larger than the XML content. To avoid the performance penalty in such circumstances, [MTOM] offers an optimization that avoids the need to base64-encode large binary content. (Note that SOAP nodes that do not implement [MTOM] have no choice but to carry binary data re-encoded in its base64 character representation, as in Example 18.) The Abstract SOAP Transmission Optimization feature provides the optimization by conceptually describing the binary data that needs to be conveyed as the content of an element information item in the SOAP message infoset in terms of its base64 encoding. While this character based representation is conceptually present at the sender and receiver, [MTOM] works on the optimistic assumption that the sending and receiving applications will not actually need the character-based representation of the binary value, and therefore there will be no real processing overhead in conversion between the binary value and its base64 encoding. Similarly, the implementation of this feature using the [XOP] format (more details are provided in section 5.3.2) employs a MIME Multipart/Related [RFC 2387] package to convey the binary data as an "attachment" referenced from within a modified, serialized SOAP message; therefore, there is also no overhead of increased message size. As noted earlier, it is assumed that the sending implementation somehow knows or determines the type information of the element information items that are candidates for potential optimization; otherwise the optimization feature does not work. The scope of [MTOM] is solely to optimize the transmission of the SOAP message infoset for those element information items that have base64 encoded binary data in canonical form as their content. As with all features, [MTOM] needs a SOAP protocol binding to transfer the optimized serialization. Recall from Section 4 that a SOAP protocol binding must transfer the SOAP message infoset in such a way that it can be reconstructed at the receiver unchanged. An MTOM-aware binding is one where a sender can serialize a SOAP message infoset by transmitting the actual value - that is, the actual bits - of certain element information items known to be in the canonical lexical representation of type xs:base64Binary rather than their lexical character representation. A receiver supporting this binding can, from the received value, reconstruct, at least conceptually, the lexical character representation if that is required by the application. [MTOM] provides an enhancement to the existing SOAP HTTP binding to provide an implementation of the Abstract SOAP Transmission Optimization feature. It uses the [XOP]-based inclusion mechanism described in section 5.3.2, and places the resulting MIME Multipart/Related package in the body of a HTTP message. As noted earlier, applications, in many implementations, will deal directly with the binary values and there is no implication that a base64 encoded character representation of the received value needs to be created, if there is no need to do so. However, there are instances when there may be a need to obtain the character representation, for example at a SOAP intermediary which has to forward the message on a non-MTOM-aware binding. One important subtlety in ensuring that the original message infoset can be reconstructed faithfully is to mandate, as does [MTOM], that the original base64 encoded characters be in their canonical form. [XML Schema Part2] allows for multiple lexical representations of the xs:base64Binary data type, mainly in the handling of white space, and therefore defines a canonical form which permits a 1-to-1 correspondence between a binary value and its lexical representation. By restricting itself to optimization candidates which are in the canonical form of xs:base64Binary, it can be ensured that the transferred message infoset is reproduced unchanged. Therefore, in the following sections, whenever we, for the sake of brevity, refer to base64-encoded data, the reader should keep in mind that we mean XML element content whose type is in the canonical lexical representation of xs:base64Binary. The next step in implementing the Abstract SOAP Transmission Optimization feature is to define the format in which the SOAP message infoset (with potential optimization candidates identified, as described in the previous section) is serialized in an optimal way for transmission. The serialization technique is described in [MTOM] by making use of an "inclusion" technique specified in the XML-binary Optimized Packaging [XOP] specification together with a MIME Multipart/Related packaging ([RFC 2387]). [XOP] defines an xop:Include element that ties, at a SOAP binding level, the binary content for an element to its infoset representation as base64-encoded character information items in the [children] property of that element information item. A SOAP binding that is capable of optimized serialization of an infoset containing such binary data represented by their character information items uses this xop:Include element in the SOAP envelope as a placeholder to link (using an href attribute) to the optimized (i.e., binary) data carried along with the SOAP envelope in an overall package. The overall package chosen is the extensible MIME Multipart/Related [RFC 2387] format. The root body part of this MIME package contains the XML 1.0 serialization of the SOAP env:Envelope, modified by the presence of one (or more) xop:Include element(s), while the other (related) body part(s) of the MIME package contain the compact (i.e, binary) data referenced by the xop:Include element(s). The serialization of the SOAP message from Example 18, converted to this optimized format using [XOP], is shown in Example 19a, the conventional MIME Multipart/Related package conveys a compound "object" broken up into multiple inter-related body parts. The "start" parameter of the overall Content-Type conveys, via a Content-ID, the body part which contains the compound object's "root", while the media type parameter value of "application/xop+xml" identifies the contents as an XML document serialized using the [XOP] format. The "startinfo" parameter of the package shows that this root part is the XML 1.0 serialization of the SOAP env:Envelope modified by the inclusion of xop:Include elements where appropriate. Compared with Example 18, note, in Example 19a, the presence of the two xop:Include elements which replace the character representations of the binary data corresponding to the company logo and the lodging area map. Each of these elements provides via the href attribute the link by which the binding knows which MIME body part contains the binary data that corresponds to the (canonical form of the) equivalent base64-encoded character representation. Note also the presence of the additional attribute xmime:contentType (see [MediaType] Section 2.1 contentType Attribute) in the xop:Include elements to indicate the media type of the contents of the o:TravelAgencyLogo and r:AreaMap elements. When such an optimized MIME Multipart/Related package based on the [XOP] format is sent in a HTTP message, [MTOM] Section 4.3 requires that the resultant MIME headers are sent as HTTP headers, while the remainder of the package is placed in the HTTP body. Example 19b shows the SOAP message from Example 19a returned in a HTTP response (with the relevant HTTP headers highlighted). HTTP/1.1 200 OK Content-Type: Multipart/Related; boundary=example-boundary; type=application/xop+xml; start="<itinerary123.xml@travelcompany.example.org>"; startinfo="application/soap+xml;action=\"\"" Content-Description: This is an example of an optimized SOAP message Content-Length: nnnn -b, the MIME Multipart/Related headers arising from the [XOP] format (see Example 19a) are carried as HTTP headers in the HTTP 200 OK response. Another optimization that has been identified as useful for processing a SOAP message which includes URI-based references to Web resources is one where the sender includes a representation of each such resource in the SOAP message to either the ultimate receiver or an intermediary. This helps in situations where the processing of the SOAP message depends on dereferencing the URIs, but which may not be possible because the receiver is not able or wishes to avoid the overhead of the network traffic needed to do so. The gain is even greater if the same resource (the image of a logo, say) is referenced multiple times within the message. The Resource Representation SOAP Header Block [ResRep] specification describes a SOAP header block, containing a rep:Representation element, which defines how URI-based representations of resources referenced within a SOAP message infoset may be carried and processed by an identified receiver. Its use is illustrated by examples that follow. Recall, from Example 18, that a base64-encoded form of the travel agency logo was sent in the SOAP message. However, this may well have been included by providing a HTTP URL link to the location from which the (ultimate) receiver could retrieve the image as a part of processing the message. This is shown, with all inessentials deleted, in Example 20. HTTP/1.1 200 OK Content-Type: application/soap+xml; <env:Header> ::: </env:Header> <env:Body> <o:travelAgencyLogo xmlns: <o:image o: </o:travelAgencyLogo> ::: ::: </env:Body> </env:Envelope> In Example 20, the expectation is that the contents of the o:image element would be obtained by dereferencing the URL identified by the o:source attribute. However, as identified earlier, if a situation were anticipated where the processing overhead of dereferencing the URI were considered unacceptable, a representation of the logo image can be sent using the rep:Representation element, as shown in Example 21 (with the header highlighted). HTTP/1.1 200 OK Content-Type: application/soap+xml; <o:image o: </o:travelAgencyLogo> ::: ::: </env:Body> </env:Envelope> In Example 21, the rep:Representation element contains a mandatory resource attribute whose value is the URI identifying the Web resource, while the rep:Data element is a base64-encoded representation of the resource. The optional xmime:contentType attribute in rep:Data is used to identify the media type of the resource representation being conveyed. The rep:Representation element can make use of other attributes (see [ResRep] Section 2.2 Representation header block Constructs for details) including the SOAP-defined ones, env:mustUnderstand and env:Role, described in section 3. The use of such additional headers allows the targeted receiver to know that the resource representation is available to it. If the binary content representing the resource were available to the sender, and sending the base64-encoded form of that (presumably large) binary content was deemed inefficient, the use of the rep:Representation element can be combined with [MTOM] and the [XOP] format to gain the efficiencies of that feature. This is shown in Example 22, with the xop:Include element highlighted. HTTP/1.1 200 OK Content-Type: mime/multipart-related;</rep:Representation> ::: </env:Header> <env:Body> <o:travelAgencyLogo xmlns: <o:image o: </o:travelAgencyLogo> ::: ::: </env:Body> </env:Envelope> --example-boundary Content-Type: image/jpg Content-Transfer-Encoding: binary Content-ID: <logo.gif@travelcompany.example.org> ::: the binary data for the travel agency logo ::: --example-boundary Content-Type: image/jpg Content-Transfer-Encoding: binary Content-ID: <map123.jpg@travelcompany.example.org> ::: the binary data ::: --example-boundary-- SOAP Version 1.2 has a number of changes in syntax and provides additional (or clarified) semantics from those described in [SOAP 1.1]. The following is a list of features where the two specifications differ. The purpose of this list is to provide the reader with a quick and easily accessible summary of the differences between the two specifications. The features have been put in categories purely for ease of reference, and in some cases, an item might equally well have been placed in another category. Document structure Additional or changed syntax env:encodingStyleattribute to appear on the SOAP env:Envelope, whereas SOAP 1.1 allows it to appear on any element. SOAP 1.2 specifies specific elements where this attribute may be used. env:NotUnderstoodheader element for conveying information on a mandatory header block which could not be processed, as indicated by the presence of an env:MustUnderstandfault code. SOAP 1.1 provided the fault code, but no details on its use. env:mustUnderstandattribute in header elements takes the (logical) value "true" or "false", whereas in SOAP 1.1 they are the literal value "1" or "0" respectively. DataEncodingUnknown. env:actorwith env:rolebut with essentially the same semantics. env:relay, for header blocks to indicate if unprocessed header blocks should be forwarded. env:Codeand env:Reason, respectively, for what used to be called faultcodeand faultstringin SOAP 1.1. SOAP 1.2 also allows multiple env:Textchild elements of env:Reasonqualified by xml:langto allow multiple language versions of the fault reason. env:Codesub-element in the env:Faultelement, and introduces two new optional subelements, env:Nodeand env:Role. env:Detailselement in env:Fault. In SOAP 1.2, the presence of the env:Detailselement has no significance as to which part of the fault SOAP message was processed. SOAP HTTP binding SOAPActionHTTP header defined in SOAP 1.1 has been removed, and a new HTTP status code 427 has been sought from IANA for indicating (at the discretion of the HTTP origin server) that its presence is required by the server application. The contents of the former SOAPActionHTTP header are now expressed as a value of an (optional) "action" parameter of the "application/soap+xml" media type that is signaled in the HTTP binding. RPC rpc:resultelement accessor for RPCs. SOAP encodings hrefattribute in SOAP 1.1 (of type xs:anyURI) is called enc:refin SOAP 1.2 and is of type IDREF. enc:nodeTypeto elements encoded using SOAP encoding that identifies its structure (i.e., a simple value, a struct or an array). SOAP Part 1 Appendix A provides version management rules for a SOAP node that can support the version transition from [SOAP 1.1] to SOAP Version 1.2. In particular, in defines an env:Upgrade header block which can be used by a SOAP 1.2 node on receipt of a [SOAP 1.1] message to send a SOAP fault message to the originator to signal which version of SOAP it supports. Highland Mary Mountain provided the initial material for the section on the SMTP binding. Paul Denning provided material for a usage scenario, which has since been moved to the SOAP Version 1.2 Usage Scenarios Working Draft. Stuart Williams, Oisin Hurley, Chris Ferris, Lynne Thompson, John Ibbotson, Marc Hadley, Yin-Leng Husband and Jean-Jacques Moreau provided detailed comments on earlier versions of this document, as did many others during the Last Call Working Draft review. Jacek Kopecky provided a list of RPC and SOAP encoding changes. Martin Gudgin reviewed the additional material in the second edition and provided many helpful comments. This document.). We also wish to thank all the people who have contributed to discussions on xml-dist-app@w3.org.
https://www.w3.org/TR/soap12-part0/
CC-MAIN-2016-18
en
refinedweb
Hi, This is my first time programming with c++, so please bear with me. I have a string variable that needs to be converted into lower case or upper case. I want to use the tolower() or toupper() functions in the C string. I have a problem changing the string into a character array. This is my code: #include <iostream> #include <string> #include <ctype.h> using namespace std; int main( int argc, char *argv[] ) { string strMyString; string strNewString; cin >> strMyString; //pass in string to be changed char tmpChar[strMyString.length()]; tmpChar = strMyString.c_str(); for (int i = 0; i<strlen(tempChar); i++) { strNewString += tolower(tmpChar[i]); } cout << strNewString << endl; } i have a problem with assigning the character array tempChar to strMyString.c_str(); does anyone have any ideas? thanks a lot!!
http://cboard.cprogramming.com/cplusplus-programming/17471-converting-strings-lowercase-printable-thread.html
CC-MAIN-2016-18
en
refinedweb
How to embed a existing cursor file into the application? I was using Visual C# 2005 to build a GUI application. I needed to add existing cursor file into the project. The procedure i followed was: 1) I added a existing cursor file into project. 2) Selected cursor file in the Solution Explorer > Chose View->Properties > Change the “Build Action” to “Embedded Resources”. 3) Code Cursor newCur = new Cursor(“xyz.cur”); this.Cursor = newCur; But application was throwing exception “Could not find file C:\App\Debug\rotate.cur” even though i have added it in application. After googling i found the solution at Here it goes: 1) Follow procedure 1 and 2 as done by me and Add using System.Reflection; using System.Resources; using System.IO; 2) Actual code for getting resource (cursor) Assembly asm = Assembly.GetExecutingAssembly(); using( Stream resStream = asm.GetManifestResourceStream( “MyNameSpace.xyz.cur” ) ) { Cursor cursor = new Cursor( resStream ); } where MyNamespace is name of our namespace. 3) Use Form.Cursor = cursor; The link also give the detailss of how to print all our resource names to the console (for example to lookup if the resource exists): Assembly asm = Assembly.GetExecutingAssembly(); Console.WriteLine( “Manifest resources for {0}”, asm.FullName ); foreach( String resourceName in asm.GetManifestResourceNames() ) { Console.WriteLine( “\t{0}”, resourceName ); } uhm… what do you mean in PROJECT??? where can i find that PROJECT??? please help me…. thanks!! Your Project in Solution Explorer. Right click on your Project and click “Add New Item”. Add the cursor file.
https://mahesg.wordpress.com/2008/02/09/embedding-cursor/
CC-MAIN-2016-18
en
refinedweb
_lwp_cond_reltimedwait(2) - duplicate an open file descriptor #include <unistd.h> int dup(int fildes); The dup() function returns a new file descriptor having the following in common with the original open file descriptor fildes: same open file (or pipe) same file pointer (that is, both file descriptors share one file pointer) same access mode (read, write or read/write). The new file descriptor is set to remain open across exec functions (see fcntl(2)). The file descriptor returned is the lowest one available. The dup(fildes) function call is equivalent to: fcntl(fildes, F_DUPFD, 0) Upon successful completion, a non-negative integer representing the file descriptor is returned. Otherwise, -1 is returned and errno is set to indicate the error. The dup() function will fail if: The fildes argument is not a valid open file descriptor. A signal was caught during the execution of the dup() function. The process has too many open files (see getrlimit(2)). The fildes argument is on a remote machine and the link to that machine is no longer active. See attributes(5) for descriptions of the following attributes: close(2), creat(2), exec(2), fcntl(2), getrlimit(2), open(2), pipe(2), dup2(3C), lockf(3C), attributes(5), standards(5)
http://docs.oracle.com/cd/E19963-01/html/821-1463/dup-2.html
CC-MAIN-2016-18
en
refinedweb
Testing with Apache Shiro This part of the documentation explains how to enable Shiro in unit tests. What to know for tests As we've already covered in the Subject reference, we know that a Subject is security-specific view of the 'currently executing' user, and that Subject instances are always bound to a thread to ensure we know who is executing logic at any time during the thread's execution. This means three basic things must always occur in order to support being able to access the currently executing Subject: - A Subject instance must be created - The Subject instance must be bound to the currently executing thread. - After the thread is finished executing (or if the thread's execution results in a Throwable), the Subject must be unbound to ensure that the thread remains 'clean' in any thread-pooled environment. Shiro has architectural components that perform this bind/unbind logic automatically for a running application. For example, in a web application, the root Shiro Filter performs this logic when filtering a request. But as test environments and frameworks differ, we need to perform this bind/unbind logic ourselves for our chosen test framework. Test Setup So we know after creating a Subject instance, it must be bound to thread. After the thread (or in this case, a test) is finished executing, we must unbind the Subject to keep the thread 'clean'. Luckily enough, modern test frameworks like JUnit and TestNG natively support this notion of 'setup' and 'teardown' already. We can leverage this support to simulate what Shiro would do in a 'complete' application. We've created a base abstract class that you can use in your own testing below - feel free to copy and/or modify as you see fit. It can be used in both unit testing and integration testing (we're using JUnit in this example, but TestNG works just as well): AbstractShiroTest import org.apache.shiro.SecurityUtils; import org.apache.shiro.UnavailableSecurityManagerException; import org.apache.shiro.mgt.SecurityManager; import org.apache.shiro.subject.Subject; import org.apache.shiro.subject.support.SubjectThreadState; import org.apache.shiro.util.LifecycleUtils; import org.apache.shiro.util.ThreadState; import org.junit.AfterClass; /** * Abstract test case enabling Shiro in test environments. */ public abstract class AbstractShiroTest { private static ThreadState subjectThreadState; public AbstractShiroTest() { } /** * Allows subclasses to set the currently executing {@link Subject} instance. * * @param subject the Subject instance */ protected void setSubject(Subject subject) { clearSubject(); subjectThreadState = createThreadState(subject); subjectThreadState.bind(); } protected Subject getSubject() { return SecurityUtils.getSubject(); } protected ThreadState createThreadState(Subject subject) { return new SubjectThreadState(subject); } /** * Clears Shiro's thread state, ensuring the thread remains clean for future test execution. */ protected void clearSubject() { doClearSubject(); } private static void doClearSubject() { if (subjectThreadState != null) { subjectThreadState.clear(); subjectThreadState = null; } } protected static void setSecurityManager(SecurityManager securityManager) { SecurityUtils.setSecurityManager(securityManager); } protected static SecurityManager getSecurityManager() { return SecurityUtils.getSecurityManager(); } @AfterClass public static void tearDownShiro() { doClearSubject(); try { SecurityManager securityManager = getSecurityManager(); LifecycleUtils.destroy(securityManager); } catch (UnavailableSecurityManagerException e) { //we don't care about this when cleaning up the test environment //(for example, maybe the subclass is a unit test and it didn't // need a SecurityManager instance because it was using only // mock Subject instances) } setSecurityManager(null); } } Unit Testing Unit testing is mostly about testing your code and only your code in a limited scope. When you take Shiro into account, what you really want to focus on is that your code works correctly with Shiro's API - you don't want to necessarily test that Shiro's implementation is working correctly (that's something that the Shiro development team must ensure in Shiro's code base). Testing to see if Shiro's implementations work in conjunction with your implementations is really integration testing (discussed below). ExampleShiroUnitTest Because unit tests are better suited for testing your own logic (and not any implementations your logic might call), it is a great idea to mock any APIs that your logic depends on. This works very well with Shiro - you can mock the Subject interface and have it reflect whatever conditions you want your code under test to react to. We can leverage modern mock frameworks like EasyMock and Mockito to do this for us. But as stated above, the key in Shiro tests is to remember that any Subject instance (mock or real) must be bound to the thread during test execution. So all we need to do is bind the mock Subject to ensure things work as expected. (this example uses EasyMock, but Mockito works equally as well): import org.apache.shiro.subject.Subject; import org.junit.After; import org.junit.Test; import static org.easymock.EasyMock.*; /** * Simple example test class showing how one may perform unit tests for code that requires Shiro APIs. */ public class ExampleShiroUnitTest extends AbstractShiroTest { @Test public void testSimple() { //1. Create a mock authenticated Subject instance for the test to run: Subject subjectUnderTest = createNiceMock(Subject.class); expect(subjectUnderTest.isAuthenticated()).andReturn(true); //2. Bind the subject to the current thread: setSubject(subjectUnderTest); //perform test logic here. Any call to //SecurityUtils.getSubject() directly (or nested in the //call stack) will work properly. } @After public void tearDownSubject() { //3. Unbind the subject from the current thread: clearSubject(); } } As you can see, we're not setting up a Shiro SecurityManager instance or configuring a Realm or anything like that. We're simply creating a mock Subject instance and binding it to the thread via the setSubject method call. This will ensure that any calls in our test code or in the code we're testing to SecurityUtils.getSubject() will work correctly. Note that the setSubject method implementation will bind your mock Subject to the thread and it will remain there until you call setSubject with a different Subject instance or until you explicitly clear it from the thread via the clearSubject() call. How long you keep the subject bound to the thread (or swap it out for a new instance in a different test) is up to you and your testing requirements. tearDownSubject() The tearDownSubject() method in the example uses a Junit 4 annotation to ensure that the Subject is cleared from the thread after every test method is executed, no matter what. This requires you to set up a new Subject instance and set it (via setSubject) for every test that executes. This is not strictly necessary however. For example, you could just bind a new Subject instance (via setSujbect) at the beginning of every test, say, in an @Before-annotated method. But if you're going to do that, you might as well have the @After tearDownSubject() method to keep things symmetrical and 'clean'. You can mix and match this setup/teardown logic in each method manually or use the @Before and @After annotations as you see fit. The AbstractShiroTest super class will however unbind the Subject from the thread after all tests because of the @AfterClass annotation in its tearDownShiro() method. Integration Testing Now that we've covered unit test setup, let's talk a bit about integration testing. Integration testing is testing implementations across API boundaries. For example, testing that implementation A works when calling implementation B and that implementation B does what it is supposed to. You can easily perform integration testing in Shiro as well. Shiro's SecurityManager instance and things it wraps (like Realms and SessionManager, etc) are all very lightweight POJOs that use very little memory. This means you can create and tear down a SecurityManager instance for every test class you execute. When your integration tests run, they will be using 'real' SecurityManager and Subject instances like your application will be using at runtime. ExampleShiroIntegrationTest The example code below looks almost identical to the Unit Test example above, but the 3 step process is slightly different: - There is now a step '0', which sets up a 'real' SecurityManager instance. - Step 1 now constructs a 'real' Subject instance with the Subject.Builder and binds it to the thread. Thread binding and unbinding (steps 2 and 3) function the same as the Unit Test example. import org.apache.shiro.config.IniSecurityManagerFactory; import org.apache.shiro.mgt.SecurityManager; import org.apache.shiro.subject.Subject; import org.apache.shiro.util.Factory; import org.junit.After; import org.junit.BeforeClass; import org.junit.Test; public class ExampleShiroIntegrationTest extends AbstractShiroTest { @BeforeClass public static void beforeClass() { //0. Build and set the SecurityManager used to build Subject instances used in your tests // This typically only needs to be done once per class if your shiro.ini doesn't change, // otherwise, you'll need to do this logic in each test that is different Factory<SecurityManager> factory = new IniSecurityManagerFactory("classpath:test.shiro.ini"); setSecurityManager(factory.getInstance()); } @Test public void testSimple() { //1. Build the Subject instance for the test to run: Subject subjectUnderTest = new Subject.Builder(getSecurityManager()).buildSubject(); //2. Bind the subject to the current thread: setSubject(subjectUnderTest); //perform test logic here. Any call to //SecurityUtils.getSubject() directly (or nested in the //call stack) will work properly. } @AfterClass public void tearDownSubject() { //3. Unbind the subject from the current thread: clearSubject(); } } As you can see, a concrete SecurityManager implementation is instantiated and made accessible for the remainder of the test via the setSecurityManager method. Test methods can then use this SecurityManager when using the Subject.Builder later via the getSecurityManager() method. Also note that the SecurityManager instance is set up once in a @BeforeClass setup method - a fairly common practice for most test classes. But if you wanted to, you could create a new SecurityManager instance and set it via setSecurityManager at any time from any test method - for example, you might reference two different .ini files to build a new SecurityManager depending on your test requirements. Finally, just as with the Unit Test example, the AbstractShiroTest super class will clean up all Shiro artifacts (any remaining SecurityManager and Subject instance) via its @AfterClass tearDownShiro() method to ensure the thread is 'clean' for the next test class to run..
http://shiro.apache.org/testing.html
CC-MAIN-2016-18
en
refinedweb
#include "petscmat.h" PetscErrorCode MatCreateSeqSBAIJWithArrays(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt *i,PetscInt *j,PetscScalar *a,Mat *mat)Collective on MPI_Comm You cannot set new nonzero locations into this matrix, that will generate an error. The i and j indices are 0 based When block size is greater than 1 the matrix values must be stored using the SBAIJ storage format (see the SBAIJ code to determine this). For block size of 1 it is the regular CSR format excluding the lower triangular elements. Level:advanced Location:src/mat/impls/sbaij/seq/sbaij.c Index of all Mat routines Table of Contents for all manual pages Index of all manual pages
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateSeqSBAIJWithArrays.html
CC-MAIN-2016-18
en
refinedweb
Token newbie 64-bit question I’m looking at simplemax.c. The new function does this: object_post((t_object *)x, "a new %s object was instantiated: 0x%X", s->s_name, x); Since the %X isn’t qualified, isn’t this wrong for 64-bit pointers? I get a slightly different (and possibly correct) answer for: object_post((t_object *)x, "a new %s object was instantiated: 0x%llX", s->s_name, x); although that’s not very portable. The C99 standard is something like object_post((t_object *)x, "a new %s object was instantiated: 0x" PRIxPTR, s->s_name, x); but object_post doesn’t seem to work with that (I get 1x printed for the pointer value). Is there a sanctified way to do this? (I’ll try the standard sprintf to see what that produces.) This is OS X/Xcode 4, btw. On my platform [ MacOS 10.8.3 running Max 6.1.1 (64-bit) ] this seems to work and should also be portable. object_post((t_object *)x, "a new %s object was instantiated: %p", s->s_name, x); The only problem I see is that there’s no really good way to determine the maximum length of the string produced by "%p". However, for most uses it should be fine. Let me know if it works for you… – Luigi Ah, %p. I’ve been using Java for too long… For what it’s worth, here’s my test code: object_post((t_object *)x, "object via %%X: %X", x); object_post((t_object *)x, "object via %%llX: %llX", x); object_post((t_object *)x, "object via %%p: %p", x); object_post((t_object *)x, "object via PRIxPTR: %" PRIxPTR, x); The 32-bit output is: simplemax: object via %X: BE944A8 simplemax: object via %llX: C00670BE944A8 simplemax: object via %p: 0xbe944a8 simplemax: object via PRIxPTR: be944a8 The 64-bit output is: simplemax: object via %X: C4A2578 simplemax: object via %llX: 10C4A2578 simplemax: object via %p: 0x10c4a2578 simplemax: object via PRIxPTR: 10c4a2578 (Ignore the PRIxPTR complaint earlier: I woz doing it wrong.) PRIxPTR appears to be "lx", which must translate to 8-bytes in a 64-bit env. In 64 env, %X appears to deliver the low-order long, and (after more experimentation) doesn’t appear to corrupt the stack. (I don’t immediately understand why not, unless it’s some register allocation business. I’ve not done machine code for decades.) In 32 env, %llX corrupts the stack (so it’s still 64 bits); %lX works fine (so must mean 32 bits here). So for 64 bits the %X of the example code appears benign, but the value is wrong. Tim: could you consider this a manual GitHub pull request to make this %p or "%" PRIxPTR? While I’m here: turning on -Wall in Xcode gives me Unknown Pragma in #ifdef MAC_VERSION #ifndef powerc #pragma d0_pointers on #endif ... (in ext_prefix.h.) It seems that -Wall is a nice one to have, and it would be nice if the build were still clean. Forums > Dev
https://cycling74.com/forums/topic/token-newbie-64-bit-question/
CC-MAIN-2016-18
en
refinedweb
In an environment where peak times are very frequent, retaining a high capacity value makes sense. However, if peak times are relatively rare or short, you may need to "trim" the container, ensuring that its capacity matches its size. Unfortunately, STL containers don't define a trim() member function. However, it's relatively easy to achieve this goal with a few steps. Trimming a Container When you copy a container, the target's capacity is the same as its size. For example, if you have a vector of integers whose capacity and size are 100 and 1, respectively, a copy of this vector will have the same size as the source, but its capacity will be identical to its size. The following program demonstrates this behavior: #include <vector> #include <iostream> using namespace std; int main() { vector <int> vi; vi.reserve(100); //enforce a capacity of 100 vi.push_back(5); //size is 1 cout<<"capacity of vi is: "<<vi.capacity()<<endl; cout<<"size of vi is: "<<vi.size()<<'\n'<<endl; vector <int> vi2=vi; cout<<"capacity of vi2 is: "<<vi2.capacity()<<endl; cout<<"size of vi2 is: "<<vi2.size()<<endl; } capacity of vi is: 100 size of vi is: 1 capacity of vi2 is: 1 size of vi2 is: 1 vi2=vi; //vi2 is a trimmed copy of vi vi=vi2; //trims vi Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/cplus/10MinuteSolution/29484/0/page/2
CC-MAIN-2016-18
en
refinedweb
The presentation of this document has been augmented to identify changes from a previous version. Three kinds of changes are highlighted: new, added text, changed text, and deleted text. This. Major changes in this version of the document encompass an enhancement of the conformance and security sections, an addition of PolicyReference extensibility, and various clarifications with respect to e.g. the Namespace URI versioning Policy, constraints on @xml:id type usage for Policy Identification, or the relation between wsp:PolicyReference and the wsp:Policy element. Discussion of this document takes place on the public public-ws-policy@w3.org mailing list (public archive) and within Bugzilla.services 4. Policy Expression 4.1 Normal Form Policy Expression 4.2 Policy Identification 4.3 Compact Policy Expression 4.3.1 Optional Policy Assertions 4.3.2 Policy Assertion Nesting 4.3.3 Policy Operators 4.3.4 Policy References 4.3.5 Policy Inclusion 4.4 Policy Intersection 5. Security Considerations 5.1 Information Disclosure Threats 5.2 Spoofing and Tampering Threats 5.3 Downgrade Threats 5.4 Repudiation Threats 5.5 Denial of Service Threats 5.6 General XML. A policy is a collection of policy policyalternatives, alternatives, where a policy alternative is a collection of policy assertions.assertions. A policy assertion represents an individual requirement, capability, or other property of a behavior. A policy expression is an XML Infoset representation of a policy. attachment. A policy attachment is a mechanism for associating policy with one or more policy scopes,scopes. A [[Web Services Policy Attachment]] defines such]]. Example 1-1The following example illustrates a security policy expression using assertions defined in WS-SecurityPolicy [[WS-SecurityPolicy]]: Example 1-1. Use of Web Services Policy with security policy assertions. -06) represent one policy alternative for signing a message body. Linesrequired (08-11) representperforming cryptographic a second policy alternative forasymmetric key-based encrypting asecurity message body. tokens. Lines (02-13) illustrate the ExactlyOne policy operator. Policy operators group policy assertions into policy alternatives. A valid interpretation of the policy above would be that an invocation of a Web service willuses one either sign or encrypt the message body.03-04) specified. This section specifies the notations, namespaces, and terminology used in this specification. Structures]] an Attribute Information Item is not recognized, it SHOULD be ignored; if an Element Information Item is not recognized, it MUST nested policy expression is a policy expression that is an Element Information Item in the children property of a policy assertion. A policy is a potentially empty collection of policy alternatives. A policy alternative is a potentially empty collection of policy assertions. A policy alternative vocabulary is the set of all policy assertion types within the policy alternative. scope is a collection of policy subjects to which a policy may apply. A policy vocabulary is the set of all policy assertion types used in a policy. This section defines an abstract model for policies and for operations upon policies. The descriptions below use XML Infoset terminology for convenience of description. However, this abstract model itself is independent of how it is represented as an XML Infoset. [Definition: A policy assertion represents an individual requirement, capability, or other property of a behavior.] A policy assertion identifies a behavior that is a requirement or capability of a policy subject. Assertions indicate domain-specific (e.g., security, transactions) semantics and are expected to be defined in separate, domain-specific specifications. Assertions are strongly typed by the domain authors that define them. [Definition: A policy assertion type represents a class of policy assertions and implies a schema for the assertion and assertion-specific semantics.] The policy assertion type is identified only by the XML Infoset namespace name and local name properties (that is, the qualified name or QName) of the root Element Information Item representing the assertion. A policy assertion type represents a class of policy assertions and implies a schema for the assertion and assertion-specific semantics. Assertions of a given type MUST be consistently interpreted independent of their policy subjects. AuthorsDomain authors MAY define that an assertion contains a policy expression (as defined in 4. Policy Expression) as one of its children. Nested policyPolicy expression expression(s)nesting are used by domain authors to further qualify one or more specific aspects of the original assertion. For example, security policy domain authors may define an assertion describing a set of security algorithms to qualify the specific behavior of a security binding assertion. The XML Infoset of a policy assertion MAY contain a non-empty attributes property and/or a non-empty children property. Such properties are policy assertion parameters andcontent MAY be used to parameterize the behavior indicated by the assertion. [Definition: A policy assertion parameter qualifies the behavior indicated by a policy assertion.] For example, an assertion identifying support for a specific reliable messaging mechanism might include an attribute information item to indicate how long an endpoint will wait before sending an acknowledgement. Authors authors should be cognizant of the processing requirements when defining complex assertions containing policyadditional assertion parameterscontent or nested policy expressions.expressions. Specifically, domain authors are encouraged to consider when the identity of the root Element Information Item alone is enough to convey the requirement or capability. [Definition: A policy alternative alternative is a logical construct which represents a potentially empty collection of policy assertions.] An alternative with zero assertions indicates no behaviors. An alternative with one or more assertions indicates behaviors implied by those, and only those assertions. [Definition: AThe vocabulary of a policy vocabularyalternative is the set of all all policy assertion types used in a policy.] [Definition: alternative. A policy alternative vocabulary is the set of all policy assertion types withinused in the policypolicy. alternative.] WhenAn an assertion whose type is part of the policy's vocabulary but is not included in in a policy alternative, thean policy alternative without the assertion type indicates that the assertion will not be applied inby the context of the attached policy subject. See the example in Section 4.3.1 Optional Policy Assertionsalternative. Assertions within an alternative are not ordered, and thus aspects such as the order in which behaviors (indicated by assertions) are applied to a subject are beyond the scope of this specification. However, authors can write assertions that control the order in which behaviours are applied. A policy alternative MAY contain multiple assertions of the same type. Mechanisms for determining the aggregate behavior indicated by the assertions (and their Post-Schema-Validation Infoset (PSVI) (See XML Schema Part 1 [[XML Schema Structures]]) content, if any) are specific to the assertion type and are outside the scope of this document. [Definition: A based system,model, policy is used to convey conditions on an interaction between entities (requester application, provider service, Web infrastructure component, etc). [Definition: A policy subject is an entity (e.g., an endpoint, message, resource, interaction) with whichand a policy can be associated. ] Any entity in a Web services based system may expose a policy toprovider. convey conditions under which it functions. Satisfying assertions in the policy usually results results in behavior that reflects these conditions. Typically, conditions. For example, if two entities -exposes a policy to requester and provider - exposeit provides their policies, a requester might use the policy of the provider to decide whether or not to use the service. A requester may choose any alternative since each is a valid configuration configuration for interaction with the service, but a requester MUST choose only a single alternative for an interaction with a service since each represents an alternative configuration. A policy assertion is supported by an entity ina requester the web services based system if and only if the entity satisfies the requirement requirement (or accommodates the capability) corresponding to the assertion. A policy alternative is is supported by an entity if and only if the requester. This section describes how toTo convey policy in an interoperable form, using the XML Infoset representation of a policy. [Definition: A policy expression expression is an XML Infoset representation of a policy, eitherpolicy. in a normal form or in an equivalent compact form.] Other subsections below describe several important aspects related to policy expression, namely (i) Normal form of a policy expression (ii) Compact form of a policy expressionThe (iii) Identification of policy expressions and (iv) Policy intersection. The normal form of a policy expression is the most straightforward Infoset representation;Infoset; equivalent, alternative Infosets allow compactly expressing a policy through a number of constructs. This specification does not define processing for arbitrary wsp:Policy Element Information Items in any context other than as an Element Information Item in the children property of an Element Information Item that is in the children property of an element Information Item defined in section 4.1 below.: (01) <wsp:Policy … > (02) <wsp:ExactlyOne> (03) ( <wsp:All> ( <Assertion …> … </Assertion> )* </wsp:All> )* (04) </wsp:ExactlyOne> (05) </wsp:Policy> The following describes the Element Information Items defined in the schema outline above: ; element; if an attribute is not recognized, it SHOULD be ignored. If an assertion in the normal form of a policy expression contains a nested policy expression,expression, the nested policy expression MUST contain at most one policy alternative (see 4.3.2 Policy Assertion Nesting).alternative. To simplify processing and improve interoperability, the normal form of a policy expression SHOULDshould be used where practical. For example, the following is the normal form of a policy expression.expression example introduced earlier. -07) and Lines (08-11) express the two alternatives in the policy. If the first alternative is selected, only the message body needsRSA 15 to be signedsuite [[WS-SecurityPolicy]] is supported; conversely, if the second alternative is selected, only the message bodyRSA 15 needs to be encrypted. A policy expression MAY be associated with an IRI [[IETF RFC 3987]]. The schema outline for attributes to associate an IRI is as follows: (01) <wsp:Policy ( Name="xs:anyURI" )? (02) ( wsu:Id="xs:ID" | xml:id="xs:ID" )? (03) … > (04) … (05) <]]. [Definition: A policy attachment is a mechanism for associating policy with one or more policy scopes.] [Definition: A policy scope is a collection of policy subjects to which a policy may apply.] /wsp:Policy/(@wsu:Id | @xml:id) The identity of the policy expression as an ID within the enclosing XML document. If omitted, there is no implied value. The constraints of the XML 1.0 [[XML 1.0]] ID type MUST be met.To.. Note that an implementation may use a more efficient procedure and is not required to explicitly convert a compact expression into the normal form as long as the processing results are indistinguishable from doing so. To indicate that a policy assertion is optional, this specification defines an attribute attribute that is a compact authoringsyntactic style for expressing a pair of policy alternatives, one alternatives with and one without that assertion. The schema outline for this attribute is as follows: (01) <Assertion ( wsp:Optional="xs:boolean" )? …> … </Assertion> The following describes the Attribute Information Item defined in the schema outline above: /Assertion/@wsp:Optional If the actual value (See XML Schema Part 1 [[XML Schema Structures]]) is true, the expression of the assertion is semantically equivalent to the following: (01) <wsp:ExactlyOne> (02) <wsp:All> <Assertion …> … </Assertion> </wsp:All> (03) <wsp:All /> (04) </wsp:ExactlyOne> If the actual value (See XML Schema Part 1 [[XML Schema Structures]]) is false, the expression of the assertion is semantically equivalent to the following: (01) <wsp:ExactlyOne> (02) <wsp:All> <Assertion …> … </Assertion> </wsp:All> (03) < policy expression. [Definition: A nested policy expression is a policy expression that is an Element Information Item in the children property. of a policy assertion.] The schema outline for a nested policy expression is: (01) <Assertion …> (02) … (03) ( <wsp:Policy …> … </wsp:Policy> )? (04) … (05) </Assertion> The following describes additional processing constraints on the outline listed above: /Assertion/wsp:Policythis requiring at least an empty <wsp:Policy/> Element above is to ensure that two assertions of the same type will always be compatible and an intersection would not fail (see Section 4.4 Policy Intersection). Note: This specification does not define processing for arbitrary wsp:Policy Element Information Items in the descendants of an assertion parameter policy expression with nested policy expressions in a compact form the example above). Policies are used to convey a set of capabilities, requirements, and general characteristics of entities (see 1. Introduction). These are generally expressible as a set of policy alternatives. Policy operators ( wsp:Policy , wsp:All and wsp:ExactlyOne ) are used to group policy assertions into policy alternatives. In some instances, complex policies expressed in normal form can get relatively large and hard to manage.: Use of wsp:Policy as an operator within a policy expressionis is equivalent to wsp:All . <wsp:All /> expresses a policy with zero policy assertions. Note that since wsp:Policy, (01) <wsp:All> <!-- assertion 1 --> <!-- assertion 2 --> </wsp:All> is equivalent to: (01) <wsp:All> <!-- assertion 2 --> <!-- assertion 1 --> </wsp:All> and: (01) <wsp:ExactlyOne> (02) <!-- assertion 1 --> <!-- assertion 2 --> (03) </wsp:ExactlyOne> is equivalent to: (01) <wsp:ExactlyOne> (02) <!-- assertion 2 --> <!-- assertion 1 --> (03) </wsp:ExactlyOne> wsp:All and wsp:ExactlyOne are associative. For example, (01) <wsp:All> (02) <!-- assertion 1 --> (03) <wsp:All> <!-- assertion 2 --> </wsp:All> (04) </wsp:All> is equivalent to: (01) <wsp:All> <!-- assertion 1 --> <!-- assertion 2 --> </wsp:All> and: (01) <wsp:ExactlyOne> (02) <!-- assertion 1 --> (03) <wsp:ExactlyOne> <!-- assertion 2 --> </wsp:ExactlyOne> (04) </wsp:ExactlyOne> is equivalent to: (01) <wsp:ExactlyOne> (02) <!-- assertion 1 --> <!-- assertion 2 --> (03) </wsp:ExactlyOne> wsp:All and wsp:ExactlyOne are idempotent. For example, (01) <wsp:All> (02) <wsp:All> <!-- assertion 1 --> <!-- assertion 2 --> </wsp:All> (03) </wsp:All> is equivalent to: (01) <wsp:All> <!-- assertion 1 --> <!-- assertion 2 --> </wsp:All> and: (01) <wsp:ExactlyOne> (02) <wsp:ExactlyOne> (03) <!-- assertion 1 --> <!-- assertion 2 --> (04) </wsp:ExactlyOne> (05) </wsp:ExactlyOne> is equivalent to: (01) <wsp:ExactlyOne> (02) <!-- assertion 1 --> <!-- assertion 2 --> (03) </wsp:ExactlyOne> wsp:All distributes over wsp:ExactlyOne . For example, (01) <wsp:All> (02) <wsp:ExactlyOne> (03) <!-- assertion 1 --> (04) <!-- assertion 2 --> (05) </wsp:ExactlyOne> (06) </wsp:All> is equivalent to: (01) <wsp:ExactlyOne> (02) <wsp:All> (03) <!-- assertion 1 --> (04) </wsp:All> (05) <wsp:All> (06) <!-- assertion 2 --> (07) </wsp:All> (08) </wsp:ExactlyOne> Similarly by repeatedly distributing wsp:All over wsp:ExactlyOne,Similarly, (01) <wsp:All> (02) <wsp:ExactlyOne> (03) <!-- assertion 1 --> (04) <!-- assertion 2 --> (05) </wsp:ExactlyOne> (06) <wsp:ExactlyOne> (07) <!-- assertion 3 --> (08) <!-- assertion 4 --> (09) </wsp:ExactlyOne> (10) </wsp:All> is equivalent to: (01) <wsp:ExactlyOne> (02) <wsp:All><!-- assertion 1 --><!-- assertion 3 --></wsp:All> (03) <wsp:All><!-- assertion 1 --><!-- assertion 4 --></wsp:All> (04) <wsp:All><!-- assertion 2 --><!-- assertion 3 --></wsp:All> (05) <wsp:All><!-- assertion 2 --><!-- assertion 4 --></wsp:All> (06) </wsp:ExactlyOne> Distributing wsp:All over an empty wsp:ExactlyOne is equivalent to no alternatives. For example, (01) <wsp:All> (02) <wsp:ExactlyOne> (03) <!-- assertion 1 --> (04) <!-- assertion 2 --> (05) </wsp:ExactlyOne> (06) <wsp:ExactlyOne /> (07) </wsp:All> is equivalent to: (01) ). TheIn order to share assertions across policy expressions, the wsp:PolicyReference element MAY be present anywhere a policy assertion is allowed inside a policy expression. This element is used to referenceinclude the content of one policy expression in another policy expressions. Theexpression determinedwrapped in a wsp:All operator. by the contextwsp:PolicyReference element, a policy expression MUST NOT reference itself either directly or indirectly. (Note: References that in which it@Digest is usedSHOULD (for an example, see 4.3.5 Policy Inclusion).included.) The schema outline for the wsp:PolicyReference element is as follows: (01) <wsp:PolicyReference (02) URI="xs:anyURI" (03) ( Digest="xs:base64Binary" ( DigestAlgorithm="xs:anyURI" )? )? (04) … > (05) … (06) </wsp:PolicyReference> The following describes the Attribute and Element Information Items defined in the schema outline above: /wsp:PolicyReference This element references a policy expression that is being referenced. Reference/@Digest This optional attribute specifies the digest of the referenced policy expression. This is used to ensure the included policy is the expected policy. If omitted, there is no implied value. /wsp:PolicyReference/@DigestAlgorithm This optional URI attribute specifies the digest algorithms being used. This specification predefines the default algorithm below, although additional algorithms can be expressed. /wsp:PolicyReference/@{any} Additional attributes MAY be specified but MUST NOT contradict the semantics of the owner element; element; if an attribute is not recognized, it SHOULD be ignored. /wsp:PolicyReference/{any} Additional elements MAY be specified but MUST NOT contradict the semantics of the parent element; if an element is not recognized, it SHOULD be ignored.. If a domain-specific intersection processing algorithm is required will be known from the QNames of the specific assertion types involved in the policy alternatives. As a first approximation, an algorithm is defined herein that approximates compatibility in a domain-independent manner; specifically, for two policy alternatives to be compatible, they must at least have the same policy alternative vocabulary (see Section 3.2 Policy Alternative). Two policy assertions are compatible if they have the same type and.. This section describes the security considerations that service providers, requestors, policy authors, policy assertion authors, and policy implementers need to consider when exposing, consuming and designing policy expressions, authoring policy assertions or implementing policy. [[WS-Security 2004]] and WS-MetadataExchange [[WS-MetadataExchange]] .. Example 5-1. Chained Policy Reference Elements (01) <Policy wsu: (02) <PolicyReference URI="#p2"/ > (03) <PolicyReference URI="#p2"/> (04) </Policy> (05) (06) <Policy wsu: (07) <PolicyReference URI="#p3"/> (08) <PolicyReference URI="#p3"/> (09) </Policy> (10) (11) <Policy wsu: (12) <PolicyReference URI="#p4"/> (13) <PolicyReference URI="#p4"/> (14) </Policy> (15) (16) <!-- Policy/@wsu:Id p4 through p99 --> (17) (18) <Policy wsu: (19) <PolicyReference URI="#p101"/> (20) <PolicyReference URI="#p101"/> (21) </Policy> (22) (23) <Policy wsu: (24) <mtom:OptimizedMimeSerialization /> (25) </Policy>. An element information item whose namespace name is "" and whose local part is Policy or PolicyReference conforms to this specification if it is valid according to the XML Schema [[XML Schema Structures]] for that element as defined by this specification () and additionally adheres to all the constraints contained in this specification. Such a conformant element information item constitutes a policy expression.: Bijan Parsia (University of Manchester), Seumas Soltysik (IONA Technologies, Inc.) The people who have contributed to discussions on public-ws-policy@w3.org are also gratefully acknowledged. A list of substantive changes since the Working Draft dated 27 September, 2006 is below: Enhanced Conformance section. Enhanced Security Considerations section. Clarified WS-Policy 1.5 Framework and Attachment XML Namespace URIfor versioning Policy. Clarified the policy model for Webxml:id Services. Clarified that an Element (EII) within a policy expression MUST be an assertion.section. Clarified that policy assertion parameters are opaque to framework processing. Added PolicyReference extensibility via {Any} Clarified constraints on @xml:id type usage for Policy Identification. Clarified that a wsp:PolicyReference can be used any place where a wsp:Policy element can be usedIRI.
http://www.w3.org/TR/2006/WD-ws-policy-20061102/ws-policy-framework-diff20060927.html
CC-MAIN-2016-18
en
refinedweb
I am trying to find the midpoint between 2 vertices (vectors). It seems that mathutils used to have a function MidpointVecs, but now this does not exisr anymore? I use Blender 2.5 and higher. Richard Midpoint of 2 vectors Scripting in Blender with Python, and working on the API Moderators: jesterKing, stiv 4 posts • Page 1 of 1 - Posts: 7 - Joined: Mon Dec 24, 2012 12:55 pm - Location: Breda, the Netherlands Do it yourself: cu Mr.Yeah Code: Select all def MidpointVecs(vec1, vec2): vec = vec1 + vec2 vec = vec / 2 return vec cu Mr.Yeah - Posts: 7 - Joined: Mon Dec 24, 2012 12:55 pm - Location: Breda, the Netherlands 4 posts • Page 1 of 1 Who is online Users browsing this forum: No registered users and 2 guests
https://www.blender.org/forum/viewtopic.php?t=25993&view=next
CC-MAIN-2016-18
en
refinedweb
can you tell me what is the out put and which interface will call when you want me what is the out put and which interface will call when you want display () Last edited by helloworld922; April 21st, 2011 at 09:56 AM. No. Can you tell us? What happened when you wrote a test program to test this? How to Ask Questions the Smart Way Static Void Games - GameDev tutorials, free Java and JavaScript hosting! Static Void Games forum - Come say hello!: public class DiamondProblemTest { public interface Cowboy{ public void draw(); } public interface Artist{ public void draw(); } public static class Person implements Cowboy, Artist{ public void draw(){ //should I pull out a gun or a paintbrush? } } public static void main(String... args){ new Person().draw(); } } How to Ask Questions the Smart Way Static Void Games - GameDev tutorials, free Java and JavaScript hosting! Static Void Games forum - Come say hello! The answer's obviously that your person needs to draw pictures with his bulletsThe answer's obviously that your person needs to draw pictures with his bullets
http://www.javaprogrammingforums.com/member-introductions/8625-interface.html
CC-MAIN-2016-18
en
refinedweb
I have a listing in a sub menu.I would like to display different captions for the different items, all which are dynamically obtained. Much the same way 'Open recent' works. I've implemented a def is_visible(self, index): function to only show relevant menu items, and that works great. How can I provide a custom caption?Is there a function along the lines of has_caption(self, index): that I could implement? There's an API: description(<args>) String Returns a description of the command with the given arguments. Used in the menu, if no caption is provided. Return None to get the default description. So something like: def description(self, *args): return "DESCRIPTION" Not sure exactly when it is called (once or each times ?) I tried implementing a function like but it is never called...Do I need to put something special in my .sublime-menu file to trigger this callback?I'm using sublime 2. Tried it right now and it works:startup, version: 2221 windows x64 channel: stable \Sublime Text 2\Packages\User\Main.sublime-menu { "id": "view", "children": { "command": "example" } ] } ] \Sublime Text 2\Packages\User\exemple.py[code]class ExampleCommand(sublime_plugin.TextCommand): def run(self, edit): print 'Hello' def description(self, *args): return "DESCRIPTION"[/code]
https://forum.sublimetext.com/t/dynamic-menu-caption/11331
CC-MAIN-2016-18
en
refinedweb
On Mon, Apr 9, 2012 at 3:15 AM, David Rientjes <rientjes@google.com> wrote:>> I think you nailed it.>> I suspect the problem is 1eda5166c764 ("staging: android/lowmemorykiller:> Don't unregister notifier from atomic context") merged during the 3.4> merge window and, unfortunately, backported to stable.Ok. That does seem to match everything.However, I think your patch is the wrong one.The real bug is actually that those notifiers are a f*cking joke, andthe return value from the notifier is a mistake.So I personally think that the real problem is this code inprofile_handoff_task: return (ret == NOTIFY_OK) ? 1 : 0;and ask yourself two questions: - what the hell does NOTIFY_OK/NOTIFY_DONE mean? - what happens if there are multiple notifiers that all (or some)return NOTIFY_OK?I'll tell you what my answers are: (a) NOTIFY_DONE is the "ok, everything is fine, you can free thetask-struct". It's also what that handoff notifier thing returns ifthere are no notifiers registered at all. So the fix to the Android lowmemorykiller is as simple as justchanging NOTIFY_OK to NOTIFY_DONE, which will mean that the callerwill properly free the task struct. The NOTIFY_OK/NOTIFY_DONE difference really does seem to be just"NOTIFY_OK means that I will free the task myself later". That's whatthe oprofile uses, and it frees the task. (b) But the whole interface is a total f*cking mess. If *multiple*people return NOTIFY_OK, they're royally fucked. And the whole (andonly) point of notifiers is that you can register multiple differentones independently.So quite frankly, the *real* bug is not in that android driver(although I'd say that we should just make it return NOTIFY_DONE andbe done with it). The real bug is that the whole f*cking notifier is amistake, and checking the error return was the biggest mistake of all.Werner: just test David's patch (do *not* change both the error value*and* apply David's patch - that would free the task-struct twice). Idon't think his patch is what I want to apply eventually, but itshould fix the issue.Sadly, I don't think we have anybody who really "owns"kernel/profile.c - the thing is broken, it was misdesigned, and nobodyreally cares. Which is why we'll probably have to fix this by justmaking that Android thing return NOTIFY_DONE, and just accept that thewhole thing is a f*cking joke. Linus
https://lkml.org/lkml/2012/4/9/177
CC-MAIN-2016-18
en
refinedweb
Name | Synopsis | Description | Environment Variables | Attributes | See Also | Notes cc [ flag ... ] file ... -lnsl [ library ... ] #include <rpcsvc/nis.h> nis_name nis_leaf_of(const nis_name name); nis_name nis_name_of(const nis_name name); nis_name nis_domain_of(const nis_name name); nis_name *nis_getnames(const nis_name name); void nis_freenames(nis_name *namelist); name_pos nis_dir_cmp(const nis_name n1, const nis_name n2); nis_object *nis_clone_object(const nis_object *src, nis_object *dest); void nis_destroy_object(nis_object *obj); void nis_print_object(const nis_object *obj); These subroutines are provided to assist in the development of NIS+ applications. They provide several useful operations on both NIS+ names and objects. The first group, nis_leaf_of(), nis_domain_of(), and nis_name_of() provide the functions for parsing NIS+ names. nis_leaf_of() will return the first label in an NIS+ name. It takes into account the double quote character `"' which can be used to protect embedded `.' (dot) characters in object names. Note that the name returned will never have a trailing dot character. If passed the global root directory name “.”, it will return the null string. nis_domain_of() returns the name of the NIS+ domain in which an object resides. This name will always be a fully qualified NIS+ name and ends with a dot. By iteratively calling nis_leaf_of() and nis_domain_of() it is possible to break a NIS+ name into its individual components. nis_name_of() is used to extract the unique part of a NIS+ name. This function removes from the tail portion of the name all labels that are in common with the local domain. Thus if a machine were in domain foo.bar.baz. and nis_name_of() were passed a name bob.friends.foo.bar.baz, then nis_name_of() would return the unique part, bob.friends. If the name passed to this function is not in either the local domain or one of its children, this function will return null. nis_getnames() will return a list of candidate names for the name passed in as name. If this name is not fully qualified, nis_getnames() will generate a list of names using the default NIS+ directory search path, or the environment variable NIS_PATH if it is set. The returned array of pointers is terminated by a null pointer, and the memory associated with this array should be freed by calling nis_freenames() Though nis_dir_cmp() can be used to compare any two NIS+ names, it is used primarily to compare domain names. This comparison is done in a case independent fashion, and the results are an enum of type name_pos. When the names passed to this function are identical, the function returns a value of SAME_NAME. If the name n1 is a direct ancestor of name n2, then this function returns the result HIGHER_NAME. Similarly, if the name n1 is a direct descendant of name n2, then this function returns the result LOWER_NAME. When the name n1 is neither a direct ancestor nor a direct descendant of n2, as it would be if the two names were siblings in separate portions of the namespace, then this function returns the result NOT_SEQUENTIAL. Finally, if either name cannot be parsed as a legitimate name then this function returns the value BAD_NAME. The second set of functions, consisting of nis_clone_object() and nis_destroy_object(), are used for manipulating objects. nis_clone_object() creates an exact duplicate of the NIS+ object src. If the value of dest is non-null, it creates the clone of the object into this object structure and allocate the necessary memory for the variable length arrays. If this parameter is null, a pointer to the cloned object is returned. Refer to nis_objects(3NSL) for a description of the nis_object structure. nis_destroy_object() can be used to destroy an object created by nis_clone_object(). This will free up all memory associated with the object and free the pointer passed. If the object was cloned into an array using the dest parameter to nis_clone_object(), then the object cannot be freed with this function. Instead, the function xdr_free(xdr_nis_object,dest) must be used. nis_print_object() prints out the contents of a NIS+ object structure on the standard output. Its primary use is for debugging NIS+ programs. nis_leaf_of(), nis_name_of()and nis_clone_object() return their results as thread-specific data in multithreaded applications. This variable overrides the default NIS+ directory search path used by nis_getnames(). '$'. See attributes(5) for descriptions of the following attributes: nis_names(3NSL), | Environment Variables | Attributes | See Also | Notes
http://docs.oracle.com/cd/E19253-01/816-5170/6mbb5eskf/index.html
CC-MAIN-2016-18
en
refinedweb
Regular Expressions Regular expressions are a powerful tool for pattern matching on strings of text. They are built in to the core of languages like Perl, Ruby, and Javascript. Perl and Ruby are particulary reknowned for adroitly handling regular expressions. So why aren't they part of the D core language? Read on and see how they're done in D compared with Ruby. This article explains how to use regular expressions in D. It doesn't explain regular expressions themselves, after all, people have written entire books on that topic. D's specific implementation of regular expressions is entirely contained in the Phobos library module std.regexp. For a more advanced treatment of using regular expressions in conjuction with template metaprogramming, see Templates Revisited. In Ruby a regular expression can be created as a special literal: r = /pattern/ s = /p[1-5]\s*/ D doesn't have special literals for them, but they can be created: r = RegExp("pattern"); s = RegExp(r"p[1-5]\s*"); If the pattern contains backslash characters \, wysiwyg string literals are used, which have the 'r' prefix to the string. r and s are of type RegExp, but we can use type inference to declare and assign them automatically: auto r = RegExp("pattern"); auto s = RegExp(r"p[1-5]\s*"); To check for a match of a string s with a regular expression in Ruby, use the =~ operator, which returns the index of the first match: s = "abcabcabab" s =~ /b/ /* match, returns 1 */ s =~ /f/ /* no match, returns nil */ In D this looks like: auto s = "abcabcabab"; std.regexp.find(s, "b"); /* match, returns 1 */ std.regexp.find(s, "f"); /* no match, returns -1 */ Note the equivalence to std.string.find, which searches for substring matches rather than regular expression matches. The Ruby =~ operator sets some implicitly defined variables based on the result: s = "abcdef" if s =~ /c/ "#{$`}[#{$&}]#{$'}" /* generates string ab[c]def The function std.regexp.search() returns a RegExp object describing the match, which can be exploited: auto m = std.regexp.search("abcdef", "c"); if (m) writefln("%s[%s]%s", m.pre, m.match(0), m.post); Or even more concisely as: if (auto m = std.regexp.search("abcdef", "c")) writefln("%s[%s]%s", m.pre, m.match(0), m.post); // writes ab[c]def Search and Replace Search and replace gets more interesting. To replace the occurrences of "a" with "ZZ" in Ruby; the first occurrence, then all: s = "Strap a rocket engine on a chicken." s.sub(/a/, "ZZ") // result: StrZZp a rocket engine on a chicken. s.gsub(/a/, "ZZ") // result: StrZZp ZZ rocket engine on ZZ chicken. In D: s = "Strap a rocket engine on a chicken."; sub(s, "a", "ZZ"); // result: StrZZp a rocket engine on a chicken. sub(s, "a", "ZZ", "g"); // result: StrZZp ZZ rocket engine on ZZ chicken. The replacement string can reference the matches using the $&, $$, $', $`, .. 9 notation: sub(s, "[ar]", "[$&]", "g"); // result: St[r][a]p [a] [r]ocket engine on [a] chicken. Or the replacement string can be provided by a delegate: sub(s, "[ar]", (RegExp m) { return toupper(m.match(0)); }, "g"); // result: StRAp A Rocket engine on A chicken.(toupper() comes from std.string.) Looping It's possible to search over all matches within a string: import std.stdio; import std.regexp; void main() { foreach(m; RegExp("ab").search("abcabcabab")) { writefln("%s[%s]%s", m.pre, m.match(0), m.post); } } // Prints: // [ab]cabcabab // abc[ab]cabab // abcabc[ab]ab // abcabcab[ab] Conclusion D regular expression handling is as powerful as Ruby's. But its syntax isn't as concise: - Regular expression literal syntax - doing so would make it impossible to perform lexical analysis without also doing syntactic or semantic analysis. - Implicit naming of match variables - this causes problems with name collisions, and just doesn't fit with the rest of the way D works. But it is just as powerful.
http://www.digitalmars.com/d/1.0/regular-expression.html
crawl-001
en
refinedweb
ISO 14000 - Environmental Management Services Baldrige Award [ISO 9001:2000][Quality Management] [OMS Operational Management] [ISO 9000][ISO 14000 Gap Analysis] ISO 14000 - Environmental Management Services Environmental management services can be provided by PROMAX Consulting Services. Our personnel possess several years of professional experience. These services are targeted to customers who desire to develop or build upon their existing environmental management systems. We help organizations examine their environmental management systems and improve the ways they manage and account for environmental aspects of their operations. Listed below are some of our services by topic: ISO 14000 Awareness Training ISO 14001 Gap Analysis ISO 14000 Documentation Development Environmental Program Development Policy Development Consultation and Program Management EMS Audits Life Cycle Analysis Background ISO 14000 is an international voluntary environmental standard recognized by major trading nations and trade regulating organizations such as GATT and the World Trade Organization. It is not a law in the sense that no one is required to be registered ( hence it is voluntary ); however, neither does anyone have to do business with you, buy your products and services, or let your products and services into their country if they have declared ISO 14000 registration a requirement for doing business with them or in their country. It is expected that many foreign trading partners will require registration by import manufacturers. This is a recognized legal trade barrier under international treaty. Elements of the U.S. Government have indicated intention to institute either preference for, or requirement that, suppliers be registered. It is likely that registration will influence the enforcement stance of environmental regulators, and will likely influence insurance rates and lender practices. ISO 14000 is actually a series of standards that cover everything from environmental management systems ( The EMS ) to auditor qualifications to as yet unwritten standards for such things as life cycle assessment. The issue of concern at this point for organizations seeking registration is the EMS. This is governed by ISO 14001 and this is what registration deals with. ISO 14001 requires conformance with a series of elements of an EMS. That is, the organization must show that it has a working system in place to produce the required outcomes. The ISO 14001 does not dictate how this is done, but it does require a stringent audit to determine that they in fact are done and are continuously operating. ISO 14001, for instance, does not require that an organization be in compliance with any environmental law, but it does require that the organization know what regulations it is subject to, and has in place a verifiable system for achieving compliance and for heading off non-compliance before they occur. This responsibility must involve everyone in the organization from top management down to the line worker, wherever any employee has an influence on the environmental impacts of the company. This brings up another aspect of ISO 14001 -- environmental aspects. This major element of ISO 14001 requires that an organization know what impacts it is having on the environment. This awareness must go beyond mere textbook knowledge of typical pollution control. It must take into account the specific facility's environmental aspects peculiar to its operations, processes, products, and its location. It must take into account its possible affects on the community local to the facility, and its impact on other stakeholders, such as citizens groups, or even the local wastewater treatment plant. The objective is to identify the environmental "aspects" and continually work to minimize negative effects of operation. This is the key to ISO 14001 -- a management system that ensures the entire organization is involved in continual improvement. The system must have a structure that forces improvement, and can prove it. To accomplish this, the organization must set performance measures against which to measure improvement, and must involve each member of the organization who has a role in achieving the performance measure. The documents that describe the system must indicate who these members are, down to the line worker, and it must indicate where supporting plans, instructions, and guidance documents are located showing that whoever "needs to know" can easily find the proper documents and performance measures. Again, this does not involve strict attention to legal compliance. It is perfectly legal to generate 10 tons of solid waste per week, but if the facility can produce as high a quality product while producing 3 tons per week, it should strive for this reduction and in the process it will benefit like most other companies who have implemented an EMS -- its costs will drop sharply. Select the items that apply, and then let us know how to contact you. Send information Call me PROMAX Consulting Services, Inc. Telephone 866-610-4300 (Toll Free) :: Fax 321-725-8890 General Information: mail@promaxconsulting.com Consulting Services, Inc.
http://www.promaxconsulting.com/ISO_14000.HTM
crawl-001
en
refinedweb
Unit Testing, Agile Development, Architecture, Team System & .NET - By Roy Osherove Ads via The Lounge ↑ Grab this Headline Animator\ Update: I've uploaded a more complete version of the framework and also changes the file name. please get it from here: if you have the earlier version. (and just so you know - it works perfectly with the TestDriven.Net tool suite) One of the things that have bothered me the most since I got into the whole "Add database rollback to your unit tests" thing, is how much work it takes to make your test suite use this feature. I actually went and made a new binary of the NUnit Framework to support this new attribute. All this because there is no clear extensibility model for NUnit these days. Peli, on the other hand has a very nice way of extending MbUnit, but it still entails recompiling his library for this to work (or am I wrong?) so - an idea came to me. A while ago Peli told me he had just found out about ContextBoundObjects and the ability to "intercept" method calls for pre and post processing. He said it might have some cool things that can be used with unit testing but we couldn't find something that was really cool to do with it. The other day, while reading this nice article about implementing interception in your code, I got an idea: Maybe interception and Contexts are the best way for extensibility? So, I gave it a shot. And it turns out pretty darn cool I have to say. Introducing the XtUnit.Framework project With this project you are now able to add any attribute you can think of to you NUnit (or any other xUnit tests) with the ease of simply deriving from a base class. In the solution that you can download you will find 3 projects: now here are the cool things: I'd love to get your feedback. Have fun :) I'd like to thank Jamie for helping me solve two simple and annoying bugs I just couldn't find with my thick head. HI Roy, I'm getting this on my testing solution when I try to use the SampleTestFixture.cs copied into my solution. This is using TestDriven.net 2.0.1948 Personal : TestCase 'M:dbTesting.SampleTestFixture.MyDataRelatedTest' failed: Couldn't find declaring type with name 'dbTesting.SampleTestFixture' Do you have an idea what's causing this ? Thanks for your help! Hi, I'm using DataRollBack attribute and it works fine with Sql Server...compliments...but now I want to use it even with an application that uses an Oracle 9i db accessed by ODP.NET . Reading your article, I've thought that it must work anyway, because ODP.NET is based upon ADO.NET . Enterprise Services manages transactions at this level...but it doesn't work, or rather, the transaction doesn't roll back...do you have any idea? Hi, I've forgot to say that the Oracle server is on a remote machine. This is probably the cause of the problem. Of course XtUnit works on the local DTC...it is possible to connect on the ServiceDomain object of a remote machine? And how to do that? Thanks for your help Cosimo I modified the code to use System.Transactions.TransactionScope...thus removing the need for the MSDTC and Enterprise Services. public class RollBackAttribute:TestProcessingAttributeBase { TransactionScope transactionScope; [DebuggerStepThrough] protected override void OnPreProcess() { try { transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew, new TimeSpan(0,0,0,10000,0)); } catch(Exception e) { OutputDebugMessage("Could not enter into a new transaction:\n" + e.ToString()); } [DebuggerStepThrough] protected override void OnPostProcess() Transaction.Current.Rollback(); transactionScope.Dispose(); OutputDebugMessage("Could not leave an existing transaction:\n" + e.ToString()); }
http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx
crawl-001
en
refinedweb
Many Swing components, such as labels, buttons, and tabbed panes, can be decorated with an icon — a fixed-sized picture. An icon is an object that adheres to the Iconinterface. Swing provides a particularly useful implementation of the Iconinterface: ImageIcon, which paints an icon from a GIF, JPEG, or PNG image. Here's a snapshot of an application with three labels, two decorated with an icon: The program uses one image icon to contain and paint the yellow splats. One statement creates the image icon and two more statements include the image icon on each of the two labels:ImageIcon icon = createImageIcon("images/middle.gif", "a pretty but meaningless splat"); label1 = new JLabel("Image and Text", icon, JLabel.CENTER); ... label3 = new JLabel(icon); The createImageIconmethod (used in the preceding snippet) is one we use in many of our code samples. It finds the specified file and returns an ImageIconfor that file, or nullif that file couldn't be found. Here is a typical implementation:/**; } } In the preceding snippet, the first argument to the ImageIconconstructor is relative to the location of the current class, and will be resolved to an absolute URL. The descriptionargument is a string that allows assistive technologies to help a visually impaired user understand what information the icon conveys. Generally, applications provide their own set of images used as part of the application, as is the case with the images used by many of our demos. You should use the Class getResourcemethod to obtain the path to the image. This allows the application to verify that the image is available and to provide sensible error handling if it is not. When the image is not part of the application, getResourceshould not be used and the ImageIconconstructor is used directly. For example:ImageIcon icon = new ImageIcon("images/middle.gif", "a pretty but meaningless splat"); When you specify a filename or URL to an ImageIconconstructor, processing is blocked until after the image data is completely loaded or the data location has proven to be invalid. If the data location is invalid (but non-null), an ImageIconis still successfully created; it just has no size and, therefore, paints nothing. As shown in the createImageIconmethod, it is advisable to first verify that the URL points to an existing file before passing it to the ImageIconconstructor. This allows graceful error handling when the file isn't present. If you want more information while the image is loading, you can register an observer on an image icon by calling its setImageObservermethod. Under the covers, each image icon uses an Imageobject to hold the image data. The rest of this section covers the following topics: - A More Complex Image Icon Example - Improving Perceived Performance When Loading Image Icons - Creating a Custom Icon Implementation - The Image Icon API - Examples that Use Icons Here's an application that uses six image icons. Five of them display display thumbnail images and the sixth diplays the full size the photograph. Try this: - Click the Launch button to run IconDemo using Java™ Web Start (download JDK 6). Or, to compile and run the example yourself, consult the example index. - Click any of the thumbnail images to view the full size photographs. - Hold the mouse over a photograph. A tool tip appears that displays the photograph caption. IconDemoApp demonstrates icons used in the following ways: - As a GUI element attached to a button (the thumbnail images on the buttons). - To display an image (the five photographs). The photographs are loaded in a separate thread by loadimages.execute. The loadimagescode is shown a little later in this section. The ThumbnailActionclass, an inner class in IconDemoApp.java, is a descendant of AbstractActionthat manages our full size image icon, a thumbnail version, and its description. When the actionPerformedmethod is called the full size image is loaded into the main display area. Each button has its own instance of ThumbnailActionwhich specifies a different image to show./** * Action class that shows the image specified in it's constructor. */ private class ThumbnailAction extends AbstractAction{ /** *The icon if the full image we want to display. */ private Icon displayPhoto; /** * @param Icon - The full size photo to show in the button. * @param Icon - The thumbnail to show in the button. * @param String - The descriptioon of the icon. */ public ThumbnailAction(Icon photo, Icon thumb, String desc){ displayPhoto = photo; // The short description becomes the tooltip of a button. putValue(SHORT_DESCRIPTION, desc); // The LARGE_ICON_KEY is actually the key for setting the // icon when an Action is applied to a button. putValue(LARGE_ICON_KEY, thumb); } /** * Shows the full image in the main area and sets the application title. */ public void actionPerformed(ActionEvent e) { photographLabel.setIcon(displayPhoto); setTitle("Icon Demo: " + getValue(SHORT_DESCRIPTION).toString()); } } Most often, an image icon's data comes from an image file. There are a number of valid ways that your application's class and image files may be configured on your file server. You might have your class files in a JAR file, or your image files in a JAR file; they might be in the same JAR file, or they might be in different JAR files. The following figures illustrate a few of the ways these files can be configured: If you are writing a real-world application, it is likely (and recommended) that you put your files into a package. For more information on packages, see Creating and Using Packages in the Learning the Java Language trail. Here are some possible configurations using a package named "omega": All seven configurations shown are valid, and the same code reads the image:java.net.URL imageURL = myDemo.class.getResource("images/myImage.gif"); ... if (imageURL != null) { ImageIcon icon = new ImageIcon(imageURL); } The getResourcemethod causes the class loader to look through the directories and JAR files in the program's class path, returning a URL as soon as it finds the desired file. In the example the MyDemo program attempts to load the images/myImage.pngfile from the omegaclass. The class loader looks through the directories and JAR files in the program's class path for /omega/images/myImage.png. If the class loader finds the file, it returns the URL of the JAR file or directory that contained the file. If another JAR file or directory in the class path contains the images/myImage.pngfile, the class loader returns the first instance that contains the file. Here are three ways to specify the class path: - Using the -cpor -classpathcommand-line argument. For example, in the case where the images are in a JAR file named images.jarand the class file is in the current directory:java -cp .;image.jar MyDemo [Microsoft Windows] java -cp ".;image.jar" MyDemo [Unix-emulating shell on Microsoft Windows — you must quote the path] java -cp .:image.jar MyDemo [Unix] If your image and class files are in separate JAR files, your command line will look something like:java -cp .;MyDemo.jar;image.jar MyDemo [Microsoft Windows] In the situation where all the files are in one JAR file, you can use either of the following commands:java -jar MyAppPlusImages.jar java -cp .;MyAppPlusImages.jar MyApp [Microsoft Windows] For more information, see the JAR Files trail. - In the program's JNLP file (used by Java Web Start). For example, here is the JNLP file used by DragPictureDemo:<?xml version="1.0" encoding="utf-8"?> <!-- JNLP File for DragPictureDemo --> <jnlp spec="1.0+" codebase="" href="DragPictureDemo.jnlp"> <information> <title>DragPictureDemo</title> <vendor>The Java(tm) Tutorial: Sun Microsystems, Inc.</vendor> <homepage href=""/> <description>DragPictureDemo</description> <description kind="short">A demo showing how to install data transfer on a custom component.</description> <offline-allowed/> </information> <resources> <j2se version="1.6+"/> <jar href="allClasses.jar"/> <jar href="images.jar"/> </resources> <application-desc </jnlp> In this example, the class files and the images files are in separate JAR files. The JAR files are specified using the XML jartag. - Setting the CLASSPATHenvironment variable. This last approach is not recommended. If CLASSPATHis not set, the current directory (".") followed by the location of the system classes shipped with the JRE are used by default. Most of the Swing Tutorial examples put the images in an imagesdirectory under the directory that contains the examples' class files. When we create JAR files for the examples, we keep the same relative locations, although often we put the class files in a different JAR file than the image JAR file. No matter where the class and image files are in the file system — in one JAR file, or in multiple JAR files, in a named package, or in the default package — the same code finds the image files using getResource. For more information, see Accessing Resources in a Location-Independent Manner and the Application Development Considerations. Applets generally load image data from the computer that served up the applet. The APPLETtag is where you specify information about the images used in the applet. For more information on the APPLETtag see Using the APPLET Tag Improving Perceived Performance When Loading Image Icons Because the photograph images can be slow to access, IconDemoApp.javauses a SwingWorkerto improve the performance of the program as perceived by the user. Background image loading — the program uses a javax.swing.SwingWorker object to load each photograph image and compute it's thumbnail in a background thread. Using a SwingWorkerprevents the program from appearing to freeze up while loading and scaling the images. Here's the code to process each image:SwingWorker invokes theSwingWorker invokes the/** * SwingWorker class that loads the images a background thread and calls publish * when a new one is ready to be displayed. * * We use Void as the first SwingWroker param as we do not need to return * anything from doInBackground(). */ private SwingWorker loadimages = new SwingWorker () { /** * Creates full size and thumbnail versions of the target image files. */ @Override protected Void doInBackground() throws Exception { for (int i = 0; i < imageCaptions.length; i++) { ImageIcon icon; icon = createImageIcon(imagedir + imageFileNames[i], imageCaptions[i]); ThumbnailAction thumbAction; if(icon != null){ ImageIcon thumbnailIcon = new ImageIcon(getScaledImage(icon.getImage(), 32, 32)); thumbAction = new ThumbnailAction(icon, thumbnailIcon, imageCaptions[i]); } else { // the image failed to load for some reason // so load a placeholder instead thumbAction = new ThumbnailAction(placeholderIcon, placeholderIcon, imageCaptions[i]); } publish(thumbAction); } // unfortunately we must return something, and only null is valid to // return when the return type is void. return null; } /** * Process all loaded images. */ @Override protected void process(List chunks) { for (ThumbnailAction thumbAction : chunks) { JButton thumbButton = new JButton(thumbAction); // add the new button BEFORE the last glue // this centers the buttons in the toolbar buttonBar.add(thumbButton, buttonBar.getComponentCount() - 1); } } }; doInBackgroundmethod in a background thread. The method places a full size image, thumbnail size image and caption into a ThumbnailActionobject. The SwingWorker then delivers the ThumbnailActionto the processmethod. The processmethod executes on the event dispatch thread and updates the GUI by adding a button to the toolbar. JButtonhas a constructor that takes an action object. The action object determines a number of the button's properties. In our case the button icon, the caption and the action to be performed when the button is pressed is all determined by the ThumbnailAction. Overhead — this program eventually loads all the source images into memory. This may not be desirable in all situations. Loading a number of very large files could cause the program to allocate a very large amount or memory. Care should be taken to manage the number and size of images that are loaded. As with all performance-related issues, this technique is applicable in some situations and not others. Also the technique described here is designed to improve the program's perceived performance, but does not necessarily impact its real performance. Creating a Custom Icon ImplementationThe createImageIconmethod returns null when it cannot find an image, but what should the program do then? One possibility would be to ignore that image and move on. Another option would be to provide some sort of default icon to display when the real one cannot be loaded. Making another call to createImageIconmight result in another null so using that is not a good idea. Instead lets create a custom Iconimplementation. You can find the implementation of the custom icon class in MissingIcon.java. Here are the interesting parts of its code:/** * The "missing icon" is a white box with a black border and a red x. * It's used to display something when there are issues loading an * icon from an external location. * * @author Collin Fagan */ public class MissingIcon implements Icon{ private int width = 32; private int height = 32; private BasicStroke stroke = new BasicStroke(4); public void paintIcon(Component c, Graphics g, int x, int y) { Graphics2D g2d = (Graphics2D) g.create(); g2d.setColor(Color.WHITE); g2d.fillRect(x +1 ,y + 1,width -2 ,height -2); g2d.setColor(Color.BLACK); g2d.drawRect(x +1 ,y + 1,width -2 ,height -2); g2d.setColor(Color.RED); g2d.setStroke(stroke); g2d.drawLine(x +10, y + 10, x + width -10, y + height -10); g2d.drawLine(x +10, y + height -10, x + width -10, y + 10); g2d.dispose(); } public int getIconWidth() { return width; } public int getIconHeight() { return height; } } The paintIconmethod is passed a Graphicsobject. The Graphicsobject gives the paintIconmethod access to the entire Java2D API. For more information about painting and Java2D, see Performing Custom Painting. The following code demonstrates how the MissingIconclass is used in the SwingWorker doInBackgroundmethod.private MissingIcon placeholderIcon = new MissingIcon(); ... if(icon != null) { ... } else { // the image failed to load for some reason // so load a placeholder instead thumbAction = new ThumbnailAction(placeholderIcon, placeholderIcon, imageCaptions[i]); } Using a custom icon has a few implications: - Because the icon's appearance is determined dynamically, the icon painting code can use any information — component and application state, for example — to determine what to paint. - Depending on the platform and the type of image, you may get a performance boost with custom icons, since painting simple shapes can sometimes be faster than copying images. - Because MissingIcondoes not do any file I/O there is no need for separate threads to load the image. The Image Icon API The following tables list the commonly used ImageIconconstructors and methods. Note that ImageIconis not a descendent of JComponentor even of Component. The API for using image icons falls into these categories: - Setting, Getting, and Painting the Image Icon's Image - Setting or Getting Information about the Image Icon - Watching the Image Icon's Image Load Examples that Use Icons The following table lists just a few of the many examples that use ImageIcon. Note: The photographs used in the IconDemoare copyright ©2006 spriggs.net and licenced under a Creative Commons Licence.
http://java.sun.com/docs/books/tutorial/uiswing/components/icon.html
crawl-001
en
refinedweb
Well ratbags! Left out again. I only get $890 a month in Social Security and don't pay any income taxes. How do I get to take part? UPDATE, 3/27: STAND DOWN! In a Clinton press conference today, I asked her Communications Director, Howard Wolfson, whether Clinton would produce all her returns next week. He promised she would. I take him at his word. Stop faxing (unless you want to send a one-pager saying "Thank you!") -- and check back at Off the Bus next week when we'll see what exactly shows up. GREAT WORK, AND THANK YOU!! Hillary Clinton, who claims to be "the most transparent" politician in America, isn't. There's a long history of her withholding documents relating to her personal history, her days as First Lady, and her financial records (for instance, why did she wait until after the important primary in NAFTA-hating Ohio to release her daily schedules as First Lady, which show all the work she did to help NAFTA pass?) (For those who are interested, there's more detail about Barack Obama's unusual frankness and transparency, and Clinton's lack of it, in a slightly fuller version of this post on VichyDems.) Today the issue is tax returns. It's become customary for Presidential candidates to release copies of their returns so voters can see how they've made their money -- and who they might owe favors to. Barack Obama released his tax returns to the media months ago. Today he went even further and posted all his returns from 2000 forward on his campaign website. You can get .pdfs of them here. But Hillary Clinton - who is rich enough that she's personally loaned at least $5 million to her own campaign to keep it afloat - hasn't produced any tax returns since her husband left office. That's seven years during which she and Bill went from being civil servants to becoming incredibly wealthy - yet she won't tell the public where her newly-acquired millions came from. (At first, she ignored the issue. Then, she said she would produce them sometime "around" April 15. Now, she's agreed to produce something tax-wise, but not until 3 days before the Pennsylvania primary on April 22 -- nearly a month away, and not leaving Pennsylvania voters much time to really thing about any issues her returns might raise -- and won't say exactly what she'll provide: for instance, will it be just the 1040s, or also the schedules and attachments that contain the actual details?) A more cynical person than me might suspect those tax returns contain something Clinton would rather hide -- but Senator Clinton has a simpler explanation. Why hasn't she done what every other candidate has done, and made copies of her tax returns available early enough to make a difference? Because, she claims, she's been too busy - as if she personally needs to rummage through her filing cabinet, run to Kinko's, and look up the fax number for the Associated Press. Personally, I think it's easier to photocopy a tax return than Clinton thinks. In fact, since 2007's returns are due in less than a month, everyone in America has just finished, or soon will be, filling out, copying (for their own files), and sending off a tax return. In other words, we're all doing exactly what Clinton claims she "doesn't have time" to do! So here's an idea: LET'S ALL SHOW HILLARY HOW EASY IT IS, BY SENDING HER COPIES OF OUR OWN TAX RETURNS. It's ridiculously simple: 1. Grab this year's tax return, and some past years' returns too if they're handy, and copy them. 2. Take a Sharpie and black out any personal information like your name, social security number, etc. - we don't want any identity theft! 3. Scribble a brief note, maybe in huge Sharpie letters on the first page of your return, saying something like: "Hey, Hil: if I can do this, so can you. Please produce your 2000-2006 returns NOW." 4. Fax to one of Clinton's campaign offices. (I say fax, not mail, because all mail to Senators has to be screened for anthrax before it's opened.) That's it! Grassroots activism at its best (and simplest!). Again: photocopy, Sharpie, note, fax - and you're a "netboots" activist (informed by the Internet, but taking action, "boots on the ground"). Feel good? It should! Here are three important guidelines to make sure you're just sending a message, not harassing or interfering with her campaign: 1. Please keep your cover note polite and to the point. 2. Don't send faxes to Clinton's official Senate offices (either in D.C. or in New York); those numbers are for her real work, and we don't want to interfere with that in any way. 3. Do your best to fax a local campaign office, not the national one. That way, the load will be spread among many fax machines, rather than jamming up a few important ones and making them unusable for campaign business. Again: we're sending a message, not doing a dirty trick by jamming her lines. Contact information for Clinton's various campaign offices can be found on her website's "States" page. If her local campaign office shows a telephone number but not a fax number, give them a call and ask politely what their fax number is (and please share that info in the comments section of this post so others in your area can use it). If her website doesn't give any local information for your state (e.g., she has lots of info for Pennsylvania, where she expects to win, and none for North Carolina, where she doesn't), then you don't have much choice, and need to use one of her other numbers. Only for those who don't have a local office to telephone or fax to, here are some fax numbers you can try: National Campaign Headquarters (Virginia): 703-962-8600 Pennsylvania Headquarters: 215-625-0379 New York Headquarters: 212-213-3041 If I can get more fax numbers to spread things out, I'll post them over at VichyDems - so please check there if the numbers above get too busy. Thanks for playing "help the candidates be transparent!" - and let us know how it goes! (Visit the author's home page.) Want to reply to a comment? Hint: Click "Reply" at the bottom of the comment; after being approved your comment will appear directly underneath the comment you replied to Well ratbags! Left out again. I only get $890 a month in Social Security and don't pay any income taxes. How do I get to take part? Get off it. She said April 15th so wait until then, then rag on her. In the meantime the less than 4% that Obama gave to charity & the $27,000.00 check he gave Rev. Wright is the focus right now. The media is saying Obama says do as I say, not as I do, time to protect your man. Great question -- luckily with a simple answer! If you don't file a return, or if you're uncomfortable faxing Clinton your own completed returns (even with personal info blacked out), then there's another option. Visit and print out a blank 1040 AND -- this is important -- any schedules you think the Clintons might need (Schedule A&B, itemized deductions; Schedule D, capital gains and losses; maybe Schedule H, household employment!)). Then fax those out, with a nice note asking whether she's lost hers and hoping these will help. EVERYONE: please emphasize in your cover notes that we want her to produce returns AND SCHEDULES. Her campaign is making noise about maybe producing some returns or financial info or something next week (good work!!), but is still vague about what exactly that will consist of. We should be clear: the same stuff as Obama posted to his website. That's equal, that's fair, that's not taking sides -- it's transparency. Thanks! I *love* how your Mind works..! This sounds like the kind of things my advocacy family would do.. :giggle: Very much appreciate where you said, "Please keep your cover note polite....." Respect, respect, respect.. Helps get one welcomed back in the front door the _next_ Time.. Cyber hugs.. :) Wow, where do I start.... Obama released his 2007 Tax returns in late Feb/ early March. He released the tax returns from 2000 - 2006 Today. The point is that he's willing to be open about his finances for the past 8 years. Hillary , thus far, is not. Now, with Obama only releasing his today, I dont' think it's fair to jump on Hillary just yet. Give her a day or two to see if she follows suit and releases her tax returns for the same period. The reason this is important is two fold. First, the public should see where the financial interests of their public officials lay. In Hillary's case, if she owns stock or has benefited financially from companies that were helped by NAFTA then that brings into question her willingness to renegotiate NAFTA. Renegotiation would be counter to her financial interests. Secondly, she has made the case on many occasions that she has been vetted and therefore is immune to any "October Surprise" that republicans could dig up from her past. Her unwillingness to release her tax returns ( thus far) brings that into question. The implication is that she has something to hide. I don't know whether that is true, but until she releases her returns that is what people will believe. The democratic electorate deserve to have a full picture of their candidates. Better yet, just pressure the Attorney General to indict her for her crimes. She, like Bush, will ignore public pressure. Do you really feel Hillary wants to release tax returns that show her and hubby making a fortune off their Exxon Mobil Common Stock while the rest of us are raped at the pump? Yeah, that ought to go over well in states that can barely afford to eat, much less pay for gas. Pay attention PA..... I think the dividends paid out to the Clintons--who hold self-reported assets in Exxon Oil Common Stock ($100,001-$250,000)--might be a bit of a confidence bust for everyday Americans, sick of pumping every available nickel and dime into the gas pump every single day. Meanwhile--unless she tells us otherwise--Hillary is making money off our pain AND laughing all the way to the bank. Google Smashed Frog to read more. Bank on it. Yeh, I would really like know how she and hubby made their 40-50 Millions in 16 years, considering they were broke with all those law suits when they left the WH. Hillary IS the most transparent candidate. It's crystal clear to anyone with even one working brain cell that she's devious, a pathological liar, and will do and say anything to get elected. She doesn't care about the issues- she cares about getting elected. If she really cared about the issues, she would step back and stop damaging the candidate who WILL be running against McCain in November. It appears she WANTS another 4 years of Bush rhetoric, as she just continues to hurt Obama AND the Democratic Party. Yes, she is completely transparent. Exactly, Anne! Hillary is only interested in having presidential power. She wants her name in history books, as the first female president. I don't believe she truly cares about the United States and, obviously, not her own Democratic Party. I think she will vote for John McCain before she'll vote for Barack Obama. And she will do so out of jealousy, spite, and vindictiveness. Don't care any more. Would rather just see her concede to Obama so the Dems can unite against McCain this fall. I like Hillary, I like Obama. But the handwriting is on the wall and it is time to move toward the light. They wrote books and gave speeches. You can acquire quite a fortune doing that. What I find more interesting is that Obama's preacher shouted that Obama has never been priviledged, Obama has never been wealthy. Well an adjusted gross income of six digits makes you wealthy and priviledged to me. Just a financial but one heck of a spread $10 Mill too $50 mill? 47 pages. As Bill Bradley put it, the reason their returns and contributors to the library are so important is because it will show who the Clintons are beholden to. Hillary accepts money from PACs, lobbysists and corporations. Those types of organizations are just looking for a place to throw money away to (since it is not tax-deductible) they are looking to gain favor. The same goes for the Clinton Presidential Library; if, say, a million dollars was donated by Saudi Princes, or some such thing, it would call to question where the Clinton's loyalties (and indeed favors) lay. Obama does not accept this kind of money, his campaign is funded by over a million individual donors. Yes, many have contributed the max, but most have not even come close. In this line of thinking, if you are beholden to the people who finance your campaign, then Senator Obama is beholden to average Americans. And I'm OK with that. To be sure, I would be most happy to have the same salary as either one of the Obamas, but I also note how much they've contributed to charity. Obama himself admitted to the press that he wouldn't ever loan $5M to his campaign because he doesn't have it. And of course, if you have nothing to hide, why so secretive? Uh, John McCain hasn't released his either. Uh, "books and speeches" doesn't give you the fortune to throw $5 million around like pocket change. It's the deals -- like the $30 million Bill earned for his library by using his presidential prestige to score an oil deal in an old Soviet Republic -- the gives pause. On the other hand, as I recall, it was only when Barack did well with his first book and Michele got a promotion that they finally came into the chips. Still, I bet they're not doing as well as 90% of the other two-Harvard-Law-grad couples. Six digits when you are running for President is a relative pauper ! Remember he was a law professor and a state legislator so it is not as if he did not have well paying jobs. But he did work for a living, and he worked smart. The job of senator itself pays 6 digits. Sounds pretty clean to me ! ( not that I don't wish I made 6 digits. ) The Clintons also did questionable deals. Hillary currently has an ethics complaint pending against her for filing false financial documents with the U.S. senate. If you find Obama's preacher so interesting why not give him a call? Hillary undoubtedly has a CPA firm handling all the financially affairs. And I would bet there's a team of CPA auditors pouring over returns and documents for accuracy and how to minimally release the information so that it doesn't look like she is hiding anything but at the same time hiding. That's part of the reason for the delay I would think. The CPAs are being very careful to cover themselves for any backlash. I'd like to see the tax returns to see where the $5 million dollar loan came from. All other campaign information is available to the public at This is the United States Of America. We cannot live in fear of a certain group of people. Barack Obama has mislead the American people. He made us believe he would cross the racial divide and bring us all together. He said he would have "good judgement" from day one. The when the Pastor Disaster hit, his whole platform collapsed. He has no right to win the nomination at this point. he really should step down if he actually cared about the Democratic party. WE STILL HAVE A CHANCE TO FIX THIS MESS. Here are some tapes to see if you haven't already. Please pass these on to all the people in your life, so that everyone can make an informed decision. kATIE1263, YOU SOUND Like Lou Dobbs, when he said he couldn't belive people talked like that (referencing the Rev. Well, you and he (Dobbs) should have been in Korea, in the early sixties and heard what a bunch of gi's had to say about JFK, for extending their assignment. Then, you should have heard what they all said about McNamara, when they were on the ground in other countries and he was lying to everybody about that. Oh, and you should have been in the tent after the Generals left the meeting and heard the drivers. To put it mildly, I'll say, they complained bitterly, ok. See, when you pay your dues by "serving your country via military duty" it allows one to justify most anything and everything they complain about -even if the language is a bit colorful. In other words, it ain't no big deal. Lou, did you get that? You're right Katie1263, we cannot live in fear of a certain group of people. Yet - black Americans and other people of color, have done just that, for years. But that's another subject. Here we are talking about honesty, truth telling and character. Obama leads the way in this instance. Hillary has lied, and lied - Oh excuse me, "misspoke" and thrown everyone under the bus - Democratic party included. It is shameful that she and Bill would be so callous, and feel so privileged after the mess they served us on a worldwide platter, as to chance causing the Democrats to lose in the general election. Shame, shame, shame on them. She should SIT DOWN. If you really want to check out the "Pastor Disaster" then get the truth - see and hear the rest of the story. Anyone, yourself included, can sound like a not-so-nice person when their words and situations are taken out of context. If you are really interested in the truth, then go see - and open your eyes and ears. Then tell me who is the person who can't be trusted. At the very least, Obama is upfront and honest. And there are a lot more real truths out there about Hillary for anyone who takes the time to look. OK -- but that's O/T. The question is, should Clinton make her 2000-2006 tax returns public like Obama has, to make sure we know everything in everyone's closet and aren't ambushed in November? If Clinton is hiding something -- and why wouldn't she have slapped her own returns up on her website and said "so there!" to Obama by now if she doesn't have something to hide? -- then the Democratic Party is much better off knowing sooner rather than later, I'd think. I'd love to know your answer -- should she or do we give her a pass -- and why or why not. Thanks! what happened to this in the news cycle? i've had MSNBC on yesterday and today and -- NOTHING. all they're talking about is how it's harder for a woman to get elected than it is for a black man and how wanting hillary to quit this nonsense is unfair. oh, and chelsea. oh, and, of course, reverend wright. not a WORD about tax returns -- not barack's, not hers ... nada. How much do you want to bet "around April 15" that has now become "a few days before the April 22 primary" somehow gets delayed until after the primary. (Because she's just too busy campaigning) Or it's just the 1040s and no really relevant documents. She will drop out of the race before she releases any tax records. Mark my words. Why? Because we'll see just how much money Bill has received for special interests. So don't hold your breath. Hillary is still 'too busy' to produce her returns. Too busy lying about her 'experience'. I love your post! It is true that there is something really secretive if not downright suspicious about EIGHT years of missing tax records. Everyone should stop and think about that! Why not just photocopy them (she has a huge staff..come on!) and make them public. Well, she may scrape by in Pennsylvania but I am going to enjoy seeing her lose big time in my home state of North Carolina. By then, hopefully she will have enough dignity to release EIGHT years of missing tax returns. No wonder she isn't agreeing to a debate in NC. Go Obama! I also realize that they are probably buying time to sanitize their records. But the fact that she is blatantly withholding them and for so many years speaks volumes. Is the Huffington Post an online newspaper or is it an Obama propaganda medium? DId I miss something? Just a little observation on how this site is super one sided when it comes to election news. At least fox news talk about the democrats a little.... I have not decided who I am going to vote for. One thing I do know for certain is that I will not vote for anyone that will not reveal their past tax records before voting time. . My logic is pretty simple. If they have something to hide before they get elected, just think what they will hide when they get elected to the most powerful office in the US. PS Most people with big bucks have tax specialists do their returns because they are very busy making more money. She said she will release the returns around April 15 - what is your point other than using the issue as dirty campaign on behalf of Obama. As for the Clinton libray donor, the library is Bill's not Hillary's. I guess it is ok to drag the spouse into everything because he is famous and a man. You would not do that to Michelle would you? Furthermore, doesn't obama want to stay above all these little things and focus on solving big problems by forging unity?? One has to include him as he said America would be getting 2 for the price of 1. Also, sometimes he campaigns as if he running for President and not his wife. Amen. I'm actually surprised that Obama only released his returns yesterday, after weeks and weeks of sniping about Hillary's failure to release hers. Hillary has said she would release them by mid-April, let her release them by mid-April. There will be plenty of time to analyze them all then. What a non-story. Nofuzzydreams: Please understand that 2007's return, which isn't due until April 15, isn't the issue. Of course she should have as long as anyone else to file her return this year. But why is she stalling on her 2006 return (Obama produced his 2006 months ago), and on her 2005, 2004, 2003, 2002, 2001, and 2000 returns (all of which Obama produced yesterday, and which Clinton hasn't produced)? And now Clinton has backed away from her April 15 promise (which was always an "on or around" fuzzy date anyway), saying that she'll produce her returns three days before the PA primary, ie, April 19 -- which does PA's early-voting absentee voters no good (Clinton's website has a big button urging Pennsylvanians to vote absentee), and doesn't give the rest of the voters much time to process the new info. So the question, again, is: why can't Clinton photocopy and produce returns that are already done, filed with the IRS, and sitting in her file cabinet? Posted March 25, 2008 | 02:23 PM (EST)
http://www.faqs.org/faqs/software-eng/testing-faq/
crawl-001
en
refinedweb
Provided by: alliance_5.0-20110203-4_amd64 NAME namealloc - hash table for strings SYNOPSYS #include "mut.h" char ∗namealloc(inputname) char ∗inputname; PARAMETER inputname Pointer to a string of characters DESCRIPTION The namealloc function creates a dictionnary of names in mbk. It warranties equality on characters string if the pointers to these strings are equal, at strcmp(3) meaning. This means also that there is a single memory address for a given string. The case of the letters do not matter. All names are changed to lower case before beeing introduced in the symbol table. This is needed because most of the file format do not check case. namealloc is used by all mbk utility function using names, so its use should be needed only when directly filling or modifing the structure, or when having to compare an external string to mbk internal ones. This should speed up string comparisons. One shall never modify the contains of a string pointed to by a result of namealloc, since all the field that points to this name would have there values modified, and that there is no chance that the new hash code will be the same as the old one, so pointer comparison would be meaningless. All string used by namealloc are constants string, and therefore must be left alone. RETURN VALUE namealloc returns a string pointer. If the inputname is already in the hash table, then its internal pointer is returned, else a new entry is created, and then the new pointer returned. EXAMPLE #include "mut.h" #include "mlo.h" lofig_list ∗find_fig(name) char ∗name; { lofig_list ∗p; name = namealloc(name); for (p = HEAD_LOFIG; p; p = p->NEXT) if (p->NAME == name) /∗ pointer equality ∗/ return p; return NULL; } DIAGNOSTICS namealloc can be used only after a call to mbkenv(3). SEE ALSO mbk(1).
http://manpages.ubuntu.com/manpages/precise/man3/namealloc.3.html
CC-MAIN-2019-43
en
refinedweb
First things first here's the code: IEnumerator FadeToBlack() { Debug.Log ("hi"); mySprite.color = GameObject.FindWithTag ("PlayerBase").GetComponent<ColourMaster> ().worldColour; Color TempC = mySprite.color; float speedFade = 5f; while (mySprite.color.a > 0f) { TempC.a -= (0.01f * speedFade); mySprite.color = TempC; yield return new WaitForSeconds (0.01f); } } First of all, the coroutine isn't being spammed. The debug message only appears once, meaning the coroutine is only started once. After it does start, the object's sprite renderer (mySprite) has its color defined. Then some other thing are set, like a temp color variable. Then I start a while loop that runs while the alpha value is greater than 0. As TempC's alpha is lowered, mySprite.color is changed to TempC. In theory, this is supposed to lower the alpha of the sprite until it is 0. Next, the object is destroyed (a timer is elsewhere on my script and functions properly), independently of this coroutine. However, the observed effect is that the alpha value seems to pick random values every frame. Visually, this looks like the object's sprite is glitching out and flickering. Does anyone know how to fix this? Any help would be greatly appreciated. just as a small side-node: you might want to use mySprite.color = Mathf.Clamp01(TempC); in line 8 to prevent the value from becoming lower than 0. Answer by Glurth · Feb 03, 2018 at 11:03 PM I suspect it has to do with something else in your project, or perhaps the way you are invoking it? I created the following test script decrease a color's alpha, and it worked as expected. Of course, this is the ONLY script in the test project, so I know nothing ELSE is changing the color. ( I also used just a member color variable, rather than another object's sprite's color, to simplify things- so the way you select the sprite might also have something to do with your issue.) But this should be proof enough that the coroutine is being invoked, when/as expected. public class corouttest : MonoBehaviour { public Color aColorValue; void Start () { StartCoroutine(FadeToBlack()); } IEnumerator FadeToBlack() { Debug.Log("hi"); // mySprite.color = GameObject.FindWithTag("PlayerBase").GetComponent<ColourMaster>().worldColour; Color TempC = aColorValue; float speedFade = 5f; while (aColorValue.a > 0f) { TempC.a -= (0.01f * speedFade); aColorValue = TempC; Debug.Log("fading: " + aColorValue.a); yield return new WaitForSeconds(0.01f); } } } Yeah you were right. I feel pretty dumb now. I have another script attached to the object in question which makes sure the objects color matches the world color. I forgot to disable it before running the coroutine. Did that and now it. UI Text not changing color [C#] 2 Answers Make the cube have the same color as the background every 3 seconds 0 Answers Problem Changin Standard shader Alpha Channel 0 Answers Change startcolor of particle system through script 1 Answer Animation to fade alpha but retain color 2 Answers
https://answers.unity.com/questions/1463849/trying-to-make-sprites-alpha-decrease-to-0-in-orde.html?sort=oldest
CC-MAIN-2019-43
en
refinedweb
Creating a CV in React-PDF Andri Originally published at andri.dk on ・3 min read TLDR; But why? I never much cared for pixel-pushing on screen. It has always been a necessary evil. But, print? Love that shit. I did my university reports in LaTeX, even the graphics and even though the errors were HORRIBLE, it remained a loyal TeX fan. So when I received a task at work to evaluate react-pdf vs CSS-printing I knew I had something special to play with. I wanted the following features: - Use JSON resume for the CV data - Components for for work-experience, education and sections - Built automatically with my Gatsby site into a PDF file A good starting point There is a an example in the react-pdf repo that has much prettier code then mine. So, if you want to make your own, I suggest you start there. Using JSON resume, well mostly If you're anything like me, you don't like updating your CV, or portfolio. Or you just forget. We can use one JSON file for all those things and be done with it. The spec is good, but I made some minor changes to mine. I added a "skills" array to work-item and "color" string to skill-items. Visit jsonresume.org and make your own resume.json file. They even offer free hosting and rendering of your résumé, and if you're feeling lazy, then just do that instead. Components I've pasted some code in here, so that you can get a little feeling for how this is built. But, keep in mind that the code might change, and refer to the repo for code-examples. Box A simple box, with an headline. export const Box = ({ children, title, color, style = {} }) => ( <View wrap={false} style={{ marginBottom: 20 }}> <SectionHeader color={color}>{title}</SectionHeader> <View style={{ ...style }}> {children && typeof children === 'string' ? ( <Text>{children}</Text> ) : ( children )} </View> </View> ) Work Item export const TimelineItem = ({ title, period, children, employer, tags = [], location }) => { tags = tags.sort() return ( <View wrap={false} style={{ marginBottom: 10 }}> <View style={{ flexDirection: 'row', justifyContent: 'space-between', marginBottom: 2.5, flexWrap: 'wrap' }} > <Text style={{ fontWeight: 'bold' }}> {title}, <Text style={{ fontWeight: 'normal' }}>{employer}</Text> </Text> <Text>{period}</Text> </View> {children && <Text style={{ marginBottom: 2.5 }}>{children}</Text>} {tags && ( <View style={{ flexDirection: 'row' }}> {tags && tags.map(m => ( <Tag key={m} color={tagColors[m.toLowerCase()]}> {m} </Tag> ))} </View> )} </View> ) } Build with Gatsby Originally, I wanted Gatsby to render my CV as a page, using react-dom on the client and pdf on the server. That turned out to be very hard to do, with little gain. So now we just generate the PDF file seperately. In retrospect, I should probably move this into pkg/cv instead of src/cv. package.json "scripts": { "build-cv": "cd src/cv && babel-node build.js", "watch-cv": "cd src/cv && nodemon --exec babel-node build.js" }, gatsby-config.js exports.onPostBuild = () => { const cp = require('child_process') cp.execSync('yarn run build-cv') } src/cv/.babelrc From the react-pdf repo. I also tried to adapt Gatsby's babel configuration here, but without luck. { "presets": [ [ "@babel/preset-env", { "loose": true, "targets": { "node": "current" } } ], "@babel/preset-react" ], "plugins": [ "@babel/plugin-transform-runtime", "@babel/plugin-proposal-class-properties" ] } Workflow Then just run yarn run watch-cv while developing it. I use evince on Linux as my PDF viewer, because it automatically reloads the file on-write. So almost like hot-reloading. Conclusion This was a fun project for me. I'm not seeking employment, so I'm not motivated to polish it further at this time. I hope this gave a few bread-crumbs, if you're considering something similar. Showdev: We are building an online meeting app - Collabify 🎉🎦🖼🎭 Collabify - An online meeting app that runs in your browser. Built by two students.
https://dev.to/andrioid/creating-a-cv-in-react-1h2p
CC-MAIN-2019-43
en
refinedweb
Search Criteria Package Details: platformio-git v4.0.1.r1.gadde5e6a-1 Dependencies (11) - python-bottle (python-bottle-git) - python-click (python-click-5.1) - python-colorama (python-colorama-git) - python-lockfile - python-pyserial - python-requests - python-semantic-version - python-setuptools - python-tabulate (python-tabulate-git) - arduino (arduino-git, teensyduino, arduino-rc) (optional) – For Arduino based projects - energia (optional) – For MSP430 based projects Latest Comments 1 2 Next › Last » sl1pkn07 commented on 2019-09-30 14:44 Hi. is possible add in the package the udev rules from PIO? greetings sl1pkn07 commented on 2019-08-21 18:08 please add python-tabulate req for pio 4.0.1-rc2 Jake commented on 2019-07-14 10:50 python-arrowis not required anymore since 3.5.1: sgar commented on 2019-02-02 16:39 hi, would be possible to update this package to compile with python 3 instead of python 2... upstream already added initial support for python 3.. bilko commented on 2018-04-14 16:33 Hello, It seems that python2-backports.functools_lru_cache is now a runtime dependency. killermoehre commented on 2017-08-08 11:43 Hi, seems like python2-arrow is needed as runtime dependency $ platformio init --ide emacs --board nanoatmega328 Traceback (most recent call last): File "/usr/bin/platformio", line 6, in <module> from pkg_resources import load_entry_point File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3049, in <module> @_call_aside File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3033, in _call_aside f(*args, **kwargs) File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3062, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 658, in _build_master ws.require(__requires__) File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 972, in require needed = self.resolve(parse_requirements(requirements)) File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 858, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'arrow<1' distribution was not found and is required by platformio will.price94 commented on 2016-09-04 10:39 Hi Alex, I've taken over maintainer-ship of the python2-semantic-version package and have now updated it, and added it to the dependencies of this package. Thanks :) will.price94 commented on 2016-09-04 10:20 Hi Alex, Thanks for the tip although python2-semantic-version is at 2.4.1 so doesn't solve this problem. I'll see what I can do and try and get a fix out later today. alex4o commented on 2016-09-04 08:41 Add "python2-semantic-version" as dependency pkg_resources.DistributionNotFound: The 'semantic_version>=2.5.0' distribution was not found and is required by platformio will.price94 commented on 2016-01-09 21:38 Updated.
https://aur.archlinux.org/packages/platformio-git/
CC-MAIN-2019-43
en
refinedweb
Doing so makes the test suite test non-conforming behavior that neither libc++ nor libstdc++ needs. Details - Reviewers - Doing so makes the test suite test non-conforming behavior that neither Diff Detail Event Timeline Guys, I understand that given pstl code is becoming a part of C++ standard library and , of course should be std::execution::unsequenced_policy and so on.. But, there is PSTL standalone version, which works with the other compilers and environment , MSVC for example, where there is another implementation of Parallel STL. In that case we have to differ a namespace where the execution policies are defined, otherwise a user code (and the tests) will use MSVC implementation of parallel algorithms instead of our... I have two alternative ideas here: - Using a special macro like __PSTL_NAMESPACE_FOR_EXECUTION On LLVM side it will be defined as #define __PSTL_NAMESPACE_FOR_EXECUTION std On our side it will be defined as #define __PSTL_NAMESPACE_FOR_EXECUTION __pstl - Or just use using namespace std; or (depends on a macro like PSTL_POLICY_USAGE) using namespace __pstl; in test\support\pstl_test_config.h or test\support\utils.h Of course, we should still support the use case you have in mind. My goal is not to make the PSTL *only* useful for standard libraries -- the PSTL shouldn't inject its declarations into namespace std, and this was a follow-up fix I planned on making. Let me address your concern first and I think this patch will then make more sense to you. Basically, I was thinking (like you) that we should have a way to customize the namespace in which pstl injects the algorithms. Then standard libraries can use something like std::__1, and you can use something like pstl::execution or whatever you want. I think we're in agreement here -- let me put this mechanism in place before removing this namespace injection. we should have a way to customize the namespace in which pstl injects the algorithms Actually, pstl the algorithms are defined in "std" namespace. pstl injects just execution policy into "std". I would like to pay your attention for changing like this in the tests std::for_each_n(std::execution::seq, expected_first, n, Flip<T>(1)); Where a passed execution specifies an overloaded(by a policy type) implementation of std::for_each and now the code is not correct in general I guess, because the test engine passes __pstl::execution::seg etc policies. So, just for the test I suggest std::for_each_n(__PSTL_POLICY_NAMESPACE::execution::seq, expected_first, n, Flip<T>(1)); to specify a proper overload in case of standalone PSTL version and there is a standard PSTL from a standard library delivered with a compiler. Continuing the discussion - the code snippet of the test engine - pay your attention to using namespace __pstl::execution invoke_on_all_policies(Op op, T&&... rest) { using namespace __pstl::execution; // Try static execution policies invoke_on_all_iterator_types()(seq, op, std::forward<T>(rest)...); invoke_on_all_iterator_types()(unseq, op, std::forward<T>(rest)...); #if _PSTL_USE_PAR_POLICIES invoke_on_all_iterator_types()(par, op, std::forward<T>(rest)...); invoke_on_all_iterator_types()(par_unseq, op, std::forward<T>(rest)...); #endif } Louis, I have an issue regarding the dummy stdlib headers. Proposed location is valid only for the test. (/test/support/stdlib) The tests is not necessary part of standalone PSTL because is not a part of implementation of PSTL. For example, If i have an application which use standalone pstl include "pstl/test/support/stdlib" would look strange at least. So, I suggest moving the dummy stdlib headers into ..pstl\internal\stdlib or into ..pstl\stdlib I believe what's wrong here is the expectation that the PSTL is something that can be shipped as-is by implementations. The idea is that the PSTL provides the functionality but not necessarily the packaging for the parallel algorithms. Hence, Standard libraries have a little bit of boilerplate to include this functionality, and a "standalone" version of the PSTL should also wrap the internal headers to package the functionality however they want to. In other words, I think the correct thing to do is for you guys to define <algorithm> & friends headers that #include <pstl/internal/XXX.h> while making sure that it works with the underlying standard library. I don't think the pstl should include headers that are useless to all but one specific shipping vehicle of the PSTL (the standalone version). That being said, if we realize that there's something useful to factor out for several shipping vehicles, of course we should do it. WDYT? I don't think the pstl should include headers that are useless to all but one specific shipping vehicle of the PSTL (the standalone version) Why cannot it be lie onto high "include\pstl" (or even "pstl") level of "LLVM upstream" as a special folder, which will not be taken into libcxx and libcstd++ libraries? (BTW, and as well the other extra functionality which is not going to be released with the next libcxx/libcstd release?, for e[ample alternative back-ends, experimental features, etc) Just ignore it "extra" folders when the code are integrating into libcxx and libcstd repositories. What do you think here? I don't think it should be necessary to subset the library when shipping it, and in fact I didn't plan on needing that for libc++. However, if all we're arguing about is the path for these headers, then we can put them into include/pstl/internal/test_stdlib or something similar. But let's avoid any name/path that suggests these headers are part of the public interface of the library, because they are not (that specific bit is the point I'm trying to make). But let's avoid any name/path that suggests these headers are part of the public interface of the library, because they are not I'm absolutely agree. And I suggest put these "dummy" standard headers on the rPSTL repo root level, where there are other folder which not be taken for the library, such as "doc", "cmake" and "test". So, I suggest adding "stdlib\pstl" where "dummy" standard headers are placed, and "experimental" folder for future C++ features: cmake doc include stdlib\pstl stdlib\experimental or just experimental test Arguments for "stdlib\pstl" folder: Many people have already use standalone PSTL and a usage model is following: Add the <install_dir>/include folder to the compiler include paths. You can do this by calling the pstlvars script. Add #include "pstl/execution" to your code. Then add a subset of the following set of lines, depending on the algorithms you intend to use: #include "pstl/algorithm" #include "pstl/numeric" #include "pstl/memory" Arguments for "experimental" folder: We would prefer keeping One main repository for PSTL further development (on LLVM site). And we are going to develop just features which already have had "Technical Specification" and to be going to added into one of the next C++ standard. Otherwise we have to support the second active development in the our own repo and "sync-up with LLVM repo" overheads will be very big in that case.
https://reviews.llvm.org/D60537
CC-MAIN-2019-43
en
refinedweb
! Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: The Gravity Sensing Potentiometer I found a commercial grade one in an old helicopter remote, but if you don't want to buy one, the next step tells you how to build your own one. For the sake of people who are going to build their own one, I built my simulator sensor with one commercial sensor and one homemade sensor (you need 2 total) Step 2: Making Your Own Gravity Sensing Potentiometer This step shows you how to create a perfectly working gravity sensing potentiometer from household parts Step 3: The Wheel I found an old light up Frisbee but it did not have the ability to switch on and off the light, plus it came with pesky button cell batteries. This step solves that problem by rewiring the Frisbee to have external battery power and a switch to turn the internal light on and off. Connect wire to the positive and negative ends of the Frisbee circuit. Wire up a switch and connect it between the battery and the circuit. Glue everything down. Step 4: The Casing Bring out the legos! 12x8 box with 3x6 hole and 6x6 hole these are used for up, down, left, and right sensor movement The box should be 5 high You can then fill in spaces so that the sensor is more compacted and directionally limited Make sure to include holes for wires to come out Step 5: Hooking Everything Up Attach longer wires to everything so that you can use the wheel from a distance Glue the sensors to plates and pack them into the holes in the Lego casing Then attach Velcro to the underside and to the back of the "wheel" Step 6: Optional Shield (Method One) You can make your own "shield" using one of these two methods: 1: get some PCB and some pins and solder them to the board and solder on wires Step 7: Optional Shield (Method Two) 2: get some pins and glue them or use silicon to fasten them to a plate. Make sure the pins line up with the diagram in the next step Let the silicon dry for a day I used cut safety pins and they worked well in the end I used hot glue, then super glue, then silicon to match the strength of soldering Step 8: The Diagram That Will Lead You on Your Way to Greatness Do NOT mess up on this diagram. Make sure everything is correct before proceeding and plugging in the arduino. Potentiometer 1 right goes to GND middle goes to Analog 5 left goes to pin 4 Potentiometer 2 right goes to GND middle goes to Analog 0 left goes to pin 8 Step 9: Programming (Arduino) Arduino Code: void setup(){ Serial.begin(9600); pinMode(4,OUTPUT); pinMode(8,OUTPUT); } void loop(){ digitalWrite(4,HIGH); int d=analogRead(A5); digitalWrite(4,LOW); digitalWrite(8,HIGH); int r=analogRead(A0); digitalWrite(8,LOW); int minimum=400; int maximum=800; Serial.println(d); Serial.println(r); //foreward tilt if(d>maximum){ Serial.println('0'); } else{ Serial.println('1'); } delay(12.5); //backwards tilt if(d<minimum){ Serial.println('2'); } else{ Serial.println('3'); } delay(12.5); //left tilt if(r<minimum){ Serial.println('4'); } else{ Serial.println('5'); } delay(12.5); //right tilt if(r>maximum){ Serial.println('6'); } else{ Serial.println('7'); } delay(12.5); } Step 10: Programming (Python) Python Code: import serial import codecs import ctypes import time from time import sleep SendInput = ctypes.windll.user32.SendInput PUL = ctypes.POINTER(ctypes.c_ulong) class KeyBdInput(ctypes.Structure): _fields_ = [("wVk", ctypes.c_ushort), ("wScan", ctypes.c_ushort), ("dwFlags", ctypes.c_ulong), ("time", ctypes.c_ulong), ("dwExtraInfo", PUL)] class HardwareInput(ctypes.Structure): _fields_ = [("uMsg", ctypes.c_ulong), ("wParamL", ctypes.c_short), ("wParamH", ctypes.c_ushort)] class MouseInput(ctypes.Structure): _fields_ = [("dx", ctypes.c_long), ("dy", ctypes.c_long), ("mouseData", ctypes.c_ulong), ("dwFlags", ctypes.c_ulong), ("time",ctypes.c_ulong), ("dwExtraInfo", PUL)] class Input_I(ctypes.Union): _fields_ = [("ki", KeyBdInput), ("mi", MouseInput), ("hi", HardwareInput)] class Input(ctypes.Structure): _fields_ = [("type", ctypes.c_ulong), ("ii", Input_I)] def PressKey(hexKeyCode): extra = ctypes.c_ulong(0) ii_ = Input_I() ii_.ki = KeyBdInput( hexKeyCode, 0x48, 0, 0, ctypes.pointer(extra) ) x = Input( ctypes.c_ulong(1), ii_ ) ctypes.windll.user32.SendInput(1, ctypes.pointer(x), ctypes.sizeof(x)) def ReleaseKey(hexKeyCode): extra = ctypes.c_ulong(0) ii_ = Input_I() ii_.ki = KeyBdInput( hexKeyCode, 0x48, 0x0002, 0, ctypes.pointer(extra) ) x = Input( ctypes.c_ulong(1), ii_ ) ctypes.windll.user32.SendInput(1, ctypes.pointer(x), ctypes.sizeof(x)) def PressW(): PressKey(0x57) #W def ReleaseW(): ReleaseKey(0x57) #W def PressA(): PressKey(0x41) #A def ReleaseA(): ReleaseKey(0x41) #A def PressS(): PressKey(0x53) #S def ReleaseS(): ReleaseKey(0x53) #S def PressD(): PressKey(0x44) #D def ReleaseD(): ReleaseKey(0x44) #D port = "COM6" ser = serial.Serial(port, 9600, timeout=0) while True: #data = ser.read(9999) line = ser.readline() if line: print( 'Got:', line) if line==b'0\r\n': print('W_UP') PressW() elif line==b'1\r\n': print('W_DOWN') ReleaseW() if line==b'2\r\n': print('S_UP') PressS() elif line==b'3\r\n': print('S_DOWN') ReleaseS() if line==b'4\r\n': print('A_UP') PressA() elif line==b'5\r\n': print('A_DOWN') ReleaseA() if line==b'6\r\n': print('D_UP') PressD() elif line==b'7\r\n': print('D_DOWN') ReleaseD() sleep(0.0125) print('0') ser.close() Step 11: The Game Step 12: One Step Further If you want to go above and beyond, you can either make or use a pedal from an electric piano to act as a gas pedal/ brake pedal. I wish you the best of luck and hope you succeed in creating your very own driving simulator because trust me, they are way better than using the keyboard keys to drive a car! In the future, I plan on using this concept to build a leg band that tracks movement and can use running movements to power internet speed etc. to make you have to work for that connection! Just imagine an new way of computer-body communication! Post in the comments or message me if you have any other applications for this sort of device! Participated in the Holiday Gifts Contest Participated in the UP! Contest Participated in the Instructables Design Competition 2 Discussions 6 years ago on Step 11 well done! could show us a video of it in action? Reply 6 years ago on Step 11 I'm working in it. My arduino started doing wierd stuff before I could video (I'm not sure why, it just started giving me errors when trying to upload) so it may be a while before I can save up the $ for a new chip.
https://www.instructables.com/id/DIY-Driving-Simulator/
CC-MAIN-2019-43
en
refinedweb
Access $router outside vue How can I access $router in a .js file? i would try to import the router where you need it: import router from '/router/index'; //replace with your correct path Tried that, but the line router().replace({name: 'logout'})only changes the url, without anything happening on the UI. Things work after I click F5 - Allan-EN-GB Admin last edited by router().push({name: 'logout'}) Both router().replace({name: 'logout'})and router().replace({name: 'logout'})only change the URL, but nothing more happens. The page isn’t changed, no errors are shown in the console. I have to click F5 and then things work (because the URL is changed to the right one) - rstoenescu Admin last edited by Use boot files where you have access to the instance of the Router (as param to the default export method). If you import and call router() you are essentially creating ANOTHER instance of the router, so nothing can actually happen to your app since your app is connected only to the initial Router. Thank you @rstoenescu for the guidelines. I’m trying to use the router in a vuex action inside of a module. How would I transfer that part of the redirect logic in a boot file? - metalsadman last edited by metalsadman @reath follow what @rstoenescu suggested, then import that boot file in your vuex. edit. something like // boot/router.js let routerInstance = void 0 export default async ({ router }) => { // something to do routerInstance = router } export { routerInstance } // store/somemodule/action.js import { routerInstance } from 'boot/router' export const someAction = (...) => { ... routerInstance.push('/some-route') } - rstoenescu Admin last edited by If you’re using this in a Vuex store file, then it will suffice to access “this.$router”. Just make sure you don’t define your actions with ES6 arrow syntax (because “this” will mean something else as an effect). export const someAction(...) { //... this.$router...... } What are your tring to accomlish? In a scenario, I need to command the routing using electron menu. In this case I use electron function to fire an event and inside vue I can access a ‘bridge’ to listen for electron-initiated events. So user click on a menu and vue router change the page
https://forum.quasar-framework.org/topic/3960/access-router-outside-vue/3
CC-MAIN-2019-43
en
refinedweb
Validation and Inheritance If you use inheritance, you need to know how the validation rules are applied throughout the class hierarchy. Here is an example of a simple hierarchy, where class PreferredCustomer inherits from class Customer. (The validator attributes used in the example, such as CustomerNameValidator refer to custom validators and are not validators included with the Validation Application Block.) public class Customer { [CustomerNameValidator] public string Name { get { ... } set { ... } } [DiscountValidator] public virtual double Discount { get { ... } set { ... } } } public class PreferredCustomer : Customer { [PreferredDiscountValidator] public override double Discount { get { ... } set { ... } } } 'Usage Public Class Customer <CustomerNameValidator()> _ Public Property Name(ByVal _name As String) Get ' ... End Get Set(ByVal value) ' ... End Set End Property <DiscountValidator()> _ Public Overridable Property Discount(ByVal _discount As Double) Get ' ... End Get Set(ByVal value) ' ... End Set End Property End Class Public Class PreferredCustomer Inherits Customer <PreferredDiscountValidator()> _ Overrides Public Property Discount(ByVal _discount As Double) Get ' ... End Get Set(ByVal value) ' ... End Set End Property End Class In this example, the PreferredCustomer class derives from the Customer class, and it also overrides the Discount property. There are two rules for how validators work within a class hierarchy: - If a derived class inherits a member and does not override it, the member's validators from the base class apply to the derived class. - If a derived class inherits a member but overrides it, the member's attributes from the base class do not apply to the derived class. In this example, the CustomerNameValidator attribute applies to the PreferredCustomer class, but the DiscountValidator attribute does not. Instead, the PreferredDiscountValidator attribute applies. If this is not the desired behavior, you can use validators of base classes to check instances of derived classes. The following code example shows how to do this. It assumes that you have resolved an instance of the Validation Application Block ValidatorFactory class and stored it in a variable named valFactory. Validator<Customer> customerValidator = valFactory.CreateValidator<Customer>(); PreferredCustomer myPreferredCustomer = new PreferredCustomer(); // Set properties of PreferredCustomer here ValidationResults r = customerValidator.Validate(myPreferredCustomer); 'Usage Dim customerValidator As Validator(Of Customer) = valFactory.CreateValidator(Of Customer)() Dim myPreferredCustomer As PreferredCustomer = New PreferredCustomer() ' Set properties of PreferredCustomer here Dim r As ValidationResults = customerValidator.Validate(myPreferredCustomer) This example validates a PreferredCustomer object. However, the validation is based on the attributes of the Customer base class. The validation rules defined on the PreferredCustomer class are not applied. You can use the CreateValidator(Type) overload of the ValidatorFactory class to create a validator that is specific to a class that you provide at run time. public ValidationResults CheckObject(object obj) { if (obj != null) { Validator v = valFactory.CreateValidator(obj.GetType()); return v.Validate(obj); } else { return null; } } 'Usage Public Function ValidationResults(ByVal obj As Object) If Not obj Is Nothing Then Dim v As Validator = valFactory.CreateValidator(obj.GetType()) Return v.Validate(obj) Else Return Nothing End If End Function This example creates a validator based on the run time type of the input argument to the CheckObject method.
https://docs.microsoft.com/en-us/previous-versions/msp-n-p/ff664641%28v%3Dpandp.50%29
CC-MAIN-2019-43
en
refinedweb
Member 90 Points Dec 22, 2015 11:37 AM|SautinSoft|LINK Hi Comminy! We are happy to announce about releasing new DOCX Document .Net library! It's 100% standalone and independent .Net assembly, completely written in C#. DOCX Document .Net helps you to develop any .Net (ASP.Net, SilverLight, WPF, Console ...) application which: The library doesn't require MS Office, it even doesn't use System.Drawing. The main and single requirement is .Net 4.0 or higher. This easy C# sample shows "How to create a new DOCX document in .Net": using System; using System.IO; using SautinSoft.Document; namespace Sample { class Sample { static void Main(string[] args) { // Let's create a simple DOCX document. DocumentCore docx = new DocumentCore(); // Add new section. Section section = new Section(docx); docx.Sections.Add(section); // Let's set page size A4. section.PageSetup.PaperType = PaperType.A4; // Add paragarph docx.Content.End.Insert("Hello World!", new CharacterFormat() { Size = 25, FontColor = Color.Blue, Bold = true }); // Save DOCX to a file docx.Save(@"d:\HelloWorld.docx"); } } } Download:. Code Samples:. You are welcome with your questions and offers. Best wishes, Max 0 replies Last post Dec 22, 2015 11:37 AM by SautinSoft
https://forums.asp.net/p/2080773/6002374.aspx?DOCX+Document+Net+allows+to+create+and+parse+DOCX+documents+in+C+
CC-MAIN-2019-43
en
refinedweb
public class Synchronizer extends Object), Sample code and further information clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public Synchronizer(Display display) display- the display to create the synchronizer on. SWTException- asyncExec(java.lang.Runnable) Copyright (c) 2000, 2014 Eclipse Contributors and others. All rights reserved.Guidelines for using Eclipse APIs.
https://help.eclipse.org/luna/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/swt/widgets/Synchronizer.html
CC-MAIN-2019-43
en
refinedweb
SET 01 Set 11. T he company has a rule ------- entrance to the warehouse without official authorization. (A) prohibited (B) prohibiting (C) prohibit (D) prohibits2. Customers can call our hotline to hear about 6. Part-time staff members will receive ------- ------- of the discounts and special offers available to them. (A) so (B) such (C) ones (D) some3. A deposit must ------- to the manager in the selection submitted by the advertising department. (A) choose (B) chosen (C) to choose (D) choosing8. The chairman ------- to the business order to secure ones reservation. (A) be paid (B) be paying (C) have paid (D) to pay4. Of the five applicants for the position, four convention if his schedule had allowed for it. (A) will go (B) went (C) have gone (D) would have gone9. ------- can call our customer service center if are too inexperienced, while ------- has a bad attitude. (A) other (B) the others (C) others (D) the other5. A full health examination is mandatory for you require more information. (A) You (B) Your (C) Yours (D) Yourself10. If you wish to make an appointment with ------- the employees participating in the charity climbing expedition. (A) many (B) how (C) whom (D) all Professor Coltrane, please contact ------- by email. (A) him (B) his own (C) he (D) his Set 1 marketing department last month, she received a salary increase of 15%. (A) moves (B) has moved (C) moving (D) moved12. The door wont open ------- the password assembly line will be ------- starting Monday. (A) operational (B) operate (C) operation (D) operations17. ------- testing is required before the project has been entered correctly. (A) when (B) unless (C) in case (D) given13. ------- the interviewers were impressed can enter the production stage. (A) Additional (B) Addition (C) Additionally (D) Additions18. Modern cellular phones boast ------- Set 5 Set 6 with him, Ben was not offered the position. (A) Despite (B) Although (C) Because (D) In addition to14. All software changes must be dealt with interactive features and entertainment options. (A) every (B) many (C) a lot (D) little19. Mr. Smith found Kevins advice ------- before the ------- version is shown to investors. (A) finalize (B) finalist (C) finals (D) final15. The concert organizers insist ------- no when he began working at the company. (A) helpfully (B) help (C) helpful (D) helps20. Many companies ------- success reliesSet 10 recording equipment be brought into the venue. (A) so (B) that (C) while (D) unless on customer loyalty offer incentives and discounts to long-term customers. (A) that (B) his (C) whose (D) which consists of around 12,000 people, many of ------- are employed by the Department of Medicine. (A) which (B) whose (C) whom (D) those22. Be sure to frequently save your work ------- network features a broadband connection to the internet. (A) Before (B) Instead (C) Unlike (D) Contrary27. Some of the workers ------- discontent toward the proposed changes. (A) express (B) expresses (C) expressing (D) to express28. Each ------- must prepare a mock avoid any loss of information due to system errors. (A) for (B) to (C) so (D) when23. Motivating and encouraging workers is a presentation for his second interview. crucial part of ------- an effective manager. (A) be (B) being (C) been (D) to be24. Few businesses doubt the ------- of that involves many weeks of negotiations. advertising in the commercial market. (A) powerfully (B) powered (C) powerful (D) power25. Delivery of your items will be made ------- three days from the date of payment. (A) within (B) toward (C) before (D) among the emergency services immediately and do not attempt by any means to extinguish the blaze -------. (A) yourself (B) itself (C) himself (D) themselves 300 SET 02 Set 21. Mr. Howard is knowledgable ------- in 6. The dramatic rise in crime rates ------- been business to start his own printing company. (A) enough (B) too much (C) very (D) well2. The basement level of the factory will be linked to the rise in unemployment. (A) has (B) have (C) having (D) to have7. Some ------- information about upcoming closed for ------- during the next three months. (A) renovation (B) renovate (C) renovated (D) renovator3. ------- enough time is available, the weekly local events is listed on the web site. (A) interesting (B) interests (C) interested (D) interest8. While ------- a new firewall software, we meeting will be followed by a presentation. (A) The fact that (B) In view of (C) Providing (D) Nevertheless4. Richard Harris was instructed to search had our web site repeatedly shut down by a series of hacking attacks. (A) installed (B) installing (C) install (D) installment9. The board of directors did not accept ------- for the largest convention hall ------Bakersfield, California. (A) of (B) in (C) from (D) to5. The recent ------- are likely to benefit anyone of the changes to the health and safety regulations that were suggested by the inspector. (A) any (B) those (C) them (D) every10. ------- companies have signed an working in the health industry. (A) reformed (B) reformer (C) reforms (D) reform agreement obligating them to actively promote each others new product line. (A) Each (B) Any (C) Every (D) Both never allow defective products to pass the inspection. (A) your (B) yours (C) you (D) yourself12. ------- the chairman thinks you are suitable aspects of computing, the company started giving him more complicated assignments. (A) has mastered (B) masters (C) had mastered (D) is mastering17. ------- the seminar was free to attend, not for the public relations position, he will contact you to schedule a second interview. (A) If (B) Though (C) Whether (D) While13. During the flight, we will serve ------- many employees showed up. (A) Instead of (B) Because (C) Although (D) Due to18. People like to replace their cars every few passengers a hot meal and beverage of their choice. (A) us (B) we (C) our (D) ourselves14. Unauthorized personnel do not have access years ------- when their car is in perfect condition. (A) about (B) even (C) although (D) quite19. Each staff member is personally responsible to ------- of these floors. (A) every (B) either (C) much (D) those15. A number of local merchants ------- a variety for any ------- expenses incurred during the business trip. (A) incidental (B) incidents (C) incidence (D) incidentally20. We are considering remodeling the office Set 10 of products to the charity since it was first established five years ago. (A) donate (B) donating (C) has donated (D) have donated ------- the meeting room can be more spacious. (A) in order to (B) so that (C) because of (D) just as more efficient ------- coal or oil-based power. (A) that (B) as (C) than (D) to22. The man that I spoke ------- on the phone that many people don't consider when applying for a job. (A) factoring (B) factored (C) factor (D) to factor27. The director does not know ------- should told me to wait until I get the test results. (A) to (B) him (C) for (D) to him23. Readers of News World magazine can now be done about the new office building. (A) those (B) what (C) whether (D) there28. The design team is hoping to complete the receive daily email updates by registering ------- through the publications homepage. (A) electronic (B) electronically (C) electronics (D) electrical24. Your account information is available online ------- for the new company logo by the end of this week. (A) propose (B) proposed (C) proposal (D) proposing29. The aim of the district survey is ------- public so you can access it ------- necessary. (A) whichever (B) whatever (C) whoever (D) whenever25. The marketing department supervisor is opinion on the planned construction of a new shopping center. (A) gather (B) gathered (C) to gather (D) having gathered30. Exercise is always necessary but ------- can unsure where ------- for the annual company trip. (A) to go (B) going (C) to going (D) go to damage your health. (A) rare (B) extreme (C) excessive (D) too much SET 03 Set 31. Each of the office computers ------- checked 6. Because of the stock market fluctuation, and upgraded on a monthly basis. (A) is (B) are (C) being (D) been2. It ------- the department manager who some of ------- for Ledal corp. are seeking the help of financial experts. (A) investor (B) investors (C) the investors (D) investment7. Laboratory researchers at the Crop recommended you for promotion. (A) was (B) were (C) been (D) being3. Passengers are asked to turn off all Research Institute must take care of plant samples that need ------- at regular intervals. (A) to water (B) be watered (C) to be watered (D) to watering8. NeoSys Inc. employs 300 workers, some of electronic devices ------- takeoff and landing. (A) during (B) to (C) at (D) within4. Although all the tables at Sortinos have ------- live in the company dormitory. (A) what (B) where (C) which (D) whom9. Mr. Robinson spoke at the debate ------- the already been reserved, we can call you ------- there is a last-minute cancellation. (A) therefore (B) even if (C) in case (D) despite5. Trinity Business Tower is composed ------- audience were allowed to ask questions. (A) which (B) with which (C) during which (D) that10. Interns need to learn about the recent 20 office floors, an underground aquarium, and a revolving restaurant on the uppermost floor. (A) by (B) of (C) at (D) on10 scientific developments, ------- our technology is based. (A) which (B) what (C) that (D) on which construction of the new electronics factory will be completed by March 25. (A) go (B) goes (C) gone (D) going12. During the late 1990s, the stock market management, very few members of the companys ------- board of directors remain. (A) formal (B) forming (C) formation (D) former17. Your vehicle insurance will remain active ------- close to crashing due to the failure of many Internet-based companies. (A) coming (B) come (C) comes (D) came13. The online banking system is better ------- you continue to make regular payments each month. (A) as long as (B) in case (C) whereas (D) besides18. ------- Zantassi XQ23 color printer comes than ------- due to upgrades to account management and security. (A) once (B) never (C) not (D) ever14. Although the Langford GX Hall is an with a two-year warranty covering the cost of all repairs and replacement parts. (A) Various (B) Every (C) Several (D) Many19. Employees must immediately return to their Set 8 Set 9 acceptable conference venue, the Richmond Exhibition Center seems -------. (A) better (B) more better (C) at best (D) more best15. Investors expect that the restaurant will be workstations ------- the meeting has been finished. (A) that (B) when (C) since (D) so that20. By the time the companys presentation ------- in six months. (A) operationally (B) operational (C) operation (D) operate began, most of the investors ------- from the place. (A) are disappearing (B) will have disappeared (C) disappear (D) had disappeared 11 carry hand luggage that is larger than what is stipulated in their luggage allowance policy. (A) no (B) not (C) not to (D) do not22. Harwood Bank insists that we ------- the thirty minutes from platform four, and hourly from platform five. (A) every (B) each (C) all (D) some27. Because of the approaching tropical storm, outstanding balance of our short-term loan within 60 working days. (A) paid (B) are paying (C) will pay (D) pay23. The construction project for the new national Martins restaurant will be ------- early on Saturday. (A) close (B) closes (C) to close (D) closing28. ------- presentations and seminars, library was a long ------- uncomplicated process. (A) for (B) yet (C) not (D) and24. ------- applicants possess the skills and convention attendees will have the chance to view some of the most advanced technologies available in the market. (A) Include (B) In addition to (C) Because (D) Owing29. Smith and Gracie Associates is controlled qualifications required to successfully fulfill the role of lead designer at Info-tech Services. (A) Almost of (B) Most all (C) Almost all (D) Mostly of25. To guarantee the health and safety of factory by two attorneys, ------- established the firm more than ten years ago. (A) who (B) whom (C) that (D) whose30. Anyone interested in attending the seminar workers, Brownlow Inc. ensure that all machinery is well-maintained and completely -------. (A) relying (B) reliant (C) reliance (D) reliable12 has ------- the end of today to register. (A) ahead (B) until (C) during (D) before SET 04 Set 41. ------- already spent two months in training, 6. ------- in the department will have an Mr. Wallace was eager to begin his new job. (A) Having (B) Had (C) To have (D) Have2. Your ------- for updating company policies opportunity to apply for the marketing position. (A) Everyone (B) Whoever (C) Whomever (D) Everywhere7. After the surveys had been completed, Mr. will be reviewed by the board of directors. (A) recommendation (B) recommendable (C) recommending (D) recommend3. A performance by the Moscow State Circus Rogers gathered the forms and submitted ------- to the personnel manager. (A) that (B) them (C) they (D) these8. If the company cared about overseas ------- at the London O2 Arena. (A) holds (B) has held (C) is holding (D) is being held4. Public computers, along with photocopiers, expansion, it ------- more money on global marketing. (A) will spend (B) would spend (C) spend (D) spent9. Telewest Cables deluxe package ------- ------- on the second floor of the library. (A) is locating (B) located (C) locate (D) are located5. ------- who wish to be refunded for travel a variety of channels to cater to a wider number of viewers. (A) offering (B) offers (C) to offer (D) be offering10. Help us to maintain our high qua lity of expenses should fill out the appropriate form at reception. (A) These (B) This (C) That (D) Those service by filling out a comment card before you ------- the restaurant. (A) to leave (B) had left (C) leaving (D) leave 14 conceived and designed ------- for novice users. (A) expressing (B) expresses (C) expressly (D) expressive12. Despite Mr. Fullertons lack of supervisory economical, the location of our new branch will either be in the Mason Building ------the Sorenton Tower. (A) or (B) yet (C) and (D) also17. Commuting by subway is recommendable, experience, he ------- managed to motivate his employees to work effectively. (A) any (B) still (C) more (D) same13. There ------- is a large amount of money ------- it can get uncomfortably crowded during rush hour. (A) except that (B) wide of (C) aside from (D) long since18. Due to the rapidly deteriorating state of the stored in the main vault. (A) normally (B) normality (C) normal (D) normalcy14. Some workers have still not been informed Set 9 ------- of the plans to merge departments. (A) adequate (B) adequacy (C) adequately (D) adequateness15. You must acquire more experience ------Set 10 to inquire about vacant positions. (A) call (B) call to (C) calling (D) to call20. Sales representatives should let clients you are considered for promotion. (A) but (B) before (C) enough (D) afterward ------- about the terms of the contract. (A) know (B) to know (C) knowing (D) and know 15 ------- its final destination at approximately 2 PM. (A) arrives (B) comes (C) reaches (D) gets22. The renowned architect Frank Gehry has failed to meet our clients -------. (A) require (B) requiring (C) required (D) requirements27. Insufficient ------- via advertising and been asked to design a building ------- is both stylish and practical. (A) that (B) what (C) where (D) who23. One of our waiters will let you know ------- promotion is a major reason why certain products fail in the market. (A) expose (B) exposing (C) exposure (D) exposed28. Over two thousand Zanussi 362AW washing your table is ready. (A) what (B) who (C) which (D) when24. The engineer needed ------- the broken air machines have been sold ------- the past eight weeks. (A) over (B) between (C) beyond (D) by29. Mr. Grayson would like to talk to you ------- conditioning unit. (A) replacing (B) to replace (C) having replaced (D) replaced25. Ms. Phillips enjoyed ------- with the the errors found in your financial report. (A) regard (B) regards (C) regarding (D) regardless30. Both the leather chair and the books on the overseas investors at the annual company stockholders meeting. (A) talk (B) talking (C) talked (D) to talked shelf ------- to the previous occupant. (A) belong (B) belongs (C) belonging (D) to belong 16 SET 05 Set 51. Once -------, the three departments will be 6. Mrs. Halliday and I might struggle to agree under the supervision of only one manager. (A) merging (B) merged (C) merge (D) to merge2. The rental price varies, ------- on the car on the issue of budget restrictions as her views are completely opposite to -------. (A) my (B) me (C) mine (D) myself7. The conference can only be led by ------- with the appropriate credentials. (A) anyone (B) someone (C) a one (D) one of8. ------- of the project proposals is likely to activities to the control center. (A) asking (B) asks (C) are asked (D) asked for4. Belenux Aeronautics researchers ------- to attract the interest of foreign investors. (A) Much (B) Neither (C) Both (D) Some9. Had Mr. Osborne ------- the 7 AM train, unveil their latest aircraft engine at the 2011 Geneva Aerospace Convention. (A) plan (B) are planned (C) planning (D) have plan5. ------- is a complex electronic security he might have been on time for the weekly department meeting. (A) catch (B) been caught (C) caught (D) being caught10. Please make sure that the items you buy system which can only be locked by using the appropriate key and password. (A) Those (B) Them (C) This (D) They from the supermarket ------- not past their expiration date. (A) are (B) is (C) been (D) being 18 number of faults in the new system. (A) use (B) using (C) used (D) will use12. Many of the car models that they ------- authorized to enter the building at night. (A) Almost (B) Only (C) Enough (D) Neither17. Remember to pack appropriate clothing for Set 2 Set 3 in the past decade will be shown at the forthcoming automobile convention. (A) produce (B) will produce (C) produced (D) produces13. As of next week, Sarah ------- as a health the research trip to Brazil as it will probably be raining ------- for the duration of your stay. (A) heaviness (B) heavies (C) heavy (D) heavily18. Mr. Devor has ------- competent personnel care official in Africa for approximately 18 months. (A) to work (B) worked (C) had worked (D) will have worked14. Access to the building C has been restricted Set 7 that he seldom has to supervise them at work. (A) so (B) such (C) too (D) much19. An ------- will be present to assist us in ------- it was declared unsafe by health inspectors. (A) if (B) until (C) about (D) since15. Employees cant leave early ------- the discussions with the Japanese CEO. (A) interpret (B) interpreting (C) interpretation (D) interpreter20. ------- seats are available but I can place you formal authorization of their department manager. (A) into (B) until (C) among (D) without on the waiting list. (A) Any (B) Not (C) None (D) No 19 are becoming less -------. (A) frequented (B) frequent (C) frequently (D) frequency22. The project manager views Mr. Walters workshop this weekend should notify the personnel office by this afternoon. (A) Who (B) That (C) Whoever (D) Anyone27. Cell phone service is available ------- you ------- an integral member of the development team. (A) upon (B) to (C) as (D) with23. Please ensure that you do thorough travel in South Korea. (A) whoever (B) wherever (C) whatever (D) whichever28. All workers entering the construction site are research on the issue beforehand so that you can ------- effectively in the debate. (A) participated (B) participant (C) participate (D) participating24. Although Matleys Inc. has had to close required to ------- hardhats and protective goggles. (A) wore (B) wearing (C) worn (D) wear29. Mr. Williams told the receptionist ------- the several of its overseas offices, ------- still remain three branches in the U.S. (A) it (B) he (C) they (D) there25. The bank loan, ------- is interest free, must conference schedule to the Chicago office. (A) fax (B) will fax (C) was faxing (D) to fax30. Mr. Gibson began ------- a new economy be repaid in regular installments. (A) what (B) which (C) who (D) when related book as soon as he retired from the company last month. (A) writing (B) written (C) write (D) writes 20 SET 06 Set 61. ------- assigned to the car insurance team, 6. It is essential that a consumer returning Ms. Fields hasnt won any contracts. (A) Since (B) If (C) When (D) While2. ------- between high pay and quality of life, damaged or faulty items ------- refunded in full. (A) be (B) are (C) have (D) has7. Laura Reid has been studying economics for Jack sought advice from his friends. (A) Undeciding (B) Undecided (C) Undecide (D) To undecide 3. As project leader, Phil Royle must ensure four years so that she can start ------- own business after graduation. (A) she (B) her (C) herself (D) hers8. The engineer ------- to repair the elevator that ------- of the team members performs their tasks to the best of his or her ability. (A) every (B) all (C) each (D) much4. Every year, the Jewell Corporation ------- since early this morning and is expected to be finished by 5 PM. (A) having tried (B) had tried (C) has been trying (D) having been trying9. ------- of our senior sales representatives a percentage of its annual profits to community projects and childrens charities. (A) donate (B) to donate (C) donates (D) donating5. ------- of the trade union officials were have been temporarily transferred to Canada to assist with the training of new employees. (A) Some (B) Any (C) Every (D) Each10. ------- the proposal is due to be submitted in satisfied with the outcome of the meeting with the board of directors. (A) Anyone (B) Nobody (C) None (D) Almost two hours, the project leader insisted that a large portion be rewritten and improved. (A) Even though (B) In summary (C) As if (D) Accordingly 22 becomes available ------- delivery. (A) of (B) for (C) on (D) with12. The Association of Chartered Surveyors ------- the cause of Mr. Bradys insomnia. (A) probable (B) probability (C) probably (D) probabilities17. Mr. Hargreaves was forced to leave the is an extensive organization with ------members throughout Europe. (A) thousand (B) thousands (C) thousand of (D) thousands of13. There are ------- miscalculations in your medical conference earlier than expected because he was feeling -------. (A) sick (B) sicked (C) sicking (D) sickening18. The pharmaceutical laboratory ------- we financial report, so it will need to be amended and resubmitted. (A) much (B) little (C) a few (D) less14. Extensive work experience has ------- merit work is a modern facility equipped with the most technologically advanced equipment. (A) at which (B) inside (C) where at (D) within19. Due to a ten percent increase in monthly rent, residents of the Regent Tower apartment block opted not ------- the revised lease agreement. (A) sign (B) to signing (C) signing to (D) to sign20. The management will consider the fully understand the nature of the job. (A) employee (B) employment (C) employs (D) employees implementation of an employee incentive program, ------- this high standard of work continues. (A) so as (B) provided that (C) depending on (D) rather than23 changes be made to the project so that it costs -------. (A) less (B) lesser (C) the least (D) little22. Because the entire Japanese ------- will stay old and worn, giving our company a very unprofessional appearance. (A) is (B) are (C) being (D) been27. The public survey revealed that ------- for a week, extra buses have been rented for their use. (A) delegated (B) delegate (C) delegation (D) delegatory23. Mr. Bennett will resign from the company people tend to ignore advertising and stay loyal to their favorite brands. (A) the most (B) almost (C) most (D) most of28. A regular health check is ------- for workers ------- February 18, so his replacement must be chosen before then. (A) in (B) at (C) on (D) for24. After a series of successful interviews, at Eklate Paint due to their continuous exposure to harmful chemicals. (A) necessarily (B) necessity (C) necessary (D) necessitate29. Mr. King informed the committee that the Mr. Fincher will be hired ------- personnel manager of Asda Superstores. (A) for (B) to (C) as (D) in25. One of the reports you submitted ------- representatives of Gate Enterprises had just ------- at OHare International Airport. (A) reached (B) arrived (C) got (D) come30. The project team consisted of four incomplete and requires your signature. (A) is (B) are (C) being (D) been members, all of ------- were presented with an award for their outstanding contributions to science. (A) whom (B) whose (C) who (D) whoever 24 SET 07 Set 71. ------- with the possibility of forced retirement, 6. Unfortunately, many of the employees he must pass todays interview with the management. (A) Facing (C) Face (B) Faced (D) Faces at Windham Gregg ------- the company directors expectations. (A) have not met (B) have not been met (C) having been met (D) had been met7. The companys logo has been the same it did not cause as much damage as originally -------. (A) predicting (B) predicted (C) predict (D) prediction3. Regardless of ------- short Mr. Fords ------- almost 15 years, but it will be radically redesigned next month. (A) since (C) by (B) for (D) about presentation was, it contained many interesting points. (A) what (C) which (B) how (D) where Lundgren will talk about the development process ------- Mr. Seagal demonstrates the products functions. (A) whoever (B) while (C) meantime (D) during9. The machinery was considered hazardous tickets have been reserved and he can pick ------- up at the box office before the performance. (A) us (B) them (C) him (D) you5. If safety guidelines had been implemented to ------- it failed to meet industry safety standards. (A) only if (B) even though (C) in that (D) just10. Situated on the fortieth floor of the Kyushu ensure the proper use of factory machinery, any possibility of employee injuries would probably -------. (A) eliminate (B) be eliminating (C) have been eliminated (D) had been eliminated26 Trade Center, Mr. Okawas office has a rather ------- view of the city. (A) impress (B) impressing (C) impressed (D) impressive requires ------- amendments before it can be submitted to the board. (A) much (B) a large (C) several (D) a little12. ------- Sylar Corporation will unveil their new for Axis Industries stated that the company will not ------- accountants their monthly sales figures. (A) sign (B) enter (C) give (D) want17. Mr. Davis would like to know ------- the report high definition flat-screen monitor at the London Electronics Convention next week. (A) Presumptive (B) Presumably (C) Presumptuous (D) Presuming13. Any employee seeking reimbursement of was not submitted by the 5 PM deadline. (A) why (B) what (C) where (D) which18. Goodlife Supermarkets only sell ------- fruitSet 7 Set 5 Set 6 corporate travel expenses must submit their receipts no ------- than on the 26th of each month. (A) later (B) less (C) after (D) more14. Ms. Givens types ------- of all the is being cultivated or grown naturally in the region. (A) all (B) which (C) something (D) whatever19. The aim of the Mexico-US Technological receptionists at Edwards & Grieves Associates. (A) fastest (B) the fastest (C) faster than (D) ever fast15. Despite their hard work, the advertising Alliance is ------- collaborative research and development relationships between the two countries. (A) foster (B) fosters (C) to foster (D) fostered20. Despite his flight delay, Mr. Carmichael hoped department had expected the board of directors to ------- their new promotional campaign. (A) object (B) object to (C) objecting (D) objective ------- on time for the awards presentation. (A) that (B) to be (C) being (D) to being 27 feedback indicated a high level of dissatisfaction with the companys services. (A) a (B) the (C) when (D) if22. Tetrosyl Ltd. is ------- manufacturer of car the product ------- extensive advertising and promotion. (A) are (B) is (C) were (D) be27. The software design manager, Julien Temple, accessories in Europe, and aims to capitalize on its success by expanding into Canada. (A) a larger (B) the largest (C) larger (D) a largest23. Due to the recent delays, some of ------- are was delighted to hear his name ------- at the annual awards ceremony. (A) calling (B) called (C) calls (D) to call28. Forbes Electric Inc. is unlikely to ------- growing frustrated with the projects slow progress. (A) investor (B) investors (C) the investors (D) investment24. To get to the conference center from the business with Benelux Systems again after the recent breaches of contract. (A) do (B) doing (C) does (D) done29. All applicants should ------- sure to include all hotel, take the South River Expressway ------- Haversham Junction, turn left, and continue for 2 kilometers. (A) within (B) since (C) during (D) to25. ------- the new education policy, students required documents with their employment application. (A) make (B) making (C) made (D) makes30. Skyweb Systems is developing a robot which from low-income families will receive government funding. (A) Aside (B) Behind (C) Against (D) Under28 responds to voice commands, and that can ------- perform simple household tasks. (A) ever (B) either (C) even (D) quite SET 08 Set 81. By referring to your home improvement manual, 6. ------- of the employees in the design you can repair your pipe leaks on -------. (A) your own (B) yourself (C) you (D) yours2. Due to the poor reviews of our last three team were given a day off since Cress Inc. decided to advance the deadline to Monday. (A) None (B) Anybody (C) Whoever (D) Something7. Unlike -------- employees, Mr. Grant is products, our research department expects profits ------- reduced by 50% by this time next year. (A) have (B) have been (C) will have been (D) will have to be3. Quardtechs new fast-charging batteries are comfortable and productive even when working unsupervised. (A) much (B) many (C) more (D) the most8. Children under the age of 10 must be ------- able to fully recharge in only five minutes, while other batteries must be left in the charger for a ------- half-hour. (A) well (C) better (B) good (D) fine by an adult to be allowed into the swimming pool area. (A) will accompany (B) accompanying (C) to accompany (D) accompanied9. This information session is intended for poor year, its newest product is one of the ------- to have seen an increase in sales since its release. (A) any (C) most (B) few (D) little those ------- wish to learn more about their health benefits packages. (A) who (C) what (B) whose (D) their recorder to all digital cable customers, ------subscribers to skip commercials and pause television shows. (A) allows (C) allow (B) allowing (D) will allow includes a CD of the author reading the story. (A) Every (B) Few (C) Whole (D) Many 30 company which specializes in clothes made ------- entirely from recycled materials. (A) more (B) almost (C) near (D) over12. Translation services are available ------- you masters degree in business administration and at least 2 years of experience in a managerial position. (A) possess (B) possessed (C) possessing (D) have possessed17. ------- youre worried about limited capacity, see the yellow TS sticker. (A) whoever (B) wherever (C) whatever (D) whichever13. All employees must attend the training you should book well in advance to participate in the workshop. (A) If (C) As (B) While (D) Before session to learn ------- to fill out the new expense reports. (A) during (B) about (C) how (D) whom14. Employees working with machinery in the the streets, the employees also received a lot of positive feedback on their new product. (A) However (B) While (C) If (D) Because19. The monthly performance-based review plant need to take ------- recommended precautions to prevent injuries to themselves and others. (A) most of (B) all (C) much (D) almost15. ------- driver runs the risk of getting fined if ------- to criticize our employees, but to assist them in improving their job efficiency and effectiveness. (A) is designing (B) is not designed (C) is design (D) is not designing20. A spokesman for McDowell Industries found driving without wearing a safety belt. (A) All (B) Every (C) Some (D) Most announced that Pete OToole ------- from his position as vice president at the beginning of next month. (A) retire (B) retiring (C) be retired (D) will retire31 advantage of our trial membership offer should contact Mr. Hobbs at reception. (A) do (C) take (B) get (D) make ------- we can open our next coffee shop branch. (A) what (B) which (C) where (D) whose27. The Houston Scientific Research Center than expected, the chairman is unlikely to approve the proposed increase of employees basic wages. (A) As if (B) Given that (C) Provided that (D) Except that23. ------- has a product evoked such will ------- be concluding its research into alternative fuels and renewable energy sources. (A) soon (B) frequently (C) sparingly (D) sometimes28. The stock market crash of 2008 was the excitement and anticipation in the computer game industry. (A) Nearly (B) So (C) Seldom (D) Ever24. Readers of Global Economist receive a ------- worst global financial crisis since the crash of 1929. (A) single (B) first (C) only (D) whole29. At the Millenium Hotel, we believe ------- complimentary 2GB USB flash drive when they subscribe to the magazine for ------$50 a year. (A) just (B) little (C) mere (D) low25. Although his appointment as team leader is more valuable than the comfort and satisfaction of our guests. (A) none (B) nobody (C) nothing (D) no one30. The international biotechnology conference surprised some of his colleagues, Mr. Manos is ------- experienced in project management. (A) too (B) very (C) such (D) far32 ------- next month. (A) is taken place (B) take place (C) will be taken place (D) will take place SET 09 Set 91. Mick McCarthy, who ------- the Global 6. ------- the time Erling Industries was Friend Foundation, will travel to Ethiopia next month to meet with the charitys African representatives. (A) founded (B) foundation (C) founding (D) founds2. The layout of the building can often be established, it operated as a paper company from a small office in the countryside. (A) Of (C) At (B) Up (D) On they will be promptly escorted ------- the company limousine parked outside. (A) to (C) on (B) in (D) from ------- to new employees and visitors. (A) confusing (B) confusion (C) confuse (D) confused3. Monthly Economist has hired two additional software, Mr. Hume consulted the electronic user manual. (A) Not to know (B) Not knowing (C) Didnt know (D) Dont knowing9. ------- that Romulan Inc.s latest DVD proofreaders to ------- for errors in each article prior to publication. (A) find (C) care (B) look (D) pick up with a variety of services including high-speed connections and unlimited downloads. (A) offers (B) contributes (C) provides (D) extends5. The study indicates that the downturn in player has only been on the market for two months, it has already sold an impressive amount of units. (A) Considered (B) Considering (C) Consider (D) Considerate10. To establish a ------- bond with their main sales is linked ------- an increase in the number of similar products on the market. (A) to (C) at (B) on (D) out suppliers, Java Blend Co. agreed to repair and maintain all equipment used by the coffee farmers. (A) last (C) lasted (B) lasting (D) lasts 34 remind all employees to keep their car doors and windows -------. (A) locking (B) locked (C) locks (D) locker12. The battery life of the Orion S750 digital to those of the other candidates and his experience in advertising is ------- as extensive. (A) just (C) while (B) as well (D) very camera is far inferior to ------- of any other comparably priced model currently on the market. (A) those (C) these (B) that (D) this will be held this weekend, so we encourage employees to register for ------- sessions are of interest to them. (A) these (B) some (C) whichever (D) whose18. The assembly instructions for our dinning new product surprised everyone by proving to be ------- of a success. (A) anything (B) everything (C) nothing (D) something14. The director admitted that the construction set are ------- too complicated for most consumers. (A) far (B) well (C) quite (D) pretty19. The information technology workshop is plan for Byron Bridge was financially sound, but there were still a number of ------factors to consider. (A) the others (B) other (C) another (D) others15. Construction of the bridge would have open to ------- wishes to improve their computer skills. (A) people (B) who (C) those (D) whoever20. The client ------- you prepared the income proceeded more quickly and efficiently ------- having to endure the poor weather conditions. (A) so as (C) in that (B) without (D) as to tax returns is scheduled to arrive at 4:30PM. (A) whose (B) which (C) for whom (D) who 35 ------- to be on the decline as major cities enforce stricter policing measures. (A) seems (B) chances (C) occurs (D) emerges22. Paul Rothko has hopes of ------- an research process, the development of the new toothpaste formula is progressing -------. (A) nice (B) nicely (C) nicer (D) nicest27. Blackwell Book Services specializes in internationally recognized expert in high performance computing and hardware analysis. (A) becoming (B) to become (C) in becoming (D) becomes23. ------- funds appropriately is an integral part finding books that are rare or out of ------and publishing them in attractive new editions. (A) prints (B) printing (C) print (D) printed28. Recent economic reports argue that ------- of good project management. (A) Spends (B) Spend (C) Spending (D) Spendable24. Centurion Real Estate currently has a variety competition in the business market may be the result of a decrease in the number of competitors. (A) fewer (C) a few (B) least (D) less of properties near Eden Lake and some deluxe cottages ------- the actual waterfront. (A) among (B) along (C) into (D) under25. The statistics taken from the recent at Bellingham Trade Center is $10 ------hour. (A) for (C) by (B) per (D) with consumer poll will be analysed and compared with last years -------. (A) founded (B) finds (C) found (D) findings ensure that the citys famous Wallace Monument is kept in good ------- for years to come. (A) condition (B) conditions (C) conditioning (D) conditional 36 SET 10 Set 101. After receiving estimates from all of 6. Pair Cosmetics began an innovative the companies, we will ------- which construction company to use for the building of the shopping complex. (A) determine (B) determines (C) determining (D) determination2. Radley & Associates regards online advertising ------- on their Web site which helped increase their market share. (A) campaigns (B) campaigning (C) to campaign (D) campaign7. ------- had Ms. Trent turned on the advertising as the key to ------- its sales. (A) expansively (B) expand (C) expanse (D) expanding3. Herschel Overton, who ------- Overton computer on her desk than the electricity went out. (A) Sooner (C) Any sooner (B) The sooner (D) No sooner you have to pay an additional fee to have your application regarded as a priority. (A) To be expedited (B) To expedite (C) Will expedite (D) Expedite9. Mr. Glen requested to be released from the Industries sixty-five years ago, will be giving the keynote speech at the Chemical Innovation Seminar. (A) founded (B) foundation (C) founding (D) founds4. While Im out of the office, Id like you ------- contract negotiations because he felt that it had become too ------- to deal with the clients lawyers. (A) frustrating (C) frustrate (B) frustration (D) frustrated any calls to my mobile phone (A) forward (C) forwarding (B) forwards (D) to forward 10. Blue Cloud Airlines ------- a new payment 5. In recognition of his ground-breaking work as a member of the World Climate Program, a Goldman Environmental Prize ------- to Prof. Svensson. (A) awarded (C) was awarded (B) award (D) awarding system which allows customers to pay for flight tickets directly from their bank accounts. (A) introduce (B) was introduced (C) introducing (D) has introduced 38 basic benefit package; however, we ------offer specialized plans to employees with families or special conditions. (A) as well (C) also (B) not only (D) either organized by ------- professionals in the education department. (A) dedicate (B) dedication (C) dedicating (D) dedicated17. Once you ------- the employee training three ------- years, significantly reducing its market share. (A) next (B) consecutive (C) following (D) constant13. The construction supervisor asked that all Set 4 course, you will be assigned to one of the supervisors who oversee the sales team. (A) will complete (B) are completing (C) having completed (D) have completed18. All employees have been ------- that they company tools and equipment ------- at the end of each day. (A) be returned (B) to return (C) returns (D) returning14. One of the advantages of Hyde Mobiles make sure all files are backed up in case of a system failure. (A) advice (B) advised (C) advisedly (D) advisable19. The Museum of Scientific History will be newest mobile phone is that ------- it takes no longer than 10 minutes with its newly developed battery. (A) recharge (B) recharger (C) recharged (D) recharging15. Some economic analysts are saying that expanded ------- additional rest areas. (A) for inclusion (B) with included (C) to include (D) in including20. ------- us of equipment malfunctions gave us investors should sell most of their stocks before the ------- recession occurs. (A) predict (B) predicting (C) predicted (D) prediction time to ensure that all machinery in the plant was running properly. (A) You warned (B) You warn (C) Your warning (D) You are warning 39 Manufacturing may have to increase the number of staff in at its plant. (A) Considering (B) Meanwhile (C) Thus (D) Against22. Below is a list of the authors who ------- for Drug store location before it ------- at the end of the month. (A) expired (B) expires (C) expiring (D) expiration27. Mr. Gosford accessed the company the Nobel Prize in literature this year. (A) have nominated (B) nominated (C) will be nominating (D) have been nominated23. Videos that are rented from Hitz Movies computer system to ------- for Mr. Smiths contract details in the personnel database. (A) looking (B) look (C) had looked (D) be looking28. It is essential for all maintenance employees should be returned to the store ------- to avoid any late fees. (A) prompt (B) promptly (C) prompted (D) prompting24. When Greg didnt come to work the day to carefully ------- all of the machines components when doing repairs. (A) checks (B) check (C) checking (D) checked29. How far ------- businesses are located from after getting into an argument with the managing director, everyone figured that he -------. (A) fired (B) had fired (C) had been fired (D) will have been fired25. All English instructors at Halls Language one another can determine how intense their competition is. (A) above (B) between (C) among (D) apart30. ------- remains a final draft to be edited Institute have to go through one month of comprehensive ------- training before they start teaching. (A) formal (B) formation (C) formed (D) forming40 before the manuscript is published. (A) It (B) He (C) They (D) There
https://fr.scribd.com/document/176234871/%ED%95%84%EC%88%98%EB%AC%B8%EB%B2%95300%EC%A0%9C-%EC%A0%84%EC%B2%B4
CC-MAIN-2019-43
en
refinedweb
import "go.chromium.org/luci/appengine/gaeauth/server/internal/authdbimpl" Package authdbimpl implements datastore-based storage and update of AuthDB snapshots used for authorization decisions by server/auth/*. It uses server/auth/service to communicate with auth_service to fetch AuthDB snapshots and subscribe to PubSub notifications. It always uses default datastore namespace for storage, and thus auth groups are global to the service. authdb.go doc.go handlers.go helpers.go metrics.go ConfigureAuthService makes initial fetch of AuthDB snapshot from the auth service and sets up PubSub subscription. `baseURL` is root URL of currently running service, will be used to derive PubSub push endpoint URL. If `authServiceURL` is blank, disables the fetching. GetAuthDBSnapshot fetches, inflates and deserializes AuthDB snapshot. func InstallHandlers(r *router.Router, base router.MiddlewareChain) InstallHandlers installs PubSub related HTTP handlers. type Snapshot struct { ID string `gae:"$id"` // AuthDBDeflated is zlib-compressed serialized AuthDB protobuf message. AuthDBDeflated []byte `gae:",noindex"` CreatedAt time.Time // when it was created on Auth service FetchedAt time.Time // when it was fetched and put into the datastore // contains filtered or unexported fields } Snapshot is serialized deflated AuthDB blob with some minimal metadata. Root entity. Immutable. Key has the form "v1,<AuthServiceURL>,<Revision>", it's generated by SnapshotInfo.GetSnapshotID(). It is globally unique version identifier, since it includes URL of an auth service. AuthServiceURL should be not very long (~< 250 chars) for this too work. Currently does not get garbage collected. type SnapshotInfo struct { AuthServiceURL string `gae:",noindex"` Rev int64 `gae:",noindex"` // contains filtered or unexported fields } SnapshotInfo identifies some concrete AuthDB snapshot. Singleton entity. Serves as a pointer to a blob with corresponding AuthDB proto message (stored in separate Snapshot entity). func GetLatestSnapshotInfo(ctx context.Context) (*SnapshotInfo, error) GetLatestSnapshotInfo fetches SnapshotInfo singleton entity. If no such entity is stored, returns (nil, nil). func (si *SnapshotInfo) GetSnapshotID() string GetSnapshotID returns datastore ID of the corresponding Snapshot entity. Package authdbimpl imports 21 packages (graph) and is imported by 2 packages. Updated 2019-10-14. Refresh now. Tools for package owners.
https://godoc.org/go.chromium.org/luci/appengine/gaeauth/server/internal/authdbimpl
CC-MAIN-2019-43
en
refinedweb
On Fri, Oct 27, 2000 at 03:00:31PM +0000, Petr Vandrovec wrote:> On 27 Oct 00 at 0:16, Jeff V. Merkey wrote:> > I noticed NCPFS is flagging all the files on a native NetWare volume as> > executable and chmod -x does not work, even if the NetWare server has > > the NFS namespace loaded. I looked at you code in more detail, and I > > did not see support their for the NFS/Unix namespace. > > > Is this in a newer version, or has it not been implemented yet? I was> > testing with MARS and Native NetWare this evening and saw this. If the > > NFS namespace is loaded, you should be able to get to it and access and > > set all these bits in the file system namespace directory records.> > > > Do you need any info from me to get this working, or is there another> > version where I can get this for Ute-Linux?> > Hi Jeff,> ncpfs does not support NFS fields, as access through namespace functions> is hopelessly broken (modify ns specific info has swapped some bits> in mask which data update and which not), and it also adds some (100%)> overhead on directory/inode lookups, as you must ask twice - first for> non-NFS info, and second for NFS specific...> > There exists patch from Ben Harris <bjh21@cus.cam.ac.uk>, which adds> this feature to 2.2.16 kernel and 2.2.0.17 ncpfs. You can download> it from{1,2}.pat. ncp1.pat> is kernel patch (including email headers; cut them if applying), ncp2.pat> is patch for 2.2.0.17 ncpfs userspace - it adds mount option "nfsextras".> (I apologize to Ben - I promised to integrate it into ncpfs, and into> 2.4 kernel, but...)> > Except that, you can make all files non-executable by using> "filemode=0644" mount option. Or you can use "extras,symlinks", in which> case (NFS namespace incompatible) hidden/shareable/transactional attributes> are used to signal executability of file...> > If you have some document which describes what each NFS specific field > does - currently ncpfs names them "name", "mode", "gid", "nlinks", "rdev",> "link", "created", "uid", "acsflag" and "myflag" - if I remember correctly,> it is how Unixware 2.0 nuc.h names them. And I did not found any information> about layout of NFS huge info, which is used for hardlinks :-(> > Also, as NCP 87,8 kills almost every NW server I know, if used namespace> is NFS, I'm a bit sceptic about usability of NCP 87,* on NFS namespace.> > In 1998 and 1999 I tried to ask Novell for documentation of NCP 95,*> (Netware Unix Client), but it was refused and ignored, so... here we are.> Best regards,> Petr Vandrovec> vandrove@vc.cvut.czPetr,I've got the info you need including the layout of the NFS namspace records. For a start, grab NWFS and look at the NFS structure in NWDIR.H. The fields you have to provide are there. There are some funky ones in this structure. You are correct regarding the NCP extensions to support this. They are about 12 years old, BTW and are not even supported any longer (no one has worked on this at Novell since about 1994). I will put together a complete description today (will take a couple of hours) and post to the list on the implementation of of the NFSnamespace over the wire. Hardlinks use a root DirNo handle that must point back to the DOS namespace record that owns the FAT chainand this is probably the only nasty thing to deal with here.I'll get started immediately.:-)Jeff> > P.S.: Jeff, if you want, there is ncpfs/MaRS/LinWare specific list,> linware@sh.cvut.cz. Subscribe: "subscribe linware" to "listserv@sh.cvut.cz".-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgPlease read the FAQ at
https://lkml.org/lkml/2000/10/27/135
CC-MAIN-2019-43
en
refinedweb
So I have a working patch system using the Mac's terminal with the following commands(I'm adding this because I have seen a lot of similar questions around): diff -Naur oldfile newfile > patch.txt // this will make a 'patch' in the execution directory Then to apply the patch: patch -p0 < patch.txt I have verified this working. It works like a charm. However now for my question. I want to do the same type of thing on Windows. I haven't been able to figure this one out. I tried using UnixUtils and it just breaks my files(Not a Valid windows application). I also looked at WinMerge but haven't figured out how it works. Is there anything that I can do to make a patch file that people could download and apply? Any help on this one would be great. I am looking for something pretty simple that I can manage. I would like to figure out any information on this subject. I am already aware of the Crafty system and Hegza Patcher. Update 12/27/2016: This still has a few bugs but you can start looking into this system as well. This is a repo of many scripts that I have been working on an planning to release to the public. Far from complete but part of this collection is a Patching System (not done): and a bit of documentation I put together for that patching library: I have put this aside for now, but if there is still interest I will start working on this again. I would look into C# libraries for handling textual differences (like google-diff-match-patch or diffplex) to write something built into your game for handling the patch files you distribute. I doubt that you really want your users to have to go to their terminal or other application to manually patch your game files. Thanks for the reply. I haven't heard of any of these so I will definite have a look at these. Also I have an xcode application that I made that does the patching for me on the mac. While it is executing terminal commands it isn't visible to the user. I will be doing something similar, I hope, with some of these links you provided. I will read up on these to get a better idea what is available out there. I still haven't found anything that can really do the job here. :( unfortunatly google-diff & diffplex is for text files only, that really isn't going to work for a binary diff need. Is it possible to use Patch for Windows? The binary download comes with a bunch of folders but patch.exe in the bin/ subdirectory seems to work fine alone. Could just throw it into StreamingAssets and run it from there on Windows, and it would be nice to have the same workflow on Mac and Windows. I will look into this one. I also found this: I honestly have no idea how good it is. Maybe it will help me get started as well. I really am interested in Patch for Windows though that sounds interesting. Answer by wesleywh · Sep 08, 2015 at 09:18 AM Alright Folks I have done it! I have made a patch system that will work for me. This goes way beyond just making a patch file. You are more than welcome to use my code. Background: I store all of my raw files on Google Drive (15GB free file hosting). This will essentially work with any server or file hosting out there. Any spaces are considered invalid in file names for url usage. So I had to format the text that I read from files to make sure it met valid character criteria. You can't edit files that are currently open, that's a security measure on all operating systems. So I had to create a separate patching system. First off, right when the user opens my game I have it look in my Google Drive for a text file called "version.txt". This simply has the version in it like "1.0.1". I connect to my google drive, read the file, remove all white spaces, change it to a number, and finally check to make sure that number is greater than the current version number of the game running. It does this extremely quickly (normally less than a second but I added wait times for visuals). To make a file on google drive downloadable for everyone you have to make it public, there is a lot of info about how to do this online -- very easy. Second you have to make a link out of the file like the following:<Google Drive Folder>/<filename> mine looks like this: Simply copy and paste the link into a browser to make sure it worked (It should start to immediately download). Next, if it recognizes that version.txt version number is greater than the game version number than I notify the user that there is an update and give them the choice of whether or not to download it. If they accept to download it then I will close their game and open the patcher system. The patcher system is much more complicated and is where all the magic takes place. It opens and reads a file called "patch.txt", it has the diff of the files. How do I get a diff of my files? This is where my code could be improved but what I did wasn't code it at all. I downloaded a free program for windows called "WinMerge". In Winmerge you can compair two folders and it will tell you everything that is different between the two folders. I simply copied that list of file names it produced into the patch.txt file seperate by enter/return spaces. On a Mac there is a "diff" command that you can run and make it spit out to a text file. The patch.txt file simply will list the files that need to be downloaded for that patch. Now if a user gets very behind on patches I don't want to have to make them download the same file over and over to work through the patches to be up to date. So I made it so it will work by downloading all the files for all patches up to the current one, all at once ;). My patch.txt file looks like the following: patch 1.0.0 level0 level2 level3 patch 1.0.1 level1 level2 leve3 leve4 So what it does is if you are on v0.0.9 you will need to download files: level0, level1, level2, level3, and level4. I makes sure that you don't download level2 or level3 twice. Cool eh? With the files downloaded it will find the data folder of my game, called "The Dark Tower", and replace the files there with the new ones. It does this relatively so the patcher and "The Dark Tower" game can be anywhere on the computer and it will work. However, the patcher and the dark tower game need to be in the same folder in order for it to work. It's not sophisticated enough to scan your computer for the .exe file. Also this needs to work on both Mac and Windows and they have a different folder structure. Have no fear! I have taken care of that as well, The Windows is fully tested and working but the Mac version isn't. With that extensive explanation here is the code to check if there is an update (The Dark Tower Game.exe): var buildVersion:String = "1.0.0"; private var updateDone:boolean = false; private var downloadingFiles:boolean = false; private var savePath:String = ""; private var originalMsg : String = ""; private var download:WWW; private var patch:String = ""; private var rawDataFolder = ""; private var rawExc = ""; Start(){ checkForUpdates(); } function Update(){ if(downloadingFiles){ updateMsg = originalMsg+"\n Download Progress: "+(download.progress * 100).ToString("#00.00")+"%\n"; } } //-------------------------FOR PATCHING ----------------------------------------------------- updateVersion = (patch).Trim(); if (updateVersion ==?\n\nThis will close this program \nand \n will launch the patching program."; showUpdateButton = true; } } //---------------------------------------------------------------- function applyUpdate(){ updateMsg = "\n\n\nIdentifying Platform"; if(Application.platform == RuntimePlatform.OSXPlayer || Application.platform == RuntimePlatform.OSXEditor){ Application.OpenURL(Application.dataPath+"/../../TheDarkTowerPatcher.app"); } else if(Application.platform == RuntimePlatform.WindowsPlayer || Application.platform == RuntimePlatform.WindowsEditor){ Application.OpenURL(Application.dataPath+"/../TheDarkTowerPatcher.exe"); } Application.Quit(); } GUI.skin = myskin; GUI.skin.font = font; GUI.Label(Rect(Screen.width-100, Screen.height-80, 100, 80),"<size=20>"+guiTextVersion+"</size>"); if(updating){ if(backgroundImage) GUI.DrawTexture(Rect(0,0,Screen.width,Screen.height), backgroundImage); GUI.Box(Rect(Screen.width*0.5f-250,Screen.height*0.5f-250,500,300), "<size=25>"+updateMsg+"</size>"); if(showUpdateButton == true){ if(GUI.Button(Rect(Screen.width*0.5f-100,Screen.height*0.5f+82,200,100),"Update")){ GetComponent.<AudioSource>().clip = MouseClickSound; GetComponent.<AudioSource>().Play(); showUpdateButton = false; applyUpdate(); } if(GUI.Button(Rect(Screen.width*0.5f-100,Screen.height*0.5f+192,200,100),"Cancle")){ GetComponent.<AudioSource>().clip = MouseClickSound; GetComponent.<AudioSource>().Play(); updating = false; showGUI = true; } } if(errorOccured == true || updateDone == true){ if(GUI.Button(Rect(Screen.width*0.5f-100,Screen.height*0.5f+192,200,100),"Okay")){ GetComponent.<AudioSource>().clip = MouseClickSound; GetComponent.<AudioSource>().Play(); updating = false; showGUI = true; } } } Here is the code for the actual patcher that will patch The Dark Tower Game (The Dark Tower Game Patcher.exe) #pragma strict import System.Collections.Generic; import System.Linq; //to make the ToList function work var font:Font; var patcherVersion:String = "Patcher Version 1.0.0"; var backgroundImage:Texture2D; var myskin:GUISkin; private var updateMsg:String = ""; private var showUpdateButton:boolean = false; private var originalMsg:String = ""; private var savePath:String =""; private var download:WWW; private var errorOccured:boolean = false; private var updateDone:boolean = false; private var updating:boolean = false; private var version:String = ""; function Start () { checkVersion(); } function checkVersion(){ //for the patcher. Check the version first. updateMsg = "\n\n\n\n\nChecking the Current Version"; var buildVersion = Application.dataPath; if(Application.platform == RuntimePlatform.OSXPlayer || Application.platform == RuntimePlatform.OSXEditor){ buildVersion += "/../../"; //finds the exectuable } else if(Application.platform == RuntimePlatform.WindowsPlayer || Application.platform == RuntimePlatform.WindowsEditor){ buildVersion += "/../"; //finds the exectuable } buildVersion += "version.txt";//have this file next to app/exe version = System.IO.File.ReadAllText(buildVersion); version = version.Replace(".",""); version = version.Trim(); convertedText = patch; convertedText = convertedText.Replace(".",""); convertedText = convertedText.Replace("\n",""); convertedText = convertedText.Replace("\r",""); convertedText = convertedText.Trim(); var availablePatch = parseFloat(convertedText); if (parseFloat(version) >= availablePatch){ //check version updateMsg = "\n\n\nCurrently update to date.\n\nWould you like to launch The Dark Tower?"; updateDone = true; } else { applyUpdate(); } } //---------------------------------------------------------------- function applyUpdate(){ updateMsg = "\n\n\nIdentifying Platform"; //Find the place to save all of the data savePath = Application.dataPath; if(Application.platform == RuntimePlatform.OSXPlayer || Application.platform == RuntimePlatform.OSXEditor){ updateMsg = "\n\n\n\n\nIdentifying updates for the Mac platform\n\nEstablishing Connection..."; savePath += "/../../Contents/"; } else if(Application.platform == RuntimePlatform.WindowsPlayer || Application.platform == RuntimePlatform.WindowsEditor){ updateMsg = "\n\n\n\n\nIdentifying updates for the Windows platform\n\nEstablishing Connection..."; savePath += "/../TheDarkTower_Data/"; } //open and read the patch file contents var currPatch:String = ""; var patchlist:WWW = WWW(currPatch); yield patchlist; var fileList = patchlist.text; var fileListArray:String[] = fileList.Split("\n" [0]); //every single line in patch.txt seperated into an array var fileSaveSuccess = true; //generate the list of files to download (removing duplicates) var downloadList = Array(); var convertedString:float; var addFiles:boolean = false; var found = false; for(var j =0; j < fileListArray.Length; j++){ //Look through list and build files to download list if(fileListArray[j].Trim() == "patch"){ convertedString = parseFloat(fileListArray[(j+1)].Replace(".","").Trim()); if(convertedString > parseFloat(version)) { addFiles = true; j = j + 2; } } if(addFiles == true){ for(var k = 0; k < downloadList.length; k++) { if(downloadList[k] == fileListArray[j].Trim()){ found = true; break; } } if(found == false) downloadList.Push(fileListArray[j].Trim()); } found = false; } for (var i = 0; i < downloadList.length; i++) //<-----START DOWNLOADING INDIVIDUAL FILES { updateMsg = "\n\n\nFile "+(i+1)+" of "+downloadList.length+"\n\nDownloading "+downloadList[i]; var fileName = downloadList[i].ToString().Trim(); updateMsg = "\n\n\n\nDownloading File "+(i+1)+" of "+downloadList.length+"\n"; originalMsg = "\n\n\n\nDownloading File "+(i+1)+" of "+downloadList.length+"\n"; yield downloadFile(fileName); if(errorOccured == true){ break; } } if(errorOccured == false){ var versionURL = ""; var versionText:WWW = WWW(versionURL); yield versionText; System.IO.File.WriteAllBytes (savePath+"/../version.txt", versionText.bytes); updateMsg = "\n\n\n\nSuccessfully Updated. \n\nWould you like to relaunch The Dark Tower?"; updateDone = true; } } //---------------------------------------------------------------- function downloadFile(file:String){ var rawDataFolder = ""; var url:String = (rawDataFolder+file).ToString(); download = WWW(url); //download file from platforms raw folder while(!download.isDone){ if(download.error){ errorOccured = true; } else { updateMsg = originalMsg+"\n\n"+file+"\nDownload Progress: "+(download.progress * 100).ToString("##0.00")+"%"; } } yield download; // wait for download to finish try{ updateMsg +="...saving..."; System.IO.File.WriteAllBytes (savePath+file, download.bytes); updateMsg +="success!"; } catch(error){ updateMsg ="Update Failed with error message:\n\n"+error.ToString(); errorOccured = true; } } function openDarkTowerApp(){ var fileToOpen:String = Application.dataPath; if(Application.platform == RuntimePlatform.OSXPlayer || Application.platform == RuntimePlatform.OSXEditor){ fileToOpen += "/../../TheDarkTower.app"; } else if(Application.platform == RuntimePlatform.WindowsPlayer || Application.platform == RuntimePlatform.WindowsEditor){ fileToOpen += "/../TheDarkTower.exe"; } Application.OpenURL(fileToOpen); Application.Quit(); } function OnGUI(){ GUI.skin = myskin; GUI.skin.font = font; GUI.Label(Rect(Screen.width-130, Screen.height-80, 100, 80),"<size=20>"+patcherVersion+"</size>"); if(backgroundImage) GUI.DrawTexture(Rect(0,0,Screen.width,Screen.height), backgroundImage); GUI.Box(Rect(Screen.width*0.5f-250,Screen.height*0.5f-250,500,300), "<size=25>"+updateMsg+"</size>"); if(updateDone == true){ if(GUI.Button(Rect(Screen.width*0.5f-100,Screen.height*0.5f+82,200,100),"Yes")){ updateDone = false; openDarkTowerApp(); } if(GUI.Button(Rect(Screen.width*0.5f-100,Screen.height*0.5f+192,200,100),"No")){ Application.Quit(); } } if(errorOccured == true){ if(GUI.Button(Rect(Screen.width*0.5f-100,Screen.height*0.5f+192,200,100),"Close")){ Application.Quit(); } } } Final few thoughts. This patcher doesn't account for a "man in the middle" attack. Even though it says "https://" it isn't secure. Unity doesn't care about security like that. So you will have to do some research on certificates and authentication to protect your users against these kinds of attacks. However, for me it is fine since these downloads will more than likely not be on public networks and will be done over home networks that a much more secure(hopefully). Also this isn't a huge game just something between my friends so it's not going to be a problem. Hope this teaches people that they want to know about patching and gives them an alternative to paying $30+ for a patching system. Granted those systems on the Unity Asset store are going to be much more robust than what I have got here but with a little more coding I think people can make it as strong as they would like. Just a review of what you will need: Patch system right next to game launcher. version.txt in container folder (where patcher and game are) version.txt on server (for me google drive) patch.txt that contains all files for patch on server updated version in code in game so it recognizes correct version(could change this around to look at version.txt in local directory) Here is a picture of my final folder structure for windows Good Luck! EDIT: I have since made edits to my main game (not the patcher) that will check the version.txt file just like the patcher does, just for consistency sake. I would probably suggest to edit the code to do the same.... its easier to follow that way. thank you so much for this ya no problem. Glad to help another coder in need. This stuff was kind of a pain to find. Great Work btw, However when I try to get the version file from online it gives me the HTML code instead of the Version number. Do you know of a way to fix that because I can't find it anywhere. I tried your link and It pulled the data but mine isn't working and I'm using google drive like you. Just noticed this comment. Does the file have HTML code in it? If so the code is simply pulling down all of the text and not caring about tags such as those. So just put a number in that document or parse the text for HTML tags on your client side to display it correctly. There might be a better way to pull down a file if you want to display HTML code vs plain text. The problem about using "..." is that it will require the user to login into his "google account" before downloading. When I try to download files from that link (without logging in), all files are returned as a "Google 'Select Account' HTML" page and the update fails. Hum. I guess I never tested it while I was not logged in. That's a great point to bring up. You don't need to use google drive though any repository will do. Just the above example uses the URL from google. Simply replace that with whatever repository you would like to use. First of all, what a fantastic answer. If I could upvote *10, I would :D Now the sad part... I had this bookmarked for a long time, but when I finally implemented it in Unity5.4.0 , sadly I get the error : Error : Failed to initialize player Failed to load PlayerSettings (internal index #0). Most likely data file is corrupted, or built with mismatching editor and platform support versions. Looking at all the previous comments, this did once work. Unless I've done something wrong in creating the list of changed files, it seems that there is some kind off ID system now regarding all the files in a build. Searching the above error led me to a couple of steam forums, most noticeably for Kerbal Space Program. IIRC Steam uses a type of version control like this patch method. However the only solutions I read was to completely uninstall/reinstall. This question came up in the forums, so I have cross-linked this post in the hope that the community can work to evolve and resolve this system for everybody. Once again, thank you so much @wesleywh for putting in the time and effort in this answer. Forum Question : Well thank you I appreciate that. This is quite old. This was originally made with unity 4 and since then many things have changed. So I do need to update this method. I am actually working on quite a large library of game ready scripts that I plan on releasing for free. I need to convert this to a better, more universal method. This was something that I did while I was still learning. The concepts while generally will remain the same the actual syntax for the scripts could be quite different now. I will see what I can come up with here in the near future to see if I can give a better answer. I know this is a common issue that I see often on the internet so I know it will help out many others. One thing I would like to change for example is where I store the files for download. I will probably not use Google Drive since that had issues in the past and google has recently stop supporting the web page method that I use here (Got an email from google about this a little while ago). So I am sure a few things will be different. I will probably link to my new method here once I figure out how I would like to do it. Sorry for bumping this thread. But this was really useful. I did too encounter the Failed to load player settings problem. I fixed it by removing all the globalgamemanager files from the change list. Thank you wesleywh for your time to post this code! Really helped me and my team :) Glad it could help you Answer by wizard40 · Aug 03, 2017 at 08:06 AM Use is the best way to create a patch for your. Detail resolution per patch 5 Answers IsFinite(outDistanceForSort) error in water script, since 5.3.1 patch - console spam 2 Answers Updating a MeshText as a child? 0 Answers Can't update models from one Unity project to another 1 Answer How to make a patcher 0 Answers
https://answers.unity.com/questions/944482/making-a-patch-file-for-my-game.html?sort=oldest
CC-MAIN-2019-43
en
refinedweb
Capital project analysis This content was COPIED from BrainMass.com - View the original, and get the already-completed solution here! Able Corporation has Project A with the following cash flows and a 8.7% cost of money: Numbers in parentheses are outflows. Both Year 0 and Year 3 cash flows are outflows. Year 0 1 2 3 4 5 6 Cash flow $(312,000) $ 95,000 $120,000 $(260,000) $ 230,000 $260,000 $180,000 Please calculate the net present value ______________ Please calculate the profitability indexes (two decimals please)_________________ Please calculate the modified profitability index using the terminal value approach in the textbook (two decimals please)_______________________ Please calculate the internal rate of return (two decimals please)_____________________________ Please calculate the modified internal rate of return (two decimals please and per the book)________________________ Please calculate the payback period (two decimals please)________________________ Please calculate the present value payback period (two decimals please)______________________© BrainMass Inc. brainmass.com October 10, 2019, 7:40 am ad1c9bdddf Solution Summary This solution provides calculations of net present value, profitability index, probability, internal rate of return, present value payback, and payback period in Excel.
https://brainmass.com/business/finance/capital-project-analysis-590566
CC-MAIN-2019-43
en
refinedweb
| Introduction | High Level Design | Program/Hardware Design | Results of the Design | Conclusions | | Appendices | References | Introduction Single sentence summary A programmable laser light show that allows the user to specify the pattern displayed via three motor speeds and the length of time that this pattern is held. Project Summary For this project, we designed a system to guide a laser beam through an array of three rotating mirrors and then project it onto a screen where it is able to “draw” a range of different patterns. Allowing the mirrors to rotate at variable speeds creates the different patterns. These variable speeds of the motors are produced via pulse width modulation (PWM) control from the MCU. Each motor has its own dedicated pulse width modulator, allowing each motor to rotate at a different speed. Our project can be broken down into a few major constituent components. These components include the laser tube and high voltage power supply, the mirror assembly (three circular mirrors and motors), the Atmega32(L) (and STK-500), and finally a custom made board which contains the PWM support circuitry. All of these components are mounted on a piece of pegboard making the unit easily portable. Additionally, a user interface for the motors has been implemented that uses HyperTerminal. Each one of these components is discussed in detail below. We decided on this project after discussing the types of potential projects we could complete and jointly decided that since we never dealt with any type of motor control in lab that the final project was a good opportunity to do so. Additionally, after a trip last summer to Darien Lake amusement park where we watched a professional laser light show, we both had wondered how exactly this type of presentation was done. After a bit of research, we concluded that for the time allotted for this project, this tri-mirror setup was the most feasible. High level design Rationale and sources of your project idea To develop the idea of this project, we first built the system using hardware only. The system was the same as in the final design with the exception that all motor control was done with potentiometers instead of PWM control. This allowed us to experiments with different numbers of mirrors, different ranges of speeds and different arrangements of the mirrors themselves. From this experimentation, we were able to make many decisions regarding the way the final project should be constructed. Firstly, we decided on three motors/mirrors for a number of different reasons, the main reason being that we could maximize the number of patterns available for display by maximizing the number of mirrors. The Atmega32(L) contains four pulse width modulators from three timer sources. One of these timers must be dedicated to program control, this leaves a maximum of three PWM channels from a single chip. Hence, we decided that three mirrors and motors was the proper choice. Once we established that we could control three motors to make a pattern, we tried to think of a way that we could incorporate more of the microcontroller’s capabilities into our design. We thought of the light show idea next. We could extend the idea of motor control by using serial communication. A user could specify the motor speeds and the pattern to be held. This show concept required an extensive user interface, which we had to design. At this point, all timers had a task assigned to them. We thought that this idea was complex enough to appropriately challenge our design skills. Logical structure We visualized this project in three parts. First, we had to successfully implement control of an individual light pattern. In our user interface, we added an option to make a light pattern. The user can enter a speed for each of the three motors to specify a light pattern. This pattern will be held until the user decided to quit the program. This was the first logical component of our project. Next we adapted the individual light pattern module to design the light show component. We used three arrays to specify the three motor speeds, and a fourth array to specify the time that the pattern is held for. For example, index zero of array 1,2,3 specifies the speed for each of the three motors. The zero index of the time array specifies the length of time that the speeds will be held for. After this length of time (in seconds) has expired, the next stored pattern is displayed. When the end of the show arrays are reached, a message alerts the user that the show has concluded, and all three motors shut off. Lastly, we took these four arrays and enabled the user to store them in EEPROM. In this manner, a light show can be restored, even if the system is powered down. In the user interface, there are options that allow a user to save and restore a light show. Hardware/software tradeoffs One of the tradeoffs that we needed to weigh during the design of this project was the choice of using all hardware PWMs or needing to code a single PWM in software while the other two were done in hardware, as our original proposal suggested. For instance, a slightly less expensive chip could have been used, such as the mega163, but then only two hardware PWMs would have been available since timer0 would have still needed to be used as the task timer. While writing a software PWM itself is not complicated (it only requires the manual toggling of a single output port bit), calibrating the modulation frequency to coincide with the frequency of the other two hardware PWMs may have proven to be a tricky procedure. Another tradeoff that we needed to consider was how much EEPROM would be available for on-chip laser show storage. The mega163 had 512 bytes while the mega32 has 1024 bytes which for the storage of our four arrays (three speed arrays and one timing array) which makes a significant difference. With these two metrics in mind, we felt that the Atmega32(L) was the proper choice of MCU. Program/Hardware design Hardware Design As mentioned earlier, the hardware dimension of our project is composed of three main parts: a laser tube and high voltage power supply, a spinning motor/mirror assembly, and the support circuitry for the PWM motor control. Each one of these parts is discussed in detail below. I) Laser Tube and High Voltage Power Supply The laser tube we have chosen is a continuous beam, 0.9mW, red argon laser tube. The particular beam was chosen because of its high brightness and minimal spreading and also because the large tube was easy to mount and hold steady. This is crucial because jitter from the mirrors could cause the laser beam to randomly bounce around, making the show appear sloppy. The tube is a cylinder and is mounted with U-brackets onto the pegboard (see Figure 2). The laser runs on a 14,000V power supply which steps up from 12VDC. We have constructed this power supply from scratch by following a schematic of an existing high voltage laser power supply (see appendix II). This supply is based around a switching flyback transformer run by a 555-timer IC. No modifications were made to this original schematic. The 12VDC comes from a separate power supply that plugs into a standard 110VAC wall outlet. II) Mirror Assembly This assembly consists of three identical motors, each equipped with a mirror mounted perpendicular to the end of the rotating motor shaft, as shown in the side view below (note that the reflective surface of the mirror is facing away from the motor shaft): Figure 1: Motor Assembly The motors are 6V, continuous DC motors that run from an external 6V battery pack comprised of 4 series connect C-cell batteries. The mirrors are flat and circular in shape measuring 1 1/4” in diameter and 1/8” in thickness. Mounting the mirrors to the motor shafts was done with a specialized epoxy that allows for high-speed rotation of the mirrors. The motors are attached to the pegboard with small bolts to prevent any rotation of the motor body itself. The combined laser and spinning mirror assembly is pictured below in figure 2: Figure 2: Laser and Spinning Mirror Hardware Assembly Also pictured in the diagram above is the project is the projection screen. This screen was made from two custom made L-brackets, a length of threaded steel rod, and some spare window shade vinyl. The screen can be removed if the user wants to display the show on a bare wall. III) Pulse Width Modulation General Discussion of Concept Motor control via PWM signals is one of variety of standard ways in which a motor’s speed can be precisely controlled. The idea of PWM control of a motor is analogous in a way to a light switch/light bulb system. If over the course of a unit of time the light switch is off for the entire time, then the bulb will be dark. Conversely, if the light switch is on for the entire time, then the bulb will be as bright as possible. However, if one was to rapidly flip the switch (on, off, on, off, etc.), depending on what percentage of the cycle the switch was on (the duty cycle) the light could be made to shine at a range of intensities between 100% dark and 100% bright. Generalizing this concept to the idea of motor control, the Atmega32 has the ability to produce signals that look like the following: Figure 3: PWM Control Signals The top most signal represents a slow speed because 90% of the cycle the motor is off. The middle signal represents half speed because the on and off times are balanced (50% duty cycle). Finally the bottom signal represents a high speed because 90% of the cycle the motor is on. Although the signals are fluctuating rapidly, the behavior seen from the motor is very smooth because the inertia of the rotating shaft provides a natural smoothing effect (like the flywheel in a car). However, additional smoothing of the signal can be added in the support circuitry, discussed later. The only point left to discuss is how the signals actually drive the motors. The motors we are using need more voltage than the STK board can supply, so an external battery pack is used to supply this power. In order to use the Atmel PWM signals to modulate this external power supply, 2N2904 high-speed switching transistors are used. Going back to our light bulb analogy, if you were the one flipping the switch, than your hand would represent the PWM signal from the STK board and the switch would represent the 2N3904. The battery pack would be analogous to the 110VAC used to drive the load (light bulb, motor). Implementation This project will require three pulse width modulators to drive our three motors. The mega32 offers three timers, each equipped with PWM capability. Additionally, timer1 has two independent PWM channels that can be used. We have chosen timer0 to be used for general timing tasks such as serial communication and sound generation; hence it is timers 1 and 2 that are used to generate the PWM signals for the motors. In all cases, the PWM output lines will run from the MCU to external support circuitry, which is discussed next. Support Circuitry Support circuitry is necessary for motor control because the MCU has neither the voltage nor amperage to adequately drive the motors. Therefore, the PWM output lines modulate the gates of three transistors that drive the motors using an external power source. This general circuit is shown below: Figure 4: PWM Support Circuitry The rapidly varying PWM lines allow for smoothly varying motor speeds, which in turn allow for crisp laser images. Note that the protection diodes are needed because the changing magnetic fields in the inductive load will cause sharp spikes that would blow out the motors without these diodes. The VCC rail is powered by a 6V battery pack (4 C-cells). This is the final hardware component of our design project. A picture of the complete hardware system is shown below. Program Design Using HyperTerminal, we constructed a user interface that supports two modes. Mode 1 will allow the user to set the duty cycles of each motor individually. Once these duty cycles are set, they will be held until the user changes them. Mode 2 is a light show programmer. This mode allows the user to specify a series of motor speeds (that determine a light pattern) that is held for a user-determined amount of time. A more detailed description of each mode follows. Mode 1: Independent Duty Cycle Control In this mode the user is prompted to select one of the three motors and set its speed. The user sets the speed by entering a number between 0 (off) and 255 (100% duty cycle). This range of numbers is a result of us choosing to use 8-bit PWM. Note that due to loads on the motor shafts (mostly weight of the mirror) we are unable to utilize the full range of speeds. This is discussed later on. The user must set one speed at a time, but can specify different speeds for each motor. The speeds are held until the user quits out to the main menu. The picture below shows what the user-interface menu looks like. Mode 2: Laser Show Programmer This mode allows the user to setup a laser show. The user has several options in this mode. The first option is to program a new laser show using the “program show” option. The user will be prompted for a set of three duty cycles, one for each motor. This will specify a particular laser pattern. The user will also be asked for a length of time (in seconds) that this pattern will be held. Each of the four values will be stored in separate arrays. Once this input set is complete, the user will be asked whether they are done entering data. If not, then they will be prompted to enter in four new values for the next pattern. If data entry is complete, they return to the programming menu. The user now has the options of either running the show or saving to EEPROM to be used at a later session. If the user selects “save show”, the arrays will be stored in EEPROM, so that at power down, the show will be saved. When the system is turned back on, the user is able to select the “load show” command and restore the saved show. Once a show is entered or loaded back from memory, the user may select the “start show” command. This option immediately spins up the motors to the speeds specified in the first data set. Each successive pattern is displayed and held for the specified amount time until the final data set, after which the motors are shut off. Below, menu 2 is shown, along with the information the user is prompted for while programming a show. Code Structure The purpose of this section is to explain, in some detail, the workings of our code. Very low-level detail will not be discussed here, but can be examined in our commented code in Appendix A. This section will provide the reader with enough detail to be able to follow the commented code. At reset, an initialize function is called in which all state variables, indices, LCVs, etc. are set. This function is also called if the user presses ‘5’ at any of the mode menus. There are only two major components that are present in our while loop in main. The first sets up a one second counter. This counter is only run while a laser show is displayed. The counter in main increments whenever the ‘time’ variable reaches zero (it starts at 125). This indicates that one second has elapsed. Once the counter reaches the user specified time length, the next laser pattern is displayed. The other component that is run from main is our control flow function. This function is really the heart of the user interface. From here, user input is received and decoded. Based upon several state variables and the user input, the appropriate actions are determined. A function called get_input is used to assign commands. USCRA bit 7 is examined first. If it is high, then there is a character waiting to be processed. The function then checks if the character is a carriage return. If it isn’t, then it is added to a character input array. If the entered command is a carriage return, then the character array is concatenated and type cast into a character. This character is stored in the ‘command’ variable. In addition, a variable ‘process’ is set high. This will be used as an indicator that there is a valid command ready to be processed by the control flow function. The principle state variable used is called ‘menu_page’. It takes on one of three values. It is zero on reset, one if in independent duty cycle control mode, and two if in show programmer mode. Based mostly upon the value of ‘menu_page’, the appropriate menu is displayed to the user. Based upon which menu is currently displayed, the current command, and some other state variables, an appropriate action is taken. For example, process is frequently used to ensure that menus are only displayed once, and not endlessly repeated onscreen. ‘enter_motsp’ is used during the individual PWM control mode, and ensures that the user is prompted for the appropriate information. ‘Programshow’ is used to determine when the user is entering data to be stored in the laser show arrays. It also helps ensure that the user is prompted for the appropriate data. After reading the above description, it becomes clear that one of our principle tasks in designing this interface was to keep track of ‘where’ the user was. By using state variables, we had to ensure that all data was assigned to the appropriate variables, and that the program did not take unwanted actions. Hardware/Software Implementation Problems As with all major design projects, we encountered several difficult and unexpected problems. First, after we successfully got the PWM working, we noticed that we could not get the full range of speeds (0-255) with the motor. When the motors are first started up, they need a speed of at least 65 to get the motor shaft to continually turn. Once the motor is started, speeds can be dropped lower than 65, but usually not much below 35. Although we were initially concerned with this performance aspect, after testing we determined that this was not actually a problem. At such low speeds, the motors are not even able to move the beam fast enough so that the persistence of vision takes effect. At these low speeds, the laser just appears as a moving dot, rather than a coherent pattern. We are still able to get a full range of patterns even without the full range of speeds. We also encountered some problems with ground. As we tested our design, we noticed that sometimes the motors would not fully shut off at zero duty cycle and power down, or that tapping the shaft could restart them. After some investigation, we determined that there was about a 2-volt peak-to-peak sine wave present at ground. We ensured that both the evaluation board’s ground was common with the battery-pack’s ground. This did not have any effect. After further experimentation, we found that the only way to rid ground of this unwanted signal is to make both grounds common, and connect them to earth ground (through the scope or wall socket). | Introduction | High Level Design | Program/Hardware Design | Results of the Design | Conclusions | | Appendices | References | Results of the design Speed of execution Our biggest concern was ensuring that any command entered by the user was processed correctly. We made sure that we used non-blocking code for this input. When the user enters a command in HyperTerminal and hits carriage return, a flag called “process” is set high. The software uses this flag to indicate that there is a command ready to be processed. A long series of if-else and case statements determines and executes the appropriate action. We were somewhat concerned that this conditional chain may be so long that some user input would be missed. However, after some extensive testing we found that our software always accurately processes user input. Our other concern was that the PWM techniques that we used would provide a suitable level of sensitivity to the motors. We wanted to ensure that a full range of speeds was possible, thus allowing for the greatest variety of laser light patterns. We accomplished this mainly empirically, using 8-bit modulation and then checking the motor operation to ensure that a wide variety of speeds were possible. Accuracy Accuracy, although important, was not a major consideration in our design. We wanted to make sure that the patterns in the show mode could be displayed in some length of seconds. We used a counter to keep track of the number of seconds that has elapsed. Once this counter equals the desired display length for the pattern, the second counter is reset and the next pattern is displayed. If we are off by some small number of milliseconds, the error will not really be noticeable to the user, nor will it affect the overall appearance of the light show. The only other accuracy consideration was to ensure that all commands entered in HyperTerminal were properly received. The methods we used to ensure this have been discussed previously in the speed section. Safety There are several safety concerns involved in this project. The first is that project involves a laser so basic laser safety concerns need to be addressed. While the beam is certainly not high power, it is still dangerous if pointed directly into one’s eye. Most of the time the beam is simply directed through the mirror array and onto the screen and posses no real threat. However, during alignment of the mirrors the laser must be on and the beam could accidentally strike someone in the eye. Care must be taken, especially during alignment, to make sure that the beam does not become a hazard. Another concern is the high voltage being used to operate the laser tube. Because we constructed the supply from discrete parts and the unit is not packaged up, it was important to make sure that it would not be easy for someone to accidentally stick their hand in the unit and get injured. We took several precautions to try and avoid this. The flyback transformer we used is a commercial unit that is encased in plastic, as are all capacitors and other high voltage components. Also, any observable external contacts carrying either high voltage DC or 110VAC were covered with electrical tape to shield them from accidental contact. Interference with Other People's Designs We didn’t believe that our project caused any noticeable interference, since we ran it for a couple days without causing any problems for our surrounding neighbors. However on one of the last days, we did encounter one problem. One group was doing a guitar tuner of some kind, and noticed that our project created some noise that was detectable on his system. Apparently, the PWM signals that came off the MCU caused this problem. We were a little surprised by this, as we thought that our laser power supply would be the likely culprit. However, the interference was not generated when the laser supply was on. We guess that the wires that connect the output pins on the evaluation board to our circuitry are causing this noise. Fortunately, the interference was not so severe as to create a serious problem for our classmate. Usability by You and Other People A major component of our project was to create a user interface that would be robust and easily understood. We believe that practically anyone could use our design. The most difficult aspect of our design would be the setup, which anyone who has taken 476 could do. Setup merely involves setting up HyperTerminal. Once the project is set up, anyone could use the interface. The hardware is ready to use, so the only thing the user must do is turn on the power supply. From there, it is a simple matter of selecting the desired option, and inputting a motor speed. We thought about making the project ore portable by using an LCD to display the interface, however we decided against it because we thought it would be significantly less user-friendly. Conclusions We were able to meet all of our major design goals, as set forth in our project proposal. The secondary goal, sound, was not implemented due to lack of time. However, we are extremely pleased with the way that our project turned out. Our resultant product is a great demonstration of the Atmel32’s capabilities with motor control and serial communication. Our project utilized most of the chip’s resources including all three timers, the UART, and EEPROM storage. Further, we are very pleased that the project is fun to operate. It is very entertaining to explore the different patterns possible, and then program in a show that demonstrates some very interesting light patterns. If we had to implement this project again, we would do several things differently. First, we would try and make our code somewhat more organized. Although entirely functional, and generally readable, we feel that had we done a little more work on paper, the code could be more easily understood. There are not too many intellectual property considerations with our project. We used readily available hardware, including an argon laser, power supply, transistors, motors, and mirrors. We did not have to sign any non-disclosure agreements to get any of this hardware. Our code is of our own design, and not part of the public domain. Our project is an interesting demonstration of how a simple laser light show may be programmed. We do not feel that there are very many patent possibilities because most commercial systems are far more advanced and capable of many more designs. Smaller systems are marketed that are similar to ours except for the fact that they use glavometers in place of motors and are often able to accept RCA inputs so that the pattern changes with the music. However, none of these systems to our knowledge allow the user to preprogram the unit to display a certain series of patterns. This idea might be worked in to allow some sort of patent potential. Finally, we would like to demonstrate a few ways in which our project complies with the IEEE Code of Ethics. Point one of the code states: to accept responsibility in making engineering decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment As stated before in this report, all major safety hazards have been assessed and avoided as much as was possible given the time span of this project. The high voltage supply was sealed up and labeled with a “high voltage” sticker so that observers would understand the threat. The entire unit was mounted securely to a board, minimizing the risk that someone could pick up the laser tube and cause injury to themselves or others with the beam. Also, despite the speed of the motors, the edges of the mirrors have been smoothed so that no sharp edges could cut a user who was trying to align the unit during operation. Point five of the code states: to improve the understanding of technology, its appropriate application, and potential consequences We have already mentioned how happy we were that this project placed a slightly different “spin” on the idea of the laser show in an application that is a bit different than standard laser shows. Large-scale laser shows tend to utilize devices that can accurately and quickly move the beam in both the x- and y- planes, hence being able to make any 2-D figure. Ours, while more simplistic, is also much cheaper and in some ways more elegant in its operation. The sensitivity of the patterns to the changing motor speeds is quite interesting and allows for a wide range of available patterns if the user is willing to experiment enough. We feel that this is certainly an appropriate application of the popular laser technology. Point three of the code states: to be honest and realistic in stating claims or estimates based on available data From the onset of this project we were honest as to our own goals and the ability of the system that we were creating. We were realistic with the budget that this project would run us and we made it so that the project worked as specified given the $25 limit. We referenced all parts of the design that we did not explicitly develop and were honest about the portions of the project that could not be completed due to lack of time. Finally, we were honest when discussing the limited potential of patenting this device because of it similarity (though not exactly) to some existing laser projection devices. Point seven of the code states: to seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, and to credit properly the contributions of others Our project without question complies with this portion of the code. As stated before, the high voltage supply was constructed from a design that was not ours and that design was certainly referenced in this report. Also, during the completion of this project we certainly stumbled upon many errors, of which some of the larger ones were discussed previously. For example, the problem with our circuit concerning the method of grounding was eventually identified (after hours of searching for it) and corrected. Also, Professor Land pointed out a few places on our power supply that were potentially dangerous and asked us to cover them with electrical tape, which we did immediately. Finally, throughout the design of this project we asked the TAs, Professor Land, as well as some of the other students for their input on the project and certainly took their input to heart. For instance our TA, Derek, certainly assisted us in choosing the proper switching transistors for the PWM control circuitry. Finally, point ten of the code states: to assist colleagues and co-workers in their professional development and to support them in following this code of ethics We feel that this point has a bit of dual meaning in the context of ECE 476. The “colleagues and co-workers” applies both to the partners working on the project together as well as the other groups working around us in the lab. As a group, we tried our best to help out any groups that asked us for advice or input because we expected to receive the same sort of treatment when we asked them. Additionally, by working closely together we were able to keep each other’s work at high standards and make sure limited mistakes were made that could possibly violate the code of ethics. For instance, this close working unit was the reason the no limited safety issues remained when the project was complete. Also, by helping each other out we were able to expedite the development process and actually complete most of the goals that we set for ourselves. In conclusion, we feel that our project was a success on several different levels. It was a physical success in that we ended up with a working laser show system that utilized a wide range of our own capabilities as well as the Atmega32’s capabilities. It was a learning success in that we learned a lot of different things about how to approach a project of this magnitude and how we would approach it differently given a second chance and/or more time. Finally, it was an ethical success as we showed above in its compliance with the IEEE code of ethics. Appendix I: Commented Program Listing /**************************************************** ECE 476 Final Project: Computer Controlled Laser Show Josh Silbermann and Matt Melnyk Thursday, May 1st 2003 ***************************************************** */ #include <Mega32.h> #include <stdio.h> #include <stdlib.h> #include <math.h> char current_char; //most recently entered character char input[4]; //array that holds the command char process; //set high when command recieved char menu_page; //current menu page displayed char show_mode; //high if running laser show unsigned int in_count; //index for input array char command; //concatinated input array char mot; //selects motor that command refers to char enter_motsp; //state variable that is high when speed entered unsigned char reload; //used to set time base char time; //used to set 1-second timer unsigned char motor1_show[25]; //speeds for motor1 in show mode unsigned char motor2_show[25]; //speeds for motor2 in show mode unsigned char motor3_show[25]; //speeds for motor3 in show mode unsigned char showtime[25]; //pattern holding times for show mode char stage; //index into show mode arrays while programming show char programshow; //state variable that is high when entering show mode data char parse; //counter to ensure that all show mode data entered char max_index; //maximum index into show mode arrays char run_time; //number of seconds elapsed since pattern began char index; //index into show mode arrays while show running char menu2; //control variable used to display menu 2 char y; //index variable used for eeprom assignment char display_speed; //control variable to display motor speeds in Hyperterminal char display_end; //control variable to display end-of-show message in Hyperterminal //Variables used for power-down storage of data eeprom char e_motor1_show[25]; //eeprom storage of motor1 speeds eeprom char e_motor2_show[25]; //eeprom storage of motor2 speeds eeprom char e_motor3_show[25]; //eeprom storage of motor3 speeds eeprom char e_showtime[25]; //eeprom storage of pattern holding times for show mode eeprom char e_max_index; //eeprom storage of max index into show mode arrays //Program functions void display_menu0(void); //displays menu 0 void display_menu1(void); //displays menu 1 void display_menu2(void); //displays menu 2 void get_input(void); //checks for input and assigns command void control_flow(void); //master control function void initialize(void); //initialize all variables void start_show(void); //runs current show //timer 0 overflow ISR interrupt [TIM0_OVF] void timer0_overflow(void) { //reload to force 1 mSec overflow TCNT0=reload; //Decrement time if not zero if(time>0) --time; } void main(void) { initialize(); //initialize variables while(1) { if((time==0) & show_mode) //if running show, use 1 sec time intervals { time=125; run_time++; PORTB.2=!PORTB.2; //1Hz timing flash } control_flow(); } } void display_menu0(void) //display menu 0 { putchar(0x0c); printf("*************************************************************************"); printf("\r\n"); printf("Welcome to the Laser light show user interface"); printf("\r\n"); printf("Please select from the following options"); printf("\r\n"); printf("1. Enter 3 motor speeds and make a light pattern"); printf("\r\n"); printf("2. Program a laser light show"); printf("\r\n"); printf("*************************************************************************"); printf("\r\n"); } void display_menu1(void) //display menu 1 { putchar(0x0c); printf("*************************************************************************"); printf("\r\n"); printf("Enter speed for"); printf("\r\n"); printf("\r\n"); printf("Motor(1)"); printf("\r\n"); printf("Motor(2)"); printf("\r\n"); printf("Motor(3)"); printf("\r\n"); printf("*************************************************************************"); printf("\r\n"); printf("Enter 5 to quit to the main menu"); printf("\r\n"); void display_menu2(void) //display menu 2 { putchar(0x0c); printf("*************************************************************************"); printf("\r\n"); printf("SHOW MODE"); printf("\r\n"); printf("Select from the following options"); printf("\r\n"); printf("1. Program Show"); printf("\r\n"); printf("2. Save Show"); printf("\r\n"); printf("3. Load Show"); printf("\r\n"); printf("4. Start Show"); printf("\r\n"); printf("*************************************************************************"); printf("\r\n"); printf("Enter 5 to quit to the main menu"); printf("\r\n"); } void get_input(void) //checks for input and assigns command { if (UCSRA.7) //if character is waiting { current_char=getchar(); if(current_char != '\r') //check for carriage return { putchar(current_char); if(current_char == 0x08) --in_count; else input[in_count++] = current_char; //construct command array } if(current_char == '\r') //check for complete command { input[in_count] = 0x00; //null terminate putchar(current_char); putchar('\n'); in_count=0; process=1; command=(char)atoi(input); //assigns command to be processed in control flow } } } void control_flow(void) //master control function { if((menu_page==0) & process) //display menu 0 on reset { display_menu0(); process=0; } else if((menu_page==1) & !enter_motsp & process) //display menu 1 when not entering motor speeds { display_menu1(); process=0; } get_input(); //gets command if((menu_page==0) & (command==1)) //enter individual motor speed setting mode { menu_page=1; } else if((menu_page==0) & (command==2)) //enter show mode { menu_page=2; menu2=1; } else if((menu_page==1) & (command == 1) & process & !enter_motsp) //change motor 0 speed { enter_motsp=1; printf("Enter speed for motor 1 (0-255)"); printf("\r\n"); process=0; mot=1; } else if((menu_page==1) & (command == 2) & process & !enter_motsp) //change motor 1 speed { enter_motsp=1; printf("Enter speed for motor 2 (0-255)"); printf("\r\n"); process=0; mot=2; } else if((menu_page==1) & (command == 3) & process & ! enter_motsp) //change motor 2 speed { enter_motsp=1; printf("Enter speed for motor 3 (0-255)"); printf("\r\n"); process=0; mot=3; } else if((menu_page==2) & process & menu2) //for proper displaying of menu 2 { display_menu2(); process=0; menu2=0; } else if((menu_page==2) & process & (command==1) & !programshow) //begin to program show { programshow=1; parse=5; //5 pieces of data to enter; 3 motor speeds, hold time, finish program query } else if((menu_page==2) & (command==4) & !programshow) //run current show { if(process) //for proper display purpose { printf("Enter 5 to stop show"); printf("\r\n"); show_mode=1; process=0; } start_show(); } else if((menu_page==2) & (command==2) & !programshow & process) //save current show to EEPROM { while(y<=max_index) //copy show arrays to equivilant EEPROM locations { e_motor1_show[y]=motor1_show[y]; e_motor2_show[y]=motor2_show[y]; e_motor3_show[y]=motor3_show[y]; e_showtime[y]=showtime[y]; y++; } y=0; e_max_index=max_index; menu2=1; } else if((menu_page==2) & (command==3) & !programshow & process) //load current show from EEPROM { max_index=e_max_index; //prevents loading residual data from longer shows while(y<=max_index) //copy show arrays back from EEPROM locations { motor1_show[y]=e_motor1_show[y]; motor2_show[y]=e_motor2_show[y]; motor3_show[y]=e_motor3_show[y]; showtime[y]=e_showtime[y]; y++; } y=0; menu2=1; } else if((mot==1) & process) //send motor 1 speed to timer 1 PWM, channel 1 { OCR1A = command; mot=0; process=1; command=0; enter_motsp=0; } else if((mot==2) & process) //send motor 2 speed to timer 1 PWM, channel 2 { OCR1B = command; mot=0; process=1; command=0; enter_motsp=0; } else if((mot==3) & process) //send motor 3 speed to timer 2 PWM { OCR2 = command; mot=0; process=1; command=0; enter_motsp=0; } if((programshow) & process) //program single stage of show { switch(parse) //increment through 5 necessary data items { //get motor speed 1 case 5: printf("Enter speed for motor 1"); printf("\r\n"); parse--; process=0; break; //get motor speed 2 case 4: motor1_show[stage]=command; printf("Enter speed for motor 2"); printf("\r\n"); parse--; process=0; break; //get motor speed 3 case 3: motor2_show[stage]=command; printf("Enter speed for motor 3"); printf("\r\n"); parse--; process=0; break; //get holding time for stage (in seconds) case 2: motor3_show[stage]=command; printf("Enter time to hold pattern"); printf("\r\n"); parse--; process=0; break; //query user for end-of-programming case 1: showtime[stage]=command; printf("Finished programming? Yes, Hit 2 ; No, Hit 1"); printf("\r\n"); process=0; parse--; break; case 0: if ((command==2) | (stage==24)) //quit and return to menu 2 { menu2=1; programshow=0; max_index=stage; stage=0; command=-1; } else // continue programming another stage { stage++; } parse=5; break; } } if((command==5) & process) //quit and re-initialize { PORTB.2=0; initialize(); } } void initialize(void) //initialize all variables { //setup UART UCSRB = 0x18; UBRRL = 103; //initilize motor speeds to zero OCR1A = 0; OCR1B = 0; OCR2=0; //make PWM ports outputs for motor control DDRD.7=1; DDRD.5=1; DDRD.4=1; //setup timer0 TCCR0 = 0b00000101; //prescale 1024 TCNT0 = reload; TIMSK = 1; //interupt on overflow //setup timer1: dual PWM channels TCCR1A = 0b10100001; TCCR1B = 0b00000010; //setup timer2: single PWM TCCR2 = 0b01100010; //initialize state variables and flags menu_page=0; process=1; in_count=0; enter_motsp=0; mot=0; command=0; time = 125; //for 1 sec intervals reload = 256-125; stage=0; programshow=0; parse=5; show_mode=0; run_time=0; index=0; y=0; display_speed=0; display_end=0; //setup for LEDs DDRB.0 = 1; DDRB.2=1; PORTB.2=1; //enable interrupts #asm sei #endasm } void start_show(void) //run current show { if(index<=max_index) //check for end-of-show { if(showtime[index] != run_time) { OCR1A=motor1_show[index]; //assign motor 1 speed OCR1B=motor2_show[index]; //assign motor 2 speed OCR2=motor3_show[index]; //assign motor 3 speed if(!display_speed) //display current speeds in Hyperterminal { putchar(0x0c); printf("Current Motor Speeds"); printf("\r\n"); printf("Motor 1 speed is %d",motor1_show[index]); printf("\r\n"); printf("Motor 2 speed is %d",motor2_show[index]); printf("\r\n"); printf("Motor 3 speed is %d",motor3_show[index]); printf("\r\n"); printf("\r\n"); display_speed=1; } } else { index++; run_time=0; display_speed=0; } } else if(!display_end) { OCR1A=0; OCR1B=0; OCR2=0; printf("End of Show, hit 5 to return to menu"); printf("\r\n"); display_end=1; } //if(index>max_index) uncomment to loop display //index=0; } Appendix II: Schematics Schematic A: High Voltage Power Supply for Laser Tube Schematic B: Motor Wiring Diagram with PWM Control Appendix III: Cost details **We spoke to a hobby electronics store in Rochester called Allied-Action Limited, whom we have dealt with in the past. We explained the type of laser tube we had and that we needed to construct a supply to power it. We also explained that this project was for a final engineering project at Cornell and that we were under a strict budget. He agreed to sell us the flyback transformer as well as the some of the other components not easily found in a common lab for a set price of $12.50. Appendix IV: Project Tasks Carried out by each Team Member For the grand majority of this project we worked side-by-side, completing each portion of the project together. This was our goal from the beginning of our work session as we felt that it would maximize both our understanding of the project and the relevant concepts as well as the final result as each part of the project would be evaluated and thought through by both of us. Specifically, there were only two occasions where the two of us were not working together. Matt spent some extended time working on the main control flow loop while Josh was out for the Passover holiday. Josh made up for this extra time by spending an extended session putting together a large portion of the hardware system. Appendix V: Acknowledgements We would like to thank Professor Bruce Land for his help throughout this semester and especially on this final project. We would also like to thank our TA Derek for helping work a couple bugs out of our circuitry. Finally, our thanks to Andy Ruina and his Human Power and Bicycle lab for letting us use a few of their spare parts that were sitting around. References High Voltage Laser Power Supply Allied-Action Limited 182 Avenue D Rochester, NY 14621 Barnett, R. et al. (2003). Embedded C Programming and the Atmel AVR. New York. Horowitz, P. et al. (1989). The Art of Electronics 2nd Edition. Cambridge. Datasheet for the Atmega32
https://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2003/jms225/MattandJosh/index.htm
CC-MAIN-2017-47
en
refinedweb
A full-featured Python API for Authorize.net's AIM, CIM, ARB and Reporting APIs. Py-Authorize is a full-featured Python API for the Authorize.net payment gateway. Authorize.net offers great payment processing capabilities with a terribly incoherent API. Py-Authorize attempts to alleviate many of the problems programmers might experience with Authorize.net’s’API by providing a cleaner, simpler and much more coherent API. Py-Authorize supports most all of the Authorize.net’s API functionality including: - Advanced Integration Method (AIM) - Customer Integration Manager (CIM) - Transaction Detail API/Reporting - Automated Recurring Billing API (ARB) Here is a simple example of a basic credit card transaction. import authorize authorize.Configuration.configure( authorize.Environment.TEST, 'api_login_id', 'api_transaction_key', ) result = authorize.Transaction.sale({ 'amount': 40.00, 'credit_card': { 'card_number': '4111111111111111', 'expiration_date': '04/2014', 'card_code': '343', } }) result.transaction_response.trans_id # e.g. '2194343352' Documentation Please visit the Github Page for full documentation. License Py-Authorize is distributed under the MIT license. Support All bug reports, new feature requests and pull requests are handled through this project’s Github issues page. Release History Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/Py-Authorize/
CC-MAIN-2017-47
en
refinedweb
Warning! When using this method, maps that only have info_player_combine and info_player_rebel on them '''must''' be run with mp_teamplay 1, set on the server. This can be done by entering it into your Autoexec.cfg file for the game Steamapps\SourceMods\\cfg\autoexec.cfg VGUI Overview Before you start working on the VGUI team menu, it is recommended that you look at the VGUI2 Overview. (). You will learn that the base for all VGUI panels (not class-based, but where VGUI-objects are attached) is a viewport. This viewport is created in '''cl_dll/sdk/clientmode_sdk.cpp''' in the method '''ClientModeSDKNormal::InitViewport''': This is just for reference. You do not need to edit anything here. void ClientModeSDKNormal::InitViewport() { m_pViewport = new SDKViewport(); m_pViewport->Start( gameuifuncs, gameeventmanager ); } Team panel and menu Let's take a look at the CBaseViewport, which handles all the different Panels used by the mod. It is defined in '''cl_dll/baseviewport.cpp'''. This function creates all panels known to this viewport: void CBaseViewport::CreateDefaultPanels( void ) { AddNewPanel( CreatePanelByName( PANEL_SCOREBOARD ) ); AddNewPanel( CreatePanelByName( PANEL_INFO ) ); AddNewPanel( CreatePanelByName( PANEL_SPECGUI ) ); AddNewPanel( CreatePanelByName( PANEL_SPECMENU ) ); // AddNewPanel( CreatePanelByName( PANEL_TEAM ) ); // AddNewPanel( CreatePanelByName( PANEL_CLASS ) ); // AddNewPanel( CreatePanelByName( PANEL_BUY ) ); } Since you want to enable the team menu, '''uncomment''' AddNewPanel( CreatePanelByName( PANEL_TEAM ) ); The "real" creation is done in the function '''CBaseViewport::CreatePanelByName''', which takes the panelname as string and creates a corresponding class. The panel names are referenced in game_shared/viewport_panel_names.h, e.g. #define PANEL_TEAM "team" '''PANEL_TEAM''' creates the class '''CTeamMenu''', which contains all the functionality of that panel and is implemented in '''game_controls/teammenu.cpp'''. So uncomment that, too. else if ( Q_strcmp(PANEL_TEAM, szPanelName) == 0 ) { newpanel = new CTeamMenu( this ); } The class CTeamMenu is nearly functional; the only thing missing is the function which handles the messages created by the panel. Open '''game_controls/teammenu.h''' to add the declaration of the new function OnCommand to the class. Find // command callbacks and below it add virtual void OnCommand( const char *command ); Then add the implementation in '''game_controls/teammenu.cpp''': void CTeamMenu::OnCommand( const char *command ) { if ( Q_stricmp( command, "vguicancel" ) ) { engine->ClientCmd( const_cast( command ) ); } Close(); gViewPortInterface->ShowBackGround( false ); BaseClass::OnCommand(command); } You don't see anything done here with the commands. They are passed to the baseclass where hopefully they will be employed in a useful manner, but that is beyond the scope of this tutorial. Menu layout The layout and contents of your panel can be defined solely by a resource file. If you don't need to do customized things (e.g. displaying gamemode-dependent buttons or map information), you won't require any more to complete your menu. Most resources are missing in the Source SDK; the team menu resource is no exception. You need to create one yourself (or copy one from a cache file); the standard path would be /resource/ui/Teammenu.res. I think the format is self-explanatory, and you'll find an example below. It defines one panel (team), one label (joinTeam) and five buttons (jointeam1, jointeam2, jointeam3, autojoin, CancelButton). The interesting part in the button-definition is "Command", because this defines, which command the client executes, if he presses that button. You don't need to edit the file by hand. Valve included a nice tool with which you may change it in-game, as mentioned in VGUI Documentation. Just press [SHIFT]+[CTRL]+[ALT]+[B]in the game to open the VGUI Build-Mode editor: Added: replaced example with modified Counter-Strike: Source team menu, which includes HTML map info panel. You can save this example into /resource/ui/TeamMenu.res: "Resource/UI/TeamMenu.res" { "team" { "ControlName" "CTeamMenu" "fieldName" "team" "xpos" "0" "ypos" "0" "wide" "640" "tall" "480" "autoResize" "0" "pinCorner" "0" "visible" "1" "enabled" "1" "tabPosition" "0" } "SysMenu" { "ControlName" "Menu" "fieldName" "SysMenu" "xpos" "0" "ypos" "0" "wide" "64" "tall" "24" "autoResize" "0" "pinCorner" "0" "visible" "0" "enabled" "0" "tabPosition" "0" } "MapInfoHTML" { "ControlName" "HTML" "fieldName" "MapInfoHTML" "xpos" "244" "ypos" "116" "wide" "316" "tall" "286" "autoResize" "0" "pinCorner" "0" "visible" "1" "enabled" "1" "tabPosition" "0" } "joinTeam" { "ControlName" "Label" "fieldName" "joinTeam" "xpos" "76" "ypos" "22" "wide" "450" "tall" "48" "autoResize" "0" "pinCorner" "0" "visible" "1" "enabled" "1" "labelText" "#TM_Join_Team" "textAlignment" "west" "dulltext" "0" "brighttext" "0" "font" "MenuTitle" } "mapname" { "ControlName" "Label" "fieldName" "mapname" "xpos" "244" "ypos" "72" "wide" "180" "tall" "24" "autoResize" "0" "pinCorner" "0" "visible" "0" "enabled" "1" "labelText" "" "textAlignment" "west" "dulltext" "0" "brighttext" "1" } "jointeam2" { "ControlName" "Button" "fieldName" "jointeam2" "xpos" "76" "ypos" "116" "wide" "148" "tall" "20" "autoResize" "0" "pinCorner" "2" "visible" "1" "enabled" "1" "tabPosition" "3" "labelText" "#TM_Join_Team_2" "textAlignment" "west" "dulltext" "0" "brighttext" "0" "command" "jointeam 2" } "jointeam3" { "ControlName" "Button" "fieldName" "jointeam3" "xpos" "76" "ypos" "148" "wide" "148" "tall" "20" "autoResize" "0" "pinCorner" "2" "visible" "1" "enabled" "1" "tabPosition" "4" "labelText" "#TM_Join_Team_3" "textAlignment" "west" "dulltext" "0" "brighttext" "0" "command" "jointeam 3" } "autojoin" { "ControlName" "Button" "fieldName" "autojoin" "xpos" "76" "ypos" "212" "wide" "148" "tall" "20" "autoResize" "0" "pinCorner" "2" "visible" "1" "enabled" "1" "tabPosition" "1" "labelText" "#TM_Auto_Join" "textAlignment" "west" "dulltext" "0" "brighttext" "0" "command" "jointeam 0" "Default" "1" } "jointeam1" { "ControlName" "Button" "fieldName" "jointeam1" "xpos" "76" "ypos" "244" "wide" "148" "tall" "20" "autoResize" "0" "pinCorner" "2" "visible" "1" "enabled" "1" "tabPosition" "2" "labelText" "#TM_Join_Team_1" "textAlignment" "west" "dulltext" "0" "brighttext" "0" "command" "jointeam 1" } "CancelButton" { "ControlName" "Button" "fieldName" "CancelButton" "xpos" "76" "ypos" "276" "wide" "148" "tall" "20" "autoResize" "0" "pinCorner" "2" "visible" "1" "enabled" "1" "tabPosition" "0" "labelText" "#TM_Cancel" "textAlignment" "west" "dulltext" "0" "brighttext" "0" "Command" "vguicancel" } } Menu command The only thing missing to show the menu is a command to bring it up on screen. '''cl_dll/baseviewport.cpp''' already implements a command that you can enter from the console to show any panel: CON_COMMAND( showpanel, "Shows a viewport panel " ) but making a team menu command is no big deal: CON_COMMAND( chooseteam, "Opens a menu for teamchoose" ) { if ( !gViewPortInterface ) return; gViewPortInterface->ShowPanel( "team", true ); } With this code, if you enter the command "chooseteam" in the console it will show the menu. Localization If you do not already have a _.txt in your mod's resource folder, you will need to create one. The example below has been extracted from the HL2MP cache and modified to include team names from the team menu. Save it as _english.txt into your mod's resource folder. If you already have a _english.txt file, you can copy just the team parts from the example below and paste them into your existing file. "lang" { "Language" "English" "Tokens" { "TM_Join_Team" "Join a Team" "TM_Join_Team_1" "Spectate" "TM_Join_Team_2" "Team Combine" "TM_Join_Team_3" "Team Rebels" "TM_Auto_Join" "Auto-Select" "TM_Cancel" "Cancel" "hl2_AmmoFull" "FULL" "HL2_357Handgun" ".357 MAGNUM" "HL2_Pulse_Rifle" "OVERWATCH STANDARD ISSUE\n(PULSE-RIFLE)" "HL2_Bugbait" "PHEROPOD\n(BUGBAIT)" "HL2_Crossbow" "CROSSBOW" "HL2_Crowbar" "CROWBAR" "HL2_Grenade" "GRENADE" "HL2_GravityGun" "ZERO-POINT ENERGY GUN\n(GRAVITY GUN)" "HL2_Pistol" "9MM PISTOL" "HL2_RPG" "RPG\n(ROCKET PROPELLED GRENADE)" "HL2_Shotgun" "SHOTGUN" "HL2_SMG1" "SMG\n(SUBMACHINE GUN)" "HL2_SLAM" "S.L.A.M\n(Selectable Lightweight Attack Munition)" "HL2_StunBaton" "STUNSTICK" "HL2_357Handgun_Menu" ".357 MAGNUM" "HL2_Pulse_Rifle_Menu" "PULSE RIFLE" "HL2_Crossbow_Menu" "CROSSBOW" "HL2_Crowbar_Menu" "CROWBAR" "HL2_Grenade_Menu" "GRENADE" "HL2_GravityGun_Menu" "GRAVITY GUN" "HL2_Pistol_Menu" "9MM PISTOL" "HL2_RPG_Menu" "RPG" "HL2_Shotgun_Menu" "SHOTGUN" "HL2_SMG1_Menu" "SMG" "HL2_SLAM_Menu" "S.L.A.M" "HL2_StunBaton_Menu" "STUNSTICK" "ScoreBoard_Player" "%s1 - %s2 player" "ScoreBoard_Players" "%s1 - %s2 players" "ScoreBoard_Deathmatch" "Deathmatch" "ScoreBoard_TeamDeathmatch" "Team Deathmatch" "Playerid_sameteam" "Friend: %s1 Health: %s2" "Playerid_diffteam" "Enemy: %s1" "Playerid_noteam" "%s1 Health:%s2" "Team" "Team %s1" "Game_connected" "%s1 connected" "Game_disconnected" "%s1 has left the game" "Cannot_Be_Spectator" "This server does not allow spectating" } } If you take a look back at '''TeamMenu.res''', you'll find labeltexts beginning with a '#'. These labels are replaced by their counterparts in the translation file, only without the '#'. For example '''#TM_Join_Team''' will be replaced with "Join a Team" if your Steam language is set to English. For other languages you need to create additional files, like '''_german''', '''_french.txt''', etc. Using the example TeamMenu.res provided above, you can also display an HTML mapinfo file next to the team menu. Just create a simple HTML file and fill it with map information. It should be named _.html and be located in /resource/maphtml/ Forcing teamplay We will now force HL2MP to be teamplay only. It will use the existing teams (Combine and Rebels) but you will learn how to change the team names if you wish. First ensure that you have 3 teams defined in '''shareddefs.h''': #define TEAM_INVALID -1 #define TEAM_UNASSIGNED 0 #define TEAM_SPECTATOR 1 TEAM_COMBINE and TEAM_REBELS should '''NOT''' be defined here unless you are modding a from-scratch Source SDK. Here we are modding a deathmatch SDK, so the two teams have already been defined in an enum in hl2mpgamerules.h Now open '''teamplay_gamerules.cpp''', and in the function '''bool CTeamplayRules::ClientCommand( CBaseEntity *pEdict, const CCommand &args )''' ''below'' the lines if( BaseClass::ClientCommand( pEdict, args ) ) return true; add these lines: if (pEdict->IsPlayer() && static_cast(pEdict)->ClientCommand(args)) { return true; } to make the link between the call of CTeamMenu::OnCommand and CBasePlayer::ClientCommand In player.cpp, in the function '''bool CBasePlayer::ClientCommand(const char *cmd)''' we will add else if ( stricmp( cmd, "jointeam" ) == 0 ) //start jointeam { if ( args.ArgC() < 2 ) return true; int team = atoi( args.Arg(1) ); //don't do anything if you join your own team if ( team == GetTeamNumber() ) return true; //auto assign if you join team 0 if ( team == 0 ) { if ( g_Teams[TEAM_COMBINE]->GetNumPlayers() > g_Teams[TEAM_REBELS]->GetNumPlayers() ) team = TEAM_REBELS; else team = TEAM_COMBINE; } if ( !IsDead() ) { if ( GetTeamNumber() != TEAM_UNASSIGNED ) { CommitSuicide(); IncrementFragCount(1); //adds 1 frag to balance out the 1 subtracted for killing yourself } } ChangeTeam( team ); return true; } //end jointeam Notice the test to check if the player belongs to TEAM_UNASSIGNED, this removes the dead body falling on the floor when we connect to a team. Also to make it compile correctly add this include in '''player.cpp''' to the top of the file #include "hl2mp_gamerules.h" And finally, to force teamplay to be on all the time (even if mp_teamplay is set to 0), go into '''hl2mp_gamerules.cpp''' and search for the line m_bTeamPlayEnabled = teamplay.GetBool(); and replace it with m_bTeamPlayEnabled = true; Final Touches We're almost done, but there are first a few more things we need to do. First, we need to force players to join Spectator as soon as they connect to the server. In '''hl2mp_player.cpp''', find function '''void CHL2MP_Player::PickDefaultSpawnTeam( void )''' and comment the entire function. Add this in its place: void CHL2MP_Player::PickDefaultSpawnTeam( void ) { if ( GetTeamNumber() == 0 ) { if ( HL2MPRules()->IsTeamplay() == false ) { if ( GetModelPtr() == NULL ) { ChangeTeam( TEAM_UNASSIGNED ); } } else { ChangeTeam( TEAM_SPECTATOR ); } } } Now find function '''void CHL2MP_Player::Spawn(void)''' and add this to the end of that function: if ( GetTeamNumber() != TEAM_SPECTATOR ) { StopObserverMode(); } else { //if we are a spectator then go into roaming mode StartObserverMode( OBS_MODE_ROAMING ); } And lastly we will make the team menu pop up after the MOTD is closed, so the player knows to join a team. In '''hl2mp_client.cpp''' find the line pPlayer->ShowViewPortPanel( PANEL_INFO, true, data ); and add this ABOVE it: pPlayer->ShowViewPortPanel( PANEL_TEAM, true, data ); (This is really just a temporary solution; what it does is bring up the team menu, then brings up the motd over it so when you close the motd the team menu appears to pop up) One final thing: If you want to change the names of your teams (Combine and Rebels), you do not need to change the predefined names. If you're modding the Deathmatch SDK and decided to change TEAM_COMBINE and TEAM_REBELS in hl2mp_gamerules.h to something else, you'd have to find all instances of those everywhere in the SDK and change them to match, which is a big hassle and likely to cause problems. You can change the visual name appearance of your teams very easily in the file '''hl2mp_gamerules.cpp''' by searching for the line char *sTeamNames[] = Below it you will notice the names for all the defined teams, including Combine and Rebels. Just change those names and your teams will appear with the names on the scoreboard and HUD. To change their names in the Team Menu that we created earlier, you can edit _english.txt. To add a [[bind|bindable]] key to the key configuration list, just open kb_act.lst in your mod's script directory, and add this to the bottom: "chooseteam" "Choose Team" Awesome, thanks for the tutorials! Hey no problem. These are actually mirrors of the tutorials that were lost some time back and thrown on the VDC. You can find some others on there as well. developer.valvesoftware.com/wiki/Main_Page I keep getting this error \game_controls\teammenu.cpp(404) : error C2059: syntax error : '(' not sure why I am not a coder and really cant see what is wrong with this code line engine->ClientCmd( const_cast( command ) );
http://www.moddb.com/engines/source/tutorials/creating-teams
CC-MAIN-2017-47
en
refinedweb
A fast and minimal view router written in react which responds to hash navigation. React basic router is just javascript, it can be webpacked or required as you like. This is how you can install it. npm install --save react-basic-router In this eample, I assume "About" and "MainView" are react components. To load them, simply navigate normally to one of the hash links. import React from 'react';import { Router, Route, ErrorRoute } from 'react-basic-router';class App extends React.Component {render() {return(<div><Router><Route hash="#/" component={MainView} absolute /><Route hash="#/about" component={About} title={"Hey look, I can pass props"} /><ErrorRoute component={MissingFilePage} /></Router></div>);}} To build the project, run the following commands npm installnpm install -g gulpgulp SharpCoder aka Joshua Cole
https://www.npmjs.com/package/react-basic-router
CC-MAIN-2017-47
en
refinedweb
28 March 2011 By clicking Submit, you accept the Adobe Terms of Use. To make the most of this tutorial you’ll need previous experience building applications with Adobe Flash Builder as well as some knowledge of development techniques using .NET and Microsoft Visual Studio. Intermediate In this tutorial you’ll learn about Remote Shared Objects (RSOs) and how to use them from either the client or server side. You’ll also develop a small application based on a simple word game that will access and modify RSOs using from client-side (ActionScript) and server-side (C#) code using the WebORB Integration Server to marshal the communications between client and server. You’ll need to have an IIS server running and WebORB installed on your server. RSOs track, store, share, and synchronize data between multiple client applications. An RSO is an object that lives on the server. It resides in the scope of a messaging application that clients connect to. More than one client can connect to an RSO, and all of them will have access to the data in the RSO. WebORB, in this case, is responsible for managing the RSO and providing access to it for various clients. RSOs are particularly useful when they are used on several clients at the same time. When one client changes data that updates the RSO on the server, the server sends the change to all other connected clients, enabling you to synchronize many different clients with the same data. RSOs can also be updated and accessed by the server, giving developers more options for application development. To sum it up, RSOs can be used to: In this tutorial, you will use RSOs to create a simple online version of the Add-a-word game. The object of this game is to add a word to a sentence, one user at a time, and eventually come up with a very long sentence (that still makes sense). Consider the following example: The server-side code for this application keeps track of all connected users, assign turns, and add words to the sentence. Note: The code snippets included in the steps below are not complete; rather they are used to illustrate the main concepts in the server-side implementation. For the complete code, see WeborbSharpRSO.cs in the sample files for this tutorial. Follow these steps to create the server-side DLL: namespace WeborbSharpRSO { public class WeborbSharpRSO : ApplicationAdapter { } } The ApplicationAdapter class has several methods that let you know when an application is started, when a new room is created, and when a client connects or disconnects. An application can have several rooms running simultaneously. Each room may have one or more clients connected and sharing information. Clients connected to one room share information only with other clients in the same room. For this example you’ll use the following two methods to detect when a client joins or leaves a room: public override bool roomJoin(IClient client, IScope room) public override void roomLeave(IClient client, IScope room) public string sharedObjectName = "addWord"; roomJoin()method, first check if the user can connect to the room. Then check if the room has the Remote Shared Object. If not, create it and add a SharedObjectListener to it. This listener will detect any changes in the RSO. public override bool roomJoin(IClient client, IScope room){ if (base.roomJoin(client, room)) { ISharedObject so; if (!hasSharedObject(room, sharedObjectName)){ createSharedObject(room, sharedObjectName, false); so = getSharedObject(room, sharedObjectName); so.addSharedObjectListener(new MySharedObjectListener()); } else so = getSharedObject(room, sharedObjectName); … } } Note: The code in roomJoin could also be placed inside of the appStart method. The next step is to check and update the values of some attributes of the RSO, such as the number of users connected and whose turn is next. These attributes hold all the information you want to share across clients. For example, your code will get the existing number of users connected to the room from the shared object (SO) and then increment that number by one. This value is then written back to the SO: if(so.hasAttribute("totalUsers")) totalUsers = so.getLongAttribute("totalUsers"); totalUsers++; … so.setAttribute("totalUsers", totalUsers); In addition to totalUsers, the RSO shares the following data: userList: List of all the clients connected to the same room currentUser: id and name of the client whose turn it is to add the next word sentence: the actual sentence being formed word: the last word submitted Each client that connects to a room is assigned unique ID based on the connection order. roomJoin()function, add the following code, which sends this ID to the client using the invokeClientsmethod (explained in more detail below): object[] args = new Object[] {client.getId()}; invokeClients("SetUserId", args, client.getConnections(room)); public class MySharedObjectListener : ISharedObjectListener { } This class provides methods that can be used to check for different types of changes on the shared object. This tutorial focuses on the onSharedObjectUpdate() method, which is used to check for changes on the remote object. A change on the word attribute of the RSO invokes a method that will process the information and set the next user. onSharedObjectUpdate()as follows: public void onSharedObjectUpdate(ISharedObjectBase so, string key, object value){ if (key == "word") WeborbSharpRSO.nextUser(so, value.ToString()); } The nextUser() method is called when there’s a new word to add to the sentence. Even though you could modify the RSO directly in this function, do not use this approach. When the server-side code changes the RSO during a client-side request, the client that initiated the request does not get the changes added by the server. The server-side accumulates all the changes as independent events. The accumulated events are sent out to the original sender first and then to all other subscribers. Each event has a corresponding RSO version number. If there are multiple change events all going out with the same version number, Flash Player lets only the first one through. This will cause the client that initiated the request to get only the changes it requested, not the changes added by the server to the RSO on the same request. public static void nextUser(ISharedObjectBase so,string word) { ThreadPool.QueueUserWorkItem(ChangeCurrentUser, new object[] {so, word}); } ChangeCurrentUser()function to actually change the RSO on the server. Here’s where you add the word to the sentence and write it back to the RSO: public static void changeCurrentUser( object state ){ … if( word != "" ){ string text = ""; if( so.hasAttribute("sentence") ) text = so.getStringAttribute("sentence"); so.setAttribute("sentence", text += " " + word); } … } so.setAttribute("currentUser", newCurrentUser.ToString() ); roomLeave()method to handle clean-up tasks when a user leaves the room. This method updates the list of clients still playing the game. Note the three-step process: the content of the userList attribute is retrieved and set to the users dictionary, the client that is leaving the room is removed from the dictionary, and then a copy of the content of this object is put into a newUsers dictionary. The reason behind this awkward process is that the server won’t detect the change on the original users object when an element is removed. As a result, if you send back the same object to the RSO, the changes will not be sent to the clients. users = so.getMapAttribute("userList"); … if (users.Contains(client.getId())) users.Remove(client.getId()); newUsers = new Dictionary<string, string>(); foreach (DictionaryEntry de in users) newUsers[de.Key] = de.Value; … so.setAttribute("totalUsers", totalUsers); invokeClients()to the main class. This function uses the connection.invokemethod to call ActionScript methods on the client. You will need to pass the name of the method to call, any needed parameters, and the list of client connections. The name used in the functionNamevariable must exist as a function name in the client application: private void invokeClients(string functionName, object[] args, IList<IConnection> ILconn ){ foreach(IConnection conn in ILconn){ ((IServiceCapableConnection)conn).invoke(functionName, args); } } A detailed explanation of the invoke() method is outside of the scope of this tutorial. For more information about this method please consult the WebORB documentation or see Invoke ActionScript functions from .NET. You’ll need to configure WebORB before your application will work. Specifically, you need to add a messaging application so WebORB is aware of its existence and can manage the user connections. Follow these steps: Open the WebORB management console (see Figure 1) using a web browser. If you installed WebORB using the default settings, the console is available at: Note: If the MyRSO application doesn’t show up under the list of applications, but you can see the folder in the hard drive, you may need to restart IIS and then reload the WebORB console. If you cannot see the MyRSO folder in your hard drive you may need to check your permissions. For more information on permissions, refer to the WebORB installation and deployment documentation available through the Help/Resources tab of the WebORB console. With the server-side code in place, you’re ready to open Adobe Flash Builder and develop the ActionScript code. Note that you could have made many of the changes to the RSO directly with ActionScript using WebORB’s own SharedObjectsApp messaging application. This tutorial used a more convoluted path to illustrate RSO access and modification from the server. Note: The code snippets included in the steps below are not complete; rather they are used to illustrate the main concepts in the client-side implementation. For the complete code, see WeborbRSO.mxml in the sample files for this tutorial. To create your ActionScript code, follow these steps: <s:states> <s:State <s:State </s:states> TitleWindowwith two textInputcontrols (Room Name and User Name) and a Connect button: <s:TitleWindow <s:layout> <s:VerticalLayout </s:layout> <s:HGroup <s:Label <s:TextInput </s:HGroup> <s:HGroup <s:Label <s:TextInput </s:HGroup> <s:Button </s:TitleWindow> HGroupthat displays a list of the connected users on the left, the sentence the users are forming on the top right, and a textareacontrol where the users can type the next word at the bottom right: <s:HGroup <s:TitleWindow <s:List <!-- We use an item renderer to color red the name of the current user --> <s:itemRenderer> <fx:Component> <s:ItemRenderer <s:Label </s:ItemRenderer> </fx:Component> </s:itemRenderer> </s:List> </s:TitleWindow> <s:TitleWindow <s:layout> <s:VerticalLayout </s:layout> <s:TextArea <s:HGroup <s:TextInput <s:Button </s:HGroup> </s:TitleWindow> </s:HGroup> The Connect button in the login state invokes the onConnect() function. This function connects to the server and obtains the remote shared object. onConnect()to the block: public function onConnect():void{ roomName = txtRoomName.text; userName = txtYourName.text; SharedObject.defaultObjectEncoding = ObjectEncoding.AMF0; /** * Establish connection * */ nc = new NetConnection(); nc.client = this; nc.objectEncoding = ObjectEncoding.AMF0; nc.addEventListener( NetStatusEvent.NET_STATUS, onNetStatus ); nc.connect( urlServer + "/" + weborbApplicationName + "/" + roomName ); /** * Get Remote Object * */ so = SharedObject.getRemote( sharedObjectName, nc.uri, false ,false); so.client = this; so.addEventListener( SyncEvent.SYNC, onSync ); so.connect( nc ); } The server URL, name of the WebORB application, and name of the remote object are previously declared in variables. The name of the chat room is obtained from the login window. Users can login to different chat rooms to work on different sentences. public function setUserId(userId:String):void { this.userId = userId; this.currentState="game"; setName = true; } onSync()function, which is called each time there’s a sync event on the remote shared object. Use this function to set the client’s name the first time they login. Also use this function to enable/disable the button to submit text as well as to update the list of users logged in. Each time there is a change, this function creates the list and sets the id of the user whose turn it is. (See WeborbRSO.mxml for the full implementation.) onSendText()function, which send the word to the server. This function is called when the user clicks the Add Word button. It simply sets the wordproperty on the shared object. public function onSendText():void { so.setProperty("word",txtAddText.text); txtAddText.text = ""; } This add-a-word game is just a simple application that showcases some of the possibilities of Remote Shared Objects. Of course, there are several ways to improve this application. To start with, you could limit the content sent by a user to just one word, and not allow any number of words. You could also use the RSO to keep track of several sentences at a time inside the same room if you wanted. This tutorial demonstrated that RSOs can be used in many different situations, ranging from sharing information between clients to managing and synchronizing real-time online games. Now that you’re familiar with them, you can adapt these techniques for your own applications. You can try this application at Anden Solutions. For more information about WebORB for .NET visit its overview page. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe.
http://www.adobe.com/devnet/flex/articles/flex-dotnet-remote-shared-objects.html
CC-MAIN-2016-07
en
refinedweb
Config::Framework - handy one-stop shopping for (most) of your configuration file needs. #create a new object, load two configuration files and decrypt the passwords. my $Object = new Config::Framework( Files => ['ApplicationCfg.xml','UserCfg.xml'], GetSecure => 1 ) || die $Config::Framework::errstr; #change some data in one of the loaded configs $Object->{'UserCfg'}->{'backgroundColor'} = '#00CCFF'; #write that change back to the file you loaded it from $Object->WriteConfig(configNamespace => "UserCfg") || die $Object->{'errstr'}; #Define a new configuration namespace %{ $Object->{'newConfig'} } = ( 'configNamespace' => "newConfig", 'protectNamespace' => 1, 'Version' => 1, #arbitrary data keys follow 'backgroundColor' => '#006699', 'getRecords' => 10, 'followLinks' => 1, 'someThing' => "in a bag" ); #Write your new configuration data out to an encrypted file #under the application's ApplicationFramework directory $Object->WriteConfig( configNamespace => "newConfig", Encrypt => 1, Crypt => "Rijndael", Key => "l33tp4sw3rd" File => "$Object->{'FrameworkDir'}/newConfig.xml" ) || die $Object->{'errstr'}; At long last I have decided to re-write the documentation for Config::Framework, in a manner which should be comprehensible by people other than myself. I would like to offer my sincerest appologies to anyone who tried to comprehend the sprawling stream-of-consiousness rant that was the previous 'documentation'. I'm sorry, I wasn't trying to make you insane. Now on with the show. Ok so what is Config::Framework? It's a handy module for dealing with situations where you need your program to be able to load external data from a file that controls how your program operates. I'm talking about configuration files here. So what do you do in a situation like that? Well you figure out a format to store your configuration parameters in, then write routines to read that format and put it some sort of meaningful data structure, and to write data from the data structure back out to the file. Wouldn't it be nice if someone defined a standard config file format so that you wouldn't need to write your own parser? Well in the end, all a config file really is, is an arbitrary data structure expressed in in ascii. A standard way of serializing data structures in ascii you say? That sounds a bit like XML! Well the Data::DumpXML module will serialize perl data structures into XML and restore them for you, and you could certainly dump that to a file easily enough. Ok getting to the point. The main thing that Config::Framework does for you is to define a standard data structure (or at least some standard guidelines for your data structure) and then front-ends Data::DumpXML so that you can arbitrarily dump and restore these data structures to files. While we're at it, Config::Framework aspires to be your 'one-stop-shop' for config-type-stuff, by helping you stay organized in the way you handle external configuration data. When it comes to your program needing to load external files, things can quickly get messy. Config::Framework helps you stay organized with this by defining a directory structure that you can use for all of your programs. Config::Framework defines something called the 'virtual root'. This is a master directory underneath which all of your programs, and all of the things that they might need to load or operate correctly would live. When you first build the Config::Framework module, the Makefile.PL will prompt you to enter a directory to use for your Virtual Root. This will be the default Virtual Root for all objects that you create. That is whenever you create a new Config::Framework object, unless you specify a different Virtual Root (via the 'v_root' option) the directory you entered when you were building the module will be your Virtual Root. Let me give you a quick example of why this is important. I had a rather large group of perl programs, some of which I had written, others which i had inherited from others. All of them lived beneath a certain directory structure which was allocated for my department by the unix help-desk. Almost all of these programs had one file or annother which they would need to load to run, all of the paths were hard-coded. One day help-desk decided to change our directory. Mass confusion, and one REALLY LONG night of me picking through all of the programs trying to change hard-coded file paths ensued. Had I been using Config::Framework, all I would have needed to do was rebuild the module with a new virtual root. Well that's the story of why I started development on this module. Aaaah yes, those were good times ... Under the Virtual Root, Config::Framework defines a subdirectory, beneath which it is presumed you will be keeping all of your configuration files. This is referred to as the "Config Location" or $object->{'config_loc'}. As with the Virtual Root, you will be prompted for a default config location when you build the module. You can override the default config location by specifying the 'config_loc' option at instantiation. When you load a configuration file, Config::Framework will look for it here first (unless otherwise specified). Generally speaking, you should put "Global" configuration data here, that is, configuration data that all programs might use. Beneath the Config Location, Config::Framework expects to see a subdirectory named "ApplicationFrameworks". Beneath that should be subdirectories for various programs. For instance, if I had a program called "Skeletor" and it needed a configuration file called "Skeletors_Config.xml", a gif called "trogdor.gif", and a berkley DB file called Skel.db, I would create a directory under the Virtual Root in the Config Location under the ApplicationFrameworks directory called Skeletor, and I'd put Skeletors_Config.xml, trogdor.gif, and Skel.db in there. Sounds confusing. It isn't here's annother go at that: Virtual Root = /prod Config Location = config cd /prod/config/ApplicationFrameworks/Skeletor; ls; Skeletors_Config.xml trogdor.gif Skel.db Got it? Ok so every program that needs to load specific external files should have a directory beneath the ApplicationFrameworks directory that corresponds to it's program name. In that directory is where you should put everything that the program might need to load. Things that multiple programs might need to load should be put at the root level of the config location. When you load a config file with Config::Framework, the second place it looks (after the config location) is the subdirectory of ApplicationFrameworks that corresponds to the program's name. Ok so that pretty much lays out how things are kept organized in the filesystem. How about organizing access to all these gobbs of data? I'm glad you asked, 'cause I was gonna tell you anyhow. You might recall that I mentioned earlier that the heart of Config::Framework was a standard data structure for your configuration data which is serialized and stored in a file, and then miraculously restored to perl data-structure-hood via the Data::DumpXML facility. Before I lay out exatly what that standard data structure is (well actually it's pretty much just some guidelines), let me tell you about a really cool name I came up with: "configNamespace". This really cool, complicated sounding thing is a word I came up with to describe the concept of having more than one set of configuration data resident in the Config::Framework object at one time. For instance, I may have a global configuration file that holds some general purpose data like the hostname of my oracle server and the port that my SSL server listens on. Stuff that lots of programs might want to know. In annother config file, I might have some application specific data like the maximum number of multiple process for this program I want to have running at once, and a regular expression matching record numbers for a certain database. I would want to have them both loaded with my Config::Framework object at the same time, so I needed to come up with a way that I could differentiate between the two sets of config data. So what I did was to make each config file define a 'configNamespace' under which it resides in the Config::Framework object. That's a fancy ways of saying it's a string that happens to be a hash key in the object. This is a key concept for using Config::Framework. For instance, back in my example, I could make the global config file define the configNamespace 'global', and the application specific file define the configNamespace 'myApp'. So to access the global data I would look under $object->{'global'} and for the app specific data I would look under $object->{'myApp'}. Well like I said this is more of a guideline than it is a standard. Only the mandatory parts are, well mandatory. The rest is just a suggestion. For instance, you don't need to include information about the autors or module dependancies, However, if you WANTED to include that information, this is how I suggest that you do it. Ok, strap on your perl hats, here we go: The data structure is basically a giant hash: %DATA = ( ## Mandatory Information ######################## #the configNamspace to load this data under in the Config::Framework object 'configNamespace' => $configNamespace, #throw an error if this configNamespace is attempted to be overwritten (by loading #annother config with the same configNamespace) if set to non-zero value. 'protectNamespace' => 1 | 0, #revision of this configuration 'Version' => $version_number, ## Optional Config Meta Data #################### #date corresponding to the last revision of this configuration 'Date' => $date_in_epoch_format, #automatically load these files and nest their configNamespaces underneath this one. 'children' => [$file_name, $file_name, ... ] ## Author Data ################################## #name of the lead developer for this project 'Lead Developer' => $lead_developer_name, #lead developer's email address 'Lead Developer Email' => $lead_developer_email, #the others in the development team 'Developers' => [ 'array','of','other','developers' ], ## Program Specific Keys ######################## #would go here. Any goofy thing you want. ); And that's pretty much it. Passwords are one special kind of config data that programs frequently need to load. For instance, do you have a program that needs to talk to a database? How about one which needs to talk to an SSL Website? Well then it probably needs to have a username and password. It's kind of bad form to have passwords and usernames hard-coded into programs. Especially if you have lots of programs, then you have both the nightmare of updating all of the hard-coded passwords in each program when the password gets changed, as well as the security risk of having a password in perhaps tens or hundreds of individual files. One option is to stick those usernames and passwords in a configuration file or course, so that many programs can access the same file. However, you've still got your passwords hanging out 'in the nude' in a file somewhere waiting to be discovered. Config::Framework provides some built-in options to help you, if not eliminate, to at least to mitigate that risk. Config::Framework knows how to decrypt a file encrypted with any of the Crypt::* modules which is Crypt::CBC compliant. When you specify the 'GetSecure' option at object instantiation, Config::Framework knows to look for a file called 'passwds.xml' located at the root level of the config directory. When you build Config::Framework, the Makefile.PL will ask you for a Crypt::* module to use to and a passphrase to use to decrypt and encrypt this file. Sure, the passphrase is still 'in the nude' somewhere buried in your perl distributions lib/ directory, and theoretically, someone could go digging through that directyory, and find the passphrase, then use it to get all of the passwords in your passwds.txt. However, it's better than nothing. Like I said this mitigates the risk a bit, it dosen't eliminate it. At the moment there really aren't any good systems available to perl to handle passwords securely. At least this way, you have your password access abstracted a bit, so when something like that comes along, we can add support to Config::Framework. Ok something that's not config file related, but is none-the-less important, and so is therefore included in the one-stop-shop that is Config::Framework is the ability to keep log files, and to let someone know when something bad has happened with your program. When you build Config::Framework, you are prompted to enter the email address of someone who should be notified when a program using Config::Framework dies unexpectedly. This is a bit misleading. An alert will not be sent to this address automatically, you must catch your own exceptions and call the AlertAdmin function with your alert message. As with all of the other default parameters gathered durring the build process, this address can be overridden at object instantiation. To override the admin address send a new email address on the 'admin' object at instatiation. Config::Framework also provides support for appending messages to alert files via the 'Log' method. This creates a new Config::Framework my $object = new Config::Framework( [options] ) || die $Config::Framework::errstr This is the 'name' of the program. This defailts to the name of the executable file if you don't specify it explicitly. This is the 'Virtual Root': the directory under which all of the external things your program will need to live happily reside. (See above section: 'Virtual Root'). If not specified explicitly, this value defaults to the virtual root specified when the module wad built. This is the directory beneath 'v_root' which contains all Config::Framework loadable config files as well as the ApplicationFramework directories (see above section 'The Config Location'). If not specified explicitly, this value defaults to the 'config_loc' specified when the module was built. This is the path to the sendmail executable, which we pipe directly to when sending alerts via the AlertAdmin method. If not explicitly defined, this value defaults to the path to sendmail given when the module was built. This is the email address of the person whom we should send email alerts to by deefault when the AlertAdmin method is called. If not explicitly defined, this value defaults to the admin address given when the module was built. This is the Crypt::* module to use when encrypting or decrypting encrypted configuration files, such as the infamous passwds.txt. (See the 'Passwords' section above). Keep in mind that whatever Crypt::* module you specify must be Crypt::CBC compliant, and of course, you must have it installed already! If not explicitly defined, this value defaults to the Crypt::* module specified when the module was built. This would be the passphrase to use when encrtpying or decrpting encrypted configuration files, again, such as that infamous passwds.txt (see 'Passwords' section above). If not explicitly defined, this value defaults to the passphrase given when the module was built. This is a sticky-wicket, thrown in for backward compatibility. This is an array conaining a list of strings which are data keys which you would like to have exported to the shell environment. There are 5 default members of this list: 'SYBASE', 'ORACLE_HOME', 'ORACLE_SID', 'ARTCPPORT', 'LD_LIBRARY_PATH' This means that if you happen to have defined any of these options, either explicity at instantiation, or through the build process as a default option, the values associated with these options will be exported to your sheel environment. Mucking with this is hardly ever worthwhile, if you're looking for a quick and easy way to export stuff to the shell environment, check out the 'Export' option. This is a hash refrence of variable names and values that you would like to have exported to the processes shell environment. For example to set the 'BLAH' variable to "ain't it grand?", you could do this: $object = new Config::Framework( Export => { 'BLAH' => "ain't it grand?" } ); if you have a sybase client libraries installed, and you would like to set the environment variable SYBASE to this value, you can specify it here and it will be exported. (it's part of the default 'EnvExportLIst'). If you don't explicity define this, and you did define it duirring the build process, it will be exported to your shell environment by default. This is meant to be the path to the Sybase client library distribution. the path to the oracle client librarry distribution. Like SYBASE, this is in the default EnvExportList, so if you define it, expect for it to be exported to the processes shell environment the SID of an oracle database you would like to connect to. A default member of EnvExportList (see above). the port that you would like to talk to a Remedy ARS server on. A default member of EnvExportList (see above). linker library path. believe it or not, you need to have this defined for a whole lot of things to run correctly under *nix. So if you defined a library location under v_root when you were building the module, this will be exported to the processes shell environment. You can also, obviously specify it explicitly here. If set to a non-zero value, this will cause Config::Framework to automatically load and decrypt the encrypted config file 'passwds.txt' under v_root/config_loc. This is a list of config files to load before returning the object. Well this can either be a string containing one file name, or an array reference contaiing multiple file names. Keep in mind that each file must define it's own unique configNamespace for this to work correctly. if set to a non-zero value will automatically load all child configs specified in any config that is loaded into the object. The default value for this object is 1. To override this behaviour, just set the option to 0. This loads a configuration file into the object under the configNamespace specified in the file. If there is already data loaded under this configNamespace in the object, and the protectNamespace option is set in the existing config data, an error will be thrown. $object->LoadConfig(File => $file_name) || die $object->{'errstr'}; this should be a string containing the filename of the file containing configuration data that you would like to load. The file is looked for in the following locations: IF the file exists as you have specified it (that is if you specified the full path to some file, and it exists there) then the file will be loaded. Otherwise, if it exists in the root level of v_root/config_loc (the location for global configuration files) it will be loded from there. Else if it exists in v_root/config_loc/ApplicationFrameworks/$object->{'program'} (where $object->{'program'} is the name of the program (see 'new (constructor)' above), then it loaded from that directory. Lastly, if the file is not found in any of those locations, we look in the home directorry of the user executing the process (this is determined via $ENV{'HOME'}). Using this precendenc allowes a great deal of flexibility ... just remember to keep tour config file names unique! ;-) IF the file you are loading DOES NOT specify it's own configNamespace, you can specify one explicity in the function call using this parameter. This should be a string you would like to use for the configNamespace-less file you are loading. you may specify a parent namespace under which to nest the configNamespace of the file you are loading. For instance. If I have an application called 'Daleks' which has a config file with a configNamespace of 'Dalek' and a user preferences file which specifies the configNamespace 'usersDalekConfig', then I might do something like: $object->LoadCondfig( File => "usersDalekConfig.xml", Parent => "Dalek" ); this would load the user-specific config file UNDER the 'Dalek' configNamespace, so that I could access the user-specific data thusly $object->{'Dalek'}->{'usersDalekConfig'}->{'someKey'}; LoadConfig has the capability to decrypt and load config files encrypted via one of the CBC compliant Crypt::* modules. This option specifies the Crypt::* subclass that you would like to use to decrypt the specified config file (presuming it is encrypted). For instance, if you wanted to load the file "mySecretConfig.xml" which was encrypted using the Crypt::Rijndael module you would do something like: $object->LoadConfig( File => "mySecretConfig.xml", Crypt => "Rijndael", Key => $mySecretKey ) || die $object->{'errstr'}; NOTE: this option defaults to the Crypt::* subclass specified when the module was built, if not explicitly defined either at this function call or at object instantiation. (see Crypt option above). This is the passphrase to be used to decrypt the configuration file using the Crypt::* subclass specified on the 'Crypt' option. NOTE: this option defaults to the passphrase specified when the module was built, if not explicitly defined either at this function call or at object instantiation. This will write the data under some configNamespace in the object to the file it was loaded out of, or alternately to a different specified file. Encrypted data is handled transparently. $object->WriteConfig(configNamespace => "usersDalekConfig.xml") || die $object->{'errstr'}; This should be a string indicating the configNamespace that you want to dump back into the spcified file. Obviously, the configNamespace that you specify must already exist in the current object. This is the file that you want to write the data contained in the specified 'configNamespace' back out to. If this option is not specified explicitly, then the file from which the specified configNamespace was loaded is used. The same file location precidence that is used in LoadConfig is maintained here. That if the file as specified is not writeable, then we look first under v_root/config_loc, then v_root/config_loc/ApplicationFrameworks/$object->{'program'} and lastly in the user's home directory. if set to a non-zero value, this will cause the file which is being written out to be encrypted with either the specified Key and Crypt or the default options given when the module was built. NOTE: setting this option is not necessary if you are writing data back to a file which was encrypted when you originally loaded it, this option is only necesary if you are encrypting a file which was not previously encrypted, or if you are creating a new encrypted file. This should be the CBC compliant Crypt::* subclass that you would like to use encrypt the data. If not specified, this option defaults to the value givne when the module was built. For more information, see ReadConfig. This should be a string contining the passphrase you want to use to encrypt the data with the specified CBC compliant Crypt::* subclass. For more information, see ReadConfig. This function will load any specified file in the Data::DumpXML DTD. If a binary file is specified it is presumed to be encrypted. Encrypted files are decrypted using either a specified Crypt::* module and passphrase, or the default options specified when the module was built. Data is returned via a hash reference, and is NOT loaded directly into the object. $data = $object->LoadXMLConfig(File => "path/to/some/file.xml") || die $object->{'errstr'}; This is the backend to LoadConfig which handles inserting config data into the object under the correct configNamespace, and also handles child configs and nested namespaces. If you just want to get some raw data out of a file in the Data::DumpXML dtd, which might possibly be encrypted using a Crypt::* module which is CBC compliant, then this is the method you're looking for. again, this is a string containing the complete path to and name of the file you would like to load. No location precidence mathing occurs here, you must specify the entire path and file you want to load. This would be the CBC compliant Crypt::* subclass that you would like to use to decrypt the given file, presuming it is, in fact, encrypted. (See LoadConfig method) This would be the passphrase you'd like to use to decrypt the (presumably encrypted) config data using the CBC compliant Crypt::* sublcass you specified above. (see LoadConfig method) This will email an alert to the address specified by either the 'To' option, or the default address specified when the object was built. Additionally, the method can optionally copy the message to a group of addresses, log the message to a file, or call the die() routine. This is accomplished via a piped shell process which calls the sendmail binary. If we are unable to open a pipe to the sendmail process, as a last resort we will attempt to append the specified message to a logfile located at v_root/var/log/last_resort.log. $object->AlertAdmin( Message => "I can't log in to the database, bailing out!", Log => "copy/this/message/to/my/log/file.txt", Die => 1 ) || die $object->{'errstr'}; a string containing the address to send the 'Message' to, or alternately, a reference to an array containing a list of multiple addresses to use. If not explicitly specified, this option defaults to the admin address given when the module was built. a string (possibly a very long one) which contains the data you would like to send. if specified, this will cause the method to attempt to append the 'Message' to the specified log file, tagged with the current date and time. if set to a non-zero value, this will cause the program to terminate itself after sending the 'Message' if set to a non-zero value, this will cause the method to append the entire contents of the global %ENV hash to the end of the 'Message' This will append the given 'Message' to the specified 'Log' file. The log file is presumed to live beneath v_root somewhere. If you want to use system-wide loging locations, you might want to use sym-links to accomplish that. TO DO: add syslog support via Net::Syslog or Sys::Syslog so that messages can be logged to a remote syslog server or put in the machine's local syslog. That's far less ghetto than appending messages via the open >> method. $object->Log( Message => "hey, I wasn't able open that file, moving on to the next one!", Log => "path/under/v_root/to/my/log/file.txt" ) || die $object->{'errstr'}; this should be the complete path to (starting under v_root) and name of the file you want to put log messages in. if set to a non-zero value, this will cause the 'Message' to be warn'd to the console in addition to being appended to the file. (good for debug modes). this is the message you would like to have placed in the log file. The message will be prepended with the date and time in epoch format, enclosed in brackets. For instance the call above would result in something like this appearining in the file "path/under/v_root/to/my/log/file.txt": [1064852564]: hey, I wasn't able open that file, moving on to the next one! if set to a non-zero value, this will cause the process to terminate after writing the message to the specified log file.
http://search.cpan.org/~ahicox/Config-Framework-2.5/Framework.pod
CC-MAIN-2016-07
en
refinedweb
What is the best way to find and remove duplicate contacts, tasks, and notes in Outlook 2007? Hi if you export the contacts, edit them (fairly easy in excel) and import, outlook should identify the duplicates and merge them. to add the same notes to multiple contacts You’ll need to copy and paste. You could use in-cell editing and show the notes/message field- then you can paste without opening each item. turn on in line editing. You do it when in a table view, right click the column header, customise current view, other settings, allow in cell editing. After installing the free program, run Outlook and look for the newly added ODIR menu. Click it, then choose Remove Duplicate Items. Select the folder you want ODIR to scan; it will find duplicates and relocate them to a subfolder called “ODIR_Duplicate_Items” from where you can analyse delete the items that you don’t require any more. ODIR recognizes an item as a duplicate if all of the following properties match those of another item in the same folder: •Contact items: first name, last name, company name and email address •Appointment items: subject, location, start date and end date •Task items: subject, start date, due date and status •Note items: contents of the note (Body property) and colour – received emails: the Internet message ID (this is a unique identifier for each email received) – sent emails: email subject and the time the email is sent (PR_CLIENT_SUBMIT_TIME) – unsent emails: email subject only. if you would like for better support then try a shareware Anti-Dupe (14.95USD) You can also follow microsoft guide Alternative to ODIR is AODR (Accurate Outlook Duplicate Remover) The Supported Outlook version: Microsoft Outlook 2010/2007/2003. Try using a proper namespace definition for this property eg if you take the example from How to print all contact fields including the notes field To accomplish this, one solution is to add that field to your current view. This can be done by selecting the Contacts folder and choosing View -> Arrange By -> Current View –> Customize Current View and clicking the Fields button. You can now remove or add any fields as you need. There’s a drop-down menu under the prompt “select available fields from” and you may need to change this to see the field or fields you need to add (for example, the Notes field). Once the field is added, you can just print it out by selecting Print under the File menu or using the icon on the icon bar. Thank you for this… I did not see any way to confirm if the note field was the same in the contact? My contacts may be duplicated but the notes field is where the differences actually preside. Have you used ODIR Outlook Duplicate Items Remover 1.4.4? How accurate is it as it pertains to my notes concern above? How could I compare the notes field? Thank you Dave Hi ODIR Outlook Duplicate Items Remover 1.4.4 1.In MS Outlook 2007 as well as MS Outlook 2002, go to View menu and point to Current View. Now click the change the folder view to table type view. In MS Outlook 2003, point to Arrange By option in the View Menu, Point to Current View and select to change the folder view to a table type View. For example, use the following view and field combinations: View Field Calendar Active Appointments Contacts Phone List Inbox Messages Journal Entry List Notes Notes List 2.Right-click the column heading and select the Field Chooser 3.Right from the list at the top of the Field Choose, click to select the All fields 4.Drag the Modified field to the table heading 5.Now confirm that the duplicate items have a unique date from the original set of items. If the date is unique, select the Modified heading so that the items are sorted by this field. 6.Select the first item in the set that you wish to delete, go down to the last item in the set that you wanted to delete and then click the last item while you hold down the SHIFT key 7.Press DELETE key to permanently delete all selected items If you are looking for help, please ask a new question. We will be happy to help you!
http://www.makeuseof.com/answers/remove-duplicate-data-outlook-2007/
CC-MAIN-2016-07
en
refinedweb
Working with Namespaces in XML Schema Dare Obasanjo Microsoft Corporation August 20, 2002 Summary: Dare Obasanjo discusses various aspects of W3C XML Schema and how they are affected by namespaces. Topics covered include proper usage of the targetNamespace, elementFormDefault and attributeFormDefault attributes, as well as the include, import, and redefine elements within a schema. (13 printed pages) My Kingdom for Some Power Tools This weekend, the bookcase I mentioned ordering in my last article finally arrived. Instead of going off to buy tools, I eagerly attempted to put it together with nothing more than a screwdriver and an old shoe to use as a hammer. Several blisters and a few hours later, my bookcase was assembled and a little wobbly. After getting off the phone with my significant other who couldn't help laughing at the fact that I had gotten blisters by simply "putting some furniture" together, I decided to continue with my XML-based book catalog in a brazen attempt to restore my dignity. I decided to create a schema for my XML instances so that I could not only check them for validity in applications I built, but could also use the cool features in .NET XML Serialization to convert the XML to C# objects as needed. But first I needed a handy command line tool for performing validation of both instance documents and schemas. Below is the tool I built to simplify this process: using System; using System.Xml; using System.Xml.Schema; public class XsdValidate{ static XmlSchemaCollection sc = new XmlSchemaCollection(); static string xsdFile = null; static string xmlFile = null; static string nsUri = null; static string usage = @"Usage: xsdvalidate.exe [-xml <xml-file>] [-xsd <schema-file>] [-ns <namespace-uri>] Sample: xsdvalidate.exe -xml t.xml Validate the XML file by loading it into XmlValidatingReader with ValidationType set to auto. Sample: xsdvalidate.exe -xml t.xml -xsd t.xsd -ns ns1 This will validate the t.xml with the schema t.xsd with target namespace 'ns1' Sample: xsdvalidate.exe xsd t.xsd -ns ns1 This will validate the schema t.xsd with target namespace 'ns1'"; public static void ValidationCallback(object sender, ValidationEventArgs args) { if(args.Severity == XmlSeverityType.Warning) Console.Write("WARNING: "); else if(args.Severity == XmlSeverityType.Error) Console.Write("ERROR: "); Console.WriteLine(args.Message); // Print the error to the screen. } public static void Main(string[] args){ if((args.Length == 0) || (args.Length %2 != 0)){ Console.WriteLine(usage); return; } for(int i = 0; i < args.Length; i++) { switch(args[i]){ case "-xsd": xsdFile = args[++i]; break; case "-xml": xmlFile = args[++i]; break; case "-ns": nsUri = args[++i]; break; default: Console.WriteLine("ERROR: Unexpected argument " + args[i]); return; }//switch }//for if(xsdFile != null){ sc.ValidationEventHandler += new ValidationEventHandler(ValidationCallback); sc.Add( nsUri, xsdFile); Console.WriteLine("Schema Validation Completed"); } if(xmlFile != null){ XmlValidatingReader vr = new XmlValidatingReader(new XmlTextReader(xmlFile)); vr.Schemas.Add(sc); vr.ValidationType = ValidationType.Schema; vr.ValidationEventHandler += new ValidationEventHandler(ValidationCallback); while(vr.Read()); Console.WriteLine("Instance Validation Completed"); } }//Main }//XsdValidate Target Namespace, Schema Location: What's the Difference? The first decision I had to make was whether I wanted to create a schema with a target namespace or not. The target namespace of a schema specifies the namespace of the elements and attributes that can be validated by that schema. Since the instance document from my previous article used the namespace urn:xmlns:25hoursaday-com:my-bookshelf, the choice was really whether I wanted to use that as my target namespace or create instance documents without a namespace. Given that I was effectively creating a new markup vocabulary and namespaces provide a mechanism for disambiguating markup vocabularies, I decided to go with a target namespace. Thus the global (or top level) element and attribute declarations in the schema will refer only to elements and attributes from the urn:xmlns:25hoursaday-com:my-bookshelf namespace. The same applies to the global type definitions in the schema. The first line of my schema is shown below: The second related decision I made was to use a schema location in my XML instance documents. The attributes schemaLocation and noNamespaceSchemaLocation from the namespace are used in an instance document to provide hard-coded references to one or more schemas that can be used to validate the document. The referenced schema(s) applies to the entire document and not just the scope of the element on which they appear. However, it is an error to specify a schema location after the first occurrence of an attribute or element whose namespace name is the same as the target namespace of the schema. The schemaLocation attribute has as its value one or more pairs of target namespaces and URI references to a schema's location. Below is a snippet of an instance document that uses a schemaLocation attribute to refer to the target namespace and location of a schema to use in validating the document: The value of the noNamespaceSchemaLocation is a single URI reference to a schema without a target namespace. Note Both the schemaLocationand the noNamespaceSchemaLocationattributes are only hints to the validating processor that can be ignored if other means are used to specify the schema(s) for the document. I decided against using the schemaLocation or the noNamespaceSchemaLocation attribute in my instance documents because I expect to utilize the documents on different machines that also may or may not have Internet connectivity, so a hard-coded reference to a schema would, in many cases, be inappropriate. If at First You Don't Succeed On rethinking the format for my XML books catalog, I decided to remove the on-loan attribute from the root element but keep the rest of the format unchanged. Thus, given the (slightly modified) instance document below from my last article: < created following schema to validate it and others of its ilk: <?xml version="1.0" encoding="UTF-8" ?> <xs:schema xmlns: :sequence> <xs:attribute <xs:attribute </xs:complexType> </xs:schema> Surprisingly, although the above schema validated successfully with my validation tool, multiple error messages were displayed once I attempted to validate the XML instance document. Specifically when I executed the following command: I got the following output (line numbers trimmed): Schema Validation Completed ERROR: Element 'urn:xmlns:25hoursaday-com:my-bookshelf:books' has invalid child element 'urn:xmlns:25hoursaday-com:my-bookshelf:book'. Expected 'book' ERROR: The 'urn:xmlns:25hoursaday-com:my-bookshelf:book' element is not declared. WARNING: Could not find schema information for the attribute 'publisher'. WARNING: Could not find schema information for the attribute 'on-loan'. ERROR: The 'urn:xmlns:25hoursaday-com:my-bookshelf:title' element is not declared. ERROR: The 'urn:xmlns:25hoursaday-com:my-bookshelf:author' element is not declared. ERROR: The 'urn:xmlns:25hoursaday-com:my-bookshelf:book' element is not declared. WARNING: Could not find schema information for the attribute 'publisher'. ERROR: The 'urn:xmlns:25hoursaday-com:my-bookshelf:title' element is not declared. ERROR: The 'urn:xmlns:25hoursaday-com:my-bookshelf:author' element is not declared. Instance Validation Completed The first error message gave me a clue as to what was wrong. I quickly changed the first line of the schema to the following: When I reran the tool got the following output: The appearance of the error messages was due to the fact that the schema contains local element declarations and the default value of the elementFormDefault attribute on the xs:schema element is "unqualified". These concepts are explained in more detail in the following sections. Think Globally, Act Locally Element and attribute declarations that appear as children of the xs:schema element are considered to be global declarations. All other element and attribute declarations are considered to be local declarations. A local element or attribute declaration can reference a global declaration through the ref attribute, which effectively makes the local declaration the same as the global one. The names of global declarations are placed in a separate symbol space from those of local declarations. Also, the scope of a local declaration's name is that of its enclosing type definition. Thus, a schema can have two or more type definitions that contain element or attribute declarations with the same name and no naming conflict will ensue. This is also the case with a global element or attribute that shares the same name as one or more local elements or attributes. Both local element declarations and references to global elements can have their cardinality expressed using occurrence constraints. The occurrence constraints are specified using the minOccurs and maxOccurs attributes. Below is a sample schema that uses local and global elements, as well as references to a global element declaration: <?xml version="1.0" encoding="UTF-8" ?> <xs:schema xmlns: <!-- global element declaration --> <xs:element <!-- complex type with local element declaration --> <xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType> <!-- complex type with reference to global element declaration --> <xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType> </xs:schema> By default, global elements have a namespace name equivalent to that of the target namespace of the schema, while local elements have no namespace name. This means that for the above schema, the global language element declaration can validate language elements in an instance document that have as their namespace name. However, the local declaration of the language element in the sequenceOfLanguages type can only validate language elements in an instance document that have no namespace name. Type definitions that occur as children of the xs:schema element are considered to be global type definitions. Global type definitions must have a name. Global type definitions can be referenced through the type attribute of attribute and element declarations or the base attribute of derived types. Type definitions can also be created locally as part of an element or attribute declaration, in which case they must have no name and are considered anonymous types. Below is a sample schema that uses anonymous and global type definitions as well as references to a global type definition: <?xml version="1.0" encoding="UTF-8" ?> <xs:schema xmlns: <!-- element declaration that references a global complex type --> <xs:element <!-- global complex type definition --> <xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType> <!-- attribute declaration with anonymous simple type --> <xs:attribute <xs:simpleType> <xs:restriction <xs:minExclusive </xs:restriction> </xs:simpleType> </xs:attribute> </xs:schema> Type definitions, element declarations, and attribute declarations do not share the same symbol space for names. So, it is possible to have a schema where a type definition, global declaration, and local declaration share a single name. This practice is extremely confusing and should be avoided. Are You Qualified? In the last section, I mentioned that by default global declarations validate elements or attributes with a namespace name, while local declarations validate elements or attributes without a namespace name. The term used to describe elements or attributes with a namespace name is namespace qualified. It is possible to override the default behavior with regards to value of the elementFormDefault and attributeFormDefault attributes on the xs:schema element. This allows for finer grained control of how validation of elements and attributes in the instance document should operate in relation to local declarations. The following examples highlight how one can control local declarations using the elementFormDefault , attributeFormDefault, form attributes. andand The Whole Is Greater than the Sum of Its Parts A schema can be constituted from multiple schemas that are assembled into a single logical schema during validation. W3C XML Schema provides three elements that can be used to assemble global declarations and type definitions from external schemas into a target schema document. The three elements are xs:include, xs:import, xs:redefine. andand The xs:include is used to bring in definitions from schemas that either have no target namespace or have the same target namespace as the enclosing schema. xs:import is similar to xs:include with the difference being that imported schema must have a different target namespace from the enclosing schema. If an imported schema has no namespace name, then the enclosing schema must have a target namespace. The example below shows how imported declarations are referenced in the enclosing schema as well as how namespace qualification affects local declarations. <?xml version="1.0" encoding="UTF-8" ?> <xs:schema xmlns: <xs:import </xs:schema> Imported Schema: import.xsd > Instance Document: [when <imp:child1>I am from imported schema </imp:child1> <imp:child1>So Am I </imp:child1> <imp:child2>Me too </imp:child2> </ex:root> Instance Document: [when <child1>Don't know where I come from </child1> <child1>neither do I </child1> <child2>Me too </child2> </ex:root> xs :redefine is used for type redefinition by performing what are essentially two tasks. The first is to act as an xs:include by bringing in declarations and definitions from another schema document and making them available as part of the current target namespace. The included declarations and types must be from a schema with the same target namespace, or it must have no namespace. Secondly, types can be redefined in a manner similar to type derivation with the new definition replacing the old one. Examples and further explanations of xs:include, xs:import and xs:redefine are available in the W3C XML Schema Primer. Karma Chameleon. Here's an example that uses chameleon schemas. Further Reading - W3C XML Schema Primer - XML Schemas: Best Practices - W3C XML Schema Design Patterns: Dealing With Change.
https://msdn.microsoft.com/en-us/library/ms950796.aspx
CC-MAIN-2016-07
en
refinedweb
I’ve just finished work on a small command line client for the Heroku Build API written in Haskell. It may be a bit overkill for the task, but it allowed me to play with a library I was very interested in but hadn’t had a chance to use yet: optparse-applicative. In figuring things out, I again noticed something I find common to many Haskell libraries: - It’s extremely easy to use and solves the problem exactly as I need. - It’s woefully under-documented and appears incredibly difficult to use at first glance. Note that when I say under-documented, I mean it in a very specific way. The Haddocks are stellar. Unfortunately, what I find lacking are blogs and example-driven tutorials. Rather than complain about the lack of tutorials, I’ve decided to write one. Applicative Parsers Haskell is known for its great parsing libraries and this is no exception. For some context, here’s an example of what it looks like to build a Parser in Haskell: type CSV = [[String]] csvFile :: Parser CSV csvFile = do lines <- many csvLine eof return lines where csvLine = do cells <- many csvCell `sepBy` comma eol return cells csvCell = quoted (many anyChar) comma = char ',' eol = char '\n' <|> char '\r\n' -- etc... As you can see, Haskell parsers have a fractal nature. You make tiny parsers for simple values and combine them into slightly larger parsers for slightly more complicated values. You continue this process until you reach the top level csvFile which reads like exactly what it is. When combining parsers from a general-purpose library like parsec, we typically do it monadically. This means that each parsing step is sequenced together (that’s what do-notation does) and that sequencing will be respected when the parser is ultimately executed on some input. Sequencing parsing steps in an imperative way like this allows us to make decisions mid-parse about what to do next or to use the results of earlier parses in later ones. This ability is essential in most cases. When using libraries like optparse-applicative and aeson we’re able to do something different. Instead of treating parsers as monadic, we can treat them as applicative. The Applicative type class is a lot like Monad in that it’s a means of describing combination. Crucially, it differs in that it has no ability to define an order – there’s no sequencing. If it helps, you can think of applicative parsers as atomic or parallel while monadic parsers would be incremental or serial. Yet another way to say it is that monadic parsers operate on the result of the previous parser and can only return something to the next; the overall result is then simply the result of the last parser in the chain. Applicative parsers, on the other hand, operate on the whole input and contribute directly to the whole output – when combined and executed, many applicative parsers can run “at once” to produce the final result. Taking values and combining them into a larger value via some constructor is exactly how normal function application works. The Applicative type class lets you construct things from values wrapped in some context (say, a Parser State) using a very similar syntax. By using Applicative to combine smaller parsers into larger ones, you end up with a very convenient situation: the constructed parsers resemble the structure of their output, not their input. When you look at the CSV parser above, it reads like the document it’s parsing, not the value it’s producing. It doesn’t look like an array of arrays, it looks like a walk over the values and down the lines of a file. There’s nothing wrong with this structure per se, but contrast it with this parser for creating a User from a JSON value: data User = User String Int -- Value is a type provided by aeson to represent <abbr title="JavaScript Object Notation">JSON</abbr> values. parseUser :: Value -> Parser User parseUser (Object o) = User <$> o .: "name" <*> o .: "age" It’s hard to believe the two share any qualities at all, but they are in fact the same thing, just constructed via different means of combination. In the CSV case, parsers like csvLine and eof are combined monadically via do-notation: You will parse many lines of CSV, then you will parse an end-of-file. In the JSON case, parsers like o .: "name" and o .: "age" each contribute part of a User and those parts are combined applicatively via (<$>) and (<*>) (pronounced fmap and apply): You will parse a user from the value for the “name” key and the value for the “age” key Just by virtue of how Applicative works, we find ourselves with a Parser User that looks surprisingly like a User. I go through all of this not because you need to know about it to use these libraries (though it does help with understanding their error messages), but because I think it’s a great example of something many developers don’t believe: not only can highly theoretic concepts have tangible value in real world code, but they in fact do in Haskell. Let’s see it in action. Options Parsing My little command line client has the following usage: heroku-build [--app COMPILE-APP] [start|status|release] Where each sub-command has its own set of arguments: heroku-build start SOURCE-URL VERSION heroku-build status BUILD-ID heroku-build release BUILD-ID RELEASE-APP The first step is to define a data type for what you want out of options parsing. I typically call this Options: import Options.Applicative -- Provided by optparse-applicative type App = String type Version = String type Url = String type BuildId = String data Command = Start Url Version | Status BuildId | Release BuildId App data Options = Options App Command If we assume that we can build a Parser Options, using it in main would look like this: main :: IO () main = run =<< execParser (parseOptions `withInfo` "Interact with the Heroku Build API") parseOptions :: Parser Options parseOptions = undefined -- Actual program logic run :: Options -> IO () run opts = undefined Where withInfo is just a convenience function to add --help support given a parser and description: withInfo :: Parser a -> String -> ParserInfo a withInfo opts desc = info (helper <*> opts) $ progDesc desc So what does an Applicative Options Parser look like? Well, if you remember the discussion above, it’s going to be a series of smaller parsers combined in an applicative way. Let’s start by parsing just the --app option using the library-provided strOption helper: parseApp :: Parser App parseApp = strOption $ short 'a' <> long "app" <> metavar "COMPILE-APP" <> help "Heroku app on which to compile" Next we make a parser for each sub-command: parseStart :: Parser Command parseStart = Start <$> argument str (metavar "SOURCE-URL") <*> argument str (metavar "VERSION") parseStatus :: Parser Command parseStatus = Status <$> argument str (metavar "BUILD-ID") parseRelease :: Parser Command parseRelease = Release <$> argument str (metavar "BUILD-ID") <*> argument str (metavar "RELEASE-APP") Looks familiar, right? These parsers are made up of simpler parsers (like argument) combined in much the same way as our parseUser example. We can then combine them further via the subparser function: parseCommand :: Parser Command parseCommand = subparser $ command "start" (parseStart `withInfo` "Start a build on the compilation app") <> command "status" (parseStatus `withInfo` "Check the status of a build") <> command "release" (parseRelease `withInfo` "Release a successful build") By re-using withInfo here, we even get sub-command --help flags: $ heroku-build start --help Usage: heroku-build start SOURCE-URL VERSION Start a build on the compilation app Available options: -h,--help Show this help text Pretty great, right? All of this comes together to make the full Options parser: parseOptions :: Parser Options parseOptions = Options <$> parseApp <*> parseCommand Again, this looks just like parseUser. You might’ve thought that o .: "name" was some kind of magic, but as you can see, it’s just a parser. It was defined in the same way as parseApp, designed to parse something simple, and is easily combined into a more complex parser thanks to its applicative nature. Finally, with option handling thoroughly taken care of, we’re free to implement our program logic in terms of meaningful types: run :: Options -> IO () run (Options app cmd) = do case cmd of Start url version -> -- ... Status build -> -- ... Release build rApp -> -- ... Wrapping Up To recap, optparse-applicative allows us to do a number of things: - Implement our program input as a meaningful type - State how to turn command-line options into a value of that type in a concise and declarative way - Do this even in the presence of something complex like sub-commands - Handle invalid input and get a really great --helpmessage for free Hopefully, this post has piqued some interest in Haskell’s deeper ideas which I believe lead to most of these benefits. If not, at least there’s some real world examples that you can reference the next time you want to parse command-line options in Haskell.
https://robots.thoughtbot.com/applicative-options-parsing-in-haskell
CC-MAIN-2016-07
en
refinedweb
Antti Koivunen wrote: > >. Good point. > Also, if the URI is used to locate the descriptor, there's always the > possibility that you're offline or behind a firewall (although the block > manager should provide a way around this by giving the option of storing > the descriptor locally). Yes, of course. >: > >> > >> <provides-file > >> <provides-implementation > > > > > > Yes, this is where I aim to start. > > Very good. > > >>>3) detailed description: the contract identifier indicates both the > >>>skeleton and the behavior of the contract. This allows high granular > >>>automatic validation. > >> > >>Sounds good, but would be difficult to implement using just an XML > >>descriptor. > > > > > > If you are saying that the XML descriptor might get insanely complex, I > > totally agree. > > Exactly my point, but as said before, a few simple validation rules > would go a long way (but probably not all the way). > > >>Following proper SoC, perhaps the role itself should provide > >>the tools for more complex validation. > > > > > > No, this *breaks* SoC! Validation is not your concern is *our* to > > understand if what you provided us with works depending on the contract > > that we are expecting! > > Well, SoC isn't just about who writes the Java code. Offering a clean > Java API to the block (role) authors for defining the validation rules > might be better than offering an "insanely complex" XML API. Our main > concern is to perform the validation in a uniform way according to these > rules. Yes, but it's a matter of *trust* more than SoC at this point: I give you the contract, you give me the implementation *and* a way to check the validation. As italian, I'm always very sensible at cheating :) > >>The role descriptor could make > >>use of the simple built-in validators (see above) and/or define custom > >>ones if necessary. > >> > >>It should be possible to define an 'intermediate' API to make it easy to > >>implement new validators, e.g. > >> > >> interface Validator > >> { > >> void validate( ValidationContext ctx ) throws ValidationException; > >> } > >> > >> interface ValidationContext > >> { > >> BlockInfo getBlockInfo(); > >> URL getResource( String name ); > >> ClassLoader getContextClassLoader(); > >> Configuration getConfiguration(); // from the role descriptor > >> } > >> > >>This approach would allow practically any level complexity, but would > >>also mean that the role might not consist of just the XML descriptor, > >>i.e. we might end up with another archive format, say '.cor'. Still, > >>it's probably be better than trying to please everybody and ending up > >>with 50kB role descriptors. > > > > > > Hmmm, no, I was thinking more of using namespaces to trigger different > > validation behavior during installation. > > I'm also quite hesitant to go beyond XML, but it might be difficult to > define standalone roles. Consider the following fairly simple example: > > <validate-xml > > Now, if the schema URI does not resolve to a valid schema (or there's no > internet access or the server is down), we have a problem. There are a > couple of possible solutions (pre-install the schema, require momentary > internet access), but wouldn't it be more convenient to download a > single file that contains everything? Then we could do something like: > > <validate-xml > > This is just one example, but I'm pretty sure there other similar > situations. Ok, I'll try to come up with something as soon as I have time.
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200204.mbox/%3C3CAC4669.1D82E02A@apache.org%3E
CC-MAIN-2016-07
en
refinedweb
WARNING: Please do not use this code in your apps. It was just a quick experiment and is neither well tested nor secure. Updates: - February 23, 2012: I put the code for the category on GitHub and added a corresponding link to the article. I also added a custom prefix to the category methods to avoid possible namespace problems. - July 4, 2014: Added warning not to use the code. The codesign Utility On the command line, you can use the codesign utility to check whether a Mac app is signed. For example, codesign --display --verbose=4 /Applications/Preview.app will display a whole lot of info about Preview’s code signature: You can also use codesign to determine whether an app is sandboxed and, if so, list its sandboxing entitlements. The command codesign --display --entitlements - /Applications/Preview.app will display the contents of the entitlements property list that is embedded in the application binary: Doing it in Code Can we do the same in code? Yes we can. With a lot of help from my coworkers Jörg Jacobsen (see his work on XPC and Sandboxing for the iMedia framework) and Christian Beer (who pointed me to the source code for the codesign utility), I wrote a category on NSBundle that can tell you for any application bundle: - whether it has a valid code signature, - whether it is sandboxed and - whether it was downloaded from the Mac App Store. The public interface for the category looks like this and should be self-explanatory: Code Signing Services For the implementation, we need to look at the Code Signing Services in the Security framework. The SecStaticCodeCreateWithPath() takes the URL of an app bundle and returns a reference to a so-called static code object that represents the bundle’s code. We can then call the function SecStaticCodeCheckValidityWithErrors() on the static code object to obtain information about its code signature. Additional Requirements for the Signature (Sandboxing) To determine whether an app is sandboxed, we can call SecStaticCodeCheckValidityWithErrors() again, this time with the additional code requirement (passed as the third argument to the function) that the code object contains a certain entitlement (which is com.apple.security.app-sandbox in our case). The call to create this requirement looks like this: Have a look at the documentation for the Code Signing Requirement Language to learn how to formulate other requirements you might have. Mac App Store Receipt Check The implementation of the last method, -ob_comesFromAppStore, is rather unrelated. It simply checks whether the bundle contains a Mac App Store receipt. OS X 10.7 has a special method to find the App Store receipt in the bundle: appStoreReceiptURL. If 10.6 compatibility is important for you, you have to hard-code the path to the receipt at Contents/_MASReceipt/receipt. The Source Code Check out the full source code of the category below. I use associative references to cache the values of some variables that I use in multiple places, such as the code signature state. Update February 23, 2012: The code is now also available on GitHub.
http://oleb.net/blog/2012/02/checking-code-signing-and-sandboxing-status-in-code/
CC-MAIN-2016-07
en
refinedweb
/** * Implement a class Car that provides a programmatic model for a car object with the following functionality: * Ability to construct a Car object by specifying its gas tank capacity, ----- its gas mileage (the number of miles it will travel on one gallon of gas), and---- the number of gallons of gas (currently) in its tank.---- * Ability to find out the number of gallons in the tank of a Car object.---- * Ability to find out the capacity of a Car object's tank.---- * Ability to find out the mileage of a Car object.--- * Ability to print on the screen a Car object's tank capacity, mileage and---- current number of gallons of gas in tank.--- * Ability to find out how many miles the car can travel with its current gas. * Ability to "fill" gas into a Car object. We should be able to specify the number of gallons that we are filling. --- If the Car object's tank cannot accpet this number of gallons, then the Car object should remain unchanged and an appropriate error message should appear on the screen. --- * Ability to "drive" a Car object for a specified number of miles. If the object is not able to drive the given number of miles (because it would run out of gas, for example), then the Car object should remain unchanged and an appropriate error message should appear on the screen. All private attributes and methods of the class Car must be documented * * @author (Derek Stockl) * @version (v 1.0 Feb 19, 2010) */ public class Car { // The Gas Tank Capacity private int tankCapacity; // The Gas Milage(number of miles it will travel on one gallon of gas) private int gasMilage; // Number of gallons (currently) private int currentGas; // Find out the numnber of gallons in the tank of Car. public int getCurrentGas() { return currentGas; } // Find out the gas milage of Car. public int getMPG() { return gasMilage; } // Find out the tank capacity of Car. public int getTankCapacity() { return tankCapacity; } // Set the Gas mileage for Car. public void setMPG(int gasMilage) { this.gasMilage = gasMilage; } // Set the Current Gas left. public void setGasLeft(int currentGas) { this.currentGas = currentGas; } // Set the tank capacity. public void setTankCapacity(int tankCapacity) { this.tankCapacity = tankCapacity; } // Print details of Tank Capacity, MPG, miles til empt and Current Gas. public void printDetails() { System.out.println("Tank Capacity: \t" + tankCapacity); System.out.println("Miles per Gallon: \t" + gasMilage); System.out.println("Current Amount of Gas: \t" + currentGas); System.out.println("Miles Left Until Empty \t" + milesLeft() ); } // Find how many miles left until empty. public double milesLeft() { return currentGas * gasMilage; } //Fill gas tank public void fillTank(int moreGas) { if (currentGas + moreGas < tankCapacity) currentGas = currentGas + moreGas; else { System.out.println("Cannot exceed Tank Capacity"); } } // ***THIS IS WHERE I AM STUCK AT*** public void drive(double distance) { if (currentGas * gasMilage > distance) milesLeft = (currentGas * gasMilage) - distance; else { System.out.println("Not Enough Gas"); } } } Any assistance would be greatly appreciated. now i know milesLeft = (currentGas * gasMilage) - distance; but i am having a mental block and cannot figure out what to put in that if statement.
http://www.dreamincode.net/forums/topic/157743-car-program-stuck-java/
CC-MAIN-2016-07
en
refinedweb
Changes for version 0.003 - Don't import module_notional_filename() from Module::Runtime, then we don't have to clean it out of the namespace. - Made reference to LAWALSH be a link to her MetaCPAN author page Modules - me::inlined - EXPERIMENTAL - define multiple packages in one file, and reference them in any order
https://metacpan.org/release/me-inlined
CC-MAIN-2016-07
en
refinedweb
0 Hi, I'm using windows XP and trying to simulate mouse movement and mouse clicks. The following code supposed to move mouse to absolute position (100,100) and perform a click: //test.cpp file: #include "stdafx.h" int main(int argc, char* argv[]){ INPUT *buffer = new INPUT[3]; //allocate a buffer buffer->type = INPUT_MOUSE; buffer->mi.dx = 100; buffer->mi.dy = 100; buffer->mi.mouseData = 0; buffer->mi.dwFlags = (MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE); buffer->mi.time = 0; buffer->mi.dwExtraInfo = 0; (buffer+1)->type = INPUT_MOUSE; (buffer+1)->mi.dx = 100; (buffer+1)->mi.dy = 100; (buffer+1)->mi.mouseData = 0; (buffer+1)->mi.dwFlags = MOUSEEVENTF_LEFTDOWN; (buffer+1)->mi.time = 0; (buffer+1)->mi.dwExtraInfo = 0; (buffer+2)->type = INPUT_MOUSE; (buffer+2)->mi.dx = 100; (buffer+2)->mi.dy = 100; (buffer+2)->mi.mouseData = 0; (buffer+2)->mi.dwFlags = MOUSEEVENTF_LEFTUP; (buffer+2)->mi.time = 0; (buffer+2)->mi.dwExtraInfo = 0; SendInput(3,buffer,sizeof(INPUT)); delete (buffer); //clean up our messes. return 0; } when "stdafx.h" is: #pragma once #define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers #define _WIN32_WINNT 0x0500 // so the code would compile #include <windows.h> The problem is that the mouse moves to position (0,0) and performs a click. I've been unable so far to simulate mouse movement to any absolute coordinate (x,y) specified in mi.dx and mi.dy respectively. If someone knows a way to force mouse to move to absolute position (x,y) please tell me...
https://www.daniweb.com/programming/software-development/threads/6727/simulate-mouse-move
CC-MAIN-2016-07
en
refinedweb
Migration in Afghanistan with the fall of the government, but began to increase again in 1996 with the rise of the Taliban. In 2002, with the fall of the Taliban and the US-led invasion, record numbers of Afghan refugees returned to Afghanistan. An international reconstruction and development initiative began to aid Afghans in rebuilding their country from decades of war. Reports indicate that change is occurring in Afghanistan, but the progress is slow. The Taliban have regained strength in the second half of this decade and insurgency and instability are rising. Afghanistan continues to be challenged by underdevelopment, lack of infrastructure, few employment opportunities, and widespread poverty. The slow pace of change has led Afghans to continue migrating in order to meet the needs of their families. Today refugee movements no longer characterize the primary source of Afghan migration. Migration countries. The highly skilled in Afghanistan often seek to migrate to Western countries, as the opportunities in Afghanistan are limited. Afghans transnational movements have led to the development of the Afghan Diaspora, which has been essential in providing remittances to families in Afghanistan to meet their daily needs. The Afghan Diaspora has been involved in the reconstruction effort and is a key contributor to development in Afghanistan. The continued engagement of the Diaspora is important to the building of Afghanistan's future. This paper seeks to provide an overview of migration and development in Afghanistan. It will begin with a country profile on Afghanistan (Chapter 2), followed by a review of historical migration patterns in Afghanistan (Chapter 3) and a synthesis of current migration patterns in Afghanistan (Chapter 4). The paper will then move to discuss migration and development in Afghanistan (Chapter 5), the Afghan Diaspora (Chapter 6), policies regarding migration in Afghanistan (Chapter 7), and the migration relationship between the Netherlands and Afghanistan (Chapter 8). The paper will conclude with an examination of future migration prospects for Afghanistan (Chapter 9) and a conclusion (Chapter 10). 2. General Country Profile Afghanistan is one of the poorest countries in the world and has been inundated by decades of war, civil strife and poverty. Today, Afghanistan is central in media attention due to the US led invasion post 9/11, however the country has been in turmoil for much longer. This section will provide a brief overview of the recent history of Afghanistan, the current economic situation, the current political situation, a cultural overview, and the current status of women in the country. Historical Overview The modern history of Afghanistan can be divided into four essential periods: pre 1978, 1978-1992, 1992-2001, and post 2001. Pre 1978 Afghanistan was founded in 1774 by Ahmad Shah Durrani who unified the Pashtun tribes in the region and created the state (CIA, 2009). The country was ruled by a monarchy and acted as a buffer between the British and Russian empires until it received independence from conjectural British control in 1919 (CIA, 2009). The last King, Zahir Shah, reigned from 1933 to 1973, when he was overthrown by a coup d'etat led by his cousin and ex-premier President Mohammed Daoud (Jazayery, 2002). Opposition to Daoud's Government lead to a coup in 1978 by the People's Democratic Party of Afghanistan (PDPA) (Jazayery, 2002). 1978-1992 - Soviet Invasion The PDPA was a Marxist regime and from 1989 was supported by the Soviet Union. This was the first major flow of refugees from Afghanistan. The occupation by the Soviets was viewed in the west as an escalation of the Cold War. The West began to fund millions of dollars, which became billions of dollars, to the resistance forces known as the Mujahideen (Jazayery, 2002). The resistance forces operated primarily from Pakistan. In 1986 when Mikhail Gorbachev came to power in the Soviet Union, the Soviets began the process of extraditing themselves from Afghanistan and by 1989 the Soviets had left Afghanistan. 1992-2001 - Taliban Rule In 1992 the Mujahideen forces overthrew Najibullah's Government. A failure of consensus of the new Government led to a civil war from 1992-1996 (Jazayery, 2002). Afghanistan became divided into tribal fiefdoms controlled by armed commanders and warlords (Poppelwell, 2007). The country was in a state of anarchy and Afghans lived in a state of constant fear of physical and sexual assault (Poppelwell, 2007). During this time, the Taliban emerged in 1994, claiming that Afghanistan should be ruled by Shari'a (Islamic law) (Jazayery, 2002). The Taliban received support and funding from Saudi Arabia and Arab individuals in the quest to establish a pure Islamic model state (Poppelwell, 2007). The Taliban swept through Afghanistan encountering no resistance by the Mujahideen and were welcomed in many areas as they established relative security in the areas they controlled (Jazayery, 2002). By 1998, The Taliban had captured the majority of the country and established the “Islamic Emirate of Afghanistan” (Jazayery, 2002). A Northern Alliance that arose in opposition to the Taliban maintained a Government of the “Islamic State of Afghanistan” with Burhanuddin Rabbini as president (Jazayery, 2002). The Taliban Government was only recognized by Pakistan, Saudi Arabia, and the United Arab Emirates, while the Government of Rabbini maintained an officially represented seat at the UN (Jazayery, 2002). After the bombings of the US Embassy's in Kenya and Tanzania the Taliban were asked to stop harboring Osama bin Laden (Poppelwell, 2007). At their refusal, the UN imposed sanctions against the Taliban and Afghanistan in 1999 (Poppelwell, 2007). By this time the Taliban were known for disregarding international law and human rights (Poppelwell, 2007). During this time, killing, pillaging, raping, and ethnic cleansing of individuals occurred across Afghanistan by the Taliban regime (Jazayery, 2002). Post 2001 The events of 9/11 2001 led the US to lead Coalition Forces to invade Afghanistan on 7 October 2007. Within months the military forces had taken control of Afghanistan and declared the fall of the Taliban. The International Security and Assistance Forces (ISAF) in Afghanistan began with 5,000 troops. In 2003, NATO took over the ISAF, which now, due to increased security concerns, is comprised of approximately 50,000 troops coming from all 28 NATO members (NATO, 2009). In December 2001 a UN led interim administration was established under the Bonn Agreement. The Bonn Agreement established a new constitution and the first democratic elections in 2004 (Poppelwell, 2007). Hamid-Karzai, became the leader of a broad based thirty-member ethnic council that aimed to be multi-ethnic and representative of Afghan society (Poppelwell, 2007). The new administration faced many challenges and in 2005 the Taliban began to regain strength in Afghanistan. The increased security challenges led to the London Conference in January 2006 to address the end of the Bonn agreement and the current challenges in Afghanistan. The result of the London Conference was the Afghanistan Compact, which identified a five-year plan for Afghanistan. The Afghanistan Compact is based on three key pillars: “security, governance, the rule of law and human rights; economic and social development; and the cross-cutting issue of counter-narcotics” (Poppelwell, 2007, p. 8). Western Governments have taken on specific areas as a country lead for areas in which they will focus. The reconstruction process in Afghanistan has been extensive. A total of $14,775,000,000 US dollars has been contributed to the reconstruction process since 2001 (Livingston, Messera, and Shapiro, 2009). Despite the development efforts, insecurity has increased since 2005 with the Taliban regaining strength. The overall situation in Afghanistan continues to be characterized by conflict and poverty. Demographics A census has not been conducted in Afghanistan since prior to the Soviet invasion in 1978. Thus, all demographic information is estimates. In 2009, the CIA World Factbook estimated the population of Afghanistan to be 28.3 million. This was a significant decrease from the previous estimate of 33.6 million. An Afghanistan census is scheduled for 2010. The population growth rate in Afghanistan was estimated by the United Nations to be 3.9 percent 2005-2010 (UN Data, 2009). Economic and Poverty Overview Economic progress in Afghanistan is occurring through the reconstruction effort, however, Afghanistan continues to be one of the least developed and poorest countries in the world. Table 1 provides an overview of key economic and poverty indicators for Afghanistan in 2007. Real GDP growth for 2008-09 decelerated to 2.3 percent from 16.2 percent in 2007-08 (World Bank, 2009). This is the lowest GDP growth has been in the post-Taliban period and was due to poor agricultural production (World Bank, 2009). In 2009, however, growth is expected to increase due to a good agricultural harvest (World Bank, 2009). Table 1: Key Indicators GDP Per Capita (PPP US $) 1,054 Life Expectancy 43.6 Adult Literacy Rate (% aged 15 and above) 28.0 Combined Gross Enrolment Ration in Education 50.1 Human Poverty Index Rank 135.0 Probability at birth of not surviving to age 40 (% of cohort) 40.7 Population not using an improved water Source (%) 78.0 Children underweight for age (% under age 5) 39.0 Overseas Development Assistance per Capita (US$) 146.0 Source: UNDP, 2009 The latest poverty assessment in Afghanistan was conducted in 2005 through the National Risk and Vulnerability Assessment (NRVA). The findings indicate that the poverty rate was 42 percent, corresponding to 12 million people living below the poverty line (Islamic Republic of Afghanistan, 2009, p. 14). In addition, 20 percent of the population was slightly above the poverty line, suggesting that a small economic shock could place them below the poverty line (Islamic Republic of Afghanistan, 2009, p. 14). It is evident that widespread poverty continues to be a challenge in Afghanistan. Political Situation In August 2009, Afghanistan held it second democratic elections (World Bank, 2009). The incumbent President Hamid Karzai, was re-elected with 50 percent of the necessary votes, however, since the election there have been over 2,000 fraud allegations lodged with the Electoral Complaints Commission (ECC). The Independent Election Commission announced in October 2009 that its final results indicated less than 50 percent of the votes for Karzai. Thus, a run-off election was scheduled for November between Karzai and the lead opponent. Before the election, however, the opponent withdrew from the race leaving Karzai as President (World Bank, 2009). The United Nations Mission to Afghanistan has continued to coordinate international assistance and support the Afghan government in developing good governance. The key aspects of the UN Mission, 2009). The political situation in Afghanistan continues to be complex. In 2009, Transparency International rated Afghanistan 1.3 on the Global Corruption Perceptions Index (Transparency International, 2009). This was the second lowest ranking with only Somalia receiving a lower score. This suggests a high lack of trust in the Government of Afghanistan. Culture/ Ethnic Groups Afghanistan is a traditional and conservative society with large ethnic divisions. Table 2 shows the percentage of the population that belongs to the different ethnic groups. Table 2: Ethnic Groups in Afghanistan 1970's 2006 Pashtun 39.4 40.9 Tajik 33.7 37.1 Uzbeck 8 9.2 Hazara 8 9.2 Turkmen 3.3 1.7 Aimak 4.1 0.1 Baloch 1.6 0.5 Other 1.9 1.4 Source: The Asia Foundation, 2006; Encycopedia Iranica, 2009 The Pashtun's have generally been the majority in Afghanistan. They occupy land in the South and the East and are divided amoung tribal lines. The Tajik's are primarily Sunni Muslims who are Persian and occupy the Northeast and West of Afghanistan. The Tajiks are often well educated and landowners. The Uzbecks are descendents from the Turks and are primarily involved in agriculture. The Hazara's are primarily Shi'ite Muslims who occupy the infertile highlands in central Afghanistan. The Hazara's are subsistence farms that have used migration routes for survival for centuries (Robinson and Lipson, 2002). The vast majority of the population in Afghanistan is Sunni Muslim (87.9 percent). Shi'ia Muslims account for 10.4 percent of the population and the remaining ethnic groups are negligible in numbers. Shi'ia Muslims are thus a minority and have faced persecution in Afghanistan. Status of Women Afghanistan's GDI (Gender Development Index) value is 0.310, which is 88.1 percent of its Human Development Index (HDI) (UNDP, 2009). The HDI does not account for gender inequality, and the GDI adds this component to the HDI. Afghanistan ranks 155 out of 155 countries measured in the world for its GDI. Indicators, such as literacy, illustrate this; 43.1 percent of adult males are literate, compared to 12.6 percent of adult females (UNDP, 2009). The culture of Afghanistan is a based on traditional gender roles. Traditionally, women are seen as embodying the honour of the family (World Bank, 2005). As such, women are given as brides to create peace, or to honour a relationship. The role of a wife is to maintain the household and support the husband, which includes domestic and sexual services. In general, a wife meets the husbands needs and if the wife does not she has dishonoured her family and community (World Bank, 2005). The legal rights of women in Afghanistan have changed with the political structure. Prior to Taliban rule, the Constitution of Afghanistan guaranteed women equal rights under the law, although local tribes may have had different customs. Under Taliban rule women's rights were severely hindered as they were not permitted to leave their homes unless accompanied by a close male relative, receive education, and had restricted access the health care and employment. Women were frequently raped and abused during this time. With the fall of the Taliban the situation has improved for women, however there are great differences between the rural and urban situation (World Bank, 2005). The Ministry of Women's Affairs (MOWA) was established in the Bonn Agreement to promote the advancement of women in Afghanistan. MOWA works in an advocacy role to ensure that policies are implemented for both men and women. In addition, MOWA works with NGOs to ensure programs for women are implemented. Women's rights remain to be a primary concern in Afghanistan. At present, approximately 60 percent of women are married before the age of 16 (IRIN, 2005). At 44, women in Afghanistan have one of the lowest life expectancies in the world (UNDP, 2009). Women who are widowed are ostracized in rural communities, but are often able to make a living in the cities to support themselves and their families. However, female-headed households tend to be primarily represented in the poorest quintiles of Afghan society (World Bank, 2005). The situation for women in the urban centres such as Kabul is becoming more liberal. Education rates of girls in the urban centres are high than rural areas and these indicators suggest changes are occurring for women in urban areas. Women's rights are high on the international policy agenda for Afghanistan and a key goal of development aid. 3. Historic Overview of Migration Migration in Afghanistan has had a long history and has significantly shaped the countries social and cultural landscape (Monsutti, 2007). Historically, Afghanistan was a country of trade between the east and the west and a key location on the Silk Road trade route. Thus, migration is a part of the historical identity of the country. The following chapter presents an overview of the complex migration patterns, with a historical perspective. Migration Patterns from Afghanistan to Pakistan and Iran Prior to 1978 Migration between Afghanistan and Pakistan and Iran has a long history. The migration relationships are rooted in the ethnic ties that span the borders between the countries. For instance, Pashtuns make up 20 percent of the population in Pakistan and 30 percent in Afghanistan. The Pashtuns are separated by the Pakistan-Afghanistan border, which is referred to as the Durand Line. The Durand Line was established during British colonialism to demarcate British India from Afghanistan, and has been acknowledged to be an arbitrary divide of Pashtun land (Monsutti, 2005). Thus, cross-border migration of the Pashtuns between Afghanistan and Pakistan has been a way of life. Similarly, the Hazara's of Afghanistan are Shiite's Muslims, which is the majority religion in Iran (Monsutti, 2005). Hazara's regularly engaged in migration to and from Iran via religious ties. These ethnic and cultural ties led to cross-border migration for decades prior to the Soviet Invasion of Afghanistan. The poor economic position of Afghanistan prior to 1978 led to further economic migration to the better off states of Pakistan and Iran. Stigter states, “The economic differences between Afghanistan and Pakistan and Iran have long led Afghans to migrate to these countries to find employment and, for Iran, enjoy the benefits of a higher income” (2006, p. 117). In the 1960s and 1970s industrialization in Afghanistan was minimal and there were limited opportunities for the newly educated and growing rural population (Stigter, 2006). A widespread drought in the 1970s led to large-scale crop failure and further migration of many Afghans from the north and north-western Afghanistan into Iran (Monsutti, 2006). In addition, the oil boom of 1973 caused further increasing numbers of Afghans to cross into Iran and other Middle Eastern countries to capitalize on the labour opportunities (Stigter, 2006). Studies have also confirmed that prior to the war migrants from Northern Afghanistan travelled to Pakistan during the winter, illustrating that seasonal migration occurred between the two countries (Stigter, 2006 from CSSR, 2005). These pre-established migration movements reveal that social networks were established between Afghanistan and Pakistan and Iran prior to the Soviet Invasion and proceeding wars. Monsutti states that “Channels of pre-established transnational networks exist between Afghanistan, Pakistan and Iran- the movement of individuals to seek work, to escape drought or to flee war has been a common experience in Afghanistan” (Monsutti, 2006, p. 6-7). Thus, it can be deduced that migration to Pakistan and Iran was a natural option for many Afghans. International Migration Post 1978 International migration movements from Afghanistan from 1978 have primarily been comprised of refugee flows. The vast majority of refugees fled to Pakistan and Iran in the largest refugee crises of the late 20th Century. 1 shows the number of Afghan refugees in Pakistan and Iran from 1979-2001. 1 illustrates that refugee outflows from Afghanistan began in 1979 with the Soviet Invasion. The outflows continued to increase during the Soviet occupation when there was civil war between the US funded Mujahideen and the Soviet backed Najibullah. Flows during this time spanned social classes and ethnic groups as the initial reason for migration was primarily protection led. However, reasons of a lack of economic opportunities, devastation of infrastructure and trade networks, limited access to social services such as healthcare and education, and political and social reasons also contributed to migration flows (Stigter, 2006). Migration was thus not only refugee protection, but also the need to make a livelihood (Stigter, 2006). The peak of the refugee flows occurred in 1990 with 6.2 million Afghan refugees. This was after the Soviet withdrawal and when the Najibullah remained in power (Jazayery, 2002, p. 240). In the 1990s drought contributed to continuing refugee flows from Afghanistan (Stigter, 2006). The fall of the Najibullah in 1992 led to large-scale repatriation. However, with the Taliban gaining power in 1996, the number of refugees began to increase again to approximately 3.8 million refugees in 2001. During the initial refugee outflows in 1979 both Pakistan and Iran warmly welcomed the refugees under a banner Muslim solidarity (Monsutti, 2006). Iran is a signatory and Pakistan is not a signatory to the 1951 Convention of Refugees and its 1967 Protocol, however both countries welcomed the refugees. In Iran the refugees were given identification cards, allowed access to work, health care, food, free primary and secondary education, and were free to settle where they chose (Monsutti, 2006). Pakistan created an agreement with the United Nations to provide services to the Afghan refugees and received financial support from the international community (Monsutti, 2006). The era of welcoming Afghan refugees began to change in 1989. In Pakistan refugees were still welcomed from 1989-2001, but were not provided with the same level of services and facilitation (Monsutti, 2006). In Iran support also decreased and by the 1990s refugees no longer received identity cards and assistance (Monsutti, 2006). The position of the host countries became increasingly unfriendly post 2001, which will be discussed in the next chapter of this paper. Return Migration The Mujahideen took over the government in 1992 and as a result nearly 2 million refugees returned to Afghanistan. By 1997 an estimated 4 million refugees had returned from Pakistan and Iran (Stigter, 2006). Simultaneously, however, conflicts between rival Mujahideen groups dissuaded many refugees from returning, and created new refugees and IDPs. Internal Migration The primary source of internal migration in Afghanistan was Internally Displaced Persons (IDPs). Internally Displaced Persons Internal displacement flows have followed a similar trajectory as refugee flows. The exact number of IDPs is not known and 3 shows estimated number of IDPs in Afghanistan from 1985-2001. Generally those who are internally displaced do not have the means to cross an international border. IDPs in Afghanistan had access to very few services during this period. The UNHCR's capacity in Afghanistan began to increase after 1992 as is illustrated in 3 by the red line. From 1995 the two lines start to converge as the number of IDPs assisted by UNHCR increases and the total number of IDPs decreases. By 2001 the number of IDPs has significantly increased to 1.2 million. The number of IDPs in Afghanistan will be further examined in the next chapter. 4. Current Migration Patterns- 2001- Present Current migration patterns in Afghanistan are complex and multifaceted. Since 2001 Afghanistan has witnessed the largest movement of refugee return in UNHCR's history (Monsutti, 2008). These flows have been a mixture of voluntary and forced return of refugees who had been outside of Afghanistan for varying periods. The majority of returnees are from Pakistan. Afghan refugees have maintained ties with Pakistan and now cross-border labour migration between Afghanistan and Pakistan is increasing. In addition to international flows, the numbers of IDPs have decreased in Afghanistan since 2001 as IDPs return to their regions of origin. Finally, within this picture there are large flows of rural-urban migration as returnees and non-returnees find limited opportunities in rural areas and move to the cities in search of work. All of these flows are occurring simultaneously and present a complex picture of current migration patterns and flows. Each of these areas will be addressed in the following section. Internal Migration Internal migration flows in Afghanistan have been increasing in the post-Taliban period. As refugees and migrants return to Afghanistan they do not necessarily end their migration cycle. Returnees may continue to migrate internally in search of livelihoods and opportunities. The internal migration flows in Afghanistan are comprised of IDPs, rural to urban migration, and trafficking. Internally Displaced Persons Internal displacement in Afghanistan has been understudied and information is limited to availability from the UNHCR. In 2004, the UNHCR conducted a data profiling of IDPs in UNHCR assisted camps and in 2008 the UNHCR created a national profile of IDPs in Afghanistan. Statistics regarding IDPs are estimates[1]. Table 3 shows the number of IDPs and IDP returnees from 2001 to 2008. At the fall of the Taliban in 2001 there were approximately 1.2 million IDPs in Afghanistan, of which many returned spontaneously in 2002 (UNHCR, 2008, p. 6). In 2008, IDP returns were negligible due to continued insecurity, inter-tribal and personal conflict, landlessness and drought, and lack of job opportunities and basic services in rural areas (UNHCR, 2008). Table 3: IDPs Total and Returns: 2001-2008 IDPs IDP Returnees Year Total Assisted Total Total 2,865,700 513,700 822,600 31,000 Source: UNHCR Global Reports, 2001-2008 Of the current IDPs (235,000) the UNHCR identifies 132,000 as a protracted caseload (2008). Table 4 shows the reasons for displacement of the current IDP population. These numbers do not include those who are invisible IDPs or urban unidentified IDPs. UNHCR estimates that the actual number of IDPs in Afghanistan is substantially larger than the numbers suggest (2008, p. 18). Table 4: Reason for Displacement of Current IDPs (2008) Reason for Displacement No. of Families No. of Individuals Protracted 31,501 166,153 New Drought affected 1,083 6,598 New Conflict Affected 1,749 9,901 Returnees in Displacement 8,737 52,422 Battle-affected 127 759 Total 43,197 235,833 Source: UNHCR, 2008 Since 2007 the return of IDPs has continued to decrease due to increased instability in the country, drought, landlessness, and the spread of conflict and insurgency areas (IDMC, 2008). Disputes are arising between IDPs and locals as in Afghan culture if you are not born in the region you do not belong there (IDMC, 2008). Options for IDPs appear to be limited as they are not welcomed in the regions where they are seeking protection. Rural to Urban Migration Urbanization is rapidly occurring in Afghanistan as returnees settle in the cities and people migrate from rural communities to urban centres. Approximately 30 percent of returnees settle in Kabul (Stigter, 2006). The population of Kabul in 2001 was roughly 500,000 and it had grown to over 3 million by 2007 (IRIN, 2007). The urban centres do not have the infrastructure or resources to meet the needs of the large inflows of migrants, however, research suggest that the difficult situations in the cities are better than rural areas. In 2005 the Afghanistan Research and Evaluation Unit conducted a study on rural to urban migration (Opel, 2005). A total of 500 migrants were interviewed in the cities of Kabul, Herat, and Jalalabad. The majority of migrants were male (89 percent) and the average age of migrants was 31 years (p. 4). Males tend to migrate to support their families, and females migrate when they have lost their husbands or have been ostracized by their community and have no means of supporting themselves in rural areas. The majority did not own productive assets in their village (71.2 percent), although 43 percent owned a house in their village (p. 8). The primary reasons for migration were the lack or work in the village and better opportunities in town (42%), followed by lack of work in the village (38.2%) and insecurity (16.3%) (p. 11). The majority of migrants made the journey on their own (70.7%) and paid for the journey from their savings (p.14). Migration to urban areas is expensive and the poorest of the poor cannot afford the journey. Once in the cities, the majority were employed in low skilled day labour work and on average respondents reported working 16 out of the past 30 days (p. 20). Social networks were essential in people finding work as 89 percent of skilled workers and 60 percent of unskilled workers reported receiving assistance from a relative, friend or neighbour (p. 20). Incomes in the cities were low, but were higher than what individuals could earn in the rural areas. The majority of urban migrants remitted money to their family in rural areas, which they carried with them when they returned or sent through family or friends. None of the urban migrants use the Hawala (see Chapter 6) system, which was reported to be too expensive for them. The majority of migrants reported planning to settle in the city (55%) (p. 26). Overall, the majority did improve their economic situation through migration (61.9% for males and 80.9% for females) (p. 27). The large-scale migration to urban centers appears to be a trend that will continue. It is estimated that urban centers are now accounting for 30 percent of the population in Afghanistan (Opel, 2005). The rapid urbanization has shifted rural poverty to urban poverty (Stigter, 2006) and many challenges remain for the cities in managing the rapid growth. National Trafficking In 2003 the IOM in Afghanistan conducted a study on trafficking of Afghan women and children. Research on trafficking in Afghanistan is difficult due to the lack of data inherent in all areas of Afghanistan, but increasingly so due to the fear of reporting trafficking related crimes and the shame associated with such crimes. The IOM reports that trafficking occurs in four ways in Afghanistan. The first is through prostitution, which is believed to be occurring in Kabul, but is not reported as prostitution is not to occur in an Islamic state. The second is forced labour services, which occurs in the form of forced marriages of women and girls. The third is servitude, which is either sexual or domestic, and occurs with both boys and girls as young as 4 years old who are taken and sexually abused. Finally, trafficking for the purpose of organ removal has been reported, but there is no evidence to substantiate these claims. It can thus be inferred that trafficking of persons is occurring in Afghanistan, but the degree and forms of trafficking are unknown (IOM, 2003). International Migration Afghanistan has had large international migration flows since the fall of the Taliban, primarily from refugee returns. This section will discuss refugee return since 2001 and the emergence of circular migration systems between Afghanistan and Pakistan. Refugees The number of Afghan refugees in Pakistan and Iran has continually decreased since 2001. 4 shows the number of Afghan refugees in Pakistan and Iran from 2001-2008. Migration flows in Afghanistan since 2001 have been comprised primarily of refugee return flows. Statistics regarding return flows vary by source. Table 5 shows estimated return flows from the UNHCR. Kronenfeld estimates that in 2002 there were 2,153,382 refugee returns, which presents a substantial difference from the table (2008, p. 48). It is widely recognized, however, that capturing flows of people at such high volumes presents logistical challenges. Pakistan Iran Various Total Year Total Assisted Total Assisted Total Assisted Total Returnees Total Total 3,462,900 1,940,900 1,599,800 600,000 11,060 2,000 5,073,760 2,542,900 Table 5: Estimated Refugee Returns Source: UNHCR Global Reports, 2001-2008 The numbers of refugee returns from 2002 showed higher numbers of returnees than were thought to be residing in Afghanistan and Iran. Turton and Marsden (2002) state: “In January 2002, UNHCR issues a draft planning document for the “Return and Reintegration of Afghan Refugees and Internally Displaced People” over a three-year period, in which it estimated that there were 2.2 million Afghan refugees living in Pakistan and 1.5 million in Iran. It was envisaged that, during the course of 2002 and with the assistance of UNHCR, 400,000 refugees would return from Pakistan, and the same number would return from Iran. Approximately the same numbers were expected to return in 2003 and 2004” (p. 19). It is evident from Table 5 that the number of returnees in 2002 alone (1.8 million) was more than double the initial UNHCR estimates. One reason cited for the large numbers of returnees was the issue of ‘recyclers' from Pakistan. A ‘recycler' is a refugee that registers with the Voluntary Repatriation Centre in Pakistan, crosses the border to Afghanistan to receive their cash grant, food, and other items, then returns via an alternative route to Pakistan and engages in the process again (Turton and Marsden, 2002, p. 20). In Iran recyclers were not very present as the distance it takes to return to Iran is much greater, the fact that the cash grant and return package was far less substantial, and that in Iran it took on average a month to get a Voluntary Repatriation form, whereas in Pakistan is was issued the same day (Turton and Marsden, 2002). The issue of ‘recyclers' was virtually resolved by the fall of 2002, as UNHCR received iris-scanning technology that made recyclers identifiable (Kronenfeld, 2008). Thus, the issue of recyclers could be a contributor to the high statistics, but is not the only source. Kronenfeld states that one reason for the discrepancy is that there appears to have been a gross underestimation of the refugee population in Pakistan. “UNHCR estimated in the middle of 2001 that there were two million Afghans living in Pakistan (and one million in Iran). But three years later, after the return of nearly three million Afghans, the census shows that over three million still remain in Pakistan- well over the initial 2001 estimate” (Kronenfeld, 2008, p. 49-50). It appears that the growth rate of the refugee population that fled in the late 1970s had not been factored into the statistics. The growth rate of the Afghan refugee population in 2005 was estimated at 3 percent. Furthermore, 19.4 percent of the refugee population was under the age of five, and 55 percent were under the age of 18 (p. 49). Thus, half the population of Afghan refugees in Pakistan was born in exile. The individuals that did return were not from the UNHCR camps. Returnees from Pakistan were those living in urban areas, not the camps, and thus not necessarily included in the general refugee statistics (Kronenfeld, 2008). Turton and Marsden (2002) hypothesize that the majority of returnees in 2002 were those who were having difficulty making ends meet; “from urban areas of Pakistan, where they had been surviving on low and erratic incomes from daily labour” (p. 2). Returnees from Iran had been there for less than five years (Stigter, 2006), and thus were less socially integrated than refugees who had been there for longer. It is generally recognized that refugees who have been outside their country of origin for longer have more economic and social ties in the host country, and weaker economic and social ties to their country of origin, making return more difficult (Stiger, 2006). Return after 2002 from Pakistan and Iran was influenced by “asylum-fatigue” in the host countries. Both host countries had now been dealing with a protracted refugee situation and were hosting approximately 20 percent of the world refugee population. The political climate for refugees in the host countries after the fall of Taliban became increasingly difficult. Both Iran and Pakistan, albeit Iran more aggressively, began to forcibly deport refugees: “From 2002 to the end of December, 2005, a total of 271, 508 individuals were deported from Iran in comparison to 5,347 individuals from Pakistan” (Stigter, 2006, p. 113 from UNHCR, 2006). People without papers in Iran were taken by the policy and forcibly deported, and in 2001 Iran passed legislation imposing heavy fines on employers employing illegals. In 2008, the Government of Pakistan began to officially close refugee camps and deport refugees. The camps were destroyed for urban land use. These measures are expected to contribute to the continued flow of returnees. Since 2005, however, the flow of returnees has tapered off, despite the fact that millions of refugees still reside in Pakistan and Iran. According to the 2005 census of Afghan Refugees in Pakistan, 51 percent of refugees remaining were long-stayers who had arrived in Pakistan before 1981 (Government of Pakistan, 2005, p. 19 in Kronenfeld, 2008, p 51-52). 5 illustrates the primary reasons cited by refugees for not returning to Afghanistan. In addition to the reasons cited above the political situation in Afghanistan has deteriorated since 2005, which deters people from returning. Return to Afghanistan has been geographically uneven. Approximately 30 percent of returnees have settled in Kabul (Stigter, 2006, p. 114). The There has been minimal return to the south and southeast due to insecurity in those areas (Stigter, 2006). Compared to the total population in 2008 returnees accounted for 16 percent of the population, which is as high as 32 percent in the East and 20 percent in the Central region (UNCHR, 2008). The issue of refugee return in Afghanistan still presents many challenges. The underdevelopment of the country, especially in rural areas has led to a severe lack of basic infrastructure. High levels of poverty are abundant across the country. In retrospect, analysts have suggested that the high levels of return were too fast for Afghanistan's absorption rate and that further return should not be the priority at this time (Stigter and Monsutti, 2005). The country needs to be able to meet the needs of the current population before it can absorb further returnees. On the other hand, refugees are increasingly less welcome in Pakistan and Iran, despite the fact that they fill low-level jobs and contribute to the economy in both countries. The protracted refugee situation is thus continuing in an environment of competing geo-political interests. Circular Migration At present, circular migration is arguable the primary form of migration occurring between Afghanistan and Pakistan. It is important to note from the previous section that return migration of refugees does not necessarily mean the end of the migration cycle (Monsutti, 2008). Furthermore, in Mosutti's research with refugees in Pakistan and Iran, it became evident that the majority of refugees had returned to Afghanistan to see the conditions for themselves before making the decision not to return (2004). This suggests the occurrence of informal circular migration processes. In 2008 the UNHCR commissioned a study on cross border movements between Pakistan and Afghanistan (Altai Consulting, 2009). The study revealed that cross border migration is occurring at substantially higher levels than anticipated in movements of circular migration based on employment, social relations, and receiving essential services such as health care and education. The study was based on interviews with 1,007 migrants crossing to Pakistan, 1,016 migrants crossing to Afghanistan, and a counting exercise of people crossing the border. The counting exercises revealed that on average for a week in one morning or afternoon 11,297 entered and 16, 257 people exited Afghanistan. On a given day official numbers would show 138 exits, and the counting would show 23,934 exits, illustrating that official records substantially under represent cross-border flows (Altai Consulting, 2009). The survey portion of the study provided a strong picture of the types and reasons for cross-border migration. The vast majority of the migrants were males traveling alone (75.3 percent) (p. 35). The results indicate that 81.3 percent of interviewees indicated going back and forth on a regular basis (p. 30); 85.9 percent have lived in Pakistan for over a year (p. 31); 89.5 percent were planning to spend less than one year in Pakistan, and 19.7 percent had permanent residence in Pakistan (p. 33). 6 shows the primary reason cited for their travel to or from Pakistan. The majority of migrants (64.7 percent) were planning to work in low skilled professions (construction and retail) in Pakistan. A lack of employment opportunities in Afghanistan were driving them to find temporary work in Pakistan that would allow them to meet the basic needs of themselves and their families. This study supports the research of Monsutti in that Afghans are currently migrating to Pakistan on a temporary basis as a livelihood strategy or to maintain social networks. Monsutti states that, “After many years the migratory movement are highly organized, and the transnational networks become a major, even constitutive, element in the social, cultural, and economic life of Afghans” (Monsutti, 2008, p. 62). It also supports Monsutti's argument that return migration does not mean the end of the migration cycle, as the results indicate that a large number of these migrants were returnees. The current migration processes occurring between Pakistan and Afghanistan are a way of life for Afghans as their social network and economic opportunities span the border. 5. The Diaspora Three decades of conflict and displacement have led to the emergence of the Afghan Diaspora. The Afghan Diaspora grew out of two waves of emigration. The first wave was from 1980-1996 (Wescott, 1996). This wave was primarily comprised of the upper classes and individuals opposed to the communist regime (Wescott, 1996). The second wave occurred from 1996-2001 with the rise of the Taliban (Wescott, 1996). This wave was primarily comprised of minority groups, such as Shia Muslims, Sikhs, and Hindu's, and a large representation of women and children (Wescott, 1996). Today, the Afghan Diaspora is a highly transnational group with money, goods, information and people circulating between Afghans in different continents around the world (Braakman, 2005). Sizes of the Afghan Diaspora It is important to distinguish between the “near Diaspora” and the “wider Diaspora” in the case of Afghanistan. Many Afghan refugees are located in the neighbouring countries of Iran and Pakistan, while other have moved further abroad. The former are termed the “near Diaspora” and the later are called “wider Diaspora” (Koser and Van Hear, 2003). This section will refer to the Wider Diaspora. It is evident from 7 that the United States and Germany have been the primary destinations for Afghan migrants. The total estimated size of the Afghan Diaspora is 2,031,678 (including near and wide Diaspora's) (World Bank, 2007). 8 shows the stock of the Afghan born in the top ten receiving countries of the wider Diaspora. It is evident that the largest concentration of Afghans is in Germany, followed by the United States. In the 1970s Germany had the most liberally asylum policies, which attracted large numbers of refugees. Germany continues to be a country of preference as it has a large Afghan population (Braakman, 2005). Amoung the Afghan community in Germany, Hamburg is home to 22,000 Afghans and is known as ‘The Kabul of Europe' (Braakman, 2005). The Afghan Diaspora in Germany is well organized and has a number of associations (Zunzer, 2004). A key means of exchange for the Afghan community is through the online website of “Afghan German Online” () (Zunzer, 2004). Furthermore, the German Afghan community was highly engaged in the Bonn process in 2002. According to the 2006 US Census there were 65,972 Afghans, who were born in Afghanistan, in the US. The Afghan Embassy in the US, however, estimates that there are 300,000 Afghans in the US (The Embassy of Afghanistan, 2009). According to the US Census, 53 percent of the Afghan population entered before 1990, 28.3 percent entered between 1990-2000, and 18.5 percent entered since 2000 (US Census Bureau, 2006). Thus, the majority of Afghans in the US are from the first wave of Afghan migrants. The median Afghan household wage is US$ 34,973, which is $9,423 below the national average, and 27.7 percent of Afghan households are at the national poverty rate, compared to the national average of 9.8 percent (US Census Bureau, 2006). The reasons for the high rates of poverty are unknown. The Afghan population in the US is heavily concentrated in the San Francisco area, Northern Virginia, and Los Angeles (Robson and Lipson, 2002). The Afghan population in the US is diverse, reflecting the various ethnic backgrounds of Afghanistan. The majority of Afghans in the US are of Pashtun and Tajik origin, with a community of Uzbek minority in New York and small Hazara communities scattered around the country (Robson and Lipson, 2002). As such, there are Sunni Muslims, Shi'te Muslims, and Ismailis in the United States (Robson and Lipson, 2002). Involvement of the Afghan Diaspora in the Reconstruction Effort The Afghan Diaspora has been highly involved in the reconstruction effort. Zunzer (2004) states: “The diaspora played a significant political role in organizing a peaceful transition after the Nato military intervention in 2001/2002. Diaspora members played an important role during the Petersberg Talks, in the ongoing Bonn process of political transition, and as connectors between the international community, the national administrations, international civil society and the private sector” (p. 40). Members of the Diaspora received Ministerial positions in the interim government established from the Bonn Agreement, and President Hamid Karzai, himself had spent significant time in Pakistan and the US (Van Hear, 2002). At the end of the Bonn Agreement however, the Diaspora was split into four main groups (Van Hear, 2002). The first was the Northern Alliance, or United Front, which represented Kabul (Van Hear, 2002). The second was the Rome-based delegation representing the former King, Zaher Shah (Van Hear, 2002). The third was a group from Cyprus of intellectuals that were supported by Iran and had meeting in Cyprus for the past four years to discuss future options of Afghanistan (Van Hear, 2002). The final group was Peshawar that was primarily comprised of Pashtun refugees (Van Hear, 2002). These fractions illustrate that divisions continued to exist amoung the Diaspora. Despite the fractions in the Diaspora, Afghans have come together to assist in the reconstruction effort. There are four key programmes that were established to engage the Afghan Diaspora. First, the World Bank allocated 1.5 million US dollars for a fund to hire qualified Afghans to return to Afghanistan and assist in the reconstruction efforts. Secondly, the World Bank established the World Bank Afghanistan Directory of Expertise, which is a database of skilled Afghans and non-Afghans with experience in Afghanistan. This database has served to connect many qualified individuals with projects in Afghanistan. Third, the IOM established a Temporary Return of Qualified Nationals programme to engage the Afghan Diaspora in returning temporarily to work on training and capacity building projects. Finally, the Swiss Peace Foundation has established an Internet forum to create dialogue between civil society, the Diaspora, and government regarding peace in Afghanistan (Zunzer, 2004). In addition to these programmes, Afghan Diaspora groups are uniting on their own to build networks among themselves in an effort to get involved in the reconstruction effort. For instance, the Afghan Diaspora in the US has made significant contributions to the education sector in Afghanistan; “With investments in school construction and teaching, 6 million Afghan children were able to register for school, 34% of them being female” (The Embassy of Afghanistan, 2009). Both the financial and intellectual investments of the Diaspora in Afghanistan appear to be an integral piece in the reconstruction effort. 6. Migration and Development Transnational migration networks provide an essential contribution to development in Afghanistan. Skill flow out of Afghanistan has occurred for decades, but with the fall of the Taliban it appears small amounts of skill flow is being attracted to return to the country. The Diaspora in the west provides essential remittances that provide families with the funds needed to meet daily needs. Migration has been employed as a livelihood strategy in Afghanistan for decades and through transnational networks Afghanistan is receiving needed support for development and reconstruction. Brain Drain/ Skill Flow The skill flow out of Afghanistan presents a challenge in the reconstruction process. In the 1980s and 1990s the majority of Afghans who migrated to Europe, North America, or Australia were the country's elite from the upper and middle urban classes (Monsutti, 2008, p. 68). This group had the skills to seek better opportunities in the west and resulted in a brain drain from Afghanistan. In 2000, the World Bank cited the skilled emigration rate to be 13.2 percent and the emigration rate of physicians to be 9.1 percent (2009). This data, however, tells little of the current situation. In 2005, the World Health Organization stated that there were a total of 5,970 physicians and 14, 930 nurses and midwives in Afghanistan (2009). That is roughly one physician per 5, 000 people in Afghanistan. An opinion piece in the New York Times in 2006 stated that physicians in Afghanistan made roughly $100/ month and University professors earned less than $2 per month (Younossi, 9 Feb 2006). The same piece stated that, “When I asked university students whether they want to stay in Afghanistan or go to another country, an overwhelming majority said they want to emigrate”. The underdevelopment of Afghanistan is resulting in a continued skill flow from the country. Simultaneously, however, Afghans are returning to the country to assist in the reconstruction process. One example is as follows: “In Afghanistan, the transitional government identified a number of qualified persons in the justice sector, because under the Taliban rule the country had lost most of its judges. IOM was called by the transitional government to rebuild the educational and justice sectors. Some 4,000 qualified nationals enrolled in the database, giving their availability, and 400 persons went to Afghanistan. Thus, there was a need to develop modalities that would use these skills in the best possible way and through projects that made sure they do not lose the possibility to return in the host country” (Dall'Oglio in Roison, 2004). Data does not exist on the number of skills flowing in and out of the country, but anecdotal information suggests the flow is occurring in both directions. Due to the large deficit of skills created over the last two decades, however, it appears that without a significant inflow of skills that counteracts the decades of emigration and current emigration there will continue to be a skill deficit. A lack of skilled personnel contributes to continued underdevelopment, particularly in key areas such as the health and education sectors. Remittances and Development Remittances provide a key livelihood for migrant's families remaining in Afghanistan. The International Fund for Agricultural Development (IFAD)[2] estimated in 2006 that the annual remittance flows to Afghanistan were valued at 2,485 million US Dollars or 29.6 percent of GDP. This includes both formal and informal remittance flows. Alessandro Monsutti (2004) concludes from his research with Afghan migrants that an estimate of annual remittances to Afghanistan in 1995 would have been 50 million dollars annually (p. 220). Even if this not correct, Monsutti argues that the overall amount remitted would certainly exceed that of international humanitarian assistance (2004). In 2003 the World Bank conducted a National Risk and Vulnerability assessment of 11,200 households in Afghanistan. Table 6 shows the percentage of households receiving remittances, and for remittance receiving households the source country of the flows and the average per capita value of the households. The data is divided into five quintiles based on the economic status of the household, with Q1 representing the poorest households and Q5 the richest households. Table 6 highlights that the majority of remittances are received from family members outside of Pakistan and Iran, which aligns with previous data in this report as individuals in Pakistan and Iran cannot often afford to send remittances. Table 6 also highlights a large discrepancy in the per capita value of remittances between the richest and poorest households, with the richest households receiving 247 percent more. Table 6: Remittances Received Q1 (poorest) Q2 Q3 Q4 Q5 (richest) Total Percentage of Households Receiving Remittances 10.5 10.1 12.8 18.3 23.2 15.2 From Pakistan/ Iran (%) 35 32 32 28 27 31 From Other (%) 65 68 68 72 73 69 Amount of Remittances per capita (receiving households, US $) 19 28 33 38 47 34 Source: World Bank, 2005 (p. 174) Of the families receiving remittances, on average the remittances account for 20 percent of their expenditure, however, for the lowest quintile they account for 30 percent of expenditure (World Bank, 2005, p. 25). Remittances are a vital source of livelihoods for families receiving them. In the majority of cases remittances are used to meet basic needs such as food, clothing, and medicine (Monsutti, 2006). The benefits of the remittances are generally short-term, creating a dependency cycle (Monsutti, 2006). Few remittance receivers are able to accumulate assets such as constructing a house, purchasing land, and saving for the mahr and weddings (Stigter, 2006) or purchase luxury goods such as a car, camera, or televisions (Monsutti, 2006). Remittances for the most part, are essential to maintaining livelihoods for those who have returned or have not migrated. Remittances in Afghanistan do not only have a crucial economic component, but also have a significant social component. The remittance sending mechanism in Afghanistan is the Hawala system, which is based on social networks. A hawaladar is a half-merchant, half-banker whose expertise is in the transfer or money and goods (Monsutti, 2008). If an individual, a kargar, knows a hawaladar (they must belong to the same lineage or come from the same valley) then the kargar can go to directly to the hawaladar and if not he/she must go through a middleman. Once the hawalar has the request a letter is sent to his partners stating the details of the transaction and a letter is sent to the kargar's family stating the details. The hawaladar would either send the money directly, making a profit off the currency exchange, or use the money to purchase goods that are sent. The goods would be sold by a partner on the other end, who would use the profits to pay the kargar's family. This process can often have several partners and steps to get the money to the final partner near the family (Monsutti, 2008; Monsutti, 2006; Monsutti, 2004). The important aspect of the hawala system is that it is based on social relationships. The hawala system operates around the world and provides a functioning remittance system in the absence of formal banking institutions in Afghanistan. Within the hawala system there is tremendous trust that has been sustained through regular interactions occurring over a long periods of time. The hawala system has established a transnational network and cooperation system amoung Afghanis around the world. The remittance sending structure of the hawala has been essential to maintaining livelihoods in Afghanistan. It continues to be the primary remittance mechanism in Afghanistan today (Monsutti, 2008; Monsutti, 2006; Monsutti, 2004). Migration and remittances are essential to the development of Afghanistan. Remittances account for greater flows than humanitarian assistance to Afghanistan and are critical to sustain families and prevent further poverty. As the humanitarian assistance continues to decrease to Afghanistan, remittances will gain further importance for development. 7. Migration Policies and Programmes Migration policies in Afghanistan are organized and implemented through partnerships between the Islamic Republic of Afghanistan and international organizations. The key policy unit is The Ministry of Refugees and Repatriation (MoRR). The key international organizations involved in migration in Afghanistan are the International Organization for Migration (IOM), and the United Nations High Commissioner for Refugees (UNHCR). The Ministry of Refugees and Repatriation The Ministry of Refugees and Repatriation (MoRR) is responsible for implementing migration policies and programmes in Afghanistan. The MoRR exists under the Afghanistan National Development Strategy, which serves as the countries Poverty Reduction Strategy Paper (Islamic Republic of Afghanistan, 2008). The MoRR has initiated a number of migration management projects with cooperation from international organizations such as the UNHCR and IOM. The primary objectives of the MoRR are to ensure integration and resettlement, safe livelihood, provide employment opportunities, vocational trainings and legal support during repatriation for the returnees. The MoRR national policy priorities are based on the following five principles: “• Voluntary, gradual, safe and dignified return of refugees and their reintegration in their places of origin. • Ensuring reintegration and resettlement • Protecting their rights and privileges • Building the capacity of the households • Ensuring employment opportunities” (MoRR, 2007, p.7) These principles guide the programme planning of the MoRR (MoRR, 2007). MoRR has 34 branches in different provinces in Afghanistan and additional special branches out of the country to implement the strategy related to solving the problems of refugees and returnees (MoRR, 2007). MoRR has established permanent residential facilities in 50 townships located in 29 provinces to provide legal assistance, employment and educational opportunities (MoRR, 2007). International Organization for Migration The International Organization for Migration (IOM) in Afghanistan is based in Kabul and provides technical cooperation and capacity building to Afghan government institutions in managing migration (IOM, 2009). The IOM provides emergency relief to vulnerable displaced families; facilitates long-term return and reintegration to and within Afghanistan and stabilizes migrant communities. The IOM facilitates several programmes to provide emergency and post-conflict migration management services in Afghanistan. These programmes include (IOM, 2009): * Rapid Response Humanitarian Assistance - Assists refugees and migrants returning to Afghanistan in recent years; “many of them vulnerable without adequate shelter, food, water or mean to travel to their final destinations.” * Afghan Civilian Assistance Programme - Assists temporary and medium term displacement by providing assistance packages to those displaced by military activities. * Construction of Health and Education Facilities- Works with the Afghan Ministries of Public Health and Education to construct hospitals, midwifery training schools and teacher training colleges. * Support to Voter Registration- Provides support in capacity building for trained staff for the Independent Election Commission. * Return of Qualified Afghans- Coordinates the return of qualified Afghans to participate in the reconstruction process. According to IOM; “846 Afghan experts living abroad have returned to Afghanistan from 32 countries with IOM's assistance in order to participate in the rebuilding of their nation”. * Assisted Voluntary Return and Reintegration- Coordinates the voluntary return of failed asylum seekers from developing countries; “IOM has assisted over 7,600 Afghans with their returns, approximately 2,500 of whom received individually tailored reintegration assistance packages. Assistance includes training, self-employment, business start-ups and employment referral”. * Counter-Trafficking Initiative- Seek to provide awareness and protection to victims of Trafficking. * Passport and Visa Issuance Capacity Building- Provides passport and visa issuance to support the capacity building of the Afghan Government. * Border Management- Seeks to provide support to managing the border between Afghanistan and Pakistan and Iran. * Support to Provincial Governance- Provides grants for sub-projects that are based run by the Provincial governments and their partners. These programmes provide the core of the activities of the IOM in Afghanistan and are essential to providing services to Afghans. The IOM in cooperation with the European Union established the Return, Reception and Reintegration of Afghan Nationals to Afghanistan (RANA) programme in 2004. This programme seeks to provide additional assistance to Afghan nationals returning to Afghanistan from the EU member states (IOM, 2007). The objective of the programme was to encourage and provide for sustainable return to Afghanistan. The programme provided training, employment, on-the-job training, and self-employment assistance to returnees (IOM, 2007). As of 2007, a total of 2,097 Afghan refugees had utilized the programme to return, of which 35.2 percent were from The Netherlands and 35 percent were from Germany (IOM, 2007, p. 6). United Nations High Commissioner for Refugees The United Nations High Commissioner for Refugees (UNHCR) has had a key role in the protection and reintegration of refugees in Afghanistan. The UNHCR provides immediate and emergency services, as well as long-term services to returnees. Since 2002 the UNHCR has “supported the construction of more than 181,000 shelters in rural areas benefiting over 1 million homeless returnees” (UNHCR, 2009). The UNCH also provides services such as developing water points: “9,415 water points have been completed under UNHCR's water programme in high or potential return areas, as well as those hit by drought” (UNCHR, 2009). In addition, UNHCR has provided a limited number of income-generating projects in areas of high return to assist returnees in building livelihoods (UNCHR, 2009). The UNCHR also provides services to Afghan refugees in Pakistan and Iran. This includes running the refugee camps in Pakistan and providing voluntary return centres in both countries. 8. Migration Relationship with the Netherlands The Netherlands and Afghanistan The Netherlands has supported Afghanistan in the reconstruction effort since the fall of the Taliban in 2001 through humanitarian aid, development assistance, and the deployment of Dutch troops. The Dutch effort is targeted to fighting poverty in Afghanistan and helping to establish stability in the region. The Netherlands is a member of the Afghanistan Compact and pledged 10 million USD to support Afghanistan in 2006. The Netherlands is the country lead in the area of good governance and provides assistance for elections and developing a democratic state (Buitenlandse Zaken, 2006). In 2006, 1,400-2,000 Dutch troops entered Afghanistan and established responsibility for the province of Uruzugan in Southern Afghanistan (Buitenlandse Zaken, 2009). At that time the Dutch committed troops to the NATO International Security and Assistance Force in Afghanistan until 2008. The mission has been extended to 1 August 2010, with a reduction in troops to 1,100 (The Netherlands UK Embassy). The Dutch mission has been active in providing security and development aid to Uruzugan. The Netherlands development assistance to Uruzugan has included the building of infrastructure, support of education, women and girls, and health care. At this time, 15 schools have opened in the region and seven large health centers and there are plans to establish a total of 78 new schools and provide further health support and services. Saffron corns have been distributed in the region with cultivation lessons to help farmers establish a crop that demands a high price on global markets. The Dutch have supported the establishment of a radio service to connect the people of Uruzugan with the Afghanistan government. Finally, microcredit programs have been established to provide economic opportunities to men and women in the region. Further initiatives are underway in Uruzugan, but it is evident that a holistic approach has been taken to provide development assistance to the area (Ministry of Foreign Affairs, 2009). Migration Flows between the Netherlands and Afghanistan Immigration from Afghanistan to the Netherlands was virtually inexistent prior to 1985. 9 illustrates the number of asylum applications from Afghanistan to the Netherlands from 1980- 2008. It is evident that the majority of asylum applications were received during the Taliban rule of 1992-2001. Since the fall of the Taliban, the number of asylum applications has significantly fallen. 10 shows the migration motivations of individuals to the Netherlands. It is evident from 10 that the vast majority of migrants are asylum-seekers. The numbers for family reunification have followed a relatively similar trajectory as asylum applications with a few year lag, which rationally illustrates that asylum-seekers apply for family reunification once they have received their status. 10 also illustrates that the number of students and labour migrants to the Netherlands are negligible. It can be deduced from the data that the majority of Afghans to the Netherlands are fairly recent migrants that is within the last 20 years and have received residence in the Netherlands based on refugee status. Emigration numbers of Afghans from the Netherlands have been small. 12 shows the number of Afghan emigrations from the Netherlands from 1995-2008. In 2003, the Netherlands, Afghanistan, and the UNHCR signed a tripartite Memorandum of Understanding on voluntary return migration from the Netherlands to Afghanistan (UNHCR, 2009). This agreement provided further assistance to Afghan's returning to Afghanistan. It is possible that this program contributed to the increase in return numbers illustrated in 11 from 2003. The data indicates that Afghan immigration to the Netherlands has been more pronounced than Afghan emigration. The Afghan Community in the Netherlands In 2009 there were 37,709 Afghans living in the Netherlands (CBS, 2009). 12 shows the age and gender distribution of the Afghan population in the Netherlands in 2008. 12 illustrates that there are slightly more males (54 percent) than females (46 percent) living in the Netherlands. It is also evident that the population of Afghan's in the Netherlands is a young population with 90 percent being under the age of 50 years. Remittance sending from the Netherlands to Afghanistan is estimated at €79,664 (Siegel et al., 2009). In a Remittance corridor analysis conducted by Siegel et al. (2009) on 180 individuals in the Netherlands, 40 percent had sent remittances in the last 12 months to Afghanistan. The hawala system is widely used in sending remittances from the Netherlands to Afghanistan. In general the amounts remitted are between €100- €300 (Siegel et al., 2009). The remittances are primarily used in Afghanistan to meet daily needs. The Afghan community in the Netherlands has grown rapidly since the early 1990's. The Afghan community is young and the vast majority of Afghans have come to the Netherlands as refugees. It is thus not surprising that there is over 60 percent unemployment of Afghans in the Netherlands and over 50 percent of the population is on social assistance (Siegel et al., 2009). 9. Future Perspectives of Migration High levels of multifaceted migration flows have been prevalent in Afghanistan for the last 30 years and the evidence indicates that these flows will continue in the future. Three key reasons for continued migration can be noted. First, migration in and from Afghanistan has been motivated by insecurity, underdevelopment, severe poverty, and lack of opportunities. Unfortunately, all these conditions remain in Afghanistan at present. This alone suggests that migration will continue. Secondly, the UNHCR predicts that the population of Afghanistan will be 97,324,000 in 2050 (Stigter and Monsutti, 2005, p.3). Afghanistan's economy is based on agriculture where the rural landscape is already overpopulated. The high levels of population growth indicate that the rural communities will not be able to support the population, which will lead to increasing migration flows. Third, the evidence has illustrated that Afghans are a highly mobile and resilient people. The World Bank states: “Afghans are a resourceful, resilient, creative, opportunity-seeking, and entrepreneurial people (as witnessed by the high incidence of labor migration, entrepreneurial activity wherever they are located, trading networks, and remittances)” (2005, p.147). Afghan culture and historical migration patterns of the Afghan people provides strong indication that migration will continue from Afghanistan. The continuation of high levels of migration flows will pose many challenges for Afghanistan. Primarily, the skill drain will become a more acute issue, as the population increases and skilled individuals are needed to meet the needs of the population. Retaining skilled workers will be a great challenge for Afghanistan if the situation in the country does not improve. In addition to emigration, there are also questions of future return migration to Afghanistan. The current refugee situation between Afghanistan and Pakistan and Iran has promoted much debate regarding the competing geo-political interests. Stigter and Monsutti (2005) argue that repatriation from Pakistan and Iran of all Afghan refugees is not feasible and would have a negative impact on the reconstruction efforts in Afghanistan. Stigter and Monsutti (2005) propose the following recommendations: “Establish a bilateral labour migration framework that provides a clear legal identity and rights for Afghan labourers in Iran; Provide easier access to passports for Afghans; Increase awareness of the contribution, both in labour and otherwise, of Afghans to the Iranian and Pakistani economy; and In line with international conventions, continue to uphold the refugee status and protection of the most vulnerable” (p. 2). This approach clearly provides for the increased rights and protection of Afghan refugees. It also suggests that Afghan refugees be permitted the legal right to remain in Pakistan and Iran in the future. Alternatively, however, Pakistan and Iran are seeking the decrease the rights of Afghan refugees, although they positively contribute to the economy in both countries. This situation poses future challenges as to how the refugee situation in Pakistan and Iran will be addressed. That is, if the refugees will be forcibly repatriated, or allowed to reside in Pakistan and Iran. Overall, it is evident that migration flows from and to Afghanistan will continue in the near future. Stigter and Monsutti state (2005): “For many migration has become a way of life: it is now highly organized and the transnational networks that have developed to support it are a major, even constitutive, element in the social, cultural and economic life of Afghans” (Stigter and Monsutti, 2005, p. 3). Migration is embedded in the Afghan way of life and will continue to be a key element of the culture, social and economic fabric. 10. Conclusion Afghanistan has experienced one of the largest migration flows of any country in the world over the last three decades. These flows have been multifaceted, but have been primarily driven by conflict and insecurity, and the vast underdevelopment of the country. Through the periods of war and now in a time of reconstruction, migration continues to be a key livelihood strategy of Afghan families. Afghans have complex webs of migration that are based on historical, ethnic, cultural and social networks. In particular Afghanistan has strong migration relationships with its neighbors Pakistan and Iran. Flows from Afghanistan to Pakistan and Iran have occurred throughout the last century. Monsutti states: “Channels of pre-established transnational networks exist between Afghanistan, Pakistan and Iran, as the movement of individuals to seek work, to escape drought or to flee war has been a common experience in the whole region…Many Afghans have been shifting from one place to the next for years- some never returning to their place of origin, others only a temporary basis before deciding to return into Iran, Pakistan, or further afield” (Monsutti, 2008, p. 61). These transnational networks have aided the transmission of money, capital, goods, and ideas between Afghans around the world. The Hawala system is based on social networks and spans to virtually all corners of the globe connecting Afghani's. It is the most widely used method for Afghans to send remittances and goods back to their country. The engagement of the Afghan Diaspora both financially and socially has contributed to the reconstruction of the country. Skilled Afghans have returned to their country both temporarily and permanently to design policies and programmes or work in their country to assist in the rebuilding effort. Financial remittances sent to Afghanistan are used primarily to meet families daily needs and comprises a major source of income for remittance receiving families. Afghanistan continues to face many challenges in the reconstruction effort including the management of returned refugees, managing migration relationships with Pakistan and Iran, IDPs return and rapid urbanization. The country has experienced difficulty in absorbing the large rates of return and poverty is high. Retaining the highly skilled poses a great challenge to the country at a time when skilled workers are in demand. However, until significant change occurs in the form of political stability, peace, development, infrastructure, and poverty alleviation, it can be assumed that high levels of migration will continue to occur out of Afghanistan. Migration has been a way of life for Afghani's for decades and it will continue to be a key survival strategy. References Altai Consulting. (2009). Study on Cross Border Population Movements Between Afghanistan and Pakistan. United Nations High Commission for Refugees. Retrieved 23 November 2009 from Braakmann, M. (2005). Roots and Routes. Questions of Home, Belonging and Return in an Afghan Diaspora. Buitenlandse Zaken. (2006). The Netherlands in Afghanistan. The Ministry of Foreign Affiars, The Netherlands. Central Bureau voor de Statistiek. (2009). Afghanistan. Retrieved from Central Intelligence Agency. The World Factbook Afghanistan. Retrieved 15 November 2009 from Internal Displacement Monitoring Centre. 2008. Afghanistan: Increasing Hardship and limited support for Growing Displaced Population. Retrieved 22 November 2008 from International Organization for Migration (IOM). (2003). Trafficking in Persons An Analysis of Afghanistan. Kabul, Afghanistan. International Organization for Migration (IOM). (2007). Return, Reception and Reintegration of Afghan National to Afghanistan.” International Organization for Migration (IOM). (2009). Country Profile: Afghanistan. Retrieved 2 December 2009 from IRIN. (2005). Afghanistan: Child Marriage still Widespread. Retrieved 23 November 2009 from IRIN. (2007). Afhganinstan: Kabul facing unregulated urbanization. Retrieved 3 December 2009 from Islamic Republic of Afghanistan. (2008). Refugee's Retunees and IDP's Sector Strategy. Retrieved 2 December 2009 from Islamic Republic of Afghanistan. (2009). Afghanistan National Development Strategy First Annual Report. Retrieved 30 November 2009 from siteresources.worldbank.org/AFGHANISTANEXTN/.../ANDSreport.pdf Jazayery, L. (2002). The Migration-Development Nexus: Afghanistan Case Study. International Migration, 40(5), 231254. Koser, K., and Van Hear, N. (2003). Asylum migration and implications for countries of origin. World Institute for Development Economics Research. Discssion Paper No. 2003/20. Retrieved 2 December 2009 from Kronenfeld, D. (2008). Afghan Refugees in Pakistan: Not All Refugees, Not Always in Pakistan, Not Necessarily Afghan? Journal of Refugee Studies, 21(1), 43-63. Livingston, I., Messera, H., and Shapiro, J. (2009). Afghanistan Index. Brookings Institute. Retrieved 23 November 2009 from Ministry of Foreign Affairs. (2009). Afghanistan. Retrieved 15 November 2009 from Ministry of Refugees and Repatriation (MoRRO. (2007). Ministry Strategy for the Afghanistan National Development Strategy. Retrieved 2 December 2009 from. Monsutti, A. (2004). Cooperation, remittances, and kinship amoung the Hazaras. Iranian Studies, 37(2), 219-240. Monsutti, A. (2006). Afhgan Transnational Networks: Looking Beyond Repatriation. Afghan Research and Evaluation Unit. Kabul, Afghanistan. Monsutti, A. (2008). Afghan Migratory Strategies and the Three Solutions to the Refugee Problem. Refugee Survey Quarterly, 27(1), 58-73. North Atlantic Treaty Organization (NATO). (2009). NATO's Role in Afghanistan Retrieved 23 November 2009 from Poppelwell, T. (2007). Afghanistan. Retrieved 15 November 2009 from Forced Migration Online Robson, B. and Lipson, J. (2002). The Afghans Their History and Culture. Retrieved 2 December 2009 from Roison, C. 2004. The Brian Drain: Challenges and Opportunities for Development. The United Nations Chronicle. Retrieved 22 November 2009 from Siegel, M., Vanore, M., Lucas, R., and de Neubourg, C. (2009). The Netherlands- Afghanistan Remittance Corridor Study. Ministry of Foreign Affairs, The Netherlands. Stigter, E. and Monsutti, A. (2005). Transnational Networks: Recognising a Regional Reality. AREU. Retrieved 2 December 2009 from: Stigter, E. (2006). Afghan Migratory Strategies- An Assessment of Repatriation and Sustainable Return in Response to the Convention Plus. Refugee Survey Quarterly, 25(2), 109-122. The Asia Foundation. (2006). Afghanistan in 2006: A Survey of the Afghan People. Retrieved 25 November 2009 from The Embassy of Afghanistan. (2009). Afghan Diaspora. Retrieved 2 December 2009 from Transparency International. (2009). Corrporate Perceptions Index. Retrieved 30 November 2009 from Turton, D. and Marsden, P. (2002). Taking Refugees for a Ride? The politics of refugee return to Afghanistan. Afghanistan Research and Evaluation Unit. Kabul, Afghanistan. United Nations Assistance Mission in Afghanistan. (2009). Political Affairs. Retrieved 30 November 2009 from United Nations Development Program. (2009). Human Development Report 2009 Statistics. Retrieved 30 November 2009 from UNHCR. Global Report 2001. Retrieved 18 November 2009 from UNHCR. Global Report 2002. Retrieved 18 November 2009 from UNHCR. Global Report 2003. Retrieved 18 November 2009 from UNHCR. Global Report 2004. Retrieved 18 November 2009 from UNHCR. Global Report 2005. Retrieved 18 November 2009 from UNHCR. Global Report 2006. Retrieved 18 November 2009 from UNHCR. Global Report 2007. Retrieved 18 November 2009 from UNHCR. Global Report 2008. Retrieved 18 November 2009 from UNHCR. (2008). National Profile of Internal Displaced Persons (IDPs) in Afghanistan. Retrieved 23 November 2009 from UNHCR. (2008). Pakistan: Violence marks closure of Afghan Refugee Camps. Retrieved 25 November 2009 from UNHCR. (2009). Afghanistan Country Operations Profile. Retrieved 2 December 2009 from UNHCR. (2009). Afghanistan Situation Operational Update. Retrieved 2 December 2009 from: UNHCR. (2009). Tripartite Memorandum of Understanding Between the Government of the Netherlands, the Transitional Islamic State of Afghanistan, and the United Nations High Commissioner for Refugees (UNHCR). Retrieved 25 November 2009 from United States Census Bureau. (2006). Selected Population Profile in the United States: Afghan. American Community Survey. Retrieved 2 December 2009 from Van Hear, N. (2002). From ‘Durable Solutions' to ‘Transnational' Relations: Home and Exile Amoung Refugee Diasporas. Centre for Development Research Working Paper. 02.9 Wescott, C. (2006). Harnessing Knowledge Exchange Amoung Overseas Professional of Afghanistan, People's Republic of China, and Philippines. Prepared for the Labour Migration Workshop New York, 15- March 2006. World Bank. 2009. 2009 Afghanistan Economic Update. Retrieved 20 November 2009 from World Bank. 2009. Migration and Remittances Factbook: Afghanistan. Retrieved 22 November 2009 from World Health Organization. 2009. Detailed Database Search. Retrieved 22 November from. Zunzer, W. (2004). Diaspora communities and civil conflict transformation. Berghof Occasional Paper. No. 24 [1] The UNHCR identifies an inconsistency in the statistics as come statistics measure the number of families and individuals and some only measure the number of families. In the latter case, the number of families was multiplied by six to get the number of individuals, although it is known that many families are much larger than six (2008, p.6) [2] The methodology for this estimate was based on tabulations from numerous surveys previously conducted by other organizations. For details on the methodology see:
http://www.ukessays.com/dissertation/examples/geography/migration-in-afghanistan.php
CC-MAIN-2016-07
en
refinedweb
{-| Module : Text Text.HTML.Download(openURL, openItem) where import System.IO import System.IO.Unsafe <- unsafeInterleaveIO $ readResponse hndl return (c:cs) -- | Open a URL (if it starts with @http:\/\/@) or a file otherwise openItem :: String -> IO String openItem x | "http://" `isPrefixOf` x = openURL x | otherwise = readFile x
http://hackage.haskell.org/package/tagsoup-0.4/docs/src/Text-HTML-Download.html
CC-MAIN-2016-07
en
refinedweb