poseLib
PoseLib is based on a donation system: if poseLib is useful to you or your studio, you can make a donation to reflect your satisfaction. Thanks for using poseLib! 😀
Number of times downloaded: ~ 23,000
Number of donations: 13
It is strongly advised that you backup your existing poseLib directory before you use the new version!
COMPATIBILITY:
PoseLib supports Maya 2011, 2012, 2013, 2014, 2015 and 2016.
DOWNLOAD: Last updated 8 November 2015
Click on the icon to download poseLib version 6.6.0:
- Fixed support for Macs (thanks Ludo!)
- Fixed problem with not being able to create a new project
- Fixed bug with switching projects
- Fixed choosing text editor for Macs and PCs
(See the complete history at the bottom of this article)
INSTALLATION:
For Windows:
1) Copy poseLib.mel and poseLibModule.py to your …\Documents\maya\20xx\scripts folder.
2) Copy poseLib.png to your …\Documents\maya\20xx\prefs\icons folder.
3) Restart Maya if it was open.
4) Type:
source poseLib.mel;
poseLib;
For OSX:
1) Copy poseLib.mel and poseLibModule.py to /Users/<yourname>/Library/Preferences/Autodesk/maya/20xx/scripts.
2) Copy poseLib.png to /Users/<yourname>/Library/Preferences/Autodesk/maya/20xx/prefs/icons.
3) Restart Maya if it was open and source poseLib.mel. Finally call the command poseLib.
FEATURES:
DIRECTORY STRUCTURE:
PoseLib stores poses in a “category” directory, which is itself stored in a “character” directory, which is itself stored in a “casting” directory, which itself resides in an “archetype” directory. Sounds complicated, so here’s a diagram of the way things are organized:
The Archetype (or “Type”) directory: This is where the different types of assets are separated. Usually you find “chars” for characters, “sets” for sets, “props” for props, “cams” for cameras, etc…
The Casting (or “Cast”) directory: This is where you separate the main actors (“main”, “primary”, “hero”, etc…) from the rest (“crowd”, “secondary”, etc…).
The Character directory: This is where you find the names of the characters, or the sets, or the props, depending on which branch you’re in at the archetype level.
The Category directory: This is where you find the poses themselves. Categories could be “face”, “body”, etc…
A valid question would be “Why do we need archetype or casting folders?” Because poseLib is a tool used in production on movies such as “Despicable Me” and “The Lorax”, and we have hundreds of characters, many sets, props, etc… And it would quickly become tedious for artists to have to scroll through huge messy lists of names. Separating things by type and importance allows us to keep things clean and readily accessible.
The poses themselves are .xml files and the icons are .bmp files. So a pose displayed as “toto” in the UI is made up of two files: toto.xml (which stores all the controls and attributes settings), and toto.bmp (the icon captured when the pose was created).
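Since a pose is just an XML file of control and attribute settings, it can be inspected with a short script. The schema below is hypothetical (poseLib’s real file layout may differ); it only illustrates the controls-plus-attributes idea:

```python
import xml.etree.ElementTree as ET

# Hypothetical pose file layout -- the real poseLib schema may differ.
POSE_XML = """
<pose author="seith">
  <control name="head_ctrl">
    <attr name="rotateX" value="12.5"/>
    <attr name="rotateY" value="-3.0"/>
  </control>
</pose>
"""

def read_pose(xml_text):
    """Return {control: {attribute: value}} from a pose XML string."""
    root = ET.fromstring(xml_text)
    pose = {}
    for ctrl in root.findall("control"):
        attrs = {a.get("name"): float(a.get("value"))
                 for a in ctrl.findall("attr")}
        pose[ctrl.get("name")] = attrs
    return pose

print(read_pose(POSE_XML))
```

A tool applying the pose would then walk this dictionary and set each attribute on the matching control in the scene.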
LIBRARY STATUS:
The library status is either “Private” or “Public”. The “private” path should point to your private library, where you store your poses and organize things the way you like. The “public” path should lead to the poses that are available to other animators.
Again, this is most useful if you’re in a studio structure and you need to share poses while keeping things separated between your own playing ground and the common library. If you don’t need that, then the private path is the only one you’ll ever care about.
WORKFLOW:
Creating a new pose:
- Select the controls for which you want to record a pose.
- Click on the “Create New Pose” button.
- Move the camera in the little preview camera frame to define how the icon will look.
- Click “Create Pose”.
Once the pose is created, it will appear automatically in the list of poses available (they’re sorted in alphabetical order).
Note: You can move poses around by middle-mouse clicking them and drag-and-dropping them where you want.
Applying a pose:
Just click on a pose icon. It works differently depending on what you’ve selected:
- If you don’t have anything selected, poseLib will attempt to apply the entire pose.
or
- If you’ve selected some controls, the pose will just be applied to those. You define the amount of the pose being applied with the “ALT/CTRL Pose” slider.
Note: Remember you can also apply a pose only to the selected channels in the channelBox!
Editing a pose:
Right-click on the pose icon; a menu will appear, letting you: Rename, Move, Replace, Delete, or Edit the pose.
Replacing the pose simply means that you don’t have to go through the process of re-capturing a new icon.
The edit sub-menu will let you: Select the pose’s controls (if you don’t remember what was part of the pose), Add/Replace the selected controls (they’ll be added if they weren’t part of the pose, or replaced if they are), or Remove the selected controls. The “Output Pose Info” option will print out information (pose author, when the pose was created, modified, etc…) about the pose in the script editor.
NAMESPACES:
When using a referenced rig with a namespace, you have three choices:
1) Use Selection Namespace: This means that when you click on a pose with some controls selected, poseLib will apply the pose if those controls were part of the pose, regardless of the namespace stored in the pose. This lets you apply a pose recorded with a certain namespace to the namespace of your selection. For example, if the pose only contains a control named “Tintin:head_joint” and your current selection is “Gandalf:head_joint”, the pose will be applied. Basically this lets you apply a pose from one character to another.
2) Use Pose Namespace: This means that poseLib will only apply the pose if the pose’s controls and namespaces are present in your selection (or in the scene if you don’t have anything selected). Again, if the pose only contains a control named “Tintin:head_joint” and your current selection is “Gandalf:head_joint”, the pose will NOT be applied. This is so you can record a single pose containing multiple characters and still only apply the pose to the one selected character.
3) Use Custom Namespace: This means that the pose will only be applied to the controls whose namespace matches the one defined in the text field.
Note: The aforementioned namespace options play no role when saving poses: they are only relevant when applying poses.
OPTIONS:
Archetypes/Casting:
Now if you want to create a new entry for a character name or a category, just click on the “Edit Options” button.
Display:
Paths:
Text Editor:
This is where you choose the text editor to be launched when manually editing a pose.
TROUBLESHOOTING:
The icons for my poses come up as red squares:
Check the Images path of your current project (in the Project Manager). It should just say “images” or something similar.
I keep getting the “# Error: NameError: name ‘poseLibModule’ is not defined” error:
That’s because you have to source poseLib before you launch it. Please follow carefully step 4 of the installation instructions. There are a bunch of similarly named directories in similar places; make sure you didn’t mistakenly copy the files to the wrong ones.
I am sure I copied the files to the right folders, but I still get the “No module named” error:
Then try to edit the “Maya.env” file in your “…\maya\20xx” directory and add the following line:
PYTHONPATH = C:\[…]\maya\20xx\scripts;
… Where you need of course to indicate the correct path (where you copied the files), as well as the correct Maya version.
Note: Be aware that you could have several Maya.env files in different directories (e.g. in “…/Documents/maya” or “…/Documents/maya/20xx”). But Maya will only look at ONE of them (the first one it finds). So make sure it’s the right one!
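To see which candidate file would win, a small Python check can help; the directory layout below is illustrative and may differ on your machine, so treat it as a sketch rather than Maya’s exact search order:

```python
import os

def maya_env_candidates(maya_version="2016", home=None):
    """List places a Maya.env file might live, in a plausible search order.
    Paths are illustrative -- check Autodesk's docs for your exact setup."""
    home = home or os.path.expanduser("~")
    return [
        os.path.join(home, "Documents", "maya", maya_version, "Maya.env"),
        os.path.join(home, "Documents", "maya", "Maya.env"),
    ]

def first_existing(paths):
    """Return the first path that actually exists, mimicking
    'Maya only looks at the first Maya.env it finds'."""
    for p in paths:
        if os.path.isfile(p):
            return p
    return None
```

Running `first_existing(maya_env_candidates("2016"))` tells you which file to edit.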
seith[at]seithcg[dot]com
History:
v6.5.0:
- Support for Maya 2014 and up!
- Reorder icons!
- Colored icons!
- Too many changes to list here!
v6.2.3:
- Fixed right-click menu not displaying properly in Maya 2013.
- Now only shows poses whose file actually exists (no more empty red icons).
v6.1.7:
- Fixed a bug with the Options window not opening the very first time it’s called.
- Fixed a bug where old poses conversion would fail due to CRLF symbols.
- Fixed a bug with old poses conversion ignoring the last character of a pose file.
- The projects menu in the Path options tab now accurately reflects the current poseLib project.
- Removed useless warnings when a character or category is not found.
- Fixed a bug with the Public path not being properly updated.
- PoseLib now handles cases when switching to a project without an existing proper directory structure.
- Fixed a bug when switching between Private and Public library status.
- Fixed a bug with setting a project to a networked path.
- Fixed a bug when selecting a pose’s controls while using a custom namespace.
- Conversion of old poses does not truncate the first word before a “_” character in the pose name anymore.
- Fixed a bug when creating or applying a pose with controls devoid of keyable attributes.
v6.0.8:
- Fixed a nasty bug that could crash Maya when deleting a pose.
v6.0.7:
- Fixed a bug with blendshapes when saving and applying poses.
- Fixed erroneous user warning reporting success when the pose was not applied.
v6.0.1:
- Added support for Macs (OSX).
I still use an older version of Maya and as a result the script doesn’t support Maya 2017 (yet), sorry.
I agree a mirror tool is indispensable but it is not really the purpose of poseLib: rigs are very different between studios/productions and it would be impossible to try and guess all the varying rigging configurations. Usually in a studio the rigging department provides tools for things like mirroring as it is very much linked to choices made during the process of building the characters.
Sorry, I can’t download it from the link. It tells me that the file isn’t there anymore.
Hi, I just fixed the link. Sorry about that!
Given a tree, find the length of the longest path in it.
(1->2->3 is the longest path in the given tree, which has a length of 2 units.)
Algorithm-
1. Loop through the vertices, starting a new depth first search whenever the loop reaches a vertex that has not already been included in previous DFS calls.
2. A dist[] array is constructed to record the distances of all the vertices from the starting vertex, i.e. the vertex on which DFS is called.
3. The maximum of all the values in the dist[] array is found, along with the corresponding vertex number. Let it be v.
4. Now, DFS is called on v and the dist[] array records the distances of all the vertices from the vertex v.
5. The maximum of all the values in the dist[] array is the final answer: it is the length of the longest path in the tree.
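Before looking at the full code, the five steps above can be sketched in Python (a compact version of the same two-pass DFS idea; vertex labels follow the 1-based input convention):

```python
from collections import defaultdict

def tree_diameter(edges):
    """Length (in edges) of the longest path in a tree, via two DFS passes."""
    if not edges:
        return 0
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    def dfs(start):
        # Iterative DFS returning (farthest_vertex, distance_to_it).
        dist = {start: 0}
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    stack.append(v)
        far = max(dist, key=dist.get)
        return far, dist[far]

    v, _ = dfs(edges[0][0])   # pass 1: farthest vertex from an arbitrary start
    _, length = dfs(v)        # pass 2: farthest distance from that vertex
    return length

print(tree_diameter([(1, 2), (2, 3)]))  # -> 2
```

For a tree, the farthest vertex from any starting point is always an endpoint of a longest path, which is why the second DFS yields the answer.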
Code-
#include <stdio.h>
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;

vector<int> arr[10005];
int n, a, b, j = 0;
int color[10005], dist[10005];

// d denotes the distance of the node on which DFS is called
// from the starting vertex.
void dfs(int node, int d)
{
    color[node] = 2; // marking the visited vertices
    dist[node] = d;
    for (int i = 0; i < arr[node].size(); ++i)
    {
        if (color[arr[node][i]] == 0)
            dfs(arr[node][i], d + 1);
    }
}

int main()
{
    int p, i;
    p = 1; j = 0;
    // n is the number of vertices
    scanf("%d", &n);
    for (i = 0; i < n; i++)
        arr[i].clear();
    for (i = 0; i < n; ++i)
    {
        color[i] = 0; dist[i] = 0;
    }
    // a tree has n-1 edges
    while (j < n - 1)
    {
        // a and b denote the starting and ending vertices of an edge
        scanf("%d %d", &a, &b);
        arr[a - 1].push_back(b - 1);
        arr[b - 1].push_back(a - 1);
        j++;
    }
    for (i = 0; i < n; ++i)
    {
        if (color[i] == 0)
            dfs(i, 0);
    }
    int max = 0;
    // p denotes the vertex with maximum distance
    for (i = 0; i < n; i++)
    {
        if (dist[i] >= max)
        {
            max = dist[i];
            p = i;
        }
    }
    // resetting the arrays to call DFS again
    for (i = 0; i < n; ++i)
    {
        color[i] = 0; dist[i] = 0;
    }
    // calling dfs on the vertex which has max distance
    dfs(p, 0);
    max = 0;
    for (i = 0; i < n; i++)
    {
        if (dist[i] >= max)
        {
            max = dist[i];
            p = i;
        }
    }
    cout << max << endl;
    return 0;
}
Creating Your First Elm App: From Authentication to Calling an API (Part 1)
TL;DR: In this tutorial, we'll build an Elm app that calls an API to retrieve random quotes. In part two, we'll add authentication using JSON Web Tokens. The full code is available at this GitHub repository.
All JavaScript app developers are likely familiar with this scenario: we implement logic, deploy our code, and then in QA (or worse, production) we encounter a runtime error! Maybe it was something we forgot to write a test for, or it's an obscure edge case we didn't foresee. Either way, when it comes to business logic in production code, we often spend post-launch with the vague threat of errors hanging over our heads.
Enter Elm: a functional, reactive front-end programming language that compiles to JavaScript, making it great for web applications that run in the browser. Elm's compiler presents us with friendly error messages before runtime, thereby eliminating runtime errors.
Why Elm?
Elm's creator Evan Czaplicki positions Elm with several strong concepts, but we'll touch on two in particular: gradual learning and usage-driven design. Gradual learning is the idea that we can be productive with the language before diving deep. As we use Elm, we are able to gradually learn via development and build up our skillset, but we are not hampered in the beginner stage by a high barrier to entry. Usage-driven design emphasizes starting with the minimimum viable solution and iteratively building on it, but Evan points out that it's best to keep it simple, and the minimum viable solution is often enough by itself.
If we head over to the Elm site, we're greeted with an attractive featureset highlighting "No runtime exceptions", "Blazing fast rendering", and "Smooth JavaScript interop". But what does this boil down to when writing real code? Let's take a look.
Building an Elm Web App
In the first half of this two-part tutorial, we're going to build a small Elm application that will call an API to retrieve random Chuck Norris quotes. In doing so, we'll learn Elm basics like how to compose an app with a view and a model, how to update application state, and how to implement common real-world requirements like HTTP. In part two of the tutorial, we'll add the ability to register, log in, and access protected quotes with JSON Web Tokens.
If you're familiar with JavaScript but new to Elm the language might look a little strange at first—but once we start building, we'll learn how the Elm Architecture, types, and clean syntax can really streamline development. This tutorial is structured to help JavaScript developers get started with Elm without assuming previous experience with other functional or strongly typed languages.
Setup and Installation
The full source code for our finished app can be cloned on GitHub here.
We're going to use Gulp to build and serve our application locally and NodeJS to serve our API and install dependencies through the Node Package Manager (npm). If you don't already have Node and Gulp installed, please visit their respective websites and follow instructions for download and installation.
Note: Webpack is an alternative to Gulp. If you're interested in trying a customizable webpack build in the future for larger Elm projects, check out elm-webpack-loader.
We also need the API. Clone the NodeJS JWT Authentication sample API repository and follow the README to get it running.
Installing and Configuring Elm App
To install Elm globally, run the following command:
npm install -g elm
Once Elm is successfully installed, we need to set up our project's configuration. This is done with an
elm-package.json file:
// elm-package.json
{
  "version": "0.1.0",
  "summary": "Build an App in Elm with JWT Authentication and an API",
  "repository": "",
  "license": "MIT",
  "source-directories": [
    "src",
    "dist"
  ],
  "exposed-modules": [],
  "dependencies": {
    "elm-lang/core": "4.0.1 <= v < 5.0.0",
    "elm-lang/html": "1.0.0 <= v < 2.0.0",
    "evancz/elm-http": "3.0.1 <= v < 4.0.0",
    "rgrempel/elm-http-decorators": "1.0.2 <= v < 2.0.0"
  },
  "elm-version": "0.17.0 <= v < 0.18.0"
}
We'll be using Elm v0.17 in this tutorial. The
elm-version here is restricted to minor point releases of 0.17. There are breaking changes between versions 0.17 and 0.16 and we can likely expect the same for 0.18.
Now that we've declared our Elm dependencies, we can install them:
elm package install
Once everything has installed, an
/elm-stuff folder will live at the root of your project. This folder contains all of the Elm dependencies we specified in our
elm-package.json file.
Build Tools
Now we have Node, Gulp, Elm, and the API installed. Let's set up our build configuration. Create and populate a
package.json file, which should live at our project's root:
// package.json
...
"dependencies": {},
"devDependencies": {
  "gulp": "^3.9.0",
  "gulp-connect": "^4.0.0",
  "gulp-elm": "^0.4.4",
  "gulp-plumber": "^1.1.0",
  "gulp-util": "^3.0.7"
}
...
Once the
package.json file is in place, install the Node dependencies:
npm install
Next, create a
gulpfile.js file:
// gulpfile.js
var gulp = require('gulp');
var elm = require('gulp-elm');
var gutil = require('gulp-util');
var plumber = require('gulp-plumber');
var connect = require('gulp-connect');

// File paths
var paths = {
  dest: 'dist',
  elm: 'src/*.elm',
  static: 'src/*.{html,css}'
};

// Init Elm
gulp.task('elm-init', elm.init);

// Compile Elm to HTML
gulp.task('elm', ['elm-init'], function(){
  return gulp.src(paths.elm)
    .pipe(plumber())
    .pipe(elm())
    .pipe(gulp.dest(paths.dest));
});

// Move static assets to dist
gulp.task('static', function() {
  return gulp.src(paths.static)
    .pipe(plumber())
    .pipe(gulp.dest(paths.dest));
});

// Watch for changes and compile
gulp.task('watch', function() {
  gulp.watch(paths.elm, ['elm']);
  gulp.watch(paths.static, ['static']);
});

// Local server
gulp.task('connect', function() {
  connect.server({
    root: 'dist',
    port: 3000
  });
});

// Main gulp tasks
gulp.task('build', ['elm', 'static']);
gulp.task('default', ['connect', 'build', 'watch']);
The default
gulp task will compile Elm, watch and copy files to a
/dist folder, and run a local server where we can view our application at.
Our development files should be located in a
/src folder. Please create the
/dist and
/src folders at the root of the project. Our file structure now looks like this:
Syntax Highlighting
There's one more thing we should do before we start writing Elm, and that is to grab a plugin for our code editor to provide syntax highlighting and inline compile error messaging. There are plugins available for many popular editors. I like to use VS Code with vscode-elm, but you can download a plugin for your editor of choice here. With syntax highlighting installed, we're ready to begin coding our Elm app.
Chuck Norris Quoter App
We're going to build an app that does more than echo "Hello world". We're going to connect to an API to request and display data and in part two, we'll add registration, login, and make authenticated requests—but we'll start simple. First, we'll display a button that appends a string to our model each time it's clicked.
Once we've got things running, our app should look like this:
Let's fire up our Gulp task. This will start a local server and begin watching for file changes:
gulp
Note: Since Gulp is compiling Elm for us, if we have compile errors they will show up in the command prompt / terminal window. If you have one of the Elm plugins installed in your editor, they should also show up inline in your code.
HTML
We'll start by creating a basic
index.html file:
<!-- index.html -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Chuck Norris Quoter</title>
    <script src="Main.js"></script>
    <link rel="stylesheet" href="">
    <link rel="stylesheet" href="styles.css">
  </head>
  <body>
  </body>
  <script>
    var app = Elm.Main.fullscreen();
  </script>
</html>
We're loading a JavaScript file called
Main.js. Elm compiles to JavaScript and this is the file that will be built from our compiled Elm code.
We'll also load the Bootstrap CSS and a local
styles.css file for a few helper overrides.
Finally, we'll use JS to tell Elm to load our application. The Elm module we're going to export is called
Main (from
Main.js).
CSS
Next, let's create the
styles.css file:
/* styles.css */
.container {
  margin: 1em auto;
  max-width: 600px;
}
blockquote {
  margin: 1em 0;
}
.jumbotron {
  margin: 2em auto;
  max-width: 400px;
}
.jumbotron h2 {
  margin-top: 0;
}
.jumbotron .help-block {
  font-size: 14px;
}
Introduction to Elm
We're ready to start writing Elm. Create a file in the
/src folder called
Main.elm. The full code for this step is available in the source repository on GitHub:
Main.elm - Introduction to Elm
Our file structure should now look like this:
If you're already familiar with Elm you can skip ahead. If Elm is brand new to you, keep reading: we'll introduce The Elm Architecture and Elm's language syntax by thoroughly breaking down this code. Make sure you have a good grasp of this section before moving on; the next sections will assume an understanding of the syntax and concepts.
import Html exposing (..)
import Html.App as Html
import Html.Events exposing (..)
import Html.Attributes exposing (..)
At the top of our app file, we need to import dependencies. We expose the
Html package to the application for use and then declare
Html.App as
Html. Because we'll be writing a view function, we will expose
Html.Events and
Html.Attributes to use click and input events, IDs, classes, and other element attributes.
Everything we're going to write is part of The Elm Architecture. In brief, this refers to the basic pattern of Elm application logic. It consists of
Model (application state),
Update (way to update the application state), and
View (render the application state as HTML). You can read more about The Elm Architecture in Elm's guide.
main : Program Never
main =
    Html.program
        { init = init
        , update = update
        , subscriptions = \_ -> Sub.none
        , view = view
        }
main : Program Never is a type annotation. This annotation says "main has type Program and should Never expect a flags argument". If this doesn't make a ton of sense yet, hang tight; we'll be covering more type annotations throughout our app.
Every Elm project defines
main as a program. There are a few program candidates, including
beginnerProgram,
program, and
programWithFlags. Initially, we'll use
main = Html.program.
Next, we'll start our app with a record that references an
init function, an
update function, and a
view function. We'll create these functions shortly.
subscriptions may look strange at first. Subscriptions listen for external input and we won't be using any in the Chuck Norris Quoter so we don't need a named function here. Elm does not have a concept of
null or
undefined and it's expecting functions as values in this record. This is an anonymous function that declares there are no subscriptions.
Here's a breakdown of the syntax.
\ begins an anonymous function. A backslash is used because it resembles a lambda (λ).
_ represents an argument that is discarded, so
\_ is an anonymous function that doesn't have arguments.
-> signifies the body of the function.
subscriptions = \_ -> ... in JS would look like this:
// JS
subscriptions = function() { ... }
(What would an anonymous function with an argument look like? Answer:
\x -> ...)
Next up are the model type alias and the
init function:
{- MODEL
 * Model type
 * Initialize model with empty values
-}

type alias Model =
    { quote : String
    }

init : (Model, Cmd Msg)
init =
    ( Model "", Cmd.none )
The first block is a multi-line comment. A single-line comment is represented like this:
-- Single-line comment
Let's create a
type alias called
Model:
type alias Model = { quote : String }
A type alias is a definition for use in type annotations. In future type annotations, we can now say
Something : Model and
Model would be replaced by the contents of the type alias.
We expect a record with a property of
quote that has a string value. We've mentioned records a few times, so we'll expand on them briefly: records look similar to objects in JavaScript. However, records in Elm are immutable: they hold labeled data but do not have inheritance or methods. Elm's functional paradigm uses persistent data structures so "updating the model" returns a new model with only the changed data copied.
Now we've come to the
init function that we referenced in our
main program:
init : (Model, Cmd Msg) init = ( Model "", Cmd.none )
The type annotation for init means "init returns a tuple containing the record defined in the Model type alias and a command for an effect with an update message". That's a mouthful, and we'll be encountering additional type annotations that look similar but have more context, so they'll be easier to understand. What we should take away from this type annotation is that we're returning a tuple (an ordered list of values of potentially varying types). So for now, let's concentrate on the init function.
Functions in Elm are defined with a name followed by a space and any arguments (separated by spaces), an
=, and the body of the function indented on a new line. There are no parentheses, braces,
function or
return keywords. This might feel sparse at first but hopefully you'll find the clean syntax speeds development.
Returning a tuple is the easiest way to get multiple results from a function. The first element in the tuple declares the initial values of the Model record. Strings are denoted with double quotes, so we're defining
{ quote = "" } on initialization. The second element is
Cmd.none because we're not sending a command (yet!).
{- UPDATE
 * Messages
 * Update case
-}

type Msg = GetQuote

update : Msg -> Model -> (Model, Cmd Msg)
update msg model =
    case msg of
        GetQuote ->
            ( { model | quote = model.quote ++ "A quote! " }, Cmd.none )
The next vital piece of the Elm Architecture is update. There are a few new things here.
First we have
type Msg = GetQuote: this is a union type. Union types provide a way to represent types that have unusual structures (they aren't
String,
Bool,
Int, etc.). This says
type Msg could be any of the following values. Right now we only have
GetQuote but we'll add more later.
Now that we have a union type definition, we need a function that will handle this using a
case expression. We're calling this function
update because its purpose is to update the application state via the model.
The
update function has a type annotation that says "
update takes a message as an argument and a model argument and returns a tuple containing a model and a command for an effect with an update message."
This is the first time we've seen
-> in a type annotation. A series of items separated by
-> represent argument types until the last one, which is the return type. The reason we don't use a different notation is to indicate the return has to do with currying. In a nutshell, currying means if you don't pass all the arguments to a function, another function will be returned that accepts whatever arguments are still needed. You can learn more about currying elsewhere.
The
update function accepts two arguments: a message and a model. If the
msg is
GetQuote, we'll return a tuple that updates the
quote to append
"A quote! " to the existing value. The second element in the tuple is currently
Cmd.none. Later, we'll change this to execute the command to get a random quote from the API. The case expression models possible user interactions.
The syntax for updating properties of a record is:
{ recordName | property = updatedValue, property2 = updatedValue2 }
Elm uses
= to set values. Colons
: are reserved for type definitions. A
: means "has type" so if we were to use them here, we would get a compiler error.
We now have the logic in place for our application. How will we display the UI? We need to render a view:
{- VIEW -}

view : Model -> Html Msg
view model =
    div [ class "container" ] [
        h2 [ class "text-center" ] [ text "Chuck Norris Quotes" ]
        , p [ class "text-center" ] [
            button [ class "btn btn-success", onClick GetQuote ] [ text "Grab a quote!" ]
        ]
        -- Blockquote with quote
        , blockquote [] [
            p [] [text model.quote]
        ]
    ]
The type annotation for the
view function reads, "
view takes model as an argument and returns HTML with a message." We've seen
Msg a few places and now we've defined its union type. A command
Cmd is a request for an effect to take place outside of Elm. A message
Msg is a function that notifies the
update method that a command was completed. The view needs to return HTML with the message outcome to display the updated UI.
The
view function describes the rendered view based on the model. The code for
view resembles HTML but is actually composed of functions that correspond to virtual DOM nodes and pass lists as arguments. When the model is updated, the view function executes again. The previous virtual DOM is diffed against the next and the minimal set of updates necessary are run.
The structure of the functions somewhat resembles HTML, so it's pretty intuitive to write. The first list argument passed to each node function contains attribute functions with arguments. The second list contains the contents of the element. For example:
button [ class "btn btn-success", onClick GetQuote ] [ text "Grab a quote!" ]
This
button's first argument is the attribute list. The first item in that list is the
class function accepting the string of classes. The second item is an
onClick function with
GetQuote. The next list argument is the contents of the button. We'll give the
text function an argument of "Grab a quote!"
Last, we want to display the quote text. We'll do this with a
blockquote and
p, passing
model.quote to the paragraph's
text function.
We now have all the pieces in place for the first phase of our app! We can view it at. Try clicking the "Grab a quote!" button a few times.
Note: If the app didn't compile, Elm provides compiler errors for humans in the console and in your editor if you're using an Elm plugin. Elm will not compile if there are errors! This is to avoid runtime exceptions.
That was a lot of detail, but now we're set on basic syntax and structure. We'll move on to build the features of our Chuck Norris Quoter app.
Calling the API
Now, we're ready to fill in some of the blanks we left earlier. In several places, we claimed in our type annotations that a command
Cmd should be returned, but we returned
Cmd.none instead. Now we'll replace those with the missing command.
When this step is done, our application should look like this:
Clicking the button will call the API to get and display random Chuck Norris quotes. Make sure you have the API running at so it's accessible to our app.
Once we're successfully getting quotes, our source code will look like this:
Main.elm - Calling the API
The first thing we need to do is import the dependencies necessary for making HTTP requests:
import Http
import Task exposing (Task)
We'll need Http and Task. A task in Elm is similar to a promise in JS: tasks describe asynchronous operations that can succeed or fail.
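If tasks are new to you, here is a loose Python analogy using asyncio; the quote string and error here are invented for illustration, and Elm's actual runtime works differently under the hood:

```python
import asyncio

# Rough analogy: an Elm Task describes an async operation that either
# succeeds with a value or fails with an error, and nothing happens
# until the runtime is told to perform it.
async def fetch_random_quote(should_fail=False):
    await asyncio.sleep(0)  # stand-in for the real HTTP round trip
    if should_fail:
        raise RuntimeError("HttpError")
    return "Chuck Norris counted to infinity. Twice."

async def main():
    try:
        quote = await fetch_random_quote()
        print("FetchQuoteSuccess:", quote)   # success branch
    except RuntimeError as err:
        print("HttpError:", err)             # failure branch

asyncio.run(main())
```

The two branches here correspond to the success and failure messages we'll wire up with Task.perform below.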
Next, we'll update our
init function:
init : (Model, Cmd Msg)
init =
    ( Model "", fetchRandomQuoteCmd )
Now instead of
Cmd.none we have a command called
fetchRandomQuoteCmd. A command is a way to tell Elm to do some effect (like HTTP). We're commanding the application to fetch a random quote from the API on initialization. We'll define the
fetchRandomQuoteCmd function shortly.
{- UPDATE
 * API routes
 * GET
 * Messages
 * Update case
-}

-- API request URLs

api : String
api =
    ""

randomQuoteUrl : String
randomQuoteUrl =
    api ++ "api/random-quote"

-- GET a random quote (unauthenticated)

fetchRandomQuote : Platform.Task Http.Error String
fetchRandomQuote =
    Http.getString randomQuoteUrl

fetchRandomQuoteCmd : Cmd Msg
fetchRandomQuoteCmd =
    Task.perform HttpError FetchQuoteSuccess fetchRandomQuote
We've added some code to our update section. First, we'll store the API routes.
The Chuck Norris API returns unauthenticated random quotes as strings, not JSON. Let's create a function called
fetchRandomQuote. The type annotation declares that this function is a task that either fails with an error or succeeds with a string. We can use the
Http.getString method to make the HTTP request with the API route as an argument.
HTTP is something that happens outside of Elm. A command is needed to request the effect and a message is needed to notify the update that the effect was completed and to deliver its results.
We'll do this in
fetchRandomQuoteCmd. This function's type annotation declares that it returns a command with a message.
Task.perform is a command that tells the runtime to execute a task. Tasks can fail or succeed so we need to pass three arguments to
Task.perform: a message for failure (
HttpError), a message for success (
FetchQuoteSuccess), and what task to perform (
fetchRandomQuote).
HttpError and
FetchQuoteSuccess are messages that don't exist yet, so let's create them:
-- Messages type Msg = GetQuote | FetchQuoteSuccess String | HttpError Http.Error -- Update update : Msg -> Model -> (Model, Cmd Msg) update msg model = case msg of GetQuote -> ( model, fetchRandomQuoteCmd ) FetchQuoteSuccess newQuote -> ( { model | quote = newQuote }, Cmd.none ) HttpError _ -> ( model, Cmd.none )
We add these two new messages to the
Msg union type and annotate the types of their arguments.
FetchQuoteSuccess accepts a string that contains the new Chuck Norris quote from the API and
HttpError accepts an
Http.Error. These are the possible success/fail results of the task.
Next, we add these cases to the
update function and declare what we want returned in the
(Model, Cmd Msg) tuple. We also need to update the
GetQuote tuple to fetch a quote from the API. We'll change
GetQuote to return the current model and issue the command to fetch a random quote,
fetchRandomQuoteCmd.
FetchQuoteSuccess's argument is the new quote string. We want to update the model with this. There are no commands to execute here, so we will declare the second element of the tuple
Cmd.none.
HttpError's argument is
Http.Error but we aren't going to do anything special with this. For the sake of brevity, we'll handle API errors when we get to authentication but not for getting unauthenticated quotes. Since we're discarding this argument, we can pass
_ to
HttpError. This will return a tuple that sends the model in its current state and no command. You may want to handle errors here on your own after completing the provided code.
It's important to remember that the
update function's type is
Msg -> Model -> (Model, Cmd Msg). This means that all branches of the
case statement must return the same type. If any branch does not return a tuple with a model and a command, a compiler error will occur.
Nothing changes in the
view. We altered the
GetQuote onClick function logic, but everything that we've written in the HTML works fine with our updated code. This concludes our basic API integration for the first half of this tutorial. Try it out! In part two, we'll tackle adding users and authentication.
Aside: Reading Compiler Type Errors
If you've been following along and writing your own code, you may have encountered Elm's compiler errors. Though they are very readable, type mismatch messages can sometimes seem ambiguous.
Here's a small breakdown of some things you may see:
String -> a
A lowercase variable
a means "anything could go here." The above means "takes a string as an argument and returns anything."
[1, 2, 3] has a type of
List number: a list that only contains numbers.
[] is type
List a: Elm infers that this is a list that could contain anything.
Elm always infers types. If we've declared type definitions, Elm checks its inferences against our definitions. We'll define types upfront in most places in our app. It's best practice to define the types at the top-level at a minimum. If Elm finds a type mismatch, it will tell us what type it has inferred. Resolving type mismatches can be one of the larger challenges to developers coming from a loosely typed language like JS (without Typescript), so it's worth spending time getting comfortable with this.
Recap and Next Steps
We've covered installing and using the Elm language and learned how to create our first app. We've also integrated with an external API through HTTP. You should now be familiar with Elm's basic syntax, type annotation, and compiler errors. If you'd like, take a little more time to familiarize with Elm's documentation. The Elm FAQ is another great resource from the Elm developer community. In the second half of this tutorial (soon to come), we'll take a deeper dive into authenticating our Chuck Norris Quoter app using JSON Web Tokens. Stay tuned! Kim Maida , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/creating-your-first-elm-app-from-authentication-to | CC-MAIN-2018-39 | refinedweb | 4,266 | 66.64 |
Opened 1 year ago
Last modified 1 month ago
Once the hot club of france opens and we have more third party apps, the probability of clashing app names will increase. We can fix this by allowing users to define a custom app_label in their settings file. In addition, we can add a verbose_name for apps at the same time. The patch here also correctly assigns and app_label to models in a package as long as you import all of those models in the packages __init__.py This possible breaks quite a few internals if people are using them, and needs to wait until after 0.96 to be considered.
Here's an example:
from django.conf.directives import app
INSTALLED_APPS = (
'django.contrib.auth', # allow the old syntax
app('mypkg.auth', 'myauth', 'My cool auth app'), # and the new. (path, app_label, verbose_name)
)
Originally discussed in this thread
A couple of internal tests are still broken, but admin, manage.py, etc. work
A couple of questions from an initial quick read:
Replying to mtredinnick:
A couple of questions from an initial quick read:
A couple of questions from an initial quick read:
Hey Malcolm
First of all, thanks for taking a look at this, and second, none of this is meant to be final. This is the "get it working" version that I more or less only put up as a reference for another ticket that this fixes.
1. In management.py, why is the import change on line 366 (of the original) necessary? I think there's some subtlety escaping me
1. In management.py, why is the import change on line 366 (of the original) necessary? I think there's some subtlety escaping me
there (if it's not necessary, it's probably worth removing so that we can tell the real changes from the stylistic ones).
It has an effect. Changing the current import from
from django.db.models import get_models
from django.db.models import get_app, get_models to
would get the right effect..
It's the latter. That could probably be worked around as well, but I haven't spent much time thinking about that part of it yet.
It looks like we have a little bit of end-goal overlap with #4144. The patch over there is just a simple move of a single line, but it relates to how Django determines app_label, so it looks like that might be best handled in this ticket.
Essentially, this patch moves the model_module.__name__.split('.')[-2] into get_app_label, and in fact seems to remove any need for sys.modules at all (unless I'm reading something wrong). If the sys.modules line referenced in #4144 were to be removed as part of this ticket, #4144 would be unnecessary, and I'd be very happy.
A new, more comprehensive (IMHO) patch
I've added an updated patch. This covers changes to make-messages.py and also documentation changes, and is against trunk revision r5146. All tests in runtests.py pass, plus browsing data from apps with a custom app label in admin and databrowse works fine. Here are the relevant changes to my settings.py:
from django.conf import app
# ...
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.humanize',
'django.contrib.admin',
'django.contrib.databrowse',
'mysite.myapp',
'mysite.registration',
app('mysite.orghier', 'oh', 'Organization Hierarchy'),
)
Marty: Although my latest patch still leaves the line where it is, the model_module is not being used at all. I have removed the line entirely from my working copy - once I get other feedback on the patch, I will post another patch which incorporates the feedback.
Minor tweaks (tidy-ups) to the last patch.
Updated patch to work cleanly against trunk revision r5171.
Replaces previous patch which was not created using svn diff - sorry.
An updated patch to cater for recent changes in trunk.
I'm seriously +1 on this -- at work we're hoping we'll be using a mix of our custom apps, apps built by other sites in our division and some of the stellar open source apps being built.
I think the modularity/plug-n-customize model of Django's "app" system is one of it's great features!
I'm recording this idea here: What if INSTALLED_APPS itself was a *class*, rather than a tuple? That would solve a lot of the wacky issues.
This patch is quite interesting.. ;-)
Would it make sense to use a dictionary and named arguments
for making things more explicit?
In example:
INSTALLED_APPS = (
app('apps.foobar', {'verbose_name': 'foobar app',
'app_label': 'foobar1'}),
)
#2982 was a duplicate.
#3343 needs to be dealt with when a resolution is reached here.
#4470 was a duplicate.
Replying to ubernostrum:
#4470 was a duplicate.
#4470 was a duplicate.
That ticket would have fixed the problem of app_label being required when using the Django ORM outside of an actual Django application. I often write programs at my job which use the Django ORM for non-website programming, but I always have to add the Meta class to the model definition, which is ugly and looks like black magic to my co-workers who aren't familiar with Django internals:
class Something(models.Model):
class Meta:
app_label = ""
name = models.CharField(maxlength=100)
description = models.TextField()
quantity = models.IntegerField()
This ticket doesn't seem to address this problem, whereas ticket #4470 did.
Eli, app_label is required to cleanly split models up into multiple files inside a models module. And I'm doing my best to keep all of the app_label-related stuff here in this ticket so we can get a clean solution that solves all of the associated issues.
That makes sense. I just hope that whatever that clean solution is, it will allow us to define models outside of Django webapps without having to declare the Meta class and manually set app_label.
I'll try to find time this week to take a look at the work on this ticket so far and see how well that solves this problem, and offer any advice/patches I can come up with.
The latest patch doesn't work against the current version of the Django trunk in svn. Nor does it work against r5171, which is the revision that patch 4 is said to work against, nor does it work against r5343, which was the latest revision on the date that the most recent patch was uploaded.
Someone needs to update the patch to work with the current version of Django. I won't have time to do this for at least the next few weeks, and I'm probably the wrong person to do it anyway, since I only use the Django ORM and don't write Django webapps. However, if no one else gets to this by the time I have some more free time, I'll take a stab at it.
I'm surprised that the last patch doesn't version with the earlier SVN version of Django - before uploading the patch, I had checked that all tests passed. Never mind - I'll try to look at this over the next week or two.
Oops, that should be "... last patch doesn't work ..."
An updated patch to cater for recent changes in trunk. Applies to r6453.
Latest patch (against r6453) passes all tests (tests/runtests.py).
updated to apply cleanly against r6635
I tested this patch and updated to apply to current trunk (r6635). All works for me.
However, I do have a question why get_installed_app_paths exists? Why can't it be done in __init__ of Settings when it expands out .* app paths? That way there isn't a need to change all instances of settings.INSTALLED_APPS. Making this completely backward compatible.
Good call - I'd done it the other way because it leaves INSTALLED_APPS untouched, and I wanted the impact of the patch to be clearly visible at least until the overall idea was approved. However, though it's been quite a long time - 8 months - since my first patch, and though there have been no adverse comments and a few comments with the gist "it just works", the devs have not seen fit to pronounce on it - so I'm treading water until I get a pronouncement about it. I just occasionally check to see if trunk changes break the patch, and update the patch accordingly. I'll merge your changes into mine so that my next patch has your changes, too.
Replying to brosner:
However, I do have a question why get_installed_app_paths exists? Why can't it be done in __init__ of Settings when it expands out .* app paths? That way there isn't a need to change all instances of settings.INSTALLED_APPS. Making this completely backward compatible.
However, I do have a question why get_installed_app_paths exists? Why can't it be done in __init__ of Settings when it expands out .* app paths? That way there isn't a need to change all instances of settings.INSTALLED_APPS. Making this completely backward compatible.
I had another look at the get_installed_app_paths vs. INSTALLED_APPS issue. With my patch, INSTALLED_APPS can contain either package names (explicit or wild-card) or app instances. However, get_installed_app_paths always returns a set of strings - the package paths of the applications. This is used, as you've seen, in a lot of places. If a user puts an app instance into INSTALLED_APPS, I'm not sure they'd take kindly to having it automatically replaced with the corresponding path string. So, get_installed_app_paths insulates the rest of the code from having to know whether the INSTALLED_APPS entries are path strings or app instances. It seems to me that some kind of encapsulation will be needed - and get_installed_app_paths performs this function. I would like to be able to use INSTALLED_APPS and do away with get_installed_app_paths - but I'm not quite sure how, yet. If you provide a patch which sorts out this issue, I'll happy incorporate it, as I mentioned.
In your testing, did you have any app instances in your INSTALLED_APPS? I'd be interested in seeing what your INSTALLED_APPS looks like. From a quick inspection of your patch, I would expect some tests to fail if INSTALLED_APPS contained any app instances, because in some places where framework code expects a string, it would get an app instance.
is #6080 not a partly shortcut?
Updated to apply cleanly against r6920.
Replying to wolfram:
is #6080 not a partly shortcut?
is #6080 not a partly shortcut?
No, I don't believe #6080 overlaps with this ticket. That ticket is to do with loading apps from eggs; this ticket allows for easy specification of an app_label to disambiguate apps which end in the same name (e.g. 'django.contrib.admin' clashing with 'myapp.mypackage.admin'), and allows verbose names with i18n support for use in the admin (e.g. 'Authentication/Authorization' rather than 'auth').
I believe another feature that would improve admin index page usability a lot belongs conceptually here.
Suppose I have a project that contains 10 applications, each containing several models. Some of the apps are of primay importance, some are less important. Currently, there is no way to impose either ordering or hide app contents. Thus it's hard for users to discern important bits from non-important ones and confusion is guaranteed.
So I propose the following addition to this patch:
See DjangoSpecifications/NfAdmin/FlexibleAppHandling
By Edgewall Software. | http://code.djangoproject.com/ticket/3591 | crawl-001 | refinedweb | 1,902 | 65.42 |
Many developers struggle when trying to integrate HERE Maps into their ReactJS application. Typically, when trying to adapt the Quick Start Maps API for JavaScript it is very easy to stumble into some of the challenges of using third party libraries like ours with React. Once you are aware of these pitfalls though, you can build an app to select a custom map theme and style from the Map Tile API.
Project
For this project, we want to display an interactive map that allows the user to choose a theme. A slippy map like this that allows the user to pan and zoom around is one of the most common maps on the web. Since it may not be straightforward how to fetch raster tiles and build the standard behaviors into a UI, using the Maps JavaScript SDK is invaluable for a consistent experience.
By clicking one of the thumbnail images, the interactive map will update with a new tile service provider as demonstrated in this gif:
Basic React
For a basic single-page app, you might start by including the React and HERE libraries from a CDN directly in your index.html.
<script src="" crossorigin></script> <script src="" crossorigin></script>
Create a simple ES6 class called
SimpleHereMap. The
componentDidMount() method runs after the
render() method per the React Component Lifecycle which means we can more or less include the HERE JavaScript Quick Start code just as is.
const e = React.createElement; class SimpleHereMap extends React.Component { componentDidMount() { var platform = new H.service.Platform({ app_id: 'APP_ID_HERE', app_code: 'APP_CODE_HERE', }) var layers = platform.createDefaultLayers(); var map = new H.Map( document.getElementById('map'), layers.normal.map, { center: {lat: 42.345978, lng: -83.0405}, zoom: 12, }); var events = new H.mapevents.MapEvents(map); var behavior = new H.mapevents.Behavior(events); var ui = H.ui.UI.createDefault(map, layers); } render() { return e('div', {"id": "map"}); } } const domContainer = document.querySelector('#app'); ReactDOM.render(e(SimpleHereMap), domContainer);
This example works if you use it standalone in a single index.html file but doesn’t make use of JSX and falls apart if you try to use
create-react-app. If you use that tool as described in a few of the other ReactJS posts you may see the next error.
‘H’ is not defined no-undef
Adapting the above example for
create-react-app requires a few minor changes.
- Move the includes of the HERE script libraries into public/index.html
- Create a Map.js with the SimpleHereMap class.
- Update the
render()method to use JSX to place the
<div/>element.
If you make those changes and
npm start you will likely see the following error in your console:
‘H’ is not defined no-undef
The initialization of
H.service.Platform() is causing an error because H is not in scope. This is not unique to HERE and is generally the case with any 3rd party code you try to include with React. Using
create-react-app implies using its toolchain including webpack as a module bundler, eslint for checking syntax, and Babel to transpile JSX.
Any library like the HERE JavaScript SDK that has a global variable like H might run into a similar problem during compilation (jQuery, Leaflet, etc.). By referencing non-imported code like this, the syntax linter which is platform agnostic will complain because it doesn’t know that the page will ultimately be rendered in a web browser.
The simple fix is to reference
window.H instead. Unfortunately, this does violate one of the basic principles of building modular JavaScript applications by tightly coupling our
<script> includes with our component but it works.
public/index.html
The script libraries are simply included in the public index.html.
@@ -4,6 +4,14 @@ <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> + + <link rel="stylesheet" type="text/css" href="" /> + + <script type="text/javascript" src=""></script> + <script type="text/javascript" src=""></script> + <script type="text/javascript" src=""></script> + <script type="text/javascript" src=""></script> +
src/Map.js
The Map component defines the rendered map. We’ll be making a few more changes to this class later once we get to the theme selection. We're storing a lot of the properties like lat, long, zoom, and app credentials as state so that they can be changed dynamically.
class Map extends Component { constructor(props) { super(props); this.platform = null; this.map = null; this.state = { app_id: props.app_id, app_code: props.app_code, center: { lat: props.lat, lng: props.lng, }, zoom: props.zoom, theme: props.theme, style: props.style, } } // TODO: Add theme selection discussed later HERE componentDidMount() { this.platform = new window.H.service.Platform(this.state); var layer = this.platform.createDefaultLayers(); var container = document.getElementById('here-map'); this.map = new window.H.Map(container, layer.normal.map, { center: this.state.center, zoom: this.state.zoom, }) var events = new window.H.mapevents.MapEvents(this.map); // eslint-disable-next-line var behavior = new window.H.mapevents.Behavior(events); // eslint-disable-next-line var ui = new window.H.ui.UI.createDefault(this.map, layer) } render() { return ( <div id="here-map" style={{width: '100%', height: '400px', background: 'grey' }} /> ); } }
At this point though we have a working and extensible ReactJS component that is ready to display a HERE Interactive Maps.
Themes
Since a map can be an extension to a brand or preferences, there are many themes and styles available for how to present a map on a page. The following image depicts some of the examples of maps you can use from the Maps Tile API.
The src/ThemeSelector.js component is simply intended to provide a listing of thumbnail images the user can choose from. It includes some of the more popular themes:
class ThemeSelector extends Component { render() { var themes = [ 'normal.day', 'normal.day.grey', 'normal.day.transit', 'normal.night', 'normal.night.grey', 'reduced.night', 'reduced.day', 'pedestrian.day', 'pedestrian.night', ]; var thumbnails = []; var onChange = this.props.changeTheme; themes.forEach(function(theme) { thumbnails.push(<img key={ theme } src={ 'images/' + theme + '.thumb.png' } onClick= { onChange } alt={ theme } id={ theme } />); }); return ( <div> { thumbnails } </div> ); } }
To make the click event work, we’re going to add a bit more to our src/Map.js component. The
changeTheme method described below is an example like you’d find for most any HERE JavaScript implementation.
changeTheme(theme, style) { var tiles = this.platform.getMapTileService({'type': 'base'}); var layer = tiles.createTileLayer( 'maptile', theme, 256, 'png', {'style': style} ); this.map.setBaseLayer(layer); }
We will call this method from the
shouldComponentUpdate() method. From the React Component Lifecycle, this method is called when state changes occur in order to determine if it’s necessary to re-render the component. When we select a new theme, we call the
setBaseLayer method and can update the map without requiring React to make a more costly re-render of the entire DOM.
shouldComponentUpdate(props, state) { this.changeTheme(props.theme, props.style); return false; }
App
Putting it all together, we use src/App.js to track state for the theme selection as the common ancestor to both the Map and ThemeSelector components.
The source code looks like this:
import Map from './Map.js'; import ThemeSelector from './ThemeSelector.js'; class App extends Component { constructor(props) { super(props); this.state = { theme: 'normal.day', } this.onChange = this.onChange.bind(this); } onChange(evt) { evt.preventDefault(); var change = evt.target.id; this.setState({ "theme": change, }); } render() { return ( <div className="App"> <SimpleHereMap app_id="APP_ID_HERE" app_code="APP_CODE_HERE" lat="42.345978" lng="-83.0405" zoom="12" theme={ this.state.theme } /> <ThemeSelector changeTheme={ this.onChange } /> </div> ); } }
Summary
Ideally we’d like to include an
npm package that encapsulates the HERE Map APIs as React components for use in our applications. There are some community projects to create these packages but your experience may vary depending on which one you choose to use. It would be good to know what you’ve used successfully, so leave a note in the comments.
For everybody else just looking for a quick way to get compatibility with many of the other JavaScript API examples hopefully the
window.H trick is what you were looking for.
You can find the source code for this project on GitHub.
If you found this article helpful, check out the post
HERE with React - Location Based TODO App
for more #ReactJS #JavaScript examples. | https://developer.here.com/blog/use-here-interactive-maps-with-reactjs-to-pick-a-theme | CC-MAIN-2021-25 | refinedweb | 1,374 | 59.09 |
How do I restart starling from the Game class or from the Main Class itself ?
Sharing the code from Main class :-
star = new Starling(Game, stage, viewPort, null, "auto", ["baselineExtended", "baseline"]);
star.supportHighResolutions = true;
star.skipUnchangedFrames = true;
star.antiAliasing = 1;
star.addEventListener(starling.events.Event.ROOT_CREATED, onRootCreated);
star.start();
I am actually trying to reload the app, instead of a force restart. I learned that its possible on Android using ane but not on iOS so...
Any help?
Or
its not possible at all to restart starling?
Personally, I do this manually. Inside the main Game class, I have a child Screen class which contains the entire game. When I want to restart, I dispose of everything in the Screen and the Screen itself, then create a new Screen and add it.
I keep in Game all objects I don't want to be restarted (resource managers, server connection, etc.), and in Screen all I want to be restarted (like current game state).
htmiel Initially I also went for the same, but that was too much work! My Game class has a lot of things which needs to be restarted, thats why I am looking for the other option!
just dispose the current starling instance, force garbage collect and run a new one.
ChrisDS But I do I access the starling variable star from Game class ?
star
ChrisDS is it really that simple?? It sounds just too good to be true... is there anything that we should take care of?
Shaun_Max Starling.current, or from your main starling class, stage.starling.
sipahiro Obviously you're going to want to clean any allocations etc up too.
ChrisDS Starling.current, or from your main starling class, stage.starling.
wow,dint know that! thanks a lot!
The documentation is your friend, I promise 🙂
ChrisDS Yeah! I should have, my bad!
Since I already asked here, I will continue: -
but my requirement is to destroy the current star instance and create a new one. Which will also involve disposing of the Game class(root), in that case I need some access to main class, how do I gain that! If I am correct public static functions wont work.
Shaun_Max Theres no reason you cant keep a static or public reference to the main class in your "boot" class that invokes starling. Simply use an event event listener and listen for starling.events.Event's ROOT_CREATED, and then allocate starling.root to a variable, static or not.
With that said, you can access the main or as we call the root class by stage.starling.root or Starling.current.root (and starling.rootClass if you need it)
ChrisDS With that said, you can access the main or as we call the root class by stage.starling.root or Starling.current.root (and starling.rootClass if you need it)
This part I know, but how do I call a public function restart() in the root class??
restart()
I found this in the forum but that does't work anymore: -
I dont know what you mean....Just create a reference to it when you instantiate starling.
public static var starling = new Starling()
Main Class : -
public class Main extends Sprite {
public function Main() {
loadStarling();
}
private function onRootCreated(event: starling.events.Event, root: Game): void {
myRoot = root;
myRoot.startGame();
star.removeEventListener(starling.events.Event.ROOT_CREATED, onRootCreated)
}
public function loadStarling(): void {
Texture.asyncBitmapUploadEnabled = SystemUtil.isIOS;
//Starling.multitouchEnabled = true;
if(star != null)
{
star.dispose();
}
star = new Starling(Game, stage, viewPort, null, "auto", ["baselineExtended", "baseline"]);
star.supportHighResolutions = true;
//star.simulateMultitouch = true;
//star.showStats = true;
star.skipUnchangedFrames = true;
star.antiAliasing = 1;
star.addEventListener(starling.events.Event.ROOT_CREATED, onRootCreated);
star.start();
}
}
Game Class : -
public class Game extends Sprite {
public function Game() {
//Game Initiated
}
public function startGame():void{
//Game Started
}
public function reloadGame():void{
//Starling.current.root.dispose();
Starling.current.root.loadStarling(); // giving error
}
}
I believe the above code explains what I am trying to do.
Shaun_Max Root is Game, not Main... just create a static reference to the main instance in main, something like Main.current.loadStarling();
ChrisDS just create a static reference to the main instance in main
how?
public static current:Main;
current = this;
????
yes, current = this in your constructor | https://forum.starling-framework.org/d/22795-restarting-starling | CC-MAIN-2021-21 | refinedweb | 698 | 61.22 |
#include <sys/types.h> #include <sys/fcntl.h> int directio(int fildes, int advice);.
The advice argument is kept per file; the last caller of directio() sets the advice for all applications using the file associated with fildes.
Values for advice are defined in <sys/fcntl.h>.
DIRECTIO_OFF
When an application reads data from a file, the data is first cached in system memory and then copied into the application's buffer (see read(2)). If the system detects that the application is reading sequentially from a file, the system will asynchronously "read ahead" from the file into system memory so the data is immediately available for the next read(2) operation.
When an application writes data into a file, the data is first cached in system memory and is written to the device at a later time (see write(2)). When possible, the system increases the performance of write(2) operations by cacheing the data in memory pages. The data is copied into system memory and the write(2) operation returns immediately to the application. The data is later written asynchronously to the device. When possible, the cached data is "clustered" into large chunks and written to the device in a single write operation.
The system behavior for DIRECTIO_OFF can change without notice.
DIRECTIO_ON
When possible, data is read or written directly between the application's memory and the device when the data is accessed with read(2) and write(2) operations. When such transfers are not possible, the system switches back to the default behavior, but just for that operation. In general, the transfer is possible when the application's buffer is aligned on a two-byte (short) boundary, the offset into the file is on a device sector boundary, and the size of the operation is a multiple of device sectors.
This advisory is ignored while the file associated with fildes is mapped (see mmap(2)).
The system behavior for DIRECTIO_ON can change without notice.
Upon successful completion, directio() returns 0. Otherwise, it returns −1 and sets errno to indicate the error.
The directio() function will fail if:
EBADF
ENOTTY
EINVAL
Small sequential I/O generally performs best with DIRECTIO_OFF.
Large sequential I/O generally performs best with DIRECTIO_ON, except when a file is sparse or is being extended and is opened with O_SYNC or O_DSYNC (see open(2)).
The directio() function is supported for the NFS, UFS and ZFS file system types (see fstyp(8)).
mmap(2), open(2), read(2), write(2), fcntl.h(3HEAD), attributes(7), fstyp(8)
Switching between DIRECTIO_OFF and DIRECTIO_ON can slow the system because each switch to DIRECTIO_ON might entail flushing the file's data from the system's memory. | https://man.omnios.org/man3c/directio | CC-MAIN-2022-33 | refinedweb | 459 | 53.51 |
/* * . * * General recursion handler * */ #include "cvs.h" #include "save-cwd.h" #include "fileattr.h" #include "edit.h" static int do_dir_proc (Node * p, void *closure); static int do_file_proc (Node * p, void *closure); static void addlist (List ** listp, char *key); static int unroll_files_proc (Node *p, void *closure); static void addfile (List **listp, char *dir, char *file); static char *update_dir; static char *repository = NULL; static List *filelist = NULL; /* holds list of files on which to operate */ static List *dirlist = NULL; /* holds list of directories on which to operate */ struct recursion_frame { FILEPROC fileproc; FILESDONEPROC filesdoneproc; DIRENTPROC direntproc; DIRLEAVEPROC dirleaveproc; void *callerdat; Dtype flags; int which; int aflag; int locktype; int dosrcs; char *repository; /* Keep track of repository for rtag */ }; static int do_recursion (struct recursion_frame *frame); /* I am half tempted to shove a struct file_info * into the struct recursion_frame (but then we would need to modify or create a recursion_frame for each file), or shove a struct recursion_frame * into the struct file_info (more tempting, although it isn't completely clear that the struct file_info should contain info about recursion processor internals). So instead use this struct. */ struct frame_and_file { struct recursion_frame *frame; struct file_info *finfo; }; /* Similarly, we need to pass the entries list to do_dir_proc. */ struct frame_and_entries { struct recursion_frame *frame; List *entries; }; /* Start a recursive command. * * INPUT * * fileproc * Function called with each file as an argument. * * filesdoneproc * Function called after all the files in a directory have been processed, * before subdirectories have been processed. * * direntproc * Function called immediately upon entering a directory, before processing * any files or subdirectories. 
* * dirleaveproc * Function called upon finishing a directory, immediately before leaving * it and returning control to the function processing the parent * directory. * * callerdat * This void * is passed to the functions above. * * argc, argv * The files on which to operate, interpreted as command line arguments. * In the special case of no arguments, defaults to operating on the * current directory (`.'). * * local * * which * specifies the kind of recursion. There are several cases: * * 1. W_LOCAL is not set but W_REPOS or W_ATTIC is. The current * directory when we are called must be the repository and * recursion proceeds according to what exists in the repository. * * 2a. W_LOCAL is set but W_REPOS and W_ATTIC are not. The * current directory when we are called must be the working * directory. Recursion proceeds according to what exists in the * working directory, never (I think) consulting any part of the * repository which does not correspond to the working directory * ("correspond" == Name_Repository). * * 2b. W_LOCAL is set and so is W_REPOS or W_ATTIC. This is the * weird one. The current directory when we are called must be * the working directory. We recurse through working directories, * but we recurse into a directory if it is exists in the working * directory *or* it exists in the repository. If a directory * does not exist in the working directory, the direntproc must * either tell us to skip it (R_SKIP_ALL), or must create it (I * think those are the only two cases). * * aflag * locktype * update_preload * dosrcs * * repository_in * keeps track of the repository string. This is only for the remote mode, * specifically, r* commands (rtag, rdiff, co, ...) where xgetcwd() was used * to locate the repository. 
Things would break when xgetcwd() was used * with a symlinked repository because xgetcwd() would return the true path * and in some cases this would cause the path to be printed as other than * the user specified in error messages and in other cases some of CVS's * security assertions would fail. * * GLOBALS * ??? * * OUTPUT * * callerdat can be modified by the FILEPROC, FILESDONEPROC, DIRENTPROC, and * DIRLEAVEPROC. * * RETURNS * A count of errors counted by walking the argument list with * unroll_files_proc() and do_recursion(). * * ERRORS * Fatal errors occur: * 1. when there were no arguments and the current directory * does not contain CVS metadata. * 2. when all but the last path element from an argument from ARGV cannot * be found to be a local directory. */ int start_recursion (FILEPROC fileproc, FILESDONEPROC filesdoneproc, DIRENTPROC direntproc, DIRLEAVEPROC dirleaveproc, void *callerdat, int argc, char **argv, int local, int which, int aflag, int locktype, char *update_preload, int dosrcs, char *repository_in) { int i, err = 0; #ifdef CLIENT_SUPPORT List *args_to_send_when_finished = NULL; #endif List *files_by_dir = NULL; struct recursion_frame frame; #ifdef HAVE_PRINTF_PTR TRACE ( TRACE_FLOW, "start_recursion ( fileproc=%p, filesdoneproc=%p,\n" " direntproc=%p, dirleavproc=%p,\n" " callerdat=%p, argc=%d, argv=%p,\n" " local=%d, which=%d, aflag=%d,\n" " locktype=%d, update_preload=%s\n" " dosrcs=%d, repository_in=%s )", (void *) fileproc, (void *) filesdoneproc, (void *) direntproc, (void *) dirleaveproc, (void *) callerdat, argc, (void *) argv, local, which, aflag, locktype, update_preload ? update_preload : "(null)", dosrcs, repository_in ? 
repository_in : "(null)"); #else TRACE ( TRACE_FLOW, "start_recursion ( fileproc=%lx, filesdoneproc=%lx,\n" " direntproc=%lx, dirleavproc=%lx,\n" " callerdat=%lx, argc=%d, argv=%lx,\n" " local=%d, which=%d, aflag=%d,\n" " locktype=%d, update_preload=%s\n" " dosrcs=%d, repository_in=%s )", (unsigned long) fileproc, (unsigned long) filesdoneproc, (unsigned long) direntproc, (unsigned long) dirleaveproc, (unsigned long) callerdat, argc, (unsigned long) argv, local, which, aflag, locktype, update_preload ? update_preload : "(null)", dosrcs, repository_in ? repository_in : "(null)"); #endif frame.fileproc = fileproc; frame.filesdoneproc = filesdoneproc; frame.direntproc = direntproc; frame.dirleaveproc = dirleaveproc; frame.callerdat = callerdat; frame.flags = local ? R_SKIP_DIRS : R_PROCESS; frame.which = which; frame.aflag = aflag; frame.locktype = locktype; frame.dosrcs = dosrcs; /* If our repository_in has a trailing "/.", remove it before storing it * for do_recursion(). * * FIXME: This is somewhat of a hack in the sense that many of our callers * painstakingly compute and add the trailing '.' we now remove. */ while (repository_in && strlen (repository_in) >= 2 && repository_in[strlen (repository_in) - 2] == '/' && repository_in[strlen (repository_in) - 1] == '.') { /* Beware the case where the string is exactly "/." or "//.". * Paths with a leading "//" are special on some early UNIXes. */ if (strlen (repository_in) == 2 || strlen (repository_in) == 3) repository_in[strlen (repository_in) - 1] = '\0'; else repository_in[strlen (repository_in) - 2] = '\0'; } frame.repository = repository_in; expand_wild (argc, argv, &argc, &argv); if (update_preload == NULL) update_dir = xstrdup (""); else update_dir = xstrdup (update_preload); /* clean up from any previous calls to start_recursion */ if (repository) { free (repository); repository = NULL; } if (filelist) dellist (&filelist); /* FIXME-krp: no longer correct. 
*/ if (dirlist) dellist (&dirlist); #ifdef SERVER_SUPPORT if (server_active) { for (i = 0; i < argc; ++i) server_pathname_check (argv[i]); } #endif if (argc == 0) { int just_subdirs = (which & W_LOCAL) && !isdir (CVSADM); #ifdef CLIENT_SUPPORT if (!just_subdirs && CVSroot_cmdline == NULL && current_parsed_root->isremote) { cvsroot_t *root = Name_Root (NULL, update_dir); if (root) { if (strcmp (root->original, original_parsed_root->original)) /* We're skipping this directory because it is for * a different root. Therefore, we just want to * do the subdirectories only. Processing files would * cause a working directory from one repository to be * processed against a different repository, which could * cause all kinds of spurious conflicts and such. * * Question: what about the case of "cvs update foo" * where we process foo/bar and not foo itself? That * seems to be handled somewhere (else) but why should * it be a separate case? Needs investigation... */ just_subdirs = 1; } } #endif /* * There were no arguments, so we'll probably just recurse. The * exception to the rule is when we are called from a directory * without any CVS administration files. That has always meant to * process each of the sub-directories, so we pretend like we were * called with the list of sub-dirs of the current dir as args */ if (just_subdirs) { dirlist = Find_Directories (NULL, W_LOCAL, NULL); /* If there are no sub-directories, there is a certain logic in favor of doing nothing, but in fact probably the user is just confused about what directory they are in, or whether they cvs add'd a new directory. In the case of at least one sub-directory, at least when we recurse into them we notice (hopefully) whether they are under CVS control. 
*/ if (list_isempty (dirlist)) { if (update_dir[0] == '\0') error (0, 0, "in directory .:"); else error (0, 0, "in directory %s:", update_dir); error (1, 0, "there is no version here; run '%s checkout' first", program_name); } #ifdef CLIENT_SUPPORT else if (current_parsed_root->isremote && server_started) { /* In the case "cvs update foo bar baz", a call to send_file_names in update.c will have sent the appropriate "Argument" commands to the server. In this case, that won't have happened, so we need to do it here. While this example uses "update", this generalizes to other commands. */ /* This is the same call to Find_Directories as above. FIXME: perhaps it would be better to write a function that duplicates a list. */ args_to_send_when_finished = Find_Directories (NULL, W_LOCAL, NULL); } #endif } else addlist (&dirlist, "."); goto do_the_work; } /* * There were arguments, so we have to handle them by hand. To do * that, we set up the filelist and dirlist with the arguments and * call do_recursion. do_recursion recognizes the fact that the * lists are non-null when it starts and doesn't update them. * * explicitly named directories are stored in dirlist. * explicitly named files are stored in filelist. * other possibility is named entities which are not currently in * the working directory. */ for (i = 0; i < argc; i++) { /* if this argument is a directory, then add it to the list of directories. */ if (!wrap_name_has (argv[i], WRAP_TOCVS) && isdir (argv[i])) { strip_trailing_slashes (argv[i]); addlist (&dirlist, argv[i]); } else { /* otherwise, split argument into directory and component names. */ char *dir; char *comp; char *file_to_try; /* Now break out argv[i] into directory part (DIR) and file part * (COMP). DIR and COMP will each point to a newly malloc'd * string. */ dir = xstrdup (argv[i]); /* It's okay to cast out last_component's const below since we know * we just allocated dir here and have control of it. 
*/ comp = (char *)last_component (dir); if (comp == dir) { /* no dir component. What we have is an implied "./" */ dir = xstrdup("."); } else { comp[-1] = '\0'; comp = xstrdup (comp); } /* if this argument exists as a file in the current working directory tree, then add it to the files list. */ if (!(which & W_LOCAL)) { /* If doing rtag, we've done a chdir to the repository. */ file_to_try = Xasprintf ("%s%s", argv[i], RCSEXT); } else file_to_try = xstrdup (argv[i]); if (isfile (file_to_try)) addfile (&files_by_dir, dir, comp); else if (isdir (dir)) { if ((which & W_LOCAL) && isdir (CVSADM) && !current_parsed_root->isremote) { /* otherwise, look for it in the repository. */ char *tmp_update_dir; char *repos; char *reposfile; tmp_update_dir = xmalloc (strlen (update_dir) + strlen (dir) + 5); strcpy (tmp_update_dir, update_dir); if (*tmp_update_dir != '\0') strcat (tmp_update_dir, "/"); strcat (tmp_update_dir, dir); /* look for it in the repository. */ repos = Name_Repository (dir, tmp_update_dir); reposfile = Xasprintf ("%s/%s", repos, comp); free (repos); if (!wrap_name_has (comp, WRAP_TOCVS) && isdir (reposfile)) addlist (&dirlist, argv[i]); else addfile (&files_by_dir, dir, comp); free (tmp_update_dir); free (reposfile); } else addfile (&files_by_dir, dir, comp); } else error (1, 0, "no such directory `%s'", dir); free (file_to_try); free (dir); free (comp); } } /* At this point we have looped over all named arguments and built a coupla lists. Now we unroll the lists, setting up and calling do_recursion. */ err += walklist (files_by_dir, unroll_files_proc, (void *) &frame); dellist(&files_by_dir); /* then do_recursion on the dirlist. */ if (dirlist != NULL) { do_the_work: err += do_recursion (&frame); } /* Free the data which expand_wild allocated. 
*/ free_names (&argc, argv); free (update_dir); update_dir = NULL; #ifdef CLIENT_SUPPORT if (args_to_send_when_finished != NULL) { /* FIXME (njc): in the multiroot case, we don't want to send argument commands for those top-level directories which do not contain any subdirectories which have files checked out from current_parsed_root. If we do, and two repositories have a module with the same name, nasty things could happen. This is hard. Perhaps we should send the Argument commands later in this procedure, after we've had a chance to notice which directores we're using (after do_recursion has been called once). This means a _lot_ of rewriting, however. What we need to do for that to happen is descend the tree and construct a list of directories which are checked out from current_cvsroot. Now, we eliminate from the list all of those directories which are immediate subdirectories of another directory in the list. To say that the opposite way, we keep the directories which are not immediate subdirectories of any other in the list. Here's a picture: a / \ B C / \ D e / \ F G / \ H I The node in capitals are those directories which are checked out from current_cvsroot. We want the list to contain B, C, F, and G. D, H, and I are not included, because their parents are also checked out from current_cvsroot. The algorithm should be: 1) construct a tree of all directory names where each element contains a directory name and a flag which notes if that directory is checked out from current_cvsroot a0 / \ B1 C1 / \ D1 e0 / \ F1 G1 / \ H1 I1 2) Recursively descend the tree. For each node, recurse before processing the node. If the flag is zero, do nothing. If the flag is 1, check the node's parent. If the parent's flag is one, change the current entry's flag to zero. a0 / \ B1 C1 / \ D0 e0 / \ F1 G1 / \ H0 I0 3) Walk the tree and spit out "Argument" commands to tell the server which directories to munge. Yuck. 
It's not clear this is worth spending time on, since we might want to disable cvs commands entirely from directories that do not have CVSADM files... Anyways, the solution as it stands has modified server.c (dirswitch) to create admin files [via server.c (create_adm_p)] in all path elements for a client's "Directory xxx" command, which forces the server to descend and serve the files there. client.c (send_file_names) has also been modified to send only those arguments which are appropriate to current_parsed_root. */ /* Construct a fake argc/argv pair. */ int our_argc = 0, i; char **our_argv = NULL; if (! list_isempty (args_to_send_when_finished)) { Node *head, *p; head = args_to_send_when_finished->list; /* count the number of nodes */ i = 0; for (p = head->next; p != head; p = p->next) i++; our_argc = i; /* create the argument vector */ our_argv = xmalloc (sizeof (char *) * our_argc); /* populate it */ i = 0; for (p = head->next; p != head; p = p->next) our_argv[i++] = xstrdup (p->key); } /* We don't want to expand widcards, since we've just created a list of directories directly from the filesystem. */ send_file_names (our_argc, our_argv, 0); /* Free our argc/argv. */ if (our_argv != NULL) { for (i = 0; i < our_argc; i++) free (our_argv[i]); free (our_argv); } dellist (&args_to_send_when_finished); } #endif return err; } /* * Implement the recursive policies on the local directory. This may be * called directly, or may be called by start_recursion. */ static int do_recursion (struct recursion_frame *frame) { int err = 0; int dodoneproc = 1; char *srepository = NULL; List *entries = NULL; int locktype; bool process_this_directory = true; #ifdef HAVE_PRINT_PTR TRACE (TRACE_FLOW, "do_recursion ( frame=%p )", (void *) frame); #else TRACE (TRACE_FLOW, "do_recursion ( frame=%lx )", (unsigned long) frame); #endif /* do nothing if told */ if (frame->flags == R_SKIP_ALL) return 0; locktype = noexec ? 
CVS_LOCK_NONE : frame->locktype; /* The fact that locks are not active here is what makes us fail to have the If someone commits some changes in one cvs command, then an update by someone else will either get all the changes, or none of them. property (see node Concurrency in cvs.texinfo). The most straightforward fix would just to readlock the whole tree before starting an update, but that means that if a commit gets blocked on a big update, it might need to wait a *long* time. A more adequate fix would be a two-pass design for update, checkout, etc. The first pass would go through the repository, with the whole tree readlocked, noting what versions of each file we want to get. The second pass would release all locks (except perhaps short-term locks on one file at a time--although I think RCS already deals with this) and actually get the files, specifying the particular versions it wants. This could be sped up by separating out the data needed for the first pass into a separate file(s)--for example a file attribute for each file whose value contains the head revision for each branch. The structure should be designed so that commit can relatively quickly update the information for a single file or a handful of files (file attributes, as implemented in Jan 96, are probably acceptable; improvements would be possible such as branch attributes which are in separate files for each branch). */ #if defined(SERVER_SUPPORT) && defined(SERVER_FLOWCONTROL) /* * Now would be a good time to check to see if we need to stop * generating data, to give the buffers a chance to drain to the * remote client. We should not have locks active at this point, * but if there are writelocks around, we cannot pause here. */ if (server_active && locktype != CVS_LOCK_WRITE) server_pause_check(); #endif /* Check the value in CVSADM_ROOT and see if it's in the list. If not, add it to our lists of CVS/Root directories and do not process the files in this directory. Otherwise, continue as usual. 
THIS_ROOT might be NULL if we're doing an initial checkout -- check before using it. The default should be that we process a directory's contents and only skip those contents if a CVS/Root file exists. If we're running the server, we want to process all directories, since we're guaranteed to have only one CVSROOT -- our own. */ /* (NULL,; } } } /* * Fill in repository with the current repository */ if (frame->which & W_LOCAL) { if (isdir (CVSADM)) { repository = Name_Repository (NULL, update_dir); srepository = repository; /* remember what to free */ } else repository = NULL; } else { repository = frame->repository; assert (repository != NULL); assert (strstr (repository, "/./") == NULL); } fileattr_startdir (repository); /* * The filesdoneproc needs to be called for each directory where files * processed, or each directory that is processed by a call where no * directories were passed in. In fact, the only time we don't want to * call back the filesdoneproc is when we are processing directories that * were passed in on the command line (or in the special case of `.' when * we were called with no args */ if (dirlist != NULL && filelist == NULL) dodoneproc = 0; /* * If filelist or dirlist is already set, we don't look again. Otherwise, * find the files and directories */ if (filelist == NULL && dirlist == NULL) { /* both lists were NULL, so start from scratch */ if (frame->fileproc != NULL && frame->flags != R_SKIP_FILES) { int lwhich = frame->which; /* be sure to look in the attic if we have sticky tags/date */ if ((lwhich & W_ATTIC) == 0) if (isreadable (CVSADM_TAG)) lwhich |= W_ATTIC; /* In the !(which & W_LOCAL) case, we filled in repository earlier in the function. In the (which & W_LOCAL) case, the Find_Names function is going to look through the Entries file. If we do not have a repository, that does not make sense, so we insist upon having a repository at this point. Name_Repository will give a reasonable error message. 
*/ if (repository == NULL) { Name_Repository (NULL, update_dir); assert (!"Not reached. Please report this problem to <" PACKAGE_BUGREPORT ">"); } /* find the files and fill in entries if appropriate */ if (process_this_directory) { filelist = Find_Names (repository, lwhich, frame->aflag, &entries); if (filelist == NULL) { error (0, 0, "skipping directory %s", update_dir); /* Note that Find_Directories and the filesdoneproc in particular would do bad things ("? foo.c" in the case of some filesdoneproc's). */ goto skip_directory; } } } /* find sub-directories if we will recurse */ if (frame->flags != R_SKIP_DIRS) dirlist = Find_Directories ( process_this_directory ? repository : NULL, frame->which, entries); } else { /* something was passed on the command line */ if (filelist != NULL && frame->fileproc != NULL) { /* we will process files, so pre-parse entries */ if (frame->which & W_LOCAL) entries = Entries_Open (frame->aflag, NULL); } } /* process the files (if any) */ if (process_this_directory && filelist != NULL && frame->fileproc) { struct file_info finfo_struct; struct frame_and_file frfile; /* Lock the repository, if necessary. */ if (repository) { if (locktype == CVS_LOCK_READ) { if (Reader_Lock (repository) != 0) error (1, 0, "read lock failed - giving up"); } else if (locktype == CVS_LOCK_WRITE) lock_dir_for_write (repository); } #ifdef CLIENT_SUPPORT /* For the server, we handle notifications in a completely different place (server_notify). For local, we can't do them here--we don't have writelocks in place, and there is no way to get writelocks here. */ if (current_parsed_root->isremote) notify_check (repository, update_dir); #endif /* CLIENT_SUPPORT */ finfo_struct.repository = repository; finfo_struct.update_dir = update_dir; finfo_struct.entries = entries; /* do_file_proc will fill in finfo_struct.file. 
*/ frfile.finfo = &finfo_struct; frfile.frame = frame; /* process the files */ err += walklist (filelist, do_file_proc, &frfile); /* unlock it */ if (/* We only lock the repository above when repository is set */ repository /* and when asked for a read or write lock. */ && locktype != CVS_LOCK_NONE) Simple_Lock_Cleanup (); /* clean up */ dellist (&filelist); } /* call-back files done proc (if any) */ if (process_this_directory && dodoneproc && frame->filesdoneproc != NULL) err = frame->filesdoneproc (frame->callerdat, err, repository, update_dir[0] ? update_dir : ".", entries); skip_directory: fileattr_write (); fileattr_free (); /* process the directories (if necessary) */ if (dirlist != NULL) { struct frame_and_entries frent; frent.frame = frame; frent.entries = entries; err += walklist (dirlist, do_dir_proc, &frent); } #if 0 else if (frame->dirleaveproc != NULL) err += frame->dirleaveproc (frame->callerdat, ".", err, "."); #endif dellist (&dirlist); if (entries) { Entries_Close (entries); entries = NULL; } /* free the saved copy of the pointer if necessary */ if (srepository) free (srepository); repository = NULL; #ifdef HAVE_PRINT_PTR TRACE (TRACE_FLOW, "Leaving do_recursion (frame=%p)", (void *)frame); #else TRACE (TRACE_FLOW, "Leaving do_recursion (frame=%lx)", (unsigned long)frame); #endif return err; } /* * Process each of the files in the list with the callback proc * * NOTES * Fills in FINFO->fullname, and sometimes FINFO->rcs before * calling the callback proc (FRFILE->frame->fileproc), but frees them * before return. * * OUTPUTS * Fills in FINFO->file. * * RETURNS * 0 if we were supposed to find an RCS file but couldn't. * Otherwise, returns the error code returned by the callback function. 
*/ static int do_file_proc (Node *p, void *closure) { struct frame_and_file *frfile = closure; struct file_info *finfo = frfile->finfo; int ret; char *tmp; finfo->file = p->key; if (finfo->update_dir[0] != '\0') tmp = Xasprintf ("%s/%s", finfo->update_dir, finfo->file); else tmp = xstrdup (finfo->file); if (frfile->frame->dosrcs && repository) { finfo->rcs = RCS_parse (finfo->file, repository); /* OK, without W_LOCAL the error handling becomes relatively simple. The file names came from readdir() on the repository and so we know any ENOENT is an error (e.g. symlink pointing to nothing). Now, the logic could be simpler - since we got the name from readdir, we could just be calling RCS_parsercsfile. */ if (finfo->rcs == NULL && !(frfile->frame->which & W_LOCAL)) { error (0, 0, "could not read RCS file for %s", tmp); free (tmp); cvs_flushout (); return 0; } } else finfo->rcs = NULL; finfo->fullname = tmp; ret = frfile->frame->fileproc (frfile->frame->callerdat, finfo); freercsnode (&finfo->rcs); free (tmp); /* Allow the user to monitor progress with tail -f. Doing this once per file should be no big deal, but we don't want the performance hit of flushing on every line like previous versions of CVS. */ cvs_flushout (); return ret; } /* * Process each of the directories in the list (recursing as we go) */ static int do_dir_proc (Node *p, void *closure) { struct frame_and_entries *frent = (struct frame_and_entries *) closure; struct recursion_frame *frame = frent->frame; struct recursion_frame xframe; char *dir = p->key; char *newrepos; List *sdirlist; char *srepository; Dtype dir_return = R_PROCESS; int stripped_dot = 0; int err = 0; struct saved_cwd cwd; char *saved_update_dir; bool process_this_directory = true; if (fncmp (dir, CVSADM) == 0) { /* This seems to most often happen when users (beginning users, generally), try "cvs ci *" or something similar. 
On that theory, it is possible that we should just silently skip the CVSADM directories, but on the other hand, using a wildcard like this isn't necessarily a practice to encourage (it operates only on files which exist in the working directory, unlike regular CVS recursion). */ /* FIXME-reentrancy: printed_cvs_msg should be in a "command struct" or some such, so that it gets cleared for each new command (this is possible using the remote protocol and a custom-written client). The struct recursion_frame is not far back enough though, some commands (commit at least) will call start_recursion several times. An alternate solution would be to take this whole check and move it to a new function validate_arguments or some such that all the commands call and which snips the offending directory from the argc,argv vector. */ static int printed_cvs_msg = 0; if (!printed_cvs_msg) { error (0, 0, "warning: directory %s specified in argument", dir); error (0, 0, "\ but CVS uses %s for its own purposes; skipping %s directory", CVSADM, dir); printed_cvs_msg = 1; } return 0; } saved_update_dir = update_dir; update_dir = xmalloc (strlen (saved_update_dir) + strlen (dir) + 5); strcpy (update_dir, saved_update_dir); /* set up update_dir - skip dots if not at start */ if (strcmp (dir, ".") != 0) { if (update_dir[0] != '\0') { (void) strcat (update_dir, "/"); (void) strcat (update_dir, dir); } else (void) strcpy (update_dir, dir); /* * Here we need a plausible repository name for the sub-directory. We * create one by concatenating the new directory name onto the * previous repository name. The only case where the name should be * used is in the case where we are creating a new sub-directory for * update -d and in that case the generated name will be correct. 
*/ if (repository == NULL) newrepos = xstrdup (""); else newrepos = Xasprintf ("%s/%s", repository, dir); } else { if (update_dir[0] == '\0') (void) strcpy (update_dir, dir); if (repository == NULL) newrepos = xstrdup (""); else newrepos = xstrdup (repository); } /* Check to see that the CVSADM directory, if it exists, seems to be well-formed. It can be missing files if the user hit ^C in the middle of a previous run. We want to (a) make this a nonfatal error, and (b) make sure we print which directory has the problem. Do this before the direntproc, so that (1) the direntproc doesn't have to guess/deduce whether we will skip the directory (e.g. send_dirent_proc and whether to send the directory), and (2) so that the warm fuzzy doesn't get printed if we skip the directory. */ if (frame->which & W_LOCAL) { char *cvsadmdir; cvsadmdir = xmalloc (strlen (dir) + sizeof (CVSADM_REP) + sizeof (CVSADM_ENT) + 80); strcpy (cvsadmdir, dir); strcat (cvsadmdir, "/"); strcat (cvsadmdir, CVSADM); if (isdir (cvsadmdir)) { strcpy (cvsadmdir, dir); strcat (cvsadmdir, "/"); strcat (cvsadmdir, CVSADM_REP); if (!isfile (cvsadmdir)) { error (0, 0, "in directory %s:", update_dir); error (0, 0, "*PANIC* administration files missing!"); dir_return = R_SKIP_ALL; } /* Likewise for CVS/Entries. */ if (dir_return != R_SKIP_ALL) { strcpy (cvsadmdir, dir); strcat (cvsadmdir, "/"); strcat (cvsadmdir, CVSADM_ENT); if (!isfile (cvsadmdir)) { error (0, 0, "in directory %s:", update_dir); error (0, 0, "*PANIC* administration files missing!"); dir_return = R_SKIP_ALL; } } } free (cvsadmdir); } /* Only process this directory if the root matches. This nearly duplicates code in do_recursion. */ /* (dir,; } } } /* call-back dir entry proc (if any) */ if (dir_return == R_SKIP_ALL) ; else if (frame->direntproc != NULL) { /* If we're doing the actual processing, call direntproc. Otherwise, assume that we need to process this directory and recurse. FIXME. */ if (process_this_directory) dir_return = frame->direntproc (frame->callerdat, dir, newrepos, update_dir, frent->entries); else dir_return = R_PROCESS; } else { /* Generic behavior. I don't see a reason to make the caller specify a direntproc just to get this. 
*/ if ((frame->which & W_LOCAL) && !isdir (dir)) dir_return = R_SKIP_ALL; } free (newrepos); /* only process the dir if the return code was 0 */ if (dir_return != R_SKIP_ALL) { /* save our current directory and static vars */ if (save_cwd (&cwd)) error (1, errno, "Failed to save current directory."); sdirlist = dirlist; srepository = repository; dirlist = NULL; /* cd to the sub-directory */ if (CVS_CHDIR (dir) < 0) error (1, errno, "could not chdir to %s", dir); /* honor the global SKIP_DIRS (a.k.a. local) */ if (frame->flags == R_SKIP_DIRS) dir_return = R_SKIP_DIRS; /* remember if the `.' will be stripped for subsequent dirs */ if (strcmp (update_dir, ".") == 0) { update_dir[0] = '\0'; stripped_dot = 1; } /* make the recursive call */ xframe = *frame; xframe.flags = dir_return; /* Keep track of repository, really just for r* commands (rtag, rdiff, * co, ...) to tag_check_valid, since all the other commands use * CVS/Repository to figure it out per directory. */ if (repository) { if (strcmp (dir, ".") == 0) xframe.repository = xstrdup (repository); else xframe.repository = Xasprintf ("%s/%s", repository, dir); } else xframe.repository = NULL; err += do_recursion (&xframe); if (xframe.repository) { free (xframe.repository); xframe.repository = NULL; } /* put the `.' back if necessary */ if (stripped_dot) (void) strcpy (update_dir, "."); /* call-back dir leave proc (if any) */ if (process_this_directory && frame->dirleaveproc != NULL) err = frame->dirleaveproc (frame->callerdat, dir, err, update_dir, frent->entries); /* get back to where we started and restore state vars */ if (restore_cwd (&cwd)) error (1, errno, "Failed to restore current directory, `%s'.", cwd.name); free_cwd (&cwd); dirlist = sdirlist; repository = srepository; } free (update_dir); update_dir = saved_update_dir; return err; } /* * Add a node to a list allocating the list if necessary. 
*/ static void addlist (List **listp, char *key) { Node *p; if (*listp == NULL) *listp = getlist (); p = getnode (); p->type = FILES; p->key = xstrdup (key); if (addnode (*listp, p) != 0) freenode (p); } static void addfile (List **listp, char *dir, char *file) { Node *n; List *fl; /* add this dir. */ addlist (listp, dir); n = findnode (*listp, dir); if (n == NULL) { error (1, 0, "can't find recently added dir node `%s' in start_recursion.", dir); } n->type = DIRS; fl = n->data; addlist (&fl, file); n->data = fl; return; } static int unroll_files_proc (Node *p, void *closure) { Node *n; struct recursion_frame *frame = (struct recursion_frame *) closure; int err = 0; List *save_dirlist; char *save_update_dir = NULL; struct saved_cwd cwd; /* if this dir was also an explicitly named argument, then skip it. We'll catch it later when we do dirs. */ n = findnode (dirlist, p->key); if (n != NULL) return (0); /* otherwise, call dorecusion for this list of files. */ filelist = p->data; p->data = NULL; save_dirlist = dirlist; dirlist = NULL; if (strcmp(p->key, ".") != 0) { if (save_cwd (&cwd)) error (1, errno, "Failed to save current directory."); if ( CVS_CHDIR (p->key) < 0) error (1, errno, "could not chdir to %s", p->key); save_update_dir = update_dir; update_dir = xmalloc (strlen (save_update_dir) + strlen (p->key) + 5); strcpy (update_dir, save_update_dir); if (*update_dir != '\0') (void) strcat (update_dir, "/"); (void) strcat (update_dir, p->key); } err += do_recursion (frame); if (save_update_dir != NULL) { free (update_dir); update_dir = save_update_dir; if (restore_cwd (&cwd)) error (1, errno, "Failed to restore current directory, `%s'.", cwd.name); free_cwd (&cwd); } dirlist = save_dirlist; if (filelist) dellist (&filelist); return(err); } /* vim:tabstop=8:shiftwidth=4 */ | http://opensource.apple.com/source/cvs/cvs-42/cvs/src/recurse.c | CC-MAIN-2016-26 | refinedweb | 4,761 | 53.61 |
Jan Høydahl commented on SOLR-3613:
-----------------------------------
bq. I also don't think we should force "solr." for all the system properties. If someone adds
the ability to optionally check for the webapp prefix, then I think we should still be free
to use zkHost, collection.*, etc, in the examples/doc.
Why not? It is consistent, short and concise. At first I thought the "solr." prefix was
better kept as a convention rather than enforced in code. But say we do as you propose and add prefix
logic so that given ${myProp:foo}, we'll look for:
# {{solr.myProp}}
# else look for {{myProp}}
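That lookup order can be sketched as follows (a hypothetical illustration only, not actual Solr code; the class and method names here are made up):

```java
// Hypothetical sketch of the proposed lookup order for ${myProp:foo}.
public class PropResolver {
    public static String resolve(String name, String defaultValue) {
        String v = System.getProperty("solr." + name); // 1. namespaced form first
        if (v == null)
            v = System.getProperty(name);              // 2. else the bare form
        return (v != null) ? v : defaultValue;         // 3. else the inline default
    }
}
```

The drawback described below follows directly from step 2: if another webapp in the same JVM sets the bare property, Solr would pick it up.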
In this case we would need to change all literal {{solr.*}} props in all xml config files.
I see two drawbacks with this approach: one is that the examples then promote the use of the short
form while we'd like to encourage the namespaced form; the other is that if webapp XYZ
sets {{myProp}} and we have not explicitly set {{solr.myProp}}, then Solr will pick up a faulty
value for it. This could very well happen for generic opts like the ${host:} currently
defined in solr.xml.
So I still think it is better to require a {{solr.}} prefix for all sys props and leave in
the {{solr.}} prefix in config files as today.
Another problematic one from solr.xml is this: hostPort="${jetty.port:}". It assumes Jetty
as the Java application server, and it feels awkward to say {{-Djetty.port=8080}} to tell SolrCloud
that Tomcat is running on port 8080. Imagine an ops guy reading the Solr bootstrap script,
scratching his head. If all we do is read the value and add +1000 to pick the port for our
internal ZK, why not be explicit instead and have a {{solr.localZkPort}} prop? (No API to
get the web container's port? In that case we could support relative values and default to a
value of "+1000", which would behave as today, but with less to specify on the command line.)
While in picky mode :-) I'd prefer {{zkRun}} to be {{solr.localZkRun}} to distinguish that
this starts a *local* Zk as opposed to the remote one in {{zkHost}}. Also, the prop {{zkHost}}
is misleading, in that it takes a list of host:port; perhaps {{solr.zkServers}} is more clear?
{quote}
bq. a thin HTTP layer around Lucene
I've certainly never thought of Solr as that
{quote}
Well, not a pure HTTP layer, but still thin in the sense that Lucene does as much of
the core work as possible.
Last but certainly not least, Microsoft’s Virajith Jalaparti (left) and Ashvin Agrawal (right) discussed the evolution of the “provided storage” feature in HDFS, which allows for HDFS clients to transparently access external storage systems (such as Azure Data Lake Storage or Amazon S3). They described a mechanism whereby the NameNode would “mount” an external store as part of its own namespace, and clients would be able to access the data as if it resided on HDFS itself. The DataNodes, which normally store the data in HDFS, would transparently fetch the data from the remote store and serve it back to the client. They were even brave enough to give us a live demo! You can view their slides here and a recording of their presentation here.
Breakout sessions
Following all of our planned presentations, we held informal “birds of a feather” discussions about topics pertinent to the Hadoop community at large.
One session discussed the management of Hadoop releases, in particular the 2.X release series as opposed to the 3.X release series. Major version upgrades in Hadoop can be painful, and many large operators are wary of upgrading from Hadoop 2 to 3. There is some support in the community for a "bridge" release: a final release on the Hadoop 2 release line before taking the plunge on a major version upgrade.
Another session discussed Java versioning. Previously, the stance of the Hadoop community was that Java version upgrades would always be accompanied by a Hadoop major version upgrade; for example, Hadoop 2 supports Java 7 and above, while Hadoop 3 only supports Java 8 and above. However, given the changes in Oracle’s release and support roadmap to a much more rapid release cycle, the Hadoop community must adapt its policies. We discussed that we will likely need to drop support for Java versions in minor, rather than major, releases of Hadoop.
Another major topic of discussion was the future of Ozone. There were deep dives into various portions of Ozone’s architecture, and in-depth discussions of how various frameworks such as Apache Spark, Apache Impala, and Presto would work on top of Ozone. Finally, there were discussions of its release timelines, and how erasure coding functionality, a recent addition to HDFS, could be supported in Ozone as well.
Acknowledgments
All of us here at LinkedIn were thrilled to be a part of the engaged community present at this meetup. Thanks to all of our speakers and participants for making this a fun and fruitful event. We’re greatly looking forward to the next one!
This meetup couldn’t have happened with the support of our amazing events staff here at LinkedIn. I owe great thanks to our media technician, Francisco Zamora, and to the rest of the catering and event services professionals who helped us out! | http://engineeringjobs4u.co.uk/the-present-and-future-of-apache-hadoop-a-community-meetup-at-linkedin | CC-MAIN-2019-30 | refinedweb | 473 | 59.43 |
A class defines a set of data and the operations you can perform on that data. Subclasses are similar to the classes from which they are derived, but they may have different properties or additional behavior. In general, any operation that is valid for a class is also valid for each subclass of that class.
Classes that you define with SCL can support two types of relationships:
Generally, the attributes, methods, events, event handlers, and interfaces that belong to a parent class are automatically inherited by any class that is created from it. One metaphor that is used to describe this relationship is that of the family. Classes that provide the foundation for other classes are called parent classes, and classes that are derived from parent classes are child classes. When more than one class is derived from the same parent class, these classes are related to each other as sibling classes. A descendent of a class has that class as a parent, either directly or indirectly through a series of parent-child relationships. In object-oriented theory, any subclass that is created from a parent class inherits all of the characteristics of the parent class that it is not specifically prohibited from inheriting. The chain of parent classes is called an ancestry.
Class Ancestry
Whenever you create a new class, that class inherits
all of the properties (attributes, methods, events, event handlers, and interfaces)
that belong to its parent class. For example, the Object class is the parent
of all classes in SAS/AF software.
The Frame and Widget classes are subclasses of the Object class, and they
inherit all properties of the Object class. Similarly, every class you use
in a frame-based application is a descendent of the Frame, Object, or Widget
class, and thus inherits all the properties that belong to those classes.
In addition to the inheritance relationship, classes have an instantiation or an "is a" relationship. For example, a frame is an instance of the Frame class; a radio box control is an instance of the Radio Box Control class; and a color list object is an instance of the Color List Model class.
All classes are instances of the Class class. The Class class is a metaclass. A metaclass collects information about other classes and enables you to operate on other classes. For more information about metaclasses, see Metaclasses.
Some SAS/AF software classes are specific types of classes.
Abstract classes group attributes and methods that are common to several subclasses. These classes themselves cannot be instantiated; they simply provide functionality for their subclasses.
The Widget class in SAS/AF software
is an example of an abstract class. Its purpose is to collect properties that
all widget subclasses can inherit. The Widget class cannot be instantiated.
In SAS/AF software, components that are built on the SAS Component Object Model (SCOM) framework can be classified either as views that display data or as models that provide data. Although models and views are typically used together, they are nevertheless independent components. Their independence allows for customization, flexibility of design, and efficient programming.
Models are non-visual components that provide data. For example, a Data Set List model contains the properties for generating a list of SAS data sets (or tables), given a specific SAS library. A model may be attached to multiple views.
Views are components that provide a visual representation of the data, but they have no knowledge of the actual data they are displaying. The displayed data depends on the state of the model that is connected to the view. A view can be attached to only one model at a time.
It may be helpful to think of model/view components as client/server components. The view acts as the client and the model acts as the server.
For more information on interfaces, see Interfaces. For more information on implementing model/view communication, refer to SAS Guide to Applications Development and to the SAS/AF online Help.
As previously mentioned, the Class class (sashelp.fsp.Class.class) and any subclasses you create from it are metaclasses. Metaclasses enable you to collect information about other classes and to operate on those classes.
Metaclasses enable you to make changes to the application at run time rather than only at build time. Examples of such changes include where a class's methods reside, the default values of class properties, and even the set of classes and their hierarchy.
Metaclasses also enable you to access information about parent classes, subclasses, and the methods and properties that are defined for a class, through methods of the Class class.
For more information about metaclasses, see the Class class in the SAS/AF online Help.
You can create classes in SCL with the CLASS block. The CLASS block begins with the CLASS statement and ends with the ENDCLASS statement.

The CLASS statement enables you to define attributes, methods, events, and event handlers for a class and to specify whether the class supports or requires an interface. The remaining sections in this chapter describe these elements in more detail.
The EXTENDS clause specifies the parent class. If you do not specify
an EXTENDS clause, SCL assumes that
sashelp.fsp.object.class is the parent class.
Using the CLASS block instead of the Class Editor to create a class enables the compiler to detect errors at compile time, which results in improved performance during run time.
For a complete description of the CLASS statement, see CLASS. For a description of using the Class Editor to define classes, refer to SAS Guide to Applications Development.
Suppose you are editing an SCL entry in the Build window and that the entry contains a CLASS block. For example:
class Simple extends myParent;
   public num num1;
   M1: method n:num return=num / (scl='work.a.uSimple.scl');
   M2: method return=num;
      num1 = 3;
      dcl num n = M1(num1);
      return (n);
   endmethod;
endclass;

To generate a CLASS entry from the CLASS block, issue the SAVECLASS command or select the equivalent menu item. Generating the CLASS entry from the CLASS block is equivalent to using the Class Editor to create a CLASS entry interactively.
The CLASS block is especially useful when you need to make many changes to an existing class. To make changes to an existing class, use the CREATESCL function to write the class definition to an SCL entry. You can then edit the SCL entry in the Build window. After you finish entering changes, you can generate the CLASS entry by issuing the SAVECLASS command or selecting the equivalent menu item. For more information, see CREATESCL.
Any METHOD block in a class can refer to methods or attributes in its own class without specifying the _SELF_ system variable (which contains the object identifier for the class). For example, if method M1 is defined in class X (and it returns a value), then any method in class X can refer to method M1 as follows:
n=M1();

You do not need to use the _SELF_ system variable:

n=_SELF_.M1();

Omitting references to the _SELF_ variable (which is referred to as shortcut syntax) makes programs easier to read and maintain. However, if you are referencing a method or attribute that is not in the class you are creating, you must specify the object reference.
To instantiate a class, declare a variable of the specific class type, then use the _NEW_ operator. For example:
dcl mylib.classes.collection.class C1;
C1 = _new_ Collection();

You can combine these two operations as follows:

dcl mylib.classes.collection.class C1 = _new_ Collection();

The _NEW_ operator combines the actions of the LOADCLASS function, which loads a class, with the _new method, which initializes the object by invoking the object's _init method.
You can combine the _NEW_ operator with the IMPORT statement, which defines a search path for references to CLASS entries, so that you can refer to these entries with one or two-level names instead of having to use a four-level name in each reference.
For example, you can use the following statements to
create a new collection object called C1 as an instance of the collection
class that is stored in
mylib.classes.collection.class:
/* Collection class is defined in */
/* the catalog MYLIB.MYCAT        */
import mylib.mycat.collection.class;

/* Create object C1 from a collection class */
/* defined in MYLIB.MYCAT.COLLECTION.CLASS  */
declare Collection C1=_new_ Collection();
For more information, see _NEW_ and LOADCLASS.
Copyright 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved. | http://v8doc.sas.com/sashtml/sclr/z1107709.htm | CC-MAIN-2018-05 | refinedweb | 1,453 | 53.71 |
Remez tutorial 4/5: fixing lower-order parameters
In the previous section, we took advantage of the symmetry of sin(x) to build a minimax expression of the form sin(x) ≈ x·Q(x²), leading amongst others to the following coefficient:
const double a1 = 9.999999765898820673279342160490060830302e-1;
This is an interesting value, because it is very close to 1. Many CPUs can load the value 1 very quickly, which can be a potential runtime gain.
The brutal way
Now we may wonder: what would be the cost of directly setting
a1 = 1 here? Let’s see the error value:
Duh. Pretty bad, actually. Maximum error is about 10 times worse.
The clever way
The clever way involves some more maths. Instead of searching for a polynomial Q(y) and setting Q(0) = 1 manually, we write Q(y) = 1 + y·R(y) and search for R(y) instead:

  sin(x) = x·Q(x²) = x·(1 + x²·R(x²))

Substituting y = x² and dividing by y up and down gives:

  R(y) = (sin(√y) − √y) / (y·√y)

Once again, we get a form suitable for the Remez algorithm.
Source code
#include "lol/math/real.h" #include "lol/math/remez.h" using lol::real; using lol::RemezSolver; real f(real const &y) { real sqrty = sqrt(y); return (sin(sqrty) - sqrty) / (y * sqrty); } real g(real const &y) { return re(y * sqrt(y)); } int main(int argc, char **argv) { RemezSolver<3, real> solver; solver.Run("1e-1000", real::R_PI_2 * real::R_PI_2, f, g, 40); return 0; }
Only f and g changed here, as well as the polynomial degree. The rest is the same as in the previous section.
Compilation and execution
Build and run the above code:
make
./remez
After all the iterations the output should be as follows:
Step 8 error: 4.618689007546850899022101933442449327546e-9
Polynomial estimate:
x**0*-1.666665709650470145824129400050267289858e-1
+x**1*8.333017291562218127986291618761571373087e-3
+x**2*-1.980661520135080504411629636078917643846e-4
+x**3*2.600054767890361277123254766503271638682e-6
We can therefore write the corresponding C++ function:
double fastsin2(double x)
{
    const double a3 = -1.666665709650470145824129400050267289858e-1;
    const double a5 = 8.333017291562218127986291618761571373087e-3;
    const double a7 = -1.980661520135080504411629636078917643846e-4;
    const double a9 = 2.600054767890361277123254766503271638682e-6;
    return x + x*x*x * (a3 + x*x * (a5 + x*x * (a7 + x*x * a9)));
}
Note that because of our change of variables, the polynomial coefficients are now a3, a5, a7…
Analysing the results
Let's see the new error curve (better-error.png, attached below):
Excellent! The loss of precision is clearly not as bad as before.
Conclusion
You should now be able to fix lower-order coefficients in the minimax polynomial for possible performance improvements.
Please report any trouble you may have had with this document to sam@hocevar.net. You may then carry on to the next section: additional tips.
Attachments (2)
- bad-error.png (20.3 KB) - added by 7 years ago.
- better-error.png (29.8 KB) - added by 7 years ago.
25 May 2011 05:32 [Source: ICIS news]
By Helen Yan
SINGAPORE (ICIS)--Acrylonitrile (ACN) prices in Asia are expected to fall further on the back of softening downstream demand and fresh supply, industry sources said.
ACN spot prices have fallen by $100/tonne (€71/tonne) since early May to $2,650-2,750/tonne CFR (cost and freight) NE (northeast) Asia.
ACN spot prices had been on an uptrend from August 2010 until mid-May this year, when a number of AF producers cut operating rates or shut down production in China.
AF - used in clothing and home furnishings such as carpets, upholstery and cushions - accounts for more than half of ACN demand in China.
“We expect ACN prices to continue to fall further as demand has dropped significantly in China.”
Buying indications from Chinese traders this week have plunged to $2,400/tonne CFR NE Asia for June shipments, down from previous bids of $2,600/tonne CFR NE Asia in early May, in line with sharp falls in ACN values in the domestic Chinese market.
Chinese domestic ACN prices tumbled to yuan (CNY) 19,000/tonne EXWH (ex-warehouse) this week, down by CNY2,000/tonne ($307/tonne) since the end of April, according to traders.
In response to the weak demand, major Chinese ACN producer Jilin Petrochemical has brought forward a planned turnaround of its three ACN lines to June from July, a company source said. The lines have a combined capacity of 332,000 tonnes/year.
Meanwhile, ACN supply is expected to ease when PTT Asahi Chemical starts commercial production at its new 200,000 tonne/year ACN plant in Thailand.
The plant is currently on trial runs, with commercial production likely to commence in July, said a company source.
“We expect ACN supply to ease soon and we are not looking to buy spot,” said a downstream AF producer.
($1 = €0.71 / $1 = CNY6.51)
Hi Neil,

Neil Jerram <address@hidden> writes:

> FYI, I finally fixed some problems on IA64 that have been outstanding
> for ages (at least about 2 years). Following is a big spiel about it,
> and the actual patch (against 1.8.4 release). If anyone has any
> comments or questions, please let me know. (And if not, I'll commit
> in a couple of days' time.)

Thanks a lot for fixing this! It just missed 1.8.5 by a few hours (my fault). Hopefully distributions will apply the patch by themselves until we release a new version...

Thanks also for the nice explanation. I must confess I didn't grasp everything (especially since I'm not familiar with IA64 and its RBS thing), but I'm confident you did the right thing. ;-)

> + * threads.h (scm_i_thread): New IA64 fields:
> + register_backing_store_base and pending_rbs_continuation.

This breaks ABI compatibility on IA64, but if Guile wasn't usable on IA64 (was it?) that's probably not a problem.

> + void scm_ia64_longjmp (jmp_buf *, int);

Add `SCM_API' at the beginning and `SCM_NORETURN' at the end. The latter should fix this:

> +#ifdef __ia64__
> + /* On IA64, we #define longjmp as setcontext, and GCC appears not to
> + know that that doesn't return. */
> + return SCM_UNSPECIFIED;
> +#endif

Thanks!

Ludovic.
Carina De Jager (1,779 Points)
Won't change to uppercase
Hey guys. Posted this same question in the early afternoon yesterday and never got an answer... Any insights as to why my Grrr!!! isn't coming back in uppercase?
from animal import Animal

class Sheep(Animal):
    pass
    sound = 'Grrr!!!'

    def __str__(self):
        self.sheep.noise = self.sound.upper()
2 Answers
cm21 (150,854 Points)
Hi Carina,
Instead of a __str__ method, let's create a method named noise and have it return the uppercase value of the instance's sound. That way, calling noise on a Sheep gives you the uppercase value of self.sound.
from animal import Animal

class Sheep(Animal):
    sound = 'Grrr!!!'

    def noise(self):
        return self.sound.upper()
Hope this helps!
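For readers who don't have the course's animal.py on hand, here is a runnable version of the fix with a stand-in Animal base class (the real base class from the course may define more than this):

```python
class Animal:
    """Stand-in for the course's animal.py base class."""
    pass

class Sheep(Animal):
    sound = 'Grrr!!!'

    def noise(self):
        # str.upper() returns a new uppercase string; it does not
        # modify self.sound in place.
        return self.sound.upper()

print(Sheep().noise())  # prints GRRR!!!
```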
Carina De Jager (1,779 Points)
YES!!! Thank you! Was starting to go a bit bonkers. Haha | https://teamtreehouse.com/community/wont-change-to-uppercase-2 | CC-MAIN-2022-40 | refinedweb | 140 | 70.39 |
Created attachment 12514 [details]
XS project which shows the linker error described in the Description.
Attached is a zip file containing a project that succeeds when built with XS 5.9.5 + Xamarin.iOS 8.10.4.46, but fails the linker step when built with XS 5.9.5 (build 17) + Xamarin.iOS 8.99.3.290.
My current setup:
=== Xamarin Studio ===
Version 5.9.5 (build 17)
Installation UUID: bb12c0a1-844d-4ace-bbe9-508629c49e9a
Runtime:
Mono 4.0.3 ((detached/d6946b4)
GTK+ 2.24.23 (Raleigh theme)
Package version: 400030020
=== Apple Developer Tools ===
Xcode 7.0 (8190.6)
Build 7A176x
=== Xamarin.iOS ===
Version: 8.99.3.290 (Business Edition)
Hash: 2628f96
Branch: master
Build date: 2015-08-09 22:08:44-0400
=== Build Information ===
Release ID: 509050017
Git revision: 7d17e84374f953da1c64d66d75fc651520528e6e
Build date: 2015-07-21 20:36:20-04
Xamarin addins: 45b520f604ef71d1ad2cd3756544d45dac93867e
=== Operating System ===
Mac OS X 10.10.4
Darwin ws1799.lrscorp.net 14.4.0 Darwin Kernel Version 14.4.0
Thu May 28 11:35:04 PDT 2015
root:xnu-2782.30.5~1/RELEASE_X86_64 x86_64
This linker error is still a problem with Xamarin.iOS 8.99.4.220.
=== Xamarin Studio ===
Version 5.9.5 (build 18)
Installation UUID: bb12c0a1-844d-4ace-bbe9-508629c49e9a
Runtime:
Mono 4.2.0 (explicit/a224653)
GTK+ 2.24.23 (Raleigh theme)
Package version: 402000179
=== Apple Developer Tools ===
Xcode 7.0 (8208.9)
Build 7A192o
=== Xamarin.iOS ===
Version: 8.99.4.220 (Business Edition)
Hash: 52034fb
Branch: master
Build date: 2015-08-26 23:50:57-0400
=== Build Information ===
Release ID: 509050018
Git revision: e9148b1cfc781f8e7751f88540c6d65cca5be410
Build date: 2015-08-24 11:44:21-04
Xamarin addins: 3b908d565411f1a7425b67926ede4359e7000172
=== Operating System ===
Mac OS X 10.10.5
Darwin ws1799.lrscorp.net 14.5.0 Darwin Kernel Version 14.5.0
Wed Jul 29 02:26:53 PDT 2015
root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64
This is the build error:
> Linking SDK only for assembly /Users/rolf/Downloads/TestApp/TestApp/bin/iPhoneSimulator/Debug//TestApp.exe into /Users/rolf/Downloads/TestApp/TestApp/obj/iPhoneSimulator/Debug/mtouch-cache/PreBuild
> MTOUCH: error MT2001: Could not link assemblies. Reason: Can't not find the nested type '<<.ctor>b__2c>d__34' in 'Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache/Timer/<>c__DisplayClass2f
full build log:
This issue persists in Xamarin.iOS 9.0.0.32.
We were able to work around the problem by recompiling the 3.5.1 MvvmCross DownloadCache source with Mono, which generates slightly different types.
➜ (from nuget) monop -p -r:Cirrious.MvvmCross.Plugins.DownloadCache.dll.orig|grep Timer
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<>c__DisplayClass2f
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<>c__DisplayClass2f+<<.ctor>b__2c>d__34
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<>c__DisplayClass2f+<>c__DisplayClass32
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+TimerCallback
➜ (built with mono) monop -p -r:Cirrious.MvvmCross.Plugins.DownloadCache.dll|grep Timer
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<Timer>c__AnonStorey5
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<Timer>c__AnonStorey5+<Timer>c__async3
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<Timer>c__AnonStorey5+<Timer>c__async3+<Timer>c__AnonStorey4
Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+TimerCallback
Any news on this issue? When can we expect a fix?
The issue seems to be with the .mdb file (copied from VS to the Mac). A release build works fine.
*** Bug 33964 has been marked as a duplicate of this bug. ***
*** Bug 34063 has been marked as a duplicate of this bug. ***
Created attachment 12970 [details]
Test case, minimal
Just on the small chance it might be useful at some point over the course of this bug's life, the following class is sufficient to reproduce the problem (when it is compiled by Microsoft's C# 5 (VS 2013) `csc.exe` compiler):
> public class Class1
> {
> public Class1()
> {
> Action x = async () => { };
> }
> }
The attached test case includes a `csc`-compiled version of this class in `UnifiedSingleViewIphone1/lib/PortableClassLibrary1.dll`
## Steps to reproduce
> $ xbuild /t:Build /p:Platform="iPhone" /p:Configuration="Release" PortableClassLibrary1.sln
(You could build on Windows instead if you wanted, but since `PortableClassLibrary1.dll` is pre-compiled, it is sufficient to build the solution on Mac.)
## Regression status: regression in Xamarin.iOS 9.0
BAD: Xamarin.iOS 9.0.1.18 (xcode7-c5: d230615)
GOOD: Xamarin.iOS 8.10.5.26 (6757279)
The Xamarin developers are creating a follow-up build to fix this issue that will be released within the next few days.
(Small additional side note to users seeing this issue with libraries other than MvvmCross: the workaround from comment 4 is only possible if you have the source code of the library that causes the problem. There are almost certainly some closed source NuGet packages and Components that are affected by this issue. The workaround from comment 4 will not be possible with those libraries. Apart from disabling the linker entirely (which will not be suitable for App Store submissions) or downgrading, no general workarounds are known at this time.)
Is there a release date for 9.0.1.20?
I can't see it on the Beta or Stable channels. The latest version available is 9.0.1.18
Is there any additional information about a possible work around. My application is broken in iOS 9, but I cannot build my app with the linker turned on to submit a fixed version.
Please give an ETA of the release or a workaround. This is a major bug.
Upgrading to VS 2015 works fine. The catch is that if the solution has any shared projects, it may complain that MSBuild/v14.0.0/8.1/Microsoft.Windows.UI.Xaml.CSharp.targets cannot be found. If that's the case, just copy it over from v12.0.0/8.1 and it should work as normal.
## Draft development build with a fix available via contact@xamarin.com
The Xamarin.iOS team has now created a draft development build that reverts the change that caused this problem. For anyone who would like access to this draft build, please send an email to contact@xamarin.com and refer to Bug 33124.
This draft build is under review by the engineering and QA teams to assess whether it is suitable for publication on the Stable updater channel. If all goes according to plan, it will be available on the Stable channel before the end of the week.
Thanks Brendan. I got a copy of the build earlier, and I am not able to build with the linker turned on.
For anyone who tries the new build and hits problems, please follow up with the Support Team via email. You can use contact+xamarinios9.0@xamarin.com or one of the email addresses listed on.
Thanks in advance!
For bookkeeping, I will note that Xamarin.iOS 9.0.1.20 (that includes the fix for this bug) has now been released to the Stable channel. See comment 21 for any further follow-up on this bug. Thanks! | https://bugzilla.xamarin.com/33/33124/bug.html | CC-MAIN-2021-39 | refinedweb | 1,150 | 52.46 |
Forwarded by permission from the author. -- Juliusz
--- Begin Message ---
- To: Juliusz Chroboczek <jch@pps.jussieu.fr>
- Subject: Re: [Juliusz Chroboczek] A few observations about systemd
- From: Lennart Poettering <lennart@poettering.net>
- Date: Sun, 17 Jul 2011 17:53:38 +0200
- Message-id: <20110717155338.GA6393@tango.0pointer.de>
- In-reply-to: <7ivcv1cbv5.fsf@lanthane.pps.jussieu.fr>
- References: <7ivcv1cbv5.fsf@lanthane.pps.jussieu.fr>On Sun, 17.07.11 14:47, Juliusz Chroboczek (jch@pps.jussieu.fr) wrote: > Dear all, Wow, you are amazingly badly informed. >. Seriously? You have a pretty bogus definition of bloat. If you want to compare systemd in lines-of-code with sysvinit, then you need to sum everything up: inetd, the numerous rcS scripts and even the enourmous duplication that sysv init scripts are. And yes, systemd will easily win if you do: it will be much shorter. In fact, a minimal systemd system will win in almost very aspect against a remotely similarly powerful sysvinit system: you will need much fewer processes to boot. That means much shorter boot times. That means much fewer resources. You need a smaller set of packages to built it. It's fewer lines of code. And yet, it will be much more powerful. Also, systemd does not "take over the role" of modprobe in any way. I am not sure where you have that from. >. Oh, come on. systemd does not depend on Plymouth, it merely interacts with it if it is around it. Where interaction simply means writing a single message every now and then to ply to keep it updated how far the boot proceeded. It's more or a less a single line of text we send over every now and then in very terse code. It would be rother strange if we'd spawn a process for this each time (which would practically double the number of processes we need to spawn at boot). And even more stupid to spawn a shell script for it (which would at least triple it). I am not sure what makes you think D-Bus was only for the "desktop". It hasn't been for quite some time, as most commercial distros already shipped Upstart before systemd, which had a hard dependency on it. I like to think that that "D-Bus" actually stands for "D-Bus Bus", a truly recursive acronym. Also, the claim of D-Bus not being useful and unncessary on anything but the desktop is pure ignorance. Advanced programs need a form of call-based IPC. 
Now you have two options: every project can implement its own IPC, duplicate code, and fuck it up. Or all projects use the same, powerful one with bindings for all programming languages, that has been reviewed thoroughly, is well known and hence relatively secure. Reusing the same code also makes things much smaller, in contrast to the "bloat" that occurs when everybody implements their own IPC.

You know, you are welcome to criticise D-Bus for its code or other qualities. But if you doubt its usefulness or the need for it, then this reveals more about your nescience than about D-Bus.

>.

Oh god. It's about 30 lines of code, which become NOPs if Plymouth is not around. It's the simplest scheme thinkable, and debloats the system a lot (see above). I see no point in supporting numerous alternative implementations of splash screens. We gently try to push people to use Plymouth, and only Plymouth, since it is the sanest implementation around. But that doesn't mean you have to use it. There is no dependency between the two. It's just that when you use the combination of systemd and Plymouth you get the most powerful combination. If you use only one of them things will still work.

If you find short and minimal code "shocking", then you are easily shocked. I might recommend you a less drastic language though.

>.)

Simply not true. You can assign legacy runlevels to systemd targets relatively freely, by placing symlinks in /etc/systemd/. (With the exception of runlevels 0, 1, 6 which however cannot really be reconfigured on sysvinit either.) I'll not comment on the benefit of doing so though.

>.

That "language" is .ini files. Everybody knows .ini files. They are so simple and have been around for so long that you don't need to learn them. Many programming languages come with parsers for them out of the box. OTOH shell is a turing complete language, and a very weird one on top.

>.)

Yeah, well, systemd is a lot more powerful with a much simpler language.
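[As a concrete illustration of the format being discussed, here is a hypothetical unit file; it is not part of the original mail, and the daemon path and values are made up:]

```ini
# example.service -- hypothetical unit file in the .ini syntax described above
[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/bin/example-daemon
# one declarative line each for scheduling, OOM and privilege settings
IOSchedulingClass=idle
OOMScoreAdjust=500
CPUAffinity=0 1
User=example

[Install]
WantedBy=multi-user.target
```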
To configure stuff like the IO or CPU scheduler, the CPU affinity, OOM adjustment, timer slack, capabilities, privilege dropping, namespaces, control groups, secure bits, and so on, you just need a line each in the systemd unit files. In shell however, you will have a hard time. You can install additional packages to make some of these things work, but that again increases bloat, and slows down boot (since it multiplies the number of processes you need).

systemd, with all the power it gives you in unit files, encourages developers to ship their software robust and secure *by default*. In shell this is not realistically doable, unless you want to pull in a lot of additional dependencies. Again, systemd helps "debloating", and sysvinit encourages it.

On top of that, everybody can easily understand systemd unit files, without having to learn a programming language. Unit files can easily be generated programmatically and parsed programmatically too. Shell scripts cannot be, unless you reimplement a full bourne shell interpreter.

Finally, systemd does not stop you from using shell scripts. There are certainly things systemd won't do for you, and never will. If that's the case and you don't want to add that feature to your daemon code itself, then you are welcome to just spawn a shell script from the unit file, nothing will stop you. As every good software should: systemd makes the frequent things easy and the other things possible.

> Systemd is Linux-specific
> -------------------------
>
> Systemd is specific to Linux. This is strange, since the only feature
> of Linux used by systemd that doesn't have an exact equivalent on other
> systems, cgroups, is optional in systemd.

Yeah, that is really bogus. Here's a short and very incomplete list of Linux interfaces that systemd uses that the other Unixes don't have.
We make use of these features and we empower the user and admin to take advantage of them, which we couldn't if we cared about POSIX and POSIX only. (Sure, some of the other unixes have a few of these features, but that's not the point, and it doesn't make this POSIX.) And this list isn't complete. It's just grepping through two source files. There's a reason why systemd is more powerful than other init systems: we don't limit ourselves to POSIX, we actually want to give the user/administrator the power that Linux can offer you.

> Systemd's author is annoying
> ----------------------------
>
> While I haven't had the pleasure to meet Lennart in private, I find his
> public persona annoying, both on-line and at conferences.

While I haven't had the pleasure to meet Juliusz in private, I find his personal and public persona annoying online. He writes personal emails to people telling them how he finds them annoying. He sends FUD mails around while being amazingly badly informed.

> He practices misleading advertising[2], likes to claim that the
> universal adoption of systemd by all distributions is a done thing[3],
> and attempts to bully anyone who has the gall to think that the
> discussion is still open[4].

Juliusz practices misleading anti-advertising [1], likes to ignore the fact that all major distros either made systemd the default or include it in their distro, with the exception of Ubuntu.

[1] The mail this mail is a response to.

You know, you personally attack me and that's quite an unfriendly move. Even if you think I am a dick, I can tell you that I am not the one who runs personal attacks like this, and publicly calls people by their name. You write personal hate mail. I don't. Who's the real dick here?

> Conclusion
> ==========
>
> Systemd is the first init replacement worth criticising.

Nah, the conclusion is more likely that nescience doesn't stop people from writing stupid opinion pieces.
Feel free to forward this to your mailing list, since you wouldn't have forwarded this to me if you didn't want me to reply to this. And don't conveniently leave parts out of it. Cheers, Lennart
--- End Message ---
Video playback
media_player_mpy plugin
The media_player_mpy plugin is based on MoviePy. As of OpenSesame 3.1, it is included by default with the Windows and Mac OS packages of OpenSesame. If it is not installed, you can get it by installing the opensesame-plugin-media_player_mpy package, as described here:
The source code is hosted at:
media_player_vlc plugin
The media_player_vlc plugin is outdated. It's better to use the media_player_mpy plugin instead.
The media_player_vlc plugin is based on the well-known VLC media player. As of OpenSesame 3.1, it is no longer included by default with the Windows and Mac OS packages of OpenSesame. If it is not installed, you can get it by installing the opensesame-plugin-media_player_vlc package, as described here:
The source code is hosted at:
In addition, you need to install the VLC media player in the default location:
Troubleshooting: If you encounter a black screen when running your experiment in fullscreen (i.e. the video appears to play, but you don't see anything), please try using a different backend (i.e. switch from legacy to xpyriment or vice versa), or change the backend settings for the legacy backend.
OpenCV
OpenCV is a powerful computer vision library, which contains (among many other things) routines for reading video files.
The following example shows how to play back a video file, while drawing a red square on top of the video. This example assumes that you're using the legacy backend.
import cv2
import numpy
import pygame

# Full path to the video file in the file pool
path = pool['myvideo.avi']
# Open the video
video = cv2.VideoCapture(path)
# A loop to play the video file. This can also be a while loop until a key
# is pressed, etc.
for i in range(100):
    # Get a frame
    retval, frame = video.read()
    # Rotate it, because for some reason it otherwise appears flipped.
    frame = numpy.rot90(frame)
    # The video uses BGR colors and PyGame needs RGB
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Create a PyGame surface
    surf = pygame.surfarray.make_surface(frame)
    # Now you can draw whatever you want onto the PyGame surface!
    pygame.draw.rect(surf, (255, 0, 0), (100, 100, 200, 200))
    # Show the PyGame surface!
    exp.surface.blit(surf, (0, 0))
    pygame.display.flip()
Despite:
I developed the project on a Mac using Sublime Text 3. This means that if you are using another OS, some commands may differ slightly.
Setting Up Django Project
Before we can work with bokeh, we need to setup our django project. If you are already familiar with setting up django projects, feel free to skip ahead.
Let’s open the command line/terminal. Typically it will already be pointing to your home directory when you open it.
Navigate to your preferred directory area through use of the cd command. I am going to store the project in a directory called codeprojects.
I then make a new directory for the project using the mkdir command.
mkdir bokeh_project
Then navigate into the directory you created.
cd bokeh_project
We then need to make a virtual environment for this project. A virtual environment gives each Python project an isolated environment in which to store its own dependencies, independent of the dependencies of other projects.
python3 -m venv myvenv
Activate your virtual environment
source myvenv/bin/activate
Now we have our virtual environment, we can install django within it using the pip command.
python -m pip install django
Create the Django project directories
django-admin startproject bokeh_example
Open the project in your IDE. You will see the project structure has been created.
Now in the terminal navigate into bokeh_example using cd
cd bokeh_example
Create the sqlite3 database using the following command in the terminal.
python manage.py migrate
Now check the website has been created by running the server command
python manage.py runserver
Navigate to the browser and enter this address
You should see a page like the below confirming you created the website correctly!
To keep everything tidy, we want to create a new area inside the project that will store all the site files. Run the following command
python manage.py startapp mysite
This will create a new directory structure.
The next step is to create the base.html file which will store the web page and bokeh visualisations.
Add a folder called templates inside the mysite folder. Add another directory within it called pages and then create a file called base.html inside it.
This base.html file will contain our core html code.
We can put some basic html inside it for now
<html>
  <head>
  </head>
  <body>
    <h1> Hello Medium! </h1>
  </body>
</html>
We then need to link the html file to a view. Open mysite/views.py and create a new method called homepage.
from django.shortcuts import render

def homepage(request):
    return render(request, 'pages/base.html', {})
This method will redirect the view to the base.html file based on a request.
For this to work we need to change bokeh_example/urls.py, adding a line to include mysite.urls:
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('mysite.urls')),
]
We now need to create a mysite/urls.py file. This will point a url to our view, which renders the base.html file.
from django.urls import path
from . import views

urlpatterns = [
    path('', views.homepage, name='homepage'),
]
The final step is to add 'mysite' to INSTALLED_APPS in the settings.py file.
Now we have our base.html linked up. We can run our server again and we can see our html page displayed saying Hello Medium!
Integrating Bokeh Into Project
Now we have our django project, we can now integrate Bokeh into the html page.
First we must install Bokeh using pip in our virtual env.
python -m pip install bokeh
Now it’s ready to go.
Check the version of bokeh installed by firstly entering the below into the command line
python
This will open the python interactive environment. We can then enter the following commands to find out the bokeh version
import bokeh
bokeh.__version__
Once you have the version, you can quit the interactive environment by typing quit().
For reference I have version 1.0.4. This is important for when you integrate bokeh into the homepage.
Let’s go back to the base.html file.
We need to include the bokeh dependencies in the header of the file. Make sure the dependencies reference the version of bokeh you have installed.
<link href="" rel="stylesheet" type="text/css">
<link href="" rel="stylesheet" type="text/css">
<script src=""></script>
<script src=""></script>

{{ script | safe }}
We also need the html file to contain a div where the visualisation will be displayed.
We then need to modify the views.py file to create a graph. We will first implement a basic line graph. Edit your views.py to include the following information
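The gist embedded at this point in the original article isn't reproduced here. As a sketch of what the Bokeh side can look like (the function name and data are my own invention), bokeh.embed.components produces the script and div strings that the {{ script | safe }} tag in base.html expects:

```python
from bokeh.plotting import figure
from bokeh.embed import components

def make_line_chart():
    # A simple line graph; replace with your own data.
    plot = figure(title='Simple line graph')
    plot.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)
    # components() returns the <script> and <div> snippets for the template.
    script, div = components(plot)
    return script, div
```

In the homepage view you would then pass these into the template context, for example render(request, 'pages/base.html', {'script': script, 'div': div}).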
Once you have made these changes, run the server and you should see a graph like below:
We can take it to the next level by implementing a fancier graph from Bokeh’s user guide. I chose to implement the nested bar graph and modified the homepage method in views.py.
This results in the following graph.
CSS Makeover
The webpage looks quite bland. We can make the webpage look more realistic by leveraging bootstrap and css. Firstly let’s include bootstrap in our base.html file.
Copy and paste the stylesheet and javascript links for bootstrap into the base.html <head> section.
Let’s firstly add a navigation bar as a header for the webpage.
Once integrated into the file the result on the webpage should look like the following:
Let’s now focus on filling out the content of the website. I am going to create a style that makes graph look as though it is part of a blog post.
We will use containers. The container will contain one small column, which will be the side bar, and one larger column, which will be the blog feed.
Before adding containers to the html page we need to create our css document. Add a new folder called static, the same way we did for templates. Within static create another folder called css, and create a new file called mysite.css inside it.
Now we have our css file, let’s go back to our base.html and include the file in the header.
<link rel="stylesheet" href="{% static 'css/mysite.css' %}">
You must also make reference to loading static files at the beginning of the head tag.
<head>
{% load static %}
We also must make some changes in settings.py file. We must point the static directory to the correct area so the css file gets picked up.
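The settings.py changes from the original aren't reproduced here; a common way to register the app's static folder (the paths below assume the layout used in this tutorial) is:

```python
import os

# BASE_DIR normally already exists near the top of settings.py
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = '/static/'
# point Django at mysite/static so css files get picked up
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'mysite', 'static'),
]
```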
Let’s go back to the base.html and create the containers which will store the site content. Under the navigation bar enter:
Firstly let’s populate the side bar. We will add a vertical navigation bar and a side widget.
We can then add some styling within the css file.
The webpage should now look like this:
Now let’s focus on creating the blog content. We must add information like a header and some Lorem Ipsum content for the post. We do so below:
We can improve the blog post by adding additional styles and importing google fonts. From google fonts I selected Oswald and Open Sans for use in the blog post.
Firstly you must include the link to the fonts in the head.
<link href="" rel="stylesheet">
Now we can add the font families and additional styles in the css file.
Once these changes are made the site should look like the following:
You can now take it from here to experiment and build out the site with real content, different styles or more Bokeh visualisations! | https://hackernoon.com/integrating-bokeh-visualisations-into-django-projects-a1c01a16b67a?source=rss----3a8144eabfe3---4 | CC-MAIN-2020-05 | refinedweb | 1,255 | 75.61 |
Welcome to this tutorial on Multiple Linear Regression. We will look into the concept of Multiple Linear Regression and its usage in Machine learning.
Before, we dive into the concept of multiple linear regression, let me introduce you to the concept of simple linear regression.
What is Simple Linear Regression?
Regression is a Machine Learning technique used to predict values from given data.
For example, consider a dataset on the employee details and their salary.
This dataset will contain attributes such as “Years of Experience” and “Salary”. Here, we can use regression to predict the salary of a person who is probably working for 8 years in the industry.
By simple linear regression, we get the best fit line for the data and based on this line our values are predicted. The equation of this line looks as follows:
y = b0 + b1 * x1
In the above equation, y is the dependent variable which is predicted using independent variable x1. Here, b0 and b1 are constants.
What is Multiple Linear Regression?
Multiple Linear Regression is an extension of Simple Linear Regression where the model depends on more than one independent variable for its predictions. The equation for multiple linear regression looks as follows:
y = b0 + b1*x1 + b2*x2 + ... + bn*xn
Here, y is the dependent variable, and x1, x2, ..., xn are the independent variables used for predicting the value of y. The values b0, b1, ..., bn act as constants.
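For intuition, the coefficients b0 through bn are what ordinary least squares solves for. A quick sketch with made-up numbers (numpy only; not part of the original tutorial):

```python
import numpy as np

# made-up data: 4 observations of 3 independent variables
X = np.array([[1., 2., 3.],
              [2., 1., 0.],
              [3., 4., 5.],
              [4., 2., 1.]])
y = np.array([10., 7., 20., 12.])

# prepend a column of ones so b0 acts as the intercept
X1 = np.hstack([np.ones((len(X), 1)), X])

# least-squares solution for [b0, b1, b2, b3]
b, *_ = np.linalg.lstsq(X1, y, rcond=None)

# predictions are then y_hat = b0 + b1*x1 + b2*x2 + b3*x3
y_hat = X1 @ b
```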
Steps to Build a Multiple Linear Regression Model
There are 5 steps we need to perform before building the model. These steps are explained below:
Step 1: Identify variables
Before you start building your model it is important that you understand the dependent and independent variables as these are the prime attributes that affect your results.
Without understanding the dependent variables, the model you build would be a waste, hence make sure you spend enough time to identify the variables correctly.
Step 2: Check the Caveats/Assumptions
It is very important to note that there are 5 assumptions to make for multiple linear regression. These are as follows:
- Linearity
- Homoscedasticity
- Multivariate normality
- Independence of errors
- Lack of Multicollinearity
Step 3: Creating dummy variables
When we want to include categorical variables in the relation between dependent and independent variables, dummy variables come into the picture.

We create dummy variables where there are categorical variables. For this, we will create a column with 0s and 1s. For example, suppose we have the names of a few states and our dataset has just two, namely New York and California. We will represent New York as 1 and California as 0. These 0s and 1s are our dummy variables.
Step 4: Avoiding the dummy variable trap
After you create the dummy variables, it is necessary to ensure that you do not run into the dummy variable trap.
The phenomenon where one or more variables in linear regression predict another is often referred to as multicollinearity. As a result of this, there may be scenarios where our model may fail to differentiate the effects of the dummy variables D1 and D2. This situation is a dummy variable trap.
The solution to this problem could be by omitting one of the dummy variables. In the above example of New York and California, instead of having 2 columns namely New York and California, we could denote it just as 0 and 1 in a single column as shown below.
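As an illustrative sketch (using pandas' get_dummies, which is not from the original article), the drop_first option performs exactly this omission:

```python
import pandas as pd

states = pd.Series(['New York', 'California', 'New York', 'California'])

# One dummy column per category (columns: California, New York)...
full = pd.get_dummies(states)

# ...dropping the first category avoids the dummy variable trap
# (a single 'New York' column: 1 = New York, 0 = California)
safe = pd.get_dummies(states, drop_first=True)
```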
Step 5: Finally, building the model
We have many independent variables inputted to determine an output variable. But one principle we need to keep in mind is garbage in, garbage out. This means that we must input only the necessary variables into the model, not all of them. Inputting all the variables may lead to error-prone models.
Also, keep in mind, when you build a model it is necessary you present the model to the users. It is relatively difficult to explain too many variables.
There are 5 methods you can follow while building models; these are known as stepwise regression techniques:
- All-in
- Backward Elimination
- Forward Selection
- Bidirectional Elimination
- Scope comparison
Discussing each of these models in detail, is beyond the scope of this article. However, we will look at an example in this article.
Implementing Multiple-Linear Regression in Python
Let’s consider a dataset that shows profits made by 50 startups. We’ll be working on the matplotlib library.
The link to the dataset is –
Importing the dataset
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

dataset = pd.read_csv('50_Startups.csv')
dataset.head()
Thus, in the above-shown sample of the dataset, we notice that there are 3 independent variables – R&D spend, Administration and marketing spend.
They contribute to the calculation of the dependent variable – Profit.
The role of a data scientist is to analyze the investment made in which of these fields will increase the profit for the company?
Data-preprocessing
Building the matrix of features and dependent vector.
Here, the matrix of features is the matrix of independent variables.
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 4].values
Encoding the categorical variables
We have categorical variables in this model. 'State' is a categorical variable. We will be using Label Encoder.
We have performed Label Encoding first because One hot encoding can be performed only after converting into numerical data. We need numbers to create dummy variables.
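The gist with the encoding code isn't shown here; a minimal, self-contained sketch of the two-step encoding (LabelEncoder, then one-hot) on a standalone column could look like this:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

states = np.array(['New York', 'California', 'New York'])

# Step 1: strings -> integers (alphabetical: California=0, New York=1)
labels = LabelEncoder().fit_transform(states)

# Step 2: integers -> one-hot dummy columns
onehot = OneHotEncoder().fit_transform(labels.reshape(-1, 1)).toarray()
```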
Avoiding the dummy variable trap
In the below code, we removed the first column from X but put all rows. We ignore only index 0. This is to avoid the dummy variable trap.
X = X[:, 1:]
Splitting the test and train set
Generally, we will consider 20% of the dataset to be the test set and 80% to be the training set. By training set we mean that we train our model on these observations, then run the model on the "test set" and check whether the predicted output matches the output given in the dataset.
The output of the above code snippet would be the small line below.
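The gist with the split-and-fit code isn't reproduced here. A self-contained sketch of those two steps (using synthetic stand-in data, since the CSV isn't bundled) could be:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# synthetic stand-in for the 50_Startups feature matrix (50 rows, 3 columns)
rng = np.random.RandomState(0)
X = rng.rand(50, 3)
y = X @ np.array([3., 1., 2.]) + 5.   # an exact linear relation, for illustration

# 80% train / 20% test, as described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# fit multiple linear regression to the training set
regressor = LinearRegression()
regressor.fit(X_train, y_train)
```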
Predicting the test set results
We create a vector containing all the predictions of the test set profit. The predicted profits are then put into the vector called y_pred.(contains prediction for all observations in the test set).
The 'predict' method makes the predictions for the test set. Hence, the input is the test set. The parameter for predict must be an array or sparse matrix, hence the input is X_test.
y_pred = regressor.predict(X_test)
y_test
y_pred
The model-fit until now need not be the optimal model for the dataset. When we built the model, we used all the independent variables.
But what if among these independent variables there are some statistically significant (having a great impact) dependent variables?
What if we also have some variables that are not significant at all?
Hence we need an optimal team of independent variables so that each independent variable is powerful and statistically significant and definitely has an effect.
This effect can be positive (decrease in 1 unit of the independent variable, profit will increase) or negative (increase in 1 unit of the independent variable, profit will decrease).
We will perform backward elimination using stats model. But this topic will not be discussed in this article.
Complete Code for Multiple Linear Regression in Python
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

dataset = pd.read_csv('50_Startups.csv')
dataset.head()

# data preprocessing
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 4].values

# encode the categorical 'State' column (index 3) as dummy variables
# (current scikit-learn handles the string column directly via OneHotEncoder;
# the original article used LabelEncoder followed by OneHotEncoder)
ct = ColumnTransformer([('state', OneHotEncoder(), [3])], remainder='passthrough')
X = ct.fit_transform(X)

# avoiding the dummy variable trap
X = X[:, 1:]

# splitting into training and test sets (80/20)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# fitting multiple linear regression to the training set
regressor = LinearRegression()
regressor.fit(X_train, y_train)

# predicting the test set results
y_pred = regressor.predict(X_test)
y_test
y_pred
The output will be the predictions as follows:
Conclusion
To quickly conclude, the advantages of using linear regression is that it works on any size of the dataset and gives information about the relevance of features. However, these models work on certain assumptions which can be seen as a disadvantage. | https://www.askpython.com/python/examples/multiple-linear-regression | CC-MAIN-2021-31 | refinedweb | 1,312 | 56.05 |
Deep Linking to Featured Content from the Fire TV UI
Deep linking refers to links that direct users past the homepage of an app to specific content inside the app. In the context of merchandising, deep links take users from the merchandising placement to the exact content promoted in the merchandising, assuming the user has already downloaded your app. To set up deep linking, you must provide an encoded deep link intent that points to your content. Amazon will then implement this deep link intent with your campaign.
- About Deep Linking with Featured Content
- Prerequisites
- Generate an Encoded Deep Link Intent for your Content
- Handling Scenarios Where Deep Links Are Destroyed
About Deep Linking with Featured Content
Deep links take users from the Fire TV OS into a specific destination within an app on Fire TV. Within the context of featured content, deep links take users to the content being promoted.
For example, if a merchandising placement for "ACME Media" promotes a Football Playoff game, any ACME customers who click on the Football Playoff placement will be taken directly to the page for that Football Playoff game within the app, where they can watch the game or record it for future viewing.
If the customer has not previously downloaded your app on their Fire TV, the campaign will take users to your app details page, where the user can choose to download and open your app. After users download and open your app, they will be taken directly to the content promoted in the campaign.
Overall, deep linking gives more immediacy and continuity to the featured content displayed on Fire TV because the deep link provides a seamless transition from the content displayed in the campaign to the destination in the app for that same content.
Previously, when customers clicked on featured content on Fire TV, they were taken only to the detail page for that app, even if they had already downloaded the app. Recent developments now make it possible to deep link to content inside the app.
Prerequisites
You will need to use Android Studio and adb to generate the encoded deep link intent. adb is installed by default with Android Studio. If you're new to adb and need more info on setting it up, see Android Debug Bridge (adb) in the Android docs.
Generate an Encoded Deep Link Intent for your Content
You need to generate an encoded deep link intent by customizing some code in a simple app. Your customizations for the deep link intent will depend on the parameters used in your adb command. To generate an encoded deep link intent for your content, follow the two sections below.
Step 1: Create adb Command That Deep Links to your Media
If you already know the deep link intent values for your media, you can proceed directly to the next section, Step 2: Encode your Deep Link Intent Using Android Studio. However, you should still configure an adb command to test that the values surface your media, since there's no other way to test the deep link intent before submitting it.
To create an adb command to your media:
Create an adb command that provides your deep link intent. For example, your adb command to play the media might look like this:
adb shell am start -a android.intent.action.VIEW -n air.ACMEMobilePlayer/.ACMEActivity -d
In Android, deep links can be structured in different ways, and how each app chooses to structure deep links is known only to the app. You might use different parameters and components in your command. Your adb command will typically use
adb shell am start and several parameters:
-a (action) - specifies the intent action
-n (package + component) - specifies the package and component name
-d (data_uri) - specifies the intent data URI
For a description of parameters to pass into adb, see Specification for intent arguments in the Android docs. For more general information about creating deep links, see Create Deep Links to App Content.
- Connect adb to Fire TV and run your adb command. (For help connecting, see Connect to Fire TV Through adb.)
- Ensure that the right media appears on your Fire TV. (If the adb command fails, then the encoded deep link that you generate in the next section will also fail.)
Step 2: Encode your Deep Link Intent Using Android Studio
You need to convert your adb command (deep link intent) to an encoded string. To do this, you will download a simple Android app. The app has a class with various methods that you will customize as needed. In the app, you will take the parameters from your adb command and store them in an intent object called amazonIntent.

After adding all your adb parameters into amazonIntent, the toURI method converts this amazonIntent object into an encoded URI string. The encoded URI string contains all the information from your adb command (the action, package, component, flags, extra, etc.). When you run the Android app, the encoded URI string gets printed in the Logcat console.
To generate your encoded deep link intent in Android Studio:
- Download this simple Android app: toURI. After downloading it, unzip it.
Open Android Studio and open the toURI app. (When you open the app, Android Studio will automatically update the location of the Android SDK.)
In the MainActivity.java file (inside app/java/com/example/touri), customize the code based on the description in the comments. There are six sections to customize. Note: Details for customizing each of these sections are included in comments in the app. For convenience, the instructions are also pasted below. The code below is the extent of the app (minus the Android Manifest file).
package com.example.touri;

import android.content.ComponentName;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // 1. Specify the ASIN of your app. (If you don't know your ASIN, go to your app in
        // the Amazon Appstore. The ASIN appears under Product details.)
        String asin = "B123ABC456";

        // Leave the following line as is. Here you're creating a new intent object called
        // amazonIntent where you're storing all the parameters from your adb command.
        Intent amazonIntent = new Intent();

        // 2. Specify the action. In your adb command, grab the value from your -a parameter
        // and insert it as the parameter in the method below, replacing
        // android.intent.action.VIEW. If you don't have an -a parameter in your adb command,
        // leave the existing value as is (don't comment it out).
        amazonIntent.setAction("android.intent.action.VIEW");

        // 3. Specify any flags. In your adb command, grab the value from your -f parameter
        // and insert it as the parameter in the method below, replacing
        // Intent.FLAG_ACTIVITY_SINGLE_TOP. Then uncomment the line. If you don't have any
        // flags in your adb command, skip this section.
        //amazonIntent.addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP);

        // 4. Specify the class and component. In your adb command, grab the value from your
        // -n parameter and insert it as the parameter for the ComponentName method below. In
        // your adb command, a slash separates the package name from the component. In the
        // parameter format here, separate the package from the component with a comma
        // following the format shown. If you don't have an -n parameter, skip this section.
        //amazonIntent.setComponent(new ComponentName("tv.acme.android", "tv.acme.android.leanback.controller.LeanbackSplashOnboardActivity"));

        // 5. Specify any extras. In your adb command, grab the value from your -e parameter
        // and insert it as the parameter for the .putExtra method below, following the
        // key-value pair format shown. If you don't have an -e parameter, skip this section.
        //amazonIntent.putExtra("acme_id", 12345);

        // 6. Specify the data. In your adb command, grab the value from your -d parameter and
        // insert it as the parameter for the Uri.parse method below. (This assumes your
        // content ID is a URI.) If you don't have a -d parameter in your adb command, skip
        // this section.
        //amazonIntent.setData(Uri.parse(""));

        Intent DeepLinkIntent = new Intent(Intent.ACTION_VIEW,
                android.net.Uri.parse("amzns://apps/android?asin=" + asin));
        DeepLinkIntent.putExtra("intentToFwd", amazonIntent.toURI());
        Log.i("amazon_intent=", DeepLinkIntent.toURI());
    }

    // After completing any customizations listed in 1 through 6, run your app (on any
    // emulator or device) and open Logcat to filter on "amazon". Your encoded deep
    // link intent will be printed there.
}
- After you finish customizing the code, click the Run App button
and run the application on any device (any emulator or connected Fire TV device, etc.).
Open Logcat and filter on the word "amazon". The value appears after "amazon_intent=:".
(Screenshot: filtering Logcat for "amazon" to get the encoded deep link intent)
The value will be an encoded string such as this:
amzns://apps/android?asin=B123ABC456#Intent;S.intentToFwd=https%3A%2F%2F;end
- Copy the value and send this to your Amazon representative.
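As a quick sanity check before sending the value along, you can pull the forwarded intent out of that encoded string and percent-decode it. A small illustration (Python here purely for convenience — the assembly itself happens in the Java code above; the parsing below assumes the `#Intent;...;end` fragment format shown in the example):

```python
from urllib.parse import unquote

# The example string from the steps above.
encoded = "amzns://apps/android?asin=B123ABC456#Intent;S.intentToFwd=https%3A%2F%2F;end"

# The forwarded intent travels as the S.intentToFwd key inside the
# "#Intent;...;end" fragment of the store URI.
fragment = encoded.split("#Intent;", 1)[1]
if fragment.endswith(";end"):
    fragment = fragment[: -len(";end")]

# Split the remaining "key=value" pairs and percent-decode the one we care about.
pairs = dict(item.split("=", 1) for item in fragment.split(";") if item)
print(unquote(pairs["S.intentToFwd"]))  # decodes "https%3A%2F%2F" to "https://"
```

If the decoded value matches the intent you built in step 6, the string round-tripped correctly.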
Handling Scenarios Where Deep Links Are Destroyed
In most cases, users will click the campaign and the media from your app will open and play. However, you should handle scenarios where the deep link might be destroyed before the user opens the app, or cases where (for whatever reason) the media in the deep link isn't available. In those cases, you might decide to take users to your homepage instead.
The following diagram shows scenarios where the deep link might be destroyed.
After clicking the campaign, if the user does not have the app, the user is taken to the app details page with the option to download the app. The deep link intent will be stored in memory as long as the app details page remains active. (This is called "deferred deep linking.") However, the stored intent data will be removed if the user navigates away from the details page.
For example, suppose the user installs the app, but before the user opens the app, the user navigates elsewhere (perhaps the user is impatient in waiting for the download to finish). When the user returns and opens the app, the deep link content will have been removed because the user navigated away from the app details page. In these cases, take the user to your homepage instead.
- How do I implement deep-link functionality in my app?
- See Create Deep Links to App Content in the Android docs for more details.
- How can I tell if my app is ready to deep link?
- If you can play the media in your app through the adb command (deep link intent), then your app is ready and doesn't need to be updated.
- How do I know if my deep link works on Fire TV?
- You can connect adb to Fire TV and run your adb command to ensure that the right media appears on your Fire TV, as per Step 1. If your adb command works correctly, and you followed the instructions on generating the encoded deep link string, it should work. Amazon will also test the deep link using an internal tool. There's no way for you to test the encoded deep link intent yourself.
- If a user tries to follow a deep link that has been destroyed, or if the deep link points to media that has been moved or is missing, how should the app behave?
- The link should fail gracefully, such as leading to the default homepage for the app.
- Do you support deferred deep linking?
- Yes. Deferred deep linking is the mechanism that caches a specific destination within an app, even if the user has not downloaded an app, and transfers the user to that specific, cached destination within the app the first time the user downloads and opens the app. In this documentation, all references to "deep linking" include both deep linking and deferred deep linking. Deferred deep linking works only if users do not navigate away from the app detail page before they finish downloading and opening the app. | https://developer.amazon.com/de/docs/fire-tv/deep-linking-featured-content.html | CC-MAIN-2019-30 | refinedweb | 1,998 | 55.03 |
How to extract values from a string or a sequence
You could use statements like
name[starting index : ending index]
Suppose you want the character 'u':
name[2]
would get you 'u' in the output.
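In concrete terms, that answer is Python string indexing and slicing. A short runnable sketch — the string "edureka" is my own assumed example, since the original question's value isn't shown:

```python
name = "edureka"  # assumed example string; the original question's value isn't shown

# Single index: positions count from 0, so index 2 is the third character.
print(name[2])    # 'u'

# Slice: name[start:end] returns characters from start up to (not including) end.
print(name[2:5])  # 'ure'

# Omitting a bound defaults to the start or end of the string.
print(name[:3])   # 'edu'
```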
Source: https://www.edureka.co/community/40216/how-to-extract-values-from-a-string-or-a-sequence
Talk:Expelled: Leader's Guide
cover story
Please do not archive this section
I think this is very close to deserving rotation as a cover story. Reserving nomination until after one more round of everyone editing it. human
19:39, 27 March 2008 (EDT)
- Sounds like a good idea to me! (And I'm not saying this just because it contains some of my intemperate raving. :) --Gulik 03:07, 28 March 2008 (EDT)
- Thirded. I think that for the next couple of months, or for however long the movie remains "relevant", it would be worth having a permanent link on the front page to this.--Bayesyikes 10:57, 28 March 2008 (EDT)
- Yes, it seems a natural. In fact, when everybody has finished snarking it up, is there any way we could whore it round the internet?--Bobbing up 11:04, 28 March 2008 (EDT)
- A superb article, well done to those who've built it. But it's such an obvious one for us, I think we need to make it solid, bulletproof, the best thing on the topic on the net, period. It still seems a little weak and ad-hom'y in places. It's 98% brilliant though, and that's quantifiable FACT. DogP 12:23, 28 March 2008 (EDT)
<undent>Re: Bayes - better idea. One day we can just "cover story" it, but for the interim, it should be a permanent front page link in the top left box. IMO. human
12:26, 28 March 2008 (EDT)
I think this has now reached Cover Story quality, so I say Yes. DogP 21:17, 2 April 2008 (EDT)
- How do you feel about my idea, DP? To hold off on mixing it in with the others and literally feature a hard link to it on the main page for a bit? Apart from that idea, I agree. It is teh awesome. human
21:41, 2 April 2008 (EDT)
- Oh absolutely, I say sock it right up there in plain view, full-time, for a while. It's turned out great, and the movie publicity machine will no doubt start cranking up soon. Or not. DogP 21:51, 2 April 2008 (EDT)
- Cool. Tomorrow let's experiment with where to put the link and how to word it (gittin' tired ;)). Let's pull the cover nominee thing for this phase - as you say, the movie is in its "big publicity phase", and that's the time frame I think we should keep it there. Do you think we ought to ask permission for this at talk main? human
22:32, 2 April 2008 (EDT)
I'm going to post at talk main about what I think we should do. human
21:10, 3 April 2008 (EDT)
pikchures
Some photos would be a nice touch. CЯacke® 18:17, 6 April 2008 (EDT)
"Permanent" non-random front page thing expiration date
Anyone have any good ideas about how we will figure out when to put this in normal "random rotation"? IE, when will no one really give a rat's ass about Expelled anymore? human
17:14, 27 April 2008 (EDT)
- Do we still want this set up as a permanent front page entry or is it time to just "cover story" it? Has the froofraw died down yet? Should we analyze the article's traffic (ask Trent?) and see how many people come from elsewhere and how many click through from the main page? human
14:18, 7 May 2008 (EDT)
- Yeah--it's time, methinks. It's a beautiful page, and definite cover story material, but is becoming less timely.... Sterilexx 14:27, 7 May 2008 (EDT)
- Uhuh, yep, move along folks, nothing to see here. It can be retired to regular Cover Story rotation, I believe. DogP 14:43, 7 May 2008 (EDT)
- Aye, its down to 662 theaters and $100 a screen half of what it was at release. Yoko Ono's lawsuit means they can not up the screen even if they could find someone that wants it. The movie lasted less than a month, ouch. tmtoulouse vex 14:50, 7 May 2008 (EDT)
Weird off topic
"Talkpage" is acting up - where it says "new" there is a good link to Archive1 that Elassint created, that should be at "1", and "new" should link to editing Archive2. Anyone know whazzamatta? — Unsigned, by: Human / talk / contribs
- Dunno. -- Elassint Hi! ^_^ 23:04, 9 April 2008 (EDT)
- After some quick poking around, my guess is that talkpage is confused by the "extra" colon after Expelled. I think it's trying to find a namespace that isn't there, so it doesn't know that an archive page exists. Or something like that. I don't really know. I've never messed around with anything that complicated in whatever language the templates use though (is it MediaWiki specific?), so I don't think I'm the one to fix it.--Bayesyikes 23:53, 9 April 2008 (EDT)
- Good thought. I'll try to hack around that sometime (rename the archive path to drop the colon, if that succeeds it will be transparent to deal with in the future). human
14:48, 10 April 2008 (EDT)
- It didn't foob template:archivelinks so I just switched to that for now. human
15:02, 10 April 2008 (EDT)
"Survival of the fittest"
In the third paragraph of the Leader's Guide under The Theory of Darwinian Evolution, it might be worthwhile, toward the end of distancing the theory they seem to be talking about from Darwin himself, to mention that Darwin himself never used the term "survival of the fittest." Just a thought.
Bare 19:58, 10 April 2008 (EDT)
- Did that come from Descent of Man? Or is it just a common language adaptation of the ideas of natural selection? human
21:06, 10 April 2008 (EDT)
- The term is not used in the first edition of Origin of the Species published in 1859 (I have a facsimile at home), but appears to have been introduced in the 1872 edition. Rational Edfaith 21:39, 10 April 2008 (EDT) This link associates the term with the 6th edition.
Survival of the fittest did not originate with Darwin, but he later wrote about the subject. Google the phrase "survival of the fittest" and you'll hit the jack pot. — Unsigned, by: 76.187.190.208 / talk / contribs
- No! Well I didn't know that! Albert 23:13, 13 May 2008 (EDT)
You got Digg'd
Hey I found your article thru a link on Digg - man, this is good stuff! Congratulations on a fine article. I didn't know if you knew you got Digg'd as I see no ref to it here, so I thought I'd let you know. Bye, and best of luck with it. — Unsigned, by: 208.53.157.22 / talk / contribs
- I, for one, didn't know. Cheers. Also, yay! :D --AKjeldsenGodspeed! 13:47, 13 April 2008 (EDT)
- Cool! Thanks, bunchanumbers - why don't you sign up and join the bun fight? DogP 13:52, 13 April 2008 (EDT)
Other links
Here's another argument against Stein, video this time. DogP 13:38, 16 April 2008 (EDT)
- Put it at the movie article EL section with a brief description? human
13:53, 16 April 2008 (EDT)
Good news everyone!
I just talked with a PR guy for a couple liberal organizations, incl. the NCSE, and I gave him this link.-αmεσ (advocate) 17:58, 16 April 2008 (EDT)
- D-Day? the Ides of April? Should we send People Very Important E-mails? Should we send someone out for drinks because company is coming?PFoster 18:00, 16 April 2008 (EDT)
- Yeah, I'm worried I'll sound like Kenny boy by boosting this, but could be good for the site.-αmεσ (advocate) 18:03, 16 April 2008 (EDT)
- Old school it was called "selling-out". High time, too I think. Do original cabal members get a %age?CЯacke® 18:25, 16 April 2008 (EDT)
- I haven't sent anyone an important e-mail in some time Sterilexx 13:35, 23 April 2008 (EDT)
Scientific method
While observation comes first in science, hypothesis may come before or after "rigorous testing". Rational Edevidence
- Yes - usually "before", since it is the hypothesis that is being tested. However, pre-hypothesis, there may be another form of "rigorous" - rigorous observations to accompany the more casual ones to make sure there is a phenomenon to hypothesize about in the first place. (Example: "I observe all USAians are male" - better increase my sample size to greater than two before worrying about a hypothesis though) human
12:53, 23 April 2008 (EDT)
Information
i think it's worthwhile to point out how the term "information" is mis-used. The ID view is that the information comes first and is then stored in the DNA specifically created to store that information. Whereas in biological systems the DNA comes first, and if a mutation of a particular combination of base pairs results in a useful protein, only then can it be considered to contain "information"; if no protein can be generated, then there is no information contained.
The best spot to insert this may be under the heading Molecular Genetics (subheading The Living Cell). Ginckgo 02:34, 13 May 2008 (EDT)
- I tried to edit in those ideas but I could not find a place where it would fit naturally. Feel free to do it yourself if you want however. - Icewedge 20:33, 13 May 2008 (EDT)
Wikipedia links here
By the way: Wikipedia's Expelled article links to this article -- 85.178.161.187 16:26, 15 May 2008 (EDT)
- Cool! And that EL ought to survive, even in wikipedia-world. human
16:32, 15 May 2008 (EDT)
- It didn't last a year. Removed by a Godbotherer. I am eating
& honeychat 15:34, 12 August 2009 (UTC)
Odds of the existence of a cell argument
Under the "The Living Cell" section, the "Leader's Guide" argues that cells must have been designed because the odds of them coming into existence randomly are near zero. Ignoring that "coming into existence randomly" is a gross oversimplification of what really happened (as is their wording), haven't we already refuted this class of argument with the Paulos quote? Saying that something was extremely unlikely to happen is not an argument for or against anything after the event has already happened. All we need is that the chance of the event is not zero (and they concede that it is not), and previous probability has no bearing on what has already happened. Why don't we mention the Paulos quote about the bridge hand here? OneForLogic 15:08, 8 July 2008 (EDT)
Ok, and the next one, Frederick Hoyle's argument: isn't this basically the same, a prior-probability argument? And how contrary to the style of RationalWiki would it be to just call the whole blind-people-solving-a-Rubik's-cube argument stupid, which it is? That seems like it would fit with most of the stuff you guys write. OneForLogic 15:12, 8 July 2008 (EDT)
Quote Mining
It seems to me it would be useful to put Dennett's Quote (as provided by the Guide) in context because it does sound quite damning on its own. If I can find the quote in context, I'll provide it. But perhaps someone else already has it?--WaitingforGodot 15:26, 8 July 2008 (EDT)
"Testable theory"
In the commentary accompanying the fourth paragraph of the "Leader's Guide," the author suggests that ID proponents come up with a "testable theory." Strictly speaking, theories are untestable -- they generate hypotheses which can be tested. Theories can be falsified, but not directly tested. In the spirit of the piece, this isn't a major quibble, but I think it's worth noting nonetheless.--Gonzoid 00:02, 26 May 2009 (UTC)
On Mr Flew's comments
I really enjoy all the responses to the Expelled article, but I really don't think that this particular response does anything in regard to answering the Expelled article. The article says that he believes that DNA points to ID, and therefore he is a theist, and the response is that he definitely isn't Christian? That doesn't really help the Evolution argument or heard the ID one either, it's just purposefully ignoring the point, and I don't think this article needs to drop to that level. Just my opinion, will keep reading now :).
"Most americans believe"
Introductions includes line "Despite the fact that most Americans believe that God created life, the only “origin of life” theory taught in the majority of American schools is Neo-Darwinism"
It should be mentioned that despite fact most Americans believe God created life, only 45 % (not most!) - don't believe evolution. It makes difference, anti-evolutionists don't make up the biggest part of American society.
- Unfortunately, data on this sort of thing that we could reference is really difficult to interpret, and reliable data is hard to come by. It's something that's extremely susceptible to the wording used in the questions. Take "I believe in evolution: YES/NO" or "There are problems with the Theory of Evolution: YES/NO" or "I believe that random chance caused us to be here: YES/NO"; it's an absolute minefield of bias, so there's no wonder that they all conflict with each other. And in the US in particular, there's a fairly sizable geographical correlation for YEC beliefs, so any survey done would need to be very wide and have to take this into consideration. In short, I don't think anyone truly knows what the figure for "belief" in evolution is, so we can't say either way on that point with any conviction.
narchist 16:13, 3 July 2009 (UTC)
I wish to address the phrase: Also, note the completely false assertion of "incredible support" for their ideas. This is not a completely false assertion, it is a true assertion. The support for their ideas is not credible. They have incredible support. 76.185.63.93 06:39, 6 August 2009 (UTC)
Behe and ad hom
Regarding: Behe, of course, is a vehement wedge strategist, and has built a small side-career to his professorship publishing books aimed at the popular market full of lies and misdirection about evolution and so-called intelligent design. This appears to be ad hominem. Behe may in fact be all of those things, but you still haven't refuted what he just said. 76.185.63.93 06:54, 6 August 2009 (UTC)
- He is, and we haven't? Where or where not? Keep in mind we have other articles addressing his tripe directly, which cover our ass on what a liar and loser he is. ħuman
06:59, 6 August 2009 (UTC)
- My apologies. Here's the full paragraph.
- Expelled says:.”
- RationalWiki's response: Behe, of course, is a vehement wedge strategist, and has built a small side-career to his professorship publishing books aimed at the popular market full of lies and misdirection about evolution and so-called intelligent design.
- This appears to me to be ad hominem. I don't actually care if it happens to be true, it lowers the value of our argument to make such a statement here. This is more appropriately placed on a special page about him. Here we should instead demonstrate that molecular evolution is based on scientific authority, and reference a publication of scientific literature which describes how complex biomechanical systems might occur. Then the reader won't have to take our word based on faith that Behe is an idiot or that by virtue of him being an idiot his argument must be wrong. His argument is wrong for being wrong, not because he is an idiot.
- 76.185.63.93 15:12, 6 August 2009 (UTC)
- Still not ad hom as such... add the evidence after this, but this really is a pretty good summary of why no one should ever listen to Behe. WazzaHello? Is there anybody in there? Just nod if you can hear me... 15:20, 6 August 2009 (UTC)
Side-by-side
Does this need converting into the new format? - π 12:02, 12 August 2009 (UTC)
- DPL 1.7.4 can't properly include tables, that's why I've unapproved Behe:The Edge of Evolution, Interview. This one doesn't break as spectacularly as that did, but It'd be nice to convert it to wikitables anyway. I'm upgrading DPL to 1.7.8, a slightly newer version that fixes the bug with table inclusion, but doesn't have the parser problems introduced in the rewrite after that Nx (talk) 12:08, 12 August 2009 (UTC)
Ken: get jealous
We're #1 on Google (UK) for the title Expelled leader's Guide with and without quotes. just sayin'. I am eating
& honeychat 13:55, 12 August 2009 (UTC)
Charles Townes quote
I did a bit of research on this guy, since the quote seemed off to me. It seems to be genuine, but I think their use of it is disingenuous: he's discussing the universe as a whole, and his idea of "ID" has nothing to do with life on Earth. Should this be added to the appropriate section? Wehpudicabok 02:40, 18 October 2009 (UTC)
- Yes, especially if you can provide better context and write it well ;) ħuman
03:19, 18 October 2009 (UTC)
Criticism on Peter Singer
You say that Peter Singer's ethics aren't based on darwinism. Look at the quote from singer right next to that, where he says "we're only just catching up to Darwin." You actually don't answer that.
Furthermore, you don't answer the question, "If we are only evolved animals, why should I have to act a particular way towards other people?" The movie made clear that darwinism is a necessary, not sufficient condition for the holocaust. You act as though the movie said, "all darwinists are nazis." The whole problem is, if Hitler puts this twisted darwinism into use, the darwinist has no way to say that it is wrong, because Hitler can just say, "says who?"
And finally, all your insulting words are just the ad hominem fallacy at work. --Idiot number 59 (talk) 06:10, 3 August 2010 (UTC)
- Thank you for your input. ħuman
06:25, 3 August 2010 (UTC)
- This time I was serious. You must answer my questions and not pretend that there aren't any problems with the article just because it was idiot # 59 who pointed these problems out. --Idiot number 59 (talk) 07:07, 3 August 2010 (UTC)
- Um, no, this is a pretty good article. I'm happy with the way it stands. ħuman
07:56, 3 August 2010 (UTC)
- I suppose you are. But I pointed out that a) You don't answer Singer's quote "we're only just catching up to Darwin." b) You misrepresent the movie (as though the movie said, "all darwinists are nazis.") c) If we are only evolved animals, why should I have to act a particular way towards other people? The article doesn't answer that d) If Hitler puts this twisted darwinism into use, you have no way to say that it is wrong, because Hitler can just say, "says who?". The article does not offer any solution. e) All your insulting words are just the ad hominem fallacy at work. --Idiot number 59 (talk) 08:36, 3 August 2010 (UTC)
Missing space in title
The title is "Expelled:Leader's Guide" without a space after the colon. Was this intended to be a namespace? If not, maybe it's worth moving the article to "Expelled: Leader's Guide" (with the space)? --Tweenk (talk) 07:25, 21 March 2011 (UTC)
I don't agree with the programmer argument
First of all a disclaimer. This is my first post here and it would be reasonable to assume I haven't a clue. Feel free/encouraged to point out that this should have been posted at the bottom/on a different page/in a different topic/not at all. I thought the programmer argument in Expelled:_Leader's_Guide#The_Living_Cell was weak.
- "We also observe that any computer programmer that hasn't already been fired liberally salts their code with comments explaining what the various subroutines do and what the variables are for.
- One challenge to ID has always been: Show me the comments in the DNA before you claim someone (or thing) wrote it. "
Unfortunately at work a lot of our code base lacks good comments. It's just not true that people who don't comment get canned, though I kind of wish it was. People are busy, and the main purpose of comments is to explain to OTHER people what's going on. It's possible that god doesn't want people tampering with his super-optimized code, or wasn't thinking of others when he wrote it. Another problem with this argument is that comments are removed when the code is compiled from source. For those who don't know, there are two different kinds of code: source code, which is human or at least programmer readable (more readable anyway), and compiled code, which is computer runnable. So it's possible that the source code or documentation exists but is written elsewhere, or that the so-called junk DNA contains the comments but it's in some kind of god language or something. I think I may have read that at least some of the junk DNA had a function somewhere, but I forget. Finally, it's possible that god thought it was obvious what it did. For the time being I'm removing the sentences that I disagree with, but feel free to put them back and slap me down/revert me, explaining this isn't how we do things and/or I'm wrong.
NonPerson (talk) 14:55, 17 May 2014 (UTC)
- It looks like your edit has survived long enough to stand a chance of staying. FWIW, I agree that "show me the comments!" is weak to the point of irrelevance. God, being omniscient and unique (by his own commandment) doesn't face the problem of "WTF was I thinking when I wrote that!?!??" I don't know enough about genetics to say anything substantial about "junk" DNA, but I can still wave my hands at how new facets of the way the code interacts epigenetically with the environment are still being discovered. Sprocket J Cogswell (talk) 01:30, 3 June 2014 (UTC)
Source: http://rationalwiki.org/wiki/Talk:Expelled:_Leader's_Guide
Email.
The contents of this article are covered and implemented largely in C#, but that's not all. The real objective of this article is an in-depth overview and real investigation of an email address before it is saved as the primary contact of a user registering with a website.

We cover not only real validation of addresses, but also some associated subtopics, like password recovery.

Since the topic is general, and to help even object-model and database designers, I am giving example code snippets, links to some applications, and notes on how to achieve this in different language implementations.
A very preliminary validation of email addresses is to analyze the pattern of the address. That is absolutely straightforward, and we can define a regular expression to get the job done.

The following regular-expression approach in C# tells you whether a passed email address is syntactically valid. Note that this verifies only syntactic validity, not whether the email address actually exists.
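The original C# snippet for this step did not survive intact, so here is the same pattern-level check sketched in Python. The pattern below is a deliberately simplified illustration of my own, not a full RFC 5322 validator, and the function name is likewise an assumption:

```python
import re

# Deliberately simple pattern: one "@", no whitespace, a dot somewhere in the
# domain part. Real-world addresses are messier; this only catches obvious junk.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email_syntax(address: str) -> bool:
    """Return True if the address merely *looks* like an email address."""
    return EMAIL_RE.match(address) is not None

print(is_valid_email_syntax("deepak@example.com"))  # True
print(is_valid_email_syntax("not-an-address"))      # False
```

As the article says, passing this check says nothing about whether the mailbox exists; it only filters out obvious garbage before the more expensive checks below.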
The next level of validation we can attempt is to negotiate with the SMTP server and validate. Some mail servers respond even to VRFY and/or RCPT SMTP commands as to whether the email address is valid or not. But servers which are very strictly configured not to disclose non-existing addresses will always acknowledge any junk address in their domain and bounce it on their own later. We need to tackle each of the following.
The first step is to verify the domain with a DNS lookup. This step will throw an exception if the domain is not valid, so you can flag the email address as invalid.
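A rough Python equivalent of that domain check, for readers outside C#. Note an assumption up front: a faithful check would query the domain's MX records, which needs a third-party resolver library; the standard-library call below only confirms the domain resolves at all, and the function names are mine:

```python
import socket

def domain_of(address: str) -> str:
    """Everything after the last '@': the part we try to resolve."""
    return address.rsplit("@", 1)[-1]

def domain_resolves(address: str) -> bool:
    """True if the address's domain resolves in DNS (A/AAAA record)."""
    try:
        socket.getaddrinfo(domain_of(address), None)
        return True
    except socket.gaierror:
        # Resolution failed: flag the address as invalid at this stage.
        return False
```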
If the domain is okay, we can try to handshake with the actual server and find out whether the email address is valid. At this point, I would suggest negotiating with the SMTP server the way Peter has explained at EggHeadCafe; we will not need the entire block of code anyway.

We can check each step of the SMTP negotiation, like MAIL FROM and RCPT TO, and optionally the VRFY SMTP command. A descriptive SMTP command list is available here.

If the sender's domain or address is prohibited, or not in the SMTP server's allow list, MAIL FROM may fail. Mail servers which allow the VRFY command will let you learn whether the email address is valid or not.
Since we had a similar requirement, the EggHeadCafe code was really useful, and I would like to share the snippet for other users who might have a similar requirement:

    // ... socket creation and the Connect() call appear earlier in the
    // original snippet and were truncated here ...

    //Attempting to connect
    if(!Check_Response(s, SMTPResponse.CONNECT_SUCCESS))
    {
        s.Close();
        return false;
    }
    //HELO server
    Senddata(s, string.Format("HELO {0}\r\n", Dns.GetHostName()));
    if(!Check_Response(s, SMTPResponse.GENERIC_SUCCESS))
    {
        s.Close();
        return false;
    }
    //Identify yourself
    //Servers may resolve your domain and check whether
    //you are listed in BlackLists etc.
    Senddata(s, string.Format("MAIL From: {0}\r\n", "testexample@deepak.portland.co.uk"));
    if(!Check_Response(s, SMTPResponse.GENERIC_SUCCESS))
    {
        s.Close();
        return false;
    }
    //Attempt Delivery (I could use VRFY, but most
    //SMTP servers disable it for security reasons)
    Senddata(s, address);
    if(!Check_Response(s, SMTPResponse.GENERIC_SUCCESS))
    {
        s.Close();
        return false;
    }
    return (true);
Check_Response and Senddata are available in the original source code, which you can download from there. But you may need to read the associated license agreement regarding retaining copyright notices in your code. Since this is just a code snippet to introduce the idea, only the relevant code areas are mentioned.
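For readers outside C#, the same handshake flow can be sketched language-neutrally. In the Python sketch below the server is faked so the control flow is testable offline; `probe_address` and `FakeSMTP` are my own illustrative names, and the reply codes follow standard SMTP (250/251 = accepted, 550 = no such user). In real use, Python's `smtplib.SMTP` exposes the same-named `helo()`, `mail()`, and `rcpt()` methods returning `(code, message)` tuples:

```python
def probe_address(server, from_addr, rcpt_addr):
    """Walk the HELO / MAIL FROM / RCPT TO sequence, mirroring the C# snippet.

    `server` is anything exposing helo()/mail()/rcpt() returning
    (code, message) tuples, e.g. smtplib.SMTP in the real world.
    """
    if server.helo()[0] != 250:
        return False
    if server.mail(from_addr)[0] != 250:
        return False
    # 250/251 mean the recipient was accepted; strict servers may accept
    # anything here and bounce later, as the article warns.
    return server.rcpt(rcpt_addr)[0] in (250, 251)

class FakeSMTP:
    """Offline stand-in: accepts one known mailbox, rejects the rest."""
    def helo(self): return (250, b"ok")
    def mail(self, addr): return (250, b"ok")
    def rcpt(self, addr):
        return (250, b"ok") if addr == "known@example.com" else (550, b"no such user")

print(probe_address(FakeSMTP(), "probe@example.org", "known@example.com"))    # True
print(probe_address(FakeSMTP(), "probe@example.org", "missing@example.com"))  # False
```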
All goes well if network conditions are OK, but there may be temporary network problems preventing connections. If you expect that your host may be slow, you can send a confirmation link to the email address and activate the account only if the user goes to the address and clicks the link. Otherwise, you can hold the account-activation step, periodically reclaiming junk accounts with a scheduled task in your web application.
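If you take the confirmation-link route just described, the link only needs to carry an unguessable one-time token tied to the pending account. A minimal sketch (the function name and URL layout are illustrative assumptions):

```python
import secrets

def make_activation_token() -> str:
    # 32 random bytes, URL-safe base64 encoded: ample entropy for a one-time link.
    return secrets.token_urlsafe(32)

token = make_activation_token()
print(f"https://example.com/activate?token={token}")  # assumed URL layout
```

Store the token with the pending account and an expiry; the scheduled reclaim task mentioned above can then delete accounts whose tokens were never visited.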
As mentioned at the start of the article, we would also briefly cover subtopics. One such is password recovery. But this article is getting too long, so I am compiling more information and will submit it as a separate article, since it has its own configurations and limitations and may not fully apply within email validation.

In fact, I hope a lot of web developers are in need of similar validation routines to ensure that email addresses are valid, and I really hope that the above hints will be helpful to them. Thanks, Peter: your article really helped me, and I hope your article and the hints I have shared above will help more developers solve similar requirements.
Source: http://www.codeproject.com/KB/validation/Valid_Email_Addresses.aspx
Very nice job !!!
Printable View
Things.
The car is getting a new engine ... so sad.
However I've got some free time :)
It happens that I bought a brand new Sony RM-X2S for ... just 2 GBP :)
Add to that the fact that I'm in my "Arduino stage", and I decided to play with them both to see what could come of it. Well, it seems it will work. I'm still testing this on the desk. N-Joy
Always a very nice job !
Is it possible that you share the schematic and the Arduino software?
Thanks
It's a test sketch but... here you are:
If you don't have an LCD display, you may use Serial.print instead. Mind that when powered by USB and by external PSU there WILL be a difference in returned analogRead values for each button.

Code:
#include <Wire.h>
#include <LiquidCrystal_I2C.h>
//Write down the returned value for each button. My sketch will do the rest for you.
int Center = 35;
int TrapSmall = 81;
int VolDown = 115;
int VolUP = 158;
int SeekDown = 209;
int SeekUp = 269;
int TrapBig = 333;
int Mute = 427;
int Source = 612;
int OFF = 1018;

LiquidCrystal_I2C lcd(0x27, 16, 2); // assumed declaration (typical 16x2 I2C module); adjust the address to your display

void setup()
{
  lcd.init();
  lcd.backlight();
  lcd.setCursor(0, 0);
lcd.print("Sony RM-X2S");
}
void loop()
{
float SonyRM = 0;
float SonyRMShift = 0;
SonyRM = ((float)analogRead(A0));
SonyRMShift = ((float)analogRead(A1));
if (SonyRM < (Center / 2)) {
lcd.setCursor(0, 1);
lcd.print("Nothing ");
}
else if (SonyRM < (((TrapSmall - Center) / 2) + Center)) {
lcd.setCursor(0, 1);
lcd.print("Center ");
}
else if (SonyRM < (((VolDown - TrapSmall) / 2) + TrapSmall)) {
lcd.setCursor(0, 1);
lcd.print("Small Trapezoid ");
}
else if (SonyRM < (((VolUP - VolDown) / 2) + VolDown)) {
lcd.setCursor(0, 1);
lcd.print("Vol Down ");
}
else if (SonyRM < (((SeekDown - VolUP) / 2) + VolUP)) {
lcd.setCursor(0, 1);
lcd.print("Vol Up ");
}
else if (SonyRM < (((SeekUp - SeekDown) / 2) + SeekDown)) {
lcd.setCursor(0, 1);
lcd.print("Seek Down ");
if (SonyRMShift > 500) {
lcd.setCursor(0, 1);
lcd.print("Shift + SeekDown");
}
}
else if (SonyRM < (((TrapBig - SeekUp) / 2) + SeekUp)) {
lcd.setCursor(0, 1);
lcd.print("Seek Up ");
if (SonyRMShift > 500) {
lcd.setCursor(0, 1);
lcd.print("Shift + Seek Up ");
}
}
else if (SonyRM < (((Mute - TrapBig) / 2) + TrapBig)) {
lcd.setCursor(0, 1);
lcd.print("Big Trapezoid ");
}
else if (SonyRM < (((Source - Mute) / 2) + Mute)) {
lcd.setCursor(0, 1);
lcd.print("Mute ");
}
else if (SonyRM < (((OFF - Source) / 2) + Source)) {
lcd.setCursor(0, 1);
lcd.print("Source ");
}
else if (SonyRM > (((OFF - Source) / 2) + Source)) {
lcd.setCursor(0, 1);
lcd.print("OFF ");
}
lcd.setCursor(12, 0);
lcd.print(SonyRM);
delay(200);
}
For real working solution you'll need a software on the PC that will "listen". I plan writing a AutoHotKey code, but not this night. So keep reading this thread and one day there will be complete solution with schematics, codes, examples AND A LOT PICTURES :) Just in my style :) (not in the next 2-3 weeks, sorry)
You are the best . Thanks
Another crazy Idea. I got after-market LPG system fitted. It has just 4-5 LED's inside showing me how many fuel I got. What's more, it will be nice to be able to able to monitor the voltages of both batteries + some other stuff related to the PC controller I'm building.
This is just a test sketch, made to help me decide whether this functionality is worth or not. The sketch is not measuring anything - just simulating some basic stages.
N-Joy
I like it.id really love that if it was a pixleQ /white numbers and black background?? Screen you know the sunlight freindly ones...
Another crazy idea eh? The same idea I try to get dual/multi-battery people to implement.
Fine, so you think I'm crazy too! ;)
IMO it is well worthwhile having, but only if its implementation is practical. And due to normal console limitations etc, a voltage alarm is probably better but that's more complicated.
In practice it's usually not practical - ie, considering cost & benefit and that despite the voltmeters, battery connection etc problems can still occur. (Hence why alarms with isolator lockouts etc.)
In practice, monitoring of the aux batteries is done as part of routine inspection and maintenance so a faulty battery can be taken offline. And aux batteries should have battery protectors (low voltage cutouts) if flattening is an issue - especially for AGMs!
But dual battery monitoring is a great idea. The same way a voltmeter tells you all for the main battery - at least FAR more than an ammeter does - why not for the aux battery(s) as well? Hence if you see a very flat aux battery, you may decide to inhibit charging or connection to the main or other batteries. And if its a collapsed AGM, you might want to remove it before it flames (if left charging, or connected to to other batteries).
My only suggestion is to make it else include a digital display - 3 digits (for normal use & cruising). A difference of 0.2V - 0.1V can be significant but hard to determine from an analog display.
Some implementations are digital only with the digits changing color if abnormal - eg, red of 14.5V and above, yellow if below say 13.5V when charging or maybe 12.5V when not charging & with no load, red if below say 12.5V or 11.5V or 10.5V when charging or maybe below 12.0V when not charging & with no load. Or they may stagger the analog and digital colors - eg digital green might be from 12.5V to 14.4V but analog yellow below say 14.0V to signify that the normal 14.2V - 14.4V alternator voltage is not being reached. And those colors may be RPM sensitive - ie, stay green even if 12.3V because of heavy loads and an idling engine (until a timer turns them red etc...).
But that's the beauty of soft instrumentation - their set points can be reprogrammed as alternator or battery or load characteristics change, and adaptive set points are possible as other sensors are added like RPM, brakelight & headlight & wiper status, etc.
If adding aux battery monitoring, consider adding temp sensing (especially if AGM) - ie, one on the battery and one for nearby ambient air. Temperature is still a simple alert to battery & electrical (safety) problems.
to camo.b: It can be any colour. Everything you see has been drown pixel by pixel, by lines or with filled rectangles. No images used. Just green best suits to my existing interior back-light.
The idea - the available fuel to be green, and the rest to be gray (sorry my camera is junk, the bottom gauge lines age light gray)
To OldSpark: As I said everything is drawn on-the-fly. Further more, I've added the possibility to easy change the colour with which the graphic of battery is drawn. 3 colours prepared: red, green, yellow/orange. At first I was thinking to change the color of the text (13.8) but then it was hardly readable and well - the text is so small so I could miss it, while if the whole battery become red it will take my attention easily, isn't that the idea :)
Ohh and the lightening between the batteries tells me whether the batteries are connected or separated :) (err the second is charging, sorry - my English...)
So let's call this InfoPannel. It has 3 different functions: Battery monitor, Fuel level monitor, and CarPC monitor.
I said a couple of words for the first two, so it's left the CarPC monitor.
Well, I draw a monitor and PC icons. Again colours are changeable. This is supposed to tell me what my controller think it's happening with this two. In the middle you see some text "Auto", "All ON", "OFF". This is the controllers state. It left some space on the left, so i can use if for something else. Probably there I'll display the timers till ShutDown, postpone delay and such. I use a lot timers in my sketch. | http://www.mp3car.com/worklogs/153671-volvo-s80-carpc-work-in-progress-17-print.html | CC-MAIN-2015-48 | refinedweb | 1,354 | 77.33 |
Karl Goetz wrote:
On Wed, 2008-07-23 at 13:10 +0200, Sam Geeraerts wrote:KkYep. The Web didn't give me any clues either.diff -Naur ubuntu-hardy/arch/x86/ia32/syscall32_syscall-xen.S ubuntu-hardy-xen/arch/x86/ia32/syscall32_syscall-xen.S --- ubuntu-hardy/arch/x86/ia32/syscall32_syscall-xen.S 1970-01-01 01:00:00.000000000 +0100 +++ ubuntu-hardy-xen/arch/x86/ia32/syscall32_syscall-xen.S 2008-04-09 13:17:22.000000000 +0100 @@ -0,0 +1,28 @@ +/* 32bit VDSOs mapped into user space. */ + <trim> +syscall32_int80: + .incbin "arch/x86/ia32/vsyscall-int80.so" +syscall32_int80_end: + +#endif <trim> Would this '.incbin' make you worry? Its made me wonder if the .so files /are/ accidents..
It may be worth me meantioning these files dont exist in the debian source tree either. address@hidden:~/MyDownloads/linux-kernel-debian/clean/linux-2.6-2.6.25$ grep -R vsyscall-int80 * scripts/namespace.pl: $def{$name}[0] eq "arch/x86/kernel/vsyscall-int80_32.o" && Athough that file does refer to 2.0/2.1 kernels: # Tuned for 2.1.x kernels with the new module handling, it will # work with 2.0 kernels as well. # # Last change 2.6.9-rc1, adding support for separate source and object # trees. kk | https://lists.gnu.org/archive/html/gnewsense-users/2008-07/msg00097.html | CC-MAIN-2022-40 | refinedweb | 207 | 56.72 |
First of all, I want to declare that all the following contents are my own opinions about the source code. I have read the source code many times before I can understand it. In fact, it is very difficult to design AQS, so I want to restore the author Doug thoroughly Lea's design idea needs a very rigorous analysis of each line of code. I have limited ability, but I will try to restore it as much as possible. I hope you can give me more advice on some mistakes. Science needs to be rigorous, think more and go further to success.
public class AOSTest { public static void main(String[] args) { MyThread thread1 = new MyThread(); thread1.setName("t1 thread "); MyThread thread2 = new MyThread(); thread2.setName("t2 thread "); thread1.start(); //Thread.sleep(1000); this line of code is not considered thread2.start(); } } class MyThread extends Thread{ ReentrantLock lock = new ReentrantLock();//The default is non fair lock @Override public void run() { try{ lock.lock(); //Lock it //Business logic processing System.out.println(Thread.currentThread().getName()); }finally { lock.unlock(); //Unlock } } }
First of all, let's think about it. We all know that ReetrantLock can lock and unlock, but have you ever thought about it in depth
1. When a thread is locked, what will other threads do?
2. What happens when a thread locks several times at the same time?
Let's look down with these two questions. We can guess, but we must have proof. Otherwise, what you hear is always what others say. It's impossible to know whether it's right or not. Not all people think it is. The truth is like this. The real truth is to dare to question the so-called all people.
The simplest is right. Click to have a look.
First of all, this is an abstract static inner class in ReetrantLock. It mainly implements FairSync and NonfairSync, that is, the implementation of fair lock and non fair lock. We mainly talk about fair lock here.
static final class FairSync extends Sync { private static final long serialVersionUID = -3000897897090466540L; final void lock() { acquire(1); //Click the corresponding lock } public final void acquire(int arg) { //Let's look at the core code. This code is very simple, that is, and operation //Let's see what the left part is doing first? //When the first thread calls the method on the left and returns false, it returns directly. When the last step is completed, the locking is successful. if (!tryAcquire(arg) && //We know that the second thread comes in and goes to the lock method. After coming here, the following judgment just now shows that if the left side is true, the code on the right side will be executed. The code on the right side is divided into two parts. Let's first see what's going on in the brackets, that is, addWaiter(); //After building the linked list relationship, execute the method outside the brackets. acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) selfInterrupt(); } //What is this method doing? final boolean acquireQueued(final Node node, int arg) { //First of all, the parameter passed in is the node corresponding to our t2 thread boolean failed = true; try { boolean interrupted = false; for (;;) { //It's another dead cycle //This line of code is very simple, that is to get the node in front of the t2 thread. final Node p = node.predecessor(); //Judge whether the node in front of t2 is the head node, obviously yes, and then try to obtain the lock. Why? In fact, if you have seen some source codes, such as concurrent HashMap, you will have a certain understanding of this. Because the cpu executes threads in time slices, it is entirely possible that the thread holding the lock has released the lock, so you need to make a judgment, //1. If the lock can be obtained at this time, set the t2 thread as the header, and then release the linked list relationship with the previous node, and no longer hold the reference. 
When the reachability analysis is carried out, the GC can be completed successfully if (p == head && tryAcquire(arg)) { setHead(node); p.next = null; // help GC failed = false; return interrupted; } //If the t1 thread does not release the lock at this time, it will go to the code of the face //Here are two more ways. According to the old rule, let's look at the one on the left first //The first loop sets the status to - 1 through cas, and the second call returns true //Why block threads directly here? In my opinion, considering the performance problem, the reason for spinning once is to avoid blocking the thread as much as possible, because the blocking and wake-up of the thread are heavyweight, consuming cpu resources. Spinning once, if the above tryAcquire gets the lock, it doesn't need to block. Some people will ask, why not spin many times? Can't spinning all the time avoid blocking? This involves our great computer researchers, who think that spin once performance is the best. if (shouldParkAfterFailedAcquire(p, node) && //The method on the right is actually very simple, that is, calling //LockSupport.park(this); use os level to block threads. parkAndCheckInterrupt()) interrupted = true; } } finally { if (failed) cancelAcquire(node); } //This method is to make a state judgment private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) { //What is passed in is the previous node of p:t2 and t2 node int ws = pred.waitStatus; //By default, this state is also 0, which is in the node object just now if (ws == Node.SIGNAL) //Judge whether it is - 1 //Because the method called above is in a dead loop, when it comes in the second time, it is already - 1, so it returns true directly return true; if (ws > 0) { do { node.prev = pred = pred.prev; } while (pred.waitStatus > 0); pred.next = node; } else { //Obviously, the first time I come in, I will come here and set the state cas to - 1, //Why do you need to set this state? 
In fact, when our thread is blocked, it can't update its own state. It needs to be changed by the next thread. compareAndSetWaitStatus(pred, ws, Node.SIGNAL); } return false; //Method inside brackets private Node addWaiter(Node mode) { //mode=Node.EXCLUSIVE This is the parameter passed in from above, which is an empty node. //Here, a Node node is initialized and the current t2 thread is set to the thread value Node node = new Node(Thread.currentThread(), mode); Node pred = tail; //Pass the tail reference, which is obviously null from the beginning to the end //When the t2 thread comes in, this is empty, so the enq method is used instead of judgment if (pred != null) { node.prev = pred; if (compareAndSetTail(pred, node)) { pred.next = node; return node; } } enq(node);//This method is called here, as shown below return node;//Returns the node corresponding to the t2 thread, which is what the method in brackets mainly does. } private Node enq(final Node node) {//Pass in the initial good current thread node for (;;) { //There's a dead cycle here Node t = tail; //It's still null here //The next time the loop comes in, because we virtual an empty node, so //t=tail!=null, enter else if (t == null) { // Must initialize if (compareAndSetHead(new Node())) //When t2 comes in, it is empty, so it starts to initialize. Let the head reference point to an empty node through cas. As for why to virtualize an empty node, we need to think carefully. First of all, remember that the thread holding the lock is no longer in the queue, and the first node, that is, the head node, is not a queued node. tail = head;//Let the head and tail point to the empty node } else { //Start to maintain the relationship between the two-way linked list, and point the node we pass in, that is, the head of the node corresponding to the t2 thread, to our empty node. node.prev = t; //Judge whether the current t is the tail node. Obviously, it is. 
Then point the tail node to the node of the t2 thread, and finally point the next node of the empty node to t2. Return this T, which is the t that has maintained the linked list relationship. This is a simple maintenance of two-way linked list relationship. You can draw a diagram to understand it. if (compareAndSetTail(t, node)) { t.next = node; return t;//After returning, exit the loop } } } //The specific implementation of the left part above protected final boolean tryAcquire(int acquires) { //acquires=1 //Get the current thread final Thread current = Thread.currentThread(); //This is a basic variable in AQS with an initial value of 0 int c = getState(); if (c == 0) { //When the first thread comes in, it gets the initial value of 0, //Let's take a look on the left. What can we do? //From the following method, we can know that when the first thread comes in, the left side will return true, and then execute the following cas method to determine whether the current state is 0. If yes, set it to the passed in value acquires = 1, set it to successful state=1, set the current thread to exclusive thread, and finally make a return. if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) { setExclusiveOwnerThread(current); return true; } } //If you enter this judgment, state = 0, that is, state=1. When you get the lock, you first judge whether it is the thread holding the lock. If it is, you make a reentry lock, and continue to add 1 to state, which is 2 else if (current == getExclusiveOwnerThread()) { int nextc = c + acquires; if (nextc < 0) throw new Error("Maximum lock count exceeded"); setState(nextc); return true; } //If state! =0. If another thread comes in at the same time, it will directly return false return false; } //First of all, this method is very critical. There are many possibilities in a few lines of code. //Before explaining this method, we should know several basic variables and inner classes in AQS Node tail This is a tail reference. 
The default is null Node head This is a header reference. The default is null Node{ volatile int waitStatus;//This is the state of the current node volatile Node prev;//Points to the previous node volatile Node next;//Point to the next node volatile Thread thread;//Current thread } public final boolean hasQueuedPredecessors() { Node t = tail; //Get the current tail reference Node h = head;//Get the current header reference Node s; //The first step is to judge whether the head and tail point to the same node. Obviously, it is null when it comes in the first time, so here is to directly return false,h==t return h != t && ((s = h.next) == null || s.thread != Thread.currentThread()); }
Through the above analysis, which is a simple explanation of what happened? But it can answer the two questions mentioned above.
When a thread has been locked, if the second lock is still the same thread, it will do a reentry, and the state will be directly increased by one, which is 2. If another thread comes to get the lock, if the first thread has released the lock, that is to say, the thread will execute alternately without the help of queue, which is a simple change of state, that is, to solve the problem at the java level If the lock is not released, you need to create an empty node, and then create the node corresponding to the current thread, maintain the linked list relationship, change the state of the empty node to - 1, and block the current thread to solve the lock problem at os level.
At the same time, there is a question: why should there be a two-way linked list, and why should the state of the blocked node be set to - 1? This is related to the release of the lock. Let's look at the unlock method;
public void unlock() { sync.release(1);//Called when the lock is released } public final boolean release(int arg) { if (tryRelease(arg)) { //First of all, make a judgment and look at the following specific logic //After unlocking correctly, start to execute the following logic, Node h = head; //Get the head node of the list //Judge whether the head node is empty and the state is not 0 //This is why the blocked thread should be set to - 1 //Obviously, both conditions are true, and the head node is not empty if (h != null && h.waitStatus != 0) //Call this method, see below unparkSuccessor(h); return true; } return false; } private void unparkSuccessor(Node node) { //Pass in the header node with the status of - 1 int ws = node.waitStatus; if (ws < 0) //Here, the state is set to the initialization value of 0, indicating that the thread will start to wake up compareAndSetWaitStatus(node, ws, 0); Node s = node.next; //Judge whether the head node has the next node if (s == null || s.waitStatus > 0) { //Obviously, it is not empty at present, because there are nodes in the future s = null; for (Node t = tail; t != null && t != node; t = t.prev) if (t.waitStatus <= 0) s = t; } if (s != null) //Using os level to wake up the next thread of the head node LockSupport.unpark(s.thread); } protected final boolean tryRelease(int releases) { //It's very clear here that when locking, add 1 and when unlocking, subtract 1. We can see the importance of this variable int c = getState() - releases; //This thread is not locked in general if (Thread.currentThread() != getExclusiveOwnerThread()) throw new IllegalMonitorStateException(); boolean free = false; if (c == 0) { //After unlocking, it will be 0 free = true; setExclusiveOwnerThread(null);//Setting the hold lock flag to null means that the current lock is free and waiting for other threads to acquire the lock. } setState(c);//Set the variable to the latest value of 0 return free; //Return true }
Now it is clear why it is a two-way linked list, because as long as the previous node of the current thread is the head node, when releasing the lock, the head node will also wake up the corresponding next node.
This piece is close to the end. I'm a little sleepy. I'm still blogging in the early morning. It's also because I forget the source code once I see it. It's better to write it out. Of course, there are in-depth analysis of some scenes and some core code. The method calls above are a bit messy, but the call relationship is very clear. It's suggested to analyze them line by line according to my local source code.
Don't blog casually, just write something valuable. | https://www.fatalerrors.org/a/in-depth-analysis-of-aqs-source-code-level.html | CC-MAIN-2021-17 | refinedweb | 2,457 | 70.02 |
RDF::SN - Short names for URIs with prefixes from prefix.cc
use RDF::SN; $abbrev = RDF::SN->new('20170111'); $abbrev->qname(''); # rdfs:type
This module supports abbreviating URIs as short names (aka qualified names), so its the counterpart of RDF::NS.
Create a lookup hash from a mapping hash of namespace URIs to prefixes (RDF::NS). If multiple prefixes exist, the shortest is used. If multiple prefixes with same length exist, the first in alphabetical order is used.
Returns a prefix and local name (as list in list context, concatenated by
: in scalar context) if the URI can be abbreviated with given namespaces.
This software is copyright (c) 2013- by Jakob Voß.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~voj/RDF-NS-20170111/lib/RDF/SN.pm | CC-MAIN-2017-51 | refinedweb | 136 | 55.64 |
Dissecting the First Visual Basic Program You Created - 03
- Posted: Nov 21, 2011 at 9:24AM
- 105,152 views
- 12 picks up from the previous one by discussing at length each action and line of code you wrote. It discusses the relationship between the Visual Basic code you write, the Visual Basic compiler, the .NET Framework and more. The lesson discusses the concept of code blocks at a high level explaining how methods, classes and namespaces are related. Finally, the lesson shows you where your project files are stored and the location of your code after it is compiled by the Visual Studio IDE and the different types of compilation (i.e., debug versus release).
Download the source code for Dissecting the First very good and informative series. good work Bob. You are the best. Keep it up../
superbbbb nd awsmeeee, keep it up!
I cant find out how to get it to show up in my release folder its not under debug drop down im using visual studio 2010 btw
@Damion ...?
@Damion Allen: See my comments, above ... I forgot to hit the Reply button. Doh!
I am a physician but had a long wish to learn computer programming language & you are perfect teacher to teach it!
before i used to get confused assuming that every characters in language may be for main pragramming pupose but in fact not all, some to give a clean & organised look in runtime?! Am I right Sir?
@drmofa: Clean and organized DURING DEVELOPMENT -- so that you can read your own code, or others can ... yes. Some "white space" characters are ignored by the compiler. Code comments are ignored by the compiler. Everything else will be processed and eventually compiled by the compiler. Hope that helps? Best wishes to you! "This isn't rocket surgery" so I'm thinking you'll pick this up in no time at all.
Bob, I just wanted to say this is a great video! You do a really great job explaining everything clearly, and you are a great speaker! Keep up the good work :) I look forward to learning more from your videos
- derek
@Derek: Thanks, I appreciate that. Hope the videos continue to work well for you!
Bob really I appreciate your work , its an ART
thanks
Hey Bob! I have decided to take part in the Duke of Edinburgh award, and for my activity my goal is to "Learn the fundamentals of Visual basic and be able to create a variety of programs". and this series of videos is clear, easy to understand for an "absolute beginner" and just is really helping me get through my DofE award!!
I have followed the first three videos and am hoping to continue through the series.
Thank you for creating this guide, and im really looking forward to completing the series!
Great Work!
Chris Markwell
@Chris Markwell: re: DofE award ... sounds cool ... I'll have to check out what that is exactly. Best wishes towards your success!
Remove this comment
Remove this threadClose | https://channel9.msdn.com/Series/Visual-Basic-Development-for-Absolute-Beginners/Dissecting-the-First-Visual-Basic-Program-You-Created-03?format=smooth | CC-MAIN-2016-22 | refinedweb | 503 | 74.79 |
The QImage class provides a hardware-independent pixmap representation with direct access to the pixel data. More...
#include <qimage.h>
List of all member functions.
It is one of the two classes Qt provides for dealing with images, the other being QPixmap. QImage is designed and optimized for I/O and for direct pixel access/manipulation. QPixmap is designed and optimized for drawing. There are (slow) functions to convert between QImage and QPixmap: QPixmap::convertToImage() and QPixmap::convertFromImage().
An image has the parameters width, height and depth (bits per pixel, bpp), a color table and the actual pixels. QImage supports 1-bpp, 8-bpp and 32-bpp image data. 1-bpp and 8-bpp images use a color lookup table; the pixel value is a color table index.
32-bpp images encode an RGB value in 24 bits and ignore the color table. The most significant byte is used for the alpha buffer.
An entry in the color table is an RGB triplet encoded as a uint. Use the qRed(), qGreen() and qBlue() functions (qcolor.h) to access the components, and qRgb() to make an RGB triplet (see the QColor class documentation).
1-bpp (monochrome) images have a color table with at most two colors. There are two different formats: big endian (MSB first) or little endian (LSB first) bit order. To access a single bit you must do some bit shifting:
QImage image; // sets bit at (x,y) to 1 if ( image.bitOrder() == QImage::LittleEndian ) *(image.scanLine(y) + (x >> 3)) |= 1 << (x & 7); else *(image.scanLine(y) + (x >> 3)) |= 1 << (7 - (x & 7));
If this looks complicated, it might be a good idea to convert the 1-bpp image to an 8-bpp image using convertDepth().
8-bpp images are much easier to work with than 1-bpp images because they have a single byte per pixel:
QImage image; // set entry 19 in the color table to yellow image.setColor( 19, qRgb(255,255,0) ); // set 8 bit pixel at (x,y) to value yellow (in color table) *(image.scanLine(y) + x) = 19;
32-bpp images ignore the color table; instead, each pixel contains the RGB triplet. 24 bits contain the RGB value; the most significant byte is reserved for the alpha buffer.
QImage image; // sets 32 bit pixel at (x,y) to yellow. uint *p = (uint *)image.scanLine(y) + x; *p = qRgb(255,255,0);
On Qt/Embedded, scanlines are aligned to the pixel depth and may be padded to any degree, while on all other platforms, the scanlines are 32-bit aligned for all depths. The constructor taking a uchar* argument always expects 32-bit aligned data. On Qt/Embedded, an additional constructor allows the number of bytes-per-line to be specified.
QImage supports a variety of methods for getting information about the image, for example, colorTable(), allGray(), isGrayscale(), bitOrder(), bytesPerLine(), depth(), dotsPerMeterX() and dotsPerMeterY(), hasAlphaBuffer(), numBytes(), numColors(), and width() and height().
Pixel colors are retrieved with pixel() and set with setPixel().
QImage also supports a number of functions for creating a new image that is a transformed version of the original. For example, copy(), convertBitOrder(), convertDepth(), createAlphaMask(), createHeuristicMask(), mirror(), scale(), smoothScale(), swapRGB() and xForm(). There are also functions for changing attributes of an image in-place, for example, setAlphaBuffer(), setColor(), setDotsPerMeterX() and setDotsPerMeterY() and setNumColors().
Images can be loaded and saved in the supported formats. Images are saved to a file with save(). Images are loaded from a file with load() (or in the constructor) or from an array of data with loadFromData(). The lists of supported formats are available from inputFormatList() and outputFormatList().
Strings of text may be added to images using setText().
The QImage class uses explicit sharing, similar to that used by QMemArray.
New image formats can be added as plugins.
See also QImageIO, QPixmap, Shared Classes, Graphics Classes, Image Processing Classes, and Implicitly and Explicitly Shared Classes.
This enum type is used to describe the endianness of the CPU and graphics hardware.
The functions scale() and smoothScale() use different modes for scaling the image. The purpose of these modes is to retain the aspect ratio of the image if this is required.
See also isNull().
Using this constructor is the same as first constructing a null image and then calling the create() function.
See also create().
Using this constructor is the same as first constructing a null image and then calling the create() function.
See also create().
If format is specified, the loader attempts to read the image using the specified format. If format is not specified (which is the default), the loader reads a few bytes from the header to guess the file format.
If the loading of the image failed, this object is a null image.
The QImageIO documentation lists the supported image formats and explains how to add extra formats.
See also load(), isNull(), and QImageIO. (e.g. when the code is in a shared library) and ROMable when the application is to be stored in ROM.
If the loading of the image failed, this object is a null image.
See also loadFromData(), isNull(), and imageFormat().
If colortable is 0, a color table sufficient for numColors will be allocated (and destructed later).
Note that yourdata must be 32-bit aligned.
The endianness is given in bitOrder.
If colortable is 0, a color table sufficient for numColors will be allocated (and destructed later).
The endianness is specified by bitOrder.
Warning: This constructor is only available on Qt/Embedded.
This function is slow for large 16-bit (Qt/Embedded only) and 32-bit images.
See also isGrayscale().
Returns the bit order for the image.
If it is a 1-bpp image, this function returns either QImage::BigEndian or QImage::LittleEndian.
If it is not a 1-bpp image, this function returns QImage::IgnoreEndian.
See also depth().
Returns a pointer to the first pixel data. This is equivalent to scanLine(0).
See also numBytes(), scanLine(), and jumpTable().
Example: opengl/texture/gltexobj.cpp.
Returns the number of bytes per image scanline. This is equivalent to numBytes()/height().
See also numBytes() and scanLine().
Returns the color in the color table at index i. The first color is at index 0.
A color value is an RGB triplet. Use the qRed(), qGreen() and qBlue() functions (defined in qcolor.h) to get the color value components.
See also setColor(), numColors(), and QColor.
Example: themes/wood.cpp.
Returns a pointer to the color table.
See also numColors().
Returns *this if the bitOrder is equal to the image bit order, or a null image if this image cannot be converted.
See also bitOrder(), systemBitOrder(), and isNull().
The depth argument must be 1, 8, 16 (Qt/Embedded only) or 32.
Returns *this if depth is equal to the image depth, or a null image if this image cannot be converted.
If the image needs to be modified to fit in a lower-resolution result (e.g. converting from 32-bit to 8-bit), use the conversion_flags to specify how you'd prefer this to happen.
See also Qt::ImageConversionFlags, depth(), and isNull().
If the image needs to be modified to fit in a lower-resolution result (e.g. converting from 32-bit to 8-bit), use the conversion_flags to specify how you'd prefer this to happen.
Note: currently no closest-color search is made. If colors are found that are not in the palette, the palette may not be used at all. This result should not be considered valid because it may change in future implementations.
Currently inefficient for non-32-bit images.
See also Qt::ImageConversionFlags.
See also detach().
Returns a deep copy of a sub-area of the image.
The returned image is always w by h pixels in size, and is copied from position x, y in this image. In areas beyond this image pixels are filled with pixel 0.
If the image needs to be modified to fit in a lower-resolution result (e.g. converting from 32-bit to 8-bit), use the conversion_flags to specify how you'd prefer this to happen.
See also bitBlt() and Qt::ImageConversionFlags.
Returns a deep copy of a sub-area of the image.
The returned image always has the size of the rectangle r. In areas beyond this image pixels are filled with pixel 0.
The width and height is limited to 32767. depth must be 1, 8, or 32. If depth is 1, bitOrder must be set to either QImage::LittleEndian or QImage::BigEndian. For other depths bitOrder must be QImage::IgnoreEndian.
This function allocates a color table and a buffer for the image data. The image data is not initialized.
The image buffer is allocated as a single block that consists of a table of scanline pointers (jumpTable()) and the image data (bits()).
See also fill(), width(), height(), depth(), numColors(), bitOrder(), jumpTable(), scanLine(), bits(), bytesPerLine(), and numBytes().
See QPixmap::convertFromImage() for a description of the conversion_flags argument.
The returned image has little-endian bit order, which you can convert to big-endianness using convertBitOrder().
See also createHeuristicMask(), hasAlphaBuffer(), and setAlphaBuffer().
The four corners vote for which color is to be masked away. In case of a draw (this generally means that this function is not applicable to the image), the result is arbitrary.
The returned image has little-endian bit order, which you can convert to big-endianness using convertBitOrder().
If clipTight is TRUE the mask is just large enough to cover the pixels; otherwise, the mask is larger than the data pixels.
This function disregards the alpha buffer.
See also createAlphaMask().
Returns the depth of the image.
The image depth is the number of bits used to encode a single pixel, also called bits per pixel (bpp) or bit planes of an image.
The supported depths are 1, 8, 16 (Qt/Embedded only) and 32.
See also convertDepth().
If multiple images share common data, this image makes a copy of the data and detaches itself from the sharing mechanism. Nothing is done if there is just a single reference.
See also copy().
Example: themes/wood.cpp.
Returns the number of pixels that fit horizontally in a physical meter. This and dotsPerMeterY() define the intended scale and aspect ratio of the image.
See also setDotsPerMeterX().
Returns the number of pixels that fit vertically in a physical meter. This and dotsPerMeterX() define the intended scale and aspect ratio of the image.
See also setDotsPerMeterY().
If the depth of this image is 1, only the lowest bit is used. If you say fill(0), fill(2), etc., the image is filled with 0s. If you say fill(1), fill(3), etc., the image is filled with 1s. If the depth is 8, the lowest 8 bits are used.
If the depth is 32 and the image has no alpha buffer, the pixel value is written to each pixel in the image. If the image has an alpha buffer, only the 24 RGB bits are set and the upper 8 bits (alpha value) are left unchanged.
Note: QImage::pixel() returns the color of the pixel at the given coordinates; QColor::pixel() returns the pixel value of the underlying window system (essentially an index value), so normally you will want to use QImage::pixel() to use a color from an existing image or QColor::rgb() to use a specific color.
See also invertPixels(), depth(), hasAlphaBuffer(), and create().
See also QMimeSourceFactory, QImage::fromMimeSource(), and QImageDrag::decode().
Returns TRUE if alpha buffer mode is enabled; otherwise returns FALSE.
See also setAlphaBuffer().
Returns the height of the image.
See also width(), size(), and rect().
Examples: canvas/canvas.cpp and opengl/texture/gltexobj.cpp.
The QImageIO documentation lists the guaranteed supported image formats, or use QImage::inputFormats() and QImage::outputFormats() to get lists that include the installed formats.
See also load() and save().
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = myImage.inputFormatList(); QStringList::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also outputFormatList(), inputFormats(), and QImageIO.
Example: showimg/showimg.cpp.
See also outputFormats(), inputFormatList(), and QImageIO.
If the depth is 32: if invertAlpha is TRUE, the alpha bits are also inverted, otherwise they are left unchanged.
If the depth is not 32, the argument invertAlpha has no meaning.
Note that inverting an 8-bit image means to replace all pixels using color index i with a pixel using color index 255 minus i. Similarly for a 1-bit image. The color table is not changed.
See also fill(), depth(), and hasAlphaBuffer().
For 8-bpp images, this function returns TRUE if color(i) is QRgb(i,i,i) for all indices of the color table; otherwise returns FALSE.
See also allGray() and depth().
Returns TRUE if it is a null image; otherwise returns FALSE.
A null image has all parameters set to zero and no allocated data.
Example: showimg/showimg.cpp.
Returns a pointer to the scanline pointer table.
This is the beginning of the data block for the image.
See also bits() and scanLine().FromData(), save(), imageFormat(), QPixmap::load(), and QImageIO.(), save(), imageFormat(), QPixmap::loadFromData(), and QImageIO.
Loads an image from the QByteArray buf.
Returns a mirror of the image, mirrored in the horizontal and/or the vertical direction depending on whether horizontal and vertical are set to TRUE or FALSE. The original image is not changed.
See also smoothScale().
Returns the number of bytes occupied by the image data.
See also bytesPerLine() and bits().
Returns the size of the color table for the image.
Notice that numColors() returns 0 for 16-bpp (Qt/Embedded only) and 32-bpp images because these images do not use color tables, but instead encode pixel values as RGB triplets.
See also setNumColors() and colorTable().
Example: themes/wood.cpp.
Returns the number of pixels by which the image is intended to be offset by when positioning relative to other images.
See also operator=().
See also copy().
Sets the image bits to the pixmap contents and returns a reference to the image.
If the image shares data with other images, it will first dereference the shared data.
Makes a call to QPixmap::convertToImage().
See also operator=().
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = myImage.outputFormatList(); QStringList::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also inputFormatList(), outputFormats(), and QImageIO.
See also inputFormats(), outputFormatList(), and QImageIO.
Example: showimg/showimg.cpp.
If (x, y) is not on the image, the results are undefined.
See also setPixel(), qRed(), qGreen(), qBlue(), and valid().
Examples: canvas/canvas.cpp and qmag/qmag.cpp.
If (x, y) is not valid, or if the image is not a paletted image (depth() > 8), the results are undefined.
See also valid() and depth().
Returns the enclosing rectangle (0, 0, width(), height()) of the image.
See also width(), height(), and size().
Returns TRUE if the image was successfully saved; otherwise returns FALSE.
See also load(), loadFromData(), imageFormat(), QPixmap::save(), and QImageIO.
This function writes a QImage to the QIODevice, device. This can be used, for example, to save an image directly into a QByteArray:
QImage image; QByteArray ba; QBuffer buffer( ba ); buffer.open( IO_WriteOnly ); image.save( &buffer, "PNG" ); // writes image into ba in PNG format
If either the width w or the height h is 0 or negative, this function returns a null image.
This function uses a simple, fast algorithm. If you need better quality, use smoothScale() instead.
See also scaleWidth(), scaleHeight(), smoothScale(), and xForm().
The requested size of the image is s.
If h is 0 or negative a null image is returned.
See also scale(), scaleWidth(), smoothScale(), and xForm().
Example: table/small-table-demo/main.cpp.
If w is 0 or negative a null image is returned.
See also scale(), scaleHeight(), smoothScale(), and xForm().
Returns a pointer to the pixel data at the scanline with index i. The first scanline is at index 0.
The scanline data is aligned on a 32-bit boundary.
Warning: If you are accessing 32-bpp image data, cast the returned pointer to QRgb* (QRgb has a 32-bit size) and use it to read/write the pixel value. You cannot use the uchar* pointer directly, because the pixel format depends on the byte order on the underlying platform. Hint: use qRed(), qGreen() and qBlue(), etc. (qcolor.h) to access the pixels.
Warning: If you are accessing 16-bpp image data, you must handle endianness yourself. (Qt/Embedded only)
See also bytesPerLine(), bits(), and jumpTable().
Example: desktop/desktop.cpp.
An 8-bpp image has 8-bit pixels. A pixel is an index into the color table, which contains 32-bit color values. In a 32-bpp image, the 32-bit pixels are the color values.
This 32-bit value is encoded as follows: The lower 24 bits are used for the red, green, and blue components. The upper 8 bits contain the alpha component.
The alpha component specifies the transparency of a pixel. 0 means completely transparent and 255 means opaque. The alpha component is ignored if you do not enable alpha buffer mode.
The alpha buffer is used to set a mask when a QImage is translated to a QPixmap.
See also hasAlphaBuffer() and createAlphaMask().
Sets a color in the color table at index i to c.
A color value is an RGB triplet. Use the qRgb() function (defined in qcolor.h) to make RGB triplets.
See also color(), setNumColors(), and numColors().
Examples: desktop/desktop.cpp and themes/wood.cpp.
If the color table is expanded all the extra colors will be set to black (RGB 0,0,0).
See also numColors(), color(), setColor(), and colorTable().
If (x, y) is not valid, the result is undefined.
If the image is a paletted image (depth() <= 8) and index_or_rgb >= numColors(), the result is undefined.
See also pixelIndex(), pixel(), qRgb(), qRgba(), and valid().
Returns the size of the image, i.e. its width and height.
See also width(), height(), and rect().
For 32-bpp images and 1-bpp/8-bpp color images the result will be 32-bpp, whereas all-gray images (including black-and-white 1-bpp) will produce 8-bit grayscale images with the palette spanning 256 grays from black to white.
This function uses code based on pnmscale.c by Jef Poskanzer.
pnmscale.c - read a portable anymap and scale it.
See also scale() and mirror().
The requested size of the image is s.
See also systemByteOrder().
See also systemBitOrder().
Returns the string recorded for the keyword and language kl.
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = myImage.textKeys(); QStringList::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also textList(), text(), setText(), and textLanguages().
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = myImage.textLanguages(); QStringList::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also textList(), text(), setText(), and textKeys().
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QValueList<QImageTextKeyLang> list = myImage.textList(); QValueList<QImageTextKeyLang>::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also width(), height(), and pixelIndex().
Examples: canvas/canvas.cpp and qmag/qmag.cpp.
Returns the width of the image.
See also height(), size(), and rect().
Examples: canvas/canvas.cpp and opengl/texture/gltexobj.cpp.
The transformation matrix is internally adjusted to compensate for unwanted translation, i.e. xForm() returns the smallest image that contains all the transformed points of the original image.
See also scale(), QPixmap::xForm(), QPixmap::trueMatrix(), and QWMatrix.
Copies a block of pixels from src to dst. The pixels copied from source (src) are converted according to conversion_flags if it is incompatible with the destination (dst).
sx, sy is the top-left pixel in src, dx, dy is the top-left position in dst and sw, \sh is the size of the copied block.
The copying is clipped if areas outside src or dst are specified.
If sw is -1, it is adjusted to src->width(). Similarly, if sh is -1, it is adjusted to src->height().
Currently inefficient for non 32-bit images.
Writes the image image to the stream s as a PNG image, or as a BMP image if the stream's version is 1.
Note that writing the stream to a file will not produce a valid image file.
See also QImage::save() and Format of the QDataStream operators.
Reads an image from the stream s and stores it in image.
See also QImage::load() and Format of the QDataStream operators.
This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved. | http://doc.trolltech.com/3.3/qimage.html | crawl-001 | refinedweb | 3,458 | 60.51 |
Resque - Redis-backed library for creating background jobs, placing them on multiple queues, and processing them later.
version 0.35. Accepts a string, Redis, Redis::Fast or any other object that behaves like those.
When a string is passed in, it will be used as the server argument of a new client object. When Redis::Fast is available this will be used, when not the pure perl Redis client will be used instead.
This is useful to run multiple queue systems with the same Redis backend.
By default 'resque' is used.
Failures handler. See Resque::Failures.
Returns a new Resque::Worker on this resque instance. It can have plugin/roles applied. See Resque::Pluggable.
my $worker = $r->worker();.
my $resque_job = $r->pop( 'queue_name' );
Returns the size of a queue. Queue name should be a string.
my $size = $r->size();
Returns an array of jobs currently queued, or an arrayref in scalar context.:
my @jobs = $resque->peek('my_queue', 59, 30)
Returns an array of all known Resque queues, or an arrayref in scalar context.
my @queues = $r->queues();
Given a queue name, completely deletes the queue.
$r->remove_queue( 'my_queue' );
Given a queue name, creates an empty queue.
$r->create_queue( 'my:
my $num_removed = $rescue->mass_dequeue({ queue => 'test', class => 'UpdateGraph' });
Whereas specifying args will only remove the 2nd job:
my $num_removed = or string(payload for object).
Resque::Job class can be extended thru roles/plugins. See Resque::Pluggable.
$r->new_job( $job_or_job_hashref );
Concatenate $self->namespace with the received array of names to build a redis key name for this resque instance.
Returns an array of all known Resque keys in Redis, or an arrayref in scalar context. Redis' KEYS operation is O(N) for the keyspace, so be careful this can be slow for big databases.
This method will delete every trace of this Resque system on the redis() backend.
$r->flush_namespace();
Does the dirty work of fetching a range of items from a Redis list.
my $items_ref = $r->list_range( $key, $stat, $count );
As in any piece of software there might. | http://search.cpan.org/~diegok/Resque/lib/Resque.pm | CC-MAIN-2018-13 | refinedweb | 337 | 74.69 |
Now that you've got your server and domain set up, it is time to set up Flask and get your very first web application up! There are many commands that we will need to run, but, have no fear, I will put all of the commands and code blocks here!First, you'll need to run:
sudo apt-get install apache2 mysql-client mysql-serverOnce you do that, you'll get the start up page for MySQL, where you will need to set your root user for MySQL. This is the specific MySQL root user, not your server root user.
That setup should take about 20-30 seconds. After that, we need to get WSGI, so run the following:
sudo apt-get install libapache2-mod-wsgi
Once we have that, we need to make sure we've enabled WSGI with the following:
sudo a2enmod wsgi
It is probably already enabled from the installation, but it is a good idea to make sure.
Next we are ready to set up our Flask environment.
Run:
cd /var/www/
Now let's make our Flask environment directory:
mkdir FlaskApp
Move into that directory:
cd FlaskApp
Now make the actual application directory:
mkdir FlaskApp
Now let's go in there:
cd FlaskApp/
Now we're going to make two directories, static and template:
mkdir static
mkdir templates
Now we're ready to create the main file for your first Flask App:
nano __init__.py
Here is where we have our initialization script for our Flask application. You can actually keep all of your main website code right here for simplicity's sake, and that's what we'll be doing. Within your __init__.py file, you will type:
from flask import Flask app = Flask(__name__) @app.route('/') def homepage(): return "Hi there, how ya doin?" if __name__ == "__main__": app.run()
Press control+x to save it, yes, enter.
Now we should probably actually get Flask. Let's do that now.
Since this is likely a new server for you, you will want to go ahead and run:
apt-get update
apt-get upgrade
To get Flask, we're going to use pip, so you will need to first get pip if you do not already have it:
apt-get install python-pip
Now that we have pip, we also need virtualenv to create the virtual environment for Flask to run Python and your application in:
pip install virtualenv
Now to set up the virtualenv directory:
sudo virtualenv venv
Activate the virtual environment:
source venv/bin/activate
Now install Flask within your virtual environment
pip install Flask
Find out if everything worked out by going:
python __init__.py
If you didn't get any major errors, congrats!
Hit control+c to get out of the running text, then type deactivate to stop the virtual environment running locally. This is only a local version, so you wont be able to type in anything to your browser to access it.
So now we need to set up our Flask configuration file:
nano /etc/apache2/sites-available/FlaskApp.conf
This is where your Flask configuration goes, which will apply to your live web site. Here's the code that you need to include:
<VirtualHost *:80> ServerName yourdomain.com ServerAdmin youemail@email>
For your notes, if you want to add more domains/subdomains that point to the same Flask App, or a different app entirely, you can use a
ServerAlias, added underneath the
ServerAdmin line.
We are now ready to enable the server.
Run:
sudo a2ensite FlaskApp
service apache2 reload
Almost there... now we just need to configure our WSGI file. To do this:
cd /var/www/FlaskApp
nano flaskapp.wsgi
Within the wsgi file, enter:
#!/usr/bin/python import sys import logging logging.basicConfig(stream=sys.stderr) sys.path.insert(0,"/var/www/FlaskApp/") from FlaskApp import app as application application.secret_key = 'your secret key. If you share your website, do NOT share it with this key.'
Save and exit.
Once that is done, run:
service apache2 restart
Get used to running the above command. Flask is very finicky about your python file changes. Every .py file change you make to your webapp, you need to run this command.
Once you have done all of this, you are ready to visit your domain name in your browser. You should see the "Hi there, how ya doin?" string that we output in your __init__.py file. | https://pythonprogramming.net/creating-first-flask-web-app/?completed=/flask-web-development-introduction/ | CC-MAIN-2021-39 | refinedweb | 738 | 71.65 |
One way .NET benefits Windows developers is by bringing together previously separate APIs and SDKs under one framework. For example, consider the adaptation of the CryptoAPI to the .NET System.Security.Cryptography namespace. The cryptographic services have left their mysterious corner of the Platform SDK to become, in a sense, “just another .NET namespace.” Of course, there is more to it than that, but the point is that the cryptographic services are more approachable because of what they share with the rest of the framework as a whole. Now, you just have to learn what the System.Security.Cryptography namespace does and which classes are appropriate for specific situations.
Grab the code
You can download the .cs files for this article here.
System.Security.Cryptography namespace
The namespace contains classes that implement security solutions such as:
- · Encryption and decryption of data.
- · Management of persisted encryption keys.
- · Verification of the integrity of a piece of data to ensure that it has not been tampered with.
I will limit this article to encryption and decryption, but keep in mind that this is only one piece of the puzzle; a truly secure solution will make use of the other pieces as well. Our examples start with the encryption of a local text file and then move on to the more complicated encryption of messages between networked computers.
Symmetric algorithms
To encrypt a local text file, we use one of the symmetric algorithms; symmetric because the same key and initialization vector (IV) are used to both encrypt and decrypt a piece of data. (The IV’s relationship to the key is explained in the Cryptography Overview section of the .NET documentation.)
.NET implementations of symmetric algorithms derive from a common abstract base class, SymmetricAlgorithm, highlighting that the programmer can treat each of the specific algorithms—DES, TripleDES, and Rijndael—in the same fashion. The algorithms differ in how they encrypt the data, but the public interfaces are the same. This doesn’t mean that all algorithms are equal. For instance, as you may have guessed by the name, TripleDES is a more secure successor to DES.
Because the same key encrypts and decrypts data, symmetric algorithms are best suited for situations where the key does not need to be broadcast. Network encryption calls for a combination of asymmetric and symmetric algorithms, as you’ll later see. But first let’s put the symmetric algorithms to good use.
Encrypting a text file
Listing A contains a console program, TextFileCrypt, which encrypts a text file you specify on the command line. The top of Listing A shows how to invoke the program. Let’s look at some of the more important pieces of the code.
The symmetric algorithms work by encrypting data as it passes through a stream. We create a “normal” output stream (such as a file I/O stream), followed by an instance of the CryptoStream class, which will then piggyback on that normal stream.
You write byte arrays to the CryptoStream, and as the data streams through, it gets encrypted and put into the normal stream. To put the original text file into an array of bytes to be fed to the CryptoStream, you employ the FileStream class to read it. You also use another instance of FileStream as the output mechanism that the CryptoStream will hand the encrypted data to.
FileStream fsIn = File.Open(file,FileMode.Open, FileAccess.Read);
FileStream fsOut = File.Open(tempfile, FileMode.Open,FileAccess.Write);
It’s all about streams
.NET makes considerable use of streams to read and write data. In fact, the symmetric algorithm classes require you to use them. If you aren’t comfortable with .NET’s stream-based input and output, I encourage you to familiarize yourself with it, perhaps by reading this article.
We can instantiate and use any one of the symmetric algorithm providers while specifying the object variable as the abstract type SymmetricAlgorithm. I chose Rijndael, but you could just as easily instantiate DES or TripleDES:
SymmetricAlgorithm symm = new RijndaelManaged();
// could just as easily be “new TripleDESCryptoServiceProvider()”
.NET sets these provider instances with strong random keys. It can be dangerous to try to choose your own keys; acceptance of the “computer-generated” key is good practice.
Next, the algorithm instance provides an object to perform the actual data transformation. Each algorithm has CreateEncryptor and CreateDecryptor methods for this purpose, and they return objects implementing the ICryptoTransform interface:
ICryptoTransform transform = symm.CreateEncryptor();
Finally, a special CryptoStream is instantiated and told which underlying stream it should piggyback on, which object will perform the transformation of the data, and whether the purpose of the stream is to read or write data:
CryptoStream cstream = new CryptoStream(fsOut,transform,CryptoStreamMode.Write);
Now you simply write the byte-array version of the original file to the CryptoStream. You do this by reading the original file with a BinaryReader, whose ReadBytes method returns a byte array. In the following snippet from Listing A, the BinaryReader reads the input stream of the original file, and its ReadBytes method is called as the byte array parameter to the CryptoStream.Write method. Note the important call to the FlushFinalBlock method of CryptoStream:
BinaryReader br = new BinaryReader(fsIn);
cstream.Write(br.ReadBytes((int)fsIn.Length),0,(int)fsIn.Length);
cstream.FlushFinalBlock();
The result is a temporary file containing the encrypted version of the original file. Listing A then reverses the process, decrypts the temporary file, and displays the decrypted text to the console so you can feel comfortable that the round-trip from encryption to decryption actually works. I won’t show each line of the decryption here, but the important difference is that the algorithm provides a decrypting instance of ICryptoTransform using CreateDecryptor, and then a new CryptoStream is used to read the encrypted file.
If you encrypt and decrypt files over multiple Windows sessions, you will want to persist and recall the symmetric key and IV. They are available in the provider class objects as byte arrays (e.g., TripleDESCryptoServiceProvider.Key), so technically, you could save them directly to a file. That’s dangerous. A better solution is to use the key management facilities in the System.Security.Cryptography namespace. Specifically, you would use the asymmetric providers such as RSA to store your symmetric keys. The key management facilities are beyond the scope of this article, but you can read more about the CspParameters class and the RSA provider’s PersistKeyinCsp property in the MSDN .NET documentation.
Asymmetric algorithms
The final example makes use of both symmetric and asymmetric algorithms. Asymmetric algorithms, such as RSA and DSA, deal with two keys, the “public” and “private” keys. Together, they can help securely send data over networks, as the following scenario shows.
If I have a document that I want only you to see, I shouldn’t simply e-mail it to you. I could encrypt it using a symmetric algorithm; then if anybody grabbed it along its way, they wouldn’t be able to read it because they wouldn’t have the single key that was used to encrypt it. But neither would you. I have to somehow get you the key so that you can decrypt the document, but without risking someone else intercepting both the key and the document.
Asymmetric algorithms are the solution. The two keys that these algorithms produce have the following relationship: Anything encrypted with the public key can be decrypted only with the companion private key. So I should first ask you to send me your public key. Anyone else can grab it on its way to me, but it doesn’t matter, since that just enables them to encrypt things for you. I use your public key to encrypt the document then send it to you. You decrypt it with your private key, which is the only thing that can decrypt it, and which you have not sent over the wire.
The asymmetric algorithms are computationally expensive and slower than the symmetric ones, so we don’t want to asymmetrically encrypt everything in our online sessions. Instead, we can go back to using symmetric algorithms. As the next example shows, we merely use asymmetric encryption to encrypt the symmetric key. Then, we use symmetric encryption from that point forward.
Encrypting network data
Although it’s a simplification, the description above is pretty much what the Secure Socket Layer (SSL) does to create secure sessions between browser and server. The idea is also put into practice in Listing B and Listing C.
Listing B is a small TCP server program you can run on your own computer in one process. You can then run the client contained in Listing C in another process (i.e., use two command windows). Near the top of each listing is a comment showing how to invoke the program at the command line.
The server:
- · Receives a public key from the client.
- · Uses that public key to encrypt a symmetric key that can be used by both.
- · Sends the encrypted symmetric key to the client.
- · Sends the client a secret message encrypted with the symmetric key.
The client:
- · Creates and sends a public key to the server.
- · Receives an encrypted symmetric key from the server.
- · Decrypts that symmetric key using its private asymmetric key.
- · Receives and decrypts a secret message encrypted with the symmetric key.
Upon startup, the client creates its own instance of the RSACryptoServiceProvider class. When instantiated, the object contains strong default keys. The client needs to get the public key out of this RSA object and send it to the server. The public key is extracted using the ExportParameters method, resulting in an RSAParameters object that holds the public key. How do we send this object to the server? We can use .NET’s binary serialization, from the System.Runtime.Serialization.Formatters.Binary namespace:
NetworkStream ns = client.GetStream();
BinaryFormatter bf = new BinaryFormatter();
bf.Serialize(ns,key); // where key is the RSAParameters object
The BinaryFormatter writes directly to streams, and in this case, it writes the serialized version of the RSAParameters to the network stream. The server receives those bytes and deserializes them into an RSAParameters object:
result = (RSAParameters)bf.Deserialize(ms);
// ms is a memory stream containing the bytes sent by client
// bf is a BinaryFormatter
Now the server creates a symmetric key and IV that both sides can use and encrypts them using the client’s public key:
symKeyEncrypted = rsa.Encrypt(symm.Key, false);
symIVEncrypted = rsa.Encrypt(symm.IV, false);
// symKeyEncrypted and symIVEncrypted are byte arrays
Unlike the symmetric providers, the asymmetric providers encrypt to a byte array, not to a stream. The byte array can then be sent to the client using the NetworkStream.
Once the client receives the encrypted versions of the symmetric key and IV, it decrypts them using its own private asymmetric key. Now both sides have an agreed-upon symmetric key and IV. From this point forward, they send each other data that is encrypted using only the symmetric key; the asymmetric algorithm has served its purpose and need not be used again.
Conclusion
We feel comfortable using the symmetric algorithms to encrypt local data. We can choose from multiple algorithms while keeping the code generic by typing them as the abstract SymmetricAlgorithm class. The algorithms make use of transformer objects to actually encrypt the data as it passes through the special CryptoStream. When we need to send the data over a wire, we first encrypt the symmetric key itself using the recipient’s public asymmetric key.
It’s important to restate in closing that encryption is just one of the services offered in the System.Security.Cryptography namespace. For instance, although the techniques in this article would guarantee that only a certain private key could decode the message encrypted with its companion public key, they do not guarantee anything about who sent the original public key; it could have been an impostor. Classes dealing with digital certificates would also have to be employed to address that risk. | http://www.techrepublic.com/article/net-demystifies-encryption/ | CC-MAIN-2017-13 | refinedweb | 1,996 | 54.32 |
Document Type Editor
Introduction
BloomReach Experience Manager includes a WYSIWYG Document Type Editor which allows web developers to create and modify document types to be used in their projects. A document type defines a type's data structure as well as the editing template used by authors to create and modify documents of that type.
Using the Document Type Editor
The Document Type Editor is available to users with administrator privileges and is located in the Content Workspace in the Document Types section.
Browsing, creating and editing document types is very similar to browsing, creating and editing actual documents.
When editing a document type fields can be added, moved, modified and removed.
A new or modified document type must be committed before it is available to authors.
Standard Field Types
BloomReach Experience Manager provides a standard set of field types that can be used to create document types. Additional field types can be added through plugins or custom development.
Primitive Types
Primitive Types are field types that translate directly to primitive JCR property types.
Boolean
A Boolean field is displayed as a single checkbox. Its value is either true (checked) or false (unchecked). Its default value is false.
Boolean Radio Group
A single value radio button group widget populated from a value list service.
CalendarDate
A CalendarDate field allows a date to be entered in a text box or through the provided calendar widget.
Date
A Date field includes date as well as time. A date value can be entered in a text box or through the provided calendar widget. A time value can be entered through text boxes for hours and minutes.
Decimal Number
A Decimal Number field is used for decimal values. Its default value is 0.0.
Docbase
A Docbase field is used to create a weak reference to a folder, document, asset or image in the repository. It stores the UUID of the referenced item's JCR node in a String property.
Dynamic Dropdown
A single value dropdown widget populated from a value list service.
Html
A Html field is used for formatted text content. It is stored as HTML markup in a String property.
Integer Number
An Integer Number field is used for integers (or whole numbers). It is stored as a Long property. Its default value is 0.
Radio Group
A single value radio button group widget populated from a value list service.
Static Dropdown
A single value dropdown widget populated from a static value list specified as comma-separated values in the field properties.
String
A String field is used for single-line plain text content.
Text
A Text field is used for multi-line plain text content. It is stored as a String property.
Compound Types
Compound Types are reusable blocks of fields. They are stored as child nodes. A Compound Type can contain both Primitive and Compound Type fields.
Image Link
An Image Link field is used to include an image from the gallery in the document. It is stored as a reference to the actual JCR node in which the image is stored. The delivery tier will resolve the reference and translate it to a website URL on-the-fly. If a CMS user tries to delete the referenced image they will see a warning that it is being referred to and deleting the image will cause a broken link in the referring document.
Link
A Link field is used to create an internal link to a different content item (such as a document or asset) in the repository. It is stored as a reference to the actual JCR node in which that content item is stored. The delivery tier will resolve the reference and translate it to a website URL on-the-fly. If a CMS user tries to delete the referenced content item they will see a warning that it is being referred to and deleting the item will cause a broken link in the referring document.
Resource
A Resource field is used to embed a file (e.g. an image or a PDF document) in a document. The file is physically stored inside a JCR node in the document and can't be reused by other documents.
Rich Text Editor
A Rich Text Editor field is used to store fully featured rich text content. It is stored as HTML markup inside a JCR node in the document. The Rich Text Editor provides authors with the freedom to format text and include tables, images, links, etc. The delivery tier must parse rich text content in order to resolve references to other content items (such as images and links to other documents).
Value List Item
A key-value compound type that is part of the Value List document type used to manage values for Selection fields such as Dynamic Dropdowns and Radio Groups.
Field Properties
Each field in a document type has a number of properties. The exact properties differ per field type, but every field must at least have a Caption property and a Path property.
Caption
The caption of a field is the label that is displayed directly above the field in the editing template. Authors will know a field by its caption. A caption is single-line plain text and may contain spaces and special characters.
Path
The path of a field is the name that is used to store the value of the field under. Under the hood the system exclusively refers to a field by its path (prefixed by a namespace). It translates directly to a JCR property name (in case of a primitive type) or node name (in case of a compound type). A path may not contain spaces or special characters.
Hint
Optionally a hint to authors can be added to a field. The hint is displayed as a question mark with a mouseover popup.
CSS Classes
Optionally one or more CSS classes can be added in order to apply custom styling to the field. The actual CSS classes must be defined in a custom .css file included in the cms module of the project.
Required
Any field that is not Optional can be made required by checking the Required checkbox. Authors can't save documents if they haven't entered a value in a required field.
Optional
Any field that is not Required and not Multiple can be made optional by checking the Optional checkbox. Authors can remove the field completely from a document by clicking on an 'X' icon, and add it back by clicking on a '+' icon. This can be particularly useful for compound fields that contain required fields.
Multiple and Ordered
Any field that is not Optional can be made multi-valued by checking the Multiple checkbox. This adds plus and minus icons to the field so authors can add or remove values.
A multi-valued field can be made orderable by checking the Ordered checkbox. This adds arrow icons to the field so authors can move values up, down, to the top, and to the bottom. Optionally, the "move to the top" and "move to the bottom" arrows can be hidden by adding the CSS class hide-top-bottom-arrows to the field's CSS Classes property.
Custom Validation
In addition to defining fields as required (see above), it's possible to create custom field validators.
Document Type Inheritance
BloomReach Experience Manager supports document type inheritance. A project created using the archetype always contains a base document type from which all other document types inherit. This is the default super type when creating a new document type.
You can choose to make a new document type inherit from a different document type by selecting one in the Super type field in the New document type dialog:
The new document type's editing template will initially be empty. The fields inherited from the super type can be manually added to the template by choosing them from the Inherited section in the right column of the editor:
| https://documentation.bloomreach.com/library/concepts/document-types/document-type-editor.html | CC-MAIN-2019-09 | refinedweb | 1,318 | 63.49 |
Did you know that there's no grid-like control in WPF 1.0? This post will show you how to get around that limitation. But first, may I say that the recently-released Orcas September CTP bits offer a great improvement at design-time when writing WPF applications. Instead of those three clunky tabs for .xaml, [Designer], and .xaml.cs that we had back in the June CTP, you now get the first glimpse of Microsoft's new cool "Split" view, which will become a part of ASP.NET in the Orcas timeframe!
Very cool stuff. Modifying the designer surface updates the underlying XAML immediately. But modifying the XAML requires that you click the design surface before the update takes place, the same as how Dreamweaver's split view works. Still very useful. The up-down thing between the two tabs swaps their place, and the horizontal vs vertical split things are found on the right. You can of course make either surface larger or smaller by dragging the “thumb” in the middle.
So anyway, on with the topic at hand: how to use “Crossbow” with the September CTP bits. Crossbow lets you use old-school WinForms controls in WPF with the WindowsFormsHost control, and also the other way around, using WPF controls in a WinForms app using the ElementHost control. As mentioned in the picture above, you have to add references to these two assemblies:
- System.Windows.Forms (since that refers to the controls we're building a wrapper for)
- WindowsFormsIntegration (which holds the WindowsFormsHost control we'll use)
With those two references in place, you can now use the special WPF control called WindowsFormsHost (found in the System.Windows.Forms.Integration namespace) as a kind of “placeholder” for a single WinForms control. This opens up an area where the WinForms control gets rendered as part of WPF. Why would you want to do such a thing? If you have invested in creating a WinForms app then you can incrementally change over to WPF. Plus some WinForms controls are not yet found over in WPF. For instance, there is no control in WPF that acts like the very popular DataGrid or DataGridView. So this example shows how you can add one to a WPF app using 100% declarative code.
If you've used Crossbow in the past, this build changes the way you implement the WindowsFormsHost a little. Now, instead of the <Mapping> tags you previously had to put at the top of the XAML, that's all handled as special namespace syntax in the <Window> or <Canvas> root element. As shown in the sample snippet of code in the image above, you have to add these two namespace declaration attributes to that element:
xmlns:wfi="clr-namespace:System.Windows.Forms.Integration;assembly=WindowsFormsIntegration"
xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms"
And then you can make use of the new namespace and add in a WindowsFormsHost, and a single WinForms control inside, like this:
<wfi:WindowsFormsHost Width="200" Height="100">
    <wf:DataGrid x:Name="wfDataGrid" />
</wfi:WindowsFormsHost>
Intellisense won't do anything for you with this build, but at least the declarative code will work as expected. And when you click on the designer surface, you should see the sample DataGrid jump to life, filling up that 200x100 pixel area. In your code-beside you can then programmatically reference the WinForms control just like you normally would, for instance after InitializeComponent() in the constructor you could use these few lines to show some simple content in the DataGrid:
DataTable
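The snippet itself was lost in formatting; here is a minimal reconstruction of the kind of code meant here (table and column names are invented, and wfDataGrid stands for whatever x:Name the hosted DataGrid was given):

```csharp
DataTable table = new DataTable("Sample");
table.Columns.Add("Name");
table.Columns.Add("Value");
table.Rows.Add("First", "1");
table.Rows.Add("Second", "2");

// Bind the table to the hosted WinForms DataGrid.
wfDataGrid.DataSource = table;
```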
And then the end result would look like this:
The checkbox is a WPF control, and of course the DataGrid is a WinForms control. In this sort of way it's very easy to keep the rich functionality of old WinForms controls, and add in new slick animation and other WPF features in your apps. You can wire up events and reference all the objects on the page with the same code you normally would, so altogether it “just works”.
For more information about Crossbow, check out Mike Hender's blog.
in reply to
Re: The 'eval "require $module; 1"' idiom
in thread The 'eval "require $module; 1"' idiom
Yes, polluting the UNIVERSAL namespace by adding a method that can impact every single other class used is so much better than writing one simple, well-understood line of Perl code.
So, when I run across a line of code like:
if( $module->require() ) {
in some Perl code, I'll likely suspect what it might be doing. But, especially the first time, I likely won't be completely sure, especially not on all of the details. So I'm likely to want to go read the documentation for this routine. And it is so easy to imagine having no idea where to find that documentation. Especially if the loading of UNIVERSAL::require was done from some other code file (yay for action at a distance).
A much less cute interface for this functionality would have been a much better idea. Running into code like:
use Module::Require qw< could_require >;
# ...
if( could_require( $module ) ) {
would make the breadcrumbs from the mystery code to the module that documents it obvious in the typical manner of Perl modules.
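Such a could_require could be little more than a wrapper around the idiom under discussion; a sketch (Module::Require as used here is the imagined module from this post, not necessarily anything on CPAN):

```perl
package Module::Require;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = ('could_require');

sub could_require {
    my ($module) = @_;
    # Map Foo::Bar to the Foo/Bar.pm path that require expects.
    (my $file = $module) =~ s{::}{/}g;
    return eval { require "$file.pm"; 1 } ? 1 : 0;
}

1;
```

The breadcrumb from the call site back to the documentation is then just the use line at the top of the caller's file.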
Such an interface would also prevent weird surprises when you misremember a similar method name on some other class and get silent behavior very much different from what you expected and then waste a ton of time trying to figure out what is going on. Because the silent behavior was made possible by the loading of some module by some code that has almost nothing to do with the code you are working on. Action at a distance at its finest.
- tye
More or less exactly two months after the second developer preview, I'm delighted to announce that we've shipped the first (and hopefully only) beta release of the Couchbase Spark Connector. It is a major step forward, bringing Spark 1.4 support as well as official documentation and lots of smaller enhancements. In particular:
- Support for Spark 1.4
- Overhauled Spark SQL DataFrame support
- Java APIs
- saveToCouchbase() supports StoreModes
You can get it from the Couchbase Maven Repository right away:
Documentation is now officially available here!
Spark 1.4 Support
Spark 1.4 has been selected as the target Spark version for the 1.0 GA release. As a result, all the spark dependencies have been bumped. Since 1.4 brings a new API for DataFrames, the Connector modified its API as well to blend perfectly into it.
The DataFrame API has changed so that the underlying source works through the DataFrameReader and DataFrameWriter. Other than that, it feels very similar to the previous API.
Here is an example on how to read data out of the travel-sample bucket:
You can also write a DataFrame into couchbase:
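The embedded code samples did not survive the page extraction. As a rough, unverified sketch of what the 1.0-era DataFrame calls looked like (bucket wiring and filter values are assumptions; check the official docs for exact signatures):

```scala
import com.couchbase.spark.sql._
import org.apache.spark.sql.sources.EqualTo

// Read all documents whose "type" field is "airline" into a DataFrame.
val airlines = sqlContext.read.couchbase(schemaFilter = EqualTo("type", "airline"))

// Persist a DataFrame back into the bucket.
airlines.write.couchbase()
```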
Java APIs
Many people use spark through its Java API, so of course we also want to provide support for it. Since the API exposure of the connector is by design very small, not much API needs to be converted. The java API lives under the com.couchbase.spark.java namespace and can be used like this:
StoreModes
Previously the saveToCouchbase() method only used the underlying upsert method to store its data. Since there might be scenarios where you don't want to (or just) override documents, more flexibility is needed. This is why we've introduced the StoreMethod enum, which supports the following values:
- UPSERT: Insert if it doesn't exist and override if it does.
- INSERT_AND_FAIL: Try to insert and fail if it does exist.
- INSERT_AND_IGNORE: Try to insert and ignore failures if it does exist.
- REPLACE_AND_FAIL: Try to replace and fail if it doesn't exist.
- REPLACE_AND_IGNORE: Try to replace and ignore failures if it doesn't exist.
Using it is very easy, the following correctly fails since the document already exists:
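The example snippet is missing here as well; a hypothetical reconstruction (document ID and RDD contents invented), where the insert fails because the target document already exists:

```scala
import com.couchbase.spark._
import com.couchbase.client.java.document.JsonDocument
import com.couchbase.client.java.document.json.JsonObject

// INSERT_AND_FAIL raises an error if "airline_10" is already stored.
sc.parallelize(Seq(JsonDocument.create("airline_10", JsonObject.empty())))
  .saveToCouchbase(StoreMode.INSERT_AND_FAIL)
```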
The Road Towards GA
The 1.0.0 GA release of the connector is planned a month from now, leaving room to fix bugs and improve documentation. Please help us kick the tires as much as possible so we can ship an awesome GA release! | https://blog.couchbase.com/couchbase-spark-connector-1-0-beta-release/ | CC-MAIN-2022-33 | refinedweb | 409 | 64.1 |
I have the below code where I'm trying to write values to an excel file, but my output adds one letter in every single column, instead of the whole word, like so
I want the whole word to be in one column. I'm currently passing in an array that has the words
[u'Date / Time', u'City', u'State', u'Shape', u'Duration', u'Summary']
import requests
import csv
from bs4 import BeautifulSoup
r = requests.get('')
soup = BeautifulSoup(r.text, 'html.parser')
csv.register_dialect('excel')
f = open('ufo.csv', 'wb')
writer = csv.writer(f)
headers = soup.find_all('th')
header_text = []
header_count = 1
for header in headers:
if header_count == len(headers):
print "value being written: " + str(header_text)
writer.writerows(header_text)
else:
header_text.append(header.text)
header_count += 1
f.close()
You're extracting the text of a single column via:
for header in headers
For each single column, you're writing it out like a row of columns via:
writer.writerows(header_text)
The writerows method expects a list of rows to write, and it iterates over whatever it is given. You've passed it a list of strings, so each string is treated as a row and iterated character by character, writing one character per column.
So either:
writer.writerows([header_text])  # turn this single column into a list
# or
writer.writerow(header_text)     # just write out as a single item
should work. | https://codedump.io/share/QDnq0Xd4wVQh/1/writing-values-to-excel-csv-in-python | CC-MAIN-2017-04 | refinedweb | 220 | 59.09 |
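The difference between the two calls is easy to see with an in-memory writer (header values borrowed from the question):

```python
import csv
import io

headers = ["Date / Time", "City", "State"]

# writerows() treats each element of its argument as a ROW,
# so a list of strings becomes one row per word, one character per column.
wrong = io.StringIO()
csv.writer(wrong).writerows(headers)

# writerow() writes its argument as one row, one column per string.
right = io.StringIO()
csv.writer(right).writerow(headers)

print(wrong.getvalue().splitlines()[0])  # D,a,t,e, ,/, ,T,i,m,e
print(right.getvalue())                  # Date / Time,City,State
```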
Now that Service Bus Premium Messaging has become generally available, many of our customers are asking "how fast is it?" In order to answer that question, we made a quick performance benchmarking .NET application. The app should give you an idea of what kind of performance to expect with your setup.
We ran 3 different tests from a D4 V2 VM in North Europe with a premium namespace in North Europe as well (placing the resources in the same data center is important). We then ran the tests with 1, 2 and 4 messaging units, with a 1 KB message size. The tests hit 3 different use cases:
- A single queue
- A single topic with a single subscription
- A single topic with 5 subscriptions
You may notice that if you were to create your own application, and just sent messages in a loop, you might not get these results. This app used batching; one of the first things to do when looking to maximize throughput. Additionally, we are using AMQP and we aren't using any advanced features here such as transactions, duplicate detection, or sessions.
Please keep in mind that results will vary. For instance, you may see results with higher or lower throughput than ours. We simply wanted to provide an idea of what you can expect to see.
For reference, here is a link to the detailed results, and here is a link to the code on GitHub:
Below is a summary of what our numbers looked like for sending messages to Service Bus:
And here's what it looked like when receiving messages:
With 4 MU's, premium messaging was able to maintain over 16,728 messages per second in AND out.
The main thing to point out here is the consistency, which is the primary reason to think of premium messaging in the first place. If you can't afford to have messages backed up because of your volume, or to experience higher latencies because your "neighbors are being noisy," then you might want to think about premium messaging.
Either way, we are extremely excited about the results, and can't wait to see what everyone builds with it! Let us know what you think in the comments.
Happy (really fast) messaging!
-The Service Bus Team
Do you have charts for non-premium Service Bus?
Hi Artem – Unfortunately we do not, this is because the performance can vary greatly. The primary selling point of our premium offering is the predictability of it. | https://blogs.msdn.microsoft.com/servicebus/2016/07/18/premium-messaging-how-fast-is-it/ | CC-MAIN-2018-43 | refinedweb | 418 | 67.59 |
Ever since I first launched One Word Domains this summer, I've received plenty of requests for a search feature to help navigate the ever growing list of domains on the site (the current number is about 500K domains).
My first thought when I started planning the implementation of this feature was:
Oh, this shouldn't be too hard, I'm just gonna:
- Get the list of words that are the most similar to the user's search query using the spaCy NLP library and display them on the site.
- Also show the list of TLDs that are available for that particular query.
No biggie.
However, when I started translating my vision to code, I ran into a few technical constraints.
First, I wanted the search to be blazing fast, so calculating the cosine similarity in real-time is a no-go – that would simply take too long.
Also, even if I were able to optimize the calculation time to <50ms, how do I deploy the spaCy model to the web when even the en_core_web_md model itself is over 200MB unzipped?
It took me 3 whole days to figure out a viable solution for this, which is why I'm hoping this blog post will help you do so in much shorter time.
Building A Word Association Network
After doing a bunch of research, I came to the conclusion that there was only one way for me to reduce the search time to sub-50ms levels – by pre-training the model locally and caching the results in PostgreSQL.
Essentially, what I'd be doing is building out a word association network that maps the relationship between the 20K words that are currently in my database.
Picture this:
Note: the decimal values on each of the edges represent the similarity scores between the auxiliary terms and the root term
Now picture that again, but this time with 20,000 adjectives, nouns, verbs, and 10 other categories + French and Spanish words*.
I know, sounds pretty crazy and cool at the same time, right?
First, I installed the spaCy library in my virtual environment and downloaded the en_core_web_md model:

pip install spacy
python3 -m spacy download en_core_web_md
Then, I imported them into my Flask app and loaded up the word vectors:
import spacy
import en_core_web_md

nlp = en_core_web_md.load()
Now it's time to build the word association model. I first took the list of vocabs that were on One Word Domains and tokenized them. Then, by using a nested for-loop, I calculated the cosine similarities between each of the terms, preserved a list of the top 100 most similar auxiliary terms for each word using the addToList function, and stored everything in a dictionary. The code for all that is as follows:
# Add to list function to add the list of significant scores to list
def addToList(ele, lst, num_ele):
    if ele in lst:
        return lst
    if len(lst) >= num_ele:  # if list is at capacity
        if ele[1] > float(lst[-1][1]):  # if element's sig_score is larger than smallest sig_score in list
            lst.pop(-1)
            lst.append((ele[0], str(ele[1])))
            lst.sort(key=lambda x: float(x[1]), reverse=True)
    else:
        lst.append((ele[0], str(ele[1])))
        lst.sort(key=lambda x: float(x[1]), reverse=True)
    return lst

import json

# list of English vocabs
en_vocab = ['4k', 'a', 'aa', ...]

# tokenizing the words in the vocab list
tokens = nlp(' '.join(en_vocab))

# initialize a dictionary entry (with an empty similar_words list) for each word
en_dict = {w: {'similar_words': []} for w in en_vocab}

# Nested for loop to calculate cosine similarity scores
for i in range(len(en_vocab)):
    word = en_vocab[i]
    print('Processing for ' + word + ' (' + str(i) + ' out of ' + str(len(en_vocab)) + ' words)')
    for j in range(i + 1, len(en_vocab)):
        prev_list_i = en_dict[str(tokens[i])]['similar_words']
        en_dict[str(tokens[i])]['similar_words'] = addToList((str(tokens[j]), tokens[i].similarity(tokens[j])), prev_list_i, 100)
        prev_list_j = en_dict[str(tokens[j])]['similar_words']
        en_dict[str(tokens[j])]['similar_words'] = addToList((str(tokens[i]), tokens[i].similarity(tokens[j])), prev_list_j, 100)

with open('data.json', 'w') as f:
    json.dump(en_dict, f)
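As an aside, re-sorting the list on every insertion is more work than necessary; the standard library's heapq can produce the same top-k ranking in a single pass. A sketch with made-up similarity scores:

```python
import heapq

def top_k_similar(scores, k=100):
    """Return the k highest-scoring (word, score) pairs, best first.

    scores is an iterable of (word, score) tuples; heapq.nlargest keeps
    only k items in memory instead of sorting the whole list.
    """
    return heapq.nlargest(k, scores, key=lambda pair: pair[1])

# Illustrative (made-up) similarity scores for a root word:
scores = [("home", 0.71), ("casa", 0.64), ("house", 0.93), ("dwell", 0.58)]
print(top_k_similar(scores, k=2))  # [('house', 0.93), ('home', 0.71)]
```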
This code took forever to run. For 20,000 words, there were a total of 200,010,000 combinations (20000 + 19999 + 19998 +...+ 3 + 2 + 1 = 20001 * 10000 = 200,010,000). Given that each combination took about half a millisecond to complete, it wasn't a surprise when the whole thing took about 36 hours to execute completely.
But, once those gruelling 36 hours were up, I had a complete list of words along with 100 most similar words for each and every one of them.
To improve data retrieval speeds, I proceeded to store the data in PostgreSQL – a relational database that is incredibly scalable and powerful especially when it comes to large amounts of data. I did that with the following lines of code:
def store_db_postgres():
    with open('data.json', 'r') as f:
        data = json.load(f)
    for word in data.keys():
        db_cursor.execute("""INSERT INTO dbname (word, param) VALUES (%s, %s);""",
                          (word, json.dumps(data[word]['similar_words'])))
    db_conn.commit()
    return 'OK', 200
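The store-then-fetch caching pattern can be sketched end to end with the standard library, using sqlite3 as a stand-in for PostgreSQL (the table name and sample data are invented):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT PRIMARY KEY, param TEXT)")

# Cache each word's precomputed neighbour list as a JSON string.
data = {"ocean": {"similar_words": [["sea", "0.91"], ["water", "0.84"]]}}
for word, entry in data.items():
    conn.execute("INSERT INTO words (word, param) VALUES (?, ?)",
                 (word, json.dumps(entry["similar_words"])))
conn.commit()

# Lookup at query time is a single indexed read plus a json.loads.
row = conn.execute("SELECT param FROM words WHERE word = ?", ("ocean",)).fetchone()
print(json.loads(row[0]))  # [['sea', '0.91'], ['water', '0.84']]
```

Because the similarity work happened offline, the online path is just this primary-key lookup, which is what keeps the search under 50ms.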
And, voilà! You can now traverse the millions edges in the word association network at minimum latency (last I checked, it was at sub-50ms levels). The search tool is live on One Word Domains now – feel free to play around with it.
*The French and Spanish words were trained on the fr_core_news_md and es_core_news_md models respectively.
Deploying to the Cloud
Here comes the tricky part. I knew that while I had over 20K of the most commonly-used English words in my word association network, there's always gonna be a chance where a user's query is not part of my knowledge base.
Therefore, I wanted to add an option for users to generate results on the fly. To do that, I needed to upload the spaCy model to the web so that I can build additional edges to the word association network in real-time.
However, this wasn't possible on Heroku given the 500MB slug size hard limit (my current slug size was already 300MB). I needed a more powerful alternative, and that's when I decided to go with AWS Lambda.
I had my first encounter with Lambda functions when I had to use them to host the NLP component of a movie chatbot that I helped build for my AI class this fall.
While Lambda functions were rather complicated to set up in the beginning, with the help of the handy Zappa library and tons of StackOverflow posts, I was able to build One Word Domains' very own lambda function that would find the most similar words for a given query and return them in JSON format.
I won't go too deep into the weeds about the deployment process, but this was the guide that helped me immensely. Also, here's the main driver function that would find the list of the top 100 most closely associated words for a given query:
@app.route('/generate')
@cross_origin()
def generate_results():
    # get vocab
    vocab_pickle = s3.get_object(Bucket='intellisearch-assets', Key='vocab')['Body'].read()
    vocab_only = pickle.loads(vocab_pickle)
    vocab_only = list(set(vocab_only))

    # add new query to vocab
    vocab_only.insert(0, query)

    # store vocab back to pickle
    serialized_vocab = pickle.dumps(vocab_only)
    s3.put_object(Bucket='intellisearch-assets', Key='vocab', Body=serialized_vocab)

    # do the rest
    tokens = nlp(' '.join(vocab_only))
    results = []
    for i in range(1, len(vocab_only)):
        if str(tokens[i]) != query:
            results.append([str(tokens[i]), tokens[0].similarity(tokens[i])])
    results.sort(key=lambda x: x[1], reverse=True)
    return results[:100]
Here, I'm storing the list of vocabs in the form of a pickle file on AWS S3, and whenever I have a new query, I use s3.get_object to retrieve the list and update it by inserting the new word into the list. Note that I'm using the @cross_origin() decorator from Flask's CORS library since I'll be calling on this Lambda function from my Heroku app.
And that's...pretty much it. All I had to do now was connect the AWS Lambda API endpoint to my original Heroku app to help generate a list of relevant words for a given search query that a user enters on the site. Here's the outline for that:
import requests

url = "" + query
response = requests.get(url)
similar_words = response.json()
print(similar_words)
Finally, the moment of truth – here's a Loom recording showing how real-time search works on One Word Domains.
And there you go – a fully-functional search algorithm built on top of a word association network containing 20,000 nodes and millions of edges.
Feel free to play around with the search tool at the top of every page on One Word Domains. If you have any feedback, or if you find a bug, feel free to send me a message via chat, contact page, email or Twitter – I'd love to help!
-2016 02:04 AM
Hey,
I would like to use the lwIP Example (XAPP1026) without the DDR RAM. Somewhere here in the forum a few people already mentioned that this is possible, but they did not go into details about their system setup.
So I built a system on my own and synthesis/implementation worked. Also I was able to generate the board support packages for the tcp/ip echo server example. However, now I run into the following problem on the following line:
int main() {
    struct ip_addr ipaddr, netmask, gw;
    /* the mac address of the board. this should be unique per board */
    unsigned char mac_ethernet_address[] = { 0xde, 0xad, 0xbe, 0xef, 0x5b, 0x00 };

    echo_netif = &server_netif;

#ifdef __arm__
#if XPAR_GIGE_PCS_PMA_SGMII_CORE_PRESENT == 1 || XPAR_GIGE_PCS_PMA_1000BASEX_CORE_PRESENT == 1
    ProgramSi5324();
    ProgramSfpPhy();
#endif
#endif

    init_platform();

#if LWIP_DHCP==1
    ipaddr.addr = 0;
    gw.addr = 0;
    netmask.addr = 0;
#else
    /* initliaze IP addresses to be used */
    IP4_ADDR(&ipaddr, 192, 168, 128, 50);
    IP4_ADDR(&netmask, 255, 255, 255, 0);
    IP4_ADDR(&gw, 192, 168, 128, 1);
#endif

    print_app_header();

    lwip_init();

    /* Add network interface to the netif_list, and set it as default */
    if (!xemac_add(echo_netif, &ipaddr, &netmask, &gw,
                   mac_ethernet_address, PLATFORM_EMAC_BASEADDR)) {
        xil_printf("Error adding N/W interface\n\r");
        return -1;
    }

    netif_set_default(echo_netif);

    /* now enable interrupts */
    platform_enable_interrupts();

    /* specify that the network if is up */
    //ERROR NOW HERE
    netif_set_up(echo_netif);
The serial connection gives me:
axidma_recv_handler: Error: axidma error interrupt is asserted
Multiple times.
My fist bet would be, that I run out of memory. However, the local memory is not connected using DMA, but only the shared memory. Any ideas?
Or maybe someone could point me to a (working) setup without DDR?
01-09-2016 07:04 AM
As expected, I misconfigured something with the DMA controller or maybe DMA needs DDR RAM? I dont know. I solved the problem now by using a simple FIFO which uses less LUT anyway
01-05-2016 11:29 PM
Hi,
Please refer to for lwip without DDR.
Regards
Praveen
01-07-2016 12:00 AM
Hey,
I did that, but I still got the same error. I started with the XAPP1026 example and configured a lwIP stack so that the whole echo server test uses around 500KB (see first screenshot). I configured the microblaze processor to have 512K address range for data and instructions so that should fit. I am able to execute this program on my system. I also added 256K shared memory (see second screenshot).
However I still get
axidma_recv_handler: Error: axidma error interrupt is asserted
To my surprise, the shared memory is not listed in the hardware specification file (first screenshot). Maybe there is something wrong? Also, is it normal to have the same base address for instruction and data in local memory? This feels a little weird to me since this way the compiler has to make sure that .data and .text are in two different areas of memory.
04-19-2016 08:50 AM
I also observed axidma_recv_handler: Error: axidma error interrupt is asserted when trying to run the lwip echo server without external memory. Turns out, the DMA engine did not have access to the memory region where the lwip buffer was located. As far as I can tell, the lwip heap is static and, consequently, is located in the BSS memory segment.
I had placed the Code, Data (includes BSS), and Heap & Stack segments in the MicroBlaze ILMB/DLMB block ram. However, this memory was accessible only to the MicroBlaze and not to the AXI Ethernet DMA engine. Since lwip needs the ethernet data DMA'd into the BSS segment, which was impossible, lwip never got the data.
Assigning the entire Data section or just the .bss segment to an AXI block ram that was accessible to both the MicroBlaze and the AXI Ethernet DMA engine solved the problem. This can be done using the Generate Linker Script tool for the lwip application in SDK. | https://forums.xilinx.com/t5/Embedded-Processor-System-Design/AXI-DMA-error-interrupt-is-asserted/m-p/675083 | CC-MAIN-2019-35 | refinedweb | 713 | 62.78 |
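For reference, the section assignment that the Generate Linker Script tool produces can also be sketched by hand. The memory region name below (axi_bram_ctrl_0_Mem0) is a made-up example, since Vivado generates a design-specific name:

```ld
/* hypothetical lscript.ld fragment: place .bss (which contains lwip's
   statically allocated heap and pbuf pools) into a BRAM that both the
   MicroBlaze and the AXI DMA engine can reach */
.bss : {
   __bss_start = .;
   *(.bss)
   *(.bss.*)
   *(COMMON)
   __bss_end = .;
} > axi_bram_ctrl_0_Mem0
```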
A python framework for getting useful stuff out of HAR files
Project description
A Python Framework For Using HAR Files To Analyze Web Pages.
Overview
The haralyzer module contains two classes for analyzing web pages based on a HAR file. HarParser() represents a full file (which might have multiple pages), and HarPage() represents a single page from said file.
HarParser has a couple of helpful methods for analyzing single entries from a HAR file, but most of the pertinent functions are inside of the page object.
haralyzer was designed to be easy to use, but you can also access more powerful functions directly.
Quick Intro
HarParser
The HarParser takes a single argument of a dict representing the JSON of a full HAR file. It has the same properties of the HAR file, EXCEPT that each page in HarParser.pages is a HarPage object:
import json
from haralyzer import HarParser, HarPage

with open('har_data.har', 'r') as f:
    har_parser = HarParser(json.loads(f.read()))

print(har_parser.browser)
# {u'name': u'Firefox', u'version': u'25.0.1'}
print(har_parser.hostname)
# 'humanssuck.net'

for page in har_parser.pages:
    assert isinstance(page, HarPage)  # True for each page
HarPage
The HarPage object contains most of the goods you need to easily analyze a page. It has helper methods that are accessible, but most of the data you need is in properties for easy access. You can create a HarPage object directly by giving it the page ID (yes, I know it is stupid, it’s just how HAR is organized), and either a HarParser with har_parser=parser, or a dict representing the JSON of a full HAR file (see example above) with har_data=har_data:
import json
from haralyzer import HarPage

with open('har_data.har', 'r') as f:
    har_page = HarPage('page_3', har_data=json.loads(f.read()))

### GET BASIC INFO ###
har_page.hostname
# 'humanssuck.net'
har_page.url
# ''

### WORK WITH LOAD TIMES (all load times are in ms) ###
# Get image load time in milliseconds as rendered by the browser
har_page.image_load_time
# prints 713
# We could do this with 'css', 'js', 'html', 'audio', or 'video'

### WORK WITH SIZES (all sizes are in bytes) ###
# Get the total page size (with all assets)
har_page.page_size
# prints 2423765
# Get the total image size
har_page.image_size
# prints 733488
# We could do this with 'css', 'js', 'html', 'audio', or 'video'
# Get the transferred sizes (works only with HAR files generated with Chrome)
har_page.page_size_trans
har_page.image_size_trans
har_page.css_size_trans
har_page.text_size_trans
har_page.js_size_trans
har_page.audio_size_trans
har_page.video_size_trans
MultiHarParser
The MultiHarParser takes a list of dicts, each of which represents the JSON of a full HAR file. The concept here is that you can provide multiple HAR files of the same page (representing multiple test runs) and the MultiHarParser will provide aggregate results for load times:
import json
from haralyzer import MultiHarParser

test_runs = []
with open('har_data1.har', 'r') as f1:
    test_runs.append(json.loads(f1.read()))
with open('har_data2.har', 'r') as f2:
    test_runs.append(json.loads(f2.read()))

multi_har_parser = MultiHarParser(har_data=test_runs)

# Get the mean for the time to first byte of all runs in MS
print(multi_har_parser.time_to_first_byte)
# 70
# Get the total page load time mean for all runs in MS
print(multi_har_parser.load_time)
# 150
# Get the javascript load time mean for all runs in MS
print(multi_har_parser.js_load_time)
# 50
# You can get the standard deviation for any of these as well
# Let's get the standard deviation for javascript load time
print(multi_har_parser.get_stdev('js'))
# 5
# We can also do that with 'page' or 'ttfb' (time to first byte)
print(multi_har_parser.get_stdev('page'))
# 11
print(multi_har_parser.get_stdev('ttfb'))
# 10

### DECIMAL PRECISION ###
# You will notice that all of the results above are whole numbers. That is
# because the default decimal precision for the multi parser is 0. However,
# you can pass whatever you want into the constructor to control this.
multi_har_parser = MultiHarParser(har_data=test_runs, decimal_precision=2)
print(multi_har_parser.time_to_first_byte)
# 70.15
Advanced Usage
HarPage includes a lot of helpful properties, but they are all easily produced using the public methods of HarParser and HarPage:
import json
from haralyzer import HarPage

with open('har_data.har', 'r') as f:
    har_page = HarPage('page_3', har_data=json.loads(f.read()))

### ACCESSING FILES ###

# You can get a JSON representation of all assets using HarPage.entries
for entry in har_page.entries:
    if entry['startedDateTime'] == 'whatever I expect':
        # ... do stuff ...
        pass

# It also has methods for filtering assets.
# Get a collection of entries that were images in the 2XX status code range:
entries = har_page.filter_entries(content_type='image.*', status_code='2.*')
# This method can filter by:
# * content_type ('application/json' for example)
# * status_code ('200' for example)
# * request_type ('GET' for example)
# * http_version ('HTTP/1.1' for example)
# It will use a regex by default, but you can also force a literal string
# match by passing regex=False

# Get the size of the collection we just made
collection_size = har_page.get_total_size(entries)

# We can also access files by type with a property
for js_file in har_page.js_files:
    # ... do stuff ...
    pass

### GETTING LOAD TIMES ###

# Get the BROWSER load time for all images in the 2XX status code range
load_time = har_page.get_load_time(content_type='image.*', status_code='2.*')

# Get the TOTAL load time for all images in the 2XX status code range
load_time = har_page.get_load_time(content_type='image.*', status_code='2.*', async=False)
This could potentially be out of date, so please check out the sphinx docs.
More…. Advanced Usage
All of the HarPage methods above leverage stuff from the HarParser, some of which can be useful for more complex operations. They either operate on a single entry (from a HarPage) or a list of entries:
import json
from haralyzer import HarParser

with open('har_data.har', 'r') as f:
    har_parser = HarParser(json.loads(f.read()))

for page in har_parser.pages:
    for entry in page.entries:
        ### MATCH HEADERS ###
        if har_parser.match_headers(entry, 'Content-Type', 'image.*'):
            print('This would appear to be an image')
        ### MATCH REQUEST TYPE ###
        if har_parser.match_request_type(entry, 'GET'):
            print('This is a GET request')
        ### MATCH STATUS CODE ###
        if har_parser.match_status_code(entry, '2.*'):
            print('Looks like all is well in the world')
Asset Timelines
The last helper function of HarParser requires its own section, because it is odd, but can be helpful, especially for creating charts and reports.
It can create an asset timeline, which gives you back a dict where each key is a datetime object, and the value is a list of assets that were loading at that time. Each value of the list is a dict representing an entry from a page.
It takes a list of entries to analyze, so it assumes that you have already filtered the entries you want to know about:
import json
from haralyzer import HarParser

with open('har_data.har', 'r') as f:
    har_parser = HarParser(json.loads(f.read()))

### CREATE A TIMELINE OF ALL THE ENTRIES ###
entries = []
for page in har_parser.pages:
    for entry in page.entries:
        entries.append(entry)

timeline = har_parser.create_asset_timeline(entries)

for key, value in timeline.items():
    print(type(key))
    # <type 'datetime.datetime'>
    print(key)
    # 2015-02-21 19:15:41.450000-08:00
    print(type(value))
    # <type 'list'>
    print(value)
    # Each entry in the list is an asset from the page
    # [{u'serverIPAddress': u'157.166.249.67', u'cache': {},
    #   u'startedDateTime': u'2015-02-21T19:15:40.351-08:00',
    #   u'pageref': u'page_3', u'request': {u'cookies': ...
With this, you can examine the timeline for any number of assets. Since the key is a datetime object, this is a heavy operation. We could always change this in the future, but for now, limit the assets you give this method to only what you need to examine.
: :> Considering how few items are left on the list for HAMMER, it will :> likely be production-ready long before the mid-year release. : :Do you think you'll get some of the clustering functionality in before the mid :year release, like volumes spanning multiple hosts? What about booting :DragonFly off a HAMMER filesystem? : :Petr I honestly don't know. The cluster functionality is the holy grail of the project, but it requires significant work not just on the filesystem, but on the kernel's entire filesystem data caching model. Big chunks of that work have already been done (new namecache, kernel is now responsible for namespace and data space locking, etc), but there are still bug chunks left. I do expect the semi-real-time mirroring technology to be in by mid-year, because that will be directly tied into the UNDO fifo and the UNDO fifo is needed for crash recovery. -Matt Matthew Dillon <dillon@backplane.com> | http://leaf.dragonflybsd.org/mailarchive/kernel/2008-02/msg00076.html | CC-MAIN-2014-42 | refinedweb | 158 | 61.16 |
x86 Disassembly/Print Version
The Wikibook of
Using C and Assembly Language
From Wikibooks: The Free Library
Introduction

We are going to look at the way programs are made using assemblers and compilers, and examine the way that assembly code is made from C or C++ source code. Using this knowledge, we will try to reverse the process. By examining common structures, such as data and control structures, we can find patterns that enable us to disassemble and decompile programs quickly.
Who Is This Book For?
This book is for readers at the undergraduate level with experience programming in x86 Assembly and C or C++. This book is not designed to teach assembly language programming, C or C++ programming, or compiler/assembler theory.
What Are The Prerequisites?
The reader should have a thorough understanding of x86 Assembly, C Programming, and possibly C++ Programming. This book is intended to increase the reader's understanding of the relationship between x86 machine code, x86 Assembly Language, and the C Programming Language. If you are not too familiar with these topics, you may want to reread some of the above-mentioned books before continuing.
What is Disassembly?
Computer programs are written originally in a human readable code form, such as assembly language or a high-level language. These programs are then compiled into a binary format called machine code. This binary format is not directly readable or understandable by humans. Many programs -- such as malware, proprietary commercial programs, or very old legacy programs -- may not have the source code available to you.
Programs frequently perform tasks that need to be duplicated, or need to be made to interact with other programs. Without the source code and without adequate documentation, these tasks can be difficult to accomplish. This book outlines tools and techniques for attempting to convert the raw machine code of an executable file into equivalent code in assembly language and the high-level languages C and C++. With the high-level code to perform a particular task, several things become possible:
- Programs can be ported to new computer platforms, by compiling the source code in a different environment.
- The algorithm used by a program can be determined. This allows other programs to make use of the same algorithm, or for updated versions of a program to be rewritten without needing to track down old copies of the source code.
- Security holes and vulnerabilities can be identified and patched by users without needing access to the original source code.
- New interfaces can be implemented for old programs. New components can be built on top of old components to speed development time and reduce the need to rewrite large volumes of code.
- We can figure out what a piece of malware does. We hope this leads us to figuring out how to block its harmful effects. Unfortunately, some malware writers use self-modifying code techniques (polymorphic camouflage, XOR encryption, scrambling)[1], apparently to make it difficult to even detect that malware, much less disassemble it.
Disassembling code has a large number of practical uses. One of the positive side effects of it is that the reader will gain a better understanding of the relation between machine code, assembly language, and high-level languages. Having a good knowledge of these topics will help programmers to produce code that is more efficient and more secure.
Tools
Assemblers and Compilers
Assemblers
MASM

MASM is Microsoft's Macro Assembler.
NASM

NASM is the Netwide Assembler, a free and open-source x86 assembler.
FASM, the "Flat Assembler" is an open source assembler that supports x86, and IA-64 Intel architectures.
(x86) AT&T Syntax Assemblers
AT&T syntax for x86 microprocessor assembly code is not as common as Intel-syntax, but the GNU Assembler (GAS) uses it, and it is the de facto assembly standard on Unix and Unix-like operating systems.
GAS
HLA

Compilers

Microsoft C Compiler

GNU C Compiler

Metrowerks CodeWarrior
This compiler is commonly used for classic MacOS and for embedded systems. If you try to reverse-engineer a piece of consumer electronics, you may encounter code generated by Metrowerks CodeWarrior.
Green Hills Software Compiler
This compiler is commonly used for embedded systems. If you try to reverse-engineer a piece of consumer electronics, you may encounter code generated by Green Hills C/C++.
Disassemblers and Decompilers
What is a Disassembler?

In the most general sense, a disassembler is a program that converts machine code into a human-readable assembly language listing. Several interactive disassemblers exist to analyse and understand native x86 and x64 Windows software.
- Binary Ninja
- Binary Ninja is a commercial, cross-platform (Linux, OS X, Windows) reverse engineering platform that aims to offer a similar feature set to IDA at a much cheaper price point. It is currently in a semi-private beta (anyone requesting access is allowed on the beta) and a precursor written in Python is open source. Currently advertised pricing is $99 for student/non-commercial use, and $399 for commercial use.
As we have alluded to before, there are a number of issues and difficulties associated with the disassembly process. The two most important difficulties are the division between code and data, and the loss of text information.
Separating Code from Data

Since code and data are stored together in an executable, a disassembler can mistake one for the other, making it even harder to determine what is going on. Another challenge is posed by modern optimising compilers: they inline small subroutines, then combine instructions over call and return boundaries. This loses valuable information about the way the program is structured.
Decompilers

Some decompilers ([2][3]) have boasted pretty good results in the past.
- Snowman
- Snowman is an open source native code to C/C++ decompiler.
A General view of Disassembling
8 bit CPU code
On 8-bit CPUs, calculated jumps are often implemented by pushing a calculated "return" address to the stack, then jumping to that address using the "return" instruction. For example, the RTS Trick uses this technique to implement jump tables (w:branch table).
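As a concrete sketch of the trick on a 6502-style CPU (where the rts instruction pops a 16-bit address and jumps to that address plus one), a computed jump can look like the following; the handler label is hypothetical:

```asm
    ; RTS trick: push (handler - 1) and "return" into it.
    ; The high byte goes first, because rts pulls the low byte, then the high.
    lda #>(handler-1)   ; high byte of target minus one
    pha
    lda #<(handler-1)   ; low byte of target minus one
    pha
    rts                 ; pops the address, adds 1, and jumps there
```

Replacing the immediate loads with indexed reads from a table of addresses turns this into a jump table.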
Some code even places parameters in the instruction stream, directly after the call instruction, where the callee reads them through the return address.

- [4] has some tips on reverse engineering programs in JavaScript, Flash Actionscript (SWF), Java, etc.
- the Open Source Institute occasionally has reverse engineering challenges among its other brainteasers.[5]
- The Program Transformation wiki has a Reverse engineering and Re-engineering Roadmap, and discusses disassemblers, decompilers, and tools for translating programs from one high-level language to another high-level language.
- Other disassemblers with multi-platform support
Analysis Tools
Resource Monitors
- SysInternals Freeware
- This page has a large number of excellent utilities, many of which are very useful to security experts, network administrators, and (most importantly to us) reversers. Specifically, check out Process Monitor, FileMon, RegMon, TCPView, and Process Explorer.
API Monitors
Platforms
Microsoft Windows
The Windows operating system is a popular reverse engineering target.
Windows Versions
Windows operating systems can be easily divided into 2 categories: Win9x and WinNT. The Win9x series covers Windows 95, 98, and ME; the WinNT series runs from Windows NT 3.1 and NT 4.0 through Windows 2000 (NT 5.0), Windows XP (NT 5.1), Windows Server 2003 (NT 5.2), Windows Vista (NT 6.0), Windows 7 (NT 6.1), Windows 8 (NT 6.2), Windows 8.1 (NT 6.3), and Windows 10 (NT 10.0).
The Microsoft XBOX and XBOX 360 also run a variant of NT, forked from Windows 2000. Most future Microsoft operating system products are based on NT in some shape or form.
Native API

Functions in the Native API are typically prefixed with "Nt" (for example, NtCreateFile); it is rumored that this prefix was chosen due to its having no significance at all.
In actual implementation, the system call stubs merely load two registers with the values required to describe a native API call, and then execute a software interrupt (or the sysenter instruction). A separate family of Win32 API functions is used to load, manipulate and retrieve data from DLLs and other module resources.
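A hedged sketch of such a stub, in 32-bit Intel syntax; the service number 42h and the ret size are invented for illustration, since the real numbers vary between Windows releases:

```asm
; hypothetical stub for one Native API call
mov eax, 42h    ; system service number (version-dependent)
mov edx, esp    ; second register: points at the arguments on the user stack
int 2Eh         ; trap into the kernel (later versions use sysenter)
ret 14h         ; STDCALL-style cleanup of the arguments
```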
User Mode Versus Kernel Mode

Differences
Windows CE/Mobile, and other versions
Windows CE is the Microsoft offering on small devices. It largely uses the same Win32 API as the desktop systems, although it has a slightly different architecture. Some examples in this book may consider WinCE.
Windows Executable Files
MS-DOS COM Files
PE Files

The PE (Portable Executable) file format includes a number of informational headers, and is arranged in the following format:
The basic format of a Microsoft PE file
MS-DOS header

The DOS header, presented as a C data structure:

struct DOS_Header
{
    char signature[2];    // == "MZ"
    short lastsize;
    short nblocks;
    short nreloc;
    short hdrsize;
    short minalloc;
    short maxalloc;
    void *ss;
    void *sp;
    short checksum;
    void *ip;
    void *cs;
    short relocpos;
    short noverlay;
    short reserved1[4];
    short oem_id;
    short oem_info;
    short reserved2[10];
    long e_lfanew;        // file offset of the PE header
}

The e_lfanew field shows where in the file the newer PE header is located.
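To make the role of e_lfanew concrete, here is a hedged C sketch that follows it from the DOS header to the PE signature; it assumes a little-endian host (as on x86) and keeps error handling minimal:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Returns the file offset of the "PE\0\0" signature, or -1 on failure. */
long find_pe_header(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    unsigned char mz[2];
    uint32_t e_lfanew = 0;
    unsigned char sig[4];
    long result = -1;

    /* "MZ" opens the DOS header; e_lfanew lives at file offset 0x3C. */
    if (fread(mz, 1, 2, f) == 2 && mz[0] == 'M' && mz[1] == 'Z' &&
        fseek(f, 0x3C, SEEK_SET) == 0 &&
        fread(&e_lfanew, sizeof e_lfanew, 1, f) == 1 &&
        fseek(f, (long)e_lfanew, SEEK_SET) == 0 &&
        fread(sig, 1, 4, f) == 4 &&
        memcmp(sig, "PE\0\0", 4) == 0) {
        result = (long)e_lfanew;
    }

    fclose(f);
    return result;
}
```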
The "PE Optional Header" is not "optional" per se, because it is required in executable files, but not in COFF object files. The PE Optional Header, presented as a C data structure:

struct PEOptHeader
{
    short signature;              // 267 decimal for a 32-bit image
    char MajorLinkerVersion;
    char MinorLinkerVersion;
    long SizeOfCode;
    long SizeOfInitializedData;
    long SizeOfUninitializedData;
    long AddressOfEntryPoint;     // RVA of the entry point
    long BaseOfCode;
    long BaseOfData;
    long ImageBase;               // preferred load address
    long SectionAlignment;
    long FileAlignment;
    short MajorOSVersion;
    short MinorOSVersion;
    short MajorImageVersion;
    short MinorImageVersion;
    short MajorSubsystemVersion;
    short MinorSubsystemVersion;
    long Win32VersionValue;
    long SizeOfImage;
    long SizeOfHeaders;
    long Checksum;
    short Subsystem;
    short DLLCharacteristics;
    long SizeOfStackReserve;
    long SizeOfStackCommit;
    long SizeOfHeapReserve;
    long SizeOfHeapCommit;
    long LoaderFlags;
    long NumberOfRvaAndSizes;     // number of entries in DataDirectory
    data_directory DataDirectory[16];
}
Code Sections
- .testbss/TEXTBSS - Present if Incremental Linking is enabled
- .data/.idata/DATA/IDATA - Contains initialised data
- .bss/BSS - Contains uninitialised data
Section Flags
What is linking?

When an export is resolved by name, the matching index in the name table is used to fetch an ordinal from the AddressOfNameOrdinals array. This ordinal is then used as an index to get a value in AddressOfFunctions.
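The lookup can be sketched in C; the three arrays below are stand-ins for the real AddressOfNames, AddressOfNameOrdinals and AddressOfFunctions tables, which a loader would read out of the mapped image via RVAs:

```c
#include <string.h>
#include <stdint.h>

/* Toy export directory: two names, sorted, as the PE format requires. */
static const char *address_of_names[]            = { "Bar", "Foo" };
static const uint16_t address_of_name_ordinals[] = { 1, 0 };
static const uint32_t address_of_functions[]     = { 0x1000, 0x2000 }; /* RVAs */

/* Find the name, take its ordinal, then index AddressOfFunctions with it. */
uint32_t find_export_rva(const char *name, int name_count)
{
    for (int i = 0; i < name_count; i++) {
        if (strcmp(address_of_names[i], name) == 0)
            return address_of_functions[address_of_name_ordinals[i]];
    }
    return 0; /* not found */
}
```

Here "Foo" resolves through ordinal 0 to the RVA 0x1000, and "Bar" through ordinal 1 to 0x2000.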
Forwarding
Resource structures
Alternate Bound Import Structure
Windows DLL Files
Linux
The Print Version page of the X86 Disassembly Wikibook is a stub. You can help by expanding this section.
The GNU/Linux operating system is open source, but at the same time there is so much that constitutes "GNU/Linux" that it can be difficult to stay on top of all aspects of the system. Here we will attempt to boil down some of the most important concepts of the GNU/Linux Operating System, especially from a reverser's standpoint
System Architecture
The concept of "GNU/Linux" is mostly a collection of a large number of software components that are based off the GNU tools and the Linux kernel. GNU/Linux is itself broken into a number of variants called "distros" which share some similarities, but may also have distinct peculiarities. In a general sense, all GNU/Linux distros are based on a variant of the Linux kernel. However, since each user may edit and recompile their own kernel at will, and since some distros may make certain edits to their kernels, it is hard to proclaim any one version of any one kernel as "the standard". Linux kernels are generally based off the philosophy that system configuration details should be stored in aptly-named, human-readable (and therefore human-editable) configuration files.
The Linux kernel implements much of the core API, but certainly not all of it. Much API code is stored in external modules (although users have the option of compiling all these modules together into a "Monolithic Kernel").
On top of the kernel generally runs one or more shells. Bash is one of the more popular shells, but many users prefer other shells, especially for different tasks.
Beyond the shell, Linux distros frequently offer a GUI (although many distros do not have a GUI at all, usually for performance reasons).
Since each GUI often supplies its own underlying framework and API, certain graphical applications may run on only one GUI. Some applications may need to be recompiled (and a few completely rewritten) to run on another GUI.
Configuration Files
Shells
Here are some popular shells:
- Bash
- An acronym for "Bourne Again SHell."
- Bourne
- A precursor to Bash.
- Csh
- C Shell
- Ksh
- Korn Shell
- TCsh
- A Terminal oriented Csh.
- Zsh
- Z Shell
GUIs
Some of the more-popular GUIs:
- KDE
- K Desktop Environment
- GNOME
- GNU Network Object Modeling Environment
Debuggers
- gdb
- The GNU Debugger. It comes pre-installed on most Linux distributions and is primarily used to debug ELF executables. manpage
- edb
- A fully featured plugin-based debugger inspired by the famous OllyDbg. Project page
File Analyzers
- strings
- Finds printable strings in a file. When, for example, a password is stored in the binary itself (defined statically in the source), the string can then be extracted from the binary without ever needing to execute it. manpage
- file
- Determines a file type, useful for determining whether an executable has been stripped and whether it's been dynamically (or statically) linked. manpage
- objdump
- Disassembles object files, executables and libraries. Can list internal file structure and disassemble specific sections. Supports both Intel and AT&T syntax.
- nm
- Lists symbols from executable files. Doesn't work on stripped binaries. Used mostly on debugging version of executables.
Linux Executable Files
The Print Version page of the X86 Disassembly Wikibook is a stub. You can help by expanding this section.
ELF Files
Relocatable ELF files are created by compilers. They need to be linked before running.
Those files are often found in .a archives, with a .o extension.
a.out Files
File Format
Code Patterns
The Stack
The following lines of ASM code are basically equivalent:

sub esp, 4                    ; move esp down one dword
mov DWORD PTR SS:[esp], eax   ; store eax in the newly opened slot

and

push eax                      ; push eax onto the stack

but the single command actually performs much faster than the alternative. It can be visualized that the stack grows from right to left, and esp decreases as the stack grows in size.
ESP In Action
Functions and Stack Frames
Standard Entry Sequence

The standard entry sequence sets up a new stack frame at the start of a function:

push ebp        ; save the old base pointer
mov ebp, esp    ; ebp now points to the top of the stack frame
sub esp, X      ; reserve X bytes of stack space for local variables

For example, a function with no local variables needs only the first two instructions:

_MyFunction:
    push ebp
    mov ebp, esp

while a function with, say, 8 bytes of local variables also reserves that space:

_MyFunction2:
    push ebp
    mov ebp, esp
    sub esp, 8
:     :
|  2  | [ebp + 16] (3rd function argument)
|  5  | [ebp + 12] (2nd argument)
| 10  | [ebp + 8]  (1st argument)
| RA  | [ebp + 4]  (return address)
| FP  | [ebp]      (old ebp value)
|     | [ebp - 4]  (1st local variable)
:     :
:     :
|     | [ebp - X]  (esp, the current stack pointer; the use of push / pop is valid now)
The stack pointer value may change during the execution of the current function. In particular this happens when:
- parameters are passed to another function;
- the pseudo-function "alloca()" is used.
[FIXME: When parameters are passed into another function the esp changing is not an issue. When that function returns the esp will be back to its old value. So why does ebp help there. This needs better explanation. (The real explanation is here, ESP is not really needed:)]:
_MyFunction3:
    push ebp
    mov ebp, esp
    sub esp, 12 ; sizeof(a) + sizeof(b) + sizeof(c)
    ;x = [ebp + 8], y = [ebp + 12], z = [ebp + 16]
    ;a = [ebp - 4] = [esp + 8], b = [ebp - 8] = [esp + 4], c = [ebp - 12] = [esp]
    mov esp, ebp
    pop ebp
    ret 12 ; sizeof(x) + sizeof(y) + sizeof(z)
Non-Standard Stack Frames
Frequently, reversers will come across a subroutine that doesn't set up a standard stack frame. Here are some things to consider when looking at a subroutine that does not start with a standard sequence.
Local Static Variables
Local static variables cannot be created on the stack, since the value of the variable is preserved across function calls. We'll discuss local static variables and other types of variables in a later chapter.
Functions and Stack Frame Examples
Example: Number of Parameters
Given the following disassembled function (in MASM syntax), how many 4-byte parameters does this function receive? How many variables are created on the stack? What does this function do?
_Question1: push ebp mov ebp, esp sub esp, 4 mov eax, [ebp + 8] mov ecx, 2 mul ecx mov [esp + 0], eax mov eax, [ebp + 12] mov edx, [esp + 0] add eax, edx mov esp, ebp pop ebp ret
The function above takes 2 4-byte parameters, accessed by offsets +8 and +12 from ebp. The function also has 1 variable created on the stack, accessed by offset +0 from esp. The function is nearly identical to this C code:
int Question1(int x, int y) { int z; z = x * 2; return y + z; }
Example: Standard Entry Sequences
Does the following function follow the Standard Entry and Exit Sequences? if not, where does it differ?
_Question2: call _SubQuestion2 mov ecx, 2 mul ecx ret
The function does not follow the standard entry sequence, because it doesn't set up a proper stack frame with ebp and esp. The function basically performs the following C instructions:
int Question2() { return SubQuestion2() * 2; }
Although an optimizing compiler has chosen to take a few shortcuts.
Calling Conventions
There are three major calling conventions used with the C language on 32-bit x86: STDCALL, CDECL, and FASTCALL. In addition, there is another calling convention typically used with C++: THISCALL. There are other calling conventions as well, including PASCAL and FORTRAN conventions, among others. We will not consider those conventions in this book.
Notes on Terminology
C++ requires that non-static methods of a class be called by an instance of the class. Therefore it uses its own standard calling convention to ensure that pointers to the object are passed to the function: THISCALL.
THISCALL

In THISCALL (as implemented by the Microsoft compiler), the pointer to the class object is passed in ecx, and the method's explicit parameters are passed on the stack. For a method such as int MyClass::MyFunction(int x), the call MyClass::MyFunction(2) passes the object's this pointer in ecx and the value 2 on the stack.
And here is the resultant mangled name:
?MyFunction@MyClass@@QAEHH@Z
Extern "C"
- x86 Disassembly/Calling Convention Examples
- Embedded Systems/Mixed C and Assembly Programming describes calling conventions on other CPUs.
Calling Convention Examples
Microsoft C Compiler
Here is a simple function in C:
int MyFunction(int x, int y) { return (x * 2) + (y * 3); }
Using cl.exe, we are going to generate 3 separate listings for MyFunction, one with CDECL, one with FASTCALL, and one with STDCALL calling conventions. On the commandline, there are several switches that you can use to force the compiler to change the default:
/Gd: The default calling convention is CDECL
/Gr: The default calling convention is FASTCALL
/Gz: The default calling convention is STDCALL
Using these commandline options, here are the listings:
CDECL
int MyFunction(int x, int y) { return (x * 2) + (y * 3); }
becomes:
PUBLIC _MyFunction _TEXT SEGMENT _x$ = 8 ; size = 4 _y$ = 12 ; size = 4 _MyFunction PROC NEAR ; Line 4 push ebp mov ebp, esp ; Line 5 mov eax, _y$[ebp] imul eax, 3 mov ecx, _x$[ebp] lea eax, [eax+ecx*2] ; Line 6 pop ebp ret 0 _MyFunction ENDP _TEXT ENDS END
On entry of a function, ESP points to the return address pushed on the stack by the call instruction (that is, previous contents of EIP). Any argument in stack of higher address than entry ESP is pushed by caller before the call is made; in this example, the first argument is at offset +4 from ESP (EIP is 4 bytes wide), plus 4 more bytes once the EBP is pushed on the stack. Thus, at line 5, ESP points to the saved frame pointer EBP, and arguments are located at addresses ESP+8 (x) and ESP+12 (y).
For CDECL, caller pushes arguments into stack in a right to left order. Because ret 0 is used, it must be the caller who cleans up the stack.
As a point of interest, notice how lea is used in this function to simultaneously perform the multiplication (ecx * 2), and the addition of that quantity to eax. Unintuitive instructions like this will be explored further in the chapter on unintuitive instructions.
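The listing only shows the callee; the caller's half of the CDECL contract can be sketched like this (the argument values are just examples):

```asm
push 3            ; y, rightmost argument pushed first
push 2            ; x
call _MyFunction
add esp, 8        ; CDECL: the caller removes its own arguments
                  ; the return value is in eax
```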
FASTCALL
int MyFunction(int x, int y) { return (x * 2) + (y * 3); }
becomes:
PUBLIC @MyFunction@8 _TEXT SEGMENT _y$ = -8 ; size = 4 _x$ = -4 ; size = 4 @MyFunction@8 PROC NEAR ; _x$ = ecx ; _y$ = edx ; Line 4 push ebp mov ebp, esp sub esp, 8 mov _y$[ebp], edx mov _x$[ebp], ecx ; Line 5 mov eax, _y$[ebp] imul eax, 3 mov ecx, _x$[ebp] lea eax, [eax+ecx*2] ; Line 6 mov esp, ebp pop ebp ret 0 @MyFunction@8 ENDP _TEXT ENDS END
This function was compiled with optimizations turned off. Here we see that the arguments are first saved to the stack and then fetched back from it, rather than being used directly from the registers. This is because the compiler wants a consistent way to access all arguments via the stack, and this behaviour is not unique to one compiler.
No argument is accessed at a positive offset from the entry SP, so it seems the caller did not push any; thus the function can use ret 0. Let's investigate further:
int FastTest(int x, int y, int z, int a, int b, int c) { return x * y * z * a * b * c; }
and the corresponding listing:
PUBLIC @FastTest@24 _TEXT SEGMENT _y$ = -8 ; size = 4 _x$ = -4 ; size = 4 _z$ = 8 ; size = 4 _a$ = 12 ; size = 4 _b$ = 16 ; size = 4 _c$ = 20 ; size = 4 @FastTest@24 PROC NEAR ; _x$ = ecx ; _y$ = edx ; Line 2 push ebp mov ebp, esp sub esp, 8 mov _y$[ebp], edx mov _x$[ebp], ecx ; mov esp, ebp pop ebp ret 16 ; 00000010H
Now we have 6 arguments: four are pushed by the caller from right to left, and the first two are passed in ecx/edx and handled the same way as in the previous example. Stack cleanup is done by ret 16, which corresponds to the 4 arguments pushed before the call was executed.

For FASTCALL, the compiler tries to pass arguments in registers; if there are not enough registers (for a 64-bit CPU the maximum number is 6), the caller pushes the remaining arguments onto the stack, still in right-to-left order. Stack cleanup is done by the callee. It is called FASTCALL because if all arguments can be passed in registers, no stack push/cleanup is needed.

The name-decoration scheme of the function is @MyFunction@n, where n is the stack size needed for all arguments.
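For completeness, a sketch of a matching FASTCALL call site for the two-argument MyFunction; since both arguments fit in registers, nothing is pushed:

```asm
mov edx, 3        ; y, second argument in edx
mov ecx, 2        ; x, first argument in ecx
call @MyFunction@8
                  ; no "add esp" needed: nothing was pushed
```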
STDCALL
int MyFunction(int x, int y) { return (x * 2) + (y * 3); }
becomes:
PUBLIC _MyFunction@8 _TEXT SEGMENT _x$ = 8 ; size = 4 _y$ = 12 ; size = 4 _MyFunction@8 PROC NEAR ; Line 4 push ebp mov ebp, esp ; Line 5 mov eax, _y$[ebp] imul eax, 3 mov ecx, _x$[ebp] lea eax, [eax+ecx*2] ; Line 6 pop ebp ret 8 _MyFunction@8 ENDP _TEXT ENDS END
The STDCALL listing differs from the CDECL listing in only one way: it uses "ret 8" so the callee cleans up the stack itself. Let's do an example with more parameters:
int STDCALLTest(int x, int y, int z, int a, int b, int c) { return x * y * z * a * b * c; }
Let's take a look at how this function gets translated into assembly by cl.exe:
PUBLIC _STDCALLTest@24 _TEXT SEGMENT _x$ = 8 ; size = 4 _y$ = 12 ; size = 4 _z$ = 16 ; size = 4 _a$ = 20 ; size = 4 _b$ = 24 ; size = 4 _c$ = 28 ; size = 4 _STDCALLTest@24 PROC NEAR ; Line 2 push ebp mov ebp, esp ; pop ebp ret 24 ; 00000018H _STDCALLTest@24 ENDP _TEXT ENDS END
Indeed, the only difference between STDCALL and CDECL is that the former cleans up the stack in the callee, the latter in the caller. This saves a little code size on x86 thanks to its "ret n" instruction.
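The caller's side of STDCALL, sketched with example values, shows where the cleanup instruction disappears:

```asm
push 3            ; y
push 2            ; x
call _MyFunction@8
                  ; no "add esp, 8" here: the callee's "ret 8" already did it
```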
GNU C Compiler
We will be using 2 example C functions to demonstrate how GCC implements calling conventions:
int MyFunction1(int x, int y) { return (x * 2) + (y * 3); }
and
int MyFunction2(int x, int y, int z, int a, int b, int c) { return x * y * (z + 1) * (a + 2) * (b + 3) * (c + 4); }
GCC does not have command-line arguments to change the default calling convention from CDECL (for C), so the conventions will be specified manually in the source with the keywords __cdecl, __fastcall, and __stdcall.
CDECL
The first function (MyFunction1) provides the following assembly listing:
.globl _MyFunction1
_MyFunction1:
    pushl %ebp
    movl %esp, %ebp
    movl 8(%ebp), %eax
    leal (%eax,%eax), %ecx
    movl 12(%ebp), %edx
    movl %edx, %eax
    addl %eax, %eax
    addl %edx, %eax
    leal (%eax,%ecx), %eax
    popl %ebp
    ret
First of all, we can see the name-decoration is the same as in cl.exe. We can also see that the ret instruction doesn't have an argument, so the calling function is cleaning the stack. However, since GCC doesn't provide us with the variable names in the listing, we have to deduce which parameters are which. After the stack frame is set up, the first instruction of the function is "movl 8(%ebp), %eax". Once we remember (or learn for the first time) that GAS instructions have the general form:
instruction src, dest
We realize that the value at offset +8 from ebp (the last parameter pushed on the stack) is moved into eax. The leal instruction is a little more difficult to decipher, especially if we don't have any experience with GAS instructions. The form "leal (reg1,reg2), dest" adds the values in the parentheses together, and stores the value in dest. Translated into Intel syntax, we get the instruction:
lea ecx, [eax + eax]
Which is clearly the same as a multiplication by 2. The first value accessed must then have been the last value passed, which would seem to indicate that values are passed right-to-left here. To prove this, we will look at the next section of the listing:
    movl 12(%ebp), %edx
    movl %edx, %eax
    addl %eax, %eax
    addl %edx, %eax
    leal (%eax,%ecx), %eax
The value at offset +12 from ebp is moved into edx. edx is then moved into eax. eax is then added to itself (eax * 2), and then edx is added to it. Remember, though, that eax = 2 * edx at that point, so the result is edx * 3. This is clearly the y parameter, which is furthest down the stack, and was therefore the first pushed. CDECL on GCC is therefore implemented by passing arguments on the stack in right-to-left order, same as cl.exe.
FASTCALL
.globl @MyFunction1@8
    .def @MyFunction1@8; .scl 2; .type 32; .endef
@MyFunction1@8:
    pushl %ebp
    movl %esp, %ebp
    subl $8, %esp
    movl %ecx, -4(%ebp)
    movl %edx, -8(%ebp)
    movl -4(%ebp), %eax
    leal (%eax,%eax), %ecx
    movl -8(%ebp), %edx
    movl %edx, %eax
    addl %eax, %eax
    addl %edx, %eax
    leal (%eax,%ecx), %eax
    leave
    ret
Notice first that the same name decoration is used as in cl.exe. The astute observer will already have realized that GCC uses the same trick as cl.exe, of moving the fastcall arguments from their registers (ecx and edx again) onto a negative offset on the stack. Again, optimizations are turned off. ecx is moved into the first position (-4) and edx is moved into the second position (-8). Like the CDECL example above, the value at -4 is doubled, and the value at -8 is tripled. Therefore, -4 (ecx) is x, and -8 (edx) is y. It would seem from this listing then that values are passed left-to-right, although we will need to take a look at the larger, MyFunction2 example:
.globl @MyFunction2@24
    .def @MyFunction2@24; .scl 2; .type 32; .endef
@MyFunction2@24:
    pushl %ebp
    movl %esp, %ebp
    subl $8, %esp
    movl %ecx, -4(%ebp)
    movl %edx, -8(%ebp)
    movl -4(%ebp), %eax
    imull -8(%ebp), %eax
    movl 8(%ebp), %edx
    incl %edx
    imull %edx, %eax
    movl 12(%ebp), %edx
    addl $2, %edx
    imull %edx, %eax
    movl 16(%ebp), %edx
    addl $3, %edx
    imull %edx, %eax
    movl 20(%ebp), %edx
    addl $4, %edx
    imull %edx, %eax
    leave
    ret $16
By following the fact that in MyFunction2, successive parameters are added to increasing constants, we can deduce the positions of each parameter. -4 is still x, and -8 is still y. +8 gets incremented by 1 (z), +12 gets increased by 2 (a). +16 gets increased by 3 (b), and +20 gets increased by 4 (c). Let's list these values then:
z = [ebp + 8]
a = [ebp + 12]
b = [ebp + 16]
c = [ebp + 20]
c is the furthest down, and therefore was the first pushed. z is the highest to the top, and was therefore the last pushed. Arguments are therefore pushed in right-to-left order, just like cl.exe.
STDCALL
Let's compare then the implementation of MyFunction1 in GCC:
.globl _MyFunction1@8
    .def _MyFunction1@8; .scl 2; .type 32; .endef
_MyFunction1@8:
    pushl %ebp
    movl %esp, %ebp
    movl 8(%ebp), %eax
    leal (%eax,%eax), %ecx
    movl 12(%ebp), %edx
    movl %edx, %eax
    addl %eax, %eax
    addl %edx, %eax
    leal (%eax,%ecx), %eax
    popl %ebp
    ret $8
The name decoration is the same as in cl.exe, so STDCALL functions (and CDECL and FASTCALL for that matter) can be assembled with either compiler, and linked with either linker, it seems. The stack frame is set up, then the value at [ebp + 8] is doubled. After that, the value at [ebp + 12] is tripled. Therefore, +8 is x, and +12 is y. Again, these values are pushed in right-to-left order. This function also cleans its own stack with the "ret 8" instruction.
Looking at a bigger example:
.globl _MyFunction2@24
    .def _MyFunction2@24; .scl 2; .type 32; .endef
_MyFunction2@24:
    pushl %ebp
    movl %esp, %ebp
    movl 8(%ebp), %eax
    imull 12(%ebp), %eax
    movl 16(%ebp), %edx
    incl %edx
    imull %edx, %eax
    movl 20(%ebp), %edx
    addl $2, %edx
    imull %edx, %eax
    movl 24(%ebp), %edx
    addl $3, %edx
    imull %edx, %eax
    movl 28(%ebp), %edx
    addl $4, %edx
    imull %edx, %eax
    popl %ebp
    ret $24
We can see here that values at +8 and +12 from ebp are still x and y, respectively. The value at +16 is incremented by 1, the value at +20 is incremented by 2, etc all the way to the value at +28. We can therefore create the following table:
x = [ebp + 8]
y = [ebp + 12]
z = [ebp + 16]
a = [ebp + 20]
b = [ebp + 24]
c = [ebp + 28]
With c being pushed first, and x being pushed last. Therefore, these parameters are also pushed in right-to-left order. This function then also cleans 24 bytes off the stack with the "ret 24" instruction.
Example: C Calling Conventions
Identify the calling convention of the following C function:
int MyFunction(int a, int b) { return a + b; }
The function is written in C, and has no other specifiers, so it is CDECL by default.
Example: Named Assembly Function
Identify the calling convention of the function MyFunction:
:_MyFunction@12
    push ebp
    mov ebp, esp
    ...
    pop ebp
    ret 12
The function includes the decorated name of an STDCALL function, and cleans up its own stack. It is therefore an STDCALL function.
Example: Unnamed Assembly Function
This code snippet is the entire body of an unnamed assembly function. Identify the calling convention of this function.
    push ebp
    mov ebp, esp
    add eax, edx
    pop ebp
    ret
The function sets up a stack frame, so we know the compiler hasn't done anything "funny" to it. It uses values in eax and edx that it never initialized, so those registers must have been loaded by the caller. It is therefore a FASTCALL function.
Example: Another Unnamed Assembly Function
    push ebp
    mov ebp, esp
    mov eax, [ebp + 8]
    pop ebp
    ret 16
The function has a standard stack frame, and the ret instruction has a parameter to clean its own stack. Also, it accesses a parameter from the stack. It is therefore an STDCALL function.
Example: Name Mangling
What can we tell about the following function call?
    mov ecx, x
    push eax
    mov eax, ss:[ebp - 4]
    push eax
    mov al, ss:[ebp - 3]
    call @__Load?$Container__XXXY_?Fcii
Two things should get our attention immediately. The first is that before the function call, a value is stored into ecx. Also, the function name itself is heavily mangled. This example must use the C++ THISCALL convention. Inside the mangled name of the function, we can pick out two English words, "Load" and "Container". Without knowing the specifics of this name-mangling scheme, it is not possible to determine which word is the function name, and which word is the class name.
We can pick out two 32-bit variables being passed to the function, and a single 8-bit variable. The first is located in eax, the second is originally located on the stack from offset -4 from ebp, and the third is located at ebp offset -3. In C++, these would likely correspond to two int variables, and a single char variable. Notice at the end of the mangled function name are three lower-case characters "cii". We can't know for certain, but it appears these three letters correspond to the three parameters (char, int, int). We do not know from this whether the function returns a value or not, so we will assume the function returns void.
Assuming that "Load" is the function name and "Container" is the class name (it could just as easily be the other way around), here is our function definition:
class Container
{
    void Load(char, int, int);
};
Branches
Branch Examples
Example: Number of Parameters
What parameters does this function take? What calling convention does it use? What kind of value does it return? Write the entire C prototype of this function. Assume all values are unsigned values.
This function accesses parameters on the stack at [ebp + 8] and [ebp + 12]. Both of these values are loaded into ecx, and we can therefore assume they are 4-byte values. This function doesn't clean its own stack, and the values aren't passed in registers, so we know the function is CDECL. The return value in eax is a 4-byte value, and we are told to assume that all the values are unsigned. Putting all this together, we can construct the function prototype:
unsigned int CDECL MyFunction(unsigned int param1, unsigned int param2);
Example: Identify Branch Structures
How many separate branch structures are in this function? What types are they? Can you give more descriptive names to _Label_1, _Label_2, and _Label_3, based on the structures of these branches?
How many separate branch structures are there in this function? Stripping away the entry and exit sequences, here is the code we have left:
    mov ecx, [ebp + 8]
    cmp ecx, 0
    jne _Label_1
    inc eax
    jmp _Label_2
:_Label_1
    dec eax
:_Label_2
    mov ecx, [ebp + 12]
    cmp ecx, 0
    jne _Label_3
    inc eax
:_Label_3
Looking through, we see 2 cmp statements. The first cmp statement compares ecx to zero. If ecx is not zero, we go to _Label_1, decrement eax, and then fall through to _Label_2. If ecx is zero, we increment eax, and jump directly to _Label_2. Writing out some pseudocode, we have the following result for the first section:
if(ecx doesn't equal 0) goto _Label_1
eax++;
goto _Label_2
:_Label_1
eax--;
:_Label_2
Since _Label_2 occurs at the end of this structure, we can rename it to something more descriptive, like "End_of_Branch_1", or "Branch_1_End". The first comparison tests ecx against 0, and then jumps on not-equal. We can reverse the conditional, and say that _Label_1 is an else block:
if(ecx == 0)    //ecx is param1 here
{
    eax++;
}
else
{
    eax--;
}
So we can rename _Label_1 to something else descriptive, such as "Else_1". The rest of the code block, after Branch_1_End (_Label_2) is as follows:
    mov ecx, [ebp + 12]
    cmp ecx, 0
    jne _Label_3
    inc eax
:_Label_3
We can see immediately that _Label_3 is the end of this branch structure, so we can immediately call it "Branch_2_End", or something else. Here, we are again comparing ecx to 0, and if it is not equal, we jump to the end of the block. If it is equal to zero, however, we increment eax, and then fall out the bottom of the branch. We can see that there is no else block in this branch structure, so we don't need to invert the condition. We can write an if statement directly:
if(ecx == 0)    //ecx is param2 here
{
    eax++;
}
Example: Convert To C
Write the equivalent C code for this function. Assume all parameters and return values are unsigned values.
    push ebp
    mov ebp, esp
    mov eax, 0
    mov ecx, [ebp + 8]
    cmp ecx, 0
    jne _Label_1
    inc eax
    jmp _Label_2
:_Label_1
    dec eax
:_Label_2
    mov ecx, [ebp + 12]
    cmp ecx, 0
    jne _Label_3
    inc eax
:_Label_3
    mov esp, ebp
    pop ebp
    ret
Starting with the C function prototype from answer 1, and the conditional blocks in answer 2, we can put together a pseudo-code function, without variable declarations, or a return value:
unsigned int CDECL MyFunction(unsigned int param1, unsigned int param2)
{
    if(param1 == 0)
    {
        eax++;
    }
    else
    {
        eax--;
    }
    if(param2 == 0)
    {
        eax++;
    }
}
Now, we just need to create a variable to store the value from eax, which we will call "a", and we will declare as a register type:
unsigned int CDECL MyFunction(unsigned int param1, unsigned int param2)
{
    register unsigned int a = 0;
    if(param1 == 0)
    {
        a++;
    }
    else
    {
        a--;
    }
    if(param2 == 0)
    {
        a++;
    }
    return a;
}
Granted, this function isn't a particularly useful function, but at least we know what it does.
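Still, the reconstruction can be sanity-checked by exercising it directly. Note that decrementing an unsigned zero wraps around to the maximum value, which shows up in the results below. The CDECL keyword is dropped here because it is the C default:

```c
#include <assert.h>

/* The decompiled function from above; CDECL is the default convention in C,
   so no keyword is needed. */
unsigned int MyFunction(unsigned int param1, unsigned int param2)
{
    register unsigned int a = 0;
    if(param1 == 0) { a++; } else { a--; }
    if(param2 == 0) { a++; }
    return a;
}
```

Calling MyFunction(7, 0) first wraps a down to the maximum unsigned value and then increments it back to 0, so the unsigned qualifier matters here.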
Loops
Loops
C only has do-while, while, and for loops, but some other languages may very well implement their own types. Also, a good C programmer could easily "home brew" a new type of loop using a series of macros, so these variants bear some consideration:
Do-Until Loop
A common Do-Until Loop will take the following form:
do
{
    //loop body
} until(x);
which essentially becomes the following Do-While loop:
do
{
    //loop body
} while(!x);
Until Loop
Like the Do-Until loop, the standard Until-Loop looks like the following:
until(x)
{
    //loop body
}
which (likewise) gets translated to the following While-Loop:
while(!x)
{
    //loop body
}
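A minimal sketch of how such loops could actually be home-brewed in C, as suggested above. The until macro here is hypothetical, not a standard construct:

```c
#include <assert.h>

/* until(x) is simply while(!x). The same macro also works in the trailing
   position of a do-loop, turning it into the Do-Until form shown above. */
#define until(cond) while (!(cond))

/* Sums the integers 0 .. limit-1 using an until loop. */
int sum_below(int limit)
{
    int i = 0, sum = 0;
    until (i == limit)   /* expands to: while (!(i == limit)) */
    {
        sum += i;
        i++;
    }
    return sum;
}
```

After preprocessing, the compiler sees an ordinary while loop, so the disassembly of an until loop is indistinguishable from a while loop with an inverted condition.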
Do-Forever Loop.
Loop Examples
Example: Identify Purpose
What does this function do? What kinds of parameters does it take, and what kind of results (if any) does it return?
This function loops through an array of 4 byte integer values, pointed to by esi, and adds each entry. It returns the sum in eax. The only parameter (located in [ebp + 8]) is a pointer to an array of integer values. The comparison between ebx and 100 indicates that the input array has 100 entries in it. The pointer offset [esi + ebx * 4] shows that each entry in the array is 4 bytes wide.
Example: Complete C Prototype
What is this function's C prototype? Make sure to include parameters, return values, and calling convention.
Notice how the ret instruction cleans its parameter off the stack? That means that this function is an STDCALL function. We know that the function takes, as its only parameter, a pointer to an array of integers. We do not know, however, whether the integers are signed or unsigned, because the je command is used for both types of values. We can assume one or the other, and for simplicity, we can assume unsigned values (unsigned and signed values, in this function, will actually work the same way). We also know that the return value is a 4-byte integer value, of the same type as is found in the parameter array. Since the function doesn't have a name, we can just call it "MyFunction", and we can call the parameter "array" because it is an array. From this information, we can determine the following prototype in C:
unsigned int STDCALL MyFunction(unsigned int *array);
Example: Decompile To C Code
Decompile this code into equivalent C source code.
Starting with the function prototype above, and the description of what this function does, we can start to write the C code for this function. We know that this function initializes eax, ebx, and ecx before the loop. However, we can see that ecx is being used as simply an intermediate storage location, receiving successive values from the array, and then being added to eax.
We will create two unsigned integer values, a (for eax) and b (for ebx). We will define both a and b with the register qualifier, so that we can instruct the compiler not to create space for them on the stack. For each loop iteration, we are adding the value of the array, at location ebx*4 to the running sum, eax. Converting this to our a and b variables, and using C syntax, we see:
a = a + array[b];
The loop could be either a for loop, or a while loop. We see that the loop control variable, b, is initialized to 0 before the loop, and is incremented by 1 each loop iteration. The loop tests b against 100, after it gets incremented, so we know that b never equals 100 inside the loop body. Using these simple facts, we will write the loop in 3 different ways:
First, with a while loop.
unsigned int STDCALL MyFunction(unsigned int *array)
{
    register unsigned int b = 0;
    register unsigned int a = 0;
    while(b != 100)
    {
        a = a + array[b];
        b++;
    }
    return a;
}
Or, with a for loop:
unsigned int STDCALL MyFunction(unsigned int *array)
{
    register unsigned int b;
    register unsigned int a = 0;
    for(b = 0; b != 100; b++)
    {
        a = a + array[b];
    }
    return a;
}
And finally, with a do-while loop:
unsigned int STDCALL MyFunction(unsigned int *array)
{
    register unsigned int b = 0;
    register unsigned int a = 0;
    do
    {
        a = a + array[b];
        b++;
    } while(b != 100);
    return a;
}
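All three reconstructions are behaviorally identical, which a quick test can confirm. STDCALL is omitted below because it is a compiler extension and does not affect the computed sum:

```c
#include <assert.h>

/* The three loop forms from above, renamed so they can coexist. */
unsigned int sum_while(unsigned int *array)
{
    register unsigned int b = 0;
    register unsigned int a = 0;
    while(b != 100) { a = a + array[b]; b++; }
    return a;
}

unsigned int sum_for(unsigned int *array)
{
    register unsigned int b;
    register unsigned int a = 0;
    for(b = 0; b != 100; b++) { a = a + array[b]; }
    return a;
}

unsigned int sum_do(unsigned int *array)
{
    register unsigned int b = 0;
    register unsigned int a = 0;
    do { a = a + array[b]; b++; } while(b != 100);
    return a;
}
```

With optimizations on, a compiler will typically emit very similar (often identical) machine code for all three, which is exactly why the reverser cannot tell which form the original source used.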
Data Patterns
Variables
Variables
Here is a table to summarize some points about global variables:
When disassembling, a hard-coded memory address should be considered to be an ordinary global variable unless you can determine from the scope of the variable that it is static or extern.
Constants
Variable Examples
Example: Identify C++ Code.
Data Structures
Data Structures
Few programs can work by using simple memory storage; most need to utilize complex data objects, including pointers, arrays, structures, and other complicated types. This chapter will talk about how compilers implement complex data objects, and how the reverser can identify these objects.
Arrays
Arrays are simply a storage scheme for multiple data objects of the same type. Data objects are stored sequentially, often as an offset from a pointer to the beginning of the array. Consider the following C code:
x = array[25];
Which is identical to the following asm code:
    mov ebx, $array
    mov eax, [ebx + 25]
    mov $x, eax
Now, consider the following example:
int MyFunction1()
{
    int array[20];
    ...
This (roughly) translates into the following asm pseudo-code:
:_MyFunction1
    push ebp
    mov ebp, esp
    sub esp, 80           ;the whole array is created on the stack!!!
    lea $array, [esp + 0] ;a pointer to the array is saved in the array variable
    ...
The entire array is created on the stack, and the pointer to the bottom of the array is stored in the variable "array". An optimizing compiler could ignore the last instruction, and simply refer to the array via a +0 offset from esp (in this example), but we will do things verbosely.
Likewise, consider the following example:
void MyFunction2()
{
    char buffer[4];
    ...
This will translate into the following asm pseudo-code:
:_MyFunction2
    push ebp
    mov ebp, esp
    sub esp, 4
    lea $buffer, [esp + 0]
    ...
Which looks harmless enough. But what if a program inadvertently accesses buffer[4]? What about buffer[5]? What about buffer[8]? These are the makings of a buffer-overflow vulnerability, which may be discussed in a later section. This section, however, won't talk about security issues, and instead will focus only on data structures.
Spotting an Array on the Stack
To spot an array on the stack, look for large amounts of local storage allocated on the stack ("sub esp, 1000", for example), and look for large portions of that data being accessed by an offset from a different register from esp. For instance:
:_MyFunction3
    push ebp
    mov ebp, esp
    sub esp, 256
    lea ebx, [esp + 0x00]
    mov [ebx + 0], 0x00
is a good sign of an array being created on the stack. Granted, an optimizing compiler might just want to offset from esp instead, so you will need to be careful.
Spotting an Array in Memory
Arrays in memory, such as global arrays, or arrays which have initial data (remember, initialized data is created in the .data section in memory), will be accessed as offsets from a hardcoded address in memory:
:_MyFunction4
    push ebp
    mov ebp, esp
    mov esi, 0x77651004
    mov ebx, 0x00000000
    mov [esi + ebx], 0x00
It needs to be kept in mind that structures and classes might be accessed in a similar manner, so the reverser needs to remember that all the data objects in an array are of the same type, that they are sequential, and that they will often be handled in a loop of some sort. Also (and this might be the most important part), each element in an array may be accessed by a variable offset from the base.
Since most times an array is accessed through a computed index, not through a constant, the compiler will likely use the following to access an element of the array:
mov [ebx + eax], 0x00
If the array holds elements larger than 1 byte (for char), the index will need to be multiplied by the size of the element, yielding code similar to the following:
    mov [ebx + eax * 4], 0x11223344 ; access to an array of DWORDs, e.g. arr[i] = 0x11223344
    ...
    imul eax, eax, 20               ; access to an array of structs, each 20 bytes long
    lea edi, [ebx + eax]            ; e.g. ptr = &arr[i]
This pattern can be used to distinguish between accesses to arrays and accesses to structure data members.
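The scaling can be demonstrated from C itself. The helper below is hypothetical; it simply recomputes an element address by hand, the same way the [ebx + eax*4] pattern does:

```c
#include <assert.h>
#include <stddef.h>

/* Recomputes &arr[i] manually: base address plus index times element size,
   which is exactly what [ebx + eax*4] encodes for 4-byte elements. */
unsigned int read_scaled(unsigned int *arr, size_t i)
{
    unsigned char *base = (unsigned char *)arr;     /* ebx            */
    size_t byte_offset = i * sizeof(unsigned int);  /* eax * 4        */
    return *(unsigned int *)(base + byte_offset);   /* [ebx + eax*4]  */
}
```

The compiler performs this same base-plus-scaled-index arithmetic implicitly for every arr[i] expression, which is why the scale factor in the addressing mode reveals the element size.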
Structures
All C programmers are going to be familiar with the following syntax:
struct MyStruct
{
    int FirstVar;
    double SecondVar;
    unsigned short int ThirdVar;
};
It's called a structure (Pascal programmers may know a similar concept as a "record").
Structures may be very big or very small, and they may contain all sorts of different data. Structures may look very similar to arrays in memory, but a few key points need to be remembered: structures do not need to contain data fields of all the same type, structure fields are often 4-byte aligned (not sequential), and each element in a structure has its own offset. It therefore makes no sense to reference a structure element by a variable offset from the base.
Take a look at the following structure definition:
struct MyStruct2
{
    long value1;
    short value2;
    long value3;
};
Assuming the pointer to the base of this structure is loaded into ebx, we can access these members in one of two schemes:
The first arrangement is the most common, but it clearly leaves open an entire memory word (2 bytes) at offset +6, which is not used at all. Compilers occasionally allow the programmer to manually specify the offset of each data member, but this isn't always the case. The second example also has the benefit that the reverser can easily identify that each data member in the structure is a different size.
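The padding in the first arrangement can be checked from C with offsetof. The assertions in the sketch below are expressed relative to sizeof(long), so they hold under typical ABIs regardless of whether long is 4 or 8 bytes:

```c
#include <assert.h>
#include <stddef.h>

/* The structure from the text: value2 is a short, so the compiler inserts
   padding bytes after it to keep value3 aligned on a long boundary. */
struct MyStruct2
{
    long value1;    /* offset 0                     */
    short value2;   /* offset sizeof(long)          */
    long value3;    /* offset 2*sizeof(long), after padding */
};
```

The padding after value2 is exactly the unused memory word at offset +6 that the text describes (for 4-byte longs); the reverser sees it as a gap that no instruction ever touches.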
Consider now the following function:
:_MyFunction
    push ebp
    mov ebp, esp
    lea ecx, SS:[ebp + 8]
    mov [ecx + 0], 0x0000000A
    mov [ecx + 4], ecx
    mov [ecx + 8], 0x0000000A
    mov esp, ebp
    pop ebp
    ret
The function clearly takes a pointer to a data structure as its first argument. Also, each data member is the same size (4 bytes), so how can we tell if this is an array or a structure? To answer that question, we need to remember one important distinction between structures and arrays: the elements in an array are all of the same type, the elements in a structure do not need to be the same type. Given that rule, it is clear that one of the elements in this structure is a pointer (it points to the base of the structure itself!) and the other two fields are loaded with the hex value 0x0A (10 in decimal), which is certainly not a valid pointer on any system I have ever used. We can then partially recreate the structure and the function code below:
struct MyStruct3
{
    long value1;
    void *value2;
    long value3;
};

void MyFunction2(struct MyStruct3 *ptr)
{
    ptr->value1 = 10;
    ptr->value2 = ptr;
    ptr->value3 = 10;
}
As a quick aside note, notice that this function doesn't load anything into eax, and therefore it doesn't return a value.
Advanced Structures
Let's say we have the following situation in a function:
:MyFunction1
    push ebp
    mov ebp, esp
    mov esi, [ebp + 8]
    lea ecx, SS:[esi + 8]
    ...
What is happening here? First, esi is loaded with the value of the function's first parameter ([ebp + 8]). Then, ecx is loaded with a pointer to offset +8 from esi. It looks like we have 2 pointers accessing the same data structure!
The function in question could easily be one of the following 2 prototypes:
struct MyStruct1
{
    DWORD value1;
    DWORD value2;
    struct MySubStruct1
    {
        ...
struct MyStruct2
{
    DWORD value1;
    DWORD value2;
    DWORD array[LENGTH];
    ...
One pointer offset from another pointer in a structure often means a complex data structure. There are far too many combinations of structures and arrays, however, so this wikibook will not spend too much time on this subject.
Identifying Structs and Arrays
Array elements and structure fields are both accessed as offsets from the array/structure pointer. When disassembling, how do we tell these data structures apart? Here are some pointers:
- Array elements are not meant to be accessed individually; they are typically accessed using a variable offset.
- Arrays are frequently accessed in a loop. Because arrays typically hold a series of similar data items, the best way to access them all is usually a loop. Specifically,
for(x = 0; x < length_of_array; x++) style loops are often used to access arrays, although there can be others.
- All the elements in an array have the same data type.
- Struct fields are typically accessed using constant offsets.
- Struct fields are typically not accessed in order, and are also not accessed using loops.
- Struct fields are not typically all the same data type, or the same data width
Linked Lists and Binary Trees
Two common structures used when programming are linked lists and binary trees. These two structures in turn can be made more complicated in a number of ways. Shown in the images below are examples of a linked list structure and a binary tree structure.
Each node in a linked list or a binary tree contains some amount of data, and a pointer (or pointers) to other nodes. Consider the following asm code example:
loop_top:
    cmp [ebp + 0], 10
    je loop_end
    mov ebp, [ebp + 4]
    jmp loop_top
loop_end:
At each loop iteration, a data value at [ebp + 0] is compared with the value 10. If the two are equal, the loop is ended. If the two are not equal, however, the pointer in ebp is updated with a pointer at an offset from ebp, and the loop is continued. This is a classic linked-list search technique. It is analogous to the following C code:
struct node
{
    int data;
    struct node *next;
};

struct node *x;
...
while(x->data != 10)
{
    x = x->next;
}
Binary trees are the same, except two different pointers will be used (the right and left branch pointers).
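The assembly loop above maps directly onto the C search. Here is a small self-contained sketch; the node values are arbitrary, and the list is assumed to contain the value 10, just as the assembly assumes:

```c
#include <assert.h>
#include <stddef.h>

struct node
{
    int data;            /* lives at [ptr + 0] on a 32-bit target */
    struct node *next;   /* lives at [ptr + 4] on a 32-bit target */
};

/* Mirrors the loop: cmp [ebp+0],10 / je / mov ebp,[ebp+4] / jmp. Like the
   assembly, this never terminates if no node holds the value 10. */
struct node *find_ten(struct node *x)
{
    while(x->data != 10)
        x = x->next;
    return x;
}
```

Reusing the pointer register (ebp in the listing) as the loop variable is the telltale sign: an array walk adds a constant stride to an index, whereas a list walk reloads the pointer from memory each iteration.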
Objects and Classes
Object-Oriented Programming
Object-Oriented (OO) programming provides for us a new unit of program structure to contend with: the Object. This chapter will look at disassembled classes from C++. This chapter will not deal directly with COM, but it will work to set a lot of the groundwork for future discussions in reversing COM components (Windows users only).
Classes
A basic class that has not inherited anything can be broken into two parts, the variables and the methods. The non-static variables are shoved into a simple data structure while the methods are compiled and called like every other function.
When you start adding in inheritance and polymorphism, things get a little more complicated. For the purposes of simplicity, the structure of an object will be described in terms of having no inheritance. At the end, however, inheritance and polymorphism will be covered.
Variables
All static variables defined in a class reside in the static region of memory for the entire duration of the application. Every other variable defined in the class is placed into a data structure known as an object. Typically, when the constructor is called, the variables are placed into the object in sequential order; see Figure 1.
A:
class ABC123
{
public:
    int a, b, c;
    ABC123() : a(1), b(2), c(3) {};
};
B:
0x00200000 dd 1   ;int a
0x00200004 dd 2   ;int b
0x00200008 dd 3   ;int c
However, the compiler typically needs the variables to be separated into sizes that are multiples of a word (2 bytes) in order to locate them. Not all variables fit this requirement, namely char arrays, so some unused bytes may be used to pad the variables so they meet this size requirement. This is illustrated in Figure 2.
A:
class ABC123
{
public:
    int a;
    char b[3];
    double c;
    ABC123() : a(1), c(3)
    {
        strcpy(b, "02");
    };
};
B:
0x00200000 dd 1          ;int a                   ; offset = abc123 + 0*word_size
0x00200004 db '0'        ;b[0] = '0'              ; offset = abc123 + 2*word_size
0x00200005 db '2'        ;b[1] = '2'
0x00200006 db 0          ;b[2] = null
0x00200007 db 0          ;<= UNUSED BYTE
0x00200008 dd 0x00000000 ;double c, lower 32 bits ; offset = abc123 + 4*word_size
0x0020000C dd 0x40080000 ;double c, upper 32 bits
In order for the application to access one of these object variables, an object pointer needs to be offset to find the desired variable. The offset of every variable is known by the compiler and written into the object code wherever it's needed. Figure 3 shows how to offset a pointer to retrieve variables.
;abc123 = pointer to object
mov eax, [abc123]   ;eax = &a ;offset = abc123 + 0*word_size = abc123
mov ebx, [abc123+4] ;ebx = &b ;offset = abc123 + 2*word_size = abc123+4
mov ecx, [abc123+8] ;ecx = &c ;offset = abc123 + 4*word_size = abc123+8
Figure 3: This shows how to offset a pointer to retrieve variables. The first line places the address of variable 'a' into eax. The second line places the address of variable 'b' into ebx. And the last line places the address of variable 'c' into ecx.
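A C struct with the same members reproduces the layout of Figure 2, padding byte included. This is a sketch: the exact offsets assume a typical x86 ABI with 4-byte ints:

```c
#include <assert.h>
#include <stddef.h>

/* C mirror of the ABC123 data layout: the compiler pads b[] so that the
   double that follows lands on offset 8, leaving one unused byte at 7. */
struct ABC123
{
    int a;       /* offset 0                        */
    char b[3];   /* offsets 4..6, one pad byte at 7 */
    double c;    /* offset 8                        */
};
```

These offsetof values are the same constants the compiler bakes into every member access, which is what the reverser recovers when reconstructing the type.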
Methods
At a low level, there is almost no difference between a function and a method. When decompiling, it can sometimes be hard to tell a difference between the two. They both reside in the text memory space, and both are called the same way. An example of how a method is called can be seen in Figure 4.
A:
//method call abc123->foo(1, 2, 3);
B:
push 3          ; int c
push 2          ; int b
push 1          ; int a
push [ebp-4]    ; the address of the object
call 0x00434125 ; call to method
A notable characteristic in a method call is the address of the object being passed in as an argument. This, however, is not always a good indicator. Figure 5 shows a function whose first argument is an object passed in by reference. The result is a call sequence that looks identical to a method call.
A:
//function call foo(abc123, 1, 2, 3);
B:
push 3          ; int c
push 2          ; int b
push 1          ; int a
push [ebp+4]    ; the address of the object
call 0x00498372 ; call to function
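The symmetry is easy to see from C, where a "method" is simply a function whose first parameter is the object's address. The struct and function names below are hypothetical:

```c
#include <assert.h>

struct ABC123v { int a, b, c; };

/* A free function taking the object pointer explicitly: at the assembly
   level, this call site is indistinguishable from a method call that
   passes the object address as a hidden argument. */
int foo(struct ABC123v *self, int a, int b, int c)
{
    self->a = a;
    self->b = b;
    self->c = c;
    return a + b + c;
}
```

This is why a decompiler can legitimately render the same call site as either foo(obj, 1, 2, 3) or obj->foo(1, 2, 3); only higher-level information (name mangling, register usage for `this`) distinguishes the two.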
Inheritance & Polymorphism
Inheritance and polymorphism completely change the structure of a class; the object no longer contains just variables, but also pointers to the inherited methods. This is because polymorphism requires the address of a method or inner object to be figured out at runtime.
Take Figure 6 into consideration. How does the application know to call D::one or C::one? The answer is that the compiler figures out a convention in which to order variables and method pointers inside the object such that when they're referenced, the offsets are the same for any object that has inherited its methods and variables.
The abstract class A acts as a blueprint for the compiler, defining an expected structure for any class that inherits it. Every variable defined in class A and every virtual method defined in A will have the exact same offset for any of its children. Figure 7 declares a possible inheritance scheme as well as its structure in memory. Notice how the offset to C::one is the same as D::one, and the offset to C's copy of A::a is the same as D's copy. In this way, our polymorphic loop can just iterate through the array of pointers and know exactly where to find each method.
A:
class A
{
public:
    int a;
    virtual void one() = 0;
};

class B
{
public:
    int b;
    int c;
    virtual void two() = 0;
};

class C : public A
{
public:
    int d;
    void one();
};

class D : public A, public B
{
public:
    int e;
    void one();
    void two();
};
B:
;Object C
0x00200000 dd 0x00423848 ; address of C::one ;offset = 0*word_size
0x00200004 dd 1          ; C's copy of A::a  ;offset = 2*word_size
0x00200008 dd 4          ; C::d              ;offset = 4*word_size

;Object D
0x00200100 dd 0x00412348 ; address of D::one ;offset = 0*word_size
0x00200104 dd 1          ; D's copy of A::a  ;offset = 2*word_size
0x00200108 dd 0x00431255 ; address of D::two ;offset = 4*word_size
0x0020010C dd 2          ; D's copy of B::b  ;offset = 6*word_size
0x00200110 dd 3          ; D's copy of B::c  ;offset = 8*word_size
0x00200114 dd 5          ; D::e              ;offset = 10*word_size
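Figure 7's layout, where each object carries method pointers at fixed offsets, can be sketched in C with function pointers. Treat this purely as an illustration: real compilers usually share one vtable per class rather than embedding pointers in every object:

```c
#include <assert.h>

/* Method-pointer slot at offset 0 and the inherited A::a after it,
   mirroring the figure: a polymorphic caller only needs the slot offset,
   never the concrete type. */
struct A_like
{
    int (*one)(void);   /* slot for the A::one override */
    int a;              /* the inherited copy of A::a   */
};

static int C_one(void) { return 1; }   /* stand-in for C::one */
static int D_one(void) { return 2; }   /* stand-in for D::one */

/* The "polymorphic" call: dispatch through the slot, not the type. */
int call_one(struct A_like *obj)
{
    return obj->one();
}
```

Disassembled, call_one becomes an indirect call through [obj + 0], which is exactly the pattern that identifies virtual dispatch when reversing.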
Classes Vs. Structs
Floating Point Numbers
Floating Point Numbers
This page will talk about how floating point numbers are used in assembly language constructs. This page will not talk about new constructs, it will not explain what the FPU instructions do, how floating point numbers are stored or manipulated, or the differences in floating-point data representations. However, this page will demonstrate briefly how floating-point numbers are used in code and data structures that we have already considered.
The x86 architecture does not have any registers specifically for floating point numbers, but it does have a special stack for them. The floating point stack is built directly into the processor, and has access speeds similar to those of ordinary registers. Notice that the FPU stack is not the same as the regular system stack.
Calling Conventions
With the addition of the floating-point stack, there is an entirely new dimension for passing parameters and returning values. We will examine our calling conventions here, and see how they are affected by the presence of floating-point numbers. These are the functions that we will be assembling, using both GCC, and cl.exe:
__cdecl double MyFunction1(double x, double y, float z) { return (x + 1.0) * (y + 2.0) * (z + 3.0); } __fastcall double MyFunction2(double x, double y, float z) { return (x + 1.0) * (y + 2.0) * (z + 3.0); } __stdcall double MyFunction3(double x, double y, float z) { return (x + 1.0) * (y + 2.0) * (z + 3.0); }
CDECL
Here is the cl.exe assembly listing for MyFunction1:
PUBLIC _MyFunction11 PROC NEAR ; Line 2 push ebp mov ebp, esp ; Line 3 4 pop ebp ret 0 _MyFunction1 ENDP _TEXT ENDS
Our first question is this: are the parameters passed on the stack, or on the floating-point register stack, or some place different entirely? Key to this question, and to this function is a knowledge of what fld and fstp do. fld (Floating-point Load) pushes a floating point value onto the FPU stack, while fstp (Floating-Point Store and Pop) moves a floating point value from ST0 to the specified location, and then pops the value from ST0 off the stack entirely. Remember that double values in cl.exe are treated as 8-byte storage locations (QWORD), while floats are only stored as 4-byte quantities (DWORD). It is also important to remember that floating point numbers are not stored in a human-readable form in memory, even if the reader has a solid knowledge of binary. Remember, these aren't integers. Unfortunately, the exact format of floating point numbers is well beyond the scope of this chapter.
x is offset +8, y is offset +16, and z is offset +24 from ebp. Therefore, z is pushed first, x is pushed last, and the parameters are passed right-to-left on the regular stack not the floating point stack. To understand how a value is returned however, we need to understand what fmulp does. fmulp is the "Floating-Point Multiply and Pop" instruction. It performs the instructions:
ST1 := ST1 * ST0 FPU POP ST0
This multiplies ST(1) and ST(0) and stores the result in ST(1). Then, ST(0) is marked empty and stack pointer is incremented. Thus, contents of ST(1) are on the top of the stack. So the top 2 values are multiplied together, and the result is stored on the top of the stack. Therefore, in our instruction above, "fmulp ST(1), ST(0)", which is also the last instruction of the function, we can see that the last result is stored in ST0. Therefore, floating point parameters are passed on the regular stack, but floating point results are passed on the FPU stack.
One final note is that MyFunction2 cleans its own stack, as referenced by the ret 20 command at the end of the listing. Because none of the parameters were passed in registers, this function appears to be exactly what we would expect an STDCALL function would look like: parameters passed on the stack from right-to-left, and the function cleans its own stack. We will see below that this is actually a correct assumption.
For comparison, here is the GCC listing:
LC1: .long 0 .long 1073741824 .align 8 LC2: .long 0 .long 1074266112 .globl _MyFunction1 .def _MyFunction1; .scl 2; .type 32; .endef _MyFunction1:1 faddp %st, %st(1) fmulp %st, %st(1) flds 24(%ebp) fldl LC2 faddp %st, %st(1) fmulp %st, %st(1) leave ret .align 8
This is a very difficult listing, so we will step through it (albeit quickly). 16 bytes of extra space is allocated on the stack. Then, using a combination of fldl and fstpl instructions, the first 2 parameters are moved from offsets +8 and +16, to offsets -8 and -16 from ebp. Seems like a waste of time, but remember, optimizations are off. fld1 loads the floating point value 1.0 onto the FPU stack. faddp then adds the top of the stack (1.0), to the value in ST1 ([ebp - 8], originally [ebp + 8]).
FASTCALL
Here is the cl.exe listing for MyFunction2:
PUBLIC @MyFunction2@20 PROC NEAR ; Line 7 push ebp mov ebp, esp ; Line 8 9 pop ebp ret 20 ; 00000014H @MyFunction2@20 ENDP _TEXT ENDS
We can see that this function is taking 20 bytes worth of parameters, because of the @20 decoration at the end of the function name. This makes sense, because the function is taking two double parameters (8 bytes each), and one float parameter (4 bytes each). This is a grand total of 20 bytes. We can notice at a first glance, without having to actually analyze or understand any of the code, that there is only one register being accessed here: ebp. This seems strange, considering that FASTCALL passes its regular 32-bit arguments in registers. However, that is not the case here: all the floating-point parameters (even z, which is a 32-bit float) are passed on the stack. We know this, because by looking at the code, there is no other place where the parameters could be coming from.
Notice also that fmulp is the last instruction performed again, as it was in the CDECL example. We can infer then, without investigating too deeply, that the result is passed at the top of the floating-point stack.
Notice also that x (offset [ebp + 8]), y (offset [ebp + 16]) and z (offset [ebp + 24]) are pushed in reverse order: z is first, x is last. This means that floating point parameters are passed in right-to-left order, on the stack. This is exactly the same as CDECL code, although only because we are using floating-point values.
Here is the GCC assembly listing for MyFunction2:
This is a tricky piece of code, but luckily we don't need to read it very close to find what we are looking for. First off, notice that no other registers are accessed besides ebp. Again, GCC passes all floating point values (even the 32-bit float, z) on the stack. Also, the floating point result value is passed on the top of the floating point stack.
We can see again that GCC is doing something strange at the beginning, taking the values on the stack from [ebp + 8] and [ebp + 16], and moving them to locations [ebp - 8] and [ebp - 16], respectively. Immediately after being moved, these values are loaded onto the floating point stack and arithmetic is performed. z isn't loaded till later, and isn't ever moved to [ebp - 24], despite the pattern.
LC5 and LC6 are constant values, that most likely represent floating point values (because the numbers themselves, 1073741824 and 1074266112 don't make any sense in the context of our example functions. Notice though that both LC5 and LC6 contain two .long data items, for a total of 8 bytes of storage? They are therefore most definitely double values.
STDCALL
Here is the cl.exe listing for MyFunction3:
PUBLIC _MyFunction3@20 PROC NEAR ; Line 12 push ebp mov ebp, esp ; Line 13 14 pop ebp ret 20 ; 00000014H _MyFunction3@20 ENDP _TEXT ENDS END
x is the highest on the stack, and z is the lowest, therefore these parameters are passed from right-to-left. We can tell this because x has the smallest offset (offset [ebp + 8]), while z has the largest offset (offset [ebp + 24]). We see also from the final fmulp instruction that the return value is passed on the FPU stack. This function also cleans the stack itself, as noticed by the call 'ret 20. It is cleaning exactly 20 bytes off the stack which is, incidentally, the total amount that we passed to begin with. We can also notice that the implementation of this function looks exactly like the FASTCALL version of this function. This is true because FASTCALL only passes DWORD-sized parameters in registers, and floating point numbers do not qualify. This means that our assumption above was correct.
Here is the GCC listing for MyFunction3:
.align 8 LC9: .long 0 .long 1073741824 .align 8 LC10: .long 0 .long 1074266112 .globl @MyFunction3@20 .def @MyFunction3@20; .scl 2; .type 32; .endef @MyFunction9 faddp %st, %st(1) fmulp %st, %st(1) flds 24(%ebp) fldl LC10 faddp %st, %st(1) fmulp %st, %st(1) leave ret $20
Here we can also see, after all the opening nonsense, that [ebp - 8] (originally [ebp + 8]) is value x, and that [ebp - 24] (originally [ebp - 24]) is value z. These parameters are therefore passed right-to-left. Also, we can deduce from the final fmulp instruction that the result is passed in ST0. Again, the STDCALL function cleans its own stack, as we would expect.
Conclusions
Floating point values are passed as parameters on the stack, and are passed on the FPU stack as results. Floating point values do not get put into the general-purpose integer registers (eax, ebx, etc...), so FASTCALL functions that only have floating point parameters collapse into STDCALL functions instead. double values are 8-bytes wide, and therefore will take up 8-bytes on the stack. float values however, are only 4-bytes wide.
Float to Int Conversions
FPU Compares and Jumps
Floating Point Examples
Example: Floating Point Arithmetic
Here is the C source code, and the GCC assembly listing of a simple C language function that performs simple floating-point arithmetic. Can you determine what the numerical values of LC5 and LC6 are?
__fastcall double MyFunction2(double x, double y, float z) { return (x + 1.0) * (y + 2.0) * (z + 3.0); }
For this, we don't even need a floating-point number calculator, although you are free to use one if you wish (and if you can find a good one). LC5 is added to [ebp - 16], which we know to be y, and LC6 is added to [ebp - 24], which we know to be z. Therefore, LC5 is the number "2.0", and LC6 is the number "3.0". Notice that the fld1 instruction automatically loads the top of the floating-point stack with the constant value "1.0".
Difficulties
Code Optimization
Code Optimization
An optimizing compiler is perhaps one of the most complicated, most powerful, and most interesting programs in existence. This chapter will talk about optimizations, although this chapter will not include a table of common optimizations.
Stages of Optimizations
There are two times when a compiler can perform optimizations: first, in the intermediate representation, and second, during the code generation.
Intermediate Representation Optimizations
While in the intermediate representation, a compiler can perform various optimizations, often based on dataflow analysis techniques. For example, consider the following code fragment:
x = 5; if(x != 5) { //loop body }
An optimizing compiler might notice that at the point of "if (x != 5)", the value of x is always the constant "5". This allows substituting "5" for x resulting in "5 != 5". Then the compiler notices that the resulting expression operates entirely on constants, so the value can be calculated now instead of at run time, resulting in optimizing the conditional to "if (false)". Finally the compiler sees that this means the body of the if conditional will never be executed, so it can omit the entire body of the if conditional altogether.
Consider the reverse case:
x = 5; if(x == 5) { //loop body }
In this case, the optimizing compiler would notice that the IF conditional will always be true, and it won't even bother writing code to test x.
Control Flow Optimizations
Another set of optimization which can be performed either at the intermediate or at the code generation level are control flow optimizations. Most of these optimizations deal with the elimination of useless branches. Consider the following code:
if(A) { if(B) { C; } else { D; } end_B: } else { E; } end_A:
In this code, a simplistic compiler would generate a jump from the C block to end_B, and then another jump from end_B to end_A (to get around the E statements). Clearly jumping to a jump is inefficient, so optimizing compilers will generate a direct jump from block C to end_A.
This unfortunately will make the code more confused and will prevent a nice recovery of the original code. For complex functions, it's possible that one will have to consider the code made of only if()-goto; sequences, without being able to identify higher level statements like if-else or loops.
The process of identifying high level statement hierarchies is called "code structuring".
Code Generation Optimizations
Once the compiler has sifted through all the logical inefficiencies in your code, the code generator takes over. Often the code generator will replace certain slow machine instructions with faster machine instructions.
For instance, the instruction:
beginning: ... loopnz beginning
operates much slower than the equivalent instruction set:
beginning: ... dec ecx jne beginning
So then why would a compiler ever use a loopxx instruction? The answer is that most optimizing compilers never use a loopxx instruction, and therefore as a reverser, you will probably never see one used in real code.
What about the instruction:
mov eax, 0
The mov instruction is relatively quick, but a faster part of the processor is the arithmetic unit. Therefore, it makes more sense to use the following instruction:
xor eax, eax
because xor operates in very few processor cycles (and saves three bytes at the same time), and is therefore faster than a "mov eax, 0". The only drawback of a xor instruction is that it changes the processor flags, so it cannot be used between a comparison instruction and the corresponding conditional jump.
Loop Unwinding
When a loop needs to run for a small, but definite number of iterations, it is often better to unwind the loop in order to reduce the number of jump instructions performed, and in many cases prevent the processor's branch predictor from failing. Consider the following C loop, which calls the function
MyFunction() 5 times:
for(x = 0; x < 5; x++) { MyFunction(); }
Converting to assembly, we see that this becomes, roughly:
mov eax, 0 loop_top: cmp eax, 5 jge loop_end call _MyFunction inc eax jmp loop_top
Each loop iteration requires the following operations to be performed:
- Compare the value in eax (the variable "x") to 5, and jump to the end if greater then or equal
- Increment eax
- Jump back to the top of the loop.
Notice that we remove all these instructions if we manually repeat our call to
MyFunction():
call _MyFunction call _MyFunction call _MyFunction call _MyFunction call _MyFunction
This new version not only takes up less disk space because it uses fewer instructions, but also runs faster because fewer instructions are executed. This process is called Loop Unwinding.
Inline Functions
The C and C++ languages allow the definition of an
inline type of function. Inline functions are functions which are treated similarly to macros. During compilation, calls to an inline function are replaced with the body of that function, instead of performing a
call instruction. In addition to using the
inline keyword to declare an inline function, optimizing compilers may decide to make other functions inline as well.
Function inlining.
It is not necessarily possible to determine whether identical portions of code were created originally as macros, inline functions, or were simply copy and pasted. However, when disassembling it can make your work easier to separate these blocks out into separate inline functions, to help keep the code straight.
Optimization Examples
Example: Optimized vs Non-Optimized Code
The following example is adapted from an algorithm presented in Knuth(vol 1, chapt 1) used to find the greatest common denominator of 2 integers. Compare the listing file of this function when compiler optimizations are turned on and off.
/*/ }
Compiling with the Microsoft C compiler, we generate a listing file using no optimization:
PUBLIC _EuclidsGCD _TEXT SEGMENT _r$ = -8 ; size = 4 _q$ = -4 ; size = 4 _m$ = 8 ; size = 4 _n$ = 12 ; size = 4 _EuclidsGCD PROC NEAR ; Line 2 push ebp mov ebp, esp sub esp, 8 $L477: ; Line 4 mov eax, 1 test eax, eax je SHORT $L473 ; Line 6 mov eax, DWORD PTR _m$[ebp] cdq idiv DWORD PTR _n$[ebp] mov DWORD PTR _q$[ebp], eax ; Line 7 mov eax, DWORD PTR _m$[ebp] cdq idiv DWORD PTR _n$[ebp] mov DWORD PTR _r$[ebp], edx ; Line 8 cmp DWORD PTR _r$[ebp], 0 jne SHORT $L479 ; Line 10 mov eax, DWORD PTR _n$[ebp] jmp SHORT $L473 $L479: ; Line 12 mov ecx, DWORD PTR _n$[ebp] mov DWORD PTR _m$[ebp], ecx ; Line 13 mov edx, DWORD PTR _r$[ebp] mov DWORD PTR _n$[ebp], edx ; Line 14 jmp SHORT $L477 $L473: ; Line 15 mov esp, ebp pop ebp ret 0 _EuclidsGCD ENDP _TEXT ENDS END
Notice how there is a very clear correspondence between the lines of C code, and the lines of the ASM code. the addition of the "; line x" directives is very helpful in that respect.
Next, we compile the same function using a series of optimizations to stress speed over size:
cl.exe /Tceuclids.c /Fa /Ogt2
and we produce the following listing:
As you can see, the optimized version is significantly shorter then the non-optimized version. Some of the key differences include:
- The optimized version does not prepare a standard stack frame. This is important to note, because many times new reversers assume that functions always start and end with proper stack frames, and this is clearly not the case. EBP isnt being used, ESP isnt being altered (because the local variables are kept in registers, and not put on the stack), and no subfunctions are called. 5 instructions are cut by this.
- The "test EAX, EAX" series of instructions in the non-optimized output, under ";line 4" is all unnecessary. The while-loop is defined by "while(1)" and therefore the loop always continues. this extra code is safely cut out. Notice also that there is no unconditional jump in the loop like would be expected: the "if(r == 0) return n;" instruction has become the new loop condition.
- The structure of the function is altered greatly: the division of m and n to produce q and r is performed in this function twice: once at the beginning of the function to initialize, and once at the end of the loop. Also, the value of r is tested twice, in the same places. The compiler is very liberal with how it assigns storage in the function, and readily discards values that are not needed.
Example: Manual Optimization
The following lines of assembly code are not optimized, but they can be optimized very easily. Can you find a way to optimize these lines?
mov eax, 1 test eax, eax je SHORT $L473
The code in this line is the code generated for the "while( 1 )" C code, to be exact, it represents the loop break condition. Because this is an infinite loop, we can assume that these lines are unnecessary.
"mov eax, 1" initializes eax.
the test immediately afterwards tests the value of eax to ensure that it is nonzero. because eax will always be nonzero (eax = 1) at this point, the conditional jump can be removed along whith the "mov" and the "test".
The assembly is actually checking whether 1 equals 1. Another fact is, that the C code for an infinite FOR loop:
for( ; ; ) { ... }
would not create such a meaningless assembly code to begin with, and is logically the same as "while( 1 )".
Example: Trace Variables
Here are the C code and the optimized assembly listing from the EuclidGCD function, from the example above. Can you determine which registers contain the variables r and q?
/
At the beginning of the function, eax contains m, and esi contains n. When the instruction "idiv esi" is executed, eax contains the quotient (q), and edx contains the remainder (r). The instruction "mov ecx, edx" moves r into ecx, while q is not used for the rest of the loop, and is therefore discarded.
Example: Decompile Optimized Code
Below is the optimized listing file of the EuclidGCD function, presented in the examples above. Can you decompile this assembly code listing into equivalent "optimized" C code? How is the optimized version different in structure from the non-optimized version?
Altering the conditions to maintain the same structure gives us:
int EuclidsGCD(int m, int n) { int r; r = m % n; if(r != 0) { do { m = n; r = m % r; n = r; }while(r != 0) } return n; }
It is up to the reader to compile this new "optimized" C code, and determine if there is any performance increase. Try compiling this new code without optimizations first, and then with optimizations. Compare the new assembly listings to the previous ones.
Example: Instruction Pairings
- Q
- Why does the dec/jne combo operate faster than the equivalent loopnz?
- A
- The dec/jnz pair operates faster then a loopsz for several reasons. First, dec and jnz pair up in the different modules of the netburst pipeline, so they can be executed simultaneously. Top that off with the fact that dec and jnz both require few cycles to execute, while the loopnz (and all the loop instructions, for that matter) instruction takes more cycles to complete. loop instructions are rarely seen output by good compilers.
Example: Avoiding Branches
Below is an assembly version of the expression
c ? d : 0. There is no branching in the code, so how does it work?
; ecx = c and edx = d ; eax will contain c ? d : 0 (eax = d if c is not zero, otherwise eax = 0) neg ecx sbb eax, eax and eax, edx ret
This is an example of using various arithmetic instructions to avoid branching. The neg instruction sets the carry flag if c is not zero; otherwise, it clears the carry flag. The next line depends on this. If the carry flag is set, then sbb results in
eax = eax - eax - 1 = 0xffffffff. Otherwise,
eax = eax - eax = 0. Finally, performing an and on this result ensures that if ecx was not zero in the first place, eax will contain edx, and zero otherwise.
Example: Duff's Device
What does the following C code function do? Is it useful? Why or why not?
void MyFunction(int *arrayA, int *arrayB, int cnt) { switch(cnt % 6) { while(cnt != 0) { case 0: arrayA[--cnt] = arrayB[cnt]; case 5: arrayA[--cnt] = arrayB[cnt]; case 4: arrayA[--cnt] = arrayB[cnt]; case 3: arrayA[--cnt] = arrayB[cnt]; case 2: arrayA[--cnt] = arrayB[cnt]; case 1: arrayA[--cnt] = arrayB[cnt]; } } }
This piece of code is known as a Duff's device or "Duff's machine". It is used to partially unwind a loop for efficiency. Notice the strange way that the
while() is nested inside the
switch statement? Two arrays of integers are passed to the function, and at each iteration of the while loop, 6 consecutive elements are copied from
arrayB to
arrayA. The switch statement, since it is outside the while loop, only occurs at the beginning of the function. The modulo is taken of the variable
cnt with respect to 6. If cnt is not evenly divisible by 6, then the modulo statement is going to start the loop off somewhere in the middle of the rotation, thus preventing the loop from causing a buffer overflow without having to test the current count after each iteration.
Duff's Device is considered one of the more efficient general-purpose methods for copying strings, arrays, or data streams.
Code Obfuscation
Code Obfuscation.
Debugger Detectors
Detecting Debuggers
Timeouts
OllyDbg is a popular 32-bit usermode debugger. Unfortunately, the last few releases, including the latest version (v1.10) contain a vulnerability in the handling of the Win32 API function OutputDebugString(). [7].
Resources
Resources
Wikimedia Resources
Wikibooks
- X86 Assembly
- Subject:Assembly languages
- Compiler Construction
- Floating Point
- C Programming
- C++ Programming
Wikipedia
External Resources
External Links
- The MASM Project:
- Randall Hyde's Homepage:
- Borland Turbo Assembler:
- NASM Project Homepage:
- FASM Homepage:
- DCC Decompiler: [8]
- Boomerang Decompiler Project: [9]
- Microsoft debugging tools main page:
- Solaris observation and debugging tools main page:
- Free Debugging Tools, Static Source Code Analysis Tools, Bug Trackers
- Microsoft Developers Network (MSDN):
- Gareth Williams: http: //gareththegeek.ga.funpic.de/
- B. Luevelsmeyer "PE Format Description": PE format description
- TheirCorp "The Unofficial TypeLib Data Format Specification":
- MSDN Calling Convention page: [10]
- Dictionary of Algorithms and Data Structures
- Charles Petzold's Homepage:
- Donald Knuth's Homepage:
- "THE ISA AND PC/104 BUS" by Mark Sokos 2000
- "Practically Reversing CRC" by Bas Westerbaan 2005
- "CRC and how to Reverse it" by anarchriz 1999
- "Reverse Engineering is a Way of Life" by Matthew Russotto
- "the Reverse and Reengineering Wiki"
- F-Secure Khallenge III: 2008 Reverse Engineering competition (is this an annual challenge?)
- "Breaking Eggs And Making Omelettes: Topics On Multimedia Technology and Reverse Engineering"
- "Reverse Engineering Stack Exchange"
Books
- Yurichev, Dennis, "An Introduction To Reverse Engineering for Beginners". Online book:
- Eilam, Eldad. "Reversing: Secrets of Reverse Engineering." 2005. Wiley Publishing Inc. ISBN 0764574817
- Hyde, Randall. "The Art of Assembly Language," No Starch, 2003 ISBN 1886411972
- Aho, Alfred V. et al. "Compilers: Principles, Techniques and Tools," Addison Wesley, 1986. ISBN: 0321428900
- Steven Muchnick, "Advanced Compiler Design & Implementation," Morgan Kaufmann Publishers, 1997. ISBN 1-55860-320-4
- Kernighan and Ritchie, "The C Programming Language", 2nd Edition, 1988, Prentice Hall.
- Petzold, Charles. "Programming Windows, Fifth Edition," Microsoft Press, 1999
- Hart, Johnson M. "Win32 System Programming, Second Edition," Addison Wesley, 2001
- Gordon, Alan. "COM and COM+ Programming Primer," Prentice Hall, 2000
- Nebbett, Gary. "Windows NT/2000 Native API Reference," Macmillan, 2000
- Levine, John R. "Linkers and Loaders," Morgan-Kauffman, 2000
- Knuth, Donald E. "The Art of Computer Programming," Vol 1, 1997, Addison Wesley.
- MALWARE: Fighting Malicious Code, by Ed Skoudis; Prentice Hall, 2004
- Maximum Linux Security, Second Edition, by Anonymous; Sams, 2001. | https://en.wikibooks.org/wiki/X86_Disassembly/Print_Version | CC-MAIN-2016-44 | refinedweb | 13,684 | 59.33 |
Hello,
I'm trying to move an object along path by touch, using iTween. I'm not very good at coding. I searched the web for days to find a solution but i couldn't. I found iTween example "Path-constrained Characters" and thought I could modify it for my needs.
I'm posting the code and a video of what I managed to do so far. You can find the video here:
This is close to what I want but as you can see in the video, it staggers a lot and most of the time touch looses the object. Also, since I get the distance between touch point and object on path and add it to path position, object moves forward when I touch and drag the opposite direction. I don't want that to happen.
I want the object to move only in one direction on the path and only if I drag it forward. I'm pretty sure this is not the way to do it. However I have no idea how to do it in any other way.
So I would appreciate if anyone can point me in the right direction.
Thanks in advance.
using UnityEngine;
using System.Collections;
public class Controller : MonoBehaviour {
public Transform[] controlPath;
float pathPosition = 0;
Vector3 coordinateOnPath;
Vector2 inputPosition;
GameObject draggedObject;
void OnDrawGizmos(){
iTween.DrawPath(controlPath,Color.blue);
}
void Update() {
coordinateOnPath = iTween.PointOnPath (controlPath, pathPosition % 1);
if (Input.touchCount > 0)
{
inputPosition = Camera.main.ScreenToWorldPoint (Input.GetTouch (0).position);
RaycastHit2D hit = Physics2D.Raycast(inputPosition, inputPosition, 1f, LayerMask.GetMask("Circle"));
if (hit.transform != null) {
draggedObject = hit.transform.gameObject;
float dist = Vector2.Distance (inputPosition, draggedObject.transform.position);
pathPosition += dist * .02f;
draggedObject.transform.position = new Vector3 (coordinateOnPath.x, coordinateOnPath.y, coordinateOnPath.z);
}
}
}
}
I am exactly looking for this for past few days. Did you get it working. @Dek.
Touch controls for Pong game for Android devices
0
Answers
iTween Path-Constrained character loop trouble
0
Answers
Unity - Problem when dragging game object
1
Answer
iTween path constrained character: force original orientation?
0
Answers
iTween, Put on path & arrow keys
1
Answer | https://answers.unity.com/questions/1432195/touch-and-drag-object-on-path-with-itween.html | CC-MAIN-2019-43 | refinedweb | 346 | 52.46 |
IRC log of rif on 2008-10-21
Timestamps are in UTC.
14:30:46 [RRSAgent]
RRSAgent has joined #rif
14:30:47 [RRSAgent]
logging to
14:30:52 [ChrisW]
zakim, this will be rif
14:30:52 [Zakim]
ok, ChrisW; I see SW_RIF()11:00AM scheduled to start in 30 minutes
14:30:59 [ChrisW]
rrsagent, make minutes
14:30:59 [RRSAgent]
I have made the request to generate
ChrisW
14:31:36 [ChrisW]
Meeting: RIF Telecon 21-Oct-08
14:31:41 [ChrisW]
Chair: Chris Welty
14:31:53 [ChrisW]
Agenda:
14:32:06 [ChrisW]
ChrisW has changed the topic to: 21 Oct RIF Telecon Agenda
14:32:26 [ChrisW]
rrsagent, make logs public
14:59:04 [DaveReynolds]
DaveReynolds has joined #rif
15:00:04 [sandro]
sandro has joined #rif
15:00:12 [sandro]
zakim, who is here?
15:00:12 [Zakim]
apparently SW_RIF()11:00AM has ended, sandro
15:00:13 [Zakim]
On IRC I see sandro, DaveReynolds, RRSAgent, ChrisW, Zakim, trackbot
15:00:15 [Zakim]
SW_RIF()11:00AM has now started
15:00:20 [Zakim]
+ +1.914.784.aaaa
15:00:23 [ChrisW]
zakim, aaaa is me
15:00:23 [Zakim]
+ChrisW; got it
15:00:28 [Zakim]
+StuartTaylor
15:00:34 [Zakim]
+Sandro
15:00:55 [mdean]
mdean has joined #rif
15:01:12 [StellaMitchell]
StellaMitchell has joined #rif
15:01:29 [Zakim]
+Mike_Dean
15:01:37 [Hassan]
Hassan has joined #rif
15:02:08 [Zakim]
+[IBM]
15:02:15 [StellaMitchell]
zakim, ibm is temporarily me
15:02:15 [Zakim]
+StellaMitchell; got it
15:02:24 [ChrisW]
hassan, will you be able to scribe today?
15:03:00 [Harold]
Harold has joined #rif
15:03:00 [Gary_Hallmark]
Gary_Hallmark has joined #rif
15:03:00 [yzhao]
yzhao has joined #rif
15:03:07 [ChrisW]
zakim, who is on the phone?
15:03:07 [Zakim]
On the phone I see ChrisW, StuartTaylor, Sandro, Mike_Dean, StellaMitchell
15:03:18 [Zakim]
-StuartTaylor
15:03:46 [josb]
josb has joined #rif
15:03:47 [Zakim]
+josb
15:04:01 [Zakim]
+Hassan
15:04:06 [Zakim]
+??P13
15:04:11 [Zakim]
+??P14
15:04:26 [ChrisW]
Scribe: Hassan
15:04:41 [Zakim]
+ +43.158.801.3aabb
15:04:46 [ChrisW]
Last week's minutes:
15:04:57 [ChrisW]
PROPOSED: accept minutes of last telecon
15:05:01 [ChrisW]
RESOLVED: accept minutes of last telecon
15:05:25 [Harold]
zakim, +43.158.801.3aabb is me
15:05:25 [Zakim]
+Harold; got it
15:05:33 [ChrisW]
zakim, take up item 2
15:05:33 [Zakim]
agendum 2. "Action Review" taken up [from ChrisW]
15:05:37 [Hassan]
No telecon on Oct. 28 - meetin cancelled
15:05:39 [Zakim]
+Gary
15:05:49 [Hassan]
s/meetin/meeting/
15:05:53 [ChrisW]
zakim, who is on the phone?
15:05:53 [Zakim]
On the phone I see ChrisW, Sandro, Mike_Dean, StellaMitchell, josb, Hassan, DaveReynolds, ??P14, Harold, Gary
15:06:35 [yzhao]
zakim, ??P14 is me
15:06:35 [Zakim]
+yzhao; got it
15:07:03 [sandro]
action-621 done
15:07:16 [sandro]
ACTION-621: done
15:07:16 [trackbot]
ACTION-621 Start F2F12 wiki page notes added
15:07:23 [sandro]
ACTION-621 done
15:07:27 [sandro]
ACTION-621 complete
15:07:53 [DaveReynolds]
ACTION-621: completed
15:07:54 [trackbot]
ACTION-621 Start F2F12 wiki page notes added
15:07:56 [josb]
ACTION-621 is done
15:08:33 [sandro]
ACTION-621 closed
15:08:34 [trackbot]
ACTION-621 Start F2F12 wiki page closed
15:10:47 [Zakim]
+[IPcaller]
15:11:08 [ChrisW]
zakim, who is on the call
15:11:08 [Zakim]
I don't understand 'who is on the call', ChrisW
15:11:11 [ChrisW]
zakim, who is on the call?
15:11:11 [Zakim]
On the phone I see ChrisW, Sandro, Mike_Dean, StellaMitchell, josb, Hassan (muted), DaveReynolds, yzhao, Harold, Gary, [IPcaller]
15:11:25 [mkifer]
mkifer has joined #rif
15:11:25 [ChrisW]
zakim, [ipcaller] is AdrianP
15:11:25 [Zakim]
+AdrianP; got it
15:13:15 [Zakim]
+Michael_Kifer
15:13:47 [StuartTaylor]
StuartTaylor has joined #rif
15:14:39 [Harold]
zakim, who is on the call?
15:14:39 [Zakim]
On the phone I see ChrisW, Sandro, Mike_Dean, StellaMitchell, josb, Hassan (muted), DaveReynolds, yzhao, Harold, Gary, AdrianP, Michael_Kifer
15:15:04 [StuartTaylor]
Zakim, StuartTaylor is with yzhao
15:15:04 [Zakim]
+StuartTaylor; got it
15:15:49 [sandro]
action-613 closed
15:15:49 [trackbot]
ACTION-613 Put f2f12 on agenda next week closed
15:17:17 [ChrisW]
zakim, take up item 3
15:17:17 [Zakim]
agendum 3. "Core" taken up [from ChrisW]
15:17:41 [ChrisW]
PROPOSED: Core should keep safe disjunction in rule bodies. Implementations can be direct or use a well-known preprocessing step.
15:18:15 [ChrisW]
RESOLVED: Core should keep safe disjunction in rule bodies. Implementations can be direct or use a well-known preprocessing step.
15:18:54 [DaveReynolds]
It was Issue-75 I thinkg
15:19:34 [ChrisW]
PROPOSED: Core should keep safe disjunction in rule bodies. Implementations can be direct or use a well-known preprocessing step.
15:19:44 [ChrisW]
PROPOSED: CLose Issue-75
15:19:59 [ChrisW]
RESOLVED: Close Issue-75
15:20:10 [ChrisW]
action: Chris to close issue 75
15:20:10 [trackbot]
Created ACTION-622 - Close issue 75 [on Christopher Welty - due 2008-10-28].
15:21:18 [Hassan]
Harold discussing 2 sorts of "safeness"
15:21:20 [josb]
I already said last week I'm fine with either choice
15:21:43 [ChrisW]
PROPOSED: Parameterize the conformance clauses of Core with
15:21:43 [ChrisW]
safeness requirements "strict" and "none" (default: "none").
15:22:11 [ChrisW]
PROPOSED: Parameterize the conformance clauses of Core with safeness requirements "strict" and "none" (default: "none").
15:22:51 [ChrisW]
action: chris to put proposal in agenda for next telecon (to close issue-70)
15:22:51 [trackbot]
Created ACTION-623 - Put proposal in agenda for next telecon (to close issue-70) [on Christopher Welty - due 2008-10-28].
15:24:43 [Hassan]
DaveReynolds: Looking for definition of Skolem functions that would suit both BLD and PRD. But this looks like two different concepts.
15:25:15 [Hassan]
ChrisW: Does this mean Skloem functions should not be on Core?
15:25:48 [Hassan]
DaveReynolds: Not necessarily - just that the logical notion is not good for PRD.
15:26:14 [Hassan]
Harold: member is in Core - but not subclass
15:26:41 [ChrisW]
zakim, take up item 4
15:26:41 [Zakim]
agendum 4. "UCR" taken up [from ChrisW]
15:26:42 [Hassan]
ChrisW: these are the 2 things remaining to discuss for Core
15:27:53 [Hassan]
s/Skloem/Skolem/
15:28:35 [Hassan]
AdrianPaschke: Updated some examples in the UCR document to use the canonical syntax
15:29:59 [Hassan]
DaveReynolds: discusses the changes needed in his UCR examples (frames vs. predicates)
15:31:29 [Zakim]
-Mike_Dean
15:32:42 [DaveReynolds]
The 4.2 example seems to use nested frames - I didn't think that was supported in BLD PS.
15:34:02 [Hassan]
ChrisW: looking for reviewers of the UCR document after AP is done with it (two weeks)
15:34:13 [ChrisW]
action: stella to review UCR in two weeks
15:34:13 [trackbot]
Created ACTION-624 - Review UCR in two weeks [on Stella Mitchell - due 2008-10-28].
15:34:21 [ChrisW]
zakim, pick a victim
15:34:21 [Zakim]
Not knowing who is chairing or who scribed recently, I propose Michael_Kifer
15:34:46 [ChrisW]
zakim, pick a victim
15:34:46 [Zakim]
Not knowing who is chairing or who scribed recently, I propose Gary
15:35:06 [ChrisW]
action: Gary to review UCR in two weeks
15:35:07 [trackbot]
Created ACTION-625 - Review UCR in two weeks [on Gary Hallmark - due 2008-10-28].
15:35:19 [ChrisW]
zakim, take up item 5
15:35:19 [Zakim]
agendum 5. "SWC" taken up [from ChrisW]
15:35:43 [josb]
15:36:51 [Hassan]
ChrisW: asking for comments before we publish this? discussion?
15:38:01 [Hassan]
ChrisW: asking for reviewers for the SWC document...
15:38:13 [ChrisW]
action: sandro freeze RDF&OWL
15:38:13 [trackbot]
Created ACTION-626 - Freeze RDF&OWL [on Sandro Hawke - due 2008-10-28].
15:38:23 [ChrisW]
action: Chris to review RDF&OWL
15:38:23 [trackbot]
Created ACTION-627 - Review RDF&OWL [on Christopher Welty - due 2008-10-28].
15:38:58 [sandro]
zakim, who is on the call?
15:38:58 [Zakim]
On the phone I see ChrisW, Sandro, StellaMitchell, josb, Hassan (muted), DaveReynolds, yzhao, Harold, Gary, AdrianP, Michael_Kifer
15:39:01 [Zakim]
yzhao has yzhao, StuartTaylor
15:39:08 [ChrisW]
zakim, pick a victim
15:39:08 [Zakim]
Not knowing who is chairing or who scribed recently, I propose Hassan (muted)
15:39:23 [ChrisW]
zakim, pick a victim
15:39:23 [Zakim]
Not knowing who is chairing or who scribed recently, I propose AdrianP
15:39:39 [ChrisW]
zakim, pick a victim
15:39:39 [Zakim]
Not knowing who is chairing or who scribed recently, I propose AdrianP
15:39:43 [ChrisW]
zakim, pick a victim
15:39:43 [Zakim]
Not knowing who is chairing or who scribed recently, I propose ChrisW
15:39:45 [ChrisW]
zakim, pick a victim
15:39:45 [Zakim]
Not knowing who is chairing or who scribed recently, I propose StuartTaylor
15:40:15 [Hassan]
Stuart are U there?
15:40:23 [StuartTaylor]
sorry ChrisW phone problems again
15:40:33 [StuartTaylor]
yes, I'll do that
15:40:39 [ChrisW]
action: StuartTaylor to review RDF&OWL in two weeks
15:40:39 [trackbot]
Sorry, couldn't find user - StuartTaylor
15:40:48 [yzhao]
I can also review it
15:41:00 [ChrisW]
action: YutingZhao to review RDF&OWL in two weeks
15:41:00 [trackbot]
Sorry, couldn't find user - YutingZhao
15:41:26 [ChrisW]
action: Yuting to review RDF&OWL in two weeks
15:41:44 [ChrisW]
action: Stuart to review RDF&OWL in two weeks
15:42:03 [sandro]
action-600?
15:42:09 [sandro]
issue-1?
15:42:16 [sandro]
trackbot, help?
15:42:17 [trackbot]
Created ACTION-628 - Review RDF&OWL in two weeks [on Yuting Zhao - due 2008-10-28].
15:42:18 [trackbot]
Created ACTION-629 - Review RDF&OWL in two weeks [on Stuart Taylor - due 2008-10-28].
15:42:20 [trackbot]
ACTION-600 -- Christopher Welty to draft revised metadata conformance wording for BLD -- due 2008-10-03 -- CLOSED
15:42:20 [trackbot]
15:42:22 [trackbot]
ISSUE-1 -- This is a test issue. Please ignore. -- CLOSED
15:42:24 [trackbot]
15:42:26 [trackbot]
See
for help
15:43:07 [ChrisW]
zakim, list agenda
15:43:07 [Zakim]
I see 10 items remaining on the agenda:
15:43:08 [Zakim]
1. Admin [from ChrisW]
15:43:08 [Zakim]
2. Action Review [from ChrisW]
15:43:09 [Zakim]
3. Core [from ChrisW]
15:43:09 [Zakim]
4. UCR [from ChrisW]
15:43:10 [Zakim]
5. SWC [from ChrisW]
15:43:10 [Zakim]
6. Test Cases [from ChrisW]
15:43:12 [Zakim]
7. Liason [from ChrisW]
15:43:14 [Zakim]
8. Public Comments [from ChrisW]
15:43:15 [ChrisW]
zakim, take up item 6
15:43:16 [Zakim]
9. Pick Scribe [from ChrisW]
15:43:18 [Zakim]
10. AOB [from ChrisW]
15:43:20 [Zakim]
agendum 6. "Test Cases" taken up [from ChrisW]
15:43:45 [ChrisW]
15:46:03 [Hassan]
Discussing the "Disjuctive Information from Negative Guards" test case
15:46:26 [Hassan]
ChrisW: there are two cases
15:46:47 [Hassan]
s/Discussing/ChrisW: Discussing/
15:47:43 [Hassan]
ChrisW: discussing: Equality in conclusion 1, Equality in conclusion 2, Inconsistent Entailment - all seem to look good
15:47:45 [Zakim]
+Mike_Dean
15:48:38 [Hassan]
ChrisW: discussing No polymorphic symbols, Non-Annotation Entailment - both seem to look good
15:49:11 [Hassan]
ChrisW: discussing
15:49:54 [Hassan]
ChirsW: Annotation Entailment needs to be discussed
15:50:07 [Hassan]
s/ChirsW/ChrisW/
15:50:29 [ChrisW]
PROPOSED: accept TC
15:50:30 [Hassan]
ChrisW: this case is about annotation in OWL
15:50:48 [ChrisW]
RESOLVED: accept TC
15:50:56 [StellaMitchell]
yes
15:51:32 [ChrisW]
15:52:53 [sandro]
sandro: Wow -- that's not what I was expecting. I dont like that. We should allow IRIs.
15:53:27 [josb]
"The argument names in ArgNames are written as unicode strings that must not start with a question mark, "?"."
15:54:01 [DaveReynolds]
a countably infinite set of argument names, ArgNames (disjoint from Const and Var)
15:54:36 [sandro]
sandro: So maybe no one cares that Named Arguments are broken like this, since no one is ever going to use Named Arguments. :-( :-(
15:55:05 [sandro]
_p("
"->4)
15:55:19 [sandro]
(that IS okay)
15:55:42 [sandro]
_p("
"->4) IS OKAY
15:55:49 [josb]
"
"^^xsd:string
15:55:51 [sandro]
_p(<
>->4) NOT OKAY
15:56:04 [Hassan]
JosB: points out that shorthands syntax is interfering with this...
15:56:15 [StellaMitchell]
yes, they are in DTB
15:56:42 [Hassan]
q+
15:57:15 [StellaMitchell]
yes, it just can be a string that satisfies the syntaxs of a constant
15:57:27 [StellaMitchell]
cannot, I mean
15:57:31 [Hassan]
all: discussing the ambiguous syntax of identifiers
15:57:38 [StellaMitchell]
it can be anything except something that is syntactically a constant
15:57:40 [Hassan]
q?
15:57:50 [ChrisW]
ack hassan
15:57:55 [Hassan]
q+
15:58:06 [Hassan]
q?
15:59:33 [Hassan]
q-
15:59:39 [ChrisW]
action: open issue on ambiguity in presentation syntax
15:59:39 [trackbot]
Sorry, couldn't find user - open
15:59:48 [ChrisW]
action: chris to open issue on ambiguity in presentation syntax
15:59:48 [trackbot]
Created ACTION-630 - Open issue on ambiguity in presentation syntax [on Christopher Welty - due 2008-10-28].
16:01:03 [josb]
q+
16:01:05 [Zakim]
-Mike_Dean
16:01:45 [StellaMitchell]
the BLD doc does give a reason for argname different from conts
16:02:05 [StellaMitchell]
16:02:09 [Hassan]
discussing the rationale of syntactic choices ...
16:02:20 [ChrisW]
ack jos
16:02:39 [josb]
16:02:54 [josb]
_p(-
>4)
16:03:12 [Hassan]
JosB: IRI's and Strings are usable there (as slots) but we need a means to identify that case.
16:03:46 [Gary]
why don't we just get rid of named arg uniterms?
16:03:59 [josb]
:)
16:04:09 [Hassan]
All: discussing the nature of named-argument terms' slots
16:04:14 [Hassan]
q-
16:04:33 [Hassan]
q-
16:04:49 [sandro]
Sandro: Oh, okay, I remember now why these argument names can't be constants (like slot names) -- we don't want equality to apply (as Michael is saying) -- we want them to be purely syntactic sugar.
16:04:59 [Hassan]
Hassan has joined #rif
16:05:17 [sandro]
Hassan, hello?
16:05:20 [Hassan]
hello
16:06:00 [josb]
q+
16:06:31 [josb]
q-
16:06:40 [StellaMitchell]
I can do that test case
16:06:43 [Hassan]
JosB: let use the XML not the PS when the latter is ambiguous
16:06:53 [StellaMitchell]
I can
16:07:08 [Hassan]
s/let/let's/
16:07:13 [StellaMitchell]
will make xml versions of all the test
16:07:15 [ChrisW]
action: Stella make a positive syntax test version of Argument names not Const
16:07:16 [trackbot]
Created ACTION-631 - Make a positive syntax test version of Argument names not Const [on Stella Mitchell - due 2008-10-28].
16:07:16 [StellaMitchell]
test
16:08:30 [ChrisW]
16:10:55 [sandro]
Can someone read this in English: fam:isParent(?Y ?X):- And (?Y=fam:Uwe fam:Uwe#fam:Parent ?X=fam:Adrian fam:Adrian#fam:Child)
16:12:58 [josb]
q+
16:13:00 [DaveReynolds]
q+
16:13:27 [DaveReynolds]
q-
16:13:38 [DaveReynolds]
Exactly what I was going to say!
16:13:50 [Hassan]
JosB: I don't understand the purpose of the TC - should be simpler to illustrate membership
16:13:52 [sandro]
Sandro: Yes, please, let's do this in a much simpler way.
16:13:58 [josb]
q-
16:14:24 [ChrisW]
action: adrian to shorten test case Class Membership
16:14:24 [trackbot]
Sorry, couldn't find user - adrian
16:14:34 [ChrisW]
action: apaschke to shorten test case Class Membership
16:14:34 [trackbot]
Sorry, couldn't find user - apaschke
16:14:40 [ChrisW]
action: paschke to shorten test case Class Membership
16:14:40 [trackbot]
Sorry, couldn't find user - paschke
16:14:51 [ChrisW]
action: adrianp to shorten test case Class Membership
16:14:51 [trackbot]
Sorry, couldn't find user - adrianp
16:15:09 [Hassan]
I concur with Sandro: the last rule makes no sense ...
16:15:21 [sandro]
Sandro: It's baffling to have Adrian named in the body of the isParent rule.
16:16:17 [ChrisW]
action: sandro to ask adiran to shorten test case Class Membership
16:16:17 [trackbot]
Created ACTION-632 - Ask adiran to shorten test case Class Membership [on Sandro Hawke - due 2008-10-28].
16:16:26 [StellaMitchell]
q+
16:17:48 [sandro]
Sandro: I've been assuming we'd use the Prefix(...) declarations from the Premise in the Conclusion condition.
16:19:33 [Hassan]
I fully concur with Jos on this - I already requested this (Prefix and Base are pragmas)
16:19:45 [sandro]
Jos: Just say the the conclusion has the Prefix copied from the Premise
16:20:13 [StellaMitchell]
a lot of the test case conclusions are not documents, but they are condition formulas
16:20:29 [StellaMitchell]
and the xml validates by bldcond.xsd and not bldrule.xsd
16:21:29 [Hassan]
s/adiran/AdrianPaschke/
16:21:45 [stu_]
stu_ has joined #rif
16:22:07 [Hassan]
q+
16:22:18 [StellaMitchell]
q-
16:24:10 [josb]
pre:local
16:24:28 [josb]
=
16:24:45 [josb]
(if Prefix(pre
))
16:25:06 [Hassan]
q-
16:25:07 [StellaMitchell]
write conclusions in fully expanded format
16:25:18 [StellaMitchell]
or anglebracket iri form
16:26:26 [Hassan]
I agree with DaveReynolds - separting the macroexpansion from the syntax analysis
16:26:30 [StellaMitchell]
I think fully expanded it ok
16:26:37 [Hassan]
s/separt/separat/
16:28:03 [Hassan]
DaveRaynolds: no - I propose to separate the pragmas from the examples
16:28:35 [Hassan]
I agree wit Dave - that is what I proposed earlier
16:28:54 [Hassan]
s/DaveRay/DaveRey/
16:29:45 [ChrisW]
action: chris to discuss how to specify prefixes on email
16:29:45 [trackbot]
Created ACTION-633 - Discuss how to specify prefixes on email [on Christopher Welty - due 2008-10-28].
16:29:57 [Hassan]
BTW: What Dave is porposing is what we did De Facto in the previous versions using namespaces
16:30:07 [Hassan]
s/porpo/propo/
16:30:40 [Hassan]
AOB?
16:30:48 [Zakim]
-Michael_Kifer
16:30:49 [sandro]
Sandro: Agreed -- namespace handling of the presentation syntax of the conclusion is something test-case-specific, not BLD-general.
16:30:50 [Zakim]
-Gary
16:30:50 [Hassan]
+1 to adjourn
16:30:52 [Zakim]
-AdrianP
16:30:54 [Zakim]
-StellaMitchell
16:30:54 [DaveReynolds]
bye
16:30:57 [Zakim]
-josb
16:30:58 [Zakim]
-Harold
16:30:58 [Zakim]
-Hassan
16:31:00 [Zakim]
-DaveReynolds
16:31:00 [ChrisW]
zakim, list attendees
16:31:00 [Zakim]
As of this point the attendees have been +1.914.784.aaaa, ChrisW, StuartTaylor, Sandro, Mike_Dean, StellaMitchell, josb, Hassan, DaveReynolds, Harold, Gary, yzhao, AdrianP,
16:31:04 [Zakim]
... Michael_Kifer
16:31:07 [yzhao]
ye
16:31:12 [ChrisW]
Leora Morgenstern (Sukkot) StuartTaylor ChanghaiKe Christian de Sainte Marie (at risk) PaulVincent
16:31:28 [ChrisW]
Regrets: Leora Morgenstern (Sukkot) StuartTaylor ChanghaiKe Christian de Sainte Marie (at risk) PaulVincent
16:31:30 [sandro]
zakim, who is here?
16:31:30 [Zakim]
On the phone I see ChrisW, Sandro, yzhao
16:31:31 [Zakim]
yzhao has yzhao, StuartTaylor
16:31:32 [Zakim]
On IRC I see StuartTaylor, Hassan, mkifer, Gary, Harold, mdean, sandro, RRSAgent, ChrisW, Zakim, trackbot
16:32:02 [ChrisW]
rrsagent, make minutes
16:32:02 [RRSAgent]
I have made the request to generate
ChrisW
16:32:02 [sandro]
Zakim, yzhao is Hassan
16:32:02 [Zakim]
+Hassan; got it
16:32:07 [sandro]
zakim, who is here?
16:32:07 [Zakim]
On the phone I see ChrisW, Sandro, Hassan
16:32:08 [Zakim]
Hassan has yzhao, StuartTaylor
16:32:09 [Zakim]
On IRC I see StuartTaylor, Hassan, mkifer, Gary, Harold, mdean, sandro, RRSAgent, ChrisW, Zakim, trackbot
16:32:27 [ChrisW]
rrsagent, make logs public
16:33:37 [sandro]
16:33:39 [Hassan]
hello
16:33:41 [sandro]
16:33:42 [sandro]
16:33:47 [sandro]
:_)
16:34:45 [Zakim]
-Hassan
16:36:40 [Zakim]
-ChrisW
16:36:42 [Zakim]
-Sandro
16:36:43 [Zakim]
SW_RIF()11:00AM has ended
16:36:45 [Zakim]
Attendees were +1.914.784.aaaa, ChrisW, StuartTaylor, Sandro, Mike_Dean, StellaMitchell, josb, Hassan, DaveReynolds, Harold, Gary, AdrianP, Michael_Kifer | http://www.w3.org/2008/10/21-rif-irc | CC-MAIN-2017-04 | refinedweb | 3,530 | 51.11 |
YAML
YAML is a human friendly data serialization standard, especially for configuration files. Its simple to read and use.
Here is an example:
--- # A list of tasty fruits fruits: - Apple - Orange - Strawberry - Mango
btw the latest version of yaml is: v1.2.
PyYAML
Working with yaml files in python is really easy. The python module: PyYAML must be installed in the system.
In an archlinux box, the system-wide installation of this python package, can be done by typing:
$ sudo pacman -S --noconfirm python-yaml
Python3 - Yaml Example
Save the above yaml example to a file, eg.
fruits.yml
Open the Python3 Interpreter and write:
$ python3.6 Python 3.6.4 (default, Jan 5 2018, 02:35:40) [GCC 7.2.1 20171224] on linux Type "help", "copyright", "credits" or "license" for more information.
>>> from yaml import load >>> print(load(open("fruits.yml"))) {'fruits': ['Apple', 'Orange', 'Strawberry', 'Mango']} >>>
an alternative way is to write the above commands to a python file:
from yaml import load print(load(open("fruits.yml")))
and run it from the console:
$ python3 test.py {'fruits': ['Apple', 'Orange', 'Strawberry', 'Mango']}
Instead of print we can use yaml dump:
eg.
import yaml yaml.dump(yaml.load(open("fruits.yml"))) 'fruits: [Apple, Orange, Strawberry, Mango]n'
The return type of
yaml.load is a python dictionary:
type(load(open("fruits.yml"))) <class 'dict'>
Have that in mind.
Jinja2
Jinja2 is a modern and designer-friendly templating language for Python.
As a template engine, we can use jinja2 to build complex markup (or even text) output, really fast and efficient.
Here is an jinja2 template example:
I like these tasty fruits: * {{ fruit }}
where
{{ fruit }} is a variable.
Declaring the fruit variable with some value and the jinja2 template can generate the prefarable output.
python-jinja
In an archlinux box, the system-wide installation of this python package, can be done by typing:
$ sudo pacman -S --noconfirm python-jinja
Python3 - Jinja2 Example
Below is a python3 - jinja2 example:
import jinja2 template = jinja2.Template(""" I like these tasty fruits: * {{ fruit }} """) data = "Apple" print(template.render(fruit=data))
The output of this example is:
I like these tasty fruits: * Apple
File Template
Reading the jinja2 template from a template file, is a little more complicated than before. Building the jinja2 enviroment is step one:
env = jinja2.Environment(loader=jinja2.FileSystemLoader("./"))
and Jinja2 is ready to read the template file:
template = env.get_template("t.j2")
The template file: t.j2 is a litle diferrent than before:
I like these tasty fruits: {% for fruit in fruits -%} * {{ fruit }} {% endfor %}
Yaml, Jinja2 and Python3
To render the template a dict of global variables must be passed. And parsing the yaml file the yaml.load returns a dictionary! So everything are in place.
Compine everything together:
from yaml import load from jinja2 import Environment, FileSystemLoader mydata = (load(open("fruits.yml"))) env = Environment(loader=FileSystemLoader("./")) template = env.get_template("t.j2") print(template.render(mydata))
and the result is:
$ python3 test.py
I like these tasty fruits: * Apple * Orange * Strawberry * Mango
A few years ago, I migrated from ICS Bind Authoritative Server to PowerDNS Authoritative Server.
Here was my configuration file:
# egrep -v '^$|#' /etc/pdns/pdns.conf dname-processing=yes launch=bind bind-config=/etc/pdns/named.conf local-address=MY_IPv4_ADDRESS local-ipv6=MY_IPv6_ADDRESS setgid=pdns setuid=pdns
Α quick reminder, a DNS server is running on
tcp/udp port53.
I use dnsdist (a highly DNS-, DoS- and abuse-aware loadbalancer) in-front of my pdns-auth, so my configuration file has a small change:
local-address=127.0.0.1 local-port=5353
instead of
local-address, local-ipv6
You can also use pdns without dnsdist.
My named.conf looks like this:
# cat /etc/pdns/named.conf zone "balaskas.gr" IN { type master; file "/etc/pdns/var/balaskas.gr"; };
So in just a few minutes of work, bind was no more.
You can read more on the subject here: Migrating to PowerDNS.
Converting from Bind zone files to SQLite3
PowerDNS has many features and many Backends. To use some of these features (like the HTTP API json/rest api for automation, I suggest converting to the sqlite3 backend, especially for personal or SOHO use. The PowerDNS documentation is really simple and straight-forward: SQLite3 backend
Installation
Install the generic sqlite3 backend.
On a CentOS machine type:
# yum -y install pdns-backend-sqlite
Directory
Create the directory in which we will build and store the sqlite database file:
# mkdir -pv /var/lib/pdns
Schema
You can find the initial sqlite3 schema here:
/usr/share/doc/pdns/schema.sqlite3.sql
you can also review the sqlite3 database schema from github
If you cant find the
schema.sqlite3.sql file, you can always download it from the web:
# curl -L -o /var/lib/pdns/schema.sqlite3.sql \
Create the database
Time to create the database file:
# cat /usr/share/doc/pdns/schema.sqlite3.sql | sqlite3 /var/lib/pdns/pdns.db
Migrating from files
Now the difficult part:
# zone2sql --named-conf=/etc/pdns/named.conf -gsqlite | sqlite3 /var/lib/pdns/pdns.db 100% done 7 domains were fully parsed, containing 89 records
Migrating from files - an alternative way
If you have already switched to the generic sql backend on your powerdns auth setup, then you can use:
pdnsutil load-zone command.
# pdnsutil load-zone balaskas.gr /etc/pdns/var/balaskas.gr Mar 20 19:35:34 Reading random entropy from '/dev/urandom' Creating 'balaskas.gr'
Permissions
If you dont want to read error messages like the below:
sqlite needs to write extra files when writing to a db file
give your powerdns user permissions on the directory:
# chown -R pdns:pdns /var/lib/pdns
Configuration
Last thing, make the appropriate changes on the pdns.conf file:
## launch=bind ## bind-config=/etc/pdns/named.conf launch=gsqlite3 gsqlite3-database=/var/lib/pdns/pdns.db
Reload Service
Restarting powerdns daemon:
# service pdns restart Restarting PowerDNS authoritative nameserver: stopping and waiting..done Starting PowerDNS authoritative nameserver: started
Verify
# dig @127.0.0.1 -p 5353 -t soa balaskas.gr +short ns14.balaskas.gr. evaggelos.balaskas.gr. 2018020107 14400 7200 1209600 86400
or
# dig @ns14.balaskas.gr. -t soa balaskas.gr +short ns14.balaskas.gr. evaggelos.balaskas.gr. 2018020107 14400 7200 1209600 86400
perfect!
Using the API
Having a database as pdns backend, means that we can use the PowerDNS API.
Enable the API
In the pdns core configuration file:
/etc/pdns/pdns.conf enable the API and dont forget to type a key.
api=yes api-key=0123456789ABCDEF
The API key is used for authorization, by sending it through the http headers.
reload the service.
Testing API
Using curl :
# curl -s -H 'X-API-Key: 0123456789ABCDEF'
The output is in json format, so it is prefable to use jq
#}" } ]
jq can also filter the output:
# curl -s -H 'X-API-Key: 0123456789ABCDEF' | jq .[].version "4.1.1"
Zones
Getting the entire zone from the database and view all the Resource Records - sets:
# curl -s -H 'X-API-Key: 0123456789ABCDEF'
or just getting the serial:
# curl -s -H 'X-API-Key: 0123456789ABCDEF' | \ jq .serial 2018020107
or getting the content of SOA type:
# curl -s -H 'X-API-Key: 0123456789ABCDEF' | \ jq '.rrsets[] | select( .type | contains("SOA")).records[].content ' "ns14.balaskas.gr. evaggelos.balaskas.gr. 2018020107 14400 7200 1209600 86400"
Records
Creating or updating records is also trivial.
Create the Resource Record set in json format:
# cat > /tmp/test.text <<EOF { "rrsets": [ { "name": "test.balaskas.gr.", "type": "TXT", "ttl": 86400, "changetype": "REPLACE", "records": [ { "content": ""Test, this is a test ! "", "disabled": false } ] } ] } EOF
and use the http Patch method to send it through the API:
# curl -s -X PATCH -H 'X-API-Key: 0123456789ABCDEF' --data @/tmp/test.text \ | jq .
Verify Record
We can use dig internal:
# dig -t TXT test.balaskas.gr @127.0.0.1 -p 5353 +short "Test, this is a test ! "
querying public dns servers:
$ dig test.balaskas.gr txt +short @8.8.8.8 "Test, this is a test ! " $ dig test.balaskas.gr txt +short @9.9.9.9 "Test, this is a test ! "
or via the api:
# curl -s -H 'X-API-Key: 0123456789ABCDEF' | \ jq '.rrsets[].records[] | select (.content | contains("test")).content' ""Test, this is a test ! ""
That’s it.
AC_2<< | https://balaskas.gr/blog/2018/03/ | CC-MAIN-2020-50 | refinedweb | 1,367 | 58.99 |
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
You need to download the Tennis dataset on the book's website, and extract it in the current directory. ()
import numpy as np import pandas as pd import scipy.stats as st import matplotlib.pyplot as plt %matplotlib inline
player = 'Roger Federer' filename = "data/{name}.csv".format( name=player.replace(' ', '-')) df = pd.read_csv(filename)
print("Number of columns: " + str(len(df.columns))) df[df.columns[:4]].tail()
npoints = df['player1 total points total'] points = df['player1 total points won'] / npoints aces = df['player1 aces'] / npoints
plt.plot(points, aces, '.'); plt.xlabel('% of points won'); plt.ylabel('% of aces'); plt.xlim(0., 1.); plt.ylim(0.);
If the two variables were independent, we would not see any trend in the cloud of points. On this plot, it is a bit hard to tell. Let's use Pandas to compute a coefficient correlation.
DataFramewith only those fields (note that this step is not compulsory). We also remove the rows where one field is missing.
df_bis = pd.DataFrame({'points': points, 'aces': aces}).dropna() df_bis.tail()
df_bis.corr()
A correlation of ~0.26 seems to indicate a positive correlation between our two variables. In other words, the more aces in a match, the more points the player wins (which is not very surprising!).
df_bis['result'] = df_bis['points'] > df_bis['points'].median() df_bis['manyaces'] = df_bis['aces'] > df_bis['aces'].median()
pd.crosstab(df_bis['result'], df_bis['manyaces'])
scipy.stats.chi2_contingency, which returns several objects. We're interested in the second result, which is the p-value.
st.chi2_contingency(_)
The p-value is much lower than 0.05, so we reject the null hypothesis and conclude that there is a statistically significant correlation between the proportion of aces and the proportion of points won in a match (for Roger Federer!).
As always, correlation does not imply causation... Here, it is likely that external factors influence both variables. ()
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages). | http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter07_stats/04_correlation.ipynb | CC-MAIN-2017-47 | refinedweb | 363 | 52.66 |
How to: Bind to XDocument, XElement, or LINQ for XML Query Results
Updated: July 2008
This example demonstrates how to bind XML data to an ItemsControl using XDocument.
The following XAML code defines an ItemsControl and includes a data template for data of type Planet in the XML namespace. An XML data type that occupies a namespace must include the namespace in braces, and if it appears where a XAML markup extension could appear, it must precede the namespace with a brace escape sequence. This code binds to dynamic properties that correspond to the Element and Attribute methods of the XElement class. Dynamic properties enable XAML to bind to dynamic properties that share the names of methods. To learn more, see LINQ to XML Dynamic Properties. Notice how the default namespace declaration for the XML does not apply to attribute names.
> ... <ItemsControl ItemsSource="{Binding }" > </ItemsControl> </StackPanel>
The following C# code calls Load and sets the stack panel data context to all subelements of the element named SolarSystemPlanets in the XML namespace.
XML data can be stored as a XAML resource using ObjectDataProvider. For a complete example, see L2DBForm.xaml Source Code. The following sample shows how code can set the data context to an object resource.
The dynamic properties that map to Element and Attribute provide flexibility within XAML. Your code can also bind to the results of a LINQ for XML query. This example binds to query results ordered by an element value. | http://msdn.microsoft.com/en-us/library/vstudio/cc165615(v=vs.90) | CC-MAIN-2014-15 | refinedweb | 244 | 55.13 |
Hey guys, I'm new to the allegro community and I know C programming.

I've recently started doing allegro Code::Blocks development, and that's been a real drag lately. I can't seem to load a bitmap image (mario8.bmp), and yes, I am totally aware of the allegro 5 library, but I prefer to use allegro 4 for now, and it's much easier. Here down below I will provide the code that I used from a tutorial ( ). I'm running on a 32-bit machine with Windows Vista; it appears in this tutorial he uses something older. Anyway, I'm not sure what's going on, and just to answer any sample questions: yes, the image is named correctly, the file is in the folder, and the image is where the .exe file is. Every time I run the program it goes to the 8-bit color scheme on Windows Vista, then crashes and says "Allegro Game.exe has stopped working", then returns to its normal color scheme. Please help me, I don't know how to solve this or what to do.
Kind regards coderthatcancode
The code that I have used:
#include <allegro.h>

int main()
{
    allegro_init();
    install_keyboard();

    set_color_depth(8); // 8, 15, 16, 32
    set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);

    BITMAP *bmp = create_bitmap(640, 480);
    clear_bitmap(bmp);

    BITMAP *character = load_bitmap("mario8.bmp", NULL);

    while (!key[KEY_ESC]) {
        blit(character, bmp, 0, 0, 0, 0, character->w, character->h);
        blit(bmp, screen, 0, 0, 0, 0, bmp->w, bmp->h);
    }

    destroy_bitmap(bmp);
    destroy_bitmap(character);

    return 0;
}
END_OF_MAIN()
Try changing the colour depth like this and see if it makes a difference:
set_color_depth(desktop_color_depth());
EDIT:
If it still isn't working, try using the bmp that I attached. Using that combined with your code and using desktop_color_depth() worked for me.
+LennyLen Yeah, I did that but it doesn't seem to work and still crashes.. btw I'm using Code::Blocks as the IDE.

Response to edit: Yeah, the image doesn't seem to work. Do you think it could possibly be to do with the image being next to the executable??
Did you try the bmp I uploaded?
Yeah, and a quick question here: are you using Code::Blocks? If so, what did you do to make it work?
If you're running it from within C::B, then the image needs to be with your .cbp file, not with the .exe file.
Here's a more robust version of that code:
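What such a robust version might look like is sketched below (this is an assumption, not necessarily the exact code posted: it combines the desktop_color_depth() fix from above with a check that set_gfx_mode() succeeded and a NULL check after load_bitmap(), since blitting from a NULL pointer is what makes the original crash):

```c
#include <allegro.h>

int main()
{
    allegro_init();
    install_keyboard();

    set_color_depth(desktop_color_depth());
    if (set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0) != 0) {
        allegro_message("Unable to set graphics mode:\n%s", allegro_error);
        return 1;
    }

    /* If the bitmap can't be found, bail out with a message instead of
       blitting from a NULL pointer (which is what crashes the original). */
    BITMAP *character = load_bitmap("mario8.bmp", NULL);
    if (!character) {
        set_gfx_mode(GFX_TEXT, 0, 0, 0, 0);
        allegro_message("Couldn't load mario8.bmp");
        return 1;
    }

    BITMAP *bmp = create_bitmap(SCREEN_W, SCREEN_H);
    clear_bitmap(bmp);

    while (!key[KEY_ESC]) {
        blit(character, bmp, 0, 0, 0, 0, character->w, character->h);
        blit(bmp, screen, 0, 0, 0, 0, bmp->w, bmp->h);
    }

    destroy_bitmap(bmp);
    destroy_bitmap(character);

    return 0;
}
END_OF_MAIN()
```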
Thanks so much, it worked! Would I do this for all of my allegro games? But what about the 8-bit image, how would I load that?

EDIT: yeah, I will also use the little check to see if the character file is in the folder. Quick question though: how would I do it if I was going to create my own bitmap?
Yes, for each C::B project, if you're running it from within the IDE you need to have all your files (images, dlls, etc) in the project's base directory. When the project is finished and you want to share it, then everything needs to be with the .exe.
You don't need to be in 8-bit mode to load 8-bit images. Allegro will automatically convert them to the bit depth you're using when it loads the image.
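In Allegro 4 terms, that automatic conversion is controlled by set_color_conversion(), whose default mode (COLORCONV_TOTAL) converts images to the current depth as they are loaded; roughly:

```c
set_color_depth(32);   /* run the game in 32-bit mode */
set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);

/* mario8.bmp can be an 8-bit paletted image on disk; with the default
   conversion mode (COLORCONV_TOTAL), load_bitmap() returns it already
   converted to the current 32-bit depth. */
BITMAP *character = load_bitmap("mario8.bmp", NULL);
```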
edit:
When you say create your own bitmaps, do you mean you want to load ones you've created in a paint program, or ones you've created in your code?
When I said create your own bitmaps, I mean as in create your own bitmap image using paint software or GIMP for the game. So, how would you accomplish this?

NOTE: I don't mean in code, just to confirm that.
Just a note, newer versions of Windows don't do 8bit well anymore, they tend to mess up the palette, so I strongly recommend moving up to 24 or 32bit (32bit is just 24 with an alpha channel). You can still use the pink colour for transparent sections in Allegro 4 at these depths.
My Deluxe Pacman 1 (link in my sig) uses Allegro 4 and used to be 8 bit many moons ago. Switching it to use 32bit with Allegro 4 was easy and ensures it looks proper on all systems.
My Deluxe Pacman 2 uses Allegro 5. If you take the time to learn to use Allegro 5 it is well worth switching to.
+Neil Roy so good to see you sir, I didn't know you use allegro, I remember seeing you on VertoStudio's video in the comment section advising him a lot, thanks for the recommendation and I just recently moved on from SDL 2, I like allegro 4 better than allegro 5, I find that they completely changed the whole API and I wasn't happy at all with that, plus its way easier for me to get my way around. I'm not just focusing on the performance of it, I'm also looking out for the rate/ difficulty of it I must say it is way easier than SDL and in allegro, you could write int main() with out any argument counters with in the parentheses of it. Its just the way I learnt it with the simple int main(). No doubt about it I will probably use allegro 5 for my CPP projects but as I am doing C right now, I thinks its best to play it allegro 4 safe and plus if I couldn't even load a bitmap image in allegro 4 (now fixed) then how could I load an bitmap image in allegro 5.
Anyway, I'm just saying, after all I know C and its rather fun to write apps in it, and I am aware of course that C can do allegro 5, but I still prefer allegro 4 just because of its simplicity. Thanks again though
EDIT: After all I did just start with allegro 4 the first time I saw allegro in a video was this: so thats what got me to allegro 4.
Also, I'm having problems with my bitmap, what did you do for the other bitmap to go well with allegro I tried using another mario image, what did you do for the other one, and why isn't this one showing up?
gah, please don't litter your project directory with data files. Put them in their own folder under a single folder where your executable resides. C::B has a setting to change the working directory when you use C::B to run it. Go to Menu->Project->Properties. Go to the Build targets tab and change the execution working directory and output filename. Done. Voila.
As for your bitmap, it's corrupt, or A4 doesn't recognize its particular format.
{"name":"610911","src":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/e\/3\/e3c575f1e66a52bb266e0ce061bf2531.png","w":800,"h":629,"tn":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/e\/3\/e3c575f1e66a52bb266e0ce061bf2531"}{"name":"610912","src":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/9\/d\/9d92d25c32a1af449a3602369c3a95ab.png","w":314,"h":188,"tn":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/9\/d\/9d92d25c32a1af449a3602369c3a95 Thanks for responding, how do you make the bitmap not corrupted so that I can load the Mario image?
I did set the working directory to what it should read but, what will that do also?
{"name":"610914","src":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/1\/c\/1c55d6d968f3305af0406bf1af5def9d.png","w":1113,"h":665,"tn":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/1\/c\/1c55d6d968f3305af0406bf1af5def9d"}
Try this mario8a.bmp with exbitmap.exe. Run the Run443Examples.bat file in the allegro folder and then run "exbitmap.exe mario8a.bmp" after saving mario8a to the bin/examples folder. All I did was load it in paint.net and then save it again as a different 8 bit bitmap. Whatever paint program was used to create and save mario8.bmp apparently didn't save it in a format allegro 4 can load.
+Edgar Reynaldo I don't have the run thing, I installed allegro using this tutorial: and it showed me how to configure allegro for code::blocks in my MinGW folder for code::blocks it doesn't have the run thing that you suggest I use I didn't install it the way you last told me to do it.
EDIT: What do you mean you saved it again?
gah, please don't litter your project directory with data files. Put them in their own folder under a single folder where your executable resides.
Which executable? I have several build profiles, and thus have several executables.
What I actually do is have a data directory that is in the base directory. That way all builds use the one copy of the data.
+LennyLen how did you load the 8-bit mario image without it crashing mine seems to crash every time I load it and I set it to the desktop color in the allegro code, i don't get it. It should support my windows because I set it to my desktop color. Please help!! D:
Please help!! D:
Things like this don't help. What we need is details. Show your latest code, show us how you run the program, and show us the directory structure of your project.
As I said, the mario8.bmp image you posted is not understood by allegro. The image I posted is. Use mario8a.bmp. I simply saved it in paint.net, which uses a format Allegro 4 understands.
If you're using the updated image and your program is still crashing, then you have an error in your code, or you are not running the program from the correct directory.
Post details.
Okay, I'll try this again. with screenshots this time,
So load_bitmap in allegro 4 wont work and every time I try and load an image it crashes, I have my bitmap images in the same folder next to my code block files and I'm trying to load the 8-bit Mario, I provided an image of my folder as well as the code::blocks IDE in the Attachment.
I don't know why its not loading or what it means for it to be corrupted.
The code used:#include <allegro.h>
set_color_depth(desktop_color_depth()); // 8, 15, 16, 32 set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);
BITMAP *character = load_bitmap("mario32.bmp", NULL);
int x = 0; int y = 0;
while(!key[KEY_ESC]) { draw_sprite(bmp, character, x, y); blit(bmp, screen, 0, 0, 0, 0, bmp->w, bmp->h); clear_to_color(bmp, makeacol(255, 255, 255, 255)); }
return 0;}END_OF_MAIN()
I don't get what's wrong with it I get 0 errors and 0 Warnings in code.When I seem to load mario 8 its fine, I don't know if its the color or anything, whereas if mario 32 was loaded (which has no difference what so ever its just an 8-bit mario with no change in color scheme what so ever) it fails and I get "Allegro Game has stopped working", and the console returns a negative value, and other stuff...
I just don't know what to do now, I've provided images and stuff to help and make it easier. The main problem is when ever I load the image, it NEVER shows up.. well, only if its mario8.bmp which is just a purple square which Lenny gave me instead, because he said to try it.
QUESTION: How would you create your own bitmap images for allegro
Thanks again coderthatcancode
EDIT: It worked it turns out you need to edit the mario32.bmp in a paint program and then save it as an 24-bit bitmap, unblock the file as it may be blocked in properties
You're not checking any return values. load_bitmap can, and sometimes does, fail. Your original mario8.bmp file was in a format that Allegro 4 cannot understand. That's why it fails, and load_bitmap would have returned a null pointer in that case. That's why Lenny's and my mario8a.bmp both display properly. Or else it is because your image file is not in the directory it is supposed to be in, or your current working directory is not the one with the image in it.
It worked it turns out you need to edit the mario32.bmp in a paint program and then save it as an 24-bit bitmap, unblock the file as it may be blocked in properties
Although when I edited in paint it came out really weird when I saved it as 24-bit image, which is what allegro understands. Do you think if I used a different paint program it would save differently if I saved it as 24 bit?
FYI, Allegro 4 sucks. Use Allegro 5. Or PNG.
Allegro 4 CANNOT LOAD certain types of BMP files. More accurately, a BMP file with a specific VERSION header. Merely loading the same bitmap in GIMP and resaving it as BMP can fix it.
If you have a choice for file formats, always use PNG. It's basically a compressed BMP. (Lossless. So it looks the same.) Nobody uses BMP anymore. There's basically zero advantage to using BMP, and PNG is supported in Allegro 4.
I ran into the exact same problem when I trying to use Allegro 4 for an image manipulation tool. It kept blowing up on bitmap files and I found it out the hard way the header issue.
-----sig:“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
Nah, I still prefer allegro 4, its way easier than allegro 5 and for me it makes a lot more sense.
I will probably move on to allegro 5 when I learn C++, but I will take note of it, its still possible to develop a game in allegro 4 anyway.
Yes, I am aware of .png files I've used them a lot when it came to SDL2, which is why I moved on to allegro 4. Its way easier than SDL2 in my opinion and that's what I like about it, it also seems that there a lot of people on the allegro web site, so if I launch a new game people might know about it, plus allegro 5 in a way is kinda advanced, I've programmed in it before and I didn't like it because it just changed the allegro that we know. Why would I want that? I will most likely use if for C++ for sure, and I know it works for C as well, but if you ask me I think C++ would work better, Thanks anyway.
FYI, Allegro 4 sucks.
I would have to disagree with this. Allegro 4 is good at what it does, which is software rendering. However if you want hardware acceleration, or an event based system that doesn't poll, then yes, A4 sucks compared to Allegro 5.
My game Skyline uses Allegro 4 and a custom function I wrote to draw gradient circles. It's quite fast. I had SiegeLord's knowledge and expertise to help me write it though. However, the modern way to do it would be to use a shader on the gpu. I don't have that knowledge yet, so for me and Skyline A4 is still a good choice. It has it's niches, but yes in the overall picture A4 is obsolete and should be replaced by A5 whenever possible.
You didn't mention whether your problem was solved yet.
You didn't mention whether your problem was solved yet.
It worked it turns out you need to edit the mario32.bmp in a paint program and then save it as an 24-bit bitmap
So basically, what you were telling him all along. And if he'd used the full version of the code I posted, it would have told him that the problem wasn't finding the file, but that the problem was loading it. | https://www.allegro.cc/forums/thread/616921/1030704 | CC-MAIN-2018-30 | refinedweb | 2,676 | 70.53 |
A lovely javascript testing framework -- want to contribute? join us in
CalendarElement (<calendar-element></calendar-element>) Attributes .labelFormat ✔ is EEEE by default ✔ is is bound to the `label-format` attribute when a valid format is entered ✔ is reset back to default when an invalid format is entered ✔ is reset back to default ✔ Logs a RangeError .headerStyle ✔ is an empty string by default ✔ is is bound to the `header-style` attribute ✔ is cooerced to an unvalidated string ✔ is rendered in the section header .headerFormat ✔ is MMMM yyyy by default ✔ is is bound to the `header-format` attribute when a valid format is entered ✔ is reset back to default when an invalid format is entered ✔ is reset back to default ✔ Logs a RangeError Events date-change ✔ is fired when .selectedDate changes Properties .bubbles ✔ is true .composed ✔ is true .cancelable ✔ is true .detail ✔ is an object Properties value when .selectedDate is set ✔ is a String ✔ is the current .selectedDate in ISO 8601 format (yyyy-MM-dd) when .selectedDate is unset ✔ is null date when .selectedDate is set ✔ is a Date ✔ is the current .selectedDate when .selectedDate is unset ✔ is null // EVERYTHING BELOW IS ACTUALLY IN THE `Attributes` BLOCK .selectedDate ✔ is empty by default ✔ is is bound to the `selected-date` attribute ✔ is set by clicking a date cell when an invalid date is entered ✔ sets the value to NULL ✔ Logs the invalid date as an Error .dayFormat ✔ is d by default ✔ is is bound to the `day-format` attribute when a valid format is entered ✔ is reset back to default when an invalid format is entered ✔ is reset back to default ✔ Logs a RangeError
.mocharc.json:
{ "require": ["@babel/register", "core-js/stable", "regenerator-runtime/runtime"] }
import axios from 'axios'to
const axios = require('axios'), it's also working (axios isn't undefined at function execution)
indexand try it. Be sure to install
core-js+
axios.
beforehook. Perhaps try
—retriesflag.
Should I be able to pass along Promises to Mocha and have the fulfillment / rejection of said Promise be used as the result for the it? Because the below testing is appearntly succeeding, whilst giving me UnhandledPromiseRejectionWarning in Node.. My presumption is that a rejection of a Promise should be a failing test.
it("should fail", function () {
return Promise.reject();
})
Hey guys!
In Chrome 76 they disabled 'disable-infobars' flag.
I found and replacement for this,
...but can't get how can I pass this in chrome args. Can anybody advise?
npm i mochawesome
mocha --reporter mochawesome
//why my 2nd test fails? any suggestions? /*************calculate.js************/ function calculateSquare(number, callback) { setTimeout(() => { if (typeof number !== 'number') { callback('Argument of type number is expected'); return; } const result = number * number; callback(null, result); }, 1000); } module.exports = calculateSquare; /*************calculate.test.js************/ const calculateSquare = require('../calculate.js'); const expect = require('chai').expect; describe('calculateSquare', function () { it('should return 4 if passed 2', function (done) { calculateSquare(2, function (error, result) { console.log('callback got called'); expect(result).to.equal(4); done(); // call this when Async! }) }); it('it should return an error if passed a string', function (done) { calculateSquare('string', function (error, result) { expect(error).to.not.equal(null); expect(error.message).to.equal('Argument of type number is expected'); done(); }) }) });
calculateSquare callback got called ✓ should return 4 if passed 2 (1008ms) 1) it should return an error if passed a string 1 passing (2s) 1 failing 1) calculateSquare it should return an error if passed a string: Uncaught AssertionError: expected null to not equal null + expected - actual at /Users/nadia/test/calculate.test.js:15:28 at Timeout.setTimeout [as _onTimeout] (calculate.js:8:5) npm ERR! Test failed. See above for more details.
"test": "mocha --require babel-core/register"
{ "presets": [ [ "env",{"targets": {"node": "current"}} ] ] } | https://gitter.im/mochajs/mocha?at=5d542ef27d56bc608051b7ad | CC-MAIN-2020-05 | refinedweb | 615 | 50.73 |
Unary UDDT Issues
From OWL
Contents
- 1 Unary User Defined Datatypes: Issues
- 1.1 Some Key Email threads
- 1.2 Open Issues
- 1.3 Closed Issues
- 1.4 Additional Items (Potential Issues)
Unary User Defined Datatypes: Issues
Some Key Email threads
Open Issues
ISSUE-29: User-defined Datatypes: owl:DataRange vs rdfs:Datatype
Some context for this design comes from the Protégé 3.x implementation, discussed at
ISSUE-31: Canonical URI for externally defined datatypes
This issue is concerned with referencing XML schema definitions from within OWL RDF/XML, as discussed in this SWBPD WG Note.
ISSUE-71: Create datarange of literals matching given language range
ISSUE-74: Use the xsd namespace for the facet names
E.g., why is
owl11:maxExclusive preferred to
xsd:maxExclusive?
Closed Issues
ISSUE-11: Specification of which facet is being restricted in a datatype restriction is missing
Resolved as a now corrected problem with the XML Schema, see minutes from 2007-11-07 telecon.
ISSUE-28: Multiple facet restrictions per data range
Email from Boris says multiple facets would be interpreted conjunctively and specs could be extended to support this.
Additional Items (Potential Issues)
- The Structural Spec and Functional Syntax document itemizes a set of supported facets but does not compare that with the complete set of constraining facets in XML Schema (whitespace and enumeration are omitted).
- The RDF/XML mapping document does not itemize the acceptable facets. | http://www.w3.org/2007/OWL/wiki/Unary_UDDT_Issues | CC-MAIN-2015-14 | refinedweb | 234 | 53.41 |
Stability and hardware support
Stability in 10.1 has been comparable to 10.0.4--that is to say, excellent. The same caveats about actual stability vs. perceived stability still apply to 10.1. I haven't run it long enough to know if user interface death is more or less of a problem in 10.1. There is still no way to recover from a total UI crash without another computer from which to connect and kill processes. A hardware-based interrupt system (something like "virtual consoles" on other Unix variants) that was guaranteed to remain accessible during anything short of an actual kernel panic would go a long way towards getting Mac OS X over that final stability hump.
I don't have access to enough hardware to know how much hardware support has improved. My serial printer (attached to an adapter in the G3's internal modem port) is still not supported, nor do I really expect it to be in the future. Networked laser printers accessible from the G4 worked in 10.0.x, and continue to work in 10.1. But 10.1 still does not include the proper PPD files for several of the LaserJet printers on the network.
As mentioned earlier, the Displays preference pane still does not list all of the supported refresh rates for the G4's monitor, forcing me to use a slower refresh rate in OS X than in OS 9.
On the G3, a series of repeating console messages cause delays in both startup and shutdown. They look like this:
Sep 23 14:57:26 localhost mach_kernel: ADPT_OSI_IndicateQueueFrozen: id 5, freeze
Sep 23 14:57:26 localhost mach_kernel: ADPT_OSI_IndicateGenerationChange (nop)
Sep 23 14:57:26 localhost mach_kernel: ADPT_OSI_IndicateQueueFrozen: id 5, unfreeze
[repeat many times]
My only guess is that they're related to the G3's cable modem, SCSI card, or ATA/66 card. These messages do not appear at all on the G4.
Enhancing Your 10.1 Experience
Here are a few of the third party applications that I find beneficial to my Mac OS X experience. The first is ASM, an application-switcher menu replacement that also includes an option to change the OS X window layering policy to be per-application (like classic Mac OS) instead of per-window. ASM is implemented as a "Menu Extra" (despite Apple's refusal to make these APIs public--way to go, Frank!) and includes its own preference pane.
(Mercifully, Apple has seen fit to make the preference pane API public. But I'm not sure how the System Preference application plans to handle what is sure to be a flood of new preference panes. There's not a scroll-bar to be found in the System Preferences application, and that window can only get so big...)
DragThing provides an example of what the Dock could have been, allowing an arbitrary number of highly customizable, moveable, achorable Dock-like palettes. I simply use it to recreate the classic Mac OS application switcher palette, but it is capable of much more.
Classic Menu provides a user-configurable Apple menu for Mac OS X. Unfortunately, it must use the "hack" method of drawing directly on top of the existing Apple menu, and it therefore does not interact with other menus in the expected way. But it's the best option so far for users who miss the functionality it provides. (Bonus points for including an optional rainbow-striped Apple icon :-)
Finally, TinkerTool prvides a convenient interface for many of settings that were previosuly adjustable only from the command line: Dock pinning, Terminal transparency, finer control over font smoothing, etc. It is also implemented as a preference pane. The version that is compatible with Mac OS X 10.1 is still in beta, however.
Miscellaneous
Mac OS X 10.1 includes attractive new transparent overlays that appear in response to the dedicated volume control keys on the new Apple keyboards, then "fade out" when they're done:
Volume control overlay
The F12 key doubles as the dedicated "media eject" key on all Macs running 10.1, not just portables which require this functionality. This means that desktop Macs with one of the new Apple keyboards now have two eject keys on their keyboard. Worse, accidentally hitting the F12 key during, say, a CD burning session can produce coasters in some situations, so be careful. (This is a known bug.)
A "CrashReporter" daemon is running by default in 10.1. Its purpose is to write crash reports to per-user log files. It is controlled through the Console application, and does not create crash logs by default.
On the Unix side of 10.1, the new compiler toolchain has thrown a monkey wrench into the build processes. Traditional Unix applications that once built flawlessly on 10.0.x now require significant tweaking to build on 10.1. The main culprit seems to be the new two-level namespace linking option, which is enabled by default in 10.1. While this new featue stands to enable programs that produced run-time symbol conflicts in 10.0.x to build and run successfully on 10.1, at this early stage in 10.1's life cycle, it is causing more build problems than it solves.
10.1 supports CD-R, CD-RW, and DVD-R burning from the desktop--provided you're using a supported configuration (usually a Mac that shipped from Apple with one of those drives in an internal bay). I do not have a supported configuration (external SCSI CD-RW on the G3, internal DVD-ROM on the G4) so I could not test these features.
AppleScript support has been greatly enhanced in 10.1. Many of the new abilities of AppleScript in 10.1 were demonstrated during the keynote speech at the 2001 Seybold publishing conference. AppleScript has been elevated to first class status among the programming languages available on Mac OS X. Complete Mac OS X native GUI applications can be created using the new AppleScript Studio development environment.
On a slightly personal note (I use Perl in my day job), 10.1 still ships with perl 5.6.0 rather than 5.6.1, which has been the latest stable build of perl since February, 2001). It's understandable that 10.0, released in March 2001, shipped with 5.6.0, but 10.1 should have come with 5.6.1.
Apple is also reported to have a Cocoa-to-Perl bridge functioning in-house, but not released. There is already a petition online asking for the release of this code. If AppleScript can do it, why not Perl too?
Conclusion. | http://arstechnica.com/apple/2001/10/macosx-10-1/13/ | crawl-003 | refinedweb | 1,112 | 65.01 |
Designing and building cases for case management
Abstract
Preface
As a developer, you can use Business Central to configure Red Hat Process Automation Manager assets for case management.
Case management differs from Business Process Management (BPM). It focuses more on the actual data being handled throughout the case rather than on the sequence of steps taken to complete a goal. Case data is the most important piece of information in automated case handling, while business context and decision-making is in the hands of the human case worker.
Red Hat Process Automation Manager includes the IT_Orders sample project in Business Central. This project is referred to throughout this document to explain case management concepts and provide examples.
The Getting started with case management tutorial describes how to create and test a new IT_Orders project in Business Central. After reviewing the concepts in this guide, follow the procedures in the tutorial to ensure that you are able to successfully create, deploy, and test your own case project.
Prerequisites
- Red Hat JBoss Enterprise Application Platform 7.2 is installed. For information about installing Red Hat JBoss Enterprise Application Platform 7.2, see Red Hat JBoss Enterprise Application Platform 7.2 Installation Guide.
- Red Hat Process Automation Manager is installed. For information about installing Red Hat Process Automation Manager, see Planning a Red Hat Process Automation Manager installation.
- Red Hat Process Automation Manager is running and you can log in to Business Central with the user role. For information about users and permissions, see Planning a Red Hat Process Automation Manager installation.
- The Showcase application is deployed. For information about how to install and log in to the Showcase application, see Using the Showcase application for case management.
Chapter 1. Case management
Case management is an extension of Business Process Management (BPM) that enables you to manage adaptable business processes.
BPM is a management practice used to automate tasks that are repeatable and have a common pattern, with a focus on optimization by perfecting a process. Business processes are usually modeled with clearly defined paths leading to a business goal. This requires a lot of predictability, usually based on mass-production principles. However, many real-world applications cannot be described completely from start to finish (including all possible paths, deviations, and exceptions). Using a process-oriented approach in certain cases can lead to complex solutions that are hard to maintain.
Case management provides problem resolution for non-repeatable, unpredictable processes as opposed to the efficiency-oriented approach of BPM for routine, predictable tasks. It manages one-off situations when the process cannot be predicted in advance. Case definition usually consists of loosely coupled process fragments that can be connected directly or indirectly to lead to certain milestones and ultimately a business goal, while the process is managed dynamically in response to changes that occur during run time.
In Red Hat Process Automation Manager, case management includes the following core process engine features:
- Case file instance
- A per case runtime strategy
- Case comments
- Milestones
- Stages
- Ad hoc fragments
- Dynamic tasks and processes
- Case identifier (correlation key)
- Case lifecycle (close, reopen, cancel, destroy)
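The case lifecycle listed above is typically driven through the Process Server (KIE Server) REST API. The following is an illustrative sketch of how a start-case request is assembled; the endpoint path and payload field names follow the KIE Server case-management API but should be treated as assumptions and verified against your server's Swagger documentation, and the base URL, container ID, and case definition ID are placeholders:

```python
# Sketch: starting a case instance through the Process Server REST API.
# Endpoint path and payload field names are assumptions to verify against
# your server's Swagger docs; the base URL and IDs are placeholders.
import json

BASE_URL = "http://localhost:8080/kie-server/services/rest"  # placeholder

def start_case_url(container_id, case_def_id):
    """Build the start-case endpoint URL for a container and case definition."""
    return (f"{BASE_URL}/server/containers/{container_id}"
            f"/cases/{case_def_id}/instances")

def start_case_payload(case_data, user_roles=None, group_roles=None):
    """Assemble the case file sent in the POST body when starting a case."""
    return {
        "case-data": case_data,
        "case-user-assignments": user_roles or {},
        "case-group-assignments": group_roles or {},
    }

url = start_case_url("itorders", "itorders.orderhardware")
body = start_case_payload({"hwSpec": "standard laptop"}, {"owner": "cami"})
print(url)
print(json.dumps(body, sort_keys=True))
```

Closing, reopening, canceling, and destroying a case instance are performed through sibling endpoints on the same case-instance resource.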
A case definition is always an ad hoc process definition and does not require an explicit start node. The case definition is the main entry point for the business use case.
A process definition can still be introduced as a supporting construct of the case and can be invoked either as defined in the case definition or dynamically to bring in additional processing when required. A case definition defines the following new objects:
- Activities (required)
- Case file (required)
- Milestones
- Roles
- Stages
Chapter 2. Case Management Model and Notation
You can use Business Central to import, view, and modify the content of Case Management and Notation (CMMN) files. When authoring a project, you can import your case management model and then select it from the asset list to view or modify it in a standard XML editor.
The following CMMN constructs are currently available:
- Tasks (human task, process task, decision task, case task)
- Discretionary tasks (same as above)
- Stages
- Milestones
- Case file items
- Sentries (entry and exit)
Required, repeat, and manual activation tasks are currently not supported. Sentries for individual tasks are limited to entry criteria while entry and exit criteria are supported for stages and milestones. Decision tasks map by default to a DMN decision. Event listeners are not supported.
Red Hat Process Automation Manager does not provide any modeling capabilities for CMMN and focuses solely on the execution of the model.
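The constructs above map onto the CMMN 1.1 XML vocabulary. The following fragment is an illustrative sketch only, written against the CMMN 1.1 namespace; it is not a complete, validated model, and the element IDs and names are hypothetical:

```xml
<definitions xmlns="http://www.omg.org/spec/CMMN/20151109/MODEL"
             targetNamespace="http://example.org/cases">
  <case id="ITOrdersCase" name="IT Orders">
    <casePlanModel id="planModel" name="IT Orders plan">
      <!-- A human task and a milestone that becomes available once
           the task completes (guarded by an entry sentry). -->
      <planItem id="pi_orderTask" definitionRef="orderTask"/>
      <planItem id="pi_approved" definitionRef="approvedMilestone">
        <entryCriterion id="entry1" sentryRef="s1"/>
      </planItem>
      <sentry id="s1">
        <planItemOnPart sourceRef="pi_orderTask">
          <standardEvent>complete</standardEvent>
        </planItemOnPart>
      </sentry>
      <humanTask id="orderTask" name="Prepare hardware spec"/>
      <milestone id="approvedMilestone" name="Order approved"/>
    </casePlanModel>
  </case>
</definitions>
```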
Chapter 3.
Chapter 4. Subcases
Subcases provide the flexibility to compose complex cases that consist of other cases. This means that you can split large and complex cases into multiple layers of abstraction and even multiple case projects. This is similar to splitting a process into multiple subprocesses.
A subcase is another case definition that is invoked from within another case instance or a regular process instance. It has all of the capabilities of a regular case instance:
- It has a dedicated case file.
- It is isolated from any other case instance.
- It has its own set of case roles.
- It has its own case prefix.
You can use the process designer to add subcases to your case definition. A subcase is a case within your case project, similar to having a subprocess within your process. Subcases can also be added to a regular business process. Doing this enables you to start a case from within a process instance.
You can find the Sub Case asset in the case definition process designer Object Library, under Cases:
The Sub Case Data I/O window supports the following set of input parameters that enable you to configure and start the subcase:
- Independent
- Optional indicator that tells the process engine whether or not the case instance is independent. If it is independent, the main case instance does not wait for its completion. The value of this property is false by default.
- GroupRole_XXX
- Optional group to case role mapping. The role names belonging to this case instance can be referenced here, meaning that participants of the main case can be mapped to participants of the subcase. This means that the group assigned to the main case is automatically assigned to the subcase, where XXX is the role name and the value of the property is the value of the group role assignment.
- DataAccess_XXX
- Optional data access restrictions, where XXX is the name of the data item and the value of the property is the access restrictions.
- DestroyOnAbort
- Optional indicator that tells the process engine whether to cancel or destroy the subcase when the subcase activity is aborted. The default value is true.
- UserRole_XXX
- Optional user to case role mapping. You can reference the case instance role names here, meaning that an owner of the main case can be mapped to an owner of the subcase. The person assigned to the main case is automatically assigned to the subcase, where XXX is the role name and the value of the property is the value of the user role assignment.
- Data_XXX
- Optional data mapping from this case instance or business process to a subcase, where XXX is the name of the data in the subcase being targeted. This parameter can be provided as many times as needed.
- DeploymentId
- Optional deployment ID (or container ID in the context of Process Server) that indicates where the targeted case definition is located.
- CaseDefinitionId
- The mandatory case definition ID to be started.
- CaseId
- The case instance ID of the subcase after it is started.
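The prefix-plus-name convention used by these parameters (UserRole_XXX, GroupRole_XXX, Data_XXX) can be illustrated with a small sketch that assembles the input map a Sub Case activity would receive; the role names, variable names, and case definition ID below are hypothetical:

```python
# Sketch: assembling Sub Case input parameters using the naming conventions
# described above. Role names, data names, and the case definition ID are
# hypothetical examples.

def subcase_inputs(case_def_id, user_roles, group_roles, data, independent=False):
    """Build the Sub Case Data I/O input map from plain Python values."""
    inputs = {
        "CaseDefinitionId": case_def_id,   # mandatory: case definition to start
        "Independent": independent,        # main case waits unless True
    }
    for role, user in user_roles.items():
        inputs[f"UserRole_{role}"] = user          # user -> subcase role mapping
    for role, group in group_roles.items():
        inputs[f"GroupRole_{role}"] = group        # group -> subcase role mapping
    for name, value in data.items():
        inputs[f"Data_{name}"] = value             # data passed into the subcase
    return inputs

params = subcase_inputs(
    "itorders.place-order",                 # hypothetical subcase definition ID
    user_roles={"owner": "cami"},
    group_roles={"manager": "managers"},
    data={"hwSpec": "#{caseFile_hwSpec}"},
)
```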
Chapter 5.
Chapter 6. Adding dynamic tasks and processes to a case using the API
You can add dynamic tasks and processes to a case during run time to address unforeseen changes that can occur during the lifecycle of a case. Dynamic activities are not defined in the case definition and therefore they cannot be signaled the way that a defined ad hoc task or process can.
You can add the following dynamic activities to a case:
- User tasks
- Service tasks (any type that is implemented as a work item)
- Reusable subprocesses
Dynamic user and service tasks are added to a case instance and immediately executed. Depending on the nature of a dynamic task, it might start and wait for completion (user task) or directly complete after execution (service task). For dynamic subprocesses, the process engine requires a KJAR containing the process definition for that dynamic process to locate the process by its ID and execute it. This subprocess belongs to the case and has access to all of the data in the case file.
You can use the Swagger REST API application to create dynamic tasks and subprocesses.
Prerequisites
- You are logged in to Business Central and a case instance has been started using the Showcase application. For more information about using Showcase, see Using the Showcase application for case management.
Procedure
In a web browser, open the following URL:
/
- Open the list of available endpoints under Case instances :: Case Management.
Locate the POST method endpoints for creating dynamic activities.
POST /server/containers/{id}/cases/instances/{caseId}/tasks
Adds a dynamic task (user or service, depending on the payload) to a case instance.
POST /server/containers/{id}/cases/instances/{caseId}/stages/{caseStageId}/tasks
Adds a dynamic task (user or service, depending on the payload) to a specific stage within a case instance.
POST /server/containers/{id}/cases/instances/{caseId}/processes/{pId}
Adds a dynamic subprocess, identified by the process ID, to a case instance.
POST /server/containers/{id}/cases/instances/{caseId}/stages/{caseStageId}/processes/{pId}
Adds a dynamic subprocess, identified by the process ID, to a stage within a case instance.
- To open the documentation, click the REST endpoint required to create the dynamic task or process.
- Click Try it out and enter the parameters and body required to create the dynamic activity.
- Click Execute to create the dynamic task or subprocess using the REST API.
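Outside the Swagger UI, the same endpoints can be called from any HTTP client. The following minimal sketch only builds the four endpoint URLs listed above; the base URL, container ID, case ID, stage name, and process ID are hypothetical placeholders, not values from the sample project:

```python
# Sketch: building the four dynamic-activity endpoint URLs shown above.
# All concrete values below are hypothetical -- substitute values from
# your own Process Server deployment.
BASE = "http://localhost:8080/kie-server/services/rest/server"  # assumed host

def case_endpoints(container_id, case_id, stage_id="", process_id=""):
    """Return the dynamic-activity endpoints for one case instance."""
    root = f"{BASE}/containers/{container_id}/cases/instances/{case_id}"
    return {
        "task": f"{root}/tasks",
        "stage_task": f"{root}/stages/{stage_id}/tasks",
        "process": f"{root}/processes/{process_id}",
        "stage_process": f"{root}/stages/{stage_id}/processes/{process_id}",
    }

urls = case_endpoints("itorders_1.0.0", "IT-0000000001",
                      stage_id="Order placed", process_id="place-order")
```

Each URL is then used with a POST request and the payloads described in the sections that follow.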
6.1. Creating a dynamic user task using the REST API
You can create a dynamic user task during case run time using the REST API. To create a dynamic user task, you must provide the following information:
- Task name
- Task description (optional, but recommended)
- Actors or groups (or both)
- Input data
Use the following procedure to create a dynamic user task for the IT_Orders sample project available in Business Central using the Swagger REST API tool.
In the Swagger application, click the following POST method endpoint to open the details:
/server/containers/{id}/cases/instances/{caseId}/tasks
Click Try it out and then input the following parameters:
Table 6.1. Parameters
- body
{ "name" : "RequestManagerApproval", "data" : { "reason" : "Fixed hardware spec", "caseFile_hwSpec" : "#{caseFile_hwSpec}" }, "description" : "Ask for manager approval again", "actors" : "manager", "groups" : "" }
- In the Swagger application, click Execute to create the dynamic task.
This procedure creates a new user task associated with case IT-0000000001. The task is assigned to the person assigned to the manager case role. This task has two input variables:
- reason
- caseFile_hwSpec: defined as an expression to allow run-time capture of process or case data.
Some tasks include a form that provides a user-friendly UI for the task, which you can locate by task name. In the IT Orders case, the RequestManagerApproval task includes the form RequestManagerApproval-taskform.form in its KJAR.
After it is created, the task appears in the assignee’s Task Inbox in Business Central.
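The same request can also be issued without Swagger. The sketch below builds it with Python's standard library; the server URL and container ID are hypothetical placeholders, and an Authorization header would still need to be added before sending:

```python
import json
import urllib.request

# Sketch: the dynamic user task from Table 6.1 created with plain urllib
# instead of the Swagger UI. Host and container ID are hypothetical.
endpoint = ("http://localhost:8080/kie-server/services/rest/server"
            "/containers/itorders_1.0.0/cases/instances/IT-0000000001/tasks")

payload = {
    "name": "RequestManagerApproval",
    "data": {
        "reason": "Fixed hardware spec",
        "caseFile_hwSpec": "#{caseFile_hwSpec}",  # resolved at run time
    },
    "description": "Ask for manager approval again",
    "actors": "manager",
    "groups": "",
}

request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)
# With a running Process Server and an Authorization header added,
# urllib.request.urlopen(request) would submit the task.
```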
6.2. Creating a dynamic service task using the REST API
Service tasks are usually less complex than user tasks, although they might need more data to execute properly. Service tasks require the following information:
- name: The name of the activity
- nodeType: The type of node that is used to find the work item handler
- data: The map of data required to properly handle the execution
During case run time, you can create a dynamic service task with the same endpoint as a user task, but with a different body payload.
Use the following procedure to create a dynamic service task for the IT_Orders sample project available in Business Central using the Swagger REST API tool. You can use the same endpoint for the REST API without Swagger.
In the Swagger application, click the following POST method endpoint to open the details:
/server/containers/{id}/cases/instances/{caseId}/stages/{caseStageId}/tasks
Click Try it out and then enter the following parameters:
Table 6.2. Parameters
- body
{ "name" : "InvokeService", "data" : { "Parameter" : "Fixed hardware spec", "Interface" : "org.jbpm.demo.itorders.services.ITOrderService", "Operation" : "printMessage", "ParameterType" : "java.lang.String" }, "nodeType" : "Service Task" }
- In the Swagger application, click Execute to create the dynamic task.
In this example, a Java-based service is executed. It consists of an interface with the public class org.jbpm.demo.itorders.services.ITOrderService and the public printMessage method with a single String argument. When executed, the parameter value is passed to the method for execution.
Numbers, names, and other types of data given to create service tasks depend on the implementation of the service task's handler. In the example provided, the org.jbpm.process.workitem.bpmn2.ServiceTaskHandler handler is used. For any custom service tasks, ensure the handler is registered in the deployment descriptor in the Work Item Handlers section, where the name is the same as the nodeType used for creating a dynamic service task.
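Because the data map's field names must match what the registered handler expects, it can help to build the payload with a small helper. This sketch follows the field names from Table 6.2 (Interface, Operation, Parameter, ParameterType), which apply to the ServiceTaskHandler used in this example; other handlers may expect different keys:

```python
import json

# Sketch: helper that builds the dynamic service task payload from
# Table 6.2. nodeType must match the handler name registered in the
# deployment descriptor's Work Item Handlers section.
def make_service_task(name, interface, operation, parameter,
                      parameter_type="java.lang.String",
                      node_type="Service Task"):
    return {
        "name": name,
        "data": {
            "Parameter": parameter,
            "Interface": interface,
            "Operation": operation,
            "ParameterType": parameter_type,
        },
        "nodeType": node_type,
    }

body = json.dumps(make_service_task(
    "InvokeService",
    "org.jbpm.demo.itorders.services.ITOrderService",
    "printMessage",
    "Fixed hardware spec",
))
```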
6.3. Creating a dynamic subprocess using the REST API
When creating a dynamic subprocess, only optional data is provided. There are no special parameters as there are when creating dynamic tasks.
Use the following procedure to use the Swagger REST API to create a dynamic subprocess task for the IT_Orders sample project available in Business Central.
In the Swagger application, click the following POST method endpoint to open the details:
/server/containers/{id}/cases/instances/{caseId}/processes/{pId}
Click Try it out and enter the following parameters:
Table 6.3. Parameters
The pId is the process ID of the subprocess to be created.
- body
{ "placedOrder" : "Manually" }
- In the Swagger application, click Execute to start the dynamic subprocess.
In this example, the place-order subprocess has been started in the IT Orders case with the case ID IT-0000000001. You can see this process in Business Central under Menu → Manage → Process Instances.
If the described example has executed correctly, the place-order process appears in the list of process instances. Open the details of the process and note that the correlation key for the process includes the IT Orders case instance ID, and the Process Variables list includes the variable placedOrder with the value Manually, as delivered in the REST API body.
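The subprocess request above can be sketched as a URL plus an optional data body. The base URL and container ID below are hypothetical placeholders; the process ID and body match the place-order example:

```python
import json

# Sketch: starting the place-order dynamic subprocess with an HTTP POST.
# Base URL and container ID are hypothetical placeholders.
def subprocess_url(base, container_id, case_id, process_id):
    return (f"{base}/containers/{container_id}"
            f"/cases/instances/{case_id}/processes/{process_id}")

url = subprocess_url(
    "http://localhost:8080/kie-server/services/rest/server",
    "itorders_1.0.0", "IT-0000000001", "place-order")
body = json.dumps({"placedOrder": "Manually"})  # optional data only
```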
Chapter 7. Case roles
Case roles provide an additional layer of abstraction for user participation in case handling. Roles, users, and groups are used for different purposes in case management.
- Roles
- Roles drive the authorization for a case instance, and are used for user activity assignments. A user or one or more groups can be assigned to the owner role. The owner is whoever the case belongs to. Roles are not restricted to a single set of people or groups as part of a case definition. Use roles to specify task assignments instead of assigning a specific user or group to a task assignment to ensure that the case remains dynamic.
- Groups
- A group is a collection of users who are able to carry out a particular task or have a set of specified responsibilities. You can assign any number of people to a group and assign any group to a role. You can add or change members of a group at any time, so you should never hard code a group to a particular task.
- Users
- A user is an individual who can be given a particular task when you assign them a role or add them to a group.
The following example illustrates how the preceding case management concepts apply to a hotel reservation with:
- Role = Guest
- Group = Receptionist, Maid
- User = Marilyn
The Guest role assignment affects the specific work of the associated case and is unique to all case instances. The number of users or groups that can be assigned to a role is limited by the Case Cardinality, which is set during role creation in the process designer and case definition. For example, the hotel reservation case has only one guest, while the IT_Orders sample project has two suppliers of IT hardware.
When roles are defined, case management must ensure that roles are not hard coded to a single set of people or groups as part of case definition and that they can differ for each case instance. This is why case role assignments are important.
Role assignments can be assigned or removed when a case starts or at any time when a case is active. Although roles are optional, use roles in case definitions to maintain an organized workflow.
Always use roles for task assignments instead of actual user or group names. This ensures that the case remains dynamic and actual user or group assignments can be made as late as required.
Roles are assigned to users or groups and authorized to perform tasks when a case instance is started.
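Role assignments made at case start can be expressed in the case file payload sent to the server. The sketch below uses the case-user-assignments and case-group-assignments keys, which follow the case file JSON format accepted by the Process Server REST start-case endpoint; verify the exact keys against your server's Swagger documentation, and note that the group name is hypothetical:

```python
import json

# Sketch: hotel-reservation role assignments supplied when starting a
# case instance. Payload keys are the assumed Process Server case file
# format; "cleaning-staff" is a hypothetical group name.
case_file = {
    "case-data": {},
    "case-user-assignments": {
        "Guest": "Marilyn",           # role -> user
    },
    "case-group-assignments": {
        "Housekeeping": "cleaning-staff",  # role -> group (hypothetical)
    },
}
body = json.dumps(case_file)
```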
7.1. Creating case roles
You can create and define case roles in the case definition when you design the case in the process designer. Case roles are configured on the case definition level to keep them separate from the actors involved in handling the case instance. Roles can be assigned to user tasks or used as contact references throughout the case lifecycle, but they are not defined in the case as a specific user or group of users.
Case instances include the individuals that are actually handling the case work. Assign roles when starting a new case instance. In order to keep cases flexible, you can modify case role assignment during case run time, although doing this has no effect on tasks already created based on the previous role assignment. The actor assigned to a role is flexible but the role itself remains the same for each case.
Prerequisites
- A case project that has a case definition exists in Business Central.
- The case definition asset is open in the process designer.
Procedure
- To define the roles involved in the case, open the Properties menu on the right side of the designer and open the Editor for Case Roles.
- Click Add Case Role to add a case role.
The case role requires a name for the role and a case cardinality. Case cardinality is the number of actors that are assigned to the role in any case instance. For example, the IT_Orders sample case management project includes the following roles:
Figure 7.1. ITOrders Case Roles
In this example, you can assign only one actor (a user or a group) as the case owner and assign only one actor to the manager role. The supplier role can have two actors assigned. Depending on the case, you can assign any number of actors to a particular role based on the configured case cardinality of the role.
7.3. Assigning a task to a role
Case management processes need to be as flexible as possible to accommodate changes that can happen dynamically during run time. This includes changing user assignments for new case instances or for active cases. For this reason, ensure that you do not hard-code roles to a single set of users or groups in the case definition. Instead, role assignments can be defined on the task nodes in the case definition, with users or groups assigned to the roles on case creation.
Use the following procedure to assign a case role to a task in the case definition.
Prerequisites
- A case definition has been created with case roles configured at the case definition level. For more information about creating case roles, see Creating case roles.
Procedure
- Open the Object Library on the left side of the process designer.
- Open the Tasks list and drag the user or service task you want to add to your case definition on to the process design palette.
- With the task node selected, open the Properties panel on the right side of the designer.
Click the field next to the Actors property and type the name of the role to which the task will be assigned. You can use the Groups property in the same way for group assignments.
For example, in the IT_Orders sample project, the Manager approval user task is assigned to the manager role:
In this example, after the Prepare hardware spec user task has been completed, the user assigned to the manager role receives the Manager approval task in their Task Inbox in Business Central.
The user assigned to the role can be changed during the case run time, but the task itself continues to have the same role assignment. For example, the person originally assigned to the manager role might need to take time off (if they become ill, for example), or they might unexpectedly leave the company. To respond to this change in circumstances, you can edit the manager role assignment so that someone else can be assigned the tasks associated with that role.
For information about how to change role assignments during case run time, see Modifying case role assignments during run time using Showcase or Modifying case role assignments during run time using REST API.
7.4. Modifying case role assignments during run time using Showcase
You can change case instance role assignments during case run time using the Showcase application. Roles are defined in the case definition and assigned to tasks in the case lifecycle. Roles cannot change during run time because they are predefined, but you can change the actors assigned to the roles to change who is responsible for carrying out case tasks.
Prerequisites
- There is an active case instance with users or groups already assigned to at least one case role.
Procedure
- In the Showcase application, click the case you want to work on in the Case list to open the case overview.
Locate the role assignment that you want to change in the Roles box in the lower-right corner of the page.
- To remove a single user or group from the role assignment, click the icon next to the assignment. In the confirmation window, click Remove to remove the user or group from the role.
- To remove all role assignments from a role, click the icon next to the role and select the Remove all assignments option. In the confirmation window, click Remove to remove all user and group assignments from the role.
- To change the role assignment from one user or group to another, click the icon next to the role and select the Edit option.
In the Edit role assignment window, delete the name of the assignee that you want to remove from the role assignment. Type the name of the user you want to assign to the role into the User field or the group you want to assign in the Group field.
At least one user or group must be assigned when editing a role assignment.
- Click Assign to complete the role assignment.
7.5. Modifying case role assignments during run time using REST API
You can change case instance role assignments during case run time using the REST API or Swagger application. Roles are defined in the case definition and assigned to tasks in the case life cycle. Roles cannot change during run time because they are predefined, but you can change the actors assigned to the roles to change who is responsible for carrying out case tasks.
The following procedure includes examples based on the IT_Orders sample project. You can use the same REST API endpoints in the Swagger application or any other REST API client, or using Curl.
Prerequisites
- An IT Orders case instance has been started with the owner, manager, and supplier roles already assigned to actors.
Procedure
Retrieve the list of current role assignments using a GET request on the following endpoint:
/server/containers/{id}/cases/instances/{caseId}/roles
Table 7.1. Parameters
This returns the following response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<case-role-assignment-list>
  <role-assignments>
    <name>owner</name>
    <users>Aimee</users>
  </role-assignments>
  <role-assignments>
    <name>manager</name>
    <users>Katy</users>
  </role-assignments>
  <role-assignments>
    <name>supplier</name>
    <groups>Lenovo</groups>
  </role-assignments>
</case-role-assignment-list>
To change the user assigned to the manager role, you must first remove the role assignment from the user Katy using a DELETE request on the following endpoint:
/server/containers/{id}/cases/instances/{caseId}/roles/{caseRoleName}
Include the following information in the Swagger client request:
Table 7.2. Parameters
Click Execute.
Execute the GET request from the first step again to check that the manager role no longer has a user assigned:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<case-role-assignment-list>
  <role-assignments>
    <name>owner</name>
    <users>Aimee</users>
  </role-assignments>
  <role-assignments>
    <name>manager</name>
  </role-assignments>
  <role-assignments>
    <name>supplier</name>
    <groups>Lenovo</groups>
  </role-assignments>
</case-role-assignment-list>
Assign the user Cami to the manager role using a PUT request on the following endpoint:
/server/containers/{id}/cases/instances/{caseId}/roles/{caseRoleName}
Include the following information in the Swagger client request:
Table 7.3. Parameters
Click Execute.
Execute the GET request from the first step again to check that the manager role is now assigned to Cami:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<case-role-assignment-list>
  <role-assignments>
    <name>owner</name>
    <users>Aimee</users>
  </role-assignments>
  <role-assignments>
    <name>manager</name>
    <users>Cami</users>
  </role-assignments>
  <role-assignments>
    <name>supplier</name>
    <groups>Lenovo</groups>
  </role-assignments>
</case-role-assignment-list>
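The remove-then-assign sequence above can be sketched as two HTTP calls built from one URL helper. The base URL and container ID are hypothetical placeholders, and the user query parameter is taken from the Swagger parameter tables; confirm it against your server's Swagger documentation:

```python
from urllib.parse import urlencode

# Sketch: the DELETE (remove Katy) and PUT (assign Cami) URLs for the
# manager role. Base URL and container ID are hypothetical placeholders.
def role_url(base, container_id, case_id, role, **params):
    url = (f"{base}/containers/{container_id}"
           f"/cases/instances/{case_id}/roles/{role}")
    return f"{url}?{urlencode(params)}" if params else url

base = "http://localhost:8080/kie-server/services/rest/server"
# Sent with the DELETE method: remove Katy from the manager role.
remove = role_url(base, "itorders_1.0.0", "IT-0000000001", "manager", user="Katy")
# Sent with the PUT method: assign Cami to the manager role.
assign = role_url(base, "itorders_1.0.0", "IT-0000000001", "manager", user="Cami")
```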
Chapter 8. Stages
Case management stages are collections of tasks.
For example, in a patient triage case, the first stage may consist of observing and noting any obvious physical symptoms or a description from the patient of what their symptoms are, followed by a second stage for tests, and a third for diagnosis and treatment.
There are three ways to complete a stage:
- By completion condition.
- By terminal end event.
- By setting the Completion Condition to autocomplete, which automatically completes the stage when there are no active tasks left in the stage.
8.1. Defining a stage
The IT_Orders sample project can also be defined using stages in the following way:
Figure 8.1. IT_Orders project stages example
- If the stage is to be activated by a signal event, configure the SignalRef on the signal node with the name of the stage that you configured in the first step.
- Alternatively, configure the AdHocActivationCondition property to activate the stage when the condition is met.
- Configure the Completion Condition property field with a free-form Drools expression for the completion condition you require. For more information about stage completion conditions, see Section 8.2, “Configuring stage activation and completion conditions”.
- Once the stage has been configured, connect it to the next activity in the case definition using a sequence flow line.
8.2. Configuring stage activation and completion conditions
Stages can be triggered by a signal event or activated automatically when a configured activation condition is met.
Figure 8.2. IT_Orders project stages example
Activation conditions can also be configured using a free-form Drools rule to configure the
AdHocActivationCondition property to activate a stage.
Prerequisites
- You have created a case definition in the Business Central process designer.
- You have added an ad hoc subprocess to the case definition that is to be used as a stage.
Procedure
- With the stage selected on the case design canvas, open the Properties panel on the right.
- Open the AdHocActivationCondition property editor to define an activation condition for the start node. For example, set autostart: true to make the stage automatically activated when a new case instance is started.
- The Completion Condition is set to autocomplete by default. To change this, open the property editor to define a completion condition using a free-form Drools expression. For example, set org.kie.api.runtime.process.CaseData(data.get("ordered") == true) to activate the second stage in the example shown previously.
For more examples and information about the conditions used in the IT_Orders sample project, see Getting started with case management.
8.3. Adding a dynamic task to a stage
Dynamic tasks can be added to a case stage during run time using a REST API request. This is similar to adding a dynamic task to a case instance, but you must also define the caseStageId of the stage to which the task is added.
The IT_Orders sample case project can be defined using stages instead of milestones.
Use the following procedure to add a dynamic task to a stage in the IT_Orders sample project available in Business Central using the Swagger REST API tool. The same endpoint can be used for REST API without Swagger.
Prerequisites
- The IT_Orders sample project BPMN2 case definition has been reconfigured to use stages instead of milestones, as demonstrated in the provided example. For information about configuring stages for case management, see Defining a stage.
Procedure
Start a new case using the Showcase application. For more information about using Showcase, see Using the Showcase application for case management.
Because this case is designed using stages, the case details page shows stage tracking:
The first stage starts automatically when the case instance is created.
As a manager user, approve the hardware specification in Business Central under Menu → Track → Task Inbox, then check the progress of the case.
- In Business Central, click Menu → Manage → Process Instances and open the active case instance IT-0000000001.
Click Diagram to see the case progress diagram:
In a web browser, open the following URL:
/.
- Open the list of available endpoints under Case instances :: Case Management.
Click the following POST method endpoint to open the details:
/server/containers/{id}/cases/instances/{caseId}/stages/{caseStageId}/tasks
Click Try it out and complete the following parameters:
Table 8.1. Parameters
The caseStageId is the name of the stage in the case definition where the dynamic task is to be created. The body can be any dynamic user or service task payload. See Creating a dynamic subprocess using the REST API or Creating a dynamic service task using the REST API for examples.
After the dynamic task has been added to the stage, it must be completed in order for the stage to complete and for the case process to move on to the next item in the case flow.
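The stage-targeted request can be sketched as a URL that includes the stage name plus any task payload. The host, container ID, and the task body values below are hypothetical placeholders:

```python
import json

# Sketch: adding a dynamic task to a named stage. The stage name comes
# from the case definition; host, container ID, and the task fields are
# hypothetical placeholders.
def stage_task_url(base, container_id, case_id, stage_name):
    return (f"{base}/containers/{container_id}"
            f"/cases/instances/{case_id}/stages/{stage_name}/tasks")

url = stage_task_url(
    "http://localhost:8080/kie-server/services/rest/server",
    "itorders_1.0.0", "IT-0000000001", "Order placed")

# The body can be any dynamic user or service task payload, for example:
body = json.dumps({
    "name": "FollowUpCall",          # hypothetical task name
    "description": "Call the customer to confirm the order",
    "actors": "owner",
    "groups": "",
    "data": {},
})
```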
Chapter 9. Milestones
A milestone represents a single point of achievement within a case instance. Newly created milestones are not set to Adhoc autostart by default.
Case management milestones are generally useful for Key Performance Indicator (KPI) tracking or for identifying the tasks that are still to be completed.
Milestones can be triggered by a signal or automatically if Adhoc autostart is configured when a case instance starts. Milestones can be triggered as many times as required; however, a milestone is achieved directly when its condition is met.
9.1. Milestone configuration and triggering
Case milestones can be configured to start automatically when a case instance starts, or they can be triggered using a signal, which is configured manually during the case design.
Procedure
- With the signal node selected on the case design palette, open the Properties panel on the right.
- Set the Signal Scope property to Process Instance.
Open the SignalRef expression editor and type the name of the milestone to be triggered.
Click Ok to finish.
Chapter 10. Rules in case management
Cases are data-driven, rather than following a sequential flow. The steps required to resolve a case rely on data, which is provided by people involved in the case, or the system can be configured to trigger further actions based on the data available. In the latter case, you can use business rules to decide what further actions are required for the case to continue or reach a resolution.
Data can be inserted into the case file at any point during the case. The decision engine constantly monitors case file data, meaning that rules react to data that is contained in the case file. Using rules to monitor and respond to changes in the case file data provides a level of automation that drives cases forward.
10.1. Using rules to drive cases
Refer to the case management IT_Orders sample project in Business Central.
Suppose that the particular hardware specification provided by the supplier is incorrect or invalid. The supplier needs to provide a new, valid order so that the case can continue. Rather than wait for the manager to reject the invalid specification and create a new request for the supplier, you can create a business rule that will react immediately when the case data indicates that the provided specification is invalid. It can then create a new hardware specification request for the supplier.
The following procedure demonstrates how to create and use a business rule to execute the described scenario.
Prerequisites
- The IT_Orders sample project is open in Business Central, but it is not deployed to the Process Server.
The ServiceRegistry is part of the jbpm-services-api module and must be available on the class path.
Note
If building the project outside of Business Central, the following dependencies must be added to the project:
org.jbpm:jbpm-services-api
org.jbpm:jbpm-case-mgmt-api
Procedure
Create the following business rule file called validate-document.drl:
package defaultPackage;

import java.util.Map;
import java.util.HashMap;

import org.jbpm.casemgmt.api.CaseService;
import org.jbpm.casemgmt.api.model.instance.CaseFileInstance;
import org.jbpm.document.Document;
import org.jbpm.services.api.service.ServiceRegistry;

rule "Invalid document name - reupload"
when
    $caseData : CaseFileInstance()
    Document(name == "invalid.pdf") from $caseData.getData("hwSpec")
then
    System.out.println("Hardware specification is invalid");
    $caseData.remove("hwSpec");
    update($caseData);
    CaseService caseService = (CaseService) ServiceRegistry.get().service(ServiceRegistry.CASE_SERVICE);
    caseService.triggerAdHocFragment($caseData.getCaseId(), "Prepare hardware spec", null);
end
This business rule detects when a file named invalid.pdf is uploaded to the case file. It then removes the invalid.pdf document and creates a new instance of the Prepare hardware spec user task.
Click Deploy to build the IT_Orders project and deploy it to a Process Server.
To configure the Process Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production. To configure the deployment behavior for a corresponding project in Business Central, go to project Settings → General Settings → Version and toggle the Development Mode option. By default, Process Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a Process Server that is in production mode.
- Create a file called invalid.pdf and save it locally.
- Create a file called valid-spec.pdf and save it locally.
- In Business Central, go to Menu → Projects → IT_Orders to open the IT_Orders project.
- Click Import Asset in the upper-right corner of the page.
Upload the validate-document.drl file to the default package (src/main/resources).
The validate-document.drl rule is shown in the rule editor. Click Save or close to exit the rule editor.
- Open the Showcase application by either clicking the Apps launcher (if it is installed), or go to
/.
Start a new case for the IT_Orders project.
In this example, Aimee is the case owner, Katy is the manager, and the supplier group is supplier.
- Log out of Business Central and log back in as a user that belongs to the supplier group.
- Go to Menu → Track → Task Inbox.
- Open the Prepare hardware spec task and click Claim. This assigns the task to the logged-in user.
Click Start, browse to locate the invalid.pdf hardware specification file, and upload the file.
Click Complete.
The value in the Task Inbox for the Prepare hardware spec task is Ready.
In Showcase, click Refresh in the upper-right corner. Notice that a Prepare hardware spec task appears in the Completed column and another appears in the In Progress column.
This is because the first Prepare hardware spec task has been completed with the specification file invalid.pdf. As a result, the business rule causes the task and file to be discarded, and a new user task is created.
- In the Business Central Task Inbox, repeat steps 12 and 13, but upload the valid-spec.pdf file instead of invalid.pdf.
- Log out of Business Central and log back in again as Katy.
- Go to Menu → Track → Task Inbox. There are two Manager approval tasks for Katy, one with the invalid.pdf hardware specification file and the other with the valid-spec.pdf file.
Open, claim, and complete each task:
- Check the approve box for the task that includes the valid-spec.pdf file, then click Complete.
- Do not check the approve box on the task with the invalid.pdf file, then click Complete.
- Go to Menu → Manage → Process Instances and open the Order for IT hardware process instance.
Open the Diagram tab. The Order rejected and Place order processes are now marked as Completed.
Similarly, the case details page in Showcase lists two Manager approval tasks in the Completed column.
Chapter 11. Case management security
Cases are configured at the case definition level with case roles. These are generic participants that are involved in case handling. These roles can be assigned to user tasks or used as contact references. Roles are not hard-coded to specific users or groups, to keep the case definition independent of the actual actors involved in any given case instance. You can modify case role assignments at any time as long as the case instance is active, though modifying a role assignment does not affect tasks already created based on the previous role assignment.
Case instance security is enabled by default. The case definition prevents case data from being accessed by users who do not belong to the case. Unless a user has a case role assignment (either as a user or as a group member), they are not able to access the case instance.
Case security is one of the reasons why it is recommended that you assign case roles when starting a case instance, as this prevents tasks from being assigned to users who should not have access to the case.
11.1. Configuring security for case management
You can turn off case instance authorization by setting the following system property to
false:
org.jbpm.cases.auth.enabled
This system property is just one of the security components for case instances. In addition, you can configure case operations at the execution server level using the case-authorization.properties file, available at the root of the class path of the execution server application (kie-server.war/WEB-INF/classes).
Using a simple configuration file for all possible case definitions encourages you to think about case management as domain-specific.
AuthorizationManager for case security is pluggable, which allows you to include custom code for specific security handling.
You can restrict the following case instance operations to case roles:
- CANCEL_CASE
- DESTROY_CASE
- REOPEN_CASE
- ADD_TASK_TO_CASE
- ADD_PROCESS_TO_CASE
- ADD_DATA
- REMOVE_DATA
- MODIFY_ROLE_ASSIGNMENT
- MODIFY_COMMENT
Prerequisites
- The Red Hat Process Automation Manager Process Server is not running.
Procedure
Open the JBOSS_HOME/standalone/deployments/kie-server.war/WEB-INF/classes/case-authorization.properties file in your preferred editor.
By default, the file contains the following operation restrictions:
CLOSE_CASE=owner,admin
CANCEL_CASE=owner,admin
DESTROY_CASE=owner,admin
REOPEN_CASE=owner,admin
You can add or remove role permissions for these operations.
- To remove permission for a role to perform an operation, remove it from the list of authorized roles for that operation in the case-authorization.properties file. For example, removing the admin role from the CLOSE_CASE operation restricts permission to close a case to the case owner for all cases.
To give a role permission to perform a case operation, add it to the list of authorized roles for that operation in the case-authorization.properties file. For example, to allow anyone with the manager role to perform a CLOSE_CASE operation, add it to the list of roles, separated by a comma:
CLOSE_CASE=owner,admin,manager
To add role restrictions to other case operations listed in the file, remove the # from the line and list the role names in the following format:
OPERATION=role1,role2,roleN
Operations in the file that begin with # have their restrictions ignored and can be performed by anyone involved in the case.
- When you have finished assigning role permissions, save and close the case-authorization.properties file.
Start the execution server.
The case authorization settings apply to all cases on the execution server.
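As an illustration, an edited case-authorization.properties file that keeps the defaults and also restricts some of the commented-out operations might look like the following. The role lists on the last three lines are illustrative assumptions, not recommended values:

```
CLOSE_CASE=owner,admin
CANCEL_CASE=owner,admin
DESTROY_CASE=owner,admin
REOPEN_CASE=owner,admin
ADD_DATA=owner,admin,manager
REMOVE_DATA=owner,admin
MODIFY_ROLE_ASSIGNMENT=owner,admin
```

With this file in place, only the listed roles can perform each operation; operations left commented out remain open to anyone involved in the case.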
Chapter 12. Closing cases
A case instance can be completed when there are no more activities to be performed and the business goal is achieved, or it can be closed prematurely. Usually the case owner closes the case when all work is completed and the case goals have been met. When you close a case, consider adding a comment about why the case instance is being closed. A closed case can be reopened later with the same case ID if required.
You can close case instances remotely using Process Server REST API requests or directly in the Showcase application.
12.1. Closing a case using the Process Server REST API
You can use a REST API request to close a case instance.
POSTrequest with the following endpoint:
/server/containers/{id}/cases/instances/{caseId}
Click Try it out and fill in the required parameters:
Table 12.1. Parameters
- Optionally, you can include a comment to be included in the case file. To leave a comment, type it into the
bodytext field as a
String.
- Click Execute to close the case.
- To confirm the case is closed, open the Showcase application and change the case list status to Closed.
12.2. Closing a case in the Showcase application
A case instance is complete when no more activities need to be performed and the business goal has been achieved. After a case is complete, you can close the case to indicate that the case is complete and that no further work is required. When you close a case, consider adding a specific comment about why you are closing the case. If needed, you can reopen the case later with the same case ID.
You can use the Showcase application to close a case instance at any time. From Showcase, you can easily view the details of the case or leave a comment before closing it.
Prerequisites
- You are logged in to the Showcase application and are the owner or administrator for a case instance that you want to close.
Procedure
- In the Showcase application, locate the case instance you want to close from the list of case instances.
- To close the case without viewing the details first, click Close.
To close the case from the case details page, click the case in the list to open it.
From the case overview page you can add comments to the case and verify that you are closing the correct case based on the case information.
- Click Close to close the case.
- Click Back to Case List in the upper-left corner of the page to return to the Showcase case list view.
- Click the drop-down list next to Status and select Canceled to view the list of closed and canceled cases.
Chapter 13. Canceling or destroying a case
Cases can be canceled if they are no longer required and do not require any case work to be performed. Cases that are canceled can be reopened later with the same case instance ID and case file data. In some cases, you might want to permanently destroy a case so that it cannot be reopened.
Cases can only be canceled or destroyed using an API request.
DELETErequest with the following endpoint:
/server/containers/{id}/cases/instances/{caseId}
You can cancel a case using the
DELETErequest. Optionally, you can also destroy the case using the
destroyparameter.
Click Try it out and fill in the required parameters:
Table 13.1. Parameters
- Click Execute to cancel (or destroy) the case.
- To confirm the case is canceled, open the Showcase application and change the case list status to Canceled. If the case has been destroyed, it will no longer appear in any case list.
Chapter 14. Additional resources
Appendix A. Versioning information
Documentation last updated on Wednesday, May 8, 2019. | https://access.redhat.com/documentation/en-us/red_hat_process_automation_manager/7.3/html-single/designing_and_building_cases_for_case_management/index | CC-MAIN-2021-31 | refinedweb | 7,217 | 54.12 |
Given an arbitrary function, wrap it so that it does variable sharing.
tf.compat.v1.make_template( name_, func_, create_scope_now_=False, unique_name_=None, custom_getter_=None, **kwargs )
This wraps
func_ in a Template and partially evaluates it. Templates are
functions that create variables the first time they are called and reuse them
thereafter. In order for
func_ to be compatible with a
Template it must
have the following properties:
- The function should create all trainable variables and any variables that should be reused by calling
tf.compat.v1.get_variable. If a trainable variable is created using
tf.Variable, then a ValueError will be thrown. Variables that are intended to be locals can be created by specifying
tf.Variable(..., trainable=false).
- The function may use variable scopes and other templates internally to create and reuse variables, but it shouldn't use
tf.compat.v1.global_variablesto capture variables that are defined outside of the scope of the function.
- Internal scopes and variable names should not depend on any arguments that are not supplied to
make_template. In general you will get a ValueError telling you that you are trying to reuse a variable that doesn't exist if you make a mistake.
In the following example, both
z and
w will be scaled by the same
y. It
is important to note that if we didn't assign
scalar_name and used a
different name for z and w that a
ValueError would be thrown because it
couldn't reuse the variable.
def my_op(x, scalar_name): var1 = tf.compat.v1.get_variable(scalar_name, shape=[], initializer=tf.compat.v1.constant_initializer(1)) return x * var1 scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y') z = scale_by_y(input1) w = scale_by_y(input2)
As a safe-guard, the returned function will raise a
ValueError after the
first call if trainable variables are created by calling
tf.Variable.
If all of these are true, then 2 properties are enforced by the template:
- Calling the same template multiple times will share all non-local variables.
- Two different templates are guaranteed to be unique, unless you reenter the same variable scope as the initial definition of a template and redefine it. An examples of this exception:
def my_op(x, scalar_name): var1 = tf.make_template('scale_by_y', my_op, scalar_name='y') z2 = scale_by_y2(input1) w2 = scale_by_y2(input2)
Depending on the value of
create_scope_now_, the full variable scope may be
captured either at the time of first call or at the time of construction. If
this option is set to True, then all Tensors created by repeated calls to the
template will have an extra trailing _N+1 to their name, as the first time the
scope is entered in the Template constructor no Tensors are created..compat.v1.get_variable
custom_getterdocumentation for more information.
**kwargs: Keyword arguments to apply to
func_.
Returns:
A function to encapsulate a set of variables which should be created once
and reused. An enclosing scope will be created either when
make_template
is called or when
name_is None. | https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/compat/v1/make_template | CC-MAIN-2020-05 | refinedweb | 492 | 54.22 |
In a prior post, Jorge Morales described a number of techniques for how one could reduce build times for a Java based application when using OpenShift. Since then there have been a number of releases of the upstream OpenShift Origin project.
In the 1.1.2 release of Origin a new feature was added to builds called an Image Source, which can also be useful in helping to reduce build times by offloading repetitive build steps to a separate build process. This mechanism can for example be used to pre build assets which wouldn’t change often, and then have them automatically made available within the application image when it is being built.
To illustrate how this works, I am going to use an example from the Python world, using some experimental S2I builders for Python I have been working on. I will be using the All-In-One VM we make available for running OpenShift Origin on your laptop or desktop PC.
Deploying a Python CMS
The example I am going to start with is the deployment of a CMS system called Wagtail. This web application is implemented using the popular Django web framework for Python.
Normally Wagtail would require a database to be configured for storage of data. As I am more concerned with the build process here rather than seeing the site running, I am going to skip the database setup for now.
To create the initial deployment for our Wagtail CMS site, we need to create a project, import the Docker image for the S2I builder I am going to use and then create the actual application.
$ oc new-project image-source Now using project "image-source" on server "". You can add applications to this project with the 'new-app' command. For example, try: $ oc new-app centos/ruby-22-centos7~ to build a new hello-world application in Ruby. $ oc import-image grahamdumpleton/warp0-debian8-python27 --confirm The import completed successfully. Name: warp0-debian8-python27 Created: Less than a second ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2016-03-02T00:14:37Z Docker Pull Spec: 172.30.118.161:5000/image-source/warp0-debian8-python27 Tag Spec Created PullSpec Image latest grahamdumpleton/warp0-debian8-python27 Less than a second ago grahamdumpleton/warp0-debian8-python27@sha256:ae947cc679d2c1... <same> $ oc new-app warp0-debian8-python27~ -->-demo-site:latest" * This image will be deployed in deployment config "wagtail-demo-site" * Port 8080/tcp will be load balanced by service "wagtail-demo-site" * Other containers can access this service through the hostname "wagtail-demo-site" --> Creating resources with label app=wagtail-demo-site ... imagestream "wagtail-demo-site" created buildconfig "wagtail-demo-site" created deploymentconfig "wagtail-demo-site" created service "wagtail-demo-site" created --> Success Build scheduled for "wagtail-demo-site", use 'oc logs' to track its progress. Run 'oc status' to view your app.
The initial build and deployment of the Wagtail site will take a little while for a few reasons. The first is that because we didn’t already have the S2I builder loaded into our OpenShift cluster, it needs to download it from the Docker Hub registry where it resides. Because I live down in Australia where our Internet is only marginally better than using two tin cans joined by a piece of wet string, this can take some time.
The next most time consuming part of the process is one which actually needs to be run every time we do a build. That is that we need to download all the Python packages that the Wagtail CMS application requires. This includes Wagtail itself, Django, as well as database clients, image manipulation software and so on.
Many of the packages it requires are pure Python code and so it is just a matter of downloading the Python code and installing it. In other cases, such as with the database client and image manipulation software, it contains C extension modules which need to be first compiled into a dynamically loadable object library.
The delay points are therefore the time taken to download the packages from the Python package index, followed by actually code compilation times.
A final source of an extra delay for the initial deploy is the pushing up of the image to the nodes in the OpenShift cluster so that the application can then be started. This takes a little bit of extra time on the first deploy as all the layers of the base image for this S2I builder will not be present on each node. Subsequent deploys will not see this delay unless the S2I builder image itself were updated.
When finally done, for me down here in this Internet deprived land we call OZ, that takes a total time of just under 15 minutes. This included around 5 minutes to pull down the S2I builder the first time and about 5 minutes to push the final image out to the OpenShift nodes the first time.
The actual build of the Wagtail application itself, consisting of the pulling down and compilation of the required Python packages, therefore took about 5 minutes.
Because we are using an S2I builder, which downloads the application code from the Git repository, and downloads any Python packages, compiling and installing them, all in one step, we have no way of speeding things up by using separate layers in Docker. Well we could, but it would mean needing to create a custom version of the S2I builder which had preinstalled into the base image all the Python packages we required. Although technically possible, this would not be the preferred option.
Using a Python Wheelhouse
If we were using Docker directly, an alternative one can use with Python is what is called a Wheelhouse.
What this entails is downloading and pre building all the Python packages we require to produce what are called Python wheels. These are stored in a directory called a ‘wheelhouse’.
When we now go to build our Python application, when installing all the packages we want, we would point the Python ‘pip’ program used to install the packages at our directory of wheels we pre built for the packages. What ‘pip’ will then do is that rather than download the packages and build them again, it will use our pre built wheels instead. We are therefore able to skip all that time taken to download and compile everything, resulting in a reduction of the time taken to build the Docker image.
Integrating the use of a wheelhouse directory into a build process when using Docker directly can be quite fiddly and involves a number of steps. Using the capabilities of OpenShift, we can however make that a very simple process.
All we need is an S2I builder for Python which is setup to be able to use a wheelhouse directory, as well as a way of constructing the wheelhouse directory in the first place. Having that, we can then use the ‘Image Source’ feature of OpenShift to combine the two.
As it happens the S2I builder I have been using here has both these capabilities, so lets see how that can work.
So we already have our Wagtail CMS application running with the name ‘wagtail-demo-site’.
The next step is to create that wheelhouse. To do this we are going to use
oc new-build with the same S2I builder and Git repository as we used before, but we are going to set an environment variable to have the S2I builder create a wheelhouse instead of preparing the image for our application.
$ oc new-build warp0-debian8-python27~ --env WARPDRIVE_BUILD_TARGET=wheelhouse --name wagtail-wheelhouse -->-wheelhouse:latest" --> Creating resources with label build=wagtail-wheelhouse ... imagestream "wagtail-wheelhouse" created buildconfig "wagtail-wheelhouse" created --> Success Build configuration "wagtail-wheelhouse" created and build triggered. Run 'oc logs -f bc/wagtail-wheelhouse' to stream the build progress.
Since we have already downloaded the S2I builder when initially deploying the application, and because we aren’t deploying anything, just building an image, this should take about 5 minutes. This is equivalent to what we saw for installing the packages as part of the application build.
Right now the wheelhouse build and the application build are separate. The next step is to link these together so that the application build can use the by products of what is created by the wheelhouse build.
To do this we are going to edit the build configuration for the application. To see the current build configuration from the command line, you can run
oc get bc wagtail-demo-site -o yaml. We are only going to be concerned with a part of that configuration, so I am only quoting the
source and
strategy sections.
source: git: uri: secrets: [] type: Git strategy: sourceStrategy: from: kind: ImageStreamTag name: warp0-debian8-python27:latest namespace: image-source type: Source
The main change we are going to make is to enable the Image Source feature. To do this we are going to change the
source section. This can be done using
oc edit bc wagtail-demo-site. We are going to change the section to read:
source: git: uri: images: - from: kind: ImageStreamTag name: wagtail-wheelhouse:latest namespace: image-source paths: - destinationDir: .warpdrive/wheelhouse sourcePath: /opt/warpdrive/.warpdrive/wheelhouse/. secrets: [] type: Git
What we have added is the
images sub section. Here we have linked the application image to our wheelhouse image called
wagtail-wheelhouse. We have also under
paths described where the pre built files are located that we want to have copied from the wheelhouse image into our application image. These being in the directory
/opt/warpdrive/.warpdrive/wheelhouse/. and that we want them copied into the directory
.warpdrive/wheelhouse relative to our application code directory.
A second change we make, although this is actually optional, is that since we have pre-built all the packages we know are needed by ‘pip’, it need not actually bother checking with the Python Package Index (PyPi) at all. We can therefore say to trust that the package versions in the wheelhouse are exactly what we need. This we can do by setting an environment variable in the
sourceStrategy sub section.
strategy: sourceStrategy: env: - name: WARPDRIVE_PIP_NO_INDEX value: "1" from: kind: ImageStreamTag name: warp0-debian8-python27:latest namespace: image-source type: Source
Having made these changes we can now trigger a rebuild and see whether things have improved.
Tracking build times
As to tracking building times, the best visual way of doing that is by using the build view in the web interface of OpenShift. Using this, what we find as a our end result is the following.
Ignoring our initial build, which as explained will take longer due to needing to first download the S2I builder and distribute it to nodes, our build time for the application turned out to be a bit under 5 minutes.
We would have expected this built time to always be about that for every application code change we made, even though we hadn’t changed what packages needed to be installed.
When we introduced the wheelhouse image and linked our application build to it so that the pre built packages could be used, the build time for the application has now dropped down to about a minute and a half. Hardly enough time to go get a fresh cup of coffee.
Wheelhouse build time
We have successfully managed to offload the more time consuming parts of the application image build off to the wheelhouse image. Because the wheelhouse is only concerned with pre building any required Python packages it doesn’t need to be rebuilt every time an application change is made. You only need to trigger a rebuild of it when you want to change what packages are to be built, or what versions of the packages.
Having to rebuild the wheelhouse would therefore generally be a rare event. Even so, there is actually a way we can reduce how long it takes to be rebuilt as well. This is by using an optional feature of S2I builds called incremental builds.
With support for incremental builds already implemented in the special S2I builder for Python I am using, to enable incremental builds all we need to do is edit the build configuration for the wheelhouse and enable it. In this case we are going to amend the
sourceStrategy sub section and add the
incremental setting and give it the value
true.
strategy: sourceStrategy: env: - name: WARPDRIVE_BUILD_TARGET value: wheelhouse from: kind: ImageStreamTag name: warp0-debian8-python27:latest namespace: image-source incremental: true type: Source
By doing this, what will now happen is that when the wheelhouse is being rebuilt, a copy of the ‘wheelhouse’ directory of the prior build will first be copied over from the prior version of the wheelhouse image.
Similar with how the application build time was sped up, ‘pip’ will realise that it already has pre-built versions of the packages it is interested in and skip rebuilding them. It would only need to go out and download a package if it was a new package that had been added, or the version required had been changed.
The end result is that by using both the Image Source feature of builds and the incremental builds, we have not only reduced how long it takes to build our application image, we have reduced how long it would take to rebuild our wheelhouse image that contains our pre-built packages.
Experimental S2I builder
As indicated above, this has all been done using an experimental S2I Python builder, it is not the default S2I Python builder that comes with OpenShift. The main point of this post hasn’t been to promote this experimental builder, but to highlight the Image Source feature of builds in OpenShift and provide an example of how it might be used.
The experimental builder only exists at this point as a means for me personally to experiment with better ways of handling Python builds with OpenShift. What I learn from this is being fed back to the OpenShift developers so they can determine what direction the default S2I Python builder will take.
If you are interested in the experiments I am doing with my own S2I Python builder, and how that can fit into a broader system for making Python web application deployments easier, I would suggest keeping an eye on my personal blog site. I have recently written two blogs posts about some of my work that may be of interest.
- Building a better user experience for deploying Python web applications.
- Speeding up Docker build times for Python applications.
You can drop me any comments if you have feedback about that separate project via Twitter (@GrahamDumpleton). | https://blog.openshift.com/using-image-source-reduce-build-times/ | CC-MAIN-2017-13 | refinedweb | 2,448 | 57.71 |
Cyril. KiBi.
From 8c01332a34b9f6a66fa6720e52b06c192fa4c049 Mon Sep 17 00:00:00 2001 From: Peter Hutterer <peter.hutterer@who-t.net> Date: Fri, 3 Sep 2010 11:54:41 +1000 Subject: [PATCH] mi: handle DGA subtypes when determining the master device. The subtype in the DGA event is the core type and all ET_ event types (where applicable) are identical to the core types. Thus the switch statement below will work as required and assign the right master device. Fixes a crasher bug on keyboard devices with valuators. If a device sends a motion event while grabbed and a DGA client is active (but has not selected input through DGA), the valuator event is posted through the VCK and eventually results in a NULL-pointer dereference on dev->valuator. Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net> (cherry picked from commit 31ab9f8860848504df18a8be9d19b817b191e0df) (cherry picked from commit faecab3b13bbaecf4f35f49b833d1b79a5fb647d) Signed-off-by: Cyril Brulebois <kibi@debian.org> --- mi/mieq.c | 8 +++++++- 1 files changed, 7 insertions(+), 1 deletions(-) diff --git a/mi/mieq.c b/mi/mieq.c index 9b6d0c9..97f4afc 100644 --- a/mi/mieq.c +++ b/mi/mieq.c @@ -320,6 +320,7 @@ CopyGetMasterEvent(DeviceIntPtr sdev, { DeviceIntPtr mdev; int len = original->any.length; + int type = original->any.type; CHECKEVENT(original); @@ -327,7 +328,12 @@ CopyGetMasterEvent(DeviceIntPtr sdev, if (!sdev || !sdev->u.master) return NULL; - switch(original->any.type) +#if XFreeXDGA + if (type == ET_DGAEvent) + type = original->dga_event.subtype; +#endif + + switch(type) { case ET_KeyPress: case ET_KeyRelease: -- 1.7.2.3
Attachment:
signature.asc
Description: Digital signature | https://lists.debian.org/debian-x/2011/01/msg00237.html | CC-MAIN-2017-26 | refinedweb | 253 | 52.56 |
Issue
I am supposed to preprocess some PDFs in a folder. I am supposed to remove punctuation, make everything lower case and remove stopwords, and add some extra data from another CSV to it (as metadata). But I cannot even open them. All the googling does not help, since I do not understand the error message (none of the examples from other people helped, since they had different data types).
This is my code so far:
import PyPDF2 import re for k in range(1,312): # open the pdf file object = PyPDF2.PdfFileReader("/Users/n_n/Desktop/Digitalization/reserve" % (k))
and this is what happens
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [37], in <cell line: 4>() 2 import re 4 for k in range(1,312): 5 # open the pdf file ----> 6 object = PyPDF2.PdfFileReader("/Users/n_n/Desktop/Digitalization/reserve" % (k)) TypeError: not all arguments converted during string formatting
Solution
object = PyPDF2.PdfFileReader("/Users/n_n/Desktop/Digitalization/reserve%s" % str(k))
This Answer collected from stackoverflow, is licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0 | https://errorsfixing.com/opening-and-preprocessing-text-300-pdfs-in-python/ | CC-MAIN-2022-33 | refinedweb | 186 | 55.64 |
Internet Backbone DDOS "Largest Ever" 791
wontonenigma writes "It seems that yesterday the root servers of the internet were attacked in a massive distributed denial-of-service (DDoS) attack. I mean jeeze, only 4 or 5 out of 13 survived according to the WashPost. Check out the original Washington Post article here."
And... (Score:4, Funny)
Re:And... (Score:4, Insightful)
A subterranean bunker is designed to withstand nuclear wars, but what do you think would happen if the nuke was inside the bunker?
Re:And... (Score:5, Funny)
Ummm... a lot more people would be safe? That is, the people who didn't fit in the bunker...
Re:And... (Score:5, Funny)
I think everybody outside the bunker would be like "What the hell was that?!"
Re:And... (Score:4, Funny)
It's nice to know that you do not have to quit your [favorite online game] 'just because' a nuclear war breaks out.
Re:And... (Score:5, Funny)
They'll have to pry my nuclear weapon out of my cold dead fingers. A man has a right to protect himself. Would you want to participate in a nuclear war without a nuclear weapon? Bringing a knife to a nuclear war ain't smart.
Re:And... (Score:5, Funny)
Ask Slashdot: My bunker had a nuclear weapon which disassembled itself as designed. Should I repair the bunker the way it was? Or should I remodel to make use of the larger space which is now available? Is water cooling better than air chillers? What bunker mods are your favorites?
Re:And... (Score:5, Informative)
Article: "The Domain Name System (DNS), which converts complex Internet protocol addressing codes into the words and names that form e-mail and Web addresses, relies on the servers to tell computers around the world how to reach key Internet domains."
The "IP system" should have been fine. The DNS system, which has become an integral part of the "internet" is not decentralized as regular internet infrastructure is. Yes it is supposed to withstand a nuclear war, and yes, it would have. btw, the system worked yesterday. only 4 of 13 may have survided, but the system still ran.
We can have the internet without dns, but we cannot have dns without the internet
Re:And... (Score:5, Informative)
DNS is hierarchical, both in naming and in server implementation. Small ISPs cache their DNS from more major providers, up until the main A through M.ROOT-SERVERS.NET Internet servers. There is in fact one critical file, but it is mirrored to the 13 root servers, and domain look-ups are cached at the ISP level. I'm not surprised most Internet users were not affected; you wouldn't be affected if several large mail servers were DDoSed, would you?
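To see why caching shields the roots so well, here's a rough Python sketch of a TTL cache (hypothetical names and addresses, not real resolver code — real resolvers like BIND or dnscache do far more, but the caching idea is the same):

```python
import time

class DnsCache:
    """Toy TTL cache illustrating why most lookups never reach the roots."""

    def __init__(self):
        self._store = {}  # name -> (answer, expires_at)
        self.upstream_queries = 0

    def lookup(self, name, upstream, now=None):
        now = time.time() if now is None else now
        hit = self._store.get(name)
        if hit and hit[1] > now:
            return hit[0]  # served from cache; the roots never see this query
        answer, ttl = upstream(name)  # cache miss: ask the parent/root server
        self.upstream_queries += 1
        self._store[name] = (answer, now + ttl)
        return answer

def fake_root(name):
    # stand-in for a root/parent server; returns (answer, ttl_seconds)
    return ("192.0.2.1", 3600)

cache = DnsCache()
for _ in range(1000):
    cache.lookup("slashdot.org", fake_root, now=0)
print(cache.upstream_queries)  # 1 -- 999 of 1000 queries absorbed locally
```

One query out of a thousand actually hits the upstream server; that's why the root servers can be DDoSed for hours before most users notice anything.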
Re:And... (Score:4, Interesting)
Re:And... (Score:4, Informative)
Re:And... (Score:5, Informative)
That's what I do with BIND9.
Re:And... (Score:5, Informative)
Re:And... (Score:5, Informative)
You don't know what you are talking about. There are two different types of DNS servers: authoritative servers and recursive resolvers. djbdns comes with tinydns, an authoritative server and dnscache, a recursive resolver. The two are completely separate. BIND includes both in the same server, which is why many people are confused into thinking they are the same thing.
tinydns does not restrict queries to only certain IP addresses. However, it can return different information depending on the source address of the query. This is usually called split horizon DNS.
dnscache does have access control. You do not want just anyone to be able to query your recursive resolvers. With dnscache, you need to explicitly allow access [cr.yp.to] for IP's that can query it.
There are not risks in opening your content (authoritative) DNS servers to everyone. There are risks in opening up your resolvers to everyone.
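The source-IP check dnscache does is simple enough to sketch in a few lines of Python (the allowed prefixes here are hypothetical examples, loosely analogous to dnscache's per-IP access files):

```python
import ipaddress

# Prefixes allowed to use the recursive resolver (hypothetical values).
ALLOWED = [ipaddress.ip_network(p) for p in ("127.0.0.0/8", "10.0.0.0/8")]

def may_recurse(source_ip: str) -> bool:
    """Return True if this client may ask us to resolve arbitrary names."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED)

print(may_recurse("10.1.2.3"))     # True  -- a host inside the allowed network
print(may_recurse("203.0.113.9"))  # False -- some random outside host
```

Authoritative answers are cheap and public; recursion burns your CPU and bandwidth on behalf of the client, which is exactly why you gate it like this.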
Re:And... (Score:5, Interesting)
What my DNS server does is mandate an ACL (a list of IPs allowed to make recursive queries; this can be set to "all hosts on the internet" if desired) if recursion (talking to other DNS servers) is enabled. Recursion takes a lot more work than serving authoritative requests; it is best to limit access to this.
Unlike Dan, I feel that a DNS server should be both recursive and authoritative because it allows one to customize the resolution of certain hostnames. The idea is similar to
/etc/hosts, but also works with applications which ignore /etc/hosts and directly perform DNS queries. For example, I was able to continue to connect to macslash.com [slashdot.org] when a squatter bought the domain and changed its official IP; I simply set up a zone for macslash.com, and made MaraDNS both recursive and authoritative.
SMTP servers have IP restrictions at the application layer because this gives people some idea why they can't send email to a given host. A firewall restriction gives a vague "connection timed out" message in the bounce email message; application-level filtering allows the bounce message to say something like "You're from a known Spam-friendly ISP; go away".
- Sam
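The "local zone wins, everything else recurses" idea above can be sketched like this (the pinned address and the upstream stand-in are made-up example values, not what MaraDNS actually returned):

```python
# Hypothetical sketch of "authoritative override before recursion":
# names in LOCAL_ZONES resolve locally; everything else goes upstream.
LOCAL_ZONES = {"macslash.com": "198.51.100.7"}  # pinned address (example value)

def resolve(name, recurse):
    if name in LOCAL_ZONES:
        return LOCAL_ZONES[name]  # local answer wins, like /etc/hosts
    return recurse(name)          # otherwise, a normal recursive lookup

upstream = lambda name: "203.0.113.50"  # stand-in for real recursion
print(resolve("macslash.com", upstream))  # 198.51.100.7 (pinned locally)
print(resolve("example.org", upstream))   # 203.0.113.50 (recursed)
```

This is exactly what a combined recursive/authoritative server buys you: the squatter's record never reaches your clients because your own zone answers first.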
Re:And... (Score:5, Informative)
The root servers run BIND.
Re:And... (Score:4, Informative)
You're correct in that there are more than 13 DNS servers. I've got my own, which may or may not lie; it's these 13 that are "trusted"
... so to speak.
Now, when you're configuring your network stack (in fact, when you described to me the various DNS servers), what is the important part: the name or the IP number? The number, which helps to prove my point that IP is more important than DNS.
Re:And... (Score:4, Interesting)
Re:And... (Score:5, Informative)
The DNS system provides an "MX" resource-record for handling mail exchangers. Before the MX record, to send mail one would resolve the DNS using an A record, and connect to the resulting IP address. Nowadays, *@foobar.com doesn't always have to be handled by 140.186.139.224. In fact, there is a nice system for prioritizing mail handlers, built into DNS's MX records.
To answer your question, you can use IP addresses. But you'll be missing out on the prioritized DNS mail system. And don't worry about this being offtopic; the article isn't all that interesting anyway. I'd rather teach someone something interesting than write lame drivel about some "backbone DDoS" that's not even a backbone DDoS. Hey, it's about the structure of the Internet...
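The prioritization is simple: each MX record carries a preference value, and senders try exchangers in ascending preference order. A toy illustration (the records below are hypothetical):

```python
# Toy illustration of MX preference ordering (hypothetical records).
# A sender tries exchangers in ascending preference; lower = higher priority.
mx_records = [
    (20, "backup-mx.foobar.com"),
    (10, "mail.foobar.com"),
    (30, "last-resort.foobar.com"),
]

def delivery_order(records):
    return [host for pref, host in sorted(records)]

order = delivery_order(mx_records)
print(order[0])  # mail.foobar.com -- tried first; the others are fallbacks
```

If the primary exchanger is down, mail quietly falls over to the next preference instead of bouncing — which is the resilience you give up by hardcoding an IP address.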
Well... (Score:5, Informative)
You can use any physical layer with IP: Ethernet, a modem, a cell phone, WiFi, Bluetooth, FireWire, USB, power lines, etc. Similarly, you can use many other protocols with Ethernet or any other link, such as IPX, NetBEUI, AppleTalk, etc.
TCP, UDP, and ICMP are tied to IP and won't work with anything else.
Then there are higher-level protocols that sit on top of TCP or UDP. For example, DNS sits on UDP; FTP, telnet, Gnutella, and others sit on TCP. Interestingly, HTTP should work over other protocols, as long as you can establish a connection between a server and a host, and you have software that implements it on those links.
There's also IPv6, which is a newer version of IP.
One critical (Score:5, Funny)
Re:One critical (Score:5, Informative)
Re:One critical (Score:4, Funny)
OK, I'll send you my HOSTS.TXT file. But remember to update it every few weeks because the ARPAnet is growing faster than ever after the adoption of this new, fancy, so-called "TCP/IP" technology.
Re:One critical (Score:4, Funny)
"Hey xant,
I've attached the critical file you alluded to in your comment at
Keep it on your hard drive in case we all need it.
Heh. In case his hard drive goes, maybe a couple other people should get it from here [internic.net].
Re:And... (Score:5, Informative)
Not quite. (Score:4, Informative)
It is hierarchical with regard to namespace, but not so much with regard to lookups.
But not distributed enough (Score:4, Interesting)
Bullshit.
I saw obvious impacts trying to resolve DNS names during the time period of the attack (Delaware AT&T), despite having a caching name server on my local net, which queries AT&T's caching (primary?) servers.
ISPs should be responsible for providing DNS services to their customers in a timely and reliable fashion, querying their backbone providers in turn. Direct queries of the root servers by subnets should be verboten and expressly blocked by the ISP firewalls. If you need to resolve or refresh a name, query the ISP DNS and let their system handle the distribution. That way the root servers become repositories and key distribution points instead of failure points like yesterday.
I'm sure someone will object that they have the "right" to use whatever ports they want and that they don't want to rely on the stability of their ISP's servers, but we're talking about the infrastructure people! We have no more "right" to hit the root directly than to clamp a feed from the power company mains to the house or splice into the cable TV/broadband wiring.
If we don't protect and distribute infrastructure resources adequately, everyone is affected. And if your ISP has servers that are too unreliable for this type of filtered distribution to work, change providers!
Sure, let's just do that (Score:5, Insightful)
After all, 99.5% of people wouldn't notice, and who *really* cares about the remaining
I really loathe the growing trend towards firewalling everything that moves. Mail outbound, other than to the ISP's mail server. Napster. Ping packets. It's really annoying to the people who actually *do* want to use said functionality.
Internet "license"? (Score:5, Insightful)
You want full functionality? Sign off with your ISP for the appropriate connection service. If you pay for a small business link, you get the higher level access, and also take responsibility for the maintenance and security of your node. You get hacked, you participate in DDOS attacks, you should be financially responsible. If you really know your stuff to use the extra functionality, you should have no issue with taking responsibility for the risks incurred.
Don't want to pay more? Don't want to be responsible? Don't get the access.
There is no such thing as "rights" when your activities impact others. If you aren't willing to stand up and be responsible for your traffic (subnet/link/servers), then internet "society" has the responsibility to protect the rest of the community from you.
If the internet is truly as critical to business as we all hope it to be, it only stands to reason that people are going to have to get "licenses" to run full service nodes and subnets. You don't get to drive without a license to demonstrate that you at least have the education and skills to do so safely -- why would you expect to do otherwise on the 'net?
"License"? WTF are you talking about? (Score:5, Interesting)
Yes, I do. The same peer-to-peer functionality that hosts on the Internet have had forever. I got my fill of "Internet access", but not being an Internet peer when everyone was selling dialup shell accounts but not PPP.
Sign off with your ISP for the appropriate connection service.
So *I* should pay *more* for them to do *less* work?
That's as bad as the pay-extra-if-you-don't-want-your-number-listed phone company procedure.
If you pay for a small business link, you get the higher access level, and also take responsibility for the maintenance and security of your node.
I *already* take responsibility for the maintenance and security of the node. I don't need to pay any more money to take said responsibility.
You get hacked, you participate in DDoS attacks, you should be financially responsible.
There's no legal difference between a business and a home account from a financial responsibility point of view. What are you talking about?
If you really know your stuff to use the extra functionality, you should have no issue with taking responsibility for the risks incurred.
I *don't* have an issue with that. I just don't want to pay inflated business-class prices for standard peer-to-peer access.
Don't want to pay more?
Not particularly, no.
Don't want to be responsible?
Well, I'd kind of prefer to not be responsible (
Don't get the access.
Conclusion does not follow.
There are [sic] no such thing as "rights" when your activities impact others.
You seem to have misquoted me. I did not use the word "rights" anywhere in my original post, or claim that I had any such rights (legal or ethical) whatsoever. I did say that it was *annoying* to me.
If you aren't willing to stand up and be responsible for your traffic
Where, where, did you get the impression that I said this at all?
If the internet is truly as critical to business as we all hope it to be, it only stands to reason that people are going to have to get "licenses" to run full service nodes and subnets.
That has no bearing whatsoever on my argument. I also don't think that the potentially critical relationship to business can be said to imply that one needs a license. Electricity is quite critical to US industry (hell, it's physically dangerous), yet one doesn't need a license to utilize it.
You don't get to drive without a license to demonstrate that you at least have the education and skills to do so safely -- why would you expect to do otherwise on the 'net?
Still has no bearing on my argument.
Furthermore, I'd like to point out again that screwing up while driving can easily end up with many people dead. Even with the license system, cars are the leading cause of death of teens and young adults. I don't think you can compare that at all to the Internet, where maybe someone gets a Code Red infection. The Internet is important, but not knowing what you're doing on the Internet is wildly different (at least currently) from being an active threat to the lives of others.
Good point (Score:4, Insightful)
Amen.
The only reason we hear the words "web services" at *all* is because the bejeezus has been firewalled out of everything except web access at most companies. From a technical standpoint, "web services" are a massive step backwards... we had much superior systems before we had to run all communication through HTTP.
Web services are the ongoing rejection by developers and users of the blocking of services crossing the firewall. Eventually, everything will be tunneled over HTTP, and we'll be back where we started (same things accessible across the firewall), albeit with a somewhat less efficient system.
"The Internet treats censorship as damage, and routes around it."
-- John Gilmore
Re:And... (Score:5, Informative)
You make some good points, but the Domain Naming Server system is in fact largely distributed.
and then you say:
DNS is hierarchical, both in naming and in server implementation.
OK, hold on here. It's both hierarchical, implying something at the top that everything is based on, and at the same time distributed, implying that it's not dependent on some central source? Dude, you're contradicting yourself, and so you're wrong.
The truth is that the DNS system IS hierarchical. ICANN runs the root. They say what information goes in at the highest level: the dot-com, and dot-aero, and dot-useless and so on. That is why there is so much scrutiny on ICANN for operating fairly [icannwatch.org]. They are the people who decide how the DNS system will be run, because they are at the top of the hierarchy.
"But wait!" you say, "Aren't there 13 root servers? That's distributed right there." Yes, but you are only half right. The LOAD is distributed, not the information. So you're distributing the LOAD, but the info is exactly the same on each one. And that info is controlled by ICANN.
Oh and yes, you CAN get that one file of information that the root servers have. Really you can. Take a look for yourself. Log into [internic.net] and get root.zone.gz [internic.net]. If you look at that file, you'll see it's a list of all the servers for all the TLDS.
Re:And... (Score:5, Funny)
Re:And... (Score:4, Insightful)
hint: read the last paragraph of Cmdrtaco's last journal.
just run a local DNS cache; if something is unreachable, you have the cached entry to work off of. When changes are made, you get the update automatically.
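The local-cache idea above can be sketched in a few lines: entries stay usable until their TTL runs out, which is exactly why a short root outage goes unnoticed by most users. The names and addresses below are purely illustrative; real caching resolvers (BIND, dnscache) do this with far more care:

```python
import time

class TTLCache:
    """Toy DNS-style cache: answers stay valid until their TTL expires,
    mimicking how a local caching resolver rides out short root outages."""

    def __init__(self):
        self._store = {}  # name -> (value, expiry_timestamp)

    def put(self, name, value, ttl):
        self._store[name] = (value, time.time() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        value, expiry = entry
        if time.time() >= expiry:
            del self._store[name]  # expired: a fresh lookup would be needed
            return None
        return value

cache = TTLCache()
# A 2-day TTL (as TLD delegations commonly carry) outlasts an hour-long attack
cache.put("example.com", "192.0.2.1", ttl=172800)
assert cache.get("example.com") == "192.0.2.1"
```

The design point: as long as the TTL on a record exceeds the duration of the outage, the authoritative servers being unreachable is invisible to anyone holding a cached copy.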
DNS (Score:4, Funny)
We can have the internet without dns, but we cannot have dns without the internet
Why would we want DNS without the Internet?
Re:And... (Score:5, Funny)
Oh my, my face is burning off, and I'm thirsty like a mother grabber.. I hope the internet is still up, oh hey look, there goes a cockroach.
Yeah... (Score:4, Insightful)
Re:And... (Score:5, Funny)
Well, if it does happen, I hope they finish them off. Otherwise, the cockroaches may try to revive XML and web services based on an archaeological dig in a few hundred million years. Then again, let's punish the little bastards for infesting our kitchens. Let them suffer dumb tech bubbles and useless fads after all.
Re:And... (Score:5, Informative)
Actually that is an Internet myth. Look at the IETF RFCs; the first occurrence of the word 'nuclear' is several decades after the Internet was created.
The DNS cluster is designed with multiple levels of fault tolerance. In particular the fact that the DNS protocol causes records to be cached means that the DNS root could be switched off for up to a day before most people would even notice.
The root cluster is actually the easiest to do without. There are only 200 records. In extremis it would be possible to code them in by hand. Or, more realistically, we simply set up an alternative root and then use IP-level hacks to redirect the traffic. The root servers all have their own IP blocks at this stage, so it is quite feasible to have 200-odd root servers around the planet accessed via anycast.
The article does not mention which of the servers stayed up apart from the VeriSign servers. However, those people who were stating last week that the .org domain can be run on a couple of moderately specced servers had better think again. The bid put in by Paul Vixie would not have covered a quarter of his connectivity bill if he was going to ride out attacks like this one.
Re:And... (Score:5, Insightful)
1) This was not an attack on the surrounding world. This was an attack on the network itself, from inside the network itself.
2) The Internet was designed to be able to route around problems in a specific global region (nuclear war) by having each node or site have connections to multiple other nodes, creating a redundancy that would be almost impossible to get around (at worst case, you could try to route a region through someone's 56K if that region's main providers went down). This redundancy is nowhere near what it should be.
Also, the number of nodes is orders of magnitude greater than the original founders ever imagined. The number of sites when that was said was around 20-30, and it was fairly easy for most of them to connect to each other and form a semi-mesh network.
3) Dependence on centralized services. This attack was on one of the Internet's centralized services, the Alliance of 13 (DNS root servers). With a limited number of root DNS servers, it's easy to point somewhere and say "There's the weakness, let's hit it". The root DNS servers are a balance between complexity (having more than one root server takes time to propagate changes to all of them) and redundancy (having only one or a few servers makes an even more vulnerable point than the Alliance of 13).
Another major weakness is the continental backbones (for example, North America has the East Coast, West Coast, and transcontinental backbones) and their switching stations, like MAE East and West. Imagine if someone was able to take out all of MAE East in one shot, how crippled most of the Internet would be, for at least 12-36 hours while the alternate routing was put in place.
DDOS? (Score:4, Funny)
Watch Out! (Score:4, Funny)
Everyone! Run for your lives, Jackie's comin!
And for all you tech support people out there... (Score:4, Funny)
Re:And for all you tech support people out there.. (Score:3, Insightful)
Re:And for all you tech support people out there.. (Score:3, Interesting)
Re:And for all you tech support people out there.. (Score:3, Funny)
One would assume you still have to check periodically to see if the IP address from DNS is the same as your cached one. Either way, you are not the majority of Internet users, so for most everyone, DNS going dead == Internet going dead.
Determining whether or not kicking the majority of users off the Internet is a bad thing is left as an exercise to the reader.
Couldn't have been that bad... (Score:4, Insightful)
I'd say this just goes to show how reliable the root name servers are. I didn't notice any dns problems yesterday. In fact, I don't remember any root name server problems since the infamous alternic takeover.
Re:Couldn't have been that bad... (Score:4, Interesting)
Twenty minutes later, though, everything seemed fine, and the sites that wouldn't resolve earlier finally did. I wondered if something... erm.. unusual was going on, and it looks like there was...
As always, your mileage will undoubtedly vary...
Re:Couldn't have been that bad... (Score:4, Informative)
If you believe this article [com.com] on news.com [com.com], it looks more like a storm in a glass of water.
Quote: the peak of the attack saw the average reachability for the entire DNS network dropped only to 94 percent from its normal levels near 100 percent.
Re:Couldn't have been that bad... (Score:3, Informative)
And...? (Score:3, Funny)
You don't think the military puts any critical systems on the Internet, do you?
13 servers (Score:3, Funny)
Re:Well, I would guess... (Score:4, Informative)
-Kevin
Well there we go! (Score:4, Interesting)
Article:
"Despite the scale of the attack, which lasted about an hour, Internet users worldwide were largely unaffected, experts said."
All I can say is that if you think of this as a test, I'm happy it passed.
(Insert joke about Beowulf cluster of DDOS attacks / the servers ability to withstand the slashdot effect.)
Re:Well there we go! (Score:5, Interesting)
The attackers were idiots. They used ICMP echo requests (easily filterable, since the DNS servers don't _have_ to answer those) and quit after an hour. More publicity stunt than actual attempt to damage, IMNSHO.
I've been trying to publish a paper about exactly this (and how to redesign DNS to avoid the vulnerability) and I'm just pissed that they didn't tell me in advance so that I could do some measurements.
:)
Re:Well there we go! (Score:5, Interesting)
Before anybody gets their panties in a knot (Score:5, Interesting)
"when uunet or at&t takes many customers out for many hours, it's not a problem"
With something like the root nameservers, if it was an important attack, you would have noticed. I run an ISP and we had zero complaints, even from the Everquest whiners who complain at the drop of a hat about anything.
"when an attack happens that was generally not even perceived by the users, it's a major disaster"
i love the press
I would draw an opposite conclusion (Score:4, Interesting)
Fine, so the attack was unintelligent. What will happen when someone attacks MAJORLY and INTELLIGENTLY?
This gets my panties in a knot. A piddly attack brought down 65% of the root name servers! A good attack would have brought them all down! Doesn't that worry you?
Ah ha. (Score:4, Funny)
I'm going to beat the crap out of that 12-year-old as soon as I find him; he made me look like I had no skillzzz.
Re:Ah ha. (Score:5, Funny)
Caching saves the day... (Score:5, Informative)
Thus the hour long attack was not enough to meaningfully disrupt things, as most lookups would not require querying the root, unless you were asking for some oddball TLD like
Change the attack to be several hours, or a few days, and then cache entries start to expire and people are unable to look up new domain names. But that attack would be harder to sustain, as infected/compromised machines could be removed.
It is an interesting question who did this and how it was achieved. There seems to be a lot of scanning for open Windows shares (Yet Another Worm? Who knows) going on in the past couple of days, but there is no clue whether it is related.
The important caching (Score:4, Interesting)
For the most common 2LD names, any major ISP will have cached the addresses for them, and won't need to hit the .com server until the typical 1-week or 24-hour cache timeout period. If your nameserver is ns.bigisp.net, somebody there will have looked up google.com in the last 2 seconds, even though nobody at your ISP has looked up really-obscure-domain.com this week; but even that one may be in the cache because some spammer was out harvesting addresses. An obvious scaling/redundancy play for the root servers and for the major ISPs would be to have them cache full copies of the root-server domains to keep down the load and reduce dependency. It's not really that much data: 10 million domains averaging 30 characters for name and IP address is only half a CD-ROM. An interesting alternative trick would be for the Tier 1 ISPs to have some back-door access to root-level servers for recursive querying.
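The "half a CD-ROM" figure holds up as back-of-the-envelope arithmetic, using the numbers given above:

```python
# Back-of-the-envelope check: 10 million domains, ~30 bytes each
# for the name plus IP address, versus a standard CD-ROM.
domains = 10_000_000
bytes_per_entry = 30
total_mb = domains * bytes_per_entry / 1_000_000
cdrom_mb = 650  # nominal capacity of a standard CD-ROM

print(total_mb)  # 300.0
assert total_mb < cdrom_mb  # roughly half a CD-ROM, as claimed
```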
Preaching to the choir... (Score:3, Interesting)
I'd love to see a breakdown of what networks the attacks came from and what the OS distribution was... pie charts optional.
-B
Test run (Score:3, Insightful)
Maybe to cause a false sense of security, maybe to analyse how those crucial networks cope with DOS attacks so as to be more successful next time.
Whether these people were Bin Laden's boys or garden-variety hax0rs, don't get too comfortable. The worst is yet to come.
Sophisticated? (Score:5, Insightful)
I've never considered DDoS all that sophisticated myself. It seems to me more like "wow, a script kiddie got more systems under his control than usual" than "a great cracker is on the loose". Though I suppose if it were a great cracker, they could have proven themselves by predicting the attack.
DDOS Sophistication Varies (Score:4, Interesting)
But, yeah, some of the attacks aren't much different than using a loudspeaker to announce "Free Beer at Victim.com"
OMG OMG (Score:4, Funny)
If DNS ever goes down totally, (Score:3, Informative)
We'll have to rely on IP addresses, obviously, so start changing your bookmarks now!
instead of
And...? (Score:5, Insightful)
Indeed, no traffic slowdown, no more than usual support calls. The system works as expected, even under attack.
Worth a read: Caida DNS analysis [caida.org], and more specifically those graphs [caida.org]. It would be interesting to know which DNS sustained the attack, in regard to the graphs.
Looks worse than it is (Score:4, Insightful)
If you really want to, build your own root server [ipal.net]
I work for JPNIC (Score:4, Informative)
I'm at JPNIC & JPRS; we manage the Japanese servers here. The attack progressed through our networks and affected 4 of our secondary mapped servers (these servers are used as backups and are in no way real root servers). The servers were running a suite of Microsoft products (Windows NT 4.0) and a security firewall by Network Associates.
Here is a quick log review:
Oct20: The attackers probed our system around 2100 hours on Oct 20 (Japan). We saw a surge in traffic onto the honeypot (yes these backups are honeypots) systems right around then.
2238: We saw several different types of attacks on the system, starting with mundane XP-only attacks (these were NT boxes). We then saw tests for clocked IIS and various other things that didn't exist on our system.
2245: We saw the first BIND attacks; these attacks were very comprehensive. We can say they tried every single BIND exploit out there. But nothing was working.
Attacks ended right then.
Then on the 22nd they resumed (remember we are ahead)
22nd: A new type of attack resumed. The attack started with port 1 on the NT box; we have never seen this type of attack, and the port itself responding was very weird. Trouble started and alarms went off. We were checking but couldn't figure out what happened; then we saw a new BIND attack. The attack came in and removed some entries from the BIND database (we use Oracle to store our BIND data).
The following entries were added under ENTRI_KEY_WORLD_DATA
HACZBY : FADABOI
CORPZ : MVDOMIZN HELLO TO KOTARI ON UNDERNET
Several other things were changed or removed.
So far, we have no idea what the exact type of hack this was; we are still looking into it. The attacker calls himself "Fadaboi", and has been seen attacking other systems in the past.
We are now working hard with Network Solutions.
Thank you.
Re:I work for JPNIC (Score:5, Informative)
Re:I work for JPNIC (Score:5, Interesting)
CORPZ : MVDOMIZN HELLO TO KOTARI ON UNDERNET
Well, this shouldn't take the FBI long. A quick Google search shows that Undernet's Kotari owns the domain, which he's recently taken down but which still shows whois records.
"Most sophisticated attack ever" (Score:4, Funny)
And that's just a little fragment of it. I'm really worried about these guys taking over the internet!!
Re:"Most sophisticated attack ever" (Score:5, Funny)
Re:I work for JPNIC (Score:5, Funny)
Unbreakable.
Running NT and BIND? (Score:5, Interesting)
It's really easy to set up a system which dumps your SQL database out to a TinyDNS file. TinyDNS [cr.yp.to] is provably secure software. I would expect that you would use it on the root servers, since it's designed to work at very high levels of output/uptime, and to be attack-resistant to the point of being attack-proof.
Say what you will about D. J. Bernstein [cr.yp.to], he does have a very capable DNS solution [cr.yp.to] available.
It certainly does provide that capability. (Score:4, Informative)
For dynamically updating zones, I use a small Perl DBI script which dumps zones from the DB into a directory. All files in the directory are sorted (via sort) into a main text file, which is hashed into data.cdb. I also have a big text file from the other DNS server scp'd over and included in the hash. The entire system is dynamic, with every important entry controllable from within an easily backed-up (and restored) SQL server. Adding things like DynDNS to this setup would be trivial (all I'd need is another table for actual accounts, which would let people modify their own zone files).
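The dump step of an SQL-to-tinydns pipeline like the one described above can be sketched briefly. This emits A records in tinydns-data's `+fqdn:ip:ttl` line format; the hostnames and addresses are made up, and a real pipeline would then run `tinydns-data` to compile the text file into `data.cdb`:

```python
def tinydns_a_lines(records, ttl=86400):
    """Emit sorted tinydns-data '+fqdn:ip:ttl' lines (A records) from
    (name, ip) pairs, as a SQL dump script might produce them before
    the file is compiled into data.cdb by tinydns-data."""
    return sorted(f"+{name}:{ip}:{ttl}" for name, ip in records)

# Rows as they might come back from a DB query (illustrative data)
rows = [("www.example.com", "192.0.2.10"),
        ("mail.example.com", "192.0.2.25")]
for line in tinydns_a_lines(rows):
    print(line)
# +mail.example.com:192.0.2.25:86400
# +www.example.com:192.0.2.10:86400
```

Sorting the lines mirrors the `sort` step in the workflow above and keeps dumps diffable between runs.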
Best of all, because there is an order of magnitude less code running, TinyDNS is a lot easier to inspect for correctness. You can spend a couple of evenings reading over all the code for the package (even if it's not the best looking C code in the world), and really understand it.
In other news.... (Score:4, Funny)
HA! Jumping through their own ass. (Score:3, Funny)
A certain mil/gov organization I consult with was jumping through their own asses worried about this. The funny thing is, ummm... NOTHING CHANGED! We experienced NOTHING. I think they wanted us to do something... ANYTHING.
You know... next time this happens, I'm setting up my own root servers... errr... wait...
Can you say "SPIKE"? (Score:4, Informative)
My Brain Hurts (Score:5, Funny)
And I suppose the person who wrote this article would consider arithmetic a complex system of digits and symbols.
mrtg charts (Score:4, Informative)
Root-servers.net [root-servers.net]
The legendary cymru.com data. [cymru.com]
I haven't looked yet but LINX mrtg charts might show something interesting. [linx.net]
Of course, even if someone could knock all the root servers over, the net as we know it wouldn't stop working instantly. That's what the time to live value is for
:)
Traffic Stats (Score:5, Informative)
The stats for the h.root servers are available for the time period [root-servers.org] of the attack. Seems as though the h servers were taking in close to 94Mbits/second for a while.
More links to server stats can be found at Root Servers.org [root-servers.org] and some background is available at ICANNWatch [icannwatch.org].
Thoughts from a DNS implementor (Score:5, Insightful)
I only noticed it because I use my own DNS server [maradns.org] to resolve requests; and pay close attention whenever I see any problems resolving host names (there is the possibility of it being a bug with my software).
The person who orchestrated this attack is not very familiar with DNS. Attacking the root name servers is not very effective; all the root servers do is refer people to the .com, .org, or other TLD (top-level domain) name servers. Most DNS servers remember the list of name servers for a given TLD for a period of two days, and do not need to contact the root servers to resolve those names. While some lesser-used country codes may have had slower resolution times, an attack on the root servers which lasts only an hour cannot even be felt by the average end user.
In the case of MaraDNS, if a DOS (denial of service) is happening against the root servers, MaraDNS will be able to resolve names (albeit more slowly for lesser-used TLDs) until every single root server is successfully DOS'd.
- Sam
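The resolver behavior described in the comment above (keep answering until every single server is down) boils down to a failover loop over the server list. A toy sketch with a simulated network; the server names and the returned address are placeholders, and real resolvers additionally track round-trip times and reorder their server lists:

```python
def resolve_with_failover(name, servers, query_fn):
    """Try each server in turn and return the first answer.
    Only fails once *every* server has failed, mirroring how a
    resolver survives a partial DoS of the root constellation."""
    last_error = None
    for server in servers:
        try:
            return query_fn(server, name)
        except OSError as exc:  # timeout / unreachable under attack
            last_error = exc
    raise last_error or OSError("no servers configured")

def fake_query(server, name):
    # Simulated network: the first two servers are "down" under attack.
    if server in ("a.root", "b.root"):
        raise OSError(f"{server} timed out")
    return "198.41.0.4"  # placeholder answer, not a real referral

answer = resolve_with_failover("example.com",
                               ["a.root", "b.root", "c.root"],
                               fake_query)
assert answer == "198.41.0.4"
```

Combined with caching, this is why an attack has to take out all thirteen servers, and keep them down past cache expiry, before end users notice.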
Follow-up Washington Post article... (Score:5, Funny)
Followup article, after slashdot story, was: "Attack on Washington Post Called Largest Ever".
Ah.. behold the mighty power of
What's the difference between a DoS attack & /. (Score:5, Funny)
--Joey
Re:al qaeda? (Score:5, Funny)
I was using the computer in Afghanistan to surf pr0n.
Re:oh my... (Score:4, Interesting)
And *nix systems are infinitely more scriptable, so I think it's more likely those were used for the attack (if I remember correctly, unsecured Linux boxes were used for the big DDoS attacks on Yahoo and eBay etc. some years ago).
Re:That's why! (Score:4, Funny)
(It can't just have been me!)
graspee
Patent Infringement (Score:5, Funny)
Re:Reminds me... (Score:3, Funny)
Re:Why attack (Score:5, Informative)
I am not an expert, but surely these servers connect to the net through some sort of router/hub. The servers are made to handle a lot of traffic, but what about the connecting hardware? If the routers were attacked directly, wouldn't the DDoS attack still be successful without touching or alerting the DNS servers themselves?
It's an interesting idea, but it doesn't quite work like that. The routers we're talking about here (I imagine that most of the root servers are on 100BT or Gigabit Ethernet LANs which then plug into one or more DS-3s [45 Mbps] or, more likely, OC-3s [155 Mbps]) are designed to handle many, many times more traffic than the servers are. Your average Cisco 7xxx or 12xxx router is built to handle far more traffic than any given server might see. Think about it: you generally have many servers being serviced by one router, not the other way around. Additionally, each root server is most likely connected to multiple routers (say, they're hosted at an ISP with three DS-3s to different providers and each DS-3 is plugged into a different Cisco 7500).
Also I doubt that the routers are setup to recognize any kind of attack as they are just relays between the net and the server. Possibly the attack could go on for quite some time before any one realized what was going on.
Actually, it's the other way around. Most good routers are designed to have the ability (if you enable it) to look inside the packets that pass through them and filter out "bad" ones based on various criteria. Thus, routers are actually perfectly suited to stopping attacks like this, while servers are expected to burn their CPU cycles doing other things (yes, servers can do this sort of filtering, but they generally have something more important to do). The only real problem is that it's often very difficult to tell the "good" packets from the "bad." After all, how do you distinguish automatically between a distributed flood of malicious HTTP requests and a Slashdotting? You get the idea.
WD40 (Score:5, Interesting)
Hmmm, last I looked at the Cisco feature set (or the like from Foundry and Nortel and what have you), it was a challenge to put in rules that
a) didn't take out significant "good" traffic, and
b) did take out significant "bad" traffic.
I agree that rate-limiting ICMP traffic is an appropriate answer, especially in light of this particular attack, but I'm appalled by the number of illiterate dorks who copy snippets titled "how to block all ICMP" from a textbook into their firewall without the slightest understanding of why ICMP was implemented in the first place.
I hate to think of what could happen if the 31334 hackers really start mixing attacks.
I positively _love_ WD-40, but I will not apply it to reduce the squeaking of my car's brakes. Too many people use the Internet equivalent of WD-40 on their network brakes.
Re:Where's the Inter in the 'Net? (Score:5, Insightful)
The Internet's roots have nothing to do with democracy. Quite the opposite, your military wanted a communications network that could survive a nuclear holocaust so that it would be the first to rebuild and conquer the world when the evil reds launched the first nuke.
Most of the TLDs are in the USA because the DNS system was created in the USA, and was largely hosted by US providers. It's too much trouble to move them, and of limited benefit. If they ever decide to add new ones, it's likely that they'll put at least one in Japan, and probably a couple in Europe.
Even so, though, the main reason for their dispersal is to survive a nuclear attack that takes out one or two. I don't know if you've looked at a map recently, but the USA is big. It's not like all 13 of the TLD servers are located in a trailer in rural Kentucky. You'd have to carpet bomb the entire USA to be sure of taking out all 13 of them, and frankly, if somebody had the resources to turn the entire country into a self-illuminating glass-floored parking lot, the Internet would be the least of my worries.
Re:undisclosed location (Score:5, Interesting)
Disclaimer: I work for VeriSign. This is a personal opinion, not company policy. The details of the disaster-recovery scheme are of course confidential. However, I can tell people that we did think about these issues during the design. We have always known that people might think the DNS was a single physical point of failure for the Internet. That is why we designed it so that it is not.
There are multiple locations. The 'A root' is NOT a single machine. There are actually multiple instances of the A root with multiple levels of hotswap capability.
Incidentally it is no accident that the VeriSign root servers stayed up. They were designed to handle loads way beyond normal load. The ATLAS cluster is reported to handle 6 billion transactions a day with a capacity very substantially in excess of that.
Even if all the A roots were physically destroyed, the roots could be reconstructed at other locations. Basically all that is needed is a site with a very fast Internet connection. In the case of a major terrorist attack, AOL or UUNet or even an ARPAnet node could be commandeered. The root could even be moved out of the country entirely; British Telecom is a VeriSign affiliate, and there are also several other affiliates with nuclear-hardened bunkers.
Most Americans have only been thinking about terrorism since 9-11. VeriSign security was largely designed by people who thought about terrorism professionally, unless of course they were in charge of securing nuclear warheads.
All a terrorist could do is to kill a lot of people, there is absolutely no single point of failure. Even if the entire constellation is destroyed it would result in an outage of no more than a day given the resources that would become available in the aftermath.
Re:Punishment options. (Score:5, Insightful)
Seriously. How do you plan on enforcing this? Not only is it a huge expenditure of resources to track down the number of computers used in the attacks, to track down their IP addies, to obtain the needed court orders to obtain their ISP's logs, the resources to parse those logs to find out who was logged on, and *then* go about prosecuting the offenders, what would it accomplish?
If Code Red taught us anything, it's that the dumb won't change a thing about the way they work, regardless of how much the internet community ridicules them. It's also completely nuts to punish the ISPs for this... where does it stop? I'm pretty sure that some AOL clients were responsible (and while I wouldn't complain about no AOL'ers for a while, I bet they would). How about people who buy their access directly from UUNet? Gonna block out UUNet for a month?
Even if you could implement that punishment of the ISPs, it wouldn't accomplish much. It wouldn't hurt me at all if I was blocked from direct access to the TLD servers, because inside my network I'm running a mirror. My ISP is running a mirror. I know of a dozen open DNS servers on the internet. I'm betting I could find at least one that wouldn't block me.
Seriously, though. It's great to say we should punish these people for not securing their systems, but you have to understand just how many computers would be needed for this attack. The TLD servers aren't running on 64k ISDN: they're on OC48 at least. There's 13 of them. The kind of bandwidth needed to adequately DoS them is obscene. You either do it the dumb way and use 50 computers running on the fastest connection available, or you use *hundreds* of computers, possibly thousands or tens of thousands.
Looks great on paper, but realistically there's not much point in ranting like this. Besides... if it wasn't for the article, I'm betting that most of the world wouldn't have noticed.
Lots of people didn't notice (Score:4, Informative)
That's the scary part.... (Score:4, Interesting)
here's one; (Score:5, Funny)
4711 Mission Rd. - Westwood, KS (sub. of Kansas City), Tel: (913) 432-5678
Good enough for a lot of professional athletes, and they straightened me up after my car wreck.
But I don't think they can fix uunet. | http://slashdot.org/story/02/10/22/2332233/internet-backbone-ddos-largest-ever | CC-MAIN-2014-15 | refinedweb | 7,463 | 72.46 |
>
Alright, so there's plenty of information on how to do this with 3D animations, but I'm stumped with Unity 4 on how to do this in 2D. I have a very short animation, it's close to a second long, what I want to do is have it play once, and then delete itself. I tried doing:
if (!animation.isPlaying) {
Destroy (gameObject);
}
However, this gives me an error saying there is not animation connected. The only other way is to calculate how long the animation is in total, and then set a timer to delete after it's done, however that gives me little flexibility. What am I supposed to do, how would I achieve this?
I think your first approach is fine as long as the animation is playing when you start the scene. But you shouldn't get the error you got if things are setup right. You have a game object with an Animation component on it with at least 1 Animation in the Animations list on that component, in the Inspector. Then you attach the script above to that same game object. It should work. Is this not how you set your scene up? Could you describe, by Animation, do you mean an Animation Component?
@supernat
I use this way of doing it, if I try doing the way you say it gives me a warning about the animation having to be in Legacy.
Well you said you were using animation, not animator. I was kind of wondering, should have asked you. :)
Answer by firestoke
·
Apr 06, 2016 at 09:20 AM
Here is my solution: I create a script named "AnimationAutoDestroy", and add this component to the animator game object which you want to auto-destroy while animation is finished. You can adjust the delay time as you want.
using UnityEngine;
using System.Collections;
public class AnimationAutoDestroy : MonoBehaviour {
public float delay = 0f;
// Use this for initialization
void Start () {
Destroy (gameObject, this.GetComponent<Animator>().GetCurrentAnimatorStateInfo(0).length + delay);
}
}
great solution
Answer by Tyyy1997
·
Mar 23, 2014 at 09:49 PM
Fixed it, this code did the trick:
using UnityEngine;
using System.Collections;
public class ExplosionHandler : MonoBehaviour {
private IEnumerator KillOnAnimationEnd() {
yield return new WaitForSeconds (0.167f);
Destroy (gameObject);
}
void Update () {
StartCoroutine (KillOnAnimationEnd ());
}
}
I go the time from the inspector view when I selected the explosion animation in my assets, and then set it to not loop, work's like a charm.
Why is this a better answer than @firestoke's below? (curious)
it's not. unless you like to get hit over the head.
Answer by AssmeisTer
·
Oct 12, 2014 at 04:22 AM
A better solution that takes into account the length of the animation clip.
Code (csharp):
private int currentHealth;
public void TakeDamage(int damage)
{
currentHealth -= damage;
if (currentHealth <= 0)
{
StartCoroutine(Die());
}
}
private IEnumerator Die()
{
PlayAnimation(GlobalSettings.animDeath1, WrapeMode.ClampForever);
yield return new WaitForSeconds(gameObject, GlobalSettings.animDeath1.length);
Destroy(gameObject);
}
The code you have has StartCoroutine in Update which isn't good because it will kick off a new coroutine every frame.
Answer by astracat111
·
Jul 07, 2018 at 09:10 PM
Just a disclaimer. myAnimatorReferenceName.GetCurrentAnimatorStateInfo(0).length gets the SECONDS, so if you need the frames like I did just do length * 60f.
P.S - I'm using timers inside of state machines instead of IEnumer.
2D Animation does not start
0
Answers
After animation completion transition to default animation
1
Answer
Animation stops rotation
0
Answers
How to start an animation by the exit animation frame
0
Answers
OnTriggerStay2D Breaks When Adding AnimationController
1
Answer | https://answers.unity.com/questions/670860/delete-object-after-animation-2d.html?sort=oldest | CC-MAIN-2019-26 | refinedweb | 595 | 54.83 |
CodePlexProject Hosting for Open Source Software
is there a way to add standard asp controls in the xslt layout such as
<asp:updatepanel>
<asp:button> etc.....
or even registering your own controls
<icg:textbox> etc....
or can this document type be removed <xmlns:
You can load in ASP.NET Controls by using the C1 Function features - for instance, if your XSLT emit the following in its result, a User Control will be loaded:
<f:function xmlns:
<f:param
</f:function>
If you want to use standard asp.net controls and make them work together you should look at using Masterpage based layouting instead of XSLT. Webform controls don't do much by themselves and its first during postback and interaction with other controls in
the control hierarchy them become truly powerful. With XSLT based layout every function is a self contained unit that is executed without knowledge of each other. This is also the behavior you'll see with ie. MVC. This has its advantages due
to its simplicity but can be difficult get your head around if you're used to the rich environment that asp.net Webforms gives you with objects, control hierarchies and events.
@burningice You can do layout using XSLT and use asp.net controls at the same time. If needed you can have the controls reference each other etc., they all end up in the same control tree. There is no need to use asp.net master pages to use controls.
sure you can have several controls inside the same usercontrol referencing each other but thats also kinda where the fun stuff stops.
Say you wanted this
Lots of text
<f:function xmlns:
<f:param
</f:function>
Lots of more text
<f:function xmlns:
<f:param
</f:function>
and you want some code in the first control to reference the second control... you don't define any server side ids so what would you do... a lot of Parent.Child[2].Parent.Child[1] perhaps, but thats just not feasible. Its hard to argue around that
just as every other C1 function is a self-contained unit, that the same applies for the Composite.AspNet.LoadUserControl function. It's not simple to reference a textbox in one usercontrol from the button submit handler in another.
You can't
There are some rather large inaccuracies the claims above - we got a lot more asp.net webforms conformance than that.
> It's not simple to reference a textbox in one usercontrol from the button submit handler in another
False - your controls end up en the same control tree, below the same Page and you can define server side ID's all you wish - if you need to dig out a control from somewhere on the page, you can do this pretty easily. Code sample below.
> You can't ... reference Page.Header since its not defined as a server control
False - do this in a control, and you change the pages title: this.Page.Header.Title = "Hello World";
We create a Header element based on the <head /> that html, xslt'etc. generate - this Header can then be changed/used by asp.net.
> You can't ... do (Page as myBaseType).SomeBaseProperty since you don't have access to setting the type you want a template to inherit from
True - if ASP.NET Page sub classing is a requirement, Composite C1 wont help.
> You cant't ... use Ajax Page Methods since they need to be defined on the page which you don't have access to
False - if you need the <asp:ScriptManager /> on a page, include it. You can do so via a Control just fine.
> You cant't ... do template inheritance like with masterpages
False - you can do all kinds of crazy layout inheritance with the C1 Function system, and for instance XSLT. The XSLT starter site do this a lot.
True - if you specifically mean asp.net master page inheritance.
> You cant't ... use any of the Webform Menu Controls since they rely on asp.net Sitemap
False - you can add any control you want to. You would need to register a sitemap provider, but saying you can not have webforms menus is not true.
Video sample
Here is a video showing the bits below running:
Code sample
A few of the above points is shown in this code sample. It is to user controls you can place where you want, and they will do AJAX, talk to (find) each other and update the page Header. To install it:
A.ascx
<%@ Control <asp:TextBox <asp:Button <asp:ScriptManager <hr /> <asp:UpdatePanel <ContentTemplate> <%= DateTime.Now %>.<%= DateTime.Now.Millisecond %> <asp:Button </ContentTemplate> </asp:UpdatePanel></form>
A.ascx.cs
using System;using System.Web.UI;using System.Web.UI.WebControls;public partial class ControlA : System.Web.UI.UserControl{ protected void AButton_Click(object sender, EventArgs e) { // Accessing the page head this.Page.Header.Title = AText.Text; // Grabbing a label from another control - we expect to find it or explode Label bLabel = (Label)FindControlRecursive(this.Page,"BLabel"); bLabel.Text = "A's text box says " + AText.Text; } private Control FindControlRecursive(Control root, string id) { if (root.ID == id) return root; foreach (Control c in root.Controls) { Control t = FindControlRecursive(c, id); if (t != null) return t; } return null; }}
B.ascx
<%@ Control
B.ascx.cs
public partial class ControlB : System.Web.UI.UserControl{}
Embedding a user control on a page, template or xslt output:
<f:function <f:param </f:function>
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://c1cms.codeplex.com/discussions/260502 | CC-MAIN-2017-22 | refinedweb | 942 | 66.23 |
:
- An event declaration.
- A method named OnEventName that raises the event.
The event data class and the event delegate class might already be defined in the .NET Framework class library or in a third-party class library. In that case, you do not have to define those classes.
If you are not familiar with the delegate model for events in the .NET Framework, see Events and Delegates.
To provide event functionality
- Define a class that provides data for the event. This class must derive from System.EventArgs, which is the base class for event data. An example follows.
Note This step is not needed if an event data class already exists for the event or if there is no data associated with your event. If there is no event data, use the base class System.EventArgs.
public class AlarmEventArgs : EventArgs { private readonly int nrings = 0; private readonly bool snoozePressed = false; //Properties. public string AlarmText { ... } public int NumRings { ... } public bool SnoozePressed{ ... } ... } [Visual Basic] Public Class AlarmEventArgs Inherits EventArgs Private nrings As Integer = 0 Private _snoozePressed As Boolean = False 'Properties. Public ReadOnly Property AlarmText() As String ... End Property Public ReadOnly Property NumRings() As Integer ... End Property Public ReadOnly Property SnoozePressed() As Boolean ... End Property ... End Class
- Declare a delegate for the event, as in the following example.
Note You do not have to declare a custom delegate if the event does not generate data. In that case, use the base event handler System.ComponentModel.EventHandler.
- Define a public event member in your class using the event keyword whose type is an event delegate, as in the following example.
In the
AlarmClockclass the
Alarmevent is a delegate of type
AlarmEventHandler. When the compiler encounters an event keyword, it creates a private member such as
and the two public methods
add_Alarmand
remove_Alarm. These methods are event hooks that allow delegates to be combined or removed from the event delegate
al. The details are hidden from the programmer.
Note In languages other than C# and Visual Basic .NET, the compiler might not automatically generate the code corresponding to an event member, and you might have to explicitly define the event hooks and the private delegate field.
- Provide a protected method in your class that raises the event. This method must be named OnEventName. The OnEventName method raises the event by invoking the delegates. The code example at the end of this topic shows an implementation of OnEventName.
Note The protected OnEventName method also allows derived classes to override the event without attaching a delegate to it. A derived class must always call the OnEventName method of the base class to ensure that registered delegates receive the event.
The following code fragment puts together all of the elements discussed in this section. For a complete sample that implements and uses events, see Event Sample.
//Step 1. Class that defines data for the event // public class AlarmEventArgs : EventArgs { private readonly bool snoozePressed = false; private readonly int nrings = 0; // Constructor. public AlarmEventArgs(bool snoozePressed, int nrings) {...} // Properties. public int NumRings{ get { return nrings;}} public bool SnoozePressed { get { return snoozePressed;}} public string AlarmText { get {...}} } //Step 2. Delegate declaration. // public delegate void AlarmEventHandler(object sender, AlarmEventArgs e); // Class definition. // public class AlarmClock { //Step 3. The Alarm event is defined using the event keyword. //The type of Alarm is AlarmEventHandler. public event AlarmEventHandler Alarm; // //Step 4. The protected OnAlarm method raises the event by invoking //the delegates. The sender is always this, the current instance of //the class. // protected virtual void OnAlarm(AlarmEventArgs e) { if (Alarm != null) { //Invokes the delegates. Alarm(this, e); } } } [Visual Basic] 'Step 1. Class that defines data for the event ' Public Class AlarmEventArgs Inherits EventArgs Private _snoozePressed As Boolean = False Private nrings As Integer = 0 ' Constructor. Public Sub New(snoozePressed As Boolean, nrings As Integer) ... End Sub ' Properties. Public ReadOnly Property NumRings() As Integer Get Return nrings End Get End Property Public ReadOnly Property SnoozePressed() As Boolean Get Return _snoozePressed End Get End Property Public ReadOnly Property AlarmText() As String Get ... End Get End Property End Class 'Step 2. Delegate declaration. ' Public Delegate Sub AlarmEventHandler(sender As Object, e As AlarmEventArgs) ' Class definition. ' Public Class AlarmClock 'Step 3. The Alarm event is defined using the event keyword. 'The type of Alarm is AlarmEventHandler. Public Event Alarm As AlarmEventHandler ' 'Step 4. The protected OnAlarm method raises the event by invoking 'the delegates. The sender is always this, the current instance of 'the class. 
' Protected Overridable Sub OnAlarm(e As AlarmEventArgs) 'Invokes the delegates. RaiseEvent Alarm(Me, e) End Sub End Class
See Also
Events and Delegates | Event Sample | http://msdn.microsoft.com/en-us/library/wkzf914z(d=printer,v=vs.71).aspx | CC-MAIN-2014-41 | refinedweb | 760 | 60.01 |
If you're either a wikiHow Admin or New Article Booster, you have the ability to retitle a wikiHow article. This article will explain the process of changing the title of a wikiHow article.
If you are not currently a wikiHow Administrator or New Article Booster, please see the instructions in How to Request a Title Change for a wikiHow Article to learn how to correctly apply a Title Tag template to an article.
StepsEdit
Part One of Three:
PreparationEdit
- 1Visit the article that needs the title change.
- If you're looking for articles that need their titles changed, visit Category:Title. Currently, there are 10 title changes to review.
- 2Check the discussion page for any comments regarding the title. There may be alternative title suggestions or a reason for the current title.
- 3Decide on a new title for the article (per the Title Policy, if you've chosen to retitle it. If there is a suggestion present on the Discussion page or in the {{title}} template, evaluate the suggested title for accuracy and economy of phrasing.
Advertisement
- Not all suggested titles are the better than the existing title. If the current title is better than the suggested one, remove the {{title}} template and explain your reasoning on the discussion page or edit summary.
- You can skip the consensus building portion of the title change process in several instances. For example, if the article is fairly new, and the author just made a simple typo when writing it, or if you are not changing anything other than simple grammar issues such as spelling, punctuation, first-person usage, or capitalization.
- The more well-established the article (determined by how old it is and how many page views it has), the more careful you need to be about changing the title. For example, you should be extra cautious about changing a title if an article has a high view count and has been read by a wide audience over a good period of time.
Part Two of Three:
Changing the TitleEdit
- 1Return to the article page. You'll need to be on the actual article page, not the discussion page.
- 2Access the "Move" page. Hover over the "Admin" tab and click Move.
- 3Change the namespace if required. Click the dropdown menu next to "To new title".
- 4Make the changes to the title. Tweak/write the new title in the "To new title" box. Always make sure to check "Move subpages of talk page (up to 100)".
- 5Decide whether to keep the redirect. Leaving a redirect behind should only be done if the old title is interchangeable (means the exact same thing) with the new title, per the Merge Policy.
- 6Provide a reason for the title change (optional).
- 7Move the page. Click the green Move page button.
- 8Check the confirmation screen to see that the page was successfully moved. Subsequently, complete the finishing steps listed below.Advertisement
Part Three of Three:
Finishing StepsEdit
- 1Change all incoming links to the old title. To do this, click on link to the old title from the confirmation screen.
- Choose "What Links Here" from the toolbox drop-down menu.
- Go through each link, find any references to the old title, and replace them with the new title that's been agreed upon. This isn't mandatory, since the redirects will keep all the old links fresh. However in the cases of a radical name change it will reduce reader confusion.
- In each of these pages, use an edit summary of something similar to "updated link to reflect moved page" or just "updating link" to explain why you are completing this action.
- If a broken redirect created by your move should be deleted rather than updated, per the wikiHow:Merge Policy (in other words, if the two topics are both viable but don't mean the exact same thing), you can send these links to an Admin for deletion of the redirect.
- 2Check the renamed article for the title template and remove it. Open the renamed article for editing and remove the {{title}} tag from the article (usually found at the very beginning).
Advertisement
- Include a simple edit summary (such as "title changed"), and publish your changes.
Community Q&A
Search
- QuestionDo I have to change all incoming links to the old title? There are too many!Bat 🦇Top Answerer
- No, you do not. The Redirects Bot should take care of it in due course.
Ask a Question
200 characters left
Include your email address to get a message when this question is answered.Submit
Advertisement
TipsEdit
- The discussion and waiting period are optional and can be skipped if both of the following apply:
- The wikiHow article is relatively new.
- Your reason for changing the title is to fix spelling, grammar, punctuation, first person usage, or capitalization, all of which are straightforward and usually not debated.
- The more well-established the article (determined by how old it is and how many page views it has), the more careful you need to be about changing the title. Consider that if the article has a high view count and thus has been read significantly for a good period of time, the existing title must already have decent merit in attracting readers.
- If you discover that you have a knack for finding good titles, regularly browse title change requests and new pages to help.
- Always return to checking the Double Redirects page, to make sure no other double redirects created need the new title of the redirect changed too.
Advertisement
WarningsEdit
- Resist the urge to just manually create a new article and move the items over to it, even if you're a non-booster or admin. It's improper to do, creates more problems in the long run (a big no-no), loses the the prior history of that article, and creates duplicate titles. Only use one of these approved methods above.
- Need a title changed immediately? Ask a booster or admin to do so. Most titles only take a few minutes to change.
Advertisement
About This Article
Did this article help you?
Yes
No
Advertisement | https://m.wikihow.com/Change-the-Title-of-a-wikiHow-Article | CC-MAIN-2019-30 | refinedweb | 1,016 | 63.7 |
*Windows 10 RTM update: you can now use the Visual Studio extension for AllJoyn and Windows 10 to generate AllJoyn code from Windows 10 interfaces. Many of the steps listed in this article are no longer necessary. Please see the following blog post for more information..
This blog post is a companion to the AllJoyn session presented at //build/ 2015 where you can learn AllJoyn fundamentals, understand how Windows 10 incorporates AllJoyn, and watch an end-to-end coding demo which will guide you through the same coding scenario outlined in this post:
AllJoyn: Building Universal Windows Apps that Discover, Connect, and Interact with Other Devices and Cloud Services Using AllJoyn
Building apps for Windows 10 using UWP AllJoyn APIs involves three categories of code:
The following diagram shows the layout for an example AllJoyn UWP project:
AllJoyn-enabled UWP apps can be either Producers (implement and expose interfaces, typically a device), Consumers (call interfaces, typically apps), or both. While this article will focus on the AllJoyn Consumer scenario, the steps required to implement an AllJoyn Producer are the same for the most part. If you want to implement an AllJoyn producer UWP app, at the bottom of this post, you'll find a link to a sample project which includes code for a UWP producer app and a UWP consumer app.
Developing AllJoyn-enabled UWP apps for Windows 10 Public Preview involves the following steps: (explained in detail later in this document)
The Windows 10 Public Preview:
You can use the getajxml command line application (included in the download attached to this post) to get introspection XML from AllJoyn devices running on your network. Running the tool without command line parameters will list all AllJoyn apps on the network.
When you run getajxml without parameters, you will see a list of all the AllJoyn devices on the network, each of the interfaces exposed by each device, and other AllJoyn metadata like the unique name, the session port, and the object path.
At //build 2015, an AllJoyn-enabled toaster device was shown; it serves as the example for this post and for the corresponding code that you can download. This toaster exposes controls for starting and stopping the toasting sequence, setting the "darkness", and notifications when toast is done.
The AllJoyn toaster hardware sample in action
The following output is obtained from the AllJoyn toaster device running on a network when you run the getajxml tool:
----------------------------------------------------------------------
Discovery : About Announcement
Manufacturer: Microsoft
Model # : 070773
Device Name : Raspberry Pi Toaster
Device ID : 41d9a124-6913-40c5-a20a-9d1b20f8121b
App Name : Toaster Producer
Bus Name Port Object Path
============================== ===== ===============================
:3yZG_wu1.2 25 /emergency
:3yZG_wu1.2 25 /info
:3yZG_wu1.2 25 /notificationDismisser
:3yZG_wu1.2 25 /notificationProducer
:3yZG_wu1.2 25 /toaster
:3yZG_wu1.2 25 /warning
...
----------------------------------------------------------------------
In this case, we're interested in obtaining the introspection XML for the toaster interface which has a unique name of ":3yZG_wu1.2", a session port value of "25", and an object path with value "/toaster".
In order to obtain the introspection XML, we pass these values to the getajxml tool as shown here:
getajxml.exe :3yZG_wu1.2 25 /toaster > toaster.xml
When the above command was run, the XML written to the file included the following:
<!DOCTYPE node PUBLIC "-//freedesktop//DTD D-BUS Object Introspection 1.0//EN" "">
<node>
  <interface name="com.microsoft.sample.toaster">
    <method name="startToasting">
    </method>
    <method name="stopToasting">
    </method>
    <signal name="toastDone">
      <arg name="status" type="i" direction="out"/>
    </signal>
    <property name="Darkness" type="u" access="readwrite"/>
  </interface>
</node>
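When reading signatures like type="i" and type="u" in introspection XML, it helps to know the D-Bus type codes they come from. Here is a minimal lookup covering the codes used by the toaster interface plus the other common fixed types (this sketch is for reference while reading XML; it is not part of the AllJoyn toolchain):

```cpp
#include <cassert>
#include <map>
#include <string>

// Minimal D-Bus type-code lookup for reading introspection signatures.
// "i" (int32) and "u" (uint32) are the codes used by the toaster interface.
std::string DbusTypeName(char code) {
    static const std::map<char, std::string> kTypes = {
        {'b', "boolean"}, {'y', "byte (uint8)"},
        {'n', "int16"},   {'q', "uint16"},
        {'i', "int32"},   {'u', "uint32"},
        {'x', "int64"},   {'t', "uint64"},
        {'d', "double"},  {'s', "string"},
    };
    auto it = kTypes.find(code);
    return it == kTypes.end() ? "unknown" : it->second;
}
```

So the toastDone signal carries an int32 status argument, and Darkness is a read/write uint32 property.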
The AllJoyn code generator takes in an XML file containing one or more AllJoyn introspection interfaces and generates C++ code that implements APIs for the AllJoyn interface(s) described in the XML.
The AllJoyn code generator tool is included in the Windows 10 Public Preview build, and is located in the following directory: [Windows Kits Directory]\10\bin\x64\
Example directory: C:\Program Files (x86)\Windows Kits\10\bin\x64\
Using the AllJoyn code generator tool involves supplying two command line arguments: an input introspection XML file, and an output directory:
alljoyncodegen.exe -i <input xml file> -o <output folder>
The following example shows the AllJoyn code generator in use:
alljoyncodegen.exe -i c:\alljoyn\toaster\toaster.xml -o c:\alljoyn\toaster\toaster-uwp-component
This example assumes that the "c:\alljoyn\toaster\toaster-uwp-component" directory already exists.
After running the code generator you'll see a number of files in the output directory. For the toaster example, the following files are generated:
Once the C++ files are generated, it's then time to create a UWP Windows Runtime Component project that will contain these files.
In Visual Studio 2015, add a new "Windows Runtime Component (Windows Universal)" project to the solution containing your UWP app (you can do this easily by right-clicking the solution, then selecting Add > New Project). Once this project has been created, you will need to perform the following steps:
If you are using AllJoyn interfaces that don't share the same root namespace, you'll need to generate code for each root namespace used. The steps in this section need to be performed for each collection of code that is generated. In the end, you should have one UWP component project for each AllJoyn root namespace used by your UWP app.
At this point, you should have a Visual Studio solution that includes a UWP app project, and one or more UWP Windows Runtime Component projects. Before you embark on writing your AllJoyn consumer code, there's a few last things to take care of.
In each UWP component project, make the following updates/changes:
In the UWP app project, make the following updates/changes:
After completing these steps, build and run the UWP app project. Your builds should succeed, and the app should run. If you have build or deployment errors, investigate and fix before continuing on.
If you've navigated all of the instructions in this document and followed all of the steps correctly, you are ready to start writing AllJoyn consumer code in your app. The good news is, most of the steps outlined in this document will be replaced by AllJoyn integration in Visual Studio 2015 which will be available at the time Windows 10 is released. This means you'll be able to start writing AllJoyn consumer and/or producer code in just a few minutes.
The attached toaster sample project contains sample code for both a toaster producer (simulates a toaster device) and a toaster consumer (a toaster control app).
For a detailed walkthrough of how to create an AllJoyn consumer UWP application, please watch the AllJoyn session 623 from //build 2015:
"AllJoyn: Building Windows apps that discover, connect and interact with other devices and cloud services using AllJoyn".
We have also provided all of the code and tools that you will need to walk through the toaster UWP app development exercise demonstrated at //build and in this post:
Git Repo: AllJoyn Toaster Producer and Consumer Sample for Windows 10 Public Preview
In addition, here are some resources that will help you get up to speed with AllJoyn and AllJoyn support in Windows 10:
Thanks,
Gavin
Great stuff, but where can I get the getajxml.exe application?
Ken- the getajxml.exe tool is now included in the Git Repo that we linked to. Let us know if you have any issues with it.
Hi Gavin,
I'm creating a simple AllJoyn LED app for exercise, and I changed the sequence of steps you provided a little.
I didn't delete pch.cpp and pch.h from the created runtime component, but overrode them with the auto-generated versions (I saw in your toaster example that you have the auto-generated pch.h and pch.cpp in the runtime project). However, if you try to compile ... it fails of course, because it doesn't have all the dependencies needed from the "Editing Project Settings and Dependencies" step.
My advice is to change it in the following way:
override pch.h and pch.cpp with the auto-generated versions
before trying to compile, the user can execute the "Editing Project Settings and Dependencies" steps, and then the build will work fine.
Does it make sense to you?
Paolo.
It seems that a first compilation before copying all the generated files is needed, because otherwise I receive an error about the missing precompiled header pch.pch (in the Debug folder).
If I compile the newly created runtime component before replacing all the generated files, it works.
Another point ...
In general, when you don't have a capability in the package manifest file but use it at runtime, an exception is raised. In my example application, I forgot to set the AllJoyn capability in the consumer app, but no exception was raised. The only symptom is that the watcher's Added event is never called.
Paolo.
Thanks Paolo! We are fixing the exception issue, and I updated the steps to start with a build of the UWP component project before modifying it.
There is another main step that drove me crazy !
To use toaster on the Raspberry Pi 2, it's needed to disable the firewall on the board with :
netsh advfirewall set allprofiles state off
It's documented in the ZWave DSB guide. I didn't read the guide because I didn't need to implement a DSB bridge so I lost this mandatory step.
However, I think that it could be better to open only AllJoyn ports and not disable the firewall.
Paolo
I can't seem to get getajxml.exe to run on the Windows 10 preview... I get the following message:
This version of C:\temp\getajxml.exe is not compatible with the version of Windows you're running. Check your computer's system information and then contact the software publisher.
Any Ideas?
I'm attempting to use this instead of alljoyn explorer due to issues I am encountering. Alljoyn explorer is showing two different bus objects having identical interfaces (com.microsoft.zwaveadapter.xxxx.interface_1). They therefore generate identical introspection XML. When connecting via my UWP app it is connecting to the wrong bus. I can't seem to figure out how to differentiate between the two. I'm hoping to see different XML generated by the getajxml tool.
I think that there may be an issue with current getajxml.exe and alljoyncodegen.exe, when there are interfaces with the same name under 2 different object paths.
My scenario is to write a Win UWP AllJoyn Client app that controls Aeon Lab Smart Switch via Device System Bridge. The Aeon Lab Smart Switch is used as a demo/sample codes for Device System Bridge.
The main control interface is "com.microsoft.ZWaveAdapter.HomeID25504235Node2.interface_1". However, both /Switch object path and /Enable_Wattage_Reports object path have the same interface.
Issue 1: When I use "getajxml.exe" to generate the introspection xml, the "node" does not specify which object path to use. Therefore, I got the same xaml for 2 different object paths.
Issue 2: Even when I manually add name="/Switch" to node and use "alljoyncodegen.exe", the resulting codes generated (I believe) do not consider the object path.
In auto generated AllJoynHelpers::GetObjectPath(), it only returns the first object path for that interface name. Therefore, I alwasy got /Enable_Wattage_Reports object path, not the intended /Switch object path.
Can some one shed some light about these issues?
Thanks,
Chih-fan Hsin.
Chihfan,
I am running into the exact same issue and have not been able to work around it. Please keep me updated if you find a solution.
Thanks!
Joe
Joe,
To workaround this issue, I modified the auto-generated codes at AllJoynHelpers::GetObjectPath(). You need to return multiple object paths that have the same interfaces. Then you need to loop through them, and get the object path you want, and return that object path.
Currently, the auto-generated codes only return the 1st object path with a matching interface. I also reported this issue at .
Chih-fan | https://channel9.msdn.com/Blogs/Internet-of-Things-Blog/Step-By-Step-Building-AllJoyn-Universal-Windows-Apps-for-Windows-10-Public-Preview | CC-MAIN-2020-40 | refinedweb | 1,991 | 62.17 |
Deploying web services with WSDL
Part 1
Introduction to web services and WSDL
Content series:
This content is part # of # in the series: Deploying web services with WSDL
This content is part of the series:Deploying web services with WSDL
Stay tuned for additional content in this series.
The idea of having interoperable web-based distributed applications is not new. As just one example, the requirements of the Electronic Data Interchange (EDI) market emerged well before B2B on-line e-commerce gained any significant presence -- and with the popularity of the B2B marketplace, interoperability has become the most compelling EDI requirement.
Take any online electronic Marketplace as an example. There are a lot
of businesses, each offering its own
services
(lets call them
web services). In present-day
e-commerce, there is no mechanism that allows one business to discover
automatically the services that its prospective partners offer. The
so-called next generation dotcom will offer a mechanism for exactly
this kind of automated discovery.
What is WSDL?
This new breed of dotcom needs a solution that can describe the services -- the web services -- it offers. Specifically, this means that you need a format or some type of grammar, with which you can describe the answers to the following questions:
- What are the services offered in your online business?
- How can you invoke your business services?
- What information do your business services need from the user when he or she invokes your service?
- How will the user provide the required information?
- In which format will the services send information back to the user?
Happily, WSDL provides the mechanism for doing all of these jobs.
WSDL and SOAP
To better understand how WSDL works, I will first describe how SOAP and HTTP work with WSDL. The purpose of WSDL is to "describe" your web services. Businesses will exchange WSDL files to understand the other's services. SOAP comes in once you know your partners' services and wish to invoke them. You can think of services as objects which are accessed by SOAP.
Most likely you will be communicating with potential partners via the Internet or through e-mail. The Internet, of course, uses HTTP and e-mail works on SMTP, making HTTP and SMTP the favored candidates for acting as "transport service providers" to SOAP.
WSDL Authoring
Now I'll look at the process of writing WSDL for the web service. The goal is to expose the existing web services. Your own situation may be any one of the following:
- You have an existing service (for instance, a web site) and you want to expose its functionality.
- You have a WSDL and you want to implement web server-side logic according to what you have already decided to expose. (Some people may think this an unlikely scenario, but UDDI's idea of fingerprints makes it quite probable; I will discuss UDDI in the fourth part of this series of articles).
- You are starting from scratch and have neither a web site nor a WSDL interface.
The information covered in this article allows for any or all of these possibilities.
Four steps to WSDL authoring
I will divide WSDL authoring in four simple steps. Follow each step, and your web service will be ready for deployment.
Step 1: The service interface
As a sample project, you will build the service interface of a mobile
phone retail company (I will call this service
MobilePhoneService). This company sells mobile phones
in different models, so back-end data storage of this company's web
service will contain a single table with two columns,
model number and
price.
(I am keeping this simple in order to maintain the focus on WSDL
itself). You will have two methods on your service which you will expose
using WSDL:
- getListOfModels ()
- getPrice (modelNumber)
The
GetListOfModels method provides an
array of strings, where each string represents the model number of a
mobile phone.
GetPrice takes a model number and
returns its price. WSDL calls these methods as operations. Now you'll start
building the
WSDL interface file.
The root element of every WSDL file is
<definitions>, in which you must provide a complete description of services. First of all, you have to provide various
namespace declarations in the
<definitions> element. The three external
namespace declarations you have to make are WSDL, SOAP, and XSD (XML Schema
Definition). There is another namespace, TNS, that refers to your
MobilePhoneService (this means TNS -- which is short for targetNamespace
-- will contain all names of elements and attributes that will be defined
specifically for the
MobilePhoneService). But
WSDL is the primary namespace that you'll use in most of your WSDL
authoring. I will mention the utility of other namespaces as they are used in this series of articles.
Just a note about namespaces: WSDL uses the concept of namespaces extensively. I encourage you to visit W3C's official web site to learn more about namespaces (see Related topics). WSDL is an implementation of this idea, since namespaces provide an infinite degree of flexibility and this is exactly what is required in a portable format for electronic data interchange.
The
<definitions> element contains
one or more
<portType> elements, each of
which is actually a set of
operations that you
want to expose. Alternatively, you can think of a single portType element
as a logical grouping of methods into classes. For example, if your supply
chain management solution requires interaction with both customers and
suppliers, you will most probably define functionality for interaction
with them separately; that is, you will define one portType for customers
and one for suppliers. You should call each portType a service, so that
your complete WSDL file will become a collection of services.
You have to provide a name for each service. In this case, you have only
one service (so one
<portType> element).
You need to use the
name attribute of this portType
element to assign a name to your mobile phone sales service.
Within each service you may have several methods, or
operations, which WSDL refers to via
<operation> elements. The sample application
has two methods to expose, namely
getListOfModels and
getPrice. Therefore, you need to provide two
<operation> elements, each having a
name.
I have used the
name attribute of the
<operation> element to name each operation.
At this point the WSDL file looks like Listing 1.
Listing 1: Defining operations
<?xml version="1.0" encoding="UTF-8" ?> <definitions name="MobilePhoneService" targetNamespace="" xmlns="" xmlns: <portType name="MobilePhoneService_port"> <operation name="getListOfModels "> ....... ....... </operation> <operation name="getPrice"> ....... ....... </operation> </portType> </definitions>
Step 2: Making parameters
Having defined the operations (or methods), you now need to specify the parameters that you will send to them and the parameters that they will return. In WSDL terms, all parameters are called "messages." It is useful to think that you are sending in messages and as a result getting back return messages. Method calls are the operations that take place to prepare return "messages" in response to incoming messages.
Recall from the first step that you have two operations to expose. The
first operation,
getListOfModels, does not take
any parameter and returns an array of strings, where each string
represents the model number of a mobile phone. Therefore, you have to
define a
<message> element that contains
an array of strings.
Have a look at the various
elements in Listing 2. The first of these has a name attribute equal to
ListOfPhoneModels (a logical name for this
message), and a single
<part> element with the name models, which
means
ListOfPhoneModels is a one-part message,
where the name of the only part present is "models." You can have any
number of parts in a message -- so long as you remember to give them
different names for unique identification.
I have included another attribute of the
<part> element, which is
type. Think of this "type" attribute as data types in
C++ or Java. I have specified the data type of
models as tns:Vector. (Recall that I specified a few
namespaces in the root
<definitions>
element, one of which was
tns). This refers to
the
MobilePhoneService namespace. What this
means is that you can create your own namespace while authoring WSDL.
You may now be asking two logical questions: Why? And how?
To answer the why, let's take the array of strings returned by
the
getListOfModels operation as an example.
WSDL uses a few primitive data types that XML Schema Definition (XSD)
defines (like int, float, long, short, byte, string, Boolean, etc.) and
allows you to either use them directly or to build complex data types based
on these primitive ones, before using them in messages. This is why you
need to define your own namespace when referring to complex data types. In
this case, you need to build a complex data type for an
array of strings.
Now coming to the how question, you will use XSD to create your
namespace. For this purpose I have used the xsd:complexType element
within the
<types> element to define data type named
Vector.
Vector
uses two primitive data types -- string (element data) and Integer
(element count). Hence
Vector becomes part of
the namespace and can be referred to by the alias
tns.
In a similar manner, I have defined the other two messages,
PhoneModel and
PhoneModelPrice, in Listing 2. These two messages use
only string primitive data types of the xsd namespace and therefore you do
not need to define any more complex data types in order to use them.
You may have noticed that while creating the
<message> elements, you did not specify whether
these messages are incoming parameters or return values. This is a job you
will take care of in the
<operation>
element within the
<portType> element.
Therefore, as you can see in Listing 2, I have added the
<input> and
<output> elements to each of the two
operations. Each input element refers to a message by its name and treats
it as a parameter that the user will provide when invoking this operation.
Each
<output> element similarly refers to
a message; it treats the message as the return value of the operation
call.
Listing 2 fits the discussion so far nicely into one frame.
Step 3: Messaging and transport
I have defined operations and messages in an abstract way, without
worrying about the details of implementation. In fact, WSDL's job is to
define or describe web services and then to provide a reference to an
external framework to define how the WSDL user will reach the
implementation of these services. You can think of this framework as a
binding between WSDL's abstract definitions and
their implementation.
Currently, the most popular
binding
technique is to use the Simple Object Access Protocol (SOAP). WSDL will
specify a SOAP server that has access to the actual implementation of your
web service, and from there it is entirely SOAP's job to take the user
from the WSDL file to its implementation. SOAP is the topic of next
installment in this series of articles, so for the time being I will
avoid SOAP details and keep focused on WSDL authoring.
The third step in WSDL authoring is to describe the process of SOAP
binding with a WSDL file. You will include a
<binding> element within the
<definitions> element. This binding element
should have a
name and a
type. The
name will
identify this binding and
type will identify
the portType (set of operations) that you want to associate with this
binding. In Listing 3, you will find that the
name of the
<portType> element matches the type attribute
value of the
<binding> element.
The WSDL binding element contains a declaration of which external
technologies you will use for binding purposes. Since you are using SOAP, you
will use SOAP's namespace here. In WSDL terminology, the use of an
external namespace is called the
extensibility
element.
In Listing 3, you will see an empty
<soap:binding/> element. The purpose of this
element is to declare that you are going to use SOAP as a binding and
transport service.
The
<soap:binding> element has two
attributes: style and transport. Style is an optional attribute that
describes the nature of operations within this binding. The transport
attribute specifies HTTP as the lower-level transport service that this
binding will use.
A SOAP client will read the SOAP structure from your WSDL file and
coordinate with a SOAP server on the other end, so you must be very
concerned with
interoperability. I intend to
cover this in detail in the third part of this series of articles.
After the empty
<soap:binding/>
element, you have two WSDL
<operation>
elements, one for each of your operations from Step 1. Each
<operation> element provides binding details
for individual operations. Therefore, I have provided another
extensibility element, namely
<soap:operation/> (again an empty element that
relates to the operation in which it occurs). This
<soap:operation/> element has a soapAction
attribute that a SOAP client will use to make a SOAP request.
Recall from Step 2 that the
getListOfModels
operation only has an output and does not have any input. Therefore, you
have to provide an
<output> element for
this operation. This output contains a
<soap:body/> element (again an empty element
that relates to the output in which it occurs). The SOAP client needs this
information to create SOAP requests. The value of the namespace attribute
of
<soap:body/> should correspond to the
name of the
service that you will deploy on your
SOAP server in the next part of this series of articles.
You are nearly finished with Step 3. Just copy the next operation after this one and you will come up with Listing 3.
Step 4: Summing it up
You have produced a WSDL file that completely describes the
interface of your service. WSDL now requires the
additional step of creating a summary of the WSDL file. WSDL calls this a
an
implementation file, which you will use while publishing
your web service at a UDDI registry in the fourth part of this series of
articles. Have a look at Listing 4, a WSDL implementation file. Its main
features are the following:
- The root
<definitions>element is exactly the same as in Listing 3 (a WSDL interface file), except that Listing 4 (the implementation file) refers to a different
targetNamespace, which refers to your implementation file.
- There is an
<import>element that refers to the interface file of Listing 3 (file name MobilePhoneService-interface.wsdl) and its namespace.
- There is a
<service>tag with a logical
namefor this service. Within the service element is a port element that refers to the SOAP binding that you created in Listing 3.
Using IBM's Web Services ToolKit (WSTK) for WSDL authoring
The web service is now completely ready for deployment. I have shown how to create these files manually (using a simple text editor like emacs). These same files can be generated using web services authoring tools like IBM's WSTK (see Related topics for links to the toolkit, and other resources mentioned in this article).
WSTK can generate these files using a wizard-assisted process. Users can generate WSDL files for the same two methods that I demonstrated in the above tutorial and compare WSTK files with the WSDL files of Listings 3 and 4.
You will notice following differences:
- WSTK creates all name attributes according to a logical formula; in the example, I used names of my own convenience.
- WSTK generates at least one input tag for each operation, even if that operation does not take any input. The
listAllPhoneModelsoperation did not have any input element, but if you generate the same file with WSTK, it will contain an empty input element for this method.
- WSTK produces a third file in addition to the two files that were produced. This third file is a SOAP deployment descriptor that the SOAP engine uses for service deployment. I will discuss service deployment in the article of this series.
In this installment I have demonstrated manual WSDL authoring to create interface and implementation files, and compared the files with those that IBM's Web Services ToolKit produces. In the next part of this series, I will discuss deployment of this WSDL service on a SOAP server.
Downloadable resources
Related topics
- Visit W3C's official web site to find the Web Services Description Language (WSDL) 1.1 specification and all other XML-related official specifications including technical documentation for XSD and namespaces.
- Visit IBM's alphaWorks web site to download the Web Services ToolKit (WSTK) used in this article.
- Download Apache's SOAP toolkit from Apache.org.
- Building Web Services: Making Sense of XML, SOAP, WSDL, and UDDI is a new book from Steve Graham, Simeon Simeonov, Toufic Boubez, Glen Daniels, Doug Davis, Yuichi Nakamura, Ryo Neyama -- a group of authors from all corners of the web services technology sector. (Sams publishing, 2001).
- Read this article on developerWorks that describes how to map WSDL elements to a UDDI registry. | https://www.ibm.com/developerworks/webservices/library/ws-intwsdl/ | CC-MAIN-2018-13 | refinedweb | 2,837 | 53.51 |
Currently using SDK 36, tested on Android.
I can’t seem to use react-native-svg. I get the error
Unable to resolve "./elements/ForeignObject" from "node_modules/react-native-svg/src/ReactNativeSVG.ts.
My code is really simple, taken straight from the example here.
import React from 'react'; import { View, StyleSheet } from 'react-native'; import Svg, { Circle, Rect } from 'react-native-svg'; export default class SvgExample extends React.Component { render() { return ( <View style={[ StyleSheet.absoluteFill, { alignItems: 'center', justifyContent: 'center' }, ]}> <Svg height="50%" width="50%" viewBox="0 0 100 100"> <Circle cx="50" cy="50" r="45" stroke="blue" strokeWidth="2.5" fill="green" /> <Rect x="15" y="15" width="70" height="70" stroke="red" strokeWidth="2" fill="yellow" /> </Svg> </View> ); } }
I’ve also tried importing it like this:
import * as Svg from 'react-native-svg'; const { Circle, Rect } = Svg;
As suggested on react-native-svg’s github, but it didn’t change anything.
Found this solution on github, but
expo start --clear didn’t work. And I don’t understand what this solution entails.
How do I fix this? | https://forums.expo.io/t/react-native-svg-unable-to-resolve-elements-foreignobject-from-node-modules-react-native-svg-src-reactnativesvg-ts/34616 | CC-MAIN-2020-29 | refinedweb | 179 | 51.34 |
<dce/acct.h>-Header file for the sec_rgy_acct API
#include <dce/acct.h>
Header file for the Registry API used to create and maintain accounts in the Registry database. All of these routines have the prefix sec_rgy_acct.
Data Types and ConstantsThere are no particular data types or constants specific to the sec_rgy_acct API (other than those that have already been introduced in this specification).
Status CodesThe following status codes (listed in alphabetical order) are used in the sec_rgy_acct API.
- error_status_ok
The call was successful.
- sec_rgy_no_more_entries
The cursor is at the end of the list of projects.
- sec_rgy_not_authorized
Client program is not authorized to add an account to the registry.
- sec_rgy_object_not_found
The registry server could not find the specified name.
-. | http://pubs.opengroup.org/onlinepubs/9696989899/dce_acct.h.htm | CC-MAIN-2017-30 | refinedweb | 119 | 50.12 |
pandas can be used conveniently to read a table of values from Excel. When extracting data from real-life Excel sheets, there are often metadata fields which are not structured as a table readable by pandas.
Reading the pandas docs, it is not obvious how we can extract the value of a single cell (without any associated headers) with a fixed position, for example:
In this example, we want to extract cell
C3, that is we want to end up with a string of value
Test value #123. Since there is no clear table structure in this excel sheet (and other cells might contain other values we are not interested in – for example, headlines or headers), we don’t want to have a
pd.DataFrame but simply a string.
This is how you can do it:
def read_value_from_excel(filename, column="C", row=3): """Read a single cell value from an Excel file""" return pd.read_excel(filename, skiprows=row - 1, usecols=column, nrows=1, header=None, names=["Value"]).iloc[0]["Value"]
# Example usage
read_value_from_excel(“Test.xlsx”, “C”, 3) # Prints
Let’s explain the parameters we’re using as arguments to
pd.read_excel():
Test.xlsx: The filename of the file you want to read
skiprows=2: Row number minus one, so the desired row is the first we read
usecols="C": Which columns we’re interested in – only one!
nrows=1: Read only a single row
header=None: Do not assume that the first row we read is a header row
names=["Value"]: Set the name for the single column to
Value
.iloc[0]: From the resulting
pd.DataFrame, get the first row (
[0]) by index (
iloc)
["Value"]From the resulting row, extract the
"Value"column – which is the only column available.
In my opinion, using pandas is the best way of extracting for most real-world usecases (i.e. more focused on development speed than on execution speed) because not only does it provide automatic engine selection for
.xls and
.xlsx files, it’s also present on most Data Science setups anyway and provides a standardized API. | https://techoverflow.net/2021/08/01/how-to-read-single-value-from-xlsx-using-pandas/ | CC-MAIN-2022-27 | refinedweb | 346 | 60.24 |
#014 Calculating Sparse Optical flow using Lucas Kanade method
Highlights: In this post, we will show how we can detect moving objects in a video using the Lucas Kanade method. This approach is based on tracking a set of distinctive feature points. Therefore, it is also known as a Sparse Optical Flow method. We will give a detailed theoretical understanding of the Lucas Kanade method and show how it can be implemented in Python using OpenCV.
Tutorial overview:
- Understanding the Concept of Motion
- Optical Flow and its types
- Optical Flow Constraint
- Gradient Component of Flow
- Lucas Kanade in Python
1. Understanding the Concept of Motion
Until now, we have covered many computer vision methods for object detection, object segmentation, and object tracking. However, in all these approaches one important piece of information is completely ignored. That information is the relationship between objects in two consecutive frames.
Let’s understand this using an example. Have a look at the GIF below that shows a sunrise motion. This GIF is nothing but a sequence of static frames combined together.
In examples such as above, our main objective is to capture the change that happens from one frame to another. In order to achieve that, we need to track the motion of objects across all frames, estimate their current position and then, predict their displacement in the subsequent frame.
Moreover, when we work with video sequences rather than static images, instead of using just the spatial coordinates \(x\) and \(y\), we will also have to track the change of intensity of a pixel over time. Therefore, we will have one additional variable: time. This means that the intensity of a single pixel can be represented as a function of the space coordinates \(x\), \(y\), and time \(t\).
This approach is popularly known as Optical Flow. Let’s understand this better along with the various categorizations of Optical Flow.
2. Optical Flow and its types
Optical Flow can be defined as a pattern of motion of pixels between two consecutive frames. The motion can be caused either by the movement of a scene or by the movement of the camera.
A vital ingredient in several computer vision and machine learning fields such as object tracking, object recognition, movement detection, and robot navigation, Optical Flow works by describing a dense vector field. In this field, each pixel is assigned a separate displacement vector that helps in estimating the direction and speed of each pixel of the moving object, in each frame of the input video sequence.
If we were to categorize the technique of Optical Flow, we can divide it into two main types: Sparse Optical Flow and Dense Optical Flow.
- Sparse Optical Flow: This method processes the flow vectors of only a few of the most interesting pixels from the entire image, within a frame.
- Dense Optical Flow: Here, the flow vectors of all pixels in the entire frame are processed which makes this technique a little slower.
Have a look at the two GIF images below. The image on the left shows Sparse Optical Flow in action and the image on the right shows Dense Optical Flow.
As you must have realized by now, tracking the motion of objects from one frame to another is a complicated problem due to the sheer volume of pixels in each frame. In the following paragraphs, we will introduce the necessary theoretical concepts, and finally, we will derive the equations for the Lucas Kanade optical flow method.
3. Optical Flow Constraint
In the Lucas Kanade method, an important assumption was made, that there isn’t any significant change in the lighting between two consecutive frames. This assumption is called the Brightness Constancy Assumption. This means that if the object in the image moves or the camera moves, then, the colors of that object will remain the same, regardless of the lighting.
Brightness constancy constraint: Let’s assume that a pixel is moving in the direction \((u, v)\). That is, \(u\) is the amount of movement in the \(x\) direction, and \(v\) is the amount of movement in the \(y\) direction, by the time we get to \(t+1\). Here, \(t\) represents the time instant of the first frame, while the consecutive frame is represented as \(t+1\).
The brightness constancy constraint equation can be written as follows:
$$ I(x, y, t) = I(x+u, y+v, t+1) $$
In this equation, \(I(x, y, t)\) represents the pixel intensity at time \(t \) at location \((x, y) \). The Brightness Constancy Assumption states that the intensity at \(t+1 \) at location \((x+u, y+v) \) is going to be the same as in frame \(t\). Here, \(u\) and \(v\) are the displacements by which the point has moved in the \(x\) and \(y\) directions.
This can also be rewritten as:
$$ 0 = I(x+u, y+v, t+1) – I(x, y, t) $$
Small Motion: When a pixel moves between two consecutive frames we assume that a movement is relatively small. We can then use the Taylor expansion in order to approximate this equation:
$$ 0 \approx I(x, y, t+1)+I_{x} u+I_{y} v-I(x, y, t) $$
Here, we replace \(I(x+u, y+v, t+1)\) with its first-order Taylor expansion. So, we have \(I(x,y,t+1) \) plus the derivative part \(I_{x}u + I_{y}v \). Note that \(I_{x}\) is the derivative in the \(x\)-direction, and \(I_{y}\) is the derivative in the \(y\)-direction.
In addition, \(I_{x} = \frac{\partial I}{\partial x}\) and \(I_{y} = \frac{\partial I}{\partial y}\) for both \(t\) and \(t+1\). Here, we are going to assume that things are changing slowly, so the derivative at a particular point is going to be the same both at \(t\) and \(t+1\).
$$ 0 \approx [I(x,y,t+1) – I(x, y, t)]+ I_{x}u + I_{y}v $$
Now, when we rearrange equation terms:
$$ 0 \approx I_{t} + I_{x}u + I_{y}v $$
This \(I_{t}\) is the difference \(I(x,y,t+1) - I(x, y, t)\), so \(I_{t}\) is called the temporal derivative. In addition, \(I_{x}\) is the derivative of \(I\) in the \(x\)-direction and \(I_{y}\) is the derivative of \(I\) in the \(y\)-direction. Our video is a function \(I(x,y,t)\), so a derivative can be taken with respect to \(x\), \(y\), and \(t\).
$$ 0 \approx I_{t} + \bigtriangledown I \cdot [u, v] $$
Here \( \bigtriangledown I \cdot [u, v] \) is the gradient of the image, dotted with the vector \([u, v]\). We talked about Image Gradient before and you can check it out.
Finally, the brightness constancy constraint equation can be written as:
$$ I_{x}u + I_{y}v + I_{t} = 0 $$
This is a single equation with two unknowns and, as such, cannot be solved on its own. This is known as the aperture problem of optical flow algorithms. To find the optical flow, another set of equations is needed, which we obtain by imposing an additional constraint.
$$ I_{x}u + I_{y}v = – I_{t} $$
Note that this can be written as a single vector equation:
$$\vec{\bigtriangledown}I \cdot \vec{u} = -I_{t}$$
where:
$$ \vec{\bigtriangledown}I = \begin{bmatrix} I_{x}\\ I_{y} \end{bmatrix}, \qquad \vec{u} = \begin{bmatrix} u\\ v \end{bmatrix} $$
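To make the constraint concrete, here is a small numerical check in NumPy. The synthetic Gaussian blob and the hand-picked shift are illustrative assumptions, not part of the original tutorial: if a smooth image is translated by a known \((u, v)\), the residual \(I_{x}u + I_{y}v + I_{t}\) should be close to zero at every pixel.

```python
import numpy as np

# Synthetic setup (assumed for illustration): a smooth Gaussian blob in
# frame 1, and the same blob shifted by (u, v) = (1, 0) in frame 2.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
I1 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
I2 = np.exp(-((xx - 33) ** 2 + (yy - 32) ** 2) / 50.0)  # moved 1 px right

# The three derivatives that appear in the constraint I_x*u + I_y*v + I_t = 0
Ix = np.gradient(I1, axis=1)  # spatial derivative along x (columns)
Iy = np.gradient(I1, axis=0)  # spatial derivative along y (rows)
It = I2 - I1                  # temporal derivative

u, v = 1.0, 0.0
residual = Ix * u + Iy * v + It
# The residual is only approximately zero: the Taylor expansion drops
# second-order terms, so a small error remains even for a 1 px shift.
print(np.abs(residual).max())
```

For larger displacements the linearization breaks down, which is one reason practical implementations run the method on image pyramids.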
4. Gradient Component of Flow
In the last paragraph we get to this equation, which is the brightness constancy constraint equation:
$$ I_{x}u+I_{y}v+I_{t}= 0 $$
A question is how many unknowns and how many equations per pixel do we have? Here, we have two unknowns \(u \) and \(v \) but only one equation per pixel.
The component of \(\left ( u,v \right ) \) that is in the direction of the gradient is something we can measure.
Here, we can see that this red line in the image above is in the direction of the gradient. We may suppose that we have a displacement vector \(\left ( u,v \right ) \) that needs to be determined. Also, we have a \(\left ( {u}’,{v}’ \right ) \) that is an extra component parallel to the edge. This new vector \(\left ( u+{u}’,v+{v}’ \right ) \) has the same amount of motion perpendicular to the edge, in the direction of the gradient, but a different amount along the edge. Locally, we can only determine the amount of motion that is perpendicular to the edge. We can think of it as looking through a hole. That hole is called an aperture, and this general problem is called the aperture problem.
Aperture Problem
Now, let’s see this interesting example. It is a line with two parts. Notice that it has a corner. In the following example it is moving down and to the right:
Let us put an aperture over the line. If we can observe only the green area we will get a wrong impression that the line is moving in the direction perpendicular to the edge.
Next, if we are observing the movement of the line through a yellow area we can estimate the movement accurately.
That is the aperture problem. We can only tell the motion locally in the direction perpendicular to the edge.
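Numerically, the only thing a single constraint pins down is the projection of the flow onto the gradient direction (the "normal flow"). A small NumPy sketch with made-up derivative values (Ix, Iy, It are illustrative, not from a real image) shows that sliding along the edge direction leaves the constraint satisfied:

```python
import numpy as np

# Hypothetical derivative values at a single pixel (illustration only)
Ix, Iy, It = 0.8, 0.6, -0.5

grad = np.array([Ix, Iy])
# The constraint Ix*u + Iy*v = -It only determines the projection of
# (u, v) onto the gradient direction: the "normal flow" magnitude.
normal_flow = -It / np.linalg.norm(grad)

# Any (u, v) on the constraint line satisfies the equation equally well:
u, v = 0.625, 0.0          # one solution on the line
u2, v2 = u - Iy, v + Ix    # slide along the edge (perpendicular to grad)
print(normal_flow)                          # 0.5
print(Ix * u + Iy * v, Ix * u2 + Iy * v2)   # both equal -It = 0.5
```

Both candidate displacements satisfy the equation exactly, which is the aperture problem in numeric form.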
Solving the Aperture Problem
Now we are going to solve the aperture problem. To do this, we have to impose additional local constraints in order to get more equations per pixel. For example, we will assume that the motion field is very smooth in the local area around each pixel. In fact, we assume that in a small rectangular window around this pixel, \(\left ( u,v \right ) \) has the same value at every point. For example, if we have a \(5\times 5 \) window, there are \(25 \) pixels in that window. If we assume that there is one \(\left ( u,v \right ) \), the \(\left ( u,v \right ) \) of the center pixel, shared by all of them, that gives us \(25 \) equations for that pixel. Here they are:
$$ 0= I_{t}\left ( p_{i} \right )+\bigtriangledown I\left ( p_{i} \right )\cdot \begin{bmatrix}u & v \end{bmatrix} $$
$$ \begin{bmatrix}I_{x}\left ( p_{1} \right ) & I_{y}\left ( p_{1} \right )\\ I_{x}\left ( p_{2} \right )& I_{y}\left ( p_{2} \right )\\\vdots & \vdots \\I_{x}\left ( p_{25} \right ) & I_{y}\left ( p_{25} \right )\end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix}= -\begin{bmatrix}I_{t}\left ( p_{1} \right )\\I_{t}\left ( p_{2} \right )\\\vdots \\I_{t}\left ( p_{25} \right )\end{bmatrix} $$
So we have the gradients dotted with \(\begin{bmatrix}u\\v\end{bmatrix} \) and that equals the negative of the temporal derivatives at each of those points. We can write this as:
$$Ad= b $$
where \(d \) is this displacement vector \(\begin{bmatrix}u\\v\end{bmatrix} \) and \(b \) is just this \(25\times 1 \) vector that is essentially the negative of all the temporal derivatives. So we have \(25 \) equations and \(2 \) unknowns.
The question is how do we solve a system when we have more equations than unknowns? Well, we will apply the least-squares approach and in that way, we will minimize the squared difference.
$$ Ad= b\rightarrow \min_{d}\left \| Ad-b \right \|^{2} $$
$$ \left ( A^{T}A \right )d= A^{T}b $$
The way we do that is the standard pseudo-inverse (normal equations) method: we multiply both sides by \(A^{T} \) and end up with this equation:
$$ \begin{bmatrix}\sum I_{x}I_{x} &\sum I_{x}I_{y} \\ \sum I_{x}I_{y}& \sum I_{y}I_{y}\end{bmatrix}\begin{bmatrix}u\\ v\end{bmatrix}= -\begin{bmatrix}\sum I_{x}I_{t}\\\sum I_{y}I_{t}\end{bmatrix} $$
These sums are calculated over the \(5 \times 5 \) kernel window. This technique was first proposed by Bruce D. Lucas and Takeo Kanade back in 1981.
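The normal-equations solve above can be sketched in a few lines of NumPy. The derivatives here are synthetic (generated so that brightness constancy holds exactly for a known motion), so this is an illustration of the algebra rather than a real image pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic derivatives for a 5x5 window (25 pixels), built so that the
# brightness constancy equation holds exactly for a known motion.
true_uv = np.array([1.5, -0.7])   # the motion we want to recover
A = rng.normal(size=(25, 2))      # rows are [Ix, Iy] at each pixel
It = -A @ true_uv                 # so that Ix*u + Iy*v + It = 0
b = -It

# Lucas-Kanade normal equations: (A^T A) d = A^T b
d = np.linalg.solve(A.T @ A, A.T @ b)
print(d)   # recovers [1.5, -0.7] up to numerical precision
```

The 2x2 matrix `A.T @ A` is exactly the matrix of summed products of derivatives from the equation above.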
Now, let’s first illustrate and visualize this solving method.
$$ I_{x}u+I_{y}v+I_{t}= 0 $$
$$ \bigtriangledown I\cdot \vec{u}+I_{t}= 0 $$
The idea is if we have a gradient in some direction (red arrows in the picture above) any \(\left ( u,v \right ) \) that’s along this blue line would be an acceptable solution.
Combining Local Constraints
If we have a particular gradient in the \(\left ( u,v \right ) \) space, that gives us a single line. In the image below we can have a green line for the gradient vector 1. Next, we can have another gradient that is present in the same kernel window (5×5). For instance, it can be a gradient 2 – blue line. Moreover, we can have another gradient 3 that we connect with the red line. So, a solution for (u, v) should lie on these lines and it will be at the point of their intersection. This is illustrated in the image below.
RGB Version
If we are actually working with \(RGB\) images, we would have \(75\) instead of \(25 \) equations. That is, we have \(5\times 5\times 3 \), where 3 is the number of channels.
$$ 0= I_{t}\left ( p_{i} \right )\left [ 0,1,2 \right ]+\bigtriangledown I\left ( p_{i} \right )\left [ 0,1,2 \right ]\cdot \left [ u,v \right ] $$
$$ \begin{bmatrix}I_{x}\left ( p_{1} \right )\left [ 0 \right ]& I_{y}\left ( p_{1} \right )\left [ 0 \right ]\\ I_{x}\left ( p_{1} \right )\left [ 1 \right ]&I_{y}\left ( p_{1} \right )\left [ 1 \right ] \\ I_{x}\left ( p_{1} \right )\left [ 2 \right ]&I_{y}\left ( p_{1} \right )\left [ 2 \right ] \\\vdots & \vdots \\ I_{x}\left ( p_{25} \right )\left [ 0 \right ] &I_{y}\left ( p_{25} \right )\left [ 0 \right ] \\I_{x}\left ( p_{25} \right )\left [ 1 \right ] & I_{y}\left ( p_{25} \right )\left [ 1 \right ]\\ I_{x}\left ( p_{25} \right )\left [ 2 \right ] & I_{y}\left ( p_{25} \right )\left [ 2 \right ]\end{bmatrix}\begin{bmatrix}u\\ v\end{bmatrix}= -\begin{bmatrix}I_{t}\left ( p_{1} \right )\left [ 0 \right ]\\I_{t}\left ( p_{1} \right )\left [ 1 \right ]\\I_{t}\left ( p_{1} \right )\left [ 2 \right ]\\\vdots \\I_{t}\left ( p_{25} \right )\left [ 0 \right ]\\I_{t}\left ( p_{25} \right )\left [ 1 \right ]\\I_{t}\left ( p_{25} \right )\left [ 2 \right ]\end{bmatrix} $$
With a single-pixel window we had one equation and two unknowns, but with \(RGB\) we would have three equations and two unknowns, which in principle is enough. In practice, however, the \(RGB\) channels are highly correlated, so the color planes do not provide genuinely independent constraints, and we can't rely on them alone to solve the problem.
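Mechanically, stacking the channels just makes the system taller. The sketch below uses synthetic derivatives that, unlike real RGB data, are statistically independent, which is exactly why this toy version works:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-channel derivatives for a 5x5 window: shape (25, 3),
# one column per RGB channel (illustrative values, not a real image).
Ix = rng.normal(size=(25, 3))
Iy = rng.normal(size=(25, 3))
true_uv = np.array([0.4, -1.1])
It = -(Ix * true_uv[0] + Iy * true_uv[1])   # consistent temporal derivatives

# Stack the three channels row-wise: a (75, 2) system instead of (25, 2)
A = np.column_stack([Ix.ravel(), Iy.ravel()])
b = -It.ravel()
d, *_ = np.linalg.lstsq(A, b, rcond=None)
print(A.shape)   # (75, 2)
```

Here `d` recovers the true motion; with real, correlated channels the extra 50 rows add far less independent information than the shapes suggest.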
5. Lucas Kanade in Python
Now, after a lengthy theoretical overview, it is a perfect time to start with the code. We will show how to use Lucas Kanade using OpenCV in Python.
First, we will import the necessary libraries.
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

# params for ShiTomasi corner detection
feature_params = dict( maxCorners = 100,
                       qualityLevel = 0.3,
                       minDistance = 7,
                       blockSize = 7 )

# Parameters for lucas kanade optical flow
lk_params = dict( winSize = (15,15),
                  maxLevel = 3,
                  criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
As the first step, we will need to detect feature points. We will detect them in the initial frame (prev), that is, in the grayscale image (prevgray, the first frame of the video converted to grayscale). To do this, we will use a function called cv2.goodFeaturesToTrack(). It is an implementation of the Shi-Tomasi corner detector (an extension of the Harris corner detector). A nice property of this function is that we can specify the maximal number of feature points that we want to detect and track. Here, we will detect a maximum of 100 points.
We can visualize them in the following image.
cap = cv2.VideoCapture('cars.mp4')

# Take first frame and find corners in it
ret, prev = cap.read()
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
out = cv2.VideoWriter('output.avi', fourcc, 29, (1280, 720))
prevgray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(prevgray, mask = None, **feature_params)

# Create a mask image for drawing purposes
mask = np.zeros_like(prev)
The next step is to take these points (p0), feed them to the Lucas-Kanade algorithm and track them (p1).
The Lucas-Kanade tracker is implemented as cv2.calcOpticalFlowPyrLK(). It calculates the optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids. It uses the following parameters:
Parameters for the function cv2.calcOpticalFlowPyrLK(prevImg, nextImg, prevPts, nextPts):
- prevImg – The first 8-bit input image.
- nextImg – Second input image of the same size and the same type as prevImg.
- prevPts – Vector of 2D points for which the flow needs to be found.
- winSize – The size of the search window at each pyramid level.
- maxLevel – 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on. If pyramids are passed as input, the algorithm will use as many levels as the pyramids have, but no more than maxLevel.
- criteria – parameter, specifying the termination criteria of the iterative search algorithm.
Return:
- nextPts – Output vector of 2D points containing the calculated new positions of input features in the second image.
- status – Output status vector. Each element of the vector is set to 1 if the flow for the corresponding features has been found. Otherwise, it is set to 0.
- err – Output vector of errors. Each element of the vector is set to the error for the corresponding feature. If the flow wasn’t found, the error is not defined (use the status output to find such cases).
This function accepts two grayscale images: prev and next. The set of detected points is stored in p0. Using indexing we can access the data points and check their shape.
# Note the shape of the p0 variable and how we can access the data points
print(p0.shape)
print(p0[0][0][0])
print(p0[0][0][1])
Output:
(100, 1, 2)
981.0
390.0
The goal of the function is to return p1: the new positions, in the next frame, of the feature points detected in the initial frame.
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # calculate optical flow
        p1, st, err = cv2.calcOpticalFlowPyrLK(prevgray, gray, p0, None, **lk_params)
        # Select good points
        good_new = p1[st==1]
        good_old = p0[st==1]
        # drawing
        for i, (new, old) in enumerate(zip(good_new, good_old)):
            a, b = new.ravel()
            c, d = old.ravel()
            # cast to int: cv2 drawing functions expect integer pixel coordinates
            mask = cv2.line(mask, (int(a), int(b)), (int(c), int(d)), (0,0,255), 2)
            frame = cv2.circle(frame, (int(a), int(b)), 5, (0,0,255), -1)
        img = cv2.add(frame, mask)
        #cv2_imshow(img)
        # Now update the previous frame and previous points
        prevgray = gray.copy()
        p0 = good_new.reshape(-1,1,2)
        out.write(img)
    else:
        break

out.release()
cap.release()
The image/video quality and the distinctiveness of the feature points will determine the accuracy of the tracking.
If we have a noisy video and feature points that are not distinctive (e.g. "not clear corners"), the LK algorithm can easily fail, so do not be surprised if that happens.
The final practical step of this experiment is to loop over the whole video and call the LK algorithm iteratively. In addition, the set of tracked points must be updated. This is done within the statement p0 = good_new.reshape(-1,1,2), where we additionally adjust the shape of this array.
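The reshape restores the (N, 1, 2) point layout that cv2.calcOpticalFlowPyrLK expects after the boolean indexing has flattened it. A quick shape check with dummy points:

```python
import numpy as np

# After boolean indexing with st == 1, good_new has shape (N, 2);
# cv2.calcOpticalFlowPyrLK expects points with shape (N, 1, 2).
good_new = np.array([[10.0, 20.0], [30.0, 40.0]])   # two dummy points
p0 = good_new.reshape(-1, 1, 2)
print(good_new.shape, p0.shape)   # (2, 2) (2, 1, 2)
```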
The other parts of the code are mainly used for visualization. You can figure out on your own how we managed to add the tracked points onto the final output video (hint: the dummy image mask).
At the end, we will visualize this process over the whole video sequence.
The connected points show the movement of the cars. On the other hand, there are other detected points that are static; the LK algorithm detects that there are no displacements for these points.
Summary
In this post, we explained how to apply the Lucas Kanade method to detect moving objects in a video. It is a commonly used differential method for optical flow estimation. This method assumes that the flow is a constant in a local neighbourhood of the pixel and solves the basic optical flow equations for all the pixels in that neighbourhood.
References:
[1] Introduction to Motion Estimation with Optical Flow by Chuan-en Lin | http://datahacker.rs/calculating-sparse-optical-flow-using-lucas-kanade-method/ | CC-MAIN-2021-39 | refinedweb | 3,432 | 62.48 |
A year ago, Harry Bagdi wrote an amazingly helpful blog post (link at bottom of article) on observability for microservices. And by comparing titles, it becomes obvious that my blog post draws inspiration from his work.
When he published it, our company, Kong, was doing an amazing job at one thing: API gateways. So naturally, the blog post only featured leveraging the Prometheus monitoring stack in conjunction with Kong Gateway. But to quote Bob Dylan, “the times they are a-changin [and sometimes an API gateway is just not enough]”. So, we released Kuma (which was donated to the Cloud Native Computing Foundation as a Sandbox project in June 2020), an open source service mesh to work in conjunction with Kong Gateway.
How does this change observability for the microservices in our Kubernetes cluster? Well, let me show you.
Prerequisites
The first thing to do is to set up Kuma and Kong. But why reinvent the wheel when my previous blog post already covered exactly how to do this? Follow the steps here to set up Kong and Kuma in a Kubernetes cluster.
Install Prometheus Monitoring Stack
Once the prerequisite cluster is set up, getting the Prometheus monitoring stack running is a breeze. Just run the following kumactl install command and it will deploy the stack. This is the same kumactl binary we used in the prerequisite step; if you do not have it set up, you can download it from Kuma’s installation page.
$ kumactl install metrics | kubectl apply -f -
namespace/kuma-metrics created
podsecuritypolicy.policy/grafana created
configmap/grafana created
configmap/prometheus-alertmanager created
configmap/provisioning-datasource created
configmap/provisioning-dashboards created
configmap/prometheus-server created
persistentvolumeclaim/prometheus-alertmanager created
persistentvolumeclaim/prometheus-server created
...
To check if everything has been deployed, look at the kuma-metrics namespace:
$ kubectl get pods -n kuma-metrics
NAME                                             READY   STATUS    RESTARTS   AGE
grafana-c987548d6-5l7h7                          1/1     Running   0          2m18s
prometheus-alertmanager-655d8568-frxhc           2/2     Running   0          2m18s
prometheus-kube-state-metrics-5c45f8b9df-h9qh9   1/1     Running   0          2m18s
prometheus-node-exporter-ngqvm                   1/1     Running   0          2m18s
prometheus-pushgateway-6c894bb86f-2gflz          1/1     Running   0          2m18s
prometheus-server-65895587f-kqzrf                3/3     Running   0          2m18s
Enable Metrics on Mesh
Once the pods are all up and running, we need to edit the Kuma mesh object to include the metrics: prometheus section you see below. It is not included by default, so you can edit the mesh object using kubectl like so:
$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    ca:
      builtin: {}
  metrics:
    prometheus: {}
EOF
Accessing Grafana Dashboards
We can visualize our metrics with Kuma’s prebuilt Grafana dashboards. And the best part is that Grafana was also installed alongside the Prometheus stack, so if you port-forward the Grafana server pod in the kuma-metrics namespace, you will see all your metrics:
$ kubectl port-forward grafana-c987548d6-5l7h7 -n kuma-metrics 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Next step is to visit the Grafana dashboard to query the metrics that Prometheus is scraping from Envoy sidecar proxies within the mesh. If you are prompted to log in, just use admin for both the username and password.
There will be three Kuma dashboards:
- Kuma Mesh: High level overview of the entire service mesh
- Kuma Dataplane: In-depth metrics on a particular Envoy dataplane
- Kuma Service to Service: Metrics on connection/traffic between two services
But we can do better…by stealing more ideas from Harry’s blog. In the remainder of this tutorial, I will explain how you can extend the Prometheus monitoring stack we just deployed to work in conjunction with Kong.
To start, while we are still on Grafana, let’s add the official Kong dashboard to our Grafana server. Visit this import page in Grafana to import a new dashboard:
On this page, enter the Kong Grafana dashboard ID 7424 into the top field. The page will automatically redirect you to the screenshot page below if you entered the ID correctly:
Here, you need to select the Prometheus data source. The drop down should only have one option named “Prometheus,” so be sure to just select that. Click the green “Import” button when you are done. But before we go explore that new dashboard we created, we need to set up the Prometheus plugin on the Kong API gateway.
Enabling Prometheus Plugins on Kong Ingress Controller
We need the Prometheus plugin to expose metrics related to Kong and proxied upstream services in Prometheus exposition format. But you may ask, “wait, didn’t we just set up Prometheus by enabling the metrics option on the entire Kuma mesh? And if Kong sits within this mesh, why do we need an additional Prometheus plugin?” I know it may seem redundant, but let me explain. When enabling the metrics option on the mesh, Prometheus only has access to metrics exposed by the data planes (Envoy sidecar proxies) that sit alongside the services in the mesh, not from the actual services. So, Kong Gateway has a lot more metrics available that we can gain insight into if we can reuse the same Prometheus server.
Doing so is quite simple:
cat <<EOF | kubectl apply -f -
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  labels:
    global: "true"
  name: prometheus
plugin: prometheus
EOF
Export the PROXY_IP once again since we’ll be using it to generate some consistent traffic.
export PROXY_IP=$(minikube service -p kuma-demo -n kuma-demo kong-proxy --url | head -1)
This is the same PROXY_IP step we used in the prerequisite blog post. If nothing shows up when you run echo $PROXY_IP, revisit the prerequisite and make sure Kong is set up correctly within your mesh. If you can access the application via the PROXY_IP, run this loop to throw traffic at our mesh:
while true; do
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items?q=dress"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items/8/reviews"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/dead_endpoint"
  sleep 0.01
done
“Show Me the Metrics!”
Go back to the Kong Grafana dashboard and watch those sweet metrics trickle in:
You now have Kuma and Kong metrics using one Prometheus monitoring stack. That’s all for this blog. Thanks, Harry, for the idea! And thank you for following along. Let me know what you would like to see next by tweeting at me at @devadvocado or emailing me at [email protected].
Previously published at
Harry Bagdi’s article:
read original article here | https://coinerblog.com/kuma-and-prometheus-for-observability-in-kubernetes-microservices-clusters-001y3uza/ | CC-MAIN-2020-40 | refinedweb | 1,121 | 59.84 |
12 August 2010 20:50 [Source: ICIS news]
(updates with Canadian, Mexican and overall North American shipment data)
Canadian chemical railcar loadings for the week ended on 7 August were 13,916, compared with 10,945 carloads in the same week last year, according to data released by the Association of American Railroads (AAR).
The increase for the week came after a 16.6% year-over-year increase in Canadian chemical carloads in the previous week ended 31 July.
The weekly chemical railcar loadings data are seen as an important real-time measure of chemical industry activity and demand. In
For the year-to-date period to 7 August, Canadian chemical railcar shipments were up 25.0% to 443,009, from 354,334 in the same period in 2009.
The association said that chemical railcar traffic in
For the year-to-date period, Mexican shipments were up 2.1%, to 35,170, from 34,450 in the same period last year.
The AAR reported earlier on Thursday that
Year-to-date to 7 August,
Overall chemical railcar shipments for all of North America - US, Canada and Mexico - rose 10.1% for the week ended 7 August, to 43,646 from 39,656.
For the year-to-date period to 7 August, overall North American chemical railcar traffic was up 15.1% to 1,371,129, from 1,191,016 in the year-earlier period.
Overall, the
From the same week last year, total US weekly railcar traffic for the 19 carload commodity groups tracked by the AAR rose 3.5%, to 284,507 from 274,800, and was up 7.2% to 8,745,778 year-to-date to 7 August. | http://www.icis.com/Articles/2010/08/12/9384741/canadian-weekly-chemical-railcar-traffic-rises-27.1.html | CC-MAIN-2013-48 | refinedweb | 283 | 55.95 |
running C# program
ncs
June 19th, 2002, 08:36 PM
From what I have heard about this language, you must have C# loaded on a machine to run a program written in it (i.e. it is like Java). Is this true, or does it generate an executable that can be run on a machine without C#?
Meghansh
June 19th, 2002, 09:09 PM
Yeah! It is like Java. Any program in any .NET language starts by loading the CLR, so the runtime (including core libraries such as mscorlib.dll) must be present on your system. In addition, all the DLLs whose namespaces you use must be available in the assembly folder under WINNT.
So you have to download the runtime from the MS site.
jparsons
June 21st, 2002, 08:58 AM
Originally posted by ncs
From what I have heard about this language, you must have C# loaded on a machine to run a program written in it (i.e. it is like Java). Is this true, or does it generate an executable that can be run on a machine without C#?
The machine must be loaded with the CLR ( common language runtime). This is the equivalent of the JVM. Unlike the JVM though, the CLR can run any managed application (VB, C# etc).
You can use tools like ngen to precompile your IL code into native code for the architecture of your choice, but I believe you still need a lot of the DLLs that come with the CLR to run your applications.
Arild Fines
June 21st, 2002, 12:58 PM
ngen.exe will only affect startup times. You will still need the whole CLR to run the assembly. You will also need the original assemblies present on disk, since the CLR requires these for metadata. An ngen'ed image does not contain any metadata by itself.
dky1e
June 21st, 2002, 12:59 PM
Is there a way to package the program into an installation file that will take care of the nuisance?
How can I make an installation file, like with vb5?
sternaphile
June 21st, 2002, 01:14 PM
I haven't used it myself yet, but try
File->New->Project->Setup and Deployment Projects for a start...
dky1e
June 21st, 2002, 04:20 PM
err...
Where do I start?
I want to create a setup app for my (c#)solution.
JimH
June 24th, 2002, 01:48 PM
dky1e:
sternaphile wrote:
I haven't used it myself yet, but try File->New->Project->Setup and Deployment Projects for a start...
... which I believe presumed that you are using the C# .NET IDE, which would open the New Project wizard for you.
You could then select 'Visual C# Projects' from the Project Types frame, and either 'Windows Application' or 'ASP .NET Web Application' from the Templates frame.
Hope this helps,
JimH
codeguru.com | http://forums.codeguru.com/archive/index.php/t-195256.html | crawl-003 | refinedweb | 487 | 82.24 |
Hello guys, I plan to finish the course before classes start, and I don't know the ABC of programming. If anyone wants to accompany me, please say hello. I would prefer someone with some knowledge of programming to help me. Thanks in advance.. cheers :)
Mine is CS201... but I don't know much, since I did B.Com before, so programming is new for me... once classes start, more members will join..... :)
Yes, if you required any type of
Assalam o Alaikum,
Yes, I want to be a programming study partner.
My study program is BS(CS), 2nd semester.
This is my email id,
Mohsin345@aol.com
Good step. I also have CS201 this time, and I'm not very good at it either. Please discuss it here too; it will be helpful for everyone.
Kindly tell me which 2 errors are in this:
// Definition of the circleArea function.
double circleArea ( double radius )
{
// the value of Pi = 3.1415926
return ( 3.1415926 * radius * radius ) ;
}
// This program calculates the area of a ring
#include <iostream.h>
// function declaration.
double circleArea ( double);
main ( )
{
double rad1 ;
double rad2 ;
double ringArea ;
cout "Please enter the outer radius value: ";
cin >> rad1 ;
cout " Please enter the radius of the inner circle: " ;
cin >> rad2 ;
ringArea = circleArea( rad1 ) – circleArea(rad2 ) ;
cout " Area of the ring having inner raduis " rad2 " and the outer radius " rad1 " is " ringArea ;
cout"\n\n\t";
system("pause");
}
If after doing this you still get the same error, then change <iostream.h> to <iostream>; after that line, press enter and add "using namespace std;"
Add the << sign with cout, like this: cout << ""; 
Zulfiqar Ahmad Zakki and Marchal (3rd), thanks for the explanation.
cs201, cs601, phy101, eng201, cs304, phy301, cs401, cs610, cs504, cs403, mth101, mth301, cs302, cs301 - I can give all these handouts at half price. I am from Lahore.
please contact me 0300.8169924 I need it....
Hi, please add me at skype: baazi81 for discussion regarding CS201
Another question, sorry >_< You seem to know a little about claims 😛
I'm having an issue with enumerating on claims to find if it exists. This is both when using 'ClaimsPrincipal.HasClaim' and manually doing it with a .Any() LINQ expression.
Whenever it's false, it'll hang at what should be the end of the enumeration for ~10 seconds and then throw an 'HttpException'. I'm not trying to access SQL Server during this time (and it works fine when I actually want to access it).
(see below)
private bool IsSchoolUser(ClaimsPrincipal principal)
{
if (principal.HasClaim(ClaimTypes.Role, "SchoolUser"))
{
return true;
}
else
{
return false;
}
}
Just wondering if you'd ever come across anything similar?
Hmm, you may try the following: open web.config and locate the system.webServer node. If you don't have a modules section within system.webServer then add it. Within modules add the following: <remove name="RoleManager" />
Does it make any difference?
(Oh, by that I mean to say: it worked!)
i was having the same issue. Thank you very much for this detailed series on claims based authentication. Much much appreciated.
Oh my god.
You have no idea how much I love you.
I am in tears I am so happy. Just… just- GAHHHH
I’ve been stuck on this for days and I have a pitch coming up damn soon.
Thank you so much Andras!
Thank you so much for this tutorial.
11 out of 10.
You are legend.
Thanks a lot for your comment. If you’re looking for a true legend within the topic of claims and security then check out Dominick Baier on
The last 3 days, I’ve been going through Dominick’s content on this subject along with pluralsight videos he did. Although he’s extraordinary expert in this area, I could only wrap my head around portions of it. Reading through your articles in this series really provided the glue for those portions. Very well written and now I’m off and coding!!!! Thank you very much and I bow down to you, Andras. 😉
I noticed in your CheckAccess method you are retrieving the first resource and action and then performing your check process based on the first values. You also demonstrate where we can pass in multiple resources to ClaimsAuthorizeAttribute eg: [ClaimsAuthorize(“Show”, “Code”, “TvProgram”, “Fireworks”)]. I’m trying to understand when you would want to pass in multiple resources. I’m working from an ASP.Net MVC perspective where the Action in the controller represents the resource and I suppose the HTTP verb would represent the action. Perhaps I’m letting ASP.Net MVC cloud my understanding, but I’m not seeing a reason or a scenario where you’d want to pass in multiple resources for a given action. Can you provide an example please?
Hi Tom,
Imagine that you have a domain with 3 objects: Code, TvProgram and Fireworks. You may as well have separate views for each of these resources, e.g. Index/TvProgram. Then you could have an admin page where you show all three in the same view using a joint viewmodel. You could take the old approach of checking the Role of the IPrincipal to make sure it’s an administrator. Instead, you declare that the person must have access to the three resources in a granular way – as opposed to creating some artificial “Everything”, or “CodeTvProgramAndFireworks” objects.
//Andras
Pingback: Claims Authorization | ynfiesta
Hi Tom,
your article is very good. I am willing to implement claim based authorization and i understand the benefits over the Role method. However, how can i limit access to some fields in the view returned by the Action without creating different views? For example, let’s assume i have a “Student” resource and he can only update some fields in the view where a “Professor” can update all fields. How do i implement such a logic with claim based authorisation?
Thank you in advance.
Hello Khashman,
You can build different versions of the viewmodel injected into the view based on the user’s claims.
//Andras, not Tom 🙂
Hi Andras,
I’m wondering what are the differences/benefits between
[ClaimsAuthorize(“Show”, “Everything”)]
and
[ClaimsPrincipalPermission(SecurityAction.Demand, Operation = “Read”, Resource = “Testdata”)]
Hallo Thomas,
//Andras
I’ve already read all your posts and know it’s coming from an additional package. But I still don’t get the benefits of ClaimsAuthorizeAttribute vs. ClaimsPrincipalPermissionAttribute. To me it seems they both do the same thing, apart from a difference in the arguments passed?!
Yes, they fulfil the same goal, but I thought I described the advantages of the MVC-style ClaimsAuthorize attribute in that post, maybe I wasn’t clear. I wrote the following about the ClaimsPrincipalPermission attribute:
“So, this attribute solves some of the problems with the PrincipalPermission. However, it does not solve all of them. It still gets in the way of unit testing and it still throws a SecurityException.”
And then write this about ClaimsAuthorize:
.”
So functionally they do the same thing but ClaimsAuthorize still wins overall. As far as I know the PrincipalPermission style of ensuring authentication is quite outdated precisely due to its shortcomings.
Gruss,
Andras
By considering your example, I’m stuck in a CheckAccess method, where action variable has a value of my property name “About”, and not the attribute string “Show”. How can it be rectified?
Hello, not sure I follow. Why do you have “About” as the action variable? What would you like to achieve?
Hi Andras, a quick question here. When you return false from CheckAccess method you end up with “HTTP Error 401.0 – Unauthorized” while the application should redirect you to back to the STS login page, shouldn’t it. How can it be achieved?
Hi Bartosz,
It depends on what you want your users to see. I as a visitor to a site would probably be confused if I were constantly redirected to the login page if my login fails for whatever reason. I think showing a clear messages such as “Unauthorized” helps me more.
//Andras
Where should I go from here then? Found this post: but it seems rather complicated. Is there any simpler solution to redirect the user to a custom view?
why not redirect the user to a generic error page using web.config? You can use the httpErrors section to achieve that.
You can have an entry for a 401 response as well.
//Andras
Hi Andras. Thanks for your answer; I'm not sure about it, however. As I have read, a 401 error is handled only by IIS and doesn't reach ASP.NET, and therefore cannot be resolved using web.config settings?
Yep, I’ve tested the “” solution and didn’t succeed. Have seen many approaches in how to solve this problem (like this one:), but all of the refer to Authorize attribute and do not take Claims authorization into account
Have you tried deriving from ClaimsAuthorizeAttribute? You can override a number of methods, including HandleUnauthorizedAccess which sounds promising.
I could probably set context.Principal.Identity.IsAuthenticated to false, but not sure if this is a right way of doing it?
Hi Andras ,
Can you explain how you define the ClaimsAuthorize attribute and how you access the CheckAccess method of the ClaimsAuthorizationManager?
Hi Sunil,
ClaimsAuthorize is the claims-enabled equivalent of the standard MVC Authorize attribute. You can register your ClaimsAuthorizationManager in web.config as explained in the post. Your implementation of th CheckAccess method will be invoked automatically through the attribute.
Is anything unclear from the explanations in the post?
//Andras
Dude. You are good. I wonder if you have written some books. If you didn't, you reaaaaaaaaaaaaaally should, and write them the same way you blog. No extra theory, right to the point.
Thank you so much for your effort. Keep up the beautiful work
Thanks for your comment. I’ve been considering writing a book but I simply have no time for that now.
//Andras
How do you store the action-resource claims? Should they be hardcoded in the CheckAccess method? I thought it was the right thing to do – to add claims like `action-resources` to the claims identity at authentication. I've found a similar question:
The OP asks the same thing. But I don't get the answer. Should I then hardcode all variations of if-action-resources-are-like-that-and-principal-claims-are-like-that-then?
Hi Aleksey,
I think Dominick Baier means exactly what is shown in this post, i.e. use the CheckAccess method of the auth manager to check whether a user is allowed to carry out an action on a resource. I don’t think the action/resource pairs have anything to do in the claims list of a user. The action/resource pairs don’t describe a user like e.g. a Name claim does.
You can hardcode the action/resource pairs like in the example but you can turn to more sophisticated OOP ways to encapsulate these magic strings.
//Andras
How does that help when you have a lot of resources and actions? I agree that permission claims don't relate to a user identity directly. Is there any other less sophisticated way of implementing such a thing?
The least sophisticated way of implementing that would be a kilometer-long if/else block that will quickly become unsustainable.
Instead you can encapsulate your resources and actions in proper objects and pair them up using the decorator pattern. E.g. have a “View” action object and a “Code” resource and the View can decorate the Code object. Let the decorators encapsulate the logic of what’s allowed based on the claims list. You’ll still have to create a lot of new code but you’ll get rid of the ugly if/else block.
//Andras
Thank you for your answer! I understand this approach. But I have decided to do this another way.
Yesterday I was reading a book about Web API security. When I was reading about claims I stumbled upon this passage: "claims are used to make a claim about the identity. For example Jon's email is… Jon is 25… Jon can delete users". (`Badri can delete users.` originally).
I’ve decided to follow this scenario:
– the database model stays as it is;
– at login I can add claims of type "" with value like: "view:blogs";
– to stay with the semantics I can create a custom authorization filter like [Can("view", "blogs")]
I just want to try it like that:)
God bless you!
Hi,
I am very new to Owin. I have created an MVC application with authentication type WsFederation.
app.UseWsFederationAuthentication(
new WsFederationAuthenticationOptions
{
MetadataAddress = "",
Wtrealm = ""
});
In the Home controller I just decorated an action result as below
[ClaimsAuthorize("IsAuthorized", "CanView")]
//[HttpPost]
public ActionResult About()
{
ViewBag.Message = "Your application description page.";
return View();
}
I am overriding the CheckAccess method
public class AreWeAllowedToDoItManager : ClaimsAuthorizationManager
{
public override bool CheckAccess(AuthorizationContext context)
{
var resource = context.Resource.First().Value;
switch (resource)
{
case "CanView":
{
if (PrincipalCanPerformActionOnResource(context))
return true;
break;
}
default:
{
throw new NotSupportedException(string.Format("{0} is not a valid resource", resource));
}
}
return false;
}
}
What is happening is that if the user is not authorized, it just comes back to the CheckAccess method and does not throw a Not Authorized error.
Please help me
Hello,
I’m not an OWIN guru myself either so I’m not really sure where the problem lies. I have a series based on my tests with OWIN here but I’ve never gone any deeper than that.
//Andras
How can I use the ClaimsAuthorize to restrict users based on a route parameter?
Suppose an action like this:
”’
[ClaimsAuthorize("Create", "Course")]
public ActionResult Create(int departmentId, Course course)
{
//Logic to create a course for current department
}
”’
The user has a “Create” claim for the resource “Course”. But now I want to restrict the user to only create courses for some departments.
And on top of that, someone could have just view access in one department, additional create access in another, and update access in a third department… How do you advise handling such cases?
Hello,
I think you’ll need to dig deeper in the list of claims of the user. I don’t see any way to declare that in the claims authorize attribute, if that’s what you meant. There could be a special claim type listing the department IDs the user has access to. If they try to create a course for an ID that’s not in the claims list then access is rejected.
You’ll need to create a separate View controller action with “View” and “Course” as resources. Then check the user’s claims list whether they have view access to the requested department.
//Andras
Thanks for your answer.
That's what I thought. Only I think that's not really a scalable option. Suppose a user has the following rights:
for dept #1: Read only
for dept #2: Read & update
for dept #3: Read, create & update
If I make a claim Departments with value [1,2,3] it's not sufficient. I could adjust the claims to Course_1 – Course_3, but then I have to make claims from Course_1:Read to Course_3:Update. This already gives me 6 claims for one resource with 3 departments. In the system I'm building I already have 10 "departments" and 6 resources. If this extrapolates I already need 120 claims and I've barely started the real app.
I’ll probably change my solution to use something less granular.
I hope I have this wrong and that you can correct me, but it seems that resource/operation doesn’t solve much.
My Hope: A user might have two resources – address, and salary. They can view and edit address, but they can only view salary. I’d hoped there would be a constructor for a Claim that would take Claim(string resource, string operation). Then I could add these claims:
new Claim("Address", "View"), new Claim("Address", "Edit"), new Claim("Salary", "View")
Then if I did [ClaimsPrincipalPermission(SecurityAction.Demand, Operation = "Edit", Resource = "Salary")] it would not permit access, because there was no Salary/Edit claim.
Reality?: But it seems that a user ends up with Resources(Address, Salary) and Operations(Read, Edit), meaning that if I did [ClaimsPrincipalPermission(SecurityAction.Demand, Operation = "Edit", Resource = "Salary")] it would return true? As long as it finds Edit in the list and Salary, they're good to go?
I really hope I’m wrong and you can point me to a blog that sends me in the right direction.
I thank you for all your posts, as the others do. Very, very informative.
Hi, thanks for the series of tutorials – it's very clear and to the point.
I have implemented the solution, but at the time of debugging I am getting redirected to a page showing the source code of "ClaimsAuthorization.cs". I have also read the comments but did not find a way to redirect to some page or open a pop-up.
And I also want to use the method style to bypass some function or code snippet, but I'm unable to find the way – like
[ClaimsAuthorize("CanCreateAdmins", "TenantAdmin")]
public string CreateClient()
{
return "Client Created";
}
Thanks
That Thinktecture.IdentityModel is like 6 years old and says it supports MVC4 and Web API. It has not been updated since 2014. Is there anything for an MVC5 application that is more current? | https://dotnetcodr.com/2013/03/04/claims-based-authentication-in-mvc4-with-net4-5-c-part-3-claims-based-authorisation/?replytocom=142312 | CC-MAIN-2021-43 | refinedweb | 2,562 | 65.62 |
Can anyone tell me what's wrong with this and if I'm on the right track? I need to make a program that displays a piece of advice to the user when they run the program. Then it allows them to add some advice to the file, and then it ends. The next time it runs, it displays the old advice and then allows the next person to add more advice. Get it? Here's what I have so far. Please give me some pseudocode to get me going.
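A minimal sketch of the display-then-append flow being described (file name and prompts assumed):

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Read the whole advice file into one string ("" if it doesn't exist yet).
std::string readAdvice(const std::string& fileName) {
    std::ifstream in(fileName.c_str());
    std::string all, line;
    while (std::getline(in, line))
        all += line + '\n';
    return all;
}

// Append one new piece of advice; returns false if the file can't be opened.
bool appendAdvice(const std::string& fileName, const std::string& advice) {
    std::ofstream out(fileName.c_str(), std::ios::app);  // append, don't overwrite
    if (!out)
        return false;
    out << advice << '\n';
    return true;
}

// Typical flow: show stored advice, then collect and store one new line.
void runAdviceProgram(const std::string& fileName) {
    std::cout << "Advice from previous users:\n" << readAdvice(fileName);
    std::cout << "Enter your advice: ";
    std::string advice;
    std::getline(std::cin, advice);
    if (!appendAdvice(fileName, advice))
        std::cerr << "Error opening file\n";
}
```

The key point is opening the same file twice: once with std::ifstream to show what earlier users wrote, then with std::ofstream in std::ios::app mode so the new advice is added to the end instead of overwriting the file.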
NOTE this is not homework.
Code:
// This program will allow the user to add advice to a file
// already made by the programmer
#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;
void editfile (ifstream& in_stream, ofstream& out_stream);
int main()
{
    ifstream advFileIn;
    ofstream advFileOut;
    cout << "Welcome to advice master. " << endl;
    advFileIn.open("advice.txt");
    if (advFileIn.fail())
    {
        cout << "Error opening file" << endl;
        cout << "Program is terminating" << endl;
        exit(1);
    }
    advFileOut.open("outadvice.txt");
    if (advFileOut.fail())
    {
        cout << "Error opening file" << endl;
        cout << "Program is terminating" << endl;
        exit(1);
    }
    editfile(advFileIn, advFileOut);
    return 0;
}
void editfile(ifstream& in_stream, ofstream& out_stream)
{}
| https://cboard.cprogramming.com/cplusplus-programming/41873-help-streams.html | CC-MAIN-2017-43 | refinedweb | 191 | 66.44 |
In MarkLogic Server, you have both the XQuery and XSLT languages available. You can use one or both of these languages as needed. This chapter briefly describes some of the XSLT language features and describes how to run XSLT in MarkLogic Server, and includes the following sections:
MarkLogic Server implements the W3C XSLT 2.0 recommendation. XSLT 2.0 includes compatibility mode for 1.0 stylesheets. XSLT is a programming language designed to make it easy to transform XML.
For details about the XSLT 2.0 recommendation, see the W3C website:
An XSLT stylesheet is an XML document. Each element is an instruction in the XSLT language. For a summary of the syntax of the various elements in an XSLT stylesheet, see.
To run an XSLT stylesheet in MarkLogic Server, you run one of the following functions from an XQuery context:
The xdmp:xslt-invoke function invokes an XSLT stylesheet from the App Server root, and the xdmp:xslt-eval function takes a stylesheet as an element and evaluates it as an XSLT stylesheet. As part of running a stylesheet, you pass the stylesheet a node to operate on. For details on xdmp:xslt-invoke and xdmp:xslt-eval, see the MarkLogic XQuery and XSLT Function Reference.
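For example, a stylesheet can be applied to a database document like this (both paths are made-up examples):

```xquery
xquery version "1.0-ml";
(: Apply a stylesheet stored under the App Server root to a document
   in the database; both paths here are hypothetical. :)
xdmp:xslt-invoke("/stylesheets/my-transform.xsl", fn:doc("/docs/input.xml"))
```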
Besides the ability to invoke and evaluate XSLT stylesheets from an XQuery context (as described in Invoking and Evaluating XSLT Stylesheets), there are several extensions to XSLT available in MarkLogic Server. This section describes those extensions and includes the following parts:
You can call any of the MarkLogic Server Built-In XQuery functions from an XSLT stylesheet.
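For instance, once the xdmp prefix is bound to its namespace, a template can call a built-in directly; a minimal sketch:

```xml
<xsl:template match="/">
  <!-- xdmp:node-uri() is a MarkLogic built-in returning the document URI. -->
  <uri><xsl:value-of select="xdmp:node-uri(.)"/></uri>
</xsl:template>
```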
In addition to using <xsl:import> to import other XSLT stylesheets into your stylesheet, you can use the <xdmp:import-module> instruction to import an XQuery library module into an XSLT stylesheet. Once you have imported the module, any functions defined in the module are available to that stylesheet. When using the <xdmp:import-module> instruction, you must specify xdmp as a value of the extension-element-prefixes attribute on the <xsl:stylesheet> instruction, and you also must bind the xdmp prefix to its namespace in the stylesheet XML.

The following is an example of an <xdmp:import-module> instruction:
xquery version "1.0-ml";
xdmp:xslt-eval(
  <xsl:stylesheet xmlns:
    <xdmp:import-module
    <xsl:template
      <xsl:copy-of
    </xsl:template>
  </xsl:stylesheet>
  , document{ <doc/> })
Similarly, you can import an XSLT stylesheet into an XQuery library, as described in Importing XQuery Function Libraries to a Stylesheet.
You can use the <xdmp:try> instruction to create a try/catch expression in XSLT. When using the <xdmp:try> instruction, you must specify xdmp as a value of the extension-element-prefixes attribute on the <xsl:stylesheet> instruction, and you also must bind the xdmp prefix to its namespace in the stylesheet XML.

The following is an example of a try/catch in XSLT. This example returns the error XML, which is bound to the variable named e in the name attribute of the <xdmp:catch> instruction.
xquery version "1.0-ml";
xdmp:xslt-eval(
  <xsl:stylesheet xmlns:
    <xsl:template
      <xdmp:try>
        <xsl:value-of
        <xdmp:catch
          <xsl:copy-of
        </xdmp:catch>
      </xdmp:try>
    </xsl:template>
  </xsl:stylesheet>
  , document{<doc>hello</doc>})
MarkLogic Server includes many of the EXSLT extensions (). The extensions include the exslt:node-set and exslt:object-type functions and the exsl:document instruction. For details about the functions, see the MarkLogic XQuery and XSLT Function Reference and the EXSLT web site.

The following is an example of the exsl:document instruction. Note that this is essentially the same as the xsl:result-document instruction, which is part of XSLT 2.0.
xquery version "1.0-ml";
(: Assumes this is run from a file called c:/mypath/exsl.xqy :)
xdmp:set-response-content-type("text/html"),
let $nodes := xdmp:xslt-eval(
  <xsl:stylesheet xmlns: >
  </xsl:stylesheet>,
  document{element p { "hello" }})
for $node at $i in $nodes
return
  if ( fn:document-uri($node) )
  then xdmp:save(
    fn:resolve-uri(fn:document-uri($node), "C://mypath/exsl.xqy"),
    $node)
  else ($node)
The above query will save the two documents created with exsl:document to the App Server root on the filesystem, making them available to the output document with the frameset. For more details about the exsl:document instruction, see the EXSLT web site.
You can add the attribute xdmp:dialect to any element in a stylesheet to control the dialect in which expressions are evaluated, with a value of any valid dialect (for example, "1.0-ml" or "1.0"). If no xdmp:dialect attribute is present, the default value is "1.0", which is standards-compliant XSLT 2.0 XPath.

If you are using code shared with other stylesheets (especially stylesheets that might be used with other XSLT processors), use care when setting the dialect to 1.0-ml, as it might have subtle differences in the way expressions are evaluated.
For details about dialects, see Overview of the XQuery Dialects.
XSLT includes the <xsl:import> instruction, which is used to import other stylesheets into a stylesheet. The MarkLogic implementation of the <xsl:import> instruction is conformant to the specification, but the <xsl:import> instruction can be complicated. For details on the <xsl:import> instruction, see the XSLT specification or your favorite XSLT programming book.
Some of the important points to note about the <xsl:import> instruction are as follows:

- URIs specified in the href attribute are resolved in the context of the current MarkLogic Server database URIs. Relative paths are resolved relative to the current module in the App Server root. For details, see XQuery Library Modules and Main Modules in the Application Developer's Guide.
- The <xsl:import> instruction follows the rules of precedence for XSLT imports. In general, that means that a stylesheet that imports has precedence over an imported stylesheet.
- To import an XQuery library module rather than a stylesheet, use the <xdmp:import-module> extension instruction, as described in Importing XQuery Function Libraries to a Stylesheet.
As described in Invoking and Evaluating XSLT Stylesheets, you invoke a stylesheet from an XQuery program. To set up an HTTP App Server to invoke a stylesheet by directly calling it from the App Server, you can set up a URL rewriter. For general information on using a URL rewriter, see Creating an Interpretive XQuery Rewriter to Support REST Web Services in the Application Developer's Guide.
This section describes the sample URL rewriter for XSLT stylesheets and includes the following parts:
The sample XSLT rewriter consists of two files, both installed in the <marklogic-dir>/Samples/xslt directory:
xslt-invoker.xqy
xslt-rewrite-handler.xqy
Once you set up the rewriter as described in the next section, URLs to the App Server of the form:
/filename.xsl?doc=/url-of-context-node.xml
will invoke the filename.xsl stylesheet and pass it the context node at the URI specified in the doc request field.
It will also accept URLs of the form:
/styled/url-of-context-node.xml?stylesheet=/stylesheet.xsl
which will invoke the stylesheet at the path specified in the stylesheet request field, passing in the context node identified by the path after /styled (/url-of-context-node.xml in the above sample).
The following table describes what the request fields you pass translate to when you are using the sample XSLT rewriter.
You can use the sample rewriter as-is or you can modify it to suit your needs. For example, if it makes sense for your stylesheets, you can modify it to always pass a certain node as the context node.
To use the sample XSLT rewriter, perform the following steps:
1. Copy the xslt-invoker.xqy and xslt-rewrite-handler.xqy modules from the <marklogic-dir>/Samples/xslt directory to your App Server root. The files must be at the top of the root of the App Server, not a subdirectory of the root. For example, if your root is set to /space/my-app-server, the new files should be copied to /space/my-app-server/xslt-invoker.xqy and /space/my-app-server/xslt-rewrite-handler.xqy. If your root is in a modules database, then you must load the two files as text documents (with any needed permissions) with URIs that begin with the App Server root.
2. In the App Server configuration, find the url rewriter field (it is towards the bottom of the page).
3. Enter the path of xslt-rewrite-handler.xqy (relative to the App Server root) in the url rewriter field.

Requests against the App Server will now be automatically rewritten to directly invoke stylesheets as described in the previous section.
Both XQuery and XSLT are Turing Complete programming languages; that is, in theory, you can use either language to compute whatever you need to compute. XQuery and XSLT share the same data model and share XPath 2.0, so there are a lot of commonalities between the two languages.
On some level, choosing which language to perform a specific task is one of style. Different programmers have different styles, and so there is no 'correct' answer to what you should do in XQuery and what you should do in XSLT.
In practice, however, XSLT is very convenient for performing XML transformation. You can do these transformations in XQuery too, and you can do them well in XQuery, but some programmers find it more natural to write a transformation in XSLT. | http://docs.marklogic.com/guide/xquery/xslt | CC-MAIN-2017-47 | refinedweb | 1,516 | 62.27 |
[EMAIL PROTECTED] My ISP doesn't carry the gimp group and I tried posting via mixmaster but nothing gets on. Is the ng moderated and dumping anon posts?
Could anyone please help me get a perl script invoked from HTML via the local server to run Gimp? I have one conventional perl script that invokes other progs. I was hoping to also launch The Gimp from within this script and then have it return control to the calling script to continue with getting the images into an html page. The only thing I've been able to do is have a separate standalone script launch The Gimp, do some graphics and save them to disk before quitting. Then I have to restart the original script to carry on with loading the saved images. Also, this standalone script must run from a bash shell or from a directly commanded bash script. It refuses to run either way from any script via locally served html. Is there something Gimp-specific that Apache (1.3.2) must have installed? I get server errors but not much in the logs which might suggest scripting syntax, but Gimp has proven finicky enough to undermine this hunch.

I've tried commands & syntax like...

system('/usr/..path/gimpit.pl');
  protocol error (1) at /usr/lib/perl5/site_perl/5.6.1/i586-linux/Gimp/Net.pm line 66.

system('/usr/bin/gimp');
  Gtk-WARNING **: cannot open display:

system('/usr/bin/gimp-remote -n /usr/...path/test.png');
  Gtk-WARNING **: cannot open display:
  ... the above 2 are less valuable as I DON'T want display.

eval { require("/usr/..path/gimpit.pl"); };
  ...nothing.

print"<a href="/usr/..path/gimpit.pl">gimpit.pl</a>";
  protocol error (1) at /usr/lib/perl5/site_perl/5.6.1/i586-linux/Gimp/Net.pm line 66.

A bash script, otherwise able to launch gimpit.pl, invoked from perl/html also results in
  protocol error (1) at /usr/lib/perl5/site_perl/5.6.1/i586-linux/Gimp/Net.pm line 66.
which is here... (line #'s added for clarity)

63 # this is hardcoded into gimp_call_procedure!
64 sub response {
65   my($len,$req);
66   read($server_fh,$len,4) == 4 or die "protocol error (1)";
67   $len=unpack("N",$len);
68   read($server_fh,$req,$len) == $len or die "protocol error (2)";
69   net2args(0,$req);
70 }
& a long long way over my head! Would appreciate any pointers. Sorry if this is not exactly GIMP stuff, but the perl groups will say it's not perl, and www will say it's apache... etc ;-) and who would know better than Gimp users? TIA [EMAIL PROTECTED]

The original (including needs) script structure
--------------------------------------------------------------
#!/usr/local/bin/perl
print"Content-type:text/html\n\n";
print"<html><body>";
&dothings;
#nocando
#&dosomeGimpPerChance;
&dosomeotherthings;
print"</body></html>";
exit;
sub dothings {
  blah... blah
}
#sub dosomeGimpPerChance {
#  blah... blah
#}
sub dosomeotherthings {
  blah... blah
}

The standalone script structure
--------------------------------------------------------------
#!/usr/local/bin/perl
use Gimp qw (:auto);
use Gimp::Fu;
use Gimp::Util;
#Gimp::set_trace(TRACE_ALL);
register "", "", "", "", "", "", "<None>", "*", "", \&doit;
exit main();
sub doit{
  blah.. NO show, only load, change & save.
  $final=gimp_image_flatten($image);
  file_png_save($image, $final, $targp, $targp, 0,9,0,0,0,0,0);
  return ();
}
--------------------------------------------------------------
_______________________________________________
Gimp-user mailing list
[EMAIL PROTECTED]
| https://www.mail-archive.com/gimp-user@lists.xcf.berkeley.edu/msg01186.html | CC-MAIN-2016-44 | refinedweb | 532 | 60.31 |
This is the third in a short series of articles that adds Ajax functionality to a Java EE web application developed in the NetBeans IDE.
The Java EE platform includes JavaServer Faces technology. JavaServer Faces technology provides standard components that you can extend to create your own custom components, which you can then reuse in different applications. Along with the custom component, you also create a custom renderer and a custom tag to associate the component with the renderer and to reference the component from the page.
When you create the JavaServer Faces custom component in this approach, you package the resources required by the component directly with the application bundle. The custom component generates the JavaScript code needed to handle Ajax interactions with the server. To fulfill the Ajax request, you use the same Java Servlet that was used in the do-it-yourself method. This approach uses the JavaServer Faces framework only as a rendering mechanism, ignoring most of the power of JavaServer Faces technology. For convenience in this article, this component-servlet approach is called CompA.
A second JavaServer Faces component approach, called CompB for convenience, uses a phase listener to serve the component's static resources. As an option, you can use the phase listener to intercept and fulfill the client component's Ajax requests. The phase listener can delegate responsibility to a managed bean's method or again use the legacy servlet. The client component's resources, such as the JavaScript file and CSS, are accessed through the phase listener. The phase listener approach takes advantage of more of the power of JavaServer Faces technology, and is described in the final article in this Ajax series, Creating an Ajax-Enabled Application, a Phase Listener Approach.
The CompA approach can be appropriate in the following cases:
An XMLHttpRequest call is used to perform a polling action, for example.
The CompA approach also has its shortcomings, namely:
Weighing the advantages and shortcomings of the CompA approach, it could be a sensible way to introduce JavaServer Faces and Ajax technologies into your application.
This section summarizes the life cycle of the book catalog page and the Ajax-created pop-up balloon. The terminology and file names in the explanations are explained in more detail later in this article. If they are unfamiliar, review this section again after you have read the entire article.
The following figure shows the life cycle of the book catalog page, beginning with the user's click on the catalog link.
The following steps explain the architecture illustrated in the figure:
1. The user's click on the catalog link sends the request to the Dispatcher servlet.
2. The Dispatcher servlet accepts the /books/bookcatalog URL, prefixes it with /faces, and suffixes it with .jsp, mapping the URL to a JSP page. The Dispatcher then forwards the request through the RequestDispatcher.
3. The servlet mapping, specified in the web.xml file, sends the JSP page identified by the URL to the FacesServlet.
4. In the FacesServlet, the JavaServer Faces framework identifies the CompA component by its tag and routes the page (along with the component it contains) to the CompATag tag class.
5. The CompATag class extracts properties from the tag's attributes and populates the properties of the component. It then maps the component to a renderer type that is registered in faces-config.xml, and sends the request to the renderer for further processing.
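The URL mapping in step 2 is just a string transformation; a sketch of it in isolation (class and method names hypothetical):

```java
// Sketch of the Dispatcher's URL mapping:
// /books/bookcatalog -> /faces/books/bookcatalog.jsp
public class DispatcherMapping {
    static String toFacesPath(String requestPath) {
        // Prefix with /faces and suffix with .jsp, as the Dispatcher does.
        return "/faces" + requestPath + ".jsp";
    }

    public static void main(String[] args) {
        System.out.println(toFacesPath("/books/bookcatalog"));
        // prints /faces/books/bookcatalog.jsp
    }
}
```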
The following figure shows the Ajax life cycle of the pop-up balloon, produced when the user mouses over a link on the book catalog page.
1. The user mouses over a book link, invoking the onmouseover event handler.
2. The onmouseover event handler calls the bpui.compA.showPopup() function in the compA.js file. This function sends a request to the CompAServlet through the XMLHttpRequest object.
3. The CompAServlet receives the request and, using the existing BookDBA object, obtains the book title detail data and formats a response to the request.
4. The CompAServlet then returns an XML response that holds the book detail.
5. The ajaxReturnFunction() is called when the response is returned from the CompAServlet.

As a first implementation of a JavaServer Faces component, you create a component with resources accessed directly from the web application. The component itself is bundled with the web application. (A ToolkitA.jsp file from the toolkit version of the application is already present in the project.)
1. Locate the bookcatalog.jsp file.
2. Find the bookcatalog_compA.jsp file, right-click, and choose Copy from the contextual menu.
3. Select the books node, right-click, and choose Paste from the contextual menu. A copy of the bookcatalog_compA.jsp file appears in the list.

In bookcatalog_compA.jsp, the code is much simpler than either the do-it-yourself version from Creating an Ajax-Enabled Application, a Do-It-Yourself Approach or the toolkit version from Creating an Ajax-Enabled Application, a Toolkit Approach. The file is half the size of the do-it-yourself version.
Lines 32–39 in the file initialize the JavaServer Faces core tag library (taglib). They set the tag prefix to f and the JavaServer Faces component tag prefix to bpui for the custom taglib <bpui:compA>.
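The taglib setup follows the standard JSP pattern; a sketch (the bpui URI shown here is a placeholder for the URI declared in ui.tld):

```jsp
<%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<%@ taglib uri="http://example.com/bpui" prefix="bpui" %>
```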
Lines 36–38, shown below, provide the necessary view tags.
The <f:view> tag is used to encapsulate all the JavaServer Faces components so the FacesServlet (discussed later) operates on them in the page. Without the view tags, a JavaServer Faces context would be created but wouldn't have an object in the component tree to render.

The <bpui:compA> tag has associated attributes id and url. The id attribute is the pop-up balloon object name. The url attribute locates the services that respond to the Ajax request.
Recall that in the do-it-yourself version described in Creating an Ajax-Enabled Application, a Do-It-Yourself Approach, you hard-coded the URL in a JavaScript file. To use that implementation in another application, the URL would have to be changed in many places. Because the component approach encapsulates code for easy reuse, the URL is declared as part of the component tag's attributes in the JSP file rather than in JavaScript code.
Moreover, because the do-it-yourself approach can't distinguish multiple components, the number of pop-up balloons on a page is limited to one at a time. A properly architected component approach lets you use any number of pop-up balloons simultaneously. For example, you could have two compA components, each identified by a different id that is specified in the component's <bpui:compA> tag, each referencing a different URL. One component could use the URL of the pop-up servlet on your server, while another component could use a URL that fulfills the Ajax request in a different way.
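For example, two pop-ups could be declared side by side, each with its own id and URL (both URLs hypothetical):

```jsp
<f:view>
  <bpui:compA id="pop0" url="/ajax-bookstore/popup"/>
  <bpui:compA id="pop1" url="/ajax-bookstore/reviews"/>
</f:view>
```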
Scroll down in the bookcatalog.jsp file and view the event handlers on lines 76–77:

These lines are similar to previous versions of the onmouseover and onmouseout event handlers. In this case, though, they pass the id attribute of the pop-up object pop0 to the showPopup() and hidePopup() functions. Recall that in line 37 the value for the id attribute of the <bpui:compA> tag was defined as pop0. In addition to the pop-up id attribute, the showPopup() function is also passed the event and bookId attributes. The bookId parameter is used to obtain information about a specific book, while the event parameter denotes either an onmouseover or onmouseout event.
The showPopup() and hidePopup() functions are restricted to the bpui.compA namespace to avoid naming conflicts. To use more than one pop-up component on the page, you would provide separate id attributes for each component in a bpui:compA tag, then use the showPopup() and hidePopup() functions to identify which to hide and which to show.
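The namespacing described above can be sketched like this (a simplification; the real compA.js also issues the XMLHttpRequest):

```javascript
// Global namespace object so showPopup/hidePopup don't collide with other scripts.
var bpui = bpui || {};
bpui.compA = bpui.compA || {};

// Registry of pop-up components, keyed by the id from the <bpui:compA> tag.
bpui.compA.popups = {};

bpui.compA.showPopup = function (id, bookId) {
    // Look up (or lazily create) the state for this particular pop-up.
    var popup = bpui.compA.popups[id] || (bpui.compA.popups[id] = {});
    popup.visible = true;
    popup.bookId = bookId;   // which book's details to fetch from the servlet
    return popup;
};

bpui.compA.hidePopup = function (id) {
    var popup = bpui.compA.popups[id];
    if (popup) { popup.visible = false; }
    return popup;
};
```

Because every call is keyed by id, two components such as pop0 and pop1 keep independent state instead of sharing one global pop-up.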
In summary, in this component version of the project, the only changes necessary in the bookcatalog.jsp file are to declare the taglib, reference the component's tag, and provide a server-side component to fulfill the Ajax request. These changes provide the side benefit of allowing you to show more than one component on a page.
Now, examine your project's tag library descriptor (.tld) file. Double-click the ui.tld file to open it in the NetBeans Editor.
In the ui.tld file, you see that the tags for CompA lie between lines 14 and 67.
Recall line 35 from bookcatalog.jsp:

For the application server, this line maps to the URI used for the user interface in line 8 of the ui.tld file:
In line 15 of the ui.tld file, you see the definition of the <name> tag:
The name defined in line 15 is used with the taglib namespace prefix bpui in the bookcatalog.jsp file, as, for example, in line 38 from that file:
The bpui:compA tag maps to the tag class defined in line 16 of the ui.tld file:
Line 17 describes the compA tag as scriptless, which means there will be no scripting between the opening and closing tags:
Take special note of the id attribute in the definition of compA (line 19). Recall how this attribute is used in the onmouseover and onmouseout event handlers in bookcatalog.jsp. When passed as a parameter to the showPopup() and hidePopup() JavaScript functions, it references a unique pop-up component. The id attribute is also used for other component-specific calls. By referencing components by id, the functions can allow more than one pop-up component to be used per page.
The style and styleClass attributes (lines 43 and 54) are defined to allow the component to override any styles that might be defined for the page elsewhere.
The CompA component also has an important JavaServer Faces configuration file called faces-config.xml, which is in the same directory as ui.tld. Double-click faces-config.xml in the NetBeans Projects window to view the file in the NetBeans Editor.
CompA uses a standard JavaServer Faces output component (javax.faces.Output) with a custom renderer that is specified in lines 12–24, shown below. The component-family and the renderer-type are mapped to the return of the CompATag methods getComponentType() and getRendererType(), respectively.
Line 20 names the renderer:
In line 21, the renderer is mapped to the CompARenderer class:
The CompATag Tag Class
You now examine the CompATag class referenced by the ui.tld file to see how the pop-up component's tag data is used. Double-click the CompATag.java file to open it in the NetBeans Editor.
The
CompATag class extracts attribute values from the tag, populates the component, and maps to a renderer type that is registered in
faces-config.xml. The tag's
getComponentType() and
getRendererType() methods perform the important function of mapping the JavaServer Faces object to a specific renderer. Their return values are mapped to the values entered in the
faces-config.xml file to determine a specific renderer class to be used to render the component's markup.
CompARenderer Class
To see how the
CompARenderer class executes the rendering, you now examine the
CompARenderer.java file.
Double-click the CompARenderer.java file to open it in the NetBeans Editor.
In the
CompARenderer.java file, scroll to lines 34–35 of the
CompARenderer class definition:
These lines show that the class uses the script resources of the
compA.js and
compA.css files. These resources enable the display and style of the HTML markup for the pop-up balloon and help process the information that the balloon displays.
Scroll to lines 82–103. Here, you see the formatting and content that was hand-coded into the page in the do-it-yourself approach. Now, this information is part of a component and is inserted into the page automatically when the component is rendered.
Scroll down and view lines 146–157. Here, you see that the component also contains references to the JavaScript file (
COMPA_SCRIPT_RESOURCE, which resolves to
/compA.js) and the style sheet (
COMPA_CSS_RESOURCE, which resolves to
/compA.css).
These classes illustrate the advantage of the JavaServer Faces component approach: after you create the component, you (and other developers) can reuse it easily. All of the resource information is contained in the component. By accessing the
CompATag class through the FacesServlet, you eliminate additional programming.
In summary, the
CompARenderer class renders the markup code that displays the component in the page. The JavaServer Page is routed through the FacesServlet. Because the page is within the scope of the
<f:view> tag, the JavaServer Faces framework recognizes that it must operate on the
bpui.CompA tag.
The FacesServlet is included in the libraries that are distributed with the GlassFish application server. The FacesServlet is registered in
web.xml, and is part of the Java EE framework. The JavaServer Faces framework identifies the
CompA component by its
bpui:compA tag and routes the request (along with the component it contains) to the tag class
CompATag. There, the
setProperties() method extracts properties from the tag's attributes and populates the properties of the component.
The renderer (
CompARenderer) outputs the component's markup and returns control to the JavaServer Faces framework, so the next tag in the JSP page can be interpreted.
compA.css Style Sheet
The CSS file for the CompA implementation needs to change only slightly from the do-it-yourself or toolkit approaches. Open the file for viewing in the NetBeans IDE:
Double-click the compA.css file. The file opens in the NetBeans Editor.
In the file, note that the class selector namespace has been changed from
.bpui_alone to
.bpui_compA. The namespace ensures a unique name for the styles, eliminating the possibility of inadvertent duplication. The reasoning is the same as that used in the
bookcatalog.jsp file, where the name
bpui.compA was used to create a separate namespace.
Extending this technique, expand the bookstore2 > Web Pages > images node and note that the images used for the corners of the pop-up balloons have also been given unique names. Best practice dictates that you always create a separate namespace when you design a component to help avoid clashes.
compA.js File
You now examine the
compA.js file. This file, in particular, illustrates the flexibility of the JavaServer Faces approach.
Double-click the compA.js file. The file opens in the NetBeans Editor.
In the
compA.js file, note that the
bpui.compA.showPopup() function (line 9) occupies its own namespace. Otherwise, it is identical to the
bpui.alone.showPopup() function in the do-it-yourself version.
The
showPopupInternal() function (line 28) is markedly different, however. In lines 32–35, this function retrieves the name of the pop-up object to be displayed and constructs the
url attribute by concatenating the
itemId value to the object's URI property:
In the do-it-yourself and toolkit versions, the pop-up balloon is identified by more explicitly coding its
url attribute:
As these lines reveal, in the do-it-yourself version a change in the name of the pop-up object or its URL requires a change in the JavaScript code. In contrast, the component approach requires you to make these changes only in the tag's attribute values.
In line 9, the identity of the pop-up object is passed into the
showPopup() function by the arguments
popupx and
itemId.
When setting the timeout period for the component (line 24), you identify the component by name and item identifier (
popupx and
itemId). These values identify the component that is operated upon.
Consider line 32 of the
compA.js file, in the
showPopupInternal() function:
Here you see that the function is accessing the
bpui.compA[] associative array with the name of the component. The array accounts for situations in which more than one component is present in the page. For example, if the page contained two components named
popup0 and
popup1, the associative array would contain two objects.
Dispatcher Class
The
Dispatcher class finds known URL patterns, alters them if necessary, and forwards them for further processing.
To understand how the dispatcher does its job, first examine the
web.xml file:
Double-click the web.xml file to open it in the NetBeans Editor.
Because web.xml is an XML file, the IDE tries to interpret it rather than display it. Click the XML button in the NetBeans Editor toolbar to show the file in XML view.
web.xml File
Note lines 35–68 of the file, where URL patterns are mapped to servlets. In these lines, all of the application's pages are mapped to the Dispatcher servlet. Lines 45–48 are typical of this mapping.
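The listing is not reproduced in this excerpt; a servlet mapping of the kind described typically looks like the following (names inferred from the surrounding text, shown for illustration only):

```xml
<!-- Illustrative sketch, not the project's exact lines 45-48. -->
<servlet-mapping>
    <servlet-name>Dispatcher</servlet-name>
    <url-pattern>/books/bookcatalog</url-pattern>
</servlet-mapping>
```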
This mapping is a remnant of the legacy coding for the project and presents a barrier to introducing JavaServer Faces technology into the project. If all URLs are routed through the Dispatcher, then JavaServer Faces processing must take place there along with other page processing. A mapping must be added to route JavaServer Faces components to the FacesServlet for processing.
Lines 25–33 set up mapping to the FacesServlet:
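The listing is omitted from this excerpt, but a standard JavaServer Faces servlet declaration and mapping of that era looks like this (the servlet class is the standard one; the surrounding details are a sketch):

```xml
<servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>/faces/*</url-pattern>
</servlet-mapping>
```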
As a result of this mapping, any URL pattern that begins with
/faces/ is sent to the FacesServlet class.
Now, examine the Dispatcher to see how the
/faces/ prefix is concatenated to URL patterns.
Dispatcher.java File
Instead of editing the existing
Dispatcher.java file, replace it with the
Dispatcher.java_compX file already present in the project.
In the Projects window, expand the dispatcher folder:
The
Dispatcher.java_compX file is used for both CompA and CompB, the next example in this series.
Select the Dispatcher.java file. Right-click and choose Delete from the contextual menu. In the confirmation pop-up window, click Yes.
Select the Dispatcher.java_compX file, right-click, and choose Copy from the contextual menu.
Select the com.sun.bookstore2.dispatcher node, right-click, and choose Paste from the contextual menu. A copy of the
Dispatcher.java_compX file appears in the list.
Select the pasted Dispatcher.java_compX file, right-click, and choose Rename from the contextual menu. Rename the file to
Dispatcher.java to make it part of the project build.
Double-click the Dispatcher.java file to open it in the NetBeans Editor.
In the
Dispatcher.java file, note line 74, in the method
doGet():
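The line itself is not shown in this excerpt; the behavior described amounts to something like the following sketch (variable names are assumed for illustration):

```java
// Hypothetical sketch of the doGet() addition: the /faces prefix is
// added only for the bookcatalog pattern, because that is the only
// page containing JavaServer Faces components.
public class Main {
    public static void main(String[] args) {
        String selectedScreen = "/books/bookcatalog";
        if (selectedScreen.startsWith("/books/bookcatalog")) {
            selectedScreen = "/faces" + selectedScreen;
        }
        System.out.println(selectedScreen);
    }
}
```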
This line is the sole addition to the original
Dispatcher.java file. It adds the
/faces prefix to the
/books/bookcatalog URL pattern. The
/faces prefix allows the servlet mapping in the
web.xml file to route the component to the FacesServlet class.
The code following line 62 shows that the prefix is concatenated only to
/books/bookcatalog URL patterns. The bookcatalog page is the only one in the project that contains JavaServer Faces components.
The
Dispatcher servlet
doGet() method interacts with lines 45–48 of the
web.xml file, noted earlier. In those lines, any URL with the pattern
/books/bookcatalog is routed to the Dispatcher servlet. The Dispatcher servlet processes the URL pattern (line 74) by concatenating a
/faces prefix, changing the value of the
selectedScreen variable to
/faces/books/bookcatalog.
In line 105 of
Dispatcher.java, a JavaServer Pages
.jsp suffix is appended to the URL. The URL is then forwarded to the actual JSP page for processing. In this way, the
Dispatcher servlet lives up to its name: it finds known URL patterns, alters them if necessary, and forwards them.
When such a servlet appears in a legacy application, the URL patterns are typically hard-coded. To change a pattern in order to change information flow, the relevant URL pattern must be changed everywhere it appears. To add a page to the application, the new URL must be coded in several places. The advantage of a JavaServer Faces component approach is that these values are declared in the
faces-config.xml file and don't need to be hard-coded elsewhere.
Now, examine the generated HTML markup for the project by building and deploying it.
Note that the code for this page is almost identical to that produced in the do-it-yourself version of the project. Instead of being hard-coded in JavaScript as it was in the do-it-yourself method, the pop-up rendering code is now generated by the renderer. The most obvious difference in the code is the use of the
compA namespace in the pop-up portion of the file. Another difference comes at the end of the pop-up section, just before the
<!-- END: popup --> comment:
These lines initialize the pop-up balloon object. They create a new Popup object with the name of the object (
pop0) and the URL of the servlet to which it is being forwarded (
PopupServlet). These lines are generated by the CompARenderer class, in lines 116–122 of
CompARenderer.java.
Later in the HTML code, the
pop0 object name is referenced in the
onmouseover and
onmouseout event handlers for each book title. For example,
It is instructive to consider the page's HTML code alongside the
compA.js code that helps generate it. View the
compA.js file in the IDE and consider the
showPopup() function beginning on line 9. Lines 11–20 obtain the x-y coordinates of the mouse when the
onmouseover event occurs. In lines 21–22, those coordinates are associated with the name of the object that pops up.
The object name and coordinates are then passed to the
showPopupInternal() function (lines 28–41). That function looks up the object by name (line 32) and operates on it.
The object is actually created by the
createPopup() function (beginning on line 44), more properly called a 'closure' because it contains an inner function,
ajaxReturnFunction(), with variables that can be accessed outside of the
createPopup() function.
Although the CompA approach requires some additional effort to code the JavaServer Faces component, it has several advantages:
If you want to remove the component, you can delete its tag from the bookcatalog.jsp file and return to your original application, affecting only one page.
The CompA approach described here has a disadvantage, namely that all of the JavaScript files, images, and Java classes must be bundled with the application. You address this issue in the next article in the series.
In the next article in this series you take better advantage of JavaServer Faces technology by using a phase listener approach to implement the pop-up balloons. In that approach, you can bundle all of the component's resources into a reusable
.jar file. | http://www.oracle.com/technetwork/articles/javaee/compa-135961.html | CC-MAIN-2015-18 | refinedweb | 3,622 | 56.35 |
Redscope
A schema introspection tool for AWS Redshift
Redscope is used to read the raw SQL DDL from your Redshift database and save
the results to
.sql files in an easy-to-use project tree.
Why use Redscope?
Keeping track of the database schema is annoying. Even if DDL files are kept under version control and a database migration tool is used as part of your CI, the checked-in table definitions can drift out of date as the database changes over time. This tool allows the current state of the database schema to be read from Redshift and put under version control, so changes can be tracked over time.
Getting started
pip install redscope
Create a redscope project
redscope init
populate the redshift/redscope/.redscope file
This file is used to tell
redscope how to connect to redshift. In order to easily support using existing
.env files in your projects,
it is necessary to tell
redscope the name of your environment file, as well as the name of the
environment variable which contains a standard
psycopg2 connection string.
[env]
file = name_of_env_file.env

[redshift]
connection = ENV_VARIABLE_WITH_PSYCOPG2_CONNECTION_STRING
Introspect the database schema
redscope inspect
Redscope API
STILL UNDER DEVELOPMENT
For whatever reason, sometimes it is nice to be able to reference SQL ddl directly from your python code.
This can be accomplished using the
redscope api.
from pathlib import Path
from dotenv import load_dotenv
from redscope.api import RedshiftSchema

load_dotenv('my-env-file.env')
redshift_schema = RedshiftSchema('MY_DB_CONNECTION_NAME')

# sales_report_table now is a Table DDL object
sales_report_table = redshift_schema.schema('sales').table('report').fetch()

# will print full table ddl including constraints, encoding, and defaults
print(sales_report_table.ddl())

# will print simple ddl with just columns and data types
print(sales_report_table.simple_ddl())

# Accessing a function definition
func_foo = redshift_schema.schema('my_funcs').function('foo').fetch()

# func foo is a Function DDL object
print(func_foo.ddl())

# getting all objects in a schema
reporting_views = redshift_schema.schema('reporting').views().fetch()
for name, ddl in reporting_views.items():
    print(f"the key is the schema qualified view name. {name}")
    print(f"the value is the SQL ddl string.{ddl}")

# Files can also be saved
root_path = Path.cwd()
tables = redshift_schema.schema('sales').tables().fetch()

"""
root_path
    schema
        sales
            tables
                schema.table.sql -- one file per table
"""
for table in tables:
    table.save_file(root_path)
What's new in C# 11
Beginning with the .NET 6.0.200 SDK or Visual Studio 2022 version 17.1, preview features in C# are available for you to try.
Important
These are currently preview features. You must set
<LangVersion> to
preview to enable these features. Any feature may change before its final release. These features may not all be released in C# 11. Some may remain in a preview phase for longer based on feedback on the feature.
The following features are available in the 6.0.200 version of the .NET SDK. They're available in Visual Studio 2022 version 17.2.
- Generic attributes.
- static abstract members in interfaces.
- List patterns.
- Newlines in string interpolation expressions.
- Improved method group conversion to delegate
- Raw string literals.
- Warning wave 7
You can download the latest .NET 6 SDK from the .NET downloads page. You can also download Visual Studio 2022, which includes the .NET 6 SDK. You can also try all these features with the preview release of the .NET 7 SDK, which can be downloaded from the all .NET downloads page.
Generic attributes
You can declare a generic class whose base class is System.Attribute. This provides a more convenient syntax for attributes that require a System.Type parameter. Previously, you'd need to create an attribute that takes a
Type as its constructor parameter:
// Before C# 11:
public class TypeAttribute : Attribute
{
    public TypeAttribute(Type t) => ParamType = t;

    public Type ParamType { get; }
}
And to apply the attribute, you use the
typeof operator:
[TypeAttribute(typeof(string))]
public string Method() => default;
Using this new feature, you can create a generic attribute instead:
// C# 11 feature:
public class GenericAttribute<T> : Attribute { }
Then, specify the type parameter to use the attribute:
[GenericAttribute<string>()]
public string Method() => default;
You must supply all type parameters when you apply the attribute. In other words, the generic type must be fully constructed.
public class GenericType<T>
{
    [GenericAttribute<T>()] // Not allowed! generic attributes must be fully constructed types.
    public string Method() => default;
}
The type arguments must satisfy the same restrictions as the
typeof operator. Types that require metadata annotations aren't allowed. For example, the following types aren't allowed as the type parameter:
- object for dynamic.
- IntPtr instead of nint or nuint.
- string instead of string?.
- ValueTuple<int, int> instead of (int X, int Y).
Static abstract members in interfaces
Important
static abstract members in interfaces is a runtime preview feature. You must add the
<EnablePreviewFeatures>True</EnablePreviewFeatures> in your project file. For more information about runtime preview features, see Preview features. You can experiment with this feature, and the experimental libraries that use it. We will use feedback from the preview cycles to improve the feature before its general release.
You can add static abstract members in interfaces to define interfaces that include overloadable operators, other static members, and static properties. The primary scenario for this feature is to use mathematical operators in generic types. The .NET runtime team has included interfaces for mathematical operations in the System.Runtime.Experimental NuGet package. For example, you can implement the
System.IAdditionOperators<TSelf, TOther, TResult> in a type that implements
operator +. Other interfaces define other mathematical operations or well-defined values.
You can learn more and try the feature yourself in the tutorial Explore static abstract interface members, or the Preview features in .NET 6 – generic math blog post.
Newlines in string interpolations
The text inside the { and } characters for a string interpolation can now span multiple lines. The text between the { and } markers is parsed as C#. Any legal C#, including newlines, is allowed. This feature makes it easier to read string interpolations that use longer C# expressions, like pattern matching switch expressions, or LINQ queries.
You can learn more about the newlines feature in the string interpolations article in the language reference.
List patterns
List patterns extend pattern matching to match sequences of elements in a list or an array. For example,
sequence is [1, 2, 3] is
true when the
sequence is an array or a list of three integers (1, 2, and 3). You can match elements using any pattern, including constant, type, property and relational patterns. The discard pattern (
_) matches any single element, and the new range pattern (
..) matches any sequence of zero or more elements.
You can learn more details about list patterns in the pattern matching article in the language reference.
Improved method group conversion to delegate
The C# standard on Method group conversions now includes the following item:
- The conversion is permitted (but not required) to use an existing delegate instance that already contains these references.
Previous versions of the standard prohibited the compiler from reusing the delegate object created for a method group conversion. The C# 11 compiler caches the delegate object created from a method group conversion and reuses that single delegate object. This feature is first available in Visual Studio 17.2 as a preview feature. It's first available in .NET 7 preview 2.
Raw string literals
Raw string literals are a new format for string literals. Raw string literals can contain arbitrary text, including whitespace, new lines, embedded quotes, and other special characters without requiring escape sequences. A raw string literal starts with at least three double-quote (""") characters. It ends with the same number of double-quote characters. Typically, a raw string literal uses three double quotes on a single line to start the string, and three double quotes on a separate line to end the string. The newlines following the opening quote and preceding the closing quote are not included in the final content:
string longMessage = """ This is a long message. It has several lines. Some are indented more than others. Some should start at the first column. Some have "quoted text" in them. """;
Any whitespace to the left of the closing double quotes will be removed from the string literal. Raw string literals can be combined with string interpolation to include braces in the output text. Multiple
$ characters denote how many consecutive braces start and end the interpolation:
var location = $$""" You are at {{{Longitude}}, {{Latitude}}} """;
The preceding example specifies that two braces start and end an interpolation. The third repeated opening and closing brace are included in the output string.
You can learn more about raw string literals in the article on strings in the programming guide, and the language reference articles on string literals and interpolated strings.
Implementation status: to be implemented
Synopsis
#include <stdio.h>
#include <wchar.h>
wchar_t *fgetws(wchar_t *ws, int n,
FILE *stream);
Description
The function gets a wide-character string from a stream.
Arguments:
ws - the array saving the read wide character string,
n - the maximum number of wide characters to read +1,
stream - the input stream.
The
fgetws() function reads characters from the stream, converts these to the corresponding wide-character codes, places them in the
wchar_t array pointed to by ws, until n-1 characters are read, or a newline is read, converted, and transferred to ws, or an end-of-file condition is encountered. The wide-character string ws is then terminated with a null wide character.
If an error occurs, the resulting value of the file position indicator for the stream is unspecified.
Return value
Upon successful completion, the
fgetws() function returns ws.
If the stream is at end-of-file, the end-of-file indicator for the stream is set and fgetws() returns a null pointer. If a read error occurs, fgetws() returns a null pointer and sets errno to indicate the error.
Errors
[.
[
EINTR] The read operation was terminated due to the receipt of a signal, and no data was transferred.
[
EIO] A physical I/O error has occurred, or the process is in a background process group attempting to read from its controlling terminal, and either the calling thread is blocking
SIGTTIN or the process is ignoring
SIGTTIN or the process group of the process is orphaned.
[
E.
Implementation tasks
- Implement
wchar.h file.
- Implement the
fgetws() function.
14 September 2012 11:38 [Source: ICIS news]
LONDON (ICIS)--NYMEX light sweet crude futures extended gains more than $2.00/bbl on Friday to take the front month October contract above $100.00/bbl, rising in tandem with buoyant global stock markets following the US Federal Reserve's announcement of another round of stimulus measures to boost the economy.
By 10:15 GMT, October NYMEX crude had hit a high of $100.42/bbl, a gain of $2.11/bbl from Thursday's close of $98.31/bbl, before easing back to around $100.25/bbl.
At the same time, November Brent crude on ICE Futures was trading around $117.60/bbl, having hit a high of $117.95/bbl, a gain of $2.07/bbl from the previous close of $115.88/bbl.
The RichEditBox control is extremely slow at loading large RTF documents. I downloaded the source code of Wordpad and it has the same poor performance (4 minutes). But this sample is an old version of Wordpad.
So Microsoft must have improved something in Wordpad over the years that is missing from the .NET Framework.
Finally I found the solution:
The .NET Framework uses the RichEdit20W class for the RichEdit control, just as the old Wordpad does. But the Wordpad in Windows XP uses the new RichEdit50W class, which Microsoft has highly improved.
This is very easy: Derive a class from RichTextBox and write a managed wrapper for LoadLibrary.
The class RichEdit50W is created by MsftEdit.dll which is available since Windows XP SP1. I implemented a fallback to RichEdit20W for the very rare case that someone should still use XP without service pack.
And it works!
/// <summary>
/// The framework uses by default "Richedit20W" in RICHED20.DLL.
/// This needs 4 minutes to load a 2,5MB RTF file with 45000 lines.
/// Richedit50W needs only 2 seconds for the same RTF document !!!
/// </summary>
protected override CreateParams CreateParams
{
    get
    {
        CreateParams i_Params = base.CreateParams;
        try
        {
            // Available since XP SP1
            Win32.LoadLibrary("MsftEdit.dll"); // throws

            // Replace "RichEdit20W" with "RichEdit50W"
            i_Params.ClassName = "RichEdit50W";
        }
        catch
        {
            // Windows XP without any Service Pack.
        }
        return i_Params;
    }
}
NOTE: See also
public class Win32
{
    [DllImport("kernel32.dll", EntryPoint="LoadLibraryW", CharSet=CharSet.Unicode, SetLastError=true)]
    private static extern IntPtr LoadLibraryW(string s_File);

    public static IntPtr LoadLibrary(string s_File)
    {
        IntPtr h_Module = LoadLibraryW(s_File);
        if (h_Module != IntPtr.Zero)
            return h_Module;

        int s32_Error = Marshal.GetLastWin32Error();
        throw new Win32Exception(s32_Error);
    }
}
Design patterns and practices in .NET: the Null Object pattern
May 6, 2013 4 Comments
Introduction
Null references are a fact of life for a programmer. Some would actually call it a curse. We have to start thinking about null references even in the case of the simplest Console application in .NET:
static void Main(string[] args)
If there are no arguments passed in to the Main method and you access the args array with args[0] then you’ll get a NullReferenceException because the array has not been initialised. You have to check for args == null already at this stage.
This problem is so pervasive that if your class or method has some dependency then the first thing you need to check is if somebody has tried to pass in a null in a guard clause:
if (dependency == null) throw new ArgumentNullException("Dependency name");
Also, if you call a method that returns an object and you intend to use that object in some way then you may need to include the following check:
if (object == null) return;
…or throw an exception, it doesn’t matter. The point is that your code may be littered with those checks disrupting the flow. It would be a lot more efficient to be able to assume that the return value has been instantiated so that it is a ‘valid’ object that we can use without checking for null values first.
This is exactly the goal of the Null Object pattern: to be able to provide an ’empty’ object instead of ‘null’ so that we don’t need to check for null values all the time. The Null Object will be sort of a zero-implementation of the returned object type where the object does not perform anything meaningful.
Also, there may be times where you just don’t want to make use of a dependency. You cannot pass in null as that would throw an exception – instead you can pass in a valid object that does not perform anything useful. All method calls on the Null Object will be valid, meaning you don’t need to worry about null references. This may occur often in testing scenarios where the test may not care about the behaviour of the dependency as it wants to test the true logic of the system under test instead.
Of course it’s not possible to get rid null checks 100%. There will still be places where you need to perform them.
This pattern is also known by other names: Stub, Active Nothing, Active Null.
Demo
Open Visual Studio and create a new Console application. We’ll simulate that a method expects a caching strategy to cache some object. The example is similar to and builds on the example available here under the discussion on the Adapter Pattern. Insert the following interface:
public interface ICacheStorage
{
    void Remove(string key);
    void Store(string key, object data);
    T Retrieve<T>(string key);
}
Insert the following concrete type that implements the HttpContext.Current.Cache type of solution:
public class HttpContextCacheStorage : ICacheStorage
{
    public void Remove(string key)
    {
        HttpContext.Current.Cache.Remove(key);
    }

    public void Store(string key, object data)
    {
        HttpContext.Current.Cache.Insert(key, data);
    }

    public T Retrieve<T>(string key)
    {
        T itemsStored = (T)HttpContext.Current.Cache.Get(key);
        if (itemsStored == null)
        {
            itemsStored = default(T);
        }
        return itemsStored;
    }
}
You’ll need to add a reference to System.Web. It’s of course not too wise to rely on the HttpContext in a Console application but that’s beside the point right now.
Add the following private method to Program.cs:
private static void PerformWork(ICacheStorage cacheStorage)
{
    string key = "key";
    object o = cacheStorage.Retrieve<object>(key);
    if (o == null)
    {
        //simulate database lookup
        o = new object();
        cacheStorage.Store(key, o);
    }

    //perform some work on object o...
}
We first check whether object ‘o’ is available in the cache provided by the injected ICacheStorage object. If not then we fetch it from some source, like a DB and then cache it.
What if the caller doesn’t want to cache the object? They might intentionally force a database lookup. If they pass in a null then they’ll get a NullReferenceException. Also, if we want to test this method using TDD then we may not be interested in caching. The test may probably want to test the true logic of the code i.e. the ‘perform some work on object o’ bit, where the caching strategy is irrelevant.
The solution is a caching strategy that doesn’t do any work:
public class NullObjectCache : ICacheStorage
{
    public void Remove(string key) { }

    public void Store(string key, object data) { }

    public T Retrieve<T>(string key)
    {
        return default(T);
    }
}
If you pass this implementation to PerformWork then the object will never be cached and the Retrieve method will always return null. This forces PerformWork to look up the object in the storage. Also, you can pass this implementation from a unit test so that the caching dependency is effectively ignored.
Another example
Check out my post on the factory patterns. You will find an example of the Null Object Pattern there in the form of the UnknownMachine class. Instead of CreateInstance of the MachineFactory class returning null in case a concrete type was not found it returns this empty object which doesn’t perform anything.
Consequences
Using this pattern wisely will result in fewer checks for null values: your code will be cleaner and more concise. Also, the need for code branching may decrease.
The caller must obviously know that a Null Object is returned instead of a null, otherwise they may still check for nulls. You can help them by commenting your methods and classes properly. Also, there are certainly cases where an empty NullObject, such as UnknownMachine mentioned above may be confusing for the caller. They will call the TurnOn() method but will not see anything happening. You can extend the NullObject implementation with messages indicating this status, e.g. “Cannot turn on an empty machine.” or something similar.
Null objects are quite often implemented as singletons – the subject of the next post: as all NullObjects implementations of an abstraction are identical, i.e. have the same properties and states they can be shared across the application. This may become cumbersome in large applications where team members may not agree on what a NullObject representation should look like. Should it be empty? Should it have some minimal implementation? Then it’s wiser to allow for multiple representations of the Null Object.
View the list of posts on Architecture and Patterns here.
Brilliant post, Andras 🙂 Your style of writing is very good and it’s not boring like the books…
The main reason in this example to use a Null Object is to ignore the by-product of what the real code in the method is doing. I agree this is sometimes a case, but it also feels weird. It makes me start thinking of how I can try to get that by-product code out of that method so that I don’t need to do anything special for testing purposes. Do you have any better ideas about how to separate the code in this specific example that might be a better pattern?
Hi David,
Sure, check out the posts that take up the decorator pattern:
Pingback: Architecture and patterns | Michael's Excerpts | https://dotnetcodr.com/2013/05/06/design-patterns-and-practices-in-net-the-null-object-pattern/ | CC-MAIN-2018-47 | refinedweb | 1,223 | 62.07 |
In this post, you’ll learn to integrate PayPal payment in Ionic 5 apps and PWA, so you can accept payment both on mobile and desktop devices.
PayPal is one of the most widely used and easiest payment gateway to integrate in your website or app. Plus it is spread all over the globe, and supports a wide variety of payment options. PayPal can take care of almost all your payment requirements, so you don’t have to go all
What is Ionic ?
You probably already know about Ionic, but I’m putting it here just for the sake of beginners. Ionic is a hybrid mobile app development SDK. It provides tools and services for developing hybrid mobile apps using Web technologies like CSS, HTML5, and Sass. Apps can be built with these Web technologies and then distributed through native app stores to be installed on devices by leveraging Cordova or Capacitor environment.
So, in other words — If you create Native apps in Android, you code in Java. If you create Native apps in iOS, you code in Obj-C or Swift. Both of these are powerful but complex languages. With Ionic you can write a single piece of code for your app that can run on both iOS and Android and web (as PWA),.
Ionic and Payment Gateways
Ionic can create a wide variety of apps, and hence a wide variety of payment gateways can be implemented in Ionic apps. The popular ones are PayPal, Stripe, Braintree, in-app purchase etc. For more details on payment gateways, you can read my blog on Payment Gateway Solutions in Ionic.
Also, there are different types of Ionic Apps you can build for same functionality. Most popular ones are :
- Front-end: Angular | Build environment : Cordova ✅
- Front-end: Angular | Build environment : Capacitor
- Front-end: React | Build environment : Capacitor
- Front-end: React/Angular | as PWA ✅
As you see, PayPal can be integrated in websites as well as mobile apps. In this blog we’ll learn how to integrate PayPal payment gateway in Ionic 5 apps and Ionic 5 PWA.
Structure of post
In this post we will learn how to implement Paypal payments for an Ionic 5 PWA and mobile app. We can break down the post in these steps:
Step 1— Create a PayPal developer account and configure it for integration
Step 2— Creating a basic Ionic 5 Angular app
PWA integration
Step 3 —Configure PayPal web integration
Step 4 — Run the PWA on
ionic serve to test web payments
App integration
Step 5 — Integrate Ionic Native plugin for PayPal
Step 6 — Build the app on android to test app payments.
Step 7 — Going Live credentials follow the steps below:
- Go to Sandbox - Accounts and create a sandbox business and personal test accounts.
- Go to My Apps & Credentials and generate a REST API app, and link it to your sandbox test business account (by default you have a business and personal sandbox account in developer dashboard).When your app’s PayPal SDK is running in Sandbox mode, you cannot make payments with an “actual” PayPal account, you need a Sandbox account. So, the personal sandbox account “pays” and business sandbox account “receives” the money.
Also note down your Client ID from the app details. This is mostly what you need to integrate PayPal in your app / PWA and test payments.
Country specific gotchas
- Being a payment gateway, PayPal has to respect different countries rules. E.g. In India, PayPal users can only pay and receive payments done from India in INR. So if you are testing from India, and your Sandbox accounts are created with India as country, you’ll have to make payments in INR. Similar restrictions may exist for other countries.
- Sandbox payment credit cards are also currency sensitive. If you are facing issue making payments with the default sandbox credit card, generate a new one using Credit Card Generator from the left menu. Make sure you keep the country same as the sandbox account. Add this card with the
Add new cardoption when you are making a sandbox payment
Step
blank starter using
$ ionic start PayPalIonic sidemenu - PayPal functionality — basically you can do away with just a button for PayPal payments.
My homepage looks like this
Overall
PayPal-web.page.html code looks like this
Step 3 — Configure PayPal web integration
We can’t use the PayPal Cordova plugin in an Ionic Progressive Web App (PWA). But we can use the PayPal front-end Javascript SDK in such case.
Warning : In websites or PWA, it is recommended to use payment gateway functions in back-end, because front-end implementation can reveal your client-ID, secret etc
Add PayPal Javascript SDK to index.html
For website/PWA front-end implementation, PayPal provides Payment Buttons (the yellow button in the above image). These are pre-configured PayPal buttons + functions, attached to a JS file we import in our PWA’s
index.html as :
<script src=""></script>
Replace
YOUR_CLIENT_IDin above script call with your own client ID, and change the currency as per your sandbox/live account currency. Client ID is what attaches the Payment with your PayPal REST API app, so do not get it wrong.
SDK Parameters
PayPal JavaScript SDK uses default values for parameters you don’t pass in the imported script. You might want to override these default values depending on your functionality.
Now, PayPal official documentation tells you to code the remaining part of the logic in index.html itself. But the default implementation is good for two reasons
- Ionic app takes time to compile and load in the webview environment, so the render function cannot find the button container
- We need to pass variables like amount, currency etc to the functions. So it makes more sense to keep the functions inside the page.ts file of PWA
Render Payment Buttons
In the HTML template, we replace the
ion-button with
<div id="paypal-button-container"></div>
This
id is used to identify the button, and attached correct render and payment functions to the button.
Payment Functions
createOrder — This function is called when the buyer clicks the PayPal button. This button
- Calls PayPal using
actions.order.create()to set up the details of a one-time transaction, including the amount, line item details, and more
- Launches the PayPal Checkout window so the buyer can log in and approve the transaction on paypal.com
onApprove — This function is called after the buyer approves the transaction on paypal.com. This function:
- Calls PayPal using
actions.order.capture()to capture the funds from the transaction.
- Shows a message to the buyer to let them know the transaction is successful.
onApprove function carries out the success or error part after a transaction. Here, you can call your server with a REST API and save a successful payment in your database.
Here’s the full code of
PayPal-web.page.ts
You can also try different styles of payment buttons at PayPal Payment Button demo page
Step 4 — Test payments in PWA
Run the PWA on browser using
ionic serve
When you click on the Payment Button, PayPal script automatically pops-up a modal with required functionality. Remember, this PayPal popup is attached to your PayPal REST API app, based on the Client ID you provided.
Login with your Sandbox account, and you’ll see payment options just like a normal PayPal transaction. Add a new credit card as I mentioned in Step 1, if you are getting an error saying
Your bank was not able to verify your card . This is generally a mismatch in currency of the app, sandbox account or the credit card.
Remember, you cannot login with a non-sandbox PayPal account for testing
Select the appropriate method, and your payment is done.
Sometimes, there will be an additional verification step involved, imitating a 3D secure password for real transaction. As you can see, being a Sandbox transaction, the password is mentioned as the Personal message.
Once the payment is done, the response object will look like this
You can easily use the response to determine if the payment was successful.
Step 5 — Integrate Ionic Native plugin for PayPal
To implement PayPal payment in Ionic Mobile apps, install PayPal Cordova plugin first
$ ionic cordova plugin add com.paypal.cordova.mobilesdk $ npm install @ionic-native/paypal
Import PayPal in app.module
Import and include
PayPal in the list of
providers in your
app.module.ts file.
import { PayPal } from '@ionic-native/paypal/ngx';
Import PayPal in your page
Create a separate folder for PayPal app implementation.
$ ionic generate page paypal-mobile
Import
PayPal in your
paypal-mobile.page.ts same as
app.module.ts
import { PayPal, PayPalPayment, PayPalConfiguration } from '@ionic-native/paypal/ngx'; .... export class PaypalPage {constructor(private payPal: PayPal) { }
Modify the page’s UI a little to match the phone payment functionality
Here’s the
paypal-mobile.page.html code, in case you need
PayPal payment function
Invoke the payment function to initiate a payment. As mentioned earlier, You will require your
client_id from your PayPal account. (How to get my credentials from PayPal account). This function will be attached to Pay with PayPal button we saw earlier in the app screenshot.
Here’s the complete code for PayPal mobile app implementation
In the
payWithPaypal() function, you start by initializing the
PayPal object with the PayPal environment (Sandbox or Production) to prepare the device for processing payments. Βy calling the
prepareToRender() method you pass the environment we set earlier. Finally, you render the PayPal UI to collect the payment from the user by calling the
renderSinglePaymentUI() method.
Notice the
PayPalEnvironmentSandboxparameter. This is used for Sandbox environment. For production environment, it will change to
PayPalEnvironmentProduction. Of course, do not forget to replace
YOUR_SANDBOX_CLIENT_IDwith your Sandbox Client ID .
Also notice, for the sake of a sample, we have taken
PaymentAmount and
currency as static in the logic, but these can be easily dynamic as per your app’s requirement.
Once payment is done, PayPal SDK will return a response. A sample sandbox response is shown in the gist above. One can use this response to show appropriate Toast or Alert to app users.
Step 6 — Build the app on android to test app payments
To build the app on android, run the following commands one after another
$ ionic cordova platform add android
$ ionic cordova run android
The final command will run the app on either default emulator, or an android device attached to your system. Once you click the Pay with PayPal button, you will see the PayPal payment screens
You can choose to
- Pay with PayPal — using your PayPal account, OR
- Pay with Card — This will use your’s device’s camera to help read your credit card (to avoid typing your card number, expiry date etc)
1. Pay with PayPal
You will need to login to your Sandbox Account to make a payment (because, remember, you are in a sandbox environment)
Once you are logged in, you’ll see the checkout options
Select one option, and pay for the dummy amount.
2. Pay with Card
In this case, your apps’ camera will open up to scan your card.
Once it is done scanning, it will confirm the card number, expiry date and ask for your CVV details etc. Lastly, it’ll show you a confirmation screen, and you proceed to pay the amount.
In both payment cases, you should receive a successful payment response like the following
This completes the Mobile app part of PayPal payment.
Going Live
After testing on app and PWA, when you want to go live, you will need to perform following steps
- Copy the
productionclient-ID from your PayPal account and use it in your app
- In app implementation, change the
PayPalEnvironmentSandboxto
PayPalEnvironmentProductionin
prepareToRenderfunction
- In web-implementation, change the script import with
<script src=”"> </script>
You’re all set now to make and accept payment from your Ionic app and PWA. Go enjoy, homie !
Conclusion
In this post, we learnt how to integrate PayPal in an Ionic app, as well as in an Ionic progressive web app. Testing can be performed easily using Sandbox accounts, and we can go live by simply changing sandbox client-ID with live-ID.
Leave comments if you face any issues in the implementation. I’ll be happy to help.
Next Steps
If you liked this blog, you will also find the following Ionic blogs interesting and helpful. Feel free to ask any questions in the comment section
- Ionic Payment Gateways — Stripe | PayPal | Apple Pay | RazorPay
- Ionic Charts with — Google Charts | HighCharts | d3.js | Chart.js
- Ionic Social Logins — Facebook | Google | Twitter
- Ionic Authentications — Via Email | Anonymous
-
Ionic React Full App with Capacitor
If you need a base to start your next Ionic 5 React Capacitor app, you can make your next awesome app using Ionic 5 React Full App in Capacitor
Ionic Capacitor Full App (Angular)
If you need a base to start your next Angular Capacitor app, you can make your next awesome app using Capacitor Full App
Ionic Full App (Angular and Cordova)
If you need a base to start your next Ionic 5 app, you can make your next awesome app using Ionic 5 Full App
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/enappd/paypal-payment-integration-in-ionic-5-apps-and-pwa-2l39 | CC-MAIN-2021-04 | refinedweb | 2,202 | 51.28 |
Simplest Clojure Test
Here's how to set up clojure testing environment.
Let's start a new project directory and create a
src directory:
mkdir src
Let's create the source file with
src/calc.clj. There's a simple namespace declaration and function that adds 2 numbers.
(ns calc) (def add [a b] (+ a b))
Now let's test that
add function. We need a test runner. There are a couple. The most standard is test-runner. Prep for install by adding it to the dependencies list in you project root with
vim deps.edn:
{:aliases {:test {:extra-paths ["test"] :extra-deps {io.github.cognitect-labs/test-runner {:git/url "" :sha "a85b3b02765fb68684ab9ee4a8598eacf7e471d2"}} :main-opts ["-m" "cognitect.test-runner"] :exec-fn cognitect.test-runner.api/test}}}
We are creating an alias that we can call to run tests, called
:test.
Next, create a
test directory to store the test file:
mkdir test
And create that test file with
vim test/calc_test.clj:
(ns calc-test (:require [clojure.test :refer [is deftest]] [calc]) (deftest test-add (is (= 4 (calc/add 3 1))))
Note that the significant parts of this setup. For test-runner to find the test:
Put the test files in the
test/directory.
Use a
-test-suffixed namespace.
Define tests with
deftest.
Now we can run the test via our alias by executing:
clj -X:test
This will give you some output like:
Running tests in #{"test"} Testing calc-test Ran 1 tests containing 1 assertions. 0 failures, 0 errors.
Does it get any simpler? What's your setup? | https://jaketrent.com/post/simplest-clojure-test/ | CC-MAIN-2022-40 | refinedweb | 260 | 78.14 |
The test separate-modules- is grammatically incorrect.
import module module = "";
declare option prohibit-feature "higher-order-function";
let $f in module:one() return 1
The second use of module should be "module" (string literal).
'in' should be ':='
Similar problems exist in the other separate-modules-X tests.
Correction,
import module module = "";
should be
import module namespace module = "";
In the file dummy.xquery the module declaration is grammatically incorrect :
module namespace = "";
Missing a name. Correction made:
module namespace m = "";
It looks like the other problems have been resolved. Therefore I am marking this bug as fixed.
This doesn't look fixed to me.
e.g.
separate-modules-1
import module module = "";
declare option prohibit-feature "higher-order-function";
let $f in module:one() return 1
See comment #1
I've taken the liberty of correcting a number of syntax errors which were afflicting these tests.
separate-modules-8 and 9 seem to be fundamentally wrong though.
I have correct separate-modules-2. Similar problem as described in comment #0.
The following:
for $f := module:one() return 1
Should be:
for $f in module:one() return.
Indeed it should. I suggest asking Ghislain to sort these tests out.
Thanks for noticing. I added a feature dependency on module import for all these tests.
Many thanks O'Neil for correcting the syntax errors. We'll have a definitive confirmation on them as soon as an implementation can run these tests.
I think these are all now working. | https://www.w3.org/Bugs/Public/show_bug.cgi?id=19630 | CC-MAIN-2015-32 | refinedweb | 244 | 51.85 |
Forum – Code of Conduct
Hello !
I am a user of Hanami router, but I never used 'resources' or thing like that, always to direct action routing.
I would like to try it but it doesn't find my action, how can I tell hanami router to find the class inside the correct namespace ?
namespace 'admin' do resources 'applications', only: %i[index] end get '/admin/applications', to: Controllers::Admin::Applications::Index
Name Method Path Action admin_applications GET, HEAD /admin/applications Applications::Index GET, HEAD /admin/applications Application::Controllers::Admin::Applications::Index
@memoht I'd strongly suggest to go with > 2.0-alpha releases. The framework had been almost entirely rewritten, and some basic concepts had been changed.
It's much, much easier to use now, but as it's still alpha, there are some things not finished yet, such as view helpers not being accessible. I already work with Hanami 2.0 early releases to identify such troublesome parts, and I'm working on official guides, as with video tutorials to make it easier to start.
Definitely if you'll work with 1.3 Hanami release, then you'll need to re-do a lot of the work soon.
:warning: A word from the unofficial shameless commerce division: Given the partial meltdown in Rails-world and Hanami 2 going alpha2, now may be a good time to promote and sponsor Hanami in some way e.g. Tim Riley, Piotr Solnica, Luca Guidi or Sebastian Wilgosz (hanamimastery).
@svoop thanks, I especially appreciate that you decided to give a support to HanamiMatery now, in this early stage, where it's hardest to keep pushing with the content!
rerun -- sidekiq -c path/to/configor similar.
Also, a question concerning Hanami 2.0.
My colleague and I are currently developing/maintaning a couple of Rails applications and have little production experience otherwise. Being heavily burned by both our earlier incompetence but also later by Rails, we are considering using Hanami 1.3 for our next commercial application product. Since Hanami 2.0 is on the horizon, we're wondering if there is any information available about how painful it would be to migrate 1.3 -> 2.0 when it's released.
You're not alone, at least :) We'll launch with 1.3 as we're closing in on the 1.0 with our app and will migrate later. | https://gitter.im/hanami/chat?at=60b2b457688a2624b8bc7527 | CC-MAIN-2021-31 | refinedweb | 394 | 56.55 |
Break a tall skinny matrix by rows into cache blocks. More...
#include <Tsqr_CacheBlocker.hpp>
Break a tall skinny matrix by rows into cache blocks.
A CacheBlocker uses a particular cache blocking strategy to partition an nrows by ncols matrix by rows into cache blocks. The entries in a cache block may be stored contiguously, or as non-contiguous partitions of a matrix stored conventionally (in column-major order).
The CacheBlocker blocks any matrix with the same number of rows in the same way, regardless of the number of columns (the cache blocking strategy's number of columns is set on construction). This is useful for TSQR's apply() routine, which requires that the output matrix C be blocked in the same way as the input matrix Q (in which the Q factor is stored implicitly).
Definition at line 72 of file Tsqr_CacheBlocker.hpp.
Constructor.
Definition at line 104 of file Tsqr_CacheBlocker.hpp.
Default constructor, so that CacheBlocker is DefaultConstructible.
Definition at line 116 of file Tsqr_CacheBlocker.hpp.
Copy constructor.
Definition at line 123 of file Tsqr_CacheBlocker.hpp.
Assignment operator.
Definition at line 131 of file Tsqr_CacheBlocker.hpp.
Cache size hint (in bytes).
This method is deprecated, because the name is misleading. Please call
cache_size_hint() instead.
Definition at line 143 of file Tsqr_CacheBlocker.hpp.
Cache size hint (in bytes).
Definition at line 148 of file Tsqr_CacheBlocker.hpp.
Number of rows in the matrix to block.
Definition at line 151 of file Tsqr_CacheBlocker.hpp.
Number of columns in the matrix to block.
Definition at line 154 of file Tsqr_CacheBlocker.hpp.
Split A in place into [A_top; A_rest].
Return the topmost cache block A_top of A, and modify A in place to be the "rest" of the matrix A_rest.
Definition at line 178 of file Tsqr_CacheBlocker.hpp.
View of the topmost cache block of A.
The matrix view A is copied so the view itself won't be modified.
Definition at line 199 of file Tsqr_CacheBlocker.hpp.
Split A in place into [A_rest; A_bot].
Return the bottommost cache block A_bot of A, and modify A in place to be the "rest" of the matrix A_rest.
Definition at line 227 of file Tsqr_CacheBlocker.hpp.
Fill the matrix A with zeros, respecting cache blocks.
A specialization of this method for a particular MatrixViewType will only compile if MatrixViewType has a method "fill(const Scalar)" or "fill(const Scalar&)". The intention is that the method be non-const and that it fill in the entries of the matrix with Scalar(0).
Definition at line 254 of file Tsqr_CacheBlocker.hpp.
Fill the matrix A with zeros, respecting cache blocks.
This version of the method takes a raw pointer and matrix dimensions, rather than a matrix view object. If contiguous_cache_blocks==false, the matrix is stored either in column-major order with leading dimension lda; else, the matrix is stored in cache blocks, with each cache block's entries stored contiguously in column-major order.
Definition at line 287 330 of file Tsqr_CacheBlocker.hpp.
"Un"-cache-block the given A_in matrix into A_out.
Definition at line 362 of file Tsqr_CacheBlocker.hpp.
Return the cache block with index
cache_block_index.
Definition at line 409 of file Tsqr_CacheBlocker.hpp.
Equality operator.
Two cache blockers are "equal" if they correspond to matrices with the same dimensions (number of rows and number of columns), and if their cache blocking strategies are equal.
Definition at line 449 of file Tsqr_CacheBlocker.hpp. | http://trilinos.sandia.gov/packages/docs/dev/packages/kokkos/doc/html/classTSQR_1_1CacheBlocker.html | CC-MAIN-2014-10 | refinedweb | 566 | 59.5 |
Tutorial: Building an In game Level Editor. (Part #2)
Game Engine: Unity3D 5.1
Language: C#
Subject: In-game Level Editor.
In this part of the tutorial we'll start working on making a basic object selection script and linking it to our input list.
The first thing we'll need to do for this is to make a few basic objects for the probably something like these below.
These are a the test objects that I'll be using for this tutorial.
Once you have made all the objects put them into prefabs into the 'Level Parts' folder (hopefully with better naming than me.)
Should probably have better naming for some of these.
Next we'll be working on storing this object collection in a script so we can select the object we want to place. To do this we'll start by making a new C# script called 'PartList' this will hold references to each of the prefabs along with the currently selected part.
The first thing we'll need to do when we open the 'PartList' script is to add the following to the top of the script:
using System.Collections.Generic;
This will enable us to use the things such as List<> and Dictionary<,>. So next we'll be adding the following variables to the script.
public List<GameObject> Parts = new List<GameObject>();
public GameObject SelectedPart;
private int _selectedPartNumber = 0;
The first 'Parts' variable will hold the list of GameObjects which we'll be using for the level editor. Please note that in an actual game it's probably a better idea to have a centralized list of objects, like in a file or something that can be loaded outside the editor itself as well you'll find out why in a later tutorial.
The second variable 'SelectedPart' is pretty self explanatory being that it holds the currently selected game object.
The third and final variable '_selectedPartNumber' holds the number of the selected part for use when cycling through the list.
(Which is what we're going on to now.)
The first thing that we'll need to do is remove the 'void Update()' method since we won't need that in this script.
Then we'll need to add a line of code to the 'void Start()' method which will select the first in the 'PartList'.
SelectedPart = Parts[_selectedPartNumber];
Next we'll need to code the logic for the logic for changing objects which we'll do with the following code:
public void SelectNextPart(){ _selectedPartNumber++; if(_selectedPartNumber >= Parts.Count){ _selectedPartNumber = 0; } SelectedPart = Parts[_selectedPartNumber]; }
Once you have put that into the script, save it and go back to the Unity Editor. Then drag the script into the 'scriptHolder' Game Object and then add all the objects to list like so:
You may have more objects. All you need to do is enter the amount of part prefabs you have .
Once all that is done we need to open up the 'LE_Input' script as we need to add a way to switch objects.
First we'll need to add another variable to the 'LE_Input' script as shown below.
private PartList _partList;
We'll also need to add a small part to the 'void Start()' method to link to the 'PartList' script.
_partList = GetComponent<PartList>();
Once those two little bits have been we'll need to extend the 'InputCheck()' method to support cycling through the parts, the following code can go after the check to reset the '_moved' boolean.
if(Input.GetButtonUp("Fire1")){ _partList.SelectNextPart(); }
Now if you save all of that, go back into the Unity Editor, make sure 'Maximize on Play' is unchecked and then run the project. If you select the 'scriptHolder' GameObject you should see the Selected Part defaulted to Floor (Or the object you set in the first element in the list.), if you then click on the Gameplay area and tap the Left Control button, you should then see the Selected Part change. | http://moongate-games.co.uk/blog/2015/7/27/tutorial-building-an-in-game-level-editor-part-2 | CC-MAIN-2017-17 | refinedweb | 662 | 67.79 |
On Tue, Aug 13, 2002 at 04:53:34PM +0200, Juliusz Chroboczek wrote: > Branden, I suggest you take this approach. Unfortunately, this will > significantly slow down the speed of the rasteriser. > > I guess something like the following (untested!) in > xc/lib/font/Imakefile should do the trick: > > #ifdef AlphaArchitecture > CDEBUGFLAGS = /**/ > #endif Applied to my 0pre1v3 tree. I would appreciate it if folks would test this once Alpha packages for 0pre1v3 are available (as of this writing, I haven't released the sources yet). -- G. Branden Robinson | Debian GNU/Linux | Ignorantia judicis est calamitas branden@debian.org | innocentis. |
Attachment:
pgpIStAnSE3yf.pgp
Description: PGP signature | https://lists.debian.org/debian-x/2002/08/msg00139.html | CC-MAIN-2017-26 | refinedweb | 103 | 57.98 |
LinuxQuestions.org - Linux - Embedded & Single-board computer
error: -1 Invalid module format when using insmod with module cross-compiled for arm
AndrewShanks
10-10-2007 10:47 AM
error: -1 Invalid module format when using insmod with module cross-compiled for arm
I am trying to build a kernel module for an Arcom Zeus board with a PXA270 processor running Arcom Embedded Linux with kernel 2.6.16.28-arcom1-2-zeus. I followed the steps described below:
1: Installed development environment (arm-linux- cross compiler etc.) on host pc (x86 processor running Centos 4 with kernel 2.6.9-55.0.9.EL)
2: Created linux-source-2.6.16.18-arcom1 source tree under /opt/arcom/src/ using tarball supplied by Arcom
3: in the root directory of the linux-source-2.6.16.18-arcom1 tree, entered the command cp arch/arm/configs/zeus_defconfig ./.config
4: ran make ARCH=arm CROSS_COMPILE=arm-linux- xconfig
5: changed nothing in xconfig
6: ran make ARCH=arm CROSS_COMPILE=arm-linux-
7: wrote module in /linux-source-2.6.16.18-arcom1/drivers/helloworld
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");

static int hello_init(void)
{
	printk("<1> Hello World!\n");
	return 0;
}

static void hello_exit(void)
{
	printk("<1> Goodbye cruel world!\n");
}

module_init(hello_init);
module_exit(hello_exit);
8: Created Makefile in helloworld directory
obj-m := hello.o
9:At this point, running
$ make -C ../../ M=`pwd` ARCH=arm CROSS_COMPILE=arm-linux- modules
causes built-in.o, hello.ko, hello.mod.c hello.mod.o and hello.o to be created within the helloworld directory.
10: ftp hello.ko to the Zeus board.
11: put hello.ko in /lib/modules/2.6.16.28-arcom1-2-zeus/kernel/drivers/helloworld on the Zeus board
12: from that directory, run /sbin/insmod hello.ko
When I try to do the insmod command, my computer returns the error message
"insmod: error inserting 'hello.ko': -1 Invalid module format"
If I try to use modprobe by running /sbin/modprobe hello.ko, I get the error
"FATAL: Module hello.ko not found."
I have previously been able to cross compile applications for the zeus board using arm-linux-gcc, but this is my first driver.
I have also been able to use insmod with modules that have been provided on the zeus board, such as pxa2xx_spi.ko, but /sbin/modprobe pxa2xx_spi.ko also causes the message "FATAL: module pxa2xx_spi.ko not found."
Any help you can offer would be gratefully received.
EDIT:
After doing some more searches I found out about dmesg, which gives me the message
"hello: version magic '2.6.16.28-arcom1 ARMv5 gcc-3.4' should be '2.6.16.28-arcom1-2-zeus ARMv5 gcc-3.4'"
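In other words, the loader is doing a literal string comparison, nothing smarter. A quick shell sketch of the failure (the release strings below are the ones used in this thread, assumed rather than copied verbatim from the board):

```shell
# insmod on 2.6 kernels rejects a module whose built-in vermagic string
# differs from the running kernel's. The comparison is literal, so even
# the extra "-2-zeus" suffix is enough to fail the load.
built="2.6.16.28-arcom1 ARMv5 gcc-3.4"           # magic baked into hello.ko
running="2.6.16.28-arcom1-2-zeus ARMv5 gcc-3.4"  # magic of the target kernel

if [ "$built" = "$running" ]; then
    echo "vermagic matches: module would load"
else
    echo "vermagic mismatch: insmod fails with 'Invalid module format'"
fi
```

So the fix has to make the build tree produce exactly the same release string as the kernel running on the board.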
kennithwang
10-11-2007 07:52 AM
Run make modules, and after that strip the module with your cross toolchain's strip:
XXX-strip XXX.ko
Have you done that?
AndrewShanks
10-11-2007 08:17 AM
I've not done that - I got some advice from my vendor about how to build my development kernel so that the magics match, and that seemed to work when I tried it. What does the strip command do?
In case anyone else has the same problem, or is curious, I'm including a description of what I did following the vendor's advice:
===What I did following vendor's advice====
For my system (Arcom Zeus running Arcom embedded linux), the vendor supplies a board specific script "ael-kmake" that provides the appropriate parameters to the makefiles if called with ael-kmake -b zeus. Provided I used the vendor's script to ensure that my build was the same as their's, I could get away with changing the magic by hand. On my system, the extension for the magic "-arcom1" or "-arcom1-2-zeus" is defined in a file called localversion00-arcomn, although my reading indicates that in some versions you can get the same effect by changing the line of the top-level Makefile that reads "EXTRAVERSION = (whatever)".
In summary, the steps I took to get a working module were
1) unpack linux kernel to /opt/arcom/src/linux-source-2.6.16.27-arcom1
2) mkdir /opt/arcom/src/build-zeus
3) cp /opt/arcom/src/linux-source-2.6.16.27-arcom1/arch/arm/configs/zeus_defconfig /opt/arcom/src/build-zeus/.config
4) cd /opt/arcom/src/linux-source-2.6.16.27-arcom1
5) open localversion00-arcomn in text editor and change "arcom1" to "arcom1-2-zeus"
6) eal-kmake -b zeus
At this stage the development kernel should be built. The folowing steps are to compile the modules against it.
7)change Makefile associated with module so that it fits better with the script - mainly defining parameters and testing how the makefile is being run:
CC=/opt/arcom/bin/arm-linux-gcc
#location of cross-compiler
ifneq ($(KERNELRELEASE),)
#standard test for whether or not makefile is being called from within kernel build tree
obj-m := hello.o
else
KERNELDIR ?= /*Put the path of your include file here*/
PWD := $(shell pwd)
CFLAGS = -O2 -D__KERNEL__ -DLINUX -Dlinux -DMODULE -DEXPORT_SYMTAB -O3 -Wall -I$(KERNELDIR) -O
default:
$(MAKE) ARCH=arm CROSS-COMPILE=$(CC) -C $(KERNELDIR) M=$(PWD) modules
endif
8) from the linux-source-2.6.16.28-arcom1 directory, issue the command (eg)
ael-kmake -b zeus -m drivers/helloworld
9) ftp the resulting hello.ko file from the hello world directory to the Zeuz board
10) ssh or telnet to the zeus board and run /sbin/insmod hello.ko
I don't know how well this exact sequence will work for non-zeus boards, but hopefully there are some bits that will be useful.
kennithwang
10-13-2007 10:41 PM
Have you experienced embedded linux OS development?
You can download arm cross-compilation-toolchain directly?
And you can make changes to other linux device driver in original source code, other than rewite a new device drviers, such as "hello" driver, so as to test your cross toolhain.
Sorry for my poor english?
Kennith
AndrewShanks
10-15-2007 04:50 AM
This is my first experience with embedded Linux, as you might have guessed from the fact that I'm doing a "hello world" module. I don't really know what makes up a toolchain. Is that preprocessor->compiler->linker, or is it something different? the supplier of my board provided a cd with a script for building what they call a "development environment", including cross-compilers and Linux source. I think there's some support for autoconf and other tools, but I don't really know how to use them.
I've been able to compile applications with the development environment, and I managed to get the hello world module to work after following the instructions from the supplier, and changing the version number in the makefile.
All times are GMT -5. The time now is
08:56 AM
. | http://www.linuxquestions.org/questions/linux-embedded-and-single-board-computer-78/error-1-invalid-module-format-when-using-insmod-with-module-cross-compiled-for-arm-590796-print/ | CC-MAIN-2017-04 | refinedweb | 1,152 | 57.27 |
There?
Great post!
In addition to your two C++ painpoints, I would add porting. Taking a large existing application that works fine with, say HP's compiler, and getting it to run on, say, Sun's compiler, was a nightmare. It was all standard C++, so it should be easy, right???
I'd also add core dumps. One stray pointer bug that happens intermittently and corrupts the stack could take days or weeks to track down. I was just ecstatic when I got my first Java NullPointerException and a nice, clean stack trace.
One other painpoint for me was having to learn everything anew each time I joined a project. Because there was no real library (before STL), every project had it's own data structures (hash, list, etc), logging, algorithms (sorting), and as you mentioned database and GUI.
I'm one of those that gripes about operator overloading, but I certainly agree that it was just uncomfortable, not a real pain. And even then, it was just programmer misuse and not an inherent language problem.
Another thing I found uncomfortable (in C more than C++) was the error checking without exception handling. Having every darn function in the world return true or false to indicate whether it worked or not was just weird.
Andy
Posted by: atripp on January 08, 2008 at 10:26 AM
My Java pain points:
Boilerplate, especially for listeners
Clumsy and inconsistent swing apis
WORA for the desktop doesn't exist - Just about everyone writes Windows apps in Java (even the Linux crowd, who mostly write Windows 95-style apps in Java). I'm predominantly a Mac user, so it's easy to spot the Windowsisms everywhere.
Posted by: goron on January 08, 2008 at 11:01 AM
I agree whole-heartedly on the pain-points with web applications! What are your thoughts on Wicket? When I'm not at work (read when I'm not shackled by corporate tool control) I choose Wicket for all Java web app development. It just makes sense to me.
Posted by: rjlorimer on January 08, 2008 at 11:08 AM
I agree Concurrency can be a real pain. Here are my "Pain Points" that run into most often.
Swing On/Off EDT management
Layout Managers
Quoted Strings + REGEX
Swing is an amazingly powerful API and the new scene graph is going to be really cool, but multi-threading + Swing can be a pain. The EDT forces one to program asynchronously. You spawn a SwingWorker, process long task, update GUI. Some things can't be done like that, say like spawning a background thread to sort the contents of a table model.
There is one place though that might be a good starting point for making things easier. Spawning a modal dialog. Somehow spawning a modal dialog halts the current processing on the EDT, launches a new window, continues to process events and yet returns execution to the line of code after the setVisible(). Could this mechanism be generalized into something reusable? That would be really handy.
Isn't it about time we had an easy to read layout manager? GUI designers come and GUI designers go but the code will always be there for as long as the application is supported. I can't think of any other place in Java where named parameters would make more sense. I put in a request to add a varargs version of add() to Component, but I don't think it's going to be included in 7. It has the possibility of helping to create simpler layout managers. (For the record I think MiG Layout is really nice, but the syntax in the strings is like a language of it's own)
Finally I'd really like to see some kind of syntax for absolute string literals. Meaning that there is no escaping necessary. Have you ever tried to put a \ into a regex? It's like 6 \'s. The change in the compiler is so trivial that I did it myself to OpenJDK and it works great. I'm not the only one to have done this and I know some other people actually submitted the change. Why don't we have this?
Posted by: aberrant on January 08, 2008 at 11:38 AM
I agree with atripp, Templates + portability problems turned me from loving to hating C++ in a few months. At the time I used a lot operator overloading (I was doing a lot of geometrical transformations) but they also caused some headaches - today I consider them a pure danger. Multiple inheritance didn't hurt me, but at the time I went merely with spaghetti code, I learnt design later and today I'm totally for the composition over inheritance thing.
Posted by: fabriziogiudici on January 08, 2008 at 11:59 AM
I'm right there with you. I never felt very comfortable with Swing and friends, too much noise for simple clicks, forms and tables. Add a few decent extensions to the language (closures, first-class properties) to get rid of the most annoying boilerplate, and the fun is back in desktop programming!
Web apps are a different story. I've gone through this madness since the very beginning: CGI, Servlets, JSPs, Struts, Cocoon (oh Lord!), JSF, Facelets and whatnot. Exactly every approach ended up in a messy conglomerate of HTML/XML fragments/templates, Java classes here, JavaScript snippets there, HTTP in between, glued together by other scripting/binding/template-languages (JSP/JSF-EL, Velocity, XSL/XPath, ...), executed by a truckload of JARs in WEB-INF/lib and configured by XML descriptors dictating a gazillion of XML-namespaces which a semi-smart person like me can never memorize. (For a fully-featured rant I have to mention browser incompatibilities, character encoding hiccups, URL encoding/decoding pains and many other things I really never wanted to solve or even know about, but that's not the fault of the Java frameworks).
Nowadays I use GWT, which has all the shortcomings of Swing-like programming, but you know what: For the first time since years I enjoy web-programming again! I'm joyfully writing lots and lots of boilerplate for wiring my widgets to anonymous listeners, and I even pardon the Google folks for their clumsy RPC (boilerplate^2), because with GWT I have a simple and cohesive programming model for both the client/browser- and the server-side, based upon a single primary language: Java (granted, HTML, DOM, CSS and a little bit of JavaScript leaks through this abstraction, but I did expect that).
Posted by: scotty69 on January 08, 2008 at 05:23 PM
(I've posted this on the editorial page, just for emphasis)
Can we spare a few minutes(/hours) trying to count the number of times someone has proclaimed Java to be "dead","in its twilight days","on the decline" and so forth? A few points merit further consideration:
1)Is is a pure coincidence that two professors working for adacore came up with these statements against Java?
2)Any surprise that it came on slashdot (which is replete with anti-[Solaris/Java/Sun] trolls)?
3)Java wasn't built to solve world hunger or to teach device driver programming to sophomores.
4)Its is not a substitute for C. It cannot teach you pointer arithmetic. C should still be taught before Java.
5)There is no inherent problem with the design of the Java language and runtime as much as there is with the way they're taught.
6)A course on Object orientation should accompany a Java programming course.
7)Dispensing with teaching the language structure and the workings of the VM, and instead focusing on struts/jsp/servlet/hibernate at colleges in the name of "practical education" is not a very sensible teaching method, to say the least. (Breaking news: Java != JSP)
8)Breaking news again: Scala binds to the JVM. So, the Java platform atleast isn't a fossil yet.
9)A few "luminaries" can abandon Java and do whatever fetches them more money for now. Nothing stopping them. We live in a free world.
Extrapolating that to the demise of Java is naive, to say the least.
In a few years/months/weeks/days, we will see more requiems sung for Java while (strangely?) Java continues to proliferate.
Posted by: bharathch on January 09, 2008 at 12:11 AM
@bharathch: Is it anything but a pure coincidence that these emeritus professors at Adacore blame Java for all curricular woes? Is Slashdot inherently replete with anti-[Solaris/Java/Sun] trolls? I don't think a bunker mentality is going to help us. Let's analyze what ails Java and fix it.
Actually, the professors have the opposite issue with Java. For them, Java is too alive. They decry the fact that students can actually get useful work done with Java, just by taking a class here and a library there, instead of having to build everything from raw bits.
In fact, the entire article ()
is a case study in the behavior of dinosaurs in the wild.
Posted by: cayhorstmann on January 09, 2008 at 07:34 AM
In my view, we need to fix existing Java features (most notably Generics) before moving onward to new features. I have nothing against new features but let's please first fix some of the complexity we've introduced with previous ones.
Posted by: cowwoc on January 09, 2008 at 08:12 AM
Actually, the professors have the opposite issue with Java. For them, Java is too alive. They decry the fact that students can actually get useful work done with Java, just by taking a class here and a library there, instead of having to build everything from raw bits.
I was thinking the same thing, though they don't want you to be able to build everything from "raw bits" - that would be the case if they were using assembly rather than ADA. Instead, they want you to be able to quickly pick up the over-designed ADA features.
Java features are well designed, C++ features are misdesigned, and ADA features are overdesigned. So, of course a programmer who is used to well-designed features has trouble using the very same over-designed features.
Posted by: atripp on January 09, 2008 at 09:01 AM
Java is a good language for desktop apps. Leave it alone. I don't want it to turn into (C++)++.
Java sucks for web apps. Just like, umm, everything else. For all the hype about Rails/GWT/whatever-your-current-fad-might-be, there is no good framework yet for web development. The web is not the desktop - it's entirely different - but everything we have today for the web is just bows and bells on the same old desktop paradigm.
Posted by: bpreece on January 09, 2008 at 09:16 AM
". They decry the fact that students can actually get useful work done with Java, just by taking a class here and a library there, instead of having to build everything from raw bits."
I don't understand why they just don't set a rule that only some core parts of the Java runtime can be used for such assignments. It seems more a problem of how to run an exam that of how Java is designed.
PS Yes, Slashdot attitude anti-Java has been debated (see here:). The editor hates Java because in the early ages Sun prohibited him from using the Java name in a project he was developing. After reading that, I've pulled out SlashDot from the list of interesting sites, I'm interested in engineering, not in personal revenges.
Posted by: fabriziogiudici on January 09, 2008 at 09:22 AM
I think syntax matters. I have heard the following many times:
1) Python folks on how much they like clean syntax.
2) Ruby folks on how much they like natural syntax.
3) LISP folks on how much they like minimalist and customizable syntax.
4) C++ folks on how much they like turning operator overloading and custom type conversions into very concise and easy-on-the-eyes code.
5) Java folks on how much they like the look of the language ("it's beautiful")
People will take exception with all of the above, and that's natural, everyone likes their own syntax. Java has lots of competition now, if it tries to look like any of the languages other than itself, it'll likely fail to attract the desired crowd and might risk losing its established base. JVM will go on of course, it is a powerful piece of technology, but IMO the more Java stewards attempt to tailor Java into the ivory-tower-academic-look with all the nifty symbols, the more it will appeal in inverse proportion to those who like the opposite. Once a language attains a certain feature-set, most of the modifications to it will revolve around syntax changes. Languages are interfaces to the Turing machine, having all the features such as closures is nice, but if one expression of closures looks familiar and another looks like a natural language which one has no propensity to learn, the individual making the choice might choose more familiar pastures.
Posted by: ide on January 09, 2008 at 10:19 AM
Let's analyze what ails Java and fix it.
One man's fix is another man's cognitive load. Its easy to get carried away with well-meaning language improvements without sparing much thought for the real-world implications of these changes (generics being a case in point). Sure we'd like to replace all the boilerplate with a new properties syntax or some kind of a shorthand morse code. But the result of that could be further alienation of the developer community and more "I love flex" ads. I'm not, by any means, against new features or improving the language. (For instance, replacing the EJB 2.0 boilerplate with JPA & EJB3.0 was much needed.) But we need to be careful in evaluating & introducing these features in a measured fashion.
They decry the fact that students can actually get useful work done with Java, just by taking a class here and a library there, instead of having to build everything from raw bits.
I'd urge everyone to listen to the first 2 minutes of Paul Hilfinger's lecture on sorting algorithms as part of UCBerkley's Fall 2007 Computer Science course (CS 61B) on iTunes. He clearly states that the study of those algorithms is necessary to ensure that the students have a very good understanding of the fundamentals. It is, however, not a substitute for real-world libraries (like Java Collections), he says.
Obviously, while its important to know what tools to use while engineering real world applications, there is no substitute for a good foundation in Computer Science. That being the case, should a course recommend using Java to build a console based text editor, or C to build a web application, or a shell script to illustrate the fundamental sorting/searching techniques?
Posted by: bharathch on January 09, 2008 at 10:52 AM
Just in case the connection wasn't obvious, EJB3.0 (or JUnit 4) wouldn't have been as useful without annotation support. So, carefully thought out, non-intrusive changes to the language are indeed good. I'm not against them.
Posted by: bharathch on January 09, 2008 at 10:57 AM
[i]Java sucks for web apps.[/i]
The problem isn't Java, the problem is the whole notion of "web apps" as *applications* built using a content delivery / hypermedia platform (HTML/HTTP). HTML and HTTP are great for what they were designed for, linking and displaying documents (including images, etc). As a platform for delivering applications, they suck. Quit writing "web applications" and use JNLP, or even (gawd forbid) use the X11 protocol, whatever, if you need to display an application UI remotely. The right tool for the right job, and all that...
Posted by: sprhodes on January 09, 2008 at 01:31 PM
Getting rid of property boilerplate would by my number 1 request.
Everyones code would be at least 20% shorter if we had real properties in java. I'm not talking about a revolution, just the elimination of setProperty and getProperty for trivial properties and easy event listeners. It would make Java much more readable and make swing more pleasant to work with. This could be done with @Property and @ListenableProperty and the boilerplate could be written for you at compile-time.
Please don't reply with "IDEs write the code for you". They don't read the code for you. The compiler should be writing the code for you if it is fully automatic enough for the IDE to write.
Posted by: kweinber on January 09, 2008 at 05:44 PM
The biggest problem with java?
Compatibility.
Specifically library compatibility. If Sun wasn't so attached to forwards compatibility library, public interfaces could evolve and code wouldn't fossilize in a morass of incorrect exception types, mismatched interfaces, boiler plate and abstraction leakage. Then when new language functionalities came around (closures) the language could adapt around them without compiler hacks.
Posted by: i30817 on January 10, 2008 at 04:00 AM
Why dont we stabilize JAVA and develop new language features as plug-ins that integrates well with JAVA, with this the user has a choice to add or remove new language features whilst mainting the stabiltiy of JAVA
Posted by: paksegu on January 10, 2008 at 08:35 AM
The solution to the the pain of using Java is already here. It's the Groovy language. It's absolutely wonderful. Give it a try, you'll love it! It's so expressive and natural. Also look at Grails for web development. It's a brilliant piece of work. Both Groovy and Grails are built for the JVM so you don't have to abandon what you already know. It's the wave of the future imho. Scala looks great, but Groovy (JSR-241) has been out longer - the latest version is 1.5.1.
Posted by: jmlmui on January 10, 2008 at 08:57 AM
@aberrant: Look into Foxtrot to improve your concurrency programming around the EDT in Swing. In some cases, it can be alot better than SwingWorker.
Posted by: guy_davis on January 10, 2008 at 12:08 PM
Java pains:
If there is that much compromise with existing code bases then, Java should stop evolving.
As a language
1. constructors, singletons, statics and initialization of resources: the whole process of lifting the minimal structure up is messy: use DI, conf files ...
2. glitches in inheritance, varargs : hiding ... I wonder why the "trainers" don't mind this !
3. Desktop : anon inner classes et al with events, Swing, EDT, no good component model used, no attr properties, no standard resource management, no swing app framework finished, no good standard binding, no event bus, no good interactive UI design solutions, ... using strings as glue! : you end up with loads of spaguetti and boilerplate. Swing has too much surface and inherits some bad awt designs, RCP too big for most desktop apps. Although some of this stuff is being addressed, 90% of desktop code is a mess to look at and refactor. And we do not have html, js, sandboxes or some corp to put the blame on like others ...
4. Generics and arrays, autoboxing and nulls: problem solutions provided do not properly solve the problems. New features are no good fit, due to not being able to break previous language design decisions.
5. Design: packages, visibility, extending interfaces, checked exceptions: most paths of least resistance are bad, lots of antipatterns.
As APIs: This is IMHO the most important, everybody wants new sexy language features, but what for? You still have to use the old APIs ... Perhaps for more roll-your-own fw :(
0. Some APIs solve the wrong problems, some have too much surface ...
1. Duplication of effort (Web fw, declarative UIs): everybody dissatisfied with existing tools, everybody starts an API-fw, most die, existing code bases full of dead APIs and roll-your-own frameworks.
2. Standard APIs full of rubbish and obsolete stuff due to compatibility. New features are not fully used to refactor the APIs.
As JVM:
1. Little support for more dynamic features, more meta-programming ...
2. Strange, the languages that some push ...
Posted by: javierbds on January 11, 2008 at 01:35 PM
@guy_davis - WOW Fortrot! This is exactly what I was talking about thanks!
Posted by: aberrant on January 11, 2008 at 06:07 PM
Well as far as the web goes, it's funny to watch people trying to turn the web into the desktop. Well, it's not the desktop and is never going to be the desktop...AJAX is a stupid Javascript trick, and like all tricks it's clever and entertaining and good enough for some things, but that's it. Rich client desktops that connect to web services are the once and future king of internet development. Where is my pain? No browser integration in Swing, no Flash or video support either. Ouch.
The attractiveness (read: alleged ease of development) of RIA is of course the runtime and container of your app is already distributed and you can interact with web services out of the box. That the container presents HTML so very well.
Web services in their turn are exciting because hosted software services is the only way anyone knows how to make (big) money anymore. It turns your app into an appliance and your company into a freaking cable provider. $50 bucks a month from 10 million people - woo hoo! At least, that's the story.
The data is out there. The UI is right here. Put them together and you'll have a winner. Just stop making every UI gesture amount to a hit to the server (JAXASS , er I mean AJAX notwithstanding) and leave that logic here, where it belongs. How hard is that to figure out?
Wish list
Flash support
OR
Video support to the core- it's just another object I write java against, like JTree.
HTML (Browser) support to the core...
Continual progression in the tools. Actually, most of what you complain about is a tools issue... the tools do not provide the highest level of abstraction they could.
None of the above is a language issue. DSLs will come and go with their Ds, but general purpose languages are here to stay and Java isn't going anywhere anytime soon.
Posted by: swv on February 05, 2008 at 11:25 AM
oh now I see how formatting works...
Posted by: swv on February 05, 2008 at 11:28 AM | http://weblogs.java.net/blog/cayhorstmann/archive/2008/01/dinosaurs_can_t_1.html | crawl-002 | refinedweb | 3,805 | 63.09 |
Recent changes to #724: Entry with no password

Rony Shapiro — 2014-01-10:
Code should be compilable under VS2005. The errors you've reported are just changes in the source files, e.g., tinyxml et al. has been replaced with pugixml. You should be able to open *.vcxproj files with a text editor (they're XML) and get a good idea of the current file set. Please contact me directly if you need more assistance. ronys at users dot sourceforge dot net.

DrK — 2013-11-18:
If I were to do this, I wouldn't change the code. I would export the entries with everything you want and then post-process the XML file with a simple VBScript procedure. I would replace every line that begins "<password>" with something like "<password>.</password>". I would probably add 2 more lines per entry to stop automatic copying of the password via double clicking an entry after import (the '1' means "View/Edit"): <dca>1</dca> <shiftdca>1</shiftdca>

Rony Shapiro — 2013-11-15:
In principle, VS2005 should still work, as we're (still) not using any of the new C++11 features (this will change in 4.x). You should be able to open the VS11 project files with a text editor and see the files that are currently compiled. The difference from when VS2005 was used shouldn't be that great...

Dave Griffin — 2013-11-15:
Hmmm, doesn't look like pwsafe8.sln for Visual Studio Pro 2005 works anymore, or I'm doing things wrong (I normally use make/gcc on a unix command line). Various missing source files to start with — UUIDGen.cpp and tinyxmlparser.cpp to name but two — so I can't even compile the code to start with, a fairly major hurdle... Would need advice from an existing developer on this. Other errors cover things not being a member of a namespace and various syntax errors. Anyone willing to discuss this off-list to work out what I've done wrong, or is the code no longer compilable under VS2005?

Dave Griffin — 2013-11-14:
Thanks, will have to see if I can persuade someone to buy me an upgrade :-)

DrK — 2013-11-14:
You may have a major issue there. The default VS has been 2012 for quite a while. There is no guarantee that all project files have been updated for old versions of VS. Good luck!

Dave Griffin — 2013-11-14:
No worries. I've downloaded the source and have started to take a look around to see if this is something I could do myself, just trying to get Visual Studio (2005) re-installed so I can do a test compile.

DrK — 2013-11-14:
I don't really develop PWS any more. If another developer can/wants to do this, you may be in luck.

Dave Griffin — 2013-11-14:
Fair enough, I'll work around this then.

DrK — 2013-11-13:
Both Title and Password are the only mandatory fields for a PWS entry. This is used in many places within PWS and not easy to change. Any change would require significant regression testing.
31 August 2011 17:13 [Source: ICIS news]
LONDON (ICIS)--Lotte Chemical UK has shut down its purified terephthalic acid (PTA) plant.
“[I] confirm the plant is offline and the shutdown is on plan. It takes a couple of days to bring off line fully and the process of taking off line started earlier in the week,” said Mark Kenrick, business director for Lotte Chemical UK.
The planned shutdown is a legal requirement that typically takes between two and three weeks.
“Assuming no surprises, we should be back on line by the 20th,” said Kenrick.
Lotte Chemical UK manufactures and sells polyester intermediate PTA and polyethylene terephthalate (PET) resin for use in the packaging industry.
All strings used in the portal UI are stored in language files in the PT_HOME\ptportal\version\i18n folder. Using these language files, you can customize existing strings or add new strings to the portal UI.
Each individual language folder within the i18n directory contains a set of xml files specific to a single language. Folders are named according to the standard ISO 639 language code (i.e., de=German, en=English, es=Spanish, fr=French, it=Italian, ja=Japanese, ko=Korean, nl=Dutch, pt=Portugese, zh=Chinese).
The files in each language folder contain sets of strings for specific sections of the portal UI. The most commonly customized files are listed below:
ptmsgs_portaladminmsgs.xml: Strings used in the Administration section of the portal.
ptmsgs_portalbrowsingmsgs.xml: Strings used for most of the messages seen by portal users.
ptmsgs_portalcommonmsgs.xml: Strings used for common messages repeated throughout the portal.
ptmsgs_portalinfrastructure.xml: Strings used in the portal's underlying infrastructure components (e.g., the "Finish" and "Cancel" buttons seen on editor pages).
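A message file is essentially a list of numbered strings. The excerpt below is a hypothetical illustration — the root element and attribute names are assumptions and may differ in your portal version, so check an actual ptmsgs file in your installation before relying on this exact shape — but it shows the numbered <S> tags that the editing steps below operate on:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical excerpt: element and attribute names other than <S>
     are illustrative only; inspect a real ptmsgs file for the exact schema. -->
<Strings>
  <S N="1">Finish</S>
  <S N="2">Cancel</S>
</Strings>
```

Only the text between the opening and closing <S> tags should ever be edited; the numbers identify each string to the portal.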
The basic procedure for replacing a string in the portal UI is summarized below. See the string replacement examples in the sections that follow for a detailed explanation.
1. Search for the string in the language folder of your choice. To use Windows Explorer's "Containing text" feature, right-click on the language folder and choose Search....
2. Open any files that contain the string in a text editor. (The language files have a UTF-8 byte order mark (BOM) at the beginning of each file to help editors identify the file as UTF-8 character encoding. The BOM for UTF-8 is 0xEF 0xBB 0xBF. Use an editor that is capable of reading and writing UTF-8 files.)
3. Replace the string with the message of your choice by changing the text between the <S> and </S> tags. Some strings are used in more than one place. As noted above, NEVER change the numbers in the <S> tags or modify the order of the strings in a language file. Also note that XML tags are case sensitive; be careful not to inadvertently change the case of any tag.
4. After editing an XML language file, view the file in your browser to verify that the XML is well formed.
5. If your portal is load balanced, you must copy the updated language files to all portal servers.
6. Restart your application server and restart the portal. If the portal fails to start up, you might have corrupted the language files. It is a good practice to use Logging Spy to watch the portal load the files to verify that the XML files have been edited correctly.
Note: Making changes to one language folder does not change the same string in any other language folder. To internationalize your string replacements, you must add a translated version of the string to the appropriate file in each language folder.
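The procedure above suggests viewing each edited file in a browser to verify that the XML is well formed. When many files change at once, the same check can be scripted. The sketch below is a standalone utility — not portal code — that uses only the JDK's built-in XML parser:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.InputSource;
import java.io.StringReader;

// Standalone sketch (not part of the portal API): returns true if the
// given XML text parses, i.e. is well formed.
public class XmlCheck {
    public static boolean isWellFormed(String xml) {
        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);
            // parse() throws SAXException on any well-formedness error
            factory.newDocumentBuilder()
                   .parse(new InputSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```

Wrapping this in a loop over every *.xml file under PT_HOME\ptportal\version\i18n before restarting the portal catches a parse failure early, instead of during a failed portal startup.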
Some customizations require additional UI strings. If your portal supports more than one language, adding strings to the portal XML language files allows your new strings to be localized using the portal's multi-language framework.
Note: To add new strings, use a new XML language file or the SampleMsgs.xml file instead of adding strings to any existing ptmsgs*.xml file. Adding strings to ptmsgs*.xml files can result in string number conflicts.
The sample HTML below can be used in a portlet to retrieve the first string from a new XML language file called my_message_file.xml. The portal knows the locale of the current user and retrieves the string from the correct language folder automatically. (The ".xml" extension is not required when specifying the message file name.) For detailed information on adaptive tags, see the Oracle WebCenter Interaction Web Service Development Guide.
<span xmlns: <pt:logic.value pt:value="$#1.my_message_file"/> </span>
The GetString method of the ActivitySpace object can also be used to retrieve strings. The ActivitySpace knows the language of the current user; the GetString method automatically retrieves the message from the correct language folder.
The sample code below retrieves the first string from a new XML language file called my_message_file.xml:
import com.plumtree.uiinfrastructure.activityspace.*;
...
public String MyNewCode()
{
    myActivitySpace.GetString(1, "my_message_file");
    ...
}
Note: To add a new XML language file, you must add the file to every language folder, even if you do not provide translated strings for each language. The portal will fail to load if the XML language files are not synchronized for every language.
This example shows how to replace the text displayed at the bottom of all portal pages. As noted earlier, changes to one language folder (in this example, the \en folder) do not change the string for other languages.
In your browser window, copy the string you want to search for.
Navigate to the \en language folder in the \i18n directory.
Right-click on the language folder and choose Search....
Paste the string into the Containing text field and click Search Now.
Open the ptmsgs_portalcommonmsgs.xml file in a text editor.
Replace the string with the string you want displayed on each page (for example, "Hello World Corporation").
Save and close the ptmsgs_portalcommonmsgs.xml file.
Restart your application server.
Reload your portal; the new string should appear in the footer at the bottom of the page.
This example shows how to replace the login instructions on the main login page.
In your browser window, copy the string you want to change, for example "Log in to your personalized portal account".
Navigate to the \en language folder in the \i18n directory.
Right-click on the language folder and select Search....
Paste the string into the Containing text field and click Search Now.
Open the ptmsgs_portalcommonmsgs.xml file in a text editor.
Search for the "Log in to your personalized Portal account" string within the ptmsgs_portalcommonmsgs.xml file.
Replace the string with the string you want to appear on the login page, for example "Log in to the Hello World portal account".
Save and close the ptmsgs_portalcommonmsgs.xml file.
Restart your application server.
Reload your portal; the new string should appear on the login page. | http://docs.oracle.com/cd/E23010_01/wci.1034/e14110/usingstringreplacement.htm | CC-MAIN-2014-10 | refinedweb | 1,011 | 66.74 |
16 August 2012 17:03 [Source: ICIS news]
LONDON (ICIS)--The German town of Oestrich-Winkel and chemicals producer Koepp Schaum have called in experts to deal with a toluene di-isocyanate (TDI) leak at the company's site.
Koepp Schaum, a manufacturer and processor of cellular rubber, sponge rubber and polyethylene (PE) foam, said that there was a TDI leakage from a tank at its site in Oestrich-Winkel on Monday, 13 August.
Firefighters are continuing to cool the tank.
The town’s mayor, Paul Weimann, said that the situation on the site was still “complicated and tense”, but there was no risk for neighbouring residents.
Meanwhile Koepp, the town and experts are considering the best and safest way to empty and dispose of the tank, Weimann told German media.
Koepp said that following Monday’s accident, several people had to be hospitalised for observation. German media reports said that 26 people suffered injuries and were hospitalised for a short period.
Koepp uses the TDI to manufacture | http://www.icis.com/Articles/2012/08/16/9587776/germany-town-chem-firm-call-in-experts-after-tdi-accident.html | CC-MAIN-2014-41 | refinedweb | 142 | 62.68 |
Python and Django on Heroku
Posted by Adam
Python has joined the growing ranks of officially-supported languages on Heroku's polyglot platform, going into public beta as of today. Python is the most-requested language for Heroku, and it brings with it the top-notch Django web framework.
As a language, Python has much in common with Ruby, Heroku's origin language, though the two communities have distinct cultures and values.
Let's take it for a spin on Heroku.
Heroku/Python Quickstart
Make a directory with three files:
app.py
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Python!"

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app.run(host='0.0.0.0', port=port)
requirements.txt
Flask==0.7.2
Procfile
web: python app.py
Commit to Git:
$ git init
$ git add .
$ git commit -m "init"
Create an app on the Cedar stack and deploy:
$ heroku create --stack cedar
Creating young-fire-2556... done, stack is cedar
http://young-fire-2556.herokuapp.com/ | git@heroku.com:young-fire-2556.git
Git remote heroku added

$ git push heroku master
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (5/5), 495 bytes, done.
Total 5 (delta 0), reused 0 (delta 0)

-----> Heroku receiving push
-----> Python app detected
-----> Preparing virtualenv version 1.6.1
       New python executable in ./bin/python2.7
       Also creating executable in ./bin/python
       Installing setuptools............done.
       Installing pip...............done.
-----> Installing dependencies using pip version 1.0.1
       Downloading/unpacking Flask==0.7.2 (from -r requirements.txt (line 1))
       ...
       Successfully installed Flask Werkzeug Jinja2
       Cleaning up...
-----> Discovering process types
       Procfile declares types -> web
-----> Compiled slug size is 3.5MB
-----> Launching... done, v2 deployed to Heroku

To git@heroku.com:young-fire-2556.git
 * [new branch]      master -> master
Then view your app on the web!
$ curl http://young-fire-2556.herokuapp.com
Hello from Python!
Dev Center: Getting Started with Python on Heroku/Cedar
All About Python
Created by Guido van Rossum in 1991, Python is one of the world's most popular programming languages, and finds application in a broad range of uses.
Cutting-edge communities, like Node.js and Ruby, encourage fast-paced innovation (though sometimes at the cost of application breakage). Conservative communities, like Java, favor a more responsible and predictable approach (though sometimes at the expense of being behind the curve). Python has managed to gracefully navigate a middle path between these extremes, giving it a respected reputation even among non-Python programmers. The Python community is an island of calm in the stormy seas of the programming world.
Python is known for its clearly-stated values, outlined in PEP 20, The Zen of Python. "Explicit is better than implicit" is one example (and a counterpoint to "Convention over configuration" espoused by Rails). "There's only one way to do it" is another (counterpointing "There's more than one way to do it" from Perl). See Code Like a Pythonista: Idiomatic Python for more.
The Python Enhancement Proposal (PEP) brings a structured approach to extending the core language design over time. It captures much of the value of Internet standard bodies procedures (like Internet Society RFCs or W3C standards proposals) without being as heavy-weight or resistant to change. Again, Python finds a graceful middle path: neither changing unexpectedly at the whim of its lead developers, nor unable to adapt to a changing world due to too many approval committees.
Documentation is one of Python's strongest areas, and especially notable because docs are often a second-class citizen in other programming languages. Read the Docs is an entire site dedicated to packaging and documentation, sponsored by the Python Software Foundation. And the Django book defined a whole new approach to web-based publishing of technical books, imitated by many since its release.
Frameworks and the Web
In some ways, Python was the birthplace of modern web frameworks, with Zope and Plone. Concepts like separation of business and display logic via view templating, ORMs for database interaction, and test-driven development were built into Zope half a decade before Rails was born. Zope never had the impact achieved by the later generation of frameworks, partially due to its excessive complexity and steep learning curve, and partially due to simply being ahead of its time. Nevertheless, modern web frameworks owe much to Zope's pioneering work.
The legacy of Zope's checkered history combined with the Python community's slow recognition of the importance of the web could have been a major obstacle to the language's ongoing relevance with modern developers, who increasingly wanted to build apps for the web. But in 2005, the Django framework emerged as a Pythonic answer to Rails. (Eventually, even Guido came around.)
Django discarded the legacy of past Python web implementations, creating an approachable framework designed for rapid application development. Django's spirit is perhaps best summarized by its delightful slogan: "the web framework for perfectionists with deadlines." Where Rails specializes on CRUD applications, Django is best known for its CMS capabilities. It has an emphasis on DRY (Don't Repeat Yourself). The Django community prefers to create reusable components or contribute back to existing projects over single-use libraries, which helps push the greater Python community forward. While Django is a batteries-included framework, the loose coupling of components allows flexibility and choice.
Other frameworks have found traction as well. Flask, a Sinatra-like microframework, makes use of Python's decorators for readability. Pyramid emerged from the earlier Pylons and TurboGears projects, and their documentation already offers excellent instructions for deploying to Heroku.
Similarly, Python established a pattern for webserver adapters with WSGI. Many other languages have since followed suit, such as Rack for Ruby, Ring for Clojure, and PSGI/Plack for Perl.
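For illustration (this sketch is mine, not from the original post), the WSGI contract is small enough to show in a few lines: an application is just a callable that takes the request environ and a start_response callback, and returns an iterable of byte strings.

```python
# Minimal WSGI application: a callable taking the request environ and a
# start_response callback, and returning an iterable of byte strings.
def application(environ, start_response):
    body = b"Hello from WSGI!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Any WSGI-compliant server can host it, e.g. the stdlib reference server:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, application).serve_forever()
```

Because the interface is just a callable, frameworks like Flask and Django and servers like Gunicorn can be mixed and matched freely.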
In the Wild
Perhaps most striking about Python is the breadth of different realms it has taken root in. A few examples:
- Science and math computing, evidenced by books and the SciPy libraries and conferences.
- Video games, as seen in libraries such as PyGame and Cocos2d.
- As an embedded scripting / extension language, in software such as Blender3D, Civilization IV, and EVE Online (via Stackless Python).
We expect Python to become one of the most-used languages on the Heroku platform, and are overjoyed to welcome our Python brothers and sisters into the fold.
Special thanks to all the members of the Python community that helped with alpha testing, feedback, and patches on Heroku's Python support, including: David Cramer, Ben Bangert, Kenneth Love, Armin Ronacher, and Jesse Noller.
We'll be sponsoring and speaking at PyCodeConf next week. Come chat with us about what you'd like to see out of Python on Heroku!
Further reading: | http://blog.heroku.com/archives/2011/9/28/python_and_django | CC-MAIN-2015-48 | refinedweb | 1,111 | 57.27 |
May 03, 2012 09:15 AM|Priya R|LINK
I have created an MVC application; for the Create New link I have created a controller and view, but I am not able to render the view from the controller. SaveChanges is not coming up in the dropdown (IntelliSense). How should I proceed? Any help?
COntroller code
public ActionResult CreateNew(studentDetail student)
{
    dc.studentDetails.Add(student);
    dc.SaveChanges();
    return View(student);
}
I am not getting the .Add and .SaveChanges methods in the IntelliSense dropdown.
May 03, 2012 10:03 AM|ignatandrei|LINK
Priya R
The savechanges is not coming in drop down..
What dropdown?
Did your code compiles?
Priya R
dc.SaveChanges();
Who is dc ?
May 03, 2012 10:36 AM|Priya R|LINK
dc is the object name of the model class used to access the database.
Not exactly a dropdown; I meant to say I am not getting the SaveChanges function and also the Add method (which should be built in). It says the function does not exist.
The code is compiled, but on clicking the link I am not getting the page.
May 03, 2012 11:02 AM|christiandev|LINK
Is CreateNew being called when debugging? You seem to be missing [AcceptVerbs(HttpVerbs.Post)].
May 03, 2012 11:10 AM|Priya R|LINK
No, the code is not compiled and CreateNew is not called. I tried with [AcceptVerbs(HttpVerbs.Post)], still no luck.
The problem is that SaveChanges and the Add method do not exist.
How do I get those functions?
Should I use any namespace to get the reference?
May 03, 2012 11:59 AM|ignatandrei|LINK
Priya R
no..code is not cmpiled
Please post the compiler error.
May 03, 2012 12:07 PM|Priya R|LINK
Error message is
'System.Data.Linq.Table<MvcApplication2.studentDetail>' does not contain a definition for 'Add' and no extension method 'Add' accepting a first argument of type 'System.Data.Linq.Table<MvcApplication2.studentDetail>' could be found (are you missing a using directive or an assembly reference?)
and the same error for savechanges too
May 04, 2012 03:55 AM|ignatandrei|LINK
Priya R
'System.Data.Linq.Table<MvcApplication2.studentDetail>' does not contain a definition for 'Add'
Maybe it is true. How did you generate the dc ? I think that you are using L2S with EF methods.
May 04, 2012 07:11 AM|ignatandrei|LINK
Priya R
Yes, I am using L2S.
dc is created as
private MusicClassesDataContext dc = new MusicClassesDataContext();
where MusicClassesDataContext is the model.
If not, what is the correct method to use?
The method is
dc.SubmitChanges();
as stated at
not
Priya R
dc.SaveChanges();
You have to look to a L2S tutorial - such as links from
(see Topic / Location)
May 04, 2012 08:22 AM|ignatandrei|LINK
Priya R
What should be used for the Add method then?
Did you look at second link and read what is below Topic/Location?
Reading a tutorial as a whole is greater than the sum of its parts.
May 04, 2012 09:40 AM|ignatandrei|LINK
Priya R
.SubmitChanges is not working
That is because the code does not compile, since you still had the .Add call.
Priya R
...but nothing about Add is given in that link.
Did you check all 3 links ?
Look here also : ( from the 3 links)
More precisely, search for "inserting data".
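To make the thread's resolution explicit: in LINQ to SQL, Table<T> exposes InsertOnSubmit (not Add), and DataContext exposes SubmitChanges (not SaveChanges — those belong to Entity Framework). A rough sketch of the corrected action, reusing the names from the thread:

```csharp
public ActionResult CreateNew(studentDetail student)
{
    // LINQ to SQL equivalents of EF's Add/SaveChanges:
    dc.studentDetails.InsertOnSubmit(student); // Table<T>.InsertOnSubmit
    dc.SubmitChanges();                        // DataContext.SubmitChanges
    return View(student);
}
```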
13 replies
Last post May 04, 2012 09:40 AM by ignatandrei | http://forums.asp.net/p/1799605/4963359.aspx?MVC+Create+new+page | CC-MAIN-2014-42 | refinedweb | 546 | 68.47 |
Euclidean Algorithm to Calculate Greatest Common Divisor (GCD) of 2 numbers
Reading time: 20 minutes | Coding time: 5 minutes
The GCD of two integers X and Y is the largest integer that divides both of X and Y (without leaving a remainder). Greatest Common Divisor is, also, known as greatest common factor (gcf), highest common factor (hcf), greatest common measure (gcm) and highest common divisor.
Key Idea: Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number.
Example: GCD(20, 15) = GCD(15, 5) = 5, since 20 − 15 = 5
Algorithm
The Euclidean Algorithm for calculating GCD of two numbers A and B can be given as follows:
If A = 0 then GCD(A, B) = B, since the Greatest Common Divisor of 0 and B is B.
If B = 0 then GCD(A, B) = A, since the Greatest Common Divisor of A and 0 is A.
Let R be the remainder of dividing A by B assuming A > B. (R = A % B)
Find GCD( B, R ) because GCD( A, B ) = GCD( B, R ). Use the above steps again.
Sample calculation
Sample calculation of Greatest Common Divisor of 2 numbers using Euclidean Algorithm is as follows:
Greatest Common Divisor of 285 and 741

We have to calculate GCD (285, 741).
As 285 is less than 741, we need to calculate GCD (741, 285):
    GCD (285, 741) = GCD (741, 285)
Now, the remainder of dividing 741 by 285 is 171, so we need to calculate GCD (285, 171):
    GCD (285, 741) = GCD (741, 285) = GCD (285, 171)
Now, the remainder of dividing 285 by 171 is 114, so we need to calculate GCD (171, 114):
    GCD (285, 741) = GCD (741, 285) = GCD (285, 171) = GCD (171, 114)
Now, the remainder of dividing 171 by 114 is 57, so we need to calculate GCD (114, 57):
    GCD (285, 741) = GCD (741, 285) = GCD (285, 171) = GCD (171, 114) = GCD (114, 57)
Now, the remainder of dividing 114 by 57 is 0, so we need to calculate GCD (57, 0):
    GCD (285, 741) = GCD (741, 285) = GCD (285, 171) = GCD (171, 114) = GCD (114, 57) = GCD (57, 0)
As B = 0, GCD (57, 0) = 57.

Therefore, the Greatest Common Divisor of 285 and 741 is 57.
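The chain of reductions above is easy to reproduce in code; this small sketch (not part of the original article) records each (A, B) pair the algorithm visits:

```python
def gcd_trace(a, b):
    """Euclidean algorithm that records each (a, b) pair it visits."""
    steps = [(a, b)]
    while b != 0:
        a, b = b, a % b          # replace (a, b) with (b, a mod b)
        steps.append((a, b))
    return a, steps

g, steps = gcd_trace(741, 285)
print(g)      # 57
print(steps)  # [(741, 285), (285, 171), (171, 114), (114, 57), (57, 0)]
```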
Flowchart for the algorithm
Pseudocode
function gcd(A, B):
    if (A < B):
        return gcd(B, A)
    if (B = 0):
        return A
    return gcd(B, A % B)
Complexity
Worst case time complexity : O(log(min(A,B))
Average case time complexity : O(log A)
Best case time complexity : O(1)
Explanation/ Derivation of complexity for Euclidean Algorithm:
Let $T(a,b)$ be the number of steps taken in the Euclidean algorithm, which repeatedly evaluates $\gcd(a,b)=\gcd(b,a\bmod b)$ until $b=0$, assuming $a\geq b$.
Let $h=\log_{10}b$ be the number of digits in $b$ (assuming the time-complexity of the $\mathrm{mod}$ function to be $O(1)$).
- Worst Case scenario
$a=F_{n+1}$ and $b=F_n$, where $F_n$ is the Fibonacci sequence, since it will calculate $\gcd(F_{n+1},F_n)=\gcd(F_n,F_{n-1})$ until it gets to $n=0$, so $T(F_{n+1},F_n)=\Theta(n)$ and $T(a,F_n)=O(n)$. Since $F_n=\Theta(\varphi^n)$, this implies that $T(a,b)=O(\log_\varphi b)$. Note that $h\approx \log_{10}b$ and $\log_bx={\log x\over\log b}$ implies $\log_bx=O(\log x)$ for any fixed base, so the worst case for Euclid's algorithm is $O(\log_\varphi b)=O(h)=O(\log b)$.
- Average case scenario
If $a$ is fixed and $b$ is chosen uniformly from $\mathbb Z\cap[0,a)$, then the number of steps $T(a)$ is
$$T(a)=-\frac12+6\frac{\log2}\pi(4\gamma-24\pi^2\zeta'(2)+3\log2-2)+{12\over\pi^2}\log2\log a+O(a^{-1/6+\epsilon}),$$
or, for less accuracy, $T(a)={12\over\pi^2}\log2\log a+O(1)$
- Best case scenario
$a=b$ or $b=0$ or some other convenient case like that happens, so the algorithm terminates in a single step. Thus, $T(a,a)=O(1)$.
If we are working on a computer using 32-bit or 64-bit calculations, as is common, then the individual $\bmod$ operations are in fact constant-time, so these bounds are correct. If, however, we are doing arbitrary-precision calculations, then in order to estimate the actual time complexity of the algorithm, we need to use that $\bmod$ has time complexity $O(\log a\log b)$. In this case, all of the "work" is done in the first step, and the rest of the computation is also $O(\log a\log b)$, so the total is $O(\log a\log b)$. I want to stress, though, that this only applies if the number is that big that you need arbitrary-precision to calculate it.
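The Fibonacci worst case is easy to check empirically; this illustrative sketch (mine, not from the article) counts the modulo steps taken for consecutive Fibonacci numbers and shows the count growing linearly with n:

```python
def gcd_steps(a, b):
    """Number of modulo steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers F(n+1), F(n) are the worst case:
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

for n in (5, 10, 18):
    # Steps grow linearly in n (here: exactly n - 1 for this indexing).
    print(n, gcd_steps(fib[n], fib[n - 1]))
```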
Implementations
Following are implementations of the Euclidean Algorithm in 10 different languages: Python, C, C++, Java, C#, Erlang, Go, JavaScript, PHP and Scala.
- Python
- C
- C++
- Java
- CSharp
- Erlang
- Go
- JavaScript
- PHP
- Scala
Python
# Python implementation of the Euclidean Algorithm
def gcd(a, b):
    '''The GCD calculation function'''
    if a == 0:
        return b
    if b == 0:
        return a
    return gcd(b, a % b)

# Utility code to test this gcd function:
a, b = 10, 26
print("GCD of {} and {} is {}".format(a, b, gcd(a, b)))  # prints gcd as 2
a, b = 1831, 0
print("GCD of {} and {} is {}".format(a, b, gcd(a, b)))  # prints gcd as 1831
a, b = 11024, 32424
print("GCD of {} and {} is {}".format(a, b, gcd(a, b)))  # prints gcd as 8
C
// A simple implementation of the Euclidean algorithm in C
#include <stdio.h>

int gcd(int x, int y)
{
    if (x == 0)
        return y;
    return gcd(y % x, x);
}

// Driver program to test the above function
int main()
{
    int a = 100, b = 15;
    printf("GCD(%d, %d) = %d\n", a, b, gcd(a, b));
    a = 35, b = 0;
    printf("GCD(%d, %d) = %d\n", a, b, gcd(a, b));
    a = 31, b = 20;
    printf("GCD(%d, %d) = %d\n", a, b, gcd(a, b));
    return 0;
}
C++
#include <stdio.h>
#include <algorithm>

// Part of Cosmos by OpenGenus Foundation
int gcd(int x, int y)
{
    while (y > 0) {
        x %= y;
        std::swap(x, y);
    }
    return x;
}

int lcm(int x, int y)
{
    return x / gcd(x, y) * y;
}

int main()
{
    int a, b;
    scanf("%d %d", &a, &b);
    printf("GCD = %d\n", gcd(a, b));
    printf("LCM = %d", lcm(a, b));
}
Java
import java.util.*;

// Part of Cosmos by OpenGenus Foundation
class Gcd_Calc {
    public int determineGCD(int a, int b) {
        while (b > 0) {
            int r = a % b;
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        Gcd_Calc obj = new Gcd_Calc();
        System.out.println("Enter two nos: ");
        Scanner s1 = new Scanner(System.in);
        int a = s1.nextInt();
        int b = s1.nextInt();
        int gcd = obj.determineGCD(a, b);
        System.out.println("GCD = " + gcd);
    }
}
C#
using System;

namespace gcd_and_lcm
{
    class gcd_lcm
    {
        int a, b;

        public gcd_lcm(int number1, int number2)
        {
            a = number1;
            b = number2;
        }

        public int gcd()
        {
            int temp1 = a, temp2 = b;
            while (temp1 != temp2)
            {
                if (temp1 > temp2)
                    temp1 -= temp2;
                else
                    temp2 -= temp1;
            }
            return temp2;
        }

        public int lcm()
        {
            return a * b / gcd();
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            int a = 20, b = 120;
            gcd_lcm obj = new gcd_lcm(a, b);
            Console.WriteLine("GCD of {0} and {1} is {2}", a, b, obj.gcd());
            Console.WriteLine("LCM of {0} and {1} is {2}", a, b, obj.lcm());
            Console.ReadKey();
        }
    }
}
Erlang
% Part of Cosmos by OpenGenus Foundation
-module(gcd_and_lcm).
-export([gcd/2, lcm/2]).

gcd(X, 0) -> X;
gcd(X, Y) -> gcd(Y, X rem Y).

lcm(X, Y) -> X * Y div gcd(X, Y).
Go
package main

// Part of Cosmos by OpenGenus Foundation
import (
    "fmt"
)

func calculateGCD(a, b int) int {
    for b != 0 {
        c := b
        b = a % b
        a = c
    }
    return a
}

func calculateLCM(a, b int, integers ...int) int {
    result := a * b / calculateGCD(a, b)
    for i := 0; i < len(integers); i++ {
        result = calculateLCM(result, integers[i])
    }
    return result
}

func main() {
    // 8
    fmt.Println(calculateGCD(8, 16))
    // 4
    fmt.Println(calculateGCD(8, 12))
    // 12
    fmt.Println(calculateLCM(3, 4))
    // 1504
    fmt.Println(calculateLCM(32, 94))
    // 60
    fmt.Println(calculateLCM(4, 5, 6))
    // 840
    fmt.Println(calculateLCM(4, 5, 6, 7, 8))
}
JavaScript
function gcd(a, b) {
    return b === 0 ? a : gcd(b, a % b);
}

function lcm(a, b) {
    return b === 0 ? 0 : a * b / gcd(a, b);
}

// GCD
console.log(gcd(15, 2));   // 1
console.log(gcd(144, 24)); // 24

// LCM
console.log(lcm(12, 3));   // 12
console.log(lcm(27, 13));  // 351
PHP
<?php
function gcd($a, $b)
{
    if (!$b) {
        return $a;
    }
    return gcd($b, $a % $b);
}

function lcm($a, $b)
{
    // LCM is 0 if either argument is 0 (and this avoids division by zero).
    if (!$a || !$b) {
        return 0;
    }
    return abs($a * $b) / gcd($a, $b);
}
Scala
object gcdandlcm extends App {
  def gcd(a: Int, b: Int): Int = {
    // Parameters are immutable in Scala, so copy them to vars first.
    var x = a
    var y = b
    while (y > 0) {
      val r = x % y
      x = y
      y = r
    }
    x
  }

  def lcm(a: Int, b: Int): Int = {
    val num1 = math.max(a, b)
    val num2 = math.min(a, b)
    for (i <- 1 to num2) {
      if ((num1 * i) % num2 == 0)
        return num1 * i
    }
    0
  }
}
Applications
The Euclidean Algorithm is one of the most handy algorithms which one can use to speed up simple problems like calculation of Greatest Common Divisor of two numbers.
With Euclidean Algorithm, one can, efficiently, solve these problems:
- Simplify any fraction
- Find co-primes
- Find prime factors of a number
- Change numerical ranges of data (i.e., scaling)
- Arithmetic of the RSA cryptosystem
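As a small illustration of the first two applications (the helper names below are invented for the example), the GCD simplifies fractions and tests co-primality directly:

```python
from math import gcd  # Python's standard-library Euclidean GCD

def simplify(numerator, denominator):
    """Reduce a fraction to lowest terms by dividing out the GCD."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

def are_coprime(a, b):
    """Two integers are co-prime exactly when their GCD is 1."""
    return gcd(a, b) == 1

print(simplify(285, 741))   # (5, 13), since gcd(285, 741) = 57
print(are_coprime(35, 64))  # True
```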