text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
by Joshua Noble
First things first, OpenGL stands for Open Graphics Language but no one ever calls it that, they call it OpenGL, so we're going to do that too. Secondly, at a very high level, OpenGL is how your program on the CPU talks to the program on your GPU. What are those you ask? Well, the thing is that your computer is actually made out of a few different devices that compute, the Central Processing Unit and Graphics Processing Unit among them. The CPU is what runs most of what you think of as your OF application, starting up, keeping track of time passing, loading data from the file system, talking to cameras or the sound card, and so on. However, the CPU doesn't know how to draw stuff on the screen. CPUs used to draw things to screen (and still do on some very miniaturized devices) but people realized that it was far faster and more elegant to have another computational device that just handled loading images, handling shaders, and actually drawing stuff to the screen. The thing is that talking from one device to another is kinda hard and weird. Luckily, there's OpenGL to make it slightly easier, and OF to handle a lot of the stuff in OpenGL that sucks.
OpenGL’s main job is to help a programmer create code that creates points, lines, and polygons, and then convert those objects into pixels. The conversion of objects into pixels is called the "pipeline" of the OpenGL renderer and how that pipeline works at a high level is actually pretty important to understanding how to make OF do what you want it to and do it quickly. OF uses OpenGL for all of its graphics drawing but most of the calls are hidden. It actually uses an implementation of OpenGL called GLFW by default. All graphics calls in the ofGraphics class use calls to common OpenGL methods, which you can see if you open the class and take a look at what goes on in some of the methods. So, let's say you want to call OF line. Well, that actually calls ofGLRenderer::drawLine() which contains the following lines:
linePoints[0].set(x1,y1,z1); linePoints[1].set(x2,y2,z2); // use smoothness, if requested: if (bSmoothHinted) startSmoothing(); glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, sizeof(ofVec3f), &linePoints[0].x); glDrawArrays(GL_LINES, 0, 2); // use smoothness, if requested: if (bSmoothHinted) endSmoothing();
Now, what's going on in there looks pretty weird, but it's actually fairly straight forward. Don't worry too much about the calls that are going on below, just check out the notes alongside them because, while the methods and variable names are kinda tricky, the fundamental ideas are not. So, we've got two points representing the beginning and end of our line, so we set those with the values we passed into ofDrawLine():
linePoints[0].set(x1,y1,z1); linePoints[1].set(x2,y2,z2);
If we're doing smoothing, let's go ahead and do it:
// use smoothness, if requested: if (bSmoothHinted) startSmoothing();
Alright, onto the tricky part:
glEnableClientState(GL_VERTEX_ARRAY); // #1 glVertexPointer(3, GL_FLOAT, sizeof(ofVec3f), &linePoints[0].x); // #2 glDrawArrays(GL_LINES, 0, 2); // #3
What we're doing is saying:
That's kinda gnarly but comprehensible, right? The thing is though, that even though it's a bit weird, it's really fast. openFrameworks code uses something called Vertex Arrays (note the "glEnableClientState(GL_VERTEX_ARRAY)") to draw points to the screen. The particulars of how these work is not super important to understand in order to draw in 3-D, but the general idea is important to understand; pretty much everything that you're drawing revolves around passing some vertices to the graphics card so that you can tell OpenGL where something begins and ends. That "something" could be just a line, it could be a texture from a video, it could be a point in a 3D model of a bunny rabbit, but it's all going to have some points in space passed in using an array of one kind of another. There are all kinds of extra things you can tell OpenGL about your vertices but you pretty much always need to make some vertices and pass them along.
Alright, so that's what some OpenGL looks like, how does this all work? Take a look at the following diagram.
For those of your who've read other OpenGL tutorials you may be wondering: why do these all look the same? Answer: because there's really no other way to describe it. You start with vertices and you end up with rastered pixels. Much like other inevitable things in life, that's all there is to it.
Vertices define points in 3d space that are going to be used to place textures, create meshes, draw lines, and set the locations of almost any other drawing operation in openFrameworks. Generally speaking, you make some vertices and then later decide what you're going to do with them. Drawing a line rectangle is just making 4 points in space and connecting them with lines. Drawing an ofImage is defining 4 points in 3D space and then saying that you're going to fill the space in between them with the texture data that the ofImage uses. Drawing a 3D sphere is, unsurprisingly, just calculating where all the vertices for a sphere would need to go, defining those in an array, and then uploading that array to the graphics card so they can be drawn when sphere.draw() is called. Every time your OF application does any drawing, it's secretly creating vertices and uploading those to the graphics card using what's called a vertex array that gets uploaded to the graphics card. In some cases, like when you call ofDrawRectangle(), the vertices are hidden from you. In other cases, like when you create an ofPolyline, you're participating in generating those vertices explicitly. Let's take a closer look at how that works. You call
line.addVertex(x, y);
Underneath, that just adds that point as a new ofVec2f to the ofPolyline instance. When it comes time to draw them, we have the ofGLRenderer calling:
if(!poly.getVertices().empty()) { // use smoothness, if requested: if (bSmoothHinted) startSmoothing(); glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, sizeof(ofVec3f), &poly.getVertices()[0].x); glDrawArrays(poly.isClosed()?GL_LINE_LOOP:GL_LINE_STRIP, 0, poly.size()); // use smoothness, if requested: if (bSmoothHinted) endSmoothing(); }
So, really what you're doing is storing vertices and depending on whether you want OpenGL to close your application for you or not, you tell it in the glDrawArrays() method to either a) GL_LINE_LOOP close them all up or b) GL_LINE_STRIP don't close them all up. Again, like before, exactly what's going on there isn't super important, but it is good to understand that lines, rectangles, even meshes are all just vertices. Since I just mentioned meshes, lets talk about those! If you want some more info.
The ofMesh is, like the ofPolyline, lots of vertices with some attendant information around them. In the case of a mesh though, there's a lot more information for some interesting reasons. An ofMesh represents a set of vertices in 3D spaces, and normals at those points, colors at those points, and texture coordinates at those points. Each of these different properties is stored in a vector. Vertices are passed to your graphics card and your graphics card fill in the spaces in between them in a processing usually called the rendering pipeline. The rendering pipeline goes more or less like this:
Say how you're going to connect all the points.
Make some points.
Say that you're done making points.
You may be thinking: I'll just make eight vertices and voila: a cube. Not so quick. There's a hitch and that hitch is that the OpenGL renderer has different ways of connecting the vertices that you pass to it and none are as efficient as to only need eight vertices to create a cube. You've probably seen a version of the following image somewhere before.
Generally you have to create your points to fit the drawing mode that you've selected. A vertex gets connected to another vertex in the order that the mode does its winding and this means that you might need multiple vertices in a given location to create the shape you want. The cube, for example, requires eighteen vertices, not the eight that you would expect. If you note the order of vertices in the GL chart above you'll see that all of them use their vertices slightly differently (in particular you should make note of the GL_TRIANGLE_STRIP above). Drawing a shape requires that you keep track of which drawing mode is being used and which order your vertices are declared in.
If you're thinking: it would be nice if there were an abstraction layer for this you're thinking right. Enter the mesh, which is really just an abstraction of the vertex and drawing mode that we started with but which has the added bonus of managing the draw order for you. That may seem insignificant at first, but it provides some real benefits when working with complex geometry. You still do need to be able to think about how your vertices work. For instance, let's say we want to draw a square. Well, a square is 4 points, so we've got it figured out, right?
ofMesh quad; quad.addVertex(ofVec3f(0, 0, 1)); quad.addVertex(ofVec3f(500, 0, 1)); quad.addVertex(ofVec3f(500, 389, 1)); quad.addVertex(ofVec3f(0, 389, 1)); quad.draw();
And then we get:
That's not right. What you need to remember is that the default setting of the mesh is to make triangles out of everything, so you need to make two triangles. What you've given OpenGL is interpreted like so:
You can use other drawing modes if you want but it's really best to stick with triangles (connected triangles to be precise) because they're so much more flexible than other modes and because they're best supported across different devices. Points and wires are also supported everywhere, quads for example, are not. Anyhow, let's draw our mesh correctly:
ofMesh quad; // first triangle quad.addVertex(ofVec3f(0, 0, 1)); quad.addVertex(ofVec3f(500, 0, 1)); quad.addVertex(ofVec3f(500, 389, 1)); // second triangle quad.addVertex(ofVec3f(500, 389, 1)); quad.addVertex(ofVec3f(0, 389, 1)); quad.addVertex(ofVec3f(0, 0, 1)); // first triangle quad.addTexCoord(ofVec2f(0, 0)); quad.addTexCoord(ofVec2f(500, 0)); quad.addTexCoord(ofVec2f(500, 389)); // second triangle quad.addTexCoord(ofVec2f(500, 389)); quad.addTexCoord(ofVec2f(0, 389)); quad.addTexCoord(ofVec2f(0, 0)); quad.draw(); // now you'll see a square
And now we have a mesh, albeit a really simple one. Ok, actually, that's wrong, but it's wrong on purpose. As you can see, we have exactly duplicated some of our addVertex calls above. In a tiny little square it doesn't matter if we use a few extra vertices - but when you're modelling a giant particle blob or something like that, it'll matter a lot.
That's where the index comes in. Indices are just a way of describing which sets of vertices in our vertex array go together to make triangles. The first 3 indices in the index array describe the vertices of the first triangle, the second 3 describe the second triangle, and so on. It's pretty rad and it saves you having to make and store more vertices than necessary. A more typical usage is something like the following:
int width = 10, height = 10; ofMesh mesh; for (int y = 0; y < height; y++){ for (int x = 0; x < width; x++){ mesh.addVertex(ofPoint(x*20, y*20, 0)); // make a new vertex mesh.addColor(ofFloatColor(0,0, 0)); // add a color at that vertex } } // what this is basically doing is figuring out based on the way we inserted vertices // into our vertex array above, which array indices of the vertex array go together // to make triangles. the numbers commented show the indices added in the first run of // this loop - notice here that we are re-using indices 1 and 10 for (int y = 0; y < height-1; y++){ for (int x=0; x < width-1; x++){ mesh.addIndex(x+y*width); // 0 mesh.addIndex((x+1)+y*width); // 1 mesh.addIndex(x+(y+1)*width); // 10 mesh.addIndex((x+1)+y*width); // 1 mesh.addIndex((x+1)+(y+1)*width); // 11 mesh.addIndex(x+(y+1)*width); // 10 } } ofTranslate(20, 20); mesh.drawWireframe();
As we mentioned earlier when you’re using a mesh, drawing a square actually consists of drawing two triangles and then assembling them into a single shape. You can avoid needing to add multiple vertices by using 6 indices to connect the 4 vertices. That gets more complex when you start working with 3-D. You’re going to draw an icosahedron and to do that you’ll need to know how each of the vertices are connected to all of the others and add those indices. When you create your ofMesh instance, you’re going to add all the vertices first and then add all of the indices. Each vertex will be given a color so that it can be easily differentiated, but the bulk of the tricky stuff is in creating the vertices and indices that the icosahedron will use.
This is the icosahedron.h header file:
#pragma once #include "ofMain.h" const int X = 158; const int Z = 256; //This is the data for the vertices, which keeps the data as simple as possible:} }; //data for the indices, representing the index of the vertices //that are to be connected into the triangle. //You’ll notice that for 12 vertices you need 20 indices of 3 vertices each: static GLint indices} }; class icosahedron : public ofBaseApp{ public: float ang; ofMesh mesh; void setup(); void update(); void draw(); };
And now the cpp file:
#include "icosahedron.h" void icosahedron::setup() { ofColor color(255, 0, 0); float hue = 254.f; //Here’s where we finally add all the vertices to our mesh and add a color at each vertex: for (int i = 0; i<12; ++i) { mesh.addVertex( ofVec3f( vdata[i][0], vdata[i][1], vdata[i][2] )); mesh.addColor(color); color.setHue( hue ); hue -= 20.f; } for (int i = 0; i<20; ++i) { mesh.addIndex(indices[i][0]); mesh.addIndex(indices[i][1]); mesh.addIndex(indices[i][2]); } } // give it a little spin void icosahedron::update(){ ang+=0.1; } void icosahedron::draw() { ofBackground(122,122,122); ofPushMatrix(); ofTranslate(400, 400, 0); ofRotate(ang, 1.0, 1.0, 1.0); //Now it’s time to draw the mesh. The ofMesh has three drawing methods: drawFaces(), //which draws all the faces of the mesh filled; drawWireframe(), which draws lines //along each triangle; and drawVertices(), which draws a point at each vertex. //Since we want to see the colors we’re drawing, we’ll draw all the faces: mesh.drawFaces(); ofPopMatrix(); }
The order that you add the indices is vital to creating the right object because, I know this sounds repetitive, it's really important to tell things what order they're supposed to be connected in so that they get turned from points in space into planes in space into objects. There's a reason the ofMesh has a drawWireframe() mode and that reason is that you can always just tell the OpenGL renderer "hey, I don't care about connecting these up, just show me the points". Otherwise, when you want proper faces and shades and the ability to wrap textures on things, you need to make sure that your vertices are connected correctly.
A VBO is a way of storing all of the data of vertex data on the graphics card. You’ve perhaps heard of Vertex Arrays and Display Lists and the VBO is similar to both of these, but with a few advantages that we’ll go over very quickly. Vertex Arrays just let you store all the vertex data in an array on the client side, that is, on the CPU side and then send it to the graphics card when you’re ready to draw it. The downside of that is that you’re still storing the data on the client side and sending it over to the graphics card. So, instead of making all of our vertex data in what’s called “immediate mode”, which means between a glBegin() and glEnd() pair (which you might remember) you can just store vertex data in arrays and you can draw stuff by dereferencing the array elements with array indices. The Display List is a similar technique, using an array to store the created geometry, with the crucial difference that a Display List lives solely on the graphics card. That's a little better because we're not shipping things from one processor to another 60 times a second. This means that once you’ve created the vertex data for geometry, you can send it the graphics card and draw it simply by referencing the id of the stored data. The downside is that display lists can’t be modified. Once they’ve been sent to the card, you need to load them from the card, modify them, and then resend them to the card to see your changes applied. Since one of the conveniences of moving things to the graphics card is reducing the amount of traffic between the graphics card and the rest of your system. The VBO operates quite similarly to the Display List, with the advantage of allowing you to modify the geometry data on the graphics card without downloading all of it at once. So you make something, you store it on the graphics card, and when you're ready to upload it, you simply push the newly updated values leaving all the other ones intact and in the right place.
So, in OF we use the ofVboMesh to represent all the vertices, how they're connected, any colors to be drawn at those vertices, and texture coordinates. Because it extends ofMesh, everything you learned about ofMesh applies here too. You create some points in space, you give indices to the mesh so that it knows which points in space should be connected, colors if you want each vertex to contain a color, and finally texture coordinates for when you want to apply textures to that VBO, and you should be good to go. Creating an ofVboMesh is really easy, you can, for example, just make an ofSpherePrimitive and load it into a mesh:
ofSpherePrimitive sphere; sphere.set(100, 50); mesh = sphere.getMesh();
Adding colors is very easy:
for( int i = 0; i < mesh.getVertices().size(); i++ ) { mesh.addColor(ofFloatColor( float(i)/mesh.getVertices().size(), 0, 1.0 - (float(i)/mesh.getVertices().size()) )); }
There's a few new tricks to VBOs that you can leverage if you have a new enough graphics card, for instance, the ability to draw a single VBO many many times and position them in the vertex shader. This is called instancing and it's available in the ofVboMesh in the drawInstanced() method. You can see an example of this being used in the vboMeshDrawInstancedExample in examples/gl. Generally speaking, if you have something that you know you're going to keep around for a long time and that you're going to draw lots of times in lots of different places, you'll get a speed increase from using a VBO. This isn't always true, but it's true enough of the time.
Although that's nowhere close to everything about vertices and meshes, we're going to move on to another frequently misunderstood but vital part of OpenGL: matrices.
Now take a breath. Before we go further and start dig into matrices, let's set up a simple scene that you can use as reference while reading the next part of this dense tutorial. Since OF version 0.9, you need 5 things to set up a 3D scene: a window, a camera, a material, a light and an object. Let's start from the window.
Create a new project using the ProjectGenerator and edit the main.cpp file as follows. Since OF 0.9, that is the way to set up a window that uses the programmable pipeline. If you want to read in detail what was introduced with the 0.9 version, on the blog there is a detailed review, but for now it is not necessary.
#include "ofMain.h" #include "ofApp.h" //======================================================================== int main( ){ ofGLFWWindowSettings settings; settings.setGLVersion(3, 2); settings.width = 1280; settings.height = 720; ofCreateWindow(settings); ofRunApp(new ofApp()); }
Here you have defined the dimensions of our window and which OpenGL version we want to use.
The second thing that you need is a camera and a light. Later on this tutorial you will see how to get full controll over your camera, for now let's do something really basic. Edit your App.cpp and App.h as follow
// Add this in the App.h ofLight light; ofEasyCam cam; // add these lines to the setup and to the draw method in the App.cpp void ofApp::setup(){ light.setup(); light.setPosition(-100, 200,0); ofEnableDepthTest(); } void ofApp::draw(){ cam.begin(); // here you will draw your object cam.end(); }
With this code you have accomplished two important things. It's a bit like making a movie, you have first to position the light, to turn it on, and then you have to put your camera in the right position. Now the set of our movie is ready for our first scene. If you run this code, you will see a gray screen. That is obvious, there is nothing under our camera. Let's put an actor (a simple box) under the reflectors.
// add this to your App.h file ofBoxPrimitive box; ofMaterial boxMaterial; // edit your App.cpp file and add these lines void ofApp::setup(){ //... boxMaterial.setDiffuseColor(ofFloatColor::red); boxMaterial.setShininess(0.02); } void ofApp::draw(){ cam.begin(); boxMaterial.begin(); box.draw(); boxMaterial.end(); cam.end(); }
In this chunk of code you have added 2 things. The box, our main actor in this movie, and the material, that defines the color of the box and how it reacts to the light. If you run the code you will see a red box in the middle of your screen. In the next part we will see how to move things around using the incredible properties of the ofNode class, which simplifies all the matrices operations needed in a every 3D scene.
Matrices are collections of vertices that are used to move things around. This is a very simplified definition, but for now take it as it is. In the previous example with the red box, OF automatically put the box in the center of the screen. But what if we want to position our box a bit on the right and a bit away from the camera? We have to use the
move method. A method that internally applies a Matrix to our object and moves the object at the position that we want. The coordinates, in this example, are relative to the middle of the screen, in this case 0,0,0. But let's see how the position of our box changes.
void ofApp::setup(){ //... box.move(200, 0, -200); }
What if we want to define the position of an object not relative to the center of the screen, but relative to the position of another object? Think about drawing a car. You draw the body of the car, and then you draw the headlamp of the car, the wheels, and all the other parts that compose a car. If you define the position of all these object relative to the center of the screen (that in this case is the origin of the axes) you have to calculate the distance of every element from the center. But what if the car moves? you will have to recalculate all the positions of all the objects relative to the center, for each single element of the car. That would be terrible! To solve this problem, you have to define the position of each element composing the car not to be relative to the origin of the axis, but to be relative to the body of the car. In this way, moving the car will move all the parts that compose the car. What is happening under the hood, is a bunch of matrix operations. There is a first matrix that it is applied to the car, and that defines the position of the car relative to the center of the screen, and then there are other matrices, each for every element composing the car, that define the position of each element relative to the body of the car. You can find this example in the examples folder, under
examples/3d/ofNodeExample.
Let's add a sphere positioned 100 pixels left from the our box
//In your App.h file ofSpherePrimitive sphere; // In your App.cpp file void ofApp::setup(){ //... box.move(200, 0, -200); sphere.setParent(box); sphere.move(-100,0,0); } void ofApp::draw(){ cam.begin(); box.draw(); sphere.draw(); cam.end(); }
openFrameworks allows us to do matrix operations in an easy way. Under the hood, there are these 3 matrices that are defining how we see our object on the screen. We'll lay them all out really quickly (not because they're not important but because OF relieves you of having to do a ton of messing with them).
The Model matrix
A model, like our
box, is defined by a set of vertices, which you can think of as ofVec3f objects, but are really just X,Y,Z coordinates of these vertices which are defined relative to the center point where the drawing started. You can think of this as the 0,0,0 of your "world space". Imagine someone saying "I'm 10 meters north". If you don't know where they started from, that's not super helpful, but if you did know where they started from, it's pretty handy. That's what the Model matrix is. For OF, this is the upper left hand corner of your window. Really these aren't super meaningful without a view onto them, which is why usually in OpenGL we're talking about the ModelView matrix. That's just the Model matrix time the View matrix, and that begs the question: what's the view matrix?
The View matrix
Little known fact: cameras don't move, when you want to look at something new, the world moves around the camera. If I'm standing in Paris and I want to take a picture of a different side of the Eiffel Tower, I just walk around to the other side. Imagine if instead I just made the entire earth spin around so I could see a different side of the Eiffel tower. Totally not practical in real life but really simple and handy in OpenGL.
So initially your openFrameworks camera, an ofEasyCam instance let's say, is just at 0,0,0. To move the camera, you move the whole world, which is fairly easy because the location and orientation of our world is just matrices. So our
box that thinks it's at 100,100, might actually be at 400,100 because of where our camera is located and it never needs to change its actual values. We just multiply everything by the location of the view matrix and voila: it's in the right place. That means this whole "moving the whole world" is really just moving a matrix over by doing a translate. We're going to dig into what that looks like in a second, right now we just want to get to the bottom of what the "camera" is: it's a matrix. And the relationship between a camera and where everything is getting drawn is called the ModelViewMatrix. Super important? Not really, but you're going to run into it now and again and it's good to know what it generally means.
The Projection matrix
Ok, so know what the world space is and what the view space is, how does that end up on the screen? Well, another thing that the camera has, in addition to a location and a thing that it's looking at (aka View Matrix) is the space that it sees. Just like a movie screen, you've got to at some point turn everything into a 2D screen. A vertex that happens to be at 0, 0 should be rendered at the center of the screen. But! We can’t just use the x and y coordinates to figure out where something should be on screen. We also need to figure out its Z depth because something in front of something should be drawn (and the thing behind it shouldn't). For two vertices with similar x and y coordinates, the vertex with the biggest z coordinate will be more on the center of the screen than the other. This is called a perspective projection and every ofCamera has a perspective transform that it applies to the ModelView matrix that makes it represent not only how to turn a vertex from world space plus camera space but also to add in how a vertex should be shown in the projection that the camera is making. Ok, so before projection, we’ve got stuff in Camera Space:
Now here's what that projection matrix does to it.
Looks wrong, right? But when you look at through the camera, it will look right and that is the secret of the projection matrix: multiplying everything by it makes it all look correct. The frustum is cube and objects that are near to the camera are big and things far away are smaller.
That reminds me of a Father Ted joke. Unlike the toy cows, the projection matrix actually makes things far away small. Lots of times in OpenGL stuff we talk about either the ModelViewMatrix or the ModelViewProjectionMatrix. Both of those are just the different matrices multiplied by one another to get "where things are" and "where things are on the screen". Matrices themselves are the subject of a million different tutorials and explanations which range from awesome to useless but there is one thing that I want to put in here to explain a quick way to read and understand them in OpenFrameworks and OpenGL in general. There's a trick that I've learned to understand matrices which I'm going borrowing from Steve Baker for your edification. Here's an OpenGL matrix:
float m[16];
It's a 4x4 array like this:
m[0] m[4] m[ 8] m[12] m[1] m[5] m[ 9] m[13] m[2] m[6] m[10] m[14] m[3] m[7] m[11] m[15]
If you're not scaling, shearing, squishing, or otherwise deforming your shapes, then you're going to be using the last row, m[3], m[7], m[11] will all be 0 and and m[15] will be one, so we'll skip it for a moment. and focus on the rest. m[12],m[13] and m[14] tell you the translation, i.e. where something is, so that's easy, and the rest tell you the rotation.
So, this is the way that I always visualize this: imagine what happens to four points near to the origin after they are transformed by the matrix:
These are four vertices on a unit cube (i.e. what that's 1 x 1 x 1) that has one corner at the origin. So, what we can do is pull apart the matrix and use different elements to move that little cube around and get a better picture of what that matrix is actually representing.
Skipping the translation part (the bottom row, 3, 7, 11), then the rotation part simply describes the new location of the points on the cube. So with no rotation at all, we just have:
(1,0,0) ---> ( m[0], m[1], m[2] ) (0,1,0) ---> ( m[4], m[5], m[6] ) (0,0,1) ---> ( m[8], m[9], m[10]) (0,0,0) ---> ( 0, 0, 0 )
After that, you just add the translation onto each point so you get:
] )
That may seem a bit abstract but just imagine little cube at the origin. Think about where the cube ends up as the matrix changes. For example, looking at this matrix:
0.707, -0.707, 0, 0 0.707, 0.707, 0, 0 0 , 0 , 1, 0 0 , 0 , 0, 1
When we draw that out, the X axis of our cube is now pointing somewhere between the X and Y axes, the Y axis is pointing somewhere between Y and negative X and the Z axis hasn't moved at all. The entire cube has been moved 1 units in X direction and 0 in the Y and Z:
What you'll tend to see in your ModelView matrix is a lot of rotation and translation to account for the position of your camera and of world space (that is, stuff in the rotation and translation parts of the matrix), what you'll tend to see in your projection matrix is some translation but mostly a lot of skewing (m[3], m[7], m[11]) to show how the camera deforms the world to make it look right on the screen. We're going to come back to matrices a little bit later in this article when we talk about cameras.
There's tons more to know about matrices but we've got to move on to textures!
So, really, a texture is a block of pixels on your GPU. That's different and importantly different, than a block of pixels stored on your CPU (i.e. in your OF application). You can't loop over the pixels in a texture because it's stored on the GPU, which is not where your program runs but you can loop over the pixels in an ofPixels object because those are stored on the CPU, which is where your OF application runs. OF has two ways of talking about bitmap data: ofPixels, stored on your CPU and ofTexture, stored on your GPU. An ofImage has both of these, which is why you can mess with the pixels and draw it to the screen.
There are three important.
You’ve already used textures without knowing it because the ofImage class actually contains a texture that is drawn to the screen when you call the draw() method. Though it might seem that a texture is just a bitmap, it’s actually a little different. Textures are how bitmaps get drawn to the screen; the bitmap is loaded into a texture that then can be used to draw into a shape defined in OpenGL. I’ve always thought of textures as being like wrapping paper: they don’t define the shape of the box, but they do define what you see when you look at the box. Most of the textures that we’ve looked at so far are used in a very simple way only, sort of like just holding up a square piece of wrapping paper.
ofImage myImage; // allocate space in ram, then decode the jpg, and finally load the pixels into // the ofTexture object that the ofImage contains. myImage.loadImage("sample.jpg"); myImage.draw(100,100);
The ofImage object loads images from files using loadImage() and images from the screen using the grabScreen() method. Both of these load data into the internal texture that the ofImage class contains. When you call the draw() method of the ofImage class, you’re simply drawing the texture to the screen. If you wanted to change the pixels on the screen, you would also use an ofImage class to capture the image and then load the data into an array using the getPixels() method. After that, you could manipulate the array and then load it back into the image using setFromPixels():
ofImage theScreen; // declare variable theScreen.grabScreen(0,0,1024,768); // grab at 0,0 a rect of 1024×768. // similar to loadPixels(); unsigned char * screenPixels = theScreen.getPixels(); // do something here to edit pixels in screenPixels // ... // now load them back into theScreen theScreen.setFromPixels(screenPixels, theScreen.width, theScreen.height, OF_IMAGE_COLOR, true); theScreen.update(); // now you can draw them theScreen.draw(0,0);
Textures in openFrameworks are contained inside the ofTexture object. This can be used to create textures from bitmap data that can then be used to fill other drawn objects, like a bitmap fill on a circle. Though it may seem difficult, earlier examples in this chapter used it without explaining it fully; it’s really just a way of storing all the data for a bitmap. If you understand how a bitmap can also be data, that is, an array of unsigned char values, then you basically understand the ofTexture already.
There are three basic ways to get data into a texture:
allocate(int w, int h, int internalGlDataType)
This method allocates space for the OpenGL texture. The width (w) and height (h) do not necessarily need to be powers of two, but they do need to be large enough to contain the data you will upload to the texture. The internal datatype describes how OpenGL will store this texture internally. For example, if you want a grayscale texture, you can use GL_LUMINANCE. You can upload whatever type of data you want (using loadData()), but internally, OpenGL will store the information as grayscale. Other types include GL_RGB and GL_RGBA.
loadData(unsigned char * data, int w, int h, int glDataType) / loadPixels()
This method loads the array of unsigned chars (data) into the texture, with a given width (w) and height (h). You also pass in the format that the data is stored in (GL_LUMINANCE, GL_RGB, GL_RGBA). For example, to upload a 200 × 100 pixels wide RGB array into an already allocated texture, you might use the following:
unsigned char pixels[200*100*3]; for (int i = 0; i < 200*100*3; i++){ pixels[i] = (int)(255 * ofRandomuf()); } myTexture.loadData(pixels, 200, 100, GL_RGB); // random-ish noise
Finally, we can just use:
ofLoadImage(theTex, "path/toAnImage.png");
When we actually draw the texture what we're doing is, surprise, putting some vertices on the screen that say where the texture should show up and say: we're going to use this ofTexture to fill in the spaces in between our vertices. The vertices are used to define locations in space where that texture will be used. Voila, textures on the screen. The way that we actually say "this is the texture that should show up in between all the vertices that we're drawing" is by using the bind() method. Now, you don't normally need to do this. The draw() method of both the ofImage and the ofTexture object take care of all of this for you, but this tutorial is all about explaining some of the underlying OpenGL stuff and underneath, those draw() methods call bind() to start drawing the texture, ofDrawRectangle() to put some vertices in place, and unbind() when it's done. It's just like this:
tex.bind(); // start using our texture quad.draw(); // quad is just a rectangle, like we made in the ofMesh section tex.unbind(); // all done with our texture
Every texture that's loaded onto the GPU gets an ID that can be used to identify it and this is in essence what the bind() method does: say which texture we're using when we define some vertices to be filled in. The thing that's important in this is that each vertex has not only a location in space, but a location in the texture. Let's say you have a 500x389 pixel image. Since OF uses what are called ARB texture coordinates, that means that 0,0 is the upper left corner of the image and 500,389 is the lower right corner. If you were using "normalized" coordinates then 0,0, would be the upper left and 1,1 would be the lower right. Sidenote: normalized coordinates can be toggled with "ofEnableNormalizedTexCoords()". Anyhow, you have an image and you're going to draw it onto an ofPlanePrimitive:
// our 500x389 pixel image bikers.loadImage("images/bikers.jpg"); // make the plane the same size: planeHalf.set(500, 389, 2, 2); // now set the texture coordinates to go from // 0,0 to 250, 194, so we'll see the upper left corner planeHalf.mapTexCoords(0, 0, 250, 194);
Now we'll make a plane with texture coordinates that cover the whole image.
planeFull.set(500, 389, 2, 2); planeFull.mapTexCoords(0, 0, 500, 389);
Now to draw this:
void testApp::draw(){ ofSetColor(255); ofTranslate(250, 196); bikers.bind(); planeHalf.draw(); ofTranslate(505, 0); // 5px padding planeFull.draw(); bikers.unbind(); }
We should see this:
Take note that anything we do moving the modelView matrix around, for example that call to ofTranslate(), doesn't affect the images texture coordinates, only their screen position. What about when we go past the end of a texture?
Eww, right? Well, we can call:
ofLoadImage(bikers, "images/bikers.jpg"); bikers.setTextureWrap(GL_CLAMP_TO_BORDER, GL_CLAMP_TO_BORDER);
Now we get:
Since we're not using power of two textures, i.e. textures that are strange sizes, we can't use the classic GL_REPEAT, but that's fine, it's not really that useful anyways, honestly.
Depth v Alpha
What happens if you draw a texture at 100, 100, 100 and then another at 100, 100, 101? Good question. The answer however, is confusing, if you've got alpha blending on, then, em, it's going to look wrong.
bikers.draw(0, 0, 101); // supposed to up front tdf.draw(0, 0, 100); // getting drawn last
Enable depth test to get it to work:
ofEnableDepthTest(); bikers.draw(0, 0, 101); tdf.draw(0, 0, 100);
Ok, so let's say we made our weird TDF image and bike image PNGs with alpha channel, chopped a hole out of the middle and loaded them in.
bikers.draw(0, 0, 0); tdf.draw(100, 0, -50); // should be 50 pix behind bikers.
Well, we get the visibility, but the TDF is in from of the bikers, which it shouldn't be, let's turn on depth testing:
ofEnableDepthTest(); bikers.draw(0, 0, 0); tdf.draw(100, 0, -50); // should be 50 pix behind bikers.
That's not right either. What's happening? Turns out in OpenGL alpha and depth just don't get along. You can have which pixels selected according to their alpha values or you can have things placed according to their position in z-space. If you want to do both you need to do multiple render passes or other trickery to get it to work, which is a little out of the scope of this tutorial. Suffice to say, that it's a little bit tricky and that you might need to think carefully about how you're going to work with 3D objects and textures that have alpha enabled because it can induce some serious headaches. Alright, enough of that, this part of this tutorial has gone on long enough.
OpenFrameworks has two cameras: ofEasyCam and ofCamera. What's a camera you ask? Well, conceptually, it's a movie camera, and actually, it's a matrix. Yep, math strikes again. It's basically a matrix that encapsulates a few attributes, such as:
And that's about it, you're just making a list of how to figure out what's in front of the camera and how to transform everything in front of the camera. You always have "a camera" because you always have a view, projection, and model matrix (remember those?) but the camera lets you keep different versions of those to use whenever you want, turning them on and off with the flick of a switch, like so:
cam.begin(); // draw everything! cam.end();
So, we always have a camera? Yep, and it has a location in space too. Just imagine this:
What's that -7992 and 79? Well, those are just a guess at a 1024x768 sized window, from the renderers setupScreenPerspective() method:
float viewW = currentViewport.width; float viewH = currentViewport.height; float eyeX = viewW / 2; float eyeY = viewH / 2; float halfFov = PI * fov / 360; float theTan = tanf(halfFov); float dist = eyeY / theTan; float aspect = (float) viewW / viewH; if(nearDist == 0) nearDist = dist / 10.0f; if(farDist == 0) farDist = dist * 10.0f; matrixMode(OF_MATRIX_PROJECTION); ofMatrix4x4 persp; persp.makePerspectiveMatrix(fov, aspect, nearDist, farDist); loadMatrix( persp ); matrixMode(OF_MATRIX_MODELVIEW); ofMatrix4x4 lookAt; lookAt.makeLookAtViewMatrix( ofVec3f(eyeX, eyeY, dist), ofVec3f(eyeX, eyeY, 0), ofVec3f(0, 1, 0) ); loadMatrix(lookAt);
There's a bit of math in there to say: make it so the the view of the camera is relatively proportional to the size of the window. You'll see the same thing in the camera setupPerspective() method:
ofRectangle orientedViewport = ofGetNativeViewport(); float eyeX = orientedViewport.width / 2; float eyeY = orientedViewport.height / 2; float halfFov = PI * fov / 360; float theTan = tanf(halfFov); float dist = eyeY / theTan; if(nearDist == 0) nearDist = dist / 10.0f; if(farDist == 0) farDist = dist * 10.0f; setFov(fov); // how wide is our view? setNearClip(nearDist); // what's the closest thing we can see? setFarClip(farDist); // what's the furthest thing we can see? setLensOffset(lensOffset); setForceAspectRatio(false); // what's our aspect ratio? setPosition(eyeX,eyeY,dist); // where are we? lookAt(ofVec3f(eyeX,eyeY,0),ofVec3f(0,1,0)); // what are we looking at?
We get the size of the viewport, figure out what the farthest thing we can see is, what the nearest thing we can see is, what the aspect ratio should be, and what the field of view is, and off we go. Once you get a camera set up so that it knows what it can see, it's time to position it so that you can move it around. Just like in people, there are 3 controls that dictate what a camera can see: location, orientation, and heading. You can kind of separate what a camera is looking at from what it's pointing at but you shouldn't, stick with always looking ahead, the ofEasyCam does. Because a ofCamera extends a ofNode, it's pretty easy to move it around.
cam.setPosition(ofVec3f(0, 100, 100));
it's also pretty easy to set the heading:
cam.lookAt(ofVec3f(100, 100, 100));
You'll notice that the signature of that method is actually
void lookAt(const ofVec3f& lookAtPosition, ofVec3f upVector = ofVec3f(0, 1, 0));
That second vector is so that you know what direction is up. While for a person it's pretty hard to imagine forgetting that you're upside-down, but for a camera, it's an easy way to get things wrong. So as you're moving the camera around you're really just modifying the matrix that the ofCamera contains and when you call begin(), that matrix is uploaded to the graphics card. When you call end(), that matrix is un-multiplied from the OpenGL state card. There's more to the cameras in OF but looking at the examples in examples/gl and at the documentation for ofEasyCam. To finish up, lets check out the way that the ofEasyCam works, since that's a good place to start when using a camera.
So, as mentioned earlier, there are two camera classes in OF, ofCamera and ofEasyCam. ofCamera is really a stripped down matrix manipulation tool for advanced folks who know exactly what they need to do. ofEasyCam extends ofCamera and provides extra interactivity like setting up mouse dragging to rotate the camera which you can turn on/off with ofEasyCam::enableMouseInput() and ofEasyCam::disableMouseInput(). There's not a huge difference between the two, but ofEasyCam is probably what you're looking for if you want to quickly create a camera and get it moving around boxes, spheres, and other stuff that you're drawing.
Onto using these things: both of those classes provide a really easy method for setting a target to go to and look at:
void setTarget(const ofVec3f& target); void setTarget(ofNode& target);
These methods both let you set what a camera is looking at and since you can always count on them to allow you to track something moving through space, pretty handy. In ofCamera there are other methods for doing this and more but I'll let you discover those on your own. One last thing that's tricky to do on your own sometimes is how do you figure out what where something in space will be relative to a given camera? Like, say, where a 3D point will be on the screen? Voila, worldToScreen()!
ofVec3f worldToScreen(ofVec3f WorldXYZ, ofRectangle viewport = ofGetCurrentViewport()) const;
How do you figure out where something on the screen will be relative to the world? Like, say, where the mouse is pointing in 3d space?
ofVec3f screenToWorld(ofVec3f ScreenXYZ, ofRectangle viewport = ofGetCurrentViewport()) const;
How do you figure out where something on the screen will be relative to the camera?
ofVec3f worldToCamera(ofVec3f WorldXYZ, ofRectangle viewport = ofGetCurrentViewport()) const;
How do you figure out where something relative to the camera will be in the world?
ofVec3f cameraToWorld(ofVec3f CameraXYZ, ofRectangle viewport = ofGetCurrentViewport()) const;
As with everything else, there's a ton more to learn, but this tutorial is already pushing the bounds of acceptability, so we'll wrap it up here. A few further resources before we go though:
Have fun, ask questions on the forum, and read our shader tutorial if you want to keep learning more. | http://openframeworks.cc/ofBook/chapters/openGL.html | CC-MAIN-2017-04 | refinedweb | 8,329 | 70.02 |
Published 9 months ago by Riotsmurf
I am creating business logic in my Laravel project. I am creating an order and adding samples to the order. Each sample( depending on its type ) has a set of tests that it requires by law so i made this.
So i use getTestCollection() to return tests a product type needs.
<?php namespace App\TestLogic; class Product { private $type; private $typeFactory; public $tests = [ "Foreign Matter" => 0, "Microbial" => 0, "Moisture" => 0, "Mycotoxin" => 0, "Pesticide Residue" => 0, "Potency" => 0, "Residual Solvent" => 0, "Terpenes" => 0, "Water Activity" => 0 ]; public function __construct(ProductTypeFactory $typeFactory) { $this->typeFactory = $typeFactory; } /** * Gets a collection of tests based on the type of this product * @return Array collection of tests required by type. */ public function getTestCollection() { $classType = str_replace(' ', '',$this->type); $productType = $this->typeFactory->getProductTestType($classType); return $productType->getTestCollection($this); } public function setType($type) { $this->type = $type; } public function getType() { return $this->type; } }
There are some types that require the same set of tests like, Flower and Flower Mix. So i made this.
<?php namespace App\TestLogic; class VegetationType implements IproductTypeTest { protected $requiredTests = [ "Moisture" => 1, "Water Activity" => 1, "Potency" => 1, "Foreign Matter" => 1, "Microbial" => 1 // "Mycotoxins" ]; public function getTestCollection(Product $product) { foreach ($this->requiredTests as $key => $rTest) { $product->tests[$key]+=$rTest; } return $product->tests; } }
And my Flower, Flower Mix classes extend this. So now they are empty like this.
<?php namespace App\TestLogic; class FlowerMixTypeTest extends VegetationType { } <?php namespace App\TestLogic; class FlowerTypeTest extends VegetationType { }
Should VegitationType be a trait and not a class? This is my first go at this and i have a feeling its a little weird...
Two good ways to handle "same" fields applying is polymorphic relations and the old school way which is still 100% fine is a checkbox or radio button.
This set of data applies to this or that, via a radio button. I just like the checkbox better.
jlrdw: This is a select, not a checkbox or radio button. there are 6 parent types. VegetationType, ExtractType, TopicalType etc. I am not going to force the users to select "Flower" or "Flower Mix" and then check a radio button to pick vegetation because it leaves too much room for user error. This does not answer my question.
This is a select, not a checkbox or radio button
Still same idea instead of a check(0 or 1) you still have a "certain word" describe what set of data it is. Also index that column, and later you can query where ("that select option").
Sounds as though you are pretty much setup with what's needed.
seems like a bad idea to store such behaviour in code
It would be better to have the tests maintained in a database model.
As it stands, if flower mix no longer required moisture test, you have to change your code and redeploy.
Snapey: Hmm i see what you mean.
We had a database idea at first but then felt like we didn't want to store this information because its just requirements that are set by the state. We felt like the law does not change enough, and if it did it would not be a big deal to go into the FlowerMixTypeTest class and set the $requiredTests variable to what it would require. Then commit, push, and merge. Would you consider this bad still?
I suppose it would be about which is faster/easier for the system. Whether it is easier to store and update this information in the database, or less time to change 1 class.
Please sign in or create an account to participate in this conversation. | https://laracasts.com/discuss/channels/general-discussion/am-i-breaking-the-o-rule-in-solid | CC-MAIN-2018-13 | refinedweb | 598 | 62.27 |
Post your Comment
J2ME Draw Triangle, Rectangle, Arc, Line Round Rectangle Example
J2ME Draw Triangle, Rectangle, Arc, Line Round Rectangle Example... tutorial, we are going to describe how to draw a
triangle, rectangle, arc, line or a round rectangle in one small application.
Although we have already explained
Rectangle Canvas MIDlet Example
of rectangle in J2ME.
We have created CanvasRectangle class in this example... first two rectangle
draw from the solid and dotted line and the other two rectangle draw as round
Rectangle shape. To draw these types of rectangle we are useing
PHP GD Draw Rectangle
Java draw triangle draw method?
Java draw triangle draw method? hi
how would i construct the draw method for an triangle using the 'public void draw (graphics g ) method? im... a rectangle and this works for a rectangle:
public void draw(Graphics g
Rectangle
Rectangle Could anybody help me on this problem,
Write two Rectangle objects with the following properties:
Rectangle1:
Height 15
width 53
Y 25
X 15
Rectangle2:
height 47
Width 60
Y 12
X 0
It's to be used four-argument
Draw Rectangle in J2ME
Draw Rectangle in J2ME
... are used to draw a rectangle using J2ME language:
g.setColor (255, ... it to create rectangle and to set the
color of canvas and draw line or box.
print rectangle triangle with ?*? using loops
print rectangle triangle with ?*? using loops *
* *
* * *
i want print like this.what is the code?
import java.lang.*;
class Traingles
{
public static void main(String args[])
{
for(int i=1;i<=5
Draw Ellipse in Rectangle
Draw Ellipse in Rectangle
... the rectangle.
To draw an Ellipse inside the rectangle, we have defined two classes... a rectangle and a circle respectively.
By using draw() method of class
plotting of points inside a rectangle
example ,if I have a Rectangle of 20m by 14m then one possible placement of points...plotting of points inside a rectangle I want to plot various points inside a rectangle such that any two points are at a distance of at least 3
Program to draw rectangle on each mouse click and erase priviouse rectangles
Program to draw rectangle on each mouse click and erase priviouse rectangles Program to draw rectangle on each mouse click and erase previous rectangle on next mouse click
print a rectangle - Java Beginners
print a rectangle how do I print a rectangleof stars in java using simple while loop?Assuming that the length n width of the rectangle is given...("Give me the size of each triangle : ");
int size = Integer.parseInt(n.readLine
How to calculate area of rectangle
How to Calculate Area of Rectangle
... of rectangle. The area of
rectangle is specifies an area in a coordinate space that is enclosed by the
rectangle object. In the program coordinate of space its
Draw arc in J2ME
Draw arc in J2ME
The given example is going to draw an arc using canvas class of J2ME. You can
also set a color for it, as we did in our example.
Different methods
How to make a Rectangle type pdf
How to make a Rectangle type pdf
...
make a pdf file in the rectangle shape irrespective of the fact whether... rectangle.
The code of the program is given below
How to draw a television
with this example.
New File: Take a new file with required
size.
Rectangle Shape: First draw a Rectangle shape
with black color by using Rectangle tool (U...;
color and rounded rectangle tool (U key) to draw a rounded rectangle shape
Arc MIDlet Example
Arc MIDlet Example
In the previous draw arc example, we have explained how to draw an arch on
the screen. But in this example we are going to show how to draw arc
Draw a Triangle using a Line2D
draw
three line segments using the class Line2D to create a triangle. ...
Draw a Triangle using a Line2D
This section illustrates you how to draw a triangle using
print rectangle pattern in java
print rectangle pattern in java *
* *
* *
* *
how to generate this pattern in java??
Hi friend try this code may this will helpful for you
public class PrintRectangle
{
public static void main
Draw An Arc in Graphics
Draw An Arc in Graphics
In this section, you will learn how to draw an arc in Graphics.
An arc of a circle is a segment of the circumference of the circle. To draw
an arc
By using Applet display a circle in a rectangle
By using Applet display a circle in a rectangle Write a java applet to display a circle in a rectangle
triangle
triangle how to draw triangle numbers with stars in html with the help of javascript
J2ME Draw Triangle
J2ME Draw Triangle
... application, we
are using canvas class to draw the triangle on the screen. In this example... will look like as follow...
Source code to draw a triangle in J2ME
file name
Draw a Flowchart
and set the size using Rectangle class.
To connect the boxes, we draw... Draw a Flowchart
This section illustrates you how to draw a Flowchart to compute
Adapters Example
been used as an anonymous inner class to draw a
rectangle within an applet. This example demonstrates the functionality
of the mouse press. That is on every...
Adapters Example
Draw Line
Draw Line sir i want to draw a moving line in j2me.That line should also show arrow in moving direction. How can we do so
Java gui program for drawing rectangle and circle
Java gui program for drawing rectangle and circle how to write java gui program for drawing rectangle and circle?
there shoud be circle and rectangle button, check box for bold and italic, and radio button red,green and blue
How to draw a house, draw a house, a house
How to draw a house
Use this example to draw a house in the
photoshop, it has been...; color and Rectangle tool
(U key) to draw a rectangle shape like a window.
Go
Post your Comment | http://roseindia.net/discussion/22689-J2ME-Draw-Triangle-Rectangle-Arc-Line-Round-Rectangle-Example.html | CC-MAIN-2015-22 | refinedweb | 994 | 60.55 |
The objective of the game will be to hit a moving target with a projectile fired from a shooter!
Requirements:
Visual C++ 2008 express
The "Dark GDK" add-in for VC++ 08
_____________________________
First Steps: Create the sprites
You will need to create the sprites that will be used in the program, I am going to use sprites without animations for this tutorial.
First I will design the background of my game, the background image can be anything from a simple colored canvas to a 3D world!
Ok, somehow I managed to come up with this beautiful creation!!! (not really!)
Number of downloads: 22858
Next, the target
Number of downloads: 12203
the shooter
Number of downloads: 12917
and last but not least, the projectile!
Number of downloads: 10475
_____________________________________
Now that we have our sprites ready, we are ready to start on the coding part of our game!
1. Start VC++ 08
2. Click "File->New->Project"
3. Under wizards, select "Dark GDK - GAME"
4. Set the name & storage location
5. Click Ok.
1. Delete all the code in "main.cpp".
2. Insert the following starting code into the main.cpp file:
#include "DarkGDK.h" //Required Header File void DarkGDK ( void )//The Main Funtion for the Dark GDK { // in this application a backdrop is loaded and then several // sprites are loaded, the sprites are targets! dbSyncOn ( ); //Turn on Sync dbSyncRate ( 100 ); // Set Sync Rate for screen: // The lower it is, the faster the game runs but the visual effects will be bad. // The higher the setting - the opposite will occour! dbDisableEscapeKey ( ); //disable the "escape" key dbRandomize ( dbTimer ( ) ); //use the clock to obtian random numbers more effieciently dbLoadImage ( "backdrop.bmp", 1 ); // this will load our background onto the game dbSprite ( 1, 0, 0, 1 ); // sprite used for the background dbSetImageColorKey ( 255, 0, 255 );// Set the transparency color of the sprites, in this case, it will be bright pink while ( LoopGDK () ) { if ( dbEscapeKey ( ) ) // if the ecape key is pressed, the program will exit break; dbSync ( );//update the screen contents } // close the program // // delete all the sprites for ( int i = 1; i < 30; i++ ) dbDeleteSprite ( i ); // delete the backdrop image dbDeleteImage ( 1 ); return;//return back to windows }
Now, we have a simple game going! No I mean like REALLY Simple!
So let's add in the code that loads and displays the target and the shooter:
between dbSetImageColorKey ( 255, 0, 255 ); & while ( LoopGDK () ) Insert this code:
int i = 1, T = 2, S = 3, P = 4; dbLoadImage("target.bmp", T);//load the target dbSprite ( T, 250, 0, T );//Display the target
Now the target will be displayed at the top of the screen!
Insert the shooter:
before the while ( LoopGDK () ) line, Insert this code:
i++; dbLoadImage("shooter.bmp", S);//Load the shooter dbSprite ( S, 200, 460, S );//Display the Shooter
This will load the shooter onto the screen when the game is started!
After that, we are ready to create the part of the game that controls the animation and movement of the target!
Let's add the variables to our project:
beforewhile ( LoopGDK() )
add this:
//begin the animation loop// dbLoadImage("PROJ.bmp", P);//load the Projectile sprite int D = 0; int R = 1; int M = 200; int AC = 50; int BC = 4;
this code will load the projectile sprite and prepare the variables
To make the target move back and forth, add this code after the LoopGDK() statement:
//START the animation of the target// int DIF = 5;//The difficulty of the game, *Make sure this number is either 2, 5, 10, or 50!; if(R == 1) { dbSprite(T,D, 1,T); D=D+DIF; if(D==650) { R = 0; } } else if(R == 0) { dbSprite(T,D,1,T); D=D-DIF; if(D == 0) { R = 1; } } //END target animation//
Now, if you run the game, the target will bounce back and forth at the top of the screen!
We want the user to be able to move the shooter back and forth at the bottom of the screen. After //END target animation// add:
//START user input controls// if( dbRightKey() && !(M > 625)) { dbSprite(S, M, 460, S); M = M + 2; } if( dbLeftKey() && !(M <= 0)) { dbSprite(S, M, 460, S); M = M - 2; }
Then, after that, add the control that will enable the user to "Fire" the projectile:
if( dbSpaceKey()) { if(AC == 50) { dbSprite(P, M, 425, P); AC = 0; } } if(AC < 50)//If the projectile is active, { dbMoveSprite(P, 10);//then make it move AC++; } //END User input controls//
This will make the program fire the projectile when the [SPACE] bar is pressed. The seconde part (
if(AC < 50)//If the projectile is active, { dbMoveSprite(P, 10);//then make it move AC++; }) will make the projectile move up untill it hits the top of the screen
Now that that's done, The game is almost done, if you run it, you should be able to move the shooter and fire the projectile. Also, the target will move from back to forth.
Finally, we will add the part of the game where, if the projectile hits the target, the game will complete and exit:
After //END User input controls//
Add this code:
//START Target hit/missed controls// if(AC < 50) { if((dbSpriteY(P) >= 0 && dbSpriteY(P) <= 10)) { if((dbSpriteX(P) <= (dbSpriteX(T) + 20) && (dbSpriteX(P) >= (dbSpriteX(T) - 20)))) { dbDeleteSprite(T); MessageBox(NULL, "You Won!, Click OK to exit.", "Congrats!", MB_OK); break; } } } //END Target hit/missed controls//
Now, the game should be fully functional! You should be able to fire at the target and, if you hit it, the game will display a message box, and exit!
We are now pretty much done, there are just a few things that can be edited: The first thing is the difficulty level, do you see in the code int DIF = 5;//The difficulty of the game, *Make sure this number is either 1, 2, 5, 10, or 50!;
This code controls how fast the target moves, by default, it is set to 5, the higher the number, the harder it is to hit the target. *Remember: If you change the difficulty, it MUST be set to either: 1, 2, 5, 10, or 50, or the game will not work properly!
The second thing is the screen sync rate: dbSyncRate ( 100 ); , I have it set to 100, if this number is lower, the actions of the game will be slower, if higher, the actions of the game will work faster, but the game will require more system resources. In this game, there is not much to worry about, since it is not that complex, but in a very complex 3D game, higher sync rates will greaty affect the system resources required.
And that's about it, enjoy your new game!
-AJ32
| http://www.dreamincode.net/forums/topic/42855-creating-a-game-in-c/page__p__1401040 | CC-MAIN-2016-30 | refinedweb | 1,122 | 71.89 |
30 December 2010 16:08 [Source: ICIS news]
By Nigel Davis
?xml:namespace>
Third-quarter financial results showed just how much progress had been made through 2010 quarter to quarter, let alone against a depressed 2009. And at the time, some firms were forecasting a record year.
While the market has delivered much needed relief and demand growth, it has been the attention to costs and inventories that has helped deliver such strong returns. Tight cost control remains paramount across the sector as it does in so many other industries with business primed to generate cash.
Companies have focused on debt repayment and sought more favourable, longer-term financing options. Only recently has attention turned more to what might be done with excess cash in terms of stronger merger and acquisition activity and enhanced capital spending.
For most of the sector, the return to growth in 2010 was surprising and significant, with the bounce-back driven first by demand from developing economies but then by strengthening European business.
EU statistics show the rate of growth of output across the sector improving markedly in 2010 compared with 2009 but also a growth slowdown in the third quarter. That slowdown in the rate of recovery will have been felt in the also seasonally slow fourth quarter and will be a major feature in the first months of 2011.
The high and rising oil prices has featured strongly on chemicals markets in recent weeks and producers are being hard-pressed to pass on higher costs in product prices. Closely matched output to demand, however, has kept markets tight.
At the end of October, BASF said that it expected 2010 to be a record year with operating earnings, earnings before interest and tax before special items, sales exceeding the prior 2007 high point. It added that it expected to earn a “high premium” on the cost of capital.
Bayer has remained optimistic for the 2010 group outlook and particularly for its MaterialScience business, although it has expected a significant seasonal fourth-quarter slowdown although a marked improvement on the 2009.
“Overall, the MaterialScience business has recovered impressively and more quickly than expected. This means we will meet our original target of returning to the pre-crisis level at MaterialScience by 2012 much earlier than planned,” it said on reporting the third quarter financial results.
Fourth-quarter year-on-year sales growth was expected to slow compared with 30% for the first three quarters. The slower fourth-quarter would keep growth for the whole year below 30%.
INEOS said in December that improving performance and disposals would help it pay down €200m ($264m) of debt at the year end in addition to a scheduled payment of €60m. At the end of September operating earnings before depreciation and amortisation were sharply higher compared with the equivalent 2009 period.
Improving finances were reported by a swathe of other chemical companies, although in operating terms European business was not as robust and growing as strongly as that in other parts of the world.
Players in the industry are relying very much on self-help in still difficult times although many see ways in which they can keep costs down to continue to hold margins.
The company’s optimism stemmed from its scale in diverse markets; the expectation of further cost savings and, importantly, what it said was “evidence of sustained industrial demand beyond re-stocking”.
( | http://www.icis.com/Articles/2010/12/30/9422572/outlook-11-european-chemical-companies-see-strong-2010-outturn.html | CC-MAIN-2014-52 | refinedweb | 567 | 56.59 |
So I Made A Python Script Which Sets My Desktop Background As An Image From Reddit
For the past few months I have been experimenting with Python. Coming from a background in Java and C#, I was amazed about how easy it is to learn and do from the simplest stuff like writing a “Hello World” program to more complex tasks like calling public APIs and manipulating their data.
After a couple of months of playing around doing some nonsense experiments with Python, I’ve finally built a script that would be of some use for me. A script that the top rated image from the last 24 hours from a reddit subreddit and set my desktop background to that image.
So How Does It Work?
In general the script works as follows:
- Get the top image from the a subreddit specified by the user (eg. spaceporn or japanpics) via the reddit API
- download the image to the user’s computer
- set the image as the user’s desktop background
Code Breakdown
The whole code for the project is available on GitHub. Go ahead, download the code, play around with it or even add your own touch to it!
Now I’m going to break down the code line by line:
1. Imports
import urllib.request
import json
import sys
import ctypes
import time
These are the libraries that need to be imported to the project for the script to work properly. These are:
- urllib.request: I used this library to get the response for the call to the reddit API, as plain JSON text
- json: this library decodes the JSON text into a Python object
- sys: the subreddit will be provided by the user in the command line. The sys library provides functions for grabbing the text provided by the user.
- ctypes: provides functionality to set the Windows background to an image of your choice.
- time: provides the sleep() function to make the script wait for a certain amount of time.
2. Get Subreddit URL
getSubredditURL = lambda subreddit: “" + subreddit + “/search.json?q=url%3A.jpg+OR+url%3A.png&sort=top&restrict_sr=on&t=day”
This lines creates a function called getSubredditURL() which takes the subreddit as parameter. The URL gives the top images of the last 24 hours in .jpg or .png format, as JSON text.
For example, for /r/EarthPorn, the URL of the top images is:
3. Get Text
getText = lambda url: urllib.request.urlopen(url).read()
getText() is a function for getting the text of the address above and returning it as a string.
4. Get JSON From Text
getJSONFromText = lambda text: json.loads(text.decode(‘utf-8’))
Next, we need to get the JSON formatted text returned from getText() and convert into a Python object. getJSONFromText() gets the JSON text as a parameter and returns the Python object.
5. Get Children From JSON
getChildrenFromJSON = lambda json: json[“data”][“children”]
In the reddit API, the individual posts (whether link or self posts) are known as children. To retrieve these, we must go to JSON’s “data” node and then “children”. The getChildrenFromJSON() takes in the Python object and returns the list of all the “children” retrieved from JSON.
6. Get Data Of Child
getDataOfChild = lambda children, index: children[index][“data”]
Now we need to get the data for a specific image. This will help us get details about the image, such as its URL, title, score etc. getDataOfChild() takes as parameters, the “children” list and an index. The index specifies which child to get data from.
7. Get Image Name From URL
getImageNameFromURL = lambda url: url.split(“/”)[-1].split(“?”)[0]
The image URL will be returned in the form of either or. This simple function cuts out the domain and parameters part and keeps the image file name with its extension. For example, from “”, it keeps the “abcde.jpg” part only.
8. Store Image
storeImage = lambda url, fileName: open(fileName, “wb”).write(urllib.request.urlopen(url).read())
The storeImage function simply stores the image with the specified URL on the user’s computer with the specified fileName.
9. Set Image As Background
setImageAsBackground = lambda image: ctypes.windll.user32.SystemParametersInfoW(20, 0, image , 0)
The final step is to set the downloaded image as the desktop background for the user.
10. All Together Now
The final function setBackground() takes all the previously mentioned functions and runs them together to run the whole process, from getting the image from reddit, to setting it as the user’s background. | https://medium.com/@SavvStudio/so-i-made-a-python-script-which-sets-my-desktop-background-as-an-image-from-reddit-677d78616b20 | CC-MAIN-2018-05 | refinedweb | 751 | 63.49 |
You can subscribe to this list here.
Showing
2
results of 2
Hi,
On 26.09.07, Axel Freyn wrote:
> I would like to have something like a "mintickdists", to guarantee that e.g. only Integers are used by the parter.
Yes, that's what I would do too. You could also try to fiddle with the
rating, but this could easily lead to cases, where no valid partitions
are found anymore. Better change the parter. Here we go:
#!/usr/bin/env python
from pyx import *
d = [[0, 1], [3, 4], [2, 6], [5, 2], [6, 3]]
class intparter(graph.axis.parter.autolin):
def partfunction(self, data, testint=1):
if data.sign == 1:
if data.tickindex < len(self.variants) - 1:
data.tickindex += 1
else:
data.tickindex = 0
data.base.num *= 10
else:
if data.tickindex:
data.tickindex -= 1
else:
data.tickindex = len(self.variants) - 1
data.base.denom *= 10
tickdists = [graph.axis.tick.rational(t) * data.base for t in self.variants[data.tickindex]]
linearparter = graph.axis.parter.linear(tickdists=tickdists, extendtick=self.extendtick, epsilon=self.epsilon)
if testint:
tests = 0
while tests < len(self.variants) or data.sign == 1:
if not tests:
ticks = linearparter.partfunctions(min=data.min, max=data.max, extendmin=data.extendmin, extendmax=data.extendmax)[0]()
else:
ticks = self.partfunction(data, testint=0)
for tick in ticks:
# if tick.labellevel is not None and tick.num % tick.denom: # labeled ticks are integer
if tick.num % tick.denom: # all ticks are integer
break
else:
return ticks
tests += 1
return None
else:
return linearparter.partfunctions(min=data.min, max=data.max, extendmin=data.extendmin, extendmax=data.extendmax)[0]()
intaxis = graph.axis.linear(parter=intparter())
g = graph.graphxy(width=4.5, x=intaxis)
g.plot(graph.data.list(d, x=1, y=2))
g.writeEPSfile("test")
Some random notes:
- You can decide whether you want to allow (sub-)ticks at non-integer
values or whether all ticks should be integers (see the comment in
the code).
- The parter does not just stop for non-integer ticks, since it could
be, that you have a partition with at tick at 2.5, but there are
other integer partitions available for smaller ticks.
- We could force integer bases by adjusting the partfunctions method
as well. Could gain speed in some cases, but would not lead to any
different result.
- There was a bug introduced in changeset 2592, where I accidentally
removed the range rating. Fixed in changeset 2882. Unfortunately I
released this buggy code in 0.9, but we'll have a 0.10 soon anyway.
Sorry for breaking the rating another time, back to as it was
intended and worked in 0.8 and earlier. (I do have the intension,
that the automatic partitioning should not change from version to
version, but as you see, I'm missing that goal from time to time.)
André
--
by _ _ _ Dr. André Wobst
/ \ \ / ) wobsta@...,
/ _ \ \/\/ / PyX - High quality PostScript and PDF figures
(_/ \_)_/\_/ with Python & TeX: visit
Hi James,
>Hi Axel,
>
>> > In my actual problem, depending on the size of the graph and
>> > the size of the fonts, i get sometimes
>> > 0,5,10,15,20 (which is exactly what I want)
>> > and sometimes
>> > 0,2.5,5,7.5,10,12.5,15,17.5,20.
>
> the "tickdists" argument to the partitioner is what you want. Have a
> look at this program: if you remove the parts about p and xaxis, you
> should see the non-integer ticks appearing.
>
> ---
>#!/usr/bin/env python
>
>from pyx import *
>
>d = [[0, 1], [3, 4], [2, 6], [5, 2], [6, 3]]
>
>p = graph.axis.parter.linear(tickdists=[1])
>xaxis = graph.axis.linear(parter=p)
>g = graph.graphxy(width=4.5, x=xaxis)
>g.plot(graph.data.list(d, x=1, y=2))
>g.writeEPSfile("test")
>---
Thank you for your proposal, but it is not exactly what I want;-) My problem with tickdists is, that it switches off the automatic partitioner: Using
d = [[0, 1], [3, 4], [2, 6], [5, 2], [200, 3]]
in your example does not look very nice...
I would like to have something like a "mintickdists", to guarantee that e.g. only Integers are used by the parter.
Axel
--
Ist Ihr Browser Vista-kompatibel? Jetzt die neuesten
Browser-Versionen downloaden: | http://sourceforge.net/p/pyx/mailman/pyx-user/?viewmonth=200709&viewday=26 | CC-MAIN-2014-23 | refinedweb | 711 | 70.5 |
I'm having a problem getting my buffer(string) to print out everything that the user inputs. When I run my code the buffer only prints out the last thing that is inputted. I've tried just about everything and I really do not know what else to do. I'm new to c++ so I have not got familiar with the language just yet. Any help would be greatly appreciated.
#include <iostream> #include <string> using namespace std; int main() { string name; string quote; string answer; cout<< "Enter user name: "; cin>> name; cout<< "Enter the quote: "; cin>> quote; cout<< "\nAny more users?(enter yes or no) "; cin>> answer; do { cout<< "\nEnter user name: "; cin>> name; cout<< "Enter the quote: "; cin >> quote; cout<< "\nAny more users?(enter yes or no): "; cin>> answer; } while (answer == "yes"); if(answer == "no") { string buffer; buffer = "The buffer contains:\n "; cout<< buffer << name << ": " << quote; } else cout<< "Invalid response"; } | https://www.daniweb.com/programming/software-development/threads/310209/c-accessing-shared-buffers-concurrently | CC-MAIN-2018-47 | refinedweb | 152 | 78.59 |
24 October 2008 06:47 [Source: ICIS news]
FINANCIAL TIMES
Front page
Business failures inevitable, say banks
Banks have warned it is ‘inevitable’ that businesses will fail in the coming recession, and no concrete pledges have been made to ministers including Lord Mandelson by lenders to improve their treatment of small companies.
‘I made a mistake,’ admits Greenspan
Alan Greenspan, the former Federal Reserve chairman, said on Thursday the credit crisis had exceeded anything he had imagined and admitted he was wrong to think that banks would protect themselves from financial market chaos.
Companies and markets
Fed takes $2.7bn loss on Bear
The Federal Reserve said it had suffered a $2.7bn paper loss on the $29bn portfolio of toxic assets it took over from Bear Stearns in March as part of JPMorgan Chase’s government-brokered takeover of the stricken investment bank.
Sony warning knocks ?xml:namespace>
The Nikkei 225 closed morning trading down 4.9% at 8,046.99 following Sony’s move on Thursday to slash its profit forecasts. The rest of
INTERNATIONAL HERALD TRIBUNE
Front page
Partying helps power a Dutch nightclub
Club Watt, which describes itself as the first sustainable dance club, has a new type of dance floor that harvests the energy generated by dancers and transforms it into electricity.
In a father's tough life, principles and examples to live by for Biden
Senator Joseph Biden Jr. credits his father, Joe Sr., with teaching him the lessons that have become a recurring theme of his campaign speeches.
Marketplace
Asian markets plummet on earnings fears
The Nikkei sank on Friday, dragged down by Sony's full-year profit forecast, while investors elsewhere in
West is in talks on credit to aid poorer nations
With the financial crisis engulfing developing countries, Western officials are weighing coordinated action to try to stabilise these economies.
THE
Front page
VEB thrust into the role of savior
Vneshekonombank, which does not even report to the Central Bank because it has no banking license, will soon take charge of $74bn, or 14% of the country's reserves.
Deputies approve $18.5bln bailout
The State Duma took less than two hours on Thursday to approve a raft of bills allowing the government to spend more than $18.5bn bailing out troubled banks and supporting the plummeting domestic stock market.
Business
Inter RAO to spend $5.5bln abroad
State-controlled electricity trader Inter RAO will spend at least $5.5bn on acquisitions in Latin America, Asia and
S&P downgrades outlook for
The ratings agency cites the heavy costs of the country's bailout package and use of its foreign reserves to prop up the ruble.
DER SPIEGEL
Front page
German politicians divided over anti-Semitism
The German parliament wanted to pass a unanimous resolution against anti-Semitism to coincide with the 70th anniversary of the Night of the Broken Glass. But the effort has become a victim of political bickering.
The clash of two worldviews
Porsche has long been a symbol of wealth, power and freedom. For
TURKISH DAILY
Front page
Muscling in on Parliament's turf
The
World elects Obama, most Turks indifferent
Most Turks say they do not care who the next president of the
Business and finance
Next generation homes for the next generation
PLS4M İnşaat is set to hand over the keys for its new development project, Yeni Nesil Evler, which translates as Next Generation Homes, located in Gebze's Şekerpınar neighborhood in September of 2009.
TOKİ puts social houses up for sale in
Front page
eCard CEO Korobowicz suspended from duties
The CEO of eCard Konrad Korobowicz has been suspended after being accused by several members of the company's supervisory board of not presenting relevant information to them and for being the cause of the company's worsening condition.
The ruling coalition between the Civic Platform (PO) and the Polish Peasants Party (PSL) is once again going through tough times as PO leaders claim that PSL continues to block an increasing number of draft bills proposed | http://www.icis.com/Articles/2008/10/24/9166008/in-fridays-europe-papers.html | CC-MAIN-2015-06 | refinedweb | 672 | 55.68 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to Search for product references only in Order line
Working on Windows.
I am trying to search reference/default_code of products from the sale order line without considering the case but only searching the beginning of each reference. similar to searching LIKE 's%' in sql. At the moment OpenERP search LIKE '%s%' is it possible to have a code that ---
get product_id in product.product for default_code (that is reference) like ? (search+'%',)
I want to apply this to sales order line for getting the products. Please anyone help. thanks Also I should say all products have unique reference (default_code) and we only search by these. So I would be glad if anyone could suggest how I can make the search look in the reference only rather than both reference and product name. Thank you in advance. python code:
def onchange_case(self, cr, uid, ids, default_code): result = {'value': { 'default_code': str(default_code).upper() } } return result
xml code:
<field name="default_code" on_change="onchange_case(default_code)"/>
use the lower() or upper() function to format the user input.
This will require your database to be uniformly upper or lower case, but that is easier to control than user input.
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-search-for-product-references-only-in-order-line-38785 | CC-MAIN-2017-43 | refinedweb | 249 | 57.47 |
hi, here I am again. I'm in the pace where you will code a header or header of your own in a different file for program reusability. I have tried the code many times in different ways. I decided to make it small as shown below so that I could easily check the error quickly, unfortunately, I cannot see any problem in my code. I have no idea what's happening, but I've searched over the net and they have the same structure as mine. I don't know if it's with the compiler, but if I created Name.h without seperating the implementation of that class it works. I hope you could help me with this and give some advice. Hope to hear from you soon.
file: Name.h
Code:#include <string> using std::string; class Name { public: Name( string ); void setName( string ); string getName(); void displayName(); private: string name; };
file: Name.cpp
Code:#include <iostream> using std::cout; using std::endl; #include "Name.h" Name::Name( string n ) { setName( n ); } void Name::setName( string n ) { name = n; } string Name::getName() { return name; } void Name::displayName() { cout << "Name: " << getName() << endl; }
file: Person.cpp
Code:#include <iostream> #include <string> using namespace::std; #include "Name.h" int main( void ) { Name emp1( "John Travolta" ); Name emp2( "Nicolas Cage" ); cin.get(); return 0; }
Error:
C:\DOCUME~1\INFORM~1\LOCALS~1\Temp\ccYbcaaa.o(.tex t+0x1ac) In function `main':
[Linker error] undefined reference to `Name::Name(std::string)'
[Linker error] undefined reference to `Name::Name(std::string)'
C:\DOCUME~1\INFORM~1\LOCALS~1\Temp\ccYbcaaa.o(.tex t+0x1ac) ld returned 1 exit status
Compiler:
DevC++ 4.9.9.2 | http://cboard.cprogramming.com/cplusplus-programming/109675-user-defined-header.html | CC-MAIN-2015-48 | refinedweb | 282 | 66.33 |
:03AM
Last updated on Thursday, June 20, 2013 at 01:58PM
This morning, we updated the GenomeSpace user interface (GSUI) with several enhancements to make it easier to use and work with your files. The new GSUI looks like this:
Some of the changes and upgrades from the previous version include the following:
If you have suggestions for additional improvements, please contact us at gs-help@broadinstitute.org.
Posted by Judy McLaughlin on Thursday, May 02, 2013 at 03:27PM
Last updated on Friday, May 03, 2013 at 09:06AM
There are currently two methods for uploading files in GenomeSpace:
or
In general, we recommend dragging a file from your computer and dropping it on a GenomeSpace folder.
However, if you are uploading files larger than 1 GB, you should use the Java Uploader.
There are some known issues to be aware of:
Workaround: You can either try a newer browser or use the Java Uploader.
Workaround: For very large files, use the Java Uploader.
Workaround: Use Firefox.
Posted by Ted Liefeld on Monday, April 08, 2013 at 10:22AM
Last updated on Thursday, April 11, 2013 at 04:12PM
Since GenomeSpace was released, you have been able to move a file or folder by using the mouse to drag and drop it into the folder you would like to move it to. As of the latest update, you can now use this same approach to copy a file (or folder) to a new location. To copy the file/folder instead of moving it, press and hold down the CTRL, SHIFT, or ALT key on your keyboard (any one key will do) while you drop the file into its new location.
In addition, if you drag and drop a file or folder that you have read, but not write, access to, it will now copy the folder to the new location by default.
We hope that these changes will make it even easier for you to manage your GenomeSpace files and folders. Please feel free to contact us at gs-help at broadinstitute dot org if you would like to request any other changes or features.
In an effort to make it even easier to upload your data to GenomeSpace, we have enabled HTML5 uploading in GenomeSpace. What this means is that if you are using a modern browser that supports HTML5 (for example, Firefox 18 or higher, Chrome 24 or higher), you can now upload files to GenomeSpace by dragging them from your desktop and dropping them onto the folder in GenomeSpace where you would like them to be.
Please note that this is still an experimental feature, so there may be some rough edges. If you happen to experience isssues, please let us know at gs-help at broadinstitute dot org and we will try to smooth them out for you.
One warning for users of the Chrome browser: Chrome loads your files into memory during the upload process, so if you use this feature for very large files (e.g., 1GB+) the memory used by your Chrome process can become very large. We suggest that for very large files, you continue to use the Java Uploader applet at this time.
Posted by Ted Liefeld on Tuesday, February 05, 2013 at 10:44AM
Last updated on Wednesday, February 06, 2013 at 04:15PM
GenomeSpace is proud to announce the addition of another new tool to the GenomeSpace tool bar: Gitools, from Biomedical Genomics Group at the Biomedical Research Park in Barcelona (PRBB).
Gitools is an open-source tool that performs analyses and allows users to visualize data and results as interactive heatmaps that facilitate the integration of novel data with previous knowledge. Gitools can import data import from GenomeSpace, IntOGen, Biomart, Gene Ontology, and KEGG .
Posted by Ted Liefeld on Monday, February 04, 2013 at 09:52AM
Last updated on Tuesday, February 05, 2013 at 10:41AM
The GenomeSpace User Interface (GSUI) has received some minor updates and one bug fix this morning. To make the UI more in line with standard web applications, we have moved the menu bar to the top of the page above the toolbar. In addition, we have fixed a bug where the GSUI failed to properly re-open a directory after it has been renamed.
The new GSUI now looks like this:
Posted by Ted Liefeld on Friday, November 30, 2012 at 11:07AM
Last updated on Monday, November 02, 2015 at 11:56AM
The GenomeSpace Data Manager was originally built to save the files you upload to GenomeSpace in an Amazon Simple Storage System (S3) bucket that is managed by GenomeSpace itself. However you can add additional Amazon S3 buckets to GenomeSpace that you or a third party has set up to make the file contents available to your GenomeSpace and your GenomeSpace tools. For buckets that are publicly accessible, you only need to tell GenomeSpace the name of the bucket to mount it. However, for private buckets, or those with limited non-public accessibility, the process is more complex, requiring you to set up a sub-account and the minimal permissions in Amazon to share the bucket with GenomeSpace. Once a bucket has been mounted in GenomeSpace, you can share it with other GenomeSpace users using the standard GenomeSpace sharing dialogs.
For details on how to mount a bucket into your GenomeSpace, follow the steps in the documentation.
Posted by Judy McLaughlin on Thursday, November 29, 2012 at 10:55AM
Last updated on Friday, November 30, 2012 at 11:03AM
Do you use the AdBlock Plus plugin for Firefox or Chrome? If so, you may have noticed some issues with empty or mispositioned dialog boxes in GenomeSpace.
The solution: Select Tools>AdBlock Plus>Disable on gsui.genomespace.org, then restart your browser. You should start seeing the correct dialog boxes and functionality.
Posted by Ted Liefeld on Thursday, November 29, 2012 at 07:07AM
Last updated on Friday, November 30, 2012 at 11:02AM
We are pleased to announce that data from the following paper have been made available on GenomeSpace:
The Cancer Genome Atlas Network. Integrated genomic analyses of ovarian carcinoma. Nature. 474 (7353):609-615.
The GenomeSpace mirror of this dataset includes all public level 3 and level 4 data from.
You can access the data in GenomeSpace at the path /Public/SharedData/Datasets/TCGA/Ovarian Cancer/. (This requires a GenomeSpace login. If you don't have one, it's easy to register.)
Abstract.
Posted by Ted Liefeld on Monday, November 19, 2012 at 09:40AM
Last updated on Thursday, November 29, 2012 at 10:54AM
The Cancer Genome Atlas Network. Comprehensive Molecular Characterization of Human Colon and Rectal Cancer. Nature. 2012;487:330-337. [doi:10.1038/nature11252]
The GenomeSpace mirror of this dataset includes all public level 3 and level 4 data from.
You can access the data in GenomeSpace at the path /Public/SharedData/Datasets/TCGA/Colon and Rectal/. (This requires a GenomeSpace login. If you don't have one, it's easy to register.).
Posted by Ted Liefeld on Wednesday, November 14, 2012 at 03:37PM
Last updated on Tuesday, May 26, 2015 at 01:49PM
For today's technology tidbit, we're going to explore how to call the GenomeSpace REST-ful web interfaces from the Python programming language. (Full disclosure: this was my first attempt to use Python so I am sure there are stylistic issues and more elegant ways to do this).
Below, we'll review the steps to connect to the GenomeSpace Data Manager and list the user's home directory in an interactive Python session.
# set up imports
import urllib2
import json
# set some properties
gsUsername = 'ted'
gsPassword = '*****'
# get all the GenomeSpace URLs from the main web page
opener = urllib2.build_opener()
allUrls = opener.open('')
urlMap = {}
for line in allUrls:
tokens = line.split("=")
if (len(tokens) == 2):
urlMap[tokens[0]] = tokens[1][:-1] #the trailing \n is not handled properly
idUrl = urlMap['prod.identityServerUrl']
dmUrl = urlMap['prod.dmServer'] + '/v1.0/'
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm=" GenomeSpace ", uri=idUrl, user=gsUsername, passwd=gsPassword)
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
urllib2.urlopen(idUrl)
req = urllib2.Request(dmUrl + 'file/Home/' + gsUsername)
req.add_header('Cookie','gs-token='+token)
mydir = sslopener.open(req)
homedirJson = mydir.read()
# turn it into a real object from a JSON string
homeDirObj = json.loads(homedirJson)
firstUrl = homeDirObj['contents'][0]['url']
req2 = urllib2.Request(firstUrl)
req2.add_header('Cookie','gs-token='+token)
firstFileContents = sslopener.open(req2)
print firstFileContents
That's it! For more details on how to use the GenomeSpace REST-ful APIs, the structure of the JSON objects, etc., please refer to the documents on the Technical Documentation page. | http://genomespace.org/blog?page=4 | CC-MAIN-2017-43 | refinedweb | 1,440 | 60.35 |
I. For example I tried wsdl2py() from the ZSI package, and got this error: Error loading services.xml: namespace of schema and import match I tried WSDL.Proxy() from the SOAPpy package and eventually end up with this error: xml.parsers.expat.ExpatError: not well-formed (invalid token): line 1, column 6 I tried Client() from the suds package, and got this error: File "/usr/lib/python2.3/site-packages/suds/client.py", line 59 @classmethod ^ SyntaxError: invalid syntax I'm not an expert; I have no idea what any of these errors mean, and I have no idea how to go about resolving them. So I decided to take a step back and see if I could bypass all the fancy automagic methods and just create my own SOAP xml message from scratch and then send it to the web server. That would work, surely. But I'm having a tough time finding some good examples of that, because all the tutorials I've found just tell you to use the aforementioned magic methods, which unfortunately don;t seem to be working for me. Does anyone have some good examples of code that builds a "raw" xml SOAP message and sends it to a webserver, then reads the response? I think that would be a good place for me to start. Thanks for any replies. -- John Gordon A is for Amy, who fell down the stairs gordon at panix.com B is for Basil, assaulted by bears -- Edward Gorey, "The Gashlycrumb Tinies" | https://mail.python.org/pipermail/python-list/2009-August/549080.html | CC-MAIN-2014-15 | refinedweb | 253 | 70.13 |
I will try that when I get a chance, but it will have to wait until monday, as I am done with work for the week. I'll let you know the results when I have them.
This is a discussion on create fstream from raw file descriptor within the C++ Programming forums, part of the General Programming Boards category; I will try that when I get a chance, but it will have to wait until monday, as I am ...
I will try that when I get a chance, but it will have to wait until monday, as I am done with work for the week. I'll let you know the results when I have them.
I was able to debug the program... had to do it from the command line because it is a multi-process server, so I had to attach to a child process, but I was able to find the spot where it crashed.
the line from the debugger was:
I'm not really sure where to go from here.I'm not really sure where to go from here.Code:0xb7d01adc in std::istream::istream::sentry::sentry () from /usr/lib/libstdc++.so.6
Hmm, looks like the problem is in Boost.Iostreams after all. Most likely, it crashes when trying to skip whitespace. Unless the streambuf is for some reason null - that would crash there, too. Can you ensure that
just before the I/O operation that leads to the crash?just before the I/O operation that leads to the crash?Code:assert(static_cast<std::istream&>(stream).rdbuf());
All the buzzt!
CornedBeeCornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
inserting that line causes it to crash at that line.
The call to rdbuf() crashes? Now that is seriously weird. This is the implementation:
Or does the assertion simply fail?Or does the assertion simply fail?Code:basic_streambuf<_CharT, _Traits>* rdbuf() const { return _M_streambuf; }
All the buzzt!
CornedBeeCornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
the debugger shows that it crashes (SIGSEGV) right at the assert() line. In fact, it doesn't even show that it entered the rdbuf() function.
OK, I'm stumped.
All the buzzt!
CornedBeeCornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
I know this is an old thread but in case anyone else finds this:
Try using boost::iostreams::file_descriptor like this:
stream is now an std::iostream which you can both read and write to and of course fd is the file descriptor which you can lock.stream is now an std::iostream which you can both read and write to and of course fd is the file descriptor which you can lock.Code:int fd = open("filename.txt", O_RDWR); if (fd == -1) throw "Failed to open file"; io::stream<io::file_descriptor> stream(fd, true);
Well, after the better part of a day of dabbling in the black arts of the various C++ streams classes, I've constructed the following test code that _seems_ to work, and I thought it would be nice to share. I'm not really sure it works properly, so I welcome review by those more steeped in the black arts.
Code:#include <fstream> #include <iostream> #include <string> #include <cmath> #include <cassert> #include <cstdio> #include <stdlib.h> #include <string.h> #include <ext/stdio_filebuf.h> using namespace std; //std::ofstream stream_002; int main(){ char *f_template; f_template = new char(12); memcpy(f_template, "/tmp/XXXXXX", 12); int fd = mkstemp(f_template); std::cout << "File Descriptor # is: " << fd << " file name = " << f_template <<std::endl; // FILE *frp=fdopen(fd, "r"); // convert it into a FILE * FILE *fwp=fdopen(fd, "w"); // convert it into a FILE * // create a file buffer(NOT an iostream yet) from FILE * // __gnu_cxx::stdio_filebuf<char> frb (frp, ios_base::in); // Uses: // stdio_filebuf (std::__c_file *__f, std::ios_base::openmode __mode, size_t __size=static_cast< size_t >(BUFSIZ)) __gnu_cxx::stdio_filebuf<char> fwb (fwp, std::ios_base::out); // so fwb is of type stdio_filebuf // istream my_temp_in (&frb); // create a stream from file buffer std::iostream my_temp_stream_out (&fwb); // create a stream from file buffer std::fstream my_temp_fstream; // streambuf *bsbp; // bsbp = my_temp_out.rdbuf(); // my_temp_fstream.rdbuf(bsbp); my_temp_fstream.std::ios::rdbuf(my_temp_stream_out.rdbuf()); // my_temp_out << "iostream : Some test text" << std::endl; // my_temp_out.flush(); my_temp_fstream << "fstream : Some test text" << std::endl; my_temp_fstream.close(); // while(1){} // Now take a look in your temp directory for the file //name printed from the above cout statement. If you cat it out, //you should see the "test text". return 0; }
(from: Streams and File Descriptors - The GNU C Library)
11.1.1 Streams and File Descriptors.
==========================
anyone please present by a example about this ? how to open a connection as a file descriptor and then make a stream associated with that file descriptor ?.
Last edited by thavali; 02-13-2011 at 08:39 AM. | http://cboard.cprogramming.com/cplusplus-programming/107187-create-fstream-raw-file-descriptor-2.html | CC-MAIN-2013-48 | refinedweb | 864 | 72.26 |
Introduction: LinkIt One Capacitive Tutorial
Hello builders! Here in this 'ible we will make a project which will use a capacitive sensor to turn on a LED. This project uses a micro controller to send and receive the signal to and from the capacitive sensor. This micro controller is called the LinkIt One Board.
How does a capacitive sensor work?
A capacitive sensor works on the human body's ability to store small amounts of charges and then give away. So when we get close to the the sensor the signal going through the foil back to the LinkIt One board changes.
Step 1: Parts
1. LinkIt One Board
2. USB Board
3. Aluminum Foil
4. 10 Mega Ohm Resistor
5. LED (any color)
6. Box
Step 2: Foil
1. Hot glue the foil on top of the box.
2. Cut of the excess foil.
3. Make a hole next to one of the edges.
4. Put a wire through the hole and strip a long part near the foil.
5. Hot glue the wire to the foil making sure that it doesn't move and has conductive contact with the foil.
Step 3: LED
1. Insert the LED in the same hole as the capacitive signal wire.
2. Hot glue it to the hole.
3. Solder wires to the positive and the negative pins on the LED.
Step 4: USB
1. Make a Hole for the Usb cable to seek through.
2. Now hot glue the board to the base. ( Make sure you use little)
Step 5: Code
Lets code the board such that the led turns on when the capacitive sensor comes in touch with the finger.
Before that download this library and put it in your libraries folder in the Arduino folder.
Code:
#include <Capacitive.Sensor.h>
CapacitiveSensor capsensor = CapacitiveSensor(8,12);
int LED = 13;
void setup()
{
capsensor.set_CS_AutocaL_Millis(0xFFFFFFFF);
}
void loop()
{ long start = millis();
long TouchVal = capsensor.capacitiveSensor(25);
if(TouchVal > 1000) // change this number here to change the distance from which the led will turn on.
{
digitalWrite(LED, HIGH);
}
}
Upload the code and move on to wiring it.
Step 6: Wiring
1. Connect the LED to pin 13.
2. Connect the negative of LED to GND.
3. Connect the 10m resistor between pins 8 and 12.
4. Insert the wire from the foil in pin 12.
And thats it:) You are done with the wiring. Now lets make it pretty and test it out.
Step 7: Tape(Optional)
Using some colorful tape to cover the sides and make the sensor look pretty.
2. Then write something on the foil. ( BUTTON, Zombie killer... etc)
Step 8: Test and Conclusion
Plug in the board and test it out by touching the foil.
You can even change the sensitivity of how much distance there should be before the LED turns on.
In conclusion this project turns on a LED but you can use it to control much more ( light, fan, tv... etc)
If you have any questions feel free to ask me in the comments below.
Recommendations
We have a be nice policy.
Please be positive and constructive.
2 Comments;
Sorry for the late response,
Ok that shouldn't happen, I just checked my code and compiled it, it worked fine.
Try this code.
Let me know if you get the same error.
#include
CapacitiveSensor capsensor = CapacitiveSensor(8,12);
int LED = 13;
void setup()
{
capsensor.set_CS_AutocaL_Millis(0xFFFFFFFF);
}
void loop()
{
long start = millis();
long TouchVal = capsensor.capacitiveSensor(30);
if(TouchVal > 1000)
{
digitalWrite(LED, HIGH);
}
} | http://www.instructables.com/id/LinkIt-One-Capacitive-Tutorial/ | CC-MAIN-2018-26 | refinedweb | 585 | 86.5 |
format the time into a string
#include <time.h> size_t strftime( char *s, size_t maxsize, const char *format, const struct tm *timeptr ); is to take place. All ordinary characters are copied unchanged into the array. No more than maxsize characters are placed in the array. The format directives %D, %h, %n, %r, %t, and %T are from POSIX.
When the %Z directive is specified, the tzset() function is called.
If the number of characters to be placed into the array is less than maxsize, the strftime() function returns the number of characters placed into the array pointed to by s, not including the terminating null character. Otherwise, zero is returned.
When an error has occurred, errno contains a value that indicates the type of error that has been detected.
#include <stdio.h> #include <time.h> void main() { time_t time_of_day; char buffer[ 80 ]; time_of_day = time( NULL ); strftime( buffer, 80, "Today is %A %B %d, %Y", localtime( &time_of_day ) ); printf( "%s\n", buffer ); }
produces the output:
Today is Friday December 25, 1987
ANSI, POSIX
asctime(), clock(), ctime(), difftime(), errno, gmtime(), localtime(), mktime(), setlocale(), time(), tzset()
The tm structure is described in the section on <time.h> in the Header Files chapter. | https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/src/strftime.html | CC-MAIN-2022-33 | refinedweb | 198 | 63.59 |
…
Perl and ustack helpers - the big problem
The primary problem with a Perl ustack helper is the lack of correspondence between the C stack and the Perl stack: unlike Python, V8 Javascript etc, Perl’s stack is handled entirely separately, and this is why the “standard” ustack helper will never be possible: there just aren’t the C stack frames there to annotate.
What we can try though, is to annotate a single C frame with as much of the Perl stack as we can find by chasing pointers from that frame.
This post explains my experiments with that idea. To give the game away early, it doesn’t entirely work, but there are some techniques here that might be useful elsewhere.
A lot of what I’ve done here is based directly on the existing ustack helpers for other languages, specifically V8 and Python.
Dynamically loading the ustack helper
If at all possible, I want the helper to work on an unmodified Perl binary - if the thing works at all, I want to be able to use it like a CPAN-style module rather than having to patch Perl. The first problem is to get the the helper loaded.
Given the way libusdt works, it
seems likely we can load a helper just like a provider, by
ioctl()ing the DOF down to the kernel from a Perl XS
extension. Of course, that’s all DTrace’s DOF-loading
_init
routine does anyway, just we’ll be doing it slightly later on in the
process’s life.
Unfortunately this facility isn’t part of libusdt’s public API yet, but it’s really not that much code, especially if we’re only supporting Illumos-based systems.
Actually building the helper DOF is trivial: compile the script with
dtrace_program_fcompile(), and wrap it in DOF with
dtrace_dof_create().
Loading the DOF containing the helper program works, and means we can initialise the helper from an extension module, rather needing to patch it into Perl’s build process.
Finding - or rather creating - a frame to annotate
Ideally we need a stack frame which is always found in the interpreter’s C stack, for which we can easily find the address of the function, and where the stack is passed as one of the arguments. There’s no such frame in the standard Perl interpreter, but we can manufacture one. Perl lets us replace the “runops loop” with one of our own, and we can use this to meet all of our requirements.
The runops loop function is the core of the interpreter, responsible for invoking the current “op”, which returns the new current op.
The usual, non-debug, runops loop looks like this (in Perl 5.16.2):
int Perl_runops_standard(pTHX) { dVAR; register OP *op = PL_op; while ((PL_op = op = op->op_ppaddr(aTHX))) { } TAINT_NOT; return 0; }
The top-level loop is always visible during execution, and we can replace the usual function with one of our own, fulfilling our first two requirements.
If we make this loop execute ops through another function, and pass that function a pointer to the Perl stack, we fulfill the final requirement. These functions are dtrace_runops and dtrace_call_op:
STATIC OP * dtrace_call_op(pTHX_ PERL_CONTEXT *stack) { return CALL_FPTR(PL_op->op_ppaddr)(aTHX); } STATIC int dtrace_runops(pTHX) { while ( PL_op ) { if ( PL_op = dtrace_call_op(aTHX_ cxstack), PL_op ) { PERL_ASYNC_CHECK( ); } } TAINT_NOT; return 0; }
We’ll target the annotation at dtrace_call_op(), and attempt to walk the stack starting from the PERL_CONTEXT pointer we’re given.
Actually installing the alternative runops loop is a standard Perl extension technique, and we just need to make sure it happens early enough that the top-level loop is ours rather than the standard one.
Frame annotations
The primary purpose of the ustack helper is to provide a descriptive
string for a frame where there’s no corresponding C symbol - for
JITted code, say. If there is such a symbol, the ustack helper’s
string will be ignored - and in this case, there is,
dtrace_call_op.
Fortunately there’s a mechanism for adding annotations to these
frames, and that’s what we’ll use here: a string beginning with an
@ will be used as an annotation. In the Python
helper,
it looks like this:
libpython2.4.so.1.0`PyEval_EvalFrame+0xbdf [ build/proto/lib/python/mercurial/localrepo.py:1849 (addchangegroup) ]
Targetting a specific frame
In a helper action,
arg0 is the program counter in that stack
frame. If we make the address of our inserted
dtrace_call_op
function available to the helper, and of the preceding function, we
can compare the pc to these two addresses to determine when we’re
annotating that function.
Here,
this->start and
this->end have been initialised to the
addresses of
dtrace_call_op and the preceding function:
dtrace:helper:ustack: /this->pc >= this->start/ { this->go++; } dtrace:helper:ustack: /this->pc < this->end/ { this->go++; }
For a reason I’m not entirely sure of, combining these predicates into one doesn’t work.
Passing values into the helper
With the extra control over the helper initialisation we get from loading it “by hand”, it turns out that macros work fine! We can use this to pass values into the helper: symbol addresses and structure member offsets.
It doesn’t seem to be possible to simply
#include <perl.h> - the D
compiler barfs badly on Perl’s headers, which are
.. involved. Fortunately we can do the necessary sizeof and offsetof
work in C and pass the results into D with macros. This should buy at
least some ability to cope with changes to Perl’s data structures,
though more sweeping changes will still break things entirely.
Macros are strings, so all the values passed need to be formatted with
sprintf; at least this is just setup code.
Copying a C string
Unless I’ve missed something, this is awkward. Our final stacktrace
string that the helper will return as the frame’s annotation is
allocated out of D scratch space, so we need to copy C strings from
userspace into it. If we have the string’s length available this is
easily done with
copyinto(), but if we’ve just got a
char *,
it’s not.
Ideally we could take the string’s length with
strlen() and do a
copy – but
strlen isn’t available to helpers.
It doesn’t seem to be possible to use
strchr() either, since it
returns
string and not
char *, so we can’t find the length
that way.
I’m not sure if the lack of
strlen is an oversight, or if there’s
some reason that it’s unsafe in arbitrary context: it seems that if
something like
strchr is safe,
strlen also ought to be.
We can’t just copy a fixed length of data, so a character-by-character “strncpy” is needed:
/* Copy a string into this->buf, at the location indicated by this->off */ #define APPEND_CHR_IF(offset, str) \ dtrace:helper:ustack: \ /this->go == 2 && !this->strdone/ \ { \ copyinto((uintptr_t)((char *)str + offset), 1, this->buf + this->off); \ this->off++; \ } \ dtrace:helper:ustack: \ /this->go == 2 && !this->strdone && this->buf[this->off - 1] == '\0'/ \ { \ this->strdone = 1; \ this->off--; \ } #define APPEND_CSTR(str) \ dtrace:helper:ustack: \ /this->go == 2/ \ { \ this->strdone = 0; \ } \ APPEND_CHR_IF(0, str) \ APPEND_CHR_IF(1, str) \ APPEND_CHR_IF(2, str) \ ... [ up to the length of string required]
Walking the stack
After all that, actually walking the stack from the pointer we’ve been passed is relatively simple. Using the information in Perlguts Illustrated, we walk the context stack, appending frame annotations to our string buffer.
Obviously it’s only possible to walk a limited number of frames, and with the default size limit on helper size and the ops required for string copies, quite a limited number of frames.
The output!
Here’s an incredibly simple example of the output: [ t/helper/01-helper.t: t/helper/01-helper.t:24 t/helper/01-helper.t:25 t/helper/01-helper.t:21 t/helper/01-helper.t:17 t/helper/01-helper.t:13 ] Helper.so`dtrace_runops+0x56 libperl.so`perl_run+0x380 perl`main+0x15b perl`_start+0x83
This shows file:lineno pairs for each stack frame representing a subroutine call that was found walking the context stack.
Here’s a (slightly) less trivial example, taken during a run of the CPAN shell program: [ -e: -e:1 /opt/local/lib/perl5/5.14.0/CPAN.pm:325 /opt/local/lib/perl5/5.14.0/CPAN.pm:325 /opt/local/lib/perl5/5.14.0/CPAN.pm:345 /opt/local/lib/perl5/5.14.0/CPAN.pm:421 /opt/local/lib/perl5/5.14.0/CPAN/Shell.pm:1494 /opt/local/lib/perl5/5.14.0/CPAN/Shell.pm:1461 ] Helper.so`dtrace_runops+0x56 libperl.so`perl_run+0x246 perl`main+0x15b perl`_start+0x83
The code, and its limitations
The code is available on Github. I don’t plan to release this module to CPAN any time soon!
For anything but the most trivial examples this code probably won’t provide useful Perl stacktraces, and it’s only been tried on Perl 5.14.2 built with threads, on an Illumos-derived system.
It certainly won’t work on the Mac, since ustack helpers are disabled there, and won’t work without threads enabled in Perl because of an implementation detail of Perl OPs we’re exploiting that’s different without threads.
Hopefully though, this post sheds a bit of light on ustack helpers, and maybe there are some interesting techniques here for other situations. | https://chrisa.github.io/blog/2013/01/26/a-perl-ustack-helper/ | CC-MAIN-2021-49 | refinedweb | 1,581 | 55.58 |
Phylo cookbook
Here are some examples of using Bio.Phylo for some likely tasks. Some of these functions might be added to Biopython in a later release, but you can use them in your own code with Biopython 1.54.
Convenience functions
Get the parent of a clade
The Tree data structures in Bio.Phylo don't store parent references for each clade. Instead, the
get_path method can be used to trace the path of parent-child links from the tree root to the clade of choice:
def get_parent(tree, child_clade): node_path = tree.get_path(child_clade) return node_path[-2] # Select a clade myclade = tree.find_clades("foo").next() # Test the function parent = get_parent(tree, myclade) assert myclade in parent
Note that
get_path has a linear run time with respect to the size of the tree -- i.e. for best performance, don't call
get_parent or
get_path inside a time-critical loop. If possible, call
get_path outside the loop, and look up parents in the list returned by that function.
Alternately, if you need to repeatedly look up the parents of arbitrary tree elements, create a dictionary mapping all nodes to their parents:
def all_parents(tree): parents = {} for clade in tree.find_clades(order='level'): for child in clade: parents[child] = clade return parents # Example parents = all_parents(tree) myclade = tree.find_clades("foo").next() parent_of_myclade = parents[myclade] assert myclade in parent_of_myclade names (Robinson-Foulds)
- Quartets distance
- Nearest-neighbor interchange
- Path-length-difference
Consensus methods
TODO:
- Majority-rules consensus
- Strict consensus
- Adams (Adams 1972)
- Asymmetric median tree (Phillips and Warnow 1996)
Rooting methods)
Graphics
TODO:
- Party tricks with
draw_graphviz, covering each keyword argument
Exporting to other types
Convert to an 'ape' tree, via Rpy2
The R statistical programming environment provides support for phylogenetics through the 'ape' package and several others that build on top of 'ape'. The Python package rpy2 provides an interface between R and Python, so it's possible to convert a Bio.Phylo tree into an 'ape' tree object:
import tempfile from rpy2.robjects import r def to_ape(tree): """Convert a tree to the type used by the R package `ape`, via rpy2. Requirements: - Python package `rpy2` - R package `ape` """ with tempfile.NamedTemporaryFile() as tmpf: Phylo.write(tree, tmpf, 'newick') tmpf.flush() rtree = r(""" library('ape') read.tree('%s') """ % tmpf.name) return rtree
See that it works:
>>> from StringIO import StringIO >>> from Bio import Phylo >>> tree = Phylo.read(StringIO('(A,(B,C),(D,E));'), 'newick') >>> rtree = to_ape(tree) >>> len(rtree) 3 >>> print r.summary(rtree) Phylogenetic tree: structure(list(edge = structure(c(6, 6, 7, 7, 6, 8, 8, 1, 7, 2, 3, 8, 4, 5), .Dim = c(7L, 2L)), tip.label = c("A", "B", "C", "D", "E"), Nnode = 3L), .Names = c("edge", "tip.label", "Nnode" ), class = "phylo") Number of tips: 5 Number of nodes: 3 No branch lengths. No root edge. Tip labels: A B C D E No node labels. NULL >>> r.plot(rtree)
See the rpy2 documentation for further guidance.
Convert to a DendroPy or PyCogent tree
The tree objects used by Biopython, DendroPy and PyCogent are different. Nonetheless, all three toolkits support the Newick file format, so interoperability is straightforward at that level by writing to a temporary file or StringIO object with one library, then reading the same string again with another.
from Bio import Phylo import cogent Phylo.write(bptree, 'mytree.nwk', 'newick') # Biopython tree ctree = cogent.LoadTree('mytree.nwk') # PyCogent tree
import dendropy # Create or load a tree in DendroPy dtree = dendropy.Tree.get_from_string("(A, (B, C), (D, E))", "newick") dtree.write_to_path("tmp.nwk", "newick", suppress_rooting=True) # Load the same tree in Biopython bptree = Phylo.read("tmp.nwk', 'newick')
Convert to a NumPy array or matrix | http://biopython.org/w/index.php?title=Phylo_cookbook&redirect=no | CC-MAIN-2014-42 | refinedweb | 610 | 57.37 |
/* list.c - Functions for manipulating linked lists of objects. */ /* Copyright (C) 1996 (HAVE_UNISTD_H) # ifdef _MINIX # include <sys/types.h> # endif # include <unistd.h> #endif #include "shell.h" /* A global variable which acts as a sentinel for an `error' list return. */ GENERIC_LIST global_error_list; #ifdef INCLUDE_UNUSED /* Call FUNCTION on every member of LIST, a generic list. */ void list_walk (list, function) GENERIC_LIST *list; sh_glist_func_t *function; { for ( ; list; list = list->next) if ((*function) (list) < 0) return; } /* Call FUNCTION on every string in WORDS. */ void wlist_walk (words, function) WORD_LIST *words; sh_icpfunc_t *function; { for ( ; words; words = words->next) if ((*function) (words->word->word) < 0) return; } #endif /* INCLUDE_UNUSED */ /* Reverse the chain of structures in LIST. Output the new head of the chain. You should always assign the output value of this function to something, or you will lose the chain. */ GENERIC_LIST * list_reverse (list) GENERIC_LIST *list; { register GENERIC_LIST *next, *prev; for (prev = (GENERIC_LIST *)NULL; list; ) { next = list->next; list->next = prev; prev = list; list = next; } return (prev); } /* Return the number of elements in LIST, a generic list. */ int list_length (list) GENERIC_LIST *list; { register int i; for (i = 0; list; list = list->next, i++); return (i); } /* Append TAIL to HEAD. Return the header of the list. */ GENERIC_LIST * list_append (head, tail) GENERIC_LIST *head, *tail; { register GENERIC_LIST *t_head; if (head == 0) return (tail); for (t_head = head; t_head->next; t_head = t_head->next) ; t_head->next = tail; return (head); } #ifdef INCLUDE_UNUSED /* Delete the element of LIST which satisfies the predicate function COMPARER. Returns the element that was deleted, so you can dispose of it, or -1 if the element wasn't found. COMPARER is called with the list element and then ARG. Note that LIST contains the address of a variable which points to the list. You might call this function like this: SHELL_VAR *elt = list_remove (&variable_list, check_var_has_name, "foo"); dispose_variable (elt); */ GENERIC_LIST * list_remove (list, comparer, arg) GENERIC_LIST **list; Function *comparer; char *arg; { register GENERIC_LIST *prev, *temp; for (prev = (GENERIC_LIST *)NULL, temp = *list; temp; prev = temp, temp = temp->next) { if ((*comparer) (temp, arg)) { if (prev) prev->next = temp->next; else *list = temp->next; return (temp); } } return ((GENERIC_LIST *)&global_error_list); } #endif | http://opensource.apple.com/source/bash/bash-80/bash/list.c | CC-MAIN-2016-07 | refinedweb | 344 | 61.67 |
uvwsgi 0.2.0
Simple WSGI server using pyuv
uvwsgi is a Python WSGI server whhich uses libuv and http-parser libraries also used in Node.JS through their Python binding libraries:
Motivalion
There are abunch of great WSGI servers out there, so why create a new one? I’ve been playing with Flask and WSGI lately and I wanted to see the guts of it. As you can see the code is pretty short, I expect to make more changes and more features to it though.
Status
uvwsgi should not be used in production. It’s still work in progress.
Installtion
uvwsgi can be easily installed with pip:
pip install uvwsgi
Usage
Example usage:
from flask import Flask from uvwsgi import run app = Flask(__name__) @app.route('/') def index(): return 'hello world!' if __name__ == '__main__': run(app, ('0.0.0.0', 8088))
The uvwsgi command line application can also be used to serve WSGI applications directly. Assuming the code above this lines is stored in a file called tst.py, it can be served as follows:
uvwsgi tst:app --port 8888
NOTE: You need to install the package first in order to have the uvwsgi command available.
License
Unless stated otherwise on-file uvwsgi uses the MIT license, check LICENSE file.
- Downloads (All Versions):
- 6 downloads in the last day
- 94 downloads in the last week
- 332: uvwsgi-0.2.0.xml | https://pypi.python.org/pypi/uvwsgi/0.2.0 | CC-MAIN-2015-14 | refinedweb | 233 | 72.87 |
Auto Encoders for Anomaly Detection in Predictive Maintenance
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
Autoencoders is an unsupervised version of neural network that is used for data encoding. This technique is mainly used to learn the representation of data that can be used for dimensionality reduction by training network to ignore noise. Autoencoders play an important role in unsupervised learning and deep architectures mainly for transfer learning (Pierre. B, 2012). When autoencoders are decoded, they are simple linear circuits that transforms inputs to outputs with least distortion. Autoencoders were first introduced in 1980’s to address the issue of back propagation without training and rather use input as a teacher (Rumelhart et al., 1986). Since then, autoencoders have taken a phase change to the form on Restricted Boltzman Machine. Today, autoencoders are used in various applications such as predicting sentiment distributions in Natural Language Processing (NLP) (Socher et al., 2011a) (Socher et al., 2011b), feature extraction (Masci et al., 2011), anomaly detection (Sakurada et al., 2014), facial recognition (Gao et al., 2015), clustering (Dilokthanakul et al., 2016), image classification (Geng et al., 2015) and many other application.
Image: Simple auto encoder representation
In today’s tutorial, I will go over on how to use auto encoders for anomaly detection in predictive maintenance.
Load Libraries
You will need only two libraries for this analysis.
options(warn=-1) # load libraries library(dplyr) library(h2o)
Load data
Here we are using data from a bench press and can be downloaded from my github repo. This is an experimental data I generated in a lab for my PhD dissertation. There are total of four different states in this machine and they are split into four different csv files. We need to load the data first. In the data time represents the time between samples, ax is the acceleration on x axis, ay is the acceleration on y axis, az is the acceleration on z axis and at is the G’s. The data was collected at sample rate of 100hz.
Four different states of the machine were collected
1. Nothing attached to drill press
2. Wooden base attached to drill press
3. Imbalance created by adding weight to one end of wooden base
4. Imbalance created by adding weight to two ends of wooden base.
setwd("/home/") #read csv files file1 = read.csv("dry run.csv", sep=",", header =T) file2 = read.csv("base.csv", sep=",", header =T) file3 = read.csv("imbalance 1.csv", sep=",", header =T) file4 = read.csv("imbalance 2.csv", sep=",", header =T) head(file1)
We can look at the summary of each file using summary function in R. Below, we can observe that 66 seconds long data is available. We also have min, max and mean for each of the variables.
# summary of each file summary(file2)
time ax ay az Min. : 0.004 Min. :-1.402700 Min. :-1.693300 Min. :-3.18930 1st Qu.: 27.005 1st Qu.:-0.311100 1st Qu.:-0.429600 1st Qu.:-0.57337 Median : 54.142 Median : 0.015100 Median :-0.010700 Median :-0.11835 Mean : 54.086 Mean : 0.005385 Mean :-0.002534 Mean :-0.09105 3rd Qu.: 81.146 3rd Qu.: 0.314800 3rd Qu.: 0.419475 3rd Qu.: 0.34815 Max. :108.127 Max. : 1.771900 Max. : 1.515600 Max. : 5.04610 aT Min. :0.0360 1st Qu.:0.6270 Median :0.8670 Mean :0.9261 3rd Qu.:1.1550 Max. :5.2950
Data Aggregation and feature extraction
Here, the data is aggregated by 1 minute and features are extracted. Features are extracted to reduce the size of the data and only storing the representation of the data.
file1$group = as.factor(round(file1$time)) file2$group = as.factor(round(file2$time)) file3$group = as.factor(round(file3$time)) file4$group = as.factor(round(file4$time)) #(file1,20) #list of all files files = list(file1, file2, file3, file4) #loop through all files and combine features = NULL for (i in 1:4){ res = files[[i]] %>% group_by(group) %>% summarize(ax_mean = mean(ax), ax_sd = sd(ax), ax_min = min(ax), ax_max = max(ax), ax_median = median(ax), ay_mean = mean(ay), ay_sd = sd(ay), ay_min = min(ay), ay_may = max(ay), ay_median = median(ay), az_mean = mean(az), az_sd = sd(az), az_min = min(az), az_maz = max(az), az_median = median(az), aT_mean = mean(aT), aT_sd = sd(aT), aT_min = min(aT), aT_maT = max(aT), aT_median = median(aT) ) features = rbind(features, res) } #view all features head(features)
Create Train and Test Set
To build an anomaly detection model, a train and test set is required. Here, the normal condition of the data is used for training and remaining is used for testing.
# create train and test set train = features[1:67,2:ncol(features)] test = features[68:nrow(features),2:ncol(features)]
Auto Encoders
Auto Encoders using H2O package
Use the h2o.init()method to initialize H2O. This method accepts the following options. Note: that in most cases, simply using h2o.init() is all that a user is required to do.
# initialize h2o cluser h2o.init()
The R object to be converted to an H2O object should be named so that it can be used in subsequent analysis. Also note that the R object is converted to a parsed H2O data object, and will be treated as a data frame by H2O in subsequent analysis.
# convert train and test to h2o object train_h2o = as.h2o(train) test_h2o = as.h2o(test)
The h2o.deeplearning function fits H2O’s Deep Learning models from within R..
# build auto encoder model with 3 layers model_unsup = h2o.deeplearning(x = 2:ncol(features) , training_frame = train_h2o , model_id = "Test01" , autoencoder = TRUE , reproducible = TRUE , ignore_const_cols = FALSE , seed = 42 , hidden = c(50,10,50,100,100) , epochs = 100 , activation ="Tanh") # view the model model_unsup
Model Details: ============== H2OAutoEncoderModel: deeplearning Model ID: Test01 Status of Neuron Layers: auto-encoder, gaussian distribution, Quadratic loss, 19,179 weights/biases, 236.0 KB, 2,546 training samples, mini-batch size 1 layer units type dropout l1 l2 mean_rate rate_rms momentum 1 1 19 Input 0.00 % NA NA NA NA NA 2 2 50 Tanh 0.00 % 0.000000 0.000000 0.029104 0.007101 0.000000 3 3 10 Tanh 0.00 % 0.000000 0.000000 0.021010 0.006320 0.000000 4 4 50 Tanh 0.00 % 0.000000 0.000000 0.024570 0.006848 0.000000 5 5 100 Tanh 0.00 % 0.000000 0.000000 0.052482 0.018357 0.000000 6 6 100 Tanh 0.00 % 0.000000 0.000000 0.052677 0.021417 0.000000 7 7 19 Tanh NA 0.000000 0.000000 0.025557 0.009494 0.000000 mean_weight weight_rms mean_bias bias_rms 1 NA NA NA NA 2 0.000069 0.180678 0.001542 0.017311 3 0.000008 0.187546 -0.000435 0.011542 4 0.011644 0.184633 0.000371 0.006443 5 0.000063 0.113350 -0.000964 0.008983 6 0.000581 0.100150 0.001003 0.013848 7 -0.001349 0.121616 0.006549 0.012720 H2OAutoEncoderMetrics: deeplearning ** Reported on training data. ** Training Set Metrics: ===================== MSE: (Extract with `h2o.mse`) 0.005829827 RMSE: (Extract with `h2o.rmse`) 0.0763533
Detect anomalies in an H2O data set using an H2O deep learning model with auto-encoding trained previously.
# now we need to calculate MSE or anomaly score anmlt = h2o.anomaly(model_unsup , train_h2o , per_feature = FALSE) %>% as.data.frame() # create a label for healthy data anmlt$y = 0 # view top data head(anmlt)
Calculate the threshold value for trainanomaly scores. Various methods can be used such as calculating the quantiles, max, median, min etc. It all depends on the use case. Here we will use quantile with probability of 99.9%.
# calculate thresholds from train data threshold = quantile(anmlt$Reconstruction.MSE, probs = 0.999)
Now, we have anomaly score for train and its thresholds, we can predict the new anomaly scores for test data and plot it to see how it differs from train data.
# calculate anomaly scores for test data test_anmlt = h2o.anomaly(model_unsup , test_h2o , per_feature = FALSE) %>% as.data.frame() # create a label for healthy data test_anmlt$y = 1
# combine the train and test anomaly scores for visulaizatio results = data.frame(rbind(anmlt,test_anmlt), threshold) head(results)
The results are plotted below. The x axis is the observations and y axis is the anomaly score. The green points are the trained data and red are test data. We can note that all the data that was trained except one lied below the anomaly limit. Its also interesting to note the increasing trend pattern for the anomaly scores for other state of the machine.
# Adjust plot sizes options(repr.plot.width = 15, repr.plot.height = 6) plot(results$Reconstruction.MSE, type = 'n', xlab='observations', ylab='Reconstruction.MSE', main = "Anomaly Detection Results") points(results$Reconstruction.MSE, pch=19, col=ifelse(results$Reconstruction.MSE < threshold, "green", "red")) abline(h=threshold, col='red', lwd=2)
Conclusion
Auto encoder is a very powerful tool and very fun to play with. They have been used in image analysis, image reconstruction and image colorization. In this tutorial you have seen how to perform anomaly detection on a simple signal data and few lines of code. The possibilities of using this are many. Let me know what you think about auto encoders in the comments below.
Follow my work
Github, Researchgate, and LinkedIn
Session info
Below is the session info for the the packages and their versions used in this analysis.
sessionInfo()
R version 3.3.3 (2017-03-06) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Debian GNU/Linux 9 (stretch) locale: [1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8 [4] LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8 [7] LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C [10] LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] h2o_3.26.0.2 dplyr_0.8.3 loaded via a namespace (and not attached): [1] Rcpp_1.0.2 magrittr_1.5 tidyselect_0.2.5 [4] uuid_0.1-2 R6_2.4.0 rlang_0.4.0 [7] tools_3.3.3 htmltools_0.3.6 assertthat_0.2.1 [10] digest_0.6.20 tibble_2.1.3 crayon_1.3.4 [13] IRdisplay_0.7.0 purrr_0.3.2 repr_1.0.1 [16] base64enc_0.1-3 vctrs_0.2.0 bitops_1.0-6 [19] RCurl_1.95-4.12 IRkernel_1.0.2.9000 zeallot_0.1.0 [22] glue_1.3.1 evaluate_0.14 pbdZMQ_0.3-3 [25] pillar_1.4.2 backports_1.1.4 jsonlite_1.6 [28] pkgconfig_2. | https://www.r-bloggers.com/auto-encoders-for-anomaly-detection-in-predictive-maintenance/ | CC-MAIN-2020-29 | refinedweb | 1,767 | 61.33 |
- remove changelog from the file. we don't care about ancient history- update copyright year- update version- constify some stuff- empty lines removal- unused variables and macros removal- remove some asm/ includes, they are sucked by linux/ variantsSigned-off-by: Jiri Slaby <jirislaby@gmail.com>--- drivers/char/cyclades.c | 600 +---------------------------------------------- 1 files changed, 13 insertions(+), 587 deletions(-)diff --git a/drivers/char/cyclades.c b/drivers/char/cyclades.cindex 67f1739..1510506 100644--- a/drivers/char/cyclades.c+++ b/drivers/char/cyclades.c@@ -11,7 +11,7 @@ * Initially written by Randolph Bentson <bentson@grieg.seaslug.org>. * Modified and maintained by Marcio Saito <marcio@cyclades.com>. *- * Copyright (C) 2007 Jiri Slaby <jirislaby@gmail.com>+ * Copyright (C) 2007-2009 Jiri Slaby <jirislaby@gmail.com> * * Much of the design and some of the code came from serial.c * which was copyright (C) 1991, 1992 Linus Torvalds. It was@@ -19,577 +19,9 @@ * and then fixed as suggested by Michael K. Johnson 12/12/92. * Converted to pci probing and cleaned up by Jiri Slaby. *- * This version supports shared IRQ's (only for PCI boards).- *- * Prevent users from opening non-existing Z ports.- *- * Revision 2.3.2.8 2000/07/06 18:14:16 ivan- * Fixed the PCI detection function to work properly on Alpha systems.- * Implemented support for TIOCSERGETLSR ioctl.- * Implemented full support for non-standard baud rates.- *- * Revision 2.3.2.7 2000/06/01 18:26:34 ivan- * Request PLX I/O region, although driver doesn't use it, to avoid- * problems with other drivers accessing it.- * Removed count for on-board buffer characters in cy_chars_in_buffer- * (Cyclades-Z only).- *- * Revision 2.3.2.6 2000/05/05 13:56:05 ivan- * Driver now reports physical instead of virtual memory addresses.- * Masks were added to some Cyclades-Z read accesses.- * Implemented workaround for PLX9050 bug that would cause a system lockup- * in certain systems, depending on the MMIO addresses allocated to the- * board.- * Changed the Tx interrupt programming in the CD1400 chips to boost up- * performance (Cyclom-Y only).- * Code is now compliant with the new module interface (module_[init|exit]).- * Make use of the PCI helper functions to access PCI resources.- * Did some code "housekeeping".- *- * Revision 2.3.2.5 2000/01/19 14:35:33 ivan- * Fixed bug in cy_set_termios on CRTSCTS flag turnoff.- *- * Revision 2.3.2.4 2000/01/17 09:19:40 ivan- * Fixed SMP locking in Cyclom-Y interrupt handler.- *- * Revision 2.3.2.3 1999/12/28 12:11:39 ivan- * Added a new cyclades_card field called nports to allow the driver to- * know the exact number of ports found by the Z firmware after its load;- * RX buffer contention prevention logic on interrupt op mode revisited- * (Cyclades-Z only);- * Revisited printk's for Z debug;- * Driver now makes sure that the constant SERIAL_XMIT_SIZE is defined;- *- * Revision 2.3.2.2 1999/10/01 11:27:43 ivan- * Fixed bug in cyz_poll that would make all ports but port 0- * unable to transmit/receive data (Cyclades-Z only);- * Implemented logic to prevent the RX buffer from being stuck with data- * due to a driver / firmware race condition in interrupt op mode- * (Cyclades-Z only);- * Fixed bug in block_til_ready logic that would lead to a system crash;- * Revisited cy_close spinlock usage;- *- * Revision 2.3.2.1 1999/09/28 11:01:22 ivan- * Revisited CONFIG_PCI conditional compilation for PCI board support;- * Implemented TIOCGICOUNT and TIOCMIWAIT ioctl support;- * _Major_ cleanup on the Cyclades-Z interrupt support code / logic;- * Removed CTS handling from the driver -- this is now completely handled- * by the firmware (Cyclades-Z only);- * Flush RX on-board buffers on a port open (Cyclades-Z only);- * Fixed handling of ASYNC_SPD_* TTY flags;- * Module unload now unmaps all memory area allocated by ioremap;- *- * Revision 2.3.1.1 1999/07/15 16:45:53 ivan- * Removed CY_PROC conditional compilation;- * Implemented SMP-awareness for the driver;- * Implemented a new ISA IRQ autoprobe that uses the irq_probe_[on|off]- * functions;- * The driver now accepts memory addresses (maddr=0xMMMMM) and IRQs- * (irq=NN) as parameters (only for ISA boards);- * Fixed bug in set_line_char that would prevent the Cyclades-Z- * ports from being configured at speeds above 115.2Kbps;- * Fixed bug in cy_set_termios that would prevent XON/XOFF flow control- * switching from working properly;- * The driver now only prints IRQ info for the Cyclades-Z if it's- * configured to work in interrupt mode;- *- * Revision 2.2.2.3 1999/06/28 11:13:29 ivan- * Added support for interrupt mode operation for the Z cards;- * Removed the driver inactivity control for the Z;- * Added a missing MOD_DEC_USE_COUNT in the cy_open function for when- * the Z firmware is not loaded yet;- * Replaced the "manual" Z Tx flush buffer by a call to a FW command of- * same functionality;- * Implemented workaround for IRQ setting loss on the PCI configuration- * registers after a PCI bridge EEPROM reload (affects PLX9060 only);- *- * Revision 2.2.2.2 1999/05/14 17:18:15 ivan- * /proc entry location changed to /proc/tty/driver/cyclades;- * Added support to shared IRQ's (only for PCI boards);- * Added support for Cobalt Qube2 systems;- * IRQ [de]allocation scheme revisited;- * BREAK implementation changed in order to make use of the 'break_ctl'- * TTY facility;- * Fixed typo in TTY structure field 'driver_name';- * Included a PCI bridge reset and EEPROM reload in the board- * initialization code (for both Y and Z series).- *- * Revision 2.2.2.1 1999/04/08 16:17:43 ivan- * Fixed a bug in cy_wait_until_sent that was preventing the port to be- * closed properly after a SIGINT;- * Module usage counter scheme revisited;- * Added support to the upcoming Y PCI boards (i.e., support to additional- * PCI Device ID's).- *- * Revision 2.2.1.10 1999/01/20 16:14:29 ivan- * Removed all unnecessary page-alignement operations in ioremap calls- * (ioremap is currently safe for these operations).- *- * Revision 2.2.1.9 1998/12/30 18:18:30 ivan- * Changed access to PLX PCI bridge registers from I/O to MMIO, in- * order to make PLX9050-based boards work with certain motherboards.- *- * Revision 2.2.1.8 1998/11/13 12:46:20 ivan- * cy_close function now resets (correctly) the tty->closing flag;- * JIFFIES_DIFF macro fixed.- *- * Revision 2.2.1.7 1998/09/03 12:07:28 ivan- * Fixed bug in cy_close function, which was not informing HW of- * which port should have the reception disabled before doing so;- * fixed Cyclom-8YoP hardware detection bug.- *- * Revision 2.2.1.6 1998/08/20 17:15:39 ivan- * Fixed bug in cy_close function, which causes malfunction- * of one of the first 4 ports when a higher port is closed- * (Cyclom-Y only).- *- * Revision 2.2.1.5 1998/08/10 18:10:28 ivan- * Fixed Cyclom-4Yo hardware detection bug.- *- * Revision 2.2.1.4 1998/08/04 11:02:50 ivan- * /proc/cyclades implementation with great collaboration of- * Marc Lewis <marc@blarg.net>;- * cyy_interrupt was changed to avoid occurrence of kernel oopses- * during PPP operation.- *- * Revision 2.2.1.3 1998/06/01 12:09:10 ivan- * General code review in order to comply with 2.1 kernel standards;- * data loss prevention for slow devices revisited (cy_wait_until_sent- * was created);- * removed conditional compilation for new/old PCI structure support- * (now the driver only supports the new PCI structure).- *- * Revision 2.2.1.1 1998/03/19 16:43:12 ivan- * added conditional compilation for new/old PCI structure support;- * removed kernel series (2.0.x / 2.1.x) conditional compilation.- *- * Revision 2.1.1.3 1998/03/16 18:01:12 ivan- * cleaned up the data loss fix;- * fixed XON/XOFF handling once more (Cyclades-Z);- * general review of the driver routines;- * introduction of a mechanism to prevent data loss with slow- * printers, by forcing a delay before closing the port.- *- * Revision 2.1.1.2 1998/02/17 16:50:00 ivan- * fixed detection/handling of new CD1400 in Ye boards;- * fixed XON/XOFF handling (Cyclades-Z);- * fixed data loss caused by a premature port close;- * introduction of a flag that holds the CD1400 version ID per port- * (used by the CYGETCD1400VER new ioctl).- *- * Revision 2.1.1.1 1997/12/03 17:31:19 ivan- * Code review for the module cleanup routine;- * fixed RTS and DTR status report for new CD1400's in get_modem_info;- * includes anonymous changes regarding signal_pending.- *- * Revision 2.1 1997/11/01 17:42:41 ivan- * Changes in the driver to support Alpha systems (except 8Zo V_1);- * BREAK fix for the Cyclades-Z boards;- * driver inactivity control by FW implemented;- * introduction of flag that allows driver to take advantage of- * a special CD1400 feature related to HW flow control;- * added support for the CD1400 rev. J (Cyclom-Y boards);- * introduction of ioctls to:- * - control the rtsdtr_inv flag (Cyclom-Y);- * - control the rflow flag (Cyclom-Y);- * - adjust the polling interval (Cyclades-Z);- *- * Revision 1.36.4.33 1997/06/27 19:00:00 ivan- * Fixes related to kernel version conditional- * compilation.- *- * Revision 1.36.4.32 1997/06/14 19:30:00 ivan- * Compatibility issues between kernels 2.0.x and- * 2.1.x (mainly related to clear_bit function).- *- * Revision 1.36.4.31 1997/06/03 15:30:00 ivan- * Changes to define the memory window according to the- * board type.- *- * Revision 1.36.4.30 1997/05/16 15:30:00 daniel- * Changes to support new cycladesZ boards.- *- * Revision 1.36.4.29 1997/05/12 11:30:00 daniel- * Merge of Bentson's and Daniel's version 1.36.4.28.- * Corrects bug in cy_detect_pci: check if there are more- * ports than the number of static structs allocated.- * Warning message during initialization if this driver is- * used with the new generation of cycladesZ boards. Those- * will be supported only in next release of the driver.- * Corrects bug in cy_detect_pci and cy_detect_isa that- * returned wrong number of VALID boards, when a cyclomY- * was found with no serial modules connected.- * Changes to use current (2.1.x) kernel subroutine names- * and created macros for compilation with 2.0.x kernel,- * instead of the other way around.- *- * Revision 1.36.4.28 1997/05/?? ??:00:00 bentson- * Change queue_task_irq_off to queue_task_irq.- * The inline function queue_task_irq_off (tqueue.h)- * was removed from latest releases of 2.1.x kernel.- * Use of macro __init to mark the initialization- * routines, so memory can be reused.- * Also incorporate implementation of critical region- * in function cleanup_module() created by anonymous- * linuxer.- *- * Revision 1.36.4.28 1997/04/25 16:00:00 daniel- * Change to support new firmware that solves DCD problem:- * application could fail to receive SIGHUP signal when DCD- * varying too fast.- *- * Revision 1.36.4.27 1997/03/26 10:30:00 daniel- * Changed for support linux versions 2.1.X.- * Backward compatible with linux versions 2.0.X.- * Corrected illegal use of filler field in- * CH_CTRL struct.- * Deleted some debug messages.- *- * Revision 1.36.4.26 1997/02/27 12:00:00 daniel- * Included check for NULL tty pointer in cyz_poll.- *- * Revision 1.36.4.25 1997/02/26 16:28:30 bentson- * Bill Foster at Blarg! Online services noticed that- * some of the switch elements of -Z modem control- * lacked a closing "break;"- *- * Revision 1.36.4.24 1997/02/24 11:00:00 daniel- * Changed low water threshold for buffer xmit_buf- *- * Revision 1.36.4.23 1996/12/02 21:50:16 bentson- * Marcio provided fix to modem status fetch for -Z- *- * Revision 1.36.4.22 1996/10/28 22:41:17 bentson- * improve mapping of -Z control page (thanks to Steve- * Price <stevep@fa.tdktca.com> for help on this)- *- * Revision 1.36.4.21 1996/09/10 17:00:10 bentson- * shift from CPU-bound to memcopy in cyz_polling operation- *- * Revision 1.36.4.20 1996/09/09 18:30:32 Bentson- * Added support to set and report higher speeds.- *- * Revision 1.36.4.19c 1996/08/09 10:00:00 Marcio Saito- * Some fixes in the HW flow control for the BETA release.- * Don't try to register the IRQ.- *- * Revision 1.36.4.19 1996/08/08 16:23:18 Bentson- * make sure "cyc" appears in all kernel messages; all soft interrupts- * handled by same routine; recognize out-of-band reception; comment- * out some diagnostic messages; leave RTS/CTS flow control to hardware;- * fix race condition in -Z buffer management; only -Y needs to explicitly- * flush chars; tidy up some startup messages;- *- * Revision 1.36.4.18 1996/07/25 18:57:31 bentson- * shift MOD_INC_USE_COUNT location to match- * serial.c; purge some diagnostic messages;- *- * Revision 1.36.4.17 1996/07/25 18:01:08 bentson- * enable modem status messages and fetch & process them; note- * time of last activity type for each port; set_line_char now- * supports more than line 0 and treats 0 baud correctly;- * get_modem_info senses rs_status;- *- * Revision 1.36.4.16 1996/07/20 08:43:15 bentson- * barely works--now's time to turn on- * more features 'til it breaks- *- * Revision 1.36.4.15 1996/07/19 22:30:06 bentson- * check more -Z board status; shorten boot message- *- * Revision 1.36.4.14 1996/07/19 22:20:37 bentson- * fix reference to ch_ctrl in startup; verify return- * values from cyz_issue_cmd and cyz_update_channel;- * more stuff to get modem control correct;- *- * Revision 1.36.4.13 1996/07/11 19:53:33 bentson- * more -Z stuff folded in; re-order changes to put -Z stuff- * after -Y stuff (to make changes clearer)- *- * Revision 1.36.4.12 1996/07/11 15:40:55 bentson- * Add code to poll Cyclades-Z. Add code to get & set RS-232 control.- * Add code to send break. Clear firmware ID word at startup (so- * that other code won't talk to inactive board).- *- * Revision 1.36.4.11 1996/07/09 05:28:29 bentson- * add code for -Z in set_line_char- *- * Revision 1.36.4.10 1996/07/08 19:28:37 bentson- * fold more -Z stuff (or in some cases, error messages)- * into driver; add text to "don't know what to do" messages.- *- * Revision 1.36.4.9 1996/07/08 18:38:38 bentson- * moved compile-time flags near top of file; cosmetic changes- * to narrow text (to allow 2-up printing); changed many declarations- * to "static" to limit external symbols; shuffled code order to- * coalesce -Y and -Z specific code, also to put internal functions- * in order of tty_driver structure; added code to recognize -Z- * ports (and for moment, do nothing or report error); add cy_startup- * to parse boot command line for extra base addresses for ISA probes;- *- * Revision 1.36.4.8 1996/06/25 17:40:19 bentson- * reorder some code, fix types of some vars (int vs. long),- * add cy_setup to support user declared ISA addresses- *- * Revision 1.36.4.7 1996/06/21 23:06:18 bentson- * dump ioctl based firmware load (it's now a user level- * program); ensure uninitialzed ports cannot be used- *- * Revision 1.36.4.6 1996/06/20 23:17:19 bentson- * rename vars and restructure some code- *- * Revision 1.36.4.5 1996/06/14 15:09:44 bentson- * get right status back after boot load- *- * Revision 1.36.4.4 1996/06/13 19:51:44 bentson- * successfully loads firmware- *- * Revision 1.36.4.3 1996/06/13 06:08:33 bentson- * add more of the code for the boot/load ioctls- *- * Revision 1.36.4.2 1996/06/11 21:00:51 bentson- * start to add Z functionality--starting with ioctl- * for loading firmware- *- * Revision 1.36.4.1 1996/06/10 18:03:02 bentson- * added code to recognize Z/PCI card at initialization; report- * presence, but card is not initialized (because firmware needs- * to be loaded)- *- * Revision 1.36.3.8 1996/06/07 16:29:00 bentson- * starting minor number at zero; added missing verify_area- * as noted by Heiko EiÃfeldt <heiko@colossus.escape.de>- *- * Revision 1.36.3.7 1996/04/19 21:06:18 bentson- * remove unneeded boot message & fix CLOCAL hardware flow- * control (Miquel van Smoorenburg <miquels@Q.cistron.nl>);- * remove unused diagnostic statements; minor 0 is first;- *- * Revision 1.36.3.6 1996/03/13 13:21:17 marcio- * The kernel function vremap (available only in later 1.3.xx kernels)- * allows the access to memory addresses above the RAM. This revision- * of the driver supports PCI boards below 1Mb (device id 0x100) and- * above 1Mb (device id 0x101).- *- * Revision 1.36.3.5 1996/03/07 15:20:17 bentson- * Some global changes to interrupt handling spilled into- * this driver--mostly unused arguments in system function- * calls. Also added change by Marcio Saito which should- * reduce lost interrupts at startup by fast processors.- *- * Revision 1.36.3.4 1995/11/13 20:45:10 bentson- * Changes by Corey Minyard <minyard@wf-rch.cirr.com> distributed- * in 1.3.41 kernel to remove a possible race condition, extend- * some error messages, and let the driver run as a loadable module- * Change by Alan Wendt <alan@ez0.ezlink.com> to remove a- * possible race condition.- * Change by Marcio Saito <marcio@cyclades.com> to fix PCI addressing.- *- * Revision 1.36.3.3 1995/11/13 19:44:48 bentson- * Changes by Linus Torvalds in 1.3.33 kernel distribution- * required due to reordering of driver initialization.- * Drivers are now initialized *after* memory management.- *- * Revision 1.36.3.2 1995/09/08 22:07:14 bentson- * remove printk from ISR; fix typo- *- * Revision 1.36.3.1 1995/09/01 12:00:42 marcio- * Minor fixes in the PCI board support. PCI function calls in- * conditional compilation (CONFIG_PCI). Thanks to Jim Duncan- * <duncan@okay.com>. "bad serial count" message removed.- *- * Revision 1.36.3 1995/08/22 09:19:42 marcio- * Cyclom-Y/PCI support added. Changes in the cy_init routine and- * board initialization. Changes in the boot messages. The driver- * supports up to 4 boards and 64 ports by default.- *- * Revision 1.36.1.4 1995/03/29 06:14:14 bentson- * disambiguate between Cyclom-16Y and Cyclom-32Ye;- *- * Revision 1.36.1.3 1995/03/23 22:15:35 bentson- * add missing break in modem control block in ioctl switch statement- * (discovered by Michael Edward Chastain <mec@jobe.shell.portal.com>);- *- * Revision 1.36.1.2 1995/03/22 19:16:22 bentson- * make sure CTS flow control is set as soon as possible (thanks- * to note from David Lambert <lambert@chesapeake.rps.slb.com>);- *- * Revision 1.36.1.1 1995/03/13 15:44:43 bentson- * initialize defaults for receive threshold and stale data timeout;- * cosmetic changes;- *- * Revision 1.36 1995/03/10 23:33:53 bentson- * added support of chips 4-7 in 32 port Cyclom-Ye;- * fix cy_interrupt pointer dereference problem- * (Joe Portman <baron@aa.net>);- * give better error response if open is attempted on non-existent port- * (Zachariah Vaum <jchryslr@netcom.com>);- * correct command timeout (Kenneth Lerman <lerman@@seltd.newnet.com>);- * conditional compilation for -16Y on systems with fast, noisy bus;- * comment out diagnostic print function;- * cleaned up table of base addresses;- * set receiver time-out period register to correct value,- * set receive threshold to better default values,- * set chip timer to more accurate 200 Hz ticking,- * add code to monitor and modify receive parameters- * (Rik Faith <faith@cs.unc.edu> Nick Simicich- * <njs@scifi.emi.net>);- *- * Revision 1.35 1994/12/16 13:54:18 steffen- * additional patch by Marcio Saito for board detection- * Accidently left out in 1.34- *- * Revision 1.34 1994/12/10 12:37:12 steffen- * This is the corrected version as suggested by Marcio Saito- *- * Revision 1.33 1994/12/01 22:41:18 bentson- * add hooks to support more high speeds directly; add tytso- * patch regarding CLOCAL wakeups- *- * Revision 1.32 1994/11/23 19:50:04 bentson- * allow direct kernel control of higher signalling rates;- * look for cards at additional locations- *- * Revision 1.31 1994/11/16 04:33:28 bentson- * ANOTHER fix from Corey Minyard, minyard@wf-rch.cirr.com--- * a problem in chars_in_buffer has been resolved by some- * small changes; this should yield smoother output- *- * Revision 1.30 1994/11/16 04:28:05 bentson- * Fix from Corey Minyard, Internet: minyard@metronet.com,- * UUCP: minyard@wf-rch.cirr.com, WORK: minyardbnr.ca, to- * cy_hangup that appears to clear up much (all?) of the- * DTR glitches; also he's added/cleaned-up diagnostic messages- *- * Revision 1.29 1994/11/16 04:16:07 bentson- * add change proposed by Ralph Sims, ralphs@halcyon.com, to- * operate higher speeds in same way as other serial ports;- * add more serial ports (for up to two 16-port muxes).- *- * Revision 1.28 1994/11/04 00:13:16 root- * turn off diagnostic messages- *- * Revision 1.27 1994/11/03 23:46:37 root- * bunch of changes to bring driver into greater conformance- * with the serial.c driver (looking for missed fixes)- *- * Revision 1.26 1994/11/03 22:40:36 root- * automatic interrupt probing fixed.- *- * Revision 1.25 1994/11/03 20:17:02 root- * start to implement auto-irq- *- * Revision 1.24 1994/11/03 18:01:55 root- * still working on modem signals--trying not to drop DTR- * during the getty/login processes- *- * Revision 1.23 1994/11/03 17:51:36 root- * extend baud rate support; set receive threshold as function- * of baud rate; fix some problems with RTS/CTS;- *- * Revision 1.22 1994/11/02 18:05:35 root- * changed arguments to udelay to type long to get- * delays to be of correct duration- *- * Revision 1.21 1994/11/02 17:37:30 root- * employ udelay (after calibrating loops_per_second earlier- * in init/main.c) instead of using home-grown delay routines- *- * Revision 1.20 1994/11/02 03:11:38 root- * cy_chars_in_buffer forces a return value of 0 to let- * login work (don't know why it does); some functions- * that were returning EFAULT, now executes the code;- * more work on deciding when to disable xmit interrupts;- *- * Revision 1.19 1994/11/01 20:10:14 root- * define routine to start transmission interrupts (by enabling- * transmit interrupts); directly enable/disable modem interrupts;- *- * Revision 1.18 1994/11/01 18:40:45 bentson- * Don't always enable transmit interrupts in startup; interrupt on- * TxMpty instead of TxRdy to help characters get out before shutdown;- * restructure xmit interrupt to check for chars first and quit if- * none are ready to go; modem status (MXVRx) is upright, _not_ inverted- * (to my view);- *- * Revision 1.17 1994/10/30 04:39:45 bentson- * rename serial_driver and callout_driver to cy_serial_driver and- * cy_callout_driver to avoid linkage interference; initialize- * info->type to PORT_CIRRUS; ruggedize paranoia test; elide ->port- * from cyclades_port structure; add paranoia check to cy_close;- *- * Revision 1.16 1994/10/30 01:14:33 bentson- * change major numbers; add some _early_ return statements;- *- * Revision 1.15 1994/10/29 06:43:15 bentson- * final tidying up for clean compile; enable some error reporting- *- * Revision 1.14 1994/10/28 20:30:22 Bentson- * lots of changes to drag the driver towards the new tty_io- * structures and operation. not expected to work, but may- * compile cleanly.- *- * Revision 1.13 1994/07/21 23:08:57 Bentson- * add some diagnostic cruft; support 24 lines (for testing- * both -8Y and -16Y cards; be more thorough in servicing all- * chips during interrupt; add "volatile" a few places to- * circumvent compiler optimizations; fix base & offset- * computations in block_til_ready (was causing chip 0 to- * stop operation)- *- * Revision 1.12 1994/07/19 16:42:11 Bentson- * add some hackery for kernel version 1.1.8; expand- * error messages; refine timing for delay loops and- * declare loop params volatile- *- * Revision 1.11 1994/06/11 21:53:10 bentson- * get use of save_car right in transmit interrupt service- *- * Revision 1.10.1.1 1994/06/11 21:31:18 bentson- * add some diagnostic printing; try to fix save_car stuff- *- * Revision 1.10 1994/06/11 20:36:08 bentson- * clean up compiler warnings- *- * Revision 1.9 1994/06/11 19:42:46 bentson- * added a bunch of code to support modem signalling- *- * Revision 1.8 1994/06/11 17:57:07 bentson- * recognize break & parity error- *- * Revision 1.7 1994/06/05 05:51:34 bentson- * Reorder baud table to be monotonic; add cli to CP; discard- * incoming characters and status if the line isn't open; start to- * fold code into cy_throttle; start to port get_serial_info,- * set_serial_info, get_modem_info, set_modem_info, and send_break- * from serial.c; expand cy_ioctl; relocate and expand config_setup;- * get flow control characters from tty struct; invalidate ports w/o- * hardware;- *- * Revision 1.6 1994/05/31 18:42:21 bentson- * add a loop-breaker in the interrupt service routine;- * note when port is initialized so that it can be shut- * down under the right conditions; receive works without- * any obvious errors- *- * Revision 1.5 1994/05/30 00:55:02 bentson- * transmit works without obvious errors- *- * Revision 1.4 1994/05/27 18:46:27 bentson- * incorporated more code from lib_y.c; can now print short- * strings under interrupt control to port zero; seems to- * select ports/channels/lines correctly- *- * Revision 1.3 1994/05/25 22:12:44 bentson- * shifting from multi-port on a card to proper multiplexor- * data structures; added skeletons of most routines- *- * Revision 1.2 1994/05/19 13:21:43 bentson- * start to crib from other sources- * */ -#define CY_VERSION "2.5"+#define CY_VERSION "2.6" /* If you need to install more boards than NR_CARDS, change the constant in the definition below. No other change is necessary to support up to@@ -647,9 +79,7 @@ #include <linux/firmware.h> #include <linux/device.h> -#include <asm/system.h> #include <linux/io.h>-#include <asm/irq.h> #include <linux/uaccess.h> #include <linux/kernel.h>@@ -665,7 +95,6 @@ static void cy_send_xchar(struct tty_struct *tty, char ch); #ifndef SERIAL_XMIT_SIZE #define SERIAL_XMIT_SIZE (min(PAGE_SIZE, 4096)) #endif-#define WAKEUP_CHARS 256 #define STD_COM_FLAGS (0) @@ -715,7 +144,7 @@ static struct tty_driver *cy_serial_driver; causing problems, remove the offending address from this table. */ -static unsigned int cy_isa_addresses[] = {+static const unsigned int cy_isa_addresses[] = { 0xD0000, 0xD2000, 0xD4000,@@ -755,25 +184,25 @@ static int cy_next_channel; /* next minor available */ * HI VHI * 20 */-static int baud_table[] = {+static const int baud_table[] = { 0, 50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800, 9600, 19200, 38400, 57600, 76800, 115200, 150000, 230400, 0 }; -static char baud_co_25[] = { /* 25 MHz clock option table */+static const char baud_co_25[] = { /* 25 MHz clock option table */ /* value => 00 01 02 03 04 */ /* divide by 8 32 128 512 2048 */ 0x00, 0x04, 0x04, 0x04, 0x04, 0x04, 0x03, 0x03, 0x03, 0x02, 0x02, 0x02, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; -static char baud_bpr_25[] = { /* 25 MHz baud rate period table */+static const char baud_bpr_25[] = { /* 25 MHz baud rate period table */ 0x00, 0xf5, 0xa3, 0x6f, 0x5c, 0x51, 0xf5, 0xa3, 0x51, 0xa3, 0x6d, 0x51, 0xa3, 0x51, 0xa3, 0x51, 0x36, 0x29, 0x1b, 0x15 }; -static char baud_co_60[] = { /* 60 MHz clock option table (CD1400 J) */+static const char baud_co_60[] = { /* 60 MHz clock option table (CD1400 J) */ /* value => 00 01 02 03 04 */ /* divide by 8 32 128 512 2048 */ 0x00, 0x00, 0x00, 0x04, 0x04, 0x04, 0x04, 0x04, 0x03, 0x03,@@ -781,13 +210,13 @@ static char baud_co_60[] = { /* 60 MHz clock option table (CD1400 J) */ 0x00 }; -static char baud_bpr_60[] = { /* 60 MHz baud rate period table (CD1400 J) */+static const char baud_bpr_60[] = { /* 60 MHz baud rate period table (CD1400 J) */ 0x00, 0x82, 0x21, 0xff, 0xdb, 0xc3, 0x92, 0x62, 0xc3, 0x62, 0x41, 0xc3, 0x62, 0xc3, 0x62, 0xc3, 0x82, 0x62, 0x41, 0x32, 0x21 }; -static char baud_cor3[] = { /* receive threshold */+static const char baud_cor3[] = { /* receive threshold */09, 0x09, 0x08, 0x08, 0x08, 0x08, 0x07, 0x07@@ -804,7 +233,7 @@ static char baud_cor3[] = { /* receive threshold */ * cables. */ -static char rflow_thr[] = { /* rflow threshold */+static const char rflow_thr[] = { /* rflow threshold */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a, 0x0a, 0x0a, 0x0a, 0x0a, 0x0a, 0x0a, 0x0a@@ -826,7 +255,7 @@ static const unsigned int cy_chip_offset[] = { 0x0000, /* PCI related definitions */ #ifdef CONFIG_PCI-static struct pci_device_id cy_pci_dev_id[] __devinitdata = {+static const struct pci_device_id cy_pci_dev_id[] = { /* PCI < 1Mb */ { PCI_DEVICE(PCI_VENDOR_ID_CYCLADES, PCI_DEVICE_ID_CYCLOM_Y_Lo) }, /* PCI > 1Mb */@@ -892,7 +321,7 @@ static inline bool cyz_is_loaded(struct cyclades_card *card) } static inline int serial_paranoia_check(struct cyclades_port *info,- char *name, const char *routine)+ const char *name, const char *routine) { #ifdef SERIAL_PARANOIA_CHECK if (!info) {@@ -908,7 +337,7 @@ static inline int serial_paranoia_check(struct cyclades_port *info, } #endif return 0;-} /* serial_paranoia_check */+} /***********************************************************/ /********* Start of block of Cyclom-Y specific code ********/@@ -3029,11 +2458,9 @@ cy_set_serial_info(struct cyclades_port *info, struct tty_struct *tty, struct serial_struct __user *new_info) { struct serial_struct new_serial;- struct cyclades_port old_info; if (copy_from_user(&new_serial, new_info, sizeof(new_serial))) return -EFAULT;- old_info = *info; if (!capable(CAP_SYS_ADMIN)) { if (new_serial.close_delay != info->port.close_delay ||@@ -3375,7 +2802,6 @@ static int cy_break(struct tty_struct *tty, int break_state) static int get_mon_info(struct cyclades_port *info, struct cyclades_monitor __user *mon) {- if (copy_to_user(mon, &info->mon, sizeof(struct cyclades_monitor))) return -EFAULT; info->mon.int_count = 0;-- 1.6.3.2--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2009/6/18/78 | CC-MAIN-2016-18 | refinedweb | 4,835 | 53.51 |
03 November 2010 08:01 [Source: ICIS news]
SINGAPORE (ICIS)--Thai Caprolactam, a subsidiary of Japan-based Ube Industries, will halt exports of caprolactam once its downstream nylon plant in Rayong starts up next month following an expansion, a company source said on Wednesday.
The company’s 110,000 tonne/year caprolactam plant on the same site was currently operating at 100%, he added.
“There will be zero exports because the production will be kept for the downstream plant,” the source said.
The nylon plant has been expanded to 75,000 tonne/year from 25,000 tonne/year previously, he said.
“The (nylon) plant will start up in the second week of November,” the source added.
Caprolactam is primarily used in producing nylon.
With Thai Caprolactam ceasing exports, caprolactam supply will get even tighter in ?xml:namespace>
Besides a peak turnaround season, caprolactam prices rose to $2,730-2,740/tonne CFR NE Asia this week from $2,680-2,720/tonne (€1,929-1,958/tonne) CFR (cost and freight) NE (northeast) Asia last week, due to strong demand during nylon manufacturing season, | http://www.icis.com/Articles/2010/11/03/9406838/thai-caprolactam-to-halt-exports-on-expanded-nylon-plant-startup.html | CC-MAIN-2015-06 | refinedweb | 184 | 59.53 |
In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.
Code words in text are shown as follows: "You can use generic variables, which are
EphesoftBatchID and
EphesoftDOCID."
A block of code is set as follows:
import com.ephesoft.dcma.da.id.BatchInstanceID; public interface SamplePluginService { void sampleMethod(BatchInstanceID batchInstanceID, final String pluginWorkflow) throws Exception; }
New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "Administrators can use the Up and Down buttons to reorder the ...
No credit card required | https://www.oreilly.com/library/view/intelligent-document-capture/9781783558582/pr07s04.html | CC-MAIN-2019-30 | refinedweb | 121 | 54.83 |
Apache::App::Mercury::Controller - Example Controller class
This is simply a skeleton class which illustrates how a controller should interact with Apache::App::Mercury. Please look at the code to see how it should contruct and initialize an Apache::App::Mercury object, run its main content handler, and then cleanup non-persistent instance variables on completion. It does not illustrate object persistence; not for difficulty reasons, simply for lack of time. I highly recommend Apache::Session.
The below instance variables and accessors are required in your controller class for Apache::App::Mercury to operate properly.
A CGI query object for the current http request.
An Apache->request object for the current http request.
Set or get a page-specific informational message. The controller should display this message in some prominent location on the resulting HTML page.
Set or get the HTML page title.
Set or get the page body content.
Return the current unixtime, as returned by the Perl time() function. This accessor is used for time synchronization throughout the application, so your controller can keep a single time for each http request.
Set or get a page-specific location mark, for logging purposes.
Adi Fairbank <adi@adiraj.org>
This software (Apache::App::Mercury and all related Perl modules under the Apache::App::Mercury namespace) is copyright Adi Fairbank.
July 19, 2003 | http://search.cpan.org/dist/Apache-App-Mercury/Mercury/Controller.pm | crawl-003 | refinedweb | 222 | 59.4 |
1 Jul 18:20 2010
Re: Number of pages on Wikipedia
Chrisil J. Arackaparambil <chrisil@...>
2010-07-01 16:20:37 GMT
2010-07-01 16:20:37 GMT
Thanks everybody! I just got the figure for the number of redirects as 4.5 million: ~/7zip/p7zip_9.13/bin/7z -so e enwiki-20100130-pages-meta-history.xml.7z 2>/dev/null | perl -ne 'print if m{<redirect />}' | wc -l 4493204 Chrisil Greg Hewgill wrote: > On Mon, Jun 28, 2010 at 06:06:07PM -0600, Chrisil J. Arackaparambil wrote: >> enwiki-20100130-pages-meta-history.xml.7z. What I found to my surprise >> is that there are (at least) 7 million pages in the main namespace. I >> got this figure by grepping for page titles that do not contain a ":" >> character. Is this really the case or am I missing something? > > Your page count likely includes redirect pages. Normally article counts > exclude redirects. > > Greg Hewgill > | http://blog.gmane.org/gmane.org.wikimedia.xmldatadumps/month=20100701 | CC-MAIN-2015-22 | refinedweb | 154 | 77.94 |
CodePlexProject Hosting for Open Source Software
public static class ContactsCommands { public static CompositeCommand SaveAllContactsCommand = new CompositeCommand(); }
public static class ContactsCommands { public static CompositeCommand SaveAllContactsCommand = new CompositeCommand(); }
[PartCreationPolicy(CreationPolicy.NonShared)] [Export] public class ContactsCommandProxy { public virtual CompositeCommand SaveAllContactsCommand { get { return ContactsCommands.SaveAllContactsCommand; } } }
Hi,
I would like to implement the SaveAll CompositeCommand, so that my childSaveCommands on each tab would subscribe to it.
1) Just to make sure I understand it right, the reason we are wrapping the static class into a proxy, is to make sure there is no memory leak if the commands are not unregistered from the composite command? Since the proxy could be garbage collected once the ViewModel doesnt live anymore.
But if the viewmodel registers to the static class directly and wouldnt unregister, GC can't collect the viewmodel in first place. Or if I am wrong what is the reason to use the proxy class?
2) The RI.Stocktrader is very confusing. I don't understand how the OrdersController class fits in there. It seems only TestableOrdersController (unit test) is using that class. Yet the controller class has the responsibility to register and unregister the child save commands to the composite command inside proxy. So it must be used somehow in the real app.
I just don't see how the controller class should be used. If I have a OrderModule, with one View and one ViewModel and I have set the view to export itself into a TabContol region on the shell, so that each instance would be landing on a new tabItem, how does the controller class fit in there?
Many Thanks,
Houman
1. On page 130 of the guidance it says "Note: To increase the testability of your code, you can use a proxy class to access the
globally available commands and mock that proxy class in your tests." when I look at the StockTraderRI that looks like exactly
what they are using it for.
From the looks of it, you need to deregister global commands when you release the ViewModel.
Edit: If you look at StartOrder in the OrdersController you can see them doing the UnregisterCommand in the
CloseViewRequested delegate.
2. Take a look at PositionSummaryViewModel. I find it helps to search for the Export of the class rather than the class name.
OrdersController is exported as IOrdersController.
Edit: One thing note is that OrdersController is exported as Shared, so it will be Singleton.
And yes, it can be confusing, I'm still muddling my way through it.
I'm still fighting with the where does the controller fit issue myself, so I'll let some one else answer that one.
Hi John,
Thanks for your help and advice. I have solved the issue in this way:
Case:
The Shell's MainRegion shall be a TabContainer. Each TabItem gets a closeButton templated, the button is bound to a command on the view Model.
The ViewModel in this case would be the ViewModel of the View hosted in the region. Within the handler of the command delegate I would do the unregistering for the CompositeCommand, however how do I remove the View in first place?
Therefore I had to refactor it. The CloseButton doesn't bind to the command anymore but uses a code-behind-shell eventhandler. WIthin the eventhandler I would do the removal of the active tab as planned. Now the challenge is how to unregister the command
in a clean way?
In the underlying View that is hosted in that region, I go into the code-behind and subscribe to the Unloaded() event. And do call my ViewModel's Command to unregister for the composite command like this:
Public ContactView(ContactViewModel vm)
{
InitializeComponent();
this.Unloaded += (s, e) => vm.CloseContactCommand.Execute(null);
}
The moment the view is closed as previously described, the Unloaded event is fired and the command would unregister itself from the Composite Command.
What do you think of this idea?
Regarding Controller, I made some more research, it seems the Controller is only used for Presenter-First pattern. You may find a good example of that on Prism Training Kit 4.0 project, excersice 4.
A fake ViewModel (without implementing NotificationObject nor any kind of INotifyPropertyChanged) is generated and the View is injected in it. The Controller class then holds the fake ViewModel.
Personally I don't see any use for a controller class yet, when you can use real MVVM classes. It seems to be a remaining of old Prism 1.0/2.0 code.
Still investigating...
Regards,
Houman
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://compositewpf.codeplex.com/discussions/242962 | CC-MAIN-2017-13 | refinedweb | 788 | 64.81 |
struts 2 mysql
struts 2 mysql In the example :
how is the username and password(in insertdata.jsp) which is entered by the user is transferred to the { ("INSERT employee VALUES
Struts 2 Login Form Example
Struts 2 Login Form Example tutorial - Learn how to develop Login form....
Let's start developing the Struts 2 Login Form Example
Step 1... you can create Login form in
Struts 2 and validate the login action problem with netbeans - Struts
struts 2 problem with netbeans i made 1 application in struts2... / and action name login.
The requested resource (There is no Action mapped for namespace / and action name login.) is not available.
here give two code what
login application - Struts
application using struts and database? Hello,
Here is good example of Login and User Registration Application using Struts Hibernate and Spring.
In this tutorial you will learn
1. Develop application using Struts
2. Write
Struts2 - Struts
Struts2 Hi,
I am using doubleselect tag in struts2.roseindia is giving example of it with hardcoded values,i want to use it with dynamic values.
Please give a example of it.
Thanks
Struts 2.1.8 Login Form
to
validate the login form using Struts 2 validator framework.
About the example...;
<title>Struts 2 Login Application!</title>
<link href="<...
Struts 2.1.8 Login Form
- Validation - Struts
Struts 2 - Validation annotations digging for a simple struts 2 validation annotations example
Struts 2 MySQL
Struts 2 MySQL
In this section, You will learn to connect the MySQL
database with the struts 2 application...;/struts-tags" %>
<html>
<head>
<title>Struts 2
struts2
struts2 dear deepak sir plz give the struts 2 examples using applicationresources.properties file
Running and Testing Struts 2 Login application
Running and Testing Struts 2 Login application
Running Struts 2 Login Example
In this section we will run... developed and
tested your Struts 2 Login application. In the next section Tutorial
;
Struts 2 Actions
Struts 2
Actions Example
When a client....
Struts 2 Login Application
Developing Login Application in Struts 2
In this section we are going to develop login
Struts2 connection pooling - Struts
Struts2 connection pooling Dear Friends ,
How to make connection pooling in "Struts 2
Struts 2 Login Application
Struts 2 Login Application
Developing Struts 2 Login Application
In this section we are going to develop login application based on Struts 2
Framework. Our current login application
Struts 2 Actions
Struts 2 Actions
In this section we will learn about Struts 2 Actions, which is a fundamental
concept in most of the web application frameworks. Struts 2 Action are the
important concept in Struts 2
Struts 2 Validation Example
Struts 2 Validation Example
... to write validations for your Struts 2
projects. The concepts defined in this section are so illustrative that a
learner quickly develops his/her skills in Struts 2
Struts2 ajax validation example.
Struts2 ajax validation example.
In this example, you will see how to validate login through Ajax in
struts2.
1-index.jsp
<html>
<...;
2_ LoginActionForm.jsp
<%@taglib datetimepicker Example
Struts 2 datetimepicker Example
In this section we will show you how to develop
datetimepicker in struts 2. Struts 2...;Struts 2 Format Date Example!</title>
<link href="<s Actions
.
However with struts 2 actions you can get different return types other than...
name of the action to
be executed.
Struts 2 processes an
action...Struts2 Actions
When
Struts 2 Ajax
Struts 2 Ajax
In this section, we explain you Ajax based
development in Struts 2. Struts 2 provides built... to the Struts 2
framework. Ajax allows the developers to develop GUI like web project samples
struts 2 project samples please forward struts 2 sample projects like hotel management system.
i've done with general login application and all.
Ur answers are appreciated.
Thanks in advance
Raneesh
struts2 - Struts
Struts2 and Ajax Example how to use struts2 and ajax
Login Action Class - Struts
Login Action Class Hi
Any one can you please give me example of Struts How Login Action Class Communicate with i-batis Redirect Action
Struts 2 Redirect Action
In this section, you will get familiar with struts 2 Redirect
action and learn to use it in the struts 2 application.
Redirect After Post:
This post
Struts hi,
I am new in struts concept.so, please explain example login application in struts web based application with source code...://
I hope that, this link will help
struts - Struts
.shtml
http...struts Hi,
I am new to struts.Please send the sample code for login and registration sample code with backend as mysql database.Please send
Application |
Struts 2 |
Struts1 vs
Struts2 |
Introduction... |
Developing
Login Application in Struts 2 |
Running
and testing application |
Login/Logout With Session |
Connecting to MySQL Database in Struts 2
Struts 2 Validation
Struts 2 Validation Hello,I have been learning struts.
I have... entry, the users are getting added into database.
Kind help.
Database(MYSQL... username and password for mysql - Framework
struts Hi,roseindia
I want best example for struts Login... in struts... Hi Friend,
You can get login applications from the following links:
1)
2)http
Struts 2 Training
Struts 2 Training
The Struts 2 Training for developing enterprise applications with Struts 2 framework.
Course Code: STRUS-2
Duration: 5.0 day(s)
Apply for Struts 2 Training
Lectures Interceptor Example
Struts 2 Interceptor Example
Interceptor is an object which intercepts...;
</action>
Consider an example of Struts interceptor given bellow
Login.jsp
<%@ taglib prefix="s" uri="/struts-tags"%>
<
Struts 2.2.1 - Struts 2.2.1 Tutorial
Validators
Login form validation example
Struts 2.2.1 Tags
Type...
Struts 2 Interceptor Example
Struts Online Test...
Struts 2 hello world application using annotation
Running
struts- login problem - Struts
struts- login problem Hi all, I am a java developer, I am facing problems with the login application. The application's login page contains fields like username, password and a login button. With this functionality
Struts Hello !
I have a servlet page and want to make login page in struts 1.1
What changes should I make for this?also write struts-config.xml... = response.getWriter();
String connectionURL = "jdbc:mysql://localhost:3309/mysql
Struts 2 Format Examples
;Struts 2 Format Example!</title>
<link href="<s:url value...
Struts 2 Format Examples
In this section you will learn how to format Date and numbers in Struts 2
Framework. Our
Error - Struts
Error Hi,
I downloaded the roseindia first struts example...
-----------------------
RoseIndia.Net Struts 2 Tutorial
RoseIndia.net Struts 2... to test the examples
Run Struts 2 Hello Am newly developed struts applipcation,I want to know how to logout the page using the strus
Please visit the following link:
Struts Login Logout Application
Struts 2 Date Format Examples
Struts 2 Date Format Examples
In this tutorial you will learn about Date Format function in Struts 2. We...;
<html>
<head>
<title>Struts 2 Format
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/94979 | CC-MAIN-2013-20 | refinedweb | 1,173 | 57.06 |
in reply to
Favorite programming language, other than Perl:
I had to pick C, because of my fun MUD-coding experiences. Sure, it's not very OOP, and it's got a polluted namespace like you wouldn't believe, and uses linked lists from hell, but it's so much fun to play in MUD code, especially when a 'feature' nukes some player you didn't like by accident. ;)
However, I did almost pick BASIC. On my first personal computer, this ancient Epson (8086, 2 5.25" drives, no HD), I had GWBASIC. GWBASIC was interesting - it didn't have some things that QBASIC does that I liked, and so couldn't take some of the games I found for QBASIC over to GWBASIC, but I found a neat feature that it had.
GWBASIC had a command called play. It took a string which consisted of note names, I could put a dot after the letter to represent the musical dot notation, I could use a > to go up an octave (and < to go down an octave) etc - it was a blast.
Before I found play, I'd written generic little games here and there, like hangman, but with play, I added music to my games.
I even wrote my own screen saver (like I really needed one). I used my first song, Mary Had A Little Lamb (arguably the "Hello World!" of the music world), and also set it to take the simple lyrics and scroll them across my screen randomly, creating a snow-like effect of words set to music.
I still have all of my GWBASIC code sitting on some 5.25" disks in my mother's storage shed - I should go get those, find an old drive, and check it out again. hehe.
~Brian | http://www.perlmonks.org/index.pl/jacques?node_id=164738 | CC-MAIN-2015-18 | refinedweb | 298 | 76.15 |
).
In this article, I'm going to show you how you can use value converter in XAML to convert one format of data into another format. In our case, we'll convert a string value (value in textbox) to a Boolean value (checked status of a checkbox).
Things we are going to discuss
Let's discuss each one by one.
Value Converters are the classes that are used to convert one form of data into another.
There are certain case studies when we need to convert the data from one format to another format in software development, especially in application development.
So, we perform the conversion like -
We can write our custom code to convert the data from one form to another but it will be time taking and will increase the complexity. So, we'll use built-in .NET classes.
Suppose we have a control as textbox and another control as Checkbox. We want to check the checkbox whenever the user writes "Checked" into the textbox. Like this.
And when a user writes something else, this checkbox will be unchecked like this.
In this process, indirectly, we are converting a string value to a relevant Boolean value. As this is not possible directly, we'll use IValueConverter Interface and implement that in a user-defined class. That class will be used as a local resource in XAML code. To learn about a local resource, you can go through this article. Our control Binding property will consume this local resource.
Create a new UWP Empty Project.
Now, add a new class with the name "" to project.
Now, click class option and write the name of class.
This is the class you just added.
Now, inherit this class with IValueConverter.
Note - You need to use Windows.UI.Xaml.Data namespace in order to use that interface.
Now, inherit the class with interface.
Press [ctrl + . ] on Interface name and you'll see the options on the left side of the class. Select first one, i.e., "Implement Interface". Two functions will be added to the class with the names -
See the updated class now.
Now, we have to write the definition of these two methods. But before that, we must understand what we are going to do in both the methods.
Now, we are going to place controls using XAML code. Open MainPage.xaml from Solution Explorer.
You'll get a blank grid like this.
Place a textbox and a checkbox into the Grid. Use the following XAML code for this.
Here is the code.
Don't forget to build your project, otherwise you'll see an exception in designer like this. Our project setup and resources are ready. Next step is to bind the IsChecked property of Checkbox with the string of the textbox. We've already written code for that. Now, we just have to mention that code in binding like the following way.
Here is the code for you to copy from.
View All | https://www.c-sharpcorner.com/article/xaml-value-converter-with-simple-example/ | CC-MAIN-2019-26 | refinedweb | 495 | 76.22 |
Introduction to Dynamic Programming
Sign up for FREE 1 month of Kindle and read all our books for free.
Get FREE domain for 1st year and build your brand new site
Reading time: 30 minutes | Coding time: 10 minutes
Whether you dream to be a seasoned competitive programmer or want to dive in the field of software development, you need to be thorough with the things called "Data Structures and Algorithms", and the technique we will be discussing today is one of the most important technique to master these. I am talking about Dynamic Programming!
In today's post, we will cover the following:
- The crux of Dynamic Programming, (DP) hereafter
- Different methodologies for solving DP problems
- How to identify a DP problem?
- Hands on in a DP problem
What is Dynamic Programming?
Dynamic Programming is essentially based on the idea of doing smart work over hard work. It is a technique to solve a complex problem by breaking it down into smaller sub-problems recursively. The smartness lies in the idea of remembering the solutions to sub-problems so that they can be used for similar sub-problems and thereby eliminate the need to recompute them in future.
DP is simply an optimization over naive recursive approach, reducing the exponential time-complexity of recursive solutions to polynomial time complexity.
Different methods for solving DP problems
There are mainly 2 ways to approach a DP problem:
- Top Down or Memorization technique
- Bottom Up or Tabulation technique
Top down approach
This is the simplest approach to come up with, wherein we start from the top-most state and start breaking down the problems. As we traverse down, we either come across an already solved problem, for which we simply return the solution, or we some across an unsolved problem, which we compute and then save in the memory(hence, also called memoization).
Let's write a program for Fibonacci series using the memoization technique.
def fib(n, tab): # Base case if n == 0 or n == 1 : tab[n] = n # If the value is not calculated previously then calculate it if tab[n] is None: tab[n] = fib(n-1 , tab) + fib(n-2 , tab) return tab[n]
Bottom Up Approach
In this approach, we start from the most trivial subproblems and traverse our way up to the main problem. This ensures that the solutions to all the subproblems are computed and tabulated(hence, the name tabulation method), before the given problem.
Fibonacci program using tabulation method
def fib(n): f = [0]*(n+1) # base case f[1] = 1 for i in range(2 , n+1): f[i] = f[i-1] + f[i-2] return f[n]
How to identify a DP problem?
In order to apply the concept of dynamic programming to a problem, we first need to classify whether it can be solved using this concept or not. Now that we have a basic idea of what DP is, let's try to think which properties should we look for.
First, we want there to be subproblems that are repeated in a recursive manner, otherwise there is no point to solve a subproblem and save it for future, if it isn't going to be used in future.
Second we want to ensure that we can find an optimal solution to the main problem by using optimal solutions of its subproblems, which means that we are targeting global optimization and not local optimization.
More formally, the above two properties are defined as Overlapping subproblems and Optimal substructure respectively.
Let's discuss them with an example
- Overlapping subproblems:
Let's see how the Fibonacci series obeys this property
fib(5) / \ fib(4) fib(3) / \ / \ fib(3) fib(2) fib(2) fib(1) / \ / \ / \ fib(2) fib(1) fib(1) fib(0) fib(1) fib(0) / \ fib(1) fib(0)
Here, we can see that values like fib(3), fib(2), fib(1) and fib(0) are repeated and therefore computing and saving them once, will save time when they are needed later.
- Optimal substructure:
Let's consider the problem of All Pair Shortest Paths. We know that if we are able to find an intermediate node x, between source node v1 to destination node v2, which lies at the shortest distance from v1, then we are assured that the shortest path from source to destination is nothing but the one from v1 to x and then x to v2. This ensures that finding the optimal solution to subproblems leads us to optimal solution of the main problem.
Coding Time
Let's take up a problem of finding the number of times a string occurs as a subsequence in given string.
Consider the string OpenGenus, now, we want to find the occurrence of another string, let's say, en, in it whether continuous or discontinuous.
On analyzing it, we find that the solution for this case should be 3:
- OpenGenus
- OpenGenus
- OpenGenus
Let's see how to approach this:
We simply need to start processing the string either from left or right and the check whether the last characters of the considered strings match or not.
Algorithm:
Let the length of first string be m and that of second be n and we are traversing from right end.
If last characters of two strings match, we can do two things:
a) We will either consider last characters and get count for remaining strings. So we perform recursion for lengths m-1 and n-1.
b) Or we ignore last character of first string and recurse for lengths m-1 and n.
else if the characters don't match:
We simply ignore last character of first string and recursively find solutions for lengths m-1 and n.
We will understand the algorithm through the following example:
First string = "OpenGenus"
Second string = "en"
We need to count the number of times a string occurs as a subsequence of another string.
Let's construct a matrix to understand this:
Explanation:
- Here we have our first string as a column and second as a row.
- The next step is to fill the 0th row with all zeros except for the first element. Similarly, fill the first column with '1s'. This is in accordance to our base conditions.
- Now we start filling the matrix elements one by one on the basis of whether the characters at the corresponding indices match or not.
- If they don't match, we simply copy the value in the previous row but same column.
- If they match, we add the value at previous row -previous column with the value at previous row - same column.
- Lastly, we return the value at the last row -last column.
- As, we can see it is 3 in this case, which is in accordance with our logic and hence verifies the concept of dynamic programming for such problems.
Pseudocode:
- Base conditions
if the first string is null, then the first row should be filled with 0's that is matrix[0][i] = 0
if the second string is null, then the first column should be filled with 1's that is matrix[i][0] = 1
- Filling the dynamic programming matrix
Either starting from left or right of the strings,
if string_1[i-1]== string_2[j-1], then matrix[i][j] = matrix[i-1][j-1] + matrix[i-1][j] (i and j are index iterators for the two strings)
else matrix[i][j] = matrix[i-1][j]
- Return the value at last row -last column of the matrix that is matrix[m][n]
Code:
# Let's use tabulation method of DP to program this def occur(a,b): m = len(a) n = len(b) tab = [[0]*(n+1) for i in range (m+1)] # Corner cases # if first string is null for i in range(n+1): tab[0][i] = 0 # if second string is null for i in range(m+1): tab[i][0] = 1 # filling the tab[][] in bottom up manner for i in range(1, m + 1): for j in range(1, n + 1): if a[i - 1] == b[j - 1]: tab[i][j] = tab[i - 1][j - 1] + tab[i - 1][j] else: tab[i][j] = tab[i - 1][j] return tab[m][n] if __name__ == '__main__': a = "OpenGenus" b = "en" print(occur(a, b))
Output: 3
Complexity analysis:
- The time complexity of the above problem will be O(mn), due to the 2 for loops.
- The space complexity will be O(mn), due to the auxiliary space occupied by the table.
By this, we come to the end of this article :)
I hope you are clear with the idea of dynamic programming and are ready to apply it to other problems!
Further reading:
- Different dynamic programming problems:
If you found the article useful, do share it with others! | https://iq.opengenus.org/introduction-to-dynamic-programming/ | CC-MAIN-2021-17 | refinedweb | 1,465 | 58.86 |
In this lesson, we'll walk through integrating time-based values in the Java side of our application.
However, before we can learn about time in any programming language, let alone Java, we need to understand the differences in the way humans and computers interpret time.
As you already know, humans calculate time in years, months, days, hours and seconds. We say things like "last Tuesday", or "May 5th, 1990" to communicate time to one another. Machines, however, don't organize the concept of time into years, months, or days unless we explicitly instruct them to. By default, they measure time on one, continuous timeline. Machine times are calculated in the amount of time ( in seconds) since the epoch. The term epoch simply means a reference point in history.
Epoch dates differ between programming languages. We can take a peek at this guide from Wikipedia and see that Java's specific epoch date is January 1, 1970. So, Java actually measures time in the number of seconds since January 1, 1970. Pretty crazy, right?
We can see what these 'machine' timestamps look like by importing Java's
Instant class into the REPL, and calling the following methods:
> import java.time.Instant; Imported java.time.Instant > Instant.now().getEpochSecond(); java.lang.Long res5 = 1474852514
But Java actually includes many, many classes responsible for creating and interacting with dates and time. In fact, they have an entire tutorial dedicated solely to the different types of date and time formats. (You're not required to go through this tutorial; but it's a great resource if you would like a more advanced exploration into time classes in Java). But even that immersive tutorial doesn't cover them all!
As you could imagine, with all these different ways to represent and interact with time, it could be difficult to manage. Especially making sure your time objects are in a format both Java and SQL can handle. There are many different approaches to this. We can work with Java's
Timestamp class, or with the
Date class, or even the
Calendar class.
But because we are going to be storing dates and times in our database, we not only need to consider how Java handles time, we also need to work with something SQL can store correctly. If you google "storing dates and times in postgres", you'll see that there are many approaches to storing time. But one of the simplest ones is not storing time as time, but as milliseconds. This makes it easier to time as a datatype
long in our objects, and use the column type
BIGINT to store our long values in the database.
Let's start by adding a property to our
Review class, as this is where time is most useful.
Let's add a long field to our model and update our constructor.
public class Review { private int id; private String writtenBy; private int rating; private String content; private int restaurantId; private long createdat; private String formattedCreatedAt; public Review(String writtenBy, int rating, String content, int restaurantId) { this.writtenBy = writtenBy; this.rating = rating; this.content = content; this.restaurantId = restaurantId; this.createdat = System.currentTimeMillis(); setFormattedCreatedAt(); //we'll make me in a minute } }
Let's generate getters and setters for this too. They need to look like this.
public long getCreatedat() { return createdat; } public void setCreatedat() { this.createdat = System.currentTimeMillis(); // It'll become clear soon why we need this explicit setter } public String getFormattedCreatedAt(){ String someString; return someString; //more on this in a sec } public void setFormattedCreatedAt(){ this.formattedCreatedAt = "some time" }
If we extend our model, we also need to update our database schema, as well as deleting our production database and forcing it to be rebuilt. Let's do that now.
Our database structure should look like this:
CREATE TABLE IF NOT EXISTS reviews ( id int PRIMARY KEY auto_increment, writtenby VARCHAR, rating VARCHAR, content VARCHAR, restaurantid INTEGER, createdat BIGINT );
The BIGINT column type is perfect for storing longs, as they are frequently longer (hah) than Integer types.
Let's delete our production database before we forget. Our testing database is created on the fly every time, and therefore doesn't need to be deleted for any schema changes to be included.
In a terminal prompt, run:
cd ~
ls -a
rm *.db
This sequence will delete all database files. If you don't want that, delete a specific one instead by amending the above command with the correct db name
Time to write a test to verify that we are storing the time correctly.
Write a test that looks like this in your Sql2oReviewDaoTest:
(); assertEquals(creationTime, reviewDao.getAll().get(0).getCreatedat()); }
If all goes well, this test should pass! Our times should be the same. Cool.
But here we can already see a potential dilemma that I have hinted at above - our creation time in Milliseconds is being retrieved correctly, but it isn't displaying very well. Who cares about time in milliseconds (not humans). We need it to be more readable. It would be great if we could return the time in ms and also a nicely formatted code to be used.
Converting millisecond time into humanly readable time is a very common practice. We'll employ a very handy DateFormatter to help us with this.
SimpleDateFormatter does what the name says - it allows us to convert times into different formats fairly easily. It can transform dates into strings, and vice versa. It uses a specific pattern to understand just how you want time to display. This pattern is highly customizable and used very widely.
Let's learn more about
SimpleDateFormatter by turning milliseconds into the following format:
MM/DD/YYYY @ H:MM AM/PM or '06/23/1980 @ 1:35 AM`
Let's use SimpleDateFormatter by creating a second getter method for our
createdat field that returns the time in a formatted way. Its a good idea to keep the method that returns milliseconds as well - we'll need both.
This should do the trick:
[...] public String getFormattedCreatedAt(){ Date date = new Date(createdat); String datePatternToUse = "MM/dd/yyyy @ K:mm a"; //see SimpleDateFormat sdf = new SimpleDateFormat(datePatternToUse); return sdf.format(date); } public void setFormattedCreatedAt(){ Date date = new Date(this.createdat); String datePatternToUse = "MM/dd/yyyy @ K:mm a"; SimpleDateFormat sdf = new SimpleDateFormat(datePatternToUse); this.formattedCreatedAt = sdf.format(date); }
Make sure you understand what we are getting up to here.
Datetype object using our milliseconds stored in
createdat.
SimpleDateFormatobject with out pattern as an argument.
format()on our SimpleDateFormat object, using our date as an argument. The method returns a String.
SimpleDateFormat can run a couple of different ways, but they all follow this structure more or less. Let's test that we are converting things correctly by writing an additional test with a known time instead of the system time.
We'll learn exactly why we need the
setFormattedCreatedAt when we create our frontend routes for this functionality, but first, let's extend the test we just wrote to check and see if our date is correct.
(); String formattedCreationTime = testReview.getFormattedCreatedAt(); String formattedSavedTime = reviewDao.getAll().get(0).getFormattedCreatedAt(); assertEquals(formattedCreationTime,formattedSavedTime); assertEquals(creationTime, reviewDao.getAll().get(0).getCreatedat()); }
If you set a breakpoint at the beginning of this test, and run your tests in debug mode, you'll see that we are doing everything right - our time is getting saved correctly, and our converter is formatting the time correctly too. Awesome. We can now use this getter method to display the time correctly in our queries to the API.
Let's test this with Postman.
Next, we'll learn about sorting custom Objects in Java, and apply that knowledge to sorting
Reviews newest to oldest. | https://www.learnhowtoprogram.com/java/api-development-extended-topics/working-with-time-in-java-an-introduction | CC-MAIN-2018-43 | refinedweb | 1,278 | 56.15 |
Mylyn/Restructuring 2013
This page described the process of simplifying the Mylyn project structure. Basically, all projects will be grouped below the Mylyn top-level project. The new structure becomes flat. Projects with overlapping committers should be merged into one (where possible).
Contents
Proposed Structure
- Task Focused Interface (unifies Builds, Commons, Context, Reviews, Tasks, and Versions)
- Wikitext (extracted from Docs)
- Intent (move out of docs)
- VEX (move out of docs)
- Incubator (as it is today)
- Model Focusing Tools (move out of sub-structure)
Candidates for archival
- R4E
- ePub (from Docs)
- HTMLText (from Docs)
Explanations
Task Focused Interface
The most significant simplification is the merge of a lot projects into TFI. This should greatly reduce the release overhead. They are all developed and released together today anyway. There is a significant However, separate documentation must be prepared given the old structure.
Docs
Although being declared as the "home" for documentation related projects at Eclipse.org, it currently servers two purposes: 1) parent for documentation related projects and 2) home for Wikitext, Htmltext and ePub. The overhead of maintaining a separate project just for having a "home" for documentation related projects feels wrong. The Mylyn project is about ALM and any software documentation related project is welcome in there. We really shouldn't maintain a separate parent project. There is no need for sub-projects to hide behind such an umbrella project.
Wikitext
Wikitext demonstrated that it's a successful project on its own with a vibrant community. It's used as a separate library and tool. Thus, it really should be a separate project that can produce releases on it's own.
R4E
It seems that R4E didn't manage to build an active and vibrant community. Code reviews are really successful at GitHub or in Gerrit. Mylyn Reviews integrates very well with Gerrit. R4E should be archived.
ePub
It's actually not a project but a component within Docs. However, it's not really clear who uses it actively. We should either archive it or discuss with the committers where to put it best.
Htmltext
Also a component within Docs. It's not mature enough to be a separate project. We should either move it back to the Incubator or archive it.
Open Questions
- Difference between Mylyn Top Level project and the "Mylyn" project?
- Can the new Task Focused Interface project use the "Mylyn" Bugzilla product?
- Can the new Task Focused Interface project use the /mylyn URL and namespace?
Plan of Actions
TFI
R4E
- The project will be archived
Intent
VEX | http://wiki.eclipse.org/index.php?title=Mylyn/Restructuring_2013&oldid=353250 | CC-MAIN-2014-42 | refinedweb | 421 | 58.08 |
1) we don't do your homework for you, show some effort
2) we're certainly not going to let you order us around to do it "asap". If and when we do anything it'll be at a time and place of our choosing, not yours, suggesting anything else makes us LESS eager to help you.
3) (general warning) it's NOT "urgent" to anyone except possibly you, and if it is you should have started sooner.
4) properly define your problem domain. WHAT do you want to measure exactly? What that is defines where and how to measure it.
can someone teach me how to get the running time of this program?
import java.io.*; public class ArrauSample { public static void main(String args[]){ BufferedReader console = new BufferedReader(new InputStreamReader(System.in)); int x=0; int max= 5; int myNum[] = new int[max]; String myString[] = new String[max]; try{ for(x=0; x<max; x++) { System.out.println("Input element #" + (x+1)); myString[x]=console.readLine(); } }catch(IOException e){} for(x=0; x<max; x++) { System.out.print("Element #" + (x+1)); System.out.println(myString[x]); } } }
thank you very much!
Well the basic way of getting time in a program is to use
long x = System.getCurrentTime();
which returns the current time of the computer in milliseconds.
Anyone with a brain could figure it out from there! :p
there is no such method in my Java API
perhaps you are thinking of System.currentTimeMillis() | https://www.daniweb.com/software-development/java/threads/79053/how-to-get-total-time-of-running-program | CC-MAIN-2015-11 | refinedweb | 246 | 67.35 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Making a Network Request3:43 with Pasan Premaratne
Let's take a look at how we can use closures to conduct network request operations in iOS
Code Snippet
import Foundation let label = UILabel() func getRecentBlogPost(completion: NSURLResponse -> Void) { let session = NSURLSession(configuration: .defaultSessionConfiguration()) let url = NSURL(string: "")! let request = NSURLRequest(URL: url) let dataTask = session.dataTaskWithRequest(request) { data, response, error in // Execute body of closure completion(response!) } dataTask.resume() }
- 0:00
The next example I want to look at,
- 0:02
is how we would implement a hypothetical example of a networking task in code.
- 0:08
Now here we have a bunch of code downloading some recent blog post
- 0:11
from the Treehouse blog and there's a lot of code here but
- 0:14
we're going to ignore most of it.
- 0:17
The part we want to actually focus on is this single method, dataTaskWithRequest.
- 0:23
With this block of code, we're attempting to make a really simple networking call
- 0:27
and get some data off the web.
- 0:30
The way we do that is by defining a data task on a networking session.
- 0:35
Because this data task can take an unknown amount of time as all things with
- 0:39
the internet do, all Apple's implementation of this method because it
- 0:43
comes from the foundation framework.
- 0:46
All this method does is create a background task
- 0:50
to make a network request and retrieve some data.
- 0:54
The method does not worry about actually doing
- 0:57
anything with the data that it retrieves.
- 1:00
For that we use a closure.
- 1:02
Now if you Option+Click this method or you look up the documentation for
- 1:06
a data task with request.
- 1:09
You'll see that the second parameter in the method signature
- 1:12
is a completion handler which takes a closure expression.
- 1:17
When the download task is complete, it calls back to the method that initiated
- 1:22
this download task and passes in a closure that we define for this parameter.
- 1:28
Here, as you can see I've created a trailing closure for this.
- 1:32
We have access to the objects passed in as arguments using local names and
- 1:38
we can define an implementation.
- 1:40
This is a really powerful pattern.
- 1:43
Now as an aside, it is common practice when a closure is passed back after
- 1:47
a task is complete, to name that closure completionHandler or completion.
- 1:54
So imagine this task going something like this.
- 1:57
We start the data task and
- 1:59
because it takes an unknown amount of time, our app continues its execution.
- 2:04
When the download task or the data task is done, it takes all the information it
- 2:08
receives and passes it as arguments to the closure expression that we've provided.
- 2:15
Whenever we execute the code in this closure of which we haven't provided any.
- 2:20
It doesn't really matter when the dataTask completed
- 2:23
because the data that was returned is captured in this closure.
- 2:26
This completion handler closure contains three arguments.
- 2:31
A data object of type NSData or optional NSData,
- 2:35
a response object of optional type NSURL response and an error object.
- 2:42
The way this closure works is similar to how the map function worked.
- 2:46
The map function passes in a single parameter,
- 2:49
a number from the array that it's iterating over.
- 2:53
Similarly, Apple's implementation passes in three
- 2:56
arguments to a completion closure after the dataTask is executed.
- 3:01
It provides a data object containing the actual downloaded data
- 3:05
if the call succeeded.
- 3:07
It contains a response object from the web service indicating
- 3:11
whether the request failed or succeeded.
- 3:13
And if it didn't succeed, if it failed, this error object contains an error.
- 3:19
Within the body of this closure, we can use these three objects,
- 3:23
that are passed in as arguments, to execute any custom code.
- 3:26
And do something with the data that we get from the web.
- 3:30
We use a closure here because even though the data, response, and
- 3:34
error objects aren't defined in the local scope, we still have reference to them.
- 3:39
This is where closures are extremely powerful. | https://teamtreehouse.com/library/closures-in-swift-2/closures-in-cocoa-touch/making-a-network-request | CC-MAIN-2017-22 | refinedweb | 792 | 61.36 |
Table of Content
- Table of Content
- CLI for the KDE Wallet
- Packaging
- Language Bindings
- Security
- Users
- ChangeLog
CLI for the KDE Wallet
What's it? A command-line interface to the KDE Wallet, for KDE 3 and KDE 4 both (so shell scripts, Python, etc. do not need to use DCOP or D-Bus directly to access it to store passwords, instead being able to call this convenient wrapper). Please read the wlog entry announcing kwalletcli public beta test for some more background information. Currently, only the default wallet is supported; while the CLI itself could be enhanced by a selection, the utilities also provided cannot really expose this functionality.
kwalletcli is OSI Certified
Download
Current version: kwalletcli 2.12 (2014-05-11)
- RMD160 (kwalletcli-2.12.tar.gz) = 9ef4ba4fdda6c6af0f537fe155a600b7e98c4b8d
- TIGER (kwalletcli-2.12.tar.gz) = bf31c1c2aed7be1ccedf0443f1e06bb0ecfdd0a56cfdfb32
- 327772195 60073 /MirOS/dist/hosted/kwalletcli/kwalletcli-2.12.tar.gz
- MD5 (kwalletcli-2.12.tar.gz) = 2051676c180ede4595a2a85b12c220b3
- Mirrors
- Germany
- Japan
Ingredients
The kwalletcli distfile provides a number of things:
- A LICENCE file. kwalletcli is covered by The MirOS Licence (HTML transcript); the logo is additionally restricted by the Terms and Conditions of the GNU LGPL v3+ (both licences are OSI certified, DFSG free, etc.)
- An SVG logo and a few compiled PNG versions.
- The CLI itself (binary). The manual page (HTML): kwalletcli(1)
- An ssh-askpass(1) alike tool called kwalletaskpass(1), which provides some kind of SSO by storing the SSH private key passphrase in the KDE Wallet (mksh(1) script)
- An pinentry alike tool called pinentry-kwallet(1) which provides some kind of SSO by storing pinentry replies, once given (it calls the original pinentry-{qt,gtk,curses} as coprocess), in the KDE Wallet and providing them to e.g. the GnuPG agent (mksh script)
- A pinentry (Assuan protocol) client called kwalletcli_getpin(1) which is used to request information from the user which is not yet stored in the KDE Wallet, as well as confirmation whether it should be stored there (script) and serves as generic pinentry/Assuan client as well
Wishlist
Possible extensions include gnome-keyring bindings as well as some for the new KDE/GNOME intra-desktop keyring/wallet standard talking D-Bus instead of using the libkwalletclient convenience libraries; support for selecting a non-default keyring; more utilities on top of kwalletcli(1) (e.g. a libpurple plugin, and means for M*zilla Firef*x, Opera and other desktop software to use it to store passwords in the Wallet).
Packaging
Debian has a kwalletcli (KDE 4) package from squeeze onwards. The backports repository contains a kwalletcli (KDE 3) package for lenny.
Suggested packaging: MidnightBSD mports (for they provide KDE
anyway), OpenSuSE Build Service (RPM for many platforms), etc.
If KDE (upstream) desires, they may include it (under The MirOS Licence) in their distribution, even.
Dependencies
Either Qt3 and KDE3, or Qt4 and KDE4, development headers and libraries, and the matching compiler (gcc/g++ is tested, others are not). Either MirMake (MirBSD make(1)) or GNU make. For the scripts, mksh R38+ is a run-time dependency. The manpages require nroff/gnroff and the -mdoc macropackage to compile. The HTML manpages can only be re-made on MirBSD.
Language Bindings
C binding
See the source file kwalletcli.h for details. This is the source-level C binding API (function kw_io() and a couple of return value definitions) that can be re-used. There is no C++ binding, because the high-level KWallet API is already C++, although, for ease of use, the C binding can be used from others' C++ code as well.
Python binding (external)
There's a sample Python 2 binding (we don't know which exact minimum version is required) contributed to the Gajim source code (dual-licenced under the same licence as Gajim (GPLv3 only), as well as the same licence (MirOS) as kwalletcli). The binding was originally written by the author of kwalletcli as well.
- initial submission
- the code (maintained inside the Gajim repository, as most prominent user of it; bugfixed by Yann “asterix” Leboulanger once already, thanks!
- usage example (again, Gajim code)
Note that the Python binding uses subprocess.Popen() and the Shell binding to do the actual work.
Shell binding
The kwalletcli(1) manpage provides a documentation of the shell binding. The other utilities part of the distribution, as well as the Python binding, serve as usage examples.
Python example (contrib)
This is a user-contributed example in Python, submitted by Stephen McIntosh:
import kwalletbinding as kw def operation(): op = raw_input("Add or Read? ") return op def addpass(): kw.kwallet_put('kdewalletcli', raw_input("Name: "), raw_input("Password: ")) print("...\nDone!") def getpass(): readpass = kw.kwallet_get('kdewalletcli', raw_input("Name: ")) print "...\nThe password is: " + readpass if kw.kwallet_available(): op = operation() if op.lower() == "add": addpass() else: getpass() else: print "KDE Wallet not available!"
(edited slightly for legibility)
Security
Passwords can, of course, only be accessed if the KDE Wallet is opened. Hence, the on-disc security of the passwords is the same as for all other applications using it. We make no statement on its security (the GnuPG mailing lists have some flamewars about it), but if this is “enough” for you (or, if you are a company sysadmin, your boss), you're welcome. On the other hand, since the KDE pop-up will only show “kwalletcli”, not the application/script using it, when it asks whether access to the Wallet is to be permitted, password stealing by untrusted-local applications is easier (but if you have these, you have totally different problems anyway). Hence, we suggest to “allow always” access for kwalletcli(1) and take the usual care when installing and running applications from third parties.
If you turn “iodebug” in pinentry-kwallet on, it will log the entire dialogue with both parent and co-process, including passwords, to a file in your home directory. (This can only be done by editing the script directly, which is why we refrain from warning the user in a dialogue, as an attacker can also remove that warning.)
Users
The Gajim Jabber client supports kwalletcli, by means of the Python binding, for storing Jabber passwords in the KDE Wallet in an encrypted manner, since version 0.13 (committed after some discussion; Gajim already supported gnome-keyring though).
ChangeLog
Changes in the current (unreleased) development version:
- Merge back from Debian packaging: add CXXFLAGS to CXX link invocation
kwalletcli 2.12
- Remove unused code from BSDmakefile, for better portability
- Apply patches from Timo Weingärtner to add recognition for git's question and ssh-agent-filter's confirmation
- Whitespace cleanup; add list of contributors to LICENCE file
kwalletcli 2.11
- Correct exit code for when the read routines die
- Catch signals and terminate gracefully
- Better protocol compliance
- Be more strict when parsing commands
- Quell warning for “GETINFO version”
- Prevent converting underscores into accelerator markings
- Document currently used exit codes
- Add CAVEATS to manual pages
kwalletcli 2.10
- In pinentry-kwallet(1), replace with the slave immediately if $DISPLAY is unset or empty (as we cannot contact the KDE Wallet in that case, anyway). Fixes another case of spurious “Do you want to store … in the KDE Wallet?” questions.
- Fix mis-read in recursive call case (parent, not slave).
- Align look and feel of fallbacks (both xmessage and TUI) with default pinentry GUI style
- kwalletcli_getpin(1) new options -m (message, with one button); -Y OK and -N Cancel (set button labels)
- Security fix in kwalletcli_getpin(1): tty I/O now properly disables echoing input when asking for a passphrase
- After scanning through ssh(1) and ssh-askpass(1) source, teach kwalletaskpass(1) to use boolean queries for some whitelisted strings and check it works with confirmation (Debian #603910)
- Store negative replies to “Do you want to store X in the KDE Wallet?” as “blacklist” in the wallet in kwalletaskpass(1) and pinentry-kwallet(1) to avoid asking the user every time
- Document limits and raise kwalletcli(1) -P limit
- Have kwalletcli(1) convert passwords from/to proper UTF-8 for Qt
kwalletcli 2.03
- Fix building the kwalletcli binary with indirect linking; patch from Felix Geyer <debfx@Debian derivate from Canonical that cannot be named.com>
- In kwalletaskpass(1), do not even attempt to call kwalletcli(1) if $DISPLAY is unset or empty, it will not be able to communicate with it anyway. Fixes spurious “Do you want to store … in the KDE Wallet?” questions when logged in via ssh(1).
- Small documentation improvements, mostly re. $DISPLAY | http://www.mirbsd.org/kwalletcli.htm | CC-MAIN-2016-18 | refinedweb | 1,405 | 51.89 |
Hi, I'm working on bringing pgpool-II packages (postgresql-pgpool-II-3.1.2-1) from Fedora over to RHEL6. I've already "hacked" Fedora version to play nice with RHEL (removing systemd references and supplanting systemd unit with init script). I have also conditionalized it to build either one depending on whether systemd is to be used or not using the %if %{systemd_enabled} ... %else ... %endif blocks. However I can't quite figure out an optimal way of determining which platform package is being compiled for (in other words: how to set up systemd_enabled macro automatically rather than rely on manual setup). I'm sure people on this list came across this problem more than once, and I would like to know what's the standard way of resolving it. What I'm trying to achieve is to get one SPEC for both Fedora and RHEL. Am I attacking this problem the wrong way? -- Dmitry Makovey Web Systems Administrator Athabasca University (780) 675-6245 --- Confidence is what you have before you understand the problem Woody Allen When in trouble when in doubt run in circles scream and shout --. --- | http://www.redhat.com/archives/epel-devel-list/2012-August/msg00009.html | CC-MAIN-2014-10 | refinedweb | 189 | 60.24 |
Okay, I'm still learning programming, and I have this assignment where I have to create a class Message, which 4 instance variables: sender, receiver, subject and body. I have to implement a constructor that takes four parameters and initializes the attributes accordingly.
I have to implement a method isValid() that returns true only if: the Message object on which the method is invoked has a non-empty sender and receiver, and at least one of the body or subject must be a non-empty string.
Along with a toString() method. I understand all this, and here is my code so far. I have no idea what I am doing wrong.
public class Message { String sender; String receiver; String subject; String body; public Message(String sender, String receiver, String subject, String body) { this.sender = sender; this.receiver = receiver; this.subject = subject; this.body = body; } public Message() { this.sender = null; this.receiver = null; this.subject = null; this.body = null; } public static void main (String[] args) { Message trying = new Message(); trying.sender = "sendername"; trying.receiver = null; trying.subject = "Hi"; trying.body = null; System.out.println(trying); System.out.println(trying.isValid()); } public boolean isValid () { boolean flag = false; if (sender.length() > 0 && receiver.length() > 0) flag = true; if(subject.length() >0 || body.length()>0) flag = true; else flag = false; return flag; } public String toString() { return ("From: " + sender + "\n" + "To: " + receiver + "\n" + "Subject: " + subject + "\n" + "Body: " + body); } }
When I run this, I get a runtime exception (it compiles, but when I run) I get this:
Exception in thread "main" java.lang.NullPointerException
at Message.isValid(Message.java:40)
at Message.main(Message.java:33) | http://www.javaprogrammingforums.com/whats-wrong-my-code/15725-getting-null-pointer-exception.html | CC-MAIN-2015-48 | refinedweb | 270 | 62.44 |
Hello all. I just started trying to learn how to program and I have been studying the tutorials here and reading C++ Programming in easy steps by Mike McGrath. I have the first three lessons down pretty well, but I get in over my head about halfway through Functions. So I went back to Lesson 2: If Statements and tried to write my own little program for my out-of-town gf.
I get these errors:I get these errors:Code:#include <iostream> using namespace std; int main () { int name; cout<<"Please give your full name. Don't forget your capitalization.: "; cin>> name; cin.ignore(); if ( name == Jennifer ***** ******** ) { cout<<"Hello love.\n"; } else { cout<<"Hey! What have you done with my girlfriend?!\n"; } cin.get(); }
`Jennifer' undeclared (first use this function)
(Each undeclared identifier is reported only once for each function it appears in.)
If I change this line
To thisTo thisCode:if ( name == Jennifer ***** ******** ) {
I get this errorI get this errorCode:if ( name == "Jennifer ***** ********" ) {
ISO C++ forbids comparison between pointer and integer
This is what I was using as a reference
I took out the else..if because in my case it is either right or wrong, and I really don't know what else to try. I know this is extremely basic but I am having trouble grasping any of it.I took out the else..if because in my case it is either right or wrong, and I really don't know what else to try. I know this is extremely basic but I am having trouble grasping any of it(); }
Thanks for any help. | https://cboard.cprogramming.com/cplusplus-programming/63590-undeclared-first-use-function.html | CC-MAIN-2017-22 | refinedweb | 268 | 72.66 |
Ideally, a good software application should never, or occasionally, come up with error conditions or exceptions. But as we all know, this is far from reality. So, when an exception does occur, a good application should just not log plain error messages, but also provide troubleshooting information and detailed information about the error in terms of its source and causes.
The Windows Event Viewer has always been the most suitable place to log error messages generated by applications. This article explains how we can use the .NET Event Logging API to effectively log error information. The article also touches upon some practices for effective error message management, like maintaining localized error messages and troubleshooting hyperlinks. We shall begin with some basics of event logging like event types, log-files, etc, and then cover the implementation aspects with respect to .NET.
There are basically five types of events that can be logged. Each event is of a particular type, and an error logging application indicates the type of event when it reports one. The Event Viewer uses this type, to determine the icon to display in the list view of the log.
Event Type
Error
The following are the major elements that are used when logging events:
log-file is the place where the all event entries are made. The event logging service uses information stored in the EventLog registry key. The EventLog key (shown in Figure.1) contains several sub-keys called logfiles. log-file information in the registry is used to locate resources that the event logging service needs when an application writes to and reads from the event log. The default log-files are Application, Security, and System.
EventLog
logfiles
Figure 1: EventLog key in the registry
Applications and services use the Application log-file, whereas device drivers use the System log-file. The system generates success and failure audit events in the Security log-file when auditing is turned on. Applications can also create custom log-files by adding an entry to the EventLog registry key (This can be done programmatically as well). These logs appear in the Event Viewer with the default log-files. It is a good practice to have a separate log-file in the event viewer for your application, as this makes isolating errors generated by your application easier. Also, each log-file will be an independently manageable unit. For instance, we can control the size of the log-file, attach ACLs for security purposes etc.
Information about each event is stored in the event log in an event log record. The event log record includes time, type, source, and category information of each event.
The event source is the name of the software component (or a module of the application) that logs the event. It is often the name of the application, or the name of a subcomponent of the application, if the application is large. While applications and services should add their names to the Application log-file, or a custom log-file, the device drivers should add their names to the System log-file. The Security log-file is for system use only.
Figure 2: Registry Structure: Event Source
Each log-file contains sub-keys for event sources (as shown in Fig 2).Each event source contains information specific to the software that logs the events. The following table shows the various registry keys that can be configured for an event source:
CategoryCount
CategoryMessageFile
DisplayNameFile
DisplayNameID
EventMessageFile
ParameterMessageFile
TypesSupported
Basically, event sources can be used to classify error messages as required by the application. For example, you can have sources like reporting, calculations, user interface, etc for an accounting application.
We shall see how we configure these values later in the article.
Categories help organize events so that we can filter them in the Event Viewer. Each event source can define its own numbered categories, and the text strings to which they are mapped. The categories must be numbered consecutively, beginning with the number one. The total number of categories is stored in the CategoryCount key for the event source. Categories can be stored in a separate message file, or in a file that contains messages of other types. We shall talk more about creating categories under the section titled Message Files.
Event identifiers identify a particular event uniquely. Each event source can define its own numbered events, and the description strings to which they are mapped. Event viewers present these descriptions strings to the user.
Message files are text files that contain information about the various messages and categories that applications want to support. These text files are then compiled as resource DLLs. Resource DLLs are small and fast when compared to normal DLLs. The advantage of these resource DLLs is that we can have messages written in multiple languages. By using these DLLs, we can have a truly localized application with localized error messages. Each event source should register message files that contain description strings for each event identifier, event category, and parameters. These files are registered in the EventMessageFile, CategoryMessageFile, and ParameterMessageFile registry values for the event source. We can create a single message file that contains descriptions for the event identifiers, categories, and parameters, or create three separate message files. Several applications can share the same message file.
This article so far discussed the concepts behind event logging. Now, let us consider the implementation aspects with respect to .NET.
There are two ways to create an event log-file:
This can be done by adding an entry under HKEY_LOCAL_MACHINE\SYSTEM\Services\EventLog.
HKEY_LOCAL_MACHINE\SYSTEM\Services\EventLog
To create log-files, event sources and also to log events in the event log we use the EventLog class defined in the System.Diagnostics namespace.
System.Diagnostics
'Create the source, if it does not already exist.
If Not EventLog.SourceExists("MySource", "MyServer") Then
EventLog.CreateEventSource("MySource", "MyApp", "MyServer")
End If
In the code piece shown above, we are creating an event source called MySource under MyApp log-file on the machine MyServer if one does not already exist.
MySource
Event sources can be created manually or programmatically (as shown earlier), similar to Event log-files. If you are creating sources manually, then you have to create several keys such as CategoryMessageFile and EventMessageFile under the source. The snapshot given below shows the registry entries after the log-file and event sources are created.
Figure 3: Event source registry entries
In Figure 3, we have created a log-file for the application called MyApp. Under this log-file, there are two sources MySource1 and MySource2. In the example shown above, Msg.dll in the temp directory is used to obtain both messages and category names (as provided for the CategoryMessageFile and EventMessageFile entries). The event source has three categories (CategoryCount is set to 3), and all event types are supported (TypesSupported is set to 7).
A detailed explanation of message text file syntax is beyond the scope of this document.<!-- However, for more information you can refer:.-->
Given below is a sample message text file:
;//**************Category Definitions************
MessageId=1
Facility=Application
Severity=Success
SymbolicName=CAT1
Language=English
MyCategory1
.
MessageId=2
Facility=Application
Severity=Success
SymbolicName=CAT2
Language=English
MyCategory2
.
;//***********Event Definitions**************
MessageId=1000
Severity=Success
SymbolicName=MSG1
Language=English
My Error Message
.
MessageId=2000
Severity=Success
SymbolicName=GENERIC
Language=English
%1
.
Note: Message files can be in Unicode to support messages written in any language. In the example given above, messages are written in English. The categories and messages have also been provided in the same file.
We need to compile the message file to a resource-only DLL. The following steps explain the conversion process:
mc filename.mc.
rc -r -fo filename.res filename.rc
link -dll -noentry -out:filename.dll filename.res
To log an event into the event log, the WriteEntry method of the EventLog class can be used. Given below is a code snippet detailing how a message can be logged:
WriteEntry
Dim objEventLog As New EventLog()
'Register the App as an Event Source
If Not objEventLog.SourceExists("MySource1") Then
objEventLog.CreateEventSource("MySource1","MyApp")
End If
objEventLog.Source = "MySource1"
objEventLog.WriteEntry("", EventLogEntryType.Error, 1000, CShort(1))
For the code sample above, refer to the message text file shown earlier.
Note a couple of things here. We are passing the EventID parameter as 1000. This corresponds to the message "My Error Message" in the message text file. This will be displayed as the message in the event log. Also, we are passing the CategoryID as 1. This corresponds to the category of MyCategory1. Note that I am not passing any error message as this is picked up from the message file. Figure 4 illustrates how the event log entry would look like when the above code snippet is executed:
EventID
CategoryID
Figure 4: An event log entry with the EventID and Category
Category
We now have our own node in the event viewer. By looking at the event log, the user gets a fair amount of information like the source of the error and its category. Not only that, this makes filtering messages easier as we now have our own Sources and Categories. Also, by using the same method, we have effectively segregated error messages from the application code itself. This would help us maintain a uniform standard for error messages across applications, and also take us a step closer to localization.
Another good feature the Event Viewer provides is that of hyperlinks in the error messages, which can be linked to a web page detailing more about the error. This would help an end user understand and troubleshot the error. For example, the message given below has a link to a web page:
Figure 5: Troubleshooting Link given with an Error Message
On clicking this link, we would be prompted with a dialog box as shown below:
Figure 6: Posting Error Information to a designated place
When the user confirms to send this information, a web page can be programmed to receive this information, and display troubleshooting information to the user based on the EventID, Category, and Source sent from the client (This information is sent as query string parameters).
Source
The event viewer definitely provides a wealth of features through which error messages can be effectively logged and tracked. By exploiting these features, we would help end users to understand the errors and their sources better, and this in turn can simplify things for the support group.
Included with this article is a simple VB.NET project which just shows how the WriteEntry method of the EventLog class can be used. The zip also includes a .reg file which should be run to create the custom log-file and to register the event sources. Note that some paths are hard coded in the .reg file. So, change them accordingly. Finally, a batch file Compile.bat is included to compile the .mc file to create a message. | http://www.codeproject.com/Articles/4153/Getting-the-most-out-of-Event-Viewer?msg=1476596 | CC-MAIN-2015-11 | refinedweb | 1,821 | 55.95 |
telecon notes - hapi-server/data-specification Wiki
2022-06-272022-06-27
Priority action items:
- Bob - provide draft for #136. Attempt to close issue at this meeting.
- All - Pick next issue to close out.
Other issues:
- Bob - update on updates of hapi-server.org/servers that he said would be complete by this meeting
- Bob - have finalized tutorial posted. Discuss having one of us present at monthly session
- Bob - issue of needing AWS and more permanent hosting of some HAPI servers and services
- Sandy - update on logo revisions
- Sandy - update on website revisions
- Sandy - report on GitHub discussions option for general communication with users
- Jon - update on SPASE issue (). Should we make a copy and create our own DOI if no action in another month?
- Jeremy - show web app that uses HAPI data
- Jon and Sandy - discussion of GAMERA HAPI server
2022-06-202022-06-20
Canceled
2022-06-132022-06-13
We won't be able to get to all of these and can push to the next telecon or a splinter meeting as needed.
Update on issues needed to get to 3.1 release. Assign action times. Put report on action item in the agenda for the next telecon.
Updates to get to the 3.1 release:
3.1 release: Items #117 and #136 required for 3.1. All other items downgraded to medium priority and saved for next release.
If folks have something ‘done’, speak now if you want it in 3.1
Next week, #117 and #136 will report out: #117 = ‘way to generically expose any existing complex metadata associated with a dataset’, #136 = ‘way for each dataset to indicate the max allowed time span for a request’
After next week, 1 week for everyone to review, then release 3.1 spec.
Discuss how we avoid having things "fall off" like the discussion of the web page revisions and the logo.
Jon: "One thing the server bug on CDAWeb makes me realize is that people will need a place to report things like that for any server. Servers can have contact info, but not all do. Plus, the summer school shows that people might use a persistent Slack channel for HAPI as a way to get quick help / pointers. Anything that can help people get “unstuck” quickly when they try something new will probably be very useful in aiding adoption."
(2 & 3) Use of Slack or similar for internal and/or external use: Sandy will look into GitHub’s “Discussion” and Wiki features and report back next week
CDAWeb issue. Bernie says Jenn is fixing the software for that. No additional discussion needed.
Discuss logo. Shrink ‘API’ to 2/3rd size and nestle closer to the ‘H’. Go to 2-color: API as dark blue. Sandy will email designers.
Jeremy's crawler code and relationship to solution to problems given in summer school. Jeremy will look at summer school code that Bob will post soon
Bob's updates to hapi-server.org/servers - Now has links for verifying and viewing server ping tests.
2022-02-212022-02-21
This telecon was to hear from the VirES implementors about their experience adding HAPI as a service to their data system.
VirES server can send gzip as requested by the client (via the right HTTP header) ==> we need to check our clients – do they properly trigger compression request?
HAPI spec needs to address special floating point values - in the JSON output, the special values `NaN, Infinity, -Infinity' could be used instead.
The issue is that CSV would need to have the string 'NaN', and binary would have an IEEE NaN value
The spec currently says nothing about text encoding. We have a ticket on this and it almost done. We are using UTF-8 with nil-terminated characters.
HAPI 1201 error is hard to implement – header leaves before data! Note that an HTTP 1.1 chunked response can be interrupted to tell client something is wrong. (main benefit of chunk is integrity check – gives you a way to tell the client that something went wrong.
Question: Are you doing your own chunking? Ans: Django is relied upon for chunking – pass it an iterator and it does the rest (same with gzipping)
traceability – what if the data is updated; how to indicate this to clients? Currently, this is hard since HAI does not report what went into the data. We could consider a new endpoint:
hapi/provenance
We will add a link to the VirES HAPI server GitHub project on our list of known implementations.
2021-12-062021-12-06
R and Julia client: Pete Reily from Predictive Science.
need to look at OpenAPI and get HAPI regisered there.
need to look at OGC/EDR - a standard in Earth Science for delivering data.
from Bobby: Rich Baldwin at NOAA is asking about a comparison between OGC EDR and HAPI.
- OGC EDR is at
- API docs for it are here:
- and here:
- New HAPI logo coming -- Jon and Sandy are leading this.
- Sandy will be presenting upcoming SuperMag HAPI server
- Jon will be presenting the HAPI server and its 3.1 features
2021-11-172021-11-17
TOC mechansim: Bob has a nice solution for this:
cd /tmp; pip install pyppeteer; git clone; git clone; cd data-specification/hapi-dev; /tmp/biedit/biedit -p 9000 -o
Note that
pyppeteer is optional (PDF output only). Also note that if you are working on a different branch, you would need to switch to that branch - the above command puts you on the master branch.
(Usage note: don't leave the TOC checkbox checked all the time - it triggers per-keystroke updates.)
SPASE at the Heliophysics Data Portal now lists HAPI URLs. Bob could update the hapi-server.org/servers page so that CDAWeb ind olink point to the SPASE URL.
Lots of work on
addtionalMetadata item in the
info response. See that ticket for the latest.
2021-10-042021-10-04
Overview of results from the IHDEA meeting:
- once per month, we will have a HAPI telecon devoted to IHDEA members as part of our role representing Working Group 5 (devoted to HAPI) within IHDEA. This will be the first Tuesday of the month at 9am Easter time, which is more internationally friendly, at least for Europeans, and it's at least not the middle of sleeping time for Japan.
- Jon is now coordinating IHDEA WG 3 on coordinate frames, and that group will try to come up with a schema and instances of coordinate frame definitions; there will be several meeting throughout the year; Jim Lewis already asked for access to all master CDFs at CDAWeb to get started cataloging the frame names in use
- there was some talk about adding images to HAPI - this is complex; IHDEA folks expressed interest in keeping HAPI simple, emphasizing that this is one of its main strengths; there is the EPN-TAP protocol and the CSO interface which can server images, so maybe those are enough; IHDEA folks also suggested maybe an IHAPI interface - something separate for images
Update on the HAPI paper - Bob needs edits by Friday; Jon to add COSPAR recommended standard language and reference to COSPAR Space Weather Panel Resolution on Data Access (from 2018):
Referee also wanted update on ability to deal with image data. Bob to just say this is a possibility but is not in there right now.
Discussion of image handling in HAPI
- if we don't return number, this should be via another endpoint ('hapi/references' or similar? this needs thought)
- could possibly also serve event lists - but those can have repeat events at the same time - this is at odds with HAPI time series data
- we need to try this and see if its worth it
For coordinate frames, and to support the full machine interpretability of vector data in HAPI, the following changes will be added to the spec:
- add a 'coordinateSchema' optional entry so that each dataset can specify a machine readable schema for interpreting coordinate frame names
- add an optional item to a parameter to indicate that it is a vector quantity, and this element will indicate the coordinate name (to be interpreted according tot eh 'coordinateSchema') and also a 'componentType' which is an enum of 'cartesian', 'cylindrical' or 'spherical'
Discussion on custom parameters: possibly add a section to the 'hapi/cpapbilities' section: ''' capabilities : { optionalParameters : [ { name: 'c_avg', restriction: { type: 'double', range:[0,inf], default: 0, description: "average according to the number of seconds given for the value of this parameter" }, { name: 'x_subtractBackground', restriction: { type: 'string', enumeration:['yes','no'] }, default: "yes', description: "subtract the background from the data?"}, { name: 'c_qualityFlagFilter', restriction: { type: 'int', range[0,4] }, default: 0, description: "quality level to accept, with 0=best quality, 4 = worst" }, ] } '''
Discussion on SuperMAG - might be good to get something working soon, rather than fit SuperMAG intricacies into HAPI mechanisms
2021-09-272021-09-27
demo of test site for SuperMAG HAPI interface by Sandy Antunes; several options for SuperMAG data (baseline subtraction or not, etc); different ways to handle this - possibly use different prefixes, but danger is proliferation of HAPI Server URLs with confusion about which one is for what data; alternately could be done with additional request parameters, possibly non-standard ones
There is likely a need for HAPI to support additional request parameters.
There are two types of new request parameters that might be needed.
- parameters that any server might want to support, but do require some effort to implement; examples: time averaging filter; spike removal filter; possibly a parameter value constraint option, although this is getting really complex! These parameters would have a prefix to indicate that they are optional, additional parameters, but if servers want to implement them, they should use the existing names and syntax and meaning (all time averaging should use the same request parameter and should behave the same on the server)
- parameters that are truly custom to one server or even one dataset; these have a prefix of x_
There should be a way to convey the presence of and also the meaning of any additional (standard or custom) parameters in the capabilities file.
2021-09-202021-09-20
We talked about serving images - see today's entry for issue #116 for more discussion.
Sandy showed what he's doing for SuperMAG to add a HAPI server there. He's created additional prefix elements in the URL before the /hapi/ part of the server URL to allow for the combinatorics of options for SuperMAG data. There are many, but the two options discussed were:
- baseline = daily, yearly or none
- give_data_as_offset_from_baseline = true, false (I think Sandy called this "delta")
In this case, some of the combinatorics for data options might be able to be gotten rid of if SuperMAG woucl also allow it's baseline data to be released as a separate dataset. We can ask about this, but it would still be worth it to look at ways to support extensions to the standard HAPI query parameters.
Extensions to HAPI query parameters could be described in the
capabilities endpoint. They would have to be simple and fully described. You could envision enumerated options as in
option1={A,B,C} or numeric options like
0.0 <= option2 <= 10.0 (expressed in JSON syntax in the capabilities). There would need to be a description for each elements, and units for any numeric quantities.
Sandy will make his existing server publically availble for testing, and Jeremy will try it out. Bob will make his Intermagnet development server available too, and we can compare them next week to see how to proceed.
2021-09-132021-09-13
For issue 115 ( ) the SPASE group and the IHDEA group are planning to come up with a way to identify coordinate frames in a standard way. There's actually an existing IHDEA working group on this already for one year, but nothing was done yet. There is a potential leveraging of SPICE-based techniques, but SPICE does not actually have a naming convention for frames. Several folks have their own conventions, but none have gotten wide adoption. There are a few standard papers on coordinate frames people use as the basis of their conventions.
We talked about allowing non-standard schemas for 'units' and also for 'coordinateFrameName' elements. As long as there is also listed a reference to the specification, we though it would be OK for people to specify a schema that we did not explicitly list. There was talk of listing all custom schemas in the 'about' endpoint, but then Bernie suggested and we agreed that it really belongs in the 'info' response (close to the use of the new schema name), which is where you need it anyway. A ticket has been opened for this.
Next week, Sandy will present progress towards a SuperMAG HAPI server. SuperMAG present some challenges since it currently presents data in a way somewhat orthogonal to HAPI (each station is not a dataset, but HAPI tends to think of them that way), and also there are lots of options or flags that the SuperMAG native access mechanism exposes.
We briefly talked about how HAPI could possibly be used in a cloud-based setting. This is being explored for SuperMAG, and then also for possibly model output data. Model data may have variable grid structure, so the dta structures are changing shape at each time point. HAPI does not currently support this.
Finally we talked about using HAPI for images. See ticket #116. Bob and Sandy are both interested in this, so we can talk about it next time too.
Agenda for next meeting:
- quick status update on HAPI paper and any HAPI presentations
- Sandy to present on SuperMAG (15-20 minutes, plus discussion)
- overview of a sample coordinate frame schema (written by JonV and based on CDAWeb info) - this is just a toy version of a real schema, but we can reference it and it shows people the basics of what would be needed for a more full-scoped coordinate frame naming standard.
- talk about HAPI for images
- review of outstanding, high-priority tickets
2021-06-142021-06-14
talk about 3.1 issues and priorities
2021-06-072021-06-07
no meeting
2021-05-242021-05-24
To discuss:
Bob - will make webinar on client usage of HAPI for science users after the paper comes out
Eric: "target specific users; scientists versus data providers"
This is ready to be closed after confirmation by Aaron and others at CDAWeb: Closed the ticket on user identity management.
These are some older tickets with no champion, so we reviewed those to see if anyone wants to revive them, or else they should be closed. (servers emit HTML) -- agreed to close with coments about other solution (use self-documenting REST style) -- agreed to keep open with low priority for now
Wed 1pm Eastern is next ticket review meeting
Next regular meeting will be June 14.
2021-05-172021-05-17
- HAPI 3.0.0 on zenodo: communities are SPA and PyHC
- HAPI paper submitted to JGR
- Jeremy: what about clients if servers go to 3.0.0? How do clients negotiate with the server which versino to use?
What about a capabilities object that identifies other versions of the spec:
otherVersions: [ { "version": "2.1", "url": "" }, { "version": "3.0", "url": "" } ]
This above approach is possibly non-standard - Bob found this reference with 4 methods.
Need an issue for this - not for before 3.1
Note: some things in 3.0 that will be deprecated.
Clients: Python and Matlab: not tested yet against 3.0; SPEDAS - not yet at 3.0; Eric will mention to SPEDAS team
Sample 3.0 data:
2020-11-302020-11-30
Discussion
- ok to close issue #107 (it's attached to the 3.0 release); create new ticket for longer term web site updates
- wording discussion for issue #77 on keyword normalization: best to use the latest language in the spec, but include the older style keywords to indicate that they are deprecated; have a block at the start of the 3.0 spec indicating the big changes
- how to improve landing pages so that people who don't know anything about HAPI can get started.
- COSPAR drafts / updates due Dec 7.
Landing page improvement ideas:
a. better hapi-server.org page (The content at hapi-server.org comes form the README.md checked in to the project associated with hapi-server.github.io. The actual server is at Amazon, but the landing page gets pulled / reserved from github.
b. improve the user interface of the HAPI Server Explorer" (or whatever Bob wants to call it), running at hapi-server.org/servers Ideas: add verb to dropdown menus; add intro paragraph; include name (HAPI Server Explorer, or equivalent)at top of page; create a set of slides to explain the usage, or maybe a video (use Camtasia for making videos, or maybe OBS);
At the Heliophysics Data Portal, maybe have two links for a HAPI-accessible data set: use HAPI data, info about how to HAPI. Aaron emphasized the need to let people know that this service is available and how to use it.
Action items
- Jon - close issue #107 about (after changing link to hapi-server.org)
- Jon - create new ticket for better hapi-server.org intro site - a "Getting Started" page for brand new HAPI users.
- Jon - look into updates for the hapi-server.org main page
- Bob - continue with spec updates for issue #77
- Bob - revisit / restart the HAPI paper; submit to JGR, Space Science Reviews, Advances in Space Research
- Jeremy - keep thinking about sub-group for server mods
- Jon - check with Masha on COSPAR group status
- Bobby - maybe include HAPI (and SPASE) on COSPAR presentation
Next meeting is: Dec 7 - this is during AGU, but it keeps us checking up on action items.
2020-11-092020-11-09
went through all issues to categories by milestone (3.0 or 3.0+)
Action items:
- Jon to review pull request for issue #94
- Jon to check with Eric on MMS units status
- change units pull request to only have units specs with good online info
- Bob to write up spec changes for keyword normalization
- all to read related issues #82 #83 and #87 for discussion next week
- Jon to ask Jeremy about starting server extensions working group
2020-10-192020-10-19
- How to describe access to a dataset via HAPI in SPASE via the
<AccessURL>element?
There was a lot of discussion about this - our solution that we proposed last time much more than I anticipated. SPASE is not clear about the intent about
<AccessURL> so we debated about responding using HTML versus JSON.
Two tickets are opened after the discussion: 101 (use HTML) and 102 (add links to make HAPI more truly REST-ful)
2020-10-052020-10-05
- Jon gave summary of meeting with Beatriz Martinez at EASC; HAPI server coming for Cluster Science Archive (CSA), but they are re-doing the server and will be done in January - no changes to metadata schema, so Bob and Jeremy will look at their metadata to see how it maps to SPASE; they are interested in using Bob's server initially as a living (actually used) example and then create their own implementation eventually; they were interested in any Java components we might be able to offer to help
- discussion about how best to incorporate HAPI info in the AccessURL of a SPASE record. The info response is not the right thing, since that is very computer-centric, so the current thought is to use the top level URL for the HAPI serer and then reference the dataset ID in the ProductKey element of AccessURL.
- Nand asked us if we were really making a difference and suggested it is time to zoom out and ask larger questions about impact and relevance. We need to push adoption more by getting totorials / quick starts out there. We need to finish the HAPI paper.
- IHDEA meeting is 19-22 Oct; agenda still being formed - people need to submit talks now since that's how the agenda will be formed; see this link:
Actions:
- Bob to review CDA metadata with Jeremy ahead of Dec 2 meeting with Beatriz
- Jon to look at adding quick start link to HAPI server home page
- Jon to look through SPASE records to see which ones would need HAPI access info
- Jon to add HAPI access info to agenda of next SPASE meeting (likely to be on Thursday Oct 15)
- all: submit your IHDEA talk now
2020-09-212020-09-21
- for Issue 94 (server info page), we decided to use the example.com/hapi/about endpoint, and added a
publisherCitationoptional element. This is ready to go into the spec. Additional items which are more dynamic belong on a different endpoint, discussion of which belongs in another issue.
- broken links now fixed on hapi-server.github.io
- updates from Bob: generic HAPI server being updated to work on a Windows server. Supported operating systems for this generic server are: Unix, Mac, Windows, Raspberry Pi, Docker
- HAPI client being upgraded to chunk up requests for longer time ranges; discussion of caching, which is linked to this capability due to need to
Action: Bob will update spec with new "about" endpoint; others will review his pull request and can make suggestions Action: Jon to incorporate time-varying changes into spec (different pull request) ahead of IHDEA meeting in October
2020-08-312020-08-31
- opened new issues #98 regarding how a server can indicate the ability to handle parallel requests from the same client; note that sometimes, it's hard to tell how many requests are from a single client if you have multiple servers behind a load balancer
- AMDA server is using HAPI now based on Bob's node-js front end; no public API could be found yet at AMDA (ask about this!); a new versin of the node-js server is about to be released; the AMDA folks asked about "HAPI inside" label or logo. Several will look into this (the PyHC group has a logo design process underway). Several responded to Genot's questions.
- need to finalize and close issue about HAPI error codes for time ranges
- no meeting next week
2020-08-242020-08-24
- closed issue #95 (should stop=None be interpreted by server to be the stopDate for the dataset?) since this can be handled in client code
- action item for all: comment on issue 97 about time error code clarification
- Bob presented the case for more complete info about each HAPI server (all.txt is very plain right now). We need a schema for the server list, and it would also be great to have an endpoint whereby servers can emit their own info (presumably using the same schema element). Bob has already created issue #94, and he will use that to come up with a schema for server details.
- Bob to update web page on the HAPI main page - hopefully he can talk about recent hapi-server.org web page updates next week
- Jeremy gave demo of Sparkline () capability he has added to his Autoplot-based HAPI server, which can be added to other servers if they want a quick visualization capability
2020-07-202020-07-20
Agenda
- release 2.1.1 is pretty much ready to go; procedural questions:
- retain a running changelog versus just changes for most recent version (maybe each major release has running changelog);
- the use of pull request mechanism works as long as people work on the same branch for modifications; that
- need to copy in the contents to the 2.1.1 directory
- decide which changes are key for resolving and including in 3.0; see this list:
- meeting this Wednesday 9-11:30am noon with ESAC about HAPI server
- GEM meeting this week
- AGU abstracts due July 29; ; some interesting sessions:
- PyHC session:
- Tools, analytics, etc for Solar and Planetary:
- Helio HackWeek - maybe a short presentation about uniform data access to participants?
- Aug 25-27
-
- ISWAT sign up - still needs doing:
- Bob - updating the IDL client - needs installation instructions to be added by Scott; also making more user friendly; better landing page for list of active servers; schema for server list; Bob to report / demo next week.
2020-07-132020-07-13
Agenda:
- release discussion: 3 issues (small, clarifications) to go into 2.1.1; then stamp this out; then copy to hapi-dev to begin work for 3.0; make sure there are no version 3.0 issues contaminating the 2.1.1. release.
- ISWAT cluster sub team? General agreement to join. Sepearate paper for COSPAR meeting (different than Bob's current draft); include Batiste Cecconi, and Arnaud Masson
- HAPI web site improvements needed
- Other examples:
-
-
- paper: journal is for Space Weather (SPASE is in here), or maybe Space Science Reviews? (same as SPEDAS paper)
- Heliophysics 2050 workshop - looking for white papers; need one for Data Environment:
- papers due in September 2020
-
- around the room - action item updates; new servers:
- Arnaud asking for time to meet for ESAC server; Bob's generic server an option
- no news from InSook; PPI node needs more support or other project leadership
Remember to do for 2.1.1 release: add leading comments about key clarifications.
2020-06-292020-06-29
Actions
- read Bob's draft paper - see his email dated 2020-06-22
- see previous action items
- optional: read the recent SunPy paper:
- side issue: move URI template wiki and effort to spearate GitHub project; make more useful by implementing in more languages -- translate JavaCC file to antlr, then to other languages? Summer of Code Project? find JAvaCC code for URI template parsing (Jon)
2020-06-222020-06-22
Today's call was a review of tasks needed to focus on pushing HAPI forward. Here is what we came up with, in order of importance.
- finish spec updates
- get SuperMAG HAPI server up and running
- get PDS HAPI server up and running
- finish and publish Bob's paper on HAPI - the spec and it's uses, showing wide adoption in Heliophysics and also at the PDS/PPI node
- status dashboard for existing servers
- integration with SpacePy
- continued and even more coordination with PyHC projects
For the next few months, we will try month telecons of 30 minutes to keep momentum going. The link to these notes will be included in the weekly telecon notice.
Action items from today:
- Bob to email Rob about SuperMAG (done)
- Bob to send link to paper (done)
- Jeremy to contact InSook
- Jon - get busy with those spec updates!
2020-05-112020-05-11
- walk-through of Jeremy's stand-alone Java client (no dependency on Autoplot); currently offers low level access to an iterator of records as they stream in; server oriented methods are static methods; very low level options expose the JSON content of the response; higher level methods allow conceptual access that isolates you from the actual JSON content (which also insulates users from potential changes in the JSON)
- discussion of issue:77 on some naming changes to remove a few quirks - see the issue for details
2020-04-272020-04-27
- Looked over issues
- talked briefly about availability info; some tickets related to this already
- Jon will present at the Python meeting on Wednesday.
- Jeremy will present Java client next time: May 11
- presentation / discussion of SuperMAG HAPI interface targeted for June 1.
2020-04-132020-04-13
- Jeremy is starting a Java client, mostly coded and looking for people he could work with.
- talked a bit about client identities, to support Super Mag server Bob Weigel is working on.
next meetingnext meeting
- "Time Series Data" vs "Time Ordered Data". Shing says that Time Series Data implies F(T) where F is an array of scalars.
2020-03-232020-03-23
- Chris L from LASP presented their HAPI server, and plans for the next version.
2020-03-162020-03-16
- meeting was planned but was cancelled.
2020-03-022020-03-02
- Jeremy has been working with In-Sook with the UCLA server.
- Bob Weigel has been working to have the verifier check on version numbers.
2020-02-122020-02-12
- in 2.1.0, need an update now about labels and units so the verifier can be updated
- we need a 2.1.0 official release (at the right time point - to allow proper differencing)
- changelog entries need to link to individual commits or diffs
- examples need to be added to clarify units and labels:
The spec doesn't describe very well what to do for the label (or for the units) of multi-dimensional arrays. For a 1-D array, it seems clear enough: the label (or units) can be a scalar (that applies to all elements in the array) or an array of values with one string per data element (and the length must match the size of the 1-D array). "name": "Hplus_velocity", "description": "velocity vector for H+ plasma", "type" :"double", "size": [3], "label": "plasma_velocity" "units" :"km/s" For a two-dimensional (or higher) array, we should allow for the units and the label to be a scalar that can then apply to the entire multi-dimensional object: "name": "velocities", "description": "two velocity vectors for different plasma species", "type" :"double", "size": [2,3], "label": "plasma_velocity" "units" :"km/s" The idea of having an array parameter is that all these elements have a strong "sameness" about them, so expecting the units to be the same is reasonable. Note: for an array of two vector velocities, the size should be [2,3] instead of [3,2] since the fastest changing index is at the end of the array. Note that the ordering is not ambiguous for things like a [2,2] because the spec indicates that the later size elements are the fastest changing. See this:
You could also give a label for each dimension: "name": "velocities", "description": "two velocity vectors measured from different look directions", "type" :"double", "size": [2,3], "label": [ ["species index"], ["vector component"]], "units" :"km/s" Each label in this case applies to the entire dimension. But the values still all have the same units. It's hard to think of a case where the units would be different - otherwise, why is it an array? You could also label each vector component: "name": "velocities", "description": "two velocity vectors measured from different look directions", "type" :"double", "size": [2,3], "label": [["species index"], ["Vx", "Vy", "Vz"] ], "units" :"km/s" Or, you might want to label everything: "name": "velocities", "description": "two velocity vectors measured from different look directions", "type" :"double", "size": [2,3], "label": [["H+ velocity ", "O+ velocity"], ["Vx", "Vy", "Vz"] ], "units" :"km/s" The units behave in a similar way, in that a scalar unit string is broadcast to all elements in its dimension, but an array of string values are applied to each element in the array. One use cse for this would be for vectors specified as R, theta, phi values. add example with different units (R, theta, phi, for example)
2020-01-272020-01-27
meeting agenda
- brief report on Jon's UCLA visit - I will tag-up with In Sook a few more times via phone call over the next few months; she had some fairly complex data and had some issues fitting it into HAPI; the PDS group there needs help converting PDS3 data into CDF
- going through outstanding issues identified by Bob: PDF problem, nulls in bin ranges, null in labels
- next telecon meeting: Feb. 12, 1pm
Notes: for UCLA HAPI server: what about using VOTable tools already used; leverage Eric on different floor! also some PDS3 to CDF tools at CDAWeb! Jeremy to work with In Sook - possibly loop Jon in for a few discussions
MAVEN SWEA (Solar Wind Electro Analyzer) data is in CDAWeb too; the elevations vary for the first 8 energies only, and then are fixed for the remaining 56 energies; HAPI could capture these elevations as a separate variable in the header;
Action: Jon to find out how the team views this data; PI at LASP is listed in metadata
Action: explore how image pixel references could also be captured using these bins
Action: remake HAPI 2.1.0 PDF and see if it fixed the Github renderer; ask around - is this broken?
Action: add phrase about bins content that having both centers and ranges is also OK.
Action: need more bins examples since this is one of the most complex parts of the spec
Action: Bob to write up description about allowing a null ranges if there are bins with centers but no ranges (for just some bins - if there are no ranges then just don't have a ranges object); Jon will review the writeup
Action: for integral channels, explain that you still need to put a (very high, but just high enough) bin boundary
Action: add-write-up for time-varying bins and for header references
Action: Jeremy to meet with In Sook Moon at UCLA to help along the HAPI effort there; see above too
Action: fix time string length for datashop Cassini MAG dataset (Bob noticed this)
Action: Doodle poll on new meeting time (and alternate meeting week of Feb 10)
2020-01-062020-01-06
- Discussion about CDAWeb server's approach to ordering of parameters in the request; can CDAWeb be 2.1 compliant? Nand to consider this soon.
- In looking at CHANGELOG: need to clarify changes in CHANGELOG: add numbers; categorize as to effect on servers
- AGU updates: Amazon lambdas could be useful
Plans for this year:
- get spec ready for 3.0 release; Jon to come up with a reading list for the "what to put in 3.0" discussion on Jan 27
- check up on PDS server - Jon to visit UCLA in January - Jeremy participating via telecon?
- Python client - spruce up docs and packaging; make sure other libraries use this as lower layer elements
- training for scientists - tutorials at meetings; tutorial telecon captured as video and crib sheets - borrow this technique form Eric Grimes (Jon to ask for assistance)!
- status and continuity of LASP server
- paper out on the 3.0 spec; Bob has an early draft he will send around
Discussion about data and DOIs; CDAWeb will acquire DOIs via SPASE; can retrieve data now using DOI or CDAWeb ID or SPASE ID; waiting for missions to coordinate DOI assignment; should HAPI offer a more generic data query capability for other IDs? Question about versioning and provenance? HAPI will use this standard when it comes avaialble.
Next meeting: Jan 27 to decide about issues to include for HAPI 3.0
2019-10-212019-10-21
- IHDEA meeting update: the verifier is very popular; the new ability to handle time-varying bins was presented; HAPI is now accepted as the interoperable way to deliver time series data; ESDC (which stands for ESAC Science Data Center, where ESAC stands for European Space Astronomy Center) is planning to adopt HAPI - they are waiting for a lull in activity - we will coordinate with them starting around the new calendar year; CDPP is also planning an implementation
- hapi-server.org is having problems in some browsers because of it's certificate and https issues
Action items:
- Jeremy to fix the hapi-server.org certificate issue
- Jeremy to prepare a demo of Autoplot using SAMP for something other than granule access. SAMP can delivery das2 endpoints, and could similarly expose HAPI endpoints (either at the dataset level or probably also at the whole server level)
- Bob to give a demo of the generic server capability (it is now installable via pip)
- Jon to update the spec with all recently (conceptually) approved changes, including: time-varying bins, references in the header, a cleaning up of the usage of id versus dataset, etc; changing time.min and time.max to start and stop in the request interface (keep but deprecate the older terms)
- PyHC meeting in two weeks - Jon and Aaron to attend; others will participate online; ensuring a sensible, common data access mechanism within the emerging library is of particular interest to the HAPI crowd
- next telecon on Nov 18
2019-10-072019-10-07
- The two new features (time-varying bins and references in the info header) have both been tried on live demo servers, and seem to be working well. See Ticket #83. These are ready to be written up in version 3 of the spec.
- units - For HAPI 3.0, we would also like to add an optional "unitsSchema" as an optional Dataset Attribute. This would allow data providers to specify what convention is to be used for interpreting the units strings in the metadata (i.e., info header). As mentioned in Ticket #81, which is about this topic, conventions like UDUNITS2 are suitable for this, and they satisfy case 1 and case 2 that are described in Ticket #83. There needs to also be a way to specify which version of the schema is in play, and we decided to start with a rough version identifier, such as "udunits2" rather than being very specific like "udunits2.2.26" since that would be harder for clients to manage when there are minor version changes. The other example are the units from AstroPy, which are apparently part of the core AstroPy package, which is now at version 3.2.1 so that using AstroPy-comnpliant units in HAPI metadata could be indicated using a "unitsSchema" of "astropy3". Rather than force people to choose a units schema from a list, we will describe the ones commonly in use and provide recommendations for how to come up with the appropriate schema name. If clients do not recognize the unitsSchema, they will just ignore it. Note that each dataset specifies it's own unitsSchema (but not individual parameters).
- other news: WHPI (Whole Heliosphere and Planetary Interactions, see) is attempting to make plasma data from all relevant Heliophysics missions and models accessible. There's a meeting next September and ideas are floating around now to help make this happen. HAO has money to work on this. This would be a great chance to point out that HAPI was designed exactly for this problem, and try to get some traction with and support from this group.
- other news: Cluster data is going to be mirrored at CDAWeb, where the default option is to present it in it's converted CDF form (ISTP-compliant) and serve it via the usual CDAWeb conventions, including HAPI.
- upcoming meetings:
- Aaron headed to Big Data meeting for NAS - he is looking for ideas and slides
- IHDEA - Jon to will present the latest updates to HAPI
- PyHC meeting; Aaron and Jon to attend; Bobby attending remotely and has ideas ha wants advanced
- AGU - relevant sessions are Monday (IN11E - Tools and Databases in Solar and Planetary Big Data) and Thursday (SH41C - Python for Solar and Space Physics)
2019-09-092019-09-09
This call was to give a quick status update from the sub-group working on references and time-varying parameters. A few suggestions were logged in Ticket 82.
Aaron also commented about maintaining a focus on implementations, and having something ready for people who want to implement a HAPI server but want to just drop is a pre-existing, generic server that can districtue their data using HAPI. We also talked about future connections to NSF efforts, such as we hope to bolster using the SuperMAG effort that is underway. Madrigal would also be a useful connection to make.
2019-08-122019-08-12
- Eric will send Bob a note to update the HAPI main web page about the IDL client in SPEDAS.
- Time-varying bins virtual hack-a-thon is this Thursday; iron out spec changes and implications
- upcoming meetings: AGU (PySPEDAS poster will mention HAPI, Jeremy is in Python session, Jon has 2 abstracts on HAPI), also the IHDEA meeting in October - present time-varying bins update to ESA contingent
2019-06-032019-06-03
agenda: Jeremy's presentation on Das2 server options
Das2 servers have flags for individual datasets that grew out of the original use for Das2 servers, which was as a somewhat internal protocol between a client and server written by one developer, who understood what all the "secret" options were and could use them to optimize the data transfer for what the client needed. Jeremy advised against this kind of behind-the-scenes options proliferation.
Because the ensuing discussion led to significant interest in adding optional capabilities to HAPI servers, the bulk of the content for this telecon is captured in Issue:79
See that issue for details about adding server processing options.
Action items:
- Bob to present about the FunTech server
- Jon to follow up on server implementers
- need examples of capabilities modifications to support binning, interpolation, and spike removal
2019-05-202019-05-20
agenda:
- discuss release notes
- organizing efforts for the next release - which issues to work on and who
- plan for getting servers updated to the latest spec
- getting the word out: documentation, posters at meetings, training sessions at meetings, training videos
- news: Python meeting tomorrow in Boulder and via telecon:
- Python meeting agenda:
Focus for the future:
- more complete example package showing people how to access typical dataset using multiple clients
- paper describing HAPI - Bob W. to send around draft; options: JGR, Space Phys. Rev
Actions:
- Bob: send around client test suggestions
- all test clients per Bob's directions
- Jon: check with Nand and Doug about server status
- Jeremy: prepare demo of Das2 dataset options management (only retrieve finest resolution, etc)
- Jon: straw-man examples of binning, interpolation and de-spiking
- all: bring servers up to new spec
2019-04-222019-04-22
We decided to proceed with the release of 2.1.0. The one thing left to do is update the spec to reflect the resolution of Issue #69 about how to handle a user request for parameters that are not in the same order as what is in the metadata. We are also adding an additional error code (1411 - out of order or duplicate parmeters).
2019-04-152019-04-15
Suggestions - update the "server" nomenclature in the spec to reflect intent: this is the full server and URL prefix of the top server location / entry point. After the first example () clarify that "server" includes the hostname and possible prefix path to the HAPI service.
Lots of questions about prescribing the order of returned parameters - Nand: this can add confusion when there is no header in the response (then you have to consult the info to see what you got in the response). The differences are focused on client expectations (return what I ask for) versus a data-centric perspective (the data exists and will be returned with as little changes as possible - no re-orderings). Jon will discuss with Bob and Jeremy and bring a suggestion to the next telecon.
2019-03-252019-03-25
topics discussedtopics discussed
- Duration (issue #75, now closed) is tied to time-varying bins, so the explanation is not in the spec document, but a separate implementation page until the time-varying bins are figured out.
- Need one last look at all changes since last release:
- Need to make a changelog with diffs for key updates; roll up typos, etc, into one item
- Bob looked at timeseries data from earthquakes (including electric field values). He said their standards seems pretty easy to map to HAPI - we have similar elements and a similar approach (of course the details differ); he will send some links See. The timeseries link is the one for data.
- normalizing ids and labels fro version 3.0- (See discussion below)
- coordinating efforts on Python HDEE proposals
Considerations for Normalizing the use of descriptive labels (see below for details)Considerations for Normalizing the use of descriptive labels (see below for details)
For parameters: id - machine readable ID with limited characters (no spaces or odd characters); BX_GSE label - short, human readable version of ID; spaces ok, "Bx in GSE coordinates" description - up to a paragraph of information about the parameter or dataset; think figure caption; same level of info from SPASE record
SPASE analogs are: parameter key, name, description (the main thing is to have them correspond one-to-one with SPASE, and maybe others?)
Relationship to resourceURL? If this is present, then 'description' is obtainable there.
For catalog entries (each entry is a dataset): currently, each dataset has: id (Required), title (optional) Suggestions for 3.0: A. each dataset has: id (required), optionals: label, description, start, stop, cadence B. have a verbose flag on catalog request that generates a full, parameter-level catalog of all datasets; like this If present, advertise in capabilities as catalog verbosity
Does this make HAPI too much of a registry? Original idea was to let discovery focus be outside HAPI. It makes HAPI usable in other contexts.
2019-02-112019-02-11
Discussion about the generic server Bob W. is creating:
The server has multiple installation methods, one of which is a Docker image. This option has drawback, since it's hard to edit files inside Docker (you have to ssh into the Docker VM, and then use whatever primitive OS tools are available, like vi or nano). So after someone configures their server, they could build a Docker image, but it might not be too useful like this as a deliver mechanism. Unless-- you could have the server config file be external, and then tell the Docker image about it at startup. Bob will look into allowing the run option for the Docker image to take a URL argument pointing to an external config file.
Volunteers are needed to try out Bob's method and see how easy or hard it is to build the back end components to feed the HAPI front end.
What is also needed is a GUI mechanism for building that back end. This could be a separate open source project to build this part.
NOAA Space weather week is coming up; this is a good time to connect with both the science and operations / developer side of the house at NOAA. Also, the archiving side (NGDC) and the realtime side (NOAA Space Weather Prediction Center) will both be there, and they have separate mandates that don't mix often. Jon will contact Larry P. and Bob S. to see about connecting with NOAA people about using HAPI for their archive and real-time data
Specification updates
Jon is planning to put some revamped TIMED/GUVI data behind a hAPI server, and one issue is that each measurement needs to be correlated with a lat/lon on the Earth. We need a way to associate data columns with support info columns, like lat/lon. Also, sometimes, the lat/lon may be fixed, or partly fixed, i.e., changes every few years (when the ground magnetometer station is moved). Options are:
- just have a column that repeats the same value (this is the default now, and probably until HAPI V3)
- the header could list all the options for a slowly varying quantity, and also provide labels for each value, and then the data column could reference the label and only repeat that instead of the enire set of values; this is a kind of built-in compression
We should look at how Earth science organizes data products that need lat/lon registration.
Next steps for HAPI - better on-boarding process for people who want to adopt HAPI. Groups so far that have done this are CCMC and Fundamental Technologies (PDS/PPI sub node in Kansas).
We need to make our documentation have more of a flow or be more organized and cookbook oriented.
There are still a lot of outstanding open issues on the spec document. These need to be cleaned out. Most are documentation clarification, but two are larger issues. The biggest one is handling "mode changes" (bin vluaes that change with time). This is issue 71. Jeremy, Jon and Bob need to meet separately to try their latest approach as outlined in the issue.
2019-01-282019-01-28
2019-01-072019-01-07
attendees: Jon, Jeremy, Todd, Chris, Eric
- Happy New year everyone; we are missing our NASA colleagues and hoping they can get back in there soon
- is this meeting time OK for the upcoming year? will do a poll later to see if this time is OK
- EGU - abstracts due Thursday; Tom is going from LASP; no session identified for data environment topics; no one else likely to go
- iSWA HAPI Server is up and Jeremy reports that it performs well; Jon to ask CCMC to advertise it more on their main page
- Masha from the CCMC mentioned at the AGU that HAPI was approved by COSPAR and that we should form a group about it before the next COSPAR meeting in March; Jon to follow up with her about this, since it was a hurried conversation in the poster hall; the COSPAR approval of SPASE is still in process pending some clarifications, possibly related to how SPASE and HAPI interact
- is the URI template mechanism a part of SPASE? Todd thinks it can be listed in the AccessURL
- discussion about creating a "drop-in server"; we need to first define this more clearly; some kind of ready-to run mechanism to support the use case where a provider does not already have a server that can be modified; definitely it should provide proper HAPI parameter parsing and a secure environment; maybe these parts could be done in multiple languages (NodeJS, Python, Java) to give people options. Bob's server is coming along nicely and could be made into something installable via NPM (installer / repository specific to JavaScript); maybe we can start a group project for this effort; there are some datasets at APL to which we could try applying the generic server: TIMED data (time series of atmospheric retrievals and images) and also SuperMAG (which has some strict user registration and data usage acknowledgement requirements)
- client work - need to keep bolstering the Python client to make sure it is industrial strength Python; Bob is working this - does he need / want help? this will hopefully end up in the Heliophysics Python library
- next meeting: Jan. 28 (since 21 is Federal holiday)
2018-12-172018-12-17
Post-AGU meeting discussion about AGU - discussion with Bob and Jeremy and Larry Brown - Bob wants more issues closed, especially bug ones; the ambiguity of cadence is a key one; 2. other AGU news: charter in the works for IHDEA (International Heliophysics Data Environment Alliance) 3.
2018-11-192018-11-19
- meeting reports from various events: IVOA - Jon V.; very short - astronomers have preliminary interest in HAPI; contact is Ada Nebot; ADASS - anyone go to this?; EarthCube RCN - Jon V.; HelioPython - Aaron, Bob, others
- Update on time-varying bins - not much news yet
- server status check, including LASP; development so far on Github at LASP site
- AGU Plans
Meeting updates: IVOA - interest in HAPI and our experience; only a preliminary connection - further dialog needed; interest in re-using existing standards, such as Apache AVRO
Meeting updates: Python meeting - presentations from contributing libraries and other existing libraries in terms of practices and structure; possible HDEE call for exploring e.g., library governance; Bob and Aaron met with NGDC (Eric Keane, and Rob Redmond) who have their own APIs (spider, and 2 others since then, now another); API is mostly for internal use within web-page plotting and for access to their own database; DISCVR and GOES products; most of their products already in CDAWeb; SWPC real-time data is separate, and they only expose files for security reasons - thus would need a wrapper; question: what is latency with iSWA at CCMC? if low, then probably good enough; could ask CCMC to cover more products; group at ONERA (French radiation belt group, Sebastien Bourdarie) also building a HAPI server - eventually using a Python Django - would they be willing to contribute it as open source?!!!
Overview of LASP HAPI server from Chris Lindholm; it will be generic as a LaTiS server - if users can set up their data to fit into the LaTiS framework, then the data can be served via HAPI. More at AGU, including public HAPI server. Functional programming (Scala) being used.
Next meeting (after the AGU): January 7, 2019
2018-10-292018-10-29
- report on International Heliophysics Data Environment Alliance (IHDEA) - meeting as ESAC (archive for all ESA missions); Arnoud Masson; enabling cross-agency interoperability; public site is at ihdea.net with dev and info mailing lists
- upcoming meetings: HASA HQ Data and Computing across all SMD, IVOA (Nov 7-9), Python Meeting at LASP, ADASS, EarthCube
- connection with NOAA being sought (Bob Weigel working this with Aaron); need to prime the discussion with the right NOAA people before Space Weather Week (April 1-5, 2019 in Boulder)
- Update from LASP (Doug Linholm) - code for scala-based somewhat modularized HAPI server available at which might be demonstrated next time
- Update on Python client - able to push data directly to Autoplot; lots of other features for a demo next time
- next telecon - Nov. 19; topics include more on Python client and possibly some on the LASP HAPI server
2018-10-012018-10-01
topics covered:topics covered:
Upcoming meetings:
- Python meeting in Boulder: Aaron and Bob Bob attending; Aaron (with Alex DeWolfe) is coordinating Python library development for Heliophysics
- Astrophysics Data Analysis conference in College Park, MD
- NSF EarthCube RCN meeting at NJIT: Jon going; let Aaron know if you want to be invited attend
Python client: Bob has a basic package installer working and a Jupyter notebook;
Specification updates: Jeremy and Jon presented ideas for dealing with issue:71 about constants in the header and about time-varying header elements; suggestion from Todd and Bob: use native JSON reference capability; possibly also have our own reference syntax when using a parameter value as time-varying bin values
Action items:
- Jon and Jeremy - revise the suggestion for issue 71 to use native JSON refs
- Bob and everyone - find more Python helpers
2018-08-202018-08-20
topics coveredtopics covered
linking parameters in the header this relates to issue 71; there has not been much work on this yet; issue 71 now has a write-up of some options; Jeremy will explore some options in the next few days
email lists for now, we will just use the hapi-dev list for most communications; we can use hapi-news occasionally, but that should include instructions for getting on the hapi-dev list, since that is still going to be the priority list for a while
NOAA data would it make sense to have NOAA data via a pass-through HAPI server (written outside of NOAA)? we should interact with NOAA some, especially at next year's Space Weather Week, when developers and scientists are all available
server updates:
APL: JUNO data going to be put behind a HAPI server
Iowa: Autoplot bug fixes; das2 server codebase is shared with hapi server codebase, and a setting determines if the das2 server is also a hapi server; decided dataset by dataset within a das2 server
LASP: development underway for HAPI server, which will be part of the LATiS version 3 effort; work is all being done on Github and so the codebase will be usable by others interested in serving data via HAPI or LATiS
PDS/PPI: server is up and running; CAPS data available - more testing needed; any dataset in PDS4 can be easily added to the HAPI server
CDAWeb: Nand's server still running OK; saw some accesses from APL; problematic variables being removed
client updates:
Nand is working on Java client - this could be coordinated with Jeremy and Larry Brown
VisualBasic client for MS Excel is going slowly at APL; high school intern will continue this fall
action itemsaction items
Todd - send Jon and Aaron the email addresses on the hapi-news and hapi-dev distribution lists
Jon - send something to HAPI-news occasionally to keep people up to date on development
Jon - test the hapi-dev list using the WebEx meeting setup tool to see if everyone will get the WebEx invite
Jon - email Alex DeWolfe about adding more data formatting discussion to this Friday's Python telecon
Jon - work with Aaron to touch base with the CCMC people for a status update on their server and we're especially interested in any feedback they have regarding the specification
Jeremy - work on implementing something for linking variables and/or header items
Jeremy and Bob - remove time library dependence from Python client; look into Jupyter notebook as a demo for how to use Python client to interact with a HAPI server
2018-07-302018-07-30
AGU sessions - planning for multiple sessions; SPEDAS training after Mini-GEM (and poster in Cecconi's session)
Oct 2,3,4 Python for Space Physics at LASP; presentations on existing capabilities; architecture discussion and layout; Alex DeWolfe coordinating; she also has mailing list and telecons every other Friday
Actions:
Jon - send Alex D. a note about Python integration of HAPI; jump in on upcoming Python telecon
Jon - write up summary of discussion on reference variables and include in issue 71, then notify everyone
next telecon: August 20
2018-07-232018-07-23
COSPAR summary - news from ToddCOSPAR summary - news from Todd
SPASE and HAPI put forward in resolutions recommending their use as standards
AGU submission possibilitiesAGU submission possibilities
Jeremy will submit to this session by Baptiste Cecconi:
IN044: Interoperable tools and databases in Planetary Sciences and Heliophysics
Bob is thinking about this session:
IN007: ASCII Data for Public Access
Jon will put a HAPI specification poster in this session:
IN042: Integrating Data and Services in the Earth, Space and Environmental Sciences across Community, National and International Boundaries
Bobby and Bernie will not create a HAPI-specific poster, but can support a CDAWeb description on another HAPI poster, which should also include Nand.
Doug will present the HAPI-fied version of LaTiS at the AGU as well, session is still TBD.
Next telecon will be July 30
topics to include:
issue 71:
updates from various servers (CDAWeb, PDS/PPI, GMU, UIowa, APL, LASP, and maybe the CCMC developers)
2018-07-162018-07-16
Note: next telecon is in one week (July 23) in order to have a short tag-up on AGU abstract submissions.
The two action items from today are:The two action items from today are:
A. peruse the AGU session list and think about what HAPI abstracts we can submit. There are multiple options:A. peruse the AGU session list and think about what HAPI abstracts we can submit. There are multiple options:
- multiple posters: a poster on the Spec, one on clients, one on servers
- one poster on all of these (spec, clients, servers)
- other permutations: one on the spec and servers; then one more for clients
There's a session by Baptiste Cecconi:
There's also a Heliophysics Python session by Alex DeWolfe:
B. take a look at issue 71 - it's about how to handle constant parameters or references in the header and in the data.B. take a look at issue 71 - it's about how to handle constant parameters or references in the header and in the data.
URL is:
Be ready to talk about this at the next telecon
Server updates: The CDAWeb HAPI server is going to use Nand's approach for the foreseeable future: (We did not talk about this, but it uses https (encrypted), which has to be considered when mixing with regular http (non-encrypted) sites.)
CCMC - Aaron can check with them soon to see how they're doing
PDS - Todd not on the call (COSPAR); will get an update next time
LASP - Doug says funding all set up and work is starting / progressing
Client updates
- Autoplot and the MIDL HAPI client were presented at the MOP meeting last week. 20- people attended the tutorial. A few scientists are starting to see the value of having one access method across data centers.
- SPEDAS tutorial held at GEM meeting. Another planned for Sunday evening after mini-GEM at AGU. A part of this will be about HAPI, so Eric was fine with a HAPI representative helping out with or being present for that part of the annual SPEDAS tutorial. We hope to have CCMC and PDS and maybe LAP online with HAPI by then!
- At APL, some interns are going to attempt a fully Excel-based HAPI client, or at least some mechanism that can produce more regularized CSV files that can be opened easily in Excel.
2018-07-022018-07-02
AGENDA
- news from Bob on updates to the verifier
hapi-server.org/verify is link to new verifier; it just is a pass-through to his own site at GMU
also from Bob - update on the generic server Bob - few tweaks to docs and ready to start advertising about in 2 weeks;
transitioning hapi-server.org to actually serve HAPI content
Jeremy: change documentation so that it points to working examples on hapi-server.org
- feedback about Jeremy's proposal for constant parameters
Jeremy's proposal for constant elements in the header or data:
Lots of discussion about exactly how to arrange references in the header. Should there be a more generic way to link variables - i.e., treat even the constant elements as a kind of parameter, and then just have them linked in the header, like CDF does. Or should we keep header variables different than time-varying data parameters?
2018-05-072018-05-07
Today's discussion: PDS HAPI server is up and running. Send issues to Todd. The rest of today's discussion was mostly about how to handle data with unusual bins, such as 3D data that in addition to a regular grid of bins along each dimension, somehow also has a separate grid of bin values that applies to a specific slice or face of the data. This is a MAVEN dataset, and Todd will be sending around more info about it for next week.
We will have another telecon next week, May 14, and then take off the week of the 21st, since that is the TESS meeting week.
2018-04-092018-04-09
- upcoming meetings:
- EGU is this week; HAPI poster is on Wednesday, presented by Baptiste
- CCMC meeting (Friday session) is devoted to comparing interoperability mechanisms and has international participation
- TESS meeting - no updates - registration is open
in applying HAPI to Cassini data, scientists wanted to be able to manipulate and combine the data, doing more than just presenting what is in the file; MIDL does this because it knows what type of science data it is dealing with - effectively, it has more metadata so it can make a particle data object (with look directions, or pitch angles, etc); the way to have HAPI support this with the current spec would be to add custom metadata extensions (allowed by the spec) that would allow a client to know more about a dataset discussion about using HAPI to capture more
status updates: incremental progress on server development
Jeremy and Eric are working on supporting caching using the If-Modified-Since HTTP request header mechanism; Jeremy has a draft document out about how to do this; Autoplot can already do caching, and adding the If-Modified-Since to Jeremy's test server did not take too long (few hours of Python modificaitons). Eric is working on adding caching to SPEDAS - he is planning to use daily chunking of data (same as Jeremy).
discussion about a generic server - see next paragraph
Generic Server Ideas
Jeremy and Jon want to start a group development of a generic server that is independent of current servers, many of which are modifications of existing, historically motivated servers, and since HAPI is being added as a secondary delivery mechanism, these modified servers are not suitable as generic examples. Also, it would be good to focus on web security in the design from the beginning. So we envision a 2-level system with a front-end that manages incoming requests, and also returns the response. The front end is completely generic and re-usable and as the outward facing element, it is made to be very secure. The back end deals with the data management needed to fulfill the request. It should be made able to handle data arrangements that are nearly HAPI-ready, such as a static HAPI site that has files and metadata as fixed entities (and the back end knows how to subset them, etc).
The back-end could be made generic if the data center can provide three elements of functionality:
- the ability to read a dataset for a given time range and bring it into an internal data structure of that data centers choosing (QDataset for Autoplot, ITableWithTime for MIDL, something similar for CDF programmers). This capability is something each data center will possibly have already.
- the ability to subset this internal data model by parameters or by time
- the ability to turn this internal data model into a HAPI-specific structure that the back end knows about (and is essentially a HAPI-based data model with the right metadtata).
If a server can provide these 3 things, the back-end code can handle the rest of the HAPI-specific processing.
Doug mentioned that this essentially reflects the design layout of LISARD, and some of the code is already on Github, and the upcoming development will likely be another Github project within the current hapi github project. The generic server should not be tied to a single institutions code base, but we can certainly pull ideas from existing implementations.
Jon wants to get Rob Barnes and Bob Schaefer involved, since they both have relevant data that we can try to make available through HAPI, and as we do this, we could also spend a little extra time to create a generic server like the one outlined above. Schaefer's data is interesting since it is ITM data with higher dimensionality, and this would demonstrate that HAPI can be used for ITM data.
Bob Weigel (not on the call today) needs to also be heavily involved in the design and implementation of this generic capability, since he has expressed an interest in it for a long time already.
action itemsaction items
- create feature request for overlay metadata to identify specific data types; this topics is related to time-varying metadata, so this could be incorporated into any updates to the spec
- write up ideas about generic server and create feature request (or update existing one).
- next meeting is Monday, April 16, when Rick M. from CCMC will demo his HAPI interface; no meeting on April 23 since that is the week of the CCMC meeting
2018-03-122018-03-12
updates:
- HAPI error codes - spec document update almost done - needs example still
- HAPI caching in Autoplot - few small bugs before production; structured so that the cached content could be used by other clients in other languages; detection of stale cache is via the optional modification date (which is not granular) or just age in cache; maybe flesh out a common set of refresh rules on this telecon?
- modification dates and HTTP status codes - Bob, Jeremy and Jon to talk at next week's time slot
- CDAWeb HAPI server; Nand's is running at proto.hapistream.org/hapi ; add this to servers.txt (Jon); being migrated inside CDAWeb
- LASP - getting set up soon
- SPEDAS - bug fixes and time format handling updates (will use regex from Github to handle YYYY-DOY formats); the validator (by Bob Weigel) may have a better tested way to parse times -- see the verifier code here: (needs leap seconds updates); also SPEDAS accepts parameter restrictions; also handles first time column OK
- demo by Larry about MIDL4 HAPI client
- Aaron - need long term organization mechanism
- Jeremy, Bob, Jon to use next week's 1pm slot to talk about modification times and expiration dates
- next telecon: March 26
Action Items:
- HAPI web page () needs to mention SPEDAS! (Jon)
- discussion about posting a Java client to Github main page (Jeremy and Larry)
2018-02-262018-02-26
Agenda
- demo by Eric of HAPI support in SPEDAS
- touch base on other development efforts underway
- mention upcoming meetings:
- EGU, April 8-13, Vienna, Austria,
- TESS, May 20-24, Leesburg, VA,
- COSPAR, July 14-22, Pasadena, CA,
Updates:
- PDS PPI node - server update in progress; works in development; being pushed into Git repo for move to production environment; available for PDS4 datasets (MAVEN and LADEE now; soon Cassini and MESSENGER; migration of everything else underway too)
- Jeremy and Bob - more generic servers; Jeremy: multi-threaded Python; Bob: node.js server in dev.
- GSFC HAPI server - Nand has new version; also has API for HAPI input stream and output stream
- could be some interest in making data from active missions jointly usable; stay tuned for senior review report
Switch to every 2 weeks - next telecon in March 12.
Next time - MIDL demo.
2018-02-122018-02-12
CDAWeb - JSON update still in progress
Bob and Jeremy - working on generic server and developer documentation;
the HAPI verifier - up to 2.0! ability to check JSON and binary is still in progress; ability to set timeout will be added soon
discussion about error codes: the spec points out that when no JSON is requested, only the HTTP status response is available; Bob and Nand already implemented mechanisms that do more than this, and they suggest we add to the spec so that it recommends the following for HAPI server error responses:
- modify the HTTP response text (not the code number) to include the HAPI-specific error code and message
- even for error conditions that report "not found" still return JSON content to describe the error message
Note: These are all small enough changes (and are just recommendations) so that they only trigger a version number increase to 2.0.1
Before adding it to the spec, we need to see which servers can do this, and which clients can utilize this information. We expect it is not a problem, but want to be sure. What we know already about servers: Tomcat (yes), node.js (yes), Perl(?), Python(?). About clients: curl (yes), wget (no)
Next week: Eric Grimes - will demo IDL HAPI client and SPEDAS crib sheet
action items:
study the following server capabilities to implement 1 and 2 above; Jeremy (Python and Perl servers)
see how proxies affect the transmission of the JSON content when there is an HTTP 404 error; was this going to be Bob or Jeremy?
clarify the error handling section in the spec to describe the new recommendations (Jon)
2018-02-052018-02-05
discussion about streaming implication of timeouts - need statement in the spec about servers needing to meet reasonable timeout assumptions for clients; current typical values are around 2 minutes; we need to check these; must specify for time-to-first-byte and time-between-bytes
Bob's verifier currently has multiple tiers of checks; it will be switched to allow the timeout to be an input
also need to clarify expectations about multiple simultaneous requests (do servers need to be multi-threaded?); CDAWeb limits simultaneous connections for security reasons; Apache has settings to limit connections; does Tomcat?
how to clarify any confusion about streaming? record variance is the fastest changing item
make sure the spec mentions that servers can respond with "too much data" which is especially relevant if delivering data in any of the column-based formats were considering as optional output format
Discussion about current JSON format - there was a question about the validity of records with different types in the array for one record; JSON Lint parses this fine, claiming mixed values are OK; JSON Spec RFC7519 agrees;
2018-01-082018-01-08
AgendaAgenda
related topic of interest: Open Code / Source white papers
- NASA is serious about it's commitment to encourage / require open code.
- people are encouraged to submit short statements with support or opposition or suggestions of pitfalls to avoid, etc.
- some comments about streamlining the legal / formal release process; also documentation is time consuming
- difference between open source project (lots of global developers contributing) versus open code (source code available, but not necessarily supporting active, joint development)
- overlap with SPASE descriptions for publicly available resources
HAPI email list now set up
Web site improvements: minor improvements only, add dates to releases; mention the news listserv and how to subscribe; current telecon members have post capability - new members are moderated starting off; others listen only; eventually have a [email protected]; add all the logos fro supporting organizations
Lessons from the AGU:
- discussion with Arnaud Masson (Aaron's counterpart at EGU); Aaron will set up a meeting about interoperability at the right level of formality, using HAPI as an example case
- feedback from Hayes: OK to proceed with some HAPI development
plans for the year
conference presence this year? EGU - joint abstract with Baptiste (ask about collaborators) and Arnaud and LASP group (Tom, etc) supporting the presentation of the material at the meeting; Jon will write tomorrow TESS in June - abstracts due in February (AGU-based site)
- Jeremy: update from SPEDAS group - re-writing client for latest version
- need to get feedback from CCMC on their server?
- Bob: working on generic HAPI front-end server to manage HAPI requests; if a provider has a command-line way to stream data, it can be connected to the front end to make data available via HAPI; updates in a few weeks; (this would be run on existing servers at the provider site); includes validation mechanism internally
Next telecon is Jan. 22.
2017-12-182017-12-18
Action items:
- Jon: Draft note for SPA email newsletter. Request for comments on HAPI 2.0.0; emphasize good lowest common denominator
- Aaron: start talking with ESA; get names of telecon people
- Todd, Jeremy, Jon: get listserv email set up at hapi-server.org; Todd will look
- all: keep working on implmentations
- Bobby: send AGU notes
- all: what standards group to join or become: SPASE, Apache, IVOA, COSPAR
Request:
- Nand wants someone to check the JSON output of his CDAWeb server; Bob says the verifier will eventually do a cross comparison between the CSV and JSON and binary data
Discussion:
Topic 1: how to capture start and stop times
Write-up proposals for handling start and stop times: option 1: reserved keywords for the start time column and stop time column option 2: keywords that refer to the names of the start time column and stop time columns option 3: delta +/-; use units on the column to capture a duration
suggestions: accumulationTimeStart accumulationTimeStop
accumulationStartReference accumulationStopReference
accumulationStartTimes -> name start time column accumulationStopTimes -> name of stop time column
measurementStartTimes measurementStopTimes
Topic 2: what about extended request keywords? lots of issues: in capabilities (server-wide) or in info (dataset specific)?
Need a document to capture topics we've discussed and not put in the spec, but need to remember.
next meeting: Jan. 8
2017-11-272017-11-27
- Bernie demonstrated a way for servers to indicate that data has not changed since last requested; servers emit a last-modified header value, and clients and include a if-modified-since header, to which servers can give a 403 "Not Modified" if nothing has changed; this is harder for a service-based approach, since these header values are supposed to relate to the actuat content of the response (rather than the underlying data used to construct the response).
- There is already an optional attribute in the HAPI info header for
modificationDateand clients can look at this and just not issue a request for data if nothing has changed (rhather than issue a request and look for the 403)
- It would take a lot of work for all servers to implement an accurate
modificationDatesince there could be a lot of granules to examine; for static datasets, it is easier since it does not change
- So for now, we will not make any changes to the spec.
AGU plans - still need to choose a night for the HAPI dinner - Wed. is current winner on doodle poll
2017-11-202017-11-20
- update spec: error if you mix date format within an info header
- next week: Bernie illustrates last-modified in info header or catalog?
2017-11-132017-11-13
action items:
- review Bob's list of 1.0 to 2.0 changes (Jon)
- add example to clarify the single string or array of strings for parameter units and labels (Jon)
- update the spec document to clarify what the data stream should look like for 2D arrays when streaming JSON formatted data; the JSON mechanism of arrays of arrays is what the spec calls for
- look into mailing list options (Jon and Jeremy)
- keep working on implementations (everyone)
2017-11-062017-11-06
Bob showed a simplified version of the website that removed duplicate info on the GitHub developer page and the GitHub Pages web site page. He's attempting to link index.md to README.md to go even farther in avoiding duplication.
We still need a novice friendly landing page at
We reviewed modification to the
units and
label attributes within the Parameter definition in the spec. They need some tweaks:
- add to each "In the later case," to clarify about array values.
- instead of referring to the one unit or label string as a scalar, just call it "a single string" since scalar sounds too numeric
Lots of discussion about Extensions to HAPI - it is captured here as we discussed it.
maybe have an area where new endpoints can appear:
- this could serve as both "extensions" and "experimental" in that people can try out new things
Doug: dap2 - does not define extensions; it has simple query mechanism for index-based selection of data
in the CAPABILITIES description, need to capture the fact that the extension exist:
"extensions": [ "average", "decimate" ]
Or, maybe we define some higher level functionality as part of the spec (for the
data endpoint), and just make it optional.
"options": [ { "data": ["average","filter", "interpolate"] } ]
Bob: needs examples to help us see how it works: easy one would be decimation (only include every Nth point)
Lots of different ideas:
- this does not work well since you will want to do more than decimate - it needs to be a request parameter
- Doug: could use function syntax: id(ACE_MAG)&stride(10)&average(60)
- this is similar enough to regular request syntax that it is probably better to stick with one syntax
For constraints on data, recall that we are using time.min and time.max with an eye for extending this to data¶m.min=X¶m.max=Y
We could have users stuff all their extended capability into one additional parameter (with CSV function calls with parameters to the functions)
Most people liked having extension right on the data endpoint, but with the
x_ prefix to indicate they are extensions and experimental.
- These could be advertised in the
capabilitiesendpoint like this:
"extensions": [ { "data": { "name": "x_UIOWA_average", "description": "mid-western averages", "URL": "" }, "x_stride" : {} } ]
Todd: we are talking about two things:
- additional processing done by the data endpoint (averaging, etc)
- different endpoints (listing coverage intervals for a dataset)
Aaron: maybe moving too fast with extensions - let's get a solid base working first
Nand had a question about mixed time resolution - he's going to ask it via email.
Add a SUPPORT email link to the main HAPI page!
- try to use GitHub mechanism for listserv to keep track of asked questions
- we should use the hapi-server.org domain for listserv options
2017-10-302017-10-30
The web site is finally transitioned to show version 2.0 as the latest version. Note that this version was finalized a while ago.
The issue of mixed units was discussed again. With Todd present, we revisited the use of unitsArray and labelArray, and have decided not to add those attributes. Instead the
units attributes (which is required) and the
label attribute (optional) will be allowed to have two meanings. A scalar value must be used for a scalar parameter, but for array parameters, you can use either a scalar or an array. The scalar means that all array elements have the same units, and the array means you have to specify a units value for each element in the array (so the array must have the same shape as given by the
size attribute). The spec will be updated so people can see if they like that. This is also very backwards compatible.
Jeremy said the regular expression he mentioned in issue #54 (which some people tried and did not work) does indeed have a problem (with interpreting colons?) and he's looking into it.
CCMC attendees: Chiu Weygand and (I think?) Richard Mullinix
- they showed the beginnings of a HAPI server for ISWA data at the CCMC, namely this one:
- it is online here:
- but note that it is a work in progress and does not fully support the spec
Questions from the discussion with the CCMC people:
- what about extensions to the API? they had additional filters they wanted to allow; we mentioned the possibility of defining how people could add extensions, and then having a suggested set of optional extensions as part of the spec; it would take another working group or a dedicated effort to clarify this
- time parsing was more difficult for them - this might end up being a common difficulty, so we should think about providing time parsing libraries in multiple languages
- they wanted to know about subsetting the catalog and how to arrange their server URLs
We will try to have a HAPI dinner at the AGU on Tuesday, Wednesday or Thursday night. Doodle poll will be taken soon.
actions:
- Jon: update dev spec with new definitions of parameter attributes
unitsand
label
- Jon: Doodle poll for AGU dinner
- Jon and Bob: figure out how best to arrange the main GitHub site and GitHub Pages site to avoid duplication
2017-10-232017-10-23
discussion about mixed units for arrays: we decided to try a
unitsArray attribute on parameters to capture different units for each array dimension
also decided to add an optional
label attribute for parameters, with a corresponding
labelArray
Jeremy has new regular expressions for checking date format compliance - see issue #54
Add Jeremy's regular expressions (for Java (uses named field) and others) to validate allowed ISO8601 date formats.
Client and Server updates:
- any 2.0 servers? not yet
- ask Nand about status of CDAWeb HAPI server (Aaron)
- alternate CDAWeb approach: Bob's server
- datashop - eventaully get Cassini APL data
- Iowa HAPI server - Chris has it in non-public beta
- CCMC - still working on it
- SPEDAS - aware of and interested in; not urget yet?
- idl client - update from Scott imminent
2017-10-092017-10-09
a. implementation status
Chris Piker has the current spec worked into UIowa's das2 server and JEremy has questions about CSV from him:
Question: why NaN for CSV fill?
Answer: keeps it consistent with binary
Question: why no comments allowed in CSV?
Answer: makes readers more complex and slow
Question: how to handle progress info between client and server?
Possible Answers: two-way communication? use multiple connections to the server, one of which is for tracking progress; maybe see web-workers mechanism;
a clever option: track rough progress using the time tags in the data, since the overall time range is known!
Question: How well defined is the CSV spec? Answer: not sure what we decided on this; Jeremy was going to look at cleaning it up?
b. Todd mentioned on the SPASE call last week about the PDS/PPI plans for HAPI servers
c. Aaron is hoping to have an HDMC meeting at some point to solidify plans
d. the Github web site has still not been changed
e. I heard back from Daniel Heynderickx, who works with data servers at ESA and wants to use HAPI
f. update from Doug Lindholm: LASP white paper sent to Aaron; Lattice extensions to implement the HAPI spec; also, a HAPI client reader implementation so Lattice could ingest data form other HAPI servers and re-serve it via a Lattice API
g. Jeremy reports that the SPEDAS group looking at Scott Boardsen's IDL implementation
he's hoping to convince them to expose data that's been read via SPEDAS through an IDL HAPI server (so Autoplot could read it from the server); MMS has LEvel 2 products only available via the IDL routines in SPEDAS
2017-10-022017-10-02
add section numbers to TOC?
next meeting: Monday, Oct 9, 1pm: status of implementations
2017-09-272017-09-27
call with Jon V and Bob Weigel
We are planning on redoing the web site to make it more coherent for visitors. Landing page not be the Github page, but just the README.md, and modify the README to have not hyperlink to a release, but just to the markdown and the PDF and HTML, as well as to the JSON schema.
Use GitHub pages mechanism for the web site, possibly using Jeremy's domain "hapi-server.org" so that this points to the README.
Get rid of the "versions" directory (in the structure branch) using a more flat arrangement.
Not expose Github tags to people, since that would lead them to download the whole repository (with all older versions of the spec).
2017-09-252017-09-25
The SPASE group has been told about our preferred way to indicate the availability of a HAPI server within a SPASE record. There can just be an AccessURL pointing to the "info" endpoint for a particular dataset.
Bob showed the Matlab and Python clients he has.
Action items:
- Jon:
- rename current development version to release version
- add updated Table of Contents
- release version 2.0
- Bob:
- fix problem with JSON schema (centers and/or ranges)
- look over the file arrangement before 2.0 is released
- update the verifier to the latest spec (use a separate branch of the verifier code for each version?)
In subsequently looking over the HAPI specification Github page,' I think we need to prepare it for long-term stability with multiple releases. The standard approach is to have one directory for each release, and then have a landing page that points to the most recent release, as well as the development version.
Jon is setting up a separate telecon later this week to propose, tweak, and settle on a directory arrangement scheme for this and subsequent releases.
2017-09-112017-09-11
How to incorporate HAPI URL into SPASE?
- Give an info URL like this let software figure out how to parse it
- Just give a URL to the top of the HAPI server, and assume the SPASE ID (product key) is the dataset name in HAPI
- Give the URL to the top of the HAPI server, but also give the HAPI dataset name (in case HAPI data server names things differently)
- What about a data request?
Nand's request: need clarifying use case.
Two other Nand suggestions:
- We should always provide the header; original reason was to be able concatenate subsequent requests; value of always having header is that data self-identifies when you save it. Discussion: communicating just the numbers is sometimes useful; the API already emphasizes a division between the header and the data; importing just the numbers with no header might be important (in Excel, for instance, or IDL using its CSVread mechanism); Conclusion: keep the option to leave off the header
- Precision in general and about time values. Conclusion: let the server decide. Good practice is to limit the output to the precision you (the server) actually have.
2017-06-272017-06-27
Telecon notesTelecon notes
- issue 51: should time column have required name "Time"? -- decided not to require this, but to add to spec a clarification on the importance of having an appropriately names time column (don't leave the time column name as Unix Milliseconds when you changed it to be UTC to meet the HAPI spec)
- issue 40: why only string values for fill? -- decision is that it is OK to require fill values be strings; the problem is that JSON does not enforce double precision to be 8-byte IEEE floating point, so we can't rely on JavaScript or the JSON interpreter to convert the fill value ASCII into a proper numeric value; thus, we will just leave it as a string and the programming language on the client will need to do the conversion
- issue 42: what about a request for specific parameters that is somehow empty? -- decision: treat this as an error condition; in fact this is generically an error: any optional request parameter, if present, must also have a value associated with it; since it was optional, its presence then requires a value
- issue 46: need to clarify about the length of strings and how to use null termination in a string; the spec currently does not capture what we wanted to say; the null terminator is needed only in binary output, and only when the binary content of the string data ends before filling up the required number of bytes for that element in the binary record; so the length should NOT include any space for a null terminator; if you fill up the entire number of bytes with the string, there is no need for the terminator; if you are less than the number of bytes, then you do use a null, with arbitrary byte content padding to the required length
- issue 49: time precision -- change spec to say that the server should emit whatever time resolution is appropriate for the data being served; servers should be able to interpret input times also down to a resolution that makes sense for the data; any resolution more fine that what the server handles should result in a "start time cannot equal stop time error"; the precision the clients can handle is outside the scope of the spec, so users concerned about high time resolution should be aware of any restrictions of the clients they use.
- email notifications: just use a listserv at APL for people who want notification of any change to the hapi-specification repository (not just issues); so far, this will be: Bob, Todd, Jeremy, Jon; no need for more complex scheme using pull requests with branching and merging (the complexity of that is warranted only with larger source code projects)
Bob's UpdatesBob's Updates
- MATLAB client: **
hapi.mis feature complete from my perspective except for some minor changes for the binary read code. **
hapiplot.mis feature complete from my perspective. **
hapi.mand
hapiplot.mwork using data from four different HAPI servers ** Neither of the scripts have been systematically tested on invalid HAPI responses. Common errors are caught and other errors generally lead to exceptions. This could be improved and we'll probably add code to catch errors as we find them.
- Python client: **
hapi.pyis feature complete from my perspective. It handles CSV and binary. **
hapiplot.pyhas far fewer features than
hapiplot.m. I am now certain that I don't like
matplotlib. ** Both scripts work on
dataset1from, which includes many possible types of parameters. I have not tested on data from Jon's, Nand's, and Jeremy's server.
- There are some issues that we'll need to discuss about the clients that are related to whether there is a difference between a parameter has no
sizevs.
size=[1]. See also a question about size on the issue tracker.
- Verifier ** Mostly feature complete and I still need to post the schema that I am using at I have a few questions for Todd about encoding conditional requirements. ** I added a few new checks and emailed Jeremy, Nand, and Jon warning them to expect new errors and warnings.
- Issues ** Hopefully I am done posting issues and questions ...
- Specification document ** I made several editorial changes to the HAPI-1.2-dev document
- Outreach ** Tried to do a phone call with Redmon last week. Will try again next week as I am out after Wed of this week. ** Looked of SpacePy and figured I would wait till
hapi.pywas complete before I emailed Morley. Will email him next week.
2017-06-062017-06-06
Discussion 1: clarity needed for multi-dimensional data when one or more dimensions does not have any 'bins' associated with it; right now, the spec pretty much says you have to have bins for all dimensions;
We settled on adding a single line to the spec: If a dimension does not represent binned data, this dimension must still be present in the 'bins' array for each dimension, but that dimension should have '"centers": null' to indicate the lack of binned elements.
Discussion 2: we need a place on the wiki to describe a common set of routines and calling parameters so that all the scripting languages can use the same names for the various types of calls.
Progress on action items from last time:
- Scott added the IDL client to GitHub
- Jeremy started a Java checking client
- most of the people to be contacted have not been yet - we need more to advertise first...
Bob created basic Python and Matlab clients, creating areas for them at the top level of GitHub; these are ready for others to mess around with and add/augment as a kind of joint development.
Jeremy has a Java API checking app (verifier) also at the top level in GitHub, and also open for joint development.
Action Items:
- Bob: email several people to ask about their interest in and potential use of the HAPI spec for their data serving interface
- Bob: still working on basic Matlab HAPI client
- Bob: email SpacePy people about HAPI client development and status of SpacePy
- Jeremy: work on rudimentary server checking mechanism
- Jon: add code to the verifier and see how it could be migrated to be or at least use a generic Java client
- Bernie, Bobby, Nand - CDAWeb server is progressing but not done
- Jon, Todd - start a collaborative effort to create a Python client in association with the SpacePy people
- Jon: waiting to hear back from Daniel Heynderickx about newly release version 1.1
- Aaron: update the CCMC people with news about version 1.1
2017-05-232017-05-23
topics discussed:
public versus private data served by HAPI: we won't make usernames and passwords part of the spec, but will have a part of the wiki devoted to implementation guidelines, where we can describe how to best serve data that has both private and public regimes.
Issue: citations - data providers will not like that HAPI obscures the source of the data; data providers won't get credit for serving their data, the won't know who is using it, and the appropriate reference wont' get cited Temporary resolution: will add an official issue to capture the need to address this concern; for now, the SPASE record that a HAPI dataset can point to can contain a citation; ultimately, it would be great to have a DOI associated with each HAPI dataset. Also, the resourceURL or the resourceID (or both) can serve as substitutes for a more robust citation.
how many different server implementations are needed? The only viable way for lots of data to stay accessible through HAPI is if the providers who install the HAPI servers also maintain them. Unused services will fall into disrepair (liek OPeNDAP services at CDAWeb, which got little use.)
Instead of creating an implementation that anyone can use (via a possibly hard-to-design interview process), maybe we focus on getting key providers to have an implementation, and we focus our energy and funding on a team that can help them understand the spec and get a sustainable HAPI installation going.
Multiple groups are working on servers that could be installed by 3rd party users, so this would give users a choice of HAPI server implementations.
We listed organizations that we hope would be interested in providing this kind of common access via a HAPI mechanism:
- CCMC/iSWA
- NOAA - National Weather Service (older:SWPC); Howard Singer
- NGDC -> NCEI (Spyder, now retired; Rob Redmond potentially interested)
- USGS (Jeff Love)
- Madrigal (MIT/Haystack)
- CDAWeb - Nand Lal working on updating his server to HAPI 1.1
- other SPDF data
- PDS PPI Node (Todd King)
- LASP (Doug Lindholm) LISIRD2 / Lattice Evolution to 3rd party use
- GMU / ViRBO / TSDS
- Univ. of Iowa - Heliophysics and planetary missions
- APL - Heliophysics and planetary missions
- SuperMAG
- other ground-based magnetometers
- European groups: VESPA, AMDA (Baptiste Cecconi), other ESA projects (Daniel Heynderickx)
- software/tool providers:
- SpacePy - Steve Morely, also John Niehof
- SPEDAS - Vassilis Angelopoulus (?)
For now, we will focus on working with the set of these groups that are more internal (to the existing HAPI community), such as PDS, CDAWeb, LASP, and CCMC. After we hace some success here, we can branch out to groups like NOAA, USGS, SuperMAG, and the Europeans.
Also, we need client libraries first, before HAPI becomes a compelling option, so several people will start working on those.
Action Items:
- Bob: email several people to ask about their interest in and potential use of the HAPI spec for their data serving interface
- Bob: work on basic Matlab HAPI client
- Bob: email SpacePy people about HAPI client development and status of SpacePy
- Jeremy: work on rudimentary server checking mechanism
- Bernie, Bobby: report back with status from Nand about his updating of the CDAWeb HAPI server to meet the 1.1 spec
- Jon, Todd - start a collaborative effort to create a Python client in association with the SpacePy people
- Jon: email Daniel Heynderickx about newly release version 1.1
- Aaron: update the CCMC people with news about version 1.1
- Scott: commit IDL client to Github area
2017-05-162017-05-16
Agenda:
- final review of changes to HAPI spec document for version 1.1 release
- discussion about implementation activities based on the distributed list of proposed activities
topics discussed:
review of recent edits of spec by Todd, Bob, Jeremy, Jon
new domain hapi-server.org available for examples; Jeremy to make our example links live soon (tonight?)
Question: should we allow HAPI servers to have additional endpoints beyond the 4 required ones in the spec?
Todd: no - put them under another root url outside the hapi/ endpoints.
Bob, Jeremy, Jon : yes, but put in separate namespace under hapi/ext/ or with specified prefix (like underscore)
Answer for now: punt and push this to future version; might be good idea to allow extensions, but we need to figure out how to allow servers to advertise their extensions - it needs to be in the capabilities endpoint. Also, we need to think more about implications. We have a pretty controlled namespace now, so we don't want to dilute that. Silence in the spec for now means people will hopefully realize they are in exploratory territory.
release of new spec! now at Version 1.1.0; tag is v1.1, name is Version 1.1.0
discussion about https: we'll need to address this in the spec at some point
re-arrangement of top level documents:
- move spec to something else besides the README.md
- describe all files in the README.md including the recent versions
- for now, indicate in the README.md where to find the stable release versions
Action Items
- Todd: create PDF stamp of version 1.1.0 and put in repository
- Jon and others?: update main spec document to indicate that the live version at the tip of the master branch - list of released versions; probably use a different name for the key spec document and put more general explanation in the README.md
- Jon: issues to add:
- extension to endpoints
- supporting for https; Let's Encrypt offers free certificates
- Jon: create wiki to keep track of longer running issues, like the activities document or telecon notes
- Jon: close out old issues related to release of version 1.1
- all: consider our set of next key activities: creating personal servers, creating drop-in servers for other people, making lots of data available, creating clients in multiple languages, lists/registries of HAPI servers, integration with SPASE | https://github-wiki-see.page/m/hapi-server/data-specification/wiki/telecon-notes | CC-MAIN-2022-27 | refinedweb | 16,724 | 50.7 |
Slashback: Arch, Bubbles, Keystrokes 112
This research could still lead to new and powerful sink cleansers. mrsalty writes "A topic of brief and skeptical discussion back in april, Sonoluminescence as a fusion catalyst seems to be circling the drain. According to this BBC News article, new research shows that the collapsing bubbles' temperatures fall a bit short of that needed for fusion. A bit in this case being a few million degrees."
Discretion is sometimes the better part of avoiding attention. stinky wizzleteats writes: "Looks like OddTodd got off on charges that he defrauded the State of New York by starting (Laid Off Land) while receiving unemployment payments. I didn't know he was only getting 67% of the take (his provider was getting the rest, which sort of explains why the site didn't get /.ed when the first story about him was run."
Try explaining this one to your parents. Earlier this year, we posted about Project Dolphin, an effort to measure the number of keystrokes you make as you IRC, email, program, whatever. Now, Wes N. a.k.a c3 writes with a largish update from the project's homepage, excerpting:
To this end, Dolphin has found itself its own dedicated server that serves as a home that is now (finally) suitably equipped to handle the growth we want to see, and fully expect. Previous participants will notice that this site itself has been fully redesigned and revamped toward a more professional look, while remaining commercial free in the original spirit of the project.
At the very core, this is a research project for its designers. It's made by geeks and it's made for geeks. The positive feedback received over the last few months since its initial launch has ensured that it will continue along it's current path of growth in the spirit of fun and experimentation for the forseeable future. (end from website) The new version of project-dolphin's Pulse is due to come out any time now. The new version is supposed to have a few bug fixes and how loads of new features. to check how the progress is coming along check out The development website some of the new features include . Typing Activity tab, Keystroke Frequencies chart , and alot of other neat stuff check it out on the website or goto irc.project-dolphin.net #projectdolphin on IRC."
"Arch" is adjective, verb and noun in one. When it comes to replacing CVS, Subversion is not the only game in town. We posted in May about the even-more-ambitious arch revision control system. Now, bshanks writes: "Tom Lord, the author of the revolutionary arch revision control system (slashdot article here), needs some monetary help."
A lucky SOB :) (Score:4, Interesting)
Keystokecounter! (Score:4, Funny)
Re:Keystokecounter! (Score:1)
I've been posting on Slashdot all day it it has barely moved! Oh, wait- it's moving now... !
Oh I get it, I have to actually be typing my posts instead of copying and pasting them!
----
Any website that uses the phrase "a simple 30KV power supply" is okay in my book.
Re:Keystokecounter! (Score:3, Funny)
(mad props to the time-phone-lady)
Re:Keystokecounter! (Score:2)
That is odd, for me it was 30.
Linux kernel keystroke counter hack (Score:2, Interesting)
Re:Linux kernel keystroke counter hack (Score:1)
Re:Linux kernel keystroke counter hack (Score:1)
Linux kernel pixel counter hack? (Score:1)
(Remember that if the number of keystrokes is globally visible on a machine, any program/user on that machine may be able to read what you are typing. The timing of keystrokes can be cryptographically attacked to produce the typed text.)
About the cold fusion claims. (Score:4, Funny)
Hands up if you saw this coming from the start.
Re:About the cold fusion claims. (Score:2, Funny)
Still, would have been cool.
Re:About the cold fusion claims. (Score:2, Funny)
Re:About the cold fusion claims. (Score:5, Funny)
Re:uhhh........ no, more like (Score:2)
Just wait till the next version of PULSE (Score:1)
Re:Just wait till the next version of PULSE (Score:1)
Yeah and if you want to view my stats...... Go here GRAPH HISTORY [project-dolphin.net]
Or here: My Main Stats page [project-dolphin.net]
I KICK BUTT!!!
About the keystroke counting (Score:5, Funny)
This person has been typing an average of 305 keystrokes a minute since May 31.
THAT'S 5.1 KEYSTROKES A SECOND, NON STOP, FOR TWO MONTHS.
And you thought that you didn't have a life.
Re:About the keystroke counting (Score:2)
Re:About the keystroke counting (Score:4, Funny)
> This person has been typing an average of 305 keystrokes a minute since May 31.
> THAT'S 5.1 KEYSTROKES A SECOND, NON STOP, FOR TWO MONTHS.
> And you thought that you didn't have a life.
Actually, most of that time was spenth replying to slashdot posts. I copy the entire message by hand first before I begin to write. You'd be amazed what you're capable of when you don't try to spellcheck, grammar check, or anything-proof your typing.
-magictiti
It's all in the wrist.
Re:About the keystroke counting (Score:2, Insightful)
Here's my card, magicman (Score:1)
If you never get RSI, you got my blessing and I say keep on truckin'!
Peace.
Re:About the keystroke counting (Score:3, Funny)
How many times have you been asked what's so magic? =)
EASY ANSWER! (Score:1)
You didn't think the Palladium kernel would be smaller than 140GB, did you?
Re:About the keystroke counting (Score:1)
I held a key down for 5 seconds and got 113 characters, so I assume magictiti didn't have the stuck key all the time.
Maybe her titi is magic because it hangs over her keyboard.
Re:About the keystroke counting (Score:1)
Or it may be Quake.
_w_
asd
I suspect a lot of those keystrokes were WWWWWWWWWWWWWWW wwwwwwwwwwwwww WWWWWWWD ddddddddd WWWDWDWDWDWDWDWD SSSSSSSSSSS
From the Pulse User's guide. (Score:2, Funny)
This warning is stated only half in jest."
They were right. RIGHT, I tells you! I just...can't...stop...pressing...the...buttons...
. AAAH!GHGH!
If you think that's bad... (Score:3, Funny)
I Have To Hand It To Slashdot... (Score:4, Interesting)
Could this perhaps be stealth R&D for SourceForge 4.0, which might perhaps act as a front-end for all types of source maintenance tools? Given VA's past record, they're not apt to be that savvy. Perhaps Taco et. al. are just trying to convince upper management that they need to do something Real Soon Now. Perhaps they desire to have said higher powers become so disgusted with
/. that they will decide to sell it to someone like Salon or NYT so that the editorial staff can finally become real journalists like they always wanted.
Re:I Have To Hand It To Slashdot... (Score:4, Interesting)
First, CVS is quite limited to what can be done with it. A lot of third party tools like PVCS and ClearCase provide a lot of GUI enhancements that make working with the products easier. And other systems, like Perforce and Bitkeeper really give developers a lot of control over concurrent development. CVS was a great idea to RCS, but now, it rates right up their with Visual Source Safe.
We looked at using SourceForge where I work. Basically, since we didn't use CVS or have a need for the mailing list features, we only saw value to the bug tracker, task list and document section. The document section of SourceForge is very simple so it didn't buy us anything more than posting pages in HTML. Why would we pay for it. Now, if they could develop nice web based stuff like tinderbox for different source control systems, we would buy it hands down.
Re:I Have To Hand It To Slashdot... (Score:3, Interesting)
I only mention this because we just got SFEE here at work and man, it rocks!
We would have used the crippleware^w free release available, but we couldn't even make it render the front page of our demo site.
Re:I Have To Hand It To Slashdot... (Score:2)
Re:I Have To Hand It To Slashdot... (Score:2)
So yes, the deal we got was definitely worth it.
Re:I Have To Hand It To Slashdot... (Score:2)
Also we have a lot of windows only programmers and they get freaked by command line tools sometimes what are you using for CVS integration?
PVCS GUI enhancements (Score:2)
If salon bought it... (Score:1)
bubbles? (Score:1)
Re:bubbles and fake results. (Score:1)
Advice for Tom Lord (Score:4, Funny)
He should just make a website [oddtodd.com] or something.
Steve
Done (Score:1)
Little Timmy (Score:5, Funny)
Re:Little Timmy (Score:2)
Re:Little Timmy (Score:2)
Doing the math (Score:2)
In case anyone was wondering, assuming you leave a book lying on the keyboard and a repeat rate of 20 characters per second, that works out to one cent every 1584.4 years.
-
Re:Doing the math (Score:1)
In case anyone was wondering, assuming you leave a book lying on the keyboard and a repeat rate of 20 characters per second, that works out to one cent every 1584.4 years.
It's actually worse than you say. You're off by a factor of 10: 15,854 years %
20keys/sec * 60sec/min * 60min/hr * 24 hrs/day * 365 days/year = 630,720,000 keystrokes/year.
1.0E-15 * 6.3072E8 = 6.3072E-7 $s/year
We're looking for %, so use 6.3072E-5 instead.
(6.3072E-5)^-1 = 15,854.896 years %.
Of course, that number is just an appox and is a bit off anyway due to the leap years and other adjustments done every so often, but it's still almost 16.0 thousand years, as opposed to 1.6 thousand.
Re:Doing the math (Score:2)
Drats. Yeah, I miscounted the decimal place.
bit off anyway due to the leap years
I generally type 365.24 days per year into the calculator without thinking. 0.25 for the one day every four years, but -0.01 for the leapyear you skip every hundred years. But then there's the leap year you DON'T skip every 400 years, so I guess I should really be using 365.2425.
15,844 years 135 days for one cent. Give or take a day or two. Unless, of course, your key repeat rate doesn't happen to be running off of an atomic clock
-
Arch: support bash tab completion or no one will u (Score:1)
Bash will never, ever change to support tab completion with '{' characters so what Tom seems to be saying is that Arch will never support bash. Which seems to tell me that he should pack it in and stop wasting his time.
Re:Arch: support bash tab completion or no one wil (Score:1)
Re:Arch: support bash tab completion or no one wil (Score:2)
thanks for the press, slashdot (Score:5, Informative)
I am grateful to supporters for the purchases and contributions received so far.
I'm still a rather far from having enough to stay on-line, but the contributions so far suggest that there is a chance.
The problems faced by arch aren't unique. Whenever I've talked to those more senior engineers who are my friends and who have lots of "open source" involvement, they say "We're hearing this same sad story from a large number of very talented hackers.".
The botttom line: please do contribute to arch. It really is a fiscal emergency and your support is much appreciated. But in addition to sending support, please also send a short, polite note to your favorite budgeted manager or exec at an open source or free software friendly company. Point out to them that you are doing their job and spending money in a way that will benefit them. Ask them to be more proactive in supporting free software researchers, including working on their host organizations to establish some winning policies in this regard.
Re:thanks for the press, slashdot (Score:5, Interesting)
We've been taken to task here and elsewhere for not making BitKeeper be open source. This is a reasonable opportunity for us to explain why we haven't done so.
Tom's managed to raise $10K this year in support of all of his fine projects, arch being only one of them. We're not trying to do everything he is doing, all we do is source management. The problem is that we spend $10K every day or so in salaries. And we are dramatically understaffed compared to any other SCM company, when they figure out how small our engineering staff is they are amazed that we are able to do what we do.
The reality is that we should be at more like $100K per day in salaries to really have a good product. The problem is that all you lovely slashdot folks want to get everything for free. And you'll insist on it if you can get away with it. Given that the SCM market is so small, the only way to get the money for the salaries is if you have a product which is based on IP and requires people to pay for it. Face it, if we gave BitKeeper away for free but asked you to support us with "donations" not one of you would do so. Remember, Tom is a really bright guy doing really nice work and he's managed to gather all of $10K this year. Which we spend in a day or two. And we're also really bright people doing really nice work, but that doesn't mean you'll give us or him money.
The point is that certain market spaces simply don't work based on the tradition open source support style model. That model works great for things where there is a huge market and the product is broken so you can ask for support and people will want to pay for it. That model fails completely if you ever provide a product which works. It also fails if the market is small.
The point is that if you want Arch to succeed, encourage Tom to make it a closed source product and get some funding and create a business. Anything less is a joke in poor taste. It's great to imagine that you'll get all your problems solved for free, but that's just not going to happen.
It's not what you want to hear but I can't help that..
Re:thanks for the press, slashdot (Score:2)
Seems Tom is a wee bit more efficient. That's your problem.
And we are dramatically understaffed compared to any other SCM company, when they figure out how small our engineering staff is they are amazed that we are able to do what we do.
So Tom is doubly amazing!.
You're making the assumption that a good SCM can't be developed for less than $10k (or is it $100k?) a day. Subversion, Arch and OpenCM are proving you wrong. Sometimes one or a few really good developers working for next to nothing are better than a companyful of developer seats.
Re:thanks for the press, slashdot (Score:3, Insightful)
Who's efficient again? Next time, make sure there's an actual release-quality project available before even attempting to make this argument. Subversion is the closest of the bunch, and even then it'll probably be 1-2 years before they are as polished as CVS or BitKeeper.
Sometimes one or a few really good developers working for next to nothing are better than a companyful of developer seats.
Sometimes? Sure. But those developers still have to eat. Assuming that Tom manages to keep up his current level of funding, he'll make all of $20k this year. Wow. Isn't that awfully close to the poverty line for a family of four? Tell us again why he should be doing this for free?
I like the OS movement, I use Unix daily, I use open source products, libraries, and tools daily and deeply appreciate the work and time that goes into them. But whenever someone such as yourself spouts off utter economic bullshit about OS being so much better than commercial (whether totally closed or partially closed), it just proves yet again how little of a fucking clue a lot of people on
Re:thanks for the press, slashdot (Score:1)
That ain't the CEO of BitMover
What is the difference between trol and Flamebait?
Re:thanks for the press, slashdot (Score:1)
(my apologies if your post was just a troll, I'm rather dense sometimes)
free as in freedom vs free as in beer (Score:2)
But at some point and in some markets it's just not good enough. And you end up having no alternative than a propietary product.
I believe the problem is that distributed is fine, but distributed funding is a complete mess full of duplication and high costs. It also poses the free-riding problem.
We should all pay $20 a year to a single entity and have the change to distribute that money among projects. Companies should contribute more. Independant developers would be automatically paid $20 perpetually and automatically used every year to secure their membership.
It'll still save us hundred of dollars a year and the sources will be ours to look, modify an redistribute. Money would go where our mouth is.
That would mean killing free as in beer to save our free as in freedom. And cheap beer it will be. If you paid you $20, you'll have access to all the OSS in the world.
But the OSS comunity sees this kind of proposals as trolling of flamebaits or maybe just stupid ideas proved wrong.
Asuming 100 milion users around the globe paying $20, this would pump 2 billion bucks a year for your apache server, your mplayer media player, your favorite gui, that app that needs be done asap and your loved card's enhanced drivers. This money'd go directly to developers salaries: no marketing expenses. Arguing we are better of paying nothing is just nonsense. Because for $20 you'll get a 100 millions greater value. You may think OSS already works fine: it does! But that doesn't mean it's not completely underfunded. We need more of this, not less or even the same.
This would be Democracy at it's best. But for that to happen we need to change the say we see open source. It should be "almost free as in almost free beer, free as in freedom, and free as in I can use it, modify it and distribute it".
Re:free as in freedom vs free as in beer (Score:2)
Wrong figure. You'll get 100 million greater value times this year. As time passes by, your boost will be 100 million x years since model switch.
And probably, in 10 years, you'll have like 700 or more contributors.
Commercial paradigm vs Free Software (Score:3, Interesting)
FS/OS has a totally different model. It certainly needs funding, because pgmrs gotta have their Twinkies&Jolt [or is that now Carob&Gingsing?]
:)
This funding comes _internal_ to the organization or individual.
They have a burning need for the code, so will fund it's creation.
This burning need drives the code creation, not some prospective
market. It is very likely that the code will meet the need [ROI]
-- not always the case in the commercial market.
The tricky bit with FS/OS becomes what to do with the code. The code [or more likely embedded data] might be so valuable that it is a competitive advantage. This code will never be licenced and guarded like the crown jewels. The code may be so duplicatable that you might as well give it away for the goodwill. Or now, thanks to Mr Gates, some managers will consider trying to sell the code. This usually proves awkward, since the producing entity usually looks more like a customer than a saleman, and will need all sorts of new functions.
The FS/OS model breaks down when there is no burning need, when the code becomes the crown jewels, or when people see no goodwill in publishing. I would have said that FS/OS isn't good for large GUI bloatware because no-one has that kind of burning need. But the existence of both Gnome and KDE proves that the World is a big place, and people have all sorts of needs and motivations.
In the specific case of SCM software, I would expect that a large organization that writes lots of software would have "the burning need". IBM, NASA, RedHat, the USDoD, MS, Oracle, SAP, CA, and looser organizations around Linux and *BSD come to mind. Many of these probably already have SCM in the "crown jewels" category, and the commercial software houses certainly aren't about to release code -- they're all about selling it. IBM might release code, and RedHat certainly would. I wouldn't be at all surprised to see RedHat fund `arch`. Patronage is not ignoble.
Re:thanks for the press, slashdot (Score:2)
You are expectiong flak, I'm sure. You wont be disapointed.
Seems you don't get business, or open source.
Youre problem and Tom's are very different, and both due to scale, and philosophy just at different ends of the spectrum.
Both are likely to fail, but for different reasons, not that either of you were right and the other wrong.
Open sourcing your project would bring you resources that you cannot afford to hire, possibly even decreasing your development cost.
To do that you must however give up some of your ownership.
Not a trade you appear to be willing to make.
Your reasons for not making the trade are as valid as a someone who is willing to make the trade.
C2Net/Stronghold (now owned by Redhat) is an example of how opensource and business works (as is Redhat).
You have, I think, a crack in your business model.
I looks like you didn't build your end game in at the begining and instead figured that the end would take care of itself.
Business isn't static, change points, and end points need to be considered before you start.
If you were to switch to open source(liberate the code), and have a seperate supported version(ala Redhat and others), and an open(free as in freedom) version that anyone can code on and improve then you can add coordination and project management staff.
Doing so could magnify the effective results of your existing programming and development staff by orders of magnitude.
Tom's problem is the reverse.. too open, too small, and die rather than sell.
The requirement is balance, not an easy thing... Your going to need a zealot/visionary that can work with a business/accountant and they BOTH have to cooperate.
Tough thing to do, very tough, but not really that different than how a business really should be run anyways.
Expanding beyond the normal customer base for feedback and improvement, and being a good neighbor to the community.
Re:thanks for the press, slashdot (Score:2)
Maybe $10K-100K/day is good for a company, but suppose a single developer could bring in just $100K/year? That's only $275/day, a modest amount, but it's enough for most developers to be able to quit their day jobs and devote their time and energy to the project. So why is even this out of reach, especially for the many open source projects that thousands or millions of people find valuable?
The problem is that all you lovely slashdot folks want to get everything for free. And you'll insist on it if you can get away with it.
This is a conundrum. Many of us would like to be able to write good software and just give it away for free. Of course, everyone loves to get free stuff also. But we also need to eat, and giving away your work doesn't pay the bills. Unfortunately, it seems that most of the options suck.
We can try to create a business around our software, but that's expensive, difficult and prone to failure. It also requires sales and marketing efforts, support staff and other overhead. All this takes quite a bit more money than a single developer needs to make a living. Of course, new businesses have a very high failure rate, so this option sucks.
We can try to build a business around open source, but we probably can't bring in much revenue from sales of the software -- someone will turn around and give it away for free and undercut the market. We can try to bring in revenue with support or services, but this is uncertain, and only some markets can truly support a company this way. Many open-source companies have had trouble with funding, and there's little incentive to create high-quality software that "just works". This option sucks.
We can take the traditional route, keeping the software closed and selling it to bring in revenue. This can work, but charging for closed code will alienate the open-source community, and possibly even motivate volunteers to compete with you by writing free software. Many potential users who might have been interested will not buy the commercial product, so the market is much smaller, even if the return (per person) is better. Also, we'd rather be able to release our code as open source if we could. This option sucks.
Most of the truly effective business models we've seen for open source businesses seem to rely on some sort of hybrid where the code is open, but the revenue comes from a proprietary source, such as large companies paying for consulting or support, or paying to use the free code in their non-free products (a la Sleepycat) or offering proprietary components to work with an underlying free system (a la Sendmail), etc. These can be effective, at least, but they often depend on someone else making money from closed software, or depend on the whims of large corporations who may make their money elsewhere. So these options can suck too, but at least they can be workable.
I think the root of the problem is human nature. We like free stuff, and we think we're getting a bargain if we get away with not paying for something, especially if we're told that's okay. So it's quite rare for us to send money to send money to authors of free projects, no matter how much we may value those projects. And even when a few of us do, it's just a drop in the bucket.
What developer can afford to quit their day job just because every week a half dozen users take it upon themselves to donate $20 when they didn't have to? Sure, it's nice. No doubt it's appreciated, but that $120/week would be barely over $6K/year, which is far too little to live on. Begging for donations just doesn't work well, whether it's a developer, a charity or public television.
On the other hand, if each of 10,000 users would pay just $10/year, that would be $100K that could support that developer and allow them to quit working a day job. Why doesn't this work? Because each of those 10,000 users will tell themselves that the other 9,999 users can pay their $10, and nobody would miss that $10. And it's true; $99,990 would be just as good as $100,000. Unfortunately, the vast majority makes this same argument, and suddenly you're back at $6K instead of $100K.
So, here's the question. How can we get the masses of regular users to pay a modest amount of money, on a regular basis, to support development of the software they want? And can we do it with the free-redistribution clauses in open-source licenses, or is it only possible if redistribution is restricted? Is it compatible with the GPL? Can we offer some other tangible selective benefit that only the paying users will benefit from, that will convince them to join up?
How about some creative responses here? You, there! Yes, you! What would convince you to chip in some money on a routine basis?
Isn't dual-licensing with the GPL perfect for this (Score:4, Insightful)
Perhaps the problem is the overinsistence on advertising the products as free software as opposed to advertising them as useful products that can be licensed, for a price, at whatever terms the buyer wishes. The problem appears similar to that solved by Sleepycat [sleepycat.com].
The claims of hackerlab and arch are that they are technically superior solutions to important subareas of computer science. This is precisely what Sleepycat claims for Berkeley DB. As a GPLed library, hackerlab already qualifies as a product that cannot be used commercially unless the distributor wishes to distribute the source code for the application under the GPL. If hackerlab really has value, that ought to be enough to pry some money to continue its support. Similar considerations should apply to arch if it was designed properly.
I really don't know why in this case the market isn't a perfect judge of the true value of this project.
No, it isn't. (Score:2, Informative)
Dual-Licensing works only with libraries, because the GPL prohibits linking between GPL and non-GPL code. However, arch is a program. You don't need to link it to your project to use it for generating your project. According to the GPL, you can use GPL'ed programs for commercial projects (e.g. using Kdevelop as IDE an arch as version control). As long as you don't use them as part of your project, everything's fine.So the difference is: When we're talking using GPL'ed stuff as tools, you can use it in any way you like. It's only when it comes to modifying or linking GPL'ed code that you get restrictions.
Re:Isn't dual-licensing with the GPL perfect for t (Score:3, Interesting)
I'd love to see a new license, that could be called the fGPL. That would be the "Funded GPL". To be able to use fGPLd programs you'll HAVE to contribute some small amount of money to the fGPL foundation. You'll not be required to pay for any individual fGPL software, just a plain simple yearly $10 or $20 charge. And you will be able to distribute exactly where that money goes, among all the different projects. If you can't pay $20 a year it will be no problem, just a bit penalty: all fGPL software would be free as in beer once the year passes (old releases).
The money paid to the developers would only cover salaries and some expenses that are needing to continue developement. So if any proyect gets over-funded, you'll be noticed that you must reasign some of your credits.
It'd always be free as in freedom. We only need to bring some beer for that to happen. It'll also kill the anti OSS argument that the system is for comunists or anti-american. I know that is FUD, but does your representatives know that? It will also kill most of the FUD targeted at OSS and will also bust developement to unknown levels.
What do we need for this to happen?
To have the Linux Kernel, the Red Hat distro, mplayer, X and gcc (for example) adopting the fGPL for the next releases. After that, we'll see most every GPLd program adopting the fGPL. After that, you'll start to see how much sense it made to pay $20 a year. And even the ones that can't pay (if any) will be able to use the software (though 1 year old, but their hardware si severla years old for sure).
This is my opinion. I'd gladly pay the $20, as long as EVERYONE ELSE pays their $20. That's why we don't see many donations now: because you have this filling everyone else is just waiting for a fool like you to contribute to project X in order to save it.
Re:Isn't dual-licensing with the GPL perfect for t (Score:3, Interesting)
I really don't know why in this case the market isn't a perfect judge of the true value of this project.
It doesn't work well for two reasons:
1 - Market price reflects value when you can exclude people from using it if they don't pay a price. In any other case it means free-riding. This is why taxes are not optional (though the problem with taxes is you don't get to choose what public goods you do fund).
2 - Distributed development and a lack of a formal structure in the organizations: "Hey, pay me some money, i promse to keep working on this project!" is not good enough. There must be some way to make sure where the money goes and that it's used for that porpuse. This may not look like a problem but it is. For example, people are bidding to open the sources to Blender. But what happens if they don't reach the 100k limit? Donations are not good enough in the sense that companies try not to donate but prefer to fund (meaning the developer just can't do whatever he likes with the money).
That's about it. The misconceptions about the "market and it's benefits" are so widespread, but not their limitations. So I felt like posting my view (which is by no means different than what an economist will tell you)
Re:Tom Lord ? (Score:1)
Re:Tom Lord ? (Score:1)
We were all young once....(some still are, I guess). I don't particularly recall why that happened, but I'd guess it was because though the (then new) regexp engine tested well on the tests available then, it was too slow on some other cases discovered after release, by users. It only took about another 10 years to get the regexp matcher right
:-) You can find the current version in the Hackerlab C library, at [regexps.com]
As others have hinted (but did not provide any details), an alternative to subversion and arch is "opencm":
Unfortunately, like subversion, opencm is still a work-in-progress, but it appears to have a lot of potential. Progress appears to be occurring at a steady, but moderate, pace.
Features:
Money and open source (Score:3, Insightful)
There's a simple solution to this dilemma, which is, don't make your products open source if you want to make money out of it. Free software is great for writing operating systems, but only Stallman has ever claimed it is the be all and end all of software development. Note that you can write open software without giving away the source, simply by documenting the file formats and protocols. I don't respect companies that don't do this anyway, as it implies that they feel they need artificial lockin to stay afloat rather than just producing quality software.
Having said all this, I have a problem with Tom Lord asking for donations, and ditto for Rob Levin with openprojects.net. There are countless open source projects in the world, many of which are very important. The Linux kernel, KDE and so on are all huge projects, yet I don't see them begging for cash. I also write open source software, but I do it in my spare time, and delegate work that I cannot handle, because my projects are by necessity non commercial. No project should be so dependant on one person that they have to work on it full time. This goes for writing source control systems, or running IRC networks. I think projects should either be non commercial, in which case you have a paying job during the day and work on it in your spare time, or you figure out a way of making money from it (ie by keeping the source closed).
I don't see any good way, or any good reason, for attempting to make money directly from donations for open source projects. BitMover has got the right idea, they are getting mindshare and free testing by giving away their product to free software developers, but charging for it for commercial operators. They've figured out a way to tread the line, but most don't.
Re:Money and open source (Score:2, Interesting)
Yes, other projects need support as well. I regard myself as attempting to:
Is there a way to make money directly for creating new Free Software? In a few cases, those of us lucky enough to get money from users, sure. In the majority of cases, in the future, I think the big companies that use open source ought to come up with funding mechanisms (and fund them!), because that's a good way for them to spend their R&D budget.
R&D Fundraising Business Models (Score:1)
Here's an update about arch and the regexps.com [regexps.com] fundraising effort.
A few days ago, I released a GPL'ed package (the monkey directory editor for Emacs) as a fundraiser: rather than post the source or put up a tar bundle or repository, I've been charging people money to send them the source.
To my surprise, that actually worked a little bit. Some people bought copies. Great!
Today I'm trying a new variation: I've mailed out (to the gnu-emacs-sources mailing list) the source for the previous version of monkey, and now I'm offering to sell (still GPL) distributions that have some new features. We'll see.
If all of this works out, one idea I'm considering is to make all of my source available in the usual way (tar bundles, revision control repositories), but to rate-limit traffic from ".com" domains and sell FTP accounts. I think this model can be adopted by many projects, if it works, and that it won't cause any serious problems for hackers sharing code with one another (they just might want to use a non-".com" address for anonymous transfers).
This "service differential for source code" model isn't perfect by some standards. It doesn't force users to pay and it doesn't force customers to spend their money wisely. On the other hand, this model reminds users to pay and implements a well-defined service that they can pay for.
If you like the idea of this model -- that's another reason to support the current fundraiser! Perhaps we can bootstrap a whole new kind of Free Software Business Model.
Re:Dolphin... (Score:2, Insightful)
What you're missing is that all of this is supposed to be fun, too. It's not all buisness, you know. One of the cooler aspects of open source projects is that while the bottom line is important, it's not all there is.
Live a little, or at least don't whine when others do.
Re:Dolphin... (Score:2)
Say - since Win2K makes a pretty nice gaming platform does that mean Win2K isn't ready for work time either? I shudder to think what that means about WinXP. Heck, OS X is out too. Lesse... there's Solaris... oh, wait... Quake ruined that. Damn those geeks having fun!
Damn. I guess we're just going to have to go back to paper and pens. Oh well. The whole "IT industry" scam was fun while it lasted.
Re:Dolphin... (Score:1)
At least I hope it isn't, for IT's sake. | https://slashdot.org/story/02/07/25/239230/slashback-arch-bubbles-keystrokes | CC-MAIN-2018-22 | refinedweb | 6,757 | 71.65 |
I've recently been wondering (in a more general context than just haskell) about optimisations along the lines of foo x y z = if complicated thing then Cons1 a b c else if other complicated thing then Cons2 d e else Cons3 f bar some args = case foo perhaps different args of Cons1 a b c -> code for case one Cons2 a b -> code for case two Cons3 a -> code for case three into foo x y z k1 k2 k3 = if complicated thing then k1 a b c else if other complicated thing then k2 d e else k3 f bar some args = foo perhaps different args (\ a b c -> code for case one) (\ a b -> code for case two) (\ a -> code for case three) Such that you save a cons (unless the compiler can return the value in registers or on the stack) and a case analysis branch, but a normal function return (predictable by the CPU) is replaced by a less-predictable indirect jump. Does anyone have references to a paper that discusses an optimisation like this for any language, not just Haskell? Tony. -- f.a.n.finch <dot at dotat.at> SHANNON ROCKALL: VARIABLE 3, BECOMING SOUTH OR SOUTHEAST 4 OR 5, OCCASIONALLY 6, VEERING WEST LATER. MODERATE OR ROUGH. RAIN WITH FOG PATCHES, SHOWERS LATER. MODERATE OR GOOD, OCCASIONALLY VERY POOR. | http://www.haskell.org/pipermail/haskell-cafe/2007-July/028411.html | CC-MAIN-2014-15 | refinedweb | 226 | 56.52 |
Hello everyone,
I have just written some code that measures a system's FLOPS, but before I wrap it in a class and make a GUI for it, I would like to know what you guys think of it, if you can see any bugs and/or if you got any suggestions for me.
Basically, what I'm doing is I run an empty loop n times, record how long that took, then I run a loop with two floating point operations n times also recording the time it took. I need to run an empty loop because now I know how much of the time is spent doing the looping stuff, like that I can extract how much time the floating point operations took, from which I can calculate the FLOPS.
Here's the code:
#include <iostream> #include <string> #include <ctime> // how many times the loops are run, the higher this number // is, the longer it takes, but the more accurate it gets. // max is 4294967295 (cause I'm using uint32_t) #define LOOP_REPS 4294967295 using namespace std; int main(int argc, char *argv[]) { cout.setf(ios_base::fixed); // shows decimals in the output cout << "loop_reps: " << LOOP_REPS << endl; // reference loop clock_t rl_start = clock(); // loop index is volatile so that the empty loop isn't optimized away for(volatile uint32_t rl_index = 0; rl_index < LOOP_REPS; ++rl_index) { // empty loop - just to calculate how much time an empty loop needs } clock_t rl_end = clock(); double rl_time = difftime(rl_end, rl_start) / CLOCKS_PER_SEC; // output the time the reference loop took cout << "cl_time: " << rl_time << endl; // flops loop volatile float a = 1.5; volatile float b = 1.6; clock_t fl_start = clock(); for(volatile uint32_t fl_index = 0; fl_index < LOOP_REPS; ++fl_index) { a *= b; // multiplication operation b += a; // addition operation } clock_t fl_end = clock(); double fl_time = difftime(fl_end, fl_start) / CLOCKS_PER_SEC; unsigned long flops = LOOP_REPS / ((fl_time - rl_time) / 2); cout << "fl_time: " << fl_time << endl; cout << "flops: " << flops << endl; }
The is how the output should look like:
xfbs@remus:~/Dropbox/projects/C++/FlopsTest> ./flops loop_reps: 4294967295 cl_time: 11.955750 fl_time: 30.865851 flops: 454251121
Cheers, xfbs | https://www.daniweb.com/programming/software-development/threads/391888/wrote-code-that-measures-flops-would-be-nice-if-someone-could-review-it | CC-MAIN-2017-09 | refinedweb | 337 | 59.47 |
11 June 2010 05:06 [Source: ICIS news]
By Becky Zhang
SINGAPORE (ICIS news)--China's polyester capacity is expected to increase by 2.45m tonnes/year in 2010 with eight new plants to come on line following the start-up of China’s Jiangsu Shenghong Chemical Fibre’s 200,000 tonne/year polyester yarn unit this week, market sources said on Friday.
Five of these plants would produce 1.65m tonnes/year of polyester yarn used by the textile sector and four of the new units would add 800,000 tonnes/year of polyester bottle chip capacity that is used by beverages industry, a regional market source said. (Please see table below)
?xml:namespace>
Given the large size of the Chinese textile and beverages markets, the new capacity was unlikely to have an impact on prices and would not create an over-supply situation, industry sources said.
The beverages industry estimates that carbonated drinks would grow by 10% and juices by 20% this year - easily absorbing the new bottle chip polyester capacity - according to a forecast presented last week at the Polyester Industrial Summit organised by Shanghai-based commodity information service CBI.
Zhang Bin, Chief Analyst Textile & Apparel Industry, of brokerage house Sinolink Securities Co, said that China's textile industry was forecast to grow by 10% in 2010, thus easily consuming the additional capacity coming on line this year.
Followed by Shenghong plant in Jiangsu province on 9 June, another two plants were expected to start up this month.
Zhejiang Hengyi Group, the third largest polyester producer in
Yixing Huaya is expected to have another trial run at its 400,000 tonne/year yarn plant in
In the third quarter, four new plants are due to come on stream. Zhejiang Xinfengming will set up a new 250,000 tonne/year yarn unit in August, bringing its total yarn capacity to 750,000 tonnes/year.
Two new 200,000 tonne/year bottle chip plants will follow, including Zhejiang Zhengkai and Changzhou Huarun.
During August to September, a new 400,000 tonne/year yarn plant by Zhejiang Xiaoshan Rongsheng Group is expected to start commercial operation.
Towards the end of the year, Shanghai Far Eastern will bring its new 150,000 tonne/year bottle chip unit on line. By then the company’s total bottle chip capacity will reach 750,000 tonnes/year, the second largest producer in
Tongkun Group, another polyester giant in
With the start of the new plants, an additional 2.1m tonnes/year of fresh demand for feedstock purified terephthalic acid (PTA) would be created, a major Chinese trader said.
“This is the reason for PTA producers to have an optimistic outlook for the second half of the year,” the trader said, adding only one new PTA plant was planned in the whole of Asian region in the rest of 2010.
Jianying Chengxin Industrial Group plans to start commercial production at its 640,000 tonne/year PTA facility in September, according to an earlier report by ICIS news.
“
China’s total PTA capacity was around 14.85m tonnes/year, with monthly output estimated at around 1.05-1.11m tonnes assuming the overall operating rates at 85-90%, he added.
New Polyester Plants | http://www.icis.com/Articles/2010/06/11/9366946/china-polyester-capacity-to-expand-by-2.45m-tonnesyear-in-2010.html | CC-MAIN-2015-22 | refinedweb | 537 | 57.2 |
This article explores how various GNU tools are used for the development of C/C++ applications. It also covers the anatomy of the generated intermediate files and the final outcome. These tools are also available for the Windows environment with support from the MingW runtime.
In the Linux environment, many GNU tools are installed by default when you select the C/C++ development category during installation, or use the package manager, post installation. Most of these tools are supported by packages like gcc, glibc, binutils, etc. You can use the package manager of a specific distribution to install missing packages or, on rare occasions, you can build these from the source code.
For Windows, similar tools are provided by the MingW suite, originally available from mingw.org; but this is limited to 32-bit versions and there have been no consistent updates recently. You may need to download multiple packages and merge them for recent versions. An installer utility is preferred to download all necessary packages in online mode. MingW generates code for the Windows runtime in the PE format, so UNIX/Linux-specific system calls can’t be used with this.
You can get a TDM variant of gcc, which comes with the Code::Blocks IDE. Currently gcc v4.9.2 is bundled with Code::Blocks v16.01. With this you can also avail an elegant IDE with a gcc backend. Download codeblocks-16.01mingw-nosetup.zip from codeblocks.org/downloads, extract and then update the path to codeblocks-16.01mingw-nosetup\MingW\bin at the user level or system level. Now navigate to the directory holding the code in a command prompt, and run any command like gcc.
Alternatively, you can use the MingW-w64 Project, a fork of MingW, which supports 64-bit systems also and provides offline archives. You can download the archive from sourceforge.net/projects/mingw-w64/files/, then navigate to Toolchains targetting Win64/Personal Builds/dongsheng-daily and choose the desired version like 4.8, 4.9 or 5.x. Extract the archive and update the path to the extracted bin directory.
Optionally, the Cygwin Project also provides a complete UNIX-like environment in Windows with GNU tools and an abstraction layer for POSIX APIs. Any UNIX/Linux application can be ported on Cygwin.
A simple program and its analysis
To see various intermediate phases while building, let’s look at the following simple example:
#simple.c #include<stdio.h> #define PI 22.0/7.0 int main() { double area,r; #a comment area=PI*r*r; printf(“area of circle=%lf\n”,area); return 0; }
To build the above program, we use the following command:
gcc simple.c -o simple
When you run the above command, have you ever thought of the various underlying phases of development? gcc and g++ (without options) undergo various phases like preprocessing, assembling, linking, etc, with the help of tools like cpp, as, ld, etc. So gcc and g++ act like wrappers for these tools.
To see each of these phases, let’s try out some commands.
To see preprocessed output, you can use the -E option of gcc which invokes the cpp command internally. Here, you can see symbolic constants like PI replaced by their values; comments are removed, macros are expanded and header file contents are included.
gcc -E simple.c #output comes on stdout by default cpp simple.c -o simple.i #you can use -o option to store output in a file, .i extension is a convention to store outcome of preprocessor.
Optionally, we can provide symbolic constant PI externally using the -D option:
gcc -DPI=22.0/7.0 simple.c
To stop with generation of assembly code equivalent to source code the -S option is used, which internally uses cc1,cc1plus commands:
gcc -S simple.c #generates simple.s
To locate the cc1 command, use the following command:
gcc --print-prog-name=cc1 #path could be libexec/gcc/mingw32/4.9.2/ in MingW
To generate assembly using cc1, use the command given below:
<path-of-cc1-dir>/cc1 simple.c #cc1 is not located in bin dir of toolchain $( gcc --print-prog-name) simple.c #Linux specific
Optionally, you can generate the object file from the generated assembly code, as follows:
as simple.s -o simple.o
To stop compilation from the source code, use the code given below:
gcc -c simple.c #generates object file simple.o
To combine one or more object files with the necessary library files, runtime support, and to generate executables using collect2, ld internally (e.g., printf is taken from the standard C library in the form of libc.a or libc.so and math functions are taken from libm.* ), use the code given below:
gcc simple.o -o simple #generates executable/binary, a.out in absence of -o option
To retain all intermediate files while building with gcc, type:
gcc --save-temps simple.c #keeps simple.i,simple.s,simple.o
gcc supports various optimisation levels by using options like -O1,-O2,-O3 and -O0 to turn off optimisation, and -OS for space optimising. If you are planning to debug generated code using gdb, the -g<level>( -g1,-g2,-g3) option can be used.
-g2 is assumed if -g is specified and -g0 turns off debug support. You can use the -I option to specify the custom path of additional header files. Compile-specific options like –I and -D can go into the CFLAGS variable, and linker-specific options like -L, -l and -static can go into the LDFLAGS variable.
An example of a multi-file
Now let’s try another example where the application is built from multiple source files; each source file with the included header files getting compiled individually is known as the translation unit. In this example, sum and square are invoked from the main function in test.c, and are defined in sum.c, sqr.c respectively. Assume the suitable function declarations and necessary header files.
#test.c:- int main() { int a,b,c,d; a=10,b=20; c=sum(a,b); d=square(a); printf(“c=%d,d=%d\n”,c,d); return 0; } #sum.c:- int sum(int x,int y) { int z; z=x+y; return z; } #sqr.c:- int square(int x) { return x*x; }
To build the above code, i.e., compile individual translation units and link generated object files, you can use the following sequence of commands:
gcc test.c -c #-o test.o gcc sum.c -c #-o sum.o gcc sqr.c -c #-o sqr.o gcc test.o sum.o sqr.o -o all.out #all.exe in case of windows
Static vs dynamic linking
Applications may be linked statically or dynamically. In static linking, all necessary code is kept part of executable by the linker, which enables better performance, eliminating runtime overhead. But this poses the problem of a larger footprint of executables. In dynamic linking, library functions are excluded in the executable and loaded on demand, which allows an optimal footprint, but poses the problem of runtime overhead. Dynamic libraries come with the added advantage of sharing among applications and versioning support. A library is a collection of object files. Let’s create libraries for the frequently used functions like sum and square in the above code.
ar rc libsample.a sum.o sqr.o #static libraries come with extension .a, lib prefix is a convention gcc -L. -lsample -o p.out #libsample.a is linked statically and other std libraries like libc.so, libm.so are linked dynamically gcc -L. -lsample -o s.out -static # all libraries are linked statically, glibc-devel-static package is required in Linux for static linking of std libs like libc.a, libm.a # Assume .exe extension instead of .out in windows, # MingW suite doesn’t have much dynamic libs, most of the linking happens statically here. gcc -shared sum.o sqr.o -o libsample.so #On Linux shared object(.so) files support dynamic linking gcc -shared sum.o sqr.o -o libsample.dll #On Windows dynamic link libraries(dll) format is used gcc -L. test.o -lsample -o d.out #Link with libsample.* dynamically
While linking, the -L option is used by gcc to specify the custom path of our own libraries, and -l is used to specify the custom libraries, assuming the lib prefix and the .a or .so extension. In the above commands, a dot (.) is specified with –L, as the necessary libraries are in the current directory; so replace the dot (.) with the concerned path, as applicable. For example, -lc stands for libc.a or libc.so, -lpthread stands for libpthread.* . We use -lsample to specify libsample.a or libsample.so.
When both .a and .so are available in a specified directory, gcc opts for .so files by default. In Linux, the -static option is used to enforce static linking.
Compare the footprint of s.out, p.out and d.out using the size command or OS-specific commands like ls, du or dir. You will observe that s.out is heavy with all the library code and d.out is very light with minimal code, while p.out comes inbetween as only our library is statically linked. The strip utility can be applied on executables to reduce the footprint, which removes symbols from underlying object files. Please note that the strip can’t be applied on individual object files or libraries before linking. It is meaningful for executables only, especially statically linked code for constrained environments.
strip s.out #compare the size of s.out before and after strip
Note: The following section is Linux specific.
First, type: gcc -shared -Wl,-soname,libsample.so -o ~/dlibs/libsample.so.1.0.1 sum.o sqr.o ldconfig -n ~/dlibs
To run the dynamically linked executable, we need to specify the custom path of the libraries:
LD_LIBRARY_PATH=~/dlibs ./d.out
Alternatively, we need to update the LD_LIBRARY_PATH environment variable appended with ~/dlibs. It would be preferable if you could add an entry of the custom directory in /etc/ld.so.conf and run ldconfig once, to update the cache.
To check the shared library dependencies of any executable, we can use ldd, as follows:
ldd p.out d.out #our library is listed in d.out but not in p.out ldd s.out #says no dependencies
Please refer to tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html for more details on shared object files.
A simple Linux utility file mentions the type of each file, particularly for binaries, providing helpful details like target architecture, whether it is statically or dynamically linked and if it is stripped or not.
file d.out p.out s.out
ELF files and analysis
Generated object files, executables and libraries in Linux are known as ELF (executable and linkable format) files. To analyse ELF files you can use a few dissection tools listed below. Even though MingW generates Windows PE format, the effect of these tools (except readelf) will be the same.
nm: This is used to print a symbol table from an object file or executable. All the global variable and function names seen by the linker are known as symbols.
Here, you can observe that sum and square are undefined in test.o and defined in sum.o, sqr.o. A lot of runtime support symbols are added in all.out, as follows:
nm test.o sum.o sqr.o all.out
objdump: This is used for the dissection of ELF files in terms of disassembly, symbols, section details, etc.
objdump -d test.o #disassemble ELF files objdump -t test.o #displays symbol info objdump -x test.o #displays all headers objdump -S test.o #intermixed source and assembly, code has to be compiled with debug support using -g option
readelf: This is used to provide meta information about ELF files (Linux only).
readlef -h test.o all.out #Provides ELF header details like magic number, target #architecture,endianness, ABI, etc. readelf -a all.out #Provides all headers and section details
To know the size of the text, data, bss sections and the total, we can use the size command, as follows:
size test.o all.out
Last but not the least, the file utility mentioned earlier is used to know each type of file. Based on the magic number in the header part of file, it can distinguish between object files, executables, static libraries and dynamic libraries.
file *.o all.out libsample.a libsample.so
Sections of a program/process
An actively loaded program can have various sections like code (.text), initialised data (.data), uninitialised data (.bss), read-only data (.rodata) and stack. To see the applicable sections of symbols in a code, let’s add a few global variables and functions, as follows. Compile as object file and see the symbol table using nm or objdump. Then check the indicated symbol states:
int d1=10; //D int c; //C static int d2=10; //d static int b1; //b const int r1=10; //R const int r2; //r void foo() { } //T static void bar() { } //t
The explanation of the above code is given below.
T, t: .text
D, d: .data
R, r: .rodata
C: Common symbols, will be merged into .bss on linking
B: .bss
W, w: Weak symbols
T, R,D: Indicates eligible for external linkage
t, d, r: Indicates restricted for internal linkage only
You may wonder about the invisibility of local variables, which are not considered as symbols. These are converted into offsets with respect to the stack frame using a stack pointer, frame pointer registers (ESP, EBP in x86) thus not seen by linker, which is clear from disassembly using objdump also.
Weak symbols are almost similar to declarations, but don’t cause a linker error if they are not defined by the user. These are useful for interrupt handlers, exception handlers or any event handlers that can be aliased to default handlers. Users can override these with their own custom handlers. Let’s add this code in test.c to check for weak symbols quickly and check nm output.
void f1() { #default code for f1 } void f1() __attribute((weak)); #f1 can be redefined void f2_default() { #default code for f2 } void f2() __attribute__((weak,alias(“f2_default”))); #f2 can be redefined
f1 or f2 can be redefined by the user only once as strong symbols in other translation units, or can be redefined as weak symbols multiple times subsequently. f2_default will be invoked if f2 is not redefined which is aliased to f2. If a weak function is not aliased to a strong function, not having default code in the same translation unit or not redefined further by user causes runtime error.
Function overloading and name demangling
Let’s look at a few ways in which the tools are used on C++ code, with examples of function overloading.
int sum(int x, int y); float sum(float x, float y); int sum(int x, int y,int z);
Write two simple C++ programs with definitions of the above functions in fun.cpp, the main function with a few calls in test.cpp, and then generate object files.
g++ -c fun.cpp g++ -c test.cpp
Now check the symbols generated for function definitions and calls for the sum and swap functions using nm.
nm test.o fun.o
Here, you can see overloaded function calls. Definitions are decorated according to the number and type of parameters, and the enclosing scope like global, class, namespace, usage of templates, etc. But names are mangled as per internal conventions. To demangle the symbol names in readable format, you can use the option -C or –demangle with nm or objdump.
nm --demangle test.o fun.o objdump -t -C test.o fun.o
A note about elfutils and cross toolchains
GNU tools target a wide variety of platforms. Ulrich Drepper wrote elfutils purely for Linux and the ELF format. You can download these from the package manager or build them from source available at fedorahosted.org/releases/e/l/elfutils. Similar tools are also available for various target platforms with suitable prefixes, e.g., the Linaro family of toolchains for building Linux components, and bare metal toolchains from launchpad.net/gcc-arm-embedded for ARM targets provide similar tools with their own prefixes. The options and usage of these tools will be similar to the steps mentioned here.
In this article, only minimal hints on tools have been provided. Please refer to the man pages of each command and other resources for additional inputs.
Sir, in symbol table example, const int r2 may be in common symbol or .bss section instead of .rodata.
Yes…it goes into Common symbols,”r” in case initiailized static const variables…will correct it,thank you for the feedback…
The mediocre teacher tells.The good teacher explains.Microsoft
April 2017
Calendar
May 2017 Calendar
June 2017 Calendar
July 2017 Calendar
August 2017 Calendar
September 2017 Calendar
October 2017 Calendar
November 2017 Calendar
December 2017 Calendar like
This awesome Post! That bis very knowledgeable and useful. Thanks for sharing.
What is Digital Marketing
Article Submission Sites
Classfied Sites
Digital Marketing job Titles
Search Engine Submission Sites
Business Listing Sites
This is a really great information……
this is great information…
That’s a great idea.
This amazing Post! That bis entirely learned and helpful. Much obliged for sharing.
Thanks for the valuable information. | https://www.opensourceforu.com/2016/12/gnu-tools-help-develop-cc-applications/ | CC-MAIN-2020-45 | refinedweb | 2,908 | 59.19 |
Using Firebase to control your Arduino project over the web
I have seen the light, and it is Firebase.
Here at Team Sidney Enterprises, we wanted to develop a programmable light display for our house for the holidays — consisting of Arduino-controlled RGB LED strips that can be programmed to display arbitrary patterns (rainbows, sparkles, seizure-inducing strobes, you name it). Check out the final result:
I wanted to make it possible to control the light display not just from my phone, but ideally any web-capable device anywhere in the world. And I wanted to avoid writing a custom native app to control it — a web browser should be all that I need. Finally, I wanted to avoid standing up my own web server or database to manage things — “serverless” is the way to go these days.
Enter Firebase. Firebase makes it super easy to build mobile and web apps that maintain state in the cloud — without running your own servers. And in this case, Firebase’s REST API makes it trivial to access data from embedded devices without a full Android/iOS app stack or web browser. In this article I’ll walk through how it works.
(It’s worth noting that while I work at Google, I don’t have any connection to Firebase — I’m just a fan.)
The hardware
For this project, I settled on the following hardware:
- DotStar LED strips from Adafruit. These are strips of 30, 60, or 144 individually addressable RGB LEDs, and they are amazing, and wicked bright. (One can use NeoPixels as well, but DotStars are less finicky when it comes to the signal timing.) They require four wires: Power, ground, clock, and data, and the LEDs are daisy-chained along the strip. As long as you have enough power and memory, you can control an arbitrary number of LEDs from a single microcontroller.
- The HUZZAH32 microcontroller board, also from Adafruit. This is an Arduino-compatible board based on the ESP32 chipset, with 520KB SRAM and a 240MHz dual-core LX6 processor. The key thing about this board is that it has built-in WiFi, so there’s no need for separate hardware to connect to the Internet. As I’ll show below, standard libraries let you connect to WiFi networks and initiate HTTP connections — you can even update the device’s firmware over the Internet.
- I designed a custom PCB to connect the HUZZAH32 to the DotStar LED strip with a four-pin header. This is a passive board and only routes signals through to the correct pins, so you can accomplish the same thing by soldering wires to the HUZZAH32 directly.
This tutorial should be relevant to any web-based IoT or Arduino project, though, since very little is specific to the hardware that I used.
Software overview
Here’s an overview of how the software works. The HUZZAH32 boards connect to my home WiFi network and run a program that periodically “calls home” to the Firebase REST API, using a simple HTTP request. The response is a JSON object that tells the board what its current configuration is — that is, what pattern to show on the lights (as well as parameters such as color, speed, and brightness). The board then drives the DotStar LED strip using this pattern. Easy!
To set the device configuration in Firebase, I wrote a simple web app that uses the Firebase JavaScript API to read and write values from Firebase.
Note that this basic design should work for any Internet-connected device that you want to control over the web, as long as it can issue REST requests over HTTP.
Part One: Setting up Firebase
Firebase has a lot of great documentation covering a wide range of use cases, however, those tutorials are generally more involved than we care about for a simple IoT-style project. So I’ll walk you through a really basic setup here.
To get started, go to and sign in with a Google account, then go to the Firebase Console. You then create a Firebase “project” under which your Firebase settings will live. Call it anything you like — for example, “IoT Demo.”
Firebase’s free “Spark” tier is pretty generous and likely enough for most hobbyist projects; if you have a lot of devices or are going to store a lot of data, you will need to sign up for one of the other plans.
The primary Firebase feature we are going to use is the Realtime Database, which is a cloud-hosted key-value store. In our case, the web app writes data to the database, and the IoT devices read data from the database to configure themselves. (Note that Firebase has a new offering, currently in beta, called the Cloud Firestore, which is pretty similar but uses a slightly different API. For the sake of this tutorial I’ll stick with the original Realtime Database API.)
Adding some data to the database
Once you’ve created your Firebase project, you’ll find yourself in the Firebase Console for this project. On the left sidebar, you’ll see a link to “Database”. There you will find a button inviting you to Create database (and make sure you are creating this as a “Realtime Database”, rather than “Cloud Firestore”, since this tutorial covers the former.)
You’ll now have an empty database. To add data to it, hover your mouse over the root note in the database (e.g., “iot-demo-3a7b9”) and click the + button that pops up there. You can then add a key/value pair. For now, use the key “test” and enter a value such as “Hello Firebase!”
We’ll talk later about how to add values to the database from your own custom web app, but in a pinch, you can always use the Firebase console to do it by hand.
Reading data from your Arduino device
Firebase has APIs for Android, iOS, and JavaScript-enabled web apps. But if your embedded devices are anything like mine, none of those will be options for you. Fortunately, Firebase can also be accessed through a simple REST API from any device that can issue simple HTTP requests to Firebase’s servers.
In the below example, I’ll show how to do this from an Arduino-compatible device (specifically the HUZZAH32, but should work with any Arduino-like board with WiFi).
First, we need to get the device on our home WiFi. Assuming a WiFi network called “ssid” with password “password”, we’d do something like:
#include <WiFi.h>
#include <WiFiMulti.h>WiFiMulti wifiMulti;
HTTPClient http;void setup() {
// Fill in your SSID and password below
wifiMulti.addAP(“ssid”, “password”);
}
To read from the Firebase database, we need to issue an HTTP GET request to the URL, where
YOUR-PROJECT-NAME is the name of the Firebase project you created above, and
KEY is the database key —
test in this case. So, we need to access:
Keeping in mind that your project name will be different than mine.
From Arduino, the HTTP request code will look like this:
String url = ""; http.setTimeout(1000);
http.begin(url);// Issue the HTTP GET request.
int status = http.GET();
if (status <= 0) {
Serial.printf("HTTP error: %s\n",
http.errorToString(status).c_str());
return;
}// Read the response.
String payload = http.getString();
Serial.println("Got HTTP response:");
Serial.println(payload);
If all goes well, this should print the value stored in the database:
"Hello, Firebase!"
We now have the trappings of a basic cloud-controlled IoT app. For now, we can only read a single string value from the database, but what about something more sophisticated? What about reading a different value for each device? And what about reading more than a simple string — what about structured data? Fortunately, both are pretty easy.
Using different database keys for different devices
In many cases, you may want the database values read by one device to differ from that of other devices. The simplest approach is for each device to access data from the database at a key associated with some unique identifier. While there are many ways of doing this, I was looking for a “zero config” option that would prevent me from having to manually assign an identifier to each device on my network.
A good solution in this case is to use the hardware MAC address of the WiFi radio on the board as a unique identifier: these are supposed to be globally unique (although, in practice they may not be!) six-byte IDs associated with the hardware on board the device.
To use this approach, all we have to do is use
WiFi.macAddress() in the URL for the REST request, as so:
String url = "" +
WiFi.macAddress() +".json";
http.setTimeout(1000);
http.begin(url);int status = http.GET();
// And so on, as above
This will request a URL such as
where the 6 bytes of the URL are unique to each device on the network.
Storing structured data in Firebase
The value stored at a given Firebase database key can actually be a structure containing a mixture of numeric and string fields, which is useful for representing richer data structures. For example, let’s say we wanted each device to have a configuration consisting of a human-readable name, a color (represented as separate red, green, and blue values), and a brightness value. We might organize the data like so:
iot-demo-3a7b9
|
+- config
+
|
+- 30:AE:A4:1B:58:A0
| |
| +- name: "Porch left"
| +- red: 162
| +- green: 0
| +- blue: 255
| +- brightness: 100
|
+- 30:AE:A4:1C:1A:B0
|
+- name: "Porch right"
+- red: 140
+- green: 45
+- blue: 0
+- brightness: 80
You can manually create these nested structures using the Firebase console, again using the + button that pops up when you hover over a database key to add new database entries. (Later I’ll show how to do this in a more elegant way via a web app.)
Reading structured data
Now that we’re storing richer data structures in Firebase, reading it back from an Arduino program is a little more challenging. Accessing the Firebase REST API, you will get back a JSON-encoded string containing the database entry, like so:
{"blue":255,"brightness":100,"green":0,"name":"Porch left","red":162}
It is of course possible to parse this string by hand in C code, but not likely your favorite way to spend an afternoon (especially since the ordering of the keys is not guaranteed!).
An alternate, albeit brute-force, approach is to read each field as a separate JSON request. For example, the URL
returns the value
162, and likewise for
green.json,
blue.json, and so forth. This works, and saves you the trouble of parsing JSON, but requires a lot of round-trip HTTP requests, any one of which might individually fail or time out.
A better approach is to use a library to parse the JSON for you, and the ArduinoJson library is great for this. Here’s some code that reads the values out of the JSON object returned by the REST call:
#include <ArduinoJson.h>// Allocate a 1024-byte buffer for the JSON document.
StaticJsonDocument<1024> jsonDoc;void readConfig() { String url = "";
http.setTimeout(1000);
http.begin(url); // Issue the HTTP GET request.
int status = http.GET();
if (status <= 0) {
Serial.printf("HTTP error: %s\n",
http.errorToString(status).c_str());
return;
} String payload = http.getString(); // Parse the JSON response.
DeserializationError err = deserializeJson(jsonDoc, payload);
Serial.print("Deserialize returned: "); // Cast the response to a JSON object.
JsonObject jobj = jsonDoc.as<JsonObject>(); // Read each JSON object field.
int red = jobj["red"];
int green = jobj["green"];
int blue = jobj["blue"];
int brightness = jobj["brightness"]; char name[64];
memcpy(name, (const char *)jobj["name"], sizeof(name));
}
With this, you can then take the values (red, green, blue, etc.) and use them to control your LED strip. Cool, right?
Here’s the complete Arduino code with all of the bells and whistles:
Using a web app to control your devices
The other great thing about Firebase is that it’s easy to write a web page that can read and write values in the Firebase database. Since this approach does not require you to host your own database or custom server code, it works on any web server that can serve up static pages — no need for PHP and the like. I use GitHub Pages to host my web apps.
To get started, you first need to include the Firebase JavaScript library on your page, and use some boilerplate initialization code. On the Firebase console, click on Develop on the left navbar, and then Web setup in the upper right. This will spit out some code to paste on your web page’s HTML to configure Firebase. In my case, the initialization code looks like this:
<script src=""></script>
<script>
// Initialize Firebase
// NOTE!!! The below is specific to your Firebase project --
// use "Web Setup" from the Firebase "Develop" pane to get this
// code for your app. var config = {
apiKey: "AIzaSyABCaBBY04zpvIl1efmOPrKwNtPkgTXfqs",
authDomain: "team-sidney.firebaseapp.com",
databaseURL: "",
projectId: "team-sidney",
storageBucket: "team-sidney.appspot.com",
messagingSenderId: "395332355872"
};
firebase.initializeApp(config);
</script>
Writing data to a Firebase database entry is easy. First, you get a reference to the entry in the database you want to modify, and then you call the .set() method on that reference object with the data you want to write, like so:
var mac = '30:AE:A4:1B:58:A0';// Get a reference to the Firebase database entry at the given key.
var dbRef = firebase.database().ref('config/' + mac);// The config object we want to write.
var config = {
name: 'Device name',
red: 100,
green: 0,
blue: 100,
brightness: 50,
};// Write the config to the database.
dbRef
.set(config)
.then(function() {
console.log('Success!');
})
.catch(function(error) {
console.log('Error: ' + error.message);
});
That’s it!
You can also read data from Firebase by registering a callback that is invoked whenever a value in the database changes. This can also be applied to a whole set of database entries so you get a callback whenever anything changes. For example:
// Get a database reference to all config/ keys.
dbRef = firebase.database().ref('config/');
// Set callback to be invoked when a child node is added or changes.
dbRef.on('child_added', configChanged, dbErrorCallback);
dbRef.on('child_changed', configChanged, dbErrorCallback);// Callback invoked when database entry is added or changed.
function configChanged(snapshot) {
var key = snapshot.key;
var newValue = snapshot.val();
console.log('Database entry ' + key + ' changed, new value: ' +
newValue);
}// Callback invoked on error.
function dbErrorCallback(err) {
console.log('Error reading database: ' + err.message);
}
For a complete example of my light controller web app, check out the code here:
Security and Authentication
So far, we’ve assumed that the Firebase database you’ve configured can be read from and written to by any device on the Internet that knows the correct URL to use. This isn’t very secure, so of course Firebase has a set of powerful controls to handle user authentication and request authorization.
The full details on setting this up are beyond the scope of this little tutorial, but in brief, you can configure Firebase to limit requests to your database from certain users using a wide range of methods, including email address and password, Google/Facebook/Twitter accounts, and more. You can then configure database access rules that define which users can access which parts of your database under different conditions. To learn more, check out the docs at.
Putting it all together
For the holidays, we put several LED strips on the front porch of our house, and even wrapped a few around the Christmas tree, giving me the ability to remotely drive my entire family nuts with a stunning light display at the touch of a button.
I also hacked together a simple Google Assistant app to let me control the lights (“Hey Google, Tell Blinky control to set the Christmas tree to rainbow!”). But that’s a story for another post.
Lemme know in the comments if you have any questions and I’ll do my best to answer. Happy hacking! | https://medium.com/firebase-developers/using-firebase-to-control-your-arduino-project-over-the-web-ba94569d172c | CC-MAIN-2020-45 | refinedweb | 2,681 | 62.27 |
proconex 0.3
producer/consumer with exception handling
Proconex is a module to simplify the implementation of the producer/consumer idiom. In addition to simple implementations based on Python's Queue.Queue, proconex also takes care of exceptions raised during producing or consuming items and ensures that all the work shuts down in a clean manner without leaving zombie threads.
Example Usage
In order to use proconex, we need a few preparations.
First, set up Python's logging:
>>> import logging >>> logging.basicConfig(level=logging.INFO)
In case you want to use the with statement to clean up and still use Python 2.5, you need to import it:
>>> from __future__ import with_statement
And finally, we of course need to import proconex itself:
>>> import proconex
Here is a simple producer that reads lines from a file:
>>> class LineProducer(proconex.Producer): ... def __init__(self, fileToReadPath): ... super(LineProducer, self).__init__() ... self._fileToReadPath = fileToReadPath ... def items(self): ... with open(self._fileToReadPath, 'rb') as fileToRead: ... for lineNumber, line in enumerate(fileToRead, start=1): ... yield (lineNumber, line.rstrip('\n\r'))
The constructor can take any parameters you need to set up the producer. In this case, all we need is the path to the file to read, fileToReadPath. The constructor simply stores the value in an attribute for later reference.
The function items() typically is implemented as generator and yields the produced items one after another until there are no more items to produce. In this case, we just return the file line by line as a tuple of line number and line contents without trailing newlines.
Next, we need a consumer. Here is a simple one that processes the lines read by the producer above and prints its number and text:
>>> class LineConsumer(proconex.Consumer): ... def consume(self, item): ... lineNumber, line = item ... if "self" in line: ... print u"line %d: %s" % (lineNumber, line)
With classes for producer and consumer defined, we can create a producer and a list of consumers:
>>> producer = LineProducer(__file__) >>> consumers = [LineConsumer("consumer#%d" % consumerId) ... for consumerId in xrange(3)]
To actually start the production process, we need a worker to control the producer and consumers:
>>> with proconex.Worker(producer, consumers) as lineWorker: ... lineWorker.work() # doctest: +ELLIPSIS line ...
The with statement makes sure that all threads are terminated once the worker finished or failed. Alternatively you can use try ... except ... finally to handle errors and cleanup:
>>> producer = LineProducer(__file__) >>> consumers = [LineConsumer("consumer#%d" % consumerId) ... for consumerId in xrange(3)] >>> lineWorker = proconex.Worker(producer, consumers) >>> try: ... lineWorker.work() ... except Exception, error: ... print error ... finally: ... lineWorker.close() # doctest: +ELLIPSIS line ...
Additionally to Worker there also is a Converter, which not only produces and consumes items but also yields converted items. While Converter``s use the same ``Producer``s as ``Worker``s, they require different consumers based on ``ConvertingConsumer. Such a consumer has a addItem() which consume() should use to add the converted item.
Here is an example for a consumer that converts consumed integer numbers to their square value:
>>> class SquareConvertingIntegerConsumer(proconex.ConvertingConsumer): ... def consume(self, item): ... self.addItem(item * item)
A fitting producer for integer numbers between 0 and 4 is:
>>> class IntegerProducer(proconex.Producer): ... def items(self): ... for item in xrange(5): ... yield item
Combining these in a converter, we get:
>>> with proconex.Converter(IntegerProducer("producer"), ... SquareConvertingIntegerConsumer("consumer")) as converter: ... for item in converter.items(): ... print item 0 1 4 9 16
Limitations
When using proconex, there are a few things you should be aware of:
- Due to Python's Global Interpreter Lock (GIL), at least one of producer and
consumer should be I/O bound in order to allow thread switches.
- The code contains a few polling loops because Queue does
not support canceling get() and put(). The polling does not drain the CPU because it uses a timeout when waiting for events to happen. Still, there is room for improvement and contributions are welcome.
- The only way to recover from errors during production is to restart the
whole process from the beginning.
If you need more flexibility and control than proconex offers, try celery.
Source code
Proconex is distributed under the GNU Lesser General Public License, version 3 or later.
The source code is available from <>.
Version history
Version 0.3, 2012-01-06
- Added Converter class, which is similar to Worker but expects consumers to yield results the caller can process.
- Changed exceptions raised by producer and consumer to preserve their stack trace when passed to the Worker or Converter.
Version 0.2, 2012-01-04
- Added support for multiple producers.
- Added limit for queue size. By default it is twice the number of consumers.
Version 0.1, 2012-01-03
- Initial public release.
- Author: Thomas Aglassinger
- Keywords: xml output stream large big huge namespace unicode memory footprint
- License: GNU Lesser General Public License 3 or later
- Categories
- Development Status :: 4 - Beta
- Environment :: Plugins
- Intended Audience :: Developers
- License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
- Natural Language :: English
- Operating System :: OS Independent
- Programming Language :: Python :: 2.5
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Topic :: Software Development :: Libraries
- Package Index Owner: roskakori
- DOAP record: proconex-0.3.xml | http://pypi.python.org/pypi/proconex/0.3 | crawl-003 | refinedweb | 855 | 50.12 |
Archived:Getting tarted with blogging from PySymbian through xmlrpcl.
All PySymbian articles have been archived. PySymbian is no longer maintained by Nokia and is not guaranteed to work on more recent Symbian devices. It is not possible to submit apps to Nokia Store.
You will need xmlrpclib to run the code below; copy it from your Python 2.2 installation directory to your phone where you have the code below saved. You can place it in your phone memory or memory card. Ped editor is recommended for running the code and making changes.
Here is my sample code for Blogging from PySymbian through xmlrpclib.
# First of All Create a server proxy for your xml rpc server
# The Xml rpc server in this case is ""
server = ServerProxy("")
# import the socket module of pys60
import socket
# Show a list of Access Points
apid = socket.select_access_point()
# get the selected access point
apo =socket.access_point(apid)
# set the access point as default
socket.set_default_access_point(apo)
# start the access point, so that it connects to the network and get its ip etc.
apo.start()
try:
# print the addTwoNumbers(1,2) and call the method on server which give us the output of 3
print 'addTwoNumbers(1,2) = ', server.demo.addTwoNumbers(1,2)
# call the supportedMethods() method of the server and print the supported methods
print 'supported methods: \n', server.mt.supportedMethods()
except Error, v:
print "ERROR", v
# stop the access point. if an error occurs
apo.stop()
apo.stop()
apid=None
apo=None
Additional information
Above code is written by Fayyaz Ali and is Licensed under gnu gpl. | http://developer.nokia.com/Community/Wiki/Getting_Started_with_Blogging_from_PySymbian_through_xmlrpclib | CC-MAIN-2013-48 | refinedweb | 263 | 57.87 |
Hi all. Total beginner here taking first baby steps with a Pololu USB AVR programmer 2.1 and Baby Orangutan B-328 controller.
I’ve been going through the various install processes and I believe I have installed things correctly (grab attached). I’ve got the Programmer V2 config utility running (shows “connected”), and I’ve installed the CrossPack AVR for running on my Mac. I can run avr-gcc -v, make -v, and avrdude -v and all seem to show correct installation. I then try to run the Simple-Test in libpololu-avr/examples/atmegaXXX/simple-test/ but I get the following error:
fatal error: pololu/orangutan.h: No such file or directory
#include <pololu/orangutan.h>
What I can’t figure out is that I can see the orangutan.h file in libpololu-avr/pololu (grab attached). libpololu-avr is just on my desktop. Does this need to be located somewhere else? Or is there something else going on? Sorry if this is a super-basic or obvious. Thanks for any assistance. | https://forum.pololu.com/t/newb-trying-to-run-simple-test-on-usb-avr-2-1-orangutan-h-no-such-file-error/18272 | CC-MAIN-2019-47 | refinedweb | 174 | 68.87 |
hi, m very new to this and i guess this sounds rather basic, but i really need help..
in my project i have just a pwm block that is used to activate the LED. It seems the pwm is executing just one cycle and then there is no change. LED remains off initially, goes on and then remains in the same state. the code:
#include <m8c.h> // part specific constants and macros
#include "PSoCAPI.h" // PSoC API definitions for all User Modules
#include "PWM8.h"
int val;
void GenerateOneThirdDutyCycle(void)
{
/* set period to eight clocks */
PWM8_WritePeriod(23);
/* set pulse width to generate a 33% duty cycle */
PWM8_WritePulseWidth(7);
/* ensure interrupt is disabled */
PWM8_DisableInt();
/* start the PWM8! */
PWM8_Start();
}
void main(void)
{
void GenerateOneThirdDutyCycle(void);
M8C_EnableGInt ; // Uncomment this line to enable Global Interrupts
// Insert your main routine code here.
/* function prototype */
/* Divide by eight function */
for(;;)
{
GenerateOneThirdDutyCycle();
}
}
I tried the same with global interrupts disabled as well.. got the same result.. please help..
Suggest the following -
1) Move "void GenerateOneThirdDutyCycle(void);" before your code for this
function. I am sure it compiled correctly as you did not state it errored, but
normally a f() proto needs to occur first when scanned by compiler.
2) Your code shows an infinite loop calling GenerateOneThirdDutyCycle(),
so PWM always being set to same parameters, and you are turning off
interrupts, so what is the point of having an interrupt ?
3) I do not see any pragma for a C interrupt, so conclude you are doinjg a ASM
based interrupt. Therefore you would have modified the PWM8INT.asm file....?
4) What did you want an interruot to do ?
5) Next ime post entire project if you can, easier to see what you are doing,
settings, etc..
6) This post belongs in the PSOC 1 forum. "File", "Create Workspace Bundle", and
Regards, Dana.
thank you.. will do.. but i didnt understand the point about pragma in c? wat does that mean?
Which you can't understand? When you would do expain the point, you coild get a answer. ----
Hey PSoCkers! What was happen? Cypress got a progress!
My reference to pragma for C interrupts was PSOC 1 by mistake, GNU,
as you probably know, quite different. Ignore my past comment, my
mistake.
Regards, Dana.
coild-->could | https://community.cypress.com/t5/PSoC-5-3-1-MCU/PWM-problem/m-p/119098 | CC-MAIN-2021-31 | refinedweb | 378 | 66.64 |
angel_rethink 1.1.0
rethink #.
1.1.0 #
- Moved to
package:rethinkdb_driver
- Fixed references to old hooked event names.
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: angel_rethink: ^1.1.0
2. Install it
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
3. Import it
Now in your Dart code, you can use:
import 'package:angel_rethink/angel_rethink.dart';
The package version is not analyzed, because it does not support Dart 2. Until this is resolved, the package will receive a health and maintenance score of 0.
Analysis issues and suggestions
Fix dependencies in
pubspec.yaml.
Running
pub upgrade failed with the following output:
ERR: The current Dart SDK version is 2.5.0. Because angel_rethink depends on rethinkdb_driver >=0.3.0 which requires SDK version <2.0.0, version solving failed.
Health suggestions
Format
lib/angel_rethink.dart.
Run
dartfmt to format
lib/angel_rethink.dart.
Maintenance issues and suggestions
Fix platform conflicts. (-20 points)
Error(s) prevent platform classification:
Fix dependencies in
pubspec.yaml.
Make sure
dartdoc successfully runs on your package's source files. (-10 points)
Dependencies were not resolved.
Package is getting outdated. (-27.67 points)
The package was last published 66 weeks ago.
Maintain an example. (-10 points)
Create a short demo in the
example/ directory to show how to use this package.
Common filename patterns include
main.dart,
example.dart, and
angel_rethink. | https://pub.dev/packages/angel_rethink | CC-MAIN-2019-39 | refinedweb | 257 | 62.54 |
Frequently Asked Questions¶
1. About Pylint¶
1.1 What is Pylint?¶
Pylint is a static code checker, meaning it can analyse your code without actually running it. Pylint checks for errors, tries to enforce a coding standard, and tries to enforce a coding style.
2. Installation¶
2.1 How do I install Pylint?¶
Everything should be explained on Installation.
2.2 What kind of versioning system does Pylint use?¶
Pylint uses the git distributed version control system. The URL of the repository is: . To get the latest version of Pylint from the repository, simply invoke
git clone
2.3 What are Pylint's dependencies?¶
Pylint depends on astroid and a couple of other packages. See the following section for details on what versions of Python are supported.
2.4 What versions of Python is Pylint supporting?¶
Since Pylint 2.0, the supported running environment is Python 3.4+.
That is, Pylint 2.0.
3. Running Pylint¶
3.1 Can I give pylint a file as an argument instead of a module?¶
Pylint expects the name of a package or module as its argument. As a convenience, you can give it a file name if it's possible to guess a module name from the file's path using the python path. Some examples :
"pylint mymodule.py" should always work since the current working directory is automatically added on top of the python path
"pylint directory/mymodule.py" will work if "directory" is a python package (i.e. has an __init__.py file), an implicit namespace package or if "directory" is in the python path.
"pylint /whatever/directory/mymodule.py" will work if either:
- "/whatever/directory" is in the python path
- your cwd is "/whatever/directory"
- "directory" is a python package and "/whatever" is in the python path
- "directory" is an implicit namespace package and is in the python path.
- "directory" is a python package and your cwd is "/whatever" and so on...
3.2 Where is the persistent data stored to compare between successive runs?¶
Analysis data are stored as a pickle file in a directory which is localized using the following rules:
- value of the PYLINTHOME environment variable if set
- ".pylint generate a sample pylintrc file with --generate-rcfile Every option present on the command line before this will be included in the rc file
For example:
pylint --disable=bare-except,invalid-name --class-rgx='[A-Z][a-z]+' --generate-rcfile
3.4 I'd rather not run Pylint from the command line. Can I integrate it with my editor?¶
Much probably. Read Editor and IDE integration
4. Message Control¶
4.1
4.2 Is there a way to disable a message for a particular module only?¶
Yes, you can disable or enable (globally disabled) messages at the module level by adding the corresponding option in a comment at the top of the file:
# pylint: disable=wildcard-import, method-hidden # pylint: enable=too-many-lines
4.3 How can I tell Pylint to never check a given module?¶.3, you can use symbolic names for messages:
# pylint: disable=fixme, line-too-long
4.5 I have a callback function where I have no control over received arguments. How do I avoid getting unused argument warnings?¶
Prefix (ui) the callback's name by cb_, as in cb_onclick(...). By doing so arguments usage won't be checked. Another solution is to use one of the names defined in the "dummy-variables" configuration variable for unused argument ("_" and "dummy" by default).
4.6 What is the format of the configuration file?¶
Pylint uses ConfigParser from the standard library to parse the configuration file. It means that if you need to disable a lot of messages, you can use tricks like:
# disable wildcard-import, method-hidden and too-many-lines because I do # not want it disable= wildcard-import, method-hidden, too-many-lines
4.7 Why are there a bunch of messages disabled by default?¶
pylint does have some messages disabled by default, either because
they are prone to false positives or that they are opinionated enough
for not being included as default messages..
6. Troubleshooting¶
6.1 Pylint gave my code a negative rating out of ten. That can't be right!¶
Even though the final rating Pylint renders is nominally out of ten, there's no lower bound on it. By default, the formula to calculate score is
10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
However, this option can be changed in the Pylint rc file. If having negative values really bugs you, you can set the formula to be the maximum of 0 and the above expression.
6.2 I think I found a bug in Pylint. What should I do?¶
Read Bug reports, feedback
6.3 I have a question about Pylint that isn't answered here.¶
Read Mailing lists | http://pylint.pycqa.org/en/pylint-2.1.0/faq.html | CC-MAIN-2019-47 | refinedweb | 806 | 67.76 |
Docker: Using Linux Containers to Support Portable Application Deployment
.
In this article I will describe the challenges companies face in deploying complex systems today, and how Docker can be a valuable tool in solving this problem, as well as other use cases it enables.
The deployment challenge
Deployment of server applications is getting increasingly complicated. The days that server applications could be installed by copying a few Perl scripts into the right directory are over. Today, software can have many types of requirements:
- dependencies on installed software and libraries ("depends on Python >= 2.6.3 with Django 1.2")
- dependencies on running services ("requires a MySQL 5.5 database and a RabbitMQ queue")
- dependencies on a specific operating systems ("built and tested on 64-bit Ubuntu Linux 12.04")
- resource requirements:
- minimum amount of available memory ("requires 1GB of available memory")
- ability to bind to specific ports ("binds to port 80 and 443")
For example, let's consider the deployment of a relatively simple application: Wordpress. A typical Wordpress installation requires:
- Apache 2
- PHP 5
- MySQL
- The Wordpress source code
- A Wordpress MySQL database, with Wordpress configured to use this database
- Apache configured:
- to load the PHP module
- to enable support for URL rewriting and .htaccess files
- the DocumentRoot pointing to the Wordpress sources
While deploying and running a system like this on our server, we may run into some problems and challenges:
- Isolation: if we were already hosting a different site on this server, and our existing site runs only on nginx, whereas Wordpress depends on Apache, we're in a bit of a pickle: they both try to listen on port 80. Running both is possible, but requires tweaking the configuration (changing the port to listen to), setting up reverse proxies etc. Similar conflicts can occur at the library level, if I also run an ancient application still depending on PHP4 we have a problem, since Wordpress no longer supports PHP4, and it's very difficult to run both PHP4 and 5 simultaneously. Since applications running on the same server are not isolated (in this case at a filesystem and network level), they may conflict.
- Security: we're installing Wordpress, not the software with the best security track record. It would be nice to sandbox this application so that once hacked at least it doesn't impact the other running applications.
- Upgrades, downgrades: upgrading an application typically involves overwriting existing files. What happens during an upgrade window? Is the system down? What if the upgrade fails, or turns out to be faulty. How do we roll back to a previous version quickly?
- Snapshotting, backing up: it would be nice, once everything is setup up successfully, to "snapshot" a system, so that the snapshot can be backed up, or even moved to a different server and started up again, or replicated to multiple servers for redundancy.
- Reproducibility: It's good practice to automate deployment and to test a new version of a system on a test infrastructure before pushing it to production. The way this usually works is using a tool like Chef, Puppet to install a bunch of packages on the server automatically, and then when everything works, to run that same deployment script on the production system. This will work 99% of the time. That 1% of times, during the timespan between deploying to testing and production, the package repository has been updated with newer, possibly incompatible versions of a package you depend on. As a result, your production setup is different than testing, possibly breaking your production system. So, without taking the burden of taking control of every little aspect of your deployment (e.g. hosting your own APT or YUM repositories), consistently reproducing the exact same system onto multiple setups (e.g. testing, staging, production) is hard.
- Constrain resources: what if our Wordpress goes CPU crazy and starts to take up all our CPU cycles, completely blocking other applications from doing any work? What if it uses up all available memory? Or generates logs like crazy, clogging up the disk? It would be very convenient to be able to limit resources available to the application, like CPU, memory and disk space.
- Ease of installation: there may be Debian or CentOS packages, or Chef recipes that automatically execute all the complicated steps to install Wordpress. However, these recipes are tricky to get rock solid, because they need to take into account many possible existing system configurations of the target system. In cases many, these recipes only work on clean systems. Therefore, it is not unlikely that you have to replace some packages or Chef recipes with your own. This makes installing complex systems not something you try during a lunch break.
- Ease of removal: software should be easily and cleanly removable without leaving traces behind. However, as deploying an application typically requires tweaking of existing configuration files, and putting state (MySQL database data, logs) left and right, removing an application completely is not that easy.
So, how do we solve these issues?
Virtual machines!
When we decide run each individual application on a separate virtual machine, for instance on Amazon's EC2, most of our problems go away:
- Isolation: install one application per VM and applications are perfectly isolated, unless they hack into each other's firewall.
- Reproducibility: prepare your system just the way you like, then create an AMI. You can now instantiate as many instances of this AMI as you like. Fully reproducible.
- Security: since we have complete isolation, if the Wordpress server gets hacked, the rest of the infrastructure is not affected -- unless you litter SSH keys or reuse the same passwords everywhere, but you wouldn't do that, would you?
- Constrain resources: a VM is allocated certain share of CPU cycles, available memory and disk space which it cannot exceed (without paying more money).
- Ease of installation: an increasing amount of applications are available as EC2 appliances and can be instantiated with the click of a button from the AWS marketplace. It takes a few minutes to boot, but that's about it.
- Ease of removal: don't need an application? Destroy the VM. Clean and easy.
- Upgrades, downgrades: do what Netflix does: simply deploy a new version to a new VM, then point your load balancer from the old VM to the VM with the new version. Note: this doesn't work well with applications store state locally that needs to be kept.
- Snapshotting, backing up: EBS disk can be snapshotted with a click of a button (or API call), snapshots are backed up to S3.
Perfect!
Except... now we have a new problem: it's expensive, in two ways:
- Money: can you really afford booting up an EC2 instance for every application you need? Also: can you predict the instance size you will need, because if you need more resources later, you need to stop the VM to upgrade it -- or over-pay for resources you don't end up needing (unless you use Solaris Zones, like on Joyent, which can be resized dynamically).
- Time: many operations related to virtual machines are typically slow: booting takes minutes, snapshotting can take minutes, creating an image takes minutes. The world keeps turning and we don't have have that kind of time!
Can we do better?
Docker is an open source project started by the people of dotCloud, a public Platform-as-a-Service provider, that launched earlier this year. From a technical perspective Docker is plumbing (primarily written in Go) to make two existing technologies easier to use:
- LXC: Linux Containers, which allow individual processes to run at a higher level of isolation than regular Unix process. The term used for this is containerization: a process is said to run in a container. Containers support isolation at the level of:
- File system: a container can only access its own sandboxed filesystem (chroot-like), unless specifically mounted into the container's filesystem.
- User namespace: a container has its own user database (i.e. the container's root does not equal the host's root account)
- Process namespace: within the container only the processes part of that container are visible (i.e. a very clean ps aux output).
- Network namespace: a container gets its own virtual network device and virtual IP (so it can bind to whatever port it likes without taking up its hosts ports).
- AUFS: advanced multi layered unification filesystem, which can be used to create union, copy-on-write filesystems.
Docker can be installed on any Linux system with AUFS support and a 3.8+ kernel. However, conceptually it does not depend on these technologies and may in the future also work with similar technologies, such as Solaris' zones, or BSD jails, using ZFS as a file system, for instance. Today, your only choice is Linux 3.8+ and AUFS, however.
So, why is Docker interesting?
- It's very light weight. Whereas booting up a VM is a big deal, taking up a significant amount of memory, booting up a Docker container has very little CPU and memory overhead and is very fast. Almost comparable to starting a regular process. Not only running a container is fast, building an image and snapshotting the filesystem is as well.
- It works in already virtualized environments. That is: you can run Docker inside an EC2 instance, a Rackspace VM or VirtualBox. In fact, the preferred way to use it on Mac and Windows is using Vagrant.
- Docker containers are portable to any operating system that runs Docker. Whether it's Ubuntu or CentOS, if Docker runs, your container runs.
So, let's get back to our previous list of deployment and operation problems and let's see how Docker scores:
- Isolation: docker isolates applications at the filesystem and networking level. It feels a lot like running "real" virtual machines in that sense.
- Reproducibility: Prepare your system just the way you like it (either by logging in and apt-get in all software, or using a Dockerfile), then commit your changes to an image. You can now instantiate as many instances of it as you like or transfer this image to another machine to reproduce exactly the same setup.
- Security: Docker containers are more secure than regular process isolation. Some security concerns have been identified by the Docker team and are being addressed.
- Constrain resources: Docker currently supports limiting CPU usage to a certain share of CPU cycles, memory usage can also be limited. Restricting disk usage is not directly supported as of yet.
- Ease of installation: Docker has the Docker Index, a repository with off-the-shelf docker images you can instantiate with a single command. For instance, to use my Clojure REPL image, run: docker run -t -i zefhemel/clojure-repl and it will automatically fetch the image and run it.
- Ease of removal: don't need an application? Destroy the container.
- Upgrades, downgrades: same as for EC2 VMs: boot up the new version of an application first, then switch over your load balancer from the old port to the new.
- Snapshotting, backing up: Docker supports committing and tagging of images, which incidentally, unlike snapshotting on EC2, is instant.
How to use it
Let's assume you have Docker installed. Now, to run bash in a Ubuntu container, just run:
docker run -t -i ubuntu /bin/bash
Depending on whether you have the "ubuntu" image downloaded already, docker will now download it or use the copy already available locally, then run /bin/bash in an ubuntu container. Inside this container you can now do pretty much do all your typical ubuntu stuff, for instance install new packages.
Let's install "hello":
$ docker run -t -i ubuntu /bin/bash root@78b96377e546:/# apt-get install hello Reading package lists... Done Building dependency tree... Done The following NEW packages will be installed: hello 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 26.1 kB of archives. After this operation, 102 kB of additional disk space will be used. Get:1 precise/main hello amd64 2.7-2 [26.1 kB] Fetched 26.1 kB in 0s (390 kB/s) debconf: delaying package configuration, since apt-utils is not installed Selecting previously unselected package hello. (Reading database ... 7545 files and directories currently installed.) Unpacking hello (from .../archives/hello_2.7-2_amd64.deb) ... Setting up hello (2.7-2) ... root@78b96377e546:/# hello Hello, world!
Now, let's exit and run the same Docker command again:
root@78b96377e546:/# exit exit $ docker run -t -i ubuntu /bin/bash root@e5e9cde16021:/# hello bash: hello: command not found
What happened? Where did our beautiful hello command go? As it turns out, we just started a new container, based on the clean ubuntu image. To continue on from our previous one, we have to commit it to a repository. Let's exit out of this container and find out what the id of the container was that we launched:
$ docker ps -a ID IMAGE COMMAND CREATED STATUS PORTS e5e9cde16021 ubuntu:12.04 /bin/bash About a minute ago Exit 127 78b96377e546 ubuntu:12.04 /bin/bash 2 minutes ago Exit 0
The docker ps command gives us a list of currently running containers, docker ps -a also shows containers that have already exited. Each container has a unique ID which is more or less analogous to a git commit hash. It also lists the image the container was based on, and the command it ran, when it was created, what its current status is, and the ports it exposed and their mapping to the hosts' ports.
The one of the top was the second one we just launched without "hello" in it, the bottom one is the one we want to keep and reuse, so let's commit it, and create a new container from there:
$ docker commit 78b96377e546 zefhemel/ubuntu 356e4d516681 $ docker run -t -i zefhemel/ubuntu /bin/bash root@0d7898bbf8cd:/# hello Hello, world!
What I did here was commit the container (based on its ID) to a repository. A repository, analogous to a git repository, consists of one or more tagged images. If you don't supply a tag name (like I didn't), it will be named "latest". To see all locally installed images run: docker images.
Docker comes with a few base images (for instance ubuntu and centos) and you can create your own images as well. User repositories follow a Github-like naming model with your Docker username followed by a slash and then the repository name.
So, now we've seen one way of creating a Docker image the hacky way, if you will. The cleaner way is using a Dockerfile.
Building images with a Dockerfile
A Dockerfile is a simple text file consisting of instructions on how to build the image from a base image. I have a few of them on Github. Here's a simple one for running and installing an SSH server:
FROM ubuntu RUN apt-get update RUN apt-get install -y openssh-server RUN mkdir /var/run/sshd RUN echo "root:root" | chpasswd EXPOSE 22
This should be almost self-explanatory. The FROM command defines the base image to start from, this can be one of the official ones, but could also be zefhemel/ubuntu we just created. The RUN commands are commands to be run to configure the image. In this case, we're updating the APT package repository, installing the openssh-server, creating a directory, and then setting a very poor password for our root account. The EXPOSE command exposes port 22 (the SSH port) to the outside world. Let's see how we can build and instantiate this Dockerfile.
The first step is to build an image. In the directory containing the Dockerfile run:
$ docker build -t zefhemel/ssh .
This will create a zefhemel/ssh repository with our new SSH image. If this was successful, we can instantiate it with:
$ docker run -d zefhemel/ssh /usr/sbin/sshd -D
This is different than the command before. -d runs the container in the background, and instead of running bash, we now run the sshd daemon (in foreground mode, which what the -D is for).
Let's see what it did by checking our running containers:
$ docker ps ID IMAGE COMMAND CREATED STATUS PORTS 23ee5acf5c91 zefhemel/ssh:latest /usr/sbin/sshd -D 3 seconds ago Up 2 seconds 49154->22
We can now see that our container is up. The interesting bit is under the PORTS header. Since we EXPOSEd port 22, this port is now mapped to a port on our host system (49154 in this case). Let's see if it works.
$ ssh root@localhost -p 49154 The authenticity of host '[localhost]:49154 ([127.0.0.1]:49154)' can't be established. ECDSA key fingerprint is f3:cc:c1:0b:e9:e4:49:f2:98:9a:af:3b:30:59:77:35. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '[localhost]:49154' (ECDSA) to the list of known hosts. root@localhost's password: <I typed in 'root' here> Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.8.0@23ee5acf5c91:~#
Success once more! There is now a SSH server running and we were able to login to it. Let's exit from SSH and kill the container, before somebody from the outside figures out our password and hacks into the container.
$ docker kill 23ee5acf5c91
As you will have seen, our container's port 22 was mapped to port 49154, but that's fairly random. To map it to a specific port, pass in the -p flag to the run command:
docker run -p 2222:22 -d zefhemel/ssh /usr/sbin/sshd -D
Now, our port will be exposed on port 2222 if it's available. We can make our image slightly more user-friendly by adding the following line at the end of the Dockerfile:
CMD /usr/sbin/sshd -D
CMD signifies that a command isn't to be run when building the image, but when instantiating it. So, when no extra arguments are passed, it will execute /usr/sbin/sshd -D. So, now we can just run:
docker run -p 2222:22 -d zefhemel/ssh
And we'll get the same effect as before. To publish our newly created marvel, we can simply run docker push:
docker push zefhemel/ssh
and after logging in it will be available for everybody to use using that same previous docker run command.
Let's circle back to our Wordpress example. How would Docker be used to run Wordpress in a container? In order to build a Wordpress image, we'd create a Dockerfile that:
- Installs Apache, PHP5 and MySQL
- Download Wordpress and extract it somewhere on the filesystem
- Create a MySQL database
- Update the Wordpress configuration file to point to the MySQL database
- Make Wordpress the DocumentRoot for Apache
- Start MySQL and Apache (e.g. using supervisord)
Luckily, various people have already done this, for instance John Fink's github repository contains everything you need to build such a Wordpress image.
Docker use cases
Beside deploying complex applications easily in a reliable and reproducible way, there are many more uses for Docker. Here are some interesting Docker uses and projects:
- Continuous integration and deployment: build software inside of a Docker container to ensure isolation of builds. Built software images can automatically be pushed to a private Docker repository, and deployed to testing or production environments.
- Dokku: a simple Platform-as-a-Service built in under 100 lines of Bash.
- Flynn, and Deis are two open source Platform-as-a-Service projects using Docker.
- Run a desktop environment in a container.
- A project that brings Docker to its logical conclusion is CoreOS, a very light-weight Linux distribution, where all applications are installed and run using Docker, managed by systemd.
What Docker is not
While Docker helps in deploying systems reliably, it is not a full-blown software deployment system by itself. It operates at the level of applications running inside of containers. Which container to install on which server, and how to start them is outside of Docker's scope.
Similarly, orchestrating applications that run across multiple containers, possibly on multiple physical servers or VMs is beyond the scope of Docker. To let containers communicate, they need some type of discovery mechanism to figure out at what IPs and ports other applications are available. Again, this is very similar to service discovery across regular virtual machines. A tool like etcd, or any other service discovery mechanism can be used for this purpose.
Conclusion
While everything described in this article was possible before using raw LXC, cgroups and AUFS, it was never this easy or simple. This is what Docker brings to the table: a simple way to package up complex applications into containers that can be easily versioned and distributed reliably. As a result it gives light-weight Linux containers about the same flexibility and power as "real" virtual machines as widely available today, but at a much lower cost and in a more portable way. A docker image created with Docker running in a Vagrant VirtualBox VM on a Macbook Pro will run great on EC2, Rackspace Cloud or on physical hardware, and vice versa.
Docker is available for free from its website. A good place to get started is the interactive getting started guide. According to the project's roadmap, the first production-ready version, 0.8, is expected to be ready in October 2013, although people are already using it in production today.
About the Author
Zef Hemel is Developer Evangelist and member of the product management team at LogicBlox, a company developing an application server and database engine based on logic programming, specifically Datalog. Previously he was VP of Engineering at Cloud9 IDE, which develops a browser-based IDE. Zef is a native of the web and has been developing web applications since the 90s. He's a strong proponent of declarative programming environments.
Csaba Okrona
I've written a sample walkthrough to get a Django app up and running with docker: ochronus.com/docker-primer-django/
First Class article
by
Mark Stuart
Excellent one!
by
Muhilan Mg | http://www.infoq.com/articles/docker-containers/ | CC-MAIN-2016-07 | refinedweb | 3,695 | 53.41 |
Proposal for Package Mounting
It may help to refer to the GhcPackages proposal for an introduction to some of the issues mentioned here.
A message by Frederik Eaton to the Haskell mailing list describing the present proposal is archived:. (Also, see note at the end of this document regarding an earlier proposal by Simon Marlow)
This document will go over Frederik's proposal again in brief. The proposal doesn't involve any changes to syntax, only an extra command line option to ghc, etc., and a small change to Cabal syntax.
In this proposal, during compilation of a module, every package would have a "mount point" with respect to which its particular module namespace would be resolved. Each package should have a default "mount point", but this default would be overridable with an option to ghc, etc.
For example, the X11 library currently has module namespace:
Graphics.X11.Types Graphics.X11.Xlib Graphics.X11.Xlib.Atom Graphics.X11.Xlib.Event Graphics.X11.Xlib.Display ...
In this proposal, it might instead have default mount point Graphics.X11 and (internal) module namespace:
Types Xlib Xlib.Atom Xlib.Event Xlib.Display ...
To most users of the X11 package, there would be no change - because of the mounting, modules in that package would still appear with the same names in places where the X11 package is imported: Graphics.X11.Types, etc. However, if someone wanted to specify a different the mount point, he could use a special compiler option, for instance -package-base:
ghc -package X11 -package-base Graphics.Unix.X11 ...
(so the imported namespace would appear as Graphics.Unix.X11.Types, Graphics.Unix.X11.Xlib, etc.) Note that the intention is for each -package-base option to refer to the package specified in the preceding -package option, so to give package PACKAGE a mount point of BASE we use the syntax
ghc ... -package PACKAGE -package-base BASE ...
Ideally one would also be able to link to two different versions of the same package, at different mount points:
ghc -package X11-1.2 -package-base NewX11 -package X11-1.0 -package-base OldX11 ...
(yielding NewX11.Types, NewX11.Xlib, ...; OldX11.Types, OldX11.Xlib, ...)
However, usually the default mount point would be sufficient, so most users wouldn't have to learn about -package-base.
Additionally, Cabal syntax should be extended to support mounting. I would suggest that the optional mount point should appear after a package in the Build-Depends clause of a Cabal file:
Build-Depends: X11(Graphics.Unix.X11.Xlib)
And in the package Cabal file, a new clause to specify the default mount point:
Default-Base: Graphics.X11
Evaluation
This proposal has several advantages over the PackageImports proposal.
- No package names in code. In this proposal, package names would be decoupled from code. This is very important. It should be possible to rename a package (or create a new version of a package with a new name), and use it in a project, without editing every single module of the project and/or package. Even if the edits could be done automatically, they would still cause revision control headaches. Any proposal which puts package names in Haskell source code should be considered unacceptable.
- No syntax changes. The PackageImports proposal requires new syntax, but this proposal does not. Of course, in this proposal it would be slightly more difficult for the programmer to find out which package a module is coming from. He would have to look at the command line that compiles the code he's reading. However, I think that that is appropriate. Provenance should not be specified in code, since it changes all the time. (And there could be a simple debugging option to GHC which outputs a description of the namespace used when compiling each file)
- Simpler module names. This proposal would allow library authors to use simpler module names in their packages, which would in turn make library code more readable, and more portable between projects. For instance, imagine that I wanted to import some of the code from the X11 library into my own project. Currently, I would have to delete every occurrence of Graphics.X11 in those modules. Merging future changes after such an extensive modification would become difficult. This is a real problem, which I have encountered while using John Meacham's curses library. There are several different versions of that library being used by different people in different projects, and it is difficult to consolidate them because they all have different module names. The reason they have different module names is that package mounting hasn't been implemented yet. The PackageImports proposal would not fix the problem.
- Development decoupled from naming. (there is a bit of overlap with previous points here) In the present proposal, programmers would be able to start writing a library before deciding on a name for the library. For instance, every module in the Parsec library contains the prefix Text.ParserCombinators.Parsec. This means that either the author of the library had to choose the name Parsec at the very beginning, or he had to make several changes to the text of each module after deciding on the name. Under the present proposal, he would simply call his modules Char, Combinator, Error, etc.; the Text.ParserCombinators prefix would be specified in the build system, for instance in the Cabal file.
Frederik's mailing list message discusses some other minor advantages, but the above points are the important ones. In summary, it is argued that the above proposal should be preferred to PackageImports because it is both easier to implement (using command line options rather than syntax), and more advantageous for the programmer.
Note on Package Grafting
A proposal by Simon Marlow for "package grafting" predates this one:. However, the "package grafting" proposal is different in that it suggests selecting a "mount point" at library installation time, where in the present proposal, the "mount point" is selected each time a module using the library in question is compiled. The difference is important, as one doesn't really want to have to install a new copy of a library just to use it with a different name. Also, Simon Marlow's proposal puts package versions in the module namespace and therefore source code, where we argue for decoupling source code from anything to do with provenance - be it package names or version numbers. | https://ghc.haskell.org/trac/ghc/wiki/Commentary/Packages/PackageMountingProposal?version=10 | CC-MAIN-2016-36 | refinedweb | 1,060 | 53.71 |
2.6. Automatic Differentiation¶
As we have explained in Section 2.5, differentiation is a crucial step in nearly all deep learning optimization algorithms. While the calculations for taking these derivatives are straightforward, requiring only some basic calculus, for complex models, working out the updates by hand can be a pain (and often error-prone).
The
autograd package expedites this work by automatically
calculating derivatives, i.e., automatic differentiation. computational
graph, filling in the partial derivatives with respect to each
parameter.
from mxnet import autograd, np, npx npx.set_np()
2.
x = np.arange(4) x
array([0., 1., 2., 3.])
Note that also that a gradient of a scalar-valued function with respect to a
vector \(\mathbf{x}\) is itself vector-valued and has the same shape
as \(\mathbf{x}\). Thus it is intuitive that in code, we will access
a gradient taken with respect to
x as an attribute of the
ndarray
x itself. We allocate memory for an
ndarray’s
gradient by invoking its
attach_grad method.
x.attach_grad()
After we calculate a gradient taken with respect to
x, we will be
able to access it via the
grad attribute. As a safe default,
x.grad is initialized as an array containing all zeros. That is
sensible because our most common use case for taking gradient in deep
learning is to subsequently update parameters by adding (or subtracting)
the gradient to maximize (or minimize) the differentiated function. By
initializing the gradient to an array of zeros, we ensure that any
update accidentally executed before a gradient has actually been
calculated will not alter the parameters’ value.
x.grad
array([0., 0., 0., 0.])
Now let us calculate \(y\). Because we wish to subsequently calculate gradients, we want MXNet to generate a computational graph on the fly. We could imagine that MXNet would be turning on a recording device to capture the exact path by which each variable is generated.
Note that building the computational graph requires a nontrivial amount
of computation. So MXNet will only build the graph when explicitly told
to do so. We can invoke this behavior by placing our code inside an
autograd.record scope.
with autograd.record(): y = 2 * np.dot(x, x) y
array(28.)
Since
x is an
ndarray of length 4,
np.dot will perform an
inner product of
x and
x, yielding the scalar output that we
assign to
y. Next, we can automatically calculate the gradient of
y with respect to each component of
x by calling
y’s
backward function.
y.backward()
If we recheck the value of
x.grad, we will find its contents
overwritten by the newly calculated gradient.
x.grad
array([ 0., 4., 8., 12.])
The gradient of the function \(y = 2\mathbf{x}^{\top}\mathbf{x}\)
with respect to \(\mathbf{x}\) should be \(4\mathbf{x}\). Let us
quickly verify that our desired gradient was calculated correctly. If
the two
ndarrays are indeed the same, then the equality between
them holds at every position.
x.grad == 4 * x
array([1., 1., 1., 1.])
If we subsequently compute the gradient of another variable whose value
was calculated as a function of
x, the contents of
x.grad will
be overwritten.
with autograd.record(): y = x.sum() y.backward() x.grad
array([1., 1., 1., 1.])
2.6.2. Backward for Non-Scalar Variables¶
Technically, when
y is not a scalar, the most natural interpretation
of the gradient of
y (a vector of length \(m\)) with respect to
x (a vector of length \(n\)) is the Jacobian (an
\(m\times n\) matrix). For higher-order and higher-dimensional
y
and
x, the Jacobian could be a gnar Jacobian but rather the sum of the partial derivatives computed individually for each example in the batch.
Thus when we invoke
backward on a vector-valued variable
y,
which is a function of
x, MXNet assumes that we want the sum of the
gradients. In short, MXNet will create a new scalar variable by summing
the elements in
y, and compute the gradient of that scalar variable
with respect to
x.
with autograd.record(): y = x * x # y is a vector y.backward() u = x.copy() u.attach_grad() with autograd.record(): v = (u * u).sum() # v is a scalar v.backward() x.grad == u.grad
array([1., 1., 1., 1.])
2 call
u = y.detach() to return a new variable
u that
has the same value as
y but discards any information about how
y
was computed in the computational graph. In other words, the gradient
will not flow backwards through
u to
x. This will provide the
same functionality as if we had calculated
u as a function of
x
outside of the
autograd.record scope, yielding a
u that will be
treated as a constant in any
backward call. Thus, the following
backward function computes the partial derivative of
z = u * x
with respect to
x while treating
u as a constant, instead of the
partial derivative of
z = x * x * x with respect to
x.
with autograd.record(): y = x * x u = y.detach() z = u * x z.backward() x.grad == u
array([1., 1., 1., 1.])
Since the computation of
y was recorded, we can subsequently call
y.backward() to get the derivative of
y = x * x with respect to
x, which is
2 * x.
y.backward() x.grad == 2 * x
array([1., 1., 1., 1.])
Note that attaching gradients to a variable
x implicitly calls
x = x.detach(). If
x is computed based on other variables, this
part of computation will not be used in the
backward function.
y = np.ones(4) * 2 y.attach_grad() with autograd.record(): u = x * y u.attach_grad() # Implicitly run u = u.detach() z = 5 * u - x z.backward() x.grad, u.grad, y.grad
(array([-1., -1., -1., -1.]), array([5., 5., 5., 5.]), array([0., 0., 0., 0.]))
2.6.4. Computing the Gradient of Python Control Flow¶
One benefit of using automatic differentiation is that even if building
the computational graph of a function required passing through a maze of
Python.
def f(a): b = a * 2 while np.linalg.norm(b) < 1000: b = b * 2 if b.sum() > 0: c = b else: c = 100 * b return c
Again to compute gradients, we just need to
record the calculation
and then call the
backward function.
a = np.random.normal() a.attach_grad() with autograd.record(): d = f(a) d.backward()
We can.
a.grad == d / a
array(1.)
2.6.5. Training Mode and Prediction Mode¶
As we have seen, after we call
autograd.record, MXNet logs the
operations in the following block. There is one more subtle detail to be
aware of. Additionally,
autograd.record will change the running mode
from prediction mode to training mode. We can verify this behavior
by calling the
is_training function.
print(autograd.is_training()) with autograd.record(): print(autograd.is_training())
False True
When we get to complicated deep learning models, we will encounter some algorithms where the model behaves differently during training and when we subsequently use it to make predictions. We will cover these differences in detail in later chapters.
2.6.6. Summary¶
MXNet provides the
autogradpackage to automate the calculation of derivatives. To use it, we first attach gradients to those variables with respect to which we desire partial derivatives. We then record the computation of our target value, execute its
backwardfunction, and access the resulting gradient via our variable’s
gradattribute.
We can detach gradients to control the part of the computation that will be used in the
backwardfunction.
The running modes of MXNet include training mode and prediction mode. We can determine the running mode by calling the
is_trainingfunction.
2.6.7. Exercises¶
Why is the second derivative much more expensive to compute than the first derivative?
After running
y.backward(), the paper by Edelman et al. :cite``Edelman.Ostrovsky.Schwarz.2007``. | https://www.d2l.ai/chapter_preliminaries/autograd.html | CC-MAIN-2019-47 | refinedweb | 1,315 | 57.87 |
for connected embedded systems
mq_open()
Open a message queue
Synopsis:
#include <mqueue.h> mqd_t mq_open( const char * name, int oflag, ... )
Arguments:
- name
- The name of the message queue that you want to open; see below.
- oflag
- You must specify one of O_RDONLY (receive-only), O_WRONLY (send-only) or O_RDWR (send-receive). In addition, you can OR in the following constants to produce the following effects:
- O_CREAT -- if name doesn't exist, instruct the server to create a new message queue with the given name. If you specify this flag, mq_open() uses its mode and mq_attr arguments; see below.
- O_EXCL -- if you set both O_EXCL and O_CREAT, and a message queue name exists, the call fails and errno is set to EEXIST. Otherwise, the queue is created normally. If you set O_EXCL without O_CREAT, it's ignored.
- O_NONBLOCK -- under normal message queue operation, a call to mq_send() or mq_receive() could block if the message queue is full or empty. If you set this flag, these calls never block. If the queue isn't in a condition to perform the given call, errno is set to EAGAIN and the call returns an error.
If you set O_CREAT in the oflag argument, you must also pass these arguments to mq_open():
- mode
- The file permissions for the new queue. For more information, see "Access permissions" in the documentation for stat().
If you set any bits other than file permission bits, they're ignored. Read and write permissions are analogous to receive and send permissions; execute permissions are ignored.
- mq_attr
- NULL, or a pointer to an mq_attr structure that contains the attributes that you want to use for the new queue. For more information, see mq_getattr()..
Returns:
A valid message queue descriptor if the queue is successfully created, or -1 (errno is set).
Errors:
- EACCES
- The message queue exists, and you don't have permission to open the queue under the given oflag, or the message queue doesn't exist, and you don't have permission to create one.
- EEXIST
- You specified the O_CREAT and O_EXCL flags in oflag, and the queue name exists.
- EINTR
- The operation was interrupted by a signal.
- EINVAL
- You specified the O_CREAT flag in oflag, and mq_attr wasn't NULL, but some values in the mq_attr structure were invalid.
- ELOOP
- Too many levels of symbolic links or prefixes.
- EMFILE
- Too many file descriptors are in use by the calling process.
- ENAMETOOLONG
- The length of name exceeds PATH_MAX.
- ENFILE
- Too many message queues are open in the system.
- ENOENT
- You didn't set the O_CREAT flag, and the queue name doesn't exist.
- ENOSPC
- The message queue server has run out of memory.
- ENOSYS
- The mq_open() function isn't implemented for the filesystem specified in name.
Classification:
See also:
mq_close(), mq_getattr(), mq_notify(), mq_receive(), mq_send(), mq_setattr(), mq_timedreceive(), mq_timedsend(), mq_unlink()
mq, mqueue in the Utilities Reference | http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/m/mq_open.html | crawl-003 | refinedweb | 473 | 64.91 |
Flag Waiving (Application Development Advisor 6(3), Apr enum to manage bit sets works fine in C, but things become a little more
complex in C++. Kevlin Henney explains an alternative method of handling
flags, using the bitset template
Flag waiving
Take a quick look at the following code:
This brings us to the first bit of laziness to tidy up in
the original enum definition. As it happens, the first
three values form the sequence 0, 1and 2, which spec-
ifies no bits, the first bit and the second bit respec-
tively. Each integer power of 2, from 0 up, represents
a different bit. To make it clear that the enumerators
do not form a conventional sequence, but instead rep-
resent bit masks, developers typically set the mask val-
ues explicitly. As neither C nor C++ supports binary
literals, it is more common to use hexadecimal rather
than either decimal or octal to define the constants:
typedef enum style
{
plain,
bold,
italic,
underline = 4,
strikethrough = 8,
small_caps = 16
} style;
What language is it in? It could be C or C++,
although the style is clearly C-like; C++ does not
need all that typedefnoise to obtain a usable type name.
The following code fragments fix the language:
enum style
{
plain,
bold = 0x01,
italic = 0x02,
underline = 0x04,
strikethrough = 0x08,
small_caps = 0x10
};
style selected = plain;
...
selected |= italic;
...
if(selected & bold)
...
Since the aim of this column is to reconsider how
we work with flags in C++, I have also taken the
small liberty of dropping the C-ish typedef. Let’s look
at another common approach to specifying the enu-
merators, that makes the basic idea of sequence (bit 0,
bit 1, bit 2 etc) a little more explicit:
It’s C. To be precise, it’s common but not particu-
larly good C. The code demonstrates a weakness of the
type system that encourages sloppy design. Unfortu-
nately, given the enduring C influence on C++ cod-
ing practices, this style for flags and sets of flags is also
prevalent in C++. C++ programmers without a C
background can acquire these habits by osmosis
fromtheir C-speaking colleagues or through C-influ-
enced libraries.
enum style
{
plain,
bold = 1 << 0,
italic = 1 << 1,
underline = 1 << 2,
strikethrough = 1 << 3,
Flag poll
First, some quick explanation and a little bit of
reformatting. As its name suggests, an enum is nor-
mally used to enumerate a set of constants. By default,
the enumerators have distinct values, starting at 0 and
rising by 1 for each successive enumerator. Alterna-
tively, an enumerator can be given a specific constant
integer value.
A common approach for holding a set of binary
optionsis to treat an integer value as a collection of bits,
ignoring its numeric properties. If a bit at a particular
position is set, the option represented by that position
is enabled. This duality is common at the systems pro-
gramming level and many programmers never think
to question it. C programmers and, indeed, C com-
pilers make little distinction between enumerators,
integers, flags and bit sets.
An enum is often used to list the distinct options
in a bit set, but instead of acting as distinct symbols,
enumerator constants are used to represent bit masks.
FACTS AT A GLANCE
G C habits still affect C++ style,such as
how to work with flags.
G C++’s stronger type checking makes the
C use of enums as bit sets somewhat
awkward.
G The standard library’s bitset template
provides a simpler and more direct
implementation for sets of flag options.
G Programmers can define their own types
for holding bit sets based on std::bitset.
G Trait classes and policy parameters allow
for flexible implementation.
Kevlin Henney is an
independent software
development consultant
and trainer.He can be
reached at
56
APPLICATION DEVELOPMENT ADVISOR G
small_caps = 1 << 4
a stylevariable to hold not an option but a set of options. While the
bit representation of an enum affords such usage, it is by default a
type error that is fundamentally a category error: a thing and a collection
of things are different types.
This answers the question of why plain is not really a valid enu-
merator for style. It represents a combination of options (or, rather,
their absence); it is a set rather than an individual option. The canon-
ical C type for a bit set is the integer, signed or otherwise, but the
C’s lax type system allows the free and easy mixing of unrelated concepts.
Here is the revised code:
};
This approach makes the bit-masking purpose of the enumera-
tors a little clearer. The relationship to integers and ordinary count-
ing numbers is less interesting than the shifted position of a set bit.
But what about the code that shows how the styleflags are used? Alas,
the following won’t compile:
selected |= italic;
This fails because enumand integer types are not interchangeable:
arithmetic and bitwise operators do not apply to enums. When you
use an enumwhere an integer is expected, you get an implicit con-
version to the enum’s associated integer value, in effect:
const unsigned plain = 0;
...
unsigned selected = plain;
...
selected |= italic;
...
if(selected & bold)
...
static_cast<int>(selected) |= static_cast<int>(italic);
When integers go bad
While there is not much wrong with using an enum as an inte-
ger, there is plenty wrong with using an integer as an enum.
Every enumerator will map to a valid integer value but not every
valid integer will map to a valid enumerator. That’s why C++ banned
the implicit conversion from integer to enum and why the code
shown won’t compile. So how do you make it compile? You can
force the compiler to succumb to your wicked way with a cast as
your accomplice:
However, what the other approaches lacked in grace, the inte-
gerbit set loses in safety and precision. There is nothing that constrains
the actual use of an unsigned to be the same as its intended use.
Taking a step back from these many variations, you may realise
the nagging truth: it’s all a bit of a hack. Why should you be man-
ually organising your constants according to bit masks at all? It is
easy to become rooted in only one form of thinking. Sometimes a
fresh look will help you break out of a rut to a more appropriate solution.
Let’s leave C behind.
It’s time to get back to basics, ignoring all this bitwise gymnas-
tics, and restate the core problem: you need to hold a set of options,
each of which is effectively Boolean. This suggests a simple solution:
hold a set of options.
selected = static_cast<style>(selected | italic);
It would be fair to say this lacks both grace and convenience.
Alternatively, you could overload the bitwise or operators to do
yourbidding:
style operator|(style lhs, style rhs)
{
return static_cast<style>(int(lhs) | int(rhs));
}
enum style
{
bold, italic, underline, strikethrough, small_caps
};
...
std::set<style> selected;
...
selected.insert(italic);
...
if(selected.count(bold))
...
style &operator|=(style &lhs, style rhs)
{
return lhs = lhs | rhs;
}
This will allow the code to compile in its original form, but you
should not forget to define the other relevant bitwise operators. One
problem with this approach is that you need to define these
operators anew for every enumtype you want to use as a bit set. And
what do you do for the definition of the bitwise notoperator? This
is used to ensure that a bit is disabled:
This method is a lot simpler to work with: the style type sim-
ply enumerates the options with no hard coding of literal values.
The standard set class template allows a set of options to be
manipulated as a type-checked set rather than at the systems
programming level. In other words, the language and library do
all the work for you.
selected &= ~italic;
What should the value of ~italicbe? bold | underline | strikethrough
| small_caps or ~int(italic)? The former includes only bits that
have been defined as valid for the bit set, but the latter is a simpler
interpretation of the bit set concept.
A quick aside on cast style: the constructor-like form, eg int(italic)
is used where the conversion is safe and would otherwise be
implicit. You are constructing an intfrom a style. The keyword cast
form, static_cast, is being used where the conversion is potentially
unsafe and we are taking liberties with the type system.
Which brings us neatly to the next point: we are messing with the
type system. There is an underlying reason why we are having to perform
with a cast of thousands: the design is flawed. If you recall, the style
type is an enumeration type. In other words, it enumerates the intended
legal values of a style variable that is designed to hold one of the
enumerator values at a time. However, the common practice uses
Honing for efficiency
Functionally, there are no problems with this approach, but
many programmers may justifiably have concerns over its efficiency.
The bitwise solution required an integer for storage, no additional
dynamic memory usage, and efficient, fixed-time manipulation
of the options. In contrast, an std::set is an associative node-based
container that uses dynamic memory for its representation.
If you hold these space and time efficiency concerns (and have
good reason to), all is not lost: you can still have abstraction and
efficiency. The flat_set class template presented in the last column1
is an improvement over std::set for this purpose. But better still
is the standard bitset template:
APRIL 2002
57
std::bitset<5> selected;
...
selected.set(italic);
...
if(selected.test(bold))
...
enum_set<style, _style_size> selected;
This is clumsy and smacks of redundancy. We state the type and
then a property of the type, relying in this case on a little hack to
keep everything in sync. It would be nice to look up the relevant type
properties based on the type name, so that the type is all the user
has to specify in a declaration. C++’s reflective capabilities are
quite narrow, limited to runtime-type information in the form of
typeidand dynamic_castand compile-time type information in the
form of sizeof. However, the traits technique2, first used in the C++
standard library, offers a simple form of customisable compile-time
type information.
It is a myth that classes exist only to act as the DNA for objects.
Classes can be used to group non-instance information in the form
of constants, types or staticmember functions, describing policiesor
other types. A class template can be used to describe properties
that relate to its parameters, specialising as necessary. This form
of compile-time lookup allows us to answer neatly the question
of the bitset’s size:
The std::bitsetclass template, defined in the standard <bitset> header,
is not a part of the STL, hence the seemingly non-standard member
function names. But it is still standard and it still solves the
problem. A more legitimate obstacle is that the size of the set must
be wired in at compile time. There is also nothing that constrains
you to working with the style type.
There is a simple hack that allows you to avoid having to count
the number of enumerators:
enum style
{
bold, italic, underline, strikethrough, small_caps,
_style_size
};
...
std::bitset<_style_size> selected;
...
template<typename enum_type>
class enum_set
{
...
private:
typedef enum_traits<enum_type> traits;
std::bitset<traits::count> bits;
};
I say this is a hack because you are using the property of enumerators
to count, by default, in steps of 1 from 0. The last enumerator, _style_size,
is not really part of the valid set of enumerators because it does not
represent an option. However, in its favour, this technique does
save you from many of the ravages of change: the addition or
removal of enumerators and any consequent change to the declared
size of the bitset.
Producing a rabbit with style
However, there is no such thing as magic. If you wish to pull a rabbit
out of a hat, you had better have a rabbit somewhere to hand.Here
is a definition of what the traits for the style type would look like:
Rhyme and treason
Now I think we have a better idea of what is needed: something
that works like a type-safe bitsetfor enums. Alas, we won’t find one
of thesein the standard so we will have to create it. Let’s start with
the intended usage:
template<>
struct enum_traits<style>
{
typedef style enum_type;
static const bool is_specialized = true;
static const style first = bold;
static const style last = small_caps;
static const int step = 1;
static const std::size_t count = last - first + 1;
};
enum_set<style> selected;
...
selected.set(italic);
...
if(selected.test(bold))
...
It seems reasonable enough to follow the std::bitset interface
because the two are conceptually related. Additionally, a const
subscript operator would make its use even more intuitive:
It is common to use structrather than classfor traits because they
do not represent the type of encapsulated objects, and a private-public
distinction serves no useful purpose. The trait class just shown is a
full specialisation of the primary template, which is effectively just
a shell with non-functional placeholder values:
if(selected[bold])
...
template<typename type>
struct enum_traits
{
typedef type enum_type;
static const bool is_specialized = false;
static const style first = type();
static const style last = type();
static const int step = 0;
static const std::size_t count = 0;
};
To save on all the bit twiddling, we can use std::bitset as the
representation of our enum_set. However, a quick sketch reveals
aproblem:
template<typename enum_type>
class enum_set
{
...
private:
std::bitset<???> bits;
};
The information available to an enum_traitsuser includes the type
of the enum; whether or not the trait is valid (has been specialised);
the first and last enumerator values; the step increment, if any, between
How big should the bitsetbe? One unsatisfactory solution would
be to require the user to provide the size as well as the type, ie:
58
APPLICATION DEVELOPMENT ADVISOR G
std::size_t count() const
{
return bits.count();
}
std::size_t size() const
{
return bits.size();
}
bool operator[](enum_type testing) const
{
return bits.test(to_bit(testing));
}
enum_set &set()
{
bits.set();
return *this;
}
enum_set &set(enum_type setting, bool value = true)
{
bits.set(to_bit(setting), value);
return *this;
}
enum_set &reset()
{
bits.reset();
return *this;
}
enum_set &reset(enum_type resetting)
{
bits.reset(to_bit(resetting));
return *this;
}
enum_set &flip()
{
bits.flip();
return *this;
}
enum_set &flip(enum_type flipping)
{
bits.flip(to_bit(flipping));
return *this;
}
enum_set operator~() const
{
return enum_set(*this).flip();
}
bool any() const
{
return bits.any();
}
bool none() const
{
return bits.none();
}
...
private:
typedef enum_traits<enum_type> traits;
static std::size_t to_bit(enum_type value)
{
return (value - traits::first) / traits::step;
}
std::bitset<traits::count> bits;
};
each enumerator; and the count of the enumerators. This is all good
information to have but it does seem like a lot of work: a separate
specialisation is required for each enumtype. Fortunately, we can provide
a helper class to cover most of this ground:
template<
typename type,
type last_value, type first_value = type(),
int step_value = 1>
struct enum_traiter
{
typedef type enum_type;
static const bool is_specialized = true;
static const type first = first_value;
static const type last = last_value;
static const int step = step_value;
static const std::size_t count =
(last - first) / step + 1;
};
The enum_traitertemplate is designed for use as a base, reducing
the enum_traits specialisation for style:
template<>
struct enum_traits<style> :
enum_traiter<style, small_caps>
{
};
This makes life a lot easier, and accommodates enumtypes whose
enumerators do not number from 0 and whose step is not 1.
However, the common case is catered for with default template
parameters, remembering that the explicit default construction
for integers and enums is zero initialisation.
Wrapping and forwarding
The implementation for enum_set becomes a fairly simple matter
of wrapping and forwarding to a std::bitset:
template<typename enum_type>
class enum_set
{
public:
enum_set()
{
}
enum_set(enum_type setting)
{
set(setting);
}
enum_set &operator&=(const enum_set &rhs)
{
bits &= rhs.bits;
return *this;
}
enum_set &operator|=(const enum_set &rhs)
{
bits |= rhs.bits;
return *this;
}
enum_set &operator^=(const enum_set &rhs)
{
bits ^= rhs.bits;
return *this;
}
APRIL 2002
59
...
often forgotten. Taking the opportunity to call into question certain
C bit bashing tactics leads to the identification of new abstractions
which, once implemented, reduce the density and quantity of code
needed to express what are in essence high-level ideas. There is no
need to jump through hoops when using the abstractions, although
their implementation does call on some relatively advanced idioms.
How far could you take the use of enum_traits? It is possible to
extend it to accommodate existing enum types that are defined in
terms of a bit-shifted progression rather than an arithmetic one. Based
on traits, you can also define iteration for enum types. For the moment,
these are left as exercises for the reader. I
There is one final tweak that, for no extra hassle, makes the code
a little more generic. A common use of trait classes in the standard
library is as policy parameters. The default is to use the default trait
for a type, but the user could provide an alternative. The use of policies
is a long-standing C++ te chnique that has mature d ove r the last de cade3,4,5,6,7.
The only difference to the enum_set template would be to
remove the traits typedef and add a defaulted parameter:
template<
typename enum_type,
typename traits = enum_traits<enum_type> >
class enum_set
{
...
};
References
1. Kevlin Henney, “Bound and Checked”, Application
De ve lopme nt Advis or, January 2002, available from
2. Nathan Myers, “Traits: A new and useful template
technique”, C++ Re port, June 1995, available from
3. Andrei Alexandrescu, Mode rn C++ De s ign, Addison-Wesley,
2001
4. Grady Booch, Obje ct-Orie nte d Analys is and De s ign with
Applications , 2nd edition, Benjamin/Cummings, 1994
5. Erich Gamma, Richard Helm, Ralph Johnson and John
Vlissides, De s ign Patte rns , Addison-Wesley, 1995
6. Kevlin Henney, “Making an Exception”, Application
De ve lopme nt Advis or, May 2001, available from
7. Bjarne Stroustrup, The De s ign and Evolution of C++,
Addison-Wesley, 1994
This facility can be used to define sets on a subset of options:
struct simple_style : enum_traiter<style, underline>
{
};
...
enum_set<style, simple_style> selected;
...
This declaration of selected allows it to hold only bold, italic and
underline options.
Looking to the standard library can be a good way of finding either
a solution or the inspiration for one. There are a lot of programming
tasks that are repetitive and tedious. Because of their repetitive
nature they become part of the background hum of programming,
ADA's free e-mail newsletter
for software developers
By now many of you will have had a free on-line newsletter which is a service to
readers of ADA and packed with the very latest editorial for software and
application developers. However, some of our readers don't get this free service.
If you don't get your free newsletter it's for one of two good reasons.
Even though you subscribed to ADA,
you haven't supplied us with your e-mail address.
b
You ticked the privacy box at the time you subscribed
and we can't now send you our newsletter.
c | http://www.slideserve.com/Kevlin/flag-waiving | CC-MAIN-2017-34 | refinedweb | 3,211 | 51.18 |
colored scrollbar
Hi! I am writing a Delphi component which must look very good.
I would like to have a colored scrollbar - a scrollbar which looks like a normal scrollbar, but which the color should be different from the standard grey color.
Is there any (simple) way to do this using Windows API or Delphi?
Thank you!
Xtender
Sunday, July 13, 2003
You may try processing the WM_CTLCOLORSCROLLBAR message.
For example:
In the Windows Message Procedure of the window containing the scroll bar you would:
1. Create a Brush in the WM_CREATE message.
case WM_CREATE:
hBrush = CreateSolidBrush(RGB(0xFF,0x00,0x00));
2. Destroy the brush in the WM_DESTROY message.
case WM_DESTROY:
DeleteObject(hBrush);
3. Return a handle to the brush you created in Step 1 during the WM_CTLCOLORSCROLLBAR message.
case WM_CTLCOLORSCROLLBAR:
return (LRESULT) hBrush;
Dave B.
Sunday, July 13, 2003
I'm curious why you have a perception that changing what users know as standard devices is considered "good". I cringe at IE's ability to change the color of its scroll bar. It's a terrible UI decision made by bored developers, being abused by sub-par web designers. Is that the class of people you want to associate with? People who change things for the sake of change?
What's more important to you? Using something cool, or having a usable app for your end users?
Brad Wilson (dotnetguy.techieswithcats.com)
Sunday, July 13, 2003
Having a cool app for my end-users. :-)
Xtender
Sunday, July 13, 2003
Noted: the difference between a "cool" app and a usable one.
Nate Silva
Sunday, July 13, 2003
Microsoft goes for "cool" over "usability" all the time. Witness WindowsXP or the constantly changing button styles in Office.
asdf
Sunday, July 13, 2003
Having a "cool" app is equaly important as an usable one for all those non-IT types who doesn't know about Jakob Nielsen :)
Anon
Sunday, July 13, 2003
Well if the colour of the scrollbar follows that of the system theme then it will be whatever colour the user desires it to be and it will be consistent with all other apps.
I'd spend more time making sure that the content of your dialog or whatever is clear, straightforward and aids the user in whatever it is they're trying to do.
Simon Lucy
Sunday, July 13, 2003
Of course, sometimes a non-standard UI is appropriate. Consider the example of Winamp. In my opinion it pulls off its task admirably using an impressively small amount of desktop real estate.
You might be dealing with folks who respond to unintuitive but pretty UI design with "ooo"s and "ahhh"s. Guess it depends on corporate culture.
Somehow I don't think you'd get praised at any company that knows its ass from its elbows.
Warren Henning.
Xtender
Monday, July 14, 2003
"Sometimes a non-standard UI is appropriate. Consider the example of Winamp."
You have conflicting standards... the Windows UI standard vs. the CD player UI standard. But usability still suffers because of the deviation from normal. You have to discover pieces of the UI that are non-intuitive.
"Microsoft goes for cool over usability all the time."
Microsoft has made some terrible usability decisions (tear off menu bars?). I don't hold them up as a model citizen in this regard, either. But also remember that some of that deviation is good, when they progress the standard to INCREASE usability. They are, after all, in the unique position of controlling the UI standards for the platform.
Brad Wilson (dotnetguy.techieswithcats.com)
Monday, July 14, 2003
You shouldn't say "colored".
You should say "African-American scrollbar"
Malcolm XI?
Or do you think that either is gross exageration, unlikely to be supported by facts?
It is important to improve the user's experience by providing useful information and powerful, convenient tools. What is considered useful, powerful, and convenient is a combination of human factors and the user's goals and responsibilities.
Presentation is much more subjectective and depends heavily on your target audience, stereotypes as well as individuals.
Just make sure the latter does not get in the way of the former.
Practical Geezer
Monday, July 14, 2003
I personally hate applications that decide on my color scheme. Either follow my win settings or have skins. If the later you better have a lot fo choices or you gonna get ditched fast.
tekumse
Tuesday, July?
Well, that happens because the information on Jakob Nielsen's web site IS rubbish.
Most of the information is about issues that were already decided years ago by Microsoft, when they made Windows 95/98/etc.
A program that does not follow the Windows conventions is thought of (by the users) as being unfriendly / weird / quirky.
So, when developing a program, it's of no use to discuss those questions again.
Maybe from someone building a new user interface from scratch (ie. not based on the current GUIs like Windows) the information on that site is useful.
But for modern developers working on an already existing GUI ... thanks, but no thanks!
Xtender
Monday, July 28, 2003
There is always a fine line, or a give and take when deciding on use vs looks.
If we really didn't care about looks, we'd see all graphics on the internet vanish.
If you look on the GNU website for example, the site is COMPLETELY easy to navigate. They've chosen the ugly times new roman font.. no grpahics.. that website is COMPLETELY useable.
There is always a fine line..
I think that most websites and projects out there could get done faster if we didn't rely on the looks.. but then again, when do you draw the line?
Do people really need all these toys they have today (including apps).. I don't think so. A lot of it is looks..
Even the REALLY practical people are into LOOKS some what. I mean look at all the crackers out there, who love text. They think text is great, because it's simple.. (just like say the GNU website). But why do these same people use TEXT ART all the time (in the NFO files and readmes)? if they are so practical... and they like text so much, and simplicity.. why go through the hassle of making graphics in text art?
It seems to be based on fads, silliness, etc. If the guy wants to create a colored bar.. so be it.. at least it will help me learn the basics of writing a delphi component.. it can be useful for some other information, not just color fads.
And yes, I do agree that in general, those websites with the colored scroll bars are very impractical. I draw the line there.. I like the very SQUARE windows 98 style stuff, but not as bulky as 98 ships default. I do not like all this linux roundish buttons and windows xp buttons.. and all the fancy junk linux have on their systems seems to be a paradox if they are unix users (for example their task bars area always bigger than windows 98 ones, and the programs always have flashy graphics, like the alternative emulator to xterm).. but then again look at Mac OS X which is a type of unix too.
I think the world is full of paradox. People have no idea what they are talking about or intending. People will bash others for looks, when they themselves are guilty of it.. every day. Don't tell me you'd really drive a dodge omni, even if it got better mileage than a good looking car.
Looks Matter
Tuesday, March 09, 2004
Recent Topics
Fog Creek Home | http://discuss.fogcreek.com/joelonsoftware2/default.asp?cmd=show&ixPost=56683&ixReplies=15 | CC-MAIN-2015-40 | refinedweb | 1,286 | 75.61 |
In groovy you could do something like this:
import java.io.StringReader import com.avoka.fc.core.util.CsvReader // presumably your data will come from a parameter instead of a file like I've done below def fileContent = new File('names.csv').getText() def csvReader = new CsvReader(new StringReader(fileContent)) // do something with headers def headers = csvReader.readHeaders() def headerRecord = csvReader.getRawRecord() while (csvReader.readRecord()) { def firstname = csvReader.get(0) def lastname = csvReader.get(1) println "firstname: $firstname, lastname $lastname" } csvReader.close()
Can anyone provide or point me in the direction of code or functionality that supports the parsing of data provided in a CSV file? For example, I have a form where the user can upload a single CSV attachment consisting of a header row then a data row(s). I need to be able to parse the CSV and extract data from certain columns/rows to do further processing.
Currently I am calling a Dynamic Data service and have worked out how to get the File Attachment but as a byte array. Ideally would like to use a CSV Parser of some type that better handles the skipping of a line, data matching, etc.
Many thanks in advance for any assistance! | https://support.avoka.com/kb/questions/34111589/parsing-data-provided-in-a-csv-attachment | CC-MAIN-2018-09 | refinedweb | 203 | 59.3 |
go to bug id or search bugs for
Description:
------------
I'm building a framework which has a custom gettext implementation since the gettext extension of PHP is poorly designed for my needs and it's not installed in all environments so relying on it reduces compatibility. The problem however is that it's installed in some environments and since it claims the global "_" function which is pretty much a gettext standard alias (and other gettext functions), it's preventing me from implementing the gettext standard. There are no possible way for me to solve this problem nicely today because PHP does not have the ability to either:
A. Override/undeclare/rename native functions. (Okay, I can do it via APD but that makes extension dependability even WORSE.)
B. Unloading extensions at runtime. (Most preferable... I don't want it at all)
C. Importing functions. (My framework uses namespaces. The fact that functions cannot be imported by the "use" keyword really spoils this feature. Otherwise it could actually have fix this problem.)
Note that this problem assumes a context where you can't control the environment in which you install your application in. This is a very real scenario for a lot of people including me. This is a practical problem, not a theoretical one.
Also note that declaring the "_" function in a namespace would be pointless:
1. it would no longer be compatible with the gettext standard
2. it would require refactoring of all existing string wrapped code
3. it would no longer be compatible with existing string wrapped code
4. a longer name like \foo\translate_lib\_() defeats the point of having a short function name
Another workaround is to declare a _ forwarding function in every possible namespace, but that solution is dumb and ugly.
As a temporary workaround I might declare something like t\s() but I don't like that solution and it doesn't solve 1, 2 and 3 above.
Test script:
---------------
/** EITHER A: */
undeclare_function("_");
/** OR B: */
unload_extension("gettext");
/** OR C: */
namespace foo;
use foo\translate_lib;
/** Test: */
// My gettext implementation.
function _($msgid) {
return translate($msgid);
}
echo _("hello");
Expected result:
----------------
bonjour
Actual result:
--------------
Fatal error: Cannot redeclare _()
Add a Patch
Add a Pull Request
extension unloading on a per-request basis simply isn't feasible from a
performance point of view. And you obviously can't unload and leave it unloaded
for the next request because that next request may be for a page that expects the
extension to be there.
Thanks for your reply. Yes, I understand that it's unrealistic to expect a feature that allows dynamic extension unloading. But how about allowing function importing with the "use" statement? Regards~
Some workaround notes: I'm currently solving this by the workaround previously described as "dumb and ugly" by a little twist. I'm dynamically adding forwarders for the functions that needs to be imported by using a routine that goes trough all namespace locations and creates a forwarder function by evaluating generated PHP code that looks something like:
namespace source_ns; function source_fn() { return call_user_func_array('target_ns\target_fn', get_func_args()) } ...
Eval is slow though... If I could populate the symbol table directly instead doing it by eval() this would be acceptable. What I want is dynamic function declaration like declare_function($name, function() { ... });
The eval method however works temporary since the low amount of function imports only makes this routine use a couple of ms.
I have a weird problem where extensions are apparently unloading dynamically when
I don't want them to!
At the beginning of a script I can call get_loaded_extensions and see 50+
extensions. Later on (in the same script) a few of them disappear, in particular
apc and memcache, so attempts to instantiate memcache clients or call apc_store
result in undefined class/function errors.
I didn't even think this was possible; I've no idea what mechanism is at work
here, and I don't know precisely when these extensions disappear, but they do. I
don't know if this is a bug/feature, or something you can use to fix the orginal
question!
Ignore last comment. Turned out to be due to inadvertent switching between two
local interpreters. | https://bugs.php.net/bug.php?id=53957 | CC-MAIN-2016-07 | refinedweb | 702 | 53.61 |
System: Vista
What I want to do is to catch the mouse release event, which I have researched for but nothing useful came up. I found out the way to catch the pressed event:
#include <windows.h> #include <iostream> int main() { // Grab a handle to the console inputbuffer HANDLE hConsole = GetStdHandle(STD_INPUT_HANDLE); DWORD dwRead = 0; INPUT_RECORD InputRec; // Start looping, waiting for messages while( TRUE ) { // See if there's any messages ReadConsoleInput(hConsole,&InputRec,1,&dwRead); if((InputRec.EventType == MOUSE_EVENT)&&(InputRec.Event.MouseEvent.dwButtonState & FROM_LEFT_1ST_BUTTON_PRESSED)) { printf("Left Mouse - Pressed.\n"); } } }
What I want to use it for is to catch the time elapsed between the 'pressed' and 'released' event. | https://www.daniweb.com/programming/software-development/threads/276744/mouse-event-pressed-released-c | CC-MAIN-2018-13 | refinedweb | 108 | 52.6 |
Can I pass html5 javascript action event to javafx?yhjhoo Nov 24, 2013 11:25 AM
Can I pass html5 javascript action event to javafx?
Instead of just displaying web page in javafx, can javafx interactive with html5 and javascript?
1. Re: Can I pass html5 javascript action event to javafx?jerry kramskoy Nov 25, 2013 11:58 AM (in response to yhjhoo)
Yes, this is possible.
Once a webpage has loaded, you need to add your java interface to the web engine ...
private void addUpCallInterface(WebEngine webEngine) {
JSObject win = (JSObject) webEngine.executeScript("window");
win.setMember("JFXInterface", new YourInterface());
/* as a result of this setMember, javascript can now contain statements like
* JFXInterface.uiClick()
*/
}
public class YourInterface {
/**
* javascript calls back here
*/
public void uiClick(String id) {
/* ... user clicked on some HTML5 ui element with id = "id" */
/* e.g. for the HTML below, id = "top" */
uiId = id;
}
}
In HTML ...
Be careful that the string you provide to win,setMember above is identical with the one you refer to inside javascript.
cheers, Jerry
2. Re: Can I pass html5 javascript action event to javafx?yhjhoo Nov 26, 2013 1:54 PM (in response to jerry kramskoy)
Hi Jerry
That's cool? How about javaFX pass event to html5 ?
Regards,
Hua Jie
3. Re: Can I pass html5 javascript action event to javafx?jerry kramskoy Nov 26, 2013 2:35 PM (in response to yhjhoo)
Hi Hua Lie
I haven't looked into passing events to HTML5, but you can certainly call javascript from JFX.
Once the page is loaded, you can then do something like
private void writeResult(String msg) {
if (msg != null) {
String script = "result(" + "'" + msg + "')";
webEngine.executeScript(script);
}
}
The above code must run on the JFX Application Thread,
Here, the web page has a javascript function result(s), which could edit the HTML5 to display a message in a div.
I've used this technique to set the timeline of an HTML5 video element, controlling the video from JFX.
It's also possible to integrate Swing with Web by using JFX.
cheers, Jerry | https://community.oracle.com/message/11282410?tstart=0 | CC-MAIN-2016-07 | refinedweb | 342 | 75 |
chromium
/
chromiumos
/
third_party
/
kernel
/
20eb4c4bcab0f7ce05051ed5b81ee33229e63565
/
.
/
drivers
/
block
/
Kconfig
blob: 7b2df7a54d8759f45e5467bb31a31a94500f7651 [
file
] [
log
] [
blame
]
# SPDX-License-Identifier: GPL-2.0
#
# Block device driver configuration
#
menuconfig BLK_DEV
bool "Block devices"
depends on BLOCK
default y
Say Y here to get to see options for various different block device
drivers. This option alone does not add any kernel code.
If you say N, all options in this submenu will be skipped and disabled;
only do this if you know what you are doing.
if BLK_DEV
config BLK_DEV_NULL_BLK
tristate "Null test block driver"
select CONFIGFS_FS
config BLK_DEV_FD
tristate "Normal floppy disk support"
depends on ARCH_MAY_HAVE_PC_FDC.
config AMIGA_FLOPPY
tristate "Amiga floppy support"
depends on AMIGA
config ATARI_FLOPPY
tristate "Atari floppy support"
depends on ATARI
config MAC_FLOPPY
tristate "Support for PowerMac floppy"
depends on PPC_PMAC && !PPC_PMAC64
If you have a SWIM-3 (Super Woz Integrated Machine 3; from Apple)
floppy controller, say Y here. Most commonly found in PowerMacs.
config BLK_DEV_SWIM
tristate "Support for SWIM Macintosh floppy"
depends on M68K && MAC
You should select this option if you want floppy support
and you don't have a II, IIfx, Q900, Q950 or AV series.
config AMIGA_Z2RAM
tristate "Amiga Zorro II ramdisk support"
depends on ZORRO.
config GDROM
tristate "SEGA Dreamcast GD-ROM drive"
depends on SH_DREAMCAST
select BLK_SCSI_REQUEST # only for the generic cdrom code.
config PARIDE
tristate "Parallel port IDE device support"
depends on PARPORT_PC> for more information.
If you have said Y to the "Parallel-port support" configuration
option, you may share a single port between your printer and other
parallel port devices. Answer Y to build PARIDE support into your
kernel, or M if you would like to build it as a loadable module. If
your parallel port support is in a loadable module, you must build
PARIDE as a module. If you built PARIDE support into your kernel,
you may still build the individual protocol modules and high-level
drivers as loadable modules. If you build this support as a module,
it will be called paride.
To use the PARIDE support, you must say Y or M here and also to at
least one high-level driver (e.g. "Parallel port IDE disks",
"Parallel port ATAPI CD-ROMs", "Parallel port ATAPI disks" etc.) and
to at least one protocol driver (e.g. "ATEN EH-100 protocol",
"MicroSolutions backpack protocol", "DataStor Commuter protocol"
etc.).
source "drivers/block/paride/Kconfig"
source "drivers/block/mtip32xx/Kconfig"
source "drivers/block/zram/Kconfig"
config BLK_DEV_DAC960
tristate "Mylex DAC960/DAC1100 PCI RAID Controller support"
depends on PCI.
config BLK_DEV_UMEM
tristate "Micro Memory MM5415 Battery Backed RAM support"
depends on PCI.
config BLK_DEV_UBD
bool "Virtual block device"
depends on UML
The User-Mode Linux port includes a driver called UBD which will let
you access arbitrary files on the host computer as block devices.
Unless you know that you do not need such virtual block devices say
Y here.
config BLK_DEV_UBD_SYNC
bool "Always do synchronous disk IO for UBD"
depends on BLK_DEV_UBD.
config BLK_DEV_COW_COMMON
bool
default BLK_DEV_UBD
config BLK_DEV_LOOP
tristate "Loopback device support".
config BLK_DEV_LOOP_MIN_COUNT
int "Number of loop devices to pre-create at init time"
depends on BLK_DEV_LOOP
default 8
Static number of loop devices to be unconditionally pre-created
at init time.
This default value can be overwritten on the kernel command
line or with module-parameter loop.max_loop.
The historic default is 8. If a late 2011 version of losetup(8)
is used, it can be set to 0, since needed loop devices can be
dynamically allocated with the /dev/loop-control interface.
config BLK_DEV_CRYPTOLOOP
tristate "Cryptoloop Support"
select CRYPTO
select CRYPTO_CBC
depends on BLK_DEV_LOOP.
source "drivers/block/drbd/Kconfig"
config BLK_DEV_NBD
tristate "Network block device support"
depends on NET.
config BLK_DEV_SKD
tristate "STEC S1120 Block Driver"
depends on PCI
depends on 64BIT
Saying Y or M here will enable support for the
STEC, Inc. S1120 PCIe SSD.
Use device /dev/skd$N amd /dev/skd$Np$M.
config BLK_DEV_SX8
tristate "Promise SATA SX8 support"
depends on PCI
Saying Y or M here will enable support for the
Promise SATA SX8 controllers.
Use devices /dev/sx8/$N and /dev/sx8/$Np$M.
config BLK_DEV_RAM
tristate "RAM block device support"
select DAX if BLK_DEV_RAM_DAX brd. An alias "rd" has been defined
for historical reasons.
Most normal users won't need the RAM disk functionality, and can
thus say N here.
config BLK_DEV_RAM_COUNT
int "Default number of RAM disks"
default "16"
depends on BLK_DEV_RAM
The default value is 16 RAM disks. Change this if you know what you
are doing. If you boot from a filesystem that needs to be extracted
in memory, you will need at least one RAM disk (e.g. root on cramfs).
config BLK_DEV_RAM_SIZE
int "Default RAM disk size (kbytes)"
depends on BLK_DEV_RAM
default "4096"
The default value is 4096 kilobytes. Only change this if you know
what you are doing.
config BLK_DEV_RAM_DAX
bool "Support Direct Access (DAX) to RAM block devices"
depends on BLK_DEV_RAM && FS_DAX
default n
Support filesystems using DAX to access RAM block devices. This
avoids double-buffering data in the page cache before copying it
to the block device. Answering Y will slightly enlarge the kernel,
and will prevent RAM block device backing store memory from being
allocated from highmem (only a problem for highmem systems).
config CDROM_PKTCDVD
tristate "Packet writing on CD/DVD media (DEPRECATED)"
depends on !UML
select BLK_SCSI_REQUEST
Note: This driver is deprecated and will be removed from the
kernel in the near future!.
config CDROM_PKTCDVD_BUFFERS
int "Free buffers for data gathering"
depends on CDROM_PKTCDVD
default "8"
This controls the maximum number of active concurrent packets. More
concurrent packets can increase write performance, but also require
more memory. Each concurrent packet will require approximately 64Kb
of non-swappable kernel memory, memory which will be allocated when
a disc is opened for writing.
config CDROM_PKTCDVD_WCACHE
bool "Enable write caching"
depends on CDROM_PKTCDVD
If enabled, write caching will be set for the CD-R/W device. For now
this option is dangerous unless the CD-RW media is known good, as we
don't do deferred write error handling yet.
config ATA_OVER_ETH
tristate "ATA over Ethernet support"
depends on NET
This driver provides Support for ATA over Ethernet block
devices like the Coraid EtherDrive (R) Storage Blade.
config SUNVDC
tristate "Sun Virtual Disk Client support"
depends on SUN_LDOMS
Support for virtual disk devices as a client under Sun
Logical Domains.
source "drivers/s390/block/Kconfig"
config XILINX_SYSACE
tristate "Xilinx SystemACE support"
depends on 4xx || MICROBLAZE
Include support for the Xilinx SystemACE CompactFlash interface
config XEN_BLKDEV_FRONTEND
tristate "Xen virtual block device support"
depends on XEN
default y
select XEN_XENBUS_FRONTEND
This driver implements the front-end of the Xen virtual
block device driver. It communicates with a back-end driver
in another domain which drives the actual block device.
config XEN_BLKDEV_BACKEND
tristate "Xen block-device backend driver"
depends on XEN_BACKEND.
config VIRTIO_BLK
tristate "Virtio block driver"
depends on VIRTIO
This is the virtual block driver for virtio. It can be used with
QEMU based VMMs (like KVM or Xen). Say Y or M.
config VIRTIO_BLK_SCSI
bool "SCSI passthrough request for the Virtio block driver"
depends on VIRTIO_BLK
select BLK_SCSI_REQUEST
Enable support for SCSI passthrough (e.g. the SG_IO ioctl) on
virtio-blk devices. This is only supported for the legacy
virtio protocol and not enabled by default by any hypervisor.
You probably want to use virtio-scsi instead.
config BLK_DEV_RBD
tristate "Rados block device (RBD)"
depends on INET && BLOCK
select CEPH_LIB
select LIBCRC32C
select CRYPTO_AES
select CRYPTO
default n
Say Y here if you want include the Rados block device, which stripes
a block device over objects stored in the Ceph distributed object
store.
More information at.
If unsure, say N.
config BLK_DEV_RSXX
tristate "IBM Flash Adapter 900GB Full Height PCIe Device Driver"
depends on PCI
Device driver for IBM's high speed PCIe SSD
storage device: Flash Adapter 900GB Full Height.
To compile this driver as a module, choose M here: the
module will be called rsxx.
endif # BLK_DEV | https://chromium.googlesource.com/chromiumos/third_party/kernel/+/20eb4c4bcab0f7ce05051ed5b81ee33229e63565/drivers/block/Kconfig | CC-MAIN-2020-24 | refinedweb | 1,328 | 54.32 |
Understanding Zones
Applies To: Windows Server 2008, Windows Server 2008 R2
In addition to dividing your Domain Name System (DNS) namespace into domains, you can also divide your DNS namespace into zones that store name information about one or more DNS domains. A zone is the authoritative source for information about each DNS domain name that is included in the zone..
For example, the following illustration shows the microsoft.com domain, which contains domain names for Microsoft. When the microsoft.com domain is first created at a single server, it is configured as a single zone for all of the Microsoft DNS namespace. If, however, the microsoft.com domain must use subdomains, those subdomains must be included in the zone or delegated away to another zone.
In this illustration, the example.microsoft.com domain has a new subdomain—the example.microsoft.com domain—delegated away from the microsoft.com zone and managed in its own zone. However, the microsoft.com zone must it is managed by the microsoft.com zone.
Zone replication and transfers copies of the zone that are used at each server that is configured to host the zone.
When a new DNS server is added to the network and it is configured as a new secondary server for an existing zone, it performs a full initial transfer of the zone to obtain and replicate a full copy of resource records for the zone. Most earlier DNS server implementations use this same method of full transfer for a zone when the zone requires updating after changes are made to the zone. For DNS servers running Windows Server 2003 and Windows Server 2008, the DNS Server service supports incremental zone transfer, a revised DNS zone transfer process for intermediate changes. Incremental transfers provide a more efficient method of propagating zone changes and updates. Unlike in earlier DNS implementations in which any request for an update of zone data required a full transfer of the entire zone database, with incremental transfer the secondary server can pull only those zone changes that it needs to synchronize its copy of the zone with its source, either a primary or secondary copy of the zone that is maintained by another DNS server. | https://technet.microsoft.com/en-us/library/cc725590 | CC-MAIN-2015-22 | refinedweb | 369 | 53.31 |
I had a similar situation with one of my services Dimuthu...
It doesn't seem to affect the operation of the service on when using Axis at
both ends but is a little unsightly... To make sure the xmlns is given a
value, when you add the parameter to the call, make sure you use a QName to
add the parameter name instead of just a string - this will allow you to
specify your namespace too.
I am not sure whether it is possible to remove the declaration altogether...
Anybody else on this list know?
Jim
-----Original Message-----
From: Dimuthu Leelarathne [mailto:muthulee@yahoo.com]
Sent: 01 September 2003 10:21
To: axis-user@ws.apache.org
Subject: xmlns=" " in the doc\literal SOAP message
Hi all,
I'm trying to write a doc\literal web service and my soap message appears
as below.........
<query xmlns="urn:HistorySriLanka">
<description xmlns="">Wood carving of an Elephant</description>
<ItemId xmlns="">ER234</ItemId>
</query>
Has anybody else has come across a situation like this? Is it ok for empty
xmlns="" tags to go in the wire? Any help is greatly appreciated.
Thank you,
Dimuthu
----------------------------------------------------------------------------
--
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software | http://mail-archives.apache.org/mod_mbox/axis-java-user/200309.mbox/%3CHLEMLCHAFNNBECBADKCEOEFJCFAA.JimHarris@blueyonder.co.uk%3E | CC-MAIN-2017-09 | refinedweb | 204 | 65.52 |
.
>control userpasswords2.
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList
SpecialAccounts
I also checked HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, and while it does have the SystemProfile, LocalService, and NetworkService accounts listed, it does not have others (like TrustedInstaller and its ilk).
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList.
runas
I don't think there is an ultimate list of all possible accounts.
There are different types of names you can use in the user input-field such as in permissions dialogs.
First up are standard Win32_Accounts, to get a full list open a PowerShell session and run:
get-wmiobject -class "win32_account" -namespace "root\cimv2" | sort caption | format-table caption, __CLASS, FullName
These are the usual users, groups and the builtin accounts.
Since Vista, there is a new class of accounts, called virtual accounts, because they do not show up in the usual management tools.
There are sometimes called service accounts as well, and there are at least three different types of these:
Since Vista every windows service has an virtual account associated with it, even it it runs under a different user account and
even if it does not run at all. It looks like NT Service\MSSQLSERVER
NT Service\MSSQLSERVER
To get a list of those use:
get-service | foreach {Write-Host NT Service\$($_.Name)}
Each IIS application pool that runs under the ApplicationPoolIdentity runs under a special account called IIS APPPOOL\NameOfThePool
IIS APPPOOL\NameOfThePool
Assuming you have the IIS Management scripting tools installed, you can run:
Get-WebConfiguration system.applicationHost/applicationPools/* /* | where {$_.ProcessModel.identitytype -eq 'ApplicationPoolIdentity'} | foreach {Write-Host IIS APPPOOL\$($_.Name)}
On Server 2008+ and Windows 8+ you have Hyper-V, each virtual machine creates it own virtual account, which looks like:
NT VIRTUAL MACHINE\1043F032-2199-4DEA-8E69-72031FAA50C5
NT VIRTUAL MACHINE\1043F032-2199-4DEA-8E69-72031FAA50C5
to get a list use:
get-vm | foreach {Write-Host NT VIRTUAL MACHINE\$($_.Id) - $($_.VMName)}
Ever though these accounts are not accepted in the permissions dialog, you can use them with icacls.exe to set permissions.
There is also a special group NT Virtual Machine\Virtual Machines, which doesn't show up elsewhere. All of the virtual machine accounts are members of this group, so you can use this to set permissions for all VM files.
NT Virtual Machine\Virtual Machines
These names are language specific, e.g. in German it is named NT Virtual Machine\Virtuelle Computer
NT Virtual Machine\Virtuelle Computer
The dvm.exe process (Desktop Window Manager) runs under a user Windows Manager\DWM-1
Windows Manager\DWM-1
Again you can not use this type of users in the permissions dialogs. It is not really possible to enumerate these either because one exists for each 'Desktop session', so when using two RDP sessions, you also have DWM-2 and DWM-3 in addition to DVM-1. So there are as many as there are desktops available.
DWM-2
DWM-3
DVM-1
In certain cases you can also use computer names in the permissions dialog, usually when being part of an Active Directory domain..
FooBarPool
IIS APPPOOL\FooBarPool
everyone
restricted
NT Service\*
TrustedInstaller
Get-WebConfiguration system.applicationHost/applicationPools/add
You can use NetQueryDisplayInformation API, combine with bitwise check on user info flag. I have exactly same requirement, so I cook a sample code (modified from MSDN GROUP query).
The user flag I used are UF_NORMAL_ACCOUNT UF_ACCOUNTDISABLE UF_PASSWD_NOTREQD ---> this ensure we get Human account, Human account always requires password.
working code at:
From Windows Vista on, services are treated like users. That is, each a SID is assigned to every service. This is not specific to TrustedInstaller service. You can view the SID assigned to any service using the sc showsid command:
sc showsid
USAGE: sc showsid [name]
DESCRIPTION: Displays the service SID string corresponding to an arbitrary name. The name can be that of an existing or non-existent service.
USAGE: sc showsid [name]
DESCRIPTION: Displays the service SID string corresponding to an arbitrary name. The name can be that of an existing or non-existent service.
Note that there is no need for the service to exist on the system. Examples:
C:\> sc showsid TrustedInstaller
NAME: TrustedInstaller
SERVICE SID: S-1-5-80-956008885-3418522649-1831038044-1853292631-2271478464
or, for the service Windows Management Instrumentation (Winmgmt):
Winmgmt
C:\> sc showsid Winmgmt
NAME: Winmgmt
SERVICE SID: S-1-5-80-3750560858-172214265-3889451188-1914796615-4100997547
and, finally, for a fake service:
C:\> sc showsid FakeService
NAME: FakeService
SERVICE SID: S-1-5-80-3664595232-2741676599-416037805-3299632516-2952235698
Note that all SIDs start with S-1-5-80, where 80 is assigned to SECURITY_SERVICE_ID_BASE_RID sub-authority. Moreover, this assignment is deterministic: No RIDs are used, and the SID will be the same across all systems (see the references at the end of this post for more information).
S-1-5-80
80
SECURITY_SERVICE_ID_BASE_RID
As an example, I will assign the NT Service\Winmgmt service, write permission to some file:
NT Service\Winmgmt
Windows underlines the name Winmgmt, confirming that it's a valid identity:
Now, click OK, and then assign the write permission:
This confirms that any service name can be used as a user identity. Therefore, I wouldn't call them "supper-hidden" accounts :D
For more information, please read the following articles:
This is because TrustedInstaller is a service and not a "user" object. With Vista, Services are now security principals and can be assigned permissions.
Go to the security tab and click Edit
Edit
Add...
Click Advanced...
Advanced...
Click Object Types... and uncheck Groups, then click OK
Object Types...
Groups
OK
Click Find Now. This will list all regular users and built-in system users ("built in security principles", as Windows calls them).
Find Now
Note that not all accounts that appear on this page can be used in a Run-As command, though they can all be used in a permissions dialog.
SYSTEM
find now
By posting your answer, you agree to the privacy policy and terms of service.
asked
5 years ago
viewed
42279 times
active
8 months ago | http://superuser.com/questions/248315/list-of-hidden-virtual-windows-user-accounts?answertab=active | CC-MAIN-2016-18 | refinedweb | 1,018 | 51.58 |
A combination of the power of Hadoop and the speed of Lustre bodes well for enterprises. In this article, the author shows how Hadoop can be set up over Lustre. So those who love to play around with software, heres your chance!
Hadoop is a large-scale, distributed, open source framework for the parallel processing of data on large clusters built out of commodity hardware. While it can be used on a single machine, its true power lies in its ability to scale to hundreds or thousands of computers, each with several processor cores. Hadoop is also designed to efficiently distribute large amounts of work across a set of machines.
Hadoop is built in two main partsa special file system called Hadoop Distributed File System (HDFS) and the MapReduce Framework. HDFS is an optimised file system for distributed processing of very large data sets on commodity hardware. HDFS stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Hadoop implements a computational paradigm named MapReduce, by which the application is divided into many small fragments of work, which can each be executed or re-executed on any node in the cluster. Both MapReduce and the HDFS are designed so that node failures are automatically handled by the framework.
Hadoop runs on the master-slave architecture. An HDFS cluster consists of a single Namenode, a master server that manages the file systems namespace and regulates access to files by clients. There are a number of DataNodes, usually one per node in a cluster. The DataNodes manage storage attached to the nodes that they run on. HDFS exposes a file systems namespace and allows user data to be stored in files. A file is split into one or more blocks and a set of blocks are stored in DataNodes. DataNodes serve read and write requests, and perform block creation, deletion and replication upon instruction from Namenode.
Lustre, on the other hand, is an open source distributed parallel file system. It is a scalable, secure, robust and highly-available cluster file system that addresses I/O needs such as low latency and the extreme performance of large computing clusters. Lustre is basically an object-based file system. It is composed of three functional components: metadata servers (MDSs), object storage servers (OSSs) and clients. An MDS provides metadata services. It stores file system metadata such as file names, directories and permissions. The availability of the MDS is critical for file system data. In a typical configuration, there are two MDSs configured for high-availability failover. Since an MDS stores only metadata, the storage or metadata target (MDT) attached to the file system need only store hundreds of gigabytes for a multi-terabyte file system. One MDS per file system manages one MDT. Each MDT stores file metadata, such as file names, directory structures and access permissions.
Why Lustre and not HDFS?
The Hadoop Distributed File System works well for general statistical applications, but it might exhibit performance bottlenecks for complex computational applications like HPC or high performance computing-based applications that generate large, even increasing outputs. Second, HDFS is not POSIX-compliant, which means it cannot be used as a normal file system, which makes it difficult to extend. Also, HDFS has a WORM (write-once, read-many) access model, so changing even a small part of a file requires that all file data be copied, resulting in very time-intensive file modifications. Hadoop implements a computational paradigm named MapReduce, where the Reduce node uses HTTP to shuffle all related big Map Task outputs before the real task begins. This consumes a massive amount of resources and generates a lot of I/O and merge spill operation.
Setting up the Metadata server
I assume that you have CentOS/RHEL 6.x installed on your hardware. I have RHEL 6.x available and will be using it for this demonstration. This should work for CentOS 6.x versions too. The firewall and SELinux both need to be disabled. You can disable the firewall using the iptables command, whereas SELinux can be disabled by changing it in the file /etc/sysconfig/selinux
SELINUX=disabled
Next, lets install Lustre-related packages through the Yum repository.
#yum install lustre
Reboot the machine to boot into the Lustre kernel. Once the machine is up, verify the Lustre kernel through the following command:
#uname -arn Linux MDS 2.6.32-279.14.1.el6_lustre.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
Create a LVM partition on /dev/sda2 (ensure this is a LVM partition type at your end). Refer to Figure 3.
#pvcreate /dev/sda2 #vgcreate vg00 /dev/sda2 #lvcreate name vg00/mdt size 6G #ls -al /dev/vg00
Run the mkfs.lustre utility to create a file system named Lustre on the server including the metadata target and the management server.
Now, create a mount point and start the MDS node as shown below:
# mkdir /mdt # mount -t lustre /dev/vg00/mdt /mdt
Run the mount command to verify the overall setting:
# mount
Ensure the MDS related services are enabled:
[root@MDS ~] # modprobe lnet [root@MDS ~] # lctl network up [root@MDS ~] # lctl list_nids 192.168.1.185@tcp
This completes MDS configuration.
Setting up the Object Storage Server (OSS1)
An Object Storage Server needs to be set up on a separate machine. I assume Lustre has been installed on a separate box and booted into the Lustre kernel. As we did earlier for MDS, we need to create several logical volumes, namely, ost1 to ost6, as shown in Figure 4.
Use the mkfs.lustre command to create the Lustre file systems as shown below:
[root@oss1-0 ~] # mkfs.lustre --fsname lustre --ost --mgsnode=192.168.1.185@tcp0 /dev/vg00/ost1
Run the above command for ost1 to ost6, in a similar way. Verify the various logical volumes created, as shown below:
#mount t lustre /dev/vg00/ost1 /mnt/ost1 mkfs.lustre --fsname lustre --ost --mgsnode=192.168.1.185@tcp0 /dev/vg00/ost2 mkfs.lustre --fsname lustre --ost --mgsnode=192.168.1.185@tcp0 /dev/vg00/ost3 mkfs.lustre --fsname lustre --ost --mgsnode=192.168.1.185@tcp0 /dev/vg00/ost4 mkfs.lustre --fsname lustre --ost --mgsnode=192.168.1.185@tcp0 /dev/vg00/ost5 mkfs.lustre --fsname lustre --ost --mgsnode=192.168.1.185@tcp0 /dev/vg00/ost6
Its time to start the OSS by mounting the OSTs to the corresponding mount point:
#mount t lustre /dev/vg00/ost2 /mnt/ost2 #mount t lustre /dev/vg00/ost3 /mnt/ost3 #mount t lustre /dev/vg00/ost4 /mnt/ost4 #mount t lustre /dev/vg00/ost5 /mnt/ost5 #mount t lustre /dev/vg00/ost6 /mnt/ost6
Finally, the mount command will display the logical volumes, as shown in Figure 5.
# mount
Verify the relative device displays as shown:
# cat /proc/fs/lustre/devices
This completes the OSS1 configuration.
Follow similar steps for OSS2 (as shown above). It is always recommended that you perform the striping over all the OSTs by running the following command on Lustre Client:
#lfs setstripe -c -1 /mnt/lustre
Setting up Lustre Client #1
All clients mount to the same file system identified by the MDS. Use the following commands, specifying the IP address of the MDS server:
#mount t lustre 192.168.1.185@tcp0:/lustre /mnt/lustre
You can use the lfs utility to manage the entire file system information at the client system (as shown in Figure 6).
The figure shows that the overall file system size of /mnt/lustre is around 70GB. Striping of data is an important aspect of the scalability and performance of the Lustre file system. The data gets striped over the blocks of multiple OSTs. The stripe count can be set on a file system, directory or file level.
You can view the striping details by using the following command:
[root@lustreclient1 ~] # lfs getstripe /mnt/lustre /mnt/lustre Stripe_count: 1 stripe size: 1048576 stripe_offset: -1
The Lustre set-up is now ready. Its time to run Hadoop over Lustre instead of HDFS.
I assume that Hadoop is already running with one name node and four data nodes.
On the master node, lets perform the following file configuration changes. Open the files /usr/local/hadoop/conf/core-site.xml and /usr/local/hadoop/conf/mapred-site.xml in any text editor and make changes as shown in Figures 7 and 8.
On every slave node (data nodes), lets perform the configuration changes as done above.
Once the configuration is done, we are good to start the Hadoop-related services.
On the master node, run the mapred service without starting HDFS (since we are going to use only Lustre) as shown in Figure 9.
You can ensure the service runs through the jps utility as follows:
[root@lustreclient1 ~]# jps 20112 Jps 15561 Jobtracker [root@lustreclient1 ~]#
Start the tasktracker on the slave nodes through the following command:
[root@lustreclient2 ~]# bin/hadoop-daemon.sh start tasktracker
You can now just run a simple hadoop word count example (as shown in Figure 10).
References
[1]
[2]
[3]
Good one Ajeet. However, being an SME in HPC arena, I do understand that
performance of lustre comes with its own set of problems.. We have
issues like the OSTs becoming readonly and sometimes the hang is not
released until the entire cluster is rebooted. Although, intel has tried
to fix a number of problems associated with older versions not sure these problems were fixed.
[…] monitoring and analytics. Being an extensible platform, the project integrates elements from Apache Hadoop and enables rapid detection and response using machine learning and traditional […] | https://opensourceforu.com/2015/03/running-apache-hadoop-over-lustre/ | CC-MAIN-2019-13 | refinedweb | 1,599 | 56.15 |
execl()
Execute a file
Synopsis:
#include <process.h> int execl( const char * path, const char * arg0, const char * arg1, … const char * argn, NULL );.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The execl()
If you call this function from a process with more than one thread, all of the threads are terminated and the new executable image is
Returns:
When execl() is successful, it doesn't return; otherwise, it returns -1 (errno is set).
Errors:
- E2BIG
- The argument list and the environment is larger than the system limit of ARG_MAX bytes.
- EACCES
- image file has the correct access permissions, but isn't in the proper format.
- ENOMEM
- There's insufficient memory available to create the new process.
- ENOTDIR
- A component of path isn't a directory.
- EPERM
- The calling process doesn't have the required permission; see procmgr_ability() .
- ETXTBSY
- The text file that you're trying to execute is busy (e.g. it might be open for writing).
Examples:. | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/e/execl.html | CC-MAIN-2020-10 | refinedweb | 172 | 57.57 |
Hi,
as some of you may have noticed, Lucene prefers shorter documents over
longer ones, i.e. shorter documents get a higher ranking, even if the
ratio "matched terms / total terms in document" is the same.
For example, take these two artificial documents:
doc1: x 2 3 4 5 6 7 8 9 10
doc2: x x 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
When searching for "x" doc1 will get a higher ranking, even though "x"
makes up 1/10 of the terms in both documents.
Using this similarity implementation seems to "fix" that:
class MySim extends DefaultSimilarity {
public float lengthNorm(String fieldName, int numTerms) {
return (float)(1.0 / numTerms);
}
public float tf(float freq) {
return (float)freq;
}
}
It's basically just the default implementation with Math.sqrt() removed. Is
this the correct approach? Are there any problems to expect? I just tested
it with the documents cited above.
The use case is that I want to boost fields, e.g. "body:foo^2 title:blah".
This could lead to strange results if title is already preferred just
because it's shorter.
Regards
Daniel
--
---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org | http://mail-archives.apache.org/mod_mbox/lucene-java-user/200608.mbox/%3C200608150220.26216@danielnaber.de%3E | CC-MAIN-2014-15 | refinedweb | 217 | 63.8 |
Obtains the imaginary part of a complex number
#include <complex.h>
double cimag( double complex z );
float cimagf( float complex z );
long double cimagl( long double complex z );
A complex number is represented as two floating-point numbers, one quantifying the real part and one quantifying the imaginary part. The cimag( ) function returns the floating-point number that represents the imaginary part of the complex argument.
double complex z = 4.5 - 6.7 * I;
printf( "The complex variable z is equal to %.2f %+.2f \xD7 I.\n",
creal(z), cimag(z) );
This code produces the following output:
The complex variable z is equal to 4.50 -6.70 x I.
cabs( ), creal( ), carg( ), conj( ), cproj( ) | http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-17-33.html | CC-MAIN-2018-43 | refinedweb | 115 | 68.06 |
07 October 2010 06:07 [Source: ICIS news]
By Nurluqman Suratman
SINGAPORE (ICIS)--Thailand’s Industry Ministry has handed oil and gas major PTT the operating licenses needed to run most of its stalled projects in Mab Ta Phut, including the firm’s 6th gas separation (GSP) unit, a company source and analysts said on Thursday.
“We have been given the operating permit for the gas separation plant on 24 September,” a company source told ICIS.
PTT was looking to start full production at the GSP unit by the end of the year, the source added.
Affiliate PTT Chemical have also been handed permits to start running its 300,000 tonne/year high density polyethylene (HDPE) expansion project and its 300,000 tonne/year low density polyethylene (LDPE) projects, said Sutthichai Kumworachai of brokerage house KGI Securities in Bangkok.
“The government gave a go for HDPE expansion on the last week of September,” Kumworachai said.
However, the firm’s 95,000 tonne/year monoethylene glycol (MEG) expansion project in Mab Ta Phut would remain suspension under the 2 September ruling which freed 74 out of 76 projects that were suspended on environmental grounds by the ?xml:namespace>
It would take a month or longer for PTT to complete trial runs and tests at the GSP unit after receiving an operating permit, analysts had previously said.
The company’s 6th GSP plant would feed ethane gas feedstock to PTT’s 1m tonne/year cracker, which was currently running at 75% capacity.
The cracker would then feed ethylene feedstock to an integrated production complex that produces 300,000 tonnes/year of LDPE and 400,000 tonnes/year of linear low density polyethylene (LLDPE), according to the company source.
PTT’s 6th GSP would also provide propane gas feedstock to a 400,000 tonnes/year polypropylene (PP) plant belonging to affiliate HMC Polymer via the cracker, the source added.
Downstream units belonging to PTT Chemical would begin full production soon after the GSP comes on stream, analysts said.
The other projects were probably given the operating permits around the same time as the 6th GSP plant, according to Naphat Chantaraserekul, an analyst at brokerage firm DBS Securities.
PTT could not immediately confirm when the permits for the PTT Chemical projects were given out by the country’s industry ministry.
“While the news about PTT Chemical being given the permits to run was made public, they have been quiet about the permit for PTT’s gas separation unit as it would spark more protests from NGOs (Non-Governmental Organisations),” Chantaraserekul said.
Environmental activists filed a lawsuit against PTT and other companies with projects in Mab Ta Phut earlier this week in a move to urge the government to review its list of 11 types of harmful industrial activities, which paved the way for the initial court suspensions in Mab Ta Phut to be removed, he said.
“This [the lawsuit] will not have any major impact on the GSP as it does not qualify as a harmful activity under the list. Whether the Central court agrees to take up the case or not will not matter to PTT,” Chantaraserekul said.
“However, any decision to follow up on the case will impact other companies,” he added.
Meanwhile, PTT Chemical is currently conducting health and environmental assessments on its suspended MEG expansion project and was scheduled to conduct a public hearing by the end of the year, Chantaraserekul said.
“The assessments and the public hearing is a requirement by law and they need to complete them before the court could approve them to operate,” he said. | http://www.icis.com/Articles/2010/10/07/9399318/ptt-gets-operating-permit-for-gas-separation-downstream-units.html | CC-MAIN-2014-15 | refinedweb | 599 | 50.8 |
the latest C# version?.
dotUltimate is a single license that allows a single developer to use these JetBrains tools:
dotUltimate license also covers plugins for dotCover and dotTrace in Rider.
ReSharper now supports the new Visual Studio 2022 release build. You will get the same rich feature set you had in other Visual Studio versions, but since Visual Studio 2022 is a x64 process, it is not limited to a maximum allocation of 3 GB of RAM memory. All the ReSharper features work faster as a result.
C# 10 has been released recently, and ReSharper continues to add support for more C# 10 features. Today, we are happy to announce support for file-scoped namespaces, global usings, the CallerArgumentExpression attribute, the “interpolated string handlers” concept, and C# 10 lambdas.
Now you can call Find Usages for user-defined implicit conversion operators. This allows you to find out whether user-defined implicit conversion operators are used at all, and then navigate to blocks of code with conversions.. | https://www.jetbrains.com/resharper/ | CC-MAIN-2022-05 | refinedweb | 166 | 53.1 |
WebVR in Progressive Web Apps
As of the Windows 10 April 2018 Update (version 1803, build 17134, EdgeHTML 17), WebVR is supported in Progressive Web Apps (PWAs). PWAs combine the best of the web and native apps, allowing you to take your existing websites and publish them to the Microsoft Store as Windows 10 applications. By adding WebVR functionality to provide deeper, more immersive experiences, you can create PWAs that are exceptionally engaging.
This article extends the tutorial in Get started with Progressive Web Apps, and will show you how to add WebVR to your PWA (or other type of web app) using the Babylon.js library.
Add WebVR to a PWA with Babylon.js
Babylon.js makes it easy to create WebVR experiences, and in the following example we will use it to add a simple WebVR experience to an existing PWA. We will be using the default project that is created in Visual Studio with the Basic Node.js Express 4 Application template. While we will be using Node.js, you can apply this to other web application frameworks as well.
Though you can use any web development IDE and framework you prefer, to follow along with this tutorial, you will need the following:
- Visual Studio 2017 (any edition—Community is free)
- When installing, make sure to select the Universal Windows Platform development and Node.js development workloads. If you've already installed Visual Studio 2017, you can open the Visual Studio Installer and click Modify under your installation to install the workloads.
- Either a working PWA (see Get started with Progressive Web Apps for info on how to create one) or a simple web app. If you have neither of these, follow the next section to create one.
- A Windows Mixed Reality immersive headset
Create the web app
We will be using the default project created with the Basic Node.js Express 4 Application template. To create it, in Visual Studio:
Select File > New > Project...
In the New Project window, in the left sidebar, select Installed > JavaScript > Node.js > Basic Node.js Express 4 Application. Name your app, select a Location, and click OK.
Press F5 to run the app, and you should see a simple webpage appear.
Add Babylon.js
Now we will add a simple 3D experience to our web app using Babylon.js.
In the Solution Explorer, under your project, find the public > javascripts folder. Right-click it and select Add > New Item...
In the Add New Item window, select JavaScript file, give it a Name (for example, main.js), and click Add.
Add the following code to the file. This code initializes Babylon.js and creates a simple scene:
// Get the canvas element var canvas = document.getElementById("renderCanvas"); // Generate the BABYLON 3D engine var engine = new BABYLON.Engine(canvas, true); var createScene = function () { // Create the scene space var scene = new BABYLON.Scene(engine); // Add a camera to the scene and attach it to the canvas var camera = new BABYLON.ArcRotateCamera("Camera", Math.PI / 2, Math.PI / 2, 2, BABYLON.Vector3.Zero(), scene);); return scene; }; var scene = createScene(); //Call the createScene function engine.runRenderLoop(function () { // Register a render loop to repeatedly render the scene scene.render(); }); window.addEventListener("resize", function () { // Watch for browser/canvas resize events engine.resize(); });
In layout.pug (in the views folder), add the following code to the
headblock. This adds the scripts necessary for Babylon.js:
script(src="") script(src="") script(src="")
Replace the code in the
bodyblock with the following code:
canvas(id="renderCanvas", touch-action="none") script(src='/javascripts/main.js')
In main.css (under public > stylesheets), remove the
bodyblock and add the following code:
html, body { overflow: hidden; width: 100%; height: 100%; margin: 0; padding: 0; } #renderCanvas { width: 100%; height: 100%; }
In index.js (in the routes folder), replace the
res.rendercall (line 7) with the following:
res.render('index');
In index.pug (in the views folder), remove lines 3-5 (the
blocksection).
Run the app, and you should see a 3D sphere appear. You should also be able to rotate around it with the left mouse button.
Add WebVR
Now that we have a 3D Babylon.js experience working in our PWA, we can easily add WebVR support.
In main.js, replace the
createScenefunction (should start on line 7) with the following code:
var createScene = function () { // Create the scene space var scene = new BABYLON.Scene(engine); // Add a camera to the scene and attach it to the canvas var camera = new BABYLON.WebVRFreeCamera("Camera", new BABYLON.Vector3(0, 0, 0), scene); scene.onPointerDown = function () { scene.onPointerDown = undefined;); sphere.position.z = 10; return scene; };
Basically, we are changing the camera to work in VR (the
WebVRFreeCamera), making it so the user must click the screen to start the experience, and repositioning the sphere to be in front of the viewer.
Run the app. At first, you will just see a blank screen. Attach your Windows Mixed Reality immersive headset, open the Mixed Reality Portal, and click the screen with the mouse. Put on your headset, and you should see the scene we just created in VR!
Going further
Using WebVR together with the capabilities of a PWA can yield some great benefits. For example, you could use service workers to cache your Babylon.js script files for offline access. Additionally, access to WinRT APIs opens up many more possibilities, including using the Windows.UI.Input.Spatial namespace and other MR-specific APIs.
Another benefit of Progressive Web Apps is that they can be published to the Microsoft Store, where they will have a potential audience the size of the entire Windows 10 install base. To learn more about publishing your PWA to the Store, see Progressive Web Apps in the Microsoft Store.
See also
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/microsoft-edge/webvr/webvr-in-pwas | CC-MAIN-2019-22 | refinedweb | 961 | 58.99 |
More Swift on Linux
Editor’s Note: This article was written on December 6, 2015, days after Apple open-sourced Swift and made an Ubuntu distribution of the Swift compiler available. All of the techniques used below should be forward compatible, however, there may be easier ways of doing things in the future as the Foundation classes are implemented. Apple has posted a status page that outlines what works and what doesn’t.
Using Glibc Routines with Swift
As I mentioned in Swift on Linux!, the Foundation classes that Objective-C and Swift developers have come to know and love are only partially implemented. And by partially implemented I really mean hardly implemented. Okay,
NSError is there and a few others, but no
NSURL,
NSURLSession, etc.
What is there is the wealth of routines from the GNU C Library, also known as Glibc. You know, the library of rotuines you’d look up with a man page. Functions like
popen and
fgets,
getcwd and
qsort. Swift won’t be displacing Python any time soon if this is all we’re left to work with, but you can do something useful and begin exploring the possibilities of intermixing C with Swift. In this tutorial we’ll do exactly that and write up some Swift code that uses
popen to spawn
wget to make up for the lack of
NSURLSession.
So let’s get stuck in and write some Swift.
Swift cat
Create a file named
swiftcat.swift and add the following code:
To get access to all of the Glibc routines we use
import Glibc. Easy enough. Swift 2 brought us the
guard construct, so we’ll use that to ensure that we have an argument to our script. Our first exposure to using a Glibc function is
exit(-1). That’s right, nothing special about calling it, it is just the
void exit(int status) function.
We’re going to cheat a bit and leverage the
/bin/cat command to read the file and write to standard out. To call it though we’ll use
popen which will pipe us a stream of bytes that we can read with
fgets. There is one thing to notice here, and that is that Glibc routines which take
const char* arguments can be given Swift
Strings directly. Routines that take
char*, as in the case of
fgets require some finesse.
fgets does take a
char*, so we cannot pass it a
String, but rather will use a buffer allocated as a
[CChar] (C char) array. The array has a fixed size of 1024 and is initialized with zeroes. Our
while loop calls
fgets with the stream pointer, and non-
nil results contain a buffer from which we can create a Swift
String.
Go ahead and save this to a file called
swiftcat.swift and then run it!
# swift swiftcat.swift Usage: swiftcat FILENAME
Pass it a file to get the equivalent of
cat output!
Mixing in C
You aren’t limited to using Glibc routines with your Swift code. Let’s say we want to use
libcurl to escape some strings and get them ready to be included in a URL. This is easy to do with libcurl.
In a file called
escapetext.c put the following:
Make sure you have
libcurl installed with
apt-get install -y libcurl4-gnutls-dev.
Now, compile the file with:
clang -D__TEST__ -o escapetext escapetext.c -lcurl
We include the
-D__TEST__ here to pick up the
main function. In a minute I’ll show you how to take this routine and include it in a Swift application. Run the C application:
# ./escapetext "hey there\!" Escaped text: hey%20there%21
Easy enough. Now, we want to write a Swift application that uses our C routine
escapeText. The first thing to do is compile an
escapetext.o object file without the
-D__TEST__ flag set. This will get rid of
main().
clang -c escapetext.c
Now, create a file called
escapetext.h and put the function prototype in it.
Write a new file called
escapeswift.swift and add the following:
Compile this Swift code with:
swiftc -c escapeswift.swift -import-objc-header escapetext.h
Notice that we included
-import-objc-header escapetext.h. Without this header the Swift compiler won’t be able to find the prototype for
escapeText and will subsequently fail with
use of unresolved identifier.
Bringing it all together, we link our
escapeswift.o and
escapetext.o objects together, and pass in the Curl library.
swiftc escapeswift.o escapetext.o -o escapeswift -lcurl
And run it!
# ./escapeswift "how now brown cow" Escaped text: how%20now%20brown%20cow
Translator Application
This is a more complex example, but the principals are the same as those outlined above. We’re going to mix C objects and Swift modules together to write a command line application that translates strings from one language to another.
The REST API we’ll be using to do the actual translation returns results in JSON. Since
NSJSONSerialization isn’t yet available in Foundation on Linux, we’ll use the
libjson-c-dev library, so install it with
apt-get install libjson-c-dev.
jsonparse
Two files make up our JSON-parsing routine,
parsejson.c and its companion header
parsejson.h.
parsejson.c:
parsejson.h
We can easily compile this file with
clang -c jsonparse.c.
Translator module
The workhorse of the translator application will be a Swift module called
translator. To create this module and prepare it for inclusion with the rest of our project, start with the class file
translator.swift:
Take a moment to read through the code. We’re including direct calls to the Curl library here, as well as
popen and
fgets, and our
translatedText routine that is compiled into an object file created by
clang.
In addition, create a
bridgingHeader.h with the contents:
There are two steps to getting this ready to use in our application:
- Create a shared library with the translator routine
- Create a
swiftmodulethat describes the interface
I will confess, I didn’t understand this until I read on Stackoverflow:
The
.swiftmoduledescribes the Swift module’s interface but it does not contain the module’s implementation. A library or set of object files is still required to link your application against.
First, compile the code into a
.o and create a shared library:
swiftc -emit-library translator.swift -module-name translator -import-objc-header bridgingHeader.h clang -shared -o libtranslator.so translator.o
Now, create the module:
swiftc -emit-module -module-name translator translator.swift -import-objc-header bridgingHeader.h
This leaves us with three files:
libtranslator.so,
translator.swiftmodule, and
translator.swiftdoc.
Main Routine
Our main file,
main.swift looks like this:
Again, we’ve made use of Foundation and Glibc, but we’re also using
import translator. You must have a
translator.swiftmodule in your module search path, which we add with
-I.:
swiftc -I. -c main.swift -import-objc-header bridgingHeader.h
Let’s link everything together:
swiftc -o translate.exe jsonparse.o main.o -L. -ltranslator -lcurl -ljson-c -lswiftGlibc -lFoundation
The resulting binary is
translate.exe because we intend to wrap a helper script around it to set the
LD_LIBRARY_PATH to find the
libtranslator.so shared library. Without the helper script (or using
ldconfig to update the search path), you need to invoke the excecutable like this:
LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./translate.exe "Hello world\!" from en to es Translation: ¡Hola, mundo!
Let’s try Irish:
LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./translate.exe "Hello world\!" from en to ga Translation: Dia duit
Makefile
It’s not clear how “interpreter” friendly Swift will become. Yes, one can create a single monolithic Swift script right now and run it with
swift. In fact we did that above. Using bits of code from other Swift files though, without specifying everything on the command line, remains, well, impossible. Maybe I’m wrong and just haven’t figured out the magic incantation to have Swift greedily open up files searching for code to run.
At any rate, our translator application above needs a little help to build. There is a new Swift builder tool, but I found
make could get the job done with some appropriate rules:
Getting the Code
You can get all of the code above from Github.
git clone
The
swiftcat code is meant to be run with the
swift command, where as
escapetext has a simple
build.sh script, and
translate has a full-on
Makefile.
If you’ve enjoyed this tutorial please follow us Twitter at @iachievedit! There will be more to come as Swift on Linux matures. | https://dev.iachieved.it/iachievedit/category/apple/page/2/ | CC-MAIN-2020-10 | refinedweb | 1,431 | 66.94 |
10 August 2009 04:51 [Source: ICIS news]
SINGAPORE (ICIS news)--Fujian Refining and Petrochemical Co (FREP) has started shipping products from its new aromatics complex at Quanzhou in China’s southern Fujian province, sources close to the company said on Monday.
“Some aromatics have been shipped to customers near us as well as to [nearby] ?xml:namespace>
A company spokesman did not deny the commercial sales of the complex’s products but said: “The new plant is in the phase of start-up; the whole new project is expected to go into full operation in the second half of this year.”
Other sources said that commercial production at the polyolefins unit had yet to start, but expected sales to begin at the end of the month.
The 800,000 tonne/year steam cracker was also running at low rates currently but was expected to reach optimal operations by the latter half of the month, sources said.
Besides the new cracker, the
FREP is a joint venture of ExxonMobil (25%), Saudi Aramco (25%) and Fujian Petrochemical (50%).
Fujian Petrochemical is a 50:50 joint venture between the
Ong Sheau Ling, Judith Wang and Dolly Wu contributed to this article
For more on aromatics & | http://www.icis.com/Articles/2009/08/10/9238418/chinas-frep-starts-shipping-aroms-to-customers.html | CC-MAIN-2014-35 | refinedweb | 203 | 51.82 |
Up to this point, my perl experience has been largely in general scripting tasks and one large application. When DrZaius told me that my Find Ethernet Card Manufacturer script might be better as a module, I realized that this was an area in which I had little experience.
I've read the appropriate sections in the Camel book a couple of times, along with perlmod and perlmodlib and even browsed around CPAN a bit, thinking that they might at least have some guidelines for module creation. For the first time in my perl career, I am feeling rather dissatisfied with the documentation I've read.
For example, in the post mentioned above, DrZaius suggested that I include the mac address database into the __DATA__ section of a module. For the life of me, I can't seem to find any documentation about the __DATA__ section of a module.
I'm also curious about the ramifications of use and require statements, when it make more sense to use an object-oriented approach, etc.
Does you have any pointers to resources that might help me explore this facet of perl? Or perhaps you think I'm making the topic overcomplicated and can enlighten me in that regard.
One big resource would be Advanced Perl Programming. It has a number of sections on modules, both creation and maintenance.
The basic point behind modules is to encapsulate code that's used more than once by creating another namespace. (Don't worry about the namespace stuff, but remember it in the back of your mind.)
A standard package, if you're creating a set of functions to be used, would look something like:
use strict;
use 5.6.0;
package MyModule;
use Exporter;
our @ISA=qw(Exporter);
our @EXPORT_OK = qw(foo bar);
my $var1 = "Some value";
sub foo {
... # Maybe use $var1 here...
}
sub bar {
... # If not, use $var1 here.
}
1;
[download]
The 1; at the end is critical. Modules (or packages ... same thing) are pulled in with do, which requires that the last line be a true value.
$var1 is scoped so that ONLY functions within the package MyModule can access it. (That's not quite true, but you should code as if it is until you're more comfortable.)
Now, in your main program, you'd have a line similar to
use MyModule (foo);
That means that, in your main script, you can call foo() as if it was defined in your main script. However, you cannot call bar().
You've seen the use syntax a lot, I'm sure. If you put a symbol name (functions, usually) in @EXPORT (instead of @EXPORT_OK), then the function would be there, whether or not you requested it. A lot of CPAN modules that use Exporter do this, though it's generally considered somewhat rude programming style. (If you write a package like this, you're polluting my namespace, which is sorta rude if you think about it.)
This should be enough to give you an idea as to what questions you need to ask. I'm figuring that it's not that you don't know anything, cause that's obvious. It's more that you don't know what you don't know, which means you don't know what questions to ask.
I'll pick up a copy of Advanced Perl Programming -- I just browsed the table of contents on the O'Reilly site, and I think it will be very helpful.
$ h2xs -A -X -n Example::Plugh
Writing Example/Plugh/Plugh.pm
Writing Example/Plugh/Makefile.PL
Writing Example/Plugh/test.pl
Writing Example/Plugh/Changes
Writing Example/Plugh/MANIFEST
[download]
Tom Christianson and/or Nathan Torkington seem to think this is a good idea, as they advocate this technique in the Perl Cookbook. I haven't tried it, but it looks like it takes care of a bunch of boring crap automagically (creating directories and a module skeleton along with other stuff). The syntax they recommend is:
h2xs -XA -n Foo
X suppresses the creation of XS components, the A says that the module won't use the autoloader, and the -n flag marks the name of the module.
TGI says moo
The Perl Cookbook, which is a great all around resource, has a chapter on module creation that is very useful and a good companion to the info in Advanced Perl Programming. It even includes an explanation of how to prep your module for distribution. Buy this book, you will be glad you have it.
The __DATA__ handle and __END__ tokens are described in perldata.pod,
perlpod.pod, and perltoc.pod.
Try to use OO techniques when you want your libraries to be big,
difficult, and slow.
Others have apparently recommended the use of h2xs,
note that h2xs has internal pod documentation, as does
ExtUtils::MakeMaker. After using h2xs
to create a skeletal directory, and after you have edited
all the files appropriately be sure to use:
perl Makefile.PL
make dist
[download]
in order to package your module in a manner suitable for
putting on CPAN via PAUSE.
I would avoid the use of __DATA__ when coding your module; this would make your module incompatable with mod_perl (see: for more info.)
Perhaps it is not your intention to use mod_perl, but my basic philosophy is to write code that is as flexable as possible; better to write it this way in the first place, then have to re-write it later.
I would also suggest writing all modules using h2xs. Not only does this lead to a more consistant programming style, you don't have to worry about use lib statements, nor do you have to worry about access permissions to your module files.
Beer
Other beverages
Pizza
Fruit and Vegetables
Other foods
Organs
Thyme
Space
Itself
Lies
Me, that's why I'm so cool
Archeologists
Penguins
Servers
Mystery
Logic (separated into Horror and Brilliance)
Results (218 votes). Check out past polls. | http://www.perlmonks.org/index.pl?node_id=97378 | CC-MAIN-2017-43 | refinedweb | 993 | 70.43 |
Homework 3
Due by 11:59pm on Tuesday, 9/12. Check that you have successfully submitted your code on
okpy.org.
See Lab 0
for more instructions on submitting assignments.
Using Ok: If you have any questions about using Ok, please refer to this guide.
Readings: You might find the following references useful:
The
construct_check module is used in this assignment, which defines a
function
check. For example, a call such as
check("foo.py", "func1", ["While", "For", "Recursion"])
checks that the function
func1 in file
foo.py does not contain
any
while or
for constructs, and is not an overtly recursive function (i.e.,
one in which a function contains a call to itself by name.)
Required questions
Q1: Has Seven ***"
Use Ok to test your code:
python3 ok -q has_seven
Q2: Summation
Write a recursive implementation of
summation, which takes a positive integer
n and a function
term. It applies
term to every number from
1 to
n
including
n and returns the sum of the results.
def summation(n, term): """Return the sum of the first n terms in the sequence defined by term. Implement using recursion! >>> summation(5, lambda x: x * x * x) # 1^3 + 2^3 + 3^3 + 4^3 + 5^3 225 >>> summation(9, lambda x: x + 1) # 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 54 >>> summation(5, lambda x: 2**x) # 2^1 + 2^2 + 2^3 + 2^4 + 2^5 62 >>> # Do not use while/for loops! >>> from construct_check import check >>> check(HW_SOURCE_FILE, 'summation', ... ['While', 'For']) True """ assert n >= 1 "*** YOUR CODE HERE ***"
Use Ok to test your code:
Several doctests refer to these one-argument functions:Several doctests refer to these one-argument functions:
python3 ok -q summation
from operator import add, mul def square(x): return x * x def triple(x): return 3 * x def identity(x): return x def increment(x): return x + 1
Q3: Accumulate
Show that both
summation and
product (from Homework 2) function. >>> accumulate(add, 0, 5, identity) # 0 + 1 + 2 + 3 + 4 + 5 15 >>> accumulate(add, 11, 5, identity) # 11 + 1 + 2 + 3 + 4 + 5 26 >>> accumulate(add, 11, 0, identity) # 11 11 >>> accumulate(add, 11, 3, square) # 11 + 1^2 + 2^2 + 3^2 25 >>> accumulate(mul, 2, 3, square) # 2 * 1^2 * 2^2 * 3^2 72 """ "*** YOUR CODE HERE ***"
accumulate(combiner, base, n, term) takes the following arguments:
termand
n: the same arguments as in
summationand
product
combiner: a two-argument function that specifies how the current term combined with the previously accumulated terms. You may assume that
combineris commutative, i.e.,
combiner(a, b) = combiner(b, a).
base: value that specifies what value to use to start the accumulation.
For example,
accumulate(add, 11, 3, square) is
11 + square(1) + square(2) + square(3)
Implement
accumulate and show how
summation and
product can both be
defined as simple calls to
accumulate:
def summation_using_accumulate(n, term): """Returns the sum of term(1) + ... + term(n). The implementation uses accumulate. >>> summation_using_accumulate(5, square) 55 >>> summation_using_accumulate(5, triple) 45 >>> from construct_check import check >>> check(HW_SOURCE_FILE, 'summation_using_accumulate', ... ['Recursion', 'For', 'While']) True """ "*** YOUR CODE HERE ***" return _______ def product_using_accumulate(n, term): """An implementation of product using accumulate. >>> product_using_accumulate(4, square) 576 >>> product_using_accumulate(6, triple) 524880 >>> from construct_check import check >>> check(HW_SOURCE_FILE, 'product_using_accumulate', ... ['Recursion', 'For', 'While']) True """ "*** YOUR CODE HERE ***" return _______
Use Ok to test your code:
python3 ok -q accumulate python3 ok -q summation_using_accumulate python3 ok -q product_using_accumulate
Q4: Filtered Accumulate
Show how to extend the
accumulate function to allow for filtering the
results produced by its
term argument, by implementing the
filtered_accumulate function in terms of
accumulate:
def filtered_accumulate(combiner, base, pred, n, term): """Return the result of combining the terms in a sequence of N terms that satisfy the predicate PRED. COMBINER is a two-argument function. If v1, v2, ..., vk are the values in TERM(1), TERM(2), ..., TERM(N) that satisfy PRED, then the result is BASE COMBINER v1 COMBINER v2 ... COMBINER vk (treating COMBINER as if it were a binary operator, like +). The implementation uses accumulate. >>> filtered_accumulate(add, 0,. Only values for which
predreturns a true value are combined to form the result. If no values satisfy
pred, then
baseis returned.
For example,
filtered_accumulate(add, 0, is_prime, 11, identity) would be
0 + 2 + 3 + 5 + 7 + 11
for a suitable definition of
is_prime.
Implement
filtered_accumulate by defining the
combine_if function. Exactly
what this function does is something for you to discover. Do not write any
loops or recursive calls to
filtered_accumulate.
Use Ok to test your code:
python3 ok -q filtered_accumulate
Q))). Yes, it
makes sense to apply the function zero times! See if you can figure out a
reasonable function to return for that case.
def make_repeater(f, n): """Return the function that computes the nth application of f. >>> add_three = make_repeater(increment, 3) >>> add_three(5) 8 >>> make_repeater(triple, 5)(1) # 3 * 3 * 3 * 3 * 3 * 1 243 >>> make_repeater(square, 2)(5) # square(square(5)) 625 >>> make_repeater(square, 4)(5) # square(square(square(square(5)))) 152587890625 >>> make_repeater(square, 0)(5) 5 """ "*** YOUR CODE HERE ***"
For an extra challenge, try defining
make_repeaterusing
compose1and your
accumulatefunction in a single one-line return statement.
def compose1(f, g): """Return a function h, such that h(x) = f(g(x)).""" def h(x): return f(g(x)) return h
Use Ok to test your code:
python3 ok -q make_repeater
Extra questions
Extra questions are not worth extra credit and are entirely optional. They are designed to challenge you to think creatively!
Q6: Quine.
A program that prints itself is called a Quine. Place your solution in the multi-line string named
quine.
Note: No tests will be run on your solution to this problem.
Q7: Church numerals ***"
Use Ok to test your code:
python3 ok -q church_to_int python3 ok -q add_church python3 ok -q mul_church python3 ok -q pow_church | http://inst.eecs.berkeley.edu/~cs61a/fa17/hw/hw03/ | CC-MAIN-2018-05 | refinedweb | 976 | 53.61 |
Auraria Home
|
CU Denver Theses
myAuraria Home
User based quality of service for 802.11 networks via dynamic control of the 802.11E EDCA MAC
Item menu
Print
Send
Add
Description
Standard View
MARC View
Metadata
Usage Statistics
PDF
Downloads
Citation
Permanent Link:
Material Information
Title:
User based quality of service for 802.11 networks via dynamic control of the 802.11E EDCA MAC
Creator:
Padden, Joseph
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
2013
Language:
Subjects
Subjects / Keywords:
Wireless LANs -- Security measures ( lcsh )
Roaming (Telecommunication) ( lcsh )
IEEE 802.11 (Standard) ( lcsh )
IEEE 802.11 (Standard) ( fast )
Roaming (Telecommunication) ( fast )
Wireless LANs -- Security measures ( fast )
Notes
Abstract:
The ubiquity of IEEE 802.11 based Wi-Fi networks has thrust them into new and creative service delivery use cases. It is well known that the Distributed Coordination Function (DCF), which governs channel access in Wi-Fi networks, provides equiprobable channel access to each client in the network. In addition, Wi-Fi clients support a wide range of physical layer transmission rates ranging from 1 Mbps to over 600Mbps. These two characteristics can lead to a condition of airtime unfairness in which lower rate clients receive a majority of the shared channel resource. In such a scenario, both the aggregate network throughput, and the individual throughput of higher rate clients can be limited by the inefficient use of channel airtime by lower rate clients. The 802.11e quality of service mechanisms employed in current Wi-Fi networks have not been updated or adapted to address the airtime fairness problem. This paper explores the mechanisms defined in 802.11e intended to provide application layer quality of service (QoS). In particular, the scope of this analysis is limited to a use case defined as a single radio serving two Wi-Fi networks. This paper will discuss successes and failures of some previous attempts to adapt and improve upon the principals defined in the 802.11e Enhanced Distributed Channel Access (EDCA) definition. Finally, this paper presents a new control algorithm that provides proportional throughput fairness via dynamic control of the EDCA parameter sets for two networks. It is shown that for a relatively small number of clients, the new method provides a good trade off between performance and complexity, while maintaining nearly universal device support.
General Note:
Department of Electrical Engineering
Statement of Responsibility:
Padden, Joseph
Record Information
Source Institution:
Auraria Library
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Auraria Membership
Aggregations:
Auraria Library
University of Colorado Denver Theses and Dissertations
Downloads
This item is only available as the following downloads:
AA00000149_00001.pdf
Full Text
PAGE 1
USER BASED QUALITY OF SERVICE FOR 802.11 NETWORKS VIA DYNAMIC CONTROL OF THE 802.11E EDCA MAC by JOSEPH PADDEN B.S. Mechanical Engineering, University of Colorado, 2004 A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science Electrical Engineering 2013
PAGE 2
ii This thesis for the Master of Science degree by Joseph Padden has been approved for the Electrical Engineering Department by Miloje Radenkovic, Chair Jaedo Park Yiming Deng Novemb er 11 2013
PAGE 3
iii Padden, Joseph (M.S., Electrical Engineering ) User Based Quality of Service for 802.11 Networks via Dynamic C ontrol of the 802.11e EDCA MAC. Thesis directed by Professor Miloje Radenkovic. AB STRACT The ubiquity of IEEE 802.11 based Wi Fi networks has thrust them into new and creative service delivery use cases. It is well known that the Distributed Coordination Function (DCF) which governs channel access in Wi Fi networks provides equiprobab le channel access to each client in the network. In addition, Wi Fi client s support a wide range of physical layer transmission rates ranging from 1 Mbps to over 600Mbps These two characteristics can lead to a condition of airtime unfairness in which lower rate client s receive a majority of the shared channel resource. In such a scenario, both the aggregate network throughput, and the individual throughput of higher rate client s can be limited by the i nefficient use of channel airtime by lower rate client s. The 802.11e quality of service mechanisms employed in current Wi Fi networks have not been updated or adapted to address the airtime fairness problem. This paper explore s the mechanisms defined in 80 2.11e intended to provide application layer quality of service (QoS). In particular, the scope of this analysis is limited to a use case defined as a single radio serving two Wi Fi networks. This paper will discuss successes and failures of some previous a ttempts to adapt and improve upon the principals defined in the 802.11e Enhanced Distributed Channel Access (EDCA) definition. Finally, this paper presents a new control alg orithm that provides proportional throughput fairness via dynamic control of the ED CA parameter set s for two networks It is shown that for a relatively small
PAGE 4
iv number of client s the new method provides a good trade off betwe en performance and complexity, while maintaining near ly universal device support. The form and content of this ab stract are approved. I recommend its publication. Approved: Miloje Radenkovic
PAGE 5
v TABLE OF CONTENTS CHAPTER !" INTRODUCTION ................................ ................................ ................................ ....... 1 # Physical Layer Background ................................ ................................ ........... 2 # Media Access Control Layer Background ................................ ..................... 6 # !!" QUALITY OF SERVICE IN WI FI NETWORKS ................................ .................. 10 # Motivation for Quality of S ervice ................................ ................................ 10 # Spectrum Crunch ................................ ................................ ..................... 10 # DCF Performance ................................ ................................ .................... 11 # Airtime Fairness and Throughput ................................ ........................... 12 # Airtime Fairness Simulation and Testing Results ................................ ........ 14 # NS 3 Simulation ................................ ................................ ...................... 14 # Device Testing ................................ ................................ ........................ 18 # Current QoS Mechanisms ................................ ................................ ............ 20 # 802.11e MAC Enhancements for Quality of Service ........................... 20 # Wi Fi MultiMedia ................................ ................................ ................... 21 # Current Wi Fi QoS Shortcomings ................................ ........................... 23 # !!!" USER BASED QOS IN WI FI NETWORKS ................................ .......................... 25 # Probability Analyses of the 802.11e EDCA ................................ ................ 25 # Collision Probability ................................ ................................ ............... 25 # Channel Access Probability per Network ................................ ............... 29 # Approaches to Optimizing Wi Fi QoS ................................ ........................ 33 #
PAGE 6
vi Idle Sense ................................ ................................ ................................ 33 # Link MTU Modulation ................................ ................................ ............ 34 # CWmin Adaptation ................................ ................................ ................. 36 # Proposed CWmin Adaptation Algorithm ................................ ..................... 38 # CWmin Adaptation ................................ ................................ ................. 38 # Network Client Count Ratio ................................ ................................ .... 39 # Physical Layer Link Rate ................................ ................................ ........ 39 # Control Algorithm ................................ ................................ ................... 40 # Testing Results ................................ ................................ ............................. 41 # Algorithm Optimizations ................................ ................................ ........ 47 # !$" CONCLUSION ................................ ................................ ................................ ......... 49 # REFERENCES ................................ ................................ ................................ .......... 50 # APPENDIX ................................ ................................ ................................ ............... 52 # A: Device Data ................................ ................................ ................................ .. 52 # B: NS 3 Simul ation Code ................................ ................................ .................. 54 # C: Airtime Analysis Python Script ................................ ................................ .... 59 # D: OpenWrt Router Configuration ................................ ................................ .... 62 # E: OpenWrt Dynamic CWmin Implementation Code ................................ ....... 65 # F: MATLAB Analysis Code ................................ ................................ ............ 104 #
PAGE 7
vii LIST OF FIGURES FIGURE %" Simple Residential Site Plan ................................ ................................ ................... 5 # &" DCF IFS relationship ................................ ................................ ............................... 8 # '" Contention access to the channel ................................ ................................ ............. 8 # (" Simulated Network Topology ................................ ................................ ............... 16 # )" 802.11g Throughput with Near vs. Far Client ................................ ....................... 16 # *" Cumulative Airtime for Distance Interval of 0 m and 110 m ................................ 17 # +" Device Testing Topology ................................ ................................ ...................... 18 # ," Throughput Related to Edge User Distance ................................ .......................... 19 # -" WMM CMSA/CA and AC Parameter Relationship ................................ ............. 22 # %." Collision Probability vs. Client Count for Various CWmin Values ..................... 28 # %%" Channel Access Probability for Networks vs CWmin(public) .............................. 30 # %&" Probability of Channel Access Ratio Surface and Contour ................................ ... 32 # %'" Throughput Test: {1,1} Private (Near) vs. Public (Far) ................................ ........ 44 # %(" Throughput Test: {1,2} Private (Near) vs. Public (Far) ................................ ........ 45 # %)" Throughput Test: Private Joins During Public Session ................................ ......... 46 # %*" CWmin Adaptation vs. Time for Private Late Join Test ................................ ....... 47 #
PAGE 8
viii LIST OF TABLES TABLE %" Free Space Transmission Loss of an Isotropic Radiator ................................ ......... 3 # &" ITU Indoor Propagation Model Loss at 2.4 and 5 GHz ................................ .......... 4 # '" DCF Timing Parameter Definitions ................................ ................................ ........ 7 #
PAGE 9
1 CHAPTER I INTRODUCTION Internet service providers (ISPs) are increasingly offering IEEE 802.11 based Wi Fi services to customers. The proliferation of Wi Fi enabled devices and the demand for ubiquitous connectivity has driven the ISPs to explore new and creative ways to increas e their service coverage footprint. One new approach, referred to as "Community Wi Fi involves the deployment of customer premise modems (cable, DSL, or fiber) with an embedded Wi Fi radio. Two or more networks (SSIDs) are b roadcast from this single radi o. One SSID is a private network for the home or business users while the remaining SSIDs are public network s for anyone near the residence or business to use. This approach enables ISPs to leverage their installed equipment base to further extend their co verage area. Many current and future deployments of Community Wi Fi plan to use a single radio configured to act as multiple virtual access points [1 0 ]. In fact the number of deployed Community Wi Fi devices is currently in the millions globally. Despite the wide adoption of this deployment model, this approach comes with inherent risks. This paper will focus on this use case as a basis for exploring 802.11 based Wi Fi network channel access method s This paper is organized as follows. Chapter I provides background on the operation and theory behind the IEEE 802.11 channel access method called the Distribution C oordination Function (DCF). C hapter II look s at the 802.11e channel access modification s to the DCF and analyze s their shortcomings for adequately managing resources in the community Wi Fi use case. Chapter III review s and analyze s previous attempts to improve upon the 802.11e QoS mechanisms. In addition, a new
PAGE 10
2 resource control algorithm is presented that is particularly suited to the Community Wi F i use case. Chapter IV concludes the paper. Physical Layer Background All wireless transmissions are subject to Free Space Path Loss. This path loss (also known as isotropic spreading loss) represents the most basic and fundamental transmission loss mode l. The free space loss of an isotropic radiator is given by: !"# $ where is distance in meters, is wavelength in meters, f is frequency in Hz and is speed of light in meters per second. In the community Wi Fi use case, the distance between the transmitter and the receiver will generally be less than 100 feet. Because of this, the Free Space Loss equation above will suffice as a first order approximation of the propagation characteristics for outdoor users. FreeSpacePathLoss =20 Log 4 d =20 Log 4 df c
PAGE 11
3 Table 1 shows the free space loss at 2.4 and 5 GHz. Table 1 : Free Space Transmission Loss of an Isotropic Radiator DISTANCE (FEET) DISTANCE (METERS) 2.4 GHZ LOSS (DB) 5 GHZ LOSS (DB) %&% $ $ '(&( ')&' )&) $ $ ')&" +*&' "%&* $ $ +*&" +,&+ *)&' $ ,&" $ +,&" )'&+ %*&, $ "( $ )(&( ))&' )+&) $ *( $ ))&" -*&' "%"&* $ '( $ -*&" -,&+ ".)&, $ )( $ -+&) ,*&( %*, $ "(( $ -,&" ,'&+ For indoor users, the signal propagation model commonly used is the ITU indoor model [11]. This propagation model was developed for a frequency range that includes the 2.4 GHz and 5 GHz bands used by most current Wi Fi networks. Based on this model, propagation loss is given by: !*# $ where L total is loss in dB, f is frequency in MHz, N is the distance loss coeff icient, d is distance from the AP to the client in meters, and Lf ( n ) is the floor loss penetration factor for n floors. In this model, the loss due to wall penetration and in room objects is included in the distance loss coefficient. For residential con struction, i.e. wood wall structure, the distance loss coefficient is given as 28. Table 2 below shows the indoor L total =20 log 10 ( f )+ N log 10 ( d )+ L f ( n ) 28
PAGE 12
4 propagation loss given by (2) assuming a single level (n=0) home similar to Figure 1 below. Table 2 : ITU Indoor Propagation Model Loss at 2.4 and 5 GHz DISTANCE (FEET) DISTANCE (METERS) 2.4 GHZ LOSS (DB) 5 GHZ LOSS (DB) %&% $ $ %.&) '-&% )&) $ $ ',&( ++&"%&* $ $ +)&+ )'&" *)&' $ $ )'&. -*&) %*&, $ "( $ )-&) -+&% A common way to predict wireless coverage area is using a link budget model. In such a model, all power gains and losses are accounted for between the transmitter and the receiver. The resulting model predicts the maximum propagation loss tolerated by a functioning wireless link. For a Wi Fi client, one p ossible link budget model is given by: !%# $ where !"#$!%&& !"!#$ is the maximum tolerated propagation loss, !" is the AP transmit power, !" is the transmitter antenna gain, !" is the transmitter loss, !" is the receiver antenna gain, !" is the receiver loss, !" is the receiver minimum sensitivity of the client, and is miscellaneous loss. All values are in dB. The maximum link budget of a common Wi Fi link is approximately 110 dB from Tx chip to Rx chip. This is based on the following assumptions: LinkLoss total = P tx + G tx + L tx + G rx + L rx P rx + L m
PAGE 13
5 1. Both the transmitter and the receiver are using omnidirectional antennas with 1 dB gain. 2. The transmitter is transmitting at 20 dBm. 3. The receiver minimum threshold is 88 dBm. 4. Transmitter and receiver loss es are zero (ideal). 5. Miscellaneous loss is zero (ideal). ( See Appendix A for collected device data supporting the above assumptions. ) Given the above link budget data and the above propagation models given by (2) and (3), it can be seen that customer devices inside the house will see average link loss on the order of 40 to 70 dB in ideal conditions. If the indoor and outdoor models are concatenated, however, it can be seen that the outdoor customer devices will see average link loss on the order of 105 to 115 dB based on the dimensions presented in Figure 1. Note that the floor plan in Figure 1 is composed of dimensions that could also apply to a small business location such as a coffee shop, restaurant, small medical or law office, etc. Figure 1 Simple Residential Site Plan Living Room Bedroom 1 Bedroom 2 Kitchen Entry ~4m from AP to Front wall ~4-6 m from house to sidewalk or street
PAGE 14
6 The above discussion is based on the stated assumptions that are in some cases idealized. Regardless, it is clear that in the application of the Community Wi Fi use case in a residential or small business setting, a large portion of users at the sidewalk or in the street will be at the edge of the coverage area for an indoor AP. In general, clients at the coverage area edge use a lower physical layer data rate than clients closer to the AP. The impact of t he lower data rate usage will be discussed in more detail later in this paper. Media Access Control Layer Background A fundamental problem in wireless communication is controlling access to the shared medium. The media access control (MAC) layer defines the rules and procedures for channel access of a given protocol. Wi Fi, as defined by the 802.11 working group of the Institute of Electrical and Electronic Engineers (IEEE), uses an access method called the Dis tributed Coordination Function (DCF). The design of the DCF is based on Carrier Sense Multiple Access with Collision Avoidance or CSMA/CA. In the DCF system, each device uses the following 3 basic steps when attempting to access the shared medium: 1. Perform a Clear Channel Assessment (CCA) by sensing the medium for a period of time before attempting to transmit. If the device determines that the medium is clear it proceeds to the next step. 2. Check the Network Allocation Vector (NAV) the virtual CCA mechanism, to ensure no hidden nodes are currently using the channel. 3. If the device determines the channel is busy in step 1 or 2, it sets a binary exponential backoff timer and waits before attempting the access the medium again, starting back at step 1. Otherwise if the device senses the medium is not busy, the device can now transmit a frame.
PAGE 15
7 For most traffic types, the amount of time the device must sense the medium in step one is the sum of the appropriate Interframe Space (IFS) plus the random binary back of f time called the Contention Window (CW). The IFS most often used is called the DCF IFS or DIFS. This is a fixed parameter determined by the physical layer characteristics of the network. The DIFS is defined as 2 x Slot Time + SIFS. The SIFS is described n ext. In a few select cases the device may use a short IFS, or SIFS, to access the channel without adding the CW. This medium access method gives certain packets absolute priority access to the medium over those using the DIFS + CW method. These special cas es include layer 2 acknowledgements, the response message to a Request To Send (RTS) frame called the Clear to Send (CTS) frame, data frames immediately following a CTS message, and any fragments of a fragmented frame after the first portion has been sent. There is a third IFS defined in the base 802.11 standards called the PCF IFS. However, Point Coordination Function (PCF) operation has seen little, if any, adoption so neither it, nor the PIFS will be discussed further. Table 3 below provides a summary of the various IFS values. Table 3 : DCF Timing Parameter Definitions 2.4 GHz 5 GHz Phy 802.11b 802.11g 802.11n 802.11a 802.11n Slot Time 20us Long GI = 20us Short GI = 9us Long GI = 20us Short GI = 9us 9us 9us SIFS 10us 10us 10us 16us 16us DIFS 50us Long GI = 50us Short GI = 28us Long GI = 5 0us Short GI = 28 us 34us 34us
PAGE 16
8 Based on a review of the current draft of the 802.11ac amendment [16], the aforementioned values are not changing and the mechanisms and discussion herein apply to this upcoming technology as well. Figure 2 below shows the relationship and usage of the IFS when used to access the shared channel. Figure 2 DCF IFS relationship When devices on the network attempt to gain access to the channel, they use a random binary back off procedure. In this procedure, during the first attempt to access the channel, a random value between zero and the contention window minimum value ( CWmin ) is chosen. In this fashion, the probability of collisions is reduced via each device choosing a random back off value from a range. This process is illustrated below in Figure 3. Figure 3 Contention access to the channel DIFS channel busy DIFS contention SIFS PIFS next frame t DIFS channel busy DIFS contention next frame t Backoff Slot contention window with randomly selected back-off slots
PAGE 17
9 Air interface packet collisions are detected by the absence of a layer 2 acknowledgement frame from the receiving party in response to a transmitted frame. When a collisio n does occur, the value of CWmin is increased and the process is repeated. Increasing the size of C Wmin can increase the channel access latency, but i t also reduces the probability of a collision. The tradeoffs of this relationship will also be discussed later in this paper. The contention window random backoff process is the basis of the new method described later in this paper. In addition, the proba bility of successful transmission and collision will also be explored for various CWmin configurations in depth later in this paper. A key point to highlight is that, in traditional DCF operation, all devices on the network have the same value for DIFS, S IFS, and the same algorithm for adjusting the value of CWmin Therefore, over time all devices have an equal probability of success when attempting to access the medium. The shortcomings of this equality will be discussed more detail later in this paper. $ $
PAGE 18
10 CHAPTER II QUALITY OF SERVICE IN WI FI NETWORKS Motivation for Quality of Service Wi Fi Quality of Service (QoS) is becoming more important as Wi Fi continues to gain popularity. In addition, access network data rates continue to increase and stress Wi Fi networks capacity The popularity of Wi Fi contributes to the need for QoS in two ways: f irst, by increasing the number of Wi Fi networks; second, by an increase in the number of devices on each Wi Fi network. The following section will explore the roots and motivation for Wi Fi QoS. Spectrum Crunch The 802.11 Wi Fi specifications have been written to take advantage of unlicensed spectrum. Early Wi Fi, namely 802.11 and 802.11b/g, use channel frequencies in the 2.4 GHz ISM band comprised of 11 chann els, 3 of which are non overlapping. Later additions to the Wi Fi spectrum include 23 non overlapping channels in the 5 GHz UNII 1, 2, and 3 bands used by 802.11a, 802.11n, and 802.11ac. Until recently the majority of Wi Fi devices supported 2.4 GHz frequ encies, with a smaller fraction of devices also supporting 5 GHz frequencies. As smartphones and tablets flood i nto Wi Fi networks, the majority of these devices supported only 2.4 GHz. This biased device support has lead to over crowding of the 3 non over lapping channels in the 2.4 GHz spectrum. In addition, the spectrum is further stressed by the frequency re use of adjacent networks. This leads to co channel interference. As discussed by Panda et al. in [5] there are three interference regions in 802.11 networks. The three regions are the Decoding Region, the Carrier Sensing Region, and the Interference Region. As shown
PAGE 19
11 in their research, co channel 802.11 networks with overlap that occupies the Decoding Region essentially become one network and can shar e the bandwidth efficiently In cases of overlap occupying the other two regions the efficiency of both networks suffers due to the increase in probability of collisions. Unfortunately, the net result is that any coverage area overlap of two networks, re gardless of region, results in lower aggregate throughput for both networks. This affects well managed enterprise networks with many APs in close proximity as well as poorly configured adjacent residential or hotspot networks occupying the same channel. DCF Performance The DCF used by all, or nearly all, of the currently deployed Wi Fi networks controls access to the wireless channel as described earlier. DCF channel access can be modeled as a bi dimensional Markov Chain as shown in [2] and [3] and refine d in [4]. One limitation of the DCF highlighted by these studies is that it loses efficiency as the number of devices increases. This has also been shown through analysis first in [2] and confirmed in [6], [7], and many other studies. Bianchi [2] first sho wed that the saturation throughput displays asymptotic behavior as the number of devices, and therefore the collision probability, goes up. Saturation is defined as the condition where all devices on the network have constantly non empty transmit queues, i .e. all devices are always contending for channel access. This fundamental limitation in combination with the spectrum and network crowding combine to form one key motivation for improved quality of service mechanisms. All devices on a given network suffer approximately equal increases in
PAGE 20
12 channel access time and reductions in throughput as the device count rises due to the egalitarian nature of the DCF. However, there are use cases in which there exist s a subset of devices that may need or want pri ority ac cess to the channel or a larger portion of the network resources. In the Community Wi Fi use case, the private network, servicing the home users, may want priority access to ensure compliance with a Service Level Agreement (SLA), either explicit or implici t, between the customer and the service provider. In such a scenario, the service provider may want a mechanism to ensure that resources are first allocated to the private network and any resources in excess of those needed to meet the SLA can then be allo cated to the public network users. Airtime Fairness and Throughput In Wi Fi networks throughput, though the primary interest of the user, is often not a good performance metric. This is due to the large number of variables which effect single client and a ggregate network throughput. For many instances, a metric called airtime is a better indicator of relative network resource allocation between a set of clients. Airtime, for the purposes of this paper, is the amount of time over a fixed interval that a giv en client has access to transmit on the shared channel. The relationship between user level throughput and airtime is upper bounded by: !'# $ where !"#$%& is user throughput data rate, is the physical layer link data rate. The factor n is to account for the time needed by the DIFS, SIFS, and by the receiving node to send an acknowledgement frame. This scaling factor varies depending on the spectrum R client R phy 1 n Airtime client Airtime total
PAGE 21
13 band used and version of 802.11 (e.g. .11a, .11b, .11g, etc) and takes a value in the range 1 n 2 It can be easily seen from (4) that user level throughput is directly proportional to the airtime that a client receives. The airtime needed to transmit a packet of !"# bytes of data is given by: !+# $ where !"#$ accounts for the time from DIFS, !"#! is the time for a SIFS, and !"# accounts for the time needed to send the acknowledgement frame. From (5) it can be seen that the airtime needed to transmit a packet is inversely proportional physical layer data rate As mentioned previously clients farther from the AP will use lower physical layer data rates. Therefore, by combining the fair channel access of the DCF, with equations (4) and (5), clients at the edge of the network co verage area will use more airtime while getting lower throughput due to less efficient use of the airtime. Heusse e t al first identified this phenomenon in [8]. I n a later study [4] by Kumar et al. this concept was extended to reflect the effect of the ed ge users on the total network throughput. In their study, they showed that the total network throughput was upper bounded by the reciprocal of the harmonic mean of the physical layer data rates of the clients on the network, as shown in (6) below. !)# $ Applying this concept to the Community Wi Fi use case it is clear that with a single radio AP the public network users, who will predominantly be farther from the Airtime pkt T DIFS + B pkt R phy + T SIFS + T ack R tot 1 1 n n i =1 1 R phy i n min 1 i n R phy i
PAGE 22
14 access point than the private network users, will undoubtedly have a negative effect on the overall network throughput, and therefore the private network throughput. One possible deployment model proposes the use of a QoS enabled device upstream of the AP (i.e. the access network modem) to manage the resource allocation between the public and p rivate Wi Fi networks. In such a model, Wi Fi throughput and resource management is performed indirectly by throttling the traffic flow to and from each network. As shown later in the simulation and test results, this method is sufficient when the aggregat e throughput of the Wi Fi network exceeds that of the access network. However, when the edge user related inefficiency lowers the aggregate throughput of the Wi Fi network below that of the access network, this management method is no longer able to effect ively manage the resources. Moreover, any QoS system that focuses solely on fixed throughput thresholds and fails to consider airtime fairness will have a similar performance floor, below which it becomes ineffective. Airtime Fairness Simulation and Testi ng Results NS 3 Simulation A simulation using Network Simulator 3 (NS 3) was performed to illustrate the airtime fairness and throughput issues facing the Community Wi Fi deployment model. NS 3 is a discrete event based network simulator. The base code is written in C++ and Python and simulation functions can be written in either. The simulation scripts used for the results below are included in Appendix B. The simulated 802.11g Wi Fi network contained two Wi Fi clients and one wired client. The Wi Fi clie nts were spatially configured such that one was directly adjacent to the AP (zero meters) and the other was a configurable distance (X meters) from the AP;
PAGE 23
15 see Figure 4 below. TCP traffic was generated from both Wi Fi clients with the destination for both flows being the wired client. The simulation was then cycled with the distance X increasing while the individual and aggregate throughput were tracked. The simulation outputs are packet captures in the form of .pcap files. The output packet traces were the n analyzed to extract throughput and airtime metrics. The python script used for airtime calculation is included in Appendix C. Throughput was measured using the Wireshark freeware application. The NS 3 Wi Fi client model includes automatic link rate adapt ation as the client moves. Furthermore, the simulator allows for the use of various wireless channel propagation models. For the simulation results presented below, the Log Distance Propagation model was used. The Log Distance model in NS 3 is based on the free space propagation loss model described earlier by equation (1). Using the propagation model, the simulator determines link quality metrics including signal strength and signal to noise ratio, and adjusts the physical layer data rate as appropriate similar to the behavior of a real Wi Fi client. Clients positioned farther from the AP use a lower physical layer data rate in the simulation. By testing over a range of distance configurations, the simulation provides a view into the effect of edge users on overall network and individual user throughput.
PAGE 24
16 Figure 4 Simulated Network Topology Figure 5 below shows the individual user throughput and the aggregate throughput of the network as the distance interval X was increased. In general, as the far client moves away from the AP, all throughput decreases. Initially, with both clients equally close to the AP (0 on the x axis), both clients achieve roughly equal throughput of approximately 6 Mbps. As the far client moves away from t he AP, both client's throughput drops, decreasing aggregate throughput. This drop in aggregate throughput indicates a drop in efficiency. Figure 5 802.11g Throughput with Near vs. Far Client 0 X TCP Sink TCP Source 2 TCP Source 1 802.11g AP 0 20 40 60 80 100 120 0 5 10 15 Far Client Distance (Feet) Throughput (Mbps) 802.11g Throughput Near Client Far Client Aggregate
PAGE 25
17 Figure 6 below shows the cumulative airtime for both the near and far client. In Case 1, the distance interval is set to 0 meters and both clie nts are very close to the AP. In this configuration it can be seen that both clients achieve nearly equal cumulat ive airtime due to the DCF fairness for clients with similar link rates In addition, both clients cumulative airtime curve maintains a roughly constant slope throughout the simulation indicating that each client was able to use the channel consistently. I n Case 2, the distance interval was increased to 110 meters, one client was 0 meters from the AP and the other was 110 meters from the AP. In this configuration, the cumulative airtime curves have different slopes. The far client is able to dominate the ai rtime, which is explained by the relative efficiency of the two clients. Both clients have equal probability of accessing the channel. However, when the far client gets control of the channel, it uses the channel less efficiently and holds the channel for a longer period of time. Furthermore, it follows that the overall network throughput drops because the far client gets a large portion of the airtime and uses the lowest physical data rate. Figure 6 Cumulative Airtime for Distance Interval of 0 m and 110 m 0 10 20 30 40 50 60 70 0 5 10 15 20 25 30 35 Simulation Time Cumulative Airtime (sec) Near/Near and Near/Far Cumulative Airtime per STA Case 1 Near STA1 Case 1 Near STA2 Case 2 Near STA Case 2 Far STA
PAGE 26
18 Device Testing Lab testing was performed using Wi Fi devices and a cable access network in order to confirm the edge user effect on single radio Community Wi Fi scenarios. An 802.11n AP running the highly flexible OpenWRT [15] firmware was configured to broadcast two separate SSIDs; a "public" network and a "private" network on the same radio. Each network was configured to draw IP addresses from a separate Class C subnet via DHCP. The WAN connection of the AP was connecte d to a cable modem that was provisioned with two upstream DOCSIS service flows, with each configured to pass the traffic from only one SSIDs IP subnet. The upstream bandwidth serving the private network was rate limited to 30 Mbps. Similarly, the upstream bandwidth serving the public network was rate limited to 10Mbps. Figure 7 Device Testing Topology Figure 8 below shows the device testing results. Each plot shows a two minute time sample. Iperf [16] software was used to create and measure the throughput of TCP flows on both the near and far client to a server behind the DOCSIS access network. The near client would start first and transmit for two minutes. At the one minute mark the far Private Home Network Public Community Network Cable Modem Home Client #1 Public Client #1 2 Upstream Service Flows 1 per Wi-Fi network 1 Downstream Service Flow 5GHz AP w/ OpenWRT
PAGE 27
19 client would start to transmit and send da ta for one minute. This staggered start approach allows for visualization of the effect of the far user on the near user. The solid lines with x = 3 both clients were placed within 3 feet of the AP. Both clients had physical data rates of 130 Mbps using 802.11n Modulation Coding Scheme (MCS) 15. It can be seen that both clients had sufficient airtime to fully utilize their access network provisioned rate limits of 10 and 30 Mbps. The dotted lines in Figure 8 show the results when the far user was moved 50 feet away from the access point in an indoor environment. In this configuration, the far user's phys ical data rate dropped to MCS 3 or 26 Mbps. As shown in the dotted line plot s the near user is able to consistently achieve the rate limit of 30 Mbps for the first minute with the exception of an anomalous rate drop early on At 60 seconds, the far user starts a traffic stream and is able to achieve the 10 Mbps rate limit. However, the near user throughput drops and becomes erratic when the far user starts sending traffic. The average throughput for the second minute of the near user drops by 20% due to the far user taking a disproportionate share of airtime. Figure 8 Throughput Related to Edge User Distance 0 20 40 60 80 100 120 0 5 10 15 20 25 30 35 Throughput Related to Edge User Distance Time (sec) Throughput (Mbps) x = 3 Near x = 50 Near x = 3 Far x = 50 Far
PAGE 28
20 The configuration fil es for the both the OpenWRT AP and the cable modem can be found in Appendix D. The clients used for testing were two identical MacBook Air laptops running the latest OS software with all updates installed. Current QoS Mechanisms The current options for Wi Fi QoS fall into two fundamental categories, standards based approaches and vendor proprietary approaches. Standards based approaches are agreed upon extensions or amendments to the 802.11 MAC/PHY specification. These features are well documented and avail able to all device manufacturers. Vendor proprietary solutions are individual approaches implemented by individual vendors. Often, device support for vendor proprietary methods is limited or hard to determine without testing. As shown in the previous secti on, it is clear that QoS mechanisms which address allocation of uplink bandwidth and airtime fairness must have broad client support to function efficiently and prevent overall network performance from being driven by non QoS enabled edge users. This paper will not discuss the vendor proprietary solutions, but instead will focus on the more widely supported standards based approaches in the next section. Furthermore, the method proposed later in the paper must have wide device support to ensure it's applica bility in current networks. 802.11e MAC Enhancements for Quality of Service The 802.11 working group of the IEEE acknowledged the need for QoS in Wi Fi networks and released a n amendment called 802.11e aimed at providing it in 2005 [12]. The approach is similar to an early proposal described in a study from Heusse et al. in 2003 [9]. This amendment was incorporated in the release of the core 802.11 2007 and later the 802.11 2012 specification.
PAGE 29
21 802.11e introduced two new coordination functions both aimed at improving the performance of the DCF for latency sensitive traffic such as voice or video. These new MAC methods are called the Enhanced Distributed Channel Access (EDCA) and the Hybrid Distributed Channel Access (HDCA) methods. The core mechanism behind the 802.11e QoS method is the assignment of different size IFS and CWmin channel access parameters based on the Differentiated Services (DiffServ) field in the IP header of pa ckets The new IFS used is called the Abritrary IFS, or AIFS. Using this method, differing channel access priority levels are mapped to DiffServ markings. The standard also introduced the Access Category (AC) concept that is used for traffic mapping with t wo DiffServ values assigned to each AC. The four ACs are Voice, Video, Best Effort, and Background, in descending order of priority. In this way, latency sensitive traffic is able to gain access to the channel with higher priority, resulting in reduced channel access time. In a high density network scenario with many equal priority users all with strong link characteristics, this enables network resources to most efficiently be used. Wi Fi MultiMedia Wi Fi multimedia (WMM) is the trade name for an 802.11 e based QoS mechanism. The Wi Fi Alliance (WFA) is a testing and certification body for Wi Fi devices. Their certification programs select a set of standards based features that support a particular service or set of use cases. A few examples include WMM f or QoS related features, Voice Enterprise for voice over Wi Fi use cases, and Passpoint (HotSpot2.0) for automatic network discovery and selection use cases.
PAGE 30
22 In the case of WMM, the WFA has specified default values and acceptable ranges for configurati on parameters related to 802.11e QoS [13]. In their specification, the WFA has defined the values of the AIFS and CWmin according to Figure 9 below. Figure 9 WMM CMSA/CA and AC Parameter Relationship It is clear from Figure 9 that traffic mapped to an AC with higher priority is assigned a shorter AIFS. Therefore, higher priority traffic is able to gain channel access before lower priority traffic. Said differently, it gives higher priority traffic a higher probability of succes s when attempting to access the shared medium. Similar to 802.11e, packets are mapped to an AC by the use of the IETF DiffServ field in the packet header. Described as User Priority (UP), this places the burden on the client device to appropriately set the DiffServ field for traffic to get the appropriate priority. Furthermore, the DiffServ code is often set at the application layer based on the service type of the application. This practice often requires complicated user configuration and/or results in di sparate behavior from similar applications from different implementers. As a result, the performance gains afforded by the use of WMM QoS are often only realized in tightly managed enterprise or commercial scenarios. In such a case, SIFS SIFS SIFS SIFS 2 slots 2 slots 2 slots 3 slots 7 slots 0-15 slots 0-15 slots 0-7 slots 0-3 slots Voice Video Best Effort Background AIFS Random Back-off window *WMM Defaults
PAGE 31
23 the network and the cli ents are both managed and configured by a common dedicated staff allowing for the proper configuration of the myriad settings needed to optimize performance. The key distinction of the WMM protocol is that the QoS is applied to specific traffic class or a pplication types. There is no mention above of the user as a priority classification criterion. There is therefore an implicit assumption that all users are equal and should get equal treatment. The goal of WMM is to optimize network usage among a group of equal clients. Current Wi Fi QoS Shortcomings The built in assumption of user equality is the fundamental shortcoming of current Wi Fi QoS. There is no user identity based mechanism available for network operators to leverage when a use case calls for di fferentiated service amongst sets of users. The lack of a user based option prevents current Wi Fi QoS from being applicable in the Community Wi Fi use case. Take for example the case where the public (Far) user is using a video streaming application whil e the private (Near) user is web browsing or uploading a large file. In such a case, if WMM were enabled as the QoS mechanism on the network, the public user would have a higher priority traffic type that requires sustained bandwidth. In addition, the publ ic user would be farther from the AP and therefore using the network resources less efficiently. Combining these two points, higher priority and less efficiency, it is possible that on a single radio AP, the private user may be completely starved of airtim e by a high priority public user located at the edge of the AP coverage area.
PAGE 32
24 Another built in assumption is that high priority flows have lower bandwidth. From an application based point of view, this is true for the most part. Voice over IP applications are very latency/jitter sensitive but commonly use as little as 64 kbps of bandwidth. Streaming video applications that are also latency/jitter sensitive, particularly when interactive such as in video calling, usually require bandwidth on the order of 2 3 Mbps. Using these assumptions, the WMM protocol allows these applications priority access to the wireless channel. However because of the "high priority = low bandwidth" assumption no metering of resources or rate limiting mechanism has been built into t he QoS definition. Taken to the logical extreme, if a high priority flow were to become a high bandwidth flow, it could starve all lower priority flows of network airtime/resources. The low bandwidth assumption therefore prevents the trivial extension of the 802.11e/WMM mechanisms to enable user based QoS. A key functional component would need to be added to police the division of resources amongst the high and low priority sets of users. Such a function could enforce a resource division policy configured by the network operator and designed to satisfy the SLA offered to users.
PAGE 33
25 CHAPTER III USER BASED QOS IN WI FI NETWORKS The idea of improved QoS mechanisms for Wi Fi networks is not a new one. Researchers have been working to improve 802.11e since it's int roduction in 2005. This chapter explores the underlying mechanisms governing the behavior of the 802.11e channel access method and the tradeoffs that must be balanced when searching for optimizations. In addition, this chapter will review several approache s to optimize the channel access method of 802.11e. Finally, this chapter presents a new algorithm for configuring 802.11e channel access parameters that is well suited to the Community Wi Fi use case. Probability Analyses of the 802.11e EDCA Collision Pro bability As described previously the re are two important parameters that govern Wi Fi client access to the shared channel under the EDCA, the AIFS and the CWmin. Similar to channel access under the DCF, the AIFS is a fixed waiting period, whereas the CWmin defines a range from which a r andom waiting time is selected. Therefore the total waiting t ime elapsed during the clear channel assessment can be defined as X = X A + X CW where X A is the fixed value AIFS assigned to each AC, and X CW is a uniformly distributed random variable which can take values between 0 and CWmin : !-# $ Using the above parameter definitions, the following is a derivation of the probability of a single client winning the channel summarizing to that provided by Rajmic et al. in [21], X CW { 0 1 ,...CWmin }
PAGE 34
26 [22]. This is a simplified model compared to the 2 D Markov Cha in model u sed in many analyses. H owever it is useful because it efficiently and accurately illustrates the effects of the CWmin parameter on the per client channel access probability and the overall collision probability that are of most interest to this analysis. In this derivation, a simplifying assumption is made that the current contention period is not dependent on any previous conditions. In a network with K clients, t he probability of a given client client 1, gaining access to the channel is given by: !,# $ I t can be see n that (8) is composed of a set of complex events, one event f or each contention slot in the CWmin range, thus expanding each slot into a set of independent events produces: !.# $ For a m ore compact form of (9) the symbol can be replaced by multiplication and the symbol can be replaced w ith summation resulting in: !"(# $ Using (10) the probability of collision can be computed by finding the comp lement of the sum of the probabilities for each client k K winning, given by: P win = P X 1
PAGE 35
27 !""# $ From (10) and (11) the overall network efficiency can be related back to the AIFS and CWmin chose for each AC. In addition, (10) enables the exploration of parameter and network configurations to learn more about the dynamics of the CWmin, network client composition, and their joint effect on proportional resource allocation. Using the Community Wi Fi use case as an example, suppose two networks are served by one radio interface. The two networks each have one AC with matching AIFS values but differing CWmin values. The private network has a fixed CWmin = 15 to match the standard DCF parameter. The public network CWmin is varied and the collision probability is explore d as a function of client count in each network. P coll =1 K k =1 P win ( k ) #
PAGE 36
28 Figure 10 Collision Probability vs. Client Count for Various CWmin Values Figure 10 shows the relationship between the number of clients in each network sharing a Community Wi Fi radio and the simplified collision probability. The top curve where CWmin for the public network equals 15 is equ ivalent to the DCF; the standard non QoS channel access method i.e. both networks using a CWmin = 15, all clients have equal probability of transmitting in a given contention slot The well known inefficiencies of the DCF can be seen as the collision probability quickly climbs as the client count rises Comparatively, the curves for a public CWmin equal to 31 and 63 (circle and star markers) show a significant reduction in the collision probability in the cases of 5 and 10 clients per network. In this scenario, half of the clients contending for the shared medium are selecting a random number from a range 2 to 4 times as large as the other half of the 2 5 10 15 20 25 30 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Number of STAs in each Network Network Probability of Collision Collision Probability for Modulated CWmin for Various CWmin Configurations CWmin = 15 CWmin = 31 CWmin = 63 CWmin = 127 CWmin = 255 CWmin = 511 CWmin = 1023
PAGE 37
29 clients. This division creates two classes of service and also reduces the likelihood of collision. The remaining curves for a public CWmin = 1 27, 255, 511, and 1023 show diminishing returns and only reduce collision probability slightly compared to the CWmin = 63 case. As shown in previous analysis [ 19 ], [20], increasing the CWmin increases the channel access delay. Therefore, when tuning the C Wmin parameter, unnecessary increases in CWmin should be avoided as they incur a delay cost on the link. It should be noted that Huang et al. have shown in [19] that this increase in channel access delay is compounded as the clien t count increases This wi ll be relevant in later sections of this paper. Channel Access Probability per Network One way to delineate groups of Wi Fi clients K i which is particularly well suited to the Community Wi Fi use case is to group clients according to the network or SSID with which they are associated. Suppose again there are two networks one private and one public, served by a single radio. In this case there are K priv cl ients on the private network and K pub clients on the public network. Also suppos e that the ISP wants to control the allocation of Wi Fi resources to each network. Using (10) the probability that each client within its respective network is successful at gaini ng access to the channel can be summed to find the probability that a given network wins access to the channel. Using the private network as an example, this is expressed by: !"*# $ P win k K priv = K priv # k =1 $ P win ( k ) %
PAGE 38
30 Based on (11) and (12) the following is also evident : !"%# $ From (13) it can be seen that in the Community Wi Fi use case there are only three events in the set of possible outcomes for each contention slot : either a private user su ccessfully accesses the channel; a public user successfully accesses the channel ; or a collision occurs. The optimization goal for designing a resource control algorit hm is therefore controlling the first two events while maintaining an acceptably low probability of collision. Figure 11 below shows the probability of any client in a given network gaining channel access versus the CWmin value of the public network for v arious client count ratios. Figure 11 Channel Access Probability for Networks vs CWmin(public) The relationship between client ratio and the CWmin scaling follows the general trend that can be seen in Figure 11. The upper set of curves represents the private 15 31 63 127 255 511 1023 0 0.2 0.4 0.6 0.8 1 Cwmin of Public Network Network Probability of Successful Transmission Channel Access Probability for Private and Public Network vs CWmin(public) for Various STA count Configurations Priv:Pub = 2:7 Priv:Pub = 2:6 Priv:Pub = 2:5 Priv:Pub = 2:4 Priv:Pub = 2:3 Priv:Pub = 2:2 K priv k =1 P win ( k ) # + K pub k =1 P win ( k ) # + P coll =1
PAGE 39
31 network probability of winning channel access. The lower set of curves represents the same m etric for the public network. For a simple example, take the outer curves that represent a case in which each network has two clients. For compact notation, {CWmin_priv, CWmin_pub} will be used to convey the CWmin relationship between the two networks. In addition, it will be assumed for now that all clients have the same link rate. The first data point is {15,15} where the public and private network both have P win = .436. T he ratio of the probability of winning the channel for private versus public is 1:1 As the CWmin increases for the public network, it can be see that this ratio increases with it, e.g. {15, 31} = 2.6:1, {15, 63} = 5.9:1, {15, 127} = 12.5:1, {15, 255} = 25.7 or roughly doubling the ratio for each doubling of the public CWmin. Inspection of the relationship between the client count ratio and the ratio of the network probability of winning the channel reveals a similar trend. For example holding the CWmin ratio fixed at {15, 15} and varying the client count ratio, it can be seen that when t he client ratio is 1:1, the network probability of winning the channel ratio is 1:1. When the client ratio is 2:3, the ratio of the network probability of winning the channel is 2:3. Figure 12 below provides a surface plot of the parameter space regarding channel access probability for the simple two network Community Wi Fi scenario. The y axis is the client ratio between the two networks. The x axis is the CWmin chosen for the public network while the private network CWmin is fixed at 15. The z axis is th e private versus public network channel access probability ratio. The surface plot shows that when the networks have parity in either CWmin ratio or client count ratio (i.e. moving along the x or y axis) the growth in the channel access probability is prop ortional to the variable
PAGE 40
32 being changed. However, if both ratios are changing the effects compound and the channel access probability ratio increase significantly. Figure 12 Probability of Channel Access Ratio Surface and Contou r Below the surface plot are five contour lines showing the zero change contours for channel access probability ratios of 20x, 40x, 60x, 80x, and 100x. These values were chosen to be illustrative and not prescriptive, an infinite number of similar contours can be found in this data set. In an idealized network, an algorithm could react to the client ratio as it changes and adjust the CWmi n parameter in arbitrary step size s to maintain a desired channel access probability ratio by following such a contour Unfortunately, current standards, and therefore devices, limit CWmin values to power of 2 minus 1 15 63 127 255 511 1023 2/8 2/7 2/6 2/5 2/4 2/3 2/2 0 50 100 Public CWmin Probability of Channel Access Success Ratio for Private/Public Network as a function of Client Ratio and Public CWmin Private / Public Client Ratio Probability of Channel Access Ratio (Private Network / Public Network) 20x 40x 60x
PAGE 41
33 values only as CWmin is derived from the value m spe cified by the access point by calculating CWmin = 2 m 1 Regardless of this constraint, this analysis gives insight into the design of an algorithm that could be used to manage resources on the shared medium. Specifically, this analysis shows the relations hip between two key inputs and one key output that is to be controlled. Approaches to Optimizing Wi Fi QoS As mentioned before, years of research have produced many proposed optimizations or replacements to the EDCA channel access mechanism. The following is an analysis of three approaches that directly apply to the problem that is the focus of this paper. These approaches were chosen for review because they are either often cited in related papers, proposed by authorities in the field, or tested via real implementation; the latter of which is rare and directly applies to this paper. Idle Sense The first approached reviewed is a concept referred to as Idle Sen se. This a pproach as defined by Heusse et al. in [ 17 ] and further analyzed by Nassiri et al. in [18] replaces the exponential backoff proc edure with an access method based on sensing channel idle time In an idle sense network, each client tracks a value n i that is a count of the idle slots between two tran smission attempts. Each client then uses a control algorithm to adjust its own CWmin to try to drive its measured n i towards a target value n i target The authors offer performance assessment s that show that the algorithm converges quickly with all clients n i tending towards n i target When convergence occurs, the network is shown to achieve optimal throughput characteristics with minimal collisions and differentiated services achieving their respective throughput targets.
PAGE 42
34 While the above qualities are attra ctive, t here are limitations to the Idle Sense approach. First, the analysis was performed using integer values for CWmin which does no t match current device capabilities or the definition of CWmin in the 802.11e standard As stated before the current 802 .11e standard in force specifies CWmin via the equation CWmin = 2 m 1, where m is a n integer. Thus only powers of two minus one are valid values of CWmin. In addition, the simulation results shared displayed a tendency to employ large values of CWmin for lo wer priority ACs. This is common among research into differentiated throughput services via CWmin modulation. Th is tendency can also be see in [19], [24], [25] [26] all of which focus on maximizing aggregate throughput for a large number of clients while providing differentiated throughput services. However, the focus of throughput efficiency comes at the expense of channel access delay. As mentioned before, increases in the value of CWmin result in increases in channel access delay. Finally, the idle se nse algorithm fundamentally changes the client side behavior at the MAC layer This has two significant implications. First, support for the millions of legacy Wi Fi devices is very unlikely as MAC functions are generally in hardware with limited software upgradability More importantly, a s stated in the research, any clients that join an idle sense network but do not participate in the idle sense measurement process and adjust their CWmin accordingly can prevent the system from converging. Link MTU Modula tion The second approach in [28] reviewed uses modulation of the link Maximum Transmission Unit ( MTU ) to control differentiated services on the Wi Fi link. The MTU is the largest data packet, inclusive of all link headers and overhead bits, capable of
PAGE 43
35 trav ersing a link in a data network. In this method t he Wi Fi MAC layer remains unaffected and thus each has eq uiprobable access to the link. As discussed before, i n the standard DCF, this results in clients with lower physical layer data rates dominating the use of the shared airtime resource. The heart of the method lies in modulating the link MTU on a per client basis such that clients with lower physical layer link rates see a link with a lower MTU size. The smaller MTU size used by clients with lower physi cal layer rates organically limits the airtime used to send a single packet by making the packet smaller. Therefore, MTU size is used to manage the airtime allocation. In this way, each client gets equiprobable access to the channel, and more equal airtime once they win the channel. This approach is very novel and approaches the problem from a new angle. Furthermore, the results show that the method is effective at providing differentiated throughput services. However, there are some fundamental flaws to t his method that make it unsuitable for use in current Wi Fi networks. First, this method increases framing overhead on the end to end connection by forcing the MTU lower Some proposals affect the efficiency of the Wi Fi portion of a connection, but the ad ditional overhead incurred by reducing the MTU size of a connection affects the entire path the connection traverses. Another problem with this method is that many I nternet applications have requirements on MTU and simply break when the MTU is reduced bel ow the minimum expected value. As an example, the popular Xbox Live gaming platform has a well documented minimum MTU size of 1364 bytes. The method proposed requires the ratio of the client physical layer rates be proportional to the MTU size used by each client. With the standard Ethernet MTU of 1500 bytes, the ratio of the 1500 to 1364 does not
PAGE 44
36 offer enough differentiation to provide meaningful fairness control when compared to physical layer data rates which varying from 300+ Mbps to 1 Mbps In additio n, the study points o ut that the clients can take up to 10 minutes to react to changes in MTU size. The I nternet Engineering Task Force (IETF) Request For Comments ( RFC ) document that govern s the MTU discovery protocol [27] specifies a maximum cycle time o f 10 minutes between MTU size probes. The long reaction time between stimulus and response obviously limits the usefulness of this method in real networks. Finally, updates to the 802.11 standards incorporated in the 802.11e and 802.11n amendments allow f or frame aggregation. In frame aggregation smaller packets are grouped into larger frames before being sent over the wireless medium. The intent of frame aggregation is to reduce the overhead of sending many small packets and thus increase throughput. This feature fundamentally breaks the MTU modulation method as pro posed. In modern Wi Fi devices, even if the end to end connection MTU is small, the Wi Fi link is able to aggregate the frames for transmission over the Wi Fi link. CWmin Adaptation The third approach reviewed is CWmin adaptation In this method the value of CWmin is modulated to achieve the desired throughput differentiation between ACs. Extensive research has gone into this method with notable examples including [20 ], [24], [25], and [26]. I n the Tinnirello study [20], co authored by Bianchi ( an author ity in the field), the various parameters of the 802.11e EDCA are explored for their suitability for controlling
PAGE 45
37 airtime resources. It was concluded that AIFS provides good differentiation in th e access delay between ACs while CWmin provides good control of throughput differentiation. In the Li study [24] the authors proved the existence of the inverse relationship between the CWmin assigned to ACs their proportional throughput. This finding is key to the study of EDCA optimization. This result was later referenced and extended in the study by Yang et al. in [25] Their results focused on aggregate efficiency of the network with high client counts. Similarly, in the Yoon et al. study [26] the Li finding is exploited in an attempt to optimize the setting of CWmin values for ACs with different fairness or proportional throughput goals. In all of the above referenced papers, the authors focused on throughput efficiency as the primary optimization f actor. Moreover, the focus was on high client count networks. All four studies provided simulation or theoretical results for networks with client counts up to 200. This is a key distinction between these studies and the goal of this paper. T he goal of thi s paper is to develop an algorithm suitable for the Community Wi Fi use case. In this use case, client count s will often be in the 10 to 20 client range. In addition, [24], [25], and [26] perform the efficiency optimization with no consideration for chann el access delay. More specifically, [24] and [25] clearly favor larger values of CWmin in general. Whereas [26] allowed extremely large values of CWmin in the simulations ; the magnitude of which are not feasible for real world devices. In all cases, the ch annel access delay incurred would render the link unusable for practical purposes.
PAGE 46
38 P roposed CWmin Adaptation Algorithm The goal of this paper i s to motivate, develop an d implement a control algorithm that resolves the uncontrolled resource allocation problem in the community Wi Fi use case. T he algorithm should be able to protect differentiated services between two or more networks across the Wi Fi link. As shown previously in this p aper, simply rate limiting the network side connections for each network is insufficient for ensuring Wi Fi throughput. The proposed algorithm leverages three key parameters previously discussed to achieve the control goal The three parameters are CWmin, the client count ratio between the networks, and the physical layer data rates of the clients within each network. CWmin Adaptation CWmin adaptation was chosen as the primary control interface for its well documented ability to provide differentiated thr oughput between traffic classes. Moreover the negative behaviors shown in previous studies are largely ameliorated by characteristics of the Community Wi Fi use case, while any remaining negative behaviors are accepted as tradeoffs Specifically, previous studies have shown that CWmin adaptation can be inefficient for networks with many clients e.g. 100 200 clients Fortunately, the typical residential or small business user will have on the order of 10 20 Wi Fi devices or fewer. In this lower client count region of Figure 10, collision probability stays acceptably low and therefore network efficiency remains sufficiently high. Perhaps most importantly, the vast majority of Wi Fi devices deployed support CWmin adaptation since it is explicitly allowed in the 802.11e standard [12]. Any
PAGE 47
39 solution developed with the goal of being implementable and deployable in scale requires client side support without changing the MAC layer of the device. Network Client Count Ratio The primary Community Wi Fi use case is a simple two network, one radio deployment configuration supporting a private and a public network. In this configuration the goal is to manage the air resource such that throughput thresholds are protected for the private network. Any air resources in exces s of those needed to achieve the private network throughput threshold can be then shared with the public network to provide service to roaming customers. In this scenario, the network is the entity being managed not any single client. Hence, the number of private clients active at any given time wil l a ffect the resources being used by the private network. The same is true of the public network. Thus, a ny CWmin adaptation process needs to account for a varying client count ratio so that the appropriate per c lient channel access probability is assigned such that the overall network channel access ratio is maintained. Furthermore, this process must be dynamic to acco unt for scenarios such as active clients going into power save mode for long periods or clients joining and leaving the network. An example of such and algorithm would be one that follows a zero change contour line from Figure 12. Physical Layer Link Rate Airtime is directly proportional to both channel access probability as well as the physical lay er link rate of the client. As mentioned previously and described in equation (6), low rate clients govern the throughput behavior of a network. Thus any algorit hm that aims to control throughp ut or airtime, or both, must consider the physical layer li nk
PAGE 48
40 r ate. In the proposed algorithm, the minimum link rate in both networks is compared to the object throughput rate when used for determining the CWmin of the public network. Control Algorithm Based on the above motivation, the CWmin adaptation algorithm proposed is designed to protect a predetermined throughput threshold for the private network. The algorithm is designed to follow a contour on the surface in Figure 12 based on the input desired private data rate. This algorithm uses the CWmin of the private network as a baseline. The CWmin for the public network is then scaled based on the physical layer link rate ratio and the client count ratio to provide a probability of channel access succes s ratio for the private network that is sufficient to achieve the desired data rate. The scaling function is given by: !"'# $ w here R desired is the desired throughput of the private network, n is a scaling function defined for each physical layer and accounts for protocol overhead (e.g. n = 2 for 802.11g), R min is the minimum physical layer link rate of any active client in either network, and Count priv and Count pub are the active client counts in the private and public networks, respectively The log bas e two function is used find the appropriate value for use with the current definition of CWmin as discussed previously. Then finally the ceiling function is used to always round up so that the t hreshold is protected absolutely. CWmin pub = CWmin priv + log 2 n R desired R min Count priv Count pub # $
PAGE 49
41 The control algorithm in the current implementation is represented in pseudo code as follows: 1. If (Active Client Count in Private > 0 && Active Client Count in Public > 0) 2. Update CWmin pub per (14) 3. Else 4. Use default CWmin pub = 5 5. End If In this way, when any network is the only network with an active client the standard DCF configuration is used However, when both networks have clients that are actively contending the channel, the relative channel acces s probability is governed by (14) to ensure the private network receives the necessary resources for the desired throughput rate Testing Results To validate the proposed algorithm an implementation was created using the OpenWrt platform. The implementati on consisted of modifying the OpenWrt source C++ code for the access point controller daemon, known as "hostapd". The function that handles the MAC management beacon frame was located and modified such that the advertised WMM param eters set, in particular CWmin, was recalculated every two seconds via equation (14) Thus the maximum possible update rate for the CWmin adaptation was once every 2 seconds. The function was further modified to allow for the collection of necessary input parameters to equation (1 4) including R min Count priv and Count pub Once the modifications were in place, the router was configured to support two Wi Fi networks. One network was the "private" network and the other the "public" network. The only difference in this case was that the WMM parameter set was fixed at
PAGE 50
42 the default values for the private network, and for the public network the CWmin value was varied per (14). The same configuration as illustrated in Figure 7 was used for testing the proposed algorithm. The network consisted of a residential cable modem connected to a Netgear WNDR3800 Wi Fi router running the modified version of OpenWrt. The Wi Fi router was configured to serve the two Wi Fi networks via a single radio on channel 36 in the 5GHz UNII 1 band. The cable modem was configured with two upstream service flows with rate limits of 10 Mbps and 30 Mbps for the public and private Wi Fi networks respectively. The test procedure was identical to that discussed previously resulting in the data for Figure 8. Two id entical Apple MacBook Air laptops were used to perform throughput tests over the priv ate and public networks. In all test s the private client was place within 5 feet of the access point and used a link rate of 130Mbps, or 802.11n MCS 15. The public client was placed approximately 50 feet away on the other side of a wall and primarily used a link rate of 26 Mbps or MCS 3. The link rate would occasionally fluctuate between MCS 2,3,4 for link rates of 20, 26, and 39 Mbps respectively. Once the clients were po sitioned, the private client would initiate an Iperf throughput test with a two minute duration. After one minute, the public client would initiate a one minute Iperf throughput test thus sharing the channel resources for the second minute of the private c lient test Both throughput tests terminated at a server behind the cable network. Figure 13 shows the results of the throughput tests for the case with one client in each network. The left plot shows the behavior of the standard DCF channel access method This is also the behavior of the EDCA if all the traffic is classified into the Best
PAGE 51
43 Effort AC. It can be seen that the private client throughput in the first minute is stable around the configured cable modem rate limit of 30 Mbps. In the second minute the public user begins contending for the channel resources. It is clear from the results that the private users throughput drops as a result of insufficient airtime resources. This result is similar to those presented in Figure 8. In this case, the CWmi n for both network s was held constant at 15. The right plot in Figure 13 shows the result of the same test performed with the proposed CWmin adaptation algorithm enabled on the access point. In this case, the CWmin value of the public netw ork was varied p er (14) with an update interval of two seconds. It can be seen that the throughput of the private network showed minimal impact from the public user contending for the shared resource. It can also be seen that the throughput of the public user was reduced compared to the left plot, but did not go to zero. Thus from Figure 13 it can be seen that the algorithm is able to secure the resources needed by the private user to achieve the protected data rate of 30 Mbps while allowing the public user to use any rem aining resources. It should be noted that in Figure 13, 14, and 15 the x axis is measured in samples not time. The Iperf tool outputs throughput measurement samples at intervals of approximately 1 second, but not exactly 1 second. Thus to a void confusion, the axis is labeled in samples not seconds. The full duration of all tests was 120 seconds. Each plot has approximately 104 samples.
PAGE 52
44 Figure 13 Throughput Test: {1,1 } Private (N ear) vs. Public (Far) The previous test exercised the physical layer data rate portion of the proposed algorithm. Next the client ratio portion of the algorithm was exercised in addition to the physical layer data rate portion. Figure 14 shows the results for a similar through put test with one private user and two public users. In this test, both public users were physically located at the signal coverage edge and both primarily used MCS 3 or 26 Mbps physical layer data rates. Thus in this test there is a 2:1 client ratio betwe en the networks as well as a physical layer data rate disparity. The left plot in Figure 14 again shows the standard DCF performance in this scenario. It is clear that the standard channel access method is not able to protect the private user from being a ffected by the public users. The right plot shows the results when the proposed CWmin adaptation algorithm is enabled. The private user threshold is protected via the dynamic control of the CWmin parameter. This test exhibits the algorithms ability to acco unt for physical layer data rate and client ratio disparities. 0 50 100 0 10 20 30 40 Sample # Throughput (Mbps) Standard DCF or EDCA 0 50 100 0 10 20 30 40 Sample # Throughtput (Mbps) CWmin Adaptation Private Public 1
PAGE 53
45 Figure 14 Throughput Test: {1,2} Private (Near) vs. Public (Far) The transient behavior seen in both Figure 13 and 14 is a result of the Wi Fi clients dynamically a djusting the physical layer data rate more frequently than the implementation can update the CWmin value. The Wi Fi clients can adjust the physical layer data rate for each burst which is on the order of milli seconds. The current implementation is limited to updating the CWmin every 2 seconds. This time disparity occasionally results in a momentary drop in throughput. The standard beacon frame interval is 100ms. Thus, further optimization of the implementation code could reduce the update interval and the transient behavior. Figure 15 below shows the behavior when a private user attempts to gain access to the channel in the middle of a public user heavily using the channel. Again the left plot shows the behavior of the standard DCF channel access method. It can be seen that the public client is able to achieve the configured cable upstream data rate for the first minute. In the second minute, once the private user starts a throughput test, the public users is able to keep most of the resources needed to reac h the 10 Mbps rate while the private users throughput suffers. 0 50 100 0 10 20 30 40 Sample # Throughput (Mbps) Standard DCF or EDCA 0 50 100 0 10 20 30 40 Sample # Throughtput (Mbps) CWmin Adaptation Private Public 1 Public 2
PAGE 54
46 The right plot of Figure 15 shows the result when the proposed algorithm is used for the same test. At the one minute mark, the private user begins contending for the channel and the algorithm begins to decrease the channel access probability for the public user. As a result the private user is able to achieve much closer to the configured throughput rate than under the standard DCF. Furthermore, in contrast to the DCF, the established connecti on of the public user is de prioritized via reduced channel access probability and thus the throughput drops. Again, it is important to note that the public user continues to receive some resources, just at a diminished rate. In addition, the private users rate instability is tending to reduce as time progresses. Figure 15 Throughput Test: Private Joins During Public Session Figure 16 shows the CWmin value over time for the test in which the private user joins late. In this cas e, the public users starts w ith a default CWmin of 5 and then increases it's value once the private client joins and becomes active. The fluctuations seen in the data are related to the physical layer data rate of the public user adjusting between MCS 2, 3 and 4 resulting in the algorithm adjusting the CWmin accordingly. 0 50 100 0 10 20 30 40 Sample # Throughput (Mbps) Standard DCF or EDCA 0 50 100 0 10 20 30 40 Sample # Throughtput (Mbps) CWmin Adaptation Public 1 Private
PAGE 55
47 The early spike is the result of the private client issuing a probe request or some other MAC management traffic for a short period of time. In the current implementation, the CWmin adapta tion is only activated if there is at least one active user in each network. Thus, during the first half of this test, any traffic from the private user would trigger the momentary activation of the CWmin adaptation. Figure 16 CWmin Adaptation vs. Time for Private Late Join Test Algorithm Optimizations The proposed algorithm and proof of concept implementation provide evidence that the method used is capable of address ing the resource protection goal of this paper. However, duri ng the development and testing phases, areas of future optimization were identified. First, the inability to use integer step sizes for CWmin adjustments is a hindrance to the optimal tuning of the parameter. For the proof of concept implementation the cei ling function was used to ensure sufficient priority for the private network. However, an alternative approach is to use dithering to rapidly modulate the CWmin value to achieve and effective CWmin that is between the power of two steps currently available 0 20 40 60 80 100 120 140 1 2 3 4 5 6 7 8 9 10 CWmin Adapation vs. Time for Private Client Late Join Test Time (sec) Public Network CWmin
PAGE 56
48 The second optimization for future work is to tune the control process with some measure of hysteresis such that rapid fluctuations in physical layer link rate were smoothed from the viewpoint of (14). Such a smooth process would provide a more stable C Wmin for the public network. As of this time, the maximum client supported CWmin update frequency is unknown.
PAGE 57
49 CHAPTER IV CONCLUSION This paper has presented a simple control law and algorithm with the demonstrated ability to control the air resource alloc ations in the Community Wi Fi use case. The principals governing the physical and MAC layers of Wi Fi technologies were explored and used to motivate the proposed algorithm. The objective of the algorithm is to protect a private network throughput threshol d in the presence of conditions that would otherwise cause airtime unfairness. Three related optimization efforts were analyzed and shown to be unsuitable for the goals of this project. The algorithm was constrained to being something that was implementabl e and deployable in current Wi Fi networks. Th ese constraints limited the tools available for airtime control and forced some tradeoffs. However, the resulting method has been shown to achieve the service objectives while staying within the constraints.
PAGE 58
50 R EFERENCES [1] IEEE 802.11: Telecommunications and information exchange between systems Local and metropolitan area networks Specific requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, 2012. [2] Bia nchi, Giuseppe. "Performance analysis of the IEEE 802.11 distributed coordination function." Selected Areas in Communications, IEEE Journal on 18.3 (2000): 535 547. [3] Wu, Haitao, et al. "Performance of reliable transport protocol over IEEE 802.11 wireles s LAN: analysis and enhancement." INFOCOM 2002. Twenty First Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE Vol. 2. IEEE, 2002. [4] Kumar, Anurag, et al. "New insights from a fixed point analysis of single cel l IEEE 802.11 WLANs." INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings IEEE Vol. 3. IEEE, 2005. [5] Panda, Manoj K., Anurag Kumar, and S. H. Srinivasan. "Saturation throughput analysis of a system o f interfering IEEE 802.11 WLANs." World of Wireless Mobile and Multimedia Networks, 2005. WoWMoM 2005. Sixth IEEE International Symposium on a IEEE, 2005. [6] Manshaei, Mohammad Hossein, and Jean Pierre Hubaux. "Performance Analysis of the IEEE 802.11 Distributed Coordination Function: Bianchi Model." Mobile Networks, Communication Systems & Computer Science Divisions (2007). [7] Vardakas, John S., Michael K. Sidiropoulos, and Michael D. Logothetis. "Performance behaviour of IEEE 802.11 distributed coordi nation function." IET circuits, devices & systems 2.1 (2008): 50 59. [8] Heusse, Martin, et al. "Performance anomaly of 802.11 b." INFOCOM 2003. Twenty Second Annual Joint Conference of the IEEE Computer and Communications. IEEE Societies Vol. 2. IEEE, 2003. [9] H Heusse, Martin, et al. "Bandwidth allocation for DiffServ based quality of service over 802.11." Global Telecommunications Conference, 2003. GLOBECOM'03. IEEE Vol. 2. IEEE, 2003. [10] Wi Fi Requirements for Cable Modem Gateways, WR SP WiFi GW I01 100729, July 29, 2010, Cable Television Laboratories, Inc. [11] ITU R RECOMMENDATION P.1238 7, Propagation data and prediction methods for the planning of indoor radio communication systems and radio local area networks in the frequency range 900 MHz t o 100 GHz, 2012. Available online at: REC P.1238/ [12] IEEE 802.11e: Amendment 8: Medium Access Control (MAC) Quality of Service Enhancements, 2005.
PAGE 59
51 [13] Wi Fi Alliance, "WMM Sp ecification", v1.2, 2012, fi.org/knowledge_center/published specifications [14] [15] [16] IEEE 802.11ac/D2.0: Amendment 4: Enhancements for Very High Throughput for Operation in Bands below 6 GHz, 2012. [17 ] Heusse, Martin, et al. "Idle sense: an optimal access method for high throughput and fairness in rate diverse wireless LANs." ACM SIGCOMM Computer Communication Review Vol. 35. No. 4. ACM, 2005. [18 ] Nassiri, Mohammad, Martin Heusse, and Andrzej Duda. "A novel access method for supporting absolute and proportional priorities in 802.11 WLANs." INFOCOM 2008. The 27th Conference on Computer Communications. IEEE IEEE, 2008. [19] Huang, Ching Ling, and Wanjiun Liao. "Throughput and delay performance of IEEE 802.11 e enhanced distributed channel access (EDCA) under saturation condition." Wireless Co mmunications, IEEE Transactions on 6.1 (2007): 136 145. [20] Tinnirello, Ilenia, Giuseppe Bianchi, and Luca Scalia. "Performance evaluation of differentiated access mechanisms effectiveness in 802.11 networks." Global Telecommunications Conference, 2004. G LOBECOM'04. IEEE Vol. 5. IEEE, 2004. [21] Rajmic, Pavel, Dan Komosny, and Karol Moln‡r. "Theoretical Analysis of EDCA Medium Access Control Method in Simplified Network Environment." Networks, 2009. ICN'09. Eighth International Conference on IEEE, 2009. [ 22] Rajmic, P., et al. "Optimized Algorithm for Probabilistic Evaluation of Enhanced Distributed Coordination Access According to IEEE 802. 11e." Proceedings of the 33rd International Conference Telecommunications and Signal Processing Baden bei Wien, 201 0, 297 303 [24] Li, Bo, and Roberto Battiti. "Performance analysis of an enhanced IEEE 802.11 distributed coordination function supporting service differentiation." Quality for all Springer Berlin Heidelberg, 2003. 152 161. [25] Yang, Yaling, Jun Wang, a nd Robin Kravets. "Distributed optimal contention window control for elastic traffic in wireless LANs." INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings IEEE Vol. 1. IEEE, 2005. [26] Yoon J, Yun S, Kim H, Bahk S. Maximizing differentiated throughput in IEEE 802.11e wireless LANs. In: Ken C, Matthias F, eds. Proc. of the 31st IEEE Conf. on Local Computer Networks. Los Alamitos: IEEE Computer Society, 2006. 411" 417. [27] J.C. Mogul, S. E. Deering "RFC 1191, Path MTU discovery [ November 1990 ] (TXT = 47936) (Obsoletes RFC1063) (Status: DRAFT STANDARD) (Stream: Legacy) [28] Dunn, Joseph, et al. "A practical cross layer mechanism for fairness in 802.11 networks." Broadband Networks, 2004. BroadNets 2004. Proceedings. First International Conference on IEEE, 2004.
PAGE 60
52 APPENDIX A : Device Data FCC Published Testing Data Phone/Tablet Peak Tx Power Measured (SARS) Antenna Specs. iPhone 27 ipod touch A1288 24.24 1.5dbi iPod Touch A1318 10.73 1.dbi iPad2 A1395 13.94 .6dbi iPhone 3gs A1303 26.69 1.2dbi iPad A1219 22.8 3.8dbi HTC Vigor PH98100 21.7 2dbi HTC Evo Shift 4G PG06100 21.66 2dbi Droid Incredible PB31200 21.6 Laptops Intel PRO/Wireless 3945ABG 24.94 .9dBi Intel Centrino Wireless N 2200 16.5 2.6dBi Lenovo Thinkpad x200 Intel Wi Fi link 5300 29.5 1.3dBi Intel Wif link 5100 18.6 Dell PP02X Precision M60 Peak TX Power Antenna Gain Average of FCC Testing Data 22 1 *Data aggregated, not all data available for all devices. Web Search Device Data Product Chipset TX peak power (dBm) RX Sensitivity (dBm) HTC EVO 4G BCM4329 18 Nokia N8 TI WL1271A 20 89 HTC Droid Incredible BCM4329 18 Google Nexus One BCM4329 18 Palm Pre Plus Marvell W8686 16 82 HTC HD2 BCM4329 18 Motorola Droid TI WL1271A 20 89 iPhone 4 BCM4329 18 HTC Thunderbolt BCM4329 18 Sony Ericsson Xperia Play BCM4329 18 Motorola Atrix BCM4329 18 Motorola Droid X TI WL1273 18 87
PAGE 61
53 Product Chipset TX peak power (dBm) RX Sensitivity (dBm) Nook TI WL1273 18 87 iPhone 2G Marvell W8686 16 82 iPhone3G Marvell W8686 16 82 iPod Touch 1G Marvell W8686 16 82 iPod Touch 3G BCM4329 18 Samsung Galaxy S1 BCM4329 18 iPad BCM4329 18 iPad2 BCM4329 18 Xoom BCM4329 18 Samsung Galaxy Tab BCM4329 18 Samsung Galaxy S 4G BCM4329 18 Google Nexus S BCM4329 18 Blackberry Torch 9800 TI WL1271A 20 89 BelAir 100SNE 38 102 Cisco Aironet 1520 28 92 Rukus Strand Mounted 7761cm 27 Average of web data 19 88
PAGE 62
54 APPENDIX B : NS 3 Simulation Code /* Mode:C++; c file style:"gnu"; indent tabs mode:nil; */ /* This is the code for Joey Padden's independent study of Wi Fi QoS motivation. This is based on the NS 3 third example application.*/ " usi ng namespace ns3; NS_LOG_COMPONENT_DEFINE ("IndependentStudy"); int main (int argc, char *argv[]) { //default values for configurable params uint32_t nCsma = 0; uint32_t nWifi = 2; uint32_t maxBytes = 0; //build node container for p2p nodes NodeContainer p2pNodes; p2pNodes.Create (2); //establish point to point link attributes PointToPointHelper pointToPoint; pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("50Mbps")); pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms")); //build p2p netdeivces NetDeviceContainer p2pDevices; p2pDevices = pointToPoint.Install (p2pNodes);
PAGE 63
55 //create the node container to hold the csma devices(AP and //wired device N IC cards) NodeContainer csmaNodes; csmaNodes.Add (p2pNodes.Get (1)); csmaNodes.Create (nCsma); //setup the cmsa link using the cmsahelper CsmaHelper csma; csma.SetChannelAttribute ("DataRate", StringValue ("100Mbps")); csma.SetChannelAttribu te ("Delay", TimeValue (NanoSeconds (6560))); //define csma devices NetDeviceContainer csmaDevices; csmaDevices = csma.Install (csmaNodes); //create node container for wifi fixed client and AP NodeContainer wifiStaNodes; wifiStaNodes.Create ( nWifi); NodeContainer wifiApNode = p2pNodes.Get (0); //create node container and device for moving wifi client NodeContainer movingStaNode; movingStaNode.Create (1); //use yans to setup wifi phy and channel YansWifiChannelHelper channel = YansWifiChannelHelper::Default (); YansWifiPhyHelper phy = YansWifiPhyHelper::Default (); phy.SetChannel (channel.Create ()); //set trace type to include radiotap headers for SNR, sig power, //and phy rate for us e in finding the airtime and throughput phy.SetPcapDataLinkType (YansWifiPhyHelper::DLT_IEEE802_11_RADIO); //setup the wifihelper with default settings WifiHelper wifi = WifiHelper::Default (); wifi.SetRemoteStationManager ("ns3::AarfWifiManager") ; //setup wifi mac, simple DCF no qos mode NqosWifiMacHelper mac = NqosWifiMacHelper::Default (); //initialize SSID and AP behavior Ssid ssid = Ssid ("ns 3 ssid");
PAGE 64
56 mac.SetType ("ns3::StaWifiMac", "Ssid", SsidValue (ssid), "ActiveProbing", BooleanValue (false)); //add wifi devices to node contatianer and apply phy and mac created NetDeviceContainer staDevices; staDevices = wifi.Install (phy, mac, wifiStaNodes); NetDeviceContainer movingStaDevice; movingStaDevice = wifi.Install (phy, mac, movingStaNode); //configure mac for the SSID mac.SetType ("ns3::ApWifiMac", "Ssid", SsidValue (ssid)); //add AP to ap container NetDeviceContainer apDevices; apDevices = wifi.Install (phy, mac, wifiApNode); //create one mobility helper for the fixed client and AP MobilityHelper mobility; mobility.SetPositionAllocator ("ns3::GridPositionAllocator", "MinX", DoubleValue (0.0), "MinY", DoubleValue (0.0), "DeltaX", DoubleValue (0.0), "DeltaY", DoubleValue (0.0), "GridWidth", UintegerValue (1), "LayoutType", StringValue ("RowFirst")); mobility.Install (wifiStaNodes); mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel"); mobility.Install (wifiApNode); //create second mobility helper for moving client. By adjusting //MinY you can move the client the desired distance from the AP MobilityHelper mobility2; mobility2.SetPositionAllocator ("ns3::GridPositionAllocator", "MinX", DoubleValue (0.0), "Min Y", DoubleValue (115.0),
PAGE 65
57 "DeltaX", DoubleValue (0.0), "DeltaY", DoubleValue (0.0), "GridWidth", UintegerValue (1), "LayoutTy pe", StringValue ("RowFirst")); mobility2.SetMobilityModel ("ns3::ConstantPositionMobilityModel"); mobility2.Install (movingStaNode); //install IP stack on all devices InternetStackHelper stack; stack.Install (csmaNodes); stack.Install (wifiAp Node); stack.Install (wifiStaNodes); stack.Install (movingStaNode); //DHCP all devices to get addresses. There are three networks to make //traffic tracability tractable. Ipv4AddressHelper address; address.SetBase ("10.1.1.0", "255.255.255.0 "); Ipv4InterfaceContainer p2pInterfaces; p2pInterfaces = address.Assign (p2pDevices); address.SetBase ("10.1.2.0", "255.255.255.0"); Ipv4InterfaceContainer csmaInterfaces; csmaInterfaces = address.Assign (csmaDevices); address.SetBase ("10.1.3.0", "255.255.255.0"); address.Assign (movingStaDevice); address.Assign (staDevices); address.Assign (apDevices); // Create a BulkSendApplication and install it on wireless nodes uint16_t port = 9; Ipv4GlobalRoutingH elper::PopulateRoutingTables (); BulkSendHelper source ("ns3::TcpSocketFactory", InetSocketAddress (csmaInterfaces.GetAddress (nCsma), port)); source.SetAttribute ("MaxBytes", UintegerValue (maxBytes)); ApplicationContainer s ourceApps = source.Install (wifiStaNodes.Get (1)); sourceApps.Add (source.Install(movingStaNode.Get(0))); sourceApps.Start (Seconds (0.0)); sourceApps.Stop (Seconds (60.0));
PAGE 66
58 // Create a PacketSinkApplication and install it on wired node Pac ketSinkHelper sink ("ns3::TcpSocketFactory", InetSocketAddress (Ipv4Address::GetAny (), port)); ApplicationContainer sinkApps = sink.Install (csmaNodes.Get (nCsma)); sinkApps.Start (Seconds (0.0)); sinkApps.Stop (Seconds (55. 0)); Simulator::Stop (Seconds (65.0)); //configure tracing parameters pointToPoint.EnablePcapAll ("third"); phy.EnablePcapAll ("third", false); csma.EnablePcap ("third", csmaDevices.Get (0), true); //run simulation Simulator::Run (); Simulator::Destroy (); return 0; }
PAGE 67
59 APPENDIX C : Airtime Analysis Python Script The script below was adapted from a script from the OLPC computer company. Their airtime computation did not accurately account for overhead. In addition, the lower physical layer data rates were added because they were not previously included. Otherwise the input output processing was left as is. #!/usr/bin/python import sys import commands import math from optparse import OptionParser arguments = OptionParser() arguments.a dd_option(" f"," -pcap file", dest="pcapfile", help="Capture dump") arguments.add_option(" t"," -text file", dest="textfile", help="Capture already converted/filtered") arguments.add_option(" i"," -interval", dest="interval", help="Consolidation interval in seconds") arguments.add_option(" w"," -filter", dest="filter", help="Wireshark filt er") arguments.add_option(" o"," -output format", dest="output", help="Output Format [csv, lines] ") arguments.add_option(" -no fcs", action="store_false", dest="crc", default=True, help="don't check if frames have bad crc") (options, args) = arguments.parse_args() if not (options.pcapfile or options.textfile) : print "input file is mandatory"
PAGE 68
60 sys.exit(0) filter_exp = '' filte r = '' if options.crc == True: filter += 'wlan.fcs_good == 1' if options.filter: filter += and '+options.filter else: filter += options.filter if options.crc or options.filter: filter_exp = R "'+filter+'"' if options.pcapfile: pcapfile = options.pcapfile inputfile = pcapfile if options.textfile: textfile = options.textfile inputfile = textfile else: textfile = pcapfile+'.tmp3' filter_cmd='tshark r %s %s T fields e frame.time_relative e radiotap.datarate e frame.len > %s' % (pcapfile, filter_exp, textfile) s, o = commands.getstatusoutput(filter_cmd) if options.interval: interval = float(options.interval) else: interval = 1 timeslot = 0 lastslot = 0 airtime = [0] fd = open(textfile, 'r') cck_datarate s = ('2', '4', '11', '22') ofdm_datarates = ('6', '9', '12', '18', '24', '36', '48', '72', '96', '108') for line in fd: time, rate, size = line.split(' \ t') size = size.strip(' \ n') if rate in cck_datarates: airsize = 192 + float(size) 16 / float (rate) elif rate in ofdm_datarates: airsize = 10 + 28 + 24 + 4 math.ceil(((float(size)) 8 + 6) / (float (rate) 4)) else: airsize = 0 timeslot = int(math.floor(float(time) / interval))
PAGE 69
61 if timeslot > lastslot: for slot in range(lastslot, timeslot): airtime.append(0) airtime[timeslot] += airsize / (interval 1000000) lastslot = timeslot if options.output == "csv": for i in airtime: print str(i)+',', else: for i in range(0, len( airtime)): print "[%s %s): %.2f%%" % (i*interval, (i+1)*interval, airtime[i] 100)
PAGE 70
62 APPENDIX D : OpenWrt Router Configuration The AP used for device testing was a Netgear WNDR 3800 hardware running OpenWRT software version 12.09 The pertinent conf iguration files are below. Prior to testing the wireless environment was scanned. It wa s determined that 5Ghz channel 3 6 was not occupied by any networks so this channel was used for testing to ensure no co channel interference outside of the testing was p roduced. Network Configuration config 'interface' 'loopback' option 'ifname' 'lo' option 'proto' 'static' option 'ipaddr' '127.0.0.1' option 'netmask' '255.0.0.0' config 'interface' 'lan' option 'ifname' 'eth0' option 'type' 'bridge' option 'proto' 'static' option 'netmask' '255.255.0.0' option 'defaultroute' '0' option 'peerdns' '0' option 'ipaddr' '10.32.115.2' option 'gateway' '10.32.115.1' config 'interface' 'wan' option 'ifname' 'eth1' option 'proto' 'dhcp' config 'switch' opti on 'name' 'rtl8366s' option 'reset' '1' option 'enable_vlan' '1' option 'blinkrate' '2' config 'switch_vlan' option 'device' 'rtl8366s'
PAGE 71
63 option 'vlan' '0' option 'ports' '0 1 2 3 5' config 'switch_port' option 'device' 'rtl8366s' option 'port' '1' option 'led' '9' config 'switch_port' option 'device' 'rtl8366s' option 'port' '2' option 'led' '6' config 'switch_port' option 'device' 'rtl8366s' option 'port' '5' option 'led' '6' config 'interface' 'lan2' option 'ifname' 'eth0' option 'proto' 'static' option 'ipaddr' '10.50.151.2' option 'netmask' '255.255.0.0' option 'gateway' 10.50.101.1' Wireless Configuration config 'wifi device' 'radio0' option 'type' 'mac80211' option 'macaddr' '30:46:9a:1c:57:fe' option 'hwmode' '11ng' option 'htmode' 'HT20' list 'ht_capab' 'SHORT GI 40' list 'ht_capab' 'DSSS_CCK 40' option 'disabled' '1' option 'channel' '1' option 'txpower' '8' config 'wifi iface' option 'device' 'radio0' option 'network' 'lan' option 'mode' 'ap' option 'ssid' 'OpenWrt'
PAGE 72
64 option 'encryption' 'none' config 'wifi device' 'radio1' option 'type' 'mac80211' option 'channel' '40' option 'macaddr' '30:46:9a:1c:58:00' option 'hwmode' '11na' option 'htmode' 'HT20' list 'ht_capab' 'SHORT GI 40' list 'ht_capa b' 'DSSS_CCK 40' option 'disabled' '0' option 'txpower' '8' config 'wifi iface' option 'device' 'radio1' option 'network' 'lan' option 'mode' 'ap' option 'ssid' 'OpenWrt' option 'encryption' 'none' config 'wifi iface' option 'device' 'radio1' op tion 'ssid' 'OpenWrt2' option 'network' 'lan' option 'mode' 'ap' option 'encryption' 'none'
PAGE 73
65 APPENDIX E : OpenWrt Dynamic CWmin Implementation Code The OpenWrt version 12.09 "Attitude Adjustment" was used for this project. The application hostapd wpad mini" was installed to run the access point functions of the router. The following files were modified to support the implementation of the algorithm discussed above: "& /00&1 $ *& /00&2 $ %& 345678&1 $ '& 297834:&1 $ The following appendix provides the entire code for each file. Search for "JRP" to find the modified sections of the code. The primary logic and modification occurs in wmm.c and aplist.c. The other files contain necessary actions for initialization and clean up. Most of the work consisted of adding funct ionality not present in the OpenWrt code. However, in the aplist.c file, the behavior was modified. In this case, code was commented out to prevent unwanted behavior. Thus, this commented out code remains in the documentation below. WMM.C Ful l path: /Volumes/OpenWrtAA/openwrt/build_dir/target mips_r2_uClibc 0.9.33.2/hostapd wpad mini/hostapd 20130405/src/ap /wmm.c /* hostapd / WMM (Wi Fi Multimedia) Copyright 2002 2003, Instant802 Networks, Inc. Copyright 2005 2006, Devicescape Softw are, Inc. Copyright (c) 2009, Jouni Malinen. */
PAGE 74
66 #include "utils/includes.h" #include "utils/common.h" #include "common/ieee802_11_defs.h" #include "common/ieee802_11 _common.h" #include "hostapd.h" #include "ieee802_11.h" #include "sta_info.h" #include "ap_config.h" #include "ap_drv_ops.h" #include "wmm.h" //JRP added #include
#include "syslog.h" /* TODO: maintain separate sequence and fragment numbers f or each AC TODO: IGMP snooping to track which multicasts to forward and use QOS DATA if only WMM stations are receiving a certain group */ static inline u8 wmm_aci_aifsn(int aifsn, int acm, int aci) { u8 ret; ret = (aifsn << WMM_AC_AIFNS_SHIFT) & WMM_AC_AIFSN_MASK; if (acm) ret |= WMM_AC_ACM; ret |= (aci << WMM_AC_ACI_SHIFT) & WMM_AC_ACI_MASK; return ret; } static inline u8 wmm_ecw(int ecwmin, int ecwmax) { return ((ecwmin << WMM_AC_ECWMIN_SHIFT) & WMM_AC_ECWMIN_MASK) | ((ecwmax << WMM _AC_ECWMAX_SHIFT) & WMM_AC_ECWMAX_MASK); } //JRP adding to check which AP we are updating for. static inline int mainssid(struct hostapd_data *hapd) { u8 mymac[ETH_ALEN]; mymac[0] = 0x20; mymac[1] = 0x4e; mymac[2] = 0x7f; mymac[3] = 0x4a; mymac[4] = 0x98; mymac[5] = 0x44; int ret; ret = memcmp(mymac,hapd >own_addr,6); return ret; } //JRP added find active STA cnt in OpenWrt network static inline int active_sta_openwrt(int *minrate) { DIR *dir; struct dirent *ep; char Filenam e[84]; char Filename2[88]; char Filename3[85]; int active_cnt = 0; int value = 1001; int value2 = NULL; int value3 = NULL;
PAGE 75
67 i nactive ms time for this client strcpy(Filename,"/sys/kernel/debug/ieee80211/phy1/netdev:wlan1/stations/"); strcat(Filename,ep >d_name); strcat(Filename,"/inactive_ms"); FILE *inputdata; inputdata = NULL; inputdata = fopen(Filename,"r"); //open next file if (inputdata) { fscanf(inputdata,"%d",&value); fclose (inputdata); } if (value < 1000) { active_c nt++; //formulate filename to find tx rate for this client strcpy(Filename2,"/sys/kernel/debug/ieee80211/phy1/netdev:wlan1/stations/"); strcat(Filename2,ep >d_name); strcat(Filename2,"/cu rrent_tx_rate"); FILE *inputdata2; inputdata cli ent strcpy(Filename3,"/sys/kernel/debug/ieee80211/phy1/netdev:wlan1/stations/"); strcat(Filename3,ep >d_name); strcat(Filename3,"/last_rx_rate"); FILE *inputdata3; inputd ata3 = NULL; inputdata3 = fopen(Filename3,"r"); //open next file if (inputdata3) { fscanf(inputdata3,"%d.",&value3); fclose (inputdata3); } if (*minrate == NULL) *minrate = value3; else if (value3 < *minrate) *minrate = value3; } } }
PAGE 76
68 closedir(dir); return active_cnt; } } //JRP added find active STA cnt in OpenWrt2 network stat ic inline int active_sta_openwrt2(int *minrate) { DIR *dir; struct dirent *ep; char Filename[86]; char Filename2[90]; char Filename3[87]; int active_cnt = 0; int value = 1001; int value2 = NULL; int value3 = NULL; if ((dir = opendir("/sys/kernel/debug/ieee80211/phy1/netdev:wlan1 1/stations/")) == NULL) { syslog(LOG_INFO,"Cannot open directory,"/inactive_ms"); FILE *inputdata; inputdata = NULL; inputdata = fopen(Filename,"r"); //open next file if (inputdata) { fscanf(inputdata ,"%d",&value); fclose (inputdata); } if (value < 1000) { active_cnt++; //formulate filename to find inactive ms time for this client strcpy(Filename2,"/sys/kernel/debug/ieee80211/phy1/netdev:wlan1 1/stations/"); strcat(Filename2,ep >d_name); strcat(Filename2,"/current_tx_rate"); FILE *inputdata2; inputd ata client strcpy(Filename3,"/sys/kernel/debug/ieee80211/phy1/netdev:wlan1 1/ stations/"); strcat(Filename3,ep >d_name); strcat(Filename3,"/last_rx_rate");
PAGE 77
69 FILE *inputdata3; inputdata3 = NULL; inputdata3 = fopen(Filename3,"r"); //open next file if (inputdata3) { fscanf(inputdata3,"%d.",&value3); fclose (inputdata3); } if (*minrate == NULL) *minrate = value3; else if (value3 < *minrate) *minrate = value3; } } } closedir(dir); return active_cnt; } } //JRP added find the rx byte count in OpenWrt network static inline int bytes_openwrt(void) { DIR *dir; struct dirent *ep; char Filename[81]; int bytes = 0; int value = 0; inactive ms time for this client strcpy(Filename,"/sys/kernel/debug/ieee802 11/phy1/netdev:wlan1 bytes rx in OpenWrt2 network static inline int bytes_openwrt2(void)
PAGE 78
70 { DIR *dir; struct dirent *ep; char Filename[83]; int bytes = 0; int value = 0; if ((dir = opendir("/sys/kernel/debug/ieee80211/phy1/netdev:wlan1 1/stations/")) == NULL) { syslog(LOG_INFO,"Cannot open direc tory the number of total STAs in OpenWrt static inline int sta_cnt_openwrt(void) { DIR *dir; struct dirent *ep; char Filename[59]; int bytes = 0; int value = 0; //formulate filename to find inactive ms time for this client strcpy(Filename,"/sys/kernel/debug/ieee80211/phy1/netdev:wlan1/num_mcast_sta"); FILE *inputdata; inputdata = NULL; inputdata = fopen(Filename,"r"); //open next file if (inputdata) { fscanf(inputdata,"%d.",&value); fclose (inputdata); } bytes += value; return bytes;
PAGE 79
71 } //JRP get the number of total STAs in OpenWrt2 static inline int sta_cnt_openwrt2(void) { DIR *dir; struct dirent *ep; char Filename[61]; int bytes = 0; int value = 0; //formulate filename to find inactive ms time for this client strcpy(Filename,"/sys/kernel/debug/ieee80211/phy1/netdev:wlan1 1/num_mcast_sta"); FILE *inputdata; inputdata = NULL; inputdata = fopen(Filename,"r"); //open next file if (inputdata) { fscanf(inputdata,"%d.",&value); fclose (inputdata); } bytes += value; return bytes; } /* Add WMM Parameter Element to Beacon, Probe Response, and (Re)Association Response frames. */ u8 hostapd_eid_wmm(struct hostapd_data *hapd, u8 *eid) { u8 *pos = eid; struct wmm_parameter_element *wmm = (struct wmm_parameter_element *) (pos + 2) ; int e; //JRP variables int activecnt = NULL; int activecnt2 = NULL; int stacnt = NULL; int stacnt2 = NULL; int bytes = NULL; int bytes2 = NULL; int minrate2 = NULL; int minrate = NULL; int ssidtest = NULL; double tput = 0.0; double tput2 = 0 .0; int printac = 0; int cwmin2 = 5; float cwdeltaf = 0.0; float cwdeltac = 0.0; int cwdelta = 0; int cwdelta2 = 0; openlog ("wmm.c", LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL0); if (!hapd >conf >wmm_enabled) return eid; //JRP check if we are in OpenWrt SSID Beacon Frame
PAGE 80
72 ssidtest = mainssid(hapd); //JRP if we are in OpenWrt2 SSID Beacon Frame, then fetch STA count in each BSS. if (ssidtest!=0) { activecnt = active_sta_openwrt(&minrate); activecnt2 = active_sta_openwrt2(&minrate2); //shift old paramhist to slot 0 paramhist[0] = paramhist[1]; //JRP get stacnt for each network, ensures oldbyte count is valid to use right now. stacnt = sta_cnt_openwrt(); stacnt2 = sta_cnt_openwrt2(); //JRP do tput calculation for OpenWrt if old stored numbers are valid, otherwise update //stored numbers and wait for next round. if (stacnt == oldstacnt) { bytes = bytes_openwrt(); tput = (bytes oldbytes)*8/2; oldbytes = bytes; } else { oldstacnt = stacnt; oldbytes = bytes; syslog(LOG_INFO,"STA joined or left OpenWrt, tput count is borked for this interval"); } //JRP do tput calculation for OpenWrt if old stored numbers are valid, otherwise update stored //numbers and wait for next round. if (stacnt2 == ol dstacnt2) { bytes2 = bytes_openwrt2(); tput2 = (bytes2 oldbytes2)*8/2; oldbytes2 = bytes2; } else { oldstacnt2 = stacnt2; oldbytes2 = bytes2; syslog(LOG_INFO,"STA joined or left OpenWrt2, tput count is borked for this interval"); } //JRP find cwmin of public network if (minrate != 0 && minrate2 != 0) { if (minrate > minrate2) { minrate = minrate2; } cwdeltaf = 1.442695040888964 log(2 40 / (float)minrate); cwdeltac = 1.442695040888964 log(( activecnt + activecnt2) / (float)activecnt); cwdelta = (int)(cwdeltaf + 0.5); cwdelta2 = (int)(cwdeltac + 0.5); if (cwdelta > 0) { cwmin2 = cwmin2 + cwdelta; } if (cwdelta2 > 0){ cwmin2 = cwmin2 + cwdelta2; } if (cwmin2 > 10) cwmin2 = 10; } } eid[0] = WLAN_EID_VENDOR_SPECIFIC; wmm >oui[0] = 0x00; wmm >oui[1] = 0x50; wmm >oui[2] = 0xf2;
PAGE 81
73 wmm >oui_type = WMM_OUI_TYPE; wmm >oui_subtype = WMM_OUI_SUBTYPE_PARAMETER_ELEMENT; wmm >version = WMM_VERSION; //wmm >qos_info = hapd > parameter_set_count & 0xf; if (hapd >conf >wmm_uapsd && (hapd >iface >drv_flags & WPA_DRIVER_FLAGS_AP_UAPSD)) wmm >qos_info |= 0x80; wmm >reserved = 0; /* fill in a parameter set record for each AC */ for (e = 0; e < 4; e++) { struct wmm _ac_parameter *ac = &wmm >ac[e]; struct hostapd_wmm_ac_params *acp = &hapd >iconf >wmm_ac_params[e]; //JRP adding logic for cwmin adjustments for BE ac e = 0 if (e == 0 && ssidtest != 0) { acp >cwmin = cwmin2; syslog (LOG_INFO, "AC %d A IFS = %d, CWmin = %d, CWmax = %d, TXop = %d, minrate = %d, minrate2 = %d, tput = %.2f, tput2 = %.2f, cwdeltaf = %.2f, cwdelta =%d, cwdeltac = %.2f, cwdelta2 = %d", e, acp >aifs,acp >cwmin, acp >cwmax,acp >txop_limit,minrate,minrate2,tput,t put2,cwdeltaf, cwdelta,cwdeltac,cwdelta2); paramhist[1] = cwmin2; } else if(e == 0){ acp >cwmin = 4; } ac >aci_aifsn = wmm_aci_aifsn(acp >aifs, acp >admission_control_mandatory, e); ac >cw = wmm_ecw(acp > cwmin, acp >cwmax); ac >txop_limit = host_to_le16(acp >txop_limit); } //JRP if the params have changed then increment the param_set_count for the hapd we are in. if ( (ssidtest != 0) && (paramhist[0] != paramhist[1]) ) { hapd >parameter_set_coun t++; } //JRP update parameter set count element. the 0xf bit is to prevent sending in a too big //a number it cuts is off at 16 bit number wmm >qos_info = hapd >parameter_set_count & 0xf; pos = (u8 *) (wmm + 1); eid[1] = pos eid 2; /* eleme nt length */ closelog(); return pos; } /* This function is called when a station sends an association request with WMM info element. The function returns 1 on success or 0 on any error in WMM element. eid does not include Element ID and Length octets. */ int hostapd_eid_wmm_valid(struct hostapd_data *hapd, const u8 *eid, size_t len) { struct wmm_information_element *wmm;
PAGE 82
74 wpa_hexdump(MSG_MSGDUMP, "WMM IE", eid, len); if (len < sizeof(st ruct wmm_information_element)) { wpa_printf(MSG_DEBUG, "Too short WMM IE (len=%lu)", (unsigned long) len); return 0; } wmm = (struct wmm_information_element *) eid; wpa_printf(MSG_DEBUG, "Validating WMM IE: OUI %02x:%02x:%02x "OUI typ e %d OUI sub type %d version %d QoS info 0x%x", wmm >oui[0], wmm >oui[1], wmm >oui[2], wmm >oui_type, wmm >oui_subtype, wmm >version, wmm >qos_info); if (wmm >oui_subtype != WMM_OUI_SUBTYPE_INFORMATION_ELEMENT || wmm >version != WMM_VERS ION) { wpa_printf(MSG_DEBUG, "Unsupported WMM IE Subtype/Version"); return 0; } return 1; } static void wmm_send_action(struct hostapd_data *hapd, const u8 *addr, const struct wmm_tspec_element *tspec, u8 action_code, u8 dialogue_token, u8 status_code) { u8 buf[256]; struct ieee80211_mgmt *m = (struct ieee80211_mgmt *) buf; struct wmm_tspec_element *t = (struct wmm_tspec_element *) m >u.action.u.wmm_action.variable; int len; hostapd_logger(ha pd, addr, HOSTAPD_MODULE_IEEE80211, HOSTAPD_LEVEL_DEBUG, "action response reason %d", status_code); os_memset(buf, 0, sizeof(buf)); m >frame_control = IEEE80211_FC(WLAN_FC_TYPE_MGMT, WLAN_FC_STYPE_ACTION); os_memcpy(m >da, addr, ETH_ALEN); os_memcpy(m >sa, hapd >own_addr, ETH_ALEN); os_memcpy(m >bssid, hapd >own_addr, ETH_ALEN); m >u.action.category = WLAN_ACTION_WMM; m >u.action.u.wmm_action.action_code = action_code; m >u.action.u.wmm_action.dialog_token = dialogue_token; m >u.action.u.wmm_action.status_code = status_code; os_memcpy(t, tspec, sizeof(struct wmm_tspec_element)); len = ((u8 *) (t + 1)) buf; if (hostapd_drv_send_mlme(hapd, m, len, 0) < 0) perror("wmm_send_action: send"); } int wmm_process_tspec(struc t wmm_tspec_element *tspec) { int medium_time, pps, duration; int up, psb, dir, tid; u16 val, surplus; up = (tspec >ts_info[1] >> 3) & 0x07; psb = (tspec >ts_info[1] >> 2) & 0x01; dir = (tspec >ts_info[0] >> 5) & 0x03; tid = (tspec >ts_info[0] >> 1 ) & 0x0f; wpa_printf(MSG_DEBUG, "WMM: TS Info: UP=%d PSB=%d Direction=%d TID=%d", up, psb, dir, tid); val = le_to_host16(tspec >nominal_msdu_size); wpa_printf(MSG_DEBUG, "WMM: Nominal MSDU Size: %d%s", val & 0x7fff, val & 0x8000 ? (fixed)" : ""); wpa_printf(MSG_DEBUG, "WMM: Mean Data Rate: %u bps",
PAGE 83
75 le_to_host32(tspec >mean_data_rate)); wpa_printf(MSG_DEBUG, "WMM: Minimum PHY Rate: %u bps", le_to_host32(tspec >minimum_phy_rate)); val = le_to_host16(tspec > surplus_bandwidth_allowance); wpa_printf(MSG_DEBUG, "WMM: Surplus Bandwidth Allowance: %u.%04u", val >> 13, 10000 (val & 0x1fff) / 0x2000); val = le_to_host16(tspec >nominal_msdu_size); if (val == 0) { wpa_printf(MSG_DEBUG, "WMM: Invalid Nomin al MSDU Size (0)"); return WMM_ADDTS_STATUS_INVALID_PARAMETERS; } /* pps = Ceiling((Mean Data Rate / 8) / Nominal MSDU Size) */ pps = ((le_to_host32(tspec >mean_data_rate) / 8) + val 1) / val; wpa_printf(MSG_DEBUG, "WMM: Packets per second estimate for TSPEC: %d", pps); if (le_to_host32(tspec >minimum_phy_rate) < 1000000) { wpa_printf(MSG_DEBUG, "WMM: Too small Minimum PHY Rate"); return WMM_ADDTS_STATUS_INVALID_PARAMETERS; } duration = (le_to_host16(tspec >nominal_msdu_size) & 0x7fff) 8 / (le_to_host32(tspec >minimum_phy_rate) / 1000000) + 50 /* FIX: proper SIFS + ACK duration */; /* unsigned binary number with an implicit binary point after the leftmost 3 bits, i.e., 0x2000 = 1.0 */ surplus = le_to_host16(tspec >surplus_b andwidth_allowance); if (surplus <= 0x2000) { wpa_printf(MSG_DEBUG, "WMM: Surplus Bandwidth Allowance not "greater than unity"); return WMM_ADDTS_STATUS_INVALID_PARAMETERS; } medium_time = surplus pps duration / 0x2000; wpa_printf(MSG_DEBUG, "WMM: Estimated medium time: %u", medium_time); /* TODO: store list of granted (and still active) TSPECs and check whether there is available medium time for this request. For now, just refuse requests that would by them selves take very large portion of the available bandwidth. */ if (medium_time > 750000) { wpa_printf(MSG_DEBUG, "WMM: Refuse TSPEC request for over "75%% of available bandwidth"); return WMM_ADDTS_STATUS_REFUSED; } /* Convert to 32 mi croseconds per second unit */ tspec >medium_time = host_to_le16(medium_time / 32); return WMM_ADDTS_STATUS_ADMISSION_ACCEPTED; } static void wmm_addts_req(struct hostapd_data *hapd, const struct ieee80211_mgmt *mgmt, struct wmm_tspec_element *tspec, size_t len) { const u8 *end = ((const u8 *) mgmt) + len; int res; if ((const u8 *) (tspec + 1) > end) { wpa_printf(MSG_DEBUG, "WMM: TSPEC overflow in ADDTS Request"); return; } wpa_printf(MSG_DEBUG, "WMM: ADDTS Request (Dialog Token %d) for TSPEC
PAGE 84
76 "from MACSTR, mgmt >u.action.u.wmm_action.dialog_token, MAC2STR(mgmt >sa)); res = wmm_process_tspec(tspec); wpa_printf(MSG_DEBUG, "WMM: ADDTS processing result: %d", res); wmm_send_action(hapd, m gmt >sa, tspec, WMM_ACTION_CODE_ADDTS_RESP, mgmt >u.action.u.wmm_action.dialog_token, res); } void hostapd_wmm_action(struct hostapd_data *hapd, const struct ieee80211_mgmt *mgmt, size_t len) { int action_code; int left = len IEEE80211_HDRLEN 4; const u8 *pos = ((const u8 *) mgmt) + IEEE80211_HDRLEN + 4; struct ieee802_11_elems elems; struct sta_info *sta = ap_get_sta(hapd, mgmt >sa); /* check that the request comes from a valid station */ if (!sta || (sta >flags & (WLAN_STA_ASSOC | WLAN_STA_WMM)) != (WLAN_STA_ASSOC | WLAN_STA_WMM)) { hostapd_logger(hapd, mgmt >sa, HOSTAPD_MODULE_IEEE80211, HOSTAPD_LEVEL_DEBUG, "wmm action received is not from associated wmm" station"); /* TO DO: respond with action frame refused status code */ return; } /* extract the tspec info element */ if (ieee802_11_parse_elems(pos, left, &elems, 1) == ParseFailed) { hostapd_logger(hapd, mgmt >sa, HOSTAPD_MODULE_IEEE80211, HOSTAPD_LEVEL_ DEBUG, "hostapd_wmm_action could not parse wmm "action"); /* TODO: respond with action frame invalid parameters status code */ return; } if (!elems.wmm_tspec || elems.wmm_tspec_len != (sizeof(struct wmm_tspec_element) 2)) { hostapd_logger(hapd, mgmt >sa, HOSTAPD_MODULE_IEEE80211, HOSTAPD_LEVEL_DEBUG, "hostapd_wmm_action missing or wrong length "tspec"); /* TODO: respond with action frame invalid parameters status code */ return; } /* TODO: check the request is for an AC with ACM set, if not, refuse request */ action_code = mgmt >u.action.u.wmm_action.action_code; switch (action_code) { case WMM_ACTION_CO DE_ADDTS_REQ: wmm_addts_req(hapd, mgmt, (struct wmm_tspec_element *) (elems.wmm_tspec 2), len); return; #if 0 /* TODO: needed for client implementation */ case WMM_ACTION_CODE_ADDTS_RESP: wmm_setup_request(hapd, mgmt, len); return; /* TODO: handle station teardown requests */
PAGE 85
77 case WMM_ACTION_CODE_DELTS: wmm_teardown(hapd, mgmt, len); return; #endif } hostapd_logger(hapd, mgmt >sa, HOSTAPD_MODULE_IEEE80211, HOSTAPD_LEVEL_DEBUG, "hostapd_wmm_action unknown action code %d", action_code); } WMM.H Full Path: /Volumes/OpenWrtAA/openwrt/build_dir/target mips_r2_uClibc 0.9.33.2/hostapd wpad mini/hostapd 20130405/src/ap /wmm.h /* hostapd / WMM (Wi Fi Mu ltimedia) Copyright 2002 2003, Instant802 Networks, Inc. Copyright 2005 2006, Devicescape Software, Inc.. */ #ifndef WME_H #define WME_H struct ieee80211_mgmt; struct wmm_tspec_element; //JRP added u8 paramhist[2]; int oldbytes; int oldbytes2; int oldstacnt; int oldstacnt2; u8 hostapd_eid_wmm(struct hostapd_data *hapd, u8 *eid); int hostapd_eid_wmm_valid(struct hostapd_data *hapd, const u8 *eid, size_t len); vo id hostapd_wmm_action(struct hostapd_data *hapd, const struct ieee80211_mgmt *mgmt, size_t len); int wmm_process_tspec(struct wmm_tspec_element *tspec); #endif /* WME_H */ APLIST.C Full Path: /Volumes/OpenWrtAA/openwrt/build_dir/target mips_r2_uClibc 0.9.33.2/hostapd wpad mini/hostapd 20130405/src/ap /aplist.c /* hostapd / AP table Copyright (c) 2002 2009, Jouni Malinen
Copyright (c) 2003 2004, Instant802 Networks, Inc. Co pyright (c) 2006, Devicescape Software, Inc. This software may be distributed under the terms of the BSD license. See README for more details.
PAGE 86
78 */ #include "utils/includes.h" #include "utils/common.h" #include "utils/eloop.h" #include "common/ie ee802_11_defs.h" #include "common/ieee802_11_common.h" #include "drivers/driver.h" #include "hostapd.h" #include "ap_config.h" #include "ieee802_11.h" #include "sta_info.h" #include "beacon.h" #include "ap_list.h" /* AP list is a double linked list with head >prev pointing to the end of the list and tail >next = NULL. Entries are moved to the head of the list whenever a beacon has been received from the AP in question. The tail entry in this link will thus be the least recently used entry. */ s tatic int ap_list_beacon_olbc(struct hostapd_iface *iface, struct ap_info *ap) { int i; if (iface >current_mode >mode != HOSTAPD_MODE_IEEE80211G || iface >conf >channel != ap >channel) return 0; if (ap >erp != 1 && (ap >erp & ERP_INFO_NON_ERP_ PRESENT)) return 1; for (i = 0; i < WLAN_SUPP_RATES_MAX; i++) { int rate = (ap >supported_rates[i] & 0x7f) 5; if (rate == 60 || rate == 90 || rate > 110) return 0; } return 1; } static struct ap_info ap_get_ap(struct hostapd_iface *iface, const u8 *ap) { struct ap_info *s; s = iface >ap_hash[STA_HASH(ap)]; while (s != NULL && os_memcmp(s >addr, ap, ETH_ALEN) != 0) s = s >hnext; return s; } static void ap_ap_list_add(struct hostapd_iface *iface, struct ap_info *ap) { if (if ace >ap_list) { ap >prev = iface >ap_list >prev; iface >ap_list >prev = ap; } else ap >prev = ap; ap >next = iface >ap_list; iface >ap_list = ap; } static void ap_ap_list_del(struct hostapd_iface *iface, struct ap_info *ap) { if (iface >ap_list == ap)
PAGE 87
79 iface >ap_list = ap >next; else ap >prev >next = ap >next; if (ap >next) ap >next >prev = ap >prev; else if (iface >ap_list) iface >ap_list >prev = ap >prev; } static void ap_ap_hash_add(struct hostapd_iface *iface, struct ap_info *ap ) { ap >hnext = iface >ap_hash[STA_HASH(ap >addr)]; iface >ap_hash[STA_HASH(ap >addr)] = ap; } static void ap_ap_hash_del(struct hostapd_iface *iface, struct ap_info *ap) { struct ap_info *s; s = iface >ap_hash[STA_HASH(ap >addr)]; if (s == NULL) r eturn; if (os_memcmp(s >addr, ap >addr, ETH_ALEN) == 0) { iface >ap_hash[STA_HASH(ap >addr)] = s >hnext; return; } while (s >hnext != NULL && os_memcmp(s >hnext >addr, ap >addr, ETH_ALEN) != 0) s = s >hnext; if (s >hnext != NULL) s > hnext = s >hnext >hnext; else printf("AP: could not remove AP MACSTR from hash table \ n", MAC2STR(ap >addr)); } static void ap_free_ap(struct hostapd_iface *iface, struct ap_info *ap) { ap_ap_hash_del(iface, ap); ap_ap_list_del(iface, ap ); iface >num_ap -; os_free(ap); } static void hostapd_free_aps(struct hostapd_iface *iface) { struct ap_info *ap, *prev; ap = iface >ap_list; while (ap) { prev = ap; ap = ap >next; ap_free_ap(iface, prev); } iface >ap_list = NULL; } static struct ap_info ap_ap_add(struct hostapd_iface *iface, const u8 *addr) { struct ap_info *ap;
PAGE 88
80 ap = os_zalloc(sizeof(struct ap_info)); if (ap == NULL) return NULL; /* initialize AP info data */ os_memcpy(ap >addr, addr, ETH_ALEN); ap_ap_lis t_add(iface, ap); iface >num_ap++; ap_ap_hash_add(iface, ap); if (iface >num_ap > iface >conf >ap_table_max_size && ap != ap >prev) { wpa_printf(MSG_DEBUG, "Removing the least recently used AP MACSTR from AP table", MAC2STR(ap >prev > addr)); ap_free_ap(iface, ap >prev); } return ap; } void ap_list_process_beacon(struct hostapd_iface *iface, const struct ieee80211_mgmt *mgmt, struct ieee802_11_elems *elems, struct hostapd_frame_info *fi) { struct ap_info *ap ; struct os_time now; int new_ap = 0; int set_beacon = 0; if (iface >conf >ap_table_max_size < 1) return; ap = ap_get_ap(iface, mgmt >bssid); if (!ap) { ap = ap_ap_add(iface, mgmt >bssid); if (!ap) { printf("Failed to allocate AP informati on entry \ n"); return; } new_ap = 1; } merge_byte_arrays(ap >supported_rates, WLAN_SUPP_RATES_MAX, elems >supp_rates, elems >supp_rates_len, elems >ext_supp_rates, elems >ext_supp_rates_len); if (elems >erp_info && elems >erp_info_len == 1) ap >erp = elems >erp_info[0]; else ap >erp = 1; if (elems >ds_params && elems >ds_params_len == 1) ap >channel = elems >ds_params[0]; else if (elems >ht_operation && elems >ht_operation_len >= 1) ap >channel = elems >ht_operation[0]; el se if (fi) ap >channel = fi >channel; if (elems >ht_capabilities) ap >ht_support = 1; else ap >ht_support = 0; os_get_time(&now); ap >last_beacon = now.sec; if (!new_ap && ap != iface >ap_list) { /* move AP entry into the beginning of the list so that the oldest entry is always in the end of the list */
PAGE 89
81 ap_ap_list_del(iface, ap); ap_ap_list_add(iface, ap); } if (!iface >olbc && ap_list_beacon_olbc(iface, ap)) { iface >olbc = 1; wpa_printf(MSG_DEBUG, "OLBC AP detected: MACSTR (channel %d) enable protection", MAC2STR(ap >addr), ap >channel); set_beacon++; } #ifdef CONFIG_IEEE80211N if (!iface >olbc_ht && !ap >ht_support && (ap >channel == 0 || ap >channel == iface >conf >channel || a p >channel == iface >conf >channel + iface >conf >secondary_channel 4)) { iface >olbc_ht = 1; hostapd_ht_operation_update(iface); wpa_printf(MSG_DEBUG, "OLBC HT AP detected: MACSTR (channel %d) enable protection", MAC2STR(ap >addr), ap >channel); set_beacon++; } #endif /* CONFIG_IEEE80211N */ //JRP //if (set_beacon) //ieee802_11_update_beacons(iface); } static void ap_list_timer(void *eloop_ctx, void *timeout_ctx) { struct hostapd_iface *iface = eloop_ctx; struct os_time now; struct ap_info *ap; int set_beacon = 0; eloop_register_timeout(10, 0, ap_list_timer, iface, NULL); if (!iface >ap_list) return; os_get_time(&now); while (iface >ap_list) { ap = iface >ap_list >prev; if (ap >last_beacon + iface >conf >ap_table_expiration_time >= now.sec) break; ap_free_ap(iface, ap); } if (iface >olbc || iface >olbc_ht) { int olbc = 0; int olbc_ht = 0; ap = iface >ap_list; while (ap && (olbc == 0 || olbc_ht == 0)) { if (ap_list_beacon_olbc(iface, ap)) olbc = 1; if (!ap >ht_support) olbc_ht = 1; ap = ap >next; } if (!olbc && iface >olbc) {
PAGE 90
82 wpa_printf(MSG_DEBUG, "OLBC not detected anymore"); iface >olbc = 0; set_beacon++; } #ifdef CONFIG_IEEE80 211N if (!olbc_ht && iface >olbc_ht) { wpa_printf(MSG_DEBUG, "OLBC HT not detected anymore"); iface >olbc_ht = 0; hostapd_ht_operation_update(iface); set_beacon++; } #endif /* CONFIG_IEEE80211N */ } //JRP //if (set_beacon) // ieee802_11_update_beacons(iface); } //JRP beacon update timer with no extra triggers to accidentally fire it. static void beacon_update_timer(void *eloop_ctx, void *timeout_ctx) { struct hostapd_iface *iface = eloop_ctx; eloop_register_timeout(2,0,beac on_update_timer, iface, NULL); ieee802_11_update_beacons(iface); } int ap_list_init(struct hostapd_iface *iface) { eloop_register_timeout(10, 0, ap_list_timer, iface, NULL); //JRP kicking off my timer loop eloop_register_timeout(2,0,beacon_update_tim er, iface, NULL); return 0; } void ap_list_deinit(struct hostapd_iface *iface) { eloop_cancel_timeout(ap_list_timer, iface, NULL); //JRP cancel off my timer loop eloop_cancel_timeout(beacon_update_timer, iface, NULL); hostapd_free_aps(iface); } HOSTAPD.C Full Path: /Volumes/OpenWrtAA/openwrt/build_dir/target mips_r2_uClibc 0.9.33.2/hostapd wpad mini/hostapd 20130405/src/ap /hostapd.c /* hostapd / Initialization and configuration Copyright (c) 2002 2012, Jouni Malinen
This software may be distributed under the terms of the BSD license. See README for more details. */ #include "utils/includes.h" #include "utils/common.h" #include "utils/eloop.h" #include "common/ieee802_11_defs.h" #include "radius/radius_clien t.h" #include "radius/radius_das.h" #include "drivers/driver.h" #include "hostapd.h" #include "authsrv.h"
PAGE 91
83 #include "sta_info.h" #include "accounting.h" #include "ap_list.h" #include "beacon.h" #include "iapp.h" #include "ieee802_1x.h" #include "ieee802_11.h" #include "ieee802_11_auth.h" #include "vlan_init.h" #include "wpa_auth.h" #include "wps_hostapd.h" #include "hw_features.h" #include "wpa_auth_glue.h" #include "ap_drv_ops.h" #include "ap_config.h" #include "p2p_hostapd.h" #include "gas_serv .h" //JRP #include "wmm.h" static int hostapd_flush_old_stations(struct hostapd_data *hapd, u16 reason); static int hostapd_setup_encryption(char *iface, struct hostapd_data *hapd); static int hostapd_broadcast_wep_clear(struct hostapd_data *hapd); extern int wpa_debug_level; extern struct wpa_driver_ops *wpa_drivers[]; int hostapd_for_each_interface(struct hapd_interfaces *interfaces, int (*cb)(struct hostapd_iface *iface, void *ctx), void *ctx) { size_t i; int ret; for (i = 0; i < interfaces >count; i++) { ret = cb(interfaces >iface[i], ctx); if (ret) return ret; } return 0; } static void hostapd_reload_bss(struct hostapd_data *hapd) { #ifndef CONFIG_NO_RADIUS radius_client_reconfig(hapd >radius, hapd >conf >radius ); #endif /* CONFIG_NO_RADIUS */ if (hostapd_setup_wpa_psk(hapd >conf)) { wpa_printf(MSG_ERROR, "Failed to re configure WPA PSK "after reloading configuration"); } if (hapd >conf >ieee802_1x || hapd >conf >wpa) hostapd_set_drv_ieee8021x(hapd, hapd >conf >iface, 1); else hostapd_set_drv_ieee8021x(hapd, hapd >conf >iface, 0); if (hapd >conf >wpa && hapd >wpa_auth == NULL) { hostapd_setup_wpa(hapd); if (hapd >wpa_auth) wpa_init_keys(hapd >wpa_auth); } e lse if (hapd >conf >wpa) { const u8 *wpa_ie; size_t wpa_ie_len; hostapd_reconfig_wpa(hapd);
PAGE 92
84 wpa_ie = wpa_auth_get_wpa_ie(hapd >wpa_auth, &wpa_ie_len); if (hostapd_set_generic_elem(hapd, wpa_ie, wpa_ie_len)) wpa_printf(MSG_ERROR, "Failed to con figure WPA IE for "the kernel driver."); } else if (hapd >wpa_auth) { wpa_deinit(hapd >wpa_auth); hapd >wpa_auth = NULL; hostapd_set_privacy(hapd, 0); hostapd_setup_encryption(hapd >conf >iface, hapd); hostapd_set_generic_elem(hapd, (u8 *) "", 0); } ieee802_11_set_beacon(hapd); hostapd_update_wps(hapd); if (hapd >conf >ssid.ssid_set && hostapd_set_ssid(hapd, hapd >conf >ssid.ssid, hapd >conf >ssid.ssid_len)) { wpa_printf(MSG_ERROR, "Could not set SSID for kernel driver"); /* try to continue */ } wpa_printf(MSG_DEBUG, "Reconfigured interface %s", hapd >conf >iface); } static void hostapd_clear_old(struct hostapd_iface *iface) { size_t j; /* Deauthenticate all stations since the new configuration may no t allow them to use the BSS anymore. */ for (j = 0; j < iface >num_bss; j++) { hostapd_flush_old_stations(iface >bss[j], WLAN_REASON_PREV_AUTH_NOT_VALID); hostapd_broadcast_wep_clear(iface >bss[j]); #ifndef CONFIG_NO_RADIUS /* TODO: update dynamic data based on changed configuration items (e.g., open/close sockets, etc.) */ radius_client_flush(iface >bss[j] >radius, 0); #endif /* CONFIG_NO_RADIUS */ } } int hostapd_reload_config(struct hostapd_iface *iface) { struct hostapd _data *hapd = iface >bss[0]; struct hostapd_config *newconf, *oldconf; size_t j; if (iface >config_fname == NULL) { /* Only in memory config in use assume it has been updated */ hostapd_clear_old(iface); for (j = 0; j < iface >num_bss; j++) hostapd_reload_bss(iface >bss[j]); return 0; } if (iface >interfaces == NULL || iface >interfaces >config_read_cb == NULL) return 1; newconf = iface >interfaces >config_read_cb(iface >config_fname); if (newconf == NULL) return 1; hostapd_clear_old(iface);
PAGE 93
85 oldconf = hapd >iconf; iface >conf = newconf; hostapd_select_hw_mode(iface); iface >freq = hostapd_hw_get_freq(hapd, newconf >channel); if (hostapd_set_freq(hapd, newconf >hw_mode, iface >freq, newconf >channel, newconf >ieee80211n, newconf >ieee80211ac, newconf >secondary_channel, newconf >vht_oper_chwidth, newconf >vht_oper_centr_freq_seg0_idx, newconf >vht_oper_centr_freq_seg1_idx)) { wpa_printf(MSG_ERROR, "Could not set channel for "kernel driver"); } if (iface >current_mode) hostapd_prepare_rates(iface, iface >current_mode); for (j = 0; j < iface >num_bss; j++) { hapd = iface >bss[j]; hapd >iconf = newconf; hapd >conf = &newconf >bss[j]; hostapd_reload_bss(hapd); } hostapd_config_free(oldconf); return 0; } static void hostapd_broadcast_key_clear_iface(struct hostapd_data *hapd, char *ifname) { int i; for (i = 0; i < NUM_WEP_KEYS; i++) { if (hostapd_drv_set_key(ifnam e, hapd, WPA_ALG_NONE, NULL, i, 0, NULL, 0, NULL, 0)) { wpa_printf(MSG_DEBUG, "Failed to clear default "encryption keys (ifname=%s keyidx=%d)", ifname, i); } } #ifdef CONFIG_IEEE80211W if (hapd >conf >ieee80211w) { for (i = NU M_WEP_KEYS; i < NUM_WEP_KEYS + 2; i++) { if (hostapd_drv_set_key(ifname, hapd, WPA_ALG_NONE, NULL, i, 0, NULL, 0, NULL, 0)) { wpa_printf(MSG_DEBUG, "Failed to clear "default mgmt encryption keys "(ifname=%s keyidx=%d)", ifname, i); } } } #endif /* CONFIG_IEEE80211W */ } static int hostapd_broadcast_wep_clear(struct hostapd_data *hapd) { hostapd_broadcast_key_clear_iface(hapd, hapd >conf >iface); return 0; }
PAGE 94
86 static int hostapd_broadc ast_wep_set(struct hostapd_data *hapd) { int errors = 0, idx; struct hostapd_ssid *ssid = &hapd >conf >ssid; idx = ssid >wep.idx; if (ssid >wep.default_len && hostapd_drv_set_key(hapd >conf >iface, hapd, WPA_ALG_WEP, broadcast_ether_addr, idx 1, NULL, 0, ssid >wep.key[idx], ssid >wep.len[idx])) { wpa_printf(MSG_WARNING, "Could not set WEP encryption."); errors++; } if (ssid >dyn_vlan_keys) { size_t i; for (i = 0; i <= ssid >max_dyn_vlan_keys; i++) { const char *ifname; struct hostapd_wep_keys *key = ssid >dyn_vlan_keys[i]; if (key == NULL) continue; ifname = hostapd_get_vlan_id_ifname(hapd >conf >vlan, i); if (ifname == NULL) continue; idx = key >idx; if (hostapd_drv_set_key(ifname, h apd, WPA_ALG_WEP, broadcast_ether_addr, idx, 1, NULL, 0, key >key[idx], key >len[idx])) { wpa_printf(MSG_WARNING, "Could not set "dynamic VLAN WEP encryption."); errors++; } } } return errors; } static void hostapd_free_hapd_data(struct hostapd_data *hapd) { iapp_deinit(hapd >iapp); hapd >iapp = NULL; accounting_deinit(hapd); hostapd_deinit_wpa(hapd); vlan_deinit(hapd); hostapd_acl_deinit(hapd); #ifndef CONFIG_NO_RADIUS radius_client_deinit(hapd >radiu s); hapd >radius = NULL; radius_das_deinit(hapd >radius_das); hapd >radius_das = NULL; #endif /* CONFIG_NO_RADIUS */ hostapd_deinit_wps(hapd); authsrv_deinit(hapd); if (hapd >interface_added && hostapd_if_remove(hapd, WPA_IF_AP_BSS, hapd >con f >iface)) { wpa_printf(MSG_WARNING, "Failed to remove BSS interface %s", hapd >conf >iface); } os_free(hapd >probereq_cb);
PAGE 95
87 hapd >probereq_cb = NULL; #ifdef CONFIG_P2P wpabuf_free(hapd >p2p_beacon_ie); hapd >p2p_beacon_ie = NULL; wpabuf_free(hapd >p2p_probe_resp_ie); hapd >p2p_probe_resp_ie = NULL; #endif /* CONFIG_P2P */ wpabuf_free(hapd >time_adv); #ifdef CONFIG_INTERWORKING gas_serv_deinit(hapd); #endif /* CONFIG_INTERWORKING */ #ifdef CONFIG_SQLITE os_free(hapd >tmp_eap_ user.identity); os_free(hapd >tmp_eap_user.password); #endif /* CONFIG_SQLITE */ } /** hostapd_cleanup Per BSS cleanup (deinitialization) @hapd: Pointer to BSS data This function is used to free all per BSS data structures and resources. This gets called in a loop for each BSS between calls to hostapd_cleanup_iface_pre() and hostapd_cleanup_iface() when an interface is deinitialized. Most of the modules that are initialized in hostapd_setup_bss() are deinitialized here. */ sta tic void hostapd_cleanup(struct hostapd_data *hapd) { if (hapd >iface >interfaces && hapd >iface >interfaces >ctrl_iface_deinit) hapd >iface >interfaces >ctrl_iface_deinit(hapd); hostapd_free_hapd_data(hapd); } /** hostapd_cleanup_iface_pre Preliminary per interface cleanup @iface: Pointer to interface data This function is called before per BSS data structures are deinitialized with hostapd_cleanup(). */ static void hostapd_cleanup_iface_pre(struct hostapd_iface *iface) { } static void hostapd_cleanup_iface_partial(struct hostapd_iface *iface) { hostapd_deinit_ht(iface); hostapd_free_hw_features(iface >hw_features, iface >num_hw_features); iface >hw_features = NULL; os_free(iface >current_rates); iface >current_rates = N ULL; os_free(iface >basic_rates); iface >basic_rates = NULL; ap_list_deinit(iface); } /** hostapd_cleanup_iface Complete per interface cleanup @iface: Pointer to interface data
PAGE 96
88 This function is called after per BSS data structures are de initialized with hostapd_cleanup(). */ static void hostapd_cleanup_iface(struct hostapd_iface *iface) { hostapd_cleanup_iface_partial(iface); hostapd_config_free(iface >conf); iface >conf = NULL; os_free(iface >config_fname); os_free(iface >bss); os_free(iface); } static void hostapd_clear_wep(struct hostapd_data *hapd) { if (hapd >drv_priv) { hostapd_set_privacy(hapd, 0); hostapd_broadcast_wep_clear(hapd); } } static int hostapd_setup_encryption(char *iface, struct hostapd_data *hapd) { int i; hostapd_broadcast_wep_set(hapd); if (hapd >conf >ssid.wep.default_len) { hostapd_set_privacy(hapd, 1); return 0; } /* When IEEE 802.1X is not enabled, the driver may need to know how to set authentication algorithms for static WEP. */ hostapd_drv_set_authmode(hapd, hapd >conf >auth_algs); for (i = 0; i < 4; i++) { if (hapd >conf >ssid.wep.key[i] && hostapd_drv_set_key(iface, hapd, WPA_ALG_WEP, NULL, i, i == hapd >conf >ssid.wep.idx, NULL, 0, hapd >conf > ssid.wep.key[i], hapd >conf >ssid.wep.len[i])) { wpa_printf(MSG_WARNING, "Could not set WEP "encryption."); return 1; } if (hapd >conf >ssid.wep.key[i] && i == hapd >conf >ssid.wep.idx) hostapd_set_privacy(hapd, 1); } r eturn 0; } static int hostapd_flush_old_stations(struct hostapd_data *hapd, u16 reason) { int ret = 0; u8 addr[ETH_ALEN]; if (hostapd_drv_none(hapd) || hapd >drv_priv == NULL) return 0; wpa_dbg(hapd >msg_ctx, MSG_DEBUG, "Flushing old station entries"); if (hostapd_flush(hapd)) {
PAGE 97
89 wpa_msg(hapd >msg_ctx, MSG_WARNING, "Could not connect to "kernel driver"); ret = 1; } wpa_dbg(hapd >msg_ctx, MSG_DEBUG, "Deauthenticate all stations"); os_memset(addr, 0xff, ETH_ALEN); hostapd_drv_sta_d eauth(hapd, addr, reason); hostapd_free_stas(hapd); return ret; } /** hostapd_validate_bssid_configuration Validate BSSID configuration @iface: Pointer to interface data Returns: 0 on success, 1 on failure This function is used to va lidate that the configured BSSIDs are valid. */ static int hostapd_validate_bssid_configuration(struct hostapd_iface *iface) { u8 mask[ETH_ALEN] = { 0 }; struct hostapd_data *hapd = iface >bss[0]; unsigned int i = iface >conf >num_bss, bits = 0, j; int auto_addr = 0; if (hostapd_drv_none(hapd)) return 0; /* Generate BSSID mask that is large enough to cover the BSSIDs. */ /* Determine the bits necessary to cover the number of BSSIDs. */ for (i -; i; i >>= 1) bits++; /* Determine the bits necessary to any configured BSSIDs, if they are higher than the number of BSSIDs. */ for (j = 0; j < iface >conf >num_bss; j++) { if (hostapd_mac_comp_empty(iface >conf >bss[j].bssid) == 0) { if (j) auto_addr++; continue; } for (i = 0 ; i < ETH_ALEN; i++) { mask[i] |= iface >conf >bss[j].bssid[i] ^ hapd >own_addr[i]; } } if (!auto_addr) goto skip_mask_ext; for (i = 0; i < ETH_ALEN && mask[i] == 0; i++) ; j = 0; if (i < ETH_ALEN) { j = (5 i) 8; while (mask[i] != 0) { mask[i] >>= 1; j++; } } if (bits < j) bits = j;
PAGE 98
90 if (bits > 40) { wpa_printf(MSG_ERROR, "Too many bits in the BSSID mask (%u)", bits); return 1; } os_memset(mask, 0xff, ETH_ALEN); j = bits / 8; for (i = 5; i > 5 j; i -) mask[i] = 0; j = bits % 8; while (j -) mask[i] <<= 1; skip_mask_ext: wpa_printf(MSG_DEBUG, "BSS count %lu, BSSID mask MACSTR (%d bits)", (unsigned long) iface >conf >num_bss, MAC2STR(mask), bits); if (!auto_addr) return 0; for (i = 0; i < ETH_ALEN; i++) { if ((hapd >own_addr[i] & mask[i]) != hapd >own_addr[i]) { wpa_printf(MSG_ERROR, "Invalid BSSID mask MACSTR for start address MACSTR ".", MAC2STR(mask), MAC2STR(hapd >own_addr)); wpa_printf(MSG _ERROR, "Start address must be the "first address in the block (i.e., addr "AND mask == addr)."); return 1; } } return 0; } static int mac_in_conf(struct hostapd_config *conf, const void *a) { size_t i; for (i = 0; i < conf >num_bss; i++) { if (hostapd_mac_comp(conf >bss[i].bssid, a) == 0) { return 1; } } return 0; } #ifndef CONFIG_NO_RADIUS static int hostapd_das_nas_mismatch(struct hostapd_data *hapd, struct radius_das_attrs *attr) { /* TODO */ return 0; } static struct sta_info hostapd_das_find_sta(struct hostapd_data *hapd, struct radius_das_attrs *attr) { struct sta_info *sta = NULL; char buf[128]; if (attr >sta_addr) sta = ap_get_sta(hapd, attr >sta_addr);
PAGE 99
91 if (sta == NULL && attr >acct_session_id && attr >acct_session_id_len == 17) { for (sta = hapd >sta_list; sta; sta = sta >next) { os_snprintf(buf, sizeof(buf), "%08X %08X", sta >acct_session_id_hi, sta >acct_session_id_lo); if (os_memcmp(attr >ac ct_session_id, buf, 17) == 0) break; } } if (sta == NULL && attr >cui) { for (sta = hapd >sta_list; sta; sta = sta >next) { struct wpabuf *cui; cui = ieee802_1x_get_radius_cui(sta >eapol_sm); if (cui && wpabuf_len(cui) == attr >cui_len && os_memcmp(wpabuf_head(cui), attr >cui, attr >cui_len) == 0) break; } } if (sta == NULL && attr >user_name) { for (sta = hapd >sta_list; sta; sta = sta >next) { u8 *identity; size_t identity_len; identity = ieee802_1x_get_identity(sta >eapol_sm, &identity_len); if (identity && identity_len == attr >user_name_len && os_memcmp(identity, attr >user_name, identity_len) == 0) break; } } return sta; } static enum radius_ das_res hostapd_das_disconnect(void *ctx, struct radius_das_attrs *attr) { struct hostapd_data *hapd = ctx; struct sta_info *sta; if (hostapd_das_nas_mismatch(hapd, attr)) return RADIUS_DAS_NAS_MISMATCH; sta = hostapd_das_find_sta(hapd, attr); if (sta == NULL) return RADIUS_DAS_SESSION_NOT_FOUND; hostapd_drv_sta_deauth(hapd, sta >addr, WLAN_REASON_PREV_AUTH_NOT_VALID); ap_sta_deauthenticate(hapd, sta, WLAN_REASON_PREV_AUTH_NOT_VALID); return RADIUS_DAS_SUCCESS; } #endif /* CONFIG_NO_RADIUS */ /** hostapd_setup_bss Per BSS setup (initialization) @hapd: Pointer to BSS data @first: Whether this BSS is the first BSS of an interface This function is used to initialize all per BSS data structures and resource s. This gets called in a loop for each BSS when an interface is
PAGE 100
92 initialized. Most of the modules that are initialized here will be deinitialized in hostapd_cleanup(). */ static int hostapd_setup_bss(struct hostapd_data *hapd, int first) { struct ho stapd_bss_config *conf = hapd >conf; u8 ssid[HOSTAPD_MAX_SSID_LEN + 1]; int ssid_len, set_ssid; char force_ifname[IFNAMSIZ]; u8 if_addr[ETH_ALEN]; if (!first) { if (hostapd_mac_comp_empty(hapd >conf >bssid) == 0) { /* Allocate the next available BSSID. */ do { inc_byte_array(hapd >own_addr, ETH_ALEN); } while (mac_in_conf(hapd >iconf, hapd >own_addr)); } else { /* Allocate the configured BSSID. */ os_memcpy(hapd >own_addr, hapd >conf >bssid, ETH_ALEN); if (hostapd_mac_comp(h apd >own_addr, hapd >iface >bss[0] >own_addr) == 0) { wpa_printf(MSG_ERROR, "BSS '%s' may not have "BSSID set to the MAC address of "the radio", hapd >conf >iface); return 1; } } hapd >interface_added = 1; if (hostapd_if_add(hapd >iface >bss[0], WPA_IF_AP_BSS, hapd >conf >iface, hapd >own_addr, hapd, &hapd >drv_priv, force_ifname, if_addr, hapd >conf >bridge[0] ? hapd >conf >bridge : NULL)) { wpa_printf(MSG_ERROR, "Failed to add BSS (BSSID=" MACSTR ")", MAC2STR(hapd >own_addr)); return 1; } } if (conf >wmm_enabled < 0) conf >wmm_enabled = hapd >iconf >ieee80211n; hostapd_flush_old_stations(hapd, WLAN_REASON_PREV_AUTH_NOT_VALID); hostapd_set_privacy(hapd, 0); hostapd_broadcast_wep_clear(hapd); if (hostapd_setup_encryption(hapd >conf >iface, hapd)) return 1; /* Fetch the SSID from the system and use it or, if one was specified in the config file, verify they matc h. */ ssid_len = hostapd_get_ssid(hapd, ssid, sizeof(ssid)); if (ssid_len < 0) { wpa_printf(MSG_ERROR, "Could not read SSID from system"); return 1; } if (conf >ssid.ssid_set) { /* If SSID is specified in the config file and it differs from what is being used then force installation of the new SSID. */ set_ssid = (conf >ssid.ssid_len != (size_t) ssid_len ||
PAGE 101
93 os_memcmp(conf >ssid.ssid, ssid, ssid_len) != 0); } else { /* No SSID in the config file; just use the o ne we got from the system. */ set_ssid = 0; conf >ssid.ssid_len = ssid_len; os_memcpy(conf >ssid.ssid, ssid, conf >ssid.ssid_len); } if (!hostapd_drv_none(hapd)) { wpa_printf(MSG_ERROR, "Using interface %s with hwaddr MACSTR an d ssid \ "%s \ "", hapd >conf >iface, MAC2STR(hapd >own_addr), wpa_ssid_txt(hapd >conf >ssid.ssid, hapd >conf >ssid.ssid_len)); } if (hostapd_setup_wpa_psk(conf)) { wpa_printf(MSG_ERROR, "WPA PSK setup failed."); return 1; } /* Set SSID for the kernel driver (to be used in beacon and probe response frames) */ if (set_ssid && hostapd_set_ssid(hapd, conf >ssid.ssid, conf >ssid.ssid_len)) { wpa_printf(MSG_ERROR, "Could not set SSID for kernel driver"); return 1; } if (wpa_debug_level == MSG_MSGDUMP) conf >radius >msg_dumps = 1; #ifndef CONFIG_NO_RADIUS hapd >radius = radius_client_init(hapd, conf >radius); if (hapd >radius == NULL) { wpa_printf(MSG_ERROR, "RADIUS client initialization failed."); return 1; } if (hapd >conf >radius_das_port) { struct radius_das_conf das_conf; os_memset(&das_conf, 0, sizeof(das_conf)); das_conf.port = hapd >conf >radius_das_port; das_conf.shared_secret = hapd >conf >radius_das_shared_secret; das_conf.shared_secret_le n = hapd >conf >radius_das_shared_secret_len; das_conf.client_addr = &hapd >conf >radius_das_client_addr; das_conf.time_window = hapd >conf >radius_das_time_window; das_conf.require_event_timestamp = hapd >conf >radius_das_require_event_timesta mp; das_conf.ctx = hapd; das_conf.disconnect = hostapd_das_disconnect; hapd >radius_das = radius_das_init(&das_conf); if (hapd >radius_das == NULL) { wpa_printf(MSG_ERROR, "RADIUS DAS initialization "failed."); return 1; } } #endif /* CONFIG_NO_RADIUS */ if (hostapd_acl_init(hapd)) { wpa_printf(MSG_ERROR, "ACL initialization failed."); return 1; } if (hostapd_init_wps(hapd, conf)) return 1;
PAGE 102
94 if (authsrv_init(hapd) < 0) return 1; if (ieee802_1x_init(hapd)) { wpa_printf(MSG_ERROR, "IEEE 802.1X initialization failed."); return 1; } if (hapd >conf >wpa && hostapd_setup_wpa(hapd)) return 1; if (accounting_init(hapd)) { wpa_printf(MSG_ERROR, "Accounting initialization failed."); return 1; } if (hapd >conf >ieee802_11f && (hapd >iapp = iapp_init(hapd, hapd >conf >iapp_iface)) == NULL) { wpa_printf(MSG_ERROR, "IEEE 802.11F (IAPP) initialization "failed."); return 1; } #ifdef CONFIG_INTERWORKING if (gas_serv_init(hapd)) { wpa _printf(MSG_ERROR, "GAS server initialization failed"); return 1; } #endif /* CONFIG_INTERWORKING */ if (hapd >iface >interfaces && hapd >iface >interfaces >ctrl_iface_init && hapd >iface >interfaces >ctrl_iface_init(hapd)) { wpa_printf(M SG_ERROR, "Failed to setup control interface"); return 1; } if (!hostapd_drv_none(hapd) && vlan_init(hapd)) { wpa_printf(MSG_ERROR, "VLAN initialization failed."); return 1; } ieee802_11_set_beacon(hapd); if (hapd >wpa_auth && wpa_init_keys(hapd >wpa_auth) < 0) return 1; if (hapd >driver && hapd >driver >set_operstate) hapd >driver >set_operstate(hapd >drv_priv, 1); return 0; } static void hostapd_tx_queue_params(struct hostapd_iface *iface) { struct hostapd_data *ha pd = iface >bss[0]; int i; struct hostapd_tx_queue_params *p; for (i = 0; i < NUM_TX_QUEUES; i++) { p = &iface >conf >tx_queue[i]; if (hostapd_set_tx_queue_params(hapd, i, p >aifs, p >cwmin, p >cwmax, p >burst)) { wpa_printf(MSG_DEBUG, "F ailed to set TX queue "parameters for queue %d.", i); /* Continue anyway */ } }
PAGE 103
95 } static int setup_interface(struct hostapd_iface *iface) { struct hostapd_data *hapd = iface >bss[0]; size_t i; char country[4]; /* Make sure that all BSSes get configured with a pointer to the same driver interface. */ for (i = 1; i < iface >num_bss; i++) { iface >bss[i] >driver = hapd >driver; iface >bss[i] >drv_priv = hapd >drv_priv; } if (hostapd_validate_bssid_con figuration(iface)) return 1; if (hapd >iconf >country[0] && hapd >iconf >country[1]) { os_memcpy(country, hapd >iconf >country, 3); country[3] = \ 0'; if (hostapd_set_country(hapd, country) < 0) { wpa_printf(MSG_ERROR, "Failed to set country code"); return 1; } } if (hostapd_get_hw_features(iface)) { /* Not all drivers support this yet, so continue without hw feature data. */ } else { int ret = hostapd_select_hw_mode(iface); if (ret < 0) { wpa_printf(MSG_ERROR, "Could n ot select hw_mode and "channel. (%d)", ret); return 1; } ret = hostapd_check_ht_capab(iface); if (ret < 0) return 1; if (ret == 1) { wpa_printf(MSG_DEBUG, "Interface initialization will "be completed in a callback"); return 0; } } //JRP initializing some params here. paramhist[0]=0; paramhist[1]=0; oldbytes = 0; oldbytes2 = 0; oldstacnt = 0; oldstacnt2 = 0; return hostapd_setup_interface_complete(iface, 0); } int hostapd_setup_interface_complete(struct hostapd_iface *iface, int err) { struct hostapd_data *hapd = iface >bss[0]; size_t j; u8 *prev_addr; if (err) goto error;
PAGE 104
96 wpa_printf(MSG_DEBUG, "Completing interface initialization"); if (hapd >iconf >channel) { iface >freq = hostapd_hw_get_fr eq(hapd, hapd >iconf >channel); wpa_printf(MSG_DEBUG, "Mode: %s Channel: %d "Frequency: %d MHz", hostapd_hw_mode_txt(hapd >iconf >hw_mode), hapd >iconf >channel, iface >freq); if (hostapd_set_freq(hapd, hapd >iconf >hw_mode, ifac e >freq, hapd >iconf >channel, hapd >iconf >ieee80211n, hapd >iconf >ieee80211ac, hapd >iconf >secondary_channel, hapd >iconf >vht_oper_chwidth, hapd >iconf >vht_oper_centr_freq_seg0_idx, hapd >iconf >vht_oper_centr_freq_seg1_idx)) { wpa_printf(MSG_ERROR, "Could not set channel for "kernel driver"); goto error; } } if (iface >current_mode) { if (hostapd_prepare_rates(iface, iface >current_mode)) { wpa_printf(MSG_ERROR, Failed to prepare rates "table."); hostapd_logger(hapd, NULL, HOSTAPD_MODULE_IEEE80211, HOSTAPD_LEVEL_WARNING, "Failed to prepare rates table."); goto error; } } if (hapd >iconf >rts_threshold > 1 && hostapd_ set_rts(hapd, hapd >iconf >rts_threshold)) { wpa_printf(MSG_ERROR, "Could not set RTS threshold for "kernel driver"); goto error; } if (hapd >iconf >fragm_threshold > 1 && hostapd_set_frag(hapd, hapd >iconf >fragm_threshold)) { wpa_printf(MSG_ERROR, "Could not set fragmentation threshold "for kernel driver"); goto error; } prev_addr = hapd >own_addr; for (j = 0; j < iface >num_bss; j++) { hapd = iface >bss[j]; if (j) os_memcpy(hapd >own_addr, prev_addr, ETH_ ALEN); if (hostapd_setup_bss(hapd, j == 0)) goto error; if (hostapd_mac_comp_empty(hapd >conf >bssid) == 0) prev_addr = hapd >own_addr; } hostapd_tx_queue_params(iface); ap_list_init(iface); if (hostapd_driver_commit(hapd) < 0) { wpa_pri ntf(MSG_ERROR, "%s: Failed to commit driver "configuration", __func__); goto error; }
PAGE 105
97 /* WPS UPnP module can be initialized only when the "upnp_iface" is up. If "interface" and "upnp_iface" are the same (e.g., non bridge mode), the interface is up only after driver_commit, so initialize WPS after driver_commit. */ for (j = 0; j < iface >num_bss; j++) { if (hostapd_init_wps_complete(iface >bss[j])) return 1; } if (hapd >setup_complete_cb) hapd >setup_co mplete_cb(hapd >setup_complete_cb_ctx); wpa_printf(MSG_DEBUG, "%s: Setup of interface done.", iface >bss[0] >conf >iface); return 0; error: wpa_printf(MSG_ERROR, "Interface initialization failed"); eloop_terminate(); return 1; } /** hostapd_setup_interface Setup of an interface @iface: Pointer to interface data. Returns: 0 on success, 1 on failure Initializes the driver interface, validates the configuration, and sets driver parameters based on the configuration. Flushes old stations, sets the channel, encryption, beacons, and WDS links based on the configuration. */ int hostapd_setup_interface(struct hostapd_iface *iface) { int ret; ret = setup_interface(iface); if (ret) { wpa_printf(MSG_ERROR, "%s: U nable to setup interface.", iface >bss[0] >conf >iface); return 1; } return 0; } /** hostapd_alloc_bss_data Allocate and initialize per BSS data @hapd_iface: Pointer to interface data @conf: Pointer to per interface configuration @bss: Pointer to per BSS configuration for this BSS Returns: Pointer to allocated BSS data This function is used to allocate per BSS data structure. This data will be freed after hostapd_cleanup() is called for it during interface deiniti alization. */ struct hostapd_data hostapd_alloc_bss_data(struct hostapd_iface *hapd_iface, struct hostapd_config *conf, struct hostapd_bss_config *bss) { struct hostapd_data *hapd; hapd = os_zalloc(sizeof(*hapd));
PAGE 106
98 if (hapd == NULL) return NULL; hapd >new_assoc_sta_cb = hostapd_new_assoc_sta; hapd >iconf = conf; hapd >conf = bss; hapd >iface = hapd_iface; hapd >driver = hapd >iconf >driver; hapd >ctrl_sock = 1; return hapd; } void hostapd_interface_deinit(struct hostapd_iface *iface) { size_t j; if (iface == NULL) return; hostapd_cleanup_iface_pre(iface); for (j = 0; j < iface >num_bss; j++) { struct hostapd_data *hapd = iface >bss[j]; hostapd_free_stas(hapd); hostapd_flush_old_stations(hapd, WLAN_RE ASON_DEAUTH_LEAVING); hostapd_clear_wep(hapd); hostapd_cleanup(hapd); } } void hostapd_interface_free(struct hostapd_iface *iface) { size_t j; for (j = 0; j < iface >num_bss; j++) os_free(iface >bss[j]); hostapd_cleanup_iface(iface); } #ifdef HOSTAPD void hostapd_interface_deinit_free(struct hostapd_iface *iface) { const struct wpa_driver_ops *driver; void *drv_priv; if (iface == NULL) return; driver = iface >bss[0] >driver; drv_priv = iface >bss[0] >drv_priv; hostapd_interface_deinit (iface); if (driver && driver >hapd_deinit && drv_priv) driver >hapd_deinit(drv_priv); hostapd_interface_free(iface); } int hostapd_enable_iface(struct hostapd_iface *hapd_iface) { if (hapd_iface >bss[0] >drv_priv != NULL) { wpa_printf(MSG_ERROR, "Interface %s already enabled", hapd_iface >conf >bss[0].iface); return 1; } wpa_printf(MSG_DEBUG, "Enable interface %s", hapd_iface >conf >bss[0].iface); if (hapd_iface >interfaces == NULL ||
PAGE 107
99 hapd_iface >interfaces > driver_init == NULL || hapd_iface >interfaces >driver_init(hapd_iface) || hostapd_setup_interface(hapd_iface)) { hostapd_interface_deinit_free(hapd_iface); return 1; } return 0; } int hostapd_reload_iface(struct hostapd_iface *hapd_iface ) { size_t j; wpa_printf(MSG_DEBUG, "Reload interface %s", hapd_iface >conf >bss[0].iface); for (j = 0; j < hapd_iface >num_bss; j++) { hostapd_flush_old_stations(hapd_iface >bss[j], WLAN_REASON_PREV_AUTH_NOT_VALID); #ifndef CONFIG_NO_R ADIUS /* TODO: update dynamic data based on changed configuration items (e.g., open/close sockets, etc.) */ radius_client_flush(hapd_iface >bss[j] >radius, 0); #endif /* CONFIG_NO_RADIUS */ hostapd_reload_bss(hapd_iface >bss[j]); } return 0; } int hostapd_disable_iface(struct hostapd_iface *hapd_iface) { size_t j; struct hostapd_bss_config *bss; const struct wpa_driver_ops *driver; void *drv_priv; if (hapd_iface == NULL) return 1; bss = hapd_iface >bss[0] >conf; driver = hapd_ifa ce >bss[0] >driver; drv_priv = hapd_iface >bss[0] >drv_priv; /* whatever hostapd_interface_deinit does */ for (j = 0; j < hapd_iface >num_bss; j++) { struct hostapd_data *hapd = hapd_iface >bss[j]; hostapd_free_stas(hapd); hostapd_flush_old_stati ons(hapd, WLAN_REASON_DEAUTH_LEAVING); hostapd_clear_wep(hapd); hostapd_free_hapd_data(hapd); } if (driver && driver >hapd_deinit && drv_priv) { driver >hapd_deinit(drv_priv); hapd_iface >bss[0] >drv_priv = NULL; } /* From hostapd_cleanup_iface: These were initialized in hostapd_setup_interface and hostapd_setup_interface_complete */ hostapd_cleanup_iface_partial(hapd_iface); bss >wpa = 0; bss >wpa_key_mgmt = 1; bss >wpa_pairwise = 1; wpa_printf(MSG_DEBUG, "Int erface %s disabled", bss >iface); return 0; }
PAGE 108
100 static struct hostapd_iface hostapd_iface_alloc(struct hapd_interfaces *interfaces) { struct hostapd_iface **iface, *hapd_iface; iface = os_realloc_array(interfaces >iface, interfaces >count + 1, s izeof(struct hostapd_iface *)); if (iface == NULL) return NULL; interfaces >iface = iface; hapd_iface = interfaces >iface[interfaces >count] = os_zalloc(sizeof(*hapd_iface)); if (hapd_iface == NULL) { wpa_printf(MSG_ERROR, "%s: Failed to allocate memory for "the interface", __func__); return NULL; } interfaces >count++; hapd_iface >interfaces = interfaces; return hapd_iface; } static struct hostapd_config hostapd_config_alloc(struct hapd_interfaces *interfaces, const char *ifnam e, const char *ctrl_iface) { struct hostapd_bss_config *bss; struct hostapd_config *conf; /* Allocates memory for bss and conf */ conf = hostapd_config_defaults(); if (conf == NULL) { wpa_printf(MSG_ERROR, "%s: Failed to allocate memory for "configuration", __func__); return NULL; } conf >driver = wpa_drivers[0]; if (conf >driver == NULL) { wpa_printf(MSG_ERROR, "No driver wrappers registered!"); hostapd_config_free(conf); return NULL; } bss = conf >last_bss = conf >bss; os_strlcpy(bss >iface, ifname, sizeof(bss >iface)); bss >ctrl_interface = os_strdup(ctrl_iface); if (bss >ctrl_interface == NULL) { hostapd_config_free(conf); return NULL; } /* Reading configuration file skipped, will be done in SET! From r eading the configuration till the end has to be done in SET */ return conf; } static struct hostapd_iface hostapd_data_alloc( struct hapd_interfaces *interfaces, struct hostapd_config *conf) { size_t i; struct hostapd_iface *hapd_iface =
PAGE 109
101 interfaces >iface[interfaces >count 1]; struct hostapd_data *hapd; hapd_iface >conf = conf; hapd_iface >num_bss = conf >num_bss; hapd_iface >bss = os_zalloc(conf >num_bss sizeof(struct hostapd_data *)); if (hapd_iface >bss == NULL) ret urn NULL; for (i = 0; i < conf >num_bss; i++) { hapd = hapd_iface >bss[i] = hostapd_alloc_bss_data(hapd_iface, conf, &conf >bss[i]); if (hapd == NULL) return NULL; hapd >msg_ctx = hapd; } hapd_iface >interfaces = interfaces; return hapd_iface; } int hostapd_add_iface(struct hapd_interfaces *interfaces, char *buf) { struct hostapd_config *conf = NULL; struct hostapd_iface *hapd_iface = NULL; char *ptr; size_t i; ptr = os_strchr(buf, '); if (ptr == NULL) return 1; *ptr++ = \ 0'; for (i = 0; i < interfaces >count; i++) { if (!os_strcmp(interfaces >iface[i] >conf >bss[0].iface, buf)) { wpa_printf(MSG_INFO, "Cannot add interface it "already exists"); return 1; } } hapd_iface = host apd_iface_alloc(interfaces); if (hapd_iface == NULL) { wpa_printf(MSG_ERROR, "%s: Failed to allocate memory "for interface", __func__); goto fail; } conf = hostapd_config_alloc(interfaces, buf, ptr); if (conf == NULL) { wpa_printf(MSG_ERROR, "%s: Failed to allocate memory "for configuration", __func__); goto fail; } hapd_iface = hostapd_data_alloc(interfaces, conf); if (hapd_iface == NULL) { wpa_printf(MSG_ERROR, "%s: Failed to allocate memory "for hostapd", __func__); goto fail; } if (hapd_iface >interfaces && hapd_iface >interfaces >ctrl_iface_init &&
PAGE 110
102 hapd_iface >interfaces >ctrl_iface_init(hapd_iface >bss[0])) { wpa_printf(MSG_ERROR, "%s: Failed to setup control "interfac e", __func__); goto fail; } wpa_printf(MSG_INFO, "Add interface '%s'", conf >bss[0].iface); return 0; fail: if (conf) hostapd_config_free(conf); if (hapd_iface) { os_free(hapd_iface >bss[interfaces >count]); os_free(hapd_iface); } return 1; } int hostapd_remove_iface(struct hapd_interfaces *interfaces, char *buf) { struct hostapd_iface *hapd_iface; size_t i, k = 0; for (i = 0; i < interfaces >count; i++) { hapd_iface = interfaces >iface[i]; if (hapd_iface == NULL) return 1; if (!os_strcmp(hapd_iface >conf >bss[0].iface, buf)) { wpa_printf(MSG_INFO, "Remove interface '%s'", buf); hostapd_interface_deinit_free(hapd_iface); k = i; while (k < (interfaces >count 1)) { interfaces >iface[k] = interfaces >ifa ce[k + 1]; k++; } interfaces >count -; return 0; } } return 1; } #endif /* HOSTAPD */ /** hostapd_new_assoc_sta Notify that a new station associated with the AP @hapd: Pointer to BSS data @sta: Pointer to the associated STA data @reassoc: 1 to indicate this was a re association; 0 = first association This function will be called whenever a station associates with the AP. It can be called from ieee802_11.c for drivers that exp ort MLME to hostapd and from drv_callbacks.c based on driver events for drivers that take care of management frames (IEEE 802.11 authentication and association) internally. */ void hostapd_new_assoc_sta(struct hostapd_data *hapd, struct sta_info *st a, int reassoc) { if (hapd >tkip_countermeasures) { hostapd_drv_sta_deauth(hapd, sta >addr, WLAN_REASON_MICHAEL_MIC_FAILURE); return; } hostapd_prune_associations(hapd, sta >addr);
PAGE 111
103 /* IEEE 802.11F (IAPP) */ if (hapd >conf > ieee802_11f) iapp_new_station(hapd >iapp, sta); #ifdef CONFIG_P2P if (sta >p2p_ie == NULL && !sta >no_p2p_set) { sta >no_p2p_set = 1; hapd >num_sta_no_p2p++; if (hapd >num_sta_no_p2p == 1) hostapd_p2p_non_p2p_sta_connected(hapd); } #endif /* CONFIG_P2P */ /* Start accounting here, if IEEE 802.1X and WPA are not used. IEEE 802.1X/WPA code will start accounting after the station has been authorized. */ if (!hapd >conf >ieee802_1x && !hapd >conf >wpa) { os_get_time(&sta >connected_ti me); accounting_sta_start(hapd, sta); } /* Start IEEE 802.1X authentication process for new stations */ ieee802_1x_new_station(hapd, sta); if (reassoc) { if (sta >auth_alg != WLAN_AUTH_FT && !(sta >flags & (WLAN_STA_WPS | WLAN_STA_MAYBE_WPS))) wpa_auth_sm_event(sta >wpa_sm, WPA_REAUTH); } else wpa_auth_sta_associated(hapd >wpa_auth, sta >wpa_sm); wpa_printf(MSG_DEBUG, "%s: reschedule ap_handle_timer timeout "for MACSTR (%d seconds ap_ma x_inactivity)", __func__, MAC2STR(sta >addr), hapd >conf >ap_max_inactivity); eloop_cancel_timeout(ap_handle_timer, hapd, sta); eloop_register_timeout(hapd >conf >ap_max_inactivity, 0, ap_handle_timer, hapd, sta); }
PAGE 112
104 APPENDIX F : MATLA B Analysis Code For various figures throughput the paper MATLAB was used for analysis and plotting. The analysis portions of the code are supplied here. The main probability analysis code was based on code from the referenced papers from Rajmic [21], [22]. The following is the contents of the key m files. PWIN_PER_STA.M %Function used for Joey Padden Thesis project 2012/2013 %This function was adapted from the Rajmic code found at %" % probabilistic analysis of ieee 802 11e/content/edca_probability_win.m" function [p_win_net1,p_win_net2,p_coll,p_win_net_dcf,pcoll_dcf] ... = pwin_per_sta(n,m,cwn,cwm) net1(1:n,1)=3; net1(1:n,2)=cwn; net2(1:m,1)=3; net2(1:m,2)=cwm; matrix = [net1;net2]; K = size(matrix,1); %number of stations for k = 1 : K matrix = shift_nth_station_to_first(matrix,k); [p_win(k), ~] = edca_probability_win(matrix, 'whole' ); end p_win_net1 = sum(p_win(1:n)); p_win_net2 = sum(p_win(n+1:n+m) ); p = K 1; out = 0; cw = 15; z = 1:cw 1; pwin_dcf = (1/cw)*sum((z./cw).^p); p_win_net_dcf = [n*pwin_dcf m*pwin_dcf]; p_coll = edca_probability_collision(matrix); pcoll_dcf = 1 K*pwin_dcf;
PAGE 113
105 EDCA_PROBABILITY_COLLISION.M %Function used for Joey Padden thesis project 2012/13 %Code created by Rajmic. Original code found at % % probabilistic analysis of ieee 802 11e/content/edca_probability_win.m function [p_coll] = edca_probability_c ollision(matrix) K = size(matrix,1); total = 0; for cnt = 0:(K 1) p_win = edca_probability_win(shift_nth_station_to_first(matrix,cnt+1)); total = total + p_win; end p_coll = 1 total; EDCA_PROBABILITY_WIN.M %Function used for Joey Padden thesis project 2012/13 %Code created by Rajmic. Original code found at % % probabilistic analysis of ieee 802 11e/content/edca_probability_win.m function [p_win, matrix_p_win] = edca_proba bility_win(matrix,full) K = size(matrix,1); N0 = matrix(:,1); N = matrix(:,2); N0 = N0 min(N0); N10 = N0(1); N1 = N(1); total = N0+N; if nargin == 1 a = min(total(2:end)); if total(1) <= a no_cols = N1; else b = N10+1; c = a b; no_cols = max([0 c]); end else no_cols = N1; end if no_cols > 0 matrix_p_win = zeros(K 1,no_cols); for k = 2:K
PAGE 114
106 number = N(k) max(0,N10 N0(k)+1); matrix_p_win(k 1,1) = nu mber; end if no_cols > 1 ict = N0(2:K) N0(1) 1; for cnt = 2:no_cols matrix_p_win(:,cnt) = matrix_p_win(:,cnt 1) 1*(ict<=0); ict = ict 1; end matrix_p_win = max(0,matrix_p_win); end p_win = sum(prod(matrix_p_win)); p_win = p_win / prod(N(1:K)); else p_win = 0; matrix_p_win = []; end SHIFT_NTH_STATION_TO_FIRST.M %Function used for Joey Padden thesis project 2012/13 %Code created by Rajmic. Original code found at % % probabilistic analysis of ieee 802 11e/content/edca_probability_win.m function matrix_new = shift_nth_station_to_first(matrix,no) matrix_new = matrix; if no ~= 1 matrix_new(1,:) = matrix(no,:); matrix_new(no,:) = matrix(1,:); end PLOTCWMINSURF ...
PAGE 115
107' ':' ... 'Color' cc(j,:), 'LineWidth' ,1.2); out(j,:)=p_win_net1./p_win_net2; end % Create xlabel
PAGE 116
108 zd = get(h, 'ZData' ); for i = 1:length(zd) set(h(i), 'ZData' ,new_level*ones(length(zd{i}),1)) end PLOTCWMINSTACNT' ':' ... 'C olor' cc(j,:), 'LineWidth' ,1.2); out(j,:)=p_win_net1./p_win_net2; end % Create xlabel
PAGE 117
109 zd = get(h, 'ZData' ); for i = 1:length(zd) set(h(i), 'ZData' ,new_level*ones(length(zd{i}),1)) end
©Auraria Library
SobekCM
Library Site Index
|
Library FAQs
|
Ask Us
|
Send a Comment | http://digital.auraria.edu/AA00000149/00001 | CC-MAIN-2018-17 | refinedweb | 24,346 | 52.19 |
This package allows a simple interaction with Dart Editor in
build.dart.
You can read Build.dart and the Dart Editor Build System to understand available interactions with Dart Editor.
You can use the
BuildOptions to parse arguments.
final opts = BuildOptions.parse(new Options().arguments); opts.changed; // The list of files that changed and should be rebuilt. opts.removed; // The list of files that was removed and might affect the build. opts.clean; // bool opts.full; // bool opts.machine; // bool opts.deploy; // bool
You can use
BuildResult to create the output of
build.dart.
final result = new BuildResult(); result.addError('foo.html', 23,'no ID found'); result.addWarning('foo.html', 24,'no ID found', charStart: 123, charEnd: 130); result.addInfo('foo.html', 25,'no ID found'); result.addMapping('foo.html', 'out/foo.html'); print(result); // to provide information to editor
Apache 2.0
Add this to your package's pubspec.yaml file:
dependencies: editor_build: ^0.0.5
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:editor_build/editor. | https://pub.dartlang.org/packages/editor_build | CC-MAIN-2018-43 | refinedweb | 197 | 55.3 |
Andi Kleen wrote:> > We're in real mode for now nd should not care about the hidden state.> Sorry, Andi, that's not how real mode works.That may be how real mode is *documented*, but that's not how it works.The segment descriptor registers (what Intel calls "segment cache") is always active. The only thing that changes based on CR0.PE is how it is *loaded* and the interpretation of the CS flags.The segment descriptor registers contain of the following sub-registers: selector (the "visible" part), base, limit and flags. In protected mode or long mode, they are loaded from descriptors (or fs.base or gs.base can be manipulated directly in long mode.) In real mode, the only thing changed by a segment register load is the selector and the base, where the base <- selector << 4. In particular, *the limit and the flags are not changed*.As far as the handling of the CS flags: a code segment cannot be writable in protected mode, whereas it is "just another segment" in real mode, so there is some kind of quirk that kicks in for this when CR0.PE <- 0. I'm not sure if this is accomplished by actually changing the cs.flags register or just changing the interpretation; it might be something that is CPU-specific. In particular, the Transmeta CPUs had an explicit "CS is writable if you're in real mode" override, so even if you had loaded CS with an execute-only segment it'd be writable (but not readable!) on return to real mode. I'm not at all sure if that is how other CPUs behave.The most likely explanation for this is that the VESA BIOS expects to be entered in Big Real Mode (*.limit = 0xffffffff) instead of ordinary Real Mode. Here is a completely untested patch which changes the segment descriptors to Big Real Mode instead. It would be worth testing out. -hpadiff --git a/arch/x86/kernel/acpi/sleep.c b/arch/x86/kernel/acpi/sleep.cindex 36af01f..97648aa 100644--- a/arch/x86/kernel/acpi/sleep.c+++ b/arch/x86/kernel/acpi/sleep.c@@ -23,6 +23,15 @@ static unsigned long acpi_realmode; static char temp_stack[10240]; #endif +/* XXX: this macro should move to asm-x86/segment.h and be shared with the+ boot code... */+#define GDT_ENTRY(flags, base, limit) \+ (((u64)(base & 0xff000000) << 32) | \+ ((u64)flags << 40) | \+ ((u64)(limit & 0x00ff0000) << 32) | \+ ((u64)(base & 0x00ffffff) << 16) | \+ ((u64)(limit & 0x0000ffff)))+ /** * acpi_save_state_mem - save kernel state *@@ -58,11 +67,11 @@ int acpi_save_state_mem(void) ((char *)&header->wakeup_gdt - (char *)acpi_realmode)) << 16); /* GDT[1]: real-mode-like code segment */- header->wakeup_gdt[1] = (0x009bULL << 40) +- ((u64)acpi_wakeup_address << 16) + 0xffff;+ header->wakeup_gdt[1] =+ GDT_ENTRY(0x809b, acpi_wakeup_address << 16, 0xfffff); /* GDT[2]: real-mode-like data segment */- header->wakeup_gdt[2] = (0x0093ULL << 40) +- ((u64)acpi_wakeup_address << 16) + 0xffff;+ header->wakeup_gdt[2] =+ GDT_ENTRY(0x8093, acpi_wakeup_address << 16, 0xfffff); #ifndef CONFIG_64BIT store_gdt((struct desc_ptr *)&header->pmode_gdt); | http://lkml.org/lkml/2008/7/13/187 | CC-MAIN-2017-17 | refinedweb | 482 | 64.81 |
Originally posted by Nikhilesh Fonseca: I need to store the session objects on to a hashmap so that it is possible to view all the users logged in. I created a session Hashmap as like this import java.util.HashMap; public class HashMapWrapper { static HashMap sessionHashmap; private HashMapWrapper(){ } public static HashMap getHashMapInstance(){ if(sessionHashmap==null) sessionHashmap= new HashMap(); return sessionHashmap; } } I need to store the user sessions in this hashmap . I saw an earlier post but couldn't make out much Also Do i need to store this Hashmap in the application scope or settings properties. I have no idea on how these work | http://www.coderanch.com/t/360422/Servlets/java/Session-HashMap | CC-MAIN-2014-52 | refinedweb | 104 | 59.84 |
This first tutorial with teach you the basic concepts of the PloobsEngine and how to create a simple 3D scene. (tutorials series here)
We plan to make a serie of tutorials, some will be about using the engine, others will explain how the engine implement its internal features. We will range from the basic to the advanced. Our plan is to release at least one tutorial each week.
For those who does not know ploobs yet, the PloobsEngine is an engine for creating games and graphics applications developed in XNA 4.0, C #. Net 4.0 and HLSL. It is designed to be simple and easy to use even for those unfamiliar with computer graphics programming.
First of all, i recomend you to read this post that presents the engine and its capabilities. Also, download the Visual Studio 2010 Templates (you will find it in that post) and install on your computer, we will use them in this tutorial.
Our first sample will be a very basic 3D scene (an island with some directional lights, see the screenshot in the end of the article =P). BUT, before start coding, is essencial to learn the basic architecture about the PloobsEngine.
PloobsEngine Architecture
We will focus in 3D in this tutorial, The PloobsEngine can be seen as the following class diagram (in a point of view of the users):
The class EngineStuff is our “Entry Point”, it is responsible for transparently interact with the XNA (and sometimes with the underlying system), you normaly dont access/use this class when making your application.
The EngineStuff class contains an instance of ScreenManager, that is responsible for managing (add, remove, load, update internal states and draw ) the IScreens.
The IScreen represents an abstraction to a container of components that will be updated and maybe draw in your application. The engine provides some implementation of this class, the most used ones are: IScene (for 3D world) and MovieScreen (to play a avi file).
- IScene: Container specialized in 3D World management, contains an instance of IWorld and IRenderTechnich (like you saw in the class diagram =P).
- MovieScreen: Specialized in playing videos (used in cinematics for example)
For this first tutorial, we will only talk in depth about IScene implementation.
As said, each IScene contains an IWorld that acts as a container of objects, triggers, lights, cameras, particles and 3D sounds. It also take care of updating, adding and removing these components.
The objects that are draw in the screen are represented by the entity called IObject, that is composed by others classes:
- IModelo, Responsible for storing geometric (vertices and indices) and texture informations of the object.
- IPhysicObject: Responsible for represent the object in the physic world. The engine has lots of implementation for this class, the most used are BoxObject, that represents the object as a box, and the TriangleObject, that represents the object as a colection of triangles. All physic simulation will be done considering the IPhysicObjects, not the IModelo data. For example, we can see a detailed dragon in the screen (IModelo being draw), but the physic system simulates it as a simple box.
- IMaterial: Responsible for rendering the IModelo in the screen using the progamable pipeline. The IMaterial is just a “dummy”, the IShader class does the real work. The engine provides lots of classes that implements this interface, each one gives a diferent appearance to the associated object.
The IScene also contains a IRenderTechnich that draw the objects in the screen (using the objects’s IMaterial and others internal stuffs) .
When building a simple application in the PloobsEngine, you should only worry about extending the IScreen class (or one specialization of it like the IScene) and overrides some of its methods (like the initialize and the LoadContent). This is what we are going to do in the next section.
Getting your hands dirty
Before coding, you should have:
- XNA 4.0 Game Studio installed.
- The Visual Studio 2010 Templates or Donwload the PloobsEngine xna 4.0 version DLLs (both can be found here)
If you prefer dont use the Visual Studio Templates, you can create a simple Windows XNA 4.0 Game Project (the one that ships with XNA 4.0 Game Studio), download the DLLs of the engine and add it to the project:
- The PloobsEnginePipelineDebug.dll Must be added in the Content Project (not used in this demo)
- The PloobsEngineDebug.dll must added in the main project projeto principal (the PloobsEngineDebug.xml file must be in the sampe folder of PloobsEngineDebug.dll)
If you choose to use the templates, you dont need to download the DLLs, just create a project using it and everything will be configured.
We are interested in building 3D worlds, so we begin extending the IScene class as show in the following list: (the template already contains an IScene implementation of this class, you can replace it or just change some parts):
using Microsoft.Xna.Framework; using PloobsEngine.Cameras; using PloobsEngine.Light; using PloobsEngine.Material; using PloobsEngine.Modelo; using PloobsEngine.Physics; using PloobsEngine.Physics.Bepu; using PloobsEngine.SceneControl; namespace IntroductionDemo4._0 { /// /// Basic Deferred Scene /// public class BasicScreenDeferredDemo : IScene { /// /// Sets the world and render technich. protected override void SetWorldAndRenderTechnich(out IRenderTechnic renderTech, out IWorld world) { world = new IWorld(new BepuPhysicWorld(), new SimpleCuller()); DeferredRenderTechnicInitDescription desc = DeferredRenderTechnicInitDescription.Default(); desc.UseFloatingBufferForLightMap = true; renderTech = new DeferredRenderTechnic(desc); } /// /// Load content for the screen. /// protected override void LoadContent(PloobsEngine.Engine.GraphicInfo GraphicInfo, PloobsEngine.Engine.GraphicFactory factory, IContentManager contentManager) { base.LoadContent(GraphicInfo, factory, contentManager);); #region Lights DirectionalLightPE ld1 = new DirectionalLightPE(Vector3.Left, Color.White); DirectionalLightPE ld2 = new DirectionalLightPE(Vector3.Right, Color.White); DirectionalLightPE ld3 = new DirectionalLightPE(Vector3.Backward, Color.White); DirectionalLightPE ld4 = new DirectionalLightPE(Vector3.Forward, Color.White); DirectionalLightPE ld5 = new DirectionalLightPE(Vector3.Down, Color.White); float li = 0.4f; ld1.LightIntensity = li; ld2.LightIntensity = li; ld3.LightIntensity = li; ld4.LightIntensity = li; ld5.LightIntensity = li; this.World.AddLight(ld1); this.World.AddLight(ld2); this.World.AddLight(ld3); this.World.AddLight(ld4); this.World.AddLight(ld5); #endregion this.World.CameraManager.AddCamera(new CameraFirstPerson(GraphicInfo.Viewport)); } protected override void Draw(GameTime gameTime, RenderHelper render) { base.Draw(gameTime, render); render.RenderTextComplete("Demo: Basic Screen Deferred", new Vector2(GraphicInfo.Viewport.Width - 315, 15), Color.White, Matrix.Identity); } } }
The method SetWorldAndRenderTechnich always must be override. It is responsible for creating the IWorld and the RenderTechnich that the scene will use.
In this example, we are creating a simple IWorld, passing a Bepu physic world implementation called BepuPhysicWorld (responsible for collision detection and physic simulation, the user can create its own physic World (not an easy task =P), just need to extend the IPhysicWorld interface). We also provide an ICuller implementation, in this sample we created a SimpleCuller (responsible for accelerating the render proccess). The Ploobsengine provides others implementations forICuller like the OctreeCuller class.
After we create the IRenderTechnich. The engine provides two implementation for this class: the ForwardRenderTechnich that implements the classic renderization technich called Single Pass Multi-Lightning and the DeferredRenderTechnich that implements DeferredShading.
Some olders computers and the Windows Phone 7 plataform does not run DeferredShading and must use the other. Most of our effects like shadow are only implemented in Deferred Shading, if possible, we always recomend users to choose this option. For this tutorial we first create a DeferredRenderTechnicInitDescription (in the following tutorials we will talk about how to configure this object to create interesting effects) and feed it to the DeferredRenderTechnic.
The next method overrided is the LoadContent. Here we create and populate the IWorld.
To create a simple IObject, we use the following code:);
In the first line we create the IModelo, passing in the first parameter an intance of the graphic factory (responsible for creating every stuff related to graphics, the engine provides it). The second parameter is the name of the model used (can be a .x or .fbx) and the last one is the diffuse texture name (if you dont provide, the engine will try to find it inside of the model) . If you are using effects (will be explained latter) that need more textures like bump mapping, you should use other SimpleModel constructor, or use one of the IModelo methods to load it). The model used in this demo can be found in this project (the code is also there, with lots of others examples)
In sequence, we instantiate the physic representation of the object. We used a TriangleMeshObject passing the IModelo, position, rotation, scaling and physic material properties (like friction and mass). Remember that Triangle Meshes cannot be moved, they act as they have infinite inertia.
Next, we create the IShader and the IMaterial, we used the DeferredMaterial implementation (That works with the DeferredRenderTechnich) and the DeferredNormalShader.
The engine provides lots of options for shaders (in this context, we use this word meaning graphich effect) like DeferredCustomShader that supports Normal Map, Specular Map and Glow Map. The DeferredNormalShader supports only simple phong illumination (with diffuse texture only), you can only customize the Specular Intensity and Specular Power (look at the shader constructor) of it.
Now that we have all the necessary stuffs, we just create the IObject passing the created auxiliar instances and add it to the IWorld.
To finish the LoadContent method, we create five Directional Lights (without shadow) and a First Person Camera (can be controlled by mouse and keyboard). We will talk more about this in the next tutorials.
The last stuff we did in the IScene class is overriding the method Draw just to write something on the screen. (things must be draw after the call to the base.draw())
To show the IScreen on the screen =P, we need to start the PloobsEngine. The following code does this job:
using System; using PloobsEngine.Engine; using PloobsEngine.SceneControl; namespace IntroductionDemo4._0 { static class Program { /// /// The main entry point for the application. /// static void Main(string[] args) { InitialEngineDescription desc = new InitialEngineDescription("PLoobsDemos",800, 600, false, Microsoft.Xna.Framework.Graphics.GraphicsProfile.HiDef, true, true, true); using (EngineStuff engine = new EngineStuff(ref desc, LoadScreen)) { engine.Run(); } } static void LoadScreen(ScreenManager manager) { manager.AddScreen(new BasicScreenDeferredDemo ()); } } }
The InitialEngineDescription is an object that contains lots of initial parameters of the engine (things like Antialiasing options, VSync, use of MipMap, internal Clock update method, screen resolution and the name of the application), experiment changing this parameters =P
The EngineStuff recives two parameters: the description and a the function (LoadScreen in our case) that creates the first IScreen and add it to the ScreenManager.
DONE !
The first tutorial id finished. If you run it, you will see the following image (move the mouse and the ASDW QZ keys to control the camera).
Tutorial 0 Image
We are using deferred shading, so even if you enable Antialiasing in the Engine, it wont work. The following tutorials will teach you how to enable/use our Post Process antialiasing.
The code for this demo can be found in our Introduction Demos package, you can download it here. (there are lots of others demos in this package, we will explain each of them in the next tutorials)
In the following post we intend to explore some of the basic resources of the engine. The next one will talk about the Input System and the Physic System.
Any doubts, critics, suggestions, pls go to our forum or leave a comment here.
See you guys =P
Links
- PloobsEngine Alpha Xna 4.0:
- Project Site:
- Our Blog: (most of the content is in portugues)
- Our Forum:
- Contact: contato@ploobs.com.br
- PloobsEngine Release in portuguese:
- PORTUGUESE VERSION OF THIS ARTICLE:
#1 by on 22 de junho de 2017 - 2:04 pm
Everything is very open with a really clear clarification of the challenges.
It was definitely informative. Your website is extremely helpful.
Thank you for sharing!
#2 by pleasure whip on 22 de junho de 2017 - 2:04 pm
although internet websites we backlink to beneath are considerably not related to ours, we feel they are really worth a go as a result of, so possess a look
#3 by tam su cua kien truc su on 22 de junho de 2017 - 2:04 pm
Thank you for every other informative website.
Where else may just I get that type of info written in such an ideal method?
I have a mission that I’m simply now operating on, and I’ve been at the glance out for
such info.
#4 by curved 4k tv on 22 de junho de 2017 - 2:13 pm
Your way of telling all in this article is in fact good, all be capable
of without difficulty understand it, Thanks a lot.
#5 by билети online on 22 de junho de 2017 - 2:18!
#6 by clyde81100121.blog.fc2.com on 22 de junho de 2017 - 2:38 pm
Thus, knowing syntax of any type of language is as necessary
as learning the language itself.
#7 by game tài xỉu on 22 de junho de 2017 - 2:38 pm
Unquestionably believe that that you stated. Your favourite reason seemed to be at the internet the simplest thing to remember of.
I say to you, I definitely get annoyed even as people consider worries that
they just do not recognize about. You controlled to hit the nail upon the top
and also outlined out the entire thing with
no need side effect , other people could take a signal.
Will likely be again to get more. Thank you
#8 by gas grill on 22 de junho de 2017 - 2:42 pm
Way cool! Some very valid points! I appreciate you penning this article and also the rest of the site is very good.
#9 by match.com on 22 de junho de 2017 - 2:42 pm
Excellent Webpage, Preserve the great job. Appreciate it!
#10 by er sucht ihn sexkontakte kreis heinsberg on 22 de junho de 2017 - 2:42 pm
Elena, auf dem Tisch das ein total tabuloser Junge sie knallhart vögelt!
#11 by Kay on 22 de junho de 2017 - 2:45 pm
I like the valuable information you provide in your articles.
I’ll bookmark your weblog and check again here frequently.
I am quite certain I’ll learn many new stuff right here!
Good luck for the next!
#12 by coreepi on 22 de junho de 2017 - 2:52 pm
次のWindows95からMS-DOSは必要なくなりWin95のパッケージのみで動作するようになり、ここからWindows自体MS-DOS上のアプリの一種では無くOSとして扱われるようになりました。=]office2016 personal 価格[/url]
最新OSへのアップグレードを無償で提供するという画期的な試みだった一方で、拒否する方法が分かりづらく、意図せずアップグレードしてしまうユーザーが続出。 ただその設計思考はクラウドコンピューティングを意識したものであって、従来のただ目前のPC内だけのワークを想定しているユーザーには特にそのメリットを感じることはないと思われる。
[url=]office2016 personal 価格[/url] 一方GoogleのAndroidは最初からすべての機器について無償なのだ。 E. エグゼクテゖブメールボックスの Microsoft Exchange Online Archiving を有効にします。
[url=]microsoft office 安い[/url]
原因を探ろうとメインのLinux機でネット検索をしたら、同じような状態にハマっているユーザーがいくつかヒットする。 今度のことで、つくづくデジタルの世界は移ろいやすく、かつ、もろい世界だと思い知りました。 [url=]office 2016 価格[/url]
Outlook使いにくいんだよね、、、、、、、、Office2010と2013持っているんでその中のOutlook使うか。。
[url=]ms office 2016 personal[/url] 間違っているかも?●結論PCが遅くなっているのは使用しているメモリ使用率が多いから、物理メモリが不足となってPCの速度が遅くなる。 )A. ドキュメントフ゜ルダにボリュームマ゙ントポントを作成します。
[url=]Acrobat 11 激安[/url]
#13 by Travel and Leisure on 22 de junho de 2017 - 2:55 pm
I do believe all of the concepts you’ve presented to your post. They are really convincing and can certainly work. Still, the posts are very brief for novices. May just you please extend them a bit from next time? Thanks for the post.
#14 by Somatropine on 22 de junho de 2017 - 3:38 pm
steroids online usa
Buy Legal Anabolic Steroids
[url=]Strenght
Stacks [/url]
steroids injection
#15 by hier ansehen on 22 de junho de 2017 - 3:48 pm
Heute hat unbeschwerter Verkehr oft den Touch des Unmoralischen.
#16 by online game sites on 22 de junho de 2017 - 3:49 pm
Is the nation’s pioneer in bringing the best standards of safety and security for its Money Players in India.
Additionally, if the poker web site does not have much site visitors, there will likely be less motion on the less generally performed games like 5 Card Stud, 7 Card Stud, Razz, Badugi
and Triple Draw. The superb poker set comes with 2
decks of playing playing cards, 500 pcs poker chips, aluminum
poker chip case, 1 seller button, keys, and 5 dices.
To engage a spare hour, free poker opens up limitless poker tables and numerous
on-line poker players at Free poker right here is accessible around the
clock. Should you’re in search of essentially the most poker
variants, the biggest bonuses, and one of the best Indian poker buyer assist, download and register an account with one of the poker sites on our record at present.
Win big at poker and you won’t have a pit boss in your back, get thrown out, or get
your fingers damaged. That’s the reason the World
Collection of Poker draws in 1000’s of players who compete for thousands and thousands of dollars (fun truth: the last 7
champions had been all under 30). Nevertheless, online poker is equally
widespread right here and the sport’s popularity is witnessing
a high surge. The Supreme Courtroom of India
has to resolve these tough points throughout
the present litigation. Chips are of good quality for
the price, as talked about in the valuable evaluate the chip
holder high quality is made from flimsy plastic
and must be case is of fairly good high quality, light and compact.
Spartan Poker has been a pioneer in starting on-line tourney tradition in India, this is a true example of promoting the sport in occasions which are being deliberate and executed are just getting higher by the day, I bear in mind simply logging in to earlier
than an IPC live occasion to play whole event package deal
online satellites that have been hosted on TheSpartanPoker.
If you’re a poker fanatic and hold tournaments, you already know how thrilling it’s to gather the most
effective poker equipment and stay abreast of the newest ideas and designs
for poker supplies. After confirming hand rankings with my good friend who
took me to a sport in Golden Aces 6 years ago.
Let it’s stay, online or casino poker, you may all the time benefit from ,
no matter the type of recreation you select to play.
The poker growth in India is still 4 to 5 years away and will require dramatic authorized modifications and the mind-set shift
of a technology. The seventh version of the Championship happened from 2nd – fifth June 2011 within the card room onboard On line casino Royale, the most important
off-shore casino situated within the Goan state capital of Panaji (or Panjim).
Playing on-line poker at is 100% protected & protected as we adjust to
the best stage of on-line security with certifications from iTech and Alpha.
Let it’s reside, on-line or on line casino poker, you possibly can always revenue from , regardless
of the type of recreation you choose to play.
Adda52 presents both rummy and poker, while Thrill Poker is at present just offering poker.
They aren’t at all complying with the cyber law due diligence (PDF) requirements of Indian cyber legislation. You’ll now be
rewarded for bringing your mates to play on-line poker at Invite your mates and win cash chips from us for each buddy whom you convey along.
But it is still very early days… all of us in the
Indian Poker community are really hoping that the Government
will move a bill to hyperlink poker to the Indian laws
on Rummy in that poker is extra of a sport of skill fairly
than chance.
As far as Indian poker gamers go, I believe Indians have an amazing aptitude
for poker, and noticed this even again in Sydney.
They are saying poker is a raffle, however not so long ago, they said that about entrepreneurship too.
This evening too noticed a file-breaking field of 94 gamers, the first time ever that there’s been over ninety gamers in a purchase-in of this magnitude.
Read our Legal Terms to know the Legality of taking part in poker video games in India at Pokabunga.
Speaking concerning the potential of the league, Mr.
Amit Burman, Founder, Poker Sports activities League stated, Poker is a well-liked mind sport throughout
the globe and has gained large traction within India, amongst individuals seeking
to problem their mind, grit and choice making talents.
Ring Sport Poker: That’s arguably the world’s hottest on-line poker format by
which you play a single poker deal, after which you’re free to exit or migrate to
different tables. The banks and so forth must ask them to first
comply with relevant techno authorized compliances after
which help their claims with a proper techno licensed consultancy from a reputed regulation agency.
On this online poker variant, a player has to make use of two out of four hole cards and three from
the board to make a excessive hand or a low hand mixture.
Resulting from ambiguity in authorized tips, solely West Bengal, Nagaland
and Karnataka has labeled Poker as a valid, potential-based sport, whereas in Goa, the game will probably be played solely in casinos.
Before he knew it, he was making much more cash enjoying poker than he was at his day
job as a business development supervisor. You are assured
of excessive privacy requirements when enjoying your game of Rummy and Poker on Pokabunga.
Get the Cartamundi James Bond poker set and have interaction in a enjoyable poker game with Skyfall impressed poker chips
and cards. The approval of the Nagaland bill is anticipated to lead to further regulated on-line poker in India.
You possibly can redistribute or modify it under the terms of the GNU Lesser General Public
License as printed by the Free Software program Foundation.
That’s just the average person, I’m not speaking concerning the pros that
play in India. With over a billion people
it goes without saying that India might rapidly grow to be some of the essential on-line
poker markets in the world. With that being stated, aside from in Maharashtra,
there aren’t any legal guidelines which clearly state it’s illegal for an Indian resident to gamble online.
Sites providing a welcome bonus will normally require users to launch an account to
qualify. The largest poker prospects from India showcased their skills
to try and change into the champion. All the cardboard video games supplied at are utterly safe and authorized to play in India and run 24×7 in Ring, Tournament, Sprint and Sit n Go formats.
Let’s take for granted that India competing with Macau is as probably as the
solar scorching the sky and Santa Claus coming to save us.
Online is the place this poker thing is going to exist here.
In view of the aforesaid, it’s clear that solely Video games of
expertise and that too played in bodily sort
have been held by the courts to be valid and falling throughout the ambit of assorted enactments, dealing with Gaming, in India.
Adda52 could lose its spot as a result of the primary on-line poker website in India ought to PokerStars resolve to launch firms within the Indian market.
Many regions of the world have seen will increase in the number of on-line
gamblers, however India’s surge in poker interest has been more moderen.
Earlier many poker players from India had no choice other than to participate in International poker tournaments in numerous nations
the place they could or might not get an opportunity to make it big.
Ltd was given the primary on-line poker license and the company which runs the online poker web
site will broaden its choices in Nagaland. To be on a safer side, it’s increased to regulate to
varied techno authorized legal guidelines of India whereas opening an internet
primarily based playing, gaming and betting platform moderately than dealing
with the punitive provisions of Indian laws. The Indian gambling market is estimated to
be price US$60 billion per 12 months, of which about half is illegally guess.
#17 by massage lyon pas cher on 22 de junho de 2017 - 3:58 pm.
#18 by holidayParty on 22 de junho de 2017 - 4:03 pm
Hi there to all, the contents existing at this web page
are genuinely remarkable for people experience, well, keep up
the nice work fellows.
#19 by coreswu on 22 de junho de 2017 - 4:11 pm
②Microsoft Edgeが起動しない。 という訳で、社員に改めてオフィスの研修をすることにしました。 [url=]ms office 2013 personal[/url]
米連邦捜査局(FBI)ロサンゼルス支局は、ユーザーが自分のコンピューターシステム内の情報へアクセスするのを阻止あるいは制限するマルウェアの一種「ランサムウエア」を使ったサイバー攻撃を捜査中だとしている。 新しいBIOSファームウェアを配信して、LSEを無効化または削除する措置を取った。
[url=]office2016 メディア 購入[/url] IT業界では広く認可されている試験として、MB2-704基礎問題集はMicrosoftの中の最も重要な試験の一つです。 たとえば、Chart を Sheet1 に追加し、それがワークシートの最初の埋め込みグラフの場合は、Name プロパティの値は Sheet1 グラフ 1 になります。
[url=]office2016 プロダクト キー[/url]
今回の騒動について、日本マイクロソフトの担当者は、無償期間の終了が迫っていることから、通知画面を変更したことが理由ではないかと話しています。 ※ResEdit で編集を行っても Visual C++ Express には反映されません。 [url=]office2016 メディア 購入[/url]
男性教諭は直後に男子生徒らに「冗談のつもりだったが、不適切な行為だった」とわびたといい同日夜には男子生徒宅を訪れて保護者に謝罪した。 本物は横から見ても出っ張ってませんし、爪でひっかいても取れません。
[url=]office personal 2013 ダウンロード 版[/url] 。
[url=]office2016 メディア 購入[/url]
#20 by Pilkada Dki 2017 on 22 de junho de 2017 - 4:13 pm
Let’s take the example of online Punjabi news portals and e-newspapers..
#21 by Moupkda on 22 de junho de 2017 - 4:15 pm
(2016/6/1 産経新聞)「近く、日本人を幹部にするつもりだ」(キリッ)AIIBの幹部に採用された人は「はした金で釣られる程度の売国奴さん」ということでむしろ分かりやすい指標としてしか使われないでしょう。 ウエブサイト「PCワールド」の編集者、ブラッド・チャコスさんは「汚いトリック」だと批判する。 [url=]office 2016 激安[/url]
23:52 UP! さて、明日は通常作業に加えて図面描きの予定です。 一緒にしたらどっちがどっちだよ?シールの貼ってある位置がちょっと違うね、でも覚えていない。
[url=]日本アニメ人気DVD box[/url] どの 4 つのゕクションを順番に実行する必要がありますか。 ここで演習する数学モデルをソルバーで解く手順はシートにセットしてありますので、ソルバーがアドインしてあれば、電卓のように解が簡単に得られます。
[url=]Server 2012[/url]
パッケージ製品限定収録として200種類のクリップアートを同梱。 およそ45分でWindows10が入ったべし。 [url=]windows 8.1評価[/url]
ただし、パワフルなDVDコピープロテクト解除機能を備えるが、出力プロファイルがISOに限っている。 ⑩dvdのwindowsインストールディスクから起動します。
[url=]microsoft office 2010 格安[/url] ビジネスの現場の(line-of-business, LOB)アプリケーションは、その多くがさまざまなデータソースに接続する。 世代論を、結局のところ建設的な議論を育めないから、と避ける人が多いが、下記のように言われれば、なかなか黙ってはいられまい。
[url=]windows7 アップデート版[/url]
#22 by Boom lift repair florida on 22 de junho de 2017 - 4:16 pm
very couple of websites that occur to be in depth beneath, from our point of view are undoubtedly very well worth checking out
#23 by dc event photographer on 22 de junho de 2017 - 4:30 pm
I was recommended this website via my cousin. I am no longer sure whether this submit is written by way of him as nobody else recognise such detailed about my difficulty.
You’re amazing! Thanks!
#24 by match.com on 22 de junho de 2017 - 4:41 pm
Good Webpage, Continue the useful work. Thanks.
#25 by Situs Judi Poker Online Terpercaya on 22 de junho de 2017 - 4:47 pm
You could certainly see your skills within the work you
write. The sector hopes for more passionate writers such as you who are not
afraid to say how they believe. Always follow your heart.
#26 by haben sie einen blick hier on 22 de junho de 2017 - 5:00 pm
Die perverse Sexsite mit Teenbabes exclusiv für Dich.
#27 by trading criptomonedas on 22 de junho de 2017 - 5:06 pm
Inspiring story there. What occurred after? Good luck!
#28 by Property on 22 de junho de 2017 - 5:15 pm
Hi there! Would you mind if I share your blog with my myspace group?
There’s a lot of folks that I think would really enjoy your content.
Please let me know. Thank you
#29 by professional plumbing contractor on 22 de junho de 2017 - 5:28 pm
Sites of interest we have a link to
#30 by Diggin On You on 22 de junho de 2017 - 5:46 pm
Since the admin of this website is working, no hesitation very soon it will be well-known, due to its quality contents.
#31 by psychic on 22 de junho de 2017 - 6:26 pm
I do not even know how I ended up here, but I thought this post was great.
I do not know who you are but certainly you’re going to a famous blogger if you
aren’t already 😉 Cheers! | http://ploobs.com.br/arquivos/838 | CC-MAIN-2017-26 | refinedweb | 4,572 | 60.65 |
PyQt (but possibly C++) Very simple signal/slot "transference"/encapsulation?
This is a question for PyQt. However, I may be able to adapt a C++ solution, depnding on what it is....
I inherited code using a
QLineEdit. I have to change that into a (what I call) a "composite" widget, consisting of a
QWidgetwhich holds a
QHBoxLayoutwhich in turn holds the original
QLineEdit, plus a
QPushButton; the button leads to something which can populate the
QLineEdit.
I'm OK with the design, apart from signal/slot handling. The outside world used to go
QLineEdit.editingFinished.connect(...). To encapsulate, I'd like it to go
CompositeWidget.editingFinished.connect(...), rather than addressing the
QLineEditdirectly. So I want to simply "transfer" the existing
editingFinishedsignal/slot from the
QLineEditto the
CompositeWidgetlevel, "transparently".
This is for PyQt 5 only, not earlier versions. So far I've never had to use PyQt
@annotations (
@pySignal/Slotor whatever they are), and I'm not sure I ought need to, given the definition in
QLineEditin
QtWidgets.pyiis already as plain as
def editingFinished(self) -> None: ...
So, given that I regard minimal code as neat/desired, what is like the minimum I need to write to achieve this? I will need the outside world to be able to
connect(), my widget needs to be able to
emit()it (when the user has finished interacting via the button, widget populates the
QLineEditand needs that to raise
editingFinishedsignal to the outside world). I think that's it!
- SGaist Lifetime Qt Champion
Hi,
From a C++ point of view: no problem. Signal chaining is indeed the recommended way to propagate in such a case. Just connect the original signal to your custom signal and you should be good to go.
For PyQt, I have discovered the magic of
pyqtSignal()function for the simplest "redirection". So the code looks like:
class JDateEdit(QWidget): # *** THE NEXT LINE IS THE PyQt MAGIC.... *** # class variable for "editingFinished" signal editingFinished = QtCore.pyqtSignal() def initUI(self): # lineEdit holds the date self.lineEdit = QLineEdit(self) ... # connect self.lineEdit.editingFinished signal to self.editingFinished signal self.lineEdit.editingFinished.connect(self.editingFinished) de = JDateEdit() de.editingFinished.connect(...)
Dunno how this compares to whatever in C++ ...
- SGaist Lifetime Qt Champion
Same as before except that you will have two
Q_SIGNALin the connect statement if using the old version. | https://forum.qt.io/topic/86919/pyqt-but-possibly-c-very-simple-signal-slot-transference-encapsulation/1 | CC-MAIN-2019-09 | refinedweb | 385 | 50.33 |
19 November 2008 04:08 [Source: ICIS news]
SINGAPORE (ICIS news)--State-run National Petrochemical Co (NPC) expects to start up its new 350,000 tonne/year propane dehydrogenation (PDH) plant at Bandar Imam, Iran, by 2011, a source close to the project said on Wednesday.
?xml:namespace>
“The project is already under way and will be privatised after completion,” the source said. NPC is not planning any derivative projects downstream of the PDH unit, he said.
“The propylene will be used to feed the already running as well as planned polypropylene (PP) units at Bandar Imam and also two phenol-acetone projects being planned by private investors at Esfahan and a yet-to-be decided location,” he added.
For more | http://www.icis.com/Articles/2008/11/19/9172681/irans-npc-to-start-up-bandar-imam-pdh-by-2011.html | CC-MAIN-2014-49 | refinedweb | 121 | 59.98 |
On 27.02.2016 19:29, Christophe Henry wrote:
[...]
> public class CustomApplication extends Application
> {
> private static instance @Override void onCreate()
> {
> super.onCreate()
> instance =this }
>
> public trait ApplicationUtilities
> {
> public ContextgetApplicationContext(){return instance.applicationContext }
> }
> }
>
> While this piece of code would have worked with an abstract class or a
> Java 8's interface, it throws a compilation error with a trait.
>
> I know I'm trying to do unnatural things with this language but is there
> any reason for this limitation and is there a plan to implement a
> feature like this in future releases?
Hmmm, I wonder if this can be made sense of or not. A trait can be seen
as some independent piece that can be added to a class. But for it to be
independent, it cannot be non-static. In your code the trait
implementation would depend on an instance of CustomApplication and
cannot be created without supplying such an instance. If I now have a
random class X, how can I let that use the trait without an instance of
CustomApplication? I cannot. Then how does the instance get there? Is
there one instance per trait usage, or do all traits share the same
CustomApplication instance (as would be the case in the non-static case
for inner classes!)d
So what makes no sense is to have a
class X implements CustomApplication.ApplicationUtilities {}
What could work is something like:
class X {
def someField
trait T {
def getField() {return someField}
}
class Y implements T {
}
def foo(){ return new Y() }
}
def x = new X()
assert x.someField == x.foo().getField()
I kind of doubt you wanted this later version... and does not anyway...
yes, the inner class cases have been totally ignored so far. Even making
the trait static is raising havoc in the compiler... and that at least
should have passed compilation and only failed when trying to access
"someField".
Though... further thought leads me to see things a little different. In
a trait you can do
trait X {
def foo() { return bar }
}
class Y implements X{
private bar = 1
}
The property bar is not existing in the trait, it is expected on
"traited" class. Following this logic, what does it mean to access the
field of an outer class? Strictly following this logic means not to be
able to do that. Resolving first outer class fields and then traited
class means to get into trouble with dynamic properties.
bye Jochen | http://mail-archives.apache.org/mod_mbox/groovy-dev/201602.mbox/%3C56D41D9A.4080103@gmx.org%3E | CC-MAIN-2017-39 | refinedweb | 404 | 63.49 |
problem- Demonetisation
contest- BITFLIT
PROBLEM LINK:-
MY CODE :-
RESULT :- WA
The contest is over now so please help me debugging it.
problem- Demonetisation
contest- BITFLIT
PROBLEM LINK:-
MY CODE :-
RESULT :- WA
The contest is over now so please help me debugging it.
@neget Your logic is correct but you are taking answer as int, just change it to long long int and it will get AC. Although, according to the limits, the maximum answer for a single test case can be 1000500000 which is well within the limit of int, but sometimes the test data might contain some larger test cases also. So, its a good practice to take final answer as long long int in such cases.
@neget Your code is correct but you forgot something. “long long int sum”.
When I remove all “if-else” conditions the result is AC and also time efficient. Check it out.
#include<bits/stdc++.h> using namespace std; int main() { int t; cin>>t; while(t--) { int n; cin>>n; long long int sum; long long int tc=0; for(int i=0;i<n;i++) { cin>>sum; tc= tc+sum/100; sum=sum%100; tc=tc+sum/50; sum=sum%50; tc=tc+sum/20; sum=sum%20; tc=tc+sum/10; sum=sum%10; tc=tc+sum/5; sum=sum%5; tc=tc+sum/3; sum=sum%3; tc=tc+sum/2; sum=sum%2; tc=tc+sum; } cout<<tc<<endl; } return 0; }
@neget
Your code to the problem can be simplified has in poste by @only4. But you would still recieve WA. It would fail some test cases.
According to your code: 9+1+1+1(=4)
Actual Answer: 4+4+4 (=3) which would be the correct answer
Editorial for DMNTION can be found here
To implement this problem, there is a concept of Dynamic Programming aka DP. It is very important concept in dynamic programming. Check out the below link to know more about dp: | https://discusstest.codechef.com/t/help-me-debugging-my-code/13974 | CC-MAIN-2021-31 | refinedweb | 328 | 70.13 |
This is a bug in Rails that quite likely affects you, but which you’ve even more likely never experienced. I’ve posted it here for the benefit of the small number of people who will run into this problem and turn to Google for help.
In short, if you use Mongrel app servers (this may affect Passenger as well, I don’t know), the first HTTP request to your Rails app after you restart your servers, or otherwise reload your environment, will have an empty HTTP body.
I say you’ve likely never experienced this because the majority of HTTP requests to your Rails app are likely GET requests, which always have empty HTTP bodies. After that first request everything will work just fine. Even if you’re unlucky enough to receive a POST or a PUT request containing a body immediately after restart it will only fail once, which you could easily write off an an anomaly. You also won’t see this behavior in your development environment, or any environment in which you use Mongrel as a web server rather than just an app server.
If you’re interested in a patch for the bug, I’ve submitted one to Rails here.
The source of the problem lies in how ActionController initializes itself. In the actionpack gem you’ll find the lib/action_controller/cgi_ext.rb file, which does little more than load the three files in the cgi_ext directory:
require 'action_controller/cgi_ext/stdinput' require 'action_controller/cgi_ext/query_extension' require 'action_controller/cgi_ext/cookie' ...
The cgi_ext/query_extension.rb file is the interesting one:
require 'cgi' class CGI #:nodoc: module QueryExtension # Remove the old initialize_query method before redefining it. remove_method :initialize_query # Neuter CGI parameter parsing. def initialize_query # Fix some strange request environments. env_table['REQUEST_METHOD'] ||= 'GET' # POST assumes missing Content-Type is application/x-www-form-urlencoded. if env_table['CONTENT_TYPE'].blank? && env_table['REQUEST_METHOD'] == 'POST' env_table['CONTENT_TYPE'] = 'application/x-www-form-urlencoded' end @cookies = CGI::Cookie::parse(env_table['HTTP_COOKIE'] || env_table['COOKIE']) @params = {} end end end
This replaces the default #initialize_query method provided by Ruby’s CGI library:
def initialize_query() if ("POST" == env_table['REQUEST_METHOD']) and %r|Amultipart/form-data.*boundary="?([^";,]+)"?|n.match(env_table['CONTENT_TYPE']) boundary = $1.dup @multipart = true @params = read_multipart(boundary, Integer(env_table['CONTENT_LENGTH'])) else @multipart = false @params = CGI::parse( case env_table['REQUEST_METHOD'] when "GET", "HEAD" if defined?(MOD_RUBY) Apache::request.args or "" else env_table['QUERY_STRING'] or "" end when "POST" stdinput.binmode if defined? stdinput.binmode # =====> stdinput.read(Integer(env_table['CONTENT_LENGTH'])) or '' else read_from_cmdline end ) end @cookies = CGI::Cookie::parse((env_table['HTTP_COOKIE'] or env_table['COOKIE'])) end
The interesting line is the one I’ve marked with a comment rocket. Notice how it reads from stdinput; this leaves the read pointer at the end of the input stream. Now look back at the Rails override for this method, and notice how it does not read from stdinput, thus leaving the read pointer at the start of the input stream.
This is all fine and dandy as long as all of the ActionController code loads up and patches the CGI library properly. However, ActionController doesn’t load the cgi_ext.rb file (or its dependencies) until it references either the CgiRequest or CGIHandler classes (which require cgi_process.rb, which require cgi_ext.rb), as part of the first request, which is after the default Ruby CGI library has read the input stream containing the request body. ActionController then tries to read the request body assuming the read pointer is at the start of the stream. Oops. Subsequent requests work fine, because everything has now been loaded.
Finding the source of this bug took some doing (Chris Heisterkamp, aka “The Hammer” and I tracked it down together), but the fix is easy. If you look at the patch you’ll see it’s simply a single require in action_controller.rb. You can achieve the same result by requiring ‘action_controller/cgi_ext’ in an initializer file in your app.
Like many problems, this one should go away in Rails 3. Rails has deprecated use of the CGI library, and the CGI extensions have already been removed from the Rails master branch. However, it’s a real problem now, and will remain so for at least some amount of time.
Thanks for summarising this. We ran into the issue and was dumbfounded by the error ourselves. Going to try passenger instead and seeing if it still happens.
August 24, 2009 at 5:11 pm
Thanks a lot for the write up and the solution. When you have a lot of Mongrels and frequent restarts this issue can become a serious pain.
September 10, 2009 at 12:29 am
Thank you. Really! I’m a ruby/rails noob, 20+ years in IT. My app starts on a multi form search page. All the forms are post. I’ve been thinking I just didn’t get it. Your work-around works perfectly.
September 15, 2009 at 7:41 am
You are my hero. I ran into this problem and had a horrible time figuring out what was going wrong. Your patch works wonderfully.
In case it helps future people find this post quicker, the problem for me first showed up as a 500 error that wasn’t in the log files, and which I later determined was showing up as an EOFError exception with the message “bad content body.”
October 31, 2009 at 10:51 pm | http://pivotallabs.com/rails-requests-missing-the-http-body/ | CC-MAIN-2014-52 | refinedweb | 893 | 55.84 |
Retrieve the current value of the specified event property of type void*.
#include <screen/screen.h>
int screen_get_event_property_pv(screen_event_t ev, int pname, void **param)
The handle of the event whose property is being queried. The event must have an event type of Screen event types.
The name of the property whose value is being queried. The properties available for query are of type Screen property types.
The buffer where the retrieved value(s) will be stored. This buffer must be of type void*.
Function Type: Immediate Execution
This function stores the current value of an event property in a user-provided buffer. The list of properties that can be queried per event type are listed as follows:
0 if a query was successful and the value(s) of the property are stored in param, or -1 if an error occurred (errno is set). | http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.qnxcar2.screen/topic/screen_get_event_property_pv.html | CC-MAIN-2019-47 | refinedweb | 143 | 72.97 |
std::sort()from
#include algorithm
total = dArray[i]+ dArray [i+1];
total += dArray[i]+ dArray [i+1];
} while (!fin.eof());Instant fail. count will always be off by one.
median = dArray[count/2];Will not work as it should iff count is even
dArray[count]Out of bound access: array of count elements have valid indexes [0; count-1]
min (dArray)Did you made min as a function?
int meanwhat you think would be mean of 2 and 3 in your case? Answer: it would be 2 because of rounding. Use double.
double mean = std::accumualte(dArray, dArray + count, 0.0) / count;
median = dArray[count/2];Find median of sequence {0, 5, 7, 7} using your code. it gives you 7, but it should be 6. Fix that.
#include <numeric>Yes. But it is better to declare variables where they will be used.
i <= count;i < count
for(int i = 0; i < n; ++i) | http://www.cplusplus.com/forum/beginner/101268/ | CC-MAIN-2017-04 | refinedweb | 154 | 66.74 |
FlipView, represents an item control which displays one item at a time and enables user to use the flip behavior which is used for traversing through the collection of items. Typically such a control is used for traversing through Product Catalog, Books information etc. Technically the FlipView is a control provided in Windows Store Apps through Windows Library for JavaScript. The data to FlipView is made available through IListDataSource. (Note: You can get data from an external web sources like, web service, WCF service, WEB API in the JSON format).
We will use the FlipView for iterating through Images that’s passed to it from the WinJS List object.
Step 1: Open VS2012 and create a new Windows Store Apps using JavaScript. Name it as ‘Store_JS_FlipView’. In this project add some images in the ‘images’ folder. (I have some images as Mahesh.png, SachinN.png,SachinS.png, KiranP.jpg).
Step 2: In the default.html add the below Html code:
<div id="trgFilpView" data-</div>
Note that the <div> is set to the WinJS.UI.FlipView using data-win-control property.
Step 3: Add the style for the FlipView in the default.html as below:
<style type="text/css">
#trgFilpView
{
width: 600px;
height: 500px;
border: solid 1px black;
background-color:coral;
}
</style>
Step 4: Since the FlipView accepts the IListDataSource, we need to define it as JSON data. To do this, add a new JavaScript file in the project, name it as ‘dataInformation.js’. Add the following code in it:
(function ()
{
"use strict";
//The JavaScript Array.
var trainerArray = [
{
name: "Sachin Shukre",
image: "images/SachinS.jpg",
description: "The Senior Corporate Trainer for C,C++"
},
{
name: "Mahesh Sabnis",
image: "images/Mahesh.png",
description: "The Senior Corporate Trainer for .NET"
},
{
name: "Sachin Nimbalkar",
image: "images/SachinN.jpg",
description: "The Senior Corporate Trainer for Client Side Frameworks"
},
{
name: "Mahesh Sabnis",
image: "images/KiranP.jpg",
description: "The Senior Corporate Trainer for C# and ASP.NET"
}];
//Define the List from the Array
var trainersList = new WinJS.Binding.List(trainerArray);
//This is the Private data
//To expose Data publically, define namespace which defines
//The object containing the Property-Value pair.
//The property is the public name of the member and the value is variable which contains data
var trainersInfo =
{
trList:trainersList
};
WinJS.Namespace.define("TrainersInformation", trainersInfo);
})();
The above code defines a JSON array with hard-coded data in it. This array is then passed to the ‘trainersList’ object defined as a List object using WinJS.Binding.List(). Since the array and the List are declared as private objects, these will not be exposed to the FlipView. To do this, we need to define the namespace which defines an object with property/value pair. The namespace is defined using WinJS.Namespace.define(). The ‘trList’ is the public property which contains the ‘trainersList’. This is now exposed to the FlipView.
Step 5: To display data into the FlipView, we need to define a Template for showing the repeated data. (Note: This is conceptually similar to templates in XAML). In the default.html add the below <div> tag, this is set to the WinJS.Binding.Template.
<!--Define the Template Here-->
<div id="DataTemplate" data-
<div>
<img src="#" data-
<div>
<h3 data-</h3>
<h4 data-</h4>
</div>
</div>
</div>
<!--Ends Here-->
The above template contains <img> which is bound to the ‘image’ property. which is coming from the List which is defined in Step 4 using the Array. The header <h3> and <h4> are used for displaying ‘name’ and ‘description’ declared in the array.
Step 6: To display the data in the FlipView, change the <div> with id ‘trgFlipView’ as shown below:
<div id="trgFilpView"
data-win-control="WinJS.UI.FlipView"
data-win-options=
"{
itemDataSource:TrainersInformation.trList.dataSource,
itemTemplate:DataTemplate
}">
</div>
To connect to the data, the itemsDataSource property of the FlipView is used. This property is assigned to trList object defined in the namespace in Step 4. The ItemTemplate property is set to the DataTemplate defined in Step 5.
Run the application, you will get the First Record from the Array.
Click on the Flip Navigation button, the next record will be displayed:
ConclusionWe saw how to bind an array of images to a FlipView control in WinJS. We can use the FlipView control to display any page-wise data, for example magazines, books etc.
The entire source code of this article can be downloaded at
Will you give this article a +1 ? Thanks in advance
comments0 Responses to "Using FlipView Control in Windows Store Apps using JavaScript and HTML" | http://www.devcurry.com/2013/03/using-flipview-control-in-windows-store.html | CC-MAIN-2014-42 | refinedweb | 750 | 66.23 |
These days, running your apps over HTTPS is pretty much required. so you need an SSL certificate to encrypt the connection between your app and a user's browser.
I was recently trying to create a self-signed certificate for use in a Linux development environment, to serve requests with ASP.NET Core over SSL when developing locally. Playing with certs is always harder than I think it's going to be, so this post describes the process I took to create and trust a self-signed cert.
Disclaimer I'm very much a Windows user at heart, so I can't give any guarantees as to whether this process is correct. It's just what I found worked for me!
Using Open SSL to create a self-signed certificate
On Windows, creating a self-signed development certificate for development is often not necessary - Visual Studio automatically creates a development certificate for use with IIS Express, so if you run your apps this way, then you shouldn't have to deal with certificates directly.
On the other hand, if you want to host Kestrel directly over HTTPS, then you'll need to work with certificates directly one way or another. On Linux, you'll either need to create a cert for Kestrel to use, or for a reverse-proxy like Nginx or HAProxy. After much googling, I took the approach described in this post..cer
This creates 3 files:
localhost.cer- The public key for the SSL certificate
localhost.key- The private key for the SSL certificate
localhost.pfx- An X509 certificate containing both the public and private key. This is the file that will be used by our ASP.NET Core app to serve over HTTPS.
The script creates a certificate with a "Common Name" for the
localhost domain (the
-subj /CN=localhost part of the script). That means we can use it to secure connections to the
localhost domain when developing locally.
The problem with this certificate is that it only includes a common name so the latest Chrome versions will not trust it. Instead, we need to create a certificate with a Subject Alternative Name (SAN) for the DNS record (i.e.
localhost).
The easiest way I found to do this was to use a .conf file containing all our settings, and to pass it to
openssl.
Creating a certificate with DNS SAN
The following file shows the .conf config file that specifies the particulars of the certificate that we're going to create. I've included all of the details that you must specify when creating a certificate, such as the company, email address, location etc.
If you're creating your own self signed certificate, be sure to change these details, and to add any extra DNS records you need.
[ req ] prompt = no default_bits = 2048 default_keyfile = localhost.pem distinguished_name = subject req_extensions = req_ext x509_extensions = x509_ext string_mask = utf8only # The Subject DN can be formed using X501 or RFC 4514 (see RFC 4519 for a description). # Its sort of a mashup. For example, RFC 4514 does not provide emailAddress. [ subject ] countryName = GB stateOrProvinceName = London localityName = London organizationName = .NET Escapades # Use a friendly name here because its presented to the user. The server's DNS # names are placed in Subject Alternate Names. Plus, DNS names here is deprecated # by both IETF and CA/Browser Forums. If you place a DNS name here, then you # must include the DNS name in the SAN too (otherwise, Chrome and others that # strictly follow the CA/Browser Baseline Requirements will fail). commonName = Localhost dev cert emailAddress = [email protected] # Section x509_ext is used when generating a self-signed certificate. I.e., openssl req -x509 ... [ x509_ext ] subjectKeyIdentifier = hash authorityKeyIdentifier = keyid,issuer # You only need digitalSignature below. *If* you don't allow # RSA Key transport (i.e., you use ephemeral cipher suites), then # omit keyEncipherment because that's key transport. # Section req_ext is used when generating a certificate signing request. I.e., openssl req ... [ req_ext ] subjectKeyIdentifier = hash [ alternate_names ] DNS.1 = localhost # Add these if you need them. But usually you don't want them or # need them in production. You may need them for development. # DNS.5 = localhost # DNS.6 = localhost.localdomain # DNS.7 = 127.0.0.1 # IPv6 localhost # DNS.8 = ::1
We save this config to a file called
localhost.conf, and use it to create the certificate using a similar script as before. Just run this script in the same folder as the localhost.conf file.
openssl req -config localhost.conf -new -x509 -sha256 -newkey rsa:2048 -nodes \ -keyout localhost.key -days 3650 -out localhost.crt openssl pkcs12 -export -out localhost.pfx -inkey localhost.key -in localhost.crt
This will ask you for an export password for your pfx file. Be sure that you provide a password and keep it safe - ASP.NET Core requires that you don't leave the password blank. You should now have an X509 certificate called
localhost.pfx that you can use to add HTTPS to your app.
Trusting the certificate
Before we use the certificate in our apps, we need to trust it on our local machine. Exactly how you go about this varies depending on which flavour of Linux you're using. On top of that, some apps seem to use their own certificate stores, so trusting the cert globally won't necessarily mean it's trusted in all of your apps.
The following example worked for me on Ubuntu 16.04, and kept Chrome happy, but I had to explicitly add an exception to Firefox when I first used the cert.
#Install the cert utils sudo apt install libnss3-tools # Trust the certificate for SSL pk12util -d sql:$HOME/.pki/nssdb -i localhost.pfx # Trust a self-signed server certificate certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n 'dev cert' -i localhost.crt
As I said before, I'm not a Linux guy, so I'm not entirely sure if you need to run both of the trust commands, but I did just in case! If anyone knows a better approach I'm all ears :)
We've now created a self-signed certificate with a DNS SAN name for
localhost, and we trust it on the development machine. The last thing remaining is to use it in our app.
Configuring Kestrel to use your self-signed certificate
For simplicity, I'm just going to show how to load the localhost.pfx certificate in your app from the .pfx file, and how configure Kestrel to use it to serve requests over HTTPS. I've hard-coded the .pfx password in this example for simplicity, but you should load it from configuration instead.
Warning You should never include the password directly like this in a production app.
The following example is for ASP.NET Core 2.0 - Shawn Wildermuth has an example of how to add SSL in ASP.NET Core 1.X (as well as how to create a self-signed cert on Windows).
public class Program { public static void Main(string[] args) { BuildWebHost(args).Run(); } public static IWebHost BuildWebHost(string[] args) => return WebHost.CreateDefaultBuilder() .UseKestrel(options => { // Configure the Url and ports to bind to // This overrides calls to UseUrls and the ASPNETCORE_URLS environment variable, but will be // overridden if you call UseIisIntegration() and host behind IIS/IIS Express options.Listen(IPAddress.Loopback, 5001); options.Listen(IPAddress.Loopback, 5002, listenOptions => { listenOptions.UseHttps("localhost.pfx", "testpassword"); }); }) .UseStartup<Startup>() .Build(); }
Although
CreateDefaultBuilder() adds Kestrel to the app anyway, you can call
UseKestrel() again and specify additional options. Here we are defining two URLs and ports to listen on (The
IPAddress.Loopback address corresponds to
localhost or
127.0.0.1):
- - An unsecured end point
- - Secured using our SSL cert
We add HTTPS to the second
Listen() call with the
UseHttps() extension method. There are several overloads of the method, which allow you to provide an
X509Certificate2 object directly, or as in this case, a filename and password to a certificate.
If everything is configured correctly, you should be able to view the app in Chrome, and see a nice, green, Secure padlock:
As I said at the start of this post, I'm not 100% on all of this, so if anyone has any suggestions or improvements, please let me know in the comments. | https://andrewlock.net/creating-and-trusting-a-self-signed-certificate-on-linux-for-use-in-kestrel-and-asp-net-core/ | CC-MAIN-2021-10 | refinedweb | 1,377 | 66.54 |
I suggest your create a new thread and post a link here so I and others interested in your application can help.
Time and TimeAlarms are libraries for handling time and time based tasks on Arduino. The code can be found here. This thread is for help on how to use these libraries and suggestions for future improvements.A thread specifically for discussing issues relating to updates in the beta test version can be found here (that thread will be closed after the beta testing is completed.
Hello mem,I needed to use DS1307 Real-Time Clock on a project and saw the Time library, but I thought it was too big for what I needed (just to know day, month, year, hour, minute and second from DS1307), so I decided to create a new, simple library to achieve this task and its code is at GitHub: think we should integrate it, maybe creating a "driver interface" (just some conventions) so we can create drivers for many RTC chips and use the same code. What do you think?Other thing I think that should be changed in Time library is the namespace of functions. Maybe using Time.hour(), Time.day() etc. instead of directly hour(), day() etc.
/* * A simple sketch to display time from a DS1307 RTC * */#include <Time.h> #include <Wire.h> #include <DS1307RTC.h> // a basic DS1307 library that returns time as a time_tchar dateTime[20];void setup() { Serial.begin(9600); }void loop(){ setTime(RTC.get()); sprintf(dateTime, "%4d-%02d-%02d %02d:%02d:%02d", year(), month(), day(), hour(),minute(), second()) ; Serial.print(dateTime); Serial.print(" - day of week: "); Serial.println(dayStr(weekday())); delay(1000);}
void setup(){ Serial.begin(115200); Serial.println("UnixTime simulator :) ");}void loop(){ char buffer[20]; time2string(millis(), buffer); Serial.print(millis()); Serial.print(" ==> "); Serial.println(buffer);}uint8_t daysInMonth [] = { 31,28,31,30,31,30,31,31,30,31,30,31 };void time2string(unsigned long t, char *buf){ uint8_t yOff, m, d, hh, mm, ss; ss = t % 60; t /= 60; mm = t % 60; t /= 60; hh = t % 24; uint16_t days = t / 24; uint8_t leap; for (yOff = 0; ; ++yOff) { leap = yOff % 4 == 0; if (days < 365 + leap) break; days -= 365 + leap; } for (m = 1; ; ++m) { uint8_t daysPerMonth = daysInMonth[ m - 1]; if (leap && m == 2) ++daysPerMonth; if (days < daysPerMonth) break; days -= daysPerMonth; } d = days + 1; sprintf(buf,"%4d-%02d-%02d %02d:%02d:%02d", 1970+yOff, m,d, hh,mm,ss);}
Many C compilers have a function named ctime that produces a date string but I don't think its available with the arduino tools. I have always used multiple print statements to display the data, is there a reason you can't do that for your application?
incompatible types in assignment of 'time_t' to 'char [11]'
#include "stdlib.h"void setup(){ Serial.begin(115200); char buffer[11]; unsigned long unixtime=1234567890L; // L for long ltoa(unixtime, buffer, 10); Serial.println(buffer); }void loop(){}
do you mean some thing like this?Code: [Select]#include "stdlib.h"void setup(){ Serial.begin(115200); char buffer[11]; unsigned long unixtime=1234567890L; // L for long ltoa(unixtime, buffer, 10); Serial.println(buffer); }loop(){}(code not tested)see - - search for ltoa
#include "stdlib.h"void setup(){ Serial.begin(115200); char buffer[11]; unsigned long unixtime=1234567890L; // L for long ltoa(unixtime, buffer, 10); Serial.println(buffer); }loop(){}
you need to add void before loop(){}allready patched in posting above. | http://forum.arduino.cc/index.php?topic=66054.msg506605 | CC-MAIN-2015-35 | refinedweb | 565 | 65.01 |
In this lesson, you'll learn how to create boxplots in Python using matplotlib.
The Imports We'll Need For This Lesson
As before, the code cells in the lesson will assume that you have already performed the following imports:
import matplotlib.pyplot as plt %matplotlib inline import pandas as pd
The Dataset We Will Be Using In This Lesson
In our first lesson on using pyplot, we used fake datasets generated using NumPy's random number generator. While this can be useful for educational purposes, it is time for us to begin working with a real-world dataset.
Specifically, we will be working with the famous Iris data set. This data set was produced by English statistician Ronald Fisher in 1936 (!!) when he was writing one of the first papers on linear discriminant analysis.
The Iris dataset is so commonly used for machine learning and deep learning practice that it is actually included in many data visualization and statistical libraries for Python. However, we are not using any of those libraries. Because of this, we will import the Iris dataset manually.
To make things easy for you, I have uploaded a
json file containing the iris dataset to the GitHub repository for this course. You can find it in the folder
iris with the filename
iris.json.
You can import this dataset into your Python script using the following command:
import pandas as pd iris_data = pd.read_json('')
The iris data set is a collection of data points for flowers with the following data fields:
sepalLength
sepalWidth
petalLength
petalWidth
species
It is an ideal candidate for creating boxplots using matlpotlib.
How To Create Boxplots in Python Using Matplotlib
We will now learn how to create a boxplot using Python. Note that boxplots are sometimes call 'box and whisker' plots, but I will be referring to them as boxplots throughout this course.
First, what is a boxplot?
A boxplot is a chart that has the following image for each data point (like
sepalWidth or
petalWidth) in a dataset:
Each specific component of this boxplot has a very well-defined meaning. They are labeled in the following image.
For those unfamiliar with the terminology of this diagram, they are described below:
- Q1: The first quartile of the dataset. 25% of values lie below this level.
- Q2: The second quartile of the dataset. 50% of values lie above and below this level.
- Q3: The third quartile of the dataset. 25% of values lie above this level.
- The boxplot 'Minimum', defined as Q1 less 1.5 times the interquartile range.
- The boxplot Maximum, defined as Q3 plus 1.5 times the interquartile range.
- The median: the midpoint of the datasets.
- Interquartile range: the distance between Q1 and Q3.
- Outliers: data points that are below Q1 or above Q3.
So how can we actually create a boxplot using matplotlib?
First, we will have to drop any non-numerical columns from the
iris_data DataFrame.
The only column that is non-numerical is
species.
We can drop
species from
iris_data using the
drop method, like this:
iris_data = iris_data.drop('species', axis=1)
Now that the dataset contains only numerical values, we are ready to create our first boxplot!
You can create a boxplot using matlplotlib's
boxplot function, like this:
plt.boxplot(iris_data)
The resulting chart looks like this:
As you've probably guessed, this is not what we wanted our boxplot to look like! What is the solution?
If you look closely at this chart, it becomes clear that this is creating a boxplot where there is a chart for each row, not a chart for each column. The solution for this is to transpose the DataFrame using the
transpose method.
You can either do this in separate lines, like this:
transposed_iris_data= iris_data.transpose() plt.boxplot(transposed_iris_data)
Alternatively, you can transpose the DataFrame within the
boxplot method like this:
plt.boxplot(iris_data.transpose())
This looks much better!
However, we still have work to do.
One of the problems that remains is that the x-axis is not labeled. It is currently unclear which boxplot represents which data point.
We can modify the labels of the x-axis using matplotlib's
xticks method. The
xticks method takes two arguments:
ticks: A list of positions at which the labels should be placed.
labels: A list of explicit labels to place at the given ticks.
Note that each of these arguments must be a list - which means they begin with
[ and end in
]. As an example, you could label the 2nd entry as 'The Second Entry!' with the following
xticks command:
plt.xticks([2], ['The Second Entry!'])
If you wanted to label each boxplot with its corresponding datapoint, your arguments should look like this:
ticks:
[1, 2, 3, 4]
labels: ['sepalLength', 'sepalWidth', 'petalLength', 'petalWidth']
Typing out these arguments by hand is not ideal. It does not scale to larger datasets with many more datapoints per observation.
Because of this, it is a good idea to learn how to programmatically generate the
ticks and
labels arguments in a way that would be repeatable for large databases.
Let's start by programmatically creating the
ticks argument:
ticks = range(1, len(iris_data.columns)+1)
This statement uses the
range function to create a list from 1 to 4 (inclusive), since the value of
iris_data.columns is 4.
Next, let's create the
labels argument:
labels = list(iris_data.columns)
A brief explanation of this code cell is below:
- First, we create an object that contains all of the column names using the pandas DataFrame
columnsattribute.
- Next, we force this object into a
listdata structure using the
listfunction.
With all this done, we can relabel the x-axis as follows:
plt.boxplot(iris_data.transpose()) plt.xticks(ticks,labels)
That plot looks much better! In the next section, we will explore how to style boxplots using various methods available in matplotlib.
Customizing The Appearance of Boxplots
There are a number of ways that we can customize the appearance of boxplots created using matplotlib. We will discuss a few methods in this section.
First, we can pass in the
showmeans=True argument to show the means of the datasets we're displaying. An example is below:
plt.boxplot(iris_data.transpose(),showmeans=True) plt.xticks(ticks,labels)
We can also use the
showfliers=False argument to remove the outliers from the chart. An example is below:
plt.boxplot(iris_data.transpose(),showfliers=False) plt.xticks(ticks,labels)
The last two arguments that we will explore are
boxprops and
flierprops, which change the appearance of the box within the boxplot (for
boxprops) and the outliers within the boxplot (for
flierprops).
Both
boxprops and
flierprops must be passed into the
boxplot method as a dictionary. Because of this, it is easiest to create these variables outside of the
boxplot method, like this:
boxprops = dict(linestyle='--', linewidth=3, color='darkgoldenrod')
Once this is done, you can create the actual plot and incorporate the
boxprops dictionary like this:
plt.boxplot(iris_data.transpose(), boxprops=boxprops) plt.xticks(ticks,labels)
The
flierprops argument works in a similar manner. We first create the
flierprops dictionary outside of the
boxplot method, like this:
flierprops = dict(marker='o', markerfacecolor='green', markersize=12, linestyle='none')
Then we pass it into the
boxplot method:
plt.boxplot(iris_data.transpose(), boxprops=boxprops, flierprops=flierprops) plt.xticks(ticks,labels)
Moving On
In this lesson, we learned how to import the Iris dataset and create boxplots with it. We also learned how to style boxplots using the properties of matplotlib's
boxplot method. | https://nickmccullum.com/python-visualization/boxplot/ | CC-MAIN-2021-31 | refinedweb | 1,250 | 65.52 |
I have spent a few days with NiFi trying to use ExecuteSQLRecord and PutDatabaseRecord (configured with AvroRecordsetWriter) to transfer data from one PostgreSQL table to another table. Everything works find until I include an array of float column. The error shown in PutDatabaseRecord was "Cannot cast an instance of Ljavalang.Object to type Types.ARRAY"
Can anyone show me an example of how to make PutDatabaseRecord work with an array column in Postgresql database?
Thanks,
Andy
Hi experts,
Could anyone shed some light on this? Or this is not yet supported?
Thanks.
Created 04-13-2020 05:54 AM
@stevenmatison Think I've managed to hit this as well.
Some relevant screenshots (processor config, schema):
Schema for good measure:
{ "type": "record", "namespace": "com.asdf", "name": "sdfg", "fields": [ { "name": "doc_id", "type": {"type": "array", "items": "long"} }, { "name": "start_id", "type": {"type": "array", "items": "int"} }, { "name": "end_id", "type": {"type": "array", "items": "int"} }, { "name": "passage_date", "type": {"type": "int", "logicalType": "date"} }, { "name": "passage_time", "type": {"type": "int", "logicalType": "time-millis"} } ] }
Thanks- any insight would be much appreciated!
Hi Ana,
Unfortunately I didn't find a "clean" solution for this at all- ended up converting my Avro records to JSON (using ConvertAvroToJSON), then ran into some similar issues with the ConvertJSONToSQL processor (it wasn't able to correctly generate SQL for all relevant data types- I seem to recall datetimes being an issue?), so used a Python ExecuteScript processor to convert JSON to SQL statements before executing these with PutSQL.
TL;DR: Avro records -> ConvertAvroToJSON -> ExecuteScript ("homemade" JSON to SQL conversion) -> PutSQL
Hope this helps!
Created 01-05-2021 01:34 PM
ARRAYs are a bit tricky. But JSONReader and Writer may work better. | https://community.cloudera.com/t5/Support-Questions/PutDatabaseRecord-and-ARRAY-type/td-p/293564 | CC-MAIN-2021-10 | refinedweb | 276 | 53.71 |
Silly GORM tricks, part II: dependent variables
April 29, 2008 6 Comments
This post discusses a relatively simple topic in GORM: how to use dependent variables in a domain class. It’s simple in the sense that it’s been discussed on the mailing list, but I haven’t seen it documented anywhere so I thought I’d do so here.
I started with a simple two-class domain model that I discussed in my last GORM post.
class Quest { String name static hasMany = [tasks:Task] String toString() { name } }
class Task { String name static belongsTo = [quest:Quest] String toString() { name } }
As before, there is a one-to-many relationship between quests and tasks. A quest has many tasks, and the
belongsTo setting implies a cascade-all relationship, so inserting, updating, or deleting a quest does the same for all of its associated tasks.
In
Bootstrap.groovy, I also have:
def init = { servletContext -> new Quest(name:'Seek the grail') .addToTasks(name:'Join King Arthur') .addToTasks(name:'Defeat Knights Who Say Ni') .addToTasks(name:'Fight Killer Rabbit') .save() }
which shows how the classes are intended to work together.
The first change I want to make is to give tasks a start date and end date. My first attempt is to just add properties with those names, of type
java.util.Date.
class Task { String name Date start Date end // ... rest as before ... }
This leads to a minor problem. If I start up the server, I don’t see any quests or tasks. The reason is that my bootstrap code tries to create tasks without start and end dates, which violates the database schema restriction. My generated schema marks both
start and
end columns as “not null”.
There are many ways to fix that. I can either assign both
start and
end properties for each task in my bootstrap code, or add a constraint in Task that both can be
nullable, or do what I did here, which is to give them default values.
class Task { String name Date start = new Date() Date end = new Date() + 1 // ... rest as before ... }
I do have a constraint in mind, actually. I’d like to ensure that the
end date is after the
start date. That requires a custom validator, which is also pretty easy to implement:
class Task { // ... static constraints = { name(blank:false) start() end(validator: { value, task -> value >= task.start }) } }
That works fine.
Now for the dependent variable. My tasks all have a
start and an
end, so implicitly they have a duration. I could add the
duration variable to my Task class, but I don’t want to save it in the database. It’s dependent on the values of
start and
end. I also don’t want to be able to set it from the gui.
Here’s the result:
class Task { String name Date start Date end int getDuration() { (start..end).size() } void setDuration(int value) {} static transients = ['duration'] // ... rest as before ... }
This computes the
duration from the
start and
end dates by returning the number of days between them. It relies on the fact that Groovy modifies
java.util.Date to have the methods
next() and
previous(), and since
Date implements
Comparable, it can then be used in a range, as shown.
(As an aside, this implementation is probably pretty inefficient. If the number of days between start and end was substantial, I think this implementation executes the
next() method over and over until it reaches the end. I thought about trying to subtract the two dates, but interestingly enough the
Date class only has
plus() and
minus() methods that take
int values, not other Dates. I considered adding a category that implemented those methods, but haven’t tried it yet. I’d like to look in the Groovy source code for the
plus() and
minus() implementations, but I couldn’t find it. I did find something similar in
org.codehaus.groovy.runtime.DefaultGroovyMethods, but I’m not sure that’s the same thing. Sigh. Still a lot to learn…)
By putting
'duration' in the
transients closure, I ensure that it isn’t saved in the database.
The
getDuration method is pretty intuitive, but adding set method as a no-op is somewhat annoying. If I leave it out, then Groovy will generate a setter that can modify the duration. As an alternative, according to GinA I can also supply my own backing field and mark it as final:
class Task { // ... final int duration int getDuration() { (start..end).size() } // ... }
Just to be sure, I added the following test to my
TaskTests:
void testSetDuration() { Task t = new Task(name:'Join King Arthur') shouldFail(ReadOnlyPropertyException) { t.duration = 10 } q.addToTasks(t).save() }
That passed without a problem.
Interestingly, the dynamic scaffold still generates a modifiable input text field for duration, both in the create and edit views. I can put my own value in it and submit the form without a problem. The result does not get saved, which is correct, but I don’t see an exception thrown anywhere in the console. If I generate the static scaffolding, I know that in
Task.save there is a line like
t.properties = params
which is how the form parameters are transfered to the object. Presumably the internal logic knows enough to avoid trying to invoke a setter on a final field. Of course, as soon as I generate the static scaffolding, I usually just delete that row in the GSP form.
There’s one final (no pun intended) issue with the dynamic scaffolding. The generated list view puts its properties in
<g:sortableColumn> tags. This holds true for the duration, as well. Normally, when I click on the column header, the result is sorted, ascending or descending, by that property. If I click on the
duration column header, however, I get an “
org.hibernate.QueryException: could not resolve property: duration of: Task“.
It turns out that the User Guide has a “Show Source” link for every tag. When I clicked on that link for the
sortableColumn tag, I saw near the top:
if(!attrs.property) throwTagError("Tag [sortableColumn] is missing required attribute [property]")
The error I got in the console is “could not resolve property”, but it’s possible this is the source of that issue. I’m not sure. The only other source (again, no pun intended) of the problem I could see was the execution of the
list action at the bottom. That would imply that Grails is generating the Hibernate query and we’re failing at that point, which would be consistent with the error reported above.
At any rate, the duration property now works in the domain class. I can always modify the views to ensure I don’t try to set it.
Recent Comments | https://kousenit.org/2008/04/ | CC-MAIN-2017-17 | refinedweb | 1,127 | 74.08 |
Implementing a Session Timeout Page in ASP.NET
Date Published: 02 April 2008
Introduction
In many applications, an authenticated user's session expires after a set amount of time, after which the user must log back into the system to continue using the application. Often, the user may begin entering data into a large form, switch to some other more pressing task, then return to complete the form only to find that his session has expired and he has wasted his time. One way to alleviate this user interface annoyance is to automatically redirect the user to a "session expired" page once their session has expired. The user may still lose some work he was in the middle of on the page he was on, but that would have been lost anyway had he tried to submit it while no longer authenticated. At least with this solution, the user immediately knows his session has ended, and he can re-initiate it and continue his work without any loss of time.
Technique
The simplest way to implement a cross-browser session expiration page is to add a META tag to the HTML headers of any pages that require authentication and/or a valid session. The syntax for the META tag, when used for this purpose, is pretty simple. A typical tag would look like this:
<meta http-
The first attribute, http-equiv, must be set to refresh. The META tag supports a number of other options, such as providing information about page authors, keywords, or descriptions, which are beyond the scope of this article (learn more about them here). The second attribute, content, includes two parts which must be separated by a semicolon. The first piece indicates the number of seconds the page should delay before refreshing its content. A page can be made to simply automatically refresh itself by simply adding this:
<meta http-
However, to complete the requirement for the session expiration page, we need to send the user's browser to a new page, in this case /SessionExpired.aspx which is set with the url= string within the content attribute. It should be pretty obvious that this behavior is really stretching the intended purpose of the
<meta> tag, which is why there are so many fields being overloaded into the content attribute. It would have made more sense to have a
<refresh delay='60' refreshUrl='' /> tag, but it is no simple task to add a new tag to the HTML specification and then to get it supported in 1.2 million different versions of user agents. So, plan on the existing overloaded use of the
<meta> tag for the foreseeable future.
With just this bit of code, you can start hand-coding session expirations into your ASP.NET pages to your heart's content, but it is hardly a scalable solution. It also does not take advantage of ASP.NET's programmability model at all, and so I do not recommend it. The problem that remains is how to include this meta tag into the appropriate pages (the ones that require a session) without adding it to public pages, and how to set up the delay and destination URL so that they do not need to be hand-edited on every ASPX page. But before we show how to do that, let us design our session expired page.
Listing 1 - Session Expired Page
<@ Page <html xmlns="" > <head runat="server"> <title>Session Expired</title> </head> <body> <form id="form1" runat="server"> <div> <h1>Session Expired</h1> <p> Your session has expired. Please <a href="Default.aspx">return to the home page</a> and log in again to continue accessing your account.</p> </div> </form> </body> </html>
Listing 2 - Session Expired Page CodeBehind
using System; namespace SessionExpirePage { public partial class SessionExpired : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Session.Abandon(); } } }
Of course the code in Listing 1 is extremely simple and you will want to update it to use your site's design, ideally with a Master Page. Note in Listing 2 the call to
Session.Abandon(). This is important and ensures that if the client countdown and the server countdown are not quite in sync, the session is terminated when this page is loaded.
There are several ways we could go about including the session expiration META tag on a large number of secured pages. We could write it by hand - not a good idea. We could use an include file (yes, those still exist in ASP.NET) - even worse idea. We could write a custom control and include it by hand. Slightly better, but still requires touching a lot of ASPX files. We could create a base page class or extend one that is already in use. This is actually a promising technique that would work, but is not the one that I will demonstrate. You could easily implement it using a variation of my sample, though. Or you could use an ASP.NET master page. This is the simplest, most elegant solution, in my opinion, and is the one I will demonstrate.
In most applications I have worked with, it is typical to have a separate master page for the secure, admin portion of the site from the public facing area of the site. This technique works best in such situations. Essentially, the application's secure area will share a single master page file, which for this example will be called Secure.Master. Secure.Master will include some UI, but will also include a ContentPlaceHolder in the HTML
<head> section that will be used to render the Session Expiration META tag. Then, in the Master page's codebehind, the actual META tag will be constructed from the Session.Timeout set in the site's web.config and the URL that should be used when the session expires (in this case set as a property of the master page, but ideally this would come from a custom configuration section in web.config). The complete code for the master page is shown in Listings 3 and 4.
Listing 3 - Secure.Master
<@ Master <html xmlns=""> <head runat="server" id="PageHead"> <title>Secure Page</title> <asp:ContentPlaceHolder </asp:ContentPlaceHolder> </head> <body> <form id="form1" runat="server"> <div> <h1> Your Account [SECURE]</h1> <asp:ContentPlaceHolder </asp:ContentPlaceHolder> <p> Note: Your session will expire in <%=SessionLengthMinutes %> minute(s), <%=Session["name"] %> . </p> </div> </form> </body> </html>
Listing 4 - Secure.Master CodeBehind
using System; using System.Web.UI; namespace SessionExpirePage { public partial class Secure : System.Web.UI.MasterPage { public int SessionLengthMinutes { get { return Session.Timeout; } } public string SessionExpireDestinationUrl { get { return "/SessionExpired.aspx"; } } protected override void OnPreRender(EventArgs e) { base.OnPreRender(e); this.PageHead.Controls.Add(new LiteralControl( String.Format("<meta http-", SessionLengthMinutes*60, SessionExpireDestinationUrl))); } } }
The important work is all done within the OnPreRender event handler, which adds the
<meta> tag to the page using String.Format. One important thing to note about this approach is that it follows DRY (Don't Repeat Yourself) by keeping the actual session timeout period defined in only one place. If you were to hardcode your session timeouts in your META tags, and later the application's session timeout changed, you would need to update the META tags everywhere they were specified (and if you did not, you would not get a compiler error, just a site that did not work as expected). Setting the session timeout is easily done within web.config and completes this example. The relevant code is shown in Listing 5.
Listing 5 - Set Session Timeout in web.config
<system.web> <sessionState timeout="1" mode="InProc" /> </system.web>
Considerations
One thing to keep in mind with this approach is that it will start counting from the moment the page is first sent to the browser. If the user interacts with that page without loading a new page, such as adding data or even working with the server through AJAX callbacks or UpdatePanels, the session expiration page redirect will still occur when the session would have timed out after the initial page load. In practice, this is not an issue for most pages since if a user is going to work with the page they will do so soon after it first loads, and if they do not use it for some time, they will return to find the Session Expired page and will re-authenticate. However, if your site makes heavy use of AJAX (or Silverlight or any other client-centric technology), you may need to consider another (more complex) approach, such as using an AJAX Timer for this purpose.
Download
Download the source for the above examples here.
Summary
Providing.
Originally published on ASPAlliance.com
Tags - Browse all tags | https://ardalis.com/implementing-session-timeout-aspnet/ | CC-MAIN-2021-25 | refinedweb | 1,448 | 61.77 |
Code: Select all
wfcreate u 1 1
import "test.xlsx" range="'Test'!A1" colhead=1 namepos=none names=("test") types=(a) @append
The alpha series contains ID values composed of numbers and/or letters. Rows that do not have an ID contain the words "NO ID". For example:
100000000
A00000001
100000002
100000003
100000004
NO ID
NO ID
NO ID
When the data are imported into EViews, the words "NO ID" do not get populated in the alpha object - these rows are just blank. Other values get imported without any issue.
I've tested this with different words, and the problem is the same. (For example, if "A00000001" appears in multiple rows, that value stops being imported.) ID values that are entirely composed of numbers do not have this problem - they can repeat in many rows and still be imported.
Am I doing something wrong, or is this a bug? Thanks! | http://forums.eviews.com/viewtopic.php?f=9&t=19179 | CC-MAIN-2019-09 | refinedweb | 150 | 72.46 |
25142/how-can-i-use-python-to-execute-a-curl-command
I want to execute a curl command in python.
Usually, I just need to enter the command in terminal and press return key. However, I don't know how it works in python.
The command shows below:
curl -d @request.json --header "Content-Type: application/json"
There is a request.json file to be sent to get response.
I searched a lot and got confused. I tried to write a piece of code, although I could not fully understand. It didn't work.
import pycurl
import StringIO
response = StringIO.StringIO()
c = pycurl.Curl()
c.setopt(c.URL, '')
c.setopt(c.WRITEFUNCTION, response.write)
c.setopt(c.HTTPHEADER, ['Content-Type: application/json','Accept-Charset: UTF-8'])
c.setopt(c.POSTFIELDS, '@request.json')
c.perform()
c.close()
print response.getvalue()
response.close()
The error message is 'Parse Error'.Can anyone tell me how to fix it? or how to get response from the sever correctly?
For sake of simplicity, maybe you should consider using the standard library Requests.
An example with json response content would be something like:
import requests
r = requests.get('')
r.json()
If you look for further information, in the Quickstart section, they have lots of working examples.
EDIT:
For your specific curl translation:
import requests
url = ''
payload = open("request.json")
headers = {'content-type': 'application/json', 'Accept-Charset': 'UTF-8'}
r = requests.post(url, data=payload, headers=headers)
For Python 3, try doing this:
import urllib.request, ...READ MORE
def add(a,b):
return a + b
#when i call ...READ MORE
If there is list =[1,2,4,6,5] then use ...READ MORE
You can use sleep as below.
import time
print(" ...READ MORE
suppose you have a string with a ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
In Logic 1, try if i<int(length/2): instead of if i<int((length/2+1)):
In ...READ MORE
Hi there, instead of sklearn you could ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/25142/how-can-i-use-python-to-execute-a-curl-command | CC-MAIN-2020-34 | refinedweb | 361 | 62.75 |
Folder redirection, DFS, and "semaphore timeout" error
So we have been successfully using Folder Redirection here for a year or so without any issues. I am in the process of moving our users redirected folder to a DFS namespace as part of retiring an old file server.
The process involves moving the user object to a new OU with a new GPO linked that handles folder redirection to the new DFS location (the old location was *not* DFS). I then run a GPUPDATE, restart the machine, and on the next login, the GPO handles moving the user's redirected folder from the old location to the new.
I have successfully done this with a handlful of users, but one user in particular, is giving me problems.
When I move the user object to the new GP and restart, after the login, i see that their folders are still pointing to the old location, despite GPRESULT saying the new GPO applied.
I am finding the following error in the event log.
The following error occurred: "Failed to copy files from "C:\Users\<UserName>\Documents" to "<a network share>". Error details: "The semaphore timeout period has expired".
I found this KB article, but I am not able to run Procmon because the the copy/error is generated during login. I did try logging in as the user via RDP, and then sitting at the console logged in as admin and running procmon, but it didn't/couldn't see processes running in the other session.
Any ideas what I can do to resolve this?
EDIT: The issue ended up being the 249 charecter limit mentioned in the KB article. I ran a script to list the offending file paths, had the user rename them, and then the folder redirection GPO properly applied. Thanks to all who offered help!
*a note for anyone with a similiar problem that stumbles across this post in the future. the issue was the number if charecters in the entire file path. Also it wasn't the current path that was the issue. The DFS path that I was moving to had more charecters in it than the existing UNC path directly to the old server. When my script reported the length, I had it report the current length and the length of the complete DFS path. It was the later that was problematic.
Edited Mar 27, 2013 at 12:47 UTCEdited Mar 27, 2013 at 12:47 UTC
6 Replies
Mar 25, 2013 at 7:23 UTC
Worst error message ever!!!!
LOLOLOL!!
Mar 25, 2013 at 7:40 UTC
Give that domain user local admin access to the system, reset their AD password (or have them give it to you) so you can use their account and login as them and then see if you can tell what processes are running.
Mar 25, 2013 at 7:54 UTC
Cant you just goto their share on the server, find the directory or file name that is long, shouldnt be too hard, and delete/rename it?
You could do a search for *.* on the user's space, will return all files. Then sort on Folder location
Mar 25, 2013 at 8:37 UTC
They are a local admin and I am logging in as them after i move their user object to the new OU/GPO.
The issue is that they process runs (and throws the error) during login. I have verbose login enabled and it pauses on "Applying Folder Redirection Policy" and stays there a good 30 minutes or so. I believe this is when that actual copy attempt is made. After the login complete, there is no process to view with Procmon.
I have seen it hang at the "Applying Folder Redirection Policy" when I have migrated other users, but when the login is complete, all of their data is in the new location and the pointers for My Docs and Desktop are pointing to the new location.
Mar 26, 2013 at 2:44 UTC
Check the files in the the old and new location and make sure that the user didn't somehow lose permissions to one... If it can't access a file for some reason that might explain the hangup.
Mar 27, 2013 at 9:27 UTC
Unless you set it up that way (and I can't imagine why you would), Windows folder replication or redirection does not use a semaphore file. It only falls back to looking for one after a timeout in case it is on some kind of funny network with unusual protocols or mixed platforms where netbios and DNS can't work as efficiently as they should, assuming the administrator set up a semaphore handshake for just that situation. I'm assuming you didn't.
Something is pooched about that user's data and stopping access to it, causing the timeout. I'm inclined to +1 Dashrender's permissions theory, then I would test accessing the data using various accounts from another server or workstation via hostname/ip/etc and see if you get mixed or consistent results. That should indicate if there is a DNS problem lurking.
Something else you can try doing to get more info is to log in locally as the user, start procmon, then RDP in as the admin and minimize the RDP session. Then watch procmon and see what happens. You may have to patch the termsrv.dll to get the multiple concurrent logins (I didn't tell you that).
This discussion has been inactive for over a year.
You may get a better answer to your question by starting a new discussion.
It's FREE. Always will be. | http://community.spiceworks.com/topic/317308-folder-redirection-dfs-and-semaphore-timeout-error | CC-MAIN-2014-35 | refinedweb | 946 | 68.7 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.