Table of Contents - 1. Hex coordinates - 2. Layout - 3. Fractional Hex - 4. Map - 5. Rotation - 6. Offset coordinates - 7. Notes - 8. Source Code Note: this article is a companion guide to my guide to hex grids. The data structures and functions here implement the math and algorithms described on that page. The main page covers the theory for hex grid algorithms and math. Now let’s write a library to handle hex grids. The first thing to think about is what the core concepts will be. - Since most of the algorithms work with cube coordinates, I’ll need a data structure for cube coordinates, along with algorithms that work with them. I’ll call this the Hex class. - For some games I want to show coordinates to the player, and those will probably not be cube, but instead axial or offset, so I’ll need a data structure for the player-visible coordinate system, as well as functions for converting back and forth. Cube and axial are basically the same so I’m not going to bother implementing a separate axial system, and I’ll reuse Hex. For offset coordinates, I’ll make a separate data structure Offset. - A grid map will likely need additional storage for terrain, objects, units, etc. A 2d array can be used but it’s not always straightforward, so I’ll create a Map class for this. - To draw hexes on the screen, I need a way to convert hex coordinates into screen space. I’ll call this the Layout class. The main article doesn’t cover some of the additional features I want: - Support y-axis pointing down (common in 2d libraries) as well as y-axis pointing up (common in 3d libraries). The main article only covers y-axis pointing down. - Support stretched or squashed hexes, which are common with pixel graphics. The main article only supports equilateral hexes. - Support the 0,0 hex being located on the screen anywhere. The main article always places the 0,0 hex at x=0, y=0. - I also need a way to convert mouse clicks and other pixel coordinates back into hex coordinates. 
I will put this into the Layout class. The same things I need to deal with for hex to screen (y-axis direction, stretch/squash, origin) have to be dealt with for screen to hex, so it makes sense to put them together. - The main article doesn’t distinguish hexes that have integer coordinates from those with fractional coordinates. I’ll define a second class FractionalHex for the two algorithms where I want to have floating point coordinates: linear interpolation and rounding. - Once I have coordinates and the neighbors function implemented I can use all graph algorithms including movement range and pathfinding. I cover pathfinding for graphs on another page[1] and won’t duplicate that code here. I’m going to use C++ for the code samples, but I also have Java, C#, Python, Javascript, Haxe, and Lua versions of the code.

1 Hex coordinates#

On the main page, I treat Cube and Axial systems separately. Cube coordinates are a plane in x,y,z space, where x+y+z = 0. Axial coordinates have two axes q,r that are 60° or 120° apart. Here’s a class that represents cube coordinates, but uses names q, r, s instead of the x, z, y I use on the main page. Note the order is different: on the main page, axial r corresponds to cube z, so Hex(q, r, s) is the same as Cube(q, s, r). This is confusing, I know, but the alternative was to use Hex(q, s, r) which I think is also confusing. Sorry about that. I will have to fix this at some point.

struct Hex { // Cube storage, cube constructor
    const int q, r, s;
    Hex(int q_, int r_, int s_): q(q_), r(r_), s(s_) {
        assert (q + r + s == 0);
    }
};

Pretty simple. Here’s a class that stores axial coordinates internally, but uses cube coordinates for the interface:

struct Hex { // Axial storage, cube constructor
    const int q_, r_;
    Hex(int q, int r, int s): q_(q), r_(r) {
        assert (q + r + s == 0);
    }
    inline int q() { return q_; }
    inline int r() { return r_; }
    inline int s() { return - q_ - r_; }
};

These two classes are effectively equivalent.
The first one stores s explicitly and the second one uses accessors and calculates s when needed. Cube and Axial are essentially the same system, so I’m not going to write a separate class for each. However the labels on this page are different. On the main page, the axial/cube relationship is q→x, r→z, but here I am making q→q, r→r. And that means on the main page cube coordinates are (q, -q-r, r) but on this page cube coordinates are (q, r, -q-r). This makes my two pages inconsistent and I plan to update the main page to match this page. Yet another style is to calculate s in the constructor instead of passing it in: struct Hex { // Cube storage, axial constructor const int q, r, s; Hex(int q_, int r_): q(q_), r(r_), s(-q_ - r_) {} }; An advantage of the axial constructor style is that more than half the time, you’re doing this anyway at the call site. You’ll have q and r and not s, so you’ll pass in -q-r for the third parameter. You can also combine this with the second style (axial storage), and store only q and r, and calculate s in an accessor. Yet another style is to use an array instead of named fields: struct Hex { // Vector storage, cube constructor const int v[3]; Hex(int q, int r, int s): v{q, r, s} { assert (q + r + s == 0); } inline int q() { return v[0]; } inline int r() { return v[1]; } inline int s() { return v[2]; } }; An advantage of this style is that you start seeing patterns where q, r, s are all treated the same way. You can write loops to handle them uniformly instead of duplicating code. You might use SIMD instructions on CPU. You might use vec3 on the GPU. When you read the equality, hex_add, hex_subtract, hex_scale, hex_length, hex_round, and hex_lerp functions below, you’ll see how it might be useful to treat the coordinates uniformly. When you read hex_to_pixel and pixel_to_hex you’ll see that vector and matrix operations (CPU or GPU) can be used with hex coordinates when expressed this way. Programming is full of tradeoffs. 
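To make the uniform-treatment point concrete, here’s a small standalone sketch of the vector-storage style. The names HexV, hexv_add, and hexv_length are mine, for illustration only; they are not part of the library on this page. One loop handles q, r, s together, which is the same shape of code that maps onto SIMD or GPU vec3 operations.

```cpp
#include <array>
#include <cassert>

// Sketch: vector storage lets one loop handle all three coordinates,
// instead of writing the q, r, s cases out separately.
struct HexV {
    std::array<int, 3> v;  // v[0] = q, v[1] = r, v[2] = s
};

HexV hexv_add(HexV a, HexV b) {
    HexV out{};
    for (int i = 0; i < 3; i++) out.v[i] = a.v[i] + b.v[i];
    return out;
}

int hexv_length(HexV h) {
    int total = 0;
    for (int i = 0; i < 3; i++) total += (h.v[i] < 0 ? -h.v[i] : h.v[i]);
    return total / 2;  // same formula as hex_length below
}
```

The same loop pattern works for subtraction, scaling, equality, and lerp.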
For this page, I want to focus on simplicity and readability, not performance or abstraction, so I’m going to use the first style: cube storage, cube constructor. I find it easiest to understand the algorithms in this style. However, I like all of these styles, and wouldn’t hesitate to choose any of them, as long as things are consistent in the project. In a language with multiple constructors, I’d include both the axial and cube constructors for convenience. In C++, the int could instead be a template parameter. In C or C++11, the int v[] style and the int q, r, s style can be merged with a union[2]. A template parameter w can also be used to distinguish between positions and vectors. Putting all of these together:

template <typename Number, int w>
struct _Hex { // Both storage types, both constructors
    union {
        const Number v[3];
        struct { const Number q, r, s; };
    };
    _Hex(Number q_, Number r_): v{q_, r_, -q_ - r_} {}
    _Hex(Number q_, Number r_, Number s_): v{q_, r_, s_} {}
};
typedef _Hex<int, 1> Hex;
typedef _Hex<int, 0> HexDifference;
typedef _Hex<double, 1> FractionalHex;
typedef _Hex<double, 0> FractionalHexDifference;

I didn’t use this C++-specific style on this page because I want to make translation to other languages straightforward. Another design alternative is to use the x, y, z names so that you can make hex coordinates and cartesian coordinates reuse the same data structures. If you’re already using a vector library for geometry, you can reuse that for hex coordinates, and then use a matrix library for representing hex-to-pixel and pixel-to-hex operations.

1.1 Equality#

Equality and inequality are straightforward: two hexes are equal if their coordinates are equal. In C++, use operator ==; in Python, define a method __eq__; in Java, define a method equals(). Use the language’s standard style if possible.
bool operator == (Hex a, Hex b) {
    return a.q == b.q && a.r == b.r && a.s == b.s;
}

bool operator != (Hex a, Hex b) {
    return !(a == b);
}

1.2 Coordinate arithmetic#

Since cube coordinates come from 3d cartesian coordinates, I automatically get things like addition, subtraction, multiplication, and division. For example, you can have Hex(2, 0, -2) that represents two steps northeast, and add that to location Hex(3, -5, 2) the obvious way: Hex(2 + 3, 0 + -5, -2 + 2). With other coordinate systems like offset coordinates, you can’t do that and get what you want. These operations are what you’d implement with 3d cartesian vectors, but I am using q, r, s names in this class instead of x, y, z:

Hex hex_add(Hex a, Hex b) {
    return Hex(a.q + b.q, a.r + b.r, a.s + b.s);
}

Hex hex_subtract(Hex a, Hex b) {
    return Hex(a.q - b.q, a.r - b.r, a.s - b.s);
}

Hex hex_multiply(Hex a, int k) {
    return Hex(a.q * k, a.r * k, a.s * k);
}

An alternate design is to reuse an existing vec3 class from your geometry library to represent axial/cube coordinates, and in that case you won’t have to write these functions.

1.3 Distance#

The distance between two hexes is the length of the line between them. Both the distance and length operations can come in handy. It looks like the distance function from the main article:

int hex_length(Hex hex) {
    return int((abs(hex.q) + abs(hex.r) + abs(hex.s)) / 2);
}

int hex_distance(Hex a, Hex b) {
    return hex_length(hex_subtract(a, b));
}

1.3.1 Neighbors#

With distance, I defined two functions: length works on one argument and distance works with two. The same is true with neighbors. The direction function takes one argument and the neighbor function takes two.
It looks like the neighbors function from the main article: const vector<Hex> hex_directions = { Hex(1, 0, -1), Hex(1, -1, 0), Hex(0, -1, 1), Hex(-1, 0, 1), Hex(-1, 1, 0), Hex(0, 1, -1) }; Hex hex_direction(int direction /* 0 to 5 */) { assert (0 <= direction && direction < 6); return hex_directions[direction]; } Hex hex_neighbor(Hex hex, int direction) { return hex_add(hex, hex_direction(direction)); } To make directions outside the range 0..5 work, use hex_directions[(6 + (direction % 6)) % 6]. Yeah, it’s ugly, but it will work with negative directions too. (Side note: it would’ve been nice to have a modulo operator[3] in C++.) 2 Layout# The next major piece of functionality I need is a way to convert between hex coordinates and screen coordinates. There’s a pointy top layout and a flat top hex layout. The conversion uses a matrix as well as the inverse of the matrix, so I need a way to store those. Also, for drawing the corners, pointy top starts at 30° and flat top starts at 0°, so I need a place to store that too. 
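Before building the Layout machinery, here is the Hex machinery so far collapsed into one self-contained sketch, using the wraparound variant of hex_direction described above, so distance, neighbors, and negative directions can be checked together:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Standalone recap of the Hex pieces defined so far.
struct Hex {
    const int q, r, s;
    Hex(int q_, int r_, int s_): q(q_), r(r_), s(s_) { assert(q + r + s == 0); }
};

bool operator == (Hex a, Hex b) { return a.q == b.q && a.r == b.r && a.s == b.s; }

Hex hex_add(Hex a, Hex b) { return Hex(a.q + b.q, a.r + b.r, a.s + b.s); }
Hex hex_subtract(Hex a, Hex b) { return Hex(a.q - b.q, a.r - b.r, a.s - b.s); }
int hex_length(Hex hex) { return (std::abs(hex.q) + std::abs(hex.r) + std::abs(hex.s)) / 2; }
int hex_distance(Hex a, Hex b) { return hex_length(hex_subtract(a, b)); }

const std::vector<Hex> hex_directions = {
    Hex(1, 0, -1), Hex(1, -1, 0), Hex(0, -1, 1),
    Hex(-1, 0, 1), Hex(-1, 1, 0), Hex(0, 1, -1)
};

// Wraparound version: works for any int direction, including negatives.
Hex hex_direction(int direction) {
    return hex_directions[(6 + (direction % 6)) % 6];
}

Hex hex_neighbor(Hex hex, int direction) {
    return hex_add(hex, hex_direction(direction));
}
```

With this, direction -1 resolves to direction 5, as the formula above promises.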
I’m going to define an Orientation helper class to store these: the 2×2 forward matrix, the 2×2 inverse matrix, and the starting angle: struct Orientation { const double f0, f1, f2, f3; const double b0, b1, b2, b3; const double start_angle; // in multiples of 60° Orientation(double f0_, double f1_, double f2_, double f3_, double b0_, double b1_, double b2_, double b3_, double start_angle_) : f0(f0_), f1(f1_), f2(f2_), f3(f3_), b0(b0_), b1(b1_), b2(b2_), b3(b3_), start_angle(start_angle_) {} }; There are only two orientations, so I’m going to make constants for them: const Orientation layout_pointy = Orientation(sqrt(3.0), sqrt(3.0) / 2.0, 0.0, 3.0 / 2.0, sqrt(3.0) / 3.0, -1.0 / 3.0, 0.0, 2.0 / 3.0, 0.5); const Orientation layout_flat = Orientation(3.0 / 2.0, 0.0, sqrt(3.0) / 2.0, sqrt(3.0), 2.0 / 3.0, 0.0, -1.0 / 3.0, sqrt(3.0) / 3.0, 0.0); Now I will use them for the layout class: struct Layout { const Orientation orientation; const Point size; const Point origin; Layout(Orientation orientation_, Point size_, Point origin_) : orientation(orientation_), size(size_), origin(origin_) {} }; Oh, hm, I guess I need a minimal Point class. If your graphics/geometry library already provides one, use that. struct Point { const double x, y; Point(double x_, double y_): x(x_), y(y_) {} }; Side note: observe how many of these are arrays of numbers underneath. Hex is int[3]. Orientation is an angle, a double, and two matrices, each double[4] or double[2][2]. Point is double[2]. Layout is an Orientation and two Points. Later on the page, FractionalHex is double[3], and OffsetCoord is int[2]. I use structs instead of arrays of numbers because giving a name to things helps me understand them, and also helps with type checking. However, an alternate design choice is to reuse a standard vector library for all of these types, and then use standard matrix multiply for the layout. You can then use your library’s matrix inverse to calculate the inverse matrix. 
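If you do reuse a matrix library, its inverse routine should reproduce the b values above. Even without one, you can check by hand that f·b is the 2×2 identity for both orientations. A small standalone sketch (the is_inverse helper is mine, for illustration):

```cpp
#include <cassert>
#include <cmath>

// Check that a hand-written backward matrix (b0..b3) really is the
// inverse of the forward matrix (f0..f3): their 2x2 product should be
// the identity, up to floating point error.
bool is_inverse(double f0, double f1, double f2, double f3,
                double b0, double b1, double b2, double b3) {
    // Row-times-column products of the 2x2 matrices f and b.
    double m00 = f0 * b0 + f1 * b2;
    double m01 = f0 * b1 + f1 * b3;
    double m10 = f2 * b0 + f3 * b2;
    double m11 = f2 * b1 + f3 * b3;
    const double eps = 1e-9;
    return fabs(m00 - 1) < eps && fabs(m01) < eps
        && fabs(m10) < eps && fabs(m11 - 1) < eps;
}
```

Plugging in the layout_pointy and layout_flat numbers from above satisfies this check.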
Using arrays of numbers (or a numeric array class) instead of separate structs with names will allow you to reuse more code. I chose not to do that but I think it’s a reasonable choice. Ok, now I’m ready to write the layout algorithms.

2.1 Hex to screen#

The main article has two versions of hex-to-pixel, one for each orientation. The code is essentially the same except the numbers are different, so for this implementation I’ve put the numbers into the Orientation class, as f0 through f3:

Point hex_to_pixel(Layout layout, Hex h) {
    const Orientation& M = layout.orientation;
    double x = (M.f0 * h.q + M.f1 * h.r) * layout.size.x;
    double y = (M.f2 * h.q + M.f3 * h.r) * layout.size.y;
    return Point(x + layout.origin.x, y + layout.origin.y);
}

Unlike the main article, I have a separate x size and y size. That allows two things: - You can stretch and squash the hexagon to match whatever size pixel art you have. Note that size.x and size.y are not the width and height of the hexagons. - You can use a negative value for the y size to flip the y axis. Also, the main article assumes the q=0,r=0 hexagon is centered at x=0,y=0, but in general, you might want to center it anywhere. You can do that by adding the center (layout.origin) to the result.

2.2 Screen to hex#

The main article has two versions of pixel-to-hex, one for each orientation. Again, the code is the same except for the numbers, which are the inverse of the matrix. I put the matrix inverse into the Orientation class, as b0 through b3, and used it here. In the forward direction, to go from hex coordinates to screen coordinates I first multiply by the matrix, then multiply by the size, then add the origin. To go in the reverse direction, I have to undo these.
First undo the origin by subtracting it, then undo the size by dividing by it, then undo the matrix multiply by multiplying by the inverse:

FractionalHex pixel_to_hex(Layout layout, Point p) {
    const Orientation& M = layout.orientation;
    Point pt = Point((p.x - layout.origin.x) / layout.size.x,
                     (p.y - layout.origin.y) / layout.size.y);
    double q = M.b0 * pt.x + M.b1 * pt.y;
    double r = M.b2 * pt.x + M.b3 * pt.y;
    return FractionalHex(q, r, -q - r);
}

There’s a complication here: I start with integer hex coordinates to go to screen coordinates, but when going in reverse, I have no guarantee that the screen location will be exactly at a hexagon center. Instead of getting back an integer hex coordinate, I get back a floating point value (type double), which means I return a FractionalHex instead of a Hex. To get back to the integer, I need to round it to the nearest hex. I’ll implement that in a bit.

2.3 Drawing a hex#

To draw a hex, I need to know where each corner is relative to the center of the hex. With the flat top orientation, the corners are at 0°, 60°, 120°, 180°, 240°, 300°. With pointy top, they’re at 30°, 90°, 150°, 210°, 270°, 330°. I encode that in the Orientation class’s start_angle value, either 0.0 for 0° or 0.5 for 30°. Once I know where the corners are relative to the center, I can calculate the corners in screen locations by adding the center to each corner, and putting the coordinates into an array.
Point hex_corner_offset(Layout layout, int corner) {
    Point size = layout.size;
    double angle = 2.0 * M_PI * (layout.orientation.start_angle + corner) / 6;
    return Point(size.x * cos(angle), size.y * sin(angle));
}

vector<Point> polygon_corners(Layout layout, Hex h) {
    vector<Point> corners = {};
    Point center = hex_to_pixel(layout, h);
    for (int i = 0; i < 6; i++) {
        Point offset = hex_corner_offset(layout, i);
        corners.push_back(Point(center.x + offset.x, center.y + offset.y));
    }
    return corners;
}

2.3.1 TODO: Make hex_corner_offset line up with hex_neighbor

The way hex_corner_offset works is different enough from hex_neighbor that I can’t use the corner offset for anything other than drawing the entire polygon. This is not ideal. I sometimes want to draw corners or edges. I need to study this a bit more before I can recommend a better hex_corner_offset function. This might tie into the corner and edge labeling[4] I’ve tried in the past.

2.4 Layout examples#

Ok, let’s try it out! I have written Hex, Orientation, Layout, and Point and the functions that go with each. That’s enough for me to draw hexes. I’m going to use the Javascript version of these functions to draw some hexes in the browser. Let’s try the two orientations, layout_pointy and layout_flat. Let’s try three different sizes, Point(10, 10), Point(20, 20), and Point(40, 40). Let’s try stretching the hexes, by setting size to Point(15, 25) and Point(25, 15). Let’s try a downward y-axis with size set to Point(25, 25) and a flipped y-axis (upward) with size set to Point(25, -25); look closely at how r increases downwards vs upwards. I think that’s a reasonable set of tests for the orientation and size, and it shows that the Layout class can handle a wide variety of needs, without having to make different variants of the Hex class.

3 Fractional Hex#

For pixel-to-hex I need fractional hex coordinates.
It looks like the Hex class, but uses double instead of int:

struct FractionalHex {
    const double q, r, s;
    FractionalHex(double q_, double r_, double s_)
        : q(q_), r(r_), s(s_) {}
};

3.1 Hex rounding#

Rounding turns a fractional hex coordinate into the nearest integer hex coordinate. The algorithm is straight out of the main article:

Hex hex_round(FractionalHex h) {
    int q = int(round(h.q));
    int r = int(round(h.r));
    int s = int(round(h.s));
    double q_diff = abs(q - h.q);
    double r_diff = abs(r - h.r);
    double s_diff = abs(s - h.s);
    if (q_diff > r_diff and q_diff > s_diff) {
        q = -r - s;
    } else if (r_diff > s_diff) {
        r = -q - s;
    } else {
        s = -q - r;
    }
    return Hex(q, r, s);
}

3.2 Line drawing#

To draw a line, I linearly interpolate between two hexes, and then round it to the nearest hex. To linearly interpolate between hex coordinates I linearly interpolate each of the components (q, r, s) independently:

double lerp(double a, double b, double t) {
    return a * (1-t) + b * t;
    /* better for floating point precision than
       a + (b - a) * t, which is what I usually write */
}

FractionalHex hex_lerp(FractionalHex a, FractionalHex b, double t) {
    return FractionalHex(lerp(a.q, b.q, t),
                         lerp(a.r, b.r, t),
                         lerp(a.s, b.s, t));
}

Line drawing is not too bad once I have linear interpolation:

vector<Hex> hex_linedraw(Hex a, Hex b) {
    int N = hex_distance(a, b);
    FractionalHex fa(a.q, a.r, a.s), fb(b.q, b.r, b.s);
    vector<Hex> results = {};
    double step = 1.0 / max(N, 1);
    for (int i = 0; i <= N; i++) {
        results.push_back(hex_round(hex_lerp(fa, fb, step * i)));
    }
    return results;
}

I needed to stick that max(N, 1) bit in there to handle lines with length 0 (when A == B). Sometimes the hex_lerp will output a point that’s on an edge. On some systems, the rounding code will push that to one side or the other, somewhat unpredictably and inconsistently. To make it always push these points in the same direction, add an “epsilon” value to a. This will “nudge” things in the same direction when it’s on an edge, and leave other points unaffected.
vector<Hex> hex_linedraw(Hex a, Hex b) {
    int N = hex_distance(a, b);
    FractionalHex a_nudge(a.q + 1e-6, a.r + 1e-6, a.s - 2e-6);
    FractionalHex b_nudge(b.q + 1e-6, b.r + 1e-6, b.s - 2e-6);
    vector<Hex> results = {};
    double step = 1.0 / max(N, 1);
    for (int i = 0; i <= N; i++) {
        results.push_back(hex_round(hex_lerp(a_nudge, b_nudge, step * i)));
    }
    return results;
}

The nudge is not always needed. You might try without it first.

4 Map#

There are two related problems to solve: how to generate a shape and how to store map data. Let’s start with storing map data.

4.1 Map storage#

The simplest way to store a map is to use a hash table. In C++, in order to use unordered_map<Hex, _> or unordered_set<Hex> I need to define a hash function for Hex. It would’ve been nice if C++ made it easier to define this, but it’s not too bad. I hash the q and r fields (I can skip s because it’s redundant), and combine them using the algorithm from Boost’s hash_combine:

namespace std {
    template <> struct hash<Hex> {
        size_t operator()(const Hex& h) const {
            hash<int> int_hash;
            size_t hq = int_hash(h.q);
            size_t hr = int_hash(h.r);
            return hq ^ (hr + 0x9e3779b9 + (hq << 6) + (hq >> 2));
        }
    };
}

Here’s an example of making a map with a float height at each hex:

unordered_map<Hex, float> heights;
heights[Hex(1, -2, 1)] = 4.3;
cout << heights[Hex(1, -2, 1)];

The hash table by itself isn’t that useful. I need to combine it with something that creates a map shape. In graph terms, I need something that creates the nodes.

4.2 Map shapes#

In this section I write some loops that will produce various shapes of maps. You can use these loops to make a set of hex coordinates for your map, or fill in a map data structure, or iterate over the locations in the map. I’ll write sample code that fills in a set of hex coordinates.

4.2.1 Parallelograms#

With axial/cube coordinates, a straightforward loop over coordinates will produce a parallelogram map instead of a rectangular one.
unordered_set<Hex> map;
for (int q = q1; q <= q2; q++) {
    for (int r = r1; r <= r2; r++) {
        map.insert(Hex(q, r, -q-r));
    }
}

There are three coordinates, and the loop requires you to choose two of them: (q,r), (s,q), or (r,s) each lead to a differently oriented parallelogram, for both pointy top and flat top maps.

4.2.2 Triangles#

There are two directions for triangles to face, and the loop depends on which direction you use. Assuming the y axis points down, with pointy top these triangles face south/northwest/northeast, and with flat top these triangles face east/northwest/southwest.

unordered_set<Hex> map;
for (int q = 0; q <= map_size; q++) {
    for (int r = 0; r <= map_size - q; r++) {
        map.insert(Hex(q, r, -q-r));
    }
}

With pointy top these triangles face north/southwest/southeast and with flat top these triangles face west/northeast/southeast:

unordered_set<Hex> map;
for (int q = 0; q <= map_size; q++) {
    for (int r = map_size - q; r <= map_size; r++) {
        map.insert(Hex(q, r, -q-r));
    }
}

If you flip your y-axis, then it’ll switch north and south here, as you might expect.

4.2.3 Hexagons#

Generating a hexagonal shape map is described on the main page. The same loop works for both pointy top and flat top orientations:

unordered_set<Hex> map;
for (int q = -map_radius; q <= map_radius; q++) {
    int r1 = max(-map_radius, -q - map_radius);
    int r2 = min(map_radius, -q + map_radius);
    for (int r = r1; r <= r2; r++) {
        map.insert(Hex(q, r, -q-r));
    }
}

4.2.4 Rectangles#

With axial/cube coordinates, getting rectangular maps is a little trickier! The main article gives a clue but I don’t actually show the code. Here’s the code:

unordered_set<Hex> map;
for (int r = 0; r < map_height; r++) {
    int r_offset = floor(r/2); // or r>>1
    for (int q = -r_offset; q < map_width - r_offset; q++) {
        map.insert(Hex(q, r, -q-r));
    }
}

As before, I have to pick two of q, r, s for the loop, but this time the order matters because the outer and inner loops are different.
Here’s what I get for pointy top hexes if I set the (outer,inner) loops to (r,q), (q,s), (s,r), (q,r), (s,q), (r,s): they’re rectangles, but they don’t have to be oriented with the x-y axes! Most likely you want the first one, with r for the outer loop and q for the inner loop. The same six (outer,inner) loop choices work for flat topped hexes. To get the fourth one, you can make q the outer loop and r the inner loop, and switch width and height:

unordered_set<Hex> map;
for (int q = 0; q < map_width; q++) {
    int q_offset = floor(q/2); // or q>>1
    for (int r = -q_offset; r < map_height - q_offset; r++) {
        map.insert(Hex(q, r, -q-r));
    }
}

There are two versions of the loop that will produce essentially the same shape, but with minor differences. You might also need to experiment to get exactly the map you want. Try setting the offset to floor((q+1)/2) or floor((q-1)/2) instead of floor(q/2) for example, and the boundary will change slightly.

4.3 Optimized storage#

The hash table approach is pretty generic and works with any shape of map, including weird shapes and shapes with holes. You can view it as a type of node-and-edge graph structure, storing the nodes explicitly but calculating the edges on the fly with the hex_neighbor function. A different way to store the node-and-edge graph structure is to calculate all the edges ahead of time and store them explicitly. Give each node an integer id and then use an array of arrays to store neighbors. Or make each node an object and use a field to store a list of neighbors. These graph structures are also generic and work with any shape of map. You can also use any graph algorithm on them, such as movement range, distance map, or pathfinding. Storing the edges implicitly works well when the map is regular or is being edited; storing them explicitly can work well when the map is irregularly shaped (boundary, walls, holes) and isn’t changing frequently.
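Here’s a minimal sketch of the precomputed-edges alternative. The build_edges helper name is mine, for illustration; it stores, for each hex in the map, only the neighbors that also exist in the map, so boundaries and holes are baked into the adjacency lists once up front:

```cpp
#include <cassert>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Sketch: explicit adjacency lists for a map stored as a set of hexes.
struct Hex {
    int q, r, s;
    bool operator == (const Hex& o) const { return q == o.q && r == o.r && s == o.s; }
};

namespace std {
    template <> struct hash<Hex> {
        size_t operator()(const Hex& h) const {
            hash<int> int_hash;
            size_t hq = int_hash(h.q), hr = int_hash(h.r);
            return hq ^ (hr + 0x9e3779b9 + (hq << 6) + (hq >> 2));
        }
    };
}

const Hex hex_directions[6] = {
    {1, 0, -1}, {1, -1, 0}, {0, -1, 1}, {-1, 0, 1}, {-1, 1, 0}, {0, 1, -1}
};

// For each hex, keep only the neighbors that are also in the map.
std::unordered_map<Hex, std::vector<Hex>> build_edges(const std::unordered_set<Hex>& map) {
    std::unordered_map<Hex, std::vector<Hex>> edges;
    for (const Hex& h : map) {
        for (const Hex& d : hex_directions) {
            Hex n{h.q + d.q, h.r + d.r, h.s + d.s};
            if (map.count(n)) edges[h].push_back(n);
        }
    }
    return edges;
}
```

On a radius-1 hexagonal map, the center ends up with 6 neighbors and each corner with 3, which is the boundary behavior you want from pathfinding without any per-step bounds checks.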
Some map shapes also allow a compact 2d or 1d array. The main article gives a visual explanation. Here, I’ll give an explanation based on code. The main idea is that for all the map shapes, there is a nested loop of the form for (int a = a1; a < a2; a++) { for (int b = b1; b < b2; b++) { ... } } For compact map storage, I’ll make an array of arrays, and index it with array[a-a1][b-b1]. I subtract where the loop starts so that the first index will be 0. For example, here’s the code for a rectangular shape with pointy top hexes: (for flat top hexes, the loop is different) for (int r = 0; r < height; r++) { int r_offset = floor(r/2); for (int q = -r_offset; q < width - r_offset; q++) { map.insert(Hex(q, r, -q-r)); } } For pointy top hexes, variable a is r, and b is q. (For flat top hexes, a will be q and b will be r, but I haven’t worked through that example.) Value a1 (where the r loop starts) is 0 and b1 (where the q loop starts) is -floor(r/2). That means the array will be indexed array[r-0][q-(-floor(r/2))] which simplifies to array[r][q+floor(r/2)]. Note that floor(r/2) can be written r>>1. The second thing I need to know is the size of the arrays. I need a2-a1 arrays, and the size of each should be b2-b1. (Be sure to check for off-by-1 errors: if the loop is written a <= a2 then you’ll want a2-a1+1 arrays, and similarly for b <= b2.) 
I can build these arrays using C++ vectors using this pattern:

vector<vector<T>> map;
map.reserve(a2-a1);
for (int a = a1; a < a2; a++) {
    map.emplace_back(b2-b1);
}

For the rectangle example, a2-a1 becomes height and b2-b1 becomes width:

vector<vector<T>> map;
map.reserve(height);
for (int r = 0; r < height; r++) {
    map.emplace_back(width);
}

I can encapsulate all of this into a Map class:

template<class T>
class RectangularPointyTopMap {
    vector<vector<T>> map;
  public:
    RectangularPointyTopMap(int width, int height) {
        map.reserve(height);
        for (int r = 0; r < height; r++) {
            map.emplace_back(width);
        }
    }
    inline T& at(int q, int r) {
        return map[r][q + (r >> 1)];
    }
};

For the other map shapes, it’s only slightly more complicated, but the same pattern applies: I have to study the loop that created the map in order to figure out the size and array access for the map. 1d arrays are trickier and I won’t try to tackle them here. In practice, I rarely use array storage for hex maps, except when the maps are large, and my code is written in C++. Although it’s more compact, it almost never makes a difference in practice in my projects. For most of my projects, I use a graph representation. It gives me the most flexibility and reusability. I only need the more compact storage when storage size matters.

5 Rotation#

There are two one-step rotation functions, but which is “left” and which is “right” depends on your map orientation. You may have to swap these.

Hex hex_rotate_left(Hex a) {
    return Hex(-a.s, -a.q, -a.r);
}

Hex hex_rotate_right(Hex a) {
    return Hex(-a.r, -a.s, -a.q);
}

Note that these are slightly different from the main page because q,r,s don’t quite line up with x,y,z. If you think of the coordinates v in vector format, these operations are 3x3 matrix multiplies, M times v, where M = [0 0 -1; -1 0 0; 0 -1 0]. The matrix inverse M^-1 = [0 -1 0; 0 0 -1; -1 0 0] rotates in the opposite direction. Raising the matrix to a power M^k rotates k times.
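A quick property check on these rotations (the rotate_left_times helper is mine, for illustration): left followed by right is the identity, three rotations in one direction negate all three coordinates (a 180° rotation), and six return to the start.

```cpp
#include <cassert>

// Sketch: property checks for the one-step rotation functions.
struct Hex {
    int q, r, s;
    bool operator == (const Hex& o) const { return q == o.q && r == o.r && s == o.s; }
};

Hex hex_rotate_left(Hex a)  { return Hex{-a.s, -a.q, -a.r}; }
Hex hex_rotate_right(Hex a) { return Hex{-a.r, -a.s, -a.q}; }

// Apply k left rotations -- the M^k idea from above, done iteratively.
Hex rotate_left_times(Hex a, int k) {
    for (int i = 0; i < k; i++) a = hex_rotate_left(a);
    return a;
}
```

Checks like these are cheap to add as unit tests and catch sign or ordering mistakes in the rotation formulas immediately.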
You can precompute all the rotation matrices, or combine the matrix with other operations such as translate, scale, etc.

6 Offset coordinates#

In the main article I use the names q and r for offset coordinates, but since I’m using those for cube/axial, I’m going to use col and row here.

struct OffsetCoord {
    const int col, row;
    OffsetCoord(int col_, int row_): col(col_), row(row_) {}
};

I’m expecting that I’ll use the cube/axial Hex class everywhere, except for displaying to the player. That’s where offset coordinates will be useful. That means the only operations I need are converting Hex to OffsetCoord and back. There are four offset types: odd-r, even-r, odd-q, even-q. The “r” types are used with pointy top hexagons and the “q” types are used with flat top. Whether it’s even or odd can be encoded as an offset direction +1 or -1. For pointy top, the offset direction tells us whether to slide alternate rows right or left. For flat top, the offset direction tells us whether to slide alternate columns up or down.

const int EVEN = +1;
const int ODD = -1;

OffsetCoord qoffset_from_cube(int offset, Hex h) {
    assert(offset == EVEN || offset == ODD);
    int col = h.q;
    int row = h.r + int((h.q + offset * (h.q & 1)) / 2);
    return OffsetCoord(col, row);
}

Hex qoffset_to_cube(int offset, OffsetCoord h) {
    assert(offset == EVEN || offset == ODD);
    int q = h.col;
    int r = h.row - int((h.col + offset * (h.col & 1)) / 2);
    int s = -q - r;
    return Hex(q, r, s);
}

OffsetCoord roffset_from_cube(int offset, Hex h) {
    assert(offset == EVEN || offset == ODD);
    int col = h.q + int((h.r + offset * (h.r & 1)) / 2);
    int row = h.r;
    return OffsetCoord(col, row);
}

Hex roffset_to_cube(int offset, OffsetCoord h) {
    assert(offset == EVEN || offset == ODD);
    int q = h.col - int((h.row + offset * (h.row & 1)) / 2);
    int r = h.row;
    int s = -q - r;
    return Hex(q, r, s);
}

If you’re only using even or odd, you can hard-code the value of offset into the code, making it simpler and faster.
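These conversions are easy to property-test: converting a hex to offset and back should return the original hex, for every offset type. A self-contained sketch for the “r” cases (the round_trips helper is mine, for illustration; the “q” cases follow the same pattern):

```cpp
#include <cassert>

// Sketch: round-trip check for the odd-r / even-r offset conversions.
struct Hex { int q, r, s; };
struct OffsetCoord { int col, row; };

const int EVEN = +1;
const int ODD = -1;

OffsetCoord roffset_from_cube(int offset, Hex h) {
    int col = h.q + int((h.r + offset * (h.r & 1)) / 2);
    return OffsetCoord{col, h.r};
}

Hex roffset_to_cube(int offset, OffsetCoord h) {
    int q = h.col - int((h.row + offset * (h.row & 1)) / 2);
    return Hex{q, h.row, -q - h.row};
}

// Every hex should survive a trip through offset coordinates and back.
bool round_trips(int offset, Hex h) {
    Hex back = roffset_to_cube(offset, roffset_from_cube(offset, h));
    return back.q == h.q && back.r == h.r && back.s == h.s;
}
```

Since from_cube adds the same shift expression that to_cube subtracts, the round trip is exact for any integer coordinates, including negative rows.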
Alternatively, offset can be a template parameter so that the compiler can inline and optimize it. For offset coordinates I need to know if a row/col is even or odd, so I use a&1 (bitwise and[5]) instead of a%2 (modulo) to return 0 or +1. Why?

- On systems using two’s complement[6] representation, which is just about every system out there, a&1 returns 0 for even a and 1 for odd a. This is what I want. However, it’s not strictly portable.
- In some languages, including C++, a%2 computes remainder[7], not modulo. When a is -1, I want to say that’s odd, so I want a%2 to be 1, but some systems will return -1. If your language computes modulo, you can safely use a%2.
- If you know that your coordinate a will never be negative, you can safely use a%2.
- If you don’t have a&1 available, you can use abs(a) % 2 instead.

Also, in many (all?) languages, & has lower precedence than + so be sure to parenthesize a&1.

7 Notes#

- In languages that don’t support a>>1, you can use floor(a/2) instead.
- Most of the functions are small and should be inlined in languages that support it.
- Operator overloading is sometimes abused, but might be nice for the arithmetic Hex operations hex_add, hex_subtract, hex_scale. I didn’t use it here.
- I wrote this code in module style, but you might prefer to write it as class style, where the functions are static or class methods. In some languages, class style is the only choice. Some of the methods might be better as instance methods.
- In languages that support more than one constructor, or optional arguments, it might be handy to have both the two-argument axial constructor and the three-argument cube constructor.

7.1 Cube vs Axial#

Cube coordinates are three numbers, but one can be computed from the others. Whether you want to store the third one as a field or compute it in an accessor is primarily a code style decision. If performance is the main concern, the cost of the accessor vs the cost of the computation will matter most.
In languages like C++ where accessors are inlined away, save the memory (accessing RAM is expensive) and use an accessor. In languages like Python where accessors are expensive, save the function call (function calls are expensive) and store the third coordinate in a field. Also take a look at this paper[8], which found axial and cube to be faster than offset for line of sight, distance, and other algorithms, but slower than offset for displaying offset coordinates (as expected). I can't find their code though. If performance matters, the best thing to do is to actually measure it.

7.2 C++#

- These are all value types, cheap to copy and pass around. For a bit more compactness, if your maps are small you can use an int16 or int8 for the Hex and Offset classes. If you're computing s in an accessor, storing q and r (or col and row) as int16 will let you fit the entire coordinate into 32 bits.
- As written, these classes have a non-default constructor, so they won't count as a POD trivial type, although I think they count as a POD standard-layout type. Switch to a default constructor and use struct initialization if you'd like them to be a POD trivial type.
- I could have written a template class Hex<> and instantiated it as Hex<int> and Hex<double>. I decided not to because I expect that many of the readers will be translating the code to another language.

7.3 Python, Javascript#

- Python and other dynamically typed languages don't need Hex and FractionalHex to be separate. You can write the FractionalHex functions to work with Hex instead, and skip the FractionalHex class.

8 Source Code#

8.1 Code from this page#

I have some unoptimized incomplete code in several languages, with some unit tests too, but no documentation or examples.
Feel free to use these as a starting point for writing your hex grid library:

- C++
- Python
- C#
- Haxe
- Java
- Javascript top-level functions or with classes or with es6 modules
- Typescript
- Lua 5.2; see source for notes about 5.1 and 5.3

Caveat: this is procedurally generated code (yes, really![9]) and doesn't follow the best style recommendations for each language. It'd be cool to add Racket, Rust, Ruby, Haskell, Swift, and others, but I don't know when I might have time to do that. My procedural code generator is kinda awful but if you want to take a look at it, it's codegen.zip.

[Changed 2016-07-20] I changed the winding direction for hex_corner_offset to match that of hex_neighbor; this should not matter in theory but it's nice for them to match.

[Changed 2018-03-10] I changed the Java, C#, and Typescript output to use instance methods instead of static methods. I added a precondition invariant check to make sure q+r+s == 0 when you call the Hex constructor. This should help catch bugs sooner.

8.2 Other libraries#

It's worth looking at these libraries, some of which include source code:

- Unity and C#
  - GameLogic Grids[10] - Unity - includes more grid types than I knew even existed!
Their blog has tons of useful information about grids (hex and others)
  - Hex-Grid Utilities[11] - C# - includes field of view, pathfinding, WinForms
  - tejon/HexCoord[12] - C# / Unity
  - DigitalMachinist/HexGrid[13] - C#
  - Amaranthos/UnityHexGrid[14] - C# / Unity
  - svejdo1/HexGrid[15] - C#
  - Banbury/UnityHexGrid[16] - C# / Unity
  - aurelwu.github.io[17] - Unity
  - grantmoore3d/unity-hexagonal-grids[18] - Unity
  - imurashka/HexagonalLib[19] - C# / .NET
  - Clpsplug/hexagonal_map[20] - C# - cube/axial coordinates
- JVM
  - Hexworks/mixite[21] - Java
  - Sscchhmueptfter/HexUtils[22] - Java
  - timgilbert/scala-hexmap[23] - Scala
  - mraad/grid-hex[24] - Scala
  - dmccabe/khexgrid[25] - Kotlin
- Objective C
  - denizztret/ObjectiveHexagon[26] - Objective C
  - pkclsoft/HexLib[27] - Objective C
- Swift
  - MadGeorge/AmitsHexGridLibrarySwift[28] - Swift
  - fananek/HexGrid[29] - Swift
- JavaScript
  - mpalmerlee/HexagonTools[30] - JavaScript
  - RobertBrewitz/axial-hexagonal-grid[31] - JavaScript
  - flauwekeul/honeycomb[32] - JavaScript
  - bodinaren/BHex.js[33] - JavaScript
  - Hellenic/react-hexgrid[34] - JavaScript / React
  - vonWolfehaus/von-grid[35] - JavaScript / Three.js
  - othree/hexagons[36] - JavaScript - odd-r coordinates
  - cefleet/hexAPI[37] - JavaScript
  - njlr/solid-hex[38] - JavaScript + pipeline operator
  - aahdee/p5grid[39] - JavaScript / P5.js
  - joshuabowers/hexagonally[40] - TypeScript
  - scrapcupcake/hexs6[41] - JavaScript
  - euoia/hex-grid.js[42] - JavaScript
- Python
  - RedFT/Hexy[43] - Python
  - stephanh42/hexutil[44] - Python - doubled coordinates
- Ruby
  - czuger/rhex[45] - Ruby
  - SpeciesFileGroup/waxy[46] - Ruby
- Elm
  - Voronchuck/hexagons[47] - Elm
  - danneu/elm-hex-grid[48] - Elm
  - etague/elm-hexagons[49] - Elm
- Rust
  - dpc/hex2d-rs[50] - Rust
  - leftiness/hex_math[51] - Rust
  - ozkriff/zemeroth[52] - game written in Rust; hex code not separated out into its own library
  - toasteater/beehive[53] - Rust
  - iancormac84/hexae[54] - Rust
- Lua
  - icrawler/Hexamoon[55] - Lua
  - ontoclasm/Hex[56] - Lua
- GDScript / Godot
  - droxmusic/HexMap[57] - GDScript / Godot
  - DDoop/HexTesting[58] - GDScript / Godot
  - romlock/godot-gdhexgrid[59] - GDScript / Godot
- Other languages
  - mhwombat/grid[60] - Haskell - includes square, triangle, hexagonal, octagonal grids
  - RyanMcNamara86/Hex[61] - Haskell
  - andeemarks/clj-hex-grid[62] - Clojure
  - rayalex/hexgrid[63] - Elixir
  - zacharycarter's gist[64] - Nim
  - pmcxs/hexgrid[65] - Go
  - hautenessa/hexagolang[66] - Go
  - GiovineItalia/Hexagons.jl[67] - Julia
  - hexagonal_grid[68] - Dart / Flutter

Also for Unity take a look at CatlikeCoding's tutorial[69].
https://www.redblobgames.com/grids/hexagons/implementation.html
Gridworld Tile Game

<< Two Player Starter Code | FinalProjectsTrailIndex | Worm Game >>

This tile game is adapted from an earlier example written by Cay Horstmann when he first started designing GridWorld. It works a lot like a TV game show called "Concentration." The game board contains black, or "covered", tiles (plain generic Actors as it stands - you can make your own tile gif if you wish). The player's job is to uncover matching color pairs. Click on a tile to flip it. Then click on another. If both tiles have the same color, they both stay exposed. If not, they are both covered again. Try to match all pairs of all colors. The idea is to uncover all the matching colors in as few attempts as possible.

Tile.java

import info.gridworld.actor.Actor;
import java.awt.Color;

public class Tile extends Actor {
    private Color color;
    private boolean up;

    public Tile(Color color) {
        up = false;
        this.color = color;
    }

    public Color getColor() {
        if (up) return color;
        return Color.BLACK;
    }

    public void setColor(Color color) {
        this.color = color;
    }

    public void flip() {
        up = !up;
    }
}

TileGame.java

import java.awt.Color;
import info.gridworld.actor.Actor;
import info.gridworld.actor.ActorWorld;
import info.gridworld.grid.Grid;
import info.gridworld.grid.Location;

public class TileGame extends ActorWorld {
    private Location up;
    private int guessNumber;

    public TileGame() {
        Color[] colors = {
            Color.RED, Color.BLUE, Color.GREEN, Color.CYAN, Color.PINK,
            Color.ORANGE, Color.GRAY, Color.MAGENTA, Color.WHITE, Color.YELLOW };
        for (Color color : colors) {
            add(new Tile(color));
            add(new Tile(color));
        }
        guessNumber = 0;
        System.setProperty("info.gridworld.gui.selection", "hide");
        System.setProperty("info.gridworld.gui.frametitle", "Tile Game");
        System.setProperty("info.gridworld.gui.tooltips", "hide");
        setMessage("Click on the first tile");
    }

    public boolean locationClicked(Location loc) {
        Grid<Actor> gr = getGrid();
        Tile t = (Tile) gr.get(loc);
        if (t != null) {
            t.flip();
            if (up == null) {
                setMessage("Click on the second tile");
                up = loc;
            } else {
                Tile first = (Tile) gr.get(up);
                if (!first.getColor().equals(t.getColor())) {
                    first.flip();
                    t.flip();
                }
                guessNumber++;
                setMessage("Click on the first tile\n" + guessNumber
                    + " guesses so far (10 is a perfect score)");
                up = null;
            }
        }
        return true;
    }
}

TileGameRunner.java

public class TileGameRunner {
    public static void main(String[] args) {
        TileGame game = new TileGame();
        game.show();
    }
}

Tweaking a world

You can tweak the behavior of your world as follows:

- Call setMessage to display different messages in the yellow area above the grid. (Scroll bars are added if needed.) This can happen at any time.
- Call addOccupantClass so that additional occupant classes are offered when a user clicks on an empty location. (By default, any classes that have ever been added to the world are displayed.)
- Call addGridClass to supply another grid class in the World -> Set grid menu, such as a torus grid that wraps around the edges. This is typically done in the main method of the runner program.
- Call step programmatically. You may want to do this in a program that tests student homework, so you don't have to press the Step button yourself.

Other worlds

You can use the GridWorld framework for games and other applications. In this case, you will want to install your own subclass of the info.gridworld.world.World class. There are three main extension points.

- Override the locationClicked method so that something happens when the user clicks on a location (or presses the Enter key). Your method should return true if you consume the event. If you return false, the GUI will carry out its default action, showing the constructor menu in an empty location or the method menu in an occupied location.
- Override the keyPressed method so that something happens when the user hits a key. Your method should return true if you consume the key press. Don't consume standard keys such as the arrow keys or Enter. (Shifted arrows are ok to consume.)
- Override the step method so that something happens when the user presses the Step button. By default, nothing happens in the World class. (In the ActorWorld, the act method is invoked on all occupants.)

Secret Flags

- If you call System.setProperty("info.gridworld.gui.selection", "hide"); the selection square is hidden. This may be useful for special environments (such as a Quzzle game in which it doesn't make sense to select a fractional tile).
- If you call System.setProperty("info.gridworld.gui.tooltips", "hide"); the tooltips are hidden.
- Call System.setProperty("info.gridworld.gui.frametitle", titleString); before showing the world to set a different title.

If you modify the framework to implement other special effects that shouldn't result in API clutter, consider using a similar "secret flag" mechanism.
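Stepping back from the framework details, the heart of TileGame's locationClicked is a tiny two-state machine: remember the first clicked tile, then on the second click either keep the pair up (match) or cover both (miss). Here is that logic extracted into a framework-free Python sketch for easy testing; the MatchGame class and its names are invented for this demo, not part of GridWorld.

```python
# Framework-free sketch of TileGame's matching logic. Tiles are a
# mapping of location -> color; matched tiles stay exposed forever.
class MatchGame:
    def __init__(self, colors_by_loc):
        self.colors = dict(colors_by_loc)
        self.matched = set()   # locations permanently exposed
        self.first = None      # first tile of the current guess
        self.guesses = 0

    def click(self, loc):
        # Ignore clicks on matched tiles or on the already-flipped tile.
        if loc in self.matched or loc == self.first:
            return
        if self.first is None:
            self.first = loc
        else:
            self.guesses += 1
            if self.colors[self.first] == self.colors[loc]:
                self.matched |= {self.first, loc}
            # On a miss, both tiles are covered again (first is reset).
            self.first = None

    def won(self):
        return self.matched == set(self.colors)
```

With two pairs, a perfect game takes two guesses, mirroring the "10 is a perfect score" message for the ten-pair board above.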
https://mathorama.com/apcs/pmwiki.php?n=Main.GridworldTileGame
Let's explore the key differences between the OnInit lifecycle hook in Angular versus the constructor that we declare via a TypeScript class. The difference between the two, or where to put particular logic, has been the source of some confusion for those getting started with Angular - you may have questions such as:

"Do I put this in the constructor?"
"What do I need OnInit for?"
"Can I use the constructor and OnInit for the same thing?"

I'm going to help you answer some of these questions, and show you the difference between constructors and the OnInit lifecycle hook, to guide you through making your own decisions when building out your Angular apps. To better understand the decisions we need to make, let's first understand what a constructor and a lifecycle hook are.

What is a constructor?

The constructor that lives on a JavaScript (ES2015/ES6) class (or a TypeScript class in this case) is a feature of the class itself - it's not an Angular feature. A constructor is invoked first when a class is being instantiated, which makes it a tempting place to put some logic:

class Pizza {
  constructor() {
    console.log('Hello world!');
  }
}

// create a new instance; the constructor is
// then invoked by the *JavaScript engine*!
const pizza = new Pizza();

The main piece here is that the JavaScript engine calls the constructor, not Angular directly - in this example there is no Angular! It's plain JavaScript.

Constructors in Angular

What about when we are using Angular? In this case, the constructor is best left for simply 'wiring things up', as this is where Angular resolves the providers you declare in your constructor, through Dependency Injection:

import { Component } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({...})
class PizzaComponent {
  constructor(
    private route: ActivatedRoute
  ) {}
}

This will bind ActivatedRoute to the class, making it accessible as part of the component class via this.route.
The lesson here is that it's out of Angular's control when the constructor is invoked, so we should leave it for merely wiring-up purposes instead of attempting to place any initialisation logic there - Angular has a special lifecycle hook, OnInit, for this very reason. But before we dive too deep into OnInit, let's understand the concept of a lifecycle hook.

What are Lifecycle Hooks in Angular?

Lifecycle hooks are just methods that Angular calls on an object, on a component for example, at key points in time. Some typical lifecycle hooks that ship with Angular include OnInit, OnDestroy and OnChanges. For the complete list of lifecycle hooks, check out the Lifecycle Hooks documentation page.

Want to deep-dive into Angular's lifecycle hooks? We have an "Exploring Lifecycle Hooks" series covering them all! Start with our guide to OnInit and follow the rest of the series.

A lifecycle hook gives us key insights into what's happening with a component. When the component is created, the OnChanges lifecycle hook fires first. Secondly, OnInit, at which point we know our component is ready to go. If our component were to be destroyed, we'd be able to slot some code into the OnDestroy lifecycle hook - to run some cleanup logic, perhaps.

With this in mind, what does the OnInit lifecycle hook give us? Let's explore and gain a better understanding.

OnInit Lifecycle Hook

By adding this lifecycle hook to a component, Angular has full control, unlike with the constructor. Angular will invoke the OnInit lifecycle hook once it has finished setting the component up.
The way we can use this hook is via an interface, OnInit, which we simply declare via the implements keyword (this creates a contract between the class and TypeScript to ensure the method ngOnInit is present on the class):

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({...})
class PizzaComponent implements OnInit {
  constructor(
    private route: ActivatedRoute
  ) {}

  ngOnInit() {
    // subscribe when OnInit fires
    this.route.params.subscribe(params => {
      // now we can do something!
    });
  }
}

This approach also makes unit testing easier, as we can invoke the lifecycle hook ourselves. Here's what technically happens when Angular instantiates our component and invokes ngOnInit:

const instance = new PizzaComponent();

// Angular calls this when it's ready
instance.ngOnInit();

Differences between constructor and OnInit

To conclude, ngOnInit is purely there to give us a signal that Angular has finished initialising the component - and we're ready to roll. This initialisation phase includes the first pass at Change Detection, which means any @Input() properties are available inside ngOnInit, however they are undefined inside the constructor, by design.

The differences are subtle, yet important. Now that we've fully explored the constructor's role, you're primed to make the right decision with OnInit! Using OnInit gives us a guarantee that bindings are readily available and we can use them to the fullest.

To learn more techniques, best practices and real-world expert knowledge I'd highly recommend checking out my Angular courses - they will guide you through your journey to mastering Angular to the fullest! It's also important not to overload your constructor with initialisation logic; keep that in mind, along with the unit-testing benefits above, and we're onto a winner.
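The ordering claim above (inputs undefined in the constructor, set by the time ngOnInit runs) can be sketched without Angular at all. The class below and its createComponent helper are invented for illustration; they just mimic what a framework does: construct first, assign bound inputs, then call ngOnInit.

```typescript
class PizzaComponent {
  topping?: string; // imagine this is an @Input() binding
  seenInConstructor: string | undefined;
  seenInOnInit: string | undefined = undefined;

  constructor() {
    // Inputs have not been assigned yet at this point.
    this.seenInConstructor = this.topping;
  }

  ngOnInit(): void {
    // By now the "framework" has assigned the input.
    this.seenInOnInit = this.topping;
  }
}

// A stand-in for the framework's component creation sequence.
function createComponent(topping: string): PizzaComponent {
  const instance = new PizzaComponent(); // 1. construct (wiring only)
  instance.topping = topping;            // 2. bind inputs
  instance.ngOnInit();                   // 3. signal readiness
  return instance;
}
```

Run it and the constructor sees undefined while ngOnInit sees the bound value, which is exactly why initialisation logic belongs in ngOnInit.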
https://ultimatecourses.com/blog/angular-constructor-ngoninit-lifecycle-hook
Sometimes it's the little things that really make coding fun. We used to use a common pattern that helped out with our view code. By adding the link_to_xxx helpers, we could make our applications more consistent and maintainable:

def link_to_candidate(candidate, msg = nil)
  link_to(msg || h(candidate.name), candidate_path(candidate))
end

def link_to_issue(issue, msg = nil)
  link_to(msg || h(issue.title), issue_path(issue))
end

def link_to_intern(intern, msg = nil)
  link_to(msg || h(intern.name), intern_path(intern.candidate, intern))
end

# ...and on...

But this grows out of hand pretty quickly. In one application we have over 30 of those puppies. Well, we figured out that by being a little clever, we could really clean this up...

def link(item, msg = nil)
  case item
  when Candidate: link_to(msg || h(item.name), candidate_path(item))
  when Issue:     link_to(msg || h(item.title), issue_path(item))
  when Intern:    link_to(msg || h(item.name), intern_path(item.candidate, item))
  # ...
  else raise ArgumentError, "Unrecognized item given to link: #{item}"
  end
end

Well, that's much better. It only grows one line for each model instead of four, and it's easier to call in the views.

Candidates!

<% @candidates.each do |candidate| %>
  <%= link candidate %>
<% end %>

But it still smells a little fishy. I don't think anyone here at Thoughtbot likes seeing a case statement. Let's get just a little more clever...

Abandon hope, all ye who enter here

def link(item, msg = nil)
  msg ||= item.send([:name, :title, :id].detect {|n| item.respond_to? n})
  method = "#{item.class.name.underscore}_path"
  link_to(msg, self.send(method, item))
end

If you're still with me: this version of link() figures out what attribute to call on the given model and generates the xxx_path method. It's very concise, and won't grow with the size of your code base, but hot-damn is it a doozy to decipher. But a larger issue is that we lost our ability to handle nested resources (like intern_path(intern.candidate, intern)).
Now, we definitely went with the case-statement version up there, but just as an exercise... Let's just say we required all nested models to provide a parents attribute, which returned the list of parent models. We could then clean up our link() method like so:

def link(item, msg = nil)
  msg ||= item.send([:name, :title, :id].detect {|n| item.respond_to? n})
  method = "#{item.class.name.underscore}_path"
  parents = item.parents rescue []
  link_to(msg, self.send(method, *parents, item))
end

I wonder what else could be simplified if the models could tell you what other models precede them in the resource chain.
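The densest line is the attribute-sniffing one, so here it is in isolation as plain Ruby, with no Rails required. The Candidate and Issue Structs are made-up stand-ins for ActiveRecord models: detect returns the first attribute name the object responds to, and send calls it.

```ruby
# Minimal stand-ins for ActiveRecord models, just for this demo.
Candidate = Struct.new(:name)
Issue     = Struct.new(:title)

# Pick the first label-ish attribute the object actually has.
# (If none of the candidates match, detect returns nil and send
# would blow up, so a real version should handle that case.)
def label_for(item)
  attr = [:name, :title, :id].detect { |n| item.respond_to?(n) }
  item.send(attr)
end
```

Candidates get labeled by name and issues by title, with no per-model helper in sight.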
http://robots.thoughtbot.com/its-the-little-things
I concur with tilly. My diagnosis is that your module needs a more specifically named namespace, and possibly a more illustrative name for your subroutine. Weirdly enough, coming up with a perfect name for a subroutine/package/module sometimes takes me longer than the design logic and implementation. So, take two rename updates and call us in the morning. :)

In reply to Re^2: To script or not to script, that is the question... by biosysadmin
in thread To script or not to script, that is the question... by tlm
http://www.perlmonks.org/index.pl?parent=462244;node_id=3333
Abstract: In the days before BlockingQueue was added to Java, we had to write our own. This newsletter describes an approach using synchronized and wait()/notify(). Welcome to the 16th issue of The Java(tm) Specialists' Newsletter, written in a dreary-weathered-Germany. Since I'm a summer person, I really like living in South Africa where we have 9 months of summer and 3 months of sort-of-winter. It's quite difficult to explain to my 2-year old son the concepts of snow, snow-man, snow-ball, etc. Probably as difficult as explaining to a German child the concepts of cloudless-sky, beach, BSE-free meat, etc. Next week I will again not be able to post the newsletter due to international travelling (Mauritius), but the week after that I will demonstrate how it is possible to write type-safe enum types in Java using inner classes and how it is possible to "switch" on their object references. Switch statements should never be used, but it is nevertheless fascinating to watch how the Java language constructs can be abused... javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge. This week I want to speak about a very useful construct that we use for inter-thread communication, called a blocking queue. Quite often in threaded applications we have a producer-consumer situation where some threads want to pop jobs onto a queue, and some other worker threads want to remove jobs from the queue and then execute them. It is quite useful in such circumstances to write a queue which blocks on pop when there is nothing on the queue. Otherwise the consumers would have to poll, and polling is not very good because it wastes CPU cycles. I have written a very simple version of the BlockingQueue, a more advanced version would include alarms that are generated when the queue reaches a certain length. 
---
Warning Advanced: When I write pieces of code which are synchronized, I usually avoid synchronizing on "this" or marking the whole method as synchronized. When you synchronize on "this" inside the class, it might happen that other code outside of your control also synchronizes on the handle to your object, or worse, calls notify on your handle. This would severely mess up your well-written BlockingQueue code. I therefore as a habit always use private data members as locks inside a class; in this case I use the private queue data member. Another disadvantage of indiscriminately synchronizing on "this" is that it is very easy to then lock out parts of your class which do not necessarily have to be locked out from each other. For example, I might have a list of listeners in my BlockingQueue which are notified when the list gets too long. Adding and removing such listeners from the BlockingQueue should be synchronized, but they do not have to be synchronized with respect to the push and pop operations, otherwise you limit concurrency.
---

//: BlockingQueue.java
import java.util.*;

public class BlockingQueue {
  /** It makes logical sense to use a linked list for a FIFO queue,
      although an ArrayList is usually more efficient for a short
      queue (on most VMs). */
  private final LinkedList queue = new LinkedList();

  /** This method pushes an object onto the end of the queue, and
      then notifies one of the waiting threads. */
  public void push(Object o) {
    synchronized(queue) {
      queue.add(o);
      queue.notify();
    }
  }

  /** The pop operation blocks until either an object is returned
      or the thread is interrupted, in which case it throws an
      InterruptedException. */
  public Object pop() throws InterruptedException {
    synchronized(queue) {
      while (queue.isEmpty()) {
        queue.wait();
      }
      return queue.removeFirst();
    }
  }

  /** Return the number of elements currently in the queue. */
  public int size() {
    synchronized(queue) {
      return queue.size();
    }
  }
}

Now we've got a nice little test case that uses the blocking queue for 10 worker threads, which will each pull as many tasks as possible from the queue. To end the test, we put one poison pill onto the queue for each of the worker threads, which, when executed, interrupts the current thread (evil laughter).

//: BlockingQueueTest.java
public class BlockingQueueTest {
  private final BlockingQueue bq = new BlockingQueue();

  /** The Worker thread is not very robust. If a RuntimeException
      occurs in the run method, the thread will stop. */
  private class Worker extends Thread {
    public Worker(String name) {
      super(name);
      start();
    }
    public void run() {
      try {
        while (!isInterrupted()) {
          ((Runnable)bq.pop()).run();
        }
      } catch(InterruptedException ex) {}
      System.out.println(getName() + " finished");
    }
  }

  public BlockingQueueTest() {
    // We create 10 threads as workers
    Thread[] workers = new Thread[10];
    for (int i=0; i<workers.length; i++)
      workers[i] = new Worker("Worker Thread " + i);

    // We then push 100 commands onto the queue
    for (int i=0; i<100; i++) {
      final String msg = "Task " + i + " completed";
      bq.push(new Runnable() {
        public void run() {
          System.out.println(msg);
          // Sleep a random amount of time, up to 1 second
          try {
            Thread.sleep((long)(Math.random()*1000));
          } catch(InterruptedException ex) { }
        }
      });
    }

    // We then push one "poison pill" onto the queue for each
    // worker thread, which will only be processed once the other
    // tasks are completed.
    for (int i=0; i<workers.length; i++) {
      bq.push(new Runnable() {
        public void run() {
          Thread.currentThread().interrupt();
        }
      });
    }

    // Lastly we join ourself to each of the Worker threads, so
    // that we only continue once all the worker threads are
    // finished.
    for (int i=0; i<workers.length; i++) {
      try {
        workers[i].join();
      } catch(InterruptedException ex) {}
    }
    System.out.println("BlockingQueueTest finished");
  }

  public static void main(String[] args) throws Exception {
    new BlockingQueueTest();
  }
}

The concepts in the newsletter can be expanded quite a bit. They could, for example, be used as a basis for implementing a ThreadPool, or otherwise you can implement an "ActiveQueue" which performs callbacks to listeners each time an event is pushed onto the queue via a Thread running inside the ActiveQueue. It is also possible to use PipedInputStream and PipedOutputStream to send messages between threads, but then you have to set up a whole protocol, and if you want to exchange objects you have to use ObjectOutputStream, which will be a lot slower than just passing handles. Until next week, and please remember to forward this newsletter in its entirety to as many Java users as you...
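As a historical footnote worth making concrete: since Java 5 this pattern ships in the JDK as java.util.concurrent.BlockingQueue (as the abstract above notes, this newsletter predates it). Below is a hedged sketch of the same producer/worker/poison-pill flow using LinkedBlockingQueue; the class and method names are the real JDK API, while the POISON sentinel and the runTasks helper are my own conventions, not from the newsletter:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ModernQueueDemo {
    // Push n messages through a LinkedBlockingQueue to a single worker
    // thread and return them in the order the worker consumed them.
    static List<String> runTasks(int n) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<String> consumed = new ArrayList<>();
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take();        // blocks while the queue is empty
                    if (msg.equals("POISON")) return; // poison pill ends the worker
                    synchronized (consumed) { consumed.add(msg); }
                }
            } catch (InterruptedException ex) { }
        });
        worker.start();
        for (int i = 0; i < n; i++) queue.put("Task " + i);
        queue.put("POISON");
        worker.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(3));
    }
}
```

Notice that all of the wait()/notify() bookkeeping from the hand-rolled queue disappears: take() blocks and put() wakes a waiting consumer for you.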
https://www.javaspecialists.eu/archive/Issue016.html
CC-MAIN-2020-05
refinedweb
1,038
61.26
I think there would be so many hidden gotchas that it is not worth it. The myriad of strange bugs that could come along with it would be much harder to figure out than the issues you describe of the constructor not running again. In most applications, it wouldn't be that difficult to start back at a bookmarkable page when you make an incompatible change.

Jeremy Thomerson
-- sent from my "smart" phone, so please excuse spelling, formatting, or compiler errors

On Nov 18, 2010 7:53 AM, "Martijn Dashorst" <martijn.dashorst@gmail.com> wrote:

I've been trying out jrebel and wicket a couple of times, and I thought it didn't work. It does, but the way Wicket development works is undoing most of the benefits of using jrebel. The idea of jrebel is to replace hotswap with something that actually works for normal development: adding methods, renaming them, creating new (anonymous inner) classes etc, without having to restart your application. And that works quite well... until you start developing with Wicket.

The problem is that our component hierarchy doesn't work well with code replacement. A typical workflow is that you navigate in your application to a page, and want to add a new component to it. So you go into that class:

    public class LinkCounter extends WebPage {
        public LinkCounter() {
        }
    }

add a field:

    private int counter;

add a label:

    public LinkCounter() {
        add(new Label("counter", new PropertyModel<Integer>(this, "counter")));
    }

    <span wicket:id="counter"></span>

add a link:

    public LinkCounter() {
        ...
        add(new Link<Void>("click") {
            public void onClick() {
                counter++;
            }
        });
    }

    <a href="#" wicket:id="click">Click me</a>

All is well, and when you refresh the page (as long as you had a bookmarkable link to it) it shows the new label and link. You click the link and the URL changes from a bookmarkable URL to a link to a specific instance.
Now you want to add another link:

    add(new Link<Void>("minus") {
        public void onClick() {
            counter--;
        }
    });

Don't forget to modify the markup:

    <span wicket:id="minus"></span>

JRebel does its thing: adding the code to the constructor including the anonymous inner class. You refresh your page and are presented with a component not found exception: minus is added in the markup, but not in the java code.

The problem is that jrebel doesn't invoke the constructor (again) when replacing the code. Moving the code to onInitialize() might enable the jrebel plugin to call that method when it modifies a component class. This won't work because you typically then get:

    java.lang.IllegalArgumentException: A child with id 'counter' already exists

Now we could ask folks to use addOrReplace() instead of add(), or we could relax the multi add restriction to alleviate this problem. I wouldn't be against relaxing add() and deprecating addOrReplace(). Now calling onInitialize again on a constructed component might open up another can of worms.

Is this something worth pursuing? Or should we just write an article with how to do jrebel no-redeploy wicket coding?

Martijn
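The add()/addOrReplace() distinction Martijn is weighing can be sketched outside Wicket. The class below is a hypothetical stand-in (plain Java, not Wicket's actual MarkupContainer API) that mimics the two behaviours: add() throws on a duplicate id, which is exactly what bites when a constructor or onInitialize() runs a second time, while addOrReplace() silently swaps the component in:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ComponentContainer {
    private final Map<String, Object> children = new LinkedHashMap<>();

    // Wicket-style add(): fails if the id is already taken.
    public void add(String id, Object component) {
        if (children.containsKey(id))
            throw new IllegalArgumentException(
                "A child with id '" + id + "' already exists");
        children.put(id, component);
    }

    // addOrReplace(): swaps in the new component, which is why it
    // would survive a constructor being replayed after a code reload.
    public void addOrReplace(String id, Object component) {
        children.put(id, component);
    }

    public Object get(String id) { return children.get(id); }
}
```

Relaxing add() to behave like addOrReplace(), as suggested in the thread, would amount to deleting the containsKey guard.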
http://mail-archives.apache.org/mod_mbox/wicket-dev/201011.mbox/%3CAANLkTik637aRLTytnf+Pv0RT0nxbPM4gLhJSimORHJoS@mail.gmail.com%3E
CC-MAIN-2014-52
refinedweb
501
59.13
Red Hat Bugzilla – Bug 46365 libstdc++ 2.96 stringstream does not work correctly
Last modified: 2008-05-01 11:38:00 EDT

From Bugzilla Helper:
User-Agent: Mozilla/4.77 [en] (X11; U; Linux 2.4.3-12 i686; Nav)

Description of problem: stringstream operator >> does not work when used after operator <<. The following program does not print "5" in g++ 2.96 with libstdc++ 2.96 (it works OK in g++ 2.95, however):

    #include <sstream>
    #include <iostream>
    using namespace std;

    int main()
    {
        stringstream ss;
        int i = 0;
        ss << 5;
        ss >> i;
        cout << i << endl;
    }

How reproducible: Always

Steps to Reproduce:
1. compile the small sample code provided
2. execute
3. the output is 0 (incorrect) instead of 5 (correct).

Actual Results: The output of the compiled program is 0

Expected Results: The output should be 5

Additional info: I verified this problem also on updated Red Hat 7.0 and on Mandrake 8.2. So I guess it is a general problem of gcc 2.96 and a very nasty one. gcc 3.04, 3.1 and 2.95.x work fine.
https://bugzilla.redhat.com/show_bug.cgi?id=46365
CC-MAIN-2017-39
refinedweb
178
78.45
I'm working on a project that uses animated characters with facial rigging, Max cameras and many other things. The problem is that when I export characters, simple objects or cameras to FBX format and then import them into Unity (at least using "legacy" as the rig in the import settings), several unexpected things happen:

1. My five keys of the animated FBX object turn into a file which has many more keys. (see the attached images below)

2. When an object is going to make a strong displacement between two consecutive keys (even though I set the animation tangents in Max to stepped), Unity imports it ignoring my tangents, adding extra movements before and after these keys. Unity even seems to create new keys between my two consecutive keys.

Is there any solution?
https://answers.unity.com/questions/443844/how-can-i-exportimport-accurately-my-animation-key.html
CC-MAIN-2019-22
refinedweb
188
50.87
Before we go any further, I recommend heading over to software.intel.com and reading Detecting Slate/Clamshell Mode & Screen Orientation in a 2 in 1. The code we're looking to implement is basically this (done in a WndProc):

    case WM_SETTINGCHANGE:
        if (wcscmp(TEXT("ConvertibleSlateMode"), (TCHAR*)lParam) == 0)
            NotifySlateModeChange();
        else if (wcscmp(TEXT("SystemDockMode"), (TCHAR*)lParam) == 0)
            NotifyDockingModeChange();

Simply stated, we need to monitor WM_SETTINGCHANGE or equivalent, detect which state we're in, and respond accordingly. Now that we know what our goal is, let's make it work in a modern WPF C# application. I looked at a few approaches before settling on my solution, and hopefully it will make sense why I've done it this way as we work through the code.

Our first step is to import user32.dll so we can define and use the GetSystemMetrics method:

    public partial class MainWindow : Window
    {
        [DllImport("user32.dll")]
        static extern int GetSystemMetrics(SystemMetric smIndex);
        ...

To do that we'll need to reference the InteropServices namespace:

    using System.Runtime.InteropServices;

At this point you may have noticed that we don't have the SystemMetric enum defined. We could create our own with just the two metrics we need, but we might as well jump over to pinvoke.net and get a fairly complete enum, and just copy that into our window class or somewhere else convenient (this might just be useful for other purposes as well).

Unfortunately this list doesn't contain the two metrics we want to check against as we saw in the Intel doc, namely SM_CONVERTABLESLATEMODE and SM_SYSTEMDOCKED, so we'll add these to the end of the enum ourselves:

    public enum SystemMetric
    {
        ...
        SM_CONVERTABLESLATEMODE = 0x2003,
        SM_SYSTEMDOCKED = 0x2004
    }

(* after writing this post I decided I should update the pinvoke.net page so the enum should now be up to date)

We're almost ready to watch for the 2 in 1 state change.
In C++ we would just capture the WM_SETTINGCHANGE message. We could do this in C# by overriding OnSourceInitialized() and hooking into the WndProc. Truth be told, this was my first approach. But on my machine I wasn't receiving "SystemDockMode" from the lParam – it would always give me "ConvertibleSlateMode" regardless of the state. So I decided to do it the "C# way" and just watch UserPreferenceChanging. To try it out just add

    SystemEvents.UserPreferenceChanging += SystemEvents_UserPreferenceChanging;

to the constructor and create a method with the correct signature. You'll need to include "using Microsoft.Win32;" for SystemEvents.

We'll get a UserPreferenceChanging event every time the device changes state, and it will come in under the General category of UserPreferenceCategory. This is passed in with the UserPreferenceChangingEventArgs. Unfortunately we don't get any additional information, and we don't have access to the lParam as we would if we were handling the message directly. And that's where the GetSystemMetrics() method comes in. We'll simply query for the state every time a change comes in under the General category. You'll likely want to add some logic to remember your current state and only update your UI if it changes. See the code below:

    void SystemEvents_UserPreferenceChanging(object sender, UserPreferenceChangingEventArgs e)
    {
        if (e.Category == UserPreferenceCategory.General)
        {
            if (GetSystemMetrics(SystemMetric.SM_CONVERTABLESLATEMODE) == 0)
            {
                Debug.WriteLine("detected slate mode");
            }
            else if (GetSystemMetrics(SystemMetric.SM_SYSTEMDOCKED) == 0)
            {
                Debug.WriteLine("detected docked mode");
            }
        }
    }

Finally, there's a sample project with all of the above code on github here:

Happy Coding!
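To make the branching explicit, here is a small, platform-independent sketch of the same decision logic in Python (my own illustration, not from the post). The Win32 call is injected as a parameter so the logic can run and be tested anywhere; on Windows you would pass ctypes.windll.user32.GetSystemMetrics:

```python
# Metric indices from the Intel doc / blog post above.
SM_CONVERTABLESLATEMODE = 0x2003
SM_SYSTEMDOCKED = 0x2004

def detect_mode(get_system_metrics):
    """Mirror the handler's branches: a zero return for the slate
    metric means slate mode; otherwise a zero docked metric is
    reported as docked, else we assume clamshell."""
    if get_system_metrics(SM_CONVERTABLESLATEMODE) == 0:
        return "slate"
    if get_system_metrics(SM_SYSTEMDOCKED) == 0:
        return "docked"
    return "clamshell"

# Anywhere (e.g. in tests) we can fake the metric values:
print(detect_mode(lambda metric: 0))  # "slate"
```

Injecting the native call this way also makes it easy to unit-test the state machine without a convertible device on hand.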
http://www.themethodology.net/2014/05/detecting-2-in-1-state-using-wpf-c.html
CC-MAIN-2017-26
refinedweb
597
55.44
In classical mechanics, a double pendulum is a pendulum attached to the end of another pendulum. Its equations of motion are often written using the Lagrangian formulation of mechanics and solved numerically, which is the approach taken here. The dynamics of the double pendulum are chaotic and complex, as illustrated below.

The code:

import sys
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

# Pendulum rod lengths (m), bob masses (kg).
L1, L2 = 1, 1
m1, m2 = 1, 1
# The gravitational acceleration (m.s-2).
g = 9.81

def deriv(y, t, L1, L2, m1, m2):
    """Return the first derivatives of y = theta1, z1, theta2, z2."""
    theta1, z1, theta2, z2 = y

    c, s = np.cos(theta1-theta2), np.sin(theta1-theta2)

    theta1dot = z1
    z1dot = (m2*g*np.sin(theta2)*c - m2*s*(L1*z1**2*c + L2*z2**2) -
             (m1+m2)*g*np.sin(theta1)) / L1 / (m1 + m2*s**2)
    theta2dot = z2
    z2dot = ((m1+m2)*(L1*z1**2*s - g*np.sin(theta2) + g*np.sin(theta1)*c) +
             m2*L2*z2**2*s*c) / L2 / (m1 + m2*s**2)
    return theta1dot, z1dot, theta2dot, z2dot

def calc_E(y):
    """Return the total energy of the system."""
    th1, th1d, th2, th2d = y.T
    V = -(m1+m2)*L1*g*np.cos(th1) - m2*L2*g*np.cos(th2)
    T = 0.5*m1*(L1*th1d)**2 + 0.5*m2*((L1*th1d)**2 + (L2*th2d)**2 +
            2*L1*L2*th1d*th2d*np.cos(th1-th2))
    return T + V

# Maximum time, time point spacings and the time grid (all in s).
tmax, dt = 30, 0.01
t = np.arange(0, tmax+dt, dt)
# Initial conditions: theta1, dtheta1/dt, theta2, dtheta2/dt.
y0 = np.array([3*np.pi/7, 0, 3*np.pi/4, 0])

# Do the numerical integration of the equations of motion
y = odeint(deriv, y0, t, args=(L1, L2, m1, m2))

# Check that the calculation conserves total energy to within some tolerance.
EDRIFT = 0.05
# Total energy from the initial conditions
E = calc_E(y0)
if np.max(np.sum(np.abs(calc_E(y) - E))) > EDRIFT:
    sys.exit('Maximum energy drift of {} exceeded.'.format(EDRIFT))

# Unpack z and theta as a function of time
theta1, theta2 = y[:,0], y[:,2]

# Convert to Cartesian coordinates of the two bob positions.
x1 = L1 * np.sin(theta1)
y1 = -L1 * np.cos(theta1)
x2 = x1 + L2 * np.sin(theta2)
y2 = y1 - L2 * np.cos(theta2)

# Plotted bob circle radius
r = 0.05
# Plot a trail of the m2 bob's position for the last trail_secs seconds.
trail_secs = 1
# This corresponds to max_trail time points.
max_trail = int(trail_secs / dt)

def make_plot(i):
    # Plot and save an image of the double pendulum configuration for time
    # point i.
    # The pendulum rods.
    ax.plot([0, x1[i], x2[i]], [0, y1[i], y2[i]], lw=2, c='k')
    # Circles representing the anchor point of rod 1, and bobs 1 and 2.
    c0 = Circle((0, 0), r/2, fc='k', zorder=10)
    c1 = Circle((x1[i], y1[i]), r, fc='b', ec='b', zorder=10)
    c2 = Circle((x2[i], y2[i]), r, fc='r', ec='r', zorder=10)
    ax.add_patch(c0)
    ax.add_patch(c1)
    ax.add_patch(c2)

    # The trail is divided into ns segments and plotted as a fading line.
    ns = 20
    s = max_trail // ns
    for j in range(ns):
        imin = i - (ns-j)*s
        if imin < 0:
            continue
        imax = imin + s + 1
        # The fading looks better if we square the fractional length along
        # the trail.
        alpha = (j/ns)**2
        ax.plot(x2[imin:imax], y2[imin:imax], c='r', solid_capstyle='butt',
                lw=2, alpha=alpha)

    # Centre the image on the fixed anchor point, and ensure the axes are equal
    ax.set_xlim(-L1-L2-r, L1+L2+r)
    ax.set_ylim(-L1-L2-r, L1+L2+r)
    ax.set_aspect('equal', adjustable='box')
    plt.axis('off')
    plt.savefig('frames/_img{:04d}.png'.format(i//di), dpi=72)
    plt.cla()

# Make an image every di time points, corresponding to a frame rate of fps
# frames per second.
# Frame rate, s-1
fps = 10
di = int(1/fps/dt)
fig = plt.figure(figsize=(8.3333, 6.25), dpi=72)
ax = fig.add_subplot(111)
for i in range(0, t.size, di):
    print(i // di, '/', t.size // di)
    make_plot(i)

The images are saved to a subdirectory, frames/, and can be converted into an animated gif, for example with ImageMagick's convert utility.
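The chaotic sensitivity mentioned at the top can be verified numerically without scipy or matplotlib. The pure-Python RK4 sketch below (my own addition, stdlib math only) integrates the same equations of motion as deriv for two nearly identical initial conditions and compares the outcomes:

```python
import math

# Same parameters and equations of motion as deriv() in the post.
L1 = L2 = m1 = m2 = 1.0
g = 9.81

def deriv(y):
    theta1, z1, theta2, z2 = y
    c, s = math.cos(theta1 - theta2), math.sin(theta1 - theta2)
    denom = m1 + m2 * s * s
    z1dot = (m2 * g * math.sin(theta2) * c
             - m2 * s * (L1 * z1**2 * c + L2 * z2**2)
             - (m1 + m2) * g * math.sin(theta1)) / L1 / denom
    z2dot = ((m1 + m2) * (L1 * z1**2 * s - g * math.sin(theta2)
                          + g * math.sin(theta1) * c)
             + m2 * L2 * z2**2 * s * c) / L2 / denom
    return (z1, z1dot, z2, z2dot)

def energy(y):
    """Total energy, mirroring calc_E above for a single state."""
    th1, z1, th2, z2 = y
    V = -(m1 + m2) * L1 * g * math.cos(th1) - m2 * L2 * g * math.cos(th2)
    T = 0.5 * m1 * (L1 * z1)**2 + 0.5 * m2 * ((L1 * z1)**2 + (L2 * z2)**2
            + 2 * L1 * L2 * z1 * z2 * math.cos(th1 - th2))
    return T + V

def rk4_step(y, dt):
    # Classic fourth-order Runge-Kutta step on the state tuple.
    k1 = deriv(y)
    k2 = deriv(tuple(a + dt/2 * b for a, b in zip(y, k1)))
    k3 = deriv(tuple(a + dt/2 * b for a, b in zip(y, k2)))
    k4 = deriv(tuple(a + dt * b for a, b in zip(y, k3)))
    return tuple(a + dt/6 * (b + 2*c + 2*d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

def integrate(y0, tmax=10.0, dt=0.001):
    y = y0
    for _ in range(int(tmax / dt)):
        y = rk4_step(y, dt)
    return y

ya = integrate((3 * math.pi/7, 0.0, 3 * math.pi/4, 0.0))
yb = integrate((3 * math.pi/7 + 1e-9, 0.0, 3 * math.pi/4, 0.0))
# The tiny perturbation in theta1 typically grows by orders of magnitude.
print(abs(ya[0] - yb[0]))
```

Energy is conserved to high accuracy by RK4 at this step size, while the 1e-9 difference between the two trajectories is amplified dramatically over 10 seconds; both are hallmarks of a well-integrated chaotic system.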
The derivation of the double pendulum equations of motion using the Lagrangian formulation has become a standard exercise in introductory classical mechanics, but an outline is given below. There are many, many similar derivations on the internet. We use the following coordinate system:

The two degrees of freedom are taken to be $\theta_1$ and $\theta_2$, the angle of each pendulum rod from the vertical. The components of the bob positions and velocities are

$$x_1 = L_1\sin\theta_1, \quad y_1 = -L_1\cos\theta_1,$$
$$x_2 = x_1 + L_2\sin\theta_2, \quad y_2 = y_1 - L_2\cos\theta_2,$$
$$\dot{x}_1 = L_1\dot{\theta}_1\cos\theta_1, \quad \dot{y}_1 = L_1\dot{\theta}_1\sin\theta_1,$$
$$\dot{x}_2 = \dot{x}_1 + L_2\dot{\theta}_2\cos\theta_2, \quad \dot{y}_2 = \dot{y}_1 + L_2\dot{\theta}_2\sin\theta_2.$$

The potential and kinetic energies are then

$$V = -(m_1+m_2)L_1 g\cos\theta_1 - m_2 L_2 g\cos\theta_2,$$
$$T = \tfrac{1}{2}m_1 L_1^2\dot{\theta}_1^2 + \tfrac{1}{2}m_2\left[L_1^2\dot{\theta}_1^2 + L_2^2\dot{\theta}_2^2 + 2L_1 L_2\dot{\theta}_1\dot{\theta}_2\cos(\theta_1-\theta_2)\right].$$

The Lagrangian, $\mathcal{L} = T - V$, is therefore:

$$\mathcal{L} = \tfrac{1}{2}(m_1+m_2)L_1^2\dot{\theta}_1^2 + \tfrac{1}{2}m_2 L_2^2\dot{\theta}_2^2 + m_2 L_1 L_2\dot{\theta}_1\dot{\theta}_2\cos(\theta_1-\theta_2) + (m_1+m_2)L_1 g\cos\theta_1 + m_2 L_2 g\cos\theta_2.$$

The Euler-Lagrange equations are:

$$\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial\mathcal{L}}{\partial\dot{\theta}_i}\right) - \frac{\partial\mathcal{L}}{\partial\theta_i} = 0, \quad i = 1, 2.$$

For these coordinates, after some calculus and algebra, we get:

$$(m_1+m_2)L_1\ddot{\theta}_1 + m_2 L_2\ddot{\theta}_2\cos(\theta_1-\theta_2) + m_2 L_2\dot{\theta}_2^2\sin(\theta_1-\theta_2) + (m_1+m_2)g\sin\theta_1 = 0,$$
$$L_2\ddot{\theta}_2 + L_1\ddot{\theta}_1\cos(\theta_1-\theta_2) - L_1\dot{\theta}_1^2\sin(\theta_1-\theta_2) + g\sin\theta_2 = 0.$$

scipy's ordinary differential equation solver, integrate.odeint, needs to work with systems of first-order differential equations, so let $z_1 \equiv \dot{\theta}_1 \Rightarrow \ddot{\theta}_1 = \dot{z}_1$ and $z_2 \equiv \dot{\theta}_2 \Rightarrow \ddot{\theta}_2 = \dot{z}_2$. After some rearranging, the following expressions for $\dot{z}_1$ and $\dot{z}_2$ are obtained:

$$\dot{z}_1 = \frac{m_2 g\sin\theta_2\cos(\theta_1-\theta_2) - m_2\sin(\theta_1-\theta_2)\left[L_1 z_1^2\cos(\theta_1-\theta_2) + L_2 z_2^2\right] - (m_1+m_2)g\sin\theta_1}{L_1\left[m_1 + m_2\sin^2(\theta_1-\theta_2)\right]},$$

$$\dot{z}_2 = \frac{(m_1+m_2)\left[L_1 z_1^2\sin(\theta_1-\theta_2) - g\sin\theta_2 + g\sin\theta_1\cos(\theta_1-\theta_2)\right] + m_2 L_2 z_2^2\sin(\theta_1-\theta_2)\cos(\theta_1-\theta_2)}{L_2\left[m_1 + m_2\sin^2(\theta_1-\theta_2)\right]}.$$

It is these equations which appear in the function deriv in the code above.

Comments are pre-moderated. Please be patient and your comment will appear soon.

Matias Koskimies 2 years, 1 month ago
I would love to see how a chaos pendulum with the middle point (knee) fixed would look like.

christian 2 years, 1 month ago
I'm not sure I follow you – if you fix both the first and second joints, then you have one pendulum. If you fix only the second ("knee"), you have two single pendulums? Or are you interested in the relative motion between the two bobs (theta2 - theta1)?

Dave F 2 years ago
The double pendulum animation at the top of the page starts off slowly then builds up its speed until it is moving very quickly before the animation resets. (On my browser, at least.)
I assume it's written in Javascript. My guess is that in discretizing the changes to the angles, a tiny change to the system's total energy is introduced with each discrete step, and this accumulates with time.

christian 2 years ago
Many thanks for noticing this: the code is in Python but there was a bug – a missing factor of cos(theta1-theta2) – which caused the energy to drift. I've corrected it now and added a check for energy conservation.

Joel 1 year, 11 months ago
Nice.

Andrey 10 months ago
Can we slide the joint connecting the second pendulum (or "knee", as one commenter puts it) up and down the first pendulum? E.g., one corner case is that you align the joint connecting the second pendulum to the first with the joint of the first pendulum (the two pendulums are, in effect, independent pendulums).

christian 10 months ago
Hi Andrey, you can alter the values of L1 and L2 to effectively move the joint connecting the two pendulums: but this keeps the "knee" at the bob of the first pendulum. I think you're describing the situation where the knee is along the rod L1 between the top pivot and the first bob. No doubt this could be investigated, but it requires a new analysis of the dynamics and an additional variable (the position of the knee).

John 7 months, 1 week ago
Pretty cool simulation. How would you account for friction in the joints? I'm building a 3d printed double pendulum and wonder if I could use this simulation to model different bearings and their friction, rod lengths, and different rod masses. Could you use your energy conservation fix to insert a small loss each time?

christian 7 months ago
I think you could account for friction by adding a dissipative term to the Euler-Lagrange equations...
this would add some complexity to the solution, however. My energy conservation fix was a fix to the implementation of the friction-free equations: I had a bug that I should have detected by checking for energy conservation. I don't think the bug itself was a good way of introducing a dissipation effect: in my case the energy increased...(!) Don't forget that this is a chaotic system, so modelling in this way may not tell you very much about the precise long-term behaviour of an actual, physical double pendulum.

Naomi 3 months, 3 weeks ago
I am looking for code that makes a trail following a circle on a Python canvas; can you point out the code for that specific part?

Mike 3 months, 1 week ago
Hey! I'm really looking for a simulation of a double physical pendulum, have you any?

christian 3 months, 1 week ago
I'm a bit confused: this is a simulation of a double pendulum... what do you mean by "double physical pendulum"?

Mike 3 months ago
The physical pendulum is when the mass isn't entirely at the end like a point: the string itself has mass. An ideal pendulum is when you neglect the mass of the string, so you get that the mass is at length L/2, for example.

christian 2 months, 3 weeks ago
Oh, I see: well, the Lagrangian is given in Wikipedia, so it would be fairly straightforward to edit the deriv function to change it there. I'll see if I can get round to it and post again.
https://scipython.com/blog/the-double-pendulum/
CC-MAIN-2020-29
refinedweb
1,588
65.42
Part 2 | Building a quiz app with BaasBox

In the first part of this series we have seen how to set up a project with the BaasBox SDK and how to use it in Swift. We have created the first screen of the QuizWorldCup application, which allows users to login and signup. In this second part we are gonna build the game screen, which allows the user to play a game. Let's get started.

We already have a class "HomeViewController.swift". This is the view that will allow the user to start a new game, check the rankings and tweak the settings. We are going to build the first functionality: start a new game.

Delete the label in "HomeViewController.swift" and its corresponding outlet in code, we don't need it anymore. Now drag a new button, place it in the center and change its label to "Play". Next open the assistant editor and control-drag from the play button to the code in HomeViewController. Create a new IBAction and name it "playButtonTapped". Put a simple "println" statement in the newly created method just to check if it works. At this point HomeViewController.swift should look like this.

import UIKit

class HomeViewController: UIViewController {

    init(nibName nibNameOrNil: String?, bundle nibBundleOrNil: NSBundle?) {
        super.init(nibName: nibNameOrNil, bundle: nibBundleOrNil)
    }

    init(coder aDecoder: NSCoder!) {
        super.init(coder: aDecoder)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        let client = BAAClient.sharedClient()
        if !client.isAuthenticated() {
            navigationController.performSegueWithIdentifier("showLogin", sender: nil)
        }
    }

    @IBAction func playButtonTapped(sender: UIButton) {
        println("play button tapped")
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }
}

Build and run the application, login/signup if needed, tap the play button and make sure the log statement shows up in the debug console.
Once you are sure the UI is hooked up correctly, let's move on to the next step: building the game view controller. The Game View Controller will include the following functionalities:

Looks like a lot to do, so let's dive in! The first step is to prepare the UI. Drag a new view controller on the storyboard and populate it with the following:

This is the layout that we are looking for:

Once you have created the UI, create a new class named "GameViewController.swift" and assign it to the view controller newly created on the storyboard. Now use the assistant editor to create outlets for each UI element. At this point the "GameViewController" should look like this:

import UIKit

class GameViewController: UIViewController {

    @IBOutlet var questionLabel: UILabel
    @IBOutlet var button1: UIButton
    @IBOutlet var button2: UIButton
    @IBOutlet var button3: UIButton
    @IBOutlet var button4: UIButton
    @IBOutlet var timerLabel: UILabel
    @IBOutlet var pointsLabel: UILabel
    @IBOutlet var spinner: UIActivityIndicatorView

    init(nibName nibNameOrNil: String?, bundle nibBundleOrNil: NSBundle?) {
        super.init(nibName: nibNameOrNil, bundle: nibBundleOrNil)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }
}

If you run the application now you'll get an error due to the lack of an initializer. This is probably an Xcode bug that's gonna be fixed in the future. For now you should add an extra initializer as follows:

init(coder aDecoder: NSCoder!) {
    super.init(coder: aDecoder)
}

Next we need to add a few variables to keep track of the game state. Insert this code below the @IBOutlet declarations:

var questions: [Question] = []
var buttons: [UIButton] = []
var timer: NSTimer?
var timerValue: Int = 0 {
    didSet {
        timerLabel.text = String(timerValue)
    }
}
var points: Int = 0 {
    didSet {
        pointsLabel.text = "Points: \(String(points))"
    }
}

There are two arrays, one for the questions and one for the buttons.
There's also a timer, a timerValue and an integer variable to keep track of points. Notice that for timerValue and points we are exploiting property observers, a very powerful feature of Swift. To some extent it is very similar to Objective-C key-value observing, but it requires way less code to be implemented. It allows us to declare a property along with a closure, which can be executed right before or after the property value has changed. In our case we are triggering a refresh of their respective labels when timerValue or points change value.

The next step is to hide all the UI elements when the controller is created, so that we can show the spinner right away. Change the implementation of viewDidLoad as follows:

override func viewDidLoad() {
    super.viewDidLoad()
    buttons = [button1, button2, button3, button4]
    var tag = 0
    for button in buttons {
        button.setTitle("", forState: .Normal)
        button.titleLabel.lineBreakMode = .ByWordWrapping
        button.titleLabel.textAlignment = .Center
        button.backgroundColor = UIColor.lightGrayColor()
        button.addTarget(self, action: "answerTapped:", forControlEvents: .TouchUpInside)
        button.tag = tag
        button.hidden = true
        tag++
    }
    questionLabel.hidden = true
    points = 0
    spinner.hidesWhenStopped = true
}

This code keeps all the buttons in an array and sets a few properties like an empty title, label alignment, background color, the selector triggered when the button is tapped and the tag, to simplify the retrieval of the right question.

Finally we need to show the game view controller when the user taps play. Open the storyboard and control-click from the navigation controller on to the game view controller and pick "Show Detail" from the menu. Then select the segue and name it "showGame". Now open "HomeViewController.swift" and change playButtonTapped as follows.
@IBAction func playButtonTapped(sender: UIButton) {
    navigationController.performSegueWithIdentifier("showGame", sender: nil)
}

Build the application, tap the play button and make sure the game view controller is shown. The next step is to load questions from the server. Let's see how easy it is with the BaasBox SDK.

Once the user taps the play button, the application will trigger a request to the server to fetch 3 random questions related to the world cup. A question looks like the following, in JSON format:

{
    "question": "When was the last time a footballer scored two goals in the first match of the World Cup?",
    "correctAnswerIndex": 0,
    "answers": [
        "1950",
        "1936",
        "1970",
        "1998"
    ]
}

There is a field for the actual question, an array with possible answers and the index of the correct answer in the array. You could create questions using the BaasBox console. That would mean entering data in a form for each question. Populating the back end is out of the scope of this tutorial, so I created a script to automatically populate the backend with a set of questions. It is a python script that can be downloaded from here. Before running it make sure the variables url_base and app_code (lines 7 and 8) are set correctly. Next you can run it by typing "python populate.py". The script will create a collection to hold questions and create a list of 28 questions with related answers.

Now let's see how we can model a question in our app using the BaasBox SDK. The backend is now populated with questions. Our app needs to know about the structure of a question to load and display it. Create a new class, name it "Question", make it a subclass of BAAObject and implement it as follows:

import UIKit

class Question: BAAObject {

    var question: String
    var answers: [String]
    var correctAnswerIndex: Int

    init(dictionary: [NSObject : AnyObject]!) {
        self.question = dictionary["question"]! as String
        self.correctAnswerIndex = dictionary["correctAnswerIndex"]!
            as Int
        self.answers = dictionary["answers"]! as [String]
        super.init(dictionary: dictionary)
    }

    override func collectionName() -> String {
        return "document/questions"
    }
}

The class has three properties: question (the body), an array of answers and the index of the correct answer in the array. Two are the methods to implement: init and collectionName. The first parses the fields of the JSON returned by the server. Notice that the field names correspond exactly to the field names of the JSON on the server. The second method simply returns the server endpoint from which to fetch questions. In this case the name "questions" corresponds to the one provided in the populate script illustrated above. That's it, modelling a BaasBox object is that easy. Now let's see how to load questions from the server.

The fetch operation will be triggered when the game view controller appears on screen. Open GameViewController.swift and add the following method.

override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)
    spinner.startAnimating()
    Question.getRandomObjectsWithParams(nil, bound: 3, {(questions: [AnyObject]!, error: NSError!) -> () in
        self.spinner.stopAnimating()
        if (error == nil) {
            self.questions = questions as [Question]
            self.play()
        } else {
            println("error loading questions")
        }
    })
}

The BaasBox SDK comes with a handy method getRandomObjectsWithParams, inherited from BAAObject, which allows you to fetch a set of elements from the back end. Once we get the response back we cache the result in a property and call the play() method, which is defined like this:

func play() {
    if (questions.count == 0) {
        println("round is over")
    }
}

This method is the core of the game. It checks the elements of the questions property. When it's empty, it means the turn is over and we can dismiss the controller. While if there are still questions, it pops the first, updates the UI by setting labels and buttons and starts the countdown via a timer.
The callback associated with the timer is defined as follows:

func update() {
    if (timerValue == 0) {
        println("time is up")
        timer!.invalidate()
        questions.removeAtIndex(0)
        play()
    } else {
        timerValue--
    }
}

This method decreases the value of the timer each time it's called and, when the value reaches zero, it calls the play method again. Notice that, thanks to observable properties, we just need to update timerValue without writing any code to update the UI.

Finally we need to implement the method associated with the tap of a button:

func answerTapped(sender: UIButton) {
    let buttonIndex = sender.tag
    let correctAnswerIndex = questions[0].correctAnswerIndex
    timer!.invalidate()
    if (buttonIndex == correctAnswerIndex) {
        println("answer is correct")
        points++
    } else {
        println("answer is wrong")
    }
    questions.removeAtIndex(0)
    play()
}

Here we use the tag attached to the button to fetch the answer and check whether it's correct. Here we are, finally! Run the application and tap play. The app will fetch three questions from the backend and show each with a countdown. The score will update accordingly. Once the round is over the home view will be shown.

In this second part you have learned how to create the Game view and how to populate it with data coming from the server. You have learned the power of observable properties and the simplicity of data modelling in BaasBox. In the next part you will learn how to save the scores of a user on the backend and how to build a view to show the rankings. Until then, enjoy swifting!
http://www.chupamobile.com/tutorial-ios/swifting-with-baasbox-598
Cluster Policies

If you want to check or mutate DAGs or Tasks on a cluster-wide level, then a Cluster Policy will let you do that. They have three main purposes:

- Checking that DAGs/Tasks meet a certain standard
- Setting default arguments on DAGs/Tasks
- Performing custom routing logic

There are three types of cluster policy:

- dag_policy: Takes a DAG parameter called dag. Runs at load time.
- task_policy: Takes a BaseOperator parameter called task. Runs at load time.
- task_instance_mutation_hook: Takes a TaskInstance parameter called task_instance. Called right before task execution.

The DAG and Task cluster policies can raise the AirflowClusterPolicyViolation exception to indicate that the dag/task they were passed is not compliant and should not be loaded.

Any extra attributes set by a cluster policy take priority over those defined in your DAG file; for example, if you set an sla on your Task in the DAG file, and then your cluster policy also sets an sla, the cluster policy's value will take precedence.

To configure cluster policies, you should create an airflow_local_settings.py file in either the config folder under your $AIRFLOW_HOME, or place it on the $PYTHONPATH, and then add callables to the file matching one or more of the cluster policy names above (e.g. dag_policy).

Examples

DAG policies

This policy checks if each DAG has at least one tag defined:

def dag_policy(dag: DAG):
    """Ensure that DAG has at least one tag"""
    if not dag.tags:
        raise AirflowClusterPolicyViolation(
            f"DAG {dag.dag_id} has no tags. At least one tag required. File path: {dag.fileloc}"
        )

Note

To avoid import cycles, if you use DAG in type annotations in your cluster policy, be sure to import from airflow.models and not from airflow.
Task policies

Here's an example of enforcing a maximum timeout policy on every task:

def task_policy(task: BaseOperator):
    if task.task_type == 'HivePartitionSensor':
        task.queue = "sensor_queue"
    if task.timeout > timedelta(hours=48):
        task.timeout = timedelta(hours=48)

You could also implement task policies to protect against common errors rather than as technical security controls. If you have multiple checks to apply, it is best practice to curate the rules in a separate Python module and have a single policy that runs each check and aggregates the error messages, so that a single AirflowClusterPolicyViolation can be reported. Following that approach, your airflow_local_settings.py might follow this pattern:

TASK_RULES: List[Callable[[BaseOperator], None]] = [
    task_must_have_owners,
]


def _check_task_rules(current_task: BaseOperator):
    """Check task rules for given task."""
    notices = []
    for rule in TASK_RULES:
        try:
            rule(current_task)
        except AirflowClusterPolicyViolation as ex:
            notices.append(str(ex))
    if notices:
        notices_list = " * " + "\n * ".join(notices)
        raise AirflowClusterPolicyViolation(
            f"DAG policy violation (DAG ID: {current_task.dag_id}, Path: {current_task.dag.fileloc}):\n"
            f"Notices:\n"
            f"{notices_list}"
        )


def cluster_policy(task: BaseOperator):
    """Ensure Tasks have non-default owners."""
    _check_task_rules(task)
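The third hook type, task_instance_mutation_hook, is not illustrated above. A minimal sketch in the same spirit would re-route task instances that are being retried to a dedicated queue; note that "retry_queue" is just an example name, not an Airflow built-in:

```python
def task_instance_mutation_hook(task_instance):
    # Send retried attempts of a task to a separate queue so fresh
    # tasks aren't starved by retries. "retry_queue" is an example
    # queue name, not an Airflow default.
    if task_instance.try_number >= 1:
        task_instance.queue = "retry_queue"
```

Because this hook is called right before task execution (as noted above), it operates on the individual TaskInstance rather than on the task definition.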
https://airflow.apache.org/docs/apache-airflow/2.2.4/concepts/cluster-policies.html
In a system like the BeOS, much effort goes into documenting the specifics of programming the BeOS and not as much is spent documenting more common parts of the libraries, such as the C library. This makes sense, since there are numerous books on the C library and POSIX that cover it in adequate detail. Unfortunately, when some of us incorrigible C weenies add functions or extensions to the C library it means that they tend not to get documented. Thus, it pays to peruse the header files found in: /boot/develop/headers/posix/ If you poke around in this directory, one "interesting" header you'll find is parsedate.h. This is the header file for the parsedate() function that I added to the BeOS C library (which is part of libroot.so). The parsedate() routine is a sophisticated date parsing function that knows how to convert from a human-readable string into a standard time_t variable (i.e., the number of seconds since January 1, 1970). It even knows about time zones. The parsedate() routine is pretty nifty, because it understands common time formats like "Mon, June 10th, 1993 10:00:03 am GMT". In fact, parsedate() understands virtually every imaginable verbose time format. This includes all the typical formats that you'd see in an e-mail message or Usenet news posting, or that other programs would print using ctime() or strftime(). The initial list of date formats supported by parsedate() was generated by culling the Date: line from around 80,000 news postings, so it's reliably comprehensive. Having a routine that can parse such strings and return a canonical integer time is very useful if you ever have to parse an e-mail header or input a date that was output from another program. One drawback to using time formats such as these is that their rigidity makes them difficult for users to type. This becomes an issue if you need users to input a date as part of your program and you want to use parsedate(). 
To remedy the problem of inflexible time formats, parsedate() also understands many natural date and time specifications. The parsedate.h file alludes to this, but doesn't really go into detail about what these formats are. The parsedate function accepts the following "natural" time formats (square brackets indicate an optional item):

  yesterday
  today
  tomorrow
  [last | next] dayname    (monday, tuesday, etc; e.g., last monday)
  [last | next] hour
  [last | next] week
  [last | next] month
  [last | next] year
  number minute            (e.g. 15 min, -30 minutes)
  number hour              (e.g. 4 hours, -1 hour)
  number day               (e.g. 5 days, -3 days)
  number month             (e.g. 1 month, -2 months)
  number year              (e.g. 2 years, -3 years)

Some slightly less casual but still fairly loose formats also work:

  11/10/97 3pm
  Dec 25th 5:00pm
  Friday July 9th 10am
  Sunday

Because parsedate() understands these types of formats, you can use it in a variety of situations. It's easy to imagine a reminder/calendar program that, as an option, lets users just type a day (i.e., remind me next Thursday of ...). It's also possible to imagine a mail filter which would scan message text for date strings and build a calendar automatically (strings like "this Friday" are easily detectable).

parsedate() is also helpful in the BeOS Find panel. For example, if you need to find everything created in the last two days, you can do a find by last modification time and simply enter the string "-2 days". Other common finds might be everything modified since "yesterday" or "last week".

Another aspect of parsedate() is that when there's a choice of how to interpret a date, it assumes you want a date after the current time. For example, if today is Wednesday and you give a time of "monday", parsedate() interprets that to mean next Monday. If that's not what you want, you may have to be more explicit or use a relative time format like "-2 days".
We'll add support for other "casual" date formats as we come across them (I've added about 10 more since I started writing this article). If you have suggestions for other common formats, send them to us and we'll see about adding them to the list (barring ambiguity problems).

To wrap things up, here is a simple demonstration program that shows you how to use parsedate() and lets you play with inputs to it.

/*
   This is a simple program to demonstrate using parsedate()
   on the BeOS.

   dbg@be.com
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <parsedate.h>

int
main(int argc, char **argv)
{
    char       str[256];
    time_t     remind_time;
    struct tm *tm;

    printf("\nI am Chronos, the keeper of time.\n");
    printf("I am feared by ctime() implementors everywhere.\n");
    printf("Enter a time and I will parse it!\n\n");

    while (fgets(str, sizeof(str), stdin) != NULL) {
        if (str[0] == '\n')
            continue;

        if (strcmp(str, "quit\n") == 0 || strcmp(str, "exit\n") == 0)
            break;

        /* woo-hoo! here it is...dun dun da nah... the call to
           parsedate()! pretty complex eh? The -1 argument means
           to parse relative to now. */
        remind_time = parsedate(str, -1);

        if (remind_time == ~0) {
            fprintf(stderr, "\nHmmm:\n %sis not recognized.\n", str);
            fprintf(stderr, "You should tell my trusty assistant dbg@be.com\n\n");
            continue;
        }

        printf("\nThe canonical form for the time you entered was:\n");

        /* convert to a struct tm which has all the time fields broken out */
        tm = localtime(&remind_time);
        printf(" Year: %d Month: %d Day: %d Hour: %.2d:%.2d\n",
               tm->tm_year, tm->tm_mon + 1, tm->tm_mday,
               tm->tm_hour, tm->tm_min);

        /* also print the result of ctime() */
        printf(" %s\n", ctime(&remind_time));
    }
    return 0;
}

Have fun and maybe next time we'll poke around in that mysterious looking header file called malloc_internal.h...

When was the last time you used a RAM drive? Do you still remember what a RAM drive is?
On my first PC (an AT 286, 12 MHz, 2 MB RAM) a RAM disk was the only workable way to use more than 1 MB of memory while running under MS-DOS. Unlike creaky MS-DOS, a current OS like the BeOS usually uses free memory in the file system cache; in general, caching algorithms provide much better performance than a simple RAM disk. Recently though, I was overcome by inexplicable nostalgia for the good old days and I decided to write a driver for that "disk" of yesteryear. So if you have memory to burn, you may want to try using some of your surplus as a RAM "disk." If nothing else you'll find out that a RAM disk driver is an excellent example of using a device driver not only as a hardware interface but also to emulate it! And if you develop a file system driver, a big RAM disk can speed up the testing. I assume that you know the basics of BeOS device drivers, so I'll focus on features specific to mass storage devices (floppy drive, IDE disk, IEEE 1394 [FireWire] disk) and on some tricks related to the volatile nature of RAM. The driver has to support all standard entry points. In addition it has to handle some mass storage device I/O control codes. I tried to make the driver as simple as possible, so it supports only one RAM disk. Its dynamic configuration is almost nonexistent, and its emulation of the hard drive seek process is very primitive. This simplification let me forget about synchronization problems. Here's the driver source code. 
You can find the original source, unaltered for or by e-mail transmission, at:

/****************** cut here ******************************/

#include <OS.h>
#include <Drivers.h>
#include <KernelExport.h>
#include <stdlib.h>

status_t vd_open(const char *name, uint32 flags, void **cookie);
status_t vd_free(void *cookie);
status_t vd_close(void *cookie);
status_t vd_control(void *cookie, uint32 msg, void *buf, size_t size);
status_t vd_read(void *cookie, off_t pos, void *buf, size_t *count);
status_t vd_write(void *cookie, off_t pos, const void *buf, size_t *count);

static void     format_ram_drive(void *buf);
static uchar   *create_ram_drive_area(size_t drive_size);
static status_t delete_ram_drive_area(void);
static void     emulate_seek(off_t pos);

#define RAM_DRIVE_RELEASE_MEMORY  (B_DEVICE_OP_CODES_END + 1)
#define RAM_DRIVE_EMULATE_SEEK    (B_DEVICE_OP_CODES_END + 2)

#define RAM_DRIVE_SIZE        (8 * 1024 * 1024)
#define RAM_BLOCK_SIZE        512
#define MAX_SEEK_TIME         1000.0    /* microseconds */
#define PREFETCH_BUFFER_SIZE  (32 * 1024)

static const char *const ram_drive_area_name = "RAM drive area";

uchar icon_disk[B_LARGE_ICON * B_LARGE_ICON];
uchar icon_disk_mini[B_MINI_ICON * B_MINI_ICON];

int    emulate_seek_flag = FALSE;
uchar *ram = NULL;

static const char *vd_name[] = {
    "disk/virtual/ram_drive",
    NULL
};

device_hooks vd_devices = {
    vd_open,
    vd_close,
    vd_free,
    vd_control,
    vd_read,
    vd_write
};

status_t
init_driver(void)
{
    dprintf("vd driver: %s %s, init_driver()\n", __DATE__, __TIME__);
    ram = create_ram_drive_area(RAM_DRIVE_SIZE);
    if (ram == NULL)
        return B_ERROR;
    return B_NO_ERROR;
}

void
uninit_driver(void)
{
    dprintf("vd driver: uninit_driver()\n");
}

const char **
publish_devices()
{
    dprintf("vd driver: publish_devices()\n");
    return vd_name;
}

device_hooks *
find_device(const char *name)
{
    dprintf("vd driver: find_device()\n");
    return &vd_devices;
}

status_t
vd_open(const char *dname, uint32 flags, void **cookie)
{
    dprintf("vd driver: open(%s)\n", dname);
    return B_NO_ERROR;
}

status_t
vd_free(void *cookie)
{
    dprintf("vd driver: free()\n");
    return B_NO_ERROR;
}

status_t
vd_close(void *cookie)
{
    dprintf("vd driver: close()\n");
    return B_NO_ERROR;
}

status_t
vd_read(void *cookie, off_t pos, void *buf, size_t *count)
{
    size_t   len;
    status_t ret = B_NO_ERROR;

    if (pos >= RAM_DRIVE_SIZE) {
        len = 0;
    } else {
        len = (pos + (*count) > RAM_DRIVE_SIZE) ? (RAM_DRIVE_SIZE - pos) : (*count);
        emulate_seek(pos);
        memcpy(buf, ram + pos, len);
    }
    *count = len;
    return ret;
}

status_t
vd_write(void *cookie, off_t pos, const void *buf, size_t *count)
{
    size_t   len;
    status_t ret = B_NO_ERROR;

    if (pos >= RAM_DRIVE_SIZE) {
        len = 0;
    } else {
        len = (pos + (*count) > RAM_DRIVE_SIZE) ? (RAM_DRIVE_SIZE - pos) : (*count);
        emulate_seek(pos);
        memcpy(ram + pos, buf, len);
    }
    *count = len;
    return ret;
}

status_t
vd_control(void *cookie, uint32 ioctl, void *arg1, size_t len)
{
    device_geometry *dinfo;

    dprintf("vd driver: control(%d)\n", ioctl);

    switch (ioctl) {
    /* generic mass storage device I/O control codes */
    case B_GET_GEOMETRY:
        dinfo = (device_geometry *) arg1;
        dinfo->sectors_per_track = RAM_DRIVE_SIZE / RAM_BLOCK_SIZE;
        dinfo->cylinder_count    = 1;
        dinfo->head_count        = 1;
        dinfo->bytes_per_sector  = RAM_BLOCK_SIZE;
        dinfo->removable         = FALSE;
        dinfo->read_only         = FALSE;
        dinfo->device_type       = B_DISK;
        dinfo->write_once        = FALSE;
        return B_NO_ERROR;

    case B_FORMAT_DEVICE:
        format_ram_drive(ram);
        return B_NO_ERROR;

    case B_GET_DEVICE_SIZE:
        *(size_t *) arg1 = RAM_DRIVE_SIZE;
        return B_NO_ERROR;

    case B_GET_ICON:
        switch (((device_icon *) arg1)->icon_size) {
        case B_LARGE_ICON:
            memcpy(((device_icon *) arg1)->icon_data, icon_disk,
                   B_LARGE_ICON * B_LARGE_ICON);
            break;
        case B_MINI_ICON:
            memcpy(((device_icon *) arg1)->icon_data, icon_disk_mini,
                   B_MINI_ICON * B_MINI_ICON);
            break;
        default:
            return B_BAD_TYPE;
        }
        return B_NO_ERROR;

    /* device-specific I/O control codes */
    case RAM_DRIVE_RELEASE_MEMORY:
        return delete_ram_drive_area();

    case RAM_DRIVE_EMULATE_SEEK:
        emulate_seek_flag = *(int *) arg1;
        return B_NO_ERROR;

    default:
        return B_ERROR;
    }
}

static void
format_ram_drive(void *buf)
{
    static const char format_str[16] = "RAM drive ";
    uchar *ptr = (uchar *) buf;
    off_t  i;

    dprintf("vd driver: format_ram_drive(%08x)\n", buf);
    for (i = 0; i < RAM_DRIVE_SIZE / 16; i++) {
        memcpy(ptr, format_str, 16);
        ptr += 16;
    }
}

static uchar *
create_ram_drive_area(size_t drive_size)
{
    void   *addr;
    area_id area = find_area(ram_drive_area_name);

    if (area == B_NAME_NOT_FOUND) {
        area = create_area(ram_drive_area_name, &addr,
                           B_ANY_KERNEL_ADDRESS, /* kernel team will own this area */
                           drive_size, B_LAZY_LOCK,
                           B_READ_AREA | B_WRITE_AREA);
        if ((area == B_ERROR) || (area == B_NO_MEMORY) || (area == B_BAD_VALUE))
            addr = NULL;
    } else {
        area_info info;
        get_area_info(area, &info);
        addr = info.address;
    }
    return (uchar *) addr;
}

static status_t
delete_ram_drive_area(void)
{
    area_id area = find_area(ram_drive_area_name);

    if (area == B_NAME_NOT_FOUND)
        return B_ERROR;
    else
        return delete_area(area);
}

static void
emulate_seek(off_t pos)
{
    static off_t old_pos = 0;

    if (!emulate_seek_flag)
        return;

    if (abs(pos - old_pos) > PREFETCH_BUFFER_SIZE) {
        old_pos = pos;
        snooze((int)(rand() * MAX_SEEK_TIME) / RAND_MAX);
    }
}
/****************** cut here ******************************/

What happens is that init_driver() creates the kernel memory area that's used to store the data. The driver can't use malloc()/free() because when the driver is unloaded the memory (and all data) is lost. It's actually present somewhere, but there's no easy way to find it when the driver is loaded again. So init_driver() calls create_ram_drive_area(), which tries to find the previously created memory area with the name "RAM drive area." If the search is unsuccessful, the driver is being loaded for the first time. In that case create_ram_drive_area() creates the memory area. It uses a currently undocumented flag, B_ANY_KERNEL_ADDRESS.
This flag gives ownership of this memory area to the kernel, so the area will not be deleted when the application that opened the driver quits. It also uses the B_LAZY_LOCK flag so the driver doesn't consume RAM until the first time you use it. I could have used B_FULL_LOCK and put the memory allocation in vd_open(), but being lazy I didn't want to provide a critical section synchronization for it. vd_open() can be called a few times simultaneously; init_driver() cannot. publish_devices() creates the device file in /dev/disk/virtual/ram_drive. The /virtual/ram_drive part of the name is arbitrary, but /dev/disk is mandatory if you want to use a standard disk setup program—like DriveSetup—to configure the RAM disk. vd_open(), vd_free(), and vd_close() do nothing and return success. vd_read() and vd_write() check to see if the caller tries to read/write over the end of the disk. They adjust the requested length accordingly, then simply copy the data from/to the requested offset in the RAM area to/from the caller's buffer. If emulate_seek_flag is set the driver calls emulate_seek(). emulate_seek() is a crude imitation of real hard drive mechanics and caching. The current implementation is not in a critical section so it is capable of multiple concurrent seeks. It would be great to have a real drive with an infinite number of head arms! Readers are welcome to create a more realistic model. Still, with this function implemented, the RAM drive is better suited for testing a file system driver. It can block the caller's thread as a driver for a real drive would do. vd_control() handles four generic I/O control codes for a mass storage device: B_GET_GEOMETRY, B_FORMAT_DEVICE, B_GET_DEVICE_SIZE, B_GET_ICON. The first three are mandatory; the last one is optional. This driver has zero-initialized arrays for large and small icons. The real driver should provide more pleasing icons than black rectangles. 
RAM_DRIVE_RELEASE_MEMORY deletes the allocated memory area and thus destroys all information on the RAM drive. RAM_DRIVE_EMULATE_SEEK sets or clears the emulate_seek_flag. A control panel application for the RAM drive could send such commands.

Now here's how to build and install the driver:

1. Instruct the linker to produce an add-on image.
2. Disable linking to the default shared system libraries.
3. Export the driver's entry points—for example, by exporting everything.
4. Place a copy of the appropriate kernel file (kernel_joe, kernel_mac, or kernel_intel) in your project directory. Link against this file.
5. Copy the compiled driver into the /boot/home/config/add-ons/kernel/drivers directory.
6. Use your favorite disk format program or BeOS preferences/DriveSetup to partition and/or install BFS on the /dev/disk/virtual/ram_drive device.

That's it. You're all set!

I hope that this simple device driver may help get somebody started in BeOS driver development. If any seasoned device driver writers out there have feature requests or comments (why does Be have such-and-such stupid restriction or does not have such-and-such device driver API ...), let me know and I'll try to implement them. The Intel port in particular may require some changes: @#$% ISA 16 MB limit on DMA, PCI interrupt sharing, ISA PnP, etc. Speak now and you can make your life running the BeOS on Intel that much easier!

Hello Be people. As the newest Developer Technical Support Engineer, I have spoken with some of you recently regarding specific problems but this is my first taste of the Be Newsletter audience. Playing live shows in a band once or twice a month has tamed my wild urge to run and hide in front of a crowd, but writing an article that persists over time definitely feels different than being live on stage. So with a little nervousness, I present you with my first (not last!) bit of Be insight, in two parts. The subjects were inspired by many things but the titles were inspired by my favorite thing—BYOB!
Melissa has graciously given me the task of evaluating bugs submitted by developers. In attempting to climb this ant hill I have found that some people are natural bug writers and others, well, are more like Hostess Twinkies. To help out all those American snack cakes out there, I will offer a few guidelines to writing a good bug (and the only *good* bugs are the ones you can hunt down and kill!).

Step 1: A reproducible case is needed. If you can't reproduce it, neither can we.

Step 2: Document each step necessary to reproduce the bug. For example:

1. Open a Terminal
2. Type twinky -old
3. Open a Tracker window
4. Create a folder called Hostess
5. Right-click the folder
6. Crash!

Step 3: Document your machine configuration. Include all the hardware AND software you are running. For example, a developer recently forgot to mention that MALLOC_DEBUG was turned on.

Step 4: If the app that crashes is yours, include the code. Don't worry, that little window on the bug report form scrolls for a long time! Just paste it right in there. Now I know you're thinking "Hey, I've got 50 source files, they won't all fit." You need to find the piece of code that is causing the crash and put it in a little test app. HelloWorld is great for this. It has an app and a window and a view. Most bugs don't need more than that to cause trouble. HINT: This can also help you find bugs in your own code! If the only way to cause the crash is to include your entire source, then zip it up and send it in to devservices@be.com with a reference to the bug number that you get when you submit the description using the web form.

Step 5: The last part of Step 4 is really Step 5. Write down your bug number so that you can contact us about it in the future.

If you skip any one of these steps, your bug will more than likely get classified as "Unreproducible." This is bad for you and for Be, because your bug will still be running around in the next release causing you and everyone else problems!
If you have any trouble with the bug reporting process or you feel that we've made a grave error and classified your bug as a feature, please don't hesitate to fill out a Developer Tech Support form in the Registered Developer Area and we will look into your problem.

Are you building that killer tractor app, and you want the interface to be oh-so-cool? It's not going to knock their socks off with a bunch of plain-jane buttons. You need to customize! So I've prepared some sample code to get you started making your very own buttons. Find the complete archive on the Be website at:

The BPictureButton class is a button that takes two BPictures as arguments. You can build these pictures out of BBitmaps, a combination of BBitmaps and BButtons (for that grey look) or any other drawing routine. The two pictures each represent one of two states, on and off.

I started by scanning a familiar form. I used Photoshop on the Mac ;-( to save my scanned image in raw format. You need to remember the dimensions of the image and the bit depth. The width needs to be a multiple of four in order to use the command line tool craw to convert them into code. My images were 48 x 48 so I just typed "$craw 48 48 myrawimage" in a terminal. (For more information on how to use craw check out... Be Engineering Insights: craw, shex, and script: Excuse Me? This is an oldie but a goodie! William Adams also addressed creating custom graphics from images in a News From The Front article... Some of the samples he referred to have been moved, including mkimghdr, which is now at....)

Craw generates an unsigned char array. blue4x4 and blue4x4on are the names of my two arrays. Now I'm ready to fill some bitmaps and create a button...

/* NOTE: This is all happening during the construction of a
   Window inheriting from a BWindow. */
BRect rect;
rect.Set(0, 0, 47, 47);

// bitmaps for the pictures
BBitmap onBitmap(rect, B_COLOR_8_BIT);
BBitmap offBitmap(rect, B_COLOR_8_BIT);

// fill bitmaps
onBitmap.SetBits(blue4x4on, 18432, 0, B_COLOR_8_BIT);
offBitmap.SetBits(blue4x4, 18432, 0, B_COLOR_8_BIT);

/* Next, I create two BPictures and draw my bitmaps in them. */

// temp view for creating the pictures
BView* tempView = new BView(rect, "temp", B_FOLLOW_NONE, B_WILL_DRAW);
AddChild(tempView);

// create "on" picture
BPicture* on;
tempView->BeginPicture(new BPicture);
tempView->DrawBitmap(&onBitmap);
on = tempView->EndPicture();

// create "off" picture
BPicture* off;
tempView->BeginPicture(new BPicture);
tempView->DrawBitmap(&offBitmap);
off = tempView->EndPicture();

// get rid of tempView
RemoveChild(tempView);
delete tempView;

/* Finally I create my BPictureButton and the other things that it
   needs, including the message that will be sent to its target. */

// create a message for the button
BMessage* pictmsg = new BMessage(BUTTON_MSG);
pictmsg->AddString("text", "Picture Button");

// create a picture button using the two pictures
rect.Set(120, 45, 167, 92);
BPictureButton* pictureButton = new BPictureButton(rect, "picture",
    off, on, pictmsg, B_TWO_STATE_BUTTON);

/* The last argument for the BPictureButton is a flag for the mode.
   B_TWO_STATE_BUTTON behaves like a toggle switch. Turn it on with
   one click. Turn it off with another. A B_ONE_STATE_BUTTON is only
   on while you hold the mouse down. */

/* Once you have created your button you can add it as a child to
   the window. The buttonView is a BTextView to which we will send
   our message. */

// add view and button to window
AddChild(buttonView);
AddChild(pictureButton);

/* Finally, in order to direct the message your button sends, you
   need to assign it a target. */

// make the view the target of the button
pictureButton->SetTarget(buttonView);

Now when you click on the button, a message is sent to the buttonView which displays the string contained by the message. To see how the BStringView handles the message, please check out the complete sample. Have fun and remember, we're happy to have you BYOB at Be!
If you need more information, don't hesitate to ask! It is fashionable to complain about Comdex, from the bad food, the taxi lines, expensive hotel rooms, sore feet at the end of the day and silly carnival acts in the booths of companies who should know better. Perhaps, but I still like Comdex, and I liked this one even better, for two reasons. The first is Umax. Our partner graciously hosted us on their booth in the main hall, providing us with the opportunity to show both the PowerPC and the Intel versions of the BeOS as befits their own business addressing both standards. As expected we got both good reactions and blank stares, when not eyes rolling. Some visitors knew us, or had heard about us and were happy to get a progress report. Others had no idea we existed and a few questioned our sanity. When we got the opportunity, we disposed easily of the mental health question by pointing out the difference between OS/2 trying to dislodge Windows and the BeOS happily coexisting with the general-purpose OS. This experience is a useful reminder of what awaits us in the Intel-based market. It's not just larger than the PowerPC market, it's much different and our reputation, the exposure we enjoy in the PowerPC segment, aren't worth much in the new space. The newer Intel version performed well during the week, much better than we had anticipated and, towards the end, we sneaked in a "just baked" port on an Intel-powered laptop. The second reason to like Comdex this year is the abundance of technology coming out of gestation, ready to become a real product at Fry's some time in the next twelve months. Flat panels were big, literally and in their ubiquity. A forty-six-inch panel is still horribly expensive, but the smaller models are soon to grace the desktops of Corporate America. 
Closer to our business, video cameras, ever higher-speed graphic cards, still cameras, IEEE 1394 connections, high-bandwidth disk adapters...all sing the song of better, faster, more affordable digital media. This year we didn't hear the old saw: "The industry is becoming commoditized, boring, less innovative." Confusing, a little disorganized perhaps, but we like that; there is little room for a start-up such as ours in a perfectly stable and organized world. As for the expensive rooms and the bad food, a minivan gets you a little out of the way, the hotel prices plummet and you even find restaurants without slot machines.

We've all seen it...the "Data > 32K" compiler error message. What does it mean? It means you've asked for too much local data. The 32k limit is enforced by the compiler in order to be ANSI compliant, but beyond that, is biting off huge chunks of local data bad programming? Earl Malmrose saw some advantages in the practice:

“Being local, you have automatic garbage collection of a sort - [the data] will never be leaked. You also have to write less code.”

Jon Watte listed some reasons behind the limitation, among them:

“In a multi-threaded environment, each thread has to be given its own stack...there is a very real trade-off between how many threads you can create, and how much stack space they get.”

Nonetheless, there were objections to the "bad programming practice" characterization. It was contended that the negative citations were architecture specific (hardware or software)—that there were no intrinsic reasons why huge local data is bad. Then Osma Ahvenlampi added this:

“In addition to the problems pointed out by others, note that it [relying on the stack for memory allocation] will not give you any kind of error recovery. What happens when there isn't enough memory to give you that big a stack frame? Crash, most likely.”

Can a PGP-style signature be stored as an attribute of a file? (Sure.)
But would the attribute itself affect the signature for the file? (Not if you didn't want it to—attributes can be selectively ignored.) Expanding the equation, what if you wanted to verify attributes as well as data? Peter Folk suggested a 3-tier scheme:

- PGP__SIG is a signature for the main data stream.
- PGP_<attributename>_SIG is a signature for whichever attributes you care to sign (excluding PGP__SIG).
- PGPSIG_SIG is a signature of all the PGP_*_SIG attributes.

This may be overkill, suggests Jon Watte:

“Having one signature for each attribute is rather wasteful... Instead, you can sign the data with one signature, and the union of all non-changing [attributes] with another; that should be enough even for the most paranoid among us.”

Speaking of paranoia, the thread veered into a discussion of reliable key retrieval and verification. Anthony Towns provided a tidy wish list that summarized the elements of the problem (paraphrased here):

- Some way of storing public keys. This would naturally expand to include encryption as well as verification keys...
- Some way of easily requesting keys. This is strongly related to the public key database: first you check it, then you check a public key server somewhere.
- Library support for verifying signatures.
- Convenient methods for signing files (including support for various signature algorithms).
- An integrated encryption API.

But should Be be in the encryption business? Some folks think not. Also, it was contended that the entire signature-in-an-attribute approach is flawed: Attributes can get lost in an ftp transfer (for example). Important signature information should be encoded in the data portion of a file.

Should the Width() of a BRect return the "virtual" width of the rectangle, or the number of pixels the stroked and filled rect touches?
The function returns the former, a practice that confuses many developers: when you ask for the width of a rectangle, shouldn't you get a count of the number of (horizontal) pixels it encloses? But, goes the counter-argument, such a measurement would need to consider the pen size, so the status quo is proper. If you want the pixels-touched measure, you have to add the pen size to the rectangle width. Eric Berdahl contends that the width+pen_size business is a product of Be's "center pen" approach. He would like to see Be adopt a more flexible pen model (i.e. "inset" and "outset" pens).

Devtalkers take a step back (or across, or something) and discuss Be's marketing approach, its seemingly unshakable association with Apple, whether a brainwash-the-CS-department approach can work, the role of free and eminently portable software in an OS company's success, and other non-technical matters that all broached the question: How shall Be thrive? Lots of opinions, or, at least, a lot of attitudes.

Side-stepping back into the tech stream, another question was raised: What's a Be app? Does a port count, or is there some other defining element? Some proposed litmus strips: BWindow (if you don't have a BWindow, you're not a Be app); BMessages (non-UI apps, i.e. servers, can still qualify if they respond to BMessages).
https://www.haiku-os.org/legacy-docs/benewsletter/Issue2-47.html
twisted.conch.manhole_tap

twisted.conch TAP plugin for creating telnet- and ssh-accessible manhole servers.

makeService creates a manhole server service. Its options are:

- "telnetPort": strports description of the address on which to listen for telnet connections. If None, no telnet service will be started.
- "sshPort": strports description of the address on which to listen for ssh connections. If None, no ssh service will be started.
- "namespace": dictionary containing desired initial locals for manhole connections. If None, an empty dictionary will be used.
- "passwd": name of a passwd(5)-format username/password file.
- "sshKeyDir": the folder that the SSH server key will be kept in.
- "sshKeyName": the filename of the key.
- "sshKeySize": the size of the key, in bits. Default is 4096.

makeService takes a dict of these options and returns a twisted.application.service.IService.
https://twistedmatrix.com/documents/current/api/twisted.conch.manhole_tap.html
(Suggestions for a better name than "sidecar" welcome.)

(In reply to comment #0) >. That does sound like a win, but it would mean the keys were strings rather than entire object paths (which is in practice a source of confusion within Mission Control - about half the variables are just the unique tail of the object path or bus name, and the other half are the whole thing - which I'm trying to eliminate by normalizing to the whole thing). Other possibilities include:

* that, but namespaced: org/laptop/Misc (leading to the object /o/f/T/C/gabble/jabber/foobar/org/laptop/Misc)
* have the entire object path as the key, but have the client compute the object path by appending the tail it wants to the connection's object path, then look that up
* declare that sidecars (may? must) have a Type like channels do, which makes it a little easier for clients to scan through the hash table (one hash lookup per sidecar, rather than one hash lookup plus one linear search of a GStrv or equivalent)

> (Suggestions for a better name than "sidecar" welcome.)

I think the namespace should probably mention Connection somewhere (o.f.T.Connection.Sidecar? o.f.T.ConnectionSidecar?), although I could be persuaded otherwise. Appendage? Appendix? I'd say extension, but conventional telepathy-glib usage already assigns a meaning to that.

Quick note from spec cabal: a{s: main interface => (oa{sv}): details}

Untagging, needs more work. Assigning to Will for the more work.

Working on updated spec. Updated:
• The Sidecar interface is gone;
• The Sidecars property is an a{s(oa{sv})} as above.

See <> for the property's documentation; my 'sidecars' git branch is at <;a=shortlog;h=refs/heads/sidecars>.

That looks good to me, but I'd prefer it diverted off into Connection.FUTURE for the initial release in the spec and implementation in Gabble.

So I've been agonizing about this some more.
Having the only change notification for Sidecars being the connection moving to Connected seems like a great idea, but in fact may not be. On XMPP, we could imagine sidecars being dependent on server components returned in the disco#items query to the server, so you have to disco a whole bunch of jids — which might be on different machines to your server, and may have fallen down a well so you need to wait for them to time out — before you move to Connected. (Gabble currently moves to Connected before doing any of this, presumably for this reason.) So, I thought to myself, we need a SidecarAdded signal. But actually that's not good enough: clients need to be able to know whether a sidecar will ever show up, or if they're going to be sitting around waiting for SidecarAdded forever. Let's add a method: RequestSidecar ( s: Interface) → ( o: Sidecar_Path , a{sv}: Properties ) Then a client which wants a particular sidecar can just call that method, and when it returns deal with the error or use the sidecar. Given this, there's relatively little need for the Sidecars property at all: if you want one, ask for it. Explicitly requesting sidecars also means that if a sidecar is expensive in terms of network traffic, it doesn't have to be initialized unless someone actually wants it. Thoughts? Pushed the latest version of this, and uploaded the documentation at <>. There's now one method: EnsureSidecar ( s: interface ) → ( o: sidecar_path , a{sv}: immutable_properties ) which you can call whenever you like, and get a sidecar back for that interface. It assumes that there's only going to be one sidecar per (connection, interface), with the rationale that if you want to expose more than one object with a given interface, you can either make them children of your sidecar or make them channels. I think this is ready for spec cabal bikeshedding, particularly given that I've implemented it in Gabble and it seems to work. :-) review+ from me as a draft. On the 0.19.0 wishlist. 
Draft exists in 0.19.0; repurposing this bug for a final version in future.

I played a bit with sidecars recently and would like to discuss some points regarding them.

- Currently, once loaded, sidecar objects stay alive for the whole lifetime of the connection, right? Maybe we could do the same kind of magic as in the client-interest branch and destroy one once all the clients that ensured it are gone? This could be useful, for example, if a sidecar subscribes to some PEP or pubsub nodes, to reduce XMPP traffic.
-.

(In reply to comment #11) > -. I changed the wording to be vague enough, and the API to be flexible enough, that I think sidecars can use client interests fine. > -). I'd be happy to adjust the spec wording so it specifically allows this. It is rather problematic in telepathy-glib, because TpContactFeature is an enum rather than a (quark isomorphic to a) string, so there's no namespacing. We could add an orthogonal TpContactExtendedFeature mechanism or something, but that's a bit full of bees. When we break telepathy-glib ABI for spec 1.0 and/or GDBus, we should either change TpContactFeature into a quark, or change both TpProxy and TpContact features into an opaque pointer of some sort.

Created attachment 56330 [details] [review]
Protocol: add Sidecars interface

This was gathering dust in my git repository. Maybe it'd be useful if this is ever undrafted? --.
https://bugs.freedesktop.org/show_bug.cgi?format=multiple&id=24661
The Continue node passes control to the next iteration of the enclosing while, do, for, or foreach statement in which it appears.

Examples

In this example, the For Number Loop node is called at start. It then loops 5 times; on each iteration it calls the "If" node, and if the index is equal to 3 it skips the current iteration. The "Debug Log" node is called after the "If" node finishes, whether the condition is true or false; but when the index is 3 it is not called, because the iteration is skipped by the "Continue" node.

Flow Graph:

Generated Script:

using UnityEngine;
using System.Collections.Generic;

public class Program : MonoBehaviour {
    private int index;

    public void Start() {
        for(index = 0; index < 5; index += 1) {
            if((index == 3)) {
                continue;
            }
            Debug.Log(index);
        }
    }
}

Output:

0
1
2
4
https://maxygames.com/docs/unode/nodes/continue/
I've covered a few sorting algorithms in Python that I have learned and analyzed in my computer science and algorithms classes: Insertion Sort Using Python, Selection Sort in Python, and Merge Sort in Python Using Pythonista 3 on iPad Pro. Today, in one of my algorithms design and analysis classes I learned about Quicksort. My instructor was particularly enthusiastic about Quicksort, because he thinks it is a very elegant and practical algorithm. Much like Merge Sort, it is a divide-and-conquer sorting algorithm that sorts the items in O(n log n).

Quicksort in Python

As with all new learnings, I like to practice writing the algorithms in Python, which I am also currently learning. Here is today's version of Quicksort in Python.

import random

def quicksort(items):
    def sort(lst, l, r):
        # base case
        if r <= l:
            return
        # choose random pivot
        pivot_index = random.randint(l, r)
        # move pivot to first index
        lst[l], lst[pivot_index] = lst[pivot_index], lst[l]
        # partition
        i = l
        for j in xrange(l+1, r+1):
            if lst[j] < lst[l]:
                i += 1
                lst[i], lst[j] = lst[j], lst[i]
        # place pivot in proper position
        lst[i], lst[l] = lst[l], lst[i]
        # sort left and right partitions
        sort(lst, l, i-1)
        sort(lst, i+1, r)

    if items is None or len(items) < 2:
        return
    sort(items, 0, len(items) - 1)

Quicksort is an in-place sorting algorithm. Therefore, when you pass it an array of integers, for example, it will mutate that array and sort it in increasing order. The fact that Quicksort doesn't use a lot of additional memory like Merge Sort makes it very practical and desirable. One can sort a list of integers using the following Python script and my quicksort function.

numbers = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
print 'unsorted: ', numbers
quicksort(numbers)
print 'sorted: ', numbers

unsorted: [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
sorted: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

This is my first time working with Quicksort and it is solely based on what I learned today in my algorithms class.
A couple of things are worth noting, because I plan to do further investigation on them in the near future. The first is the use of a random pivot, and the second is the placement of my pivot at the front of the list.

Choosing a Good Pivot is Critical

My instructor was quite adamant about the importance of choosing a good pivot. In class we discussed choosing the pivot randomly, which doesn't sound intuitive, but actually has good results when looked at mathematically. On average, a randomly chosen pivot will provide O(n log n) and worst case O(n²). The worst case is apparently highly improbable, which means a randomly chosen pivot is on average "good enough." If O(n²) is unacceptable even by chance, there is a median-of-medians alternative for choosing the pivot that was discussed later in the lecture. I will stick with a random pivot, and therefore in my code above, I used the randint function to randomly choose the index of the next pivot.

# choose random pivot
pivot_index = random.randint(l, r)

Pivot Moved to Beginning of List

I'm not sure why this strikes me as interesting, but I wonder about the need to move the pivot to the beginning of the list. It seems necessary and practical to keep track of the pivot, but also seems out of place in the code. I move the pivot to the beginning of the list, because in class we always chose the first item as the pivot for simplicity, but I want to randomize the choice of the pivot in my code. To keep my thinking straight with what I learned in class, I move the pivot to the beginning of the list. It makes it really easy to track the location of the pivot and maintain the partitions, but I'm wondering if this is actually done in other implementations.

# move pivot to first index
lst[l], lst[pivot_index] = lst[pivot_index], lst[l]

I did a small amount of investigation on this, and found that in the partition function pseudocode for quickselect on Wikipedia there is mention of moving the pivot to the end of the list.
function partition(list, left, right, pivotIndex)
    pivotValue := list[pivotIndex]
    swap list[pivotIndex] and list[right]  // Move pivot to end
    ...

That makes me feel somewhat at ease, but I want to research that further.

Conclusion

I'm sure I will be changing this code as I learn more about Quicksort. You will soon see similar code when I mention Quickselect, which is something we also just covered in my algorithms class. If you're learning Quicksort, I hope this code helps! You can find me on twitter as @KoderDojo!

P.S. Check out the Quickselect Algorithm, which is a selection algorithm based on Quicksort.
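For comparison, here is a sketch (mine, in Python 3, not from the class) of the same algorithm with the pivot swapped to the end of the range instead, as in the Wikipedia partition pseudocode quoted above:

```python
import random

def quicksort_end_pivot(items):
    """Quicksort variant that moves the random pivot to the END of the range."""
    def sort(lst, l, r):
        if r <= l:
            return
        # choose a random pivot and move it to the end
        pivot_index = random.randint(l, r)
        lst[pivot_index], lst[r] = lst[r], lst[pivot_index]
        # partition: everything smaller than the pivot moves to the front
        i = l
        for j in range(l, r):
            if lst[j] < lst[r]:
                lst[i], lst[j] = lst[j], lst[i]
                i += 1
        # place the pivot in its final position
        lst[i], lst[r] = lst[r], lst[i]
        sort(lst, l, i - 1)
        sort(lst, i + 1, r)

    if items is not None and len(items) >= 2:
        sort(items, 0, len(items) - 1)

numbers = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
quicksort_end_pivot(numbers)
print(numbers)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Either convention works; tracking the pivot at one fixed end is what keeps the partition bookkeeping simple.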
https://www.koderdojo.com/blog/quicksort-algorithm-in-python
strnicmp()

Compare two strings up to a given length, ignoring case

Synopsis:

#include <string.h>

int strnicmp( const char* s1, const char* s2, size_t len );

Since: BlackBerry 10.0.0

Arguments:

- s1, s2 - The strings that you want to compare.
- len - The maximum number of characters that you want to compare.

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The strnicmp() function compares up to len characters from the strings pointed to by s1 and s2, ignoring case.

Returns:

- < 0 - s1 is less than s2.
- 0 - s1 is equal to s2.
- > 0 - s1 is greater than s2.

Examples:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main( void )
{
    printf( "%d\n", strnicmp( "abcdef", "ABCXXX", 10 ) );
    printf( "%d\n", strnicmp( "abcdef", "ABCXXX", 6 ) );
    printf( "%d\n", strnicmp( "abcdef", "ABCXXX", 3 ) );
    printf( "%d\n", strnicmp( "abcdef", "ABCXXX", 0 ) );
    return EXIT_SUCCESS;
}

produces the output:

-20
-20
0
0

Last modified: 2014-06-24
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/strnicmp.html
:\windows\syswow64\windowspowershell\v1.0\powers.

Hi Arvind, I need to export a specific folder within a PST (imported using your method) to CSV. There are currently 30 PSTs and doing it manually is time consuming. Can you provide a script which helps me to do this?

Here's a version which works with Redemption to add password-protected PSTs:

Add-Type -Path "c:\redemption\Interop.Redemption.dll"
$cred = Get-Credential
$rdosession = New-Object -ComObject Redemption.RDOSession
$rdosession.Logon()
dir C:\mailbox\*.pst | % { $rdosession.Stores.AddPstStoreWithPassword(($_.FullName), $null, ($_.Name), $cred.GetNetworkCredential().Password) }

Quite impressive that a shell script can import PST files into Outlook. Not everyone is that good at shell scripting, though, even given an example. Trying a script is an option, but during the process one might lose important PST files; for avoiding such accidents, use Add Outlook PST Tools to import multiple PST files into Outlook at one time, without losing any data.

Hello, the script seems to be fine... unless the PST is in a network folder. In this case, if I write:

dir "\\servername\path\existing.pst" | % { $namespace.AddStore($_.FullName) }

I receive an error message "RPC Server is unavailable". Is it possible to make this work with network PSTs also?

Claudio, you could map the network drive using a script and then use the path via drive letter:

net use z: \\servername\path\

then use Arvind's script with the path z:\existing.pst

Works perfectly
https://blogs.msdn.microsoft.com/arvindsh/2014/12/11/using-powershell-to-attach-pst-files-to-outlook/
While running regrtest with -R to find reference leaks I found a usage issue. When a codec is registered it is stored in the interpreter state and cannot be removed. Since it is stored as a list, if you repeatedly add the same search function, you will get duplicates in the list and they can't be removed. This shows up as a reference leak (which it really isn't) in test_unicode with this code modified from test_codecs_errors:

import codecs

def search_function(encoding):
    def encode1(input, errors="strict"):
        return 42
    return (encode1, None, None, None)

codecs.register(search_function)

Should the search function be added to the search path if it is already in there? I don't understand a benefit of having duplicate search functions. Should users have access to the search path (through a codecs.unregister())? If so, should it search from the end of the list to the beginning to remove an item? That way the last entry would be removed rather than the first.
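For illustration from a later Python's perspective: the sketch below (Python 3; the codec name "toycodec" and its search function are mine) shows that duplicate registrations are indeed silently accepted, and that a codecs.unregister() was eventually added, in Python 3.10, to remove a search function again:

```python
import codecs
import sys

def toy_search(encoding):
    """Search function for a do-nothing ASCII codec named 'toycodec'."""
    if encoding != "toycodec":
        return None
    return codecs.CodecInfo(
        encode=lambda text, errors="strict": (text.encode("ascii"), len(text)),
        decode=lambda data, errors="strict": (bytes(data).decode("ascii"), len(data)),
        name="toycodec",
    )

# Registering the same search function twice silently appends a duplicate;
# neither call raises, and lookup still works.
codecs.register(toy_search)
codecs.register(toy_search)
assert codecs.encode("hi", "toycodec") == b"hi"

# Python 3.10 added codecs.unregister() to remove a search function again.
if sys.version_info >= (3, 10):
    codecs.unregister(toy_search)
    codecs.unregister(toy_search)
```

So the answer the thread converged toward, an unregister entry point, did land in the end, roughly fifteen years later.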
https://mail.python.org/pipermail/python-dev/2005-November/058324.html
ONTAP Discussions

We've got an odd problem that's been going on for a while. When we create a new NFS vol on NetApp (ONTAP 8.2.3), we must wait 30-40 minutes before we're able to add it to vCenter. Trying to add it immediately produces an "access denied" error. If we wait 30 minutes or so, it adds to vCenter just fine.

No changes to ONTAP (8.2.3 is the version installed with the NetApp and we've not done any updates yet). NFSv3; vCenter and vSphere 6.0 (U1, U2 and U3). We're adding the storage to vCenter using the IP address (server name field) of the NetApp controller. The namespace and export policy seem obviously correct, since it does eventually mount with no changes. The export policy is in use by over 100 other NFS vols/datastores. Anyone seen this before? Thank you!

Hi Tracy,

I've seen this issue before. Are you using load-sharing mirrors for your vserver root volume? If so, have you tried invoking a SnapMirror update of your vserver root LSM volume, then trying to add the volume as an NFS datastore to your ESX hosts?

To check if the vserver hosting the NFS volumes for your vSphere datastores is using a load-sharing mirror of the root volume (namespace), SSH to your cluster and use the following commands as an example:

cluster1::> snapmirror show -vserver vserver1
                                                                 Progress
Source            Destination  Mirror        Relationship  Total           Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
cluster1://vserver1/vserver1_root
            LS   cluster1://vserver1/vserver1_root_m1
                              Snapmirrored  Idle          -        true    -

cluster1::> snapmirror update -source-path vserver1:vserver1_root -destination-path vserver1:vserver1_root_m1
[Job 2772] Job is queued: snapmirror update of destination "cluster1://vserver1/vserver1_root_m1".

Also, you can use WFA to automate this task and integrate with vSphere too.
That might be an option for you instead of a script; you can call WFA workflows from a script via a REST API if you really want to stay with the CLI script option. The benefit of using WFA for this is that your automation process is centralised, logged, auditable, etc. Please let me know if you have any questions.

/Matt
https://community.netapp.com/t5/ONTAP-Discussions/Script-Create-Source-and-Destination-volumes-export-NFS-and-mount-to-ESXi-hosts/td-p/137892
In this implementation we start with the same model as our encoder-decoder without attention, which I have detailed here. We will slowly introduce attention and we will also implement inferencing.

Note: This is not going to be a state-of-the-art model, especially since I wrote the data myself in a couple of minutes. This will purely be a post about understanding the architecture; you can use the implementation with your own larger data sources and you will achieve nice results.

Encoder-Decoder with Attention:

This diagram is a detailed version of the first diagram. We will start from the encoder and move up to the decoder outputs. We have our input data, which has been padded and embedded. We feed these into our encoder RNN with a hidden state. The hidden state is initially all zeros, but after our first input it starts changing and holding valuable information. Just know that if you use an LSTM, we will also be passing a cell state c along with our hidden state h. For each input into the encoder we get the hidden state output, which is passed along for the next input but is also used as the output of this cell for this input. We can call these h_1 to h_N, and these are some of our inputs for the attention model.

Before we dive deep into the actual attentional interface, let's see how the decoder processes its inputs and generates outputs. Our decoder inputs are the target language inputs with a GO token in the front and an EOS token followed by PADs. We will be embedding these inputs as well. The decoder RNN cell also has a hidden state input. Initially, these will be zeroes and then change as we feed in inputs. So far, the decoder RNN looks exactly like the encoder RNN; the difference is an additional input c_i, which comes from the attention mechanism.
In the next section below, we will take a closer look at how this context c_i is derived, but it is essentially the result from the attentional interface based on all of the encoder inputs and our previous decoder hidden state. It tells us how much attention to give to each of the encoder inputs when trying to predict our next output. For each decoder input, the decoder cell uses the input, the previous hidden state and the context vector from attention to predict the target output via softmax. Note that during training, each RNN cell only uses these three inputs to get its target output, but during inference we do not know the next decoder input. So we will use the previously predicted decoder output as the new decoder input. Now, let's take a closer look at how the context c_i is calculated from the attentional interface.

Attention Mechanisms:

Let's initially just focus on the inputs and outputs of the attention layer. For generating a context c_i for each decoder cell, we use all of the hidden states from all of the encoder inputs and the previous hidden state from the decoder. First, both of these inputs go through a tanh net which produces the output e [NxH]. This happens for all j relevant encoder inputs. We apply softmax over all of the e and now we get a probability for each of the h, which we call alpha. We now multiply each alpha by the original hidden states h to get a weighted value from each h. Finally we sum them up to get our context c_i [NxH], the weighted representation of the encoder inputs. Initially, these will be arbitrary contexts, but eventually our model will train and learn which of the encoder inputs are worth weighting in order to accurately predict the decoder target.

Tensorflow Implementation:

Now we can focus on implementing this architecture, but most of the focus will be on the attentional interface.
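As a warm-up, the attention computation just described can be sketched in plain NumPy. Shapes follow the post; the weight names W_h, W_s and v are mine, for illustration only:

```python
import numpy as np

np.random.seed(0)
N, max_len, H = 1, 4, 5              # batch size, encoder length, hidden size

h = np.random.randn(N, max_len, H)   # encoder hidden states h_1..h_max_len
s = np.random.randn(N, H)            # previous decoder hidden state

# Illustrative alignment-net weights (not TensorFlow's internal names)
W_h = np.random.randn(H, H)
W_s = np.random.randn(H, H)
v = np.random.randn(H)

# e_ij = v . tanh(W_h h_j + W_s s_i): one score per encoder position
e = np.tanh(h @ W_h + (s @ W_s)[:, None, :]) @ v          # [N, max_len]
# softmax over encoder positions gives the attention weights alpha
alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)  # rows sum to 1
# context c_i: weighted sum of the encoder hidden states
c = (alpha[:, :, None] * h).sum(axis=1)                   # [N, H]
```

The TensorFlow version below computes the same quantities, just with a 1x1 convolution standing in for the W_h projection.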
We will be using a simple unidirectional GRU encoder and decoder (very similar to the one from the previous post), but the decoder will now be using attention. Specifically, we will be using the embedding_attention_decoder() from TensorFlow. First, let's take a look at the data that we will process and feed into the encoder/decoder.

We feed the encoder_inputs into the encoder. The inputs are of shape [N, max_len], which are embedded into shape [N, max_len, H]. The encoder dynamic RNN processes these inputs and seq_lens ([N,]) and returns all outputs with shape [N, max_len, H] and states of shape [N, H] (which is the last relevant state for each input). We will attend to all of these encoder outputs.

Decoder

Before talking about the attentional interface, let's quickly see the inputs and outputs of the decoder as well. The decoder's initial state is the encoder's last relevant state for each sample ([N, H]). TensorFlow's embedding_attention_decoder() requires the decoder inputs to be a time-major list, so we convert our decoder inputs from [N, max_len] to a max_len-sized list of [N]. We also create our output projection weights, which is another name for the softmax weights [H, C] for processing the outputs from the decoder. We feed our time-major list of decoder inputs, initial state, attention states and the projection weights into the embedding_attention_decoder() and receive all outputs ([max_len, N, H]) and state ([N, H]). It doesn't matter that the outputs are time-major, because we will just be flattening them and applying softmax to convert them to shape [N*max_len, C]. We will then also flatten our targets from [N, max_len] to [N*max_len,] and compute the loss with sparse_softmax_cross_entropy_with_logits(). We will then mask the loss, in order to remove influence from the predictions where the target was a PAD.

Attention:

Finally, we can focus on the attentional interface.
We know it’s inputs and outputs but what is happening internally? Our time-major list of decoder inputs, initial_state, attention states (encoder_outputs) all go into our embedded_attention_decoder(). First we will create a new set of weights in order to embed the decoder inputs. Let’s call these weights W_embedding. We will then set up a loop function, which will be used after generating a decoder output with a decoder input. The loop function will decide wether to and what to pass into the next decoder cell for processing the next decoder input. Usually, during training we will not pass in the previous decoder output. Here the loop function will just be a None. However, during inference, we will want to pass in the previously predicted decoder output. The loop function we will be using here is _extract_argmax_and_embed() which does exactly what it says. We will take output of the decoder, apply our softmax (output_projection) weights to convert from [N, H] to [N, C] and then use the same W_embedding to place the embedded output ([N, H]) as an input for the next decoder cell. # Loop function if using decoder outputs for next prediction loop_function = _extract_argmax_and_embed( W_embedding, output_projection, update_embedding_for_previous) if feed_previous else None One additional option we have with the loop function is update_embedding_for previous which if False, will stop the gradient from flowing through our W_embedding weights when we embed the decoder output (except for the GO token). So, even though we use W-embedding twice, they weights will only depend on the embedding we do on the decoder inputs and NOT on the decoder outputs (except GO token). Now, we can pass our list of embedded time-major decoder inputs, initial_state, attention states and loop function into attention_decoder(). The attention_decoder() is the heart of the attentional interface and there are a few additional processing steps applied not mentioned in the alignment paper. 
Recall that attention will use our attention states (encoder outputs) and the previous decoder state. It passes these values into a tanh neural net and projects e_ij for each of the hidden states. We then apply softmax to convert to alpha_ij, which is multiplied with the original attention state. We take the sum of this value over all attention states and this becomes our new context vector c_i. This context vector is used to eventually produce our new decoder output. The main difference here is that our attention states (encoder outputs) and the previous decoder state are not processed together with something like _linear() and then run through a regular tanh. Instead, we do some extra steps. First, the attention states go through a 1x1 convolution. This is a technique to extract meaningful features from our attention states, instead of processing them raw; recall that conv layers in image recognition acted as excellent feature extractors. The consequence of this step is better features from the attention states, but we also now have a 4D representation of the attention states.
'''
Transformation in shape:
original hidden state: [N, max_len, H]
reshaped to 4D hidden: [N, max_len, 1, H] = N images of [max_len, 1, H] so we can apply a filter
filter: [1, 1, H, H] = [height, width, depth, # num filters]

Apply conv with stride 1 and padding 1:
H = ((H - F + 2P) / S) + 1 = ((max_len - 1 + 2)/1) + 1 = height'
W = ((W - F + 2P) / S) + 1 = ((1 - 1 + 2)/1) + 1 = 3
K = K = H

So we just converted a [N, max_len, H] into [N, height', 3, H]
'''
hidden = tf.reshape(attention_states,
    [-1, attn_length, 1, attn_size]) # [N, max_len, 1, H]
hidden_features = []
attention_softmax_weights = []
for a in xrange(num_heads):
    # filter
    k = tf.get_variable("AttnW_%d" % a,
        [1, 1, attn_size, attn_size]) # [1, 1, H, H]
    hidden_features.append(tf.nn.conv2d(hidden, k, [1,1,1,1], "SAME"))
    attention_softmax_weights.append(tf.get_variable(
        "W_attention_softmax_%d" % a, [attn_size]))

This means that, in order to process our transformed 4D attention states with the previous decoder state, we need to convert the previous decoder state to 4D as well. This is easily done by first sending the previous state through an MLP to change its shape to match the attention states' size, and then reshaping it to a 4D tensor.

y = tf.nn.rnn_cell._linear(
    args=query, output_size=attn_size, bias=True)
# Reshape into 4D
y = tf.reshape(y, [-1, 1, 1, attn_size]) # [N, 1, 1, H]
# Calculating alpha
s = tf.reduce_sum(
    attention_softmax_weights[a] * tf.nn.tanh(hidden_features[a] + y), [2, 3])
a = tf.nn.softmax(s)
# Calculate context c
c = tf.reduce_sum(tf.reshape(
    a, [-1, attn_length, 1, 1])*hidden, [1,2])
cs.append(tf.reshape(c, [-1, attn_size]))

Now that both the attention states and the previous decoder state have been transformed, we just need to apply the tanh operation. We multiply this with the softmax weights and apply softmax to give us our alpha_ij. Finally, we reshape our alphas, multiply with the original attention states and take the sum to receive our context vectors c_i.
Now we are ready to process our decoder inputs one by one. Let's talk about training first. We do not care about the decoder outputs, because the input will always be the actual decoder input, so our loop function is None. We process the decoder input with the PREVIOUS context vectors (initially zeroes for the first input) through an MLP using _linear(). Then we run the output of that, with the previous decoder state, through our dynamic_rnn cell. This is the reason we needed our decoder inputs to be a list of time-major inputs: we process one time token at a time for all the samples because we need the previous state from the last token at that time index, and time-major inputs allow us to do this in batch efficiently. Once we get the dynamic rnn outputs and state, we compute the new context vectors using this new state. The cell outputs are combined with this new context vector and go through an MLP to finally give us our decoder output. All of these additional MLPs are usually not shown in decoder diagrams, but they are additional steps we apply to get the outputs. Note that the outputs from the cell and the outputs from the attention_decoder itself both have shape [max_len, N, H]. If we are doing inference, our loop function is no longer None but _extract_argmax_and_embed(). This takes in the previous decoder output, and our new decoder input is just that previous output with softmax applied to it, followed by re-embedding. And after we do all the processing with the attention states, prev is updated to be the newly predicted output.
# Process decoder inputs one by one
for i, inp in enumerate(decoder_inputs):
    if i > 0:
        tf.get_variable_scope().reuse_variables()
    if loop_function is not None and prev is not None:
        with tf.variable_scope("loop_function", reuse=True):
            inp = loop_function(prev, i)
    # Merge the input and attentions together
    input_size = inp.get_shape().with_rank(2)[1]
    x = tf.nn.rnn_cell._linear(
        args=[inp]+attns, output_size=input_size, bias=True)
    # Decoder RNN
    cell_outputs, state = cell(x, state) # our stacked cell
    # Attention mechanism to get Cs
    attns = attention(state)
    with tf.variable_scope('attention_output_projection'):
        output = tf.nn.rnn_cell._linear(
            args=[cell_outputs]+attns, output_size=output_size, bias=True)
    if loop_function is not None:
        prev = output
    outputs.append(output)
return outputs, state

And of course, we process the outputs from our attention_decoder with softmax, flatten, and then compare with targets to compute the loss.

Nuances: Sampled Softmax

Using attentional interfaces like this is an excellent architecture for seq-seq tasks such as machine translation. But a common issue is the large size of the corpus. This especially proves to be computationally expensive during training, when we need to compute the softmax with the decoder outputs. The solution here is to use a sampled softmax. You can read more about why and how in my post here. Here is the code for a sampled softmax. Note that the weights are the same as the output_projection weights we are using with the decoder, since they both are doing the same task (converting a decoder output H-length vector to a num_class-length vector).

def sampled_loss(inputs, labels):
    labels = tf.reshape(labels, [-1, 1])
    # local_w_t, local_b and local_inputs are float32 casts of the output
    # projection weights, biases and inputs (the cast lines were lost in
    # extraction and are reconstructed here).
    return tf.cast(
        tf.nn.sampled_softmax_loss(
            local_w_t, local_b, local_inputs, labels,
            num_samples, self.target_vocab_size),
        dtype)

softmax_loss_function = sampled_loss

And then, we can just process the loss with a seq_loss function, where weights are 1 everywhere except 0 where target outputs are PADs.
Note: Sampled softmax is only used for training; during inference we have to use the regular softmax, because we want to sample from the entire corpus for the word, not just a select few that best approximate the corpus.

    else:
        losses.append(sequence_loss(
            outputs, targets, weights,
            softmax_loss_function=softmax_loss_function))

Model with buckets:

Another common architectural addition is tf.nn.seq2seq.model_with_buckets(). This is what the official TensorFlow NMT tutorial uses, and the main advantage of buckets is with the attention states. With our model, we apply attention to all max_len hidden states. We should only apply it to the relevant states, since the PADs do not need any attention anyway. With buckets, we can place each sequence into a suitable bucket so there are very few PADs for a given sequence. However, I find this method a bit crude; if we really wanted to process out the PADs, I would suggest using seq_lens to filter the PADs out of the encoder outputs, OR, once we compute the attention context vectors c_i, zeroing out the locations where the hidden state comes from a PAD for each sequence. This is a bit complicated and we won't implement it here, but buckets do serve as a decent solution for this extra noise.

Conclusion:

There are many more variants of this attentional interface, and it is a growing area of research. I like to use the architecture above for many seq-to-seq processing tasks, as it produces decent results in many situations. Just be wary to have large training and validation sets, because these models can easily overfit and produce horrible results on validation. In subsequent posts, we will be using these attentional interfaces for more complicated tasks involving memory and logical reasoning.

Code: GitHub Repo (Updating all repos, will be back up soon!)

Shapes Analysis:

Encoder inputs are [N, max_len], which are embedded and transformed to [N, max_len, H] and fed into the encoder RNN.
The outputs are [N, max_len, H] and the state is [N, H], holding the last relevant state for each sample. The encoder outputs are the same as the attention states, of shape [N, max_len, H].

Decoder inputs have shape [N, max_len] and are converted to a max_len-long time-major list of arrays of shape [N]. The decoder's initial state is the encoder state of shape [N, H]. Before being input into the decoder RNN, the inputs are embedded into a max_len-long time-major list of shape [N, H]. Each input may be the actual decoder input (training) or the previous output (inference). If doing inference, the previous output is derived from the decoder: the output from the previous time step has shape [N, H], is sent into a softmax layer (output projection) into shape [N, C], and is then re-embedded, using the same weights we use to embed the inputs, into shape [N, H]. These inputs are fed into the decoder RNN, which produces decoder outputs of shape [max_len, N, H] and a state of shape [N, H]. The outputs are flattened to [N*max_len, H] and compared with the flattened targets. The losses are masked where the targets are PADs, and backprop ensues.

Inside the decoder RNN, there are quite a few operations. First it takes the attention states (encoder outputs) of shape [N, max_len, H], converts them to a 4-D tensor [N, max_len, 1, H] (so we can apply conv), and applies a convolution to extract meaningful features. The shape of these hidden features is the 4-D [N, height', 3, H]. The previous hidden state from the decoder, of shape [N, H], is also an input to the attentional interface. This hidden state goes through an MLP into shape [N, H] (this was done in case the previous hidden state's second dimension (H) was different from the attention_size, which is also H for us). Then this previous hidden state is converted to a 4-D [N, 1, 1, H] so that we can combine it with the hidden features.
We apply tanh to this addition to produce e_ij, and then take the softmax to produce alpha_ij, of shape [N, max_len, 1, 1] (representing, for each sample, the probability of using each hidden state). This alpha is multiplied with the original hidden states, of shape [N, max_len, 1, H], and then summed to create our context vector c_i, of shape [N, H].

This context vector c_i is combined with the decoder input, of shape [N, H], regardless of whether the input comes from the decoder inputs data (training) or from the previous prediction (inference). This input is just one element from the max_len-long list of arrays of shape [N, H]. First we add the previous context vector (initially zeros of shape [N, H]) to the input. Recall that the inputs come from the decoder inputs, which form a max_len-long time-major list of shape [N], which is why each embedded input will be of size [N, H]. An MLP is applied to the combination of the input and the context vector to create an output with shape [N, H]. This is fed into our dynamic RNN cell along with the state, of shape [N, H]. The outputs are cell_outputs of shape [N, H], and the state is also of shape [N, H]. The new state becomes the state we use for the next decoder operation. Recall that we generate these outputs of shape [N, H] for all max_len steps, so at the end we have a max_len-long list of arrays of shape [N, H].

After getting the output and state from the decoder cell, we compute the new context vector by passing this new state into the attention function. This new context vector of shape [N, H] is combined with the outputs of shape [N, H], and an MLP is applied to the combination, converting it to shape [N, H]. Finally, if we are doing inference, the new prev becomes this output (initially prev was None). This prev is used as an input into loop_function to get the input for the next decoder operation.
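The shape bookkeeping above can be checked mechanically with zero-filled placeholder arrays. This is a sketch, not the model itself; the toy sizes N=2, max_len=3, H=4, C=6 are assumptions.

```python
import numpy as np

N, max_len, H, C = 2, 3, 4, 6

encoder_outputs = np.zeros((N, max_len, H))   # attention states
state = np.zeros((N, H))                      # last relevant encoder state

# decoder inputs: [N, max_len] ids -> max_len-long time-major list of [N] arrays
decoder_input_ids = np.zeros((N, max_len), dtype=int)
time_major = [decoder_input_ids[:, t] for t in range(max_len)]
assert len(time_major) == max_len and time_major[0].shape == (N,)

# attention states reshaped to 4-D so a convolution can be applied
hidden = encoder_outputs[:, :, None, :]       # [N, max_len, 1, H]
assert hidden.shape == (N, max_len, 1, H)

# decoder outputs: max_len-long list of [N, H] -> stacked [max_len, N, H]
outputs = np.stack([np.zeros((N, H)) for _ in range(max_len)])
assert outputs.shape == (max_len, N, H)

# flattened for the loss computation
flat = outputs.reshape(N * max_len, H)
assert flat.shape == (N * max_len, H)
```

If any reshape in a real pipeline disagrees with these assertions, the mismatch usually traces back to mixing up batch-major and time-major layouts.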
12 thoughts on "Recurrent Neural Network (RNN) – Part 4: Attentional Interfaces"

Hello Goku Mohandas. I was following your videos and I learnt so many things about RNNs from your GitHub repo and these explanations. Why is your GitHub repo not working anymore?

Hi, sorry about that, I was fixing up some old code and didn't want to confuse anyone. It's all up now! And I don't make videos, but would it be helpful if I did for some topics?

Hi, this post is great but I have a question. What do you mean by saying "Our attentional states become just the encoder outputs because the attention for a simple sample is just all of its hidden states from all the outputs." in the encoder, TENSORFLOW IMPLEMENTATION section?

Hi, my wording is a bit confusing. What I meant to say was that, in this simple case, we will be attending to all of the encoder outputs. I will fix the wording, and check out my new O'Reilly post for a more recent, better written post 🙂

Thanks for your reply, and I have a confusion about the "shapes analysis" section. In your essay, if an embedded vector is H-dimensional as input, then the output of the encoder RNN is H-dimensional too. But I think it is a D-dimensional vector, where D is the num_units of the LSTM, if this encoder RNN is an LSTM. So I mean the embedding dimension may not be the same as the output dimension of the RNN; D is not equal to H in most cases. If so, then saying the output of the encoder RNN is [N, max_len, H] may be misleading, or am I just wrong?

Hey RYH, can't reply to your latest message so hope you see this here. You can have D and H be the same number. You'll see some papers use the same letter throughout the model and others will differentiate with different letters.

Got it, thanks for your patience.

Hi, thanks for the great post. The GitHub repo seems to be unavailable. I was wondering if you can put the code back online! Thanks!

Hey arman, these tutorials are outdated and written for TensorFlow.
New updated PyTorch tutorials with code will be available soon. But if you are specifically interested in code for attentional interfaces, check out my article here:

The attention states are not the encoder hidden states, they're the encoder outputs.

Ah, good catch, thanks Medhi
https://theneuralperspective.com/2016/11/20/recurrent-neural-network-rnn-part-4-attentional-interfaces/
Fuse - write filesystems in Perl using FUSE

    use Fuse;
    my ($mountpoint) = "";
    $mountpoint = shift(@ARGV) if @ARGV;
    Fuse::main(mountpoint=>$mountpoint, getattr=>\&my_getattr, getdir=>\&my_getdir, ...);

This lets you implement filesystems in Perl, through the FUSE (Filesystem in USErspace) kernel/lib interface. FUSE expects you to implement callbacks for the various functions.

NOTE: I have only tested the things implemented in example.pl! It should work, but some things may not.

Exported symbols: FUSE_DEBUG by default. You can request all exportable symbols by using the tag ":all". You can request all debug symbols by using the tag ":debug". This will export FUSE_DEBUG. You can request the extended attribute symbols by using the tag ":xattr". This will export XATTR_CREATE and XATTR_REPLACE.

mountopts => a comma-separated list of mount options to pass to the FUSE kernel module. At present, it allows the specification of the allow_other argument when mounting the new FUSE filesystem. To use this, you will also need 'user_allow_other' in /etc/fuse.conf as per the FUSE documentation.

    mountopts => "allow_other" or mountopts => ""

unthreaded => boolean. This turns FUSE multithreading off and on. NOTE: This Perl module does not currently work properly in multithreaded mode! The author is unfortunately not familiar enough with perl-threads internals, and according to the documentation available at time of writing (2002-03-08), those internals are subject to changing anyway. Note that singlethreaded mode also means that you will not have to worry about reentrancy, though you will have to worry about recursive lookups (since the kernel holds a global lock on your filesystem and blocks waiting for one callback to complete before calling another). I hope to add full multithreading functionality later, but for now, I recommend you leave this option at the default, 1 (which means unthreaded: no threads will be used and no reentrancy is needed).

Arguments: link pathname.
Returns a scalar: either a numeric constant, or a text string. This is called when dereferencing symbolic links, to learn the target. example rv: return "/proc/self/fd/stdin";

Arguments: Containing directory name. Returns a list: 0 or more text strings (the filenames), followed by a numeric errno (usually 0). This is used to obtain directory listings. It's opendir(), readdir(), filldir() and closedir() all in one call. example rv: return ('.', 'a', 'b', 0);

Arguments: Filename, numeric modes, numeric device. Returns an errno (0 upon success, as usual). This function is called for all non-directory, non-symlink nodes, not just devices.

Arguments: New directory pathname, numeric modes. Returns an errno. Called to create a directory.

Arguments: Filename. Returns an errno. Called to remove a file, device, or symlink.

Arguments: Pathname. Returns an errno. Called to remove a directory.

Arguments: Existing filename, symlink name. Returns an errno. Called to create a symbolic link.

Arguments: old filename, new filename. Returns an errno. Called to rename a file, and/or move a file from one directory to another.

Arguments: Existing filename, hardlink name. Returns an errno. Called to create hard links.

Arguments: Pathname, numeric modes. Returns an errno. Called to change permissions on a file/directory/device/symlink.

Arguments: Pathname, numeric uid, numeric gid. Returns an errno. Called to change ownership of a file/directory/device/symlink.

Arguments: Pathname, numeric offset. Returns an errno. Called to truncate a file, at the given offset.

Arguments: Pathname, numeric actime, numeric modtime. Returns an errno. Called to change access/modification times for a file/directory/device/symlink.

Arguments: Pathname, numeric flags (which is an OR-ing of stuff like O_RDONLY and O_SYNC, constants you can import from POSIX). Returns an errno. No creation, or truncation flags (O_CREAT, O_EXCL, O_TRUNC) will be passed to open().
Your open() method needs only check if the operation is permitted for the given flags, and return 0 for success.

Arguments: Pathname, numeric requestedsize, numeric offset. Returns a numeric errno, or a string scalar with up to $requestedsize bytes of data. Called in an attempt to fetch a portion of the file.

Arguments: Pathname, scalar buffer, numeric offset. You can use length($buffer) to find the buffersize. Returns an errno. Called in an attempt to write (or overwrite) a portion of the file. Be prepared because $buffer could contain random binary data with NULLs and all sorts of other wonderful stuff.

Arguments: none. Returns any of the following: -ENOANO(), or $namelen, $files, $files_free, $blocks, $blocks_avail, $blocksize, or -ENOANO(), $namelen, $files, $files_free, $blocks, $blocks_avail, $blocksize.

Arguments: Pathname. Returns an errno or 0 on success. Called to synchronise any cached data. This is called before the file is closed. It may be called multiple times before a file is closed.

Arguments: Pathname, numeric flags passed to open. Returns an errno or 0 on success. Called to indicate that there are no more references to the file. Called once for every file with the same pathname and flags as were passed to open.

Arguments: Pathname, numeric flags. Returns an errno or 0 on success. Called to synchronise the file's contents. If flags is non-zero, only synchronise the user data. Otherwise synchronise the user and meta data.

Arguments: Pathname, extended attribute's name. Returns an errno, 0 if there was no value, or the extended attribute's value. Called to get the value of the named extended attribute.

Arguments: Pathname. Returns a list: 0 or more text strings (the extended attribute names), followed by a numeric errno (usually 0).

Arguments: Pathname, extended attribute's name. Returns an errno or 0 on success.

AUTHOR: Mark Glines, <mark@glines.org>

SEE ALSO: perl, the FUSE documentation.
http://search.cpan.org/~dpavlin/Fuse-0.06/Fuse.pm
If you want to import or export spreadsheets and databases for use in the Python interpreter, you must rely on the csv module, which handles the Comma Separated Values format. The format is not tied to any one language or tool, though here, obviously, we will be working with it in Python. The text inside a CSV file is laid out in rows, and each of those has columns, all separated by commas. Every line in the file is a row in the spreadsheet, while the commas are used to define and separate cells.

Working with the CSV Module

To pull information from CSV files you could loop over the lines and split out individual columns yourself, but the csv module explicitly exists to handle this task, making it much easier to deal with CSV-formatted files. This becomes especially important when you are working with data that's been exported from actual spreadsheets and databases to text files. This information can be tough to read on its own. Unfortunately, there is no single CSV standard, so the csv module uses "dialects" to support parsing using different parameters. Along with a generic reader and writer, the module includes a dialect for working with Microsoft Excel and related files.

CSV Functions

The csv module includes all the necessary functions built in. They are:

- csv.reader
- csv.writer
- csv.register_dialect
- csv.unregister_dialect
- csv.get_dialect
- csv.list_dialects
- csv.field_size_limit

In this guide we are only going to focus on the reader and writer functions, which allow you to edit, modify, and manipulate the data stored in a CSV file.

Reading CSV Files

Reading a CSV file is done with the reader object. To prove it, let's take a look at an example.

    import csv

    with open('some.csv', 'rb') as f:
        reader = csv.reader(f)
        for row in reader:
            print row

Notice how the first command is used to import the csv module? Let's look at another example.
    import csv
    import sys

    f = open(sys.argv[1], 'rb')
    reader = csv.reader(f)
    for row in reader:
        print row
    f.close()

In the first two lines, we import the csv and sys modules. Then, we open the CSV file we want to pull information from. Next, we create the reader object, iterate the rows of the file, and then print them. Finally, we close out the operation.

CSV Sample File

We're going to take a look at an example CSV file. Pay attention to how the information is stored and presented.

Reading CSV Files Example

We're going to start with a basic CSV file that has 3 columns, containing the variables "A", "B", "C", and "D".

    $ cat test.csv
    A,B,"C
    D"
    1,2,"3
    4"
    5,6,7

Then, we'll use the following Python program to read and display the contents of the above CSV file.

    import csv

    ifile = open('test.csv', "rb")
    reader = csv.reader(ifile)

    rownum = 0
    for row in reader:
        # Save header row.
        if rownum == 0:
            header = row
        else:
            colnum = 0
            for col in row:
                print '%-8s: %s' % (header[colnum], col)
                colnum += 1
        rownum += 1

    ifile.close()

When we execute this program in Python, the output will look like this:

    $ python csv1.py
    A : 1
    B : 2
    C D : 3 4
    A : 5
    B : 6
    C D : 7

Writing to CSV Files

When you have a set of data that you would like to store inside a CSV file, it's time to do the opposite and use the write function. Believe it or not, this is just as easy to accomplish as reading them. The writer() function will create an object suitable for writing. To iterate the data over the rows, you will need to use the writerow() function. Here's an example.

The following Python program converts a file called "test.csv" to a CSV file that uses tabs as a value separator, with all values quoted. The delimiter character and the quote character, as well as how/when to quote, are specified when the writer is created. These same options are available when creating reader objects.
    import csv

    ifile = open('test.csv', "rb")
    reader = csv.reader(ifile)
    ofile = open('ttest.csv', "wb")
    writer = csv.writer(ofile, delimiter='\t', quotechar='"',
                        quoting=csv.QUOTE_ALL)

    for row in reader:
        writer.writerow(row)

    ifile.close()
    ofile.close()

When you execute this program, the output will be:

    $ python csv2.py
    $ cat ttest.csv
    "A"	"B"	"C D"
    "1"	"2"	"3 4"
    "5"	"6"	"7"

Quoting CSV Files

With the csv module, you can also perform a variety of quoting functions. They are:

- csv.QUOTE_ALL - Quote everything, regardless of type.
- csv.QUOTE_MINIMAL - Quote fields with special characters
- csv.QUOTE_NONNUMERIC - Quote all fields that are not integers or floats
- csv.QUOTE_NONE - Do not quote anything on output

More Python Reading
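The quoting options listed above can be compared side by side by writing the same row with each one. This sketch uses Python 3's io.StringIO as an in-memory stand-in for a file, rather than the Python 2 file handles used in the tutorial; the sample row values are made up.

```python
import csv
import io

# one row mixing strings, an int, a float, and a field containing a quote
row = ["spam", 42, 3.5, 'say "hi"']

results = {}
for name, quoting in [("ALL", csv.QUOTE_ALL),
                      ("MINIMAL", csv.QUOTE_MINIMAL),
                      ("NONNUMERIC", csv.QUOTE_NONNUMERIC)]:
    buf = io.StringIO()                      # in-memory stand-in for a file
    csv.writer(buf, quoting=quoting).writerow(row)
    results[name] = buf.getvalue().rstrip("\r\n")

for name in ("ALL", "MINIMAL", "NONNUMERIC"):
    print(name, "->", results[name])
# ALL quotes every field; MINIMAL quotes only the field containing a quote
# character; NONNUMERIC leaves the int and float unquoted.
```

Note how the embedded `"` is escaped by doubling it, which is the default CSV convention regardless of the quoting mode.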
https://www.pythonforbeginners.com/systems-programming/using-the-csv-module-in-python/
Hey all! I am quite new to Rosetta and am using it for a project I am working on. My PI and I have found a paper whose code we want to use for minimization and repacking, but I really do not understand everything that is going on here. The code is as follows:

    fix_bb_monomer_ddg.linuxgccrelease -ddg::weight_file soft_rep_design -ddg::iterations 50 -ddg::local_opt_only false -ddg::min_cst true -ddg::mean false -ddg::min true -ddg::sc_min_only false -ddg::ramp_repulsive true -ddg::minimization_scorefunction standard -ddg::minimization_patch score12

I can understand part of it.

1) I get that 'fix_bb_monomer_ddg.linuxgccrelease' is invoking the packer (the syntax used in this code is weird because I don't see any underscores in my Rosetta though)
2) I also get that the next couple of parts specify the scoring function and the number of iterations

But now I get into the parts that I do not understand.

1) What is the 'ddg'? What does it mean and where does it come from? I have tried searching for Rosetta ddg on Google but nothing really comes up for it.
2) Why are there so many double colons?

I am sure that the vast majority of this is from my being so inexperienced, but any help would be greatly appreciated. If anyone also has suggestions for extra Rosetta documentation, that would be amazing. I mainly use the tutorials provided, but I can't seem to find a lot of this stuff on there. Thanks in advance!!

The primary Rosetta documentation is here:

Your starting point suggests you may benefit from the Meiler Lab recorded tutorials:

> 1) I get that the 'fix_bb_monomer_ddg.linuxgccrelease' is invoking the packer (the syntax used in this code is weird cause I don't see any underscores in my Rosetta though)

You are on the command line. fix_bb_monomer_ddg.linuxgccrelease is the command. The remainder of the line is command-line flags and arguments to those flags.
fix_bb_monomer_ddg.linuxgccrelease specifically is one compiled Rosetta binary - the portion before the . names the binary (there are a few hundred) and the portion after describes how it was compiled. I guess this command "invokes the packer" in a technical sense, but if all you want to do is repack, you should use regular fixbb, as it is simpler and meant for the purpose.

> 1) What is the 'ddg'? What does it mean and where does it come from? I have tried searching up Rosetta ddg on google but nothing really comes up for it.

DDG means delta delta G - the change of the free energy of folding upon mutation. Mutation is one delta, folding is the other delta, and G is the Gibbs free energy. DDG-of-mutation tools tell you how a protein's energy will change when you mutate the sequence but leave the structure similar. You say you want to use minimization and repacking; you probably want regular old relax, not this tool, which is a preparatory step for a particular ddg pipeline.

> 2) Why are there so many double colons?

C++ uses a double colon to indicate namespacing. I will rely on you to google namespacing in programming to learn about it. Rosetta namespaces its command-line options because there are thousands, and it's a good way to resolve name conflicts. Double or single colons are permitted for command-line flag namespacing. I assume we allow both because some people wanted single colons for compactness, and others used double because they were used to C++ and it was easier to allow both. (It isn't that the underlying code requires colons because it is C++; it's just a case of programmers using familiar patterns.)

Thank you so much!! That clears up a lot of stuff. I will also be sure to check out the Meiler lab stuff.
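The "two deltas" in the DDG explanation above can be made concrete with a few lines of arithmetic. The energy values below are made up purely for illustration (real Rosetta scores are in Rosetta Energy Units and come from the score function, not from hand-typed numbers):

```python
# ddG of mutation: change in the free energy of folding caused by a mutation.
# The first delta is folding (folded minus unfolded); the second delta is
# the mutation (mutant minus wild type). G is the Gibbs free energy.

G_folded_wt, G_unfolded_wt = -120.0, -95.0     # hypothetical wild-type energies
G_folded_mut, G_unfolded_mut = -117.0, -95.0   # hypothetical mutant energies

dG_wt = G_folded_wt - G_unfolded_wt    # free energy of folding, wild type
dG_mut = G_folded_mut - G_unfolded_mut  # free energy of folding, mutant

ddG = dG_mut - dG_wt
# positive ddG: the mutation makes folding less favourable (destabilising);
# negative ddG would mean the mutation stabilises the fold
```

With these made-up numbers the mutation raises the folding free energy by 3.0 units, i.e. it is destabilising, which is exactly the kind of quantity the ddg tools estimate.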
https://rosettacommons.org/comment/12505
.\" SYN 1"
.TH PERLSYN 1 "2004-11-05" "perl v5.8.6" "Perl Programmers Reference Guide"
.SH "NAME"
perlsyn \- Perl syntax
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
A Perl program consists of a sequence of declarations and statements
which run from the top to the bottom. Loops, subroutines and other
control structures allow you to jump around within the code.
.PP
Perl is a \fBfree-form\fR language: you can format and indent it however
you like. Whitespace mostly serves to separate tokens, unlike languages
like Python where it is an important part of the syntax.
.PP
Many of Perl's syntactic elements are \fBoptional\fR. Rather than
requiring you to put parentheses around every function call and declare
every variable, you can often leave such explicit elements off and Perl
will figure out what you meant. This is known as \fBDo What I Mean\fR,
abbreviated \fB\s-1DWIM\s0\fR. It allows programmers to be \fBlazy\fR
and to code in a style with which they are comfortable.
.PP
Perl \fBborrows syntax\fR and concepts from many languages: awk, sed, C,
Bourne shell, Smalltalk, Lisp and even English.
.Sh "Declarations"
.IX Subsection "Declarations"
The only things you need to declare in Perl are report formats and
subroutines (and sometimes not even subroutines). A variable holds the
undefined value (\f(CW\*(C`undef\*(C'\fR) until it has been assigned a
defined value, which is anything other than \f(CW\*(C`undef\*(C'\fR.
When used as a number, \&\f(CW\*(C`undef\*(C'\fR is treated as
\f(CW0\fR; when used as a string, it is treated as the empty string,
\f(CW""\fR; and when used as a reference that isn't being assigned to,
it is treated as an error. If you enable warnings, you'll be notified of
an uninitialized value whenever you treat \&\f(CW\*(C`undef\*(C'\fR as a
string or a number. Well, usually. Boolean contexts, such as:
.PP
.Vb 2
\& my $a;
\& if ($a) {}
.Ve
.PP
are exempt from warnings (because they care about truth rather than
definedness).
Operators such as \f(CW\*(C`++\*(C'\fR, \f(CW\*(C`\-\-\*(C'\fR,
\f(CW\*(C`+=\*(C'\fR, \&\f(CW\*(C`\-=\*(C'\fR, and \f(CW\*(C`.=\*(C'\fR,
that operate on undefined left values such as:
.PP
.Vb 2
\& my $a;
\& $a++;
.Ve
.PP
are also always exempt from such warnings.
.PP
If you create lexically scoped variables with \f(CW\*(C`my()\*(C'\fR,
you'll have to make sure your format or subroutine definition is within
the same block scope as the my if you expect to be able to access those
private variables.
.PP
Declaring a subroutine allows a subroutine name to be used as if it were
a list operator from that point forward in the program. You can declare
a subroutine without defining it by saying \f(CW\*(C`sub name\*(C'\fR,
thus:
.PP
.Vb 2
\& sub myname;
\& $me = myname $0 or die "can't get myname";
.Ve
.PP
Note that \fImyname()\fR functions as a list operator, not as a unary
operator; so be careful to use \f(CW\*(C`or\*(C'\fR instead of
\f(CW\*(C`||\*(C'\fR in this case. However, if you were to declare the
subroutine as \f(CW\*(C`sub myname ($)\*(C'\fR, then
\&\f(CW\*(C`myname\*(C'\fR would function as a unary operator, so either
\f(CW\*(C`or\*(C'\fR or \&\f(CW\*(C`||\*(C'\fR would work.
.PP
Subroutine declarations can also be loaded up with the
\f(CW\*(C`require\*(C'\fR statement or both loaded and imported into
your namespace with a \f(CW\*(C`use\*(C'\fR statement. See perlmod for
details on this.
.Sh "Comments"
.IX Subsection "Comments"
Text from a \f(CW"#"\fR character until the end of the line is a
comment, and is ignored. Exceptions include \f(CW"#"\fR inside a string
or regular expression.
.Sh "Simple Statements"
.IX Subsection "Simple Statements"
The only kind of simple statement is an expression evaluated for its
side effects. Every simple statement must be terminated with a
semicolon, unless it is the final statement in a block, in which case
the semicolon is optional. Note that there are some operators like
\f(CW\*(C`eval {}\*(C'\fR and \&\f(CW\*(C`do {}\*(C'\fR that look like
compound statements, but aren't (they're just TERMs in an expression),
and thus need an explicit termination if used as the last item in a
statement.
.Sh "Truth and Falsehood"
.IX Subsection "Truth and Falsehood"
The number 0, the strings \f(CW'0'\fR and \f(CW''\fR, the empty list
\f(CW\*(C`()\*(C'\fR, and \&\f(CW\*(C`undef\*(C'\fR are all false in a
boolean context. All other values are true.
.Sh "Statement Modifiers"
.IX Subsection "Statement Modifiers"
Any simple statement may optionally be followed by a \fI\s-1SINGLE\s0\fR
modifier, just before the terminating semicolon (or block ending). The
possible modifiers are:
.PP
.Vb 5
\& if EXPR
\& unless EXPR
\& while EXPR
\& until EXPR
\& foreach LIST
.Ve
.PP
The \f(CW\*(C`EXPR\*(C'\fR following the modifier is referred to as the
\*(L"condition\*(R". Its truth or falsehood determines how the modifier
will behave.
.PP
\&\f(CW\*(C`if\*(C'\fR executes the statement once \fIif\fR and only if
the condition is true. \f(CW\*(C`unless\*(C'\fR is the opposite, it
executes the statement \fIunless\fR the condition is true (i.e., if the
condition is false).
.PP
.Vb 2
\& print "Basset hounds got long ears" if length $ear >= 10;
\& go_outside() and play() unless $is_raining;
.Ve
.PP
The \f(CW\*(C`foreach\*(C'\fR modifier is an iterator: it executes the
statement once for each item in the \s-1LIST\s0 (with \f(CW$_\fR aliased
to each item in turn).
.PP
.Vb 1
\& print "Hello $_!\en" foreach qw(world Dolly nurse);
.Ve
.PP
\&\f(CW\*(C`while\*(C'\fR repeats the statement \fIwhile\fR the condition
is true. \&\f(CW\*(C`until\*(C'\fR does the opposite, it repeats the
statement \fIuntil\fR the condition is true (or while the condition is
false):
.PP
.Vb 3
\& # Both of these count from 0 to 10.
\& print $i++ while $i <= 10;
\& print $j++ until $j > 10;
.Ve
.PP
The \f(CW\*(C`while\*(C'\fR and \f(CW\*(C`until\*(C'\fR modifiers have
the usual "\f(CW\*(C`while\*(C'\fR loop" semantics (conditional
evaluated first), except when applied to a \&\f(CW\*(C`do\*(C'\fR\-BLOCK
(or to the deprecated \f(CW\*(C`do\*(C'\fR\-SUBROUTINE statement), in
which case the block executes once before the conditional is evaluated.
This is so that you can write loops like:
.PP
.Vb 4
\& do {
\&     $line = <STDIN>;
\&     ...
\& } until $line eq ".\en";
.Ve
.PP
See \*(L"do\*(R" in perlfunc. Note also that the loop control statements
described later will \fI\s-1NOT\s0\fR work in this construct, because
modifiers don't take loop labels. Sorry. You can always put another
block inside of it (for \f(CW\*(C`next\*(C'\fR) or around it (for
\f(CW\*(C`last\*(C'\fR) to do that sort of thing. For
\f(CW\*(C`next\*(C'\fR, just double the braces:
.PP
.Vb 4
\& do {{
\&     next if $x == $y;
\&     # do something here
\& }} until $x++ > $z;
.Ve
.PP
For \f(CW\*(C`last\*(C'\fR, you have to be more elaborate:
.PP
.Vb 6
\& LOOP: {
\&     do {
\&         last if $x = $y**2;
\&         # do something here
\&     } while $x++ <= $z;
\& }
.Ve
.PP
\&\fB\s-1NOTE:\s0\fR The behaviour of a \f(CW\*(C`my\*(C'\fR statement
modified with a statement modifier conditional or loop construct (e.g.
\f(CW\*(C`my $x if ...\*(C'\fR) is \&\fBundefined\fR. The value of the
\f(CW\*(C`my\*(C'\fR variable may be \f(CW\*(C`undef\*(C'\fR, any
previously assigned value, or possibly anything else. Don't rely on it.
Future versions of perl might do something different from the version of
perl you try it out on. Here be dragons.
.Sh "Compound Statements"
.IX Subsection "Compound Statements"
A sequence of statements that defines a scope is called a block.
Sometimes a block is delimited by the file containing it (in the case of
a required file, or the program as a whole), and sometimes a block is
delimited by the extent of a string (in the case of an eval).
.PP
But generally, a block is delimited by curly brackets, also known as
braces. We will call this syntactic construct a \s-1BLOCK\s0.
.PP
The following compound statements may be used to control flow:
.PP
.Vb 9
\& if (EXPR) BLOCK
\& if (EXPR) BLOCK else BLOCK
\& if (EXPR) BLOCK elsif (EXPR) BLOCK ...
else BLOCK
\& LABEL while (EXPR) BLOCK
\& LABEL while (EXPR) BLOCK continue BLOCK
\& LABEL for (EXPR; EXPR; EXPR) BLOCK
\& LABEL foreach VAR (LIST) BLOCK
\& LABEL foreach VAR (LIST) BLOCK continue BLOCK
\& LABEL BLOCK continue BLOCK
.Ve
.PP
Note that, unlike C and Pascal, these are defined in terms of BLOCKs,
not statements. This means that the curly brackets are
\fIrequired\fR\-\-no dangling statements allowed. If you want to write
conditionals without curly brackets there are several other ways to do
it. The following all do the same thing:
.PP
.Vb 5
\& if (!open(FOO)) { die "Can't open $FOO: $!"; }
\& die "Can't open $FOO: $!" unless open(FOO);
\& open(FOO) or die "Can't open $FOO: $!"; # FOO or bust!
\& open(FOO) ? 'hi mom' : die "Can't open $FOO: $!";
\& # a bit exotic, that last one
.Ve
.PP
The \f(CW\*(C`if\*(C'\fR statement is straightforward. Because BLOCKs
are always bounded by curly brackets, there is never any ambiguity about
which \&\f(CW\*(C`if\*(C'\fR an \f(CW\*(C`else\*(C'\fR goes with. If you
use \f(CW\*(C`unless\*(C'\fR in place of \f(CW\*(C`if\*(C'\fR, the sense
of the test is reversed.
.PP
The \f(CW\*(C`while\*(C'\fR statement executes the block as long as the
expression is true (does not evaluate to the null string \f(CW""\fR or
\f(CW0\fR or \f(CW"0"\fR). The \s-1LABEL\s0 is optional, and if present,
consists of an identifier followed by a colon. The \s-1LABEL\s0
identifies the loop for the loop control statements
\f(CW\*(C`next\*(C'\fR, \f(CW\*(C`last\*(C'\fR, and
\f(CW\*(C`redo\*(C'\fR. If the \s-1LABEL\s0 is omitted, the loop control
statement refers to the innermost enclosing loop. This may include
dynamically looking back your call-stack at run time to find the
\s-1LABEL\s0. Such desperate behavior triggers a warning if you use the
\f(CW\*(C`use warnings\*(C'\fR pragma or the \fB\-w\fR flag.
.PP
If there is a \f(CW\*(C`continue\*(C'\fR \s-1BLOCK\s0, it is always
executed just before the conditional is about to be evaluated again.
Thus it can be used to increment a loop variable, even when the loop has
been continued via the \f(CW\*(C`next\*(C'\fR statement.
.Sh "Loop Control"
.IX Subsection "Loop Control"
The \f(CW\*(C`next\*(C'\fR command starts the next iteration of the
loop:
.PP
.Vb 4
\& LINE: while (<STDIN>) {
\&     next LINE if /^#/; # discard comments
\&     ...
\& }
.Ve
.PP
The \f(CW\*(C`last\*(C'\fR command immediately exits the loop in
question. The \&\f(CW\*(C`continue\*(C'\fR block, if any, is not
executed:
.PP
.Vb 4
\& LINE: while (<STDIN>) {
\&     last LINE if /^$/; # exit when done with header
\&     ...
\& }
.Ve
.PP
The \f(CW\*(C`redo\*(C'\fR command restarts the loop block without
evaluating the conditional again. The \f(CW\*(C`continue\*(C'\fR block,
if any, is \fInot\fR executed. This command is normally used by programs
that want to lie to themselves about what was just input.
.PP
For example, when processing a file like \fI/etc/termcap\fR: if your
input lines might end in backslashes to indicate continuation, you want
to skip ahead and get the next record.
.PP
.Vb 8
\& while (<>) {
\&     chomp;
\&     if (s/\e\e$//) {
\&         $_ .= <>;
\&         redo unless eof();
\&     }
\&     # now process $_
\& }
.Ve
.PP
which is Perl short-hand for the more explicitly written version:
.PP
.Vb 8
\& LINE: while (defined($line = <ARGV>)) {
\&     chomp($line);
\&     if ($line =~ s/\e\e$//) {
\&         $line .= <ARGV>;
\&         redo LINE unless eof(); # not eof(ARGV)!
\&     }
\&     # now process $line
\& }
.Ve
.PP
Note that if there were a \f(CW\*(C`continue\*(C'\fR block on the above
code, it would get executed only on lines discarded by the regex (since
redo skips the continue block). A continue block is often used to reset
line counters or \f(CW\*(C`?pat?\*(C'\fR one-time matches:
.PP
.Vb 10
\& # inspired by :1,$g/fred/s//WILMA/
\& while (<>) {
\&     ?(fred)?   && s//WILMA $1/;
\&     ?(barney)? && s//BETTY $1/;
\&     ?(homer)?  && s//MARGE $1/;
\& } continue {
\&     print "$ARGV $.: $_";
\&     close ARGV if eof(); # reset $.
\&     reset if eof();      # reset ?pat?
\& }
.Ve
.PP
If the word \f(CW\*(C`while\*(C'\fR is replaced by the word
\f(CW\*(C`until\*(C'\fR, the sense of the test is reversed, but the
conditional is still tested before the first iteration.
.PP The loop control statements don't work in an \f(CW\*(C`if\*(C'\fR or \f(CW\*(C`unless\*(C'\fR, since they aren't loops. You can double the braces to make them such, though. .PP .Vb 5 \& if (/pattern/) {{ \& last if /fred/; \& next if /barney/; # same effect as "last", but doesn't document as well \& # do something here \& }} .Ve .PP This is caused by the fact that a block by itself acts as a loop that executes once, see \*(L"Basic BLOCKs and Switch Statements\*(R". .PP The form \f(CW\*(C`while/if BLOCK BLOCK\*(C'\fR, available in Perl 4, is no longer available. Replace any occurrence of \f(CW\*(C`if BLOCK\*(C'\fR by \f(CW\*(C`if (do BLOCK)\*(C'\fR. .Sh "For Loops" .IX Subsection "For Loops" Perl's C\-style \f(CW\*(C`for\*(C'\fR loop works like the corresponding \f(CW\*(C`while\*(C'\fR loop; that means that this: .PP .Vb 3 \& for ($i = 1; $i < 10; $i++) { \& ... \& } .Ve .PP is the same as this: .PP .Vb 6 \& $i = 1; \& while ($i < 10) { \& ... \& } continue { \& $i++; \& } .Ve .PP There is one minor difference: if variables are declared with \f(CW\*(C`my\*(C'\fR in the initialization section of the \f(CW\*(C`for\*(C'\fR, the lexical scope of those variables is exactly the \f(CW\*(C`for\*(C'\fR loop (the body of the loop and the control sections). .PP Besides the normal array index looping, \f(CW\*(C`for\*(C'\fR can lend itself to many other interesting applications. Here's one that avoids the problem you get into if you explicitly test for end-of-file on an interactive file descriptor causing your program to appear to hang. .PP .Vb 5 \& $on_a_tty = -t STDIN && -t STDOUT; \& sub prompt { print "yes? " if $on_a_tty } \& for ( prompt(); ; prompt() ) { \& # do something \& } .Ve .PP Using \f(CW\*(C`readline\*(C'\fR (or the operator form, \f(CW\*(C`<EXPR>\*(C'\fR) as the conditional of a \f(CW\*(C`for\*(C'\fR loop is shorthand for the following. This behaviour is the same as a \f(CW\*(C`while\*(C'\fR loop conditional.
.PP .Vb 3 \& for ( prompt(); defined( $_ = <STDIN> ); prompt() ) { \& # do something \& } .Ve .Sh "Foreach Loops" .IX Subsection "Foreach Loops" The \f(CW\*(C`foreach\*(C'\fR loop iterates over a normal list value and sets the variable \s-1VAR\s0 to be each element of the list in turn. If the variable is preceded with the keyword \f(CW\*(C`my\*(C'\fR, then it is lexically scoped, and is therefore visible only within the loop. Otherwise, the variable is implicitly local to the loop and regains its former value upon exiting the loop. If the variable was previously declared with \f(CW\*(C`my\*(C'\fR, it uses that variable instead of the global one, but it's still localized to the loop. This implicit localisation occurs \fIonly\fR in a \f(CW\*(C`foreach\*(C'\fR loop. .PP The \f(CW\*(C`foreach\*(C'\fR keyword is actually a synonym for the \f(CW\*(C`for\*(C'\fR keyword, so you can use \f(CW\*(C`foreach\*(C'\fR for readability or \f(CW\*(C`for\*(C'\fR for brevity. (Or because the Bourne shell is more familiar to you than \fIcsh\fR, so writing \f(CW\*(C`for\*(C'\fR comes more naturally.) If \s-1VAR\s0 is omitted, \f(CW$_\fR is set to each value. .PP If any element of \s-1LIST\s0 is an lvalue, you can modify it by modifying \&\s-1VAR\s0 inside the loop. Conversely, if any element of \s-1LIST\s0 is \s-1NOT\s0 an lvalue, any attempt to modify that element will fail. In other words, the \f(CW\*(C`foreach\*(C'\fR loop index variable is an implicit alias for each item in the list that you're looping over. .PP If any part of \s-1LIST\s0 is an array, \f(CW\*(C`foreach\*(C'\fR will get very confused if you add or remove elements within the loop body, for example with \&\f(CW\*(C`splice\*(C'\fR. So don't do that. .PP \&\f(CW\*(C`foreach\*(C'\fR probably won't do what you expect if \s-1VAR\s0 is a tied or other special variable. Don't do that either.
.PP Examples: .PP .Vb 1 \& for (@ary) { s/foo/bar/ } .Ve .PP .Vb 3 \& for my $elem (@elements) { \& $elem *= 2; \& } .Ve .PP .Vb 3 \& for $count (10,9,8,7,6,5,4,3,2,1,'BOOM') { \& print $count, "\en"; sleep(1); \& } .Ve .PP .Vb 1 \& for (1..15) { print "Merry Christmas\en"; } .Ve .PP .Vb 3 \& foreach $item (split(/:[\e\e\en:]*/, $ENV{TERMCAP})) { \& print "Item: $item\en"; \& } .Ve .PP Here's how a C programmer might code up a particular algorithm in Perl: .PP .Vb 9 \& for (my $i = 0; $i < @ary1; $i++) { \& for (my $j = 0; $j < @ary2; $j++) { \& if ($ary1[$i] > $ary2[$j]) { \& last; # can't go to outer :-( \& } \& $ary1[$i] += $ary2[$j]; \& } \& # this is where that last takes me \& } .Ve .PP Whereas here's how a Perl programmer more comfortable with the idiom might do it: .PP .Vb 6 \& OUTER: for my $wid (@ary1) { \& INNER: for my $jet (@ary2) { \& next OUTER if $wid > $jet; \& $wid += $jet; \& } \& } .Ve .PP See how much easier this is? It's cleaner, safer, and faster. It's cleaner because it's less noisy. It's safer because if code gets added between the inner and outer loops later on, the new code won't be accidentally executed. The \f(CW\*(C`next\*(C'\fR explicitly iterates the other loop rather than merely terminating the inner one. And it's faster because Perl executes a \f(CW\*(C`foreach\*(C'\fR statement more rapidly than it would the equivalent \f(CW\*(C`for\*(C'\fR loop. .Sh "Basic BLOCKs and Switch Statements" .IX Subsection "Basic BLOCKs and Switch Statements" A \s-1BLOCK\s0 by itself (labeled or not) is semantically equivalent to a loop that executes once. Thus you can use any of the loop control statements in it to leave or restart the block. (Note that this is \&\fI\s-1NOT\s0\fR true in \f(CW\*(C`eval{}\*(C'\fR, \f(CW\*(C`sub{}\*(C'\fR, or contrary to popular belief \&\f(CW\*(C`do{}\*(C'\fR blocks, which do \fI\s-1NOT\s0\fR count as loops.) The \f(CW\*(C`continue\*(C'\fR block is optional. 
.PP The \s-1BLOCK\s0 construct is particularly nice for doing case structures. .PP .Vb 6 \& SWITCH: { \& if (/^abc/) { $abc = 1; last SWITCH; } \& if (/^def/) { $def = 1; last SWITCH; } \& if (/^xyz/) { $xyz = 1; last SWITCH; } \& $nothing = 1; \& } .Ve .PP There is no official \f(CW\*(C`switch\*(C'\fR statement in Perl, because there are already several ways to write the equivalent. .PP However, starting from Perl 5.8 to get switch and case one can use the Switch extension and say: .PP .Vb 1 \& use Switch; .Ve .PP after which one has switch and case. It is not as fast as it could be because it's not really part of the language (it's done using source filters) but it is available, and it's very flexible. .PP In addition to the above \s-1BLOCK\s0 construct, you could write .PP .Vb 6 \& SWITCH: { \& $abc = 1, last SWITCH if /^abc/; \& $def = 1, last SWITCH if /^def/; \& $xyz = 1, last SWITCH if /^xyz/; \& $nothing = 1; \& } .Ve .PP (That's actually not as strange as it looks once you realize that you can use loop control \*(L"operators\*(R" within an expression. That's just the binary comma operator in scalar context. See \*(L"Comma Operator\*(R" in perlop.) 
.PP or .PP .Vb 6 \& SWITCH: { \& /^abc/ && do { $abc = 1; last SWITCH; }; \& /^def/ && do { $def = 1; last SWITCH; }; \& /^xyz/ && do { $xyz = 1; last SWITCH; }; \& $nothing = 1; \& } .Ve .PP or formatted so it stands out more as a \*(L"proper\*(R" \f(CW\*(C`switch\*(C'\fR statement: .PP .Vb 5 \& SWITCH: { \& /^abc/ && do { \& $abc = 1; \& last SWITCH; \& }; .Ve .PP .Vb 4 \& /^def/ && do { \& $def = 1; \& last SWITCH; \& }; .Ve .PP .Vb 6 \& /^xyz/ && do { \& $xyz = 1; \& last SWITCH; \& }; \& $nothing = 1; \& } .Ve .PP or .PP .Vb 6 \& SWITCH: { \& /^abc/ and $abc = 1, last SWITCH; \& /^def/ and $def = 1, last SWITCH; \& /^xyz/ and $xyz = 1, last SWITCH; \& $nothing = 1; \& } .Ve .PP or even, horrors, .PP .Vb 8 \& if (/^abc/) \& { $abc = 1 } \& elsif (/^def/) \& { $def = 1 } \& elsif (/^xyz/) \& { $xyz = 1 } \& else \& { $nothing = 1 } .Ve .PP A common idiom for a \f(CW\*(C`switch\*(C'\fR statement is to use \f(CW\*(C`foreach\*(C'\fR's aliasing to make a temporary assignment to \f(CW$_\fR for convenient matching: .PP .Vb 6 \& SWITCH: for ($where) { \& /In Card Names/ && do { push @flags, '-e'; last; }; \& /Anywhere/ && do { push @flags, '-h'; last; }; \& /In Rulings/ && do { last; }; \& die "unknown value for form variable where: `$where'"; \& } .Ve .PP Another interesting approach to a switch statement is arrange for a \f(CW\*(C`do\*(C'\fR block to return the proper value: .PP .Vb 8 \& $amode = do { \& if ($flag & O_RDONLY) { "r" } # XXX: isn't this 0? \& elsif ($flag & O_WRONLY) { ($flag & O_APPEND) ? "a" : "w" } \& elsif ($flag & O_RDWR) { \& if ($flag & O_CREAT) { "w+" } \& else { ($flag & O_APPEND) ? "a+" : "r+" } \& } \& }; .Ve .PP Or .PP .Vb 5 \& print do { \& ($flags & O_WRONLY) ? "write-only" : \& ($flags & O_RDWR) ? 
"read-write" : \& "read-only"; \& }; .Ve .PP Or if you are certain that all the \f(CW\*(C`&&\*(C'\fR clauses are true, you can use something like this, which \*(L"switches\*(R" on the value of the \&\f(CW\*(C`HTTP_USER_AGENT\*(C'\fR environment variable. .PP .Vb 13 \& #!/usr/bin/perl \& # pick out jargon file page based on browser \& \e015\e012\e015\e012"; .Ve .PP That kind of switch statement only works when you know the \f(CW\*(C`&&\*(C'\fR clauses will be true. If you don't, the previous \f(CW\*(C`?:\*(C'\fR example should be used. .PP You might also consider writing a hash of subroutine references instead of synthesizing a \f(CW\*(C`switch\*(C'\fR statement. .Sh "Goto" .IX Subsection "Goto" Although not for the faint of heart, Perl does support a \f(CW\*(C`goto\*(C'\fR statement. There are three forms: \f(CW\*(C`goto\*(C'\fR\-LABEL, \f(CW\*(C`goto\*(C'\fR\-EXPR, and \&\f(CW\*(C`goto\*(C'\fR\-&NAME. A loop's \s-1LABEL\s0 is not actually a valid target for a \f(CW\*(C`goto\*(C'\fR; it's just the name of the loop. .PP The \f(CW\*(C`goto\*(C'\fR\-LABEL form finds the statement labeled with \s-1LABEL\s0 and resumes execution there. It may not be used to go into any construct that requires initialization, such as a subroutine or a \f(CW\*(C`foreach\*(C'\fR loop. It also can't be used to go into a construct that is optimized away. It can be used to go almost anywhere else within the dynamic scope, including out of subroutines, but it's usually better to use some other construct such as \f(CW\*(C`last\*(C'\fR or \f(CW\*(C`die\*(C'\fR. The author of Perl has never felt the need to use this form of \f(CW\*(C`goto\*(C'\fR (in Perl, that is\*(--C is another matter). .PP The \f(CW\*(C`goto\*(C'\fR\-EXPR form expects a label name, whose scope will be resolved dynamically. 
This allows for computed \f(CW\*(C`goto\*(C'\fRs per \s-1FORTRAN\s0, but isn't necessarily recommended if you're optimizing for maintainability: .PP .Vb 1 \& goto(("FOO", "BAR", "GLARCH")[$i]); .Ve .PP The \f(CW\*(C`goto\*(C'\fR\-&NAME form is highly magical, and substitutes a call to the named subroutine for the currently running subroutine. This is used by \&\f(CW\*(C`AUTOLOAD()\*(C'\fR subroutines that wish to load another subroutine and then pretend that the other subroutine had been called in the first place (except that any modifications to \f(CW@_\fR in the current subroutine are propagated to the other subroutine.) After the \f(CW\*(C`goto\*(C'\fR, not even \f(CW\*(C`caller()\*(C'\fR will be able to tell that this routine was called first. .PP In almost all cases like this, it's usually a far, far better idea to use the structured control flow mechanisms of \f(CW\*(C`next\*(C'\fR, \f(CW\*(C`last\*(C'\fR, or \f(CW\*(C`redo\*(C'\fR instead of resorting to a \f(CW\*(C`goto\*(C'\fR. For certain applications, the catch and throw pair of \&\f(CW\*(C`eval{}\*(C'\fR and \fIdie()\fR for exception processing can also be a prudent approach. .Sh "PODs: Embedded Documentation" .IX Subsection "PODs: Embedded Documentation" Perl has a mechanism for intermixing documentation with source code. While it's expecting the beginning of a new statement, if the compiler encounters a line that begins with an equal sign and a word, like this .PP .Vb 1 \& =head1 Here There Be Pods! .Ve .PP Then that text and all remaining text up through and including a line beginning with \f(CW\*(C`=cut\*(C'\fR will be ignored. The format of the intervening text is described in perlpod. .PP This allows you to intermix your source code and your documentation text freely, as in .PP .Vb 1 \& =item snazzle($) .Ve .PP .Vb 3 \& The snazzle() function will behave in the most spectacular \& form that you can possibly imagine, not even excepting \& cybernetic pyrotechnics. 
.Ve .PP .Vb 1 \& =cut back to the compiler, nuff of this pod stuff! .Ve .PP .Vb 4 \& sub snazzle($) { \& my $thingie = shift; \& ......... \& } .Ve .PP. .PP .Vb 5 \& $a=3; \& =secret stuff \& warn "Neither POD nor CODE!?" \& =cut back \& print "got $a\en"; .Ve .PP You probably shouldn't rely upon the \f(CW\*(C`warn()\*(C'\fR being podded out forever. Not all pod translators are well-behaved in this regard, and perhaps the compiler will become pickier. .PP One may also use pod directives to quickly comment out a section of code. .Sh "Plain Old Comments (Not!)" .IX Subsection "Plain Old Comments (Not!)" Perl can process line directives, much like the C preprocessor. Using this, one can control Perl's idea of filenames and line numbers in error or warning messages (especially for strings that are processed with \f(CW\*(C`eval()\*(C'\fR). The syntax for this mechanism is the same as for most C preprocessors: it matches the regular expression .PP .Vb 5 \& # example: '# line 42 "new_filename.plx"' \& /^\e# \es* \& line \es+ (\ed+) \es* \& (?:\es("?)([^"]+)\e2)? \es* \& $/x .Ve .PP with \f(CW$1\fR being the line number for the next line, and \f(CW$3\fR being the optional filename (specified with or without quotes). .PP. .PP Here are some examples that you should be able to type into your command shell: .PP .Vb 6 \& % perl \& # line 200 "bzzzt" \& # the `#' on the previous line must be the first char on line \& die 'foo'; \& __END__ \& foo at bzzzt line 201. .Ve .PP .Vb 5 \& % perl \& # line 200 "bzzzt" \& eval qq[\en#line 2001 ""\endie 'foo']; print $@; \& __END__ \& foo at - line 2001. .Ve .PP .Vb 4 \& % perl \& eval qq[\en#line 200 "foo bar"\endie 'foo']; print $@; \& __END__ \& foo at foo bar line 200. .Ve .PP .Vb 6 \& % perl \& # line 345 "goop" \& eval "\en#line " . __LINE__ . ' "' . __FILE__ ."\e"\endie 'foo'"; \& print $@; \& __END__ \& foo at goop line 345. .Ve
http://www.fiveanddime.net/ss/man-unformatted/man1/perlsyn.1
Pyo is an extremely powerful sound synthesis and processing Python module. It has excellent documentation, but very little in the way of tutorial material. So in this article I'm going to explain how to make a simple midi controlled synthesiser. This covers: - Getting and using MIDI input - Tables - ADSRs - Oscillators - GUIs If you haven't used pyo before, have a skim over the official introductory tutorial. For this tutorial you will need: - Python with Pyo installed - A MIDI input - Audio out (dur) Making the Synth Start with the obligatory import statement: from pyo import * Setting Up the Server The convention seems to be to store your server in a variable called s, and I see no reason not to stick to that. # Set Up Server s = Server() s.setMidiInputDevice(2) # Must be called before s.boot() s.boot() s.start() The server must be set up before it is boot()ed, and s.boot() must be called before the audio processing chain is defined. Python starts engaging with the audio drivers when s.start() is called. Both of these tasks take a couple of seconds to run, so I tend to call them before doing anything else. Strictly speaking you can call s.start() at the end of the script and it makes no difference. Which MIDI Device? Calling pm_get_input_devices() in an interactive shell will give you a numbered list of MIDI inputs. The int you pass to s.setMidiInputDevice() is simply the number of the device you're using. For example, I'm using a Novation ReMote SL Compact.
pm_get_input_devices() gives me this output: >>> pm_get_input_devices() (['IAC Driver Bus 1', 'IAC Driver IAC Bus 2', 'ReMOTE SL Compact Port 1', 'ReMOTE SL Compact Port 2', 'ReMOTE SL Compact Port 3'], [0, 1, 2, 3, 4]) 0: IN, name: IAC Driver Bus 1, interface: CoreMIDI 1: IN, name: IAC Driver IAC Bus 2, interface: CoreMIDI 2: IN, name: ReMOTE SL Compact Port 1, interface: CoreMIDI 3: IN, name: ReMOTE SL Compact Port 2, interface: CoreMIDI 4: IN, name: ReMOTE SL Compact Port 3, interface: CoreMIDI And I know I want MIDI from ReMOTE SL Compact Port 1, so I call: s.setMidiInputDevice(2) Set Up MIDI Receiving MIDI input is very simple: # Set Up MIDI midi = Notein() # The defaults are more than adequate The midi variable is an object that exposes two streams, pitch and velocity. They can be accessed via [] notation, e.g: midi['pitch']. At any given moment, the value of midi['pitch'] is zero or more integers representing midi note values. Pyo works using a whole load of these real-time streams, as you’ll see later. The Oscillator For the purposes of understanding, we need to take a leap forward to the end of the script and have a look at the object that’s actually producing the sound, so you understand what it needs to be given to function properly. The class in question is Osc, and to function it requires (at minimum) three variables: - A frequency - An amplitude (volume) - A waveform So, we need to generate a waveform for the Osc to play, and from the MIDI input we need to generate streams of frequencies and amplitudes at which to play that waveform. Handling Pitch If you try feeding the values from midi['pitch'] into an Osc, you get some seriously microtonal music. This is because the values provided by midi['pitch'] are MIDI note numbers, but the Osc object we’ll be using to generate sound works with frequencies. 
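Under the hood, the translation we need is just the 12-tone equal temperament formula, with A4 (440 Hz) sitting at MIDI note 69. Here's a standalone sketch of the maths (plain Python, no pyo required) before we let pyo's own object do the work:

```python
def midi_to_freq(note, a4=440.0):
    """Convert a MIDI note number to a frequency in Hz using
    12-tone equal temperament, with A4 = 440 Hz at MIDI note 69."""
    return a4 * 2 ** ((note - 69) / 12)

print(midi_to_freq(69))               # 440.0 (A4)
print(round(midi_to_freq(60), 2))     # 261.63 (middle C)
print(round(midi_to_freq(81), 1))     # 880.0 (A5, one octave up)
```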
We need some way of translating the MIDI values into pitches, and the MToF (MIDI to Frequency) object does just that: # Pitch pitch = MToF(midi['pitch']) So we've fed the pitch stream into this object, and it's producing another stream representing exactly the same information, but encoded as frequencies instead of midi note numbers. Perfect! Handling velocity and preventing everlasting notes A handy way of getting extremely easy control of the amplitude of your sounds over time is to use the built-in MidiAdsr class. We'll stick to the defaults for the moment, so all we need to do is feed in the midi data: # ADSR amp = MidiAdsr(midi['velocity']) Given the stream of raw velocities, we get a stream of ADSR controlled amplitudes (values from 0-1) that we can feed into the Oscillator. Setting Up the Oscillator So, we have a pitch stream and an amp stream. The last bit of data we need is a waveform for the synth to play. Without going into too much detail, we need a pyo table. I'll explain more about these in a future article, for the moment just use the built in square wave: # Table wave = SquareTable() Now for the actual oscillator: # Osc osc = Osc(wave, freq=pitch, mul=amp) And that's it. If you're following along in an interactive shell, run osc.out() and hit some notes on your keyboard — you should get some sound! (If you don't, you've probably missed something, but don't worry — there's a full code listing at the bottom) Extras FX Pyo has some awesome effects built in. They take an audio source (like the osc object we've made) as an input and act as an audio source themselves. Try out the reverb: verb = Freeverb(osc).out() GUIs Pyo has GUIs built in for a great many of its objects. The one you'll use the most is: s.gui(locals()) Essential for non-interactive scripts, this prevents the script from quitting, provides a useful interpreter, quick record feature and an easy way to start and stop the server.
Calling ctrl() on some objects will pop up a GUI for controlling their parameters, e.g. try: verb.ctrl() And twiddle the sliders whilst you’re playing. Should look like this: The other graphical display that is indispensable whilst using pyo tables is view(). Called on any table object, it produces a graphical representation of the waveform — brilliant when the docs say things I don’t understand like ‘Chebyshev polynomials of the first kind’. Try: wave.view() It should produce something like this: Putting it all together That was all a bit disjointed, so here’s the entire script, with comments: from pyo import * # Set Up Server s = Server() s.setMidiInputDevice(2) # Change as required s.boot() s.start() # Set Up MIDI midi = Notein() # ADSR amp = MidiAdsr(midi['velocity']) # Pitch pitch = MToF(midi['pitch']) # Table wave = SquareTable() # Osc osc = Osc(wave, freq=pitch, mul=amp) # FX verb = Freeverb(osc).out() ### Go osc.out() s.gui(locals()) # Prevents immediate script termination. And here’s a diagram of exactly what’s happening, just in case I’ve confused you: Names of objects are: ClassName (variable name I’ve used) I hope you have great fun using pyo! I’ll be explaining how to do more things as I learn more myself. Please direct any corrections, comments, issues, cool tricks etc to barnaby@waterpigs.co.uk
http://waterpigs.co.uk/articles/simple-pyo-synth/
Created on 2013-12-09 04:24 by JBernardo, last changed 2015-04-15 20:16 by steve.dower. This issue is now closed. From the docs for built-in function "round": "If ndigits is omitted, it defaults to zero" () But, the only way to get an integer from `round` is by not having the second argument (ndigits): >>> round(3.5) 4 >>> round(3.5, 1) 3.5 >>> round(3.5, 0) 4.0 >>> round(3.5, -1) 0.0 >>> round(3.5, None) Traceback (most recent call last): File "<pyshell#6>", line 1, in <module> round(3.5, None) TypeError: 'NoneType' object cannot be interpreted as an integer Either the docs are wrong or the behavior is wrong. I think it's easier to fix the former... But also there should be a way to make round return an integer (e.g. passing `None` as 2nd argument) Here is the preliminary patch. After patch: round(1.23, 0) => 1 not 1.0 round(4.67, 0) => 5 not 5.0 > After patch: > round(1.23, 0) => 1 not 1.0 > round(4.67, 0) => 5 not 5.0 Please no! Two-argument round should continue to return a float in all cases. The docs should be fixed. > But also there should be a way to make round return an integer I don't understand. There's already a way to make round return an integer: don't pass a second argument. Okay, here is the patch to fix the doc. Thanks. It's inaccurate to say that a float is returned in general, though: for most builtin numeric types, what's returned has the same type as its input. So rounding a Decimal to two places gives a Decimal on output, etc. (That's already explained in the next paragraph.) How about just removing the mention of 'defaults to zero', and say something like: "if ndigits is omitted, returns the nearest int to its input" > I don't understand. There's already a way to make round return an integer: don't pass a second argument. If this function were to be written in Python, it would be something like: def round(number, ndigits=0): ... or def round(number, ndigits=None): ... 
But in C you can forge the signature to whatever you want and parse the arguments accordingly. In Python there's always a way to get the default behavior by passing the default argument, but in C it may not exist (in this case `PyObject *o_ndigits = NULL;`) So, I propose the default value being `None`, so this behavior can be achieved using a second argument. Here is the updated doc fix. Anyway, why not round(1.2) -> 1.0 in the first place? Just curious. > Anyway, why not round(1.2) -> 1.0 in the first place? Just curious. It was the behavior on Python 2.x, but somehow when they changed the rounding method to nearest even number this happened... I think it's too late to change back the return type. Do you have any real-world motivating use case for None? Not just theoretical consistency with what a Python version of the function would look like. (I'm not saying we shouldn't consider supporting None as a low priority change, I'm just trying to figure out where you'd ever need it in the real world.) Not really. Just consistency: For the same reason ' foo '.strip(None) works... To avoid special casing the function call when you already have a variable to hold the argument. Right, but None in that case has real world utility, since you might have the the value in a variable. But you are hardly going to hold int-or-not in a variable, especially a variable that is really about the number of places in the float result... > Anyway, why not round(1.2) -> 1.0 in the first place? Just curious. All this changed as part of PEP 3141. I wasn't watching Python 3 development closely back then, but I *think* at least part of the motivation was to provide a way to get away from the use of `int` to truncate a float to its integer part: the argument goes that a simple type conversion shouldn't throw away information, and that if you want a transformation from float to int that throws away information you should ask for it explicitly. 
So `math.trunc` was born as the preferred way to truncate a float to an int, and `math.floor`, `math.ceil` and `round` became alternative float -> int conversion methods. That entailed those functions returning ints. <off-topic> In the case of `math.floor` and `math.ceil` at least, I think this is regrettable. There are plenty of places where you just want a float -> float floor or ceiling, and Python no longer has a cheap operation for that available: floor as a float-to-float operation is cheap; floor as a float-to-long-integer operation is significantly more costly. In the case of `round`, we still have `round(x, 0)` available as a cheap float->float conversion, so it's less of a problem. And I hardly ever use `trunc`, so I don't care about that case. </off-topic> In case we want to add consistency with None ndigits, here is the patch adding support for None value for ndigits parameter. This one looks like a low-risk addition but since Python 3.4 is in beta phase.... The docstring is better than the current doc as it says that the default precision is 0, without calling that the default for ndigits. ''' round(number[, ndigits]) -> number Round a number to a given precision in decimal digits (default 0 digits). This returns an int when called with one argument, otherwise the same type as the number. ndigits may be negative.''' --- Sidenote: To write round in Python, one could easily write _sentinel = object() def round(number, ndigits=_sentinel): if ndigits is _sentinel: ... which makes ndigits positional-or-keyword, and almost optional-with-no-default, as _sentinel is close enough to being a default that cannot be passed in. This is a standard idiom. One who was really picky about having no default could use def round(number, *args, **kwds): ... and look for len(args) == 1 xor kwds.keys() == {'ndigits'}. New changeset e3cc75b1000b by Steve Dower in branch 'default': Issue 19933: Provide default argument for ndigits in round. Patch by Vajrasky Kok.
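The behaviour the thread settles on can be checked directly in a recent Python 3 (the None support landed with the changeset above; treating 3.5+ as the cutoff is my assumption from the issue's close date):

```python
from decimal import Decimal

# One-argument round and round(x, None) both return an int;
# an explicit ndigits, even 0, keeps the input's type.
print(round(3.5), round(3.5, None))   # 4 4  (ints; ties go to the even neighbour)
print(round(3.5, 0))                  # 4.0  (float)
print(round(2.5))                     # 2    (banker's rounding, not half-up)

# Two-argument round preserves the input type in general: a Decimal
# in gives a Decimal out. The exact decimal tie rounds half-even to
# 2.68, while the binary float 2.675 sits just below the tie.
print(round(2.675, 2))                # 2.67
print(round(Decimal("2.675"), 2))     # 2.68
```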
https://bugs.python.org/issue19933
Set a custom projection matrix. If you change this matrix, the camera no longer updates its rendering based on its fieldOfView. This lasts until you call ResetProjectionMatrix. Use a custom projection only if you really need a non-standard projection. This property is used by Unity's water rendering to set up an oblique projection matrix. Using custom projections requires good knowledge of transformation and projection matrices. Note that the projection matrix passed to shaders can be modified depending on platform and other state. If you need to calculate the projection matrix for shader use from the camera's projection, use GL.GetGPUProjectionMatrix. See Also: Camera.nonJitteredProjectionMatrix // Make camera wobble in a funky way! using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public Matrix4x4 originalProjection; Camera cam; void Start() { cam = GetComponent<Camera>(); originalProjection = cam.projectionMatrix; } void Update() { Matrix4x4 p = originalProjection; p.m01 += Mathf.Sin(Time.time * 1.2F) * 0.1F; p.m10 += Mathf.Sin(Time.time * 1.5F) * 0.1F; cam.projectionMatrix = p; } } // Set an off-center projection, where perspective's vanishing // point is not necessarily in the center of the screen. // // left/right/top/bottom define near plane size, i.e. // how offset are corners of camera's near plane. // Tweak the values and you can see camera's frustum change.
using UnityEngine; using System.Collections; [ExecuteInEditMode] public class ExampleClass : MonoBehaviour { public float left = -0.2F; public float right = 0.2F; public float top = 0.2F; public float bottom = -0.2F; void LateUpdate() { Camera cam = Camera.main; Matrix4x4 m = PerspectiveOffCenter(left, right, bottom, top, cam.nearClipPlane, cam.farClipPlane); cam.projectionMatrix = m; } static Matrix4x4 PerspectiveOffCenter(float left, float right, float bottom, float top, float near, float far) { float x = 2.0F * near / (right - left); float y = 2.0F * near / (top - bottom); float a = (right + left) / (right - left); float b = (top + bottom) / (top - bottom); float c = -(far + near) / (far - near); float d = -(2.0F * far * near) / (far - near); float e = -1.0F; Matrix4x4 m = new Matrix4x4(); m[0, 0] = x; m[0, 1] = 0; m[0, 2] = a; m[0, 3] = 0; m[1, 0] = 0; m[1, 1] = y; m[1, 2] = b; m[1, 3] = 0; m[2, 0] = 0; m[2, 1] = 0; m[2, 2] = c; m[2, 3] = d; m[3, 0] = 0; m[3, 1] = 0; m[3, 2] = e; m[3, 3] = 0; return m; } }
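The PerspectiveOffCenter helper is just the standard OpenGL-style frustum matrix written out by hand, so its algebra can be sanity-checked outside Unity. A small sketch in plain Python, mirroring the C# arithmetic rather than calling any Unity API:

```python
def perspective_off_center(left, right, bottom, top, near, far):
    """Row-major frustum matrix, term for term the same as the C# example."""
    x = 2.0 * near / (right - left)
    y = 2.0 * near / (top - bottom)
    a = (right + left) / (right - left)
    b = (top + bottom) / (top - bottom)
    c = -(far + near) / (far - near)
    d = -(2.0 * far * near) / (far - near)
    return [[x,   0.0, a,    0.0],
            [0.0, y,   b,    0.0],
            [0.0, 0.0, c,    d],
            [0.0, 0.0, -1.0, 0.0]]

# With symmetric near-plane bounds the off-center terms vanish
# (a == b == 0), i.e. the vanishing point is centered on screen.
m = perspective_off_center(-0.2, 0.2, -0.2, 0.2, 0.3, 1000.0)
print(m[0][2], m[1][2])   # 0.0 0.0
```

Making `left`/`right` (or `bottom`/`top`) asymmetric shifts `a` (or `b`) away from zero, which is exactly what moves the vanishing point off the screen center in the C# example.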
https://docs.unity3d.com/kr/2019.1/ScriptReference/Camera-projectionMatrix.html
Do you think it's a good idea to have all of this persistence information polluting the domain? What happens when we need to persist our model to two different data sources?

@Greg As I point out above the XML option will take the pain of polluting domain classes away for you. My gut says that I would prefer to use XML mappings over attributes, but I can appreciate the arguments of people who resist the growth of XML configuration. To be frank my major reason for using attributes here was to help those people who had thought about marking up their own classes, but had only seen the designer generated code make the steps into tagging their own model more easily. That said I have begun to drift to using attributes with NHibernate of late so there must be something in it.

Doesn't the fact that System.Data.Linq.Table<TEntity> is a sealed class prevent it from ever being able to implement IQuerySource<TEntity>?

@Justin You don't need to modify Table<T> just use an adapter that does implement IQuerySource<T>. I usually implement in terms of Table<T> like so: public class TableQuery<T> : IQuerySource<T> where T : class { public Table<T> dataTable; public IQueryable<T> queryableTable; ... 
public TableQuery(Table<T> dataTable) { this.dataTable = dataTable; queryableTable = dataTable as IQueryable<T>; } }

Linq to SQL is great. You can open up your Db schema, drag some tables in, and in no time you have a

What exactly is an ISession? Did I miss an article in this series? Is this part of LINQ to SQL?

In the 1990s I coded on a few systems where the architecture was that we attached database functionality
http://codebetter.com/blogs/ian_cooper/archive/2008/02/17/architecting-linq-to-sql-applications-part-5.aspx
On Fri, 13 Oct 2006, Philippe Canal wrote:
> > T->UnbinnedFit("myf","preshowerE:showerE","(preshowerE>450.)*electronP");
> > does not help either.
>
> How does it fail?

It does not find the optimum, and instead wanders off to infinity. This is probably because the function "myf" is not normalized (it is not a probability distribution).

Below is a very simplified but self-contained test case with which to experiment. UnbinnedFit as written really seems to be for a different problem, involving EVENT distributions, not variable correlations. Just as when one is looking to fit the correlation between TWO variables, we typically make a 2-D histogram, then using either ProfileX() or FitSlicesY() make a 1-D histogram that can be simply fit.

Thanks,
Rob Feuerbach

#ifndef __CINT__
#include "TTree.h"
#include "TFile.h"
#include "TRandom.h"
#include "TF2.h"
#include <iostream>
using namespace std;
#endif

TTree *tree = 0;
TF2 *myf = 0;

void build_tree() {
  struct { Float_t psE, shE, elP; } data;
  TFile *nf = new TFile("testit.root","RECREATE");
  tree = new TTree("T","testing");
  tree->Branch("data",&data,"psE:shE:elP");
  for (int i=0; i<5000; i++) {
    data.psE = gRandom->Uniform(0.,1000.);
    data.shE = gRandom->Uniform(0.,1000.);
    data.elP = .2*data.psE+.5*data.shE;
    data.elP += 20*gRandom->Gaus(); // noise factor
    tree->Fill();
  }
}

void fit_tree() {
  // now try to fit the tree
  if (!myf) myf = new TF2("myf","[0]*x+[1]*y",0.,3000.,0.,3000.);
  myf->SetParameters(.1,.1);
  tree->UnbinnedFit("myf","psE:shE","elP");
}

void testit() {
  build_tree();
  fit_tree();
  tree->Draw("elP:.2*psE+.5*shE"); // verify the tree is good
}

> Cheers,
> Philippe.
>
> PS. We had no plans on adding support for [0],[1], etc.
> since the intended API for what you are trying to do is TTree::UnbinnedFit.
> > > -----Original Message----- > From: Robert Feuerbach [mailto:feuerbac_at_jlab.org] > Sent: Thursday, October 12, 2006 9:36 PM > To: Philippe Canal > Cc: 'Rene Brun'; roottalk_at_pcroot.cern.ch > Subject: RE: [ROOT] Using Parameters in TTreeFormula's? > > > Hi Philippe, > > Thank you for the suggestion. When I try this, this is what > happens: > > analyzer [8] TF2 *myf = new TF2("myf","[0]*x+[1]*y",0.,3000.,0.,3000.) > analyzer [9] myf->SetParameters(.1,.1) > analyzer [10] > T->UnbinnedFit("myf","electronP:preshowerE:showerE","preshowerE>450.") > Error in <TTreePlayer::UnbinnedFit>: Function dimension=2 not equal to > expression dimension=3 > (Long64_t)0 > > Playing with: > T->UnbinnedFit("myf","preshowerE:showerE","(preshowerE>450.)*electronP"); > > does not help either. Doing > > T->Fit("myf","preshowerE:showerE","(preshowerE>450.)*electronP","W") > > (ignore the "error bars" on the temporary histogram being fitted) > does sort of work as long as the 2-d histogram being created is > binned finely enough that no bin contains more than one event. > > I do have a solution for this case, it would just be nice to be > able to have a general solution that is not restricted to a > handful of variables and parameters with a somewhat convoluted > approach. This is why I'm interested in having the Parameters > work in TTreeFormula's. > > Rob > > > > On Thu, 12 Oct 2006, Philippe Canal wrote: > > > Date: Thu, 12 Oct 2006 16:40:53 -0500 > > From: Philippe Canal <pcanal_at_fnal.gov> > > To: 'Robert Feuerbach' <feuerbac_at_jlab.org>, 'Rene Brun' > <Rene.Brun_at_cern.ch> > > Subject: RE: [ROOT] Using Parameters in TTreeFormula's? > > > > Hi Robert, > > > > > I do NOT want to 'fit' to the population's distribution like the > > > example does -- the function I'm trying to use (TF1 or whatever) > > > should NOT return something proportional to the number of events > > > in a bin/region. 
Instead, the evaluation is similar to TGraph,
> > > where the Chi2 function is built directly from the X and Y
> > > points.
> >
> > As far as I know/understand this is __exactly__ (except for
> > actually creating the intermediary TGraph2D object) what
> > TTree::UnbinnedFit does ....
> >
> > Seriously, it sounds like what you exactly want is:
> >
> > TF2 *myf = new TF2("myf","[0]*x+[1]*y",0.,3000.,0.,3000.);
> > myf->SetParameters(.1,.1);
> > tree->UnbinnedFit("myf","electronP:preshowerE:showerE",selection);
> >
> > Cheers,
> > Philippe.

/***************************************************
* Dr. Robert J. Feuerbach feuerbac_at_jlab.org *
* 12000 Jefferson Avenue CEBAF Center A120 *
* Suite 4 Office: (757) 269-7254 *
* Newport News, VA 23606 Page: 584-7254 *
* Fax: (757) 269-5703 Mailstop 12H3 *
***************************************************/

Received on Fri Oct 13 2006 - 21:24:55 MEST
This archive was generated by hypermail 2.2.0 : Mon Jan 01 2007 - 16:32:01 MET
https://root.cern.ch/root/roottalk/roottalk06/1351.html
Hi, I use conversation scope for my controller in CDI, but I have one problem: in the URL there are 2 cid parameters, as follows: How do I remove this duplication? Please help me.

P.S.: My application runs normally with this URL, but I use PrettyFaces to clean my URL, and with this duplication I have a problem because the URL looks as follows:

Refer to the following code:

@Named
@ConversationScoped
@LoggedIn
public class TestController implements Serializable {
    .......
    @Inject
    private Conversation conversation;

    public void initialize() {
        if (conversation.isTransient()) {
            conversation.begin();
        }
    }
    ......
}

Yes, it's a bug in Weld. I'm working on it.

Fixed in 1.2.0.Beta1, which will probably be released this week.

Thank you, I will update to the new version.
https://community.jboss.org/thread/197035?tstart=0
Yamaha Extended Control Python API – v0.3

pyamaha - Python implementation of Yamaha Extended Control API Specification.

Switching to a new language is always a big step, especially when only one of your team members has prior experience with that language. Early this year, we switched Stream's primary programming language from Python to Go. This post will explain some of t…

First of all, what is Python? According to its creator, Guido van Rossum, it's a "high-level programming language, and its core design philosophy is all about code readability and a syntax which…"

Those who have been following Python development on Windows recently will be aware that I've been actively redeveloping the installer. And if you've been watching closely you'll know that there are now many more ways to install the official python.o...

A post by Bhavani Ravi. Tagged with python, dev, codenewbie.

import operator
f = lambda n: reduce(operator.mul, range(1,n+1))
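The factorial one-liner at the end of the list is Python 2, where reduce was a builtin. A Python 3 version of the same snippet (my adaptation, not from the linked post) only needs the functools import:

```python
import operator
from functools import reduce

# Factorial via a reduce over 1..n, matching the snippet above.
f = lambda n: reduce(operator.mul, range(1, n + 1))

print(f(5))   # → 120
print(f(10))  # → 3628800
```

Note that reduce over an empty sequence raises TypeError, so f(0) would need an initializer of 1 (reduce's third argument) to return the conventional 0! = 1.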
https://fullstackfeed.com/why-we-switched-from-python-to-go/
I am using the JIRA Builtin Script Listener - Create a sub-task to create subtasks for Dev and QA for every story and bug in JIRA. I would like the subtasks to always be assigned to the user "Virtual QA". It seems that I have to do this through the Additional issue actions field. I am trying to use:

issue.summary = ('QA: ' + issue.summary)
issue.assignee = 'Virtual QA'

This works only if I use only the first line to set the subtask summary, but when I add the second line the script does not run. Can you please help me to solve it? I was not able to help myself using the official documentation. I have also tried the solutions Georgiy Senenkov proposed on Jul 03 '12 at 07:51 AM; neither of the proposed solutions worked for me.

You cannot use a string of the user display name to set the assignee. You have to set the corresponding user object (com.atlassian.crowd.embedded.api.User), or you could use issue.assigneeId = 'userkey' (be aware that this is not the display name but the user key, which is the login name if you didn't rename the user).

Thank you for your response. I am not able to find the userkey for the user: the username does not work, the mail which is used as login does not work, and the full name does not work either, and I don't have direct access to the database to verify. How can I use com.atlassian.crowd.embedded.api.User? I need some line of code like issue.assigneeId = UserManager.findIdbyUsername("Virtual QA")

Ok, here some code to give you a clue.
import com.atlassian.crowd.embedded.api.User
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.user.ApplicationUser
import com.atlassian.jira.user.util.UserUtil

UserUtil userUtil = ComponentAccessor.getUserUtil()

// If you want a User object
User u = userUtil.getUser('username')

// If you need the key of a user (which doesn't change, so this could be done
// in the console and the key could be used in your scripts)
ApplicationUser au = userUtil.getUserByName('username')
String key = au.getKey()

This works, thank you for the direction :)

import com.atlassian.jira.component.ComponentAccessor

issue.summary = ('QA: ' + issue.summary)
issue.setAssignee(ComponentAccessor.getUserUtil().getUser('q.
https://community.atlassian.com/t5/Jira-questions/JIRA-Builtin-Script-Listener-Create-a-sub-task-How-to-set/qaq-p/119836
This article covers Functions in C++.

What are Functions?

A function is a reusable block of code that runs only when called. Functions save time, increase code re-usability and make your code simpler and easier to read. Functions make use of arguments and parameters to receive data and output a result accordingly. This allows for a greater deal of flexibility in your code.

Syntax

return_type function_name (parameter1, parameter2,...) {
    // Statements to be executed
}

- return_type determines the data type of the value returned by the function. Some of the possible values are int, string or void. void is used if the function will not return anything.
- function_name is the name used to call the function.
- Within the parentheses the parameters of the function are defined. These parameters are variables which hold data to be used in the function body. These "values" or "data" are passed while calling the function. There is no limit to the number of parameters that may exist for a function.
- The curly brackets define the function body. The code inside this will run every time the function is called.

Defining a Function

Below is an example of a simple function in C++. We've made our own addition function which takes two integers, x and y, and returns the result.

#include <iostream>
using namespace std;

int add(int x, int y){
    return x + y;
}

int main () {
    cout << add(5,8);
    return 0;
}

The output.

13

Some points to note here.

- Returning the value does not print it to the screen.
- Both parameters must be declared with an appropriate data type.
- The input arguments 5 and 8 are separated by a comma.
- The number of input arguments and parameters must be the same.

Functions with no arguments

Functions which have no return value are sometimes referred to as procedures. The function below performs the simple task of displaying a "Hello World" prompt to the screen. Also note the use of void to declare it as a function with no return value.
#include <iostream>
using namespace std;

void display(){
    cout << "Hello World";
}

int main () {
    display();
    return 0;
}

Hello World

This marks the end of the C++ Functions article. Any suggestions or contributions for Coderslegacy are more than welcome. Questions regarding the article content can be asked in the comments section below.
https://coderslegacy.com/c/c-functions/
Parser Error
Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
Parser Error Message: Could not load type 'Inventory1.Global'.
Source Error: <%@ Application Codebehind="Global.asax.cs" Inherits="Inventory1.Global" %>

I was wondering if someone can help identify the cause of this error? My app is a simple one built using VS.NET 2003. When I try to rebuild the solution I get many errors and warnings.

Suggestions from the thread, several of which solved the problem for different posters:

- Check the Global.asax.cs file for the class declaration and make sure that the namespace and class name match exactly what's in the .asax file. The class must also be marked public; if it is not, this error occurs.
- If the project targets .NET 1.1, change the <%@ Page %> directive to use Codebehind="..." instead of CodeFile="...". This change is required when you change the project type from website to web application or vice versa.
- Recompile the application, then copy or upload the complete contents of the 'bin' folder (the dll, pdb and xml files) and voila, it should show the page.
- Delete all the files under "Temporary ASP.NET Files", located at the Windows directory > Microsoft.NET > Framework > (.NET Framework version) > Temporary ASP.NET Files, and then build and publish the application again.
- In IIS Manager (launch inetmgr.exe from Start Menu > Run), right-click your application's virtual folder under "Default Web Site", select Properties, go to the ASP.NET tab and select the ASP.NET version the site was built with (e.g. 2.0 instead of 1.1).
- If the location shows as a plain folder rather than a web application (folder icon), open its properties and, next to the greyed-out application name, click the Create button to create a web app.
- Browse the site via the correct URL. My virtual directory was "service", so what I needed to type into the browser had to include /service. The issue went away when I accessed the web site via the correct URL.
- In one case the server was missing the right framework: I installed .NET 4.6 (NDP46-KB3045557-x86-x64-AllOS-ENU.exe), restarted the server, and then my simple site worked.

Thx, this works 4 me!!!

This simple step saved my JOB and my HOUSE from getting mortgaged! :D
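To make the namespace fix concrete, here is a minimal matching pair (names hypothetical, not from the thread). The Inherits attribute in Global.asax must name the namespace-qualified, public class in Global.asax.cs exactly.

Global.asax:

```
<%@ Application Codebehind="Global.asax.cs" Inherits="MyApp.Global" %>
```

Global.asax.cs:

```
namespace MyApp
{
    public class Global : System.Web.HttpApplication
    {
    }
}
```

If the class is renamed, moved to another namespace, or left non-public, the parser error above reappears.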
http://gsbook.org/error-could/error-could-not-load-type-in-asp-net.php
The NanoPi Neo is a tiny $8 Linux computer available at FriendlyARM. With a wide variety of accessories to interface with the real world, the NanoPi Neo can be an affordable but powerful IoT device.

Setting up your NanoPi

You can refer to FriendlyARM's instructions to set up your NanoPi, although here I present a simplified procedure:

- Download the official Ubuntu image.
- Download PiFiller or Win32DiskImager and flash the .img file to an SD card (min 8GB of size).
- Insert the SD card and boot the NanoPi with an Ethernet cable attached to your router.
- Find the IP address assigned to your NanoPi: I typically use nmap from the command line, but you can also try looking at your router's client list.

Nmap IP Lookup:

Router's IP Lookup (Linksys router):

Connecting the NanoPi to Ubidots using Python

Now that we have the IP address, we can ssh into the NanoPi:

ssh root@10.85.4.193

User Name: root
Password: fa

Let's upgrade some packages and install pip, Python's package manager:

sudo apt-get update
sudo apt-get install python-pip python-dev build-essential

Let's install these libraries:

- requests: to make HTTP requests from Python to Ubidots
- pyspeedtest: to measure the Internet speed from Python

pip install requests pyspeedtest

Ready to code! Create a Python script:

nano ubi_speed_tester.py

And copy this code into it:

#!/usr/bin/python
import pyspeedtest
import requests

st = pyspeedtest.SpeedTest()

payload = {'Download': round(st.download()/1000000, 2),
           'Upload': round(st.upload()/1000000, 2),
           'Ping': round(st.ping(), 2)}

r = requests.post('', data=payload)

Make sure to replace your Ubidots account token in the request URL.

Let's test the script:

python ubi_speed_tester.py

You should see a new device in your Ubidots account with three variables: Download, Upload and Ping:
This doesn't mean their names can't be changed, so I'd recommend changing the names of the devices and variables to make them look better:

- nanopi --> "Nano Pi Neo"
- download --> "Download Speed"
- upload --> "Upload Speed"
- ping --> "Ping"

You can also add the units to each variable:

Create a Crontab to run the script every x minutes

Now that we've tested the script, we can set it to run automatically every x minutes. For this purpose we'll use the Linux cron tool.

1- Make the file executable:

chmod a+x ubi_speed_tester.py

2- Create a crontab

For some reason, the command "crontab -e" didn't work out of the box, so I installed cron manually:

sudo apt-get install cron

then type "crontab -e" and add the line

* * * * * python /root/ubi_speed_tester.py

to run the script every minute.

3- Reboot and Check your Data in Ubidots

Reboot the NanoPi:

reboot

wait for a minute and then go to Ubidots to start seeing the results every minute:

Now that your data is in Ubidots, you can create dashboards and events. Here's an example:

Bar chart widget

I also created two events: one to notify me if the Internet is slow, and another one to notify me if there's no Internet at all!

Value-based Event

Activity-based Event

Wrap up

Now put your NanoPi in a safe place behind your router and never worry again about wondering whether your ISP is providing the right service or not!
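The unit conversion and rounding in the script are easy to get wrong, so here is a small helper I would factor out (my own sketch, not part of the original tutorial). It builds the payload from raw bits-per-second readings; the POST itself is left as a comment since it needs the real Ubidots URL and token from the script above.

```python
def build_payload(download_bps, upload_bps, ping_ms):
    """Convert raw pyspeedtest readings (bits/s and ms) into the
    Mbps/ms payload the tutorial's script sends to Ubidots."""
    return {
        'Download': round(download_bps / 1000000, 2),  # bits/s -> Mbps
        'Upload': round(upload_bps / 1000000, 2),
        'Ping': round(ping_ms, 2),
    }

payload = build_payload(12_345_678, 2_345_678, 23.456)
print(payload)  # {'Download': 12.35, 'Upload': 2.35, 'Ping': 23.46}

# In the real script (URL and token as in the tutorial above):
# requests.post('', data=payload)
```

Keeping the conversion in a pure function like this also makes it testable without a network connection.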
https://help.ubidots.com/en/articles/688700-connect-your-nanopi-to-ubidots
23,223 This item is only available as the following downloads: ( PDF ) Full Text ~~-#- NEWSLETTER Volume 8 Number 2 For the Week Ending 23rd February 1980 8th Year of Publication - - 235th Issue DOMINICANS UNDER FIRE In a national broadcast on Saturday February 16th, Prime Minister Maurice Bishop made the accusation that "counter revolutionaries, a tiny minority of foreign priests, have laid careful plans to sabotage the revolution." Mr. Bishop said a letter written by priests of the Roman Catholic Dominican Order had come into his posession. This letter, he said, was addressed to a member of the Order resident in Britain and "amounts to be a request for poli- tical help to engage in activities of a destabilising and counter revolutionary character". The Prime Minister read the entire letter which said,in part, that "the great majority of the people are complete- ly behind the Government in their aspirations to construct a new free society independent of American and all European influence in which they hope to discover their identity as a people, a Caribbean people." The letter said, "the place for christianity in this new vision remains problematic and "there is a good deal of atheistic indoctrination". "Faced with this situation", the letter said, "the Bishop and clergy are in disarray. There is an absence of any continued - Produced & Printed by Alister & Cynthia Hughes P 0 Box 65, St.Georges, Grenada, Westindies Page 2 THE GRENADA NEWSLETTER Week Ending 23.2.80 common analysis of the situation and of a common policy of ad- justment to it." It said also that, over the last 20 years. 
there has been much talk among British Dominicans of a Christ- ian-Marxist dialogue, and it is probable that many ritish Dominicans are better read in modern Marxist ideology than any member of the Peoples' Revolutionary Government (PRG), "Grenada offers them a tiny but significant field of experi- ment in which to test out their theories and aspirations", the letter said, "an opportunity to preach the Gospel in a predom -inantly Marxist oriented society while, at the same time, co- operating and assisting in the efforts to construct a just human society." Chalienr-j The letter asked that it be circulated among the Dominicans in the hope that three or four might respond to the challenge. "There would be little difficulty at present about their enter- ing the country to work as Priests", the letter said, "even though it be privately agreed among uS that their work as Priests might be very different from that usually done in the area. There are great opportunities to influence the situa- tion through preaching, adult education, youth work and, per- haps, even journalism." In closing, the letter said that "whatever happens in Grenada may have a profound effect on the work of the Church in the whole area. The Prime Minister said he had been "shocked and disgusted" by the letter which, he said, indicated the "intention of this handful of priests, while masquerading behind the cloak of religion to abandon their traditional and perfectly acceptable role of Ministers of Religion to become, instead, direct pol- itical activists and agitators." Mr. Bishop said he was repeating his Governments fullest com- mitment to freedom of worship and religion which he made a permanent standing commitment now and for ever". 
He guar- continued - Week Ending 23.2.80 THE GRENADA NEWSLETTER Page 3 anteed full!st continued cooperation with oth Church in education, health care and community activities, anid aid the 'PRe has no inten- tion of telling the Church how to conduct its religious activities. "But, by the same token", he said,"We are not prepared to allow the Church pr elements within theChurch to carve out a new politi- cal role for themselves that will provide them wit,,t,t opportunity to use their influence and.standing as religious leaders to engage in counter revolutionary activity against the interests of our people." The Prime Minister said also that'he had recently received a top- secret report from the Special Branch Department' of the Police Ser- vice with reference to a series of publications put out by the Rom- an Catholic Church. Mr. Bishop said the report advises that most of these publications "are aimed at showing that the New Jewel Move -ment and the Peoples Revolutionary Government are..Communist". Mr. Bishop said the Special ranch reported thqt, according to its sources, "one of whom is, a priest", it is "the deliberate intention of these publications to distort the policies, programmes.and ob- jectives of the NJM and PRG to make them appear as Cpmaunistic".. The Roman Catholic Bishop of St. George's, Sydney Charles, has prepared a reply to the Prime Minister's accusations and this is to be read in all Roman Catholic 'Churches tomorrow (24th)'. In that reply, Bishop Charles says the letter written by the Dominicans was written without his knowledge and he disassociates himself and the Grenada Church from its authorship but, having said that and having studied the contents of the letter, he could not see that it con- tains anything of a seditious nature. "It is regrettable", he says, "that a letter which was meant to be helpful to the Church and to the society in building a socialist society which is Christian could be misconstrued by Government". 
Reservations As part of his message to be read in the Churches, Bishop Charles will include a message from Father Gilbert Coxhead, Head of the continued - Page 4 THE GRENADA NEWSLETTER' Week Ending 23.2.80 Dominicans in Grenada. That message says in part, "As far, as I knol 1 alone in Grenada knew of the letter. It was ri#t en in Trinidad The writer was determined that his letter should be strengthened by several signa- tures and with several others I did sign. I signed with reservations, since I considered the proposal of Christian- Marxist dialogue wold ot work:. But still signed because I thought that there was a h.ange that ChristianiMariist dialogue might imbue 6ur Grenadian Revolution with thrist- ian principles, and thus prevent our Revolution from be- coming an atheistic one, of the Matx-Lenin type. In short, I, chose to sign in the hope that Socialism in Grenada would be .Christiaq Socialism., : : A copy o the letter was brought from Trinidad to Grenada and a copy was handed oveo to the Poples Revolutionary Government. I hberby state publicly th I have f orgiven the persons who have don6 ihis to e." !" ' Bishop Charles said the Prime Minister had made insinuations from the 'dmiiicans' letter which could be misleading, and that rong'c6ricfusibns and im plcations had been drawn. '~' As far as I know", Bishbp-Charles said, i and as far as the Clergy aid ar. . Religious in Grenada are aware, the allegations are false. I am absolutely satisfied that no Priest or Religioqs. in Grenada or. even outside of Grenada, on behalf of Grenada, is engaged in destabilising activities regarding the Rqvolution,." Bishop Charles' message refers also to the publications critici- zed by the Prime Minister. These publications, the Bishop said were a direct result Of decisions taken at the Roman Catholic Church's Conference, Assembly '78", and they were in circu- lation before the revolution of .March 13th 1979. 
The Bish- op said also that he is aware that copies of these publications were consistently sent to the Prime Minister. .- continued '- Week.Ending 23.2.80 THE GRENADA NEWSLETTER Page 5 Absurd "To accuse our Priests of being engaged in'activities to:destabil- ise the Revolution is also to accuse the whole Assmebly 78 of the same", BiShop Charles message says, since it iS the Assembly that recognizing the. need, has called for this work of educating our mem -bers in the faith, includiAig the social dimension.. The accusa- tion, therefore, is absurd." NEWSLETTER has had an opportunity to examine some of the public, tions referred to. They make. no reference to either the New Jewel Movement. or the Peoples.' Revolutionary; Government but cover the fields of human rights ani civic freedoms. They also give an out- line of Marx-Leninism and issue some warnings against this philoso- phy. In this philosophy, they say, there is' ho democracy in the traditional full sense", there is dictatorship, "absolute and ruth- less rule of the party bureaucrats", there is atheism which con- siders religion a fake, a deceit, a product of an unjust society", and "there is no room for morality as we know it." ( 1263 words') THE DOMINICAN LETTER i iLa.i we Dear Jonathon, . As tb 2 'Epglish Provincial Chapter 'approaches, we would like to6 put. a very setiouS 'proposal' to you. Thepolitical situation in Grenada is developing rapidly. The island is tecOmirig politically isolated within the English-speaking Caribbean as an object of fear to.all other territories which, like Trinidad & Barbados, aspire. to the ideology and life-style of the Western Capitalist bloc. It is also attracting a great deal of attention in the diplomatic services of Britain and the United States of America. Within Grenada, whatever poli- tical ideas may be entertained by the handful of peo- ple responsible for the PRG are becoming submerged under a massive Cuban influence. 
Cuba, Angola, Ethi- opa, Viet-Nam and Kampochea are now upbIld as -the models of development of'a free Third World State. Grenada was the only Caribbean state to vote with Cuba agiiist the UN resolutiOn on the It`vasion of Af- ganistan. A large number of young people have taken up university scholarships in Cuba. The Government is exercising more and` more c control over the d i ssem- ination' of news and information. The great majority of people are, completely behind the Government in their aspirations to construct a new free society in- dependent of American and all European influence in which they hope to discover their identity as a people, a Caribbean people. continued - THE GRENADA NEWSLETTER The place for Christianity in this new vision; remains problematic. There is a good deal of athe- istic indoctrination, it is only. too easy to carica- ture the. relegion;of the White God'as just..one more colonial imposition which is, at best, irrelevant to Caribbean aspirations. But, the population as a whole remains deeply attached to Christianity aid -the Government is trying hard .to show that it has no quarrel with the Church.' But, the uliimite'-aim'may well be to reduce it to a harmless and irrelevant or- ganisation for children and old people, and towadds this end, they are able to draw on,20 years of Cuban experience. Faced with this situation, the Bishop and Clergy are in disarray. There is an absence of any common ,analysis of the situation and pf a common policy of adjustment to it. We are out of'our depth. Over the last 20 :years there has been iuch interest in Marxism among the brethren in England and much talk about a:Christian/Marxist dialogue. Many of the brethren are probably better read in modern Marxist ideology, than any of the Members of the PRG. Grena- da offers them a tiny but significant field of ex- periment in which to test out their theories and aspirations, an opportunity to preach~ e. 
Gospel in a predominantly Marxist-oriented society while, at the same time, cooperating and assisting in the efforts to construct a just, human society. Grenada, therefore, poses an interesting challenge to English Dominicans, an opportunity to put theory into practice in a very small theatre of operations.

We would like you, therefore, to circulate this letter to all the brethren in the hope that three or four might feel called to respond to this challenge and help us to discover an effective way of preaching the Gospel in a Marxist situation for the building up of a strong local Church and in forming a group of Caribbean Dominicans who will live and work in an increasingly Marxist Caribbean.

The fact that they are English and white will certainly be against them but, for the moment, it is no insuperable disadvantage. Their lack of knowledge about the local situation, of how people think and feel, need not hinder them for long. We have all had to learn this. There would be little difficulty at present about their entering the country to work as Priests, even though it be privately agreed among us that their work as Priests might be very different from that usually done in the area. There are great opportunities to influence the situation through preaching, adult education, youth work and perhaps even journalism. In fact, the Church is planning to start a newspaper.

We make this appeal to you as you assemble to further develop our plans for the future Caribbean province to emerge from our different vicariates in the region. As we consider the problem of what specific contribution, if any, a future Dominican province might make to the Church in the Caribbean, whether there is, in fact, any room for an Order like ours to make a specific contribution, the problems of Grenada come to mind and the opportunities it offers for a very Dominican kind of work. But we have no men to begin it.
We are persuaded that whatever happens in Grenada may have a profound effect on the work of the Church in the whole area.
(784 words)

NO DIFFICULTY TO IMPORT RELIGIOUS LITERATURE

Prime Minister Maurice Bishop has accused the Life Study Fellowship, a religious organisation of Norton, Connecticut, USA, of attempting to make it appear that the Peoples' Revolutionary Government has imposed financial regulations to prevent Grenadians from obtaining religious literature from abroad.

Broadcasting to the Nation on February 16th, Mr. Bishop quoted a letter which, he said, has been received by many Grenadians during the past few weeks. The letter, from the Life Study Fellowship, said the organisation sends out literature without a subscription price and relies on voluntary financial support. The letter said, however, that it is realized that "your strict governmental regulations will not allow you to do this, therefore, it will be difficult for us to continue mailing our material."

"In order to avoid further problems or complications for you", the letter said, "or to have you waiting to hear from us, we feel we must let you know right from the start that we cannot send material to your country".

Bishop said the inference that people cannot send money out of the island for religious literature is "a complete and total lie". He said foreign exchange regulations in Grenada are more generous than many other Caribbean islands, and the letter "represents an attempt by a foreign source to make it appear that there are governmental regulations which prevent our people from obtaining religious materials from abroad".

It has not been possible for NEWSLETTER to contact anyone who received this letter from Life Study Fellowship, but individuals who import religious literature from other sources deny there is any difficulty in sending money out in payment.
Booksellers and banking sources also say they don't know of any restriction in this connection.
(284 words)

MINIATURE SUB SAID SIGHTED ON EAST COAST

A crowd of over 100 persons converged on a remote east coast beach yesterday (22nd) following a report that a miniature submarine had been seen there.

The report was made by Lucy Phillip, 19, of Pomme Rose, St. David's, and when NEWSLETTER interviewed her, she said that, about 2.30 pm yesterday (22nd), she had been bathing alone at "Bungay" beach, some 15 miles from St. George's and which is reached by road through the Marlmount Estate.

"I was standing in water up to my shoulders and had my head under water when I saw something like a moving shadow coming inwards from the open sea", she said, "and when I lifted my head, I saw something rising out of the water."

Miss Phillip described what she saw as having a 10-foot long body, "like a plane", with "wings" on each side about 5 feet long. The craft was coloured grey and green and it came to a stop close to the beach in about 6 feet of water with its "wings resting on top of the water."

Miss Phillip said two white men, followed by a third, came out of a small door in the side of the craft and came to her on the beach where she had retreated in fear and was putting on her skirt and blouse.

The men were all tall with curly hair and one, who was bearded, appeared to be the leader. They wore shorts and jerseys, had tanks on their backs and something at their sides which looked like a gun.

False Name

According to Miss Phillip, the men greeted her and asked whether she was Grenadian. She gave the false name of "Joan Harris" and her address as Pomme Rose, St. David's. She was asked how things are in Grenada, where the Prime Minister's residence is and whether she would be scared to see deposed Prime Minister Gairy again.
"I told them things are going well in Grenada", she said, "that I didn't know anything about Bishop and that I wouldn't be frightened to see Gairy again because I have done him nothing and he has done me nothing."

Miss Phillip, who was educated at the Pomme Rose primary school, said that her fear had calmed and that she asked to be allowed to go on to the craft "to see how inside there situate". This was not allowed but there was an exchange of pleasantries, she receiving a couple of friendly pats and being told she is very attractive and brave. On her part, she told them they were handsome.

The fear revived, however, when one of the men referred to earrings she was wearing. They were in the form of the letter "K" and she was asked if that stood for "kill". "They asked me if I was not afraid they would kill me", she told NEWSLETTER, "but I told them I did not think they would do that because I had done them nothing".

Miss Phillip said that, throughout their conversation with her, the men spoke English with a foreign accent and lapsed from time to time into a foreign language. She estimates that they were with her about 45 minutes, during which time her photograph was taken and one of the men returned to the craft to get a pencil or pen to write her name and address. The pencil or pen was handed to him by someone in the craft.

Return

They informed Miss Phillip, she said, that they would return on March [?]5th and they told her not to tell anyone of her encounter with them. She left the beach at this stage, having seen two of the men reenter the craft while the third was on the beach looking inland through binoculars.

Leaving the beach, Miss Phillip reported the incident to several of her friends and to the Peoples Revolutionary Army (PRA). In addition to the crowd which went down to the beach, the PRA also investigated but there was no sign of the men or their craft.
A full PRA alert was put into effect immediately.

"Bungay" beach is unnamed on the official map of Grenada but it lies immediately north of Requin Bay. A chart of that area shows 3 fathoms of water at the middle of the entrance to the bay, which is approximately 800 yards wide by 600 yards deep.

Well informed sources say Government is puzzled over the significance of the alleged incident, but that it is being taken seriously. No official statement has been made.
(759 words)

FINAL DECISION SOON ON GRENADA'S NEW AIRPORT

Grenada's proposed international airport, to be laid down at Point Saline with Cuban assistance, will be constructed in two phases, the first providing for a runway of 7,800 feet (2,400 meters) and the second phase extending this to 9,000 feet (2,700 meters).

These were some of the statistics given by Mr. Ron Smith, Project Manager for construction of the airport, as he spoke to a meeting of the Chamber of Commerce on February 6th. Mr. Smith said the final decision as to the alignment of the runway had not yet been made and the information he gave was based on the position as it stood on 21st December last.

"The drawings for the runway are being prepared in Havana", he said. "We had completed the design considerations and all this design information left Grenada about 21st December, and it's now being put on paper. This was completed about a week ago and we expect that, perhaps, this Friday (8th) the Chief Engineer of the Cuban team will bring the plans for presentation to Government so that a final decision can be made on the best of two or three alignments that are under consideration."

Mr. Smith gave details of two alternative runway alignments which were under consideration up to December 21st. Both these alignments had advantages and disadvantages and he said that, since then, a third airport alternative had been proposed.
This alternative, he thought, was the most attractive, some advantages being that the Point Saline lighthouse would not have to be removed immediately, nor would the St. George's University School of Medicine have to be relocated.

Background

Giving the historical background to the airport, the Project Manager said a runway at Point Saline was first proposed in 1954 by the firm of Scott Wilson Kirkpatrick & Partners, who identified the southern end of the island as the only area for future airport development. In spite of this, this firm, commissioned to carry out detailed studies for Point Saline, made a proposal for a runway to be built at Pearls at right angles to the existing runway and the prevailing wind. "Those of us who were involved were puzzled by and questioned this", he said, "and ultimately were able to identify the flaws it contained."

Mr. Smith said that, in 1967, the Canadian Department of Transport made a brief study of all airports in the area and dismissed the Pearls cross-wind runway as being "totally impractical". Scott Kirkpatrick & Partners again studied the Point Saline project in 1969 and prepared a preliminary feasibility study. This study put forward five alternative alignments for the runway and, in putting forward its proposals, the firm again offered to prepare a study on the cross-wind runway at Pearls.

This offer was accepted, said Mr. Smith, and the study was produced in 1971 but, when it was presented, it was puzzling to everyone concerned. "It was as if", he said, "one was being told that a place had been found where a tennis court could be built running from east to west and the players would be unaffected by the sun".

The Project Manager said it was eventually discovered that the wind information supplied to Scott Kirkpatrick & Partners was inaccurate, wrong readings having been given by faulty instruments at Pearls.
The next development was that, in 1973, Scott Wilson Kirkpatrick & Partners, in association with consultants Economist Intelligence Unit Ltd, updated the previous studies they had done on the Point Saline project. No action was taken on this and, in 1977, some German engineers were commissioned by a private individual to find a site for a runway which would allow direct flights from Europe. Mr. Smith said these engineers were more interested in the Pearls location than Point Saline, as the former seemed able to provide more length but, because of the cross-wind problems at Pearls, that area was discarded and a proposal was submitted for a Point Saline runway.

Preliminary

Mr. Smith said that, at the present time, preliminary work is being done and the barracks accommodation for the Cuban technicians is being erected. Preparations for the acquisition of some lands is being made and an access road to the new airport is being surveyed and laid out.

Mr. Smith said it is difficult to give a costing for the new airport and it depends on what costs are included. "The figure could vary from one end of the scale to the other", he said, "because, for instance, do we take everything into account? All the requirements for water, for the future, for the growth that comes with the airport, the additional roads, and telephones, electricity et cetera? By and large, we prepared a sort of budgetary figure, trying to take much of this in, and it is thought that it will be of the order of US$60 million."

The Project Manager said the Point Saline runway in its final phase will be classified "A", which means that it will be over 8,400 feet long. He said an understanding of the category classification is better had in considering what is required for an aircraft to take off with a full load of passengers and a full load of fuel.
"The 7,800 foot strip will accommodate aircraft of the 707 category flying to New York with one stop", he said. "The second phase would give you New York direct, and this new proposal that we are to present to Government for consideration may offer the possibility of direct Europe flights, and this was something that the 9,000 foot strip could not offer."

Mr. Smith said the airport project will not provide a great deal of employment, most of it being in the quarrying of stone. There would be, however, a training programme for heavy equipment operators and mechanics.

Cuban Assistance

Referring to assistance given by Cuba, Mr. Smith said that country has undertaken to supply all the equipment required for construction of the runway and parking apron, all the spares for the equipment, the technicians and all the men necessary for working the project to completion, all dry foodstuff and other items for the Cuban personnel who work on the site, equipment for the soil mechanics laboratory, workshop buildings, and materials and equipment for office and field work necessary for the implementation of the project.

The biggest responsibility Grenada has, according to Mr. Smith, is that the Government must provide all the crushed stone and all the heavy rock required for the job. This, in his opinion, would probably cost US$12 million, excluding the cost of equipment. Grenada is also to provide all fuel, oil and grease required and, he said, "that runs into some millions of gallons."

Grenada must also bear the cost of relocating the Point Saline lighthouse, must supply electricity and water to the camp site, must provide all the asphalt needed for the project and must supply all the fresh foods, meats, vegetables etc. for the Cubans who will work on the project.

"As a variation to this", Mr.
Smith said, "I am told that Cuba has also absolved Grenada from its responsibility in respect of foods for the first year, even though that was part of the agreement".

The Project Manager said that, before a final decision is taken on the alignment of the runway, it is difficult to estimate costs and give a completion date. "My Cuban colleagues say that they think it will be about a three year project", Mr. Smith said, "but I hope we can cut a little off that."
(1266 words)

ANOTHER NEWSPAPER CLOSED DOWN

Another Grenadian newspaper has been closed down by the Peoples Revolutionary Government. It is the "Catholic Focus", a paper published by the Roman Catholic Church, and the order to stop operations came on February 11th, one day after the paper's first issue.

Last October 13th, the "Torchlight" newspaper was ordered closed by the PRG in the interest of "peace, order and national security". The paper was accused of printing "vicious lies" and "misinformation" and Prime Minister Bishop said his Government would "democratise the ownership structure" of "Torchlight". To this end, Peoples Law Number 81, the Newspaper (Amendment) Act, was passed on October 26th.

Under that act, no alien may hold shares in a Company which is the proprietor, printer or publisher of a newspaper, and no Grenadian may hold more than 4% of the shares in such a Company. The law makes provision for the automatic transfer to Government of all alien shares and the shares of Grenadians in excess of 4%. Provision is made also for compensation and for the resale of shares by Government.

In a national broadcast on February 16th, Prime Minister Bishop said the close-down of "Catholic Focus" was because of a contravention of the Newspaper (Amendment) Act. "The publication
of that paper was illegal", he said, "as it was printed by the Torchlight newspaper Company in defiance of Peoples' Law Number 81, which forbids a newspaper Company from publishing a newspaper if there are individuals in the Company who own more than 4% of the shares".

A spokesman for the Government Information Services said the ban on "Catholic Focus" had come at a meeting of the Prime Minister with Roman Catholic Bishop Sydney Charles on Monday February 13th, and Bishop Charles confirmed this. He said he had been told not to publish the paper until Government has made public its comprehensive Policy Statement on the operation of the mass media. It was his understanding that this statement will be published after March 13th.

In a statement to be read in all Roman Catholic Churches tomorrow (24th), Bishop Charles said neither he nor his Editorial Board was aware that publication of "Catholic Focus" constituted a violation of Peoples' Law Number 81. "Rest assured", his statement says, "the Editorial Board acted responsibly and in good faith".
(380 words)

ESTATE HIJACKED

The east coast River Antoine Estate was hijacked by a group of some 12 young men on Wednesday 13th February.

It is reported that these young men went to the estate office carrying placards. They blew conch shells, which attracted a large crowd, after which they read a resolution declaring that the name of the estate had been changed to "The Peoples' Cooperative Farm". The keys of the buildings were taken from the Manager, Mr. Percival Campbell, and it was announced that a committee had been appointed to run the estate. The Manager was told that he must cooperate or be fired.

This matter was reported to the Peoples Revolutionary Army in nearby Grenville, following which army personnel visited the estate.
What was discussed with the young men is not known but, later that day, Deputy Commissioner of Police Anthony "Lucky" Bernard visited the estate and had talks with the young men.

Reliable sources state the men expressed dissatisfaction with the wages paid on the estate and said workers there were being exploited by the owners, members of the deGale family. These young men, who are not themselves employees of the estate, complained that the owners wished to see their workers dead and, as proof of this, showed a stock of coffins which, they said, were given by the estate to the families of employees when a death occurred. The men said also that the Peoples' Revolutionary Government (PRG) were not concerned with their grievances.

Further talks were held the following day (14th) when Deputy Commissioner Bernard returned to the estate accompanied by two members of the Peoples' Revolutionary Government, Messrs Vincent Noel and Caldwell Taylor. Mr. Noel is Minister for Home Affairs and President of both the Bank & General Workers Union and the Commercial & Industrial Workers Union. Mr. Taylor is President of the Agricultural & General Workers Union. These talks did not succeed in having the estate handed back to the owners.

On Saturday 16th, however, the keys were given up but, since then, a committee appointed by the young men to run the estate has been in attendance at the estate's office. It is reported that this committee is comprised of 2 workers from the estate and 4 of the young men who hijacked the estate.

It is reported also that this committee has not attempted to interfere with the management of the estate but that there is growing tension, as the Manager, Mr. Campbell, has not attended any committee meetings, which he has been instructed by them to do.
A Government Information Service release of February 18th states that two PRG members had made it clear to the hijackers that "while the PRG fully supported the rights of workers to decent living and working conditions, the PRG could not and would not support the seizure of people's property as a means of resolving the conditions of hardship of the workers."

"The PRG members pledged that the Government and the Agricultural & General Workers Union, which represents the workers, would seek to work closely with the workers in resolving their genuine problems", the release said. "They emphasized, however, the PRG's opposition to the seizure of the people's property".

To date (23rd), the committee continues on the estate's premises and the matter has not been resolved.
(554 words)

ROTARY PRESENTS DENTAL CLINIC

The Grenada-East Rotary Club has turned over to the Government of Grenada a complete dental clinic which was made available by the Rotary Club of Scarborough Bluffs, Toronto, Canada. The handing over ceremony was performed on February 23rd by Mr. Paul Kent, President of the Grenada-East Club, and he told NEWSLETTER that the clinic is valued at over Can$20 thousand.

The clinic is located at the Government Medical Centre at Grand Bras, St. Andrew's, but Mr. Kent said that, originally, it had been intended that it would be operated at another location by the Club. With the up-grading of the Medical Centre by the Peoples' Revolutionary Government, however, the Club decided to offer the clinic for location at that Centre.

The Grenada-East Rotary Club, which is twinned with the Rotary Club of Scarborough Bluffs, received its Charter on 31st October 1978 and its membership is now 24. The Dental Clinic, which is the only such facility on the island's east coast, is not the first contribution the Club has made to the community.
Others have been gifts of equipment to homes for the aged and medical supplies to the east coast Princess Alice Hospital.

The equipment for the Dental Clinic was flown from Toronto to Barbados by Wardair on a no-charge basis, and LIAT gave the same facility between Barbados and Grenada. The Minister of Health, Mr. Norris Bain, received the Dental Clinic from the Club on behalf of Government.
(240 words)

CHAMBER & PRG TO MEET MONTHLY

The Grenada Chamber of Commerce and the Ministry of Finance & Trade have decided to hold monthly meetings to discuss matters of mutual interest.

This was disclosed by the Chamber's President, Mr. Geoffrey Thompson, at a Chamber meeting on February 6th, and Mr. Thompson said the decision was taken last December at a meeting of representatives of the Chamber with the Prime Minister, the Minister of Finance & Trade, Mr. Bernard Coard, and the Minister of Communications, Works & Labour, Mr. Selwyn Strachan.

"It was decided that a regular forum between the Ministry and the Chamber would be a useful exercise", he said, "in that it would assist in preventing issues reaching crisis proportions by communication between both bodies prior to the introduction of any startling legislation."

Mr. Thompson said the first meeting was held in January with Mr. Coard, and the Minister's Permanent Secretary and his Economic Advisor had been present. Many matters were discussed and Mr. Thompson gave some details.

First on the President's list was Ports & Harbours, and he said the meeting with the Minister discussed the problems of pilferage on the docks and inadequate equipment for use on the docks. In connection with the first, Mr. Thompson said there had been serious complaints by the Insurance Companies and, with the second, he said that, of 8 fork-lifts available, only 2 were serviceable.

With reference to pilferage, Mr. Coard had outlined steps which
were being taken to improve security on the docks, and Mr. Thompson said recent reports indicated an improvement in the situation. Concerning equipment, two new fork-lifts have been ordered, spare parts have been ordered for the inoperative machines and a maintenance training programme is to be introduced for the Pier staff.

Improvement

Mr. Thompson said delay in payment of claims by the Treasury Department to the Commercial Community was another matter discussed. The Chamber was told by Mr. Coard that an attempt has been made to rationalise Government's accounting and payment procedures and that improvements in the system could be anticipated. Mr. Thompson said the December performance was not as good as was anticipated but, since then, there has been improvement.

The meeting discussed the island's proposed international airport and Mr. Thompson said Government's preoccupation now, apart from the construction of the airport, is in providing adequate accommodation on the basis of local-private, local-public and external-private participation.

"The ground services at the airport, we were told, will be the responsibility of the Government", the Chamber President said, "and there has, as yet, been no determination on ancillary services such as duty-free concessions, restaurants, catering, book-shops and that sort of airport business."

Mr. Thompson said strong representation was made that these services should be left to the private sector to operate, and a Government committee will consider these matters sometime before the end of March.

The Chamber's meeting with Mr. Coard also discussed expansion of the infrastructure, the energy problem, the extension of the electricity service and the proposed low-cost housing scheme.

Water

Mr. Coard told the Chamber that, during 1980, an additional 900,000 gallons of water per day is expected to be delivered to the St. George's area.
The European Development Fund has given EC$5 million to assist in the repair of the Eastern main road, and the Caribbean Development Bank has been requested to provide EC$5 million for extension of the electricity service.

Concerning the low-cost housing scheme, the Chamber was advised that, as far as possible, purchases of materials will be made locally and local contractors will be employed on the project.

The Minister told the Chamber that active attention is now being given to determining Grenada's legal "economic zone" and, when this is resolved, the Government will be in a position to accept one of the many offers which have been made with regard to exploration and extraction of petroleum deposits.

Mr. Thompson said the Minister of Finance said that his Ministry was being reorganised and, when this was completed, it was hoped that a budget for the island for 1980 would be presented during March or April.
(67 words)

DETAINEES

On the 1st of February, there were 79 persons being held at Richmond Hill Prisons as political prisoners. Of these, 35 were detained in March, 1 in April, 4 in July, 1 in August, 16 in October, and 22 in December. The following are the names of these political prisoners, with the dates on which they were admitted to Richmond Hill Prisons:

Goselyn Jones, John Thomas, Eric Charles, James DuBois, Godwin Charles, Winston Courtney, James Modeste, Rupert Japal, Twistleton Patterson, Benedict George, Jerome Romain, Herbert Alexis, Albert Forsyth, Albert Abraham, Adonis Francis, Norman DeSouza, Chrysler Thomas, Clinty Samuel, Cletus Paul, Tannil Clarke, Tarrence Jones, Cletus James, Dalton Pope, Michael Frank, Francis Jones, Kenny Lalsingh, Dominic Regis, James Henry, Malcolm Baptiste, Aird Ventou, Lester DeSouza, Stanley Cyrus, Johnny Madrid, Neville Rennie, Winston Whyte, Edmund Gilbert, Denzil Celestine
17.12.79, 17.12.79, 17.12.79, 17.12.79, 17.12.79, 18.12.79, 18.12.79, 18.12.79, 18.12.79, 18.12.79, 18.12.79, 13.3.79, 13.3.79, 13.3.79, 13.3.79, 14.3.79, 13.3.79, 25.3.79, 25.3.79, 30.7.79, 13.3.79, 15.10.79, 15.10.79, 15.10.79, 15.10.79, 14.3.79, 30.7.79, 14.10.79, 13.3.79, 21.3.79, 15.10.79, 15.3.79, 17.10.79

Michael Rodney, Lennox Scott, Joseph Peters, Jenson Otway, Conary Paryag, Wayne Lett, Steven Cuffie, Matthew Antoine, Wilton DeRaveniere, James Bowen, Herbert Preudhomme, George Donovan, Oliver Raeburn, Lloyd St. Louis, Osbert James, Raymond DeSouza, Abraham Joseph, Kingston Baptiste, Ashley Church, Donnally Patrick, Noble Phillip, Steadman Patrick, Godwin Benjamin, Antonnio Langeon, Teddy Victor, Leslie Phillip, Hayes James, Ricky Baptiste, Norton Noel, David Comansingh, Rasta Nang Nang, Raymond Fraser, Dennis Rush, Anthony Mitchell, Aldon Allridge, Gabriel Lalgee, Barry Joseph

17.12.79, 17.12.79, 17.12.79, 18.12.79, 18.12.79, 18.12.79, 18.12.79, 16.12.79, 18.12.79, 13.3.79, 13.3.79, 14.3.79, 15.3.79, 15.3.79, 13.3.79, 13.3.79, 14.3.79, 13.3.79, 23.3.79, 25.3.79, 25.3.79, 17.3.79, 15.3.79, 14.10.79, 15.10.79, 15.10.79, 15.10.79, 5.4.79, 14.10.79, 13.3.79, 17.3.79, 14.10.79, 23.3.79, 24.12.79, 28.10.79

Mathias Belfon 15.10.79, Ann Alexander 31.7.79, Daphne Baptiste 4.7.79, Dudley Passo 13.3.79, James Antoine 14.10.79

COCOA ASSOCIATION BOARD REPORTS

Sales of Grenada's cocoa crop for the year ended 30th September 1979 were just over 3% higher in weight than for the previous year, but the value increased by over 50%.

This is disclosed by figures released by the Grenada Cocoa Association, the sole exporting agency for the island's crop, and the Report of the Board of Directors says these figures "are a true indicator of increased world market prices enjoyed and good marketing by the Board of Directors."

Interviewed on February 15th, Mr.
Lyden Ramdhanny, chairman of the Board, told NEWSLETTER that the year ended 30th September last had been "fantastic".

"Our gross sales for 1979 were some EC$27.3 million as compared with EC$18 million in 1978", he said, "and when this is compared on a per-pound basis, that gives a gross figure of EC$5.07 a pound in 1979 compared with EC$3.47 per pound in 1978."

According to the Report of the Board of Directors, however, the high world market prices which brought about this favourable situation will be short-lived and already, "because of an excess of cocoa on the market from cocoa producing countries, the current prices are somewhat depressed".

During 1979, the average FOB price received for Grenada cocoa was EC$5.07 per pound as compared with EC$3.47 in the previous year but, because of an anticipated glut, the Board anticipates that world prices will fall by some 30% to 40%.

The books of the Cocoa Association are audited by Messrs Coopers & Lybrand, and these Auditors report that, because of a breakdown in accounting and internal control procedures and in the maintenance of accounting records during the year, they were unable to obtain all the information and explanations required.

THE GRENADA COCOA ASSOCIATION
STATEMENT OF TRADING AND PROFIT AND LOSS FOR THE YEAR ENDED 30TH SEPTEMBER 1979

                                            1979                        1978
                                  Weight (lbs)  Value (EC$)   Weight (lbs)  Value (EC$)
Sales                                5,382,667   27,318,780      5,203,200   18,039,941
Cost of sales:
  Opening inventory                    124,126      452,373         84,839       89,500
  Deliveries by producers            5,788,633    7,239,108      5,370,207    6,211,676
                                     5,912,759    7,691,481      5,455,046    6,301,176
  Deduct shrinkage & shortages         240,158            -        127,720            -
  Deduct closing inventory             289,934    1,075,924        124,126      452,373
                                     5,382,667    6,615,557      5,203,200    5,848,803
Gross Profit                                     20,703,223                  12,191,138
  Gross Profit: 12,191,138

Selling, General & Administrative Expenses:
Salaries & Wages; Commission, agents & brokers; Loss on Fire Insurance claim; Exporters' expenses; Travelling Expenses; Telephones & Cables; Stamps & Stationery; Rent; Insurance; Electricity & Fuel; Advertising & Printing; Legal & Professional Fees; Audit Fee; Cess; Curing Fee; Export Duty; Freight; Claims (1978 provision written back); Bags; Exchange Fluctuation; Spraying (pest control); Maintenance (equipment & buildings); Surplus Expenses written back; Interest & Bank Charges; Miscellaneous; Fertilizer Expenses; Cash Shortage (Mt. Horne); Deferred Expenses written off; Depreciation: Machinery & Equipment, Fermentary Equipment, Accounting Machine, Furniture & Equipment.

Figures as printed (1979 column, then 1978 column): 1,529,140; 846,310; 19,323; 146,455; 54,805; 11,259; 35,838; 57,229; 53,634; 419; 11,819; 8,500; 107,284; 212,255; 5,135,169; 33,514; 10,175; 34,899; 30,126; 727,371; 185,222; 68,689; 39,863; 56,410; 10,841; 25,426; 4,548; 27,300; 6,681; 14,558; 3,657,932.

Net Operating Income: 356,052; 610,263; 155,676; 19,175; 6,639; 15,662; 3,900; 6,084; 5,714; 1,693; 8,400; 7,500; 102,144; 255,477; 3,378,218; 27,619; (190,500); 28,291; (48,803); 644,788; 45,428; (30,000); 74,737; 17,863; 23,591; 2,920; 3,221; 5,671,752; EC$6,519,386; EC$12,045,291.

The Auditors report, for instance, that they were unable to obtain confirmation of balances amounting to EC$105,202 and EC$96,907 due by exporters relating respectively to the Cocoa Years 1977 and 1979. Exporters hold stocks and make shipments on behalf of the Association and, for the Cocoa Year 1977, the Auditors reported originally there was an unaccounted shortage of EC$106,400 in the stocks of two exporters.

The then Chairman, Sir William Branch, said one of these exporters had paid for his shortage and, arising from that, it must be presumed that the shortage now reported, EC$105,202, relates to the other exporter.
Messrs Coopers & Lybrand state they have been unable to satisfy themselves as to the collectability of this amount and of the further shortage of EC$96,907 in exporters' stocks at the end of the 1979 Cocoa Year.

The Auditors state also that, with reference to the Fertilizer Scheme, the total of the balances due to be paid by individual producers is approximately EC$89,000 less than the total of EC$1,281,694 appearing in the Control Account. This is a problem dating back to the 1977 Cocoa Year but there is now a change in the amount of the reported difference with the Control Account. For the 1977 Cocoa Year, the difference reported was approximately EC$173,233 as compared with EC$89,000 for the 1979 Cocoa Year.

Mr. Ramdhanny told NEWSLETTER that, with reference to the shortages in Exporters' stocks, the matter is to go to arbitration and he hoped it would be cleared up soon. Referring to the differences in the balances relating to the Fertilizer Scheme, this had not yet been cleared up. He said the present Board inherited this problem when it took office in April 1979. A new accounting system is to be introduced and the Chairman did not think this problem will arise again. (603 words)

CUBA MAKES DONATION TO DOMINICA

Cuba's Ambassador to Grenada, Senor Julian Torres Rizo, visited Dominica on February 14th on behalf of a special committee which was set up last September at the Non-Aligned Conference in Havana, Cuba. Senor Torres told NEWSLETTER this committee had been charged with looking into the possibility of extending aid to Dominica following Hurricane "David", and in this connection he had met Prime Minister Oliver Seraphin and had presented him with the sum of Cadi$l,426',;05,47.
"The countries that presented donations are the Arab Republic of Syria, the Republic of Cuba, the Republic of Sri Lanka, the Republic of San Marino, and the Socialist Republic of Lybia", Senor Torres said. "As you know, the devastation in Dominica was considerable and we hope these donations will help in the effort of reconstruction of that country."

Senor Torres said the special committee had been comprised of Cuba (in that country's capacity as Chairman of the Non-Aligned Conference), Jamaica, Guyana and Grenada. (162 words)

NEWS SHORTS

Local Timber Soon

The Ministry of Agriculture has announced that local timber will be on the market by the end of February. A portable saw mill has been imported and it is estimated that its annual revenue will be approximately EC$3 million. The mill is to be located in the Grand Etang forest and will be operated by an engineer assigned by the United States Peace Corps. (64 words)

Flour by March 31st

The flour mill, now being constructed by Messrs Caribbean Agro Industries Ltd., which is a joint venture of the local firm of Geo. F. Huggins & Co. Ltd. and the Continental Milling Corporation of the USA, is expected to be in production by the end of March. This has been disclosed by the Government Information Services, which states that the mill will produce 30 tons of flour daily. Daily local consumption is said to be 20 tons. (77 words)

Venezuelan Technical Team Visits

A technical mission from Venezuela arrived in Grenada on February 13th for a three-day visit. The mission, comprised of technicians from the Ministry of Petroleum & Mines, road construction experts and experts on airport equipment and maintenance, was expected to hold discussions relative to assistance for road repairs, repair and replacement of equipment at Pearls Airport, and adequate supplies of fuel for the proposed international airport. (66 words)

Young Pioneers Launched

A "Young Pioneer Group" was established in the St. Andrew's Parish during January. The Group, which was launched by the Secretary of Information, Mr Caldwell Taylor, has a membership of people between the ages of 9 and 15 and, according to a spokesman for the Group, reported by the Government Information Services, the main purpose behind the formation was that of "supporting and defending the Grenada revolution." (67 words)

Law of Sea Conference Expresses Concern

Delegates to the Law of the Sea Conference which was held in St. Kitts during 1st and 2nd February expressed concern over the intrusion of foreign fishing vessels into Caribbean waters. The Conference made recommendations to the Standing Committee of CARICOM Ministers responsible for Foreign Affairs.

Mr. Ashley Taylor, Legal Advisor to the Ministry of External Affairs, represented Grenada at this Conference which was attended by 10 regional Governments. (69 words)

Mission Heads to Return for Festival

Grenada's Heads of Missions abroad are expected to return to the island for the Festival of the Revolution scheduled for 1st to 13th March. They are Mr. Fennis Augustine, London; Mr. Jimmy Emmanuel, New York; Mrs. Jennifer Hosten-Craig, Ottawa; Miss Dessima Williams, Washington; and Mr. Richard Jacobs, Havana. (48 words)

Grenada Businessmen Visit Cuba

Seven Grenada businessmen, including the Chamber of Commerce President of Messrs Renwick, Thompson & Co. Ltd., visited Cuba from 13th to 21st February with a view to promoting business between the two countries.

Others in the group were Messrs Richard Menezes and Clyde Haywood of Messrs Geo. F. Huggins & Co. Ltd., Mr. C.K. Sylvester of Messrs Independence Agencies Ltd., Mr. Zaid Jaleel of Messrs Motor Sales & Service Co. Ltd., Mr. Angus Minors of Messrs Bryden & Minors Ltd. and Mr. Lyle Bullen of Messrs Vena Bullen & Sons Ltd. in the sister island of Carriacou. Mr.
Bullen is a member of the Peoples Revolutionary Government and is Secretary for Carriacou Affairs. (116 words)

Workers To Share Profits

Workers on Government owned farms are to be given a monthly report of the profits and expenses of the farm at which they are employed and will receive a share of the annual profits. This was announced by the Minister for Agriculture, Tourism & Fisheries, Mr. Unison Whiteman, as he addressed the workers at Government owned Carriere Estate on February 13th. According to a Government Information Service release, this policy is in keeping with Government's commitment to sharing. (77 words)

Processing Plant to Open In March

A plant to process local fruit is to be opened by Government at True Blue early in March to coincide with the Festival of the Revolution. The plant, which will employ 2 Grenadians, is expected to meet local demand and produce for the export market. (45 words)

EDF Aid For Grenada

Agreement was reached early in February with the European Development Fund (EDF) for loans of EC$12.5 million at 2% interest to finance various projects in the state of Grenada over the next 5 years.

Talking to the Government Information Services, Minister of Finance Bernard Coard said these loans will finance expansion of Grenada's electricity services, a survey of hydro power potential and the possibilities for oil exploration. Mr. Coard said that, when a survey is conducted mid-way through the 5-year agreement period, the loan could be increased by a further EC$2 million if the projects are assessed favourably.

The Minister of Finance said Grenada stood to benefit also from negotiations now being conducted with the EDF for EC$7 million for regional projects in the fields of health care, agriculture and agro-industries. (134 words)

Sites Cleared For Housing Project

Sites have been cleared at Bonaire, St. Marks; Plains, St. Patricks; and True Blue, St. Georges for Government's low-income housing project financed by loans from three foreign banks and the Organisation of Petroleum Exporting Countries.

This project will cost EC$4 million and is part of a EC$7 million housing and house repair programme launched by the Peoples Revolutionary Government last December. (61 words)

Independence Commemorated

The 6th anniversary of Grenada's attainment of independence from Britain was celebrated on February 7th. Celebrations centered principally on a "Freedom Walk" from Leapers Hill at Sauteurs, St. Patricks in the north to True Blue, St. Georges in the south. Leapers Hill is the site at which, in 1651, forty Caribs jumped into the sea rather than be massacred by the French colonists. At True Blue was located the barracks of the Grenada Defence Force which were successfully attacked by revolutionaries of the New Jewel Movement on March 13th 1979. (90 words)

ADDITION TO NEWSLETTER'S FAMILY

NEWSLETTER is happy to record the birth in Trinidad yesterday (22nd) of Dion Modeen Mohammed, second son of Shafeek Mohammed and Christine Mohammed nee Hughes. The birth was within the first minute of the day, and the weight at that time was 6 lbs 13 ozs. This is the second child born to the Mohammeds.

Dion Mohammed is the third grandchild of Alister and Cynthia Hughes, producers and printers of the GRENADA NEWSLETTER.

Alister Hughes
23rd February 1980

Full Text
http://ufdc.ufl.edu/AA00000053/00223
Boostcon 2008
From Just in Time

This page will contain the notes that I will make during Boostcon 2008. For a trip-report, see the Trip report boostcon 2008 page.

Boost.Extension & Boost.Reflection

Extension

Problems with shared libraries:
- performance
- differences in semantics of open, close, getprocaddr
- name mangling; extern "C" loses type safety

shared_library m("my_module_name");
m.open();
m.get<int(float)>("function_name")(5.0f);
m.close();

Reflection as expected...

LIAW Tuesday

quickbook

Joel
- docs in boost head/tools

Installing it on windows:
- quickbook uses boostbook uses docbook.
- problem with latest xslt: do not download the latest version (may have been fixed). Use the 1.66.1 version. Not listed on sourceforge. Just change the url while downloading from sf.
- Eric's docs demonstrate how to use doxygen

Eric:

import doxygen ;
import quickbook ;

doxygen autodoc
    : [ glob ../../../boost/xpressive/*.hpp ]
    : <doxygen:param>X=Y
    ;

xml xpressive : xpressive.qbk ;

boostbook standalone
    : xpressive
    :

- boostdocs mailinglist discusses how to get rid of boost-headers.
- URLs to code can be svn urls.

concurrency, containers, range

[ file vault ]

range_ex has versions of stl algorithms that accept ranges. The return type can be customized (range, iterator, what range).

Explore, container printing

It took pretty much in the neighborhood of 90 minutes. Amazing, I thought it would be about 45. Good thing it was scheduled for 90 minutes. Audience participation was great. Benign and very constructive.

boost serialization: A Hands on Tutorial
- load_construct_data & save_construct_data: for types that have no default constructor
- serialize classes that are outputstreamable and inputstreamable (to C++ streams):
  - use load_construct_data etc.
  - ...or use BOOST_SERIALIZATION_SPLIT_FREE to split load/save
- BOOST_SERIALIZATION_SPLIT_MEMBER() in the class scope

bjam --with-serialization --layout=system variant=debug link=shared stage

Optionally append -jN+1 (N = number of processors).

Building a mini-fusion with C++0x
- relation between run-time and compile-time: boost::fusion::xxx vs. boost::fusion::type_of<...>

LIAW Wednesday

Edit distance / sequence alignment algorithms may be quite hot: DNA sequencing.

Bjarne Stroustrup: A C++ library wish list
- What do we want
- What would I like
- Teaching C++
- What is "a system"
- A system for C++ libraries

C++ libraries are small. About 25 GUI libraries, no interoperability. We do not have a system of interoperating libraries. TBB is going in the right direction.

Problem: those 'novices' that "knew everything" telling those that have never seen a line of code what to do. Teach in a way that encourages hard work.
- fltk? (gui & graph)

Boost.function
- SFINAE in 2002
- use of preprocessor metaprogramming made the implementation unreadable

Boost.Wave
- macro expansion trace
- interactive mode

Spirit v2
- %= (auto rule operator)
- just like the semantic action [_val = _0] (Joel wrote "[_val = _1]")

asio

Traversal
- Peyton Jones: the "Scrap your boilerplate" and "Scrap your boilerplate systematically" papers
- barbed wire & bananas

Boost.Lexer
- Ben Hanson created it
- in boost.spirit
- merged, minimized DFAs; parses large sets of alternatives very effectively
- multi_pass<> iterator
- in Spirit2: tokenize_and_parse(...)

Planning BoostCon 2009

Dataflow Library (and the arts)

Examples of dataflow frameworks:
- EyesWeb
- Max/MSP
- jMax
- LabVIEW
- GNU Radio

Concepts: Components, Ports. Consumer ports are boost.function, producer ports are boost.signal. operator>>=() is used to connect components, e.g.: producer >>= filter >>= consumer

- cppgui (Felipe Magno de Almeida)
- pin-based approach (Tobias Schwinger)
- Flock of Birds: 6 DOF 3D position
- AMELiA now open source (GPL)
https://rurandom.org/justintime/index.php?title=Boostcon_2008&oldid=51
In this C++ tutorial, Structures Part II, you will learn how to use structures in an efficient way, with the features of structures explained through examples.

Structure Members Initialization:

As with arrays and variables, structure members can also be initialized. This is performed by enclosing the values to be initialized inside the braces { and } after the structure variable name while it is defined. For example:

#include <iostream>
using namespace std;

struct Customer
{
    int custnum;
    int salary;
    float commission;
};

int main()
{
    Customer cust1 = {100, 2000, 35.5};
    Customer cust2;
    cust2 = cust1;
    cout << "\n Customer Number: " << cust1.custnum << "; Salary: Rs." << cust1.salary << "; Commission: Rs." << cust1.commission;
    cout << "\n Customer Number: " << cust2.custnum << "; Salary: Rs." << cust2.salary << "; Commission: Rs." << cust2.commission;
    return 0;
}

The output of the above program is:

Customer Number: 100; Salary: Rs.2000; Commission: Rs.35.5
Customer Number: 100; Salary: Rs.2000; Commission: Rs.35.5

In the above example, one structure variable is assigned to the other using the assignment operator '='. The programmer must note that only structure variables of the same type can be assigned to each other; trying to assign two structure variables of different types to each other results in a compiler error. It is also wrong to initialize as:

Customer cust1;
cust1 = {100, 2000, 35.5};

Nesting of structures:

Nesting of structures means placing one structure within another. How do we declare nested structures, and how do we access structure members in that case? For example:

#include <iostream>
using namespace std;

struct course
{
    int couno;
    int coufees;
};

struct student
{
    int studno;
    course sc;
    course sc1;
};

int main()
{
    student s1;
    s1.studno = 100;
    s1.sc.couno = 123;
    s1.sc.coufees = 5000;
    s1.sc1.couno = 200;
    s1.sc1.coufees = 5000;
    int x = s1.sc.coufees + s1.sc1.coufees;
    cout << "\n Student Number: " << s1.studno << "\n Total Fees: Rs." << x;
    return 0;
}

The output of the above program is:

Student Number: 100
Total Fees: Rs.10000

In the above example, the structure course is nested inside the structure student.
To access such nested structure members, the programmer must use the dot operator (in the above case twice) to reach the inner structure's members. In the above example:

s1.sc.couno

s1 is the name of the structure variable, sc is the member (of type course) in the outer structure student, and couno is the member in the inner structure course. This is how nested structure members are accessed.

One more important feature of a C++ structure is that it can hold both data and functions. This is in contrast to C, where structures can hold only data. Though C++ structures can hold both data and functions, the usual convention is that classes are used when data and functions are held together, while structures are used to hold plain data.
http://www.exforsys.com/tutorials/c-plus-plus/structure-in-c-part-ii.html
Also, if you can give up on the dependent types issue, and you just want the equivalent of "embeddedParser [1,2]", you have a problem that the type you are specifying is infinite; this is the cause of the "occurs checks" errors you are getting. Lets specify the type you are parsing directly, then abstract a bit: > -- your HData is just Maybe! > > data IN1 = IN1 Int (Maybe CH1) > data CH1 = CH1 Char (Maybe IN1) > sample :: Maybe IN1 > sample = Just $ > IN1 0 $ Just $ > CH1 'a' $ Just $ > IN1 1 $ Just $ > CH1 'b' $ Just $ > IN1 2 $ Nothing You can easily write a parser for this type with two mutually recursive parsers that parse IN1 and CH1; I'll leave that as an exercise for you. Now, you might not want to explicitly specify the type of the result in the data type; that's what you have done with your versions of IN and CH that take the "rest of the type" as an argument. But the problem with that approach is that the resultant type is *infinite*! > data > -- broken: > -- type ParserResult = Maybe (IN (Maybe (CH (Maybe (IN (Maybe ... But there is a great trick to solve this; another type can wrap the "fixpoint" of this type: > newtype Mu f = In (f (Mu f)) > out (In x) = x You can then use this structure to define the type of the parser: > data ResultOpen a = O | C (IN (ResultOpen (CH (ResultOpen a)))) > type Result = Mu ResultOpen What "Mu" does here is fill in the "a" in ResultOpen with (ResultOpen (ResultOpen (ResultOpen (ResultOpen ..., infinitely large. The price you pay for this infinite type is that you have to explicitly mark the boundaries with "In"; the constructor for "Mu": > sample2 :: Result > sample2 = In (C(IN 0 (C(CH 'a' (In (C(IN 1 (C(CH 'b' (In (C(IN 2 O)))))))))))) A parser for this type is also not too difficult to write; you just have to take care to use "In" and "out" in the right places. -- ryan On Mon, Dec 1, 2008 at 6:08 PM, Ryan Ingram <ryani.spam at gmail.com> wrote: > The problem is this: what is the type of "embeddedParser"? 
Unless you > can answer that question, you're not going to be able to write it. > > In particular, its *type* depends on the *value* of its argument; the > type of embeddedParser [1,2] is different from the type of > embeddedParser [1,1,2]. This isn't possible in Haskell; you need a > language with an even more exotic type system (Agda, for example) to > encode this dependency. Google "dependent types" for more > information. > > You can encode something similar using existentials: > > data Sealed p = forall a. Sealed (p a) > type ParseResult = Sealed HData > > ... > > case h of > 1 -> do > aux <- pInt > Sealed rest <- embeddedParser (t ++ [h]) > return (Sealed (C (In aux rest))) > > and a similar transformation on the (2) case and the "end" case; this > makes the type of embeddedParser into Parser ParseResult. What you > are doing here is saying that the result of a parse is an HData a for > *some* a, but you aren't saying which one. You extract the HData > from the existential when running the sub parser, then stuff it back > into another existential. > > But you can't extract the type out of the existential ever; it is > lost. In particular you can't prove to the compiler that the type > matches that of the [1,2] input and get back to the IN and CH values. > And you can't return a value that has been extracted out, you can only > stuff it back into another existential container or consume it in some > other way! > > A better option is to use a type that matches what you expect to > parse, or just use Data.Dynamic if you want multiple types. You > aren't going to get any benefit from "HData a" without a lot more > type-level work! > > Also, for your type list, it'd be much more efficient to use (cycle > types) to construct an infinite list (in finite space!) rather than > keep appending the head back onto the tail. 
> > 2008/12/1 Georgel Calin <6c5l7n at googlemail.com>:
> >> Hello
http://www.haskell.org/pipermail/haskell-cafe/2008-December/051309.html
Raspberry Pi PICO with Thonny IDE

Raspberry Pi PICO is one of the latest dev boards developed by the Raspberry Pi Foundation. It is small and compact and supports C/C++ and MicroPython. In this tutorial we will be learning how to use the Raspberry Pi PICO with the Thonny IDE. The Raspberry Pi PICO is one of the cheapest MicroPython-supported development boards. We will be learning how to upload Python code to the Raspberry Pi PICO. There are multiple ways to do it, but in this tutorial we use the simple method: the Thonny Python IDE for compiling and uploading the code to the PICO.

Materials Required

For this tutorial you need three items as listed below.

Thonny Python IDE

This is a Python IDE for beginners which is simple to use. Thonny comes with Python 3.7 built in, so you don't need to install Python separately. Just install Thonny and you're ready to learn programming. (You can also use a separate Python installation, if necessary.) The initial user interface is stripped of all features that may distract beginners. You can download the latest version from the link below.

Download Thonny Python Editor

Connection

Once you have installed the Thonny IDE, now it's time to connect your Raspberry Pi PICO to your computer using a data cable.

Initial Setup

Before you start uploading the code, you need to install the MicroPython firmware. Follow these steps to install the firmware. Press the BOOTSEL button and connect the board to your computer using a USB to micro USB data cable. Once connected, the Raspberry Pi Pico will show up as a USB mass storage device with two files in it. It has a storage of 128MB. Now you can open the Thonny IDE and there will be a popup as shown below. This popup will show some information regarding the Pi PICO. Just click on Install to install the MicroPython firmware. Once you click on Install, you will see the progress bar showing the progress of the installation. Once the installation is completed, you can click on Done to see the uploaded details.
This step is optional.

Finally, your Raspberry Pi Pico is updated with the MicroPython firmware and ready to execute Python code.

Must Read:
- How to install Raspbian Lite on Raspberry Pi Zero W
- Connecting DHT11 Sensor with Raspberry Pi 4 / 3 using Python
- Interfacing MQ2 Gas Sensor with Arduino
- Send DS18B20 Temperature Sensor data to ThingSpeak
- IoT based Timer Switch using Blynk and NodeMCU
- How to read DHT11 sensor data using Blynk

Python Code

For testing purposes we will be running this blink code. This code will blink the onboard LED on and off at a certain interval.

from machine import Pin, Timer

led = Pin(25, Pin.OUT)
timer = Timer()

def blink(timer):
    led.toggle()

timer.init(freq=2.5, mode=Timer.PERIODIC, callback=blink)

Uploading the code and Testing

Now copy and paste the Python code which will blink the onboard LED. Once it is pasted in the Thonny editor, click on the floppy (Save) button. This will save the code before you run it. Click on Raspberry Pi PICO, put in a name for your Python file with the extension "py", then click on OK. Once the code is saved, click on the green Play button to execute the code on the Raspberry Pi Pico. To stop execution of any code, click the red STOP button. Now you will see the green onboard LED start blinking.

NOTE: If you wish to run the code automatically whenever you power the Raspberry Pi Pico, then you have to name your code file "main.py" while saving it.

Conclusion

Students, electronics hobbyists and engineering students can use this board to start learning Python and coding electronics with Python. MicroPython is an emerging language worth learning, and it is easy to start off with the Raspberry Pi Pico.
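A side note on the timing in the blink code above: the freq parameter determines how often the callback runs, so the blink rate can be worked out with ordinary arithmetic. This is plain Python, runnable on a PC; nothing here is Pico-specific.

```python
# timer.init(freq=2.5, ...) calls blink() 2.5 times per second,
# so led.toggle() runs every 1/2.5 = 0.4 s.
freq = 2.5
callback_period = 1 / freq              # seconds between led.toggle() calls
full_blink_cycle = 2 * callback_period  # one ON phase plus one OFF phase
print(callback_period, full_blink_cycle)  # → 0.4 0.8
```

So with the values in the tutorial the LED completes one full on/off blink every 0.8 seconds; raising freq makes it blink faster.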
https://www.iotstarters.com/raspberry-pi-pico-with-thonny-ide/
XML Spy Home Edition free

Since this tool has been mentioned a few times on this site as a staple in the programmer's toolbox, I thought you might like to know there is now a free edition of it [ via ]

Just me (Sir to you)
Monday, June 07, 2004

I'm using oxygen () and it rocks. I don't think they have a free edition, but the professional one is MUCH cheaper than XMLSpy...

Nicky
Monday, June 07, 2004

Also, make sure to read the licensing agreement of XML Spy very closely. I promptly uninstalled when I got to the paragraph where they reserve the right to have XML Spy send usage data from my computer at their whim to prevent piracy!

Chris Nahr
Monday, June 07, 2004

Chris - exactly what a firewall is for...?

Andrew Cherry
Monday, June 07, 2004

Ach... we use Oxygen at work and I don't like it. It crashes often; it can't cope with files on a network share unless you have a drive mapped; it hangs on some large documents; sometimes it doesn't start up; associating it with files is hit-and-miss; its UI has all the problems you'd expect with a Java app (wrong fonts, wrong colours, some non-standard text interaction stuff). I'm rambling. Spend the extra money if you can. Especially if you can get a tool that lets you debug XSL transforms (Oxygen will often just crash and not let you know where or why). Could just be my setup, though. Lots of people seem happy with it. :)

Thom Lawrence
Monday, June 07, 2004

The one feature I really want is the xpath evaluator; it's not in the free version. So I use this :

Damian
Monday, June 07, 2004

CookTop looks nice enough but apparently it can't handle schemas that are imported into the document's default namespace. The fact that it's virtually undocumented doesn't help. Any ideas?

Chris Nahr
Wednesday, June 09, 2004
http://discuss.fogcreek.com/joelonsoftware5/default.asp?cmd=show&ixPost=148999&ixReplies=6
This is a fork of another topic. I think it is high time that we stop talking about programming languages in isolation of the tools that support them. The tools have always depended on the languages, obviously, but increasingly the languages depend on the tools, especially in professional language design contexts. Dart and TypeScript are extreme examples of this, where the type systems are there primarily to support tooling and are much less about early error detection. As another example, C# design is heavily influenced by Visual Studio.

Designing a language invariably involves trade offs, and if we accept tools as part of the core experience, we are able to make decisions that are impossible given the language by itself. For example, many people find type inference bad because it obscures developer documentation, especially if it is not completely local. However, given an editor where inferred type annotations are easily viewed when needed, this is no longer a problem. Likewise, a type system that cannot provide good comprehensible error messages in the tooling is fundamentally broken, but tooling can be enhanced to reach that point of comprehensibility. Type systems in general are heavily intertwined with tooling these days, playing a huge role in features like code completion that many developers refuse to live without. And of course, there are huge advances to be had when language and programming model design can complement the debugging experience.

There are other alternative opinions about this; to quote and reply to Andreas from the other topic:

Only if you do so for the right reason. Tooling is great as an aid, but making it a prerequisite for an acceptable user experience isn't. A language is an abstraction and a communication device.
Supposedly, a universal means of expression, relative to a given domain. It fails to be a (human-readable) language if understanding is dependent on technical devices.

This is a good point: code should stand on its own as a human communication medium. However, even our human communication is increasingly tool dependent, as we rely on things like the Google to extend our capabilities. We are becoming cyborgs whether we like it or not.

You can try to go on a life-long crusade to make every tool that ever enters the programming workflow "smart" and "integrated", and denounce books and papers as obsolete. But that is going to vastly increase complexity, coupling, and cost of everything, thus slowing down progress, and just creates an entangled mess. And in the meantime, every programmer will struggle with the language.

This is more of an observation about current and future success of programming experiences. I would say it has already happened: successful programming languages already pay these costs, and new languages must either devote a lot of resources to tooling, or build them in at the beginning, or they simply won't be successful. Some people wonder why language X isn't successful, and then chalk it up to unenlightened developers...no.

Nothing ideological or academic about that, just common sense.

Since an appeal to common sense is made, I would just think we have different world views. We at least exist in different markets.

By selecting a PL — whether by designing it yourself or adopting one designed by someone else (or, likely, something between those extremes) — you place limits on its future use; although I often say "programming is language design", that moment when you fix the choice of base language is when the limits happen.
After that moment, you can create all sorts of tools to make the language "go" further, and you can construct abstractions within the language to make it "go" further, but the limits on both of these things are contained in the language you chose to start with.. I've always figured a language should be designed to maximize what can be done within it by abstraction; but whether the same is true for tools, I'm less sure. Designing a language for the sake of the tools one means to use with it seems prone to compromising the basic integrity of the language. I'd rather see the language designed so it's thoroughly awesome when used with nothing but a text editor and bare-bones compiler/interpreter, and then soup it up with awesome tools from there (but I'm also reminded of a remark from somewhere, long ago, about software speed, that for every nine hours you spend "cycle-shaving" on the finished product, you'd have done better to spend one more hour ahead of time coming up with a better algorithm). [I recall a guy on my college dorm floor who was heavily into airplanes; he had posters of high-tech planes all over his dorm room walls, like that USAF thing with forward-swept wings which was then a quite recent development; of course, forward-swept wings makes an inherently unstable aircraft that requires a computer to fly. One day we happened to watch an episode of something-or-other on the TV in the dorm lounge where the "chase scene" was a dogfight that included a barnstormer's triplane, and he remarked with real reverence in his voice that 'those things can fly at thirty miles an hour without stalling'.]. Isn't a fallacy though? Focusing on the experience doesn't mean we start doing a crappy job on the language design, it just means that trade offs can be more optimally distributed. 
I'd rather see the language designed so it's thoroughly awesome when used with nothing but a text editor and bare-bones compiler/interpreter, and then soup it up with awesome tools from there But then you (or someone else?) also argue that too much type inference is evil, because you can't see the types and the error messages are obtuse...since given a text editor and a command line compiler, that is all you are going to get! By making the language stand on its own, you have already doomed it to a series of sub-optimal design decisions that naturally follow. Triplanes are great for some tasks, just don't put them up against an F22. Fly-by-wire is amazing, the plane is in a constant state of instability. Manual rudder control is impossible. By making the language stand on its own, you have already doomed it to a series of sub-optimal design decisions that naturally follow. I doubt this. Here's an alternative conjecture: any language feature that would be bad for the bare-bones language would still be bad when augmented by tools. That is, the best experience with tools is founded on the best experience without tools. I'm open to being convinced otherwise, but note that just because some feature X doesn't work well in bare-bones languages, and bare-bones languages with X can be made useful using tools, does not preclude that tools might have done even better augmenting a bare-bones language without X. How would you develop an experiment or proof to justify your alternative conjecture? It seems very idealistic, and you're clearly placing a much larger burden of proof on "being convinced otherwise" than on accepting your conjecture. I think there are a lot of cost thresholds, or activation thresholds, in PX and UX (and emergent systems in general). If some activity is too difficult, it doesn't happen. Even if it were theoretically possible to do with plain text anything you can do with structured content, it won't matter if the effort is high. 
Every little barrier to development is relevant. Designing for tooling is about shifting costs around, making it cheaper to create a desired toolset such that the costs fall below acceptable thresholds. I would be very interested if you were to develop a language where tooling costs are very low but the language itself is convenient for plain-text reading and writing by humans. Unfortunately, a lot of conventional features aimed to make a system more convenient for filesystems and plain-text editors and expressiveness or readability in plain text tend to create barriers for tooling - e.g. sophisticated syntax, namespaces, overloading, external build systems and package managers, dynamic dependencies on external data files for content that doesn't fit nicely into plain text, etc. Sure it's idealistic. It needn't, by that, necessarily be wrong. These two contending conjectures have to be weighed against each other, I think, by considering cases: too forward-speculative to subject to proof or, likely, to practical experiment, so thought experiment would seem the available option. Consider sorts of language features, sorts of tools. To attempt a better articulation of my above caveat, suppose we're considering a feature X that, if one were designing a bare-bones language, one would reject. Presumably, we've already considered bare-bones with X and bare-bones without X; we'd have to have considered those, to reach our supposed rejection of X for bare-bones. My intended point was that, if we want insight into the contending conjectures, we would have to also consider, carefully, both tools with X and tools without X. Otherwise, an enthusiast might say, "see, tools can enable X to work after all", which is good to know but doesn't differentiate between the conjectures. Yes, the tools can become a crutch for bad languages. But let us ignore those experiments.
Let us consider people (like those on LtU) who know that the language must still nevertheless not suck, no matter what the deal is with the tools. Then we can agree to disagree on whether tools are worth adding to the language ecosystem. Personally I heartily say, "hell yes". I find it hard to believe that anybody who has used Emacs vs. an IDE would say there is nothing good about the IDE over Emacs. I say this as an Emacs nut -- any language I used has to have an Emacs mode or else I'm not interested. ;-) The question, I think, is not whether tools are worthwhile, but whether one ought to take tools into account in the design of the bare-bones language. This is all rather abstract, so there's not much to build a reconciliation of views from. Here's a very small definite example. Remember the "goto fail" bug, early last year? The short explanation of the error (as gasche described it in the LtU discussion, here) was

    if (foo())
        goto end;
        goto end;
    important and useful stuff...
    end:
        cleanup

When that LtU thread started, it seemed so blindingly obvious to me that the language shouldn't have had such an error-prone syntax for its if statement, that I fully expected that point would be made promptly in the discussion at a language-design-savvy site like LtU, so I didn't make the comment myself. For a while. But then it got clear nobody was saying the "obvious", so I did — and, to my amazement, actually got a bit of push back on the suggestion (not that others didn't chime in to agree with me). I actually had to write a separate post to make the (I'd have thought, equally obvious) observation that I wasn't somehow suggesting dead-code detection tools aren't useful. In this case, the language could have included indentation in its definition, and the editor could make it blindingly obvious that an unconditioned "goto end" was there.
Syntax is a practical concern of clarity and aesthetics; it is also core to the editing and reading experience, so it is insane that it would be considered separate from those. the language could have included indentation in its definition, Which would be part of the definition of the syntax of the language, yes, although it's a rather inflexible choice, likely to get in the way of the programmer choosing the optimal format for a particular situation; I'd favor something such as an endif keyword. Still, either way, syntax of the bare-bones language. the editor could make it blindingly obvious that an unconditioned "goto end" was there. There's an example of what I mean. A language that only works right if you use a certain editor with it is a bad language. Syntax is a practical concern of clarity and aesthetics; it is also core to the editing and reading experience, so it is insane that it would be considered separate from those. There's a major modern trend, I've observed it for some years now in such a broad range of situations I don't know how to do justice to the sheer scope of the thing, toward choosing options that are inherently unstable and then expecting to keep the balls constantly in the air. Which fails catastrophically when it fails. I thought of that trend when I heard about the Segway, that has to constantly adjust itself just to stay upright, so if the battery fails it just falls over. I thought of it especially when I heard about replacing the repealed simple prohibition from Glass-Steagall with a complicated regulatory mechanism, because of what that implied about preferring a complicated solution that has to be constantly monitored over a simple solution that you only have to not break.
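For contrast with the C version, here is a hypothetical sketch (using Python-style significant indentation and made-up check functions) of how indentation-in-the-language-definition changes the failure mode: the duplicated line stays visibly inside the if-block and becomes mere dead code rather than an unconditional jump, so the final check still runs.

```python
def check_a(): return 0
def check_b(): return 0
def check_c(): return -1   # this check fails


def verify():
    if check_a() != 0:
        return check_a()
    if check_b() != 0:
        return check_b()
        return check_b()   # duplicated line: indentation keeps it inside the
                           # if-block, so it is dead code, not an unconditional exit
    if check_c() != 0:     # still reached, unlike in the C version
        return check_c()
    return 0


print(verify())  # -1: the failing check is still detected
```

The same duplication mistake is still possible, but significant indentation confines its damage: the block membership of each statement is exactly what the eye sees on the page.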
Given the choice between a programming language that makes you more likely to get things right even if you write it with a minimalist text editor, and a programming language that makes you likely to get things wrong unless you use a special editor, the first is preferable to the second. There's no excuse for failing to design a programming language so it's error-resistant with a minimal text editor. Fly by wire works this way: your modern fighter jet is in a constant state of instability, the only reason it doesn't fall out of the sky is because of a computer driving continuous micro adjustments. The pilot no longer has control over stability. Now, why bother with all that complexity? Not to mention the failure mode is huge: if the computer crashes, the plane just falls out of the sky. Turns out this enables a lot of agility and lowers component costs, not to mention improving reliability, and fighter jets (and today even passenger jets) without FBW are not competitive...the marketplace demands that the more complex system wins. The same is true with programming. Ya, you could use an old fashioned text editor and a language that is editor agnostic. But if something comes along that integrates to achieve a 3-10x productivity increase, you no longer have that choice...your peers who choose the new system will simply outcompete you. There's no excuse for failing to design a programming language so it's error-resistant with a minimal text editor. No. This is not how the world works. The same is true with programming. I've yet to see a reason to think so. Ya, you could use an old fashioned text editor and a language that is editor agnostic. How could you make it editor agnostic? I can't imagine how you could prevent the right sort of editor from being an improvement. You seem to think that improving the behavior of the language when written with a minimal text editor would somehow damage the ability of an editor to be an improvement; is there evidence in favor of that belief?
Are the programming languages you use still optimized for punch card input? Is 72 columns really optimal? You seem to think that improving the behavior of the language when written with a minimal text editor would somehow damage the ability of an editor to be an improvement; is there evidence in favor of that belief? Designing for the lowest common denominator has detrimental effects in practice: you try to optimize for EVERYTHING and wind up optimizing for no one. By saying that the language must be functional in notepad, you basically made a decision to dumb it down for that context. By saying that the language must be functional in notepad, you basically made a decision to dumb it down for that context. That's the claim on your part I'm asking for some evidence to support. By describing what I'm talking about as "dumbing down", you're assuming your conclusion. It's only "dumbing down" if it requires giving up something in the upscale-editor case; I've expressed an interest in what that would be, and I'm still interested. Narrow text columns are a problem when you want long and unique names, which might be hidden by a smart editor, but get serialized verbosely in plain text. Several times Sean has mentioned an idea about disambiguating symbols via means that imply a system is auto-assigning very long names to some things. I like code in 80 columns, so I can see as many windows side-by-side as possible, without lines wrapping in displayed code. It's hard to get by with only four or five views at once, and only two would be a disaster in window shuffling. Some folks use screen real estate to write very long lines of code, because the only thing they care about is what they write at the moment, without considering what happens when you need to compare and contrast widely separated parts. Folks who do a lot of maintenance learn to like narrower columns. Long and descriptive names consume a lot of screen real estate. 
I hate code that devotes one line of code to each argument passed to a function, because each argument is an expression 32 characters long, and is already indented a lot. There's a conflict between making code small enough to see a lot at once, and making each thing self-describing to the extent further docs are not necessary. As a user-interface principle, it's harder to read super-long lines than short ones. Newspapers divide their text into columns which, I believe, are a bit narrower than 80 characters. Likewise, good web design limits its primary text to a maximum width which isn't too far from newspaper practice. Maybe the best programmer experience design would be to enable, support, and encourage programs that can be written, viewed, and edited in narrow columns? Ya, narrow columns are better than wider ones. But we still lack multi-column code formats (you can put buffers side by side, but code usually doesn't wrap). I'm assuming that you want the programmer to have a good experience when all they can use is Notepad. Heck, let's throw on a bunch of other archaic constraints, like they can only compile once a day (that used to be a thing back in the 60s), or code must be readable by non-programmers (again...COBOL). Every constraint UNIVERSALLY creates a trade off. Sometimes these constraints are great for creative lateral thinking that can lead to new inventions, say, the language must be usable via touch on a tablet. But sometimes the constraints are just reactionary throwbacks to a less fortunate past where we had nothing better than notepad or emacs, and it would be too expensive to provide a more advanced IDE. Out of peripheral curiosity, have you ever used COBOL, for an application of the sort it was designed for? Every constraint UNIVERSALLY creates a trade off. You're overlooking the possibility that the effects of the constraint are redundant to the effects of other constraints otherwise in force.
A constraint can't create a trade off if the trade off already existed. But sometimes the constraints are just reactionary throwbacks to a less fortunate past where we had nothing better than notepad or emacs, and it would be too expensive to provide a more advanced IDE. I understand that to imply that wanting source code to be editable with a text editor is a reactionary throwback. I've watched for decades while proprietary data formats live their lives, like mayflies, while the formats that last... are text. I have never used COBOL before. That was before my time. Constraints necessarily narrow your design space. Yes, you can have constraints that are not relevant because of other constraints, but editors have a huge effect on the experience and are not one of those. I understand that to imply that wanting source code to be editable with a text editor is a reactionary throwback. I'm implying that working with design constraints from the 1970s will lead to 70s-style artifacts. Ya, we can probably do better than we could do in the 70s with these constraints (we know more!), but what if we instead dealt with the constraints of today rather than yesterday? I've watched for decades while proprietary data formats live their lives, like mayflies, while the formats that last... are text. This has nothing to do with serialization and persistence formats, which are orthogonal. I have never used COBOL before. That was before my time. I had a chance to use it on a summer job in (iirc) the late 1980s. Fascinating experience; I found it an (unexpectedly, given its reputation) well-designed language, elegantly powerful for its intended purpose. working with design constraints from the 1970s If one dismisses text as a seventies thing, one is then conspicuously unable to explain the success of wiki markup. I suggest that if a format can't be accessed without specialized software, it will eventually not be accessible. A lot of things were just too expensive in the 70s. 
Garbage collection was a niche thing (Lisp, Smalltalk), but it wouldn't be until the 90s that hardware caught up with it to make it a mainstream thing. Bitmap displays were around, but also plenty of terminals, and some people were still using punch cards...the MOAD (the Mother of All Demos) was just in 1968 and it would take a couple of decades for that tech to catch up. But we advanced. We aren't going back there. Again, irrelevant. Serialization format can be as verbose and readable as you want it to be. We can jury-rig a machine that etches code onto stone so that it will be readable for thousands of years. I sense each of us is saying things the other perceives as irrelevant. I wish I entirely understood why that is; there's something deeper going on, that probably isn't what either of us is saying yet, getting at which is why I'm still pursuing this branch of the discussion despite its frustrations. Garbage collection seems to me an irrelevant example, because it's orthogonal to the bare-bones/tools dimension (or, if it isn't orthogonal, it supports my side). A format becoming inaccessible is totally dissimilar to a medium deteriorating: etching onto stone would be, presumably, an attempt to store data on a medium that'll last, but doesn't preserve the data if it's written in rongorongo; on the other hand, electronic data can survive quite a while by being copied repeatedly even though individual copies may have a limited life span (though I grant copy errors would get more likely on a scale of centuries), but good luck using an old spreadsheet in a data format that was abandoned thirty years ago. I feel like we are just valuing different things, like with self-driving cars. The value systems will work themselves out eventually in the marketplace, at least. We are only trying to predict where the future successful advancements of programming languages will eventually go, rather than whose ideology/values are more subjectively correct. In the end, the former is all that really matters.
just valuing different things Yes, that seems so. We are only trying to predict where the future successful advancements of programming languages will eventually go, rather than whose ideology/values are more subjectively correct. True, though with the exciting twist that we also get to help shape the future. The observer is not separate from the observed. I've thought about writing an SF novel, but —so far— I've been more driven to build the future than write about it. There is gold in those mountains if you look in the right place, otherwise you might come up empty-handed! This debate is important in deciding where we focus our energy looking for the next big thing. Building is better than talking, but I can't code for 12 hours a day! It's only "dumbing down" if it requires giving up something in the upscale-editor case; I've expressed an interest in what that would be. A few examples: That's just a small taste of what you're giving up. There's a lot more: visualization of behavior, multiple editable views of code, tables and spreadsheets, live programming, collaborative development, fine-grained dependencies and packaging. A lot of Bret Victor's videos over the last few years demonstrate software development concepts that would be difficult to achieve with a language designed for effective use with Notepad. The videos "Stop Drawing Dead Fish" and many others are worth watching. Maybe plain-text programming is a Blub, only obviously weak to those who have studied or attempted to develop richer programming models or tools. I don't think you're responding to John's conjecture. He isn't arguing that text is best. He's wondering whether non-text tools might as well be layered on top of a well designed text-only language. Sean's position has been that you have to design the tools and language hand-in-hand. I think the counterexample to John's conjecture would be an example where the program format depends heavily and usefully on interaction.
Something like Coq's Ltac, which is borderline unreadable as a text language. But then, maybe that inability to be read as a transcript means that Ltac isn't actually a good language and so isn't a counterexample after all. Is the language what the user types or what the user sees? With interactive systems, those don't have to be the same. That's kind of the point. For the record, I'm with you and Sean: design for the end experience. I'm personally trying to have a usable text transcript available, but that's not the priority. He's wondering whether non-text tools might as well be layered on top of a well designed text-only language. Review my third bullet point. It's one I consider very important, due to cost thresholds, which magnify the effect of increase in costs. It seems to me that 'well designed' for text-only involves a long series of tradeoffs that are bad for layering tools after the fact. I think the counterexample to John's conjecture would be an example where the program format depends heavily and usefully on interaction. Something like Coq's Ltac Yeah, that's another good example. Well, dmbarbour's naming some specific things that are desirable but hard to do with text; the injection of specificity is refreshing. :-) Coq Ltac is an interesting specific thought, too. :-) It seems there may be room for a text-based language to contain provisions for more sophisticated uses; arguably that's exactly what any markup or programming language is, after all, text meant to be interpreted by other software. Thank you; that's a thought-provoking list. It does occur to me (though it's late here; I need to turn in) that a bunch of that stuff is not prohibited by text, if the text engages a good markup language. For example, the first bullet can likely be handled by something akin to wiki markup, the success of which has been due to its being an eminently human-manageable text format.
The one about multiple editable views of code is, off hand, the one I find most interesting. I think the point is you write software in your head, not while sat in front of the keyboard. Entering a program is a very small part of the process. It is better to be able to communicate programs and programming concepts without the computer needing to be there. As such I might write books on programming that need to be understandable without the IDE. Programs have to make sense on the printed page. Declarative code is clearer and more understandable, and that means code should not be interactive but static like a mathematical equation. I think however all those features you list can be provided by the tooling in a way that doesn't spoil the read and writability of code.

- A syntax-aware cross-referencing tool, that maintains an index of the code. Pass in any function or variable and it finds the definition (you would need to give a source location to disambiguate different scopes).
- The IDE can extract comments from inline syntax and convert into tooltips or margin notes.
- Overloading and multi-methods are features of generic programming and are good things irrespective of the method of programming. I don't see any of this list as being really an issue.
- Any media editing in the IDE is likely to be inferior to dedicated tools (Photoshop etc). Also different people prefer different tools, so IDE integration is bad for choice and competition. A competitive market for tools is better for the user in the end to avoid stagnation. Another problem is that it would limit the language to media formats supported by the IDE, the actual binary format would have to be shared by IDE and the program being written. I think for this the IDE could provide a media library that can be used in the code, and the language should support user-defined literals, so that a JPEG image would be Base64 encoded in the text which the IDE can display directly.
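The "user-defined media literal" idea in the last point above can be sketched quickly. This is a hypothetical illustration (the helper name `image_literal` is made up): the image lives in the source as editor-safe Base64 text, and an IDE could render the decoded bytes inline while a plain-text editor still sees ordinary characters.

```python
import base64


def image_literal(b64_text):
    """Decode a Base64-encoded media literal embedded in source text.

    Hypothetical sketch: a structure-aware IDE could display the decoded
    image in place, while the file remains valid, diffable plain text.
    """
    return base64.b64decode(b64_text)


# A 1x1 transparent GIF, small enough to sit comfortably in plain text.
TINY_GIF = image_literal(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)
assert TINY_GIF.startswith(b"GIF89a")
```

The design point being debated is exactly this: the Base64 form keeps the plain-text round-trip, at the cost of being opaque to a human reading the raw file.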
So none of those arguments are really convincing for me. I think languages need to have an unambiguous linguistic representation (that is, I can talk about code and write it down on paper). I think adding support for better cross-referencing and annotation is a good idea. I think it's fine for the IDE to manipulate the language's data-structures directly, and I think the compiler should provide an API for IDE writers. Effectively the compiler should be split into a library and command line tooling, and should probably use some kind of database instead of intermediate files. Text does have nice properties for portability. We've mostly settled on a common standard of ASCII/UTF-8 instead of the historical messes of codepages and Big5 (and EBCDIC, etc.). But text certainly isn't unique in having nice properties for portability. A simple bytecode can also be very portable, for example, and requires much less explanation of semantics and linking models than a typical plain-text PL. In any case, if a language specifies a simple (maybe even textual) import/export format or mime-type, that's sufficient to address portability and backup concerns even if unsuitable for direct human reading and writing (e.g. due to size, density, organization). I strongly disagree with your second-person characterization that "you write software in your head, not while sat in front of the keyboard". In my experience, a lot of programming relies on feedback. I don't frequently build fifty-line functions in my head and then lay them out. I get an idea for what I want, I search libraries for available tools to do it, I begin writing something out, I might backtrack a little as my ideas and approach solidify. I react to compiler errors and fix those, as a common basis for cleanup. I sometimes use type holes to get some extra clues for how to fill a gap. If writing HTML generation code, I might test the results in the browser and tweak and twiddle until satisfied.
Refactoring is often a reaction to seeing a pattern in code. I doubt I'm alone in this development style. Regarding books etc.: I don't have much motivation to explain computing and programming concepts to people who aren't equipped to try it out. Rather than learning from a static book, why not have more interactive tutorials? We should stop drawing dead fish. I agree that declarative code has a lot of nice properties, and let's have more of that. But I disagree with your tacked-on conclusion, "that means code should not be interactive but static". Spreadsheets are a fine example of how declarative and interactive can be married. Coq Ltac, too. Live coding with declarative languages is also quite feasible. Theoretically you can, with enough effort, implement tools even for a language and constraints that are relatively hostile to advanced tooling. In practice, you probably won't, or the result won't be as robust. Design for tooling is about affordances, shifting the costs so that tooling becomes an easy and natural approach. Asking what can be provided is simply missing the point. Ask instead what is easy enough that we can reasonably expect it to actually happen. There are many ways to achieve generic programming. Not all of them are human-syntax-friendly and tooling-hostile like overloading and multi-methods. You seem to be assuming that media editing in the IDE would primarily be built into the IDE, e.g. it comes with support for JPEG and a built-in JPEG editing tool. My own assumption is that media editing is mostly a library concern, with the IDE providing a simple API. Thus the tooling for media types would be a lot more portable, extensible, and even competitive. Although you see coding as an interactive experience, I think you underestimate the amount of sub-conscious processing the brain does. The important thing is the cognitive model (the semantics of the language) not the syntax.
If the semantics are unpredictable then it's difficult to reason about the program. If the semantics are suitable for humans, then humans will develop a language to discuss the semantics. "I want to iterate over a collection of widgets, inverting each one...". So even if you don't provide a linguistic interface, humans will create one. You then have the problem of two interfaces and two ways to think about coding (visual and linguistic). I think that where humans are involved the linguistic will eventually win. This means languages should try to be more like existing written languages (say English), but without the fuzzy definitions. So I think static text will be chosen due to fitting better with the way people think. An example of this is electronics design, where visual component tools have been available for years, but VHDL and Verilog have pretty much replaced a lot of interactive visual editing. I think it's because people's linguistic processing ability is much more sophisticated than their visual processing. They can internalise the code quicker and then manipulate it internally. You can see this when people edit code: very rarely do they make a single text change, most often they have pre-planned a series of edits to achieve the required re-factoring. They can do this in their heads because of the linguistic nature of code. By externalising these transformations you limit the achievable transforms to those supported by the IDE. Manipulation of linguistic representations is only limited by the human imagination. When I write a program, I already know what architecture and syntax I want to use. I know what data-structures I want. It's like writing an essay, the IDE is there like a grammar and spell-checker, but I already know what I want before I start typing. I personally use Vi for editing as I don't like to get feedback before I am ready.
I frequently adjust code as I am writing it, perhaps only sketching sections in, and skipping around leaving invalid partially complete syntax before coming back to it. I then get feedback from the compiler when I have something I think is ready. Having said all that, I think code organisation in large projects is a problem. I would really like an editor that shows only functions and datatypes used in the current code context. So when I write a function that uses a datatype, I can see its definition, or the definitions of functions I call. But I would like this laid out like a plain text page. I really don't like side-bars, tool-bars, sub-windows or popups... the more it looks like a plain text page the better, but I don't mean like an 80-column dos-text, more like a nicely formatted interactive document (if that makes any sense). I agree that ideas can occur at any time. I often have good ideas in meetings, or showers, or meal times, or while sitting in front of unrelated code. But putting it into code does a lot to refine and concretize an idea. Ideas aren't fully formed, and certainly aren't executable by a machine, without a lot of massaging. Or, at least, my ideas aren't. Regarding your first few sentences: I don't see an interactive coding experience as incompatible with sub-conscious processing. I also did a lot of sub-conscious processing when driving from home to work, but that doesn't make it any less interactive. I do agree that a simple, predictable, cognitive model is very valuable. Effective support for local reasoning and compositional reasoning are among the values I rate most highly in language design. Much higher than 'readable' syntax. Any language designer who reaches this conclusion must make a decision: Shall I sacrifice a readable plain-text syntax in order to further improve the cognitive model? My answer was yes. I agree that people will still create linguistic interfaces.
Naturally, if a system is flexible enough to support user-defined visual EDSLs for image editing, it's certainly flexible enough to build many textual languages above the structured one. This is something I actively considered before I reached my answer above. Visual and interactive programming has a lot of use cases even in a system where various plain-text DSLs might be dominant. Assuming a programming environment where mixed modes can peacefully coexist, I fully expect they will do so. You seem to be assuming a competitive environment where there can be only one, eventually, after enough seasons. Regarding transforms on code, you again seem to be assuming the IDE supports a fixed set and thus we're "limited by the IDE". An IDE might certainly have a limited set of built-in transforms, but I think this is an area (the same area as support for media types, mentioned earlier) where IDEs should draw from libraries. Upon doing so, the set of transforms becomes extensible and portable (and competitive). And the IDE becomes simpler. Re: "I already know what I want before I start typing." - That may be the experience for you and some subset of other programmers. But I think it would be unwise to generalize. Some fields of programming are a lot more R&D than others. Some fields are more aesthetic and inherently require a lot of tweaking and tuning. Some modes of programming are a lot more reactive or interactive. The preferences and habits of people vary, too. I agree regarding code organization in large projects, modulo blindly requesting definitions in plain text. I would love to have more interactive documentation, and a rendering of automatic QuickCheck-like sample inputs and outputs (especially around border cases). I think there is a difference in emphasis here. I am not against any of these user-experience enhancements, but I don't want to lose the ability to treat code as plain text, so I can still write code on paper when I don't have a computer near me.
I think a language implementation should be a library, of which a text/file front end and an I.D.E. can be clients. I would give equal importance to each, but there should be no features that prevent you having a plain text representation of the program.

This is my perspective as well. I think designing the programming language for end user experience means making sure you have the best IDE experience possible but also that you support a reasonable text encoding.

I don't think you'll lose your ability to write code on paper regardless, whether it be sketching out an FBP diagram or a Kripke structure or a graph or a linguistic DSL or even a little pseudo-code that can be translated later on. Plain text is vastly more restrictive than what you can do with pen and paper. I'm interested in alternatives to desktop programming, i.e. the conventional KVM (keyboard, video, mouse) setup. Augmented reality might make it a lot easier to 'program on the fly' with pen and paper, as an alternative to waving your hands about in the air and suffering 'gorilla arm'. I imagine sketching and graphs would be a big part of that, perhaps combined with haptic approaches (e.g. arranging a few physical objects).

a language implementation should be a library, of which a text/file front end and an I.D.E. can be clients [..] there should be no features that prevent you having a plain text representation of the program

A text/plain front-end and filesystem integration become constraints on design, filters on the feature set, a lowest common denominator. You'll sacrifice a lot of opportunities to get there, at least if you want your language to be conveniently readable and editable by that interface. If you believe it a tradeoff worth making, that's your prerogative as a designer. I don't share some of your motivations, such as making code pretty for display on dead trees or in primitive text editors.
Also, I have some of my own goals, such as unifying PX and UX, an effort that greatly benefits from both interaction and the ability to treat manipulation of rich media types as programming. With such a difference in concerns, our designs will be divergent. But when people start arguing or conjecturing that there is no tradeoff... that annoys me. I suppose this is a peril of opportunity costs: so long as you're ignorant of them, they don't seem like costs at all.

I'm pretty sure I've seen you advocate against tradeoffs: when you find yourself facing a tradeoff, try to change viewpoints to find a solution that doesn't accept the tradeoff. John made a similar point somewhere in this comment thread, I believe. I think this is a good situation in which to try. Specifically, what I think we can do is factor out the syntactic component of a language in a way that allows it to be "skinned" with either a textual encoding or a more structural encoding. Finding a reasonable factorization takes a little work because of issues like binding that are handled quite differently by structural and textual encodings. The idea is to move most of the elements that you complained about in your earlier post (overloading, etc.) that seem like artifacts of a text encoding out of the core language semantics and into the text encoding. It's a little tricky, but I think possible and worthwhile.

I have frequently said: invention is the art of avoiding tradeoffs (cf. TRIZ). If tradeoffs can be avoided, that's a good thing. And I think tradeoffs often can be avoided, if only our puny human brains find a way. But this avoidance of tradeoffs doesn't allow for placing our primitive paradigms, tools, traditions, and text editors on a pedestal. By its nature, invention is something that replaces our tools and models.
(Also, the initial time and effort and latency and money spent inventing and prototyping and installing and learning the new tools is an investment that should be accounted as a tradeoff, at least from an economic perspective.)

A change in viewpoints isn't sufficient to avoid a tradeoff. A change in tooling is essential. No matter what, if you attempt to squeeze a language into a simple text editor, tradeoffs are inevitable. What's left is designing towards a set of tradeoffs you find acceptable. My current PL designs avoid many tradeoffs between textual and visual programming, support for DSLs, and tooling. But it simply isn't happening with Notepad. Your skinning idea sounds interesting and worth pursuing, but you will make tradeoffs elsewhere if you intend to support a pleasant experience reading and editing it unskinned in Notepad.

I don't think anyone is saying that you're going to get the full UI experience in Notepad or even Emacs. At least that's not what I'm arguing. I'm arguing that we still need good support for text underlying our fancy tools. The best argument I have for this is that our most efficient method of interaction at the moment is still the keyboard. Our fancy IDEs need to support "show me the text encoding for this" so that the programmer knows how to type things in.

Another opinion, which is perhaps relevant to this discussion, is that I'm skeptical of heavily interactive programming, like Ltac. Interaction has overhead. The programmer has to pause to understand the context that the IDE is now presenting before entering a response. If the context changes too quickly it's disorienting and slowing. Building text constructs works well as a stable keyboard interface that doesn't require interaction.

I think there are two separate issues here. The first point addresses a lot of the issues you describe: efficient use of the keyboard, stable behavior without interaction where you want it, etc.
This is easy enough with embedded languages or textual EDSLs. The second issue is the one critical to John's conjecture and most of this thread about tooling, and also your "show me the editable text encoding" query. That text encoding simply won't do you much good if it's too large, too dense, too opaque, too low level, too noisy due to specifying interactive behavior, etc. Thus, if you insist this query and editing the result should be effectively supported, you'll make sacrifices regarding what media and literal types your language supports.

There are a lot of algebraic structures in programming (type systems, datatypes), and a lot of logic too. When thinking about these things we naturally have a text/symbolic representation of them. When you say "a text encoding simply won't do you much good if it's too large, too dense, too opaque", I would suggest that if it doesn't have a good text representation it will not be easy for people to think about. I think the brain's reasoning is shaped by human language, and abstraction level is a critical part of that. If something gets too detailed we invent new sub-languages to break it down and discuss it. Put another way, I am sure you can do all sorts of things with a graphical interactive interface that take programming away from linguistic representations, but I think this will make it harder for people to use, not easier. It's the reason why 'cyberspace' never really made much sense. Data is not naturally represented by a 3D landscape, so any mapping is entirely artificial and not as useful as a text representation. The Matrix's tumbling kanji are a much better representation of data than Tron's light cycles and rotating cores.

if it doesn't have a good text representation it will not be easy for people to think about

Well, that's just false. There are a lot of things that don't have good text representations because they're trivial to reason about.
Image data, where the primary action upon it is to render it, would be a simple example. Also, the clarity, comprehensibility, efficacy, etc. of textual representations is NOT compositional. If you take a procedure with one well written line of code, and add another, you now have a sequence with two well written lines of code. If you do this a thousand more times, you have a mess. Quantity impacts quality. Perhaps you could avoid that mess if you do all your development in the text layer. But if you're dealing with large media objects - a big graph, for example - that won't happen. Factoring graph data into a dozen little functions would just complicate the tooling and other things, and wouldn't necessarily aid comprehension of the graph (excepting independently meaningful subgraphs). Returning to an earlier point: you can offload to external tools a lot of content that doesn't fit nicely into your PL, but you'll still need to deal with that content AND you've gained a panoply of problems for testing, packaging, integration, type safety, language purity, partial evaluation, constant propagation, staged programming, transparent procedural generation, etc. Ignoring the graphical and interactive aspects is simplistic - a local illusion of simplicity that ultimately explodes into complications down the road.

I can reason about a cat, the fact that it is a mammal, and an animal. This is reasoning. How do I tell what kind of animal a cat is from a JPEG? The fact is that this kind of reasoning is natively linguistic and word based. Most of the time I don't even need to see the picture; knowing it is a picture of my cat is enough.

As I wrote above, "image data, where the primary action is to render it". In this context, I don't need to know whether the image is of a cat, a hat, or green eggs and ham. I only need to reason about how to render images. If you want to render your cat without using an image, just using comprehensible human text, you're in for a challenge.
Reasoning that your cat is a mammal and animal isn't going to help much.

Rendering the cat image is trivial; it's not really reasoning, and it's not really a problem. What's wrong with giving the cat image a symbolic name?

Since you seem to have fantastically missed my first point, I'll review it. First, since you clearly missed it, the triviality was very intentional, in contradiction to your earlier suggestion. To clarify: compared to other code and objects, you don't need to think hard about whether your image data is going to behave in a manner that, for example, compromises network security. This doesn't mean you don't reason about it; it's just easy enough that it barely registers. And yet the image data still lacks a good text representation. Thus, your hypothesis that things that lack good textual representation are difficult to reason about is clearly contradicted. QED.

Since you also clearly missed the add, I did specify rendering your cat "without using an image". You believe in the power of plain text programming, do you not? So, why would you need a cat pic to render your cat? Just use plain text. I'll be impressed if you find rendering your cat without textually opaque image data to still be trivial. If you can't tell, I'm annoyed with the direction this thread has taken, and that you ignored more relevant points to discuss your ability to reason about cats.

I just don't get your argument. How does it follow that image data is easy to reason about because it does not threaten network security? Even the initial premise is wrong: images can threaten network security, and have been used to do so. Deliberate byte sequences can be encoded to cause buffer overruns in certain JPEG libraries, allowing a root escalation attack, and then launching a network packet sniffer or other payload. But more fundamentally I don't think the image itself is important.
Almost all human thought consists of abstract concepts, that is, things that do not have a concrete representation. Because these things are not concrete there are no images to represent them. Instead we have words. What colour is the cat? Can you even represent that question as an image? Do all cats have four legs? As soon as I want to do anything at all useful with the cat image I need words to describe what to do. The cat image itself is irrelevant, most likely coming from a camera or a database of images. When I want to describe how to process the video stream it's much easier to describe verbally.

But I agree I might be missing the point. I don't see any problem in having tools to make working with images easier. I don't dislike the idea of an IDE showing a thumbnail of an image in the code, nor do I dislike the idea of being able to cut and paste images into a REPL loop. My point is that this is not really programming, just displaying literals in a better way. The program that manipulates the images would seem to be best expressed as 'text'.

Image data lacks a good textual representation and abstraction not because it's difficult to reason about, but because we don't need to reason about it in any manner but the most trivial - where it came from (a static content module or external database), where it's going (to render). If you can easily reason about whether an image is a network risk, you can easily reason about the image. Proof by example. QED. This is the same sort of proof as: "I can walk, therefore I can move." This certainly doesn't imply ALL movement is possible for you (can you bend your knees both ways?). And similarly, I haven't proven that ALL reasoning about the image is easy (can you recognize the cat?). But I did address relevant reasoning: we typically don't do much with images other than render them.

images can threaten network security [..] buffer overruns in certain JPEG libraries

True.
If your rendering library isn't memory safe, then perhaps you couldn't easily reason about network security.

Almost all human thought consists of abstract concepts [..] we have words

I've not once argued against abstraction, nor even against use of text as a programming media. There are enormous differences between these three positions. The first position is common for anyone who insists their PL work nicely for maintenance using a basic plain text editor. The motivation to stick within the limits of widely accessible tools and non-interactive distribution platforms (such as thin slices of dead tree) is obvious. The second position is the one I take. Use text where it's appropriate. Use visual programming where it fits nicely (a lot of specialized cases). Gain many benefits because much data that would be externalized to avoid polluting plain text can now be modeled and maintained directly within code, and is subject to tests and staging and partial evaluation and transparent procedural generation, and is conveniently easy to use with purely functional programming (e.g. no side-effects to load external files). The cost is that we can no longer expect programs to be fully accessible or maintainable via plain text editors. The third position isn't held by anyone in this thread, but does seem to be a popular straw man in almost every discussion about visual programming. Based on your apparent assumption that I'm against using linguistic abstraction, it seems this discussion is no exception.

It seems we are both somewhere within the second position you stated. I am not so sure about putting media in code though. We spend a lot of time factoring even strings out of code (for internationalisation), so putting more in code seems odd to me. I want to leave the design elements to designers, so the web approach of code producing data that is combined with styles, themes and design templates seems more useful to me.
Code tends to output simplified XML markup which then gets transformed using XSLT in the presentation layer. When GUIs are themeable you have to refer to components by their function (left-margin-image, for example). The user selects the theme they want for their desktop. I am not sure I get the use case for inline media literals. We should be separating the presentation layer more, not binding it more tightly.

You're operating under the popular premise that PX and UX are entirely distinct worlds. I think this premise has been very bad for all of us, both users and programmers. Every application becomes a walled garden. Reuse and extensibility are very poor by default, and inconsistent between apps. Data resources are relatively painful to use compared to string literals and functions. With the opposite premise, that PX and UX should be more unified, media in code makes a lot of sense. Designers are no longer clearly separated from the codebase (though they might work on different parts of it). Some GUIs might be represented directly within code for contexts like live coding or tangible functional programming. This doesn't mean we can ignore internationalization and similar design concerns (at least, not more than we already ignore them). But, rather than externalizing your "internationalization database" (which isn't really about stateful data), you'd just model said database as another module in code.

I still don't quite get it. Programmers do not make good designers; just compare the state of the average desktop application (with programmed UI done by developers) to web applications (where designers have done the UI). There are many more great looking web applications precisely because of the separation of design from programming, and in reality it doesn't cause any issues, plus re-theming the application is so much easier.
If a customer asks for an organisation-specific UI, it's easy to task a designer to look at their corporate style guide and develop a new theme without touching any functionality. As for internationalisation, it's much easier to give the external strings database to a translation agency to translate than it is to give them source code and expect them to edit it. It is not reasonable to expect them to learn to use the coding environment just to translate strings. Plus how do you cope with user theming? In the age of responsive design, where applications should be targeting devices with varied screen sizes and resolutions, hard-coding media into applications seems the wrong approach. I should be able to have all my apps with the same GUI components and theming, which might be a personal theme where the images will not be available to the developer.

Putting UX skins and behavior into the same codebase doesn't mean we suddenly ignore separation of concerns, coupling and cohesion, best practices for modularity, etc. It's pretty clear that you haven't (seriously) contemplated what it might mean to unify PX and UX. You keep going on about "programmers" and "designers" as though designers wouldn't be programmers with a specialized goal, operating on a different part of the problem (and hence a different subset of the codebase, if your modularization is any good). Customers and users would also be programmers, albeit with their own codebases, and a lot of programming wouldn't look like what we call programming today (i.e. because it mostly wouldn't be manipulation of plain-text source and higher order abstractions). When you envision "editing source code", you have a picture in your head of C or Java or other plain-text programming languages. But we're deep in a thread regarding programming with other media. Try envisioning images, graphs, tables, etc. as first class source code.
Internationalization by sharing source code for editing is NOT unreasonable assuming you've partitioned said content into separate modules with an appropriate media for editing, such as tables. (That said, you could just as easily provide a conventional database and import the results back into your codebase.) User theming requires users to have their own codebase. Which they would because, upon unifying PX and UX, all users are programmers (even if it's mostly shallow and they're not really thinking about it), and a codebase would replace many roles of filesystems (including access to rich media, like your personal images for skins). You'll either pull a copy of a remote application for local installation, or create local skins for remote services. Overall, this problem is a more specialized variation of the broader extensibility and accessibility concerns I'm aiming to address in the first place. Don't let your beliefs about how things 'should be' interfere too much with envisioning alternatives for how things could be.

I understand how it could work, and Mathematica already works a lot like that. My concerns are more about why you think this is a better approach, and who the target audience is. If you could get Photoshop to save content into a module directly, so that designers did not need to waste time learning to program, it might work. But it's going to be far easier to adapt the industry formats into your system than to persuade Adobe to support it. Perhaps if a module could be a tar file of images, and the tar file names directly became object names, that might be viable. But you still need to import the module into your development environment, rather than just copying the images into the correct place in the filesystem. It seems you imagine a world where everyone is a programmer; however, this seems unrealistic to me. Designers don't want to program, nor do clients who just want to select a skin.
At a guess your target is something like the same audience as Mathematica, i.e. not professional developers building software for end users, but academics, amateurs, researchers, and spreadsheet users. If this helps get people into something better than Excel, then it's a great idea, but it's clearly not for everyone, and it's not a good fit for the kind of development I do.

Whether it be paperwork, presentations, or programming, every professional is expected to do some things they "don't want to". Whether they like it or not, designers should be expected to program, e.g. to create interactive mockups or quick and dirty prototypes. And the education necessary for this should start at a young age: computation, taught alongside math and science, as a core competency. Of course, designers today have a legitimate excuse: the poor tooling, the discontinuity between using or consuming content vs. programming it, creates a massive barrier against the sort of 'casual' programming a designer should be expected to perform. Designers cannot reasonably be expected to program until the tools are replaced. The goal with unifying PX and UX is to make 'casual' programming accessible and easy. While the existence of casual programming might create unwanted burdens for some reluctant designers, it also introduces opportunities and allows many lightweight efforts (tweaks, compositions, etc.) to proceed without bringing in a career programmer. While this may seem to be aimed at the amateurs and academics, we cannot neglect the professional developer. If the professional feels pressure to use different PLs and tools, then the barriers between PX and UX will continue to persist. So the audience is everyone, and fundamentally must be.

I think this projects a homogeneity on people that may not be desirable. People are different; the best graphic designer in the world could be a terrible programmer.
If you insist on only hiring multi-skilled people you may be artificially limiting the talent pool in a way that is prejudicial to your business. I am not sure programming should be regarded as a fundamental skill that everyone needs, like reading, writing and arithmetic. IT and computer literacy is something everyone needs, and this is reflected in school curricula. I think it is a tendency of generation X to see programming as interesting because (personal) computers were a new thing. To post-millennials, computers are just something to use, another appliance for using 'office' at work and accessing the internet. Some designers can program well enough to produce interactive prototypes, but not all, and even this can have problems: there is a tendency to fit more coding into the prototype to show features working more realistically, to the extent that the prototype can become a development bottleneck. Sometimes wireframes and mockups are just the best solution (where the application is small, or the code is an inherently complex engineering problem). To cater for the professional developer, the language will need to be able to ship binaries, that is, so that the customer cannot reverse engineer the software, but still allow configuration changes such as theming, language changes, etc.

I certainly don't expect homogeneity in programming skills. There's a spectrum for programming, much like there is for writing or planning. OTOH, it wouldn't take a skilled programmer to support many use cases. For graphics design, a domain specific language and a little tile and wire manipulation could probably cover most interactive and animated mockup work. Terrible programmers are still programmers. Many terrible programmers would improve with experience and a little training. Others could lean on more skilled programmers in the group.
While people are different, I think the median skill in programming could be much higher than it is today, and that perhaps 90-95% of programming as a (separate) career could be eliminated. There is much value in eliminating the PL/UI tooling gaps such that non-programmers become programmers, even if some are forever terrible.

While I agree that a language's toolset should enable compilation to binaries, your stated motivation for it isn't very sound. There are decompilers and tools like jsnice. Skilled programmers are able to re-engineer and clone just about any app without even seeing the source or binary, just by inferring the intentions and purposes. A belief that binaries somehow guard against reverse engineering should be treated with much skepticism. Some good reasons to support compiling to binary include performance and bootstrapping, support for a broader range of targets such as embedded programming and unikernels, etc.

If you take a procedure with one well written line of code, and add another, you now have a sequence with two well written lines of code. If you do this a thousand more times, you have a mess.

Perhaps I'm forgetting quite what you mean by "compositional" (a term it's easy to presume one understands); it seems to me that good organization is inherently not compositional, and whether or not it's text doesn't bear on that. Good organization of the whole depends, in highly idiosyncratic ways depending on the nature of the whole, on complex facets of how it's put together; naive composition of many parts into a complex whole is likely to produce a mess, and it's even possible, to some extent, for a complex whole to be well-organized while all the individual parts are messy.

Quick review: algebraic composition is the only composition worth considering. A compositional property can reliably be inductively computed across composition, i.e. `∃f.∀A,+,B.P(A+B)=f(P(A),'+',P(B))` for the entire set of composition operators like +.
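The definition of a compositional property can be made concrete with a toy sketch (my own illustration, not from the thread): line count over string concatenation is compositional, because P(A+B) can be computed from P(A) and P(B) alone, without re-examining A or B.

```python
# Toy illustration (not from the thread): a property P is compositional
# over an operator + when there is an f with P(A + B) = f(P(A), '+', P(B)).

def P_len(s):
    # Line count is compositional over concatenation of code blocks.
    return s.count("\n")

def combine_len(pa, pb):
    # The f in the definition, specialized to concatenation.
    return pa + pb

a = "x = 1\ny = 2\n"
b = "z = x + y\n"
assert P_len(a + b) == combine_len(P_len(a), P_len(b))
```

A property like "well organized" has no such f: you cannot decide it from summaries of the parts, which is exactly the point made about emergent, contextual properties.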
Trivially, invariants are compositional. Ideally, P is useful but also much smaller than the objects being summarized. Compositional properties are valuable for modularity and very large scale systems because we can shift much reasoning to the summary of properties rather than reviewing implementation details. Of course, properties that are emergent or contextual fundamentally cannot be compositional. And a lot of important human concerns are emergent or contextual: look and feel, usability, performance with a given cache size, "good organization". Compositional properties are useful for aligning an organization, but good organization for humans has a lot of emergent and extrinsic human aspects that are also very important, e.g. Conway's law, or concerns about human memory (the five-to-seven rule). Overall, I agree with what you wrote above.

Aside: I think compositionality is a useful distinguishing property between type systems (which must align nicely with modules, and thus types are compositional) and other static analysis models (such as abstract interpretation). So, knowing which properties are compositional can give a pretty good idea about which properties we could protect with a type system if we tried.

I think we may be near agreement. For example, supporting image literals is a good idea and, while you can have a text encoding for images, it's not particularly useful for manipulation of an image. However, I wouldn't characterize image editing as programming. There's a spectrum of activities that we would like to be doing in our next generation IDE, including image editing. The tasks that I consider programming are the ones where I believe I want a stable text UI. Other examples are building finite state machines or UI layout. Those are things you can do in an embedded non-text editor. But they're specialized and not general purpose programming.
Yes, I consider visual DSLs to be mostly for special purpose tasks - geometries and images, game map editors, Kripke structures, UI layout, etc. But there are quite a lot of small areas that benefit, and no PL designer will think of them all ahead of time. So I think it's also important that they be achieved and implemented through library-defined EDSLs. And while images might not seem like programming to you, they're certainly an important part of many programs, much like text and numbers. Text is a useful medium for a lot of problems, and one we have pretty good devices today for manipulating. I'm certainly not suggesting we deprecate it. But even lightly mixing text with visual media has a significant impact on whether plain text editors remain a suitable development environment.

I just want to point out that it's not too hard to integrate text and non-text in a way that works reasonably well with the current approach. The text file might look something like this:

    -- This is a text source file.
    render ($import mycat.jpg)

When viewed in the IDE, that can be an embedded image. A similar encoding issue exists even if you just want to partition a codebase into files. Again, my thinking is to design the IDE experience that you want first, but then to give due consideration to making the experience with unix style tools as painless as can be reasonably achieved without compromising the primary IDE experience.

Editing mixed media files is difficult, however. Structural or projective editing might work, but language-aware editing... it is still an open problem.

If I understand what you mean by mixed media files, isn't the problem just that we don't have formats for media that allow the embedding of other media in a standard way? Well, tar files are standard and support embedding, and in principle are easy to manage by a web server or virtual file system. Editing mixed media was the point of 90's format embedding system software.
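The `($import ...)` encoding proposed above lends itself to a simple textual scan, which is what makes it friendly to unix-style tools. Here is a rough, hypothetical sketch (the directive syntax comes from the thread; the regex and function names are my own, not from any real system) of how an IDE pass might locate the media references it should render inline:

```python
import re

# Hypothetical sketch: find "($import name)" directives in a plain-text
# source file so an IDE could render the referenced media inline while
# plain text editors still see an ordinary, greppable file.
IMPORT_RE = re.compile(r"\(\$import\s+([\w.]+)\)")

source = "-- This is a text source file.\nrender ($import mycat.jpg)\n"

def imported_resources(text):
    # Resource names the editor would display as embedded media.
    return IMPORT_RE.findall(text)

assert imported_resources(source) == ["mycat.jpg"]
```

The point of the design is that the same bytes serve both audiences: the IDE substitutes an image widget at each match, while grep, diff, and version control operate on the unmodified text.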
But the web displaced 90's work on compound document architectures for mixed media in files: OLE at Microsoft (the doc file format), OpenDoc at Apple (Bento and Quilt), and a framework at Taligent. The MS and Apple formats were of a file-system-in-a-file variety. Bento was a post-hoc indexing kludge that appended a toc (table of contents) after content written anywhere; it originally targeted compact disk formats, unifying a view under one umbrella. Both docfile and Quilt were block-oriented and suitable for paging (either system-based or hand-cached).

Storage on resource-starved platforms was tough. OpenDoc had to work on 4MB Macs where the majority of users enabled the smallest amount of virtual memory possible, just so common code was shared in a mapping by multiple concurrent apps. Otherwise VM was absent, so hand-rolled paging was necessary. Good indexing needed btrees for large docs. There wasn't enough RAM to run a database and OpenDoc too. The OS took a meg or two, OpenDoc code took another meg, and that left a meg of RAM to juggle for other apps and i/o paging. Now we have so much in the way of resources, people just burn it. Deciding what to do is the problem, not whether enough resources are present.

Simple sorts of tools tend to use OS file systems. So tar files are a low path of resistance solution that's portable, because every platform understands them. I think emacs even has a tar file viewer, looking at them like a directory. You can version files in tar by appending new replacements, so it's pretty easy to model VMS-style file systems with version numbers. All you really need is an api for indirection, so you don't depend directly on how a thing is physically represented. Connect to a service that shows a file system, which might be a tar file. Write your own server that fronts your tar file, and then you'll know for sure it's a portable tar file, and yet the code interface works on whatever has the same model.
Since we're talking about tooling, throw as many things at it as you like. Go ahead and use an SQL database too, why not. Nothing really stops you from having files, embedded mixed media files, database support of your choice, and no dependency on the local OS. You can make it functional with copy-on-write versioning and garbage collection. (Sorry if this was long. All at once is sometimes shorter than eking out a little at a time in a long exchange.) It is not a format issue but one of real estate: how do you put the image in your code in a way that isn't disruptive? I haven't seen a good proposal for it yet. It might be useful in a side bar (something I'm trying to add to APX). Disruptive in what way? MIME already provides a format for embedding and linking resources in a flat representation; you just need a better syntax. The bits aren't a problem for PX (that is a problem for engineering). The layout and typography of how to create a multi-media editable document are. I have the feeling we are thinking about different problems. Why not let the programmer put the image where he wants? I expect images typically will be disruptive, and they probably don't usually belong in your code, but there are probably occasionally reasons to embed them. If reading code is as important as, or even more important than, writing code, then why don't we typeset our code like we typeset our papers (Literate Programming in reverse, Fortress with layout added to the mix)? In that case, it makes sense to include images, tables, math mode, detailed typeset comments, test case results manipulated in real time (iPython), ... I'm not sure if we should go there or not. On the one hand, it elevates code reading (and other non-writing tasks like debugging) to a polished first-class concern. On the other hand, there are a couple of disadvantages to consider: we can embellish a lot with a language-aware editor that is still based on text (as I've done in my prototypes).
However, we can't really embellish layout with pure text input or even with WYSIWYG rich text input. Witness the horror of writing a paper in Word vs. writing it in LaTeX. It is definitely worth trying, but... I'm not sure if it is on my list yet or not. The worlds of programming languages and markup languages are on a collision course. I see stuff I know from programming trying to crop up in markup, and vice versa. I'm not sure quite how to articulate this, but I'll take a shot at it. Starting from the markup side: The Wikimedia Foundation has decided it needs to make Wikipedia pages smartphone-friendly with a WYSIWYG interface. There's some appalling politics involved, but the two relevant technical objections are that (1) the ease-of-human-use of wiki markup is the primary asset the wikis have, so that cutting yourself off from that through a WYSIWYG interface is suicide, and (2) the primary path by which inexperienced wiki users gradually become more experienced wiki users is hands-on work with wiki markup, seeing how others did things — it's often trivially easy to see how someone else did something and imitate it, whereas looking at existing content tells you nothing about how to do it with a WYSIWYG editor and, conversely, learning how to do stuff with a WYSIWYG interface tells you nothing about how to do anything else except that particular thing with that WYSIWYG editor. The obvious solution — obvious to me, that is :-) — is to have an editor that, when you say you want to edit a wiki page, takes you into a mode that shows you the wiki markup and gives you some sort of structured help with editing it.
Notice, in this scenario there still is an underlying wiki markup; in my experience, there's no way to successfully pull off a pretense of an underlying text representation, as the pretense is inherently unstable, requiring a lot of work to maintain and probably lacking the flexibility of the real thing and occasionally suffering from bugs in the internal correspondence. But there are two ways for wikis to go from here: The Foundation is headed (perhaps without consciously intending it) for a vision in which ordinary users use a dumbed-down WYSIWYG interface to choose amongst the options provided to them by an elite of programmers who use programming languages to control what ordinary users are allowed to do. The alternative I favor has wikis themselves becoming more plastic and integrating more features of programming, with straight text-editing always an option while increasingly augmented by a sort of wiki-IDE. The considerations involved with wiki-based programming are rather mind-bending for a programmer; just try to imagine a major piece of software in which anyone can edit the code (the appropriate emoticon here is, I think, o_O). But it's a vision of wikis that's clearly heading toward programming, and the discussion here seems to suggest programming may be headed toward markup. So maybe the two fuse at some point. I see stuff I know from programming trying to crop up in markup, and vice versa. I'm not sure quite how to articulate this, but I'll take a shot at it. This is a pretty common theme actually. Almost as soon as HTML was invented, someone wanted to abstract and compose HTML fragments. We see the same with CSS now. For instance, the Bootstrap CSS framework uses the less stylesheet compiler to introduce and reuse styling abstractions since CSS can't do this natively. Lesson being, abstractions are important, and if your markup is missing abstraction facilities, your language is incomplete.
Others here have already made the case that programming languages should also have markup qualities, and I think the advantages are clear from EDSLs. The languages that can't embed concise markup-like combinators are often considered clumsy and inexpressive. We can certainly segregate textual content from the visual. But I think the sample here doesn't offer a very good understanding of the resulting experience. The image 'mycat.jpg' is opaque and we probably would ignore it for most debugging and maintenance purposes. If instead it were a module with an important impact on program behavior - e.g. a Kripke structure for a state machine - the Unix developer will suddenly be juggling a lot of tools. But I'd really prefer to avoid this segregation. I think mixed-media modules will see a lot of use if well supported. And I think there's a lot to gain from pushing most of the editing and rendering logic into libraries rather than locking it down in external tools. When optimizing for plain text expressiveness and readability, and filesystem integration (to leverage the common text editor), it is common to introduce features such as overloading or multimethods, operator precedence, namespaces, imports, code-walking macros. The need to express mathematical formulas in code will not go away, no matter how smart your environment. So features like operator precedence and overloading will never go away either. If you have ever had the doubtful pleasure of using a graphical "formula editor" (e.g. the one in MS Office), you quickly realise what a vastly superior and more productive medium text is. Even if it is as terribly designed as LaTeX. Though with operator precedence, your editor can make the implied precedence more obvious visually (something I've been meaning to implement in APX by shading the inferred tree). 
Once precedence is visually more obvious, perhaps you can design more expressive precedence schemes without overburdening programmers cognitively (especially those reading code!). Given notepad as the editor, we wouldn't dare go with anything other than something very simple. Given notepad as the editor, we wouldn't dare go with anything other than something very simple. I think the version of notepad used by the early mathematicians who invented operator precedence supported a wide range of formatting options. Correct, but early teletype terminals couldn't replicate those formatting options, and we never really adopted bitmap displays for programming. Operator precedence won't go away, but an editor can help with it by making the parentheses a display concern, not something that is baked into the AST. The formula editor of Maple works well. The formula editor of LyX also works reasonably well, definitely better than plain LaTeX if you ask me. A long time ago people wrote LaTeX in a text editor, and then recompiled the graphical view (PDF/DVI). Then you had live preview: you have a preview next to the LaTeX view that automatically updates as you type. The next logical step, and I don't know if that actually exists, is the ability to click on the math in the graphical view, and get your cursor in the plain text view positioned on the corresponding subexpression. Going further you have an editor like LyX, where you directly edit the graphical view, and the plain text view becomes secondary. It's a whole lot nicer to look at math notation with horizontal lines and greek letters and subscripts than at nested \frac{\sum_{i=0}{...}}{...}. I agree that LaTeX is terrible. That's why it is particularly telling that it is still better than the graphical alternatives I've tried.
;) And I agree, it's a whole lot nicer to look at math notation with horizontal lines and greek letters (though some people here seem to disagree :) ) -- until you hit the point where you want to abstract over parts of your formulae, introduce abbreviations, parameters, etc. That is, until you want to program. Likewise, I've yet to see a "visual", "interactive" or otherwise "enhanced" approach to general-purpose programming that has a convincing story for scaling with complexity and abstraction, instead of just being naive about it and trapping us in an unproductive first-order box. Which, btw, is one of the reasons why I have also remained utterly unimpressed by some of the Bret Victor talks that others get hyped up about. All this is solving the wrong problem. The real problem with software engineering today is reliability and scalability. Sustainable progress can only come from better abstraction mechanisms, not from interactive gadgetry. In fact, I expect overemphasis on the latter to cause dangerous regression on that front. Note I'm not talking about graphical or visual programming or free form drawing. I don't think those approaches are very promising. Rigid ASTs work well. I'm just not convinced that the best way to store or edit them is by linearizing them to an array of characters. What is the issue with abstraction in that context? You still have exactly the same subexpressions that you can extract out into a definition. I agree that the Bret Victor stuff is not that exciting. They are nice demos with visualizations tailored to exactly that scenario, but it's unclear how it would work for general programming. That first order box is much better than the higher order box Haskell programmers want to trap us in. "Hey look at my point free function that passes a function to a function to a function... what the heck does that even do?" The scaling onus is on us, and it's a lot of work, but not impossible.
The crux of our disagreement, however: All this is solving the wrong problem. The real problem with software engineering today is reliability and scalability. Sustainable progress can only come from better abstraction mechanisms, not from interactive gadgetry. There is a problem with reliability and scalability today, but the abstraction hair shirt isn't going to save us. Again, when only a few lines of code are so abstraction-dense that making sense of them requires a huge amount of time... well... it is not a mystery why Haskell will never be mainstream. Our brains just aren't good enough; our only hope is to become computer-augmented cyborgs -- not by embedding chips in our heads, but via that "interactive gadgetry" you hate so much. This is happening in almost every other field; perversely, the Luddite-leaning PL community seems to be the most against it ("programming is a strictly human activity, keep computers out of it"). ...because higher-order subsumes first-order. You're throwing out a false dichotomy, but I'm sure you know that. Our brains are reasonably good at getting comfortable with abstractions. But you have to accept a learning curve. The more advanced a field gets, the higher the education that's necessary to master it -- the same as in any other engineering or science discipline. Tooling can help, but cannot avoid that evolution (and is actively harmful if it tries). The "higher-order" box I'm referring to is about usability, not expressiveness, but I'm sure you know that. PL researchers focus way too much on expressiveness and not enough on usability; who cares if you can succinctly express X with a bunch of higher order functions chained together if no one knows what the hell is going on? It is not just a learning curve. The extra indirection and abstractness over the machine approach actually has significant cognitive costs.
Data flow and chains of deeply nested indirect function application are a pain in the ass to reason about and debug, almost as bad as the callback hell they are purporting to replace. Perhaps "well-typed programs can't go wrong" was not meant as a claim, but as a threat, since Haskell lacks a good debugging experience! But again, higher-order programming wouldn't be so bad if the computer could actually help out. Better tooled programming experiences will continue to win out in the market place. "That first order box is much better than the higher order box Haskell programmers want to trap us in. 'Hey look at my point free function that passes a function to a function to a function... what the heck does that even do?'" In my own programming I've seen unreadable, complex loops turn into easy to understand higher order composed logic enough times to know that higher order functions can simplify a lot of code. In other cases I've seen it eliminate tons of boilerplate. If going to higher order functions DOESN'T make your program simpler, then don't do it there. If Haskell is hard to understand it's not because it's using a bunch of higher order concepts, but because it's using bad higher order concepts. Higher order code requires higher order reasoning necessarily; it is directly in the definition of "higher order." Indirection adds complexity, whether it is through v-tables or f-pointers. And have you ever tried to debug a higher order function? I use LINQ in C# and often go to loops for complicated stuff simply because the debugger actually works. I don't know what v-tables or f-pointers are, and Racket's debugger doesn't have any problem with higher order functions. But I may not have had to debug any of that. [edit] oh, virtual tables and function pointers. Hmm, I wasn't doing anything so complex that I was pulling a lot of functions out of tables, or a bunch of objects...
My examples were really ones of having statically chosen some tests (sometimes a fair number of them) to use in a long running computation or to use to process a complex data structure, and making the code driving the tests separate from the tests. It was a matter of separating concerns, but the functions being passed around were so predefined that they could have been inlined. The one exception in the code I used was continuations, used for non-determinacy and hard to reason about, but I converted them into abstractions that are very easy to reason about. It's hard to make any sense of a continuation, but easy to make sense of amb. Laziness is a big issue, though I've heard that VS2015 fixes some of the issues for C#. It is just annoying to have to dive into multiple applications one at a time, when everything would just be there in one frame if you used a loop. Tooling can help, but I haven't seen many decent FP debuggers out there that actually deal with these issues (please point them out if you know of them). Compared to reasoning about mutable state, reasoning about higher-order functions is fairly trivial. I disagree. And we know how to make mutable state more usable, but how the heck can we do the same for higher order functions? Not to mention the horrific debugging experiences. Well. Now that I am rewriting my compiler in C++, I can tell you that falling back to first-order reasoning where I had higher-order at my disposal is utterly horrible, unnecessarily verbose, and hard to debug. ;P Example? Edit: maybe this is another issue unrelated to the one at hand. Control flow is dead simple and easy to reason about (and debug!) even if it lacks in abstraction capabilities. Contrast that to composing two functions whose behavior is deferred to be accessed through a value somewhere else. The control flow of the program becomes easily twisted, and we are supposed to reason about the data flow instead (never mind that there are no good data flow debuggers out there).
If the indirect procedure manipulates state, we easily cross into callback hell, but if it's a pure function that consumes and returns a monad we are somehow... usable? Well. Uhm. Everything blows up? It's just the drawback of going back to first-order after being used to a declarative style. But I knew that when I started. I am gonna be lazy. Just read the compiler bootstrap sources here in the download section and tell me how you would solve all that in C++. The abstract data types become tagged class hierarchies (tremendous blowup), and instead of parser combinators I now need to do the look-ahead with case switches and lookahead predicates everywhere I switch between production rules. It's really a problem. I hope it'll work out but I am not sure yet. Can you give me a link that isn't blocked in China? I've written my recursive descent incremental parser and type checker (with aggressive type inference) for APX with just a few virtual methods and a few factory methods accessed through a dictionary. It is way more advanced than most other parsers, and the type checking code is mixed in with the parsing code so we can go straight to code gen afterwards. Oh, and it was easy to write: just take tokens from the lex stream, and decide what to do with them. Parsing is not so hard a problem that it requires tons of indirection and abstraction. Whenever I see a parser combinator, I wonder why the hell they go and add so much complexity for so little functionality (must produce separate trees to type check, not incremental)? And how the heck do you even debug those? It really is no wonder why most industrial languages continue to rely on recursive descent. No idea. Can I post a tar file somewhere temporarily, or mail you? The thing with infinite lookahead parser combinators is that once you have them, you're going to exploit that feature. A lot. I.e., where I write p + q + r it might try an arbitrary match on p, and q, before deciding r.
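Speculative matching like the `p + q + r` example above is also expressible in plain recursive descent: save the input position, try an alternative, and rewind on failure. A minimal sketch (the class and method names are mine, not from any of the parsers discussed):

```python
class Parser:
    """Recursive descent with explicit backtracking: unbounded lookahead
    by saving and restoring the token position."""

    def __init__(self, tokens):
        self.toks = tokens
        self.pos = 0

    def expect(self, tok):
        if self.pos < len(self.toks) and self.toks[self.pos] == tok:
            self.pos += 1
            return tok
        raise SyntaxError(f"expected {tok!r}")

    def attempt(self, *alternatives):
        """Try p, then q, then r: rewind to the saved position whenever
        an alternative fails partway through."""
        for alt in alternatives:
            saved = self.pos
            try:
                return alt()
            except SyntaxError:
                self.pos = saved          # backtrack, try the next one
        raise SyntaxError("no alternative matched")
```

Here `attempt` plays the role of the choice combinator: each alternative may consume arbitrarily many tokens before failing, which is exactly the "it is just code" form of infinite lookahead.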
Now I need to do the match, parse the given alternative, and deliver the right result. But there are many more examples. This is just what I am encoding in C++ now. (The good thing, I guess, is that falling back to C++ enables me to check I didn't do too many weird things while parsing stuff.) You can do infinite look ahead with recursive descent, it is just code after all! But if I learned anything from Martin, it's keep the syntax simple (if you need more than one token of lookahead, you are doing it wrong); users and implementor both benefit (of course, not everyone has the luxury of deciding the syntax they must parse). I wanted to keep my grammar mostly LL(1) too. But given namespaces, I already had a problem. And parser combinators also allow failing on extra semantic checks, which I sometimes need when parsing arithmetic expressions, so that is a bit of a problem now. Stuff blew up. But it looks like it's solvable. Don't you have a vpn service? Or is that illegal in China? We tried. The VPN providers become blocked quickly. You got mail. Having hand written parsers for years, I recently wrote a C++ parser combinator library, and posted about it on LtU. There are several advantages: the combinators produce faster parsers that are easier to read from the declarative descriptions, and are easier to change and modify without messing up complex parsing, like adding a new term type to a recursive expression. I found after I had written a recursive descent parser, which seemed straightforward for the known grammar, it was really difficult to come back to it months later to change it. In my compiler architecture I make extensive use of the visitor pattern, as it allows the state for various operations on the AST and types to be kept in the visitor rather than in the objects themselves. Almost every compiler operation apart from parsing can be implemented using visitors. You can see this approach in the type inference code I posted on GitHub.
We had this discussion several years ago, and as much as I appreciate your C++ skills, I opted out of it because: a) I found your solution somewhat unreadable, and b) I wanted more, amongst which error reporting, parsing over token streams, and speed. So that, to me, was a non-solution. I went for straightforward recursive descent and I am not regretting it yet. As far as the visitor pattern goes: nice, it isn't as verbose as I thought, but what if I want more precise (attribute) information flow around the AST nodes? I.e., you have a node:

      APPLICATION
      /         \
  TERM0        TERM1

And I want:

       ↓:T | ↑:T
      APPLICATION
 ↙:T /  ↗:T  ↘:T \ ↖:T
 TERM0          TERM1

The arrows denote the information flow around the nodes: up-to-down, analytical, information and down-to-up, synthetical, information. (The information is chained, so whatever came out of TERM0 goes into TERM1, while both terms may be rewritten.) I am simply going to implement this without the visitor pattern, with a template and a gigantic case switch. Yeah, somewhat ugly, don't care. Well, that's the thought for this moment. Maybe you have a solution? Would be nice to see, but it still probably won't make it into what I am implementing. Actually my solution has good error reporting, parses over streams of tokens using lazy tokenisation (which is where you apply a transformation to the parser instead of to the input stream), and it's faster than hand written recursive descent parsers. Here's a sample error report from the example Prolog parser:

what(): failed parser consumed input at line: 46 column: 84
expr(Fun, C1, arrow(A, B)), a(X) -> b(X) -> c(X), a(X -> Y -> Z), X -> Y -> Z, Z
expr(Arg, C2, A),
^----^
expecting: variable, operator, op-list
where:
atom     = lowercase, {alphanumeric | '_'};
op-list  = term, [operator, op-list];
operator = {punctuation - ('_' | '(' | ')' | ',')}- - "."
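The flow in the diagram (an inherited attribute chained left-to-right through the children, a synthesized attribute passed back up, both terms possibly rewritten) fits a visitor whose visit method simply returns a pair. A toy Python sketch, with node and visitor names of my own invention:

```python
class Var:
    def __init__(self, name):
        self.name = name

class App:                     # the APPLICATION node from the diagram
    def __init__(self, t0, t1):
        self.t0, self.t1 = t0, t1

class RenameVisitor:
    """Chains an inherited attribute (a counter) left-to-right through
    the children and synthesizes a rewritten term on the way back up:
    whatever comes out of TERM0 goes into TERM1."""

    def visit(self, node, inh):
        if isinstance(node, Var):
            # synthesized: a renamed copy; chained: the advanced counter
            return Var(f"{node.name}{inh}"), inh + 1
        t0, inh = self.visit(node.t0, inh)
        t1, inh = self.visit(node.t1, inh)
        return App(t0, t1), inh
```

The same shape covers the `type_instantiate` style mentioned below: the returned value is the synthesized attribute, and the threaded second component (here a counter, there a variable map) is the inherited one.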
           - ":-";
struct   = atom, ['(', op-list, {',', op-list}, ')'];
term     = variable | struct;
variable = (uppercase | '_'), {alphanumeric | '_'};

The EBNF is auto-generated from the parser combinators, which I think makes for a pretty understandable default error report. Maybe you don't change your grammars as often as I do, though? I have a lot of projects that include parsers, and I find it easier to maintain them all with the combinator library than when they were all hand written recursive descent. Regarding the visitor pattern, multi-directional information flow is not a problem: you simply maintain the state in the visitor object. "type_instantiate" is an example of this kind of information flow. A typing is passed in, and it recurses polymorphically over the type's tree structure returning a copy (the synthesised attribute), but at the same time it passes forward the list of variables encountered so far (the inherited attribute). By doing this, identical type variables in the input type map to identical type variables in the output. This is necessary because we want identical variables to all refer to the same node, not just have the same name. Yeah, well. It just needs to work for this interpreter/compiler. I just ran a handcrafted parser on a 500KLoc file. 11 seconds. And then it takes a whole lot of time to print it? I am not sure what's up. Something with vectors? Really annoying. Did you test it on a 500KLoc file yet? I have tested the backtracking expression parser on 37,000+ character expressions, which takes about half a second on my laptop. The Prolog example parser parses about 3.4MB/s of data. I prefer measuring characters per second, rather than lines per second. 30ms on 40K characters (Unicode). Ah well, guess I am going with what I have. I'll reappreciate your visitor pattern solution but I haven't warmed up to the feeling yet.
The point was that backtracking is optional, so the 20,000 to 45,000 chars/sec (runtime variance) for expressions is fast, as every operator is a two way choice point for infix. The Prolog parser also parses expressions, but uses a more controlled backtracking, so only the operator parser itself backtracks (there are no choice points). This achieves 3.4M characters per second, so appears to be over 100 times faster. Even this is not a truly fair comparison because it's not LL(1), due to the effective infinite look ahead on operator symbols. The LL(1) csv parser gets over 20M chars/second, compared to my recursive descent implementation that gets 5M (4x faster). So the conclusion is, it is in every case faster, but the factor depends on the precise parser. When you say you want to parse a stream of tokens, do you run the tokeniser pass on the whole input, storing the result in memory, or have you implemented some kind of lazy (on demand) token evaluator where the tokens get stored into a linked-list that is consumed by the parser, and triggers generation of more tokens when it's empty? The reason I went for a tokenizing transform on the parser itself, even though this results in more use of backtracking, is because writing all the tokens to memory will be slow and resource intensive for large programs, and implementing proper on-demand lazy tokenization seemed more complex. At the moment you could build a tokenizer out of parser combinators, but you would have to write all the results to an array before passing them to a second set of combinators to parse the tokens, but I might implement a lazy stream to link two parsers together if it seems interesting. I have an operator precedence parser that needs no backtracking. I know that's no big deal, though it isn't the standard shunting yard algorithm - which seems to be mentioned everywhere, though someone said that the standard version of it allows illegal parses which mine doesn't.
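On-demand tokenization need not be complex if the token buffer is allowed to grow monotonically, which also keeps backtracking cheap, since earlier tokens stay addressable by index. A small sketch with names of my own invention:

```python
import re

class TokenStream:
    """Lazy tokenization: tokens are pulled from a generator only when the
    parser peeks past the end of the buffer. Tokens already produced stay
    in the buffer, so rewinding to a saved index costs nothing."""

    def __init__(self, gen):
        self.gen = gen
        self.buf = []

    def peek(self, i):
        while len(self.buf) <= i:
            try:
                self.buf.append(next(self.gen))
            except StopIteration:
                return None               # peeked past end of input
        return self.buf[i]

def tokens(text):
    # a trivial word tokenizer just to drive the stream
    return TokenStream(m.group() for m in re.finditer(r"\w+", text))
```

The whole token vector is never materialized up front; only as much as the parser's lookahead has actually demanded.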
It turns out that there are precedence relations that don't fit cleanly into grammars without explicit precedence; adding a new operator would involve adding more than one production - so they don't compose. I'm going to experiment with building a whole language out of precedence relations with some sort of context (more than one kind of expression) and mixfix operators. Those should compose in a useful way, which other grammars seem not to. People say that Parsing Expression Grammars compose, but I don't think they compose in a particularly usable way. I fixed an order and fixity rules on operators. You can have an infinite number of them, but cannot determine the precedence yourself. It uses a lexicographic ordering for longer operators. The bad thing about variable precedences is that: a) my language will probably have a lot of operators defined, and b) the semantics of the language change once you start defining the precedences. So, I fixed it a priori and hope that will pan out. Proof of concept: if you take this further to having a table to look up the precedence of operators, you'll notice that there are two distinct phases of the parse, which makes it possible for the same operator to be both postfix and prefix, or prefix and infix, etc., by having separate tables for those phases. This algorithm can do what Prolog parsers do: allow you to add new operators and precedences on the fly, while you're parsing. I have an operator table with fixed prefix, postfix, and infix predicates. I just like it fixed; it wouldn't be hard to make the operator table dynamic and you would get your functionality. The only difference is that my operator table is 'static' since I like it that way.
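A table-driven precedence climber along these lines fits in a few lines. A sketch (the table and its contents are hypothetical, not anyone's actual operator set):

```python
# Hypothetical operator table: name -> (precedence, right_associative).
INFIX = {"+": (1, False), "*": (2, False), "^": (3, True)}

def parse_expr(toks, pos=0, min_prec=0):
    """Precedence climbing driven by a table lookup. Primaries are single
    tokens here to keep the sketch short. Returns (ast, next_pos)."""
    lhs, pos = toks[pos], pos + 1
    while pos < len(toks) and toks[pos] in INFIX:
        op = toks[pos]
        prec, right_assoc = INFIX[op]
        if prec < min_prec:
            break
        # A right-associative operator recurses at its own level,
        # a left-associative one at the level just above it.
        rhs, pos = parse_expr(toks, pos + 1,
                              prec if right_assoc else prec + 1)
        lhs = (op, lhs, rhs)
    return lhs, pos
```

Making the table dynamic, in the style of Prolog operator declarations, only means mutating the table between (or during) parses; the climbing loop itself is unchanged.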
I parse like this:

def parse_prefix_expr: parser ast =
    (parse_position <*> \p ->
        parse_prefix_op </> \e0 ->
        parse_prefix_expr <@> \e1 ->
            expr_application p e0 e1)
    <+> parse_expr_primaries

Never was really certain whether this scheme works, but it passes unit tests, so, well, fuck Dijkstra, I guess. I tried to deduce what I was doing, started thinking about it, tried to give a proof, and immediately found a bug. Ah well. Precisely what I didn't want, because I am rushing another compiler. There's a solution there somewhere, will fix later. Neither the lexer nor the tokenizer is lazy. I went for ease of implementation, so the lexer works on an in-memory Unicode string and the parser works on a vector of tokens. I could make the lexer lazy trivially, though; since I use lookahead a lot I won't change the in-memory vector of tokens. I probably mostly pay for Unicode handling, and heavily for the use of shared pointers to represent term trees. It slows down substantially on 500K of short definitions. (My bet is that even when printing, the program spends most of its time creating and destroying refcounted pointers. Somehow, that slows down on large terms, which completely defeats the use of reference counting.) Ah well. Life is never easy. Since my language got no response at all yet, it doesn't pay off to optimize the poor performance on very large files away. Guess I'll stick with this. Here's the user code from the example Prolog parser.
Note this also keeps track of variables used more than once in a set called 'repeated', as this is useful for optimising the post-unification cycle check:

struct return_variable {
    return_variable() {}
    void operator() (type_variable** res, string const& name,
            inherited_attributes* st) const {
        name_t const n = st->get_name(name);
        var_t i = st->variables.find(n);
        if (i == st->variables.end()) {
            type_variable* const var = new type_variable(n);
            st->variables.insert(make_pair(n, var));
            *res = var;
        } else {
            st->repeated.insert(i->second);
            *res = i->second;
        }
    }
} const return_variable;

And here's the parser. "var_tok" is the tokenized recogniser for variable names; "variable" applies the above user-function to the output of the recogniser, and adds a name label used when pretty printing parsers for error messages like the one above:

auto const var_tok = tokenise(accept(is_upper || is_char('_'))
    && many(accept(is_alnum || is_char('_'))));
auto const variable = define("variable", all(return_variable, var_tok));

This parser is pretty printed like this automatically by the library:

variable = (uppercase | '_'), {alphanumeric | '_'};

Thinking about parsing more generally, by the time you add backtracking state, a parser ends up as a poor implementation of Prolog (not handling the state / inherited attributes very well). Having already found type inference to be a partial implementation of Prolog, I am tending towards adding parsing primitives to my Prolog implementation rather than spending too much time on the parser combinators. Of course my Prolog implementation uses the parser combinators, so I want to make them as maintainable as possible, as I suspect it will be a long time before my language is self-hosted. Writing the tokens to memory will be a big bottleneck, by the way, as you can no longer fit things in the CPU cache for large programs.
The penalty for writing out and reading back could be 20 times (main memory being approx 10 times slower than cache, and needing to both write out and read back). Have you ever noticed that if you put a cut at the end of each production of a definite clause grammar, the result is a straightforward implementation of Parsing Expression Grammars? Leaving the cut out, you get rid of the "language hiding", deterministic nature of PEGs. But you do have to put a cut in SOMEWHERE, or the memory usage for decision points grows with the length of the input. If your purpose for parsing is a compiler, then speed and memory no longer matter! Computers are thousands of times faster and have thousands of times more memory than when parsing technology was invented for compilers - we should no longer be wasting our efforts on optimizations like bottom up grammars with tables, let alone regular expressions. There is no reason not to use more expressive tools now. That said, the precedence parser I'm playing with is sadly efficient enough for the 50's, so I'm contradicting myself. Precedence parsing is so simple that it's funny. Well, as I gather, that's not shared pointers, right? That looks like raw pointers referenced from a table. Not C++'s reference counted shared pointers. Am I wrong? I am programming somewhat defensively. Since I don't know what will come up, I thought I needed refcounting, since as terms get rewritten during various passes it (probably) pays off to simply have them share the parts which weren't rewritten. I.e., internally you end up rewriting a directed acyclic graph, which implies some form of GC. Since it is a DAG and C++ has shared pointers, I chose that. But I have the uncanny feeling C++'s shared pointer implementation does a hash-table lookup on a pointer, which means O(n) degradation, which becomes noticeable on large terms, and completely defeats the purpose.
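The cut-at-the-end-of-each-production observation is easy to demonstrate: PEG's ordered choice commits to the first alternative that matches, which can hide part of the language a CFG would accept. A minimal sketch (the combinator names are mine):

```python
def lit(s):
    """Match a literal; return the new position, or None on failure."""
    return lambda inp, pos: pos + len(s) if inp.startswith(s, pos) else None

def seq(*ps):
    def parse(inp, pos):
        for p in ps:
            pos = p(inp, pos)
            if pos is None:
                return None
        return pos
    return parse

def choice(*ps):
    """PEG ordered choice, i.e. a DCG production ending in a cut: commit
    to the first alternative that matches; later ones are never retried."""
    def parse(inp, pos):
        for p in ps:
            r = p(inp, pos)
            if r is not None:
                return r
        return None
    return parse

# S = ('a' / 'a' 'b') 'c' -- the committed 'a' hides the 'a b' branch.
S = seq(choice(lit("a"), seq(lit("a"), lit("b"))), lit("c"))
```

On "abc" the first alternative matches 'a' and commits, the following 'c' then fails against 'b', and the whole parse fails even though the 'a b' branch would have succeeded; that is exactly the "language hiding" behavior the cut introduces.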
The whole point of refcounting, which is what I want to use in a tiny VM too, is to do without the O(n) slowdown you normally get with stop-and-copy or mark-and-sweep GCs. So, I am not happy with it.

I am using shared pointers in the sense that all instances of the same variable point to the same shared data structure; I didn't realise you meant a C++ "shared_ptr". You could write a version that uses shared_ptr, but it's not very efficient. You don't need a shared pointer in the map in any case, as its lifetime is less than that of the AST being created (i.e. parsing is an operation over the AST tree, in the same way a pretty printer is, and it's safe to assume no AST changes during such operations). If you really want to use shared pointers you can use the same map mechanism but just cast the pointer to a shared_ptr when constructing new AST nodes that reference the variable returned from this parser, so this code would not need to change.

I use region-based memory management, so when AST objects are created, a "unique_ptr" to them is pushed onto a stack; when the stack is popped they are destroyed. It is a property of parsers, much like Prolog implementations, that you only ever need to destroy stuff in blocks at the top of this stack. You can record the top of stack at choice points for backtracking, but in general you normally just let the whole AST get destroyed when it goes out of scope, as destroying the stack in the AST head object triggers destruction of all the objects held in the unique pointers. Because the AST objects are owned by the AST head object, you should use non-owning pointers (raw pointers) from everywhere else that accesses the data from inside the scope of the owning object. It's good practice to separate ownership from referencing, and to have only one owning object.

The basis of a compiler is bookkeeping and term rewriting.
If I am not going to rewrite a DAG then I am going to rewrite a tree, and the (other) bottleneck becomes copying of the tree, which is what you'll probably find in C compilers. In the end, if you want optimal performance, you'll probably rewrite trees, but then you'll hit corner cases where the copying won't outweigh reference counting. As far as I know, Prolog rewrites DAGs or graphs; it makes no sense that all terms you construct would be copies of trees. That would be a hefty penalty to pay for large terms with a lot of sharing. I am going for reference counting because of ease of implementation, and because it allows me to, more or less, write idiomatic C++. Ah well. Maybe someone will sometime implement a better shared_ptr.

You can have shared_ptr in your AST. The point is that the variable parser creates variable nodes; it does not hold any references to other nodes, so that's why you would not see any shared_ptr even if you were using them in your AST implementation. You would use a shared_ptr in the node containing the variable. The map is there to make sure references to the same variable name return the same variable_node, as we can then test for variable identity simply by comparing node pointers. Your point about tree copying is relevant when doing the tree transformations, but as I said, you can use shared_ptr in the AST. My approach at the moment is to only create new nodes and keep the old ones. When you want to clean up you can make a fresh copy of the tree in a new region and free the old one. I am not saying you should use my approach, I am just saying that's why you won't see shared_ptr in my code, but you can still use them if you want. Tree copying can often be faster than shared nodes due to memory locality. Prolog implementations now nearly exclusively use structure copying because in practice it turns out to be faster than structure sharing. It keeps references close to the top of the backtracking stack.

Abstractly, an AST is either a tree or a DAG.
So you either use some form of GC, refcounting, or you copy trees. Both have corner cases where the sharing either outweighs the copying, or the copying outweighs the sharing. (Though I grant you that at the moment, given the internals and the speed of GCC/LLVM, copying seems to outweigh sharing. But that may depend on the implementation of shared_ptr too.) It isn't harder than that? (My semantic analysis is substantially larger than what you need to do for a Prolog language. It includes getting the naming and referencing right in a namespaced language, as well as making datatype constructors and method declarations explicit, type checking (interface declarations), and getting the identification of deconstructors and constructors right. So, yeah, I worry about it more than what you need in a Prolog parser.)

The Prolog parser is just an example included with the parser-combinator library. The semantic analysis for the language I am developing is likely to be much more complex. Copying or sharing are the major strategies, but what I think is interesting is how tree rewriting is a natural fit for Prolog. Prolog can also define the grammars for intermediate languages naturally (if you remember my post on nano-pass compiling in Prolog). It seemed to me, writing my compiler, that I was writing bits of a Prolog implementation in an ad-hoc way for the compiler functions, and that I might be better off using Prolog with appropriate rules instead. I don't think performance is critical at this stage, and I could always extend Prolog with some specific built-ins if I find any performance bottlenecks. One way to think of this is as a scriptable compiler where you can define the intermediate grammar rules and tree-rewrite rules in a Prolog-like syntax.

Yeah, well. I gather you're not far into the semantic analysis yet. I'll go with sharing despite the cost on large files. It simply pays off in ease of implementation, and I don't have any users, and maybe never will have.
This is a discussion for people interested in constructing compilers which will often need to compile 1MLoc, or more, programs. I think copying outweighs sharing in that case, so a C compiler will always beat a handcrafted Lisp compiler there. But for now, it simply isn't worth it for me. I bootstrapped more or less declaratively, so all I do is term rewriting. No tables, or tables constructed on the fly whenever I need them. I am mostly going to repeat that effort, so I more or less desperately want the sharing. Ah well. Sucks to pay for performance this early in the implementation phase.

Control flow is dead simple and easy to reason about

Either your definition of "dead simple" or of "reasoning" must be at odds with the rest of the field. ;) Are you seriously suggesting that, say, Hoare and separation logic is simpler than equational reasoning?

If the indirect procedure manipulates state, we easily cross into callback hell, but if it's a pure function that consumes and returns a monad we are somehow... usable? Well, not if it is a state monad. Though even then the explicitness of the monad helps some. But you are still assuming state. The trick is to avoid it, because stateless computations simply don't have magic "behaviour". Instead of worrying about data and control flow, you only have to worry about data flow. If I were interested in live coding, and most of my work involved getting the interaction between source code and GUI right, I wouldn't be too interested in declarative styles of programming either.

Hoare and separation logic are good illustrations of how red tape can make anything difficult. That's a shortcoming of our mathematics, not of our human faculty of reason. From (I believe) Shakespeare's Henry VI Part Two: "The first thing we do, let's kill all the lawyers."

Many smart people have tried to come up with something simpler over the last 60 years. And separation logic already was a significant simplification!
All evidence (and there is plenty of it) indicates that the complexity of these methods is a direct reflection of the complexity of their subject. Unless you can demonstrate an alternative that is vastly simpler?

I intended no assumptions there, in either direction, about what is or isn't possible mathematically. I had in mind more bemusement at our human foibles; if it came across as more mean-spirited than that (which admittedly is always a danger when invoking that particular Shakespeare quote), I apologize. I'm skeptical of our ability to figure things out quickly; it seems, for example, you may be putting more confidence than I would in the decisiveness of many smart people trying and failing to achieve something for about half a century.

Yes, with superglue. I gave a demo during my defense. I'm much happier that I can do real programs now.

Either your definition of "dead simple" or of "reasoning" must be at odds with the rest of the field. ;) Are you seriously suggesting that, say, Hoare and separation logic is simpler than equational reasoning?

Academia has always had different ideas about simplicity than the real world. Theoretically, it is simpler than imperative, but when usability on real problems is considered, there are other issues to consider beyond theory. You cannot avoid state in extremely interactive programs... and the key to making a stateless computation (like parsing) incremental is to add state. Control flow is easier to reason about than data flow dynamically, given that this is what the computer gives you to debug. Usually we have to debug both, but it makes no sense to unnecessarily transform something easy to debug (control flow) into something much harder to debug (data flow).

Control flow is easier to reason about than data flow dynamically given that this is what the computer gives you to debug.

That's a non sequitur, of course. Also, it's ironic that you keep making this argument, given your strong belief in better tooling as a solution.
My point is that language design must consider tooling. Control flow is debuggable, and that is why I prefer it. Show me a decent data-flow or higher-order-function debugging experience and I could totally change my mind; there is no irony in that. Yet whenever we get to that point, all I hear about is how debugging isn't really necessary. Andreas, do you personally use a debugger and value the experience?

Couldn't Subtext, or indeed your own work on usable live programming, support higher-order functions relatively easily? Just like for normal functions, you would be able to select a specific call for each lambda, except that the calls are filtered by the call which created that lambda. For example, if you have some code:

    function f(list) {
        list.map(function(x) { ... x ... })
    }

You would first select a specific call of f. Then the selection list for the inner function gets filtered down to those calls that correspond to the outer call that you selected. So if this function f was called on list1 and list2, and you select the call on list1 for f, then for the inner function you would only see the list of calls that correspond to list1.

Your notion of trace can also be generalized for pure functional programming. In an impure language the order of function calls matters, so the trace of a program is a linear list. But for pure functional programs the order of calls matters only when the output of the first function is needed as the input of the second. So a trace is no longer a linear list but a directed acyclic graph.
If you have code like this:

    trace("the start")
    a = f(x)
    b = g(y)
    c = a + b
    trace("the end")

Then instead of having a trace like this:

    [the start]
    [message 1 from f(x)]
    [message 2 from f(x)]
    [message 1 from g(y)]
    [message 2 from g(y)]
    [message 3 from g(y)]
    [the end]

You have this, because you know that the two calls are independent:

                  [the start]
                       |
         /----------------------------\
         |                            |
    [message 1 from f(x)]   [message 1 from g(y)]
    [message 2 from f(x)]   [message 2 from g(y)]
         |                  [message 3 from g(y)]
         |                            |
         \----------------------------/
                       |
                   [the end]

Of course you could flatten this to the former list if you wanted, but having more structure is generally useful.

Typical FP call chains are DEEP, even if you figure out a way to flatten recursion. Each level of call hierarchy about doubles user confusion, meaning they'll get lost pretty quickly if you can't get them to focus on a single line of execution at a time. And this is the basic challenge of it all: how can you hide enough details from the user to avoid overwhelming them while still allowing them to dig out what they need? So for the FP case, there are just too many function calls to navigate through. The only steps forward I've seen in this area are with slicing (e.g. Roly Perera's work), which is maybe what you are getting at? Note that statement order also doesn't matter much in Glitch: given re-execution semantics, all statements will see the side effects of all other statements. That is totally achievable without purity.

What I was getting at was:

- Higher-order functions fit naturally in your Usable Live Programming.
- It may be better to display a given trace as a DAG rather than as a list. When the language is pure, this has the additional advantage that only connected things can have an effect on each other.

I think the size of the trace is a problem in any language, imperative or functional. Whether that means a loop with a huge number of iterations or a big recursion tree doesn't matter that much.
In fact I'd expect that a tree with a million nodes is easier to navigate than a flat list with a million nodes. If the branching factor of the tree is 2, then the depth is just 20.

You are right: the work does apply to functional programming; and representing traces as DAGs (or at least trees) is a good idea and not just for FP. I've been thinking about this for a while (also traces in other formats, like tables, but the digraph is probably more cutting edge). But tracing is necessarily imperative (even if we ignore the code being traced, trace statements themselves have implicit positions in the trace!), and I'm not sure how it fits into debugging functional programs. Tracing pure functional code doesn't really make sense to me, since the computations can be reused willy-nilly in multiple time-less iterations. Part of what makes tracing work well in YinYang is that it is indexed by time (so we can drag the slider to view different configurations of the trace), but in FP... it seems like everything would be dumped into a single huge trace (time would be there, but the debugger wouldn't be able to see how the code was using it). The problem in FP is that everything is explicit and nothing is fixed (time and side effects are just values to be consumed and returned), giving me nothing fixed to latch onto in the programming model beyond function application and reduction rules. For FP, I think the right way to go is custom, library-specific, domain-specific debuggers, since those explicit values and higher-order function compositions are usually defining something rather simple and fixed (e.g. imperative effects, or basic first-order data-flow wiring).

I use debuggers, but typically only as a last resort, when faced with the obligatory memory corruption problems in C++. Those tend to be the most unpleasant moments of my job. In most other cases, especially when dealing with non-trivial algorithms or data structures, debuggers are far too low-level a tool and not very effective.
You rarely want to micro-step through control flow or inspect complex graphs manually. Strategic high-level printf digests usually get you much further much quicker. In really complex cases you want to build domain-specific visualisers yourself. I'd argue that classical debuggers are -- to some degree at least -- a good example of a tool primarily addressing a problem that we shouldn't have in the first place if we were using better, more high-level languages. A symptom of our fixation on low-level control flow hackery. And indeed, functional programmers seem to see the need for a debugger much more rarely, which may explain why you find few impressive ones. That said, there are or have been data-flow debuggers for Haskell AFAICT, but since I never used them I can't judge them. OCaml also has a reasonable conventional debugger, but I've never used it either.

I think your position is taken by much of the PL community on the theoretical side: that we shouldn't need debuggers at all and we should be able to reason about programs in our head, or failing that, printf or some offline verification tool. And that is our disagreement... just what the programmer experience should be in the first place.

Not quite. Debugging is necessary. But "debuggers" only help at the lowest level of abstraction, which is not what I need to look at most of the time. And they don't scale to what's actually interesting.

Abstraction is the enemy of the concrete, of course, while debugging difficulty increases as code becomes more abstract (you could argue that the cost is offset by less code to debug, but it's still a hill to climb). There are many domains where debuggers are essential; my crazily-reactive live programming work is one of them, I guess, where overly abstract code is bad. When I write code, I think first about how I'm going to debug it, since I know I'll have to (too many moving pieces to get it right the first time).
If you construct your debugger out of printfs or use one provided by the IDE, they are debuggers (tools for debugging). There is no standard definition of a debugger (debuggers could also better support printf debugging).

Why reason in our head, or with pen and paper (most realistically)? Why not tool support? I expect some theory people might not look for tool support there because they're "brainiacs enough", but I'm not sure restricting to that is good. For instance, some Haskell people expect you to do equational reasoning, like when you simplify algebraic expressions in school. However, effective support for that is lacking. Heck, why do *I* need to do these algebraic simplifications on pen and paper, instead of having a computer helping me? Mathematicians also use computers instead of pen and paper sometimes — I'm thinking of algebra systems.

In most other cases, especially when dealing with non-trivial algorithms or data structures, debuggers are far too low-level a tool and not very effective.

Not when you have a good visual debugger, like the one found in Visual Studio. printf doesn't remotely compare to a good debugger in my experience. That said, I'm not sure what a dataflow debugger is supposed to achieve. If your language is typed, your dataflow is already consistent.

Often when you are debugging you see some value that is wrong. A dataflow debugger lets you answer the question 'where did this value come from?', as opposed to manually stepping backward (if your debugger even supports that) until you see the point where the wrong value was created.

You want both, actually. printf is great at creating a trace; in the usable live programming paper we show how to make that navigable and integrated into a more comprehensive debugging experience. I have never seen a type system that can eliminate all bugs from all programs. Data flow needs to be debugged just like everything else.

I'll agree to a large extent — I can think of collection combinators.
Heck, even Smalltalk's collection library is higher-order. If that were literally true, we could argue that category theory is a source of bad abstractions by pointing at categorical Haskell code. At least people who have studied category theory will disagree. Let's instead agree with findings showing that abstraction is cognitively hard, and figure out what we can do about that. Heck, the average mathematician complains about category theory being too abstract! Yet category theory can make things simpler (that is, having a more compact explanation). Not easier, unless you've already spent the effort to master the abstractions. (I named "Reductio ad category theory" after "Reductio ad Hitlerum" for extra fun, though I don't propose that I automatically lose the argument by using this technique.)

I think what makes higher-order programming hard is the proliferation of hidden types. Take fold, which is easy on its own, and then compose a few together operating on parameters. It's hard to see what's going on, as you have to 'unfold' the code in your head to work out what all the intermediate types are. Equivalent imperative code will often iterate (e.g. with vector indexes) over mutable containers whose type is constant throughout the computation. This is much easier to understand.

Mutating a single container would be analogous to multiple maps, not folds. Also, the cases that you can implement by mutating a single container necessarily have the same type at every stage, so there wouldn't really be anything hidden. I'm thinking of examples like this:

Now I'm not claiming that visual programming is "solved", but this supposed first-order limitation doesn't really exist: higher-order visual programming.

I'm not suggesting that operator precedence will go away. I'm only pointing out that it (and many other features designed to make textual programming tolerable) does have non-trivial costs for tooling. I've also not argued that text is unproductive as a medium.
There are many domains where text works very effectively, or at least not too badly. I agree that text can work reasonably well for math formulas. But we do have options for how this happens. E.g. rather than building math formulas, syntax, precedence and so on into the language and every tool that parses it, we could simply write a formula (or pieces of it) into a literal string value and parse it via a library-based DSL and a little staging or partial evaluation.

When optimizing for plain text expressiveness and readability, and filesystem integration (to leverage the common text editor)

It seems evident to me that unix-style hierarchical filesystems have a far larger impact on modern language design than text editor support does. After all, folks use a wide variety of editors that only need to agree on the text encoding. Meanwhile, there's only a narrow variety of filesystems, and they all must agree on a pretty sophisticated set of abstract semantics. Reports I've seen say that the Goto-Fail bug was caused by a version-control merge, not an edit. It's entirely possible that no human being even looked at that code until long after it had shipped. Many different sorts of tooling could have prevented Goto-Fail. The editor is the last place I would look for answers.

Let me apologize in advance for my remarks bordering on trivial.

the language shouldn't have had such an error-prone syntax for its if statement

I suspect no one laid eyes on the code change committing that bug, and that it was an auto-merge no one inspected. (Yes, crazy.) The if-statement syntax is error-prone though. For example, you might want to search a code base to see if any lines have a semicolon at the end of the first line of an if-statement:

    if (foo());
        goto end;

After the first time I saw one of those, in college, I turned up two more in a global source code search at work. Braces in K&R style make it look off.
Some shops, like mine, require that open and close braces always be used even when optional, so braces and indentation are redundant. Some redundancy is good, to act as a consistency check. It doesn't seem verbose in practice when an open brace occurs at the end of the previous line, K&R style, and a close brace occurs on what is otherwise a blank line. (I use actual blank lines rarely.) Tools saying why your code looks questionable are a good idea. False reports are a problem, though, when treating warnings as errors per disciplined practice. I like the idea of languages coming with specific suggested tools, in general. Itemizing things a tool could verify in a language seems worthwhile, for example, in a language spec.

It all borders on trivial, except that the consequences of an error can be, and in this case were, highly nontrivial. Requiring the if to be terminated by an endif, a keyword not used for anything else in the language, would make the problem both visually obvious and, most likely, syntactically incorrect even if generated automatically with no human looking at it (until, of course, a human does look when they learn the automatically generated code won't compile because it's syntactically incorrect). endif would also eliminate the problem with

    if (foo());
        goto end;

since to avoid a syntax error it would have to become either

    if (foo()) endif;
    goto end;

or

    if (foo()) goto end; endif;

Has anyone noticed that the programming languages academia is interested in and actively working on are NEVER designed to have robust, easy-to-scan syntaxes? Actually usable languages are always left up to hobbyist sorts to invent. For instance, you based your own paper on Scheme. I realize that it would have been a distraction from your thesis to make the language it defined readable and give it a robust syntax, and a layer to preserve the abstraction in that case (s-expression to syntax and back, or something). There's enough that could be said about that that I will stop here instead of changing the subject.
Some academics fall in love with clever devices that make things work; others are looking at something else and can't afford to distract themselves. I have some ideas myself regarding Lisp and syntax, but the size of that problem is probably at least one SDU (something I picked up from my own academic experience: I tend to measure massive personal efforts relative to what it takes to do a typical doctoral dissertation — a Standard Dissertation Unit, SDU).

Even McCarthy didn't intend s-expressions to be the real Lisp notation, but here we are! Notation seems to have more to do with momentum and familiarity than with ergonomics. Python, I think, was the first language to treat syntax as a serious aesthetic and ergonomic concern. Is Haskell readable? Is logic notation? Are the notations used in papers? ... Also, the idea that a language has "macros" but those macros can't create a readable notation for anything is mind-boggling in its stupidity. Yes, it's true that computer scientists have managed to find some abstract meaning for macros and fexprs that is about implementation, but somehow avoids anything particularly USEFUL, like the ability to process notations that a human being would find actually usable. [Edit] I guess that's not fair. The non-academic languages haven't managed to make macros that can define usable syntax either. But Lisp promised to be an extendable language, and it failed to be extended to be as readable as COBOL or FORTRAN, which suggests that the goal failed. Anyway, I'm working on the problem myself. Maybe I'll find a way to report back here that won't be this cranky. I have a toothache this morning, so I'm cranky.

Another example is the insistence that recursion should replace iteration. ... So try to rewrite all of your numerical methods code in Haskell, or Curry, or Prolog. But have a few good friends watching you do it, and make sure that they're strong enough and fast enough to pull the razor out of your hand before you kill yourself.
Sure, iteration can be expressed as recursion, and the lack of state-changing variables can sometimes make optimizations a bit more obvious - and some less obvious. All of this obsession over implementation shouldn't have to be expressed at the human-coded, human-readable level. It's the wrong level.

My answers to these three questions would be: Haskell: yes, when point-free style and arcane infix operators are not abused. Logic: yes. Paper notations: I don't know which you mean, but assuming "inference rules" as a syntax, yes.

You use strong wording in your posts ("stupidity", "actually usable", suicide metaphors), but I find it to have little actionable content. I would be interested in less grand rhetoric and more precise, justified discussion. Your antagonizing style was already pointed out as problematic on the Pycket topic. Could you make an effort to be less blunt and more respectful of the others (present or potentially present) in the discussion?

I'm sorry; I re-edited my comment above to apologize for being cranky. But take the classic Numerical Recipes in Fortran and recode the programs into Haskell. If you find that the logic is more obscured in Haskell than in Fortran, then 60 years of progress hasn't even managed to preserve the obvious advances of the first CS languages. I have been programming for 30 years, and I never stop being surprised at how little useful progress I see in programming.

I have been programming for 30 years, and I never stop being surprised at how little useful progress I see in programming.

Fwiw, from the perspective of a second-generation programmer (my mother got her first programming job on ORDVAC in, iirc, 1952), I've got a pretty philosophical attitude toward progress. I grew up in the 1970s, when it was "common knowledge" that the pace of history was accelerating exponentially, and I eventually concluded that's a perpetual illusion, caused by our knowledge of history always being most detailed at the late end.
For a century or so, as I recall, in the European Dark Ages the number of watermills in Europe was increasing at an exponential rate. Anything in mathematics that happened less than a century ago is recent, and programming is a *lot* like math. C.A.R. Hoare: "[ALGOL 60] was not only an improvement on its predecessors but also on nearly all its successors."

Is Haskell readable?

No. Most languages lack readable syntax, and I don't think that's restricted to academia. Lisp is frankly one of the best, relatively speaking, largely because it doesn't waste a lot of effort creating hard-to-read syntax for the supposed sake of readability. At least it's simple and syntactically unambiguous. Which is why I've been cautious about envisioning an improvement to it.

Depends, relatively, on which notation you're talking about. I'm currently putting together a blog post about Church's 1932 logic, and I gotta say, if you think the modern logical notations are hard to read, they don't hold a candle to this stuff.

CS papers have become unreadable to me. Once upon a time, code would be in some readable notation, and algorithms would be in code. Now I think I mostly see logic and type notations that I suppose are Haskell and logic put in a blender with Greek letters. Even if I decode the notation, it's proofs, not code. I feel cheated out of hours, conceivably weeks, of time it would take to glean anything useful from them.

I think it's important to consider tools when designing a language, but I don't see why also considering bare-bones tool environments should lead to much sub-optimality. If you're developing any kind of general-purpose language (as opposed to a focussed DSL), different users will have different metrics anyway: what is optimal for one will be sub-optimal for others, so you need to provide a broad near-optimal plateau in any case. If the motivating idea for *your* language design requires you to sacrifice bare-bones utility, then by all means go for it.
But I think the idea that *any* language design will necessarily require the same kind of sacrifice to work well with advanced tools is a much more extraordinary claim, and it deserves appropriate skepticism. As a concrete matter, I am interested in learning what things are relevant to advanced tool support, so I can avoid those which might make it more difficult, or pursue those which might make it more effective. (I should also mention that I'm inclined to go ahead with my plan to steal as much H-M-style type inference as I can get away with...)

Aggressive type inference is a good example where considering a bare-bones environment leads to a bad experience. You can't really fly an F18 without a heads-up display.

Good IDEs seem to help programmers. But I wouldn't know; I use vim. I'm particularly interested in tools that tell you what happened (or what's happening), since you could design a language where finding out would be hard unless you considered how a tool would be able to present such info.

stop talking about programming languages in isolation of the tools that support them.

As Moore's law stalls, and further progress in ramping up offerings involves more cooperating sequential programs, I would like to see tools address system behavior when more than one "program" is running to get things done. I think system behavior should be part of the language runtime, so tools are responsible for presenting means for devs to understand, monitor, and control what happens when concurrent entities interact. I think you had something slightly different in mind, about how a language system could be very clever about code and program relationships. So I wanted to encourage interest in showing emergent effects of dynamic systems as part of the purview of language tools.

What is "the language" then? What you type in or what you print out? I say the latter, while the former is just a typing aid.
And you need to design the print format first, because your "language" can't be communicated without it, i.e., isn't a language. So you've basically reinforced my point.

In all cases where you want to read code, you have a computer. Why think of code as some linear stream of text? It could be a tree, it could have hyperlinks; no one "reads" code from beginning to end like a novel. They explore it like it is its own little world.

So what is your point then? That we should purely focus on the strings of sequential words that aren't going to be consumed sequentially anyway in any normal context? Or might we focus on how code will be consumed by programmers instead when they are exploring it for various tasks (debugging, understanding, crash analysis, code reviews...)? Unless you are Amish and are not allowed to read code on a computer, I don't see why you insist that the reading and writing experiences be separated from the language.

It's because linear language is the most efficient way to get information in and out of a human. Our cognitive facility is developed through spoken language, so this will always be the most natural form of communication: one word after another modifying a hidden state. Code is not written beginning to end like a novel; its access is task-specific. All advancements related to language after the development of spoken language (50-100kya) and writing (6kya) have been related to tools, not the language itself (scrolls, codex, books, printing press, telegraph, ...). The language of code is already quite developed; the potential for advancement lies elsewhere.

"In all cases where you want to read code, you have a computer." That's a false premise right there, and I very much assume it will remain so for many years to come. More importantly, even having a "computer" does not imply that you have a smart development IDE, let alone a suitable one for each particular language you might need to look at on a given day of the week.
Browsing a programming tutorial on a tablet? Do I need to install an app first, one for each language in the world? Or does each tutorial web page include 30 MB of code smartifier just to be able to display code in interactively readable form? You are assuming a technological monoculture that is neither realistic nor desirable.

Let me repeat: using IDEs, very good; depending on IDEs, very bad. I'm a happy user of Eclipse, but I would be terrified by the thought that I could only read or edit code with a heavy-handed tool like Eclipse. Complex IDEs on the critical path would be a recipe for disaster in many domains. What if I need to debug code remotely, through some narrow SSH tunnel, to give just one example? The success of text-based language as a low-overhead, portable abstraction is neither a coincidence, nor obsolete. The only exceptions would be domains that are very GUI-driven and proprietary anyway. But even there: Java failed on the web, JavaScript succeeded; there is a strong case that this has a lot to do with its simple, directly readable deployment format as plain text that everybody can read and hack everywhere with low tooling overhead.

Why would you need to edit code over SSH?! This sounds like an extreme edge case, and it's not worth dictating a representation for code to satisfy that. Just download the code to your local machine and edit it there. That's the right thing to do even for plain text. Code is already processed by some syntax highlighter in order to be displayed in a tutorial. A syntax highlighter for a structured format is even easier to write than making a parser first (or worse, some crazy regex) and then highlighting code based on that. And with a structured format we can easily have nice things like linking function names to their documentation page.

Because, say, it runs on some device, or in a data center, or some other hardware that is different from my local machine.
Having to copy code back and forth for each little edit in a debug cycle would be wildly inconvenient. This may be an edge case, but it's probably far more common than you might think. Where I live, people have to do it all the time.

The executable runs on the device, not the source code. If for some reason you need the source code on the target device, it's not hard to set up file system sync. I see no reason why you would want to run an editor like Vi on the target device and access it via SSH, rather than running a nice editor on your workstation and then syncing the code, or better yet, compiling on the workstation and copying the executable. Warping the entire programming tool chain because you want to avoid having to sync some files seems crazy to me.

...that is not how it is likely to work in practice, in a heterogeneous environment. And cross-building actually is a very hard problem once you move beyond standard text I/O with your program.

I've done embedded development with cross-building and remote debugging in a company, so I'm confused: can you explain what's "very hard"? We were using C, and I'm not sure tooling for other languages is as advanced; but that alone doesn't make a cross-compiler conceptually hard. (I'll speculate that Unix geeks who understand functional programming are scarce, and that might be the biggest problem.)

Try building or debugging Chrome for Windows on your Linux dev desktop and you'll see. And don't even get me started about mobile OSes... ;)

Everything you discuss is a tooling concern. You don't want to depend on some tools because you need to use some other tools, and you aren't confident that they can integrate in practice. But what IDE doesn't support remote debugging via an SSH tunnel? I mean, seriously, it's all been done; the plugin model is one of the few successes of extensible architectures. Dart and TypeScript exist solely because JavaScript is not toolable.
Why it succeeded where Java failed was strictly due to its integration with the DOM (we could say both were pervasive, but one was made to replace the web, and one was made to augment it). Likewise, few of us want to replace text, we just want to augment it.

Do we have concrete examples of "good" language features that are enabled by wide-spread tooling in a given community? Of "bad" language features? Assuming everyone had bought Sean's point years ago, precisely which parts of the languages we have today would have been done differently?

One example that I think is good: the go fix tool "finds Go programs that use old APIs and rewrites them to use newer ones". There are surprisingly few languages that make wide-spread use of such tools today, and Go is one of them *because* of tooling (because of the other tool, "go fmt", which has totalitarian control over the concrete syntax of user programs and can guarantee that pretty-printing a modified file will produce a diff no larger than reasonable). This is a godsend to API designers: the ability to lower the cost of API breakage for their users.

One example that I think is bad: Java tolerated an inane amount of verbosity for years (we discussed the lack of type inference recently) because of decent boilerplate-generation support from major IDE vendors. In my experience the verbosity of the language makes for a very bad experience teaching beginners (it gives them the idea that programming is mostly about rote memorization of magical incantations).

Here are some syntactic features that interact with auto-complete. The x.f(y) notation found mostly in object-oriented languages works well in combination with auto-complete. You type "x." and your editor displays a list of operations that x supports. This can be generalized to arbitrary subexpressions: if your cursor is in some enclosing expression C[...] then you get suggestions based on the type expected by C.
For example, if the type expected is boolean and the lexical environment contains a variable p of type boolean, then that variable will be in the list of suggestions. If you view f as a first-class function in x.f(y) then this gives the auto-complete behaviour of current OO IDEs. For example, if x is a List<T> and you type "x." then the expression e in the context x.e must be of type List<T> -> Q. You then look in the lexical environment for all values that match this type, which are the functions that work on lists. The vast majority of values in the lexical environment are functions, not variables like x and p, so it's useful to have a syntax that gives good auto-complete for function names. Another example in C# is the LINQ syntax, which looks like SQL but has the order of some clauses reversed to make sure that variable binding comes before variable use (the from clause comes first in LINQ). This ensures that you have auto-complete for those variables in subsequent clauses.

One problem with the generalization you're proposing to dot completion is that literally anything could be legal in most contexts.

foo(cat:String):Nat = 3*(c|)

I've drawn the cursor with a pipe. Should we offer to complete the c to cat? What if they are trying to type cat.length? The magic thing about dot is that you know that 'o.|' is going to be followed by a field of o. So, yes, you can make the generalization you're suggesting, but you'll either always be suggesting almost everything or you'll sometimes miss what they really wanted.

The dot model depends on the type of the receiver for reasons of usability. It's not perfect and comprehensive, but it is good enough in a "worse is better" kind of way. Contrast that to code completion proposals I've seen for Haskell: immensely powerful but ultimately unusable. And this is really just an accident: OO happened to have a convenient anchor to latch onto when code completion became a thing; Haskell, with all its statically typed glory, did not.
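One way to make such type-directed suggestions usable is to filter or rank the lexical environment by the expected type. Here is a toy sketch of that idea; the function names, and the plain strings standing in for types, are invented for illustration and do not come from any real IDE:

```python
# Toy sketch of type-directed completion ranking. The "environment" maps
# function names to their parameter types (represented as plain strings).
def rank_completions(value_type, env):
    """Rank candidates for completing after a value of type value_type:
    functions whose first parameter matches come first; the stable sort
    keeps everything else in its original order."""
    def score(item):
        name, param_types = item
        return 0 if param_types and param_types[0] == value_type else 1
    return [name for name, _ in sorted(env, key=score)]

env = [
    ("length",  ["str"]),
    ("reverse", ["list"]),
    ("append",  ["list", "int"]),
    ("sqrt",    ["float"]),
]

# Completing after a list-typed value: list-consuming functions rank first.
print(rank_completions("list", env))  # → ['reverse', 'append', 'length', 'sqrt']
```

Nothing is discarded here, only reordered, which matches the observation that almost anything could be legal: the editor can still offer everything, just with the type-compatible candidates on top.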
Imagine if SPJ and Wadler, etc., went back and re-designed Haskell with usable code completion as a first-class language design concern... wouldn't the result be awesome?

I don't think it's a binary choice between doing the suggestion and not doing the suggestion. It's a ranking problem. That completion after the dot is more useful is exactly my point! This doesn't mean that other completion is useless, though. In F# you have the |> operator, which is like the dot but works with any expression on the right-hand side. So you could do list |> reverse but also list |> (fun l -> reverse l). The completions for list |> could still be ranked according to whether the function takes a list as its first argument, even though there are other compound expressions that could also be valid.

list |> reverse
list |> (fun l -> reverse l)
list |>

Funnily enough, the reason that c could be completed to anything is because the dot operator exists. If you had only f(x,y) syntax and no operators then the only thing that c could be completed to is some Nat or some function returning a Nat. There would be no way to make it a Nat after you typed cat.

I agree with you. I was just pointing out that dot has somewhat special syntactic properties that interact very well with completion. And most OOP languages consider dot to be pulling fields from the object rather than as supplying the object as the first parameter to an arbitrary function. So the set of candidate fields is closed in most OOP languages rather than being an open set of candidate functions.

That is an interesting case study, but what exactly does it mean in terms of tooling? I understand it as "designing the syntax of your language so that the small-search-space thing comes first and the large-search-space thing comes next lets programmers explore the large space, refined by the small-space choice, through tooling". Or maybe it is not small-search-space vs.
large-search-space but local-meaning (which kind of implies small search space, but also that completion would not behave reliably-similarly in most places) vs. globally-available-choice. Or asking programmers to write the name they have in immediate memory, and then help them find the one that is not. So this particular example of yours would be an example of a purely-textual syntax choice, informed by what we know to be easy to tool. To compare to Sean's original proposal: if we accept tools as part of the core experience, we are able to make decisions that are impossible given the language by itself.

The present case is not about tooling enabling more/better choices, but rather about tooling proposing a *distinction* between two alternatives that could be perceived to be equivalent.

Ah, I read your question backward: I thought you meant examples of how we design languages differently because of tools. The question of tooling enabling better choices is indeed much more interesting.

Look at language extension. In Lisp you have macros, which allow you to create constructs that interpret sexpr subexpressions in arbitrary ways. There are a couple of problems with them. For one, lexical scoping doesn't always work well. There are hygienic macros, but they don't feel right to me for various reasons. Macros are limited to sexprs; you can't have arbitrary syntax inside them. Editors may not work well with them: you may have syntax highlighting for keywords so that `if` and `defun` get a different color than normal identifiers, but inside a macro the editor doesn't know what are keywords and what are normal identifiers. You can also imagine a language that allows you to hook into the parser, which would allow arbitrary syntax extension, but this makes the editor problem even worse, and they behave too wildly in general. Even arbitrary syntax extension is limited: you can't use it to extend the language with RGB color literals which you can pick visually in a color picker.
It's limited to plain text. If you broaden the view of programming language to the whole user experience you can do better. Let's think about how a simple structural editor would work. You have some representation of the AST, and the editor knows how to display that AST. Each node in the AST gets displayed as its own little GUI widget depending on the node type, with nested widgets for its subexpressions.

How would language extension work in this model? We would add a new node type to the AST, and define how that new node type gets displayed as a GUI widget in the editor. So we could define a new node type ColorLiteral(R,G,B) which gets displayed as a small square with the given color, and if you select it you get a color picker. Instead of defmacro you get a defsyntax construct which lets you define and bring into scope a new language construct along with a function that takes a ColorLiteral(R,G,B) and outputs its GUI widget.

You can solve the hygiene problem by letting that function take an additional argument that represents the lexical environment. ColorLiteral does not care about the lexical environment, but constructs that support nested expressions do. The `let x = e1 in e2` construct in a given lexical environment E first creates the GUI widget for e1 in environment E and then creates the GUI widget for e2 in the environment E + {x}. So the lexical environment becomes a first-class thing, and each node in the tree controls the lexical environment of its children.

You have a special AST node `Hole` which represents a subexpression that has not yet been entered. Its GUI widget is a text box which provides auto-complete based on the lexical environment (which it got from its parent node). The things inside the lexical environment are not just variables, but in fact all language constructs. There is an `if` entry inside the lexical environment, which when picked creates an `if` node.
Variables in the lexical environment are just those entries that happen to be entries that create a variable node. Note that defsyntax is not special: it just adds an entry to the lexical environment which creates that new node type. Hopefully this was not too confusing.

So what does this enable? It enables clean and truly arbitrary language extension, and each language extension comes with a specific display and editor for each new language construct. In fact, a language extension is nothing but an extension to the IDE. You could imagine extending a language with SQL queries, regex literals, bitmap literals, a logic programming sublanguage, pattern matching, tables, state machines, graph literals, etc. If you wanted you could even create a Java sublanguage, for which the AST node would be represented as the ASCII Java code, and the GUI widget would be a plain text editor.

It's also great for language evolution. New language constructs can be tried out as a library. You can modify the syntax of the language without breaking anybody's code: you just change the way AST nodes are displayed. As a trivial special case you can rename functions without breaking anybody's code: it just changes how identifiers of that function are displayed: instead of displaying as "foo" the exact same AST node now displays as "bar".

This doesn't seem insurmountable though. Given a language with f(x,y) calling syntax, we could simply use a placeholder to represent the function to be filled in via autocomplete. For instance, typing "?(x,y)" displays all possible functions accepting two parameters of the appropriate types.

I wonder how well phrase completions would work? Start with an expression with '?' in any number of places and let the IDE offer completions that make the whole thing make sense.

In the x.f(y) case, x is a record (product) type and the list displays the projection maps out of x.
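The structural-editor sketch a few paragraphs up (AST nodes rendering themselves, each node controlling the lexical environment of its children, a Hole node offering completions) can be made concrete in a few lines. All class names here are invented, and plain text strings stand in for the GUI widgets:

```python
# Minimal sketch of a structural editor's AST. render(env) plays the role
# of building a GUI widget; env is the lexical environment the parent
# node passed down.
class Node:
    def render(self, env):
        raise NotImplementedError

class ColorLiteral(Node):
    def __init__(self, r, g, b):
        self.rgb = (r, g, b)
    def render(self, env):           # ignores the environment entirely
        return "[swatch %02x%02x%02x]" % self.rgb

class Var(Node):
    def __init__(self, name):
        self.name = name
    def render(self, env):
        assert self.name in env, "unbound variable"
        return self.name

class Let(Node):
    def __init__(self, name, e1, e2):
        self.name, self.e1, self.e2 = name, e1, e2
    def render(self, env):
        # e1 is rendered in environment E; e2 in E + {name}.
        return "let %s = %s in %s" % (
            self.name,
            self.e1.render(env),
            self.e2.render(env | {self.name}))

class Hole(Node):
    def render(self, env):
        # A real editor would offer auto-complete drawn from env here.
        return "<hole: %s>" % ", ".join(sorted(env))

ast = Let("c", ColorLiteral(255, 0, 0), Hole())
print(ast.render(frozenset()))  # → let c = [swatch ff0000] in <hole: c>
```

The key design point from the discussion survives even in this toy form: the environment is an ordinary value threaded through render, so each node decides what its children can see, which is exactly where the hygiene story lives.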
There should be a dual kind of auto-completion for union (sum) types which displays a list of constructors. I'm not sure how that works syntactically, and it's non-existent in OO language IDEs, but it's straightforward in a visual programming system: dual auto-completion. You simply drag an input backwards onto the canvas, instead of dragging the output as in the standard case.

I don't get it, can you give me an example of the use case?

Starting at 3:25 in this video. It should be self-explanatory though. Instead of listing methods that take type x as input, it lists methods that produce type x - the most primitive of these being constructors.

Ah, crystal clear! So let me summarize: Blueprints has two forms of code completion: (a) for creating a new patch to receive an output from an existing one, and (b) for creating a new patch that sends to the input of an existing one. If we think about it, there are actually three kinds of code completion possible. The last two are more meaningful in functional languages, and I think that is how Haskell code completion systems work (at least the ones I've heard described to me!). The function application syntax of the language also helps here, given that the producer of the value is always on the far left (vs. OO syntax, where the producer is to the right of one or a few dots).

(1) is a special case of (2) with clever syntactic sugar in OO languages, namely that you can write x.f(y) instead of x.class.f(x, y).

Technically correct, but conceptually I would say (2) is more about data flow while (1) is about object access. It makes sense that the experiences would be different.

You only need one kind of code completion: "Who can produce a value of type T". Then "Who can take a value of type T" is equivalent to "Who can produce a value of type T -> Q" for some Q. It's an ugliness of most languages that they treat producers and consumers asymmetrically that way.
To treat producers and consumers symmetrically you'd need a linear language with explicit splitting and merging of data flow.

I understand where the folks on both sides of this argument are coming from. However, I think that there is a clear middle ground being overlooked. When I learned the word "toolstrapping", my whole perspective on tooling changed. The challenge is not to create a language with great tooling; the challenge is to create great tooling which makes new kinds of languages possible. Sean's fighter-jet HUD example is a good one in the sense that there are some tasks which really can't be accomplished without tool assistance. What doesn't logically follow is that such a cyborg language needs to live on an IDE island.

The key insight, for me anyway, is that we can't even program a typical 1970s language without tools! We need text editors, but more than that, we need an alphabet! It's long before me, but there was a time when we didn't all agree on which sequences of bits represented which characters, so there was no hope of your diff tool collaborating with my editor, never mind somebody else's compiler. It's only once we assimilate a particular technology that we advance. So it was with the written word, so it is with text editors, so it will be with next-generation environments.

Another problem with the fighter-jet example is that fighter-jet pilots still need *lots of training* and are often only trained on one or a few kinds of jets. The skills are not transferable, doubly so if pilots can't take their favorite auto-stabilizer software with them to their new fighter jet. Further, those planes are *expensive*, and lots of otherwise well-funded militaries can't afford them. Military developments tend to be simultaneously ahead of and behind both academic research and industry practice. We should strive for "a rising tide raises all ships" rather than "setting the high watermark".
I also agree with those folks who trumpet the inherent advantages of written language, but I also acknowledge that it is not a universal tool. In particular, textual language tends to fail once inherent information density (entropy? I'm not a Shannon expert) reaches some threshold. Multimedia is the canonical example: the frequency of recorded music or the resolution of modern photography is far more information-dense than any human could ever hope to process as text. Nobody can look at a text dump and tell you that it's a video of a rock concert.

So let's not put the cart before the horse by attempting to build a next-generation language and its IDE all at once. Let's instead build tools which augment the text tools that we have now. Only once we've explored how those new tools enhance our existing capabilities should we begin to explore what kinds of new capabilities they've created. I have some ideas of intermediate forms such tools might take, but I hesitate to broadcast my bet, for fear it suffocate my message above.
http://lambda-the-ultimate.org/node/5157
Guido van Rossum wrote:
> My idea was to make the compiler smarter so that it would recognize
> exec() even if it was just a function.
>
> Another idea might be to change the exec() spec so that you are
> required to pass in a namespace (and you can't use locals() either!).
> Then the whole point becomes moot.

I vote for the latter option. Particularly if something like Namespace objects make their way into the standard lib before Py3k (a Namespace object is essentially designed to provide attribute-style lookup into a string-keyed dictionary - you can fake it pretty well with an empty class, but there are a few quirks with doing it that way).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
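For readers unfamiliar with the idea, the "attribute-style lookup into a string-keyed dictionary" Nick describes can be faked in a few lines. This is only an illustrative sketch, not the proposed stdlib class:

```python
class Namespace:
    """Attribute-style access over a string-keyed dictionary."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

# Usage: attributes and the backing dict stay in sync.
ns = Namespace(x=1, y=2)
assert ns.x == 1
ns.z = 3
assert ns.__dict__ == {"x": 1, "y": 2, "z": 3}
```

The empty-class trick gives you almost the same thing, since instance attributes already live in `__dict__`; the quirks Nick alludes to include things like clashes with method names and the lack of a proper repr.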
https://mail.python.org/pipermail/python-dev/2005-October/057188.html
Developers at a software vendor (ISV), Okta customers, and system integrators (SI) want to facilitate fast, enterprise-wide deployment of their app by integrating with Okta for user provisioning, primarily via the SCIM standard. This article describes the most common provisioning actions, including:

- Creating new users in the downstream application based on values derived from the Okta user profile and Okta group assignment.
- Importing users & groups from the downstream application in order to match them to existing Okta users, or to create new Okta users from the imported application.
- Syncing passwords: Okta sets the user's password to either match the Okta password or to be a randomly generated password. Learn more about the overall use case in Using Sync Password: Active Directory Environments.

These actions can be combined to solve for end-to-end use cases.

Now that you understand the most common provisioning actions and use cases, let's review your options to support provisioning as an app developer. While we outline a few different methods below, Okta recommends all ISVs support the SCIM standard. Okta has seen broad adoption of the standard in the market amongst our network of app developers over the course of 2016. Okta has doubled down on our investment in our SCIM client and launched our own SCIM provisioning developer program.

The options above are geared towards cloud apps, but we have a solution for on-premises applications as well. See the product documentation for details about Okta's agent-based provisioning solution.

Have questions? Need help? Email us at developers@okta.com or post your question on Stack Overflow.

In Okta, an application instance is a connector that provides Single Sign-On and provisioning functionality with the target application.

Your SCIM API MUST be secured against anonymous access. At the moment, Okta supports authentication against SCIM APIs with one of the following methods. Okta doesn't support OAuth 2.0 Resource Owner Password Credentials grant flows.
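As an illustrative sketch (not Okta's sample code), rejecting anonymous access can start with a check of the Authorization header before any SCIM route is handled. The helper name and the shared-secret value here are invented; a real service would verify credentials against its own store:

```python
# Hedged sketch: accept either a bearer token or HTTP Basic credentials.
import base64

EXPECTED_TOKEN = "s3cr3t"   # hypothetical shared secret for illustration

def is_authorized(headers):
    """Return True only for requests carrying valid credentials."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return auth[len("Bearer "):] == EXPECTED_TOKEN
    if auth.startswith("Basic "):
        try:
            user_pass = base64.b64decode(auth[len("Basic "):]).decode()
        except Exception:
            return False
        return user_pass == "okta:" + EXPECTED_TOKEN
    return False

# In a Flask app this might run in a before_request hook:
#   if not is_authorized(request.headers):
#       return "Unauthorized", 401
```

In production you would also serve the API over HTTPS only and compare secrets with a constant-time comparison rather than `==`.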
Your service must be capable of storing the following four user attributes. Your SCIM API must also support the SCIM API endpoints described below to work with Okta.

Your SCIM 2.0 API should allow the creation of a new user account. The four basic attributes listed above must be supported, along with any additional attributes that your application supports. If your application supports entitlements, your SCIM 2.0 API should allow configuration of those as well. In response to an HTTP POST to the /Users endpoint, an immutable or system ID of the user (id) must be returned to Okta.

Okta will call this SCIM API endpoint under the following circumstances:

- Direct assignment: When a user is assigned to an Okta application using the "Assign to People" button in the "People" tab.
- Group-based assignment: When a user is added to a group that is assigned to an Okta application. For example, an Okta administrator can assign a group of users to an Okta application using the "Assign to Groups" button in the "Groups" tab. When a group is assigned to an Okta application, Okta sends updates to the assigned application when a user is added or removed from that group.

Below is an example demonstrating how the sample application handles account creation:

    @app.route("/scim/v2/Users", methods=['POST'])
    def users_post():
        user_resource = request.get_json(force=True)
        user = User(user_resource)
        user.id = str(uuid.uuid4())
        db.session.add(user)
        db.session.commit()
        rv = user.to_scim_resource()
        send_to_browser(rv)
        resp = flask.jsonify(rv)
        resp.headers['Location'] = url_for('user_get',
                                           user_id=user.userName,
                                           _external=True)
        return resp, 201

Note: force=True is set because Okta sends application/scim+json as the Content-Type and the .get_json() method expects application/json.

For more information on user creation via the /Users SCIM endpoint, see section 3.3 of the SCIM 2.0 Protocol Specification.
Your SCIM 2.0 API must support the ability for Okta to retrieve users (and entitlements like groups, if available) from your service. This allows Okta to fetch all user resources in an efficient manner for reconciliation and initial bootstrap (to get all users from your app into the system). Here is an example using curl to make a GET request to /Users:

    curl

Below is how the sample application handles listing user resources, with support for filtering and pagination:

    @app.route("/scim/v2/Users", methods=['GET'])
    def users_get():
        query = User.query
        request_filter = request.args.get('filter')
        match = None
        if request_filter:
            match = re.match('(\w+) eq "([^"]*)"', request_filter)
        if match:
            (search_key_name, search_value) = match.groups()
            search_key = getattr(User, search_key_name)
            query = query.filter(search_key == search_value)
        count = int(request.args.get('count', 100))
        start_index = int(request.args.get('startIndex', 1))
        if start_index < 1:
            start_index = 1
        start_index -= 1
        query = query.offset(start_index).limit(count)
        total_results = query.count()
        found = query.all()
        rv = ListResponse(found,
                          start_index=start_index,
                          count=count,
                          total_results=total_results)
        return flask.jsonify(rv.to_scim_resource())

If you want to see the SQL query that SQLAlchemy is using for the query, add this code after the query statement that you want to see:

    print(str(query.statement))

For more details on the /Users SCIM endpoint, see section 3.4.2 of the SCIM 2.0 Protocol Specification.

Your SCIM 2.0 API must support fetching of users by user id. Below is how the sample application handles returning a user resource by user_id:

    @app.route("/scim/v2/Users/<user_id>", methods=['GET'])
    def user_get(user_id):
        try:
            user = User.query.filter_by(id=user_id).one()
        except:
            return scim_error("User not found", 404)
        return render_json(user)

If we don't find a user, we return an HTTP status 404 ("Not found") with a SCIM error message.
For more details on the /Users/{id} SCIM endpoint, see section 3.4.1 of the SCIM 2.0 Protocol Specification.

When a profile attribute of a user assigned to your SCIM-enabled application is changed, Okta will do the following:

- Make a GET request against /Users/{id} on your SCIM API for the user to update.
- Make a PUT request against /Users/{id} in your SCIM API with the updated resource as the payload.

Examples of things that can cause changes to an Okta user profile are: Below is how the sample application handles account profile updates:

    @app.route("/scim/v2/Users/<user_id>", methods=['PUT'])
    def users_put(user_id):
        user_resource = request.get_json(force=True)
        user = User.query.filter_by(id=user_id).one()
        user.update(user_resource)
        db.session.add(user)
        db.session.commit()
        return render_json(user)

For more details on updates to the /Users/{id} SCIM endpoint, see section 3.5.1 of the SCIM 2.0 Protocol Specification.

Deprovisioning is perhaps the most important reason why customers ask that your application support provisioning with Okta. Your SCIM API should support account deactivation via a PATCH to /Users/{id} where the payload of the PATCH request sets the active property of the user to false. Okta will also fall back to a PUT if PATCH is not supported for deactivation. Your SCIM API should allow account updates at the attribute level. If entitlements are supported, your SCIM API should also be able to update entitlements based on SCIM profile updates.

Okta will send a PATCH request to your application to deactivate a user when an Okta user is "unassigned" from your application.
Examples of when this happens are as follows: Below is how the sample application handles account deactivation:

    @app.route("/scim/v2/Users/<user_id>", methods=['PATCH'])
    def users_patch(user_id):
        patch_resource = request.get_json(force=True)
        for attribute in ['schemas', 'Operations']:
            if attribute not in patch_resource:
                message = "Payload must contain '{}' attribute.".format(attribute)
                return message, 400
        schema_patchop = 'urn:ietf:params:scim:api:messages:2.0:PatchOp'
        if schema_patchop not in patch_resource['schemas']:
            return "The 'schemas' type in this request is not supported.", 501
        user = User.query.filter_by(id=user_id).one()
        for operation in patch_resource['Operations']:
            # Skip operations other than 'replace'.
            if 'op' not in operation or operation['op'] != 'replace':
                continue
            value = operation['value']
            for key in value.keys():
                setattr(user, key, value[key])
        db.session.add(user)
        db.session.commit()
        return render_json(user)

For more details on user attribute updates to the /Users/{id} SCIM endpoint, see section 3.5.2 of the SCIM 2.0 Protocol Specification.

userName eq (Required): Your SCIM API must be able to filter users following the pattern userName eq "…". This is because most provisioning actions, besides Import Users, require the ability for Okta to determine if a user resource exists on your system. Consider the scenario where an Okta customer with thousands of users has a provisioning integration with your system, which also has thousands of users. When an Okta customer adds a new user to their Okta organization, Okta needs a way to determine quickly if a resource for the newly created user was previously created on your system.
Examples of filters that Okta might send to your SCIM API are as follows:

    userName eq "jane.doe"
    userName eq "jane.doe@example.com"

Here is an example of how to implement SCIM filtering in Python:

    request_filter = request.args.get('filter')
    match = None
    if request_filter:
        match = re.match(r'(\w+) eq "([^"]*)"', request_filter)
    if match:
        (search_key_name, search_value) = match.groups()
        search_key = getattr(User, search_key_name)
        query = query.filter(search_key == search_value)

For more details on filtering in SCIM 2.0, see section 3.4.2.2 of the SCIM 2.0 Protocol Specification.

Okta currently only supports filtering on userName eq. However, we may support additional parameters and operators in the future to unlock new use cases, so you may want to support these now to future-proof your application. These additional filters may include the following:

meta.lastModified: This filter would be needed

externalId: Okta may use the externalId as a more robust alternative to userName when determining if the user already exists in your application during a reactivation flow. externalId is a more stable identifier for users, because the userName and email addresses for a user can change. Here is an example of an externalId filter that might be sent to your application:

    externalId eq "00u1abcdefGHIJKLMNOP"

For details about supporting externalId, see section 3.1 of RFC 7643. Note that in that section, "provisioning client" refers to Okta and "service provider" refers to you, the third party that Okta is making calls to. When adding support for externalId filtering to your application, we suggest that you use OAuth 2.0 for authentication and use the OAuth 2.0 client_id to scope the externalId to the provisioning domain.

id: Okta could also use this attribute to determine if the user already exists in your application, instead of userName or externalId.
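The parsing step common to all of these filters can be isolated for testing. The sketch below is an illustrative helper (parse_eq_filter is not an Okta or SCIM API name); it extends the regular expression shown above so that dotted attribute paths such as meta.lastModified also match:

```python
import re

# Matches 'attribute eq "value"', including dotted attribute paths
# such as meta.lastModified.
FILTER_RE = re.compile(r'([\w.]+) eq "([^"]*)"')


def parse_eq_filter(expression):
    """Return an (attribute, value) pair for a SCIM 'eq' filter.

    Returns None for any expression outside this single supported
    pattern (e.g. other operators like 'co' or 'sw').
    """
    match = FILTER_RE.match(expression)
    return match.groups() if match else None
```

With the attribute name and value in hand, the handler can look up the matching column on the User model (as the getattr call above does) and apply the equality filter to the query.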
When returning large lists of resources, your SCIM implementation must support pagination using a limit (count) and offset (startIndex) to return smaller groups of resources in a request.

Below is an example of a curl command that makes a request to the /Users/ SCIM endpoint with count and startIndex set:

    $ curl ''
    {
      "Resources": [
        {
          "active": false,
          "id": 1,
          "meta": {
            "location": "",
            "resourceType": "User"
          },
          "name": {
            "familyName": "Doe",
            "givenName": "Jane",
            "middleName": null
          },
          "schemas": [
            "urn:ietf:params:scim:schemas:core:2.0:User"
          ],
          "userName": "jane.doe@example.com"
        }
      ],
      "itemsPerPage": 1,
      "schemas": [
        "urn:ietf:params:scim:api:messages:2.0:ListResponse"
      ],
      "startIndex": 0,
      "totalResults": 1
    }

Note: When returning a paged resource, your API should return a capitalized Resources JSON key ("Resources"); however, Okta will also accept a lowercase string ("resources"). Okta will also accept lowercase JSON strings for the keys of child nodes inside the Resources object, such as startindex, itemsperpage, or totalresults.

One way to handle paged resources is to have your database do the paging for you. Here is how the sample application handles pagination with SQLAlchemy:

    count = int(request.args.get('count', 100))
    start_index = int(request.args.get('startIndex', 1))
    if start_index < 1:
        start_index = 1
    start_index -= 1
    query = query.offset(start_index).limit(count)

Note: This code subtracts "1" from the startIndex, because startIndex is 1-indexed and the OFFSET statement is 0-indexed.

For more details on pagination for a SCIM 2.0 endpoint, see section 3.4.2.4 of the SCIM 2.0 Protocol Specification.

Some customer actions, such as adding hundreds of users at once, cause large bursts of HTTP requests to your SCIM API. For scenarios like this, we suggest that your SCIM API return rate limiting information to Okta via the HTTP 429 Too Many Requests status code. This helps Okta throttle the rate at which SCIM requests are made to your API.
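The 1-indexed-to-0-indexed bookkeeping is easy to get wrong, so it helps to check it in isolation. The sketch below is a hypothetical helper (not part of the sample application) that pages a plain Python list and wraps the page in the ListResponse envelope shown above:

```python
def list_response(resources, start_index=1, count=100):
    """Build a SCIM ListResponse envelope from a plain list.

    startIndex is 1-indexed per the SCIM spec, so the Python slice
    offset is start_index - 1 (the same subtraction the SQLAlchemy
    code performs before OFFSET).
    """
    if start_index < 1:
        start_index = 1
    offset = start_index - 1
    page = resources[offset:offset + count]
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": len(resources),
        "startIndex": start_index,
        "itemsPerPage": len(page),
        "Resources": page,
    }
```

For example, with ten resources, startIndex=3 and count=4 should yield the third through sixth items and report totalResults=10.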
For more details on rate limiting requests using the HTTP 429 status code, see section 4 of RFC 6585.

Okta currently supports the /groups endpoint for GET /groups of a SCIM API. This is usually done to check for groups data and is not mandatory for SCIM to work. The minimum check we require is for the resources to be valid JSON; see the example below.

Example:

    {
      "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
      "totalResults": 1,
      "startIndex": 0,
      "itemsPerPage": 0,
      "Resources": [
        {
          "id": "66ed8bece1944aa18bf96fb5c935c4ba",
          "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
          "displayName": "Marketing",
          "members": [
            {
              "value": "m1@atko.com",
              "$ref": "localhost:8080/Users/12345",
              "display": "Marketing User 1"
            },
            {
              "value": "m2@atko.com",
              "$ref": "localhost:8080/Users/12346",
              "display": "Marketing User 2"
            }
          ]
        }
      ]
    }

With Group Push Beta, Okta now supports creation of a Group, along with its user memberships, in the downstream SCIM-enabled application if your SCIM 2.0 API supports it. The caveat is that the users must already be provisioned in your SCIM-enabled application.

With Group Push Beta, Okta now supports reading a Group's details by group id, along with the membership details. If a Group is not found, your SCIM application may return an HTTP status 404 ("not found"). For more details on the /groups/{id} SCIM endpoint, see section 3.4.1 of the SCIM 2.0 Protocol Specification.

With Group Push Beta, any updates to the Group profile and memberships in Okta can now be reflected in your SCIM application. Okta will do the following to make the Group changes effective:
- A GET to /groups/{id} on your SCIM API for the group to update.
- A PUT to /groups/{id} in your SCIM API with the updated resource as the payload.

With Group Push Beta, Okta can delete the Group in your SCIM-enabled application. For more details on deleting resources, see section 3.6 of the SCIM 2.0 Protocol Specification.
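To make the shape of the Group payload concrete, here is a small sketch that assembles a resource matching the example above. group_resource and its (value, ref, display) member tuples are illustrative choices for this sketch, not part of any SCIM library:

```python
def group_resource(group_id, display_name, members):
    """Assemble a minimal SCIM 2.0 Group resource.

    members is a list of (value, ref, display) tuples; the field names
    in the output follow the example Group payload above.
    """
    return {
        "id": group_id,
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
        "displayName": display_name,
        "members": [
            {"value": value, "$ref": ref, "display": display}
            for (value, ref, display) in members
        ],
    }
```

A ListResponse for GET /groups would then wrap one or more of these resources in the same envelope used for users.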
The following features are currently not supported by Okta:

Deleting users via DELETE is covered in section 3.6 of the SCIM 2.0 Protocol Specification. Okta users are never deleted; they are deactivated instead. Because of this, Okta never makes an HTTP DELETE request to a user resource on your SCIM API. Instead, Okta makes an HTTP PATCH request to set the active setting to false.

The ability to query users with a POST request is described in section 3.4.3 of the SCIM 2.0 Protocol Specification. Querying using POST is sometimes useful if your query contains personally identifiable information that would be exposed in system logs if it were passed as query parameters with a GET request. Okta currently does not support this feature.

The ability to send a large collection of resource operations in a single request is covered in section 3.7 of the SCIM 2.0 Protocol Specification. Okta currently does not support this feature and makes one request per resource operation.

The /Me URI alias for the current authenticated subject is covered in section 3.11 of the SCIM 2.0 Protocol Specification. Okta does not currently make SCIM requests with the /Me URI alias.

Okta does not currently make queries against the /Schemas endpoint, but this functionality is being planned. The /Schemas endpoint is specified in section 4 of RFC 7644.

Okta does not currently make queries against the /ServiceProviderConfig endpoint, but this functionality is being planned. The /ServiceProviderConfig endpoint is specified in section 4 of RFC 7644.

Okta does not currently make queries for resources using meta.lastModified as part of a filter expression. Okta plans to add functionality

In order to allow customers to use your SCIM provisioning integration with Okta, you'll need to get your app published in the Okta Integration Network. Follow the steps below to test and submit your application for Okta review:

Have questions? Need help?
Email us at developers@okta.com or post your question on Stack Overflow.

The first step is to build a compliant SCIM server. Even if you already support SCIM, it is important that you review Okta's SCIM docs above, especially the following sections, to understand the specifics of Okta's support for the SCIM standard.

If you do not have a Runscope account already, we suggest starting with Runscope's free trial plan for Okta. Here is how to get started: You will use this file to import Okta's SCIM test suite into Runscope.

Now that you've imported Okta's SCIM test suite into Runscope, your next step will be to customize the test suite for the SCIM integration that you are writing.

After importing Okta's SCIM test suite into Runscope, you will need to configure the test for your SCIM integration. Here is how to do that: with "Initial Variables" selected, click the "Add Initial Variable" link and add the following:

Now that you have updated your SCIM test in Runscope for your SCIM server, it is time to run the test.

As you are developing your SCIM server, you will likely want to share test results with teammates or with Okta. Here is how to share a test result from Runscope with someone else:

Once you have a SCIM server that passes all of the Runscope tests, test your SCIM integration directly with Okta. To do so, you will first need to sign up for an Okta developer account.

Note: If you are using the OAuth Authorization Code Grant flow as your authentication method or need to support the Profile Master action, Okta will need to custom-configure a template app for you. Please request this in your email to developers@okta.com.

Your QA team should test the use cases in this downloadable spreadsheet: Okta SCIM Test Plan.

After performing these steps, navigate to the OIN Manager to complete the submission form and track review status.
Before submitting your application to Okta, you should check the User Attributes to make sure that the attributes are set to what you would want your users to see.

Check your Profile Attributes as follows: in the "Attributes" section, remove all attributes that are not supported by your application. This is an important step! Your users will get confused if your application appears to support attributes that are not supported by your SCIM API. You can delete an attribute by selecting it, then clicking the "Delete" button located in the right-hand attribute details pane. Before removing an attribute, check the mapping between Okta and your application and remove the mappings for the attribute(s) to be deleted.

The last step for you to complete before submitting your application to Okta is to check the User Profile Mappings for your application. These mappings determine how profile attributes are mapped to and from your application to an Okta user's Universal Directory profile. To check the User Profile Mappings for your application, do the following:

If you are already familiar with Runscope, then import the Okta SCIM 2.0 CRUD Test and configure the SCIM Base URL variable to point at the base URL for your SCIM server. If you are not familiar with Runscope, follow the detailed instructions to get started with using Runscope to test your SCIM server.

In order for an app to be published in the Okta Integration Network in Partner-Built EA, it must meet the following criteria:

Once Okta completes the QA process and the requisite changes are made (all issues are closed), Okta allows the provisioning integration to enter the Okta Integration Network in Partner-Built EA.

Whether Partner-Built EA or Okta-Verified, when issues arise related to the SCIM integration, the ISV acts as the first point of contact.

What are the differences between SCIM 1.1 and 2.0?
Namespaces: The namespaces are different, therefore 2.0 is not backwards compatible with 1.1.

Service Provider Configuration Endpoint: There's no s at

A resource MAY contain the same value more than once with a different type sub-attribute (e.g., the same email address may be used for work and home) but SHOULD NOT return the same (type, value) combination more than once per attribute, as this complicates processing by the client.

If I submit my app with a set of attributes, and then I want to add attributes during the testing phase of the app, is this acceptable? Yes. Add a new app instance in your dev org to test the new attributes and email developers@okta.com.

Okta provides an example SCIM server written in Python, with documentation. This example SCIM server demonstrates how to implement a basic SCIM server that can create, read, update, and deactivate Okta users. You can find the sample code to handle HTTP requests to this sample application in Required SCIM Server Capabilities.

Use the instructions that follow to set up and run the example SCIM server. This example code was written for Python 2.7 and does not currently work with Python 3.

Here is how to run the example code on your machine: first, do a git checkout of this repository, then cd to the directory that git creates. Then, do the following:

cd to the directory you just checked out:

    $ cd okta-scim-beta

Create an isolated Python environment named venv using virtualenv:

    $ virtualenv venv

Next, activate the newly created virtual environment:

    $ source venv/bin/activate

Then, install the dependencies for the sample SCIM server using Python's "pip" package manager:

    $ pip install -r requirements.txt

Finally, start the example SCIM server using this command:

    $ python scim-server.py

Below are instructions for writing a SCIM server in Python, using Flask and SQLAlchemy. A completed version of this example server is available in this git repository in the file named scim-server.py.
We start by importing the Python packages that the SCIM server will use:

    import os
    import re
    import uuid

    from flask import Flask
    from flask import render_template
    from flask import request
    from flask import url_for
    from flask_socketio import SocketIO
    from flask_socketio import emit
    from flask_sqlalchemy import SQLAlchemy
    import flask

re adds support for regular expression parsing, flask adds the Flask web framework, and flask_socketio and flask_sqlalchemy add idiomatic support for their respective technologies to Flask.

Next we initialize Flask, SQLAlchemy, and SocketIO:

    app = Flask(__name__)
    database_url = os.getenv('DATABASE_URL', 'sqlite:///test-users.db')
    app.config['SQLALCHEMY_DATABASE_URI'] = database_url
    db = SQLAlchemy(app)
    socketio = SocketIO(app)

Below is the class that SQLAlchemy uses to give us easy access to the "users" table. The update method is used to "merge" or "update" a new User object into an existing User object; this simplifies the code that handles PUT calls to /Users/{id}. The to_scim_resource method is used to turn a User object into a SCIM "User" resource schema.
    class User(db.Model):
        __tablename__ = 'users'
        id = db.Column(db.String(36), primary_key=True)
        active = db.Column(db.Boolean, default=False)
        userName = db.Column(db.String(250),
                             unique=True,
                             nullable=False,
                             index=True)
        familyName = db.Column(db.String(250))
        middleName = db.Column(db.String(250))
        givenName = db.Column(db.String(250))

        def __init__(self, resource):
            self.update(resource)

        def update(self, resource):
            for attribute in ['userName', 'active']:
                if attribute in resource:
                    setattr(self, attribute, resource[attribute])
            for attribute in ['givenName', 'middleName', 'familyName']:
                if attribute in resource['name']:
                    setattr(self, attribute, resource['name'][attribute])

        def to_scim_resource(self):
            rv = {
                "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
                "id": self.id,
                "userName": self.userName,
                "name": {
                    "familyName": self.familyName,
                    "givenName": self.givenName,
                    "middleName": self.middleName,
                },
                "active": self.active,
                "meta": {
                    "resourceType": "User",
                    "location": url_for('user_get',
                                        user_id=self.id,
                                        _external=True),
                    # "created": "2010-01-23T04:56:22Z",
                    # "lastModified": "2011-05-13T04:42:34Z",
                }
            }
            return rv

We also define a ListResponse class, which is used to return an array of SCIM resources as a Query Resource.

    class ListResponse():
        def __init__(self, list, start_index=1, count=None, total_results=0):
            self.list = list
            self.start_index = start_index
            self.count = count
            self.total_results = total_results

        def to_scim_resource(self):
            rv = {
                "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
                "totalResults": self.total_results,
                "startIndex": self.start_index,
                "Resources": []
            }
            resources = []
            for item in self.list:
                resources.append(item.to_scim_resource())
            if self.count:
                rv['itemsPerPage'] = self.count
            rv['Resources'] = resources
            return rv

Given a message and HTTP status_code, this will return a Flask response with the appropriately formatted SCIM error message.
By default, this function will return an HTTP status of "500 Internal Server Error". However, you should return a more specific status_code when possible. See section 3.12 of RFC 7644 for details.

    def scim_error(message, status_code=500):
        rv = {
            "schemas": ["urn:ietf:params:scim:api:messages:2.0:Error"],
            "detail": message,
            "status": str(status_code)
        }
        return flask.jsonify(rv), status_code

This sample application makes use of Socket.IO to give you a "real time" view of SCIM requests that Okta makes to this sample application. When you load the sample application (the "/" route), your browser will be sent a web application that uses Socket.IO to display updates without the need for you to reload the page:

    @app.route('/')
    def hello():
        return render_template('base.html')

This page is updated using the functions below:

- send_to_browser is syntactic sugar that will emit Socket.IO messages to the browser with the proper broadcast and namespace settings.
- render_json is more syntactic sugar, used to render JSON replies to Okta's SCIM client and emit the SCIM resource to Socket.IO at the same time.
- test_connect is the function called when a browser first starts up Socket.IO; it returns a list of currently active users to the browser via Socket.IO.
- test_disconnect is a stub that shows how to handle Socket.IO "disconnect" messages.

The code described above is as follows:

    def send_to_browser(obj):
        socketio.emit('user',
                      {'data': obj},
                      broadcast=True,
                      namespace='/test')

    def render_json(obj):
        rv = obj.to_scim_resource()
        send_to_browser(rv)
        return flask.jsonify(rv)

    @socketio.on('connect', namespace='/test')
    def test_connect():
        for user in User.query.filter_by(active=True).all():
            emit('user', {'data': user.to_scim_resource()})

    @socketio.on('disconnect', namespace='/test')
    def test_disconnect():
        print('Client disconnected')

Below is the JavaScript that powers the Socket.IO application described above.
For the full contents of the HTML that this JavaScript is part of, see the base.html file in the templates directory of this project.

    $(document).ready(function () {
        namespace = '/test'; // change to an empty string to use the global namespace
        var uri = 'https://' + document.domain + namespace;
        console.log(uri);
        var socket = io.connect(uri);
        socket.on('user', function(msg) {
            console.log(msg);
            var user = msg.data;
            var user_element = '#' + user.id;
            var userRow = '<tr id="' + user.id + '">' +
                '<td>' + user.id + '</td>' +
                '<td>' + user.name.givenName + '</td>' +
                '<td>' + user.name.familyName + '</td>' +
                '<td>' + user.userName + '</td></tr>';
            if ($(user_element).length && user.active) {
                $(user_element).replaceWith(userRow);
            } else if (user.active) {
                $('#users-table').append(userRow);
            }
            if ($(user_element).length && user.active) {
                $(user_element).show();
            }
            if ($(user_element).length && !user.active) {
                $(user_element).hide();
            }
        });
    });

This bit of code allows you to run the sample application by typing python scim-server.py from your command line. It also includes a try/except block that creates all tables if the User.query.one() call throws an error (which should only happen if the User table isn't defined yet):

    if __name__ == "__main__":
        try:
            User.query.one()
        except:
            db.create_all()
        app.debug = True
        socketio.run(app)
https://developer.okta.com/standards/SCIM/index
Agenda See also: IRC log -> Accepted. -> Accepted. Paul and Vojtech give regrets -> Norm: Anyone have any questions or comments about E01 and/or E02? ... Hearing none, I propose that we accept them. Accepted. Some discussion of what to do next; updating the errata document is the answer. <scribe> ACTION: Norm to construct an update to the errata document pointed to from the spec and pass it off to someone who can update it. [recorded in] Norm: Vojtech, you had a question about namespace bindings. Vojtech: Yes, in 5.7.5, in the first list, there are rules about how to construct namespace bindings. ... The way I understand it now, if an XPath expression returns a sequence of nodes, then we use the in-scope namespace bindings off the first node if the expression returns a node set. Norm: I think that exists so that if an expression selects a QName in content, the right namespace bindings are carried forward. General agreement that everything is ok. -> Some discussion of Paul's comment -> Henry: I think this would be clearer if we changed "conformant processor" to "conformant XProc processor" in that sentence. Paul: I think that could be clearer. ... I'm happy to leave the improvements to the editor. Norm: Paul also asks about a profile that's smaller than "minimum". I don't feel strongly about the names. ... How about "minimum", "basic", "modest", and "recommended" Paul: That sounds good. Norm: Anyone have concerns about these names? None heard. Norm: Paul's last comment is mostly editorial, but I agree. General agreement that it should read "reading and processing" as Paul suggests. Henry: Perhaps I should report on my action to add something about invariants ... I've started. Looking over the XML Spec again, it's not going to be as nice as I'd like. ... The best I can do for the first two profiles (which don't read any external markup) is to say things in two parts. ... 
For documents which are, or should be, standalone=yes and for documents which are standalone=no ... Because for documents which are standalone=no, if you don't read the external subset there isn't much you can say. ... You aren't gauranteed to get much at all. ... The most you can say is that you'll get the document element name and attributes (provided they don't contain entity references) ... But almost no one bothers with standalone="yes" and the default is standalone="no", so it'll be tricky to get right. ... Especially since processors aren't required to report unexpanded entities. ... But the other two are easier and I think we can get somewhere with them. ... The scope for variation is reduced after the external subset has been read and processed. Norm summarizes the state of the issues list, not much progress to be made today Adjourned rrsagent draft minutes
http://www.w3.org/XML/XProc/2010/07/01-minutes
Version 3.5 M5 of Eclipse WTP Released The Eclipse Web Tools Platform (WTP) has released milestone build M5 of WTP 3.2. Version 3.2 M5 introduces support for the new Element Collection mapping introduced in JPA 2.0, and adds the ability to configure the target namespace, prefix and file suffix for the abstract definition file. Developers can also now choose from two preferences in the XML editor. The Format comments preference disables comment formatting entirely, while the Join lines preference allows comments to have their lines joined if a line’s text doesn’t exceed the maximum line width. The open source project is available to download now.
https://jaxenter.com/version-3-5-m5-of-eclipse-wtp-released-100160.html
Guidelines to run OFBiz under WASCE 2.0.0.1 or Geronimo 2.1.1

Deprecated

This was working with R4.0. It's now deprecated. If ever someone needs to run OFBiz under Geronimo again this could be re-used, but some parts have been removed from the trunk, so it is more historical now...

Goal

This is intended to be used at the production stage. The idea is to develop as normal under OFBiz, and then, when at the production stage, to deploy on a WASCE/Geronimo server. You can then update again when needed from your development machine(s).

How it works

This uses a totally exploded EAR architecture (with exploded WARs as well). The deployment tool is for the moment intended to be used on a single machine. You can deploy remotely (on a Linux server from a Windows machine for instance, or whatever combination) but there are some drawbacks due to Geronimo itself (see comments at the end of this page). So the idea is to have an OFBiz instance on the server you want to deploy to, and to update it from your development machine(s) (using Subversion for instance), then to deploy from this updated OFBiz instance on the server itself.

WASCE 2.0.0.1 is based on Geronimo 2.0.3, which is a snapshot; earlier versions do not work, nor does 2.1, but Geronimo 2.1.1 works (and, I suppose, versions above). The code and templates are in framework/appserver/wasce2. Of course, this works under Linux and Windows as well.

Install WASCE 2.0.0.1 or Geronimo 2.1.1

If needed, refer to the documentation using the breadcrumb links in the web page above (WASCE 2.0 Documentation > Index > Content > Setup > Choosing an installation bundle) and/or use the Geronimo documentation:
- At the moment I tested, the Eclipse plugins were not working. They were not up to date and would not be of great help in our case anyway (I tried them, and lost some time).
- After installation, you may set your GERONIMO_HOME env var, but you might prefer to set it in the appserver.properties file. At the same time, check the content of this file and adapt it to suit your needs.
If present, the GERONIMO_HOME env var is not overridden by the geronimoHome value in appserver.properties.

Setup and deploy (or redeploy)

- First check and adapt the content of the appserver.properties file.
- For instance, system/manager are the default login/password coming with WASCE (and Geronimo). You may need to change them, but there is more important information, so please check.

From a clean OFBiz installation which runs well on the same machine where WASCE is installed, at the OFBiz root (or from Eclipse, or whatever else) run:

    java -jar ofbiz.jar -setup wasce2

You may do it offline or online depending on the offline parameter setting in the appserver.properties file. Of course, if you use offline=false the application server should already be running. This should deploy OFBiz under your application server and generate 4 kinds of files:

- OFBiz jar files in the Geronimo repository.
- Deployment plans (application.xml and geronimo-application.xml) in a generated META-INF directory (in the OFBiz root directory). Note, though, that the term "deployment plans" is normally used only for Geronimo-specific "J2EE deployment descriptors" (application.xml, for instance, is a standard "J2EE deployment descriptor").
- Deployment plans (geronimo-web.xml) in each application's WEB-INF directory.
- There is one more README file generated in the OFBiz META-INF directory. Open it and follow its instructions before running your application server (so the first time, you need to deploy offline). Don't forget to have a look at this file; there is important information generated there.

Something like "Illegal character in path" in the log means that you must remove the corresponding file; these errors are often related to .xls or .pdf files (or others) found in the OFBiz directory structure (these kinds of files don't exist in OFBiz OOTB). If it's not related directly to OFBiz, remove it. You might see an error message like Error: ofbiz.
Don't worry, this is because the deployment tool first tries to undeploy (not in redeploy mode; note that redeploy implies a running application server). If you see this message it's only because the module is not yet deployed on the server. Of course, if this error appears during the deployment phase and not the undeployment one, there is a problem in your parameters. In case of any other problems, try to look first at

I should mention that it seems to me that deploying with a server already running (offline=false in appserver.properties) is faster, but I have not measured this exactly yet. Anyway, you need to deploy offline the first time, since you need to put the generated, mandatory information found in META-INF/README into the Geronimo script (.bat or .sh) before running the server. Otherwise the OFBiz modules will not load and the server will hang.

Beware that this module uses the strings "/framework/", "/applications/", "/specialpurpose/" and "/hot-deploy/" to find the location of files, so if such a string appears elsewhere in your OFBiz path you will encounter an issue. Also, if you have developed some new applications, you may have to adapt application.xml and geronimo-application.xml to suit your needs.

Run

- If you launch the WASCE server using the regular command (or running "geronimo run" from GERONIMO_HOME\bin) it should start with the OFBiz modules loaded. For obvious port-conflict reasons, don't run standard OFBiz at the same time. If you run the Geronimo script from GERONIMO_HOME\bin (geronimo run), with the geronimo (and possibly setenv) script modified following the instructions in the README file, you will see what happens. This is why I think it's the preferable way of launching the server.
- You will find OFBiz-specific logs in the geronimo_home\bin\framework\logs directory.

Derby

If you are using the embedded Derby database you should consider this.
- OFBiz creates 2 folders for the Derby database under the runtime directory: ofbiz and ofbizolap; these contain all the setup data for OFBiz.
- If you run Geronimo, under your Geronimo bin directory you will see the same 2 directories. But those will not contain any data, as you are not running OFBiz with the necessary parameters (you are running Geronimo, not OFBiz). Simply replace the 2 directories ofbiz and ofbizolap in your Geronimo bin directory with the 2 directories from your OFBiz runtime directory.

RMI

If you need to use RMI, follow the directions at the end of the framework\appserver\templates\wasce2\README file. Note that I only tried without SSL using this tip.

Multi-instances

The appserver "-setup" deployment option is able to deploy OFBiz multi-instances in Geronimo. Actually there are, at least, 2 cases for multi-instances:

- Instances are all the same (simple case)
- Instances are different or mixed (some instances may be the same)

In the simple case we need only one root (ofbiz.home) and one classpath. Instances are numbered, except the "default" one. This allows us to keep the current OFBiz code: all links from a numbered instance going out of the current application to another application will go to the corresponding called application in the default, non-numbered instance. This is more of a hack, but it works. It's interesting when you have a lot of people working in a single application, as is the case when you have a group of people working to sell goods by phone or other channels (customer support), and for the eCommerce application as well.

In the second case there are as many root and classpath pairs as there are different instances. I already have in code what is needed to deal with both cases, but I got an issue when writing to the modules' web.xml (the xml root node is readonly and I don't understand why).
In the simple case it's not a problem, as we only need to write in the first web.xml file (webtools's) and we can do it by hand (we don't need to dynamically put an instance number in it). Anyway, there is much more to it than this, since OFBiz is not built to run multi-instances. In the first case it's less of a problem, as all instances refer to the default corresponding application, but to work properly in the general case it would need an instance number parameter introduced in links. Waiting for such a solution, I can see two ways:

- Introduce an instance number in the web.xml file of the first web app loaded (OOTB it's webtools). Then use 2 instances for each instance, one numbered and the other not. This is of course very clumsy.
- I also caught a glimpse of another possibility during my reading of the Geronimo or WASCE documentation, but I forgot where. I'm pretty sure it's the right solution, but I would have to search anew... It's a bit like Ludovic Maître's suggestion in one of his emails on the user ML (he wrote about Tomcat namespaces, but it was not clear to me what exactly he was speaking about).

Miscellaneous experiences

Redeployment

You can't redeploy OFBiz modules (web apps, actually) independently. You have to redeploy the whole EAR. This is because we use a totally exploded EAR (with WARs exploded inside); more information on the Geronimo ML.

Remote deployment (not recommended)

Remote deployment using the --inPlace option (for a totally exploded EAR with exploded, and only exploded, WARs inside, as OFBiz is deployed above) is only possible if you exactly replicate the deployed directory structure on both the client and the server machine. If you are on Windows you must even replicate the drive; more information on the Geronimo ML. I have opened a Jira issue for this. Also beware of heterogeneous environments (Windows to Linux, for instance), as I did not run any such tests. This solution was developed by Les7arts as a consulting service for Iwoot.
https://cwiki.apache.org/confluence/display/OFBIZ/Geronimo+and+IBM+Websphere+Application+Server+Community+Edition
Implement count function

Having discussed the algorithm for our counting function, let's try to implement it in another context. Imagine a class has completed a short online quiz to test their knowledge. Marks are out of 10, and you want to know how many learners got top marks. Wouldn't it be good to get a computer to analyse the data for you?

- First let's generate a set of random scores to represent the class. We'll use the `randint` function, from Python's `random` library, which takes a minimum and maximum number for the values to be generated.

```python
from random import randint

scores = []
for x in range(0, 30):
    scores.append(randint(0, 10))  # Generate a random number from 0 to 10 and append to scores
print(scores)
```

Now we have a list containing 30 learners' quiz scores between 0 and 10.

- Next we want to know how many of those learners achieved a top score of 10. To count the number of 10s, we need to iterate over the list, checking to see if each score is equal to 10. When we find a 10, we need to increment a `tens` variable that keeps a count of how many we have found. Add the following to your existing code.

```python
tens = 0  # Initialise a variable for counting scores of ten
for score in scores:
    if score == 10:
        tens += 1
print("{0} learners got top marks".format(tens))
```

Tip: The line `tens += 1` is short for `tens = tens + 1`. It's such a common thing to do that Python and other programming languages have a shorthand for it.

Challenge

Counting the number of occurrences in a list is a common task, so it makes sense to create a function that can check for any item in a list. The function would need to take two parameters: the target item that we want to count occurrences of, and the list of items. Can you add a generic count function to this program so that it prints out the number of learners that scored top marks?
```python
import random

#
# Add your count function here
#

scores = []
for x in range(0, 30):
    scores.append(random.randint(0, 10))
print(scores)

top_scorers = count(10, scores)  # Count function called here
print("{0} learners got top marks".format(top_scorers))
```

See if you can use your function to count other items in lists.

- What about counting occurrences of a given item in a list of strings?
- Can you count numbers in a particular range, e.g. less than 3?
- Can you count vowels in a word?

In the next step, you'll have the chance to share your code and suggestions.
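One possible solution to the challenge (a sketch — try writing your own before peeking). Because `count` compares items generically, the same function also answers the extension questions about strings and vowels:

```python
import random

def count(target, items):
    """Return how many times target appears in items.

    Works for any comparable items: numbers, strings, etc.
    """
    occurrences = 0
    for item in items:
        if item == target:
            occurrences += 1
    return occurrences

scores = []
for x in range(0, 30):
    scores.append(random.randint(0, 10))
print(scores)

top_scorers = count(10, scores)
print("{0} learners got top marks".format(top_scorers))

# The same function handles the extension questions, e.g. vowels in a word:
vowel_count = sum(count(v, "programming") for v in "aeiou")
```

Because strings are iterable, `count("a", "banana")` works without any changes to the function.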
https://www.futurelearn.com/courses/programming-102-think-like-a-computer-scientist/0/steps/53105
Getting Started

Get up and running and learn the basic Pupil workflow. Make sure there is space between the headset frame and your forehead.

Good and bad eye video

Before calibrating, be sure to check that your eyes are well positioned for a robust eye tracking performance. For more details check out Pupil Headset Adjustments.

- Start recording: Press the r key on your keyboard or press the circular R button in the left hand side of the world window.
- The elapsed recording time will appear next to the R button.
- Stop recording: Press the r key on your keyboard or press the circular R button again.

Player Workflow

Use Pupil Player to visualize data recorded with Pupil Capture and export videos of visualizations and datasets for further analysis.

1. Open Pupil Player

Now that you have recorded some data, you can play back the video and visualize gaze data, marker data, and more.

Visualize

Player comes with a number of plugins. Plugins are classified by their use-case. Visualization plugins can be additive. This means that you can add multiple instances of a plugin to build up a visualization.

World Camera Lenses

The high speed 2D world camera comes with two lenses: a 60 degree FOV lens (shown on the left) and a wide angle 100 degree FOV lens (shown on the right). The world camera lenses are interchangeable, so you can swap between the two lenses provided for normal or wide angle FOV.

Arm Extender

If you need to adjust the eye cameras beyond the built in adjustment range, you can use the orange arm extenders that are shipped with your Pupil headset. Unplug your eye camera. Slide the existing eye camera arm off the headset. Slide the arm extender onto the triangular mount rail on the headset frame. Slide the camera onto the extended mount rail. Plug the camera back in. The eye camera arm extender works for all existing 120 and 200hz systems.

Nose Pads

All Pupil headsets come with 2 sets of nose pads. You can swap the nose pads to customize the fit.
Pupil Headset Adjustments

A lot of design and engineering thought has gone into getting the ergonomics of the headset just right. It is designed to fit snugly, securely, and comfortably. The headset and cameras can be adjusted to accommodate a wide range of users. To ensure a robust eye tracking performance, make sure all the cameras are in focus with a good field of view of your eyes.

Slide Eye Camera - The eye camera arm slides in and out of the headset frame. You can slide the eye camera arm along the track.

Rotate World Camera - You can rotate the world camera up and down to align with your FOV.

Rotate Eye Camera - The eye camera arm is connected to the eye camera via the ball joint. You can rotate the eye camera about its ball joint.

Ball Joint Set Screw - You can adjust the set screw to control the movement of the eye camera about the ball joint. We recommend setting the set screw so that you can still move the eye camera by hand, but not so loose that the eye camera moves when moving the head. You can also tighten the set screw to fix the eye camera in place.

Focus Cameras

No focus 200hz Eye Camera - 200hz eye cameras do not need to be focused, and can not be focused. The lens of the 200hz eye camera is fixed in place with glue. Twisting the lens will risk breaking the mount.

Focus 120hz Eye Camera - If you have a 120hz eye camera, make sure the eye camera is in focus. Twist the lens focus ring of the eye camera with your fingers or lens adjuster tool to bring the eye camera into focus.

Focus World Camera - Set the focus for the distance at which you will be calibrating by rotating the camera lens.

HTC Vive Add-On

Add eye tracking powers to your HTC Vive with our 120hz binocular eye tracking add-on. This section will guide you through all steps needed to turn your HTC Vive into an eye tracking HMD using a Pupil Labs eye tracking add-on: install the Vive add-on, then set up the USB connection and camera IDs.
HoloLens Add-On

Add eye tracking powers to your HoloLens with our 200hz eye tracking add-ons. Follow the instructions in the video to install your add-on.

Windows Driver Installation

If you are using Windows, you will need to install drivers for your cameras. Please refer to the instructions here. The add-on cameras were developed as part of the whole headset and carry the revision number of the headset they were released with.

Capture Window

The Capture window is the main control center for Pupil Capture. It displays the live video feed from the Pupil headset.

- Graphs - This area contains performance graphs. By default the graphs CPU, FPS, and pupil algorithm detection confidence will be displayed. You can control graph settings with the System Graphs plugin.
- Hot keys - This area contains clickable buttons for plugins.
- Menu - This area contains settings and contextual information for each plugin.
- Sidebar - This area contains clickable buttons for each plugin. System plugins are loaded at the top and user added plugins are added below the horizontal separator.

Capture Selection

By default Pupil Capture will use Local USB as the capture source. If you have a Pupil headset connected to your machine you will see its cameras listed.

- RealSense 3D - select this option if you are using an Intel RealSense 3D camera as your scene camera. Read more in the RealSense 3D section.

After switching to a different capture source, you can click the Start with default devices button. This will automatically select the correct sensor and start capturing for the corresponding world and eye windows. Or, you can manually select the capture source to use from the world and eye windows.

Pupil Detection

Pupil's algorithms automatically detect the participant's pupil. With the 3d detection and mapping mode, Pupil uses a 3d model of the eye(s) that constantly updates based on observations of the eye. This enables the system to compensate for movements of the headset - slippage.
To build up an initial model, you can just look around your field of view.

Fine-tuning Pupil Detection

As a first step it is recommended to check the eye camera resolution, as some parameters are resolution dependent. For fast and robust pupil detection and tracking we recommend using the default resolution settings. For 200hz eye cameras the default resolution is set to 192x192 pixels. If you have an older 120hz eye camera, the default is 320x240 pixels. In Pupil Capture you can view a visualization of the pupil detection algorithm in the eye windows. For fine-tuning switch to this mode: General Settings > Algorithm Mode.

Sensor Settings

Resolution: 192x192 for 200hz eye cameras. 320x240 for 120hz eye cameras.

Absolute Exposure Time: Make sure that there is a high contrast between the pupil and its surroundings. In order to do so, you might need to increase the brightness of the images. Start by testing one of these Absolute Exposure Time values: 64, 94, 124.

Pupil Detector 2D/3D

Pupil Min/Max: Change to General > Algorithm Mode. The two red circles represent the min and max pupil size settings. The green circle visualizes the current apparent pupil size. Set the min and max values so the green circle (current pupil size) is within the min/max range for all eye movements.

Intensity Range: Defines the minimum "darkness" of a pixel to be considered as the pupil. The pixels considered for pupil detection are visualized in blue within the Algorithm Mode. Try to minimize the range so that the pupil is always fully covered while having as little leakage as possible outside of the pupil. Be aware that this is dependent on the brightness and therefore has a strong interaction with UVC Source > Sensor Settings > Absolute Exposure Time.

Calibration Process

Pupil Headset comes in a variety of configurations. Calibration can be conducted with a monocular or binocular eye camera setup.
Before starting calibration, ensure that:

- Your pupil is properly detected by the eye camera
- The world camera is in focus

Calibration Methods

Ensure that eye(s) are robustly detected and that the headset is comfortable for the participant.

Manual Marker Calibration

This method is done with an operator and a subject. It is suited for midrange distances and can accommodate a wide field of view. The operator will use a printed calibration marker like the one shown in the video (Pupil Calibration Marker v0.4 and Pupil Calibration Stop Marker v0.4). Download Pupil Labs Calibration Marker v0.4 to print or display on a smartphone/tablet screen. Press c on your keyboard or click the blue circular C button to start calibration, then show the calibration marker at different points within the subject's field of view.

Single Marker Calibration

Calibrate using a single marker displayed on screen or a hand held marker. Gaze at the center of the marker and move your head in a spiral motion. You can also move your head in other patterns. This calibration method enables you to quickly sample a wide range of gaze angles and cover a large range of your FOV.

- Select Single Marker Calibration. Press c on your keyboard or click the blue circular C button on the left hand side of the world window to start calibration.
- Look at the center of the marker.
- Slowly move your head while gazing at the center of the marker. We have found that a spiral pattern is an efficient way to cover a large area of the FOV.
- Press the c button on your keyboard or show the stop marker to stop calibrating.

Natural Features Calibration

Direct the subject to look at salient features within their field of vision. Note - pick a salient feature in the environment.

- Click on that point in the world window.
- Data will be sampled.
- Repeat until you have covered the subject's field of view (generally about 9 points should suffice).
- Press c on your keyboard or click the blue circular C button on the left hand side of the world window to stop calibration.

Fingertip Calibration

Calibrate using your fingertip!
We have found that the easiest way to calibrate with your fingertip is as follows:

- Press c on your keyboard or click the blue circular C button on the left hand side of the world window to start calibration.
- Extend your arm and hold your index finger still at the center of the field of view of the world camera.
- Move your head, for example horizontally and then vertically, while gazing at your index fingernail.
- Show five fingers or press c on your keyboard or click the blue circular C button on the left hand side of the world window to stop calibration.

This calibration method enables you to quickly sample a wide range of gaze angles and cover a large range of your FOV within 10 seconds. A convolutional neural network (CNN) is implemented for the fingertip detection: first, a hand detector, based on MobileNet and SSD, searches for a hand in the image. The position of the fingertip is then located by a fingertip detector, adapted from YOLSE and Unet.

Notes on calibration accuracy

In 2D mode, you should easily be able to achieve tracking accuracy within the physiological limits (sub 1 visual degree). Using the 3d mode you should achieve 1.5-2.5 deg of accuracy.

- Calibration accuracy can be visualized with the Accuracy Visualizer plugin. If the Accuracy Visualizer plugin is loaded, it will display the residual between reference points and matching gaze positions that were recorded during calibration.
- Gaze prediction accuracy can be estimated with an accuracy test. Start the accuracy test by running a normal calibration procedure, but press the T button in the world window instead of the C button. After completing the test, the plugin will display the error between reference points and matching gaze positions that were recorded during the accuracy test.

Recording

Press r on your keyboard or press the blue circular R button to start recording.

Plugin Manager

Open the Plugin Manager menu on the right. It lists all available plugins.
Click the button next to the plugin's name to turn the plugin on or off.

Third-party plugins

You can easily load third party plugins. Third party plugins will appear in the Pupil Capture or Pupil Player plugin list. Copy the plugin to the plugins folder within the pupil_capture_settings or pupil_player_settings folder.

Fixation Detector

The online fixation detector classifies fixations based on the dispersion-duration principle. Fixations are used by the screen and manual marker calibrations to speed up the procedure. A fixation is visualized as a yellow circle around the gaze point that is shown in the Pupil Capture world window. You can find more information in our dedicated fixation detector section.

Network plugins

Pupil Capture has built-in data broadcast functionality. It is based on the network library ZeroMQ and follows the PUB-SUB pattern. Data is published with an affiliated topic. Clients need to subscribe to their topic of interest to receive the respective data. To reduce network traffic, only data with at least one subscription is transferred.

Pupil Remote

Pupil Remote is the plugin that functions as the entry point to the broadcast infrastructure. It also provides a high level interface to control Pupil Capture over the network (e.g. start/stop a recording).

- Load the Pupil Remote plugin from the General sub-menu in the GUI (it is loaded by default).
- It will automatically open a network port at the default Address.
- Change the address and port as desired.
- If you want to change the address, just type in the address after the tcp://

Pupil Groups

Pupil Groups can help you to collect data from different devices and control an experiment with multiple actors (data generators and sensors), or use more than one Pupil device simultaneously:

- Load the Pupil Groups plugin from the General sub-menu in the GUI.
- Once the plugin is active it will show all other local network Pupil Group nodes in the GUI - Furthermore, actions like starting and stopping a recording on one device will be mirrored instantly on all other devices. Pupil Time Sync If you want to record data from multiple sensors (e.g. multiple Pupil Capture instances) with different sampling rates it is important to synchronize the clock of each sensor. You will not be able to reliably correlate the data without the synchronization. The Pupil Time Sync protocol defines how multiple nodes can find a common clock master and synchronize their time with it. The Pupil Time Sync plugin is able to act as clock master as well as clock follower. This means that each Pupil Capture instance can act as a clock reference for others as well as changing its own clock such that it is synchronized with another reference clock. Pupil Time Sync nodes only synchronize time within their respective group. Be aware that each node has to implement the same protocol version to be able to talk to each other. Frame Publisher The Frame Publisher plugin broadcasts video frames from the world and eye cameras. There is a pupil-helper example script that demonstrates how to receive and decode world frames. Remote Recorder The Pupil Mobile app can be controlled via Pupil Capture when connected. This includes changing camera and streaming settings. The Remote Recorder plugin extends this list with the possibility to start and stop recordings that are stored in the phone. Surface Tracking The Surface Tracker plugin allows you to define planar surfaces within your environment to track areas of interest (AOI). Surfaces are defined using square markers. Markers You can generate markers with this script, or download the image on the right. Markers can be printed on paper, stickers, or displayed on a screen. The design of our markers was greatly inspired by the ArUco marker tracking library. 
However, our markers use a 5x5 grid instead of the 7x7 grid ArUco uses. This allows us to make smaller markers that can still be detected well. The 5x5 design allows for a total of 63 unique markers.

Preparing your Environment

A surface can be based on one or more markers. The markers need to be placed in close proximity to or within your desired AOI. If your AOI is, for example, a computer monitor, you could display your markers in the corners of the screen or place them somewhere on the bezel. If your AOI is a magazine page, you could place the markers in the corners of the page, or anywhere else on the page where they are not occluding the content. When placing your markers please follow these guidelines:

- All markers of a surface need to lie within the same two dimensional plane.
- An individual marker can be part of multiple surfaces.
- The markers used need to be unique, i.e. you may not use multiple instances of the same marker in your environment.
- Using more markers to define a surface yields greater robustness in the tracking of that surface.
- Surfaces defined with more than 2 markers are detected even if some markers lie outside of the camera image or are obscured.

Defining a Surface

Surfaces can be defined with Pupil Capture in real-time, or post-hoc with Pupil Player. In both cases the necessary steps are as follows:

- Prepare your environment as described above.
- Turn on the Surface Tracker plugin.
- Make sure the camera is pointing at your AOI and the markers are well detected. In the post-hoc case (using Pupil Player) seek to a frame that contains a good view of your desired AOI.
- Add a new surface by clicking the Add surface button.
- Give your surface a name.
- Click the edit surface button and move the corners of your surface into the desired position. In the real-time case (using Pupil Capture) this is much easier if you freeze the video by clicking the Freeze Scene button.
- If markers have been erroneously added or left out, click the add/remove markers button and then click the corresponding marker to add/remove it from your surface.

Reusing Surface Definitions

Your surfaces are automatically saved in a file called surface_definitions in the pupil_capture_settings directory. If you restart Pupil Capture or the Surface Tracker plugin, your surface definitions from previous sessions will be loaded. The surface_definitions file is copied into each recording folder as well, so you will have access to your surface definitions in Pupil Player. You can copy & paste this file to move definitions from one session or recording to another.

Gaze Heatmaps for Surfaces

You can display gaze heatmaps for each surface by enabling Show Heatmap in the Surface Tracker menu. Two heatmap modes are supported:

- Gaze within each surface: Visualizes the distribution of gaze points that lie within each surface.
- Gaze across different surfaces: Color codes the surfaces to visualize the amount of time spent gazing on each surface in relation to other surfaces.

Red color represents a lot of gaze points or time spent. Blue color represents few gaze points or little time spent. The smoothness of the heatmap in Gaze within each surface mode can be set using the Heatmap Smoothness slider, which will effectively change the bin size of the underlying histogram. In the online case the heatmap is computed over the most recent data. The exact time window to consider can be set using the Gaze History Length field.

Further Functionality

- You can click the Open Surface in Window button to open a view of the surface in a separate window. Gaze positions on the surface will be visualized in this window in real-time.
- Streaming Surfaces with Pupil Capture - Detected surfaces as well as gaze positions relative to the surface are broadcast under the surface topic. Check out this video for a demonstration.
- Surface Metrics with Pupil Player - if you have defined surfaces, you can generate surface visibility reports or gaze count per surface. See our blog post for more information.

Blink Detection

The pupil detection algorithm assigns a confidence value to each pupil datum. It represents the quality of the detection result. While the eye is closed the assigned confidence is very low. The Blink Detection plugin makes use of this fact by defining a blink onset as a significant confidence drop - or a blink offset as a significant confidence gain - within a short period of time. The plugin creates a blink event for each event loop iteration in the following format:

```python
{  # blink datum
    'topic': 'blink',
    'confidence': <float>,  # blink confidence
    'timestamp': <timestamp float>,
    'base_data': [<pupil positions>, ...],
    'type': 'onset' or 'offset'
}
```

The Filter length is the time window's length in which the plugin tries to find such confidence drops and gains. The plugin fires the above events if the blink confidence within the current time window exceeds the onset or offset confidence threshold. Setting both thresholds to 0 will always trigger blink events, even if the confidence is very low. This means that onsets and offsets do not necessarily appear as pairs but in waves.

Audio Capture

The Audio Capture plugin provides access to a selected audio source for other plugins and writes its output to the audio.mp4 file during a recording. It also writes the Pupil Capture timestamp for each audio packet to the audio_timestamps.npy file. This way you can easily correlate single audio packets to their corresponding video frames. Audio is recorded separately from the video in Pupil Capture. You can play back audio in sync with video in Pupil Player. Audio is automatically merged with the video when you export a video using Pupil Player.

Annotations

The Annotation Capture plugin allows you to mark timestamps with a label - sometimes referred to as triggers.
These labels can be created by pressing their respective hotkey or by sending a notification with the subject annotation. This is useful to mark external events (e.g. "start of condition A") within the Pupil recording. The Annotation Player plugin is able to correlate and export these events as well as add new ones.

Remote Annotations

You can also create annotation events programmatically and send them using the IPC, or by sending messages to the Pupil Remote interface. Here is an example annotation notification:

```python
{'subject': "annotation",
 'label': "Hi this is my annotation 1",
 'timestamp': [set a correct timestamp as float here],
 'duration': 1.0,
 'source': 'a test script',
 'record': True}
```

Camera Intrinsics Estimation

This plugin is used to calculate camera intrinsics, which will enable one to correct camera distortion. Pupil Capture has built in, default camera intrinsics models for the high speed world camera and the high resolution world camera. You can re-calibrate your camera and/or calibrate a camera that is not supplied by Pupil Labs by running this calibration routine. We support two different distortion models: radial distortion and fisheye distortion. For cameras with a FOV of 100 degrees or greater (e.g. the high speed world camera) the fisheye distortion model usually performs better; for cameras with a smaller FOV (e.g. the high resolution world camera) we recommend the radial distortion model.

- Select the correct Distortion Model.
- Click on show pattern to display the pattern.
- Resize the pattern to fill the screen.
- Hold your Pupil headset and aim it at the pattern.
- With the world window in focus, press c on your keyboard or the circular C button in the world window to detect and capture a pattern.
- Data will be sampled and displayed on the screen as a border of the calibrated pattern.
(Note - make sure to move your headset to different angles and try to cover the entire FOV of the world camera for best possible calibration results.)

- Repeat until you have captured 10 patterns.
- Click on show undistorted image to display the results of the camera intrinsics estimation. This will display an undistorted view of your scene. If well calibrated, straight lines in the real world will appear as straight lines in the undistorted view.

Player Window

Let's get familiar with the Player window. The Player window is the main control center for Pupil Player. It displays the recorded video feed from a Pupil Capture recording.

- Graphs - This area contains performance graphs. By default the graphs CPU, FPS, and pupil algorithm detection confidence will be displayed. You can control graph settings with the System Graphs plugin.
- Frame Stepping - You can use the arrow keys on your keyboard or the << >> buttons to advance one frame at a time.
- Trimming - Drag either end of the timeline to set beginning and ending trim marks. The trim section marks directly inform the section of video/data to export.
- Menu - This area contains settings and contextual information for each plugin.
- Sidebar - This area contains clickable buttons for each plugin. System plugins are loaded at the top and user added plugins are added below the horizontal separator.

Visualization plugins such as the Offline Surface Tracker and the Vis plugins draw visualizations based on the gaze positions. Here is an example demonstrating Vis Light Points with a falloff of 73.

Pupil Data And Post-hoc Detection

- Offline (post-hoc) Pupil Detection and Gaze Mapping
- Offline (post-hoc) Gaze Mapping With Manual Reference Locations
- Use Offline (post-hoc) Calibration For Another Recording
- Offline (post-hoc) Gaze Mapping Validation

By default, Player starts with the Pupil From Recording plugin that tries to load pupil positions that were detected and stored during a Pupil Capture recording. Alternatively, one can run the pupil detection post-hoc.
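As a toy illustration of the radial distortion model mentioned in the camera intrinsics section above, the sketch below applies Brown-Conrady-style radial distortion to normalized image coordinates. This is not Pupil's actual implementation — real intrinsics estimation also recovers a camera matrix and tangential terms, and wide FOV lenses use a fisheye model instead:

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Toy Brown-Conrady-style radial distortion of normalized image
    coordinates (x, y). k1/k2 are the first two radial coefficients;
    this only illustrates the radial part of a full intrinsics model."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

With k1 = 0 the mapping is the identity; a positive k1 pushes points away from the image center, and the effect grows with distance from the center — which is why straight lines only look straight after undistortion.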
Offline Pupil Detector

The Offline Pupil Detector plugin can be used with any dataset where eye videos were recorded. The plugin tries to load the eye videos, and runs the pupil detection algorithm in separate processes. This plugin is especially relevant for recordings made with Pupil Mobile, because Pupil Mobile does not perform any pupil detection or gaze estimation on the Android device. This plugin is available starting with Pupil Player v0.9.13. The Detection Method selector sets the detection algorithm to either 2d or 3d detection (see the section on Pupil Detection for details). The Redetect button restarts the detection procedure. You can use the Offline Pupil Detector plugin to debug, improve, and gain insight into the pupil detection process.

Gaze Data And Post-hoc Calibration

By default, Player starts with the Gaze From Recording plugin that tries to load gaze positions that were detected and stored during a Pupil Capture recording. Alternatively, one can run the gaze mapping process post-hoc with the Offline Calibration plugin.

Analysis Plugins

These plugins operate on the gaze data for analysis and visualizations. Files generated by the Offline Surface Detector:

- <surface_name>_<surface_id>.png - Heatmap of gaze positions on the surface aggregated over the entire export.
- fixations_on_surface_<surface_name>_<surface_id>.csv - A list of fixations that have occurred on the surface.
- gaze_positions_on_surface_<surface_name>_<surface_id>.csv - A list of gaze positions that have occurred on the surface.
- srf_positons_<surface_name>_<surface_id> - List of surface positions in 3D. The position is given as the 3D pose of the surface in relation to the current position of the scene camera. m_to_screen is a matrix transforming coordinates from the camera coordinate system to the surface coordinate system. m_from_screen is the inverse of m_to_screen.

Fixation Detector

The offline fixation detector calculates fixations for the whole recording.
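The dispersion-duration principle that both fixation detectors rely on can be sketched in a few lines. This is an I-DT-style illustration, not Pupil's actual implementation; the default thresholds here are arbitrary stand-ins:

```python
def dispersion(window):
    """Spread of gaze samples: (max - min) in x plus (max - min) in y."""
    xs = [s[1] for s in window]
    ys = [s[2] for s in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=0.02, min_duration=0.1):
    """I-DT style detection over (timestamp, x, y) samples sorted by
    timestamp. Returns a list of (start_time, end_time) fixations."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow a window until it spans at least min_duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Extend the window while the samples stay tightly clustered.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            fixations.append((samples[i][0], samples[j][0]))
            i = j + 1
        else:
            i += 1
    return fixations
```

Samples that stay within the dispersion threshold for at least the minimum duration become one fixation; a saccade to a new location breaks the window and starts the search again.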
The menu gives feedback about the progress of the detection and how many fixations were found; see the fixation detector section for details.

Head Pose Tracking

This plugin uses fiducial markers (apriltags) to build a 3d model of the environment and track the headset's pose within it. See the Head Pose Tracker tutorial and the detailed data format section for more information about the exported data.

Raw Data Exporter

The Raw Data Exporter (see the Visualization Plugins section) exports pupil and gaze data to .csv files and is active by default. Pupil integration for Unity3d can be found here.

Pupil Mobile

Pupil Mobile is a companion app to Pupil Capture and Pupil Service. It is currently in public beta.

Introducing Pupil Mobile

Pupil Mobile enables you to connect your Pupil eye tracking headset to your Android device via USB-C. You can preview video and other sensor streams on the Android device and stream video data over a WiFi network to other computers (clients) running Pupil Capture. It is seamlessly integrated with Pupil Capture and Pupil Service.

Home Screen

The Home screen is the main control center for Pupil Mobile. It displays a list of available sensors. Click any sensor for a preview.

- Sensors - This area contains available sensors: Pupil headset cameras along with other sensors connected to or built into the Android device, like audio and IMU.
- Record - Click the Record button to save video and other sensor data locally on the Android device.
- General Settings - Main settings menu for the Pupil Mobile app.

Sensor Preview

Preview live video feed from your Pupil headset and other available sensors. Sensor preview windows will automatically close and take you back to the Home screen in order to conserve battery.

- Sensor settings - Settings for the sensor. For cameras you can set frame rate, exposure, white balance, and many more parameters.
- Sensor name and recording status - This displays the sensor name and a dot displaying the recording status of this sensor.
- Preview stream - A preview of sensor data.
General Settings

Main settings menu for the Pupil Mobile app and information about the Android device.

- Device name - Text input field to name your device. This is the device name that will appear in Pupil Capture.
- Close/Quit app - Press this button to close the app. Pupil Mobile runs a service in the background. This enables the app to continue running even when your screen is off. Therefore, just swiping away the app view will not close the app.
- Save directory - Select the location where recordings should be saved. By default recordings are saved on Android's built in storage. You can also save to an SD card, if available.

Recordings Screen

View all the datasets that were recorded on your Android device.

- Recording folder - A directory containing all of your recordings.
- Delete - Permanently delete recording files from the device. See the Transfer Recordings section below on how to transfer recordings from your phone to your computer.

Switch Views

Pupil Mobile is designed for efficient navigation. Swipe left or right for quick access to other views.

- Swipe - Swipe left or right to switch between views. Swipe right from the home screen to go to the recording view. Swipe left from the home screen to the sensor preview views.

Streaming To Subscribers

Pupil Mobile devices on the same local WiFi network are automatically detected by Pupil Capture. To subscribe to a Pupil Mobile device in Pupil Capture, go to Capture Selection and select Pupil Mobile as the capture source.

WiFi Bandwidth & Network

Make sure you have a good WiFi network connection and that it's not saturated. The quality of your WiFi connection will affect the sensor streams to your subscribers.

NDSI Communication Protocol

The communication protocol is named NDSI; it is completely open. A reference client for Python exists here.

Transfer Recordings

You will have to manually transfer recordings to your computer to open them in Player.
There are two different methods to achieve that:

SD Card File Transfer

This method requires:

- that you set the Save directory to SD card in the main settings
- an SD card reader that can be connected to your computer

Once you have finished the recording, remove the SD card from your phone, insert it into the SD card reader, and connect it to your computer. The SD card should show up on your computer as if it were a USB stick. You can find all recordings in the Pupil Mobile folder on the SD card. Copy the recording folders to a directory of your choice on your computer. Afterwards, you can remove the SD card from the computer and open the copied recordings using Pupil Player.

Direct Phone Connection

- Connect your phone via USB to your computer.
- A notification on your phone should pop up: Android System - USB charging this device.
- Double tap the notification to open the options. Select Use USB to transfer files.
- Your phone should show up on your computer as if it were a USB stick.
- The recordings are saved in two different locations depending on the Save directory setting chosen before the recording:
  - Default: Internal storage/Movies/Pupil Mobile
  - SD Card: SD card/Pupil Mobile
- Copy the recording folders to a directory of your choice on your computer.
- Disconnect the phone from your computer.
- Open the copied recordings in Pupil Player.

Download App

The app is free. You can download it in the Google Play Store.

Supported Hardware

- Moto Z2 Play
- Google Nexus 6p, Nexus 5x
- OnePlus 3, 3T, 5, 5T
- potentially other USB-C phones (untested)

Bugs & Features

I found a bug or need a feature! Please check out existing issues or open a new issue at the Pupil Mobile repository. This app is in Alpha state, help us make it better.

Experiments

I want to use this for my experiments in the field. Feel free to do so, but do not rely on the app to work all the time! Many features and environments are still untested. If you have trouble, please open an issue.
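The final copy step of either method can also be scripted. A minimal sketch (paths and folder names are hypothetical, assuming recordings live in a Pupil Mobile directory as described above):

```python
import shutil
from pathlib import Path

def copy_recordings(source_dir, dest_dir):
    """Copy every recording folder found in source_dir into dest_dir.

    source_dir would be e.g. the 'Pupil Mobile' folder on the SD card;
    both arguments are hypothetical paths for illustration.
    """
    source_dir, dest_dir = Path(source_dir), Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for rec in sorted(p for p in source_dir.iterdir() if p.is_dir()):
        # copy one recording folder, preserving its internal structure
        shutil.copytree(rec, dest_dir / rec.name)
        copied.append(rec.name)
    return copied
```

After copying, the destination folders can be opened directly in Pupil Player.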
Data Format

The data format for Pupil recordings is 100% open. In this section we will first describe how we handle high level concepts like coordinate systems, timestamps and synchronization, and then describe in detail the data format present in Pupil recordings.

Camera Coordinate System

Some of the raw data (such as the estimate of the 3D gaze point) is specified in the three-dimensional world camera coordinate system. The origin of this coordinate system is in the projection center located behind the midpoint of the 2D image plane. The z-axis points forward along the optical axis while the x-axis points to the right and the y-axis downwards.

Timestamps

All indexed data - still frames from the world camera, still frames from the eye camera(s), gaze coordinates, pupil coordinates, etc. - have timestamps associated with them.

Detailed Data Format

Every time you click record in Pupil Capture or Pupil Mobile, a new recording is started and your data is saved into a recording folder. You can use Pupil Player to playback Pupil recordings, add visualizations, and export in various formats.

Access to raw data

Note that the raw data before processing with Pupil Player is not immediately readable from other software (the raw data format is documented in the developer docs). Use the 'Raw Data Exporter' plugin in Pupil Player to export .csv files that contain all the data captured with Pupil Capture. Exported files will be written to a subfolder of the recording folder called exports. The following files will be created by default in an export:

- export_info.csv - Meta information on the export, containing e.g. the export date or the data format version.
- pupil_positions.csv - A list of all pupil datums. See below for more information.
- gaze_positions.csv - A list of all gaze datums. See below for more information.
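Because all indexed data shares one clock, streams can be correlated by timestamp. A minimal sketch (my own helper, not part of Pupil) that finds, for each gaze timestamp, the index of the closest world-frame timestamp, assuming both lists are sorted ascending:

```python
from bisect import bisect_left

def closest_index(sorted_ts, t):
    """Index of the value in sorted_ts that is closest to t."""
    i = bisect_left(sorted_ts, t)
    if i == 0:
        return 0
    if i == len(sorted_ts):
        return len(sorted_ts) - 1
    # pick whichever neighbour of t is nearer
    return i if sorted_ts[i] - t < t - sorted_ts[i - 1] else i - 1

def match_gaze_to_frames(world_ts, gaze_ts):
    """For each gaze timestamp, return the index of the closest world frame."""
    return [closest_index(world_ts, t) for t in gaze_ts]
```

For example, with world frames at roughly 30 Hz, `match_gaze_to_frames([0.0, 0.033, 0.066], [0.001, 0.04, 0.07])` assigns each gaze datum to its nearest frame.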
- pupil_gaze_positions_info.txt - Contains documentation on the contents of pupil_positions.csv and gaze_positions.csv
- world_viz.mp4 - The exported section of world camera video.

If you are using additional plugins in Pupil Player, these might create other files. Please check the documentation of the respective plugin for the used data format.

pupil_positions.csv

This file contains all exported pupil datums. Each datum will have at least the following keys:

Depending on the used gaze mapping mode, each datum may have additional keys. When using the 2D gaze mapping mode the following keys will be added: When using the 3D gaze mapping mode the following keys will additionally be added:

gaze_positions.csv

This file contains a list of all exported gaze datums. Each datum contains the following keys: When using the 3D gaze mapping mode the following keys will additionally be available:

World Video Stream

An mpeg4 compressed video stream of the world view will be created when using the setting more CPU smaller file. You can compress the videos afterwards using ffmpeg like so:

cd your_recording
ffmpeg -i world.mp4 -pix_fmt yuv420p world_compressed.mp4; mv world_compressed.mp4 world.mp4
ffmpeg -i eye0.mp4 -pix_fmt yuv420p eye0_compressed.mp4; mv eye0_compressed.mp4 eye0.mp4
ffmpeg -i eye1.mp4 -pix_fmt yuv420p eye1_compressed.mp...

head_pose_tracker_model.csv and head_pose_tracker_poses.csv

head_pose_tracker_model.csv: A list of all markers used to generate the 3d model and the 3d locations of the marker vertices.

head_pose_tracker_poses.csv: The headset's pose (rotation and translation) within the 3d model coordinate system for each recorded world frame. By default, the location of the first marker occurrence will be used as the origin of the 3d model's coordinate system. In the plugin's menu, you can change the marker that is being used as the origin.
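For downstream analysis it is common to discard low-confidence samples from these exports. A sketch of how that could look (the column names `timestamp`, `confidence`, `norm_pos_x`, `norm_pos_y` are assumptions about the export layout for illustration; check pupil_gaze_positions_info.txt in your own export for the actual columns):

```python
import csv
import io

def high_confidence_rows(csv_text, threshold=0.8):
    """Parse an exported .csv and keep rows whose confidence >= threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if float(row["confidence"]) >= threshold]

# Tiny fabricated sample in the assumed layout:
sample = """timestamp,confidence,norm_pos_x,norm_pos_y
100.01,0.98,0.51,0.49
100.02,0.35,0.20,0.90
100.03,0.91,0.52,0.48
"""
good = high_confidence_rows(sample)
```

In practice you would read the file with `open(...)` instead of the inline sample; a threshold of 0.8 is a common starting point, not a Pupil requirement.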
Developer Docs

Development Overview

Overview of language, code structure, and general conventions.

Language

Pupil is written in Python 3. The World process detects fixations, tracks surfaces, and more, and records video and data. Most, and preferably all, coordination and control happens within the World process.

The 3D pupil detector extends the 2D pupil datum with additional information. Below you can see the Python representation of a pupil and a gaze datum.

{ # pupil datum
 'topic': 'pupil',
 'method': '3d c++',
 'norm_pos': [0.5, 0.5],  # norm space, [0, 1]
 'diameter': 0.0,  # 2D image space, unit: pixel
 'timestamp': 535741.715303987,  # time, unit: seconds
 'confidence': 0.0,  # [0, 1]
 # 2D ellipse of the pupil in image coordinates
 'ellipse': {  # image space, unit: pixel
   'angle': 90.0,  # unit: degrees
   'center': [320.0, 240.0],
   'axes': [0.0, 0.0]},
 'id': 0,  # eye id, 0 or 1
 # 3D model data
 'model_birth_timestamp': -1.0,  # -1 means that the model is building up and has not finished fitting
 'model_confidence': 0.0,
 'model_id': 1,
 # pupil polar coordinates on 3D eye model. The model assumes a fixed
 # eye ball size. Therefore there is no `radius` key
 'theta': 0,
 'phi': 0,
 # 3D pupil ellipse
 'circle_3d': {  # 3D space, unit: mm
   'normal': [0.0, -0.0, 0.0],
   'radius': 0.0,
   'center': [0.0, -0.0, 0.0]},
 'diameter_3d': 0.0,  # 3D space, unit: mm
 # 3D eye ball sphere
 'sphere': {  # 3D space, unit: mm
   'radius': 0.0,
   'center': [0.0, -0.0, 0.0]},
 'projected_sphere': {  # image space, unit: pixel
   'angle': 90.0,
   'center': [0, 0],
   'axes': [0, 0]}}

Gaze data is based on one (monocular) or two (binocular) pupil positions. The gaze mapper is automatically set up after calibration and maps pupil positions into the world camera coordinate system. The pupil data on which a gaze datum is based can be accessed using the base_data key.
{ # gaze datum
 'topic': 'gaze',
 'confidence': 1.0,  # [0, 1]
 'norm_pos': [0.5238293689178297, 0.5811187961748036],  # norm space, [0, 1]
 'timestamp': 536522.568094512,  # time, unit: seconds
 # 3D space, unit: mm
 'gaze_normal_3d': [-0.03966349641933964, 0.007685562866422135, 0.9991835362811073],
 'eye_center_3d': [20.713998951917564, -22.466222119962115, 11.201474469783548],
 'gaze_point_3d': [0.8822507422478054, -18.62344068675104, 510.7932426103372],
 'base_data': [<pupil datum>]}  # list of pupil data that was used to calculate the gaze

If you have questions, chat with us on the Pupil channel on Discord. See the Intel RealSense 3D instructions if you want to use an Intel RealSense R200 as scene camera. Run the application with python3 main.py.

Linux Dependencies

These installation instructions are tested using Ubuntu 16.04 or higher running on many machines. Do not run Pupil on a VM unless you know what you are doing. We recommend using 18.04 LTS.

Install Linux Dependencies

Let's get started! It's time for apt! Just copy and paste into the terminal and listen to your machine purr.

Ubuntu 18.04

Install dependencies with apt-get, including libtbb-dev.

Install ffmpeg >= 3.2:

sudo apt install -y libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libavresample-dev ffmpeg x264 x265 libportaudio2 portaudio19-dev

Install OpenCV >= 3:

sudo apt install -y python3-opencv libopencv-dev

Ubuntu 17.10 or lower

If you're using Ubuntu <= 17.10, you will need to install OpenCV from source, and install ffmpeg-3 from a different ppa.

Install ffmpeg3 from jonathonf's ppa:

sudo add-apt-repository ppa:jonathonf/ffmpeg-3
sudo apt-get update
sudo apt install -y libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libavresample-dev ffmpeg libav-tools x264 x265 libportaudio2 portaudio19-dev

Install OpenCV from source. The prerequisites for OpenCV to build the python3 cv2.so library are:

- python3 interpreter found
- libpython***.so shared lib found (make sure to install python3-dev)
- numpy for python3 installed.
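The norm_pos values above live in a normalized [0, 1] space. As a hedged sketch (my own helper, not part of Pupil): to draw such a position on a video frame you scale by the frame size; the y flip below assumes the normalized origin is at the bottom left while image origins are at the top left — verify that convention against the coordinate-system docs for your version:

```python
def norm_to_pixels(norm_pos, frame_size):
    """Map an (x, y) position in [0, 1] norm space to pixel coordinates.

    Assumes norm-space origin bottom-left and image origin top-left,
    hence the y flip. This convention is an assumption for illustration.
    """
    nx, ny = norm_pos
    width, height = frame_size
    return (nx * width, (1.0 - ny) * height)
```

For example, `norm_to_pixels((0.5, 0.5), (640, 480))` yields the image center `(320.0, 240.0)`.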
git clone
cd opencv
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=RELEASE -DWITH_TBB=ON -DWITH_CUDA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON ..
make -j2
sudo make install
sudo ldconfig

Turbojpeg

wget -O libjpeg-turbo.tar.gz
tar xvzf libjpeg-turbo.tar.gz
cd libjpeg-turbo-1.5.1
./configure --enable-static=no --prefix=/usr/local
sudo make install
sudo ldconfig

Custom version of libusb

Required for 17.10 and with 200hz cameras only. Otherwise IGNORE!

- Build or download the fixed binary from the release
- Replace the system libusb-1.0.so.0 with this binary:

sudo cp '~/path to your fixed binary/libusb-1.0.so.0' '/lib/x86_64-linux-gnu/libusb-1.0.so.0'

apriltag

git clone
cd apriltag
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j4
sudo make install
sudo ldconfig

Install packages with pip

sudo pip3 install numexpr
sudo pip3 install cython
sudo pip3 install psutil
sudo pip3 install pyzmq
sudo pip3 install msgpack==0.5.6
sudo pip3 install pyopengl
sudo pip3 install pyaudio
sudo pip3 install cysignals
sudo pip3 install git+
sudo pip3 install git+
sudo pip3 install git+
sudo pip3 install git+
sudo pip3 install git+
sudo pip3 install git+
sudo pip3 install git+

3D eye model dependencies

First install:

sudo apt-get install -y libboost-dev
sudo apt-get install -y libboost-python-dev
sudo apt-get install -y libgoogle-glog-dev libatlas-base-dev libeigen3-dev

Ceres

Next we need to install the Ceres library. In Ubuntu 18.04 Ceres is available as a package in the repositories. In older versions it has to be compiled from source. Choose the correct command depending on your version of Ubuntu.

Ubuntu 18.04

sudo apt install -y libceres-dev

Ubuntu <= 17.10

(Optional) Install PyTorch + CUDA and cuDNN. Some bleeding edge features require the deep learning library PyTorch.

Version 1: Without GPU acceleration: Install PyTorch via pip

pip3 install
pip3 install
pip3 install torchvision
Without GPU acceleration some of the features will probably not run in real-time.

Version 2: With GPU acceleration: Install PyTorch via pip

pip3 install torch torchvision

Please refer to the following links on how to install CUDA and cuDNN:

- CUDA
- cuDNN

macOS Dependencies

Install pkg-config, then the following packages with brew:

brew install scipy
brew install libjpeg-turbo
brew install libusb
brew install portaudio
# opencv will install ffmpeg, numpy, and opencv-contributions automatically
# tbb is included by default with brew install opencv
brew install glew
brew install glfw3
# dependencies for 2d_3d c++ detector
brew install boost
brew install boost-python3
brew install ceres-solver

Install libuvc

git clone
cd libuvc
mkdir build
cd build
cmake ..
make && make install

Install apriltag

git clone
cd apriltag
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j4
sudo make install

Python Packages with pip

PyOpenGL, ZMQ, …

pip3 install PyOpenGL
pip3 install pyzmq
pip3 install numexpr
pip3 install cython
pip3 install psutil
pip3 install msgpack
pip3 install pyaudio
pip3 install cysignals
pip3 install torch torchvision
pip3 install git+
pip3 install git+
pip3 install git+
pip3 install git+
pip3 install git+
pip3 install git+
pip3 install git+

That's it, you're done!

Windows Dependencies

System Requirements

We develop the Windows version of Pupil using 64 bit Windows 10. Therefore we can only debug and support issues for Windows 10.

Notes Before Starting

- Work directory - We will make a directory called work at C:\work and will use this directory for all build processes and setup scripts. Whenever we refer to the work directory, it will refer to C:\work. You can change this to whatever is convenient for you, but note that all instructions and setup files refer to C:\work.
- Command Prompt - We will always be using the x64 Native Tools Command Prompt for VS 2017 as our command prompt. Make sure to only use this command prompt. Unlike unix systems, Windows has many possible "terminals" or "cmd prompts".
We are targeting x64 systems and require the x64 command prompt. You can access this cmd prompt from the Visual Studio 2017 shortcut in your Start menu.

- 64bit - You should be using a 64 bit system and therefore all downloads, builds, and libraries should be for x64 unless otherwise specified.
- Windows paths and Python - the path separator in Windows is a backslash \. In Python, this is a special "escape" character. When specifying Windows paths in a Python string you must use \\ instead of \, or use Python raw strings, e.g. r'C:\work'.
- Help - For discussion or questions on Windows head over to our #pupil Discord channel. If you run into trouble please raise an issue on github!

Install Visual Studio

Download Visual Studio 2017 Community version 15.8 from visualstudio.com

- Run the Visual Studio bootstrapper .exe.
- Navigate to the Workloads tab.
- In the Workloads tab, choose Desktop Development with C++. This will install all runtimes and components we need for development. Here is a list of what you should see checked in Desktop development with C++ in the Summary view:
  - VC++ 2017 v141 toolset (x86,x64)
  - C++ profiling tools
  - Windows 10 SDK (10.0.15063.0) for Desktop C++ x86 and x64
  - Visual C++ tools for CMAKE
  - Visual C++ ATL support
  - MFC and ATL support (x86, x64)
  - Standard Library Modules
  - VC++ 2015.3 v140 toolset for desktop (x86, x64)
- Navigate to the Individual Components tab.
- In the Individual Components tab check Git. This will install git on your system. In the Summary panel for Individual Components you should see: Git for Windows
- Click Install

Install 7-Zip

Install 7-zip to extract files.

Install Python

- Download Python x64
- Run the Python installer.
- Check the box Add Python to PATH. This will add Python to your System PATH Environment Variable.
- Check the box Install for all users. This will install Python to C:\Program Files\Python36.

System Environment Variables

You will need to check to see that Python was added to your system PATH variables.
You will also need to manually add other entries to the system PATH later in the setup process. To access your System Environment Variables:

- Right click on the Windows icon in the system tray. Select System.
- Click on Advanced system settings.
- Click on Environment Variables....
- You can click on Path in System Variables to view the variables that have been set.
- You can Edit or Add new paths (this is needed later in the setup process).

Python Wheels

Most Python extensions can be installed via pip. We recommend to download and install the pre-built wheel (*.whl) packages maintained by Christoph Gohlke. (@Gohlke Thanks for creating and sharing these packages!)

Download the most recent version of the following wheels for Python 3.6 x64 systems:

- numpy
- scipy
- boost.python
- cython
- opencv
- pyopengl (do not download pyopengl-accelerate)
- psutil
- pyaudio
- pyzmq
- pytorch
  - For pytorch, select these options: Stable, Windows, Pip, Python 3.6, 9.0.
  - You will be provided with two commands. Run them in the order given to install this wheel.

Open your command prompt with Run as administrator in the directory where the wheels are downloaded.

- Install numpy and scipy before all other wheels.
- Install all wheels with pip install X (where X is the name of the .whl file)
- You can check that libs are installed with python import X statements in the command prompt, where X is the name of the lib.

Python Libs

Open your command prompt and install the following libs:

pip install msgpack
pip install win_inet_pton
pip install pyaudio
pip install git+
pip install git+
pip install git+

Note - cysignals is a dependency on macOS and Linux but not Windows.

Pupil Labs Python Wheels

Download the following Python wheels from Pupil Labs github repos.

pyuvc requires that you download Microsoft Visual C++ 2010 Redistributable from microsoft. The pthreadVC2 lib, which is used by libuvc, depends on msvcr100.dll.
Ceres for Windows

Navigate to your work directory.

git clone --recursive

- Download Eigen 3.3.3
- Unzip Eigen and rename the extracted eigen-eigen-67e894c6cd8f directory to Eigen
- Copy the Eigen directory into ceres-windows
- Copy C:\work\ceres-windows\ceres-solver\config\ceres\internal\config.h to C:\work\ceres-windows\ceres-solver\include\ceres\internal
- Open ceres-2015.sln with Visual Studio 2017 and agree to update to 2017.
- Set configurations to Release and x64
- Right click on libglog_static and Build
- Right click on ceres_static and Build

Boost

Download and install the latest boost version for Windows x64 with a version number matching your Visual Studio 2017 MSVC version.

- For VS 2017 the MSVC version supported by boost is 14.1
- Download boost from sourceforge
- Extract boost to your work directory and name the boost dir boost
- Open C:\work\boost\boost\python\detail\config.hpp with Visual Studio 2017
- Change L108 from define BOOST_LIB_NAME boost_python to define BOOST_LIB_NAME boost_python3
- Save the file and close Visual Studio

The prebuilt boost.python depends on python27.dll. The files from the boost.python package are built with Visual Studio 2015. One solution to this issue is to build boost from source.

- Open your command prompt
- cd to C:\work\boost
- Run bootstrap.bat. This will generate b2.exe.

Change the user config before compiling boost:

- Copy C:\work\boost\tools\build\example\user-config.jam to boost\tools\build\src\user-config.jam.
- Uncomment and edit the following lines in the user-config.jam file according to your MSVC and Python version:
  - using msvc : 14.1 ; in section MSVC configuration
  - using python : 3.6 : C:\\Python36 : C:\\Python36\\include : C:\\Python36\\libs ; in section Python configuration

Build boost.python:

- Open your command prompt and navigate to your work dir
- cd to boost
- b2 --with-python link=shared address-model=64
- The generated DLL and Lib files are in C:\work\boost\stage.
Add Boost libs to your system path

- Add C:\work\boost\stage\lib to your system PATH in your System Environment Variables

Clone the Pupil Repo

- Open a command prompt in your work dir

git clone

Setup pupil_external dependencies

The following steps require you to store dynamic libraries in the pupil_external folder of the cloned repository so that you do not have to add further modifications to your system PATH.

GLEW to pupil_external

- Download GLEW Windows binaries from sourceforge
- Unzip GLEW in your work dir
- Copy glew32.dll to pupil_external

GLFW to pupil_external

- Download GLFW Windows binaries from glfw.org
- Unzip GLFW to your work dir
- Copy glfw3.dll from lib-vc2015 to pupil_external

FFMPEG to pupil_external

- Download FFMPEG v4.0 Windows shared binaries from ffmpeg
- Unzip ffmpeg-shared to your work dir
- Copy the following 8 .dll files to pupil_external: avcodec-58.dll, avdevice-58.dll, avfilter-7.dll, avformat-58.dll, avutil-56.dll, postproc-55.dll, swresample-3.dll, swscale-5.dll

OpenCV to pupil_external

- Download opencv 3.2 exe installer from sourceforge
- Unzip OpenCV to your work dir and rename the dir to opencv
- Copy opencv\build\x64\vc14\bin\opencv_world320.dll to pupil_external

Include pupil_external in PATH variable

- Follow the instructions under the System Environment Variables section above to add a new environment variable to PATH
- Add the following folder: C:\work\pupil\pupil_external
- Restart your computer so that the PATH variable is refreshed

Modify pupil_detectors setup.py

- Open pupil\pupil_src\capture\pupil_detectors

Modify optimization_calibration setup.py

- Open pupil\pupil_src\shared_modules\calibration_routines\optimization_calibration

Start the application

To start either of the applications – Capture, Player, or Service – you need to execute the respective run_*.bat file, i.e. run_capture.bat, run_player.bat, or run_service.bat.
You can also run main.py directly from your IDE, or with the commands python main.py capture, python main.py player, or python main.py service.

Windows Driver Setup

In order to support isochronous USB transfer on Windows, you will need to install drivers for the cameras in your Pupil headset.

Download drivers and tools

- Download the Pupil camera driver installer from the Pupil github repo

Install drivers for your Pupil headset

- Right click PupilDrvInst.exe and run as admin. This will install drivers.

Manual Installation of DIY Camera Drivers

If any issues arise when trying to install drivers for your DIY eye or world cameras, or if you are installing them for the first time, you can try following these instructions:

- Unplug the Pupil Headset from your computer and keep it unplugged until the last step
- Open Device Manager
- Click View > Show Hidden Devices
- Expand the libUSBK devices category and expand the Imaging Devices category within the Device Manager (sometimes a camera may be under the Cameras category)
- Uninstall and delete drivers for all Pupil Cam 1 ID0, Pupil Cam 1 ID1, and Pupil Cam 1 ID2 devices within both the libUSBK and Imaging Devices categories
- Restart your computer
- Start the latest version of Pupil Capture (ensure that you have admin privileges on your machine): General Menu > Restart with default settings
- Plug in the Pupil Headset after Pupil Capture relaunches
- Please wait, drivers should install automatically. You may need to close/cancel automatic Windows driver installation

If the above doesn't work, uninstall all drivers by following steps 1 to 3. You can then try to install libusbK drivers with Zadig as outlined in steps 1-7 in the pyuvc docs.

Windows Pupil Labs Python libs from Source

This section is for Pupil core developers who want to build pyav, pyndsi, pyglui, and pyuvc from source and create Python wheels.
If you just want to run Pupil from source, go back to the Windows Dependencies section and install the prebuilt wheels. This section assumes that you have a Windows development environment set up and all dependencies installed as specified in the Windows Dependencies section. Please also see the Notes Before Starting section for a clarification of terms.

Clone Pupil Labs Python libs

Go to your work dir, open a cmd prompt, and clone the following repos:

git clone --recursive
git clone --recursive
git clone --recursive
git clone --recursive
git clone
git clone
git clone
git clone

Install wheel

In order to create wheels, you will need to install the wheel lib.

pip install wheel

Download FFMPEG Dev

You will need both the .dll files as well as the FFMPEG libs in order to build pyav. You should have already downloaded the FFMPEG shared binaries in the FFMPEG to pupil_external step above.

Download libjpeg-turbo

- Download libjpeg-turbo v1.5.1 from sourceforge
- Open the .exe file for setup and install to a location in your work dir as libjpeg-turbo64
- Add C:\work\libjpeg-turbo\bin to your system PATH

Build pyav

- Copy the following ffmpeg shared libs from ffmpeg-shared\bin to pyav/av: avcodec, avdevice, avfilter, avformat, avutil, postproc, swresample, swscale
- Open a cmd prompt:

python setup.py clean --all build_ext --inplace --ffmpeg-dir=C:\work\ffmpeg-4.0-win64-dev -c msvc

- Replace the ffmpeg-dir with the location of your ffmpeg-dev dir
- You can create a wheel from within this directory with pip wheel .

pyglui from source

- Open a cmd prompt:

python setup.py install

- You can create a wheel from within this directory with pip wheel .
nslr-hmm from source

- Open a cmd prompt
- From inside the cloned nslr repository run: python setup.py install
- From inside the cloned nslr-hmm repository run: python setup.py install

pyndsi from source

- Open a cmd prompt
- Make sure the paths to ffmpeg and libjpeg-turbo are correctly specified in setup.py
- python setup.py install
- You can create a wheel from within this directory with pip wheel .

libusb

- Clone the libusb repo: git clone
- Open libusb_2017.sln with MSVC 2017
- Set to Release and x64 (libusb-1.0 (static))
- Build solution

libuvc

- Make a dir in libuvc titled bin
- Download CMAKE from cmake.org
- Install CMAKE. Make sure to check the box to add CMAKE to your PATH
- Download POSIX Threads for Windows from sourceforge. Note this is a 32 bit lib, but that is OK! Move PThreads to your work dir.
- Download Microsoft Visual C++ 2010 Redistributable from microsoft. The pthreadVC2 lib depends on msvcr100.dll.
- Open the CMAKE GUI
- Set the source code directory to the libuvc repo
- Set the binary directory to libuvc/bin
- Click Configure. Select Visual Studio 15 2017 x64 as generator for the project and Use default native compilers
- Fix paths in the config. Set to the following:
  - LIBUSB_INCLUDE_DIR = C:\work\libusb\libusb
  - LIBUSB_LIBRARY_NAMES = C:\work\libusb\x64\Release\dll\libusb-1.0.lib
  - PTHREAD_INCLUDE_DIR = C:\work\pthreads-w32-2-9-1-release\Pre-built.2\include
  - PTHREAD_LIBRARY_NAMES = C:\work\pthreads-w32-2-9-1-release\Pre-built.2\lib\x64\pthreadVC2.lib
- Click Configure again and resolve any path issues
- Click Generate
- Open libuvc/bin/libuvc.sln - this will open the project in Visual Studio 2017. Select ALL_BUILD, set to Release and x64, and Build Solution from the Build menu.
- Add C:\work\libuvc\bin\Release to your system PATH

pyuvc

- In pyuvc/setup.py make sure the paths to libuvc, libjpeg-turbo, and libusb are correctly specified
- python setup.py install
- You can create a wheel from within this directory with pip wheel .
Intel RealSense 3D

RealSense Dependencies

librealsense

All Intel RealSense cameras require librealsense to be installed. Please follow the install instructions for your operating system.

pyrealsense

pyrealsense provides Python bindings for librealsense. Run the following command in your terminal to install it:

pip3 install git+

Usage

Select RealSense 3D in the Capture Selection menu and activate your RealSense camera. Afterwards you should see the colored video stream of the selected camera. Pupil Capture accesses both streams, color and depth, at all times but only previews one at a time. Enable the Preview Depth option to see the normalized depth video stream. The Record Depth Stream option (enabled by default) will save the depth stream during a recording session to the file depth.mp4 within your recording folder.

By default, you can choose different resolutions for the color and depth streams. This is advantageous if you want to run both streams at full resolution. The Intel RealSense R200 has a maximum color resolution of 1920 x 1080 pixels and a maximum depth resolution of 640 x 480 pixels. librealsense also provides the possibility to pixel-align the color and depth streams. Align Streams enables this functionality. This is required if you want to infer from depth pixels to color pixels and vice versa.

The Sensor Settings menu lists all available device options. These may differ depending on your OS, installed librealsense version, and device firmware.

Color Frames

Pupil Capture accesses the YUVY color stream of the RealSense camera. All color frames are accessible through the events object using the frame key within your plugin's recent_events method. See the plugin guide for details.

Depth Frames

Depth frame objects are accessible through the events object using the depth_frame key within your plugin's recent_events method. The original 16-bit grayscale image of the camera can be accessed using the depth attribute of the frame object.
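Pupil colors depth images via histogram equalization (described below). For a quick look at raw 16-bit depth values outside of Pupil, a simpler linear scaling is often enough. A sketch (my own helper, not part of the Pupil API):

```python
def scale_depth_to_8bit(depth_values):
    """Linearly map a flat list of 16-bit depth values to the 0-255 range.

    This is a simplification: Pupil itself uses histogram equalization
    for its preview, which spreads values more evenly than linear scaling.
    """
    lo, hi = min(depth_values), max(depth_values)
    if hi == lo:
        # flat image: nothing to scale
        return [0 for _ in depth_values]
    return [round(255 * (v - lo) / (hi - lo)) for v in depth_values]
```

In a real plugin you would apply this to the 2D array from the frame's depth attribute (e.g. via numpy) rather than a flat list.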
The bgr attribute provides a colored image that is calculated using histogram equalization. These colored images are previewed in Pupil Capture, stored during recordings, and referred to as the "normalized depth stream" in the above section. The librealsense examples use the same coloring method to visualize depth images.

Interprocess and Network Communication

This page outlines the way Pupil Capture and Pupil Service communicate via a message bus internally, and how to read and write to this bus from another application on the same machine or on a remote machine.

Networking

All networking in Pupil Capture and Service is based on the ZeroMQ network library. The following socket types are most often used in our networking schemes:

- REQ-REP, reliable one-to-one communication
- PUB-SUB, one-to-many communication

We highly recommend reading Chapter 2 of the ZeroMQ guide to get an intuition on the philosophy behind these socket types. The Pupil apps use the pyzmq module, a great Python ZeroMQ implementation. For auto-discovery of other Pupil app instances in the local network, we use Pyre.

The IPC Backbone

Pupil Capture, Service, and Player use an IPC Backbone to send messages from, to, and within the Pupil apps.

IPC Backbone used by Pupil Capture and Service

The IPC Backbone has a SUB and a PUB address. Both are bound to a random port on app launch. Messages are serialized using msgpack, because it is fast (reportedly 200% faster than ujson) and because encoders exist for almost every language.

Message Topics

Messages can have any topic chosen by the user. Below is a list of message types used by the Pupil apps.

Pupil and Gaze Messages

Pupil data is sent from the eye0 and eye1 processes with the topic pupil.0 or pupil.1. Gaze mappers receive this data and publish messages with the topic gaze.
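Putting the topic scheme to use, here is a sketch of an external subscriber. The topic-matching helper is pure Python; the network function assumes pyzmq and msgpack are installed and that Pupil Remote is listening on its default port 50020 (see the Pupil Remote section below), and is defined but not called here:

```python
def matches_topic(subscription, topic):
    """ZMQ-style prefix matching: a subscription like 'pupil.'
    receives 'pupil.0', 'pupil.1', etc."""
    return topic.startswith(subscription)

def run_subscriber(host="localhost", remote_port=50020):
    """Look up the backbone's SUB port via Pupil Remote and print gaze data.

    Requires a running Pupil Capture instance; blocks forever.
    """
    import zmq
    import msgpack

    ctx = zmq.Context()
    # Ask Pupil Remote for the SUB port of the IPC Backbone.
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://%s:%s" % (host, remote_port))
    req.send_string("SUB_PORT")
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://%s:%s" % (host, sub_port))
    sub.setsockopt_string(zmq.SUBSCRIBE, "gaze")  # prefix subscription

    while True:
        topic, payload = sub.recv_multipart()
        datum = msgpack.loads(payload, raw=False)
        print(topic, datum.get("norm_pos"))
```

Subscribing to "pupil." instead of "gaze" would deliver the raw eye-process data from both eyes.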
See the Pupil Datum format for example messages for the topics pupil and gaze. If a message does not follow this convention, it is probably not a notification but another kind of data. Notifications are sent like any other message:

socket.send_string(topic, flags=zmq.SNDMORE)
socket.send(msgpack.dumps(payload, use_bin_type=True))

Plugins in Pupil software use the IPC Backbone as their primary communication channel. This makes plugins natural actors in the Pupil message scheme. To simplify the above mentioned documentation behavior, plugins only have to add a docstring to their on_notify() method. It should include a list of messages to which the plugin reacts.

ctx = zmq.Context()
ip = 'localhost'  # If you talk to a different machine use its IP.
port = 50020  # The port defaults to 50020 but can be set in the GUI of Pupil Capture.

# open Pupil Remote socket
requester = ctx.socket(zmq.REQ)
requester.connect('tcp://%s:%s'%(ip,port))
requester.send_string('SUB_PORT')
ipc_sub_port = requester.recv_string()

# setup message receiver
sub_url = 'tcp://%s:%s'%(ip,ipc_sub_port)
receiver = Msg_Receiver(ctx, sub_url, topics=('notify.meta.doc',))

# construct message
topic = 'notify.meta.should_doc'
payload = msgpack.dumps({'subject':'meta.should_doc'})
requester.send_string(topic, flags=zmq.SNDMORE)
requester.send(payload)
requester.recv_string()

# wait and print responses
while True:
    # receiver is a Msg_Receiver, that returns a topic/payload tuple on recv()
    topic, payload = receiver.recv()
    print('%s: %s'%(topic, payload))

Pupil Remote

requester.send_string('SUB_PORT')
sub_port = requester.recv_string()

Pupil Remote uses the fixed port 50020 and is the entry point to the IPC backbone for external applications. It also exposes a simple string-based interface for basic interaction with the Pupil apps. Send simple string messages to control Pupil Capture functions:

'R' start recording with auto generated session name
'R rec_name' start recording and name new session name: rec_name
'r' stop recording
'C' start currently selected calibration
'c' stop currently selected calibration
'T 1234.56' Timesync: make timestamps count from 1234.56 from now on.
't'          get pupil capture timestamp; returns a float as string.

# IPC Backbone communication
'PUB_PORT'   return the current pub port of the IPC Backbone
'SUB_PORT'   return the current sub port of the IPC Backbone

Reading from the Backbone
Subscribe to desired topics and receive all relevant messages (i.e. messages whose topic matches one of your subscriptions).

Sending notifications via Pupil Remote (REQ-REP):

# continued from above
import msgpack as serializer

notification = {'subject': 'recording.should_start', 'session_name': 'my session'}
topic = 'notify.' + notification['subject']
payload = serializer.dumps(notification)
requester.send_string(topic, flags=zmq.SNDMORE)
requester.send(payload)
print(requester.recv_string())

Notifications can also be published directly via a PUB socket:

from time import time, sleep

requester.send_string('PUB_PORT')
pub_port = requester.recv_string()
publisher = ctx.socket(zmq.PUB)
publisher.connect('tcp://%s:%s' % (ip, pub_port))
sleep(1)  # see Async connect in the paragraphs below

notification = {'subject': 'calibration.should_start'}
topic = 'notify.' + notification['subject']
payload = serializer.dumps(notification)
publisher.send_string(topic, flags=zmq.SNDMORE)
publisher.send(payload)

A full example
A full example can be found in shared_modules/zmq-tools.py.

Delivery guarantees
ZMQ is a great abstraction for us. It is super fast, has a multitude of language bindings, and solves a lot of the nitty-gritty networking problems we don't want to deal with. Situations in which messages can be dropped are:

Async connect: PUB sockets drop messages before a connection has actually been established.

REQ-REP: When writing to the Backbone via REQ-REP, we have not been able to produce a dropped message for network reasons on localhost. However, unreliable, congested networks (e.g. …)

# continued from above
ts = []
for x in range(100):
    sleep(0.003)  # simulate spaced requests as in real world
    t = time()
    requester.send_string('t')
    requester.recv_string()
    ts.append(time() - t)

Plugin Guide

Plugins Basics
Plugins encapsulate functionality in a modular fashion. Most parts of the Pupil apps are implemented as plugins. They are managed within the world process event-loop.
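ZMQ SUB sockets filter messages by topic prefix: a message is delivered when its topic starts with any subscribed prefix, which is why subscribing to pupil. receives data from both eye processes. The following pure-Python sketch mimics that matching rule (topic_matches is a hypothetical helper for illustration, not part of pyzmq):

```python
def topic_matches(topic, subscriptions):
    """Mimic ZMQ SUB filtering: a message is delivered if its topic
    starts with any subscribed prefix (an empty prefix matches all)."""
    return any(topic.startswith(prefix) for prefix in subscriptions)

subs = ('pupil.', 'notify.')
print(topic_matches('pupil.0', subs))                        # True
print(topic_matches('notify.recording.should_start', subs))  # True
print(topic_matches('gaze', subs))                           # False
```

This is also why subscribing to notify. catches every notification regardless of its subject.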
This means that the world process can load and unload plugins during runtime. Plugins are called regularly via callback functions (see the Plugin API for details). We recommend using the network interface (see the IPC Backbone) if you only need access to the data. You only need to write a plugin if you want to interact with the Pupil apps directly, e.g. to add visualizations or manipulate data. In the following sections, we assume and recommend that during plugin development you run the Pupil applications from source.

Plugin API
Plugins are Python classes that inherit from the Plugin class. It provides default functionality as well as a series of callback functions that are called by the world process. The source contains detailed information about the use-cases of the different callback functions.

Register your plugin automatically
To add your plugin to Capture, all you need to do is place the source file(s) in the plugin directory. If you run from source:
- Pupil Capture: [root_of_source_pupil_source_git_repo]/capture_settings/plugins/
- Pupil Service: [root_of_source_pupil_source_git_repo]/service_settings/plugins/
- Pupil Player: [root_of_source_pupil_source_git_repo]/player_settings/plugins/
If you want to add your plugin to a bundled version of Pupil:
- Pupil Capture: [your_user_dir]/pupil_capture_settings/plugins/
- Pupil Service: [your_user_dir]/pupil_service_settings/plugins/
- Pupil Player: [your_user_dir]/pupil_player_settings/plugins/
[your_user_dir] is also called HOME (for Linux and MacOS) or USER (for Windows). Note: if your plugin is contained in a directory, make sure to include an __init__.py inside it.
When a valid plugin is found in these dirs, Pupil imports your Plugin classes and adds them to the dropdown list of launchable plugins. If your plugin is a calibration plugin (i.e. it inherits from the Calibration_Plugin base class), then it will appear in the calibration drop down menu.
Example plugin development walkthrough

Inheriting from existing plugin
If you want to add or extend the functionality of an existing plugin, you should be able to apply standard inheritance principles of Python 3. Things to keep in mind: g_pool is an abbreviation for "global pool", a container of state that is shared between plugins.

Let's determine its execution order in relation to the other plugins:

self.order = .8

You can allow or disallow multiple instances of the Custom Plugin through the uniqueness attribute:

self.uniqueness = "by_class"

See the source for a list of all available uniqueness options. Finally, let's implement what our new Plugin will do. Here we choose to apply an OpenCV threshold to the world image and get proper feedback of the results, in real time. Good for OpenCV and related studies. This is done by means of the recent_events method:

def recent_events(self, events):
    if 'frame' in events:
        frame = events['frame']

recent_events is called every time a new world frame is available, but at the latest after a timeout of 0.05 seconds. The events dictionary will include the image frame object if it was available. It is accessible through the frame key. You can access the image buffer through the img and the gray attributes of the frame object. They return a BGR (height x width x 3) and a gray-scaled (height x width) uint8 numpy array, respectively. Visualization plugins (e.g. vis_circle.py) modify the img buffer such that their visualizations are visible in the Pupil Player exported video. Use OpenGL (within the Plugin.gl_display method) to draw visualizations within Pupil Player that are not visible in the exported video (e.g. surface heatmaps in Offline_Surface_Tracker). See below for more information. The events dictionary contains other recent data, e.g. pupil_positions, gaze_positions, fixations, etc. Modifications to the events dictionary are automatically accessible by all plugins with a higher order than the modifying plugin.
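The thresholding operation the walkthrough applies to frame.gray can be illustrated without OpenCV or a running Pupil instance. This hypothetical helper reproduces the behaviour of cv2.threshold with THRESH_BINARY on a plain nested list standing in for the height x width gray image:

```python
def binary_threshold(gray, thresh, maxval=255):
    """Set pixels above `thresh` to `maxval` and all others to 0,
    matching cv2.threshold's THRESH_BINARY rule for uint8 images."""
    return [[maxval if px > thresh else 0 for px in row] for row in gray]

gray = [[10, 200],
        [128, 90]]
print(binary_threshold(gray, 127))  # [[0, 255], [255, 0]]
```

In the real plugin the same rule would run over the numpy array returned by frame.gray, where cv2.threshold does it in native code.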
Plugin Integration

pyglui UI Elements
pyglui is an OpenGL-based UI framework that provides easy-to-use UI components for your plugin. User plugins often have at least one menu to inform the user that they are running, as well as providing the possibility to close single plugins.

from plugin import Plugin
from pyglui import ui

class Custom_Plugin(Plugin):
    # Calling add_menu() will create an icon in the icon bar that represents
    # your plugin. You can customize this icon with a symbol of your choice.
    icon_chr = '@'  # custom menu icon symbol
    # The default icon font is Roboto.
    # Alternatively, you can use icons from the Pupil Icon font:
    # icon_font = 'roboto'  # or `pupil_icons` when using the Pupil Icon font

    def __init__(self, g_pool, example_param=1.0):
        super().__init__(g_pool)
        # persistent attribute
        self.example_param = example_param

    def init_ui(self):
        # Create a floating menu
        self.add_menu()
        self.menu.label = '<title>'
        # Create a simple info text
        help_str = "Example info text."
        self.menu.append(ui.Info_Text(help_str))
        # Add a slider that represents the persistent value
        self.menu.append(ui.Slider('example_param', self, min=0.0,
                                   step=0.05, max=1.0, label='Example Param'))

    def deinit_ui(self):
        self.remove_menu()

    def get_init_dict(self):
        # all keys need to exist as keyword arguments in __init__ as well
        return {'example_param': self.example_param}

Export Custom Video Visualizations
As described above, plugins are able to modify the image buffers to export their visualizations. The plugin's recent_events method is automatically called once for each frame by the video exporter process. Plugins might overwrite changes made by plugins with a lower order than themselves. OpenGL visualizations are not exported. See vis_circle.py for an example visualization.

Export Custom Raw Data
Each Player plugin gets a notification with subject should_export that includes the range of world frame indices that will be exported and the directory where the recording will be exported to.
Add the code to the right to your plugin and implement an export_data function. See fixation_detector.py for an example.

def on_notify(self, notification):
    # use ==, not `is`, for string comparison
    if notification['subject'] == "should_export":
        self.export_data(notification['range'], notification['export_dir'])

Background Tasks
All plugins run within the world process. Doing heavy calculations within any of the periodically called Plugin methods (e.g. recent_events) can result in poor performance of the application. It is recommended to do any heavy calculations within a separate subprocess - multi-threading brings its own problems in Python. We created the Task_Proxy to simplify this procedure. It is initialized with a generator which will be executed in a subprocess. The generator's results will automatically be piped to the main thread where the plugin can fetch them.

from plugin import Plugin
from pyglui import ui
import logging

logger = logging.getLogger(__name__)

def example_generator(mu=0., sigma=1., steps=100):
    '''samples `N(\mu, \sigma^2)`'''
    import numpy as np
    from time import sleep
    for i in range(steps):
        # yield progress, datum
        progress = (i + 1) / steps
        value = sigma * np.random.randn() + mu
        yield progress, value
        sleep(np.random.rand() * .1)

class Custom_Plugin(Plugin):
    def __init__(self, g_pool):
        super().__init__(g_pool)
        self.proxy = Task_Proxy('Background', example_generator,
                                args=(5., 3.), kwargs={'steps': 50})

    def recent_events(self, events):
        # fetch all available results
        for progress, random_number in self.proxy.fetch():
            logger.debug('[{:3.0f}%] {:0.2f}'.format(progress * 100, random_number))
        # test if task is completed
        if self.proxy.completed:
            logger.debug('Task done')

    def cleanup(self):
        if not self.proxy.completed:
            logger.debug('Cancelling task')
            self.proxy.cancel()

Recording Format

Required Files

key,value
Recording Name,2018_07_19
Start Date,19.07.2018
Start Time,14:56:21
Start Time (System),1532004981.666572
Start Time (Synced),701730.897108953
Duration Time,00:00:13
World Camera Frames,402
World Camera Resolution,1280x720
Capture Software Version,1.7.159
Data Format Version,1.8
System Info,"User: name, Platform: Linux, ..."

Each recording requires three files:
1. An info.csv file that includes two columns, key and value. (See the example above.)
2. At least one video file and its corresponding timestamp file. See the Video Files section below for details.

A minimum of two key,value pairs is required in the info.csv file:
1. Recording Name,<name>
2. Data Format Version,<version>

Data Files

Timestamp Files
Timestamp files must follow this strict naming convention: given that a data file is named <name>.<ext>, its timestamps file has to be named <name>_timestamps.npy. Timestamp files are saved in the NPY binary format. You can use numpy.load() to access the timestamps in Python. A datum and its timestamp have the same index within their respective files, i.e. the ith timestamp in world_timestamps.npy belongs to the ith video frame in world.mp4.

Video Files
Video files are only recognized if they comply with the following constraints:
Allowed video file extensions are: .mp4 .mkv .avi .h264 .mjpeg
Allowed video file names are:
world: Scene video
eye0: Right eye video
eye1: Left eye video
The video files should look like: world.mp4, eye0.mjpeg, eye1.mjpeg
We also support multiple parts of video files as input. For instance:
world.mp4, world_001.mp4
eye0.mjpeg, eye0_001.mjpeg
And their corresponding timestamp files should follow the pattern: world_timestamps.npy, world_001_timestamps.npy

Audio File
An audio file is only recognized in Pupil Player's playback plugin if the file is named audio.mp4.

pldata Files
These files contain a sequence of independently msgpack-encoded messages. Each message consists of two frames:
1. frame: The payload's topic as a string, e.g. "pupil.0"
2. frame: The payload, e.g. a pupil datum, encoded as msgpack
For clarification: The second frame is encoded twice!
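The timestamp naming convention is mechanical enough to express in code. A hypothetical helper (not part of Pupil's codebase) that derives the companion file name required by the recording format:

```python
import os

def timestamps_file_for(data_file):
    """Given a data file `<name>.<ext>`, return the `<name>_timestamps.npy`
    companion file mandated by the recording format."""
    name, _ext = os.path.splitext(data_file)
    return name + '_timestamps.npy'

print(timestamps_file_for('world.mp4'))       # world_timestamps.npy
print(timestamps_file_for('eye0_001.mjpeg'))  # eye0_001_timestamps.npy
```

Note that multi-part files follow the same rule, since the `_001` suffix is part of `<name>`.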
Pupil Player decodes the messages into file_methods.Serialized_Dicts. Each Serialized_Dict instance holds the serialized second frame and is responsible for decoding it on demand. The class is designed such that there is a maximum number of decoded frames at the same time. This prevents Pupil Player from using excessive amounts of memory. You can use file_methods.PLData_Writer and file_methods.load_pldata_file() to read and write pldata files.

Other Files
Files without a file extension, e.g. the deprecated pupil_data file, and files with a .meta extension are msgpack-encoded dictionaries. They can be read and written using file_methods.load_object() and file_methods.save_object() and do not have a corresponding timestamps file.

Fixation Detector
Salvucci and Goldberg [1] define different categories of fixation detectors. One of them describes dispersion-based algorithms. Pupil's fixation detectors implement such dispersion-based algorithms.

[1] Salvucci, D. D., & Goldberg, J. H. (2000, November). Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 symposium on Eye tracking research & applications (pp. 71-78). ACM.

Usage

Online Fixation Detector
This plugin detects fixations based on a dispersion threshold in terms of degrees of visual angle with a minimum duration. It publishes the fixation as soon as it complies with these constraints.

Offline Fixation Detector
This plugin detects fixations based on a dispersion threshold in terms of degrees of visual angle within a given duration window. It tries to maximize the length of classified fixations.

Fixation Format
If 3d pupil data is available, the fixation dispersion will be calculated based on the positional angle of the eye. These fixations have their method field set to "pupil". If no 3d pupil data is available, the plugin will assume that the gaze data is calibrated and calculate the dispersion in visual angle within the coordinate system of the world camera. These fixations will have their method field set to "gaze".
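The Serialized_Dict behaviour described here, decoding payloads only on access while capping how many decoded payloads exist at once, can be approximated with a small LRU cache. The sketch below uses JSON in place of msgpack so it stays dependency-free; LazyDecoder is a hypothetical stand-in, not Pupil's actual class:

```python
import json
from collections import OrderedDict

class LazyDecoder:
    """Hold serialized payloads; decode on access and keep at most
    `max_decoded` decoded payloads in memory at any time (LRU eviction)."""
    def __init__(self, max_decoded=2):
        self.max_decoded = max_decoded
        self._raw = []                # serialized payloads, kept forever
        self._cache = OrderedDict()   # index -> decoded dict, bounded

    def append(self, payload):
        self._raw.append(json.dumps(payload))

    def __getitem__(self, idx):
        if idx in self._cache:
            self._cache.move_to_end(idx)          # mark as recently used
        else:
            self._cache[idx] = json.loads(self._raw[idx])
            while len(self._cache) > self.max_decoded:
                self._cache.popitem(last=False)   # evict least recently used
        return self._cache[idx]

store = LazyDecoder(max_decoded=2)
for i in range(5):
    store.append({'topic': 'pupil.0', 'index': i})
print(store[4]['index'])       # 4
print(len(store._cache) <= 2)  # True
```

The point of the design is the same as in Pupil Player: the full file can be indexed cheaply, but memory use stays bounded regardless of recording length.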
Capture
Fixations are represented as dictionaries consisting of the following keys:
topic: Static field set to fixation
norm_pos: Normalized position of the fixation's centroid
base_data: Gaze data that the fixation is based on
duration: Exact fixation duration, in milliseconds
dispersion: Dispersion, in degrees
timestamp: Timestamp of the first related gaze datum
confidence: Average pupil confidence
method: pupil or gaze
gaze_point_3d: Mean 3d gaze point, only available if the pupil method was used

Player
Player-detected fixations also include:
start_frame_index: Index of the first related frame
mid_frame_index: Index of the median related frame
end_frame_index: Index of the last related frame
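The dispersion field above is an angular spread over the gaze directions collected in base_data. Purely as an illustration of the idea (this is not the detector's actual code), one common definition is the maximum pairwise angle between unit gaze direction vectors:

```python
import math

def max_pairwise_angle_deg(vectors):
    """Dispersion as the largest angle (in degrees) between any two
    unit direction vectors - one common dispersion definition."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        # clamp to guard against floating-point drift outside [-1, 1]
        return math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return max(angle(a, b)
               for i, a in enumerate(vectors)
               for b in vectors[i + 1:])

# Three gaze directions spanning exactly one degree of visual angle:
d = math.radians(1.0)
dirs = [(0.0, 0.0, 1.0),
        (math.sin(d / 2), 0.0, math.cos(d / 2)),
        (math.sin(d), 0.0, math.cos(d))]
print(round(max_pairwise_angle_deg(dirs), 3))  # 1.0
```

A dispersion-based detector would then compare such a value against its threshold in degrees and check the duration constraint.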
Namespacing custom attributes in HTML

Background. I have read several threads around the topic of custom attributes. It seemed ejmalone was going for a similar implementation to the one I have in mind in another thread (it won't allow me to include a link yet... said I need more posts to "earn my stripes" to ensure I'm not spamming etc.)

Basic Idea. Namespace a custom attribute so as not to break a potential future spec'd attribute with the same name.

Example. <div myNamespace:...</div>

Thoughts. Seems robust, unless browsers start removing non-spec'd attributes, or the JS stopped seeing them so you couldn't set/get those attributes, or there was a spec'd namespace collision. Notable here - I'm not sure how my use of "namespacing" here is materially different from naming the attribute something which is extremely unlikely to become standard (perhaps by prefixing with something special). Which begs the question:

Questions.
- I did not see a post in the threads I read indicating that something like my example above is likely to be a problem - now or ever - does anyone have insight to the contrary?
- Is JS just seeing the attribute name "myNamespace:myCustomAttr"?
- Is it perhaps effectively the same as zcorpan's (if I recall correctly) note that the HTML 5 suggestion will definitely allow for a "data-" prefix to custom attributes for this purpose?

Additional Thoughts. Using the class attribute to store "additional information" is actually not always ideal - analogous to storing comma-delimited values in a database field, for example - you potentially have to post-process the string you get. Imagine you want to store a custom id for something - you could add a class such as myID_14, but that "14" is going to change with your use of this now "custom attribute" buried in the class attribute. Hence, you have to parse out the 14 for it to be useful.
Albeit trivial, it's not so elegant. In my opinion, a single call to getAttribute on a custom attribute is preferable. Really appreciate any thoughts here! Thanks all! JSM

So your JS has to use different checks for text/html and for XML.
Simon Pieters

Thank you zcorpan!!! Really appreciate the detailed response around how the "namespace" would be interpreted and accessed. To your points, including the simplicity and HTML validity down the road, I'll go for the data-* solution. Thanks again! JSM
Contents

sstable

PURPOSE
This processor is used to do some spreadsheet calculation in a regular wiki table using only Python. The first column/first line coordinate is A0. This code is based on the spreadsheet code posted by Raymond Hettinger and Richard Copeland. It is also based on the wiki:/sctable parser by ReimarBauer.

CALLING SEQUENCE
{ { { #!sstable [-column_header, -row_header, -show_formular, -format, -input_separator, -output_separator ] } } }

-input_separator : is used to read tables delimited by something other than the default '||'
-output_separator : is used to write tables delimited by something other than the default '||'

PROCEDURE
- All formulas have to start with a "=" sign.
- Formulas do not need a "@", but they can have one.
- Text in the cells is printed to the left and numbers to the right.
- Please remove the version number from the routine name!

RESTRICTIONS

MODIFICATION
1.0.4: improved regular expression that detects numbers - Andrew Shewmaker
       moved formula parsing into SpreadSheet class - Andrew Shewmaker
       return 0 when eval results in a type error in the SpreadSheet class - Andrew Shewmaker
1.0.3: improved compatibility with other spreadsheets - Andrew Shewmaker
1.0.2: improved cell range functions - Andrew Shewmaker
       correct special character handling - Reimar Bauer
1.0.1: use unicode function - Andrew Shewmaker
1.0: based on sctable-1.5.2-5 by Reimar Bauer (R.Bauer AT fz-juelich.de)

resolved issues

You don't use eval, don't you? -- FlorianFesti
Yes. However, the code (from Richard Copeland) restricts what is evaluated by setting __builtins__ to None in the tools dict. -- AndrewShewmaker

May be you could add a History tag to the code comments as we have done it for ImageLink. Or add my name behind 1.0: based on sctable-1.5.2-5. I do prefer a python only version too. Nice work! -- ReimarBauer 2006-04-01 08:56:57
Thanks! I'll add your name behind the reference to sctable. -- AndrewShewmaker

Special signs such as german "Umlaut" chars can't be used in the current version:

||öäü||ÄÜÖ|| asd||
||1||2||=A0+B0||
||10||20||=@sum(A1:B1)||

I had been developing this parser under Mac OS X, Python 2.3.5, and MoinMoin 1.3.x and it didn't like the unicode function, so I disabled it. When I tested it on FC4, Python 2.4.1, MoinMoin 1.5, I forgot to switch the second-to-last line of the parser back to using the unicode function. The change is in sstable-1.0.1.py, and the Parser doesn't choke now, but it didn't look correct because I guess I don't have the right fonts. -- AndrewShewmaker

#wikiizer = wiki.Parser(result,self.request) # parser for wiki tabular
wikiizer = wiki.Parser(unicode(result,'latin-1'),self.request) # parser for wiki tabular

The last four lines should be better something like this. Then it looks correct. -- ReimarBauer 2006-04-01 20:18:30

from MoinMoin import wikiutil
result = wikiutil.unquoteWikiname(result)
wikiizer = wiki.Parser(result,self.request) # parser for wiki tabular
wikiizer.format(formatter)

Yes, that looks right. Included in 1.0.2 -- AndrewShewmaker 2006-04-02 00:27:28

Here I cannot enter non-ASCII characters in a table. 1.0.3. The following table crashes with 'ascii' codec can't encode character u'\xe9' in position 13: ordinal not in range(128):

||'''Taux horaire:'''||=D2/B2 ||'''Heures prévues:'''|| 40 ||'''Heures restantes:''' || =D0-B2 ||
||Travailleur ||Heures ||Ratio ||Salaire|| Avance || Balance ||
||'''Total''' ||=@sum(B3:B4)||=@sum(C3:C4) ||=D0*35||=@sum(E3:E4) || =D2-E2 ||
||JoeUser ||2.5 ||=B3/B2 ||=C3*D2|| 0 || =D3-E3 ||
||OtherUser ||10 ||=B4/B2 ||=C4*D2|| 0 || =D3-E3 ||

It's the "é" it doesn't like. This works in sctable. -- TheAnarcat 2006-05-05 17:50:21
Update: this table now works with sstable 1.0.4 on moinmoin 1.5.4, but I have to remove the "bold" (''') because if I leave them there, the cells appear empty (!?). -- TheAnarcat 2006-09-04 19:28:04

unparseable entries should be just displayed as is...
I was having trouble with sctable (and now equally with sstable) with entries that look like numbers but aren't. Dates, for example, are not well parsed by sstable (2006-02-02 becomes 2002.00), IP addresses crash the thing (invalid float or something), etc. IMHO, entries should be calculated only if prefixed by an equal sign, as Excel/OpenOffice Calc does. BTW, to enter an equal sign in Excel, you prefix a single quote. This is what most people (or at least me ) would expect from a spreadsheet. If you don't want to change semantics like this, at least avoid crashing on invalid input and just display invalid input as such... -- TheAnarcat 2006-04-21 20:45:13 When I first made this parser, I thought that I would expose the Python implementation of the spreadsheet. In other words, I was planning on treating strings and numbers exactly as Python does. However, I think that you are right that most people would expect this to behave more like other spreadsheets they've used. That's what I've attempted to provide with version 1.0.3. Dates and IP addresses are presented as typed, entries are only calculated if prefixed with an equal sign, and you can prefix an equal sign with a single quote. Please try it again. -- AndrewShewmaker 2006-04-24 11:54:21 - Super! That fixed display of IP addresses! There are a few more things though, I hope you don't mind me sharing those here... If I input just a dash ("-") in a field, I get ValueError: invalid literal for float(): -. - The sstables need to be square, as opposed to regular Moin tables. - Invalid values are not tolerated by functions (should be considered as 0) Can that be fixed too? In general, I think there should be a try: somewhere that catches parse errors and just sets fields to 0 or raw. I had a hard time figuring out where to do this in the code... Thanks for everything! -- TheAnarcat 2006-04-25 16:25:42 You're welcome. 
As of 1.0.4, a single dash is valid and a type error will cause a spreadsheet function to return 0 (one invalid input will currently cause the function to return 0). I don't think I can help you with the square tables request. I've moved parsing of input into the __setitem__ method of the Spreadsheet class. In my experience, I usually see errors raised in the __getitem__ method (which does have try/catch blocks). -- AndrewShewmaker 2006-04-26 07:19:05

Excellent, thank you!! -- TheAnarcat 2006-05-05 17:50:21

May be we need only to write a python routine from one of our idl routines, string_is_number. May be something similar is already somewhere defined. -- ReimarBauer 2006-04-25 17:38:23

I'd like to keep trying to make a regular expression work. Do you see any problems with self.num_re = re.compile('^-?(\d*)\.?[\d]+$') -- AndrewShewmaker 2006-04-26 07:19:05

This seems to go well; I'll do some further tests later. Another problem is if you miss filling the whole table, e.g.

||10||20||
||=1*3||

then it ends up with a KeyError 'b1'. sctable is different for that and has shown the result of all given cells -- ReimarBauer 2006-04-29 18:10:51
Update: this now doesn't crash in 1.0.4, but the filling cell has some random string in it. -- TheAnarcat 2006-09-04 19:28:04

I would like to make sstable use "0" for empty rows; where in the code do I have to look for this? Thanks! -- AlvaroTejero 2007-01-05 16:45:25

One of these lines ought to be the place. You probably don't need to change both. I didn't test this.
-- AndrewShewmaker 2007-01-05 23:51:05

--- sstable.py	2006-04-26 07:07:06.000000000 -0600
+++ sstable-test.py	2007-01-05 23:46:24.000000000 -0700
@@ -384,7 +384,7 @@
         while True:
             try:
                 if ( self._cells[key] == None or self._cells[key] == ''):
-                    rv = ''
+                    rv = '0'
                 else:
                     rv = eval(self._cells[key], self.tools, self._cache)
                 break
@@ -534,7 +534,7 @@
         cell = str(self.ss[cellname])
         if cell == '':
-            cell = ' '
+            cell = '0'
         if show_formular == 0:
             num_match = self.ss.num_re.match(cell)
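The number-detecting regular expression proposed earlier in the thread can be checked directly. A quick sketch of what it accepts and rejects, covering the cases raised in the discussion (dates, IP addresses, a bare dash):

```python
import re

# The pattern from the thread: optional minus, optional integer part,
# optional single dot, mandatory trailing digits.
num_re = re.compile(r'^-?(\d*)\.?[\d]+$')

for text in ('3.14', '-42', '.5', '-', '2006-02-02', '10.0.0.1'):
    print(text, bool(num_re.match(text)))
```

A lone dash and anything with more than one dot or an interior minus fails the match, so those entries would be displayed as plain text rather than evaluated.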
A simple and easy to use PID controller in Python. If you want a PID controller without external dependencies that just works, this is for you! The PID was designed to be robust with help from Brett Beauregard's guide.

Usage is very simple:

from simple_pid import PID
pid = PID(1, 0.1, 0.05, setpoint=1)

# Assume we have a system we want to control in controlled_system
v = controlled_system.update(0)

while True:
    # Compute new output from the PID according to the systems current value
    control = pid(v)
    # Feed the PID output to the system and get its current value
    v = controlled_system.update(control)

Complete API documentation can be found here. To install, run:

pip install simple-pid

The PID class implements __call__(), which means that to compute a new output value, you simply call the object like this:

output = pid(current_value)

The PID works best when it is updated at regular intervals. To achieve this, set sample_time to the amount of time there should be between each update and then call the PID every time in the program loop. A new output will only be calculated when sample_time seconds has passed:

pid.sample_time = 0.01  # Update every 0.01 seconds

while True:
    output = pid(current_value)

To set the setpoint, i.e. the value that the PID is trying to achieve, simply set it like this:

pid.setpoint = 10

The tunings can be changed any time when the PID is running. They can either be set individually or all at once:

pid.Ki = 1.0
pid.tunings = (1.0, 0.2, 0.4)

To use the PID in reverse mode, meaning that an increase in the input leads to a decrease in the output (like when cooling, for example), you can set the tunings to negative values:

pid.tunings = (-1.0, -0.1, 0)

Note that all the tunings should have the same sign.
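To make the update rule concrete, here is a condensed, dependency-free sketch of the textbook P/I/D computation the README's Kp/Ki/Kd tunings refer to, driving a toy plant; TinyPID is an illustration, not the library's actual implementation:

```python
class TinyPID:
    """Minimal textbook PID: output = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self._integral = 0.0
        self._last_error = 0.0

    def __call__(self, value):
        error = self.setpoint - value
        self._integral += error * self.dt              # I-term accumulates error
        derivative = (error - self._last_error) / self.dt
        self._last_error = error
        return (self.kp * error
                + self.ki * self._integral
                + self.kd * derivative)

# Toy plant: the value moves a fraction of the control signal each step.
pid = TinyPID(0.6, 0.1, 0.05, setpoint=10.0)
v = 0.0
for _ in range(50):
    v += 0.5 * pid(v)
print(round(v, 2))  # settles close to the setpoint of 10.0
```

The integral term is what removes the steady-state error here; with Ki set to 0, the toy plant would stop short of the setpoint whenever the proportional push balances out.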
In order to get output values in a certain range, and also to avoid integral windup (since the integral term will never be allowed to grow outside of these limits), the output can be limited to a range:

pid.output_limits = (0, 10)    # Output value will be between 0 and 10
pid.output_limits = (0, None)  # Output will always be above 0, but with no upper bound

To disable the PID so that no new values are computed, set auto mode to False:

pid.auto_mode = False  # No new values will be computed when pid is called
pid.auto_mode = True   # pid is enabled again

When disabling the PID and controlling a system manually, it might be useful to tell the PID controller where to start from when giving back control to it. This can be done by enabling auto mode like this:

pid.set_auto_mode(True, last_output=8.0)

This will set the I-term to the value given to last_output, meaning that if the system that is being controlled was stable at that output value, the PID will keep the system stable if started from that point, without any big bumps in the output when turning the PID back on.

When tuning the PID, it can be useful to see how each of the components contribute to the output. They can be seen like this:

p, i, d = pid.components  # The separate terms are now in p, i, d

To eliminate overshoot in certain types of systems, you can calculate the proportional term directly on the measurement instead of the error. This can be enabled like this:

pid.proportional_on_measurement = True

To transform the error value to another domain before doing any computations on it, you can supply an error_map callback function to the PID. The callback function should take one argument which is the error from the setpoint. This can be used e.g. to get a degree value error in a yaw angle control with values between [-pi, pi):

import math

def pi_clip(angle):
    if angle > 0:
        if angle > math.pi:
            return angle - 2*math.pi
    else:
        if angle < -math.pi:
            return angle + 2*math.pi
    return angle

pid.error_map = pi_clip

Use the following to run tests:

tox

Licensed under the MIT License.
I ported minidlna to Windows using cygwin, and it works. Is there anyone who is interested in it?

Hello, I'm really interested. I tried to port minidlna to Windows myself, but I didn't find the solution. Is it possible for you to detail how you have done it? Thank you in advance.

Very cool! Nice work hieroun. I need to start pulling in some of these patches soon.

Thanks hieroun! I compiled minidlna successfully on cygwin. However, I had some trouble setting the db_dir. It turned out that make_dir() was not working correctly. I supplied patch 3310785

Justin, it would be really cool if you would incorporate hieroun's patch!

Thanks fku for the information and the patch. It seems EACCES is returned in case a system directory is included in the db_dir path. I will upload the new patch after 1.0.20 is released. hiero

Great. I know of one more thing which is still to fix: system("rm -rf files.db…..") is called in at least 2 locations. This should be changed to unlink() and rmdir() calls. I will look into it as soon as I can (might take some days)

It is easier to use the win32 API for recursive delete. The following works.
#include <sys/cygwin.h>
#include <windows.h>

void delete_db(char *db_path)
{
    char real_path[PATH_MAX*2+1], path_tmp[PATH_MAX];
    int ret, len1, len2;
    SHFILEOPSTRUCT file_op = {
        NULL, FO_DELETE, real_path, NULL,
        FOF_SILENT | FOF_NOCONFIRMATION | FOF_NOERRORUI,
        FALSE, NULL, ""
    };

    snprintf(path_tmp, sizeof(path_tmp), "%s/files.db", db_path);
    len1 = cygwin_conv_path(CCP_POSIX_TO_WIN_A | CCP_ABSOLUTE, path_tmp, real_path, 0);
    cygwin_conv_path(CCP_POSIX_TO_WIN_A | CCP_ABSOLUTE, path_tmp, real_path, PATH_MAX);

    snprintf(path_tmp, sizeof(path_tmp), "%s/art_cache", db_path);
    len2 = cygwin_conv_path(CCP_POSIX_TO_WIN_A | CCP_ABSOLUTE, path_tmp, real_path+len1, 0);
    cygwin_conv_path(CCP_POSIX_TO_WIN_A | CCP_ABSOLUTE, path_tmp, real_path+len1, PATH_MAX);
    real_path[len1+len2] = '\0';

    ret = SHFileOperationA(&file_op);
    if (ret != 0)
        fprintf(stderr, "cannot delete files.db, art_cache\n");
}

Great work! Running very fine here. Thank you so much.

This is my installation memo.

Based on:
Files: MiniDLNA-memo-1.0.22.txt, Note-for-MiniDLNA-cygwin-1.0.22.txt
In: minidlna_cygwin_port_1.0.22.tar.gz
From:

Environment:
cygwin 1.7.11
Windows7 Home Premium 64bit/SP1

Install ffmpeg's static libraries:
Install cyg-pkgs: gcc4-core, make, pkg-config
Download src:
$ cvs -d:pserver:anonymous@minidlna.cvs.sourceforge.net:/cvsroot/minidlna login
$ cvs -z3 -d:pserver:anonymous@minidlna.cvs.sourceforge.net:/cvsroot/minidlna co -D 2011-11-11 -P minidlna

Download and apply patch:
File: minidlna_CVS_20111110-454_cygwin.patch
In: minidlna_CVS_20111110-454_cygwin_patch.tar.gz

Build:
$ ./autogen.sh
$ ./configure
$ make
$ make install

Configure and start:
$ cp /tmp/minidlna/minidlna.conf /etc/minidlna.conf
$ vi /etc/minidlna.conf (modify 'media_dir')
$ minidlna -f /etc/minidlna.conf

Please try! > marverix

Can anyone distribute a working binary for Windows? Does it work the same as linux?

minidlna is not designed to run on Windows. Instructions for running under cygwin (a Unix-like environment and command-line interface for Microsoft Windows) are included in the post above yours. If you are running Windows, there are other DLNA servers available, so I'm curious as to why you want a Windows version of minidlna. /Craig

Maybe because the other DLNA servers suck... The only DLNA server which can serve HD files on an eee-pc without lags is Windows Media Player (yerk). So a Windows port of minidlna would be good news.

This is Sourceforge... welcome. There are very good reasons why there is no Windows port. Windows is significantly different to Unix-like environments, hence why no-one has successfully ported minidlna to Windows natively. Install the Cygwin version as described above, or undertake the task of porting to Windows yourself. Good luck.

Native windows port (without Cygwin)

Working binary for Windows

A standalone Windows version I made with Cygwin seems to work well. The necessary files for a binary installation are:
minidlnad.exe
58 Cygwin DLLs
minidlna.conf

Three source edits were required: (a) to getifaddr.c, to use Linux-type sockets; (b) to minidlna.c, to use remove(...) statements instead of a system(...) statement; (c) to utils.c, so that make_dir(...) works with Cygwin-style filenames.
Cygwin "mounts" Windows drives as /cygdrive/drive_letter, e.g. /cygdrive/c for C:. A standalone minidlnad made with Cygwin understands file and directory names that begin with /cygdrive/[a-z]/ as referring to the appropriate Windows drive; it knows nothing of the default file and directory names in minidlnad (e.g. /var/run), so it is necessary to override the defaults with a config file.

The command line (or shortcut) to start minidlnad, with command-line parameters in Cygwin style, and with the .exe, .dll and .conf (and pid, database and log) files in path1 on the C drive, is then something like:

c:\[path1]\minidlnad.exe -P /cygdrive/c/[path1]/pid -f /cygdrive/c/[path1]/minidlna.conf -R

The config file, with a media_dir on path2 on the C drive, is then something like:

media_dir=/cygdrive/c/[path2]
db_dir=/cygdrive/c/[path1]
log_dir=/cygdrive/c/[path1]

Cygwin implements fork(), so a standalone minidlnad started under Windows [7] forks to leave a detached minidlnad process, with no window and no tray icon; it is not a service, but the end effect is the same as though it were. If it is necessary to kill the process, a control program such as the Task Manager has to be used.
https://sourceforge.net/p/minidlna/discussion/879956/thread/dea3b259/
Using yt with AstroBlend: SPH Data

At a thousand points, the light is shining through.

In this tutorial we will once again use yt directly in Blender, but in this example, we'll import some SPH code and plot some points in 3D instead of visualizing AMR surfaces. Again, please make sure you are set up correctly first and you have read the first and second AstroBlend tutorials. You might also find the first SPH tutorial useful to look over as well.

The following is just a rehash of how to set up yt and AstroBlend to use both of them together in Blender as discussed in the first yt tutorial, so feel free to skip this section if you've already gone through that tutorial.

This tutorial uses the "TipsyGalaxy" data found on the yt website (~10MB) here.

Generating SPH Visualizations with yt+AstroBlend

Loading in the SPH data directly from a simulation is very similar to loading it from a file, with one difference: we need to specify what to color the SPH particles by, as shown in this snippet of code where we color them by temperature:

import science

filename = '/Users/jillnaiman/data/TipsyGalaxy/galaxy.00300'

color_field = ('Gas', 'Temperature')  # color by temp
color_log = True                      # take log of temp for mapping colors
color_map = 'Rainbow'

# these two things play off each other!
# larger halo size is needed when cam is far away
halo_size = 0.108   # need to play with this and cam distance
set_cam = (0,0,70)

scale = [(1.0, 1.0, 1.0)]

cam = science.Camera()
cam.location = set_cam
cam.clip_begin = 0.0001

lighting = science.Lighting('EMISSION')

myobject = science.Load(filename, scale=scale, halo_sizes = halo_size,
                        color_field = color_field, color_map = color_map,
                        color_log = color_log, n_ref=8)

This results in the following setup in your Blender window:

Note that there are a few things that are different than when we imported things from a pre-formatted text file. In our "object selector" panel you'll notice that there are about 250 particle types listed.
This is because our color maps are made up of 256 colors, so each particle type is made up of all the particles within that temperature bin, excluding colors where there are no particles in that temperature bin. For now, that is just how things need to be input. In the future, I'm hoping to figure out a way to mesh all of the particle types into one object.

One fun thing we can do is turn on "Render View" in our 3D viewer like so:

This allows for a rendered view as you move around the space using your number pad, as discussed in the first tutorial. If you don't have a number pad but want to use the number keys to move around, check out how to do this in the Blender Preferences under "Numpad emulation". For fun, here is a short little movie of how you can move around your SPH simulation and check things out while it's being rendered in real time:

One thing you may note is how the particles look all "fuzzy" when you are up close to them. This is because halo_sizes was set to give a good render when the camera was high above the galaxy. The halo size and camera distance are something you can play around with to find a good fit: too high and you just get a bunch of fuzzy lumps, but too small and nothing will show up on your rendered image.
http://www.astroblend.com/tutorials/tutorial_callingytsph.html
I am working on a project, and I'm new to applets. I don't know how to find a file using these arguments. I know there is another question out there that is almost the same, but I want this in an easy, simplified way because I'm new to this. Any help would be awesome!!! Here is my code:

import java.applet.Applet;
import java.applet.AudioClip;
import java.awt.Graphics;

public class SoundDemo extends Applet {
    public void init() {
        AudioClip clip = getAudioClip(getCodeBase(), "sounds/Dragon Roost.wav");
        clip.play();
    }

    public void paint(Graphics g) {
        g.drawString("Now Playing Clip", 10, 10);
    }
}

This might help you to understand. Here I am reading a music file that is stored under a music folder in the src folder of my project, as shown in the snapshot below. getDocumentBase() points to the bin folder (class-path) where all the classes are stored. In your case it will fetch the music from bin/sounds/Dragon Roost.wav

Gets the URL of the document in which this applet is embedded. For example, suppose an applet is contained within the document: The document base is:

Gets the base URL. This is the URL of the directory which contains this applet.

Sample code:

Applet:

URL url = getDocumentBase();
AudioClip audioClip = getAudioClip(url, "music/JButton.wav");

Project structure:
https://codedump.io/share/nqKg1Pj2cKLo/1/how-do-you-use-getdocumentbase-and-getcodebase-correctly-in-java-applets
Abstract: How to create a Console Application

How do I create and run an old DOS style Console Application in Borland C++ Builder 5?

Creating a DOS Style Console Application in Borland C++ Builder 5

You may use C++ Builder 5 to edit and compile DOS style applications in addition to the Form based applications that are the first line programs in Builder. You cannot use the Form or its underlying text edit file to create and compile a DOS style program. To compile this type of program (DOS style), follow these instructions:

1. Open Builder.
2. Click on "File" in the menu and follow the path: File -> New -> Console Wizard.
3. In the Console Wizard select C or C++ and Console App.
4. Click Finish.

You may enter and compile your code in the Text Edit window that appears.

#pragma hdrstop          // Provided by compiler
#include <condefs.h>     // Provided by compiler
#include <iostream.h>
#include <conio.h>       // Contains getch() prototype
//---------------------------------------------------------
#pragma argsused         // Provided by compiler
int main(int argc, char* argv[])  // Arguments provided by compiler
{
    cout << "Hello World!" << endl;
    getch();  // literally "GET CHaracter".
              // This will hold the program at this
              // point until a keystroke is entered
              // so that the programmer may see
              // the program output.
    return 0;
}
http://edn.embarcadero.com/article/21588
Ok so I need to make a Hangman game without using the string data type, and it needs to display the hangman as the game goes. Such as: if the guess is wrong, show the gallows; if wrong = 2, show the head on the gallows; if wrong = 3, show the body, etc.

MY QUESTION IS: How do I start the project? Can someone show a flowchart or a list of how the project needs to run in order, so I can start the coding for it? Ty.

I made one "WITH" using the String data type:

Code:
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
#include <ctime>
#include <cctype>
using namespace std;

int main()
{
    const int MAX_WRONG = 8;

    vector<string> words;
    words.push_back("CAT");
    words.push_back("RABBIT");
    words.push_back("SNAKE");
    words.push_back("DOG");
    words.push_back("PARROT");
    words.push_back("FISH");
    words.push_back("HEDGEHOG");
    words.push_back("RACOON");
    words.push_back("BEAR");
    words.push_back("HAWK");
    words.push_back("PIG");
    words.push_back("LION");

    srand(static_cast<unsigned int>(time(0)));
    random_shuffle(words.begin(), words.end());
    const string THE_WORD = words[0];

    int wrong = 0;
    string soFar(THE_WORD.size(), '-');
    string used = "";

    cout << "Welcome to Hangman!\n";
    cout << " ______\n";
    cout << " |    |\n";
    cout << " O    |\n";
    cout << "[|]   |\n";
    cout << "( )   |\n";
    cout << "______|\n";
    cout << "\nThe word is an animal\n";

    while ((wrong < MAX_WRONG) && (soFar != THE_WORD))
    {
        cout << "\n\nYou have " << (MAX_WRONG - wrong);
        cout << " incorrect guesses left.\n";

        // ask for a guess, re-prompting if the letter was already used
        char guess;
        cout << "Enter your guess: ";
        cin >> guess;
        guess = toupper(guess);
        while (used.find(guess) != string::npos)
        {
            cout << "\nYou've already guessed " << guess << endl;
            cout << "Enter your guess: ";
            cin >> guess;
            guess = toupper(guess);
        }
        used += guess;

        // if word = a , if word = h , if word = n , if word = g, if word = m
        if (THE_WORD.find(guess) != string::npos)
        {
            cout << "That's right! " << guess << " is in the word.\n";
            for (unsigned int i = 0; i < THE_WORD.length(); ++i)
            {
                if (THE_WORD[i] == guess)
                {
                    soFar[i] = guess;
                }
            }
        }
        else
        {
            cout << "Sorry, " << guess << " isn't in the word.\n";
            ++wrong;
        }
    }

    if (wrong == MAX_WRONG)
        cout << "\nYou've been hanged!";
    else
        cout << "\nYou guessed it!";

    cout << "\nThe word was " << THE_WORD << endl;
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/137137-making-hangman-game-without-using-string-data-type.html
From a signalling perspective, the world is a noisy place. In order to make sense of anything, we have to be selective with our attention.

We humans have, over the course of millions of years of natural selection, become fairly good at filtering out background signals. We learn to associate particular signals with certain events.

For instance, imagine you're playing table tennis in a busy office. To return your opponent's shot, you need to make a huge array of complex calculations and judgements, taking into account multiple competing sensory signals.

To predict the motion of the ball, your brain has to repeatedly sample the ball's current position and estimate its future trajectory. More advanced players will also take into account any spin their opponent applied to the shot.

Finally, in order to play your own shot, you need to account for the position of your opponent, your own position, the speed of the ball, and any spin you intend to apply. All of this involves an amazing amount of subconscious differential calculus. We take it for granted that, generally speaking, our nervous system can do this automatically (at least after a bit of practice).

Just as impressive is how the human brain differentially assigns importance to each of the myriad competing signals it receives. The position of the ball, for example, is judged to be more relevant than, say, the conversation taking place behind you, or the door opening in front of you.

This may sound so obvious as to seem unworthy of stating, but that is testament to just how good we are at learning to make accurate predictions out of noisy data.

Certainly, a blank-slate machine given a continuous stream of audiovisual data would face a difficult task knowing which signals best predict the optimal course of action. Luckily, there are statistical and computational methods that can be used to identify patterns in noisy, complex data.
Correlation 101

Generally speaking, when we talk of 'correlation' between two variables, we are referring to their 'relatedness' in some sense. Correlated variables are those which contain information about each other. The stronger the correlation, the more one variable tells us about the other.

You may well already have some understanding of correlation, how it works and what its limitations are. Indeed, it's something of a data science cliche:

"Correlation does not imply causation"

This is of course true — there are good reasons why even a strong correlation between two variables is not a guarantor of causality. The observed correlation could be due to the effects of a hidden third variable, or just entirely down to chance. That said, correlation does allow for predictions about one variable to be made based upon another.

There are several methods that can be used to estimate correlated-ness for both linear and non-linear data. Let's take a look at how they work.

We'll go through the math and the code implementation, using Python and R. The code for the examples in this article can be found here.

Pearson's Correlation Coefficient

What is it?

Pearson's Correlation Coefficient (PCC, or Pearson's r) is a widely used linear correlation measure. It's often the first one taught in many elementary stats courses. Mathematically speaking, it is defined as "the covariance between two vectors, normalized by the product of their standard deviations".

Tell me more…

The covariance between two paired vectors is a measure of their tendency to vary above or below their means together. That is, a measure of whether each pair tend to be on similar or opposite sides of their respective means.
Let's see this implemented in Python:

def mean(x):
    return sum(x)/len(x)

def covariance(x, y):
    calc = []
    for i in range(len(x)):
        xi = x[i] - mean(x)
        yi = y[i] - mean(y)
        calc.append(xi * yi)
    return sum(calc)/(len(x) - 1)

a = [1,2,3,4,5] ; b = [5,4,3,2,1]
print(covariance(a,b))

The covariance is calculated by taking each pair of variables, and subtracting their respective means from them. Then, multiply these two values together.

- If they are both above their mean (or both below), then this will produce a positive number, because a positive×positive=positive, and likewise a negative×negative=positive.
- If they are on different sides of their means, then this produces a negative number (because positive×negative=negative).

Once we have all these values calculated for each pair, sum them up, and divide by n-1, where n is the sample size. This is the sample covariance.

If the pairs have a tendency to both be on the same side of their respective means, the covariance will be a positive number. If they have a tendency to be on opposite sides of their means, the covariance will be a negative number. The stronger this tendency, the larger the absolute value of the covariance. If there is no overall pattern, then the covariance will be close to zero. This is because the positive and negative values will cancel each other out.

At first, it might appear that the covariance is a sufficient measure of 'relatedness' between two variables. However, take a look at the graph below:

Looks like there's a strong relationship between the variables, right? So why is the covariance so low, at approximately 0.00003?

The key here is to realise that the covariance is scale-dependent. Look at the x and y axes — pretty much all the data points fall between the range of 0.015 and 0.04. The covariance is likewise going to be close to zero, since it is calculated by subtracting the means from each individual observation.
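To make the scale dependence concrete, here is a small self-contained sketch (it repeats the helper definitions above so it runs on its own), showing that shrinking both variables shrinks the covariance by the product of the scale factors, even though the relationship itself is unchanged:

```python
def mean(x):
    return sum(x) / len(x)

def covariance(x, y):
    # sample covariance, as defined above
    calc = []
    for i in range(len(x)):
        calc.append((x[i] - mean(x)) * (y[i] - mean(y)))
    return sum(calc) / (len(x) - 1)

a = [1, 2, 3, 4, 5]
b = [5, 4, 3, 2, 1]

# Divide both variables by 100: the relationship is identical,
# but the covariance shrinks by a factor of 100 x 100 = 10,000
a_small = [i / 100 for i in a]
b_small = [j / 100 for j in b]

print(covariance(a, b))              # -2.5
print(covariance(a_small, b_small))  # roughly -0.00025
```

Normalizing by the standard deviations, as the next section does, removes exactly this sensitivity to units and scale.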
To obtain a more meaningful figure, it is important to normalize the covariance. This is done by dividing it by the product of the standard deviations of each of the vectors. In Python:

import math

def stDev(x):
    variance = 0
    for i in x:
        # sample variance, so the normalization matches the sample covariance
        variance += ((i - mean(x)) ** 2) / (len(x) - 1)
    return math.sqrt(variance)

def Pearsons(x, y):
    cov = covariance(x, y)
    return cov / (stDev(x) * stDev(y))

The reason this is done is because the standard deviation of a vector is the square root of its variance. This means if two vectors are identical, then multiplying their standard deviations will equal their variance.

Funnily enough, the covariance of two identical vectors is also equal to their variance.

Therefore, the maximum value the covariance between two vectors can take is equal to the product of their standard deviations, which occurs when the vectors are perfectly correlated. It is this which bounds the correlation coefficient between -1 and +1.

Which way do the arrows point?

As an aside, a much cooler way of defining the PCC of two vectors comes from linear algebra. First, we center the vectors, by subtracting their means from their individual values.

a = [1,2,3,4,5] ; b = [5,4,3,2,1]

a_centered = [i - mean(a) for i in a]
b_centered = [j - mean(b) for j in b]

Now, we can make use of the fact that vectors can be considered as 'arrows' pointing in a given direction.

For instance, in 2-D, the vector [1,3] could be represented as an arrow pointing 1 unit along the x-axis, and 3 units along the y-axis. Likewise, the vector [2,1] could be represented as an arrow pointing 2 units along the x-axis, and 1 unit along the y-axis.

Similarly, we can represent our data vectors as arrows in an n-dimensional space (although don't try visualising when n > 3…)

The angle ϴ between these arrows can be worked out using the dot product of the two vectors.
This is defined as:

x · y = Σᵢ xᵢyᵢ

Or, in Python:

def dotProduct(x, y):
    calc = 0
    for i in range(len(x)):
        calc += x[i] * y[i]
    return calc

The dot product can also be defined as:

x · y = ||x|| × ||y|| × cos(ϴ)

Where ||x|| is the magnitude (or 'length') of the vector x (think Pythagoras' theorem), and ϴ is the angle between the arrow vectors.

As a Python function:

def magnitude(x):
    x_sq = [i ** 2 for i in x]
    return math.sqrt(sum(x_sq))

This lets us find cos(ϴ), by dividing the dot product by the product of the magnitudes of the two vectors.

def cosTheta(x, y):
    mag_x = magnitude(x)
    mag_y = magnitude(y)
    return dotProduct(x, y) / (mag_x * mag_y)

Now, if you know a little trigonometry, you may recall that the cosine function produces a graph that oscillates between +1 and -1. The value of cos(ϴ) will vary depending on the angle between the two arrow vectors.

- When the angle is zero (i.e., the vectors point in the exact same direction), cos(ϴ) will equal 1.
- When the angle is -180°, (the vectors point in exact opposite directions), then cos(ϴ) will equal -1.
- When the angle is 90° (the vectors point in completely unrelated directions), then cos(ϴ) will equal zero.

This might look familiar — a measure between +1 and -1 that seems to describe the relatedness of two vectors? Isn't that Pearson's r?

Well — that is exactly what it is! By considering the data as arrow vectors in a high-dimensional space, we can use the angle ϴ between them as a measure of similarity. The cosine of this angle ϴ is mathematically identical to Pearson's Correlation Coefficient.

When viewed as high-dimensional arrows, positively correlated vectors will point in a similar direction. Negatively correlated vectors will point towards opposite directions. And uncorrelated vectors will point at right-angles to one another.

Personally, I think this is a really intuitive way to make sense of correlation.

Statistical significance?
As is always the case with frequentist statistics, it is important to ask how significant a test statistic calculated from a given sample actually is. Pearson's r is no exception.

Unfortunately, whacking confidence intervals on an estimate of PCC is not entirely straightforward. This is because Pearson's r is bound between -1 and +1, and therefore isn't normally distributed. An estimated PCC of, say, +0.95 has only so much room for error above it, but plenty of room below.

Luckily, there is a solution — using a trick called Fisher's Z-transform:

- Calculate an estimate of Pearson's r as usual.
- Transform r→z using Fisher's Z-transform. This can be done by using the formula z = arctanh(r), where arctanh is the inverse hyperbolic tangent function.
- Now calculate the standard deviation of z. Luckily, this is straightforward to calculate, and is given by SDz = 1/sqrt(n-3), where n is the sample size.
- Choose your significance threshold, alpha, and check how many standard deviations from the mean this corresponds to. For a 95% confidence level, use 1.96.
- Find the upper estimate by calculating z + (1.96 × SDz), and the lower bound by calculating z - (1.96 × SDz).
- Convert these back to r, using r = tanh(z), where tanh is the hyperbolic tangent function.
- If the upper and lower bounds are both the same side of zero, you have statistical significance!

Here's a Python implementation:

r = Pearsons(x, y)
z = math.atanh(r)
SD_z = 1 / math.sqrt(len(x) - 3)

z_upper = z + 1.96 * SD_z
z_lower = z - 1.96 * SD_z

r_upper = math.tanh(z_upper)
r_lower = math.tanh(z_lower)

Of course, when given a large data set of many potentially correlated variables, it may be tempting to check every pairwise correlation. This is often referred to as 'data dredging' — scouring the data set for any apparent relationships between the variables.
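The steps above can be wrapped into a single self-contained function. The name fisher_ci and its defaults are mine, for illustration:

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    # Fisher Z-transform confidence interval for a Pearson estimate r
    # from a sample of size n (z_crit = 1.96 gives a 95% interval)
    z = math.atanh(r)                  # r -> z
    sd_z = 1 / math.sqrt(n - 3)        # standard deviation of z
    lo = math.tanh(z - z_crit * sd_z)  # bounds in z-space, back to r
    hi = math.tanh(z + z_crit * sd_z)
    return lo, hi

lo, hi = fisher_ci(0.95, 30)
print(lo, hi)  # both bounds positive, so significant at this threshold
```

Note how the interval is asymmetric around r = 0.95: there is more room for error below the estimate than above it, which is exactly why the transform is needed.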
If you do take this multiple comparison approach, you should use stricter significance thresholds to reduce your risk of discovering false positives (that is, finding unrelated variables which appear correlated purely by chance). One method for doing this is to use the Bonferroni correction.

The small print

So far, so good. We've seen how Pearson's r can be used to calculate the correlation coefficient between two variables, and how to assess the statistical significance of the result. Given an unseen set of data, it is possible to start mining for significant relationships between the variables.

However, there is a major catch — Pearson's r only works for linear data.

Look at the graphs below. They clearly show what looks like a non-random relationship, but Pearson's r is very close to zero.

The reason why is because the variables in these graphs have a non-linear relationship.

We can generally picture a relationship between two variables as a 'cloud' of points scattered either side of a line. The wider the scatter, the 'noisier' the data, and the weaker the relationship. However, Pearson's r compares each individual data point with only one other (the overall means). This means it can only consider straight lines. It's not great at detecting any non-linear relationships.

In the graphs above, Pearson's r doesn't reveal there being much correlation to talk of. Yet the relationship between these variables is still clearly non-random, and that makes them potentially useful predictors of each other. How can machines identify this? Luckily, there are different correlation measures available to us. Let's take a look at a couple of them.

Distance Correlation

What is it?

Distance correlation bears some resemblance to Pearson's r, but is actually calculated using a rather different notion of covariance. The method works by replacing our everyday concepts of covariance and standard deviation (as defined above) with "distance" analogues.
Much like Pearson's r, "distance correlation" is defined as the "distance covariance" normalized by the "distance standard deviation". Instead of assessing how two variables tend to co-vary in their distance from their respective means, distance correlation assesses how they tend to co-vary in terms of their distances from all other points. This opens up the potential to better capture non-linear dependencies between variables.

The finer details…

Robert Brown was a Scottish botanist born in 1773. While studying plant pollen under his microscope, Brown noticed tiny organic particles jittering about at random on the surface of the water he was using.

Little could he have suspected a chance observation of his would lead to his name being immortalized as the (re-)discoverer of Brownian motion. Even less could he have known that it would take nearly a century before Albert Einstein would provide an explanation for the phenomenon — and hence prove the existence of atoms — in the same year he published papers on E=MC², special relativity and helped kick-start the field of quantum theory.

Brownian motion is a physical process whereby particles move about at random due to collisions with surrounding molecules. The math behind this process can be generalized into a concept known as the Wiener process. Among other things, the Wiener process plays an important part in mathematical finance's most famous model, Black-Scholes.

Interestingly, Brownian motion and the Wiener process turn out to be relevant to a non-linear correlation measure developed in the mid-2000's through the work of Gabor Szekely.

Let's run through how this can be calculated for two vectors x and y, each of length N.

1. First, we form N×N distance matrices for each of the vectors. A distance matrix is exactly like a road distance chart in an atlas — the intersection of each row and column shows the distance between the corresponding cities.
Here, the intersection between row i and column j gives the distance between the i-th and j-th elements of the vector.

2. Next, the matrices are "double-centered". This means for each element, we subtract the mean of its row and the mean of its column. Then, we add the grand mean of the entire matrix.

3. With the two double-centered matrices, we can calculate the square of the distance covariance by taking the average of each element in X multiplied by its corresponding element in Y.

4. Now, we can use a similar approach to find the "distance variance". Remember — the covariance of two identical vectors is equivalent to their variance. Therefore, the squared distance variance can be described as below:

5. Finally, we have all the pieces to calculate the distance correlation. Remember that the (distance) standard deviation is equal to the square-root of the (distance) variance.

If you prefer to work through code instead of math notation (after all, there is a reason people tend to write software in one and not the other…), then check out the R implementation below:

set.seed(1234)

doubleCenter <- function(x){
  centered <- x
  for(i in 1:dim(x)[1]){
    for(j in 1:dim(x)[2]){
      centered[i,j] <- x[i,j] - mean(x[i,]) - mean(x[,j]) + mean(x)
    }
  }
  return(centered)
}

distanceCovariance <- function(x,y){
  N <- length(x)
  distX <- as.matrix(dist(x))
  distY <- as.matrix(dist(y))
  centeredX <- doubleCenter(distX)
  centeredY <- doubleCenter(distY)
  calc <- sum(centeredX * centeredY)
  return(sqrt(calc/(N^2)))
}

distanceVariance <- function(x){
  return(distanceCovariance(x,x))
}

distanceCorrelation <- function(x,y){
  cov <- distanceCovariance(x,y)
  sd <- sqrt(distanceVariance(x)*distanceVariance(y))
  return(cov/sd)
}

# Compare with Pearson's r
x <- -10:10
y <- x^2 + rnorm(21,0,10)
cor(x,y)                  # --> 0.057
distanceCorrelation(x,y)  # --> 0.509

The distance correlation between any two variables is bound between zero and one.
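For readers following along in Python rather than R, here is a dependency-free translation of the same steps (plain lists, no external packages; written for 1-D vectors only):

```python
import math

def distance_matrix(v):
    # step 1: N x N matrix of pairwise distances (absolute difference in 1-D)
    return [[abs(a - b) for b in v] for a in v]

def double_center(m):
    # step 2: subtract row and column means, add back the grand mean
    n = len(m)
    row_means = [sum(row) / n for row in m]
    col_means = [sum(m[i][j] for i in range(n)) / n for j in range(n)]
    grand = sum(row_means) / n
    return [[m[i][j] - row_means[i] - col_means[j] + grand
             for j in range(n)] for i in range(n)]

def distance_covariance(x, y):
    # step 3: average elementwise product of the two centered matrices
    n = len(x)
    A = double_center(distance_matrix(x))
    B = double_center(distance_matrix(y))
    total = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))
    return math.sqrt(total / n**2)

def distance_correlation(x, y):
    # steps 4-5: normalize by the distance standard deviations
    dcov = distance_covariance(x, y)
    dsd = math.sqrt(distance_covariance(x, x) * distance_covariance(y, y))
    return dcov / dsd

x = list(range(-10, 11))
y = [i**2 for i in x]  # non-linear: Pearson's r is exactly zero here
print(distance_correlation(x, y))  # clearly non-zero, unlike Pearson's r
```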
Zero implies the variables are independent, whereas a score closer to one indicates a dependent relationship. If you'd rather not write your own distance correlation methods from scratch, you can install R's energy package, written by the very researchers who devised the method. The methods available in this package call functions written in C, giving a great speed advantage.

Physical interpretation

One of the more surprising results relating to the formulation of distance correlation is that it bears an exact equivalence to Brownian correlation. Brownian correlation refers to the independence (or dependence) of two Brownian processes. Brownian processes that are dependent will show a tendency to 'follow' each other.

A simple metaphor to help grasp the concept of distance correlation is to picture a fleet of paper boats floating on the surface of a lake. If there is no prevailing wind direction, then each boat will drift about at random — in a way that's (kind of) analogous to Brownian motion. If there is a prevailing wind, then the direction the boats drift in will be dependent upon the strength of the wind. The stronger the wind, the stronger the dependence.

In a comparable way, uncorrelated variables can be thought of as boats drifting without a prevailing wind. Correlated variables can be thought of as boats drifting under the influence of a prevailing wind. In this metaphor, the wind represents the strength of the relationship between the two variables.

If we allow the prevailing wind direction to vary at different points on the lake, then we can bring a notion of non-linearity into the analogy. Distance correlation uses the distances between the 'boats' to infer the strength of the prevailing wind.

Confidence Intervals?

Confidence intervals can be established for a distance correlation estimate using a 'resampling' technique. A simple example is bootstrap resampling.
This is a neat statistical trick that requires us to 'reconstruct' the data by randomly sampling (with replacement) from the original data set. This is repeated many times (e.g., 1000), and each time the statistic of interest is calculated. This will produce a range of different estimates for the statistic we're interested in. We can use these to estimate the upper and lower bounds for a given level of confidence.

Check out the R code below for a simple bootstrap function:

set.seed(1234)

bootstrap <- function(x,y,reps,alpha){
  estimates <- c()
  original <- data.frame(x,y)
  N <- dim(original)[1]
  for(i in 1:reps){
    S <- original[sample(1:N, N, replace = TRUE),]
    estimates <- append(estimates, distanceCorrelation(S$x, S$y))
  }
  u <- alpha/2 ; l <- 1-u
  interval <- quantile(estimates, c(l, u))
  # basic bootstrap interval, pivoted around the full-sample estimate
  return(2*(distanceCorrelation(x,y)) - as.numeric(interval[1:2]))
}

# Use with 1000 reps and threshold alpha = 0.05
x <- -10:10
y <- x^2 + rnorm(21,0,10)
bootstrap(x,y,1000,0.05)  # --> 0.237 to 0.546

If you want to establish statistical significance, there is another resampling trick available, called a 'permutation test'. This is slightly different to the bootstrap method defined above. Here, we keep one vector constant and 'shuffle' the other by resampling. This approximates the null hypothesis — that there is no dependency between the variables.

The 'shuffled' variable is then used to calculate the distance correlation between it and the constant variable. This is done many times, and the distribution of outcomes is compared against the actual distance correlation (obtained from the unshuffled data).

The proportion of 'shuffled' outcomes greater than or equal to the 'real' outcome is then taken as a p-value, which can be compared to a given significance threshold (e.g., 0.05).
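The procedure just described can also be sketched in Python. This version differs slightly from the R code below in two deliberate ways: it shuffles without replacement (a true permutation), and it takes the test statistic as an argument, with Pearson's r used here purely for brevity:

```python
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def permutation_test(x, y, stat, reps=1000, seed=1234):
    # shuffle y to simulate the null hypothesis of "no dependency",
    # then count how often the shuffled statistic beats the observed one
    rng = random.Random(seed)
    observed = abs(stat(x, y))
    y_perm = list(y)
    count = 0
    for _ in range(reps):
        rng.shuffle(y_perm)
        if abs(stat(x, y_perm)) >= observed:
            count += 1
    return count / reps

x = list(range(21))
y = [3 * i + 1 for i in x]  # perfectly dependent
print(permutation_test(x, y, pearson))  # near zero: strong evidence of dependency
```

The same wrapper works unchanged with any statistic, including a distance correlation function.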
permutationTest <- function(x,y,reps){
  estimates <- c()
  observed <- distanceCorrelation(x,y)
  N <- length(x)
  for(i in 1:reps){
    y_i <- sample(y, length(y), replace = T)
    estimates <- append(estimates, distanceCorrelation(x, y_i))
  }
  p_value <- mean(estimates >= observed)
  return(p_value)
}

# Use with 1000 reps
x <- -10:10
y <- x^2 + rnorm(21,0,10)
permutationTest(x,y,1000) # --> 0.036

Maximal Information Coefficient

What is it?

The Maximal Information Coefficient (MIC) is a recent method for detecting non-linear dependencies between variables, devised in 2011. The algorithm used to calculate MIC applies concepts from information theory and probability to continuous data.

Diving in…

Information theory is a fascinating field within mathematics that was pioneered by Claude Shannon in the mid-twentieth century. A key concept is entropy — a measure of the uncertainty in a given probability distribution. A probability distribution describes the probabilities of a given set of outcomes associated with a particular event.

To understand how this works, compare the two probability distributions below: on the left is that of a fair six-sided dice, and on the right is the distribution of a not-so-fair six-sided dice. Intuitively, which would you expect to have the higher entropy? For which dice is the outcome the least certain? Let’s calculate the entropy and see what the answer turns out to be.

entropy <- function(x){
  pr <- prop.table(table(x))
  H <- sum(pr * log(pr,2))
  return(-H)
}

dice1 <- 1:6
dice2 <- c(1,1,1,1,2:6)
entropy(dice1) # --> 2.585
entropy(dice2) # --> 2.281

As you may have expected, the fairer dice has the higher entropy. That is because each outcome is as likely as any other, so we cannot know in advance which to favour. The unfair dice gives us more information — some outcomes are much more likely than others — so there is less uncertainty about the outcome. By that reasoning, we can see that entropy will be highest when each outcome is equally likely.
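The same calculation is easy to cross-check in Python. This sketch uses only the standard library; with log base 2, the entropy is measured in bits and matches the R output above:

```python
import math
from collections import Counter

def entropy(outcomes):
    """Shannon entropy (in bits) of an empirical distribution."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

dice1 = [1, 2, 3, 4, 5, 6]           # fair die: uniform distribution
dice2 = [1, 1, 1, 1, 2, 3, 4, 5, 6]  # biased towards 1

print(round(entropy(dice1), 3))  # 2.585
print(round(entropy(dice2), 3))  # 2.281
```

The fair die's entropy is exactly log2(6) ≈ 2.585, the maximum possible for six outcomes.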
This type of probability distribution is called a ‘uniform’ distribution.

Cross-entropy is an extension to the concept of entropy that takes into account a second probability distribution.

crossEntropy <- function(x,y){
  prX <- prop.table(table(x))
  prY <- prop.table(table(y))
  H <- sum(prX * log(prY,2))
  return(-H)
}

This has the property that the cross-entropy between two identical probability distributions is equal to their individual entropy. When considering two non-identical probability distributions, there will be a difference between their cross-entropy and their individual entropies. This difference, or ‘divergence’, can be quantified by calculating their Kullback-Leibler divergence, or KL-divergence. The KL-divergence of two probability distributions X and Y is:

D(X || Y) = Σ p(x) · log2( p(x) / q(x) )

which, in terms of the functions above, equals crossEntropy(X,Y) − entropy(X). The minimum value of the KL-divergence between two distributions is zero. This only happens when the distributions are identical.

KL_divergence <- function(x,y){
  kl <- crossEntropy(x,y) - entropy(x)
  return(kl)
}

One use for KL-divergence in the context of discovering correlations is to calculate the Mutual Information (MI) of two variables. Mutual Information can be defined as “the KL-divergence between the joint and marginal distributions of two random variables”. If these are identical, MI will equal zero. If they are at all different, then MI will be a positive number. The more different the joint and marginal distributions are, the higher the MI.

To understand this better, let’s take a moment to revisit some probability concepts. The joint distribution of variables X and Y is simply the probability of them co-occurring. For instance, if you flipped two coins X and Y, their joint distribution would reflect the probability of each observed outcome. Say you flip the coins 100 times, and get the result “heads, heads” 40 times. The joint distribution would reflect this.
P(X=H, Y=H) = 40/100 = 0.4

jointDist <- function(x,y){
  N <- length(x)
  u <- unique(append(x,y))
  joint <- c()
  for(i in u){
    for(j in u){
      f <- x[paste0(x,y) == paste0(i,j)]
      joint <- append(joint, length(f)/N)
    }
  }
  return(joint)
}

The marginal distribution is the probability distribution of one variable in the absence of any information about the other. The product of two marginal distributions gives the probability of two events’ co-occurrence under the assumption of independence. For the coin flipping example, say both coins produce 50 heads and 50 tails. Their marginal distributions would reflect this.

P(X=H) = 50/100 = 0.5 ; P(Y=H) = 50/100 = 0.5
P(X=H) × P(Y=H) = 0.5 × 0.5 = 0.25

marginalProduct <- function(x,y){
  N <- length(x)
  u <- unique(append(x,y))
  marginal <- c()
  for(i in u){
    for(j in u){
      fX <- length(x[x == i]) / N
      fY <- length(y[y == j]) / N
      marginal <- append(marginal, fX * fY)
    }
  }
  return(marginal)
}

Returning to the coin flipping example, the product of the marginal distributions will give the probability of observing each outcome if the two coins are independent, while the joint distribution will give the probability of each outcome, as actually observed. If the coins genuinely are independent, then the joint distribution should be (approximately) identical to the product of the marginal distributions. If they are in some way dependent, then there will be a divergence. In the example, P(X=H,Y=H) > P(X=H) × P(Y=H). This suggests the coins both land on heads more often than would be expected by chance. The bigger the divergence between the joint and marginal product distributions, the more likely it is the events are dependent in some way. The measure of this divergence is defined by the Mutual Information of the two variables.
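To make the coin example concrete, here is a quick numeric sketch in Python. Only P(X=H, Y=H) = 0.4 and the 0.5/0.5 marginals are given in the text; the other three joint probabilities below are hypothetical values chosen to stay consistent with those marginals:

```python
import math

# Hypothetical joint distribution over (X, Y) outcomes, consistent with
# P(X=H)=0.5 and P(Y=H)=0.5; only P(H,H)=0.4 comes from the text.
joint = {("H", "H"): 0.4, ("H", "T"): 0.1,
         ("T", "H"): 0.1, ("T", "T"): 0.4}

px = {"H": 0.5, "T": 0.5}
py = {"H": 0.5, "T": 0.5}

# Mutual information = KL-divergence between the joint distribution
# and the product of the marginals (in bits)
mi = sum(p * math.log2(p / (px[a] * py[b]))
         for (a, b), p in joint.items() if p > 0)

print(round(mi, 3))  # 0.278 bits: the coins are dependent
```

Had the joint distribution matched the marginal product exactly (0.25 in every cell), every log term would be zero and the MI would be zero, indicating independence.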
mutualInfo <- function(x,y){
  joint <- jointDist(x,y)
  marginal <- marginalProduct(x,y)
  Hjm <- - sum(joint[marginal > 0] * log(marginal[marginal > 0],2))
  Hj <- - sum(joint[joint > 0] * log(joint[joint > 0],2))
  return(Hjm - Hj)
}

A major assumption here is that we are working with discrete probability distributions. How can we apply these concepts to continuous data?

Binning

One approach is to quantize the data (make the variables discrete). This is achieved by binning (assigning data points to discrete categories). The key issue now is deciding how many bins to use. Luckily, the original paper on the Maximal Information Coefficient provides a suggestion: try most of them! That is to say, try differing numbers of bins and see which produces the greatest result of Mutual Information between the variables. This raises two challenges, though:

- How many bins to try? Technically, you could quantize a variable into any number of bins, simply by making the bin size forever smaller.
- Mutual Information is sensitive to the number of bins used. How do you fairly compare MI between different numbers of bins?

The first challenge means it is technically impossible to try every possible number of bins. However, the authors of the paper offer a heuristic solution (that is, a solution which is not ‘guaranteed perfect’, but is a pretty good approximation). They also suggest an upper limit on the number of bins to try.

As for fairly comparing MI values between different binning schemes, there’s a simple fix… normalize it! This can be done by dividing each MI score by the maximum it could theoretically take for that particular combination of bins. The binning combination that produces the highest normalized MI overall is the one to use. The highest normalized MI is then reported as the Maximal Information Coefficient (or ‘MIC’) for those two variables. Let’s check out some code that will estimate the MIC of two continuous variables.
MIC <- function(x,y){
  N <- length(x)
  maxBins <- ceiling(N ** 0.6)
  MI <- c()
  for(i in 2:maxBins) {
    for (j in 2:maxBins){
      if(i * j > maxBins){
        next
      }
      Xbins <- i; Ybins <- j
      binnedX <- cut(x, breaks=Xbins, labels = 1:Xbins)
      binnedY <- cut(y, breaks=Ybins, labels = 1:Ybins)
      MI_estimate <- mutualInfo(binnedX,binnedY)
      MI_normalized <- MI_estimate / log(min(Xbins,Ybins),2)
      MI <- append(MI, MI_normalized)
    }
  }
  return(max(MI))
}

x <- runif(100,-10,10)
y <- x**2 + rnorm(100,0,10)
MIC(x,y) # --> 0.751

The above code is a simplification of the method outlined in the original paper. A more faithful implementation of the algorithm is available in the R package minerva. In Python, you can use the minepy module.

MIC is capable of picking out all kinds of linear and non-linear relationships, and has found use in a range of different applications. It is bound between 0 and 1, with higher values indicating greater dependence.

Confidence Intervals?

To establish confidence bounds on an estimate of MIC, you can simply use a bootstrapping technique like the one we looked at earlier. To generalize the bootstrap function, we can take advantage of R’s functional programming capabilities, by passing the technique we want to use as an argument.

bootstrap <- function(x,y,func,reps,alpha){
  estimates <- c()
  original <- data.frame(x,y)
  N <- dim(original)[1]
  for(i in 1:reps){
    S <- original[sample(1:N, N, replace = TRUE),]
    estimates <- append(estimates, func(S$x, S$y))
  }
  l <- alpha/2 ; u <- 1 - l
  interval <- quantile(estimates, c(u, l))
  return(2*(func(x,y)) - as.numeric(interval[1:2]))
}

bootstrap(x,y,MIC,100,0.05) # --> 0.594 to 0.88

Summary

To conclude this tour of correlation, let’s test each different method against a range of artificially generated data. The code for these examples can be found here.
Noise

# Noise
set.seed(123)
x0 <- rnorm(100,0,1)
y0 <- rnorm(100,0,1)
plot(y0~x0, pch = 18)
cor(x0,y0)
distanceCorrelation(x0,y0)
MIC(x0,y0)

- Pearson’s r = -0.05
- Distance Correlation = 0.157
- MIC = 0.097

Simple linear

# Simple linear relationship
x1 <- -20:20
y1 <- x1 + rnorm(41,0,4)
plot(y1~x1, pch = 18)
cor(x1,y1)
distanceCorrelation(x1,y1)
MIC(x1,y1)

- Pearson’s r = +0.95
- Distance Correlation = 0.95
- MIC = 0.89

Simple quadratic

# y ~ x**2
x2 <- -20:20
y2 <- x2**2 + rnorm(41,0,40)
plot(y2~x2, pch = 18)
cor(x2,y2)
distanceCorrelation(x2,y2)
MIC(x2,y2)

- Pearson’s r = +0.003
- Distance Correlation = 0.474
- MIC = 0.594

Trigonometric

# Cosine
x3 <- -20:20
y3 <- cos(x3/4) + rnorm(41,0,0.2)
plot(y3~x3, type='p', pch=18)
cor(x3,y3)
distanceCorrelation(x3,y3)
MIC(x3,y3)

- Pearson’s r = -0.035
- Distance Correlation = 0.382
- MIC = 0.484

Circle

# Circle
n <- 50
theta <- runif(n, 0, 2*pi)
x4 <- append(cos(theta), cos(theta))
y4 <- append(sin(theta), -sin(theta))
plot(x4,y4, pch=18)
cor(x4,y4)
distanceCorrelation(x4,y4)
MIC(x4,y4)

- Pearson’s r < 0.001
- Distance Correlation = 0.234
- MIC = 0.218

Thanks for reading!
https://www.freecodecamp.org/news/how-machines-make-predictions-finding-correlations-in-complex-data-dfd9f0d87889/
Windows Management Instrumentation (WMI) is the Microsoft implementation of WBEM, an industry initiative that attempts to facilitate system and network administration. WMI was introduced with Windows 2000, and has since evolved to include data about most Windows resources, both hardware and software. There are several ways in which you can access WMI data, and most of them use WQL queries. You can use several tools to execute WQL queries, the most accessible of which is a tool called WMI Tester (wbemtest.exe) - it is installed with Windows.

This article is a short tutorial that attempts to shed some light on several WQL aspects through a series of example WQL queries. I grouped the queries by their type. If you have questions, or a query that you would like to share, please leave a comment at the bottom of the page.

Most of the queries presented here will get all WMI object properties (e.g., Select * …) in order to make the queries more readable. If you are familiar with SQL, you are probably aware of the recommendation that you should never use Select * (unless you really need all the columns) in order to make queries more efficient. I haven’t been able to confirm that selecting only specific properties has any impact on query efficiency in WQL, but you can easily replace * with property names.

WMI Tester (Wbemtest.exe) is a tool that provides the basic functionality for executing WQL queries. You can run it by typing 'wbemtest.exe' in the Run box. This opens the WMI Tester. You first need to connect to the WMI namespace that contains the class you want to query (Root\Cimv2 in most cases). Run the query by clicking the 'Query' or 'Notification Query' button and enter the query text, then click the 'Apply' button. This opens a window with the query results. If the query is invalid, you will receive an error message.

Object queries are used to get information about system resources.

Query text: Select * From Win32_Process
WMI namespace: Root\Cimv2
Comment: This is probably the WQL query most often found in various WMI articles and textbooks. It simply gets all the instances of a WMI class named Win32_Process, which represents Windows processes. If you are interested in the properties of Win32_Process, see here.

Select * From Win32_Process Where ProcessId = 608

If you don’t really want all Windows processes, you can qualify your query using the WHERE clause. This clause looks like this:

Where PropertyName Operator PropertyValue

where Operator is one of the WQL relational operators. The above query will return Win32_Process instances with process ID equal to 608.

Select * From Win32_Process Where Priority > 8

One of the WQL relational operators is ‘>’ (greater than). The above query returns all Win32_Process instances with Priority greater than 8.

Select * From Win32_Process Where WriteOperationCount < 1000

This query returns all Win32_Process instances where the WriteOperationCount is less than 1000.

Select * From Win32_Process Where ParentProcessId <> 884
Select * From Win32_Process Where ParentProcessId != 884
Select * From Win32_Process Where Not ParentProcessId = 884

All three queries return Win32_Process instances where ParentProcessId is not equal to 884.

Select * From Win32_Service

Another commonly seen query that retrieves all information about Windows Services. See here for details about the Win32_Service class. Note that this query returns all class instances. Sometimes this is just what you want, other times it is not, and yet other times, this is something you should definitely avoid.

Select * From Win32_Service Where Name = "MSSQL$SQLEXPRESS"

Here is an improved query – it returns only Win32_Service instances that have the Name property equal to “MSSQL$SQLEXPRESS”.
It happens that Name is the key property for the Win32_Service class, so the returned WMI object collection will have 0 or 1 item, but in general, if you qualify a query with a WMI class property value, you get all class instances where the property matches the entered value.

Select * From Win32_Service Where DisplayName = "Plug and Play"

Here is a caveat. If you are familiar with Windows services, you know that you can access service information using Services.msc (Start -> Run -> Services.msc). When you open that applet, the text in the Name column is not equal to the Win32_Service.Name value. It is equal to the Win32_Service.DisplayName property value, so if you want to get services by their Services Control Panel applet name, use the above query.

Select * From Win32_Service Where PathName = "C:\\WINDOWS\\system32\\inetsrv\\inetinfo.exe"

Here is another caveat. If a property value contains backslashes, you need to escape them by putting another backslash before (or after) each of them – otherwise, you get the ‘Invalid query’ error.

Select * From Win32_Service Where Name Like "%SQL%"

What if you don’t know the exact service name (or display name)? This is where the LIKE WQL operator comes in handy. Just like in SQL, the ‘%’ meta character replaces any string of zero or more characters, so this query returns all Win32_Service instances where the Name property contains the string "SQL".

Select * From Win32_Service Where Name > "M" And Name < "O"

You can use all WQL operators with string properties. This query returns all Win32_Service instances whose Name is greater than ‘M’ and less than ‘O’. The usual string comparison rules apply.

Select * From Win32_Service Where ExitCode = "1077"

The Win32_Service.ExitCode property type is UInt32, but it is enclosed in quotes. WMI does its best to interpret a string value and convert it to an appropriate type.
This doesn’t work the other way – with string properties, you have to use quotes.

Select * From Cim_DataFile Where Drive = "C:" And Path = "\\Scripts\\"

Cim_DataFile is a WMI class with which you should definitely always use the WHERE clause. ‘Select * From Cim_DataFile’ alone can take hours to complete, because it will return all files on your computer. Note that the Path property doesn’t contain file names or the drive letter, which is stored in the Drive property.

Associators Of {Win32_NetworkAdapter.DeviceId=1}

As you can see, Select queries are not the only query type in WQL. Just like Select queries, Associators Of queries can return either WMI objects or class definitions. There is a difference though: Select queries always return a collection of instances of one WMI class, or at least instances of classes that have the same parent class at some level. Associators Of queries, on the other hand, usually return a collection of WMI objects that belong to different WMI classes. One thing that the returned objects have in common is that they are all associated with the WMI object specified in the Associators Of query (enclosed between the curly braces). Note that you have to use the key properties of an object to specify its path, and that there are no spaces between the property name, the equals sign, and the property value. On my computer, the above query returns instances of the following classes:

- Win32_ComputerSystem
- Win32_DeviceMemoryAddress
- Win32_IRQResource
- Win32_NetworkAdapterConfiguration
- Win32_NetworkProtocol
- Win32_PnPEntity
- Win32_PortResource
- Win32_SystemDriver

Associators Of {Win32_NetworkAdapter.DeviceId=1} Where ResultClass = Win32_NetworkAdapterConfiguration

Most of the time, you will not need all WMI objects associated with a WMI object.
You can narrow the returned collection by specifying the class of the returned objects in an Associators Of query Where clause. The above query returns an instance of Win32_NetworkAdapterConfiguration associated with the source object.

Associators Of {Win32_NetworkAdapter.DeviceId=1} Where AssocClass = Win32_NetworkAdapterSetting

WMI classes are associated by a special type of WMI classes, called association classes. Win32_NetworkAdapter and Win32_NetworkAdapterConfiguration objects are associated by instances of the association class called Win32_NetworkAdapterSetting. As you can see, you can also use association class names to limit the returned object collection.

References Of {Win32_NetworkAdapter.DeviceId=1}

You can use a References Of query to examine WMI object associations. Unlike Associators Of queries, References Of queries return only WMI association classes. This query returns instances of the following WMI association classes:

- Win32_AllocatedResource
- Win32_PnPDevice
- Win32_ProtocolBinding
- Win32_SystemDevices

Event queries are used for WMI event subscriptions.

Select * From __InstanceCreationEvent Within 5 Where TargetInstance Isa "Win32_Process"

This query is also often found in WMI samples. WMI event queries are different from other query types in that they don’t return WMI objects immediately. Instead, they are used for subscribing to WMI events, and objects are returned as events arrive. Note the two distinctive characteristics of event queries not present in other query types: the Within clause and the __InstanceCreationEvent class. The Within clause tells WMI how often to poll for events, in seconds. In the above query, the polling interval is 5 seconds. WQL event monitoring consumes system resources, so it is important to balance between the need to get events on time and the need not to burden the system.
The __InstanceCreationEvent class is one of the classes used only in event queries (the other two commonly used classes are __InstanceModificationEvent and __InstanceDeletionEvent). An instance of this class is created when a requested event arrives. The __InstanceCreationEvent.TargetInstance property holds a reference to the WMI class that actually triggered the event. In the above query, it is the Win32_Process class, and we can use the TargetInstance property to access its properties.

Select * From __InstanceCreationEvent Within 5 Where TargetInstance Isa "Win32_Process" And TargetInstance.Name = "Notepad.exe"

This query monitors the process creation event, but only for processes named ‘Notepad.exe’.

Select * From __InstanceDeletionEvent Within 5 Where TargetInstance Isa "Win32_Process" And TargetInstance.Name = "Notepad.exe"

Use this query to monitor process deletion events for processes whose Name property is equal to ‘Notepad.exe’. A process deletion event occurs when a process exits. There is one thing you should note about a query like this: if you open Notepad and then quickly close it (within less than 5 seconds), it is possible for WMI to miss that and not report it as an event.

Select * From __InstanceModificationEvent Within 5 Where TargetInstance Isa "Win32_Process" And TargetInstance.Name = "Notepad.exe"

This query monitors Win32_Process modification events, not the process modification event. This is an important distinction – if the Windows process entity has a property that is not represented with a Win32_Process WMI class, and if that property changes, WMI will not report the change. Only Win32_Process property changes are returned as WMI events.
Select * From __InstanceOperationEvent Within 5 Where TargetInstance Isa "Win32_Process" And TargetInstance.Name = "Notepad.exe"

This query monitors all three types of events: creation, deletion, and modification events. The __InstanceOperationEvent class is the parent of the __InstanceCreationEvent, __InstanceDeletionEvent, and __InstanceModificationEvent classes, and you can use this fact to subscribe to all three event types at the same time. The __InstanceOperationEvent class is abstract (which means that it doesn’t have instances), so the actual event class returned by an event is one of the three instance classes, and you can find out which one by inspecting its __Class system property. This way, you can determine the event type.

Schema queries are used to get information about WMI itself and its structure.

Select * From Meta_Class

WMI namespace: Any.

This is the most basic schema query. You can connect to any WMI namespace and use this query to get all the classes present in it. Meta_Class is a meta class used only in schema queries.

Select * From Meta_Class Where __Class = "Win32_LogicalDisk"

This query uses the __Class system property to get the Win32_LogicalDisk class. Schema queries don’t return class instances, but class definitions, and a query like this will return a class definition regardless of whether there are any instances. Why would you want to get a class definition? New WMI classes are added for every new Windows version, and a query like this can check if a class exists on a system.

Select * From Meta_Class Where __Superclass Is Null

WMI is organized hierarchically – there is a hierarchy of namespaces as class containers, and there is a hierarchy of classes within each namespace.
There is only one top level namespace, called 'Root', but there is always more than one top level class in a namespace (even when you create a new empty namespace, a number of system WMI classes are created automatically). You can use this query to get all top level classes for a namespace. (This query also works if you use '=' instead of 'Is'.)

Select * From Meta_Class Where __Superclass = "Win32_CurrentTime"

For each WMI class, the __Superclass property holds the name of its immediate parent class. You can use this query to return all immediate children classes of a class. Note the quotes around the class name. __Superclass is one of the WMI system properties (see details here), and you can use them in schema queries. All except one – the __Derivation property is a string array, and you can’t use array properties in WQL queries. The above query returns two classes: Win32_LocalTime and Win32_UTCTime, the immediate children of Win32_CurrentTime.

Select * From Meta_Class Where __Dynasty = "Cim_Setting"

__Dynasty is another WMI system property – for each class, it holds the name of the top level class from which the class is derived. This query will return all children of Cim_Setting, a top level class situated in the Root\Cimv2 namespace.

Select * From Meta_Class Where __Class Like "Win32%"

All WMI classes belong to a schema (or at least they should). For example, classes whose name begins with ‘Cim’ belong to the Cim schema, a group of ‘core and common’ WMI classes defined by the DMTF. Classes that begin with ‘Win32’ belong to the ‘Win32’ schema – these classes are derived from Cim classes and extend them. You can use this query to list all classes that belong to the ‘Win32’ schema. The query uses the Like operator – this means it can’t be used on Windows versions earlier than Windows XP, because the Like operator was added to WQL for XP.
Select * From Meta_Class Where __This Isa "__Event"

This is not an event query, despite the fact that it uses the __Event class. It is a schema query that lists all child classes (both direct and indirect) of __Event. Note the use of ‘__This’, a special WMI property used in schema queries, and the ‘Isa’ operator.
https://www.codeproject.com/Articles/46390/WMI-Query-Language-by-Example?PageFlow=Fluid
# logger.py
def log(*args, **kwargs):
    from .util import dict_format
    if kwargs:
        print("LOG: %s, %s" % (str(args), dict_format(kwargs)))
    else:
        print("LOG: %s" % str(args))

#—————–
# util.py
from logger import log

def my_func1(bla):
    log("calling my_func1")

def my_func2(bla, **kwargs):
    log("calling my_func1", **kwargs)

def dict_format(d):
    # now here, we can't call "log" with kwargs or we get an infinite loop
    return str(d)

from miniboa import TelnetClient
from miniboa.telnet import TelnetClient

If you listen in on the ichat channel on IMC, you may know that I've been working on an IMC client with Python in addition to a MUD built in Python using the Miniboa telnet library. Progress has been good on both of these projects, and as of tonight I've integrated the IMC client into my MUD and ran some successful tests!

After all of this I've come across a few realizations. First, I've been teaching myself the Python language using these as my pet projects to try different things out. The biggest problem I've encountered is circular import problems, which is symptomatic of a bad design. Unfortunately, coming from a C# background where everything kind of "plugged together" through meta-data, I was not prepared for tackling module design properly. After a bit of hacking here and there to get around these issues, I believe it is time to redesign my codebase.

My question is this: Does anyone know of good on-line material covering idioms and/or best practices in regards to the Python programming language? Especially in regards to organizing modules and bigger multi-file projects. Thanks!
http://mudbytes.net/forum/topic/2758/
For the last 5 years, I've been using an Apple Magic Keyboard. There wasn't any major reason to be using it, except having the same layout of my laptop, but it did the job pretty nicely. After joining Finiam, I started looking more into how I used my keyboard, and how I could become more efficient with it.

A thing that was bothering me for some time was the fact that my keyboard had a Portuguese layout, and every time I wanted to make {, ] characters, Finger Gymnastics 101 would be required. I know I could always change the input language, but it wasn't the same thing. This was just the excuse I needed to get a new keyboard, and with a US layout. So I went by and bought a Keychron k2v2 and set myself up to start looking for more ways of having it optimized for me.

The first thing that I did was installing Karabiner-Elements. It's a keyboard customizer for macOS, where you can easily swap a key with another, use a base set of complex modifications developed by the community, or create your own rules. I started investigating and trying new combinations... But then it came to my mind: "Wouldn't it be nice if I could make a heatmap of my keystrokes and then create aliases or easier configurations for things that I identify as the most used?"

Well, of course, it would, and I can do it as a nice excuse to play with Elixir again. So let's get our hands dirty and get into some code.

Developing the Heatmapper

First, we need to collect our keystrokes. This would be a nice challenge to do, but I'll leave it on the side for now. So, I started looking into existing keyloggers in Github (Yeah, I know this sounds a bit weird because the chosen one could be malicious - I'm aware of that possibility). After a few possibilities, I found caseyscarborough/keylogger, went through the code, and it seemed simple to use and make changes to adjust it for my specific needs.

Now let's dive into the heatmap and drawing part.
From the beginning I wanted the heatmapper to draw the keyboard and not use a base SVG layout (or something similar). Zamith's mogrify_draw enables me to draw keyboard keys in a simple way, using mogrify as the base. Another thing was that since the code will handle drawing a keyboard, it would be an even cooler solution to be able to draw any keyboard, given the provided structure of the keys.

The first step is to parse the log file of keystrokes provided. As I mentioned earlier, I made some changes to the keylogger, and this version can be found here. You can then follow the instructions in the README on how to use it, but it outputs to the file the following on each keystroke read:

<keycode>::<keyname>

I'll probably remove keyname in the future, as I'm not using it in the application itself (just for debugging purposes), and therefore reduce the size of the log file.

Looking at the code, we first open the file and then parse its content to generate a map of frequencies. In the parse_line/1 function, it uses a regex to get the keycode in each line.

{:ok, contents} <- File.read(path)
frequencies = parse(contents)

def parse(contents) do
  contents
  |> String.split("\n", trim: true)
  |> Enum.map(&parse_line(&1))
  |> Enum.reject(&is_nil(&1))
  |> Enum.frequencies()
  |> normalize_frequencies
end

The normalize_frequencies/1 function calculates the maximum value of keystrokes (the max frequency), and then for each value in the frequencies map, returns a normalized value: value/max. See the example below:

# log file contents
31::o
45::n
8::c
31::o

# frequencies map after parse
frequencies = %{
  31: 2,
  45: 1,
  8: 1
}

# frequencies normalized
max_frequency = 2
normalized = %{
  31: 2/2,
  45: 1/2,
  8: 1/2
}

Now that we have normalized values, we can jump to the drawing process. The first thing to do here is to implement the logic for several keyboard layouts to be drawn, so together with the path to the log file, it's also necessary to pass which keyboard type to draw.
For organizational purposes, I decided to divide the layouts by keyboard brand first, and then by type. For example, below we have the types :keychron, :macbook and :niz.

def get_keyboard_layout(model, keyboard) do
  case model do
    :keychron -> keyboard_layout(Layouts.Keychron, keyboard)
    :macbook -> keyboard_layout(Layouts.Macbook, keyboard)
    :niz -> keyboard_layout(Layouts.Niz, keyboard)
    _ -> {:error, :no_model}
  end
end

$> get_keyboard_layout(:keychron, :k2v2)

Calling the function with a valid model and type returns the keyboard layout. Each layout is under layouts/<model>.ex, and consists of a function with the name of the keyboard type and the following structure:

# file: layouts/keychron.ex
def k2v2 do
  [
    [
      %{size: 1, keycode: 53, name: "esc"},
      ...
    ],
    [
      %{size: 1, keycode: 50, name: "`~"},
      ...
    ],
    ...
  ]
end

It's basically a list of rows of the keyboard, where, for each row, the list of keyboard keys is defined. A keyboard key can be defined with keycode, name, and size (assuming that a size of 2 will draw a key with a width of 2 * DEFAULT_KEY_PX_SIZE and a fixed height of DEFAULT_KEY_PX_SIZE). It is also possible to define a height and width instead of size if we wish to draw keys with different heights.

Now let's see how everything is drawn. The first step is using mogrify_draw to paint the base image where everything will be laid next. It'll create a png with the respective width and height of the keyboard we want to draw.

defp draw_base_image(keyboard) do
  %Mogrify.Image{path: "heatmap.png", ext: "png"}
  |> custom("size", "#{95 * keyboard_width}x#{95 * keyboard_height}")
  |> canvas("white")
end

To draw each keyboard key, we go through each row, and subsequently across all the row's keys, keeping track of the xx_position and yy_position that we are at in the moment. With this information, we know the exact place in pixels to start drawing the key in the base image.
But before actually drawing, we get the respective frequency for that key and the RGB color calculated according to that frequency. Then, we draw a rounded rectangle (the key) and white text over it (the key name).

defp draw_key(key, xx_position, yy_position, image, frequencies) do
  freq = frequencies[to_string(key.keycode)] || 0
  {red, green, blue} = get_rgb_color(freq)

  image
  |> custom("fill", "rgb(#{red},#{green},#{blue})")
  |> rounded_rectangle(
    xx_position * @key_size + 10,
    yy_position * @key_size + 10,
    xx_position * @key_size + @key_size * key_width(key),
    yy_position * @key_size + @key_size * key_height(key),
    10,
    10
  )
  |> custom("fill", "white")
  |> Mogrify.Draw.text(
    xx_position * @key_size + 45,
    yy_position * @key_size + 45,
    key.name
  )
end

Looking at the function rounded_rectangle, it's derived from the ones enabled by mogrify_draw, but uses the roundRectangle primitive of the -draw option of the mogrify command-line tools (check here all the primitives from mogrify if you need something more custom). For this primitive, it's necessary to pass the base image and the upper-left and bottom-right points, together with the border-radius information.

defp rounded_rectangle(
       image,
       upper_left_x,
       upper_left_y,
       lower_right_x,
       lower_right_y,
       border_w,
       border_h
     ) do
  image
  |> custom(
    "draw",
    "roundRectangle #{
      to_string(
        :io_lib.format("~g,~g ~g,~g ~g,~g", [
          upper_left_x / 1,
          upper_left_y / 1,
          lower_right_x / 1,
          lower_right_y / 1,
          border_w / 1,
          border_h / 1
        ])
      )
    }"
  )
end

After recursively going through all the rows and keys, the image heatmap.png is created.

- Grey keys are keys that weren't used (or whose keycode is not being detected by the keylogger).
- Stronger blue keys are the ones used less frequently.
- A mix of blue and red marks the keys with mid-range frequency.
- Red keys are the most used ones.

You can check the complete code, and more detailed instructions on how to run it, in this Github repo.
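The post doesn't show get_rgb_color, but given the color legend above, one plausible sketch is a linear blend from blue to red, with grey reserved for unused keys. The exact mapping here is my guess, written in Python for illustration, not the project's Elixir implementation:

```python
def get_rgb_color(freq):
    """Map a normalized frequency in [0, 1] to an (r, g, b) tuple.

    0 means the key was never seen -> grey; otherwise blend linearly
    from blue (rarely used) to red (heavily used).
    """
    if freq == 0:
        return (128, 128, 128)        # unused key -> grey
    red = int(255 * freq)             # more use -> more red
    blue = int(255 * (1 - freq))      # less use -> more blue
    return (red, 0, blue)

print(get_rgb_color(1.0))   # (255, 0, 0)   most-used key
print(get_rgb_color(0.5))   # (127, 0, 127) mid-frequency, blue/red mix
print(get_rgb_color(0))     # (128, 128, 128) unused
```

Any monotone blue-to-red ramp would match the legend; a perceptual colormap would work too, at the cost of a lookup table.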
Looking at the final result of a week of work, we can identify some keys that are heavily used, such as the arrow keys, left command, left shift, left control, and even the backspace. It's a fairly good representation of my usage, as those keys take part in most of the keybindings that I use for work (except the backspace - not sure what I delete so much 😅).

Creating custom key modifications with Karabiner-Elements

As I mentioned earlier, initially I made some simple modifications with Karabiner, but now we're in a position to make some more complex ones that are not provided by the community. To create custom modification rules, we need to create a file under ~/.config/karabiner/assets/complex_modifications/ - you can have a single file with all the rules or group them in different files. Once the rules are valid, they will appear as available to enable in the Complex modifications tab. As an example, below I'm creating a simple rule to change the caps_lock key to be a control key, and to be just a caps_lock when pressed together with shift.

karabiner/assets/complex_modifications/capsToControl.json

{
  "title": "Change caps key",
  "rules": [
    {
      "description": "Change caps_lock key to control. (Use shift+caps_lock as caps_lock)",
      "manipulators": [
        {
          "type": "basic",
          "from": {
            "key_code": "caps_lock",
            "modifiers": {
              "mandatory": ["shift"],
              "optional": ["caps_lock"]
            }
          },
          "to": [{ "key_code": "caps_lock" }]
        },
        {
          "type": "basic",
          "from": {
            "key_code": "caps_lock",
            "modifiers": { "optional": ["any"] }
          },
          "to": [{ "key_code": "right_control" }]
        }
      ]
    }
  ]
}

To see more on how to write these rules, you can check the Karabiner Configuration Reference Manual or use this online editor to generate them.

Wrapping up

There's still some stuff to do to make the heatmap more accurate and robust, such as making it work for more complex keyboard layouts, like split keyboards or ones with vertical gaps between keys. But I'll leave it for later when I have the time.
This was a fun and simple project to try some ideas and make something generic that could be used by other people. The experiments with Karabiner-Elements weren't all described here, but they were about trying out more multi-key combinations, using only keys that I don't find necessary for my daily use, such as end, page down and page up. Hope you also liked reading it and that you give it a try. Have a nice one and see you in the next post 👋
Fixing Venture Capital Article

(Couldn't find a 'Discuss this article' link on the page, and I don't see one in the list of topics, so I'm arbitrarily starting my own damn topic to ask a question:)
Christo Fogelberg Wednesday, June 04, 2003

Joel mentions three books and two blogs themed around VC... but doesn't mention any to do with the non-VC or Ben-and-Jerry way of starting up a software company. Has anyone read or found anything relevant to this way of doing things? (Random Addendum: I've read High St@kes, No Prisoners too - it's worthwhile reading if you ever want to do the entrepreneurial thing:)
Christo Fogelberg Wednesday, June 04, 2003

Great article Joel.
John Rosenberg Wednesday, June 04, 2003

Interesting analysis (as always) from Joel. Only question is: What's the alternative? I guess self-funding is the best way to go in many cases (that's what I'm doing), or perhaps a loan from a friendly bank (friendly compared to a VC at least). Anyone care to share their experiences of growing software businesses, either with or without VC support?
Steve Jones (UK) Wednesday, June 04, 2003

Good, straightforward article, given the target audience, but it shouldn't be news to anyone (anyone who's seriously looking for funding) that some VCs rather like the all-or-nothing business plan. "I don't want to invest in nice little companies" was how a guy I was talking to the other day put it. And fair enough, in my opinion. You want a secured loan for a small amount, go talk to the bank - that's what they're for. But entrepreneurs do need to know that this is the state of play, so good one JS for explaining it so clearly. Oh, and one upside of the recent slowdown in VC investment (at least here in the UK) is that VCs are more willing to 'dance' now.
Able Baker Wednesday, June 04, 2003

Where to begin? As an entrepreneur who has considered the VC route, this article really struck a nerve.
I think Joel is mostly correct about this, and I think that VCs have several other perverse incentives which make them a problem in the entrepreneurial process in many, but certainly not all, cases. The most significant of these, I think, is that the VC has no incentive to create a company which will be profitable in the long term. Their incentive is to create a company which looks like it will be profitable long term, at about the seven year point. You would think that actually creating a long term profitable company would be the best way to do this, but in reality it is not.

Of course, in defense of the principle of VC, I must point out that because of the VC explosion in the late nineties, most of us have, as examples, some pretty poor quality VCs to look at. There were just too many people trying to be VCs and the average quality just had to suffer. I had always read that a proper VC wasn't just a source of money but that he, and your board, would offer advice and guidance. Judging by many of the amateurish decisions the start-ups I have worked for made, this system just wasn't working in the late nineties.

Another problem I have with VCs is the same problem most people have: they see what they do as the most important part of the process. You will hear it again and again - the most important thing is the management team. They will tell you that they would rather have an "A" management team and a "C" technology than the other way around. That can't be right, can it? I mean, I agree the management team should be great, but what are they going to do with crap technology? I think that to be successful, all aspects of your business need to be pretty good. This attitude (not limited to business types, BTW) that what I do is the tricky part and everything else just falls into place if you spend the money, is absolute nonsense.
Another thing that drives me nuts is the attitude of VCs (and angels to some extent) that they own the process and that the silly little entrepreneur only disagrees with them through lack of experience. I have read and heard it again and again, VCs and angels saying the biggest problem is that entrepreneurs overvalue their companies. Get it? Every startup has an intrinsic fair valuation which is known to the potential investor, while the guy who started the company is all caught up in emotion and cannot fairly value it. It's not that the principle of supply and demand enters into things, or that one party has a price he is willing to pay and the other has a price at which he will sell; there is a correct price which the VC knows, and the entrepreneur must be carefully educated for the deal to move forward.

BTW, I'm not sure this still goes on, but in the recent heyday of VC, didn't these guys make a butt-load of money just for holding on to (er - managing) funds which had not yet been invested?

This is not to say that VC is all bad. There are many types of companies for which VC funding is the correct way to go. There are many for which it is not, including those great little companies which expect to make a modest return over a long period of time. VC is necessarily based on the rare big score and can't really work any other way. For smaller ventures you really need an angel or two.

What I would be interested in seeing is a study on the success and failure rates of VC backed companies. I have had business people tell me that giving most of your company to a VC with the hope of it increasing rapidly in value is just the way businesses are built these days, but is it? What percentage of the Fortune 500 were built this way? My apologies for the long post - this one just stuck in my craw. It kinda hurts.
Erik Lickerman Wednesday, June 04, 2003

Google is one company where the VCs seem, repeat SEEM, to be in it for the long run.
Whenever they're being interviewed, Sergey, Larry and Eric all deny an upcoming IPO. They often say they haven't even started discussing it. Maybe Kleiner Perkins have made some secret lucrative deal or maybe it's Google's unique way of doing business, I don't know, but it looks to me like they've got a different deal with the VCs than most other computer-related companies.
Rikard Linde Wednesday, June 04, 2003

I have the same feelings towards VCs as Joel. What else to do? Be self sufficient. That may mean putting a really big idea on the back burner if it's capital intensive up front. Banks? Trust me, banks only loan money to businesses that don't need it.
Sam Jurgensen Wednesday, June 04, 2003

Christo, Take a look at a book called 'Growing a Business' by Paul Hawken.
UI Designer Wednesday, June 04, 2003

Another good book that develops Joel's "Ben & Jerry's" approach is "Beyond Entrepreneurship: Turning Your Business Into An Enduring Great Company" by Collins and Lazier. I too believe that most businesses should be grown incrementally from cash flow to maximize chances of success. However, there are some exceptional cases where VC money makes sense -- Federal Express (requires huge infrastructure to get started that could only be developed with outside money) and Amazon (needed to develop a brand quickly) are two. Joel, I like your time plots of a growing business. There's a simulation called "Boom & Bust Enterprises" that dynamically develops similar behavior for a growing business (showing the trade-offs in capacity, pricing, marketing). If you like, you can try it at:
Michael Bean Wednesday, June 04, 2003

Strange. The URL I posted in the message above doesn't seem to work. Try:

It is a good article but it is missing a point reiterated in an earlier post. I'll reiterate it a bit differently... The VC's _sole_ interest is in making the VC money in the short term. They do this by _selling_ the impression of "long term" profitability.
This impression might reflect the truth, but it could reflect pure fantasy. Most, not all, founders' interest is in creating a sustainable business (in fact). This may be a sufficient property but is not necessary. Some founders do have the _same_ interest as the VC (namely creating an impression of long term profit that is not real). The VC business is a bit like a pyramid scheme in that the earlier participants make money and the later participants don't (always in a pyramid scheme, but sometimes in a VC'd business). Another example is that some businesses are organized to produce "attractive" crap that they plan to sell only for a short period (e.g. "as seen on TV" garbage).
njkayaker Wednesday, June 04, 2003

I've found these two points among the thread, but wanted to lay them bare. And baseball's cool. Stripping out all the charts and graphs, the article makes two good points.

1. The way that business owners define success is different from the way that VCs define success. A business owner (in Joel's estimation) defines success as sustainable growth that yields a company with longevity. He wants base hits. The VCs (in Joel's estimation) define success as money earned on the company-as-product. They want home runs.

2. The way that VCs mitigate risk (increase "success" chance) is very different from the way that owners mitigate risk, and that difference leads to conflict. VCs mitigate risk by owning multiple teams, one of which is sure to win the World Series. Business owners mitigate risk by growing and developing the one team they are responsible for, hoping to be home-town darlings and win game-by-game.
Daniel Mehaffey Wednesday, June 04, 2003

There are two other points of tension between the startup-committed entrepreneur and VC capital. The classic VC curve for software companies is 5 years.
That is, the VC will support losses for around 4 years, so long as they can float the company in the fifth year (there are multiple rounds of finance on the way to the 5th year). As Joel says, ten years is a more reasonable time for software to actually become both stable and profitably saleable. So trying to meet a 5 year cycle just pushes those other curves; it's another pressure to over-hire and grow too fast just because there is VC capital involved.

The other point is that the ultimate aim of the VC is to sell the original stake for a bumper return, either by flogging it to some other even riskier VC or going the IPO route, going public. The last thing the original creators are interested in (if they are committed entrepreneurs) is flotation. Employees can get excited by an IPO (well, the first couple of times they get sold that pup), but the original owners are giving away everything they created in the first place. Now if it's successful they have a huge paper return (which they can't really cash in without throwing the baby out along with it), but they get a whole squadron of major investors peering over their shoulders, and they are far worse than VC auditors.
Simon Lucy Wednesday, June 04, 2003

I completely agree with this article. Joel has summed up my experiences with VCs far better than I could have done. I definitely counsel against VC funding at any startup I work for, unless it's really necessary (i.e. you can't generate a revenue stream without it). The vast majority of VCs have little business expertise to share and can be likened to a lead weight around the leg of a swimmer. There are good VCs out there, but unless you get lucky or they approach you, it's unlikely you will have an opportunity to do business with them. I certainly wouldn't build a strategy around getting a good VC firm.
Brian Niemeyer Wednesday, June 04, 2003

If employees (or anybody) are excited by an IPO, they are interested in the short term prospects (i.e.
they are largely indistinguishable from VCs). If they are interested in getting a wad of shares (that will have long term value), then they are "founders".

Erik - a *true* "A" management team will have "A" technology. They'll be able to hire and retain excellent coders who can create the "A" tech you need. The trick is - do not confuse an "A" salesman who's a "D" manager with a true "A" manager.
Philo
Philo Wednesday, June 04, 2003

Sometimes, well quite often in the past, employees have been enticed on board startups with the promise of founder shares and an IPO valuation plucked out of the air, and in return have taken a hit on pay or contract terms.

Joel made very valid points in his essay. This brought up a couple of questions: 1.) Aren't angel investors a bit different from VCs? Don't they (angel investors) align themselves more with the founders? 2.) Since most venture capital firms hire people who have founded successful companies, why do they still align their goals in this manner? thoughts…
Prakash S Wednesday, June 04, 2003

What I didn't understand about the VC as a concept (and the article): what's the point of their policies - they are bound to run down _all_ the companies they own, and ruining all the capital they have is also not their objective. In the current economic climate the likelihood of some place making it like Netscape is 0, so holding on to the old game looks like a lose-lose situation. In other words, why didn't VC change as a result of the slowdown? Another question: there seem to be more VC holdings than Netscapes, so how did the remaining VC holdings manage to survive the apparent lack of Netscapes?
Michael Moser Thursday, June 05, 2003

Philo, I agree. You cannot be considered a good management team if you don't know how to evaluate, hire and retain "A" people in most parts of your business (not just tech). This, I suppose, is why it is nauseating/ironic that angels say they would rather have "A" management and "C" technology.
Erik Lickerman Thursday, June 05, 2003 A lot of enterpreneurs who become VC partners are "serial enterpreneurs" who have started a bunch of companies. The've have learned that it's much more profitable (and easier) to start a company, get it off the ground a bit through whatvever means necessary (inflated revenues, PR, employees), and cash out. Very few of these people have ever seen a company through long term. These people aren't that different from VCs. One or two early successes are enough to set you up so that you can keep trying to hit the home run again with the next startup. You can't be completely stupid or you'll just waste away your bankroll, but you can definitely try to reduce the risk by trying a bunch of times. VC model works extremely well for these kinds of enterpreneurs because no good gambler in his right mind would want to bet with his own money. On the other hand, if you're interested in building a company for the long term, VCs are not the best option for all the reason Joel listed. AC Thursday, June 05, 2003 One reason VCs look for "A" team rather than "A" idea is that world changes pretty fast. Something that was "A" idea today, could be wrong tomorrow. Or your idea may look great on paper but may not fly in the market in the first place. Even if your idea works, you'll need to expand your product line and market to grow. A great team can adjust to these problems, while a mediocre team will just get bogged down. igor Thursday, June 05, 2003 The reason for backing the best teams (which really means having a track record), is that they deliver. Now, given a lead weighted sow the best team in the world might fail, that's less important. What is important is that the risk is minimised by having the best team. Simon Lucy Thursday, June 05, 2003 There are two key reasons why without an "A" management team, you're screwed: 1) Having a few bad employees rarely ever destroy a company. 
If nothing else, they just don't have the reach of influence and power required. Bad managers, however, can crush even the greatest of big companies in pretty short order. They can eliminate morale, implement a system-wide paradigm of perverse incentives, fire the best and most experienced employees, explode a brand name into a worthless string of letters, and many other such horrifying hair-raising things. Bad employees and bad technologies can typically be recovered from. But bad managers... you'll not likely survive long enough to put up much of a fight, depending on the business and its environment.

And: 2) If you look at all the successful people and companies in the world, you know what you'll often find? Their first ideas failed. Their original plans had to be discarded or changed beyond all recognition because conditions changed or they just plain didn't work. It might turn out that your A technology is a dead-end bust. An unforeseen competitor could come out of nowhere and take you from an A straight to an F. Lawsuits, sudden changes, major evolutions or revolutions... you can't predict them. And what if your main value is technology, in the age where the speed of change is perhaps the fastest in the world? Good management, however, by definition must be capable of dealing with major changes effectively. Products and campaigns fail all the time, yet good companies must survive them and constantly seek to improve. Example: New Coke. Whoops - yet I don't know of many people who are betting Coca Cola will fail.

Of course, this is all rather begging the question. If one could really identify A-anything reliably and consistently, one wouldn't be worried about wasting money, because one would have approximately infinite supplies of it.
Plutarck Tuesday, June 10, 2003
#include <jevois/Component/ValidValuesSpec.H>

Regex-based valid values spec: everything that matches the regex is considered valid. Uses boost::regex internally (because std::regex does not allow one to get the original string specification back from the regex, but we need that to display help messages). This allows for highly flexible valid values definitions. For example, say you want an int parameter to be in range [0..59] but it could also have value 72; your regex would be: ^(([0-5]?[0-9])|72)$ You can find on the web regex examples to match things like a valid filename, a valid URL, a valid credit card number, etc. Just make sure your regex is in the syntax expected by boost::regex, since several syntaxes are floating around for regular expressions. Definition at line 175 of file ValidValuesSpec.H.

No default constructor; a regex must always be provided. Construct from a given regex that specifies valid values. Definition at line 140 of file ValidValuesSpecImpl.H.

Destructor. Definition at line 145 of file ValidValuesSpecImpl.H.

Check whether a proposed value is valid; returns true iff the value is a match against our regex. Implements jevois::ValidValuesSpecBase< T >. Definition at line 149 of file ValidValuesSpecImpl.H. References jevois::paramValToString().

Convert to a readable string. Returns Regex:[expression] where expression is replaced by the actual regex. Implements jevois::ValidValuesSpecBase< T >. Definition at line 156 of file ValidValuesSpecImpl.H.

The regex that defines our valid values. Definition at line 195 of file ValidValuesSpec.H.
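To experiment with the idea outside of C++ (and without boost), here is a rough Python analog of this class, using the [0..59]-or-72 example from the docs. re.fullmatch plays the role of the ^...$ anchors, and str() mirrors the Regex:[expression] format documented above:

```python
import re

class ValidValuesSpecRegex:
    """Accepts a value iff its string form fully matches the regex."""

    def __init__(self, pattern):
        self.pattern = pattern
        self._re = re.compile(pattern)

    def check(self, value):
        # fullmatch anchors at both ends, like ^...$ in the C++ example
        return self._re.fullmatch(str(value)) is not None

    def __str__(self):
        return f"Regex:[{self.pattern}]"  # mirrors the documented str() format

spec = ValidValuesSpecRegex(r"([0-5]?[0-9])|72")
print([v for v in (0, 59, 60, 72, 99) if spec.check(v)])
# [0, 59, 72]
```

Note that 60 and 99 are rejected: the first alternative only covers 0-59, and the second only the literal 72, which is exactly the "range plus one extra value" trick the docs describe.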
Diagrams/Dev/Fixpoint
From HaskellWiki
Revision as of 03:14, 25 May 2014

It's important that this is NOT a newtype, so we can freely use the Monad instance for Contextual when working with diagrams. This also means we don't need Wrapped or Rewrapped instances for it anymore.

- The implementation of applyAnnotation needs to change. All it needs to do is create a new etc.)
- Of course the implementation of almost all the rest of the functions in this module will need to change as well. I'll walk through a few particular examples and then discuss more general principles.

First, value, and uses it to create a leaf DUALTree. Everything up through UpAnnots still applies (though the name of UpAnnots has changed). What's different is that we aren't creating an explicit tree with summary values at leaves; we simply return the summary value as part of the result of the diagram function. So we can create a function leafS in parallel with leafU:

emptyTree = Node REmpty []

leafS :: Summary b v m -> QDiagram b v m
leafS s = return (emptyTree, s)

Then the implementation of pointDiagram only needs to change to

pointDiagram p = leafS (inj . toDeletable $ pointEnvelope
I’ve been working with container networking a bunch this week. When learning about new unfamiliar stuff (like container networking / virtual ethernet devices / bridges / iptables), I often realize that I don’t fully understand something much more fundamental. This week, that thing was: network interfaces!! You know, when you run ifconfig and it lists devices like lo, eth0, br0, docker0, wlan0, or whatever. Those. This is a thing I thought I understood but it turns out there are at least 2 things I didn’t know about them. I’m not going to try to give you a crisp definition, instead we’re going to make some observations, do some experiments, ask some questions, and make some guesses. What happens if you don’t have any network interfaces? I was messing around with network namespaces, and I created a new one with: sudo ip netns add ns1 It turns out that when you create a new network namespace, it doesn’t have any network interfaces at all! What does that mean? Let’s explore and see what it looks like: We can run commands inside this new network namespace with sudo ip netns exec ns1 COMMAND. I’m just going to run a shell inside this network namespace, and then try out some things. So let’s start with sudo ip netns exec ns1 bash $ sudo ip netns exec ns1 bash $ ifconfig (no output) That makes sense, this is a new network namespace so there are no network interfaces set up yet. Still inside that network namespace, let’s try to make a webserver and connect to it. $ nc -l 8900 & # make a server on port 8900 $ netstat -tulpn # list open ports Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address PID/Program name tcp 0 0 0.0.0.0:8900 2918/nc $ curl localhost:8900 curl: (7) Couldn't connect to server Okay, so this is sort of interesting. I can create a server on port 8900 with nc -l 8900. And netstat shows that that server exists. But when I try to curl localhost:8900, nothing happens! What if I try to create a server listening on 127.0.0.1? 
sudo nc -l 127.0.0.1 8080
nc: Cannot assign requested address

Doesn't work. Makes sense. I think what's happening here is:

- nc -l 8900 is listening on 0.0.0.0:8900, which means "all network interfaces" - but there are no network interfaces
- so when we do curl localhost:8900, no packets actually get sent (when I ran tcpdump, no packets show up)
- so nc never receives any packets

Let's do an experiment to try to confirm our hypotheses: let's add a network interface! The idea is that if we have a lo network interface, then curl localhost:8900 will actually send packets, nc will receive them, and everything will work.

$ ip link set dev lo up # this sets up the 'lo' loopback interface
$ curl localhost:8900 # BAM! this totally works!
# the backgrounded netcat prints out this output:
GET / HTTP/1.1
User-Agent: curl/7.35.0
Host: localhost:8900
Accept: */*

This is rad. What we know now:

- if you don't have any network interfaces, you can't do any networking (but you can start servers on 0.0.0.0 and netstat shows those servers)
- when we add a network interface, our server starts working right away (without having to restart the server)

A packet can appear multiple times in tcpdump

Something I've been observing recently but haven't fully understood is - sometimes I'll be on a machine which has

- virtual network interfaces for each container (vethXXXXXXX)
- a bridge interface (cni0)
- and a "real" network interface to the outside world (eth0)

When containers send packets to the outside world and I'm running sudo tcpdump -i any, I'll see those packets 3 times. I know a few more things about how tcpdump works:

- I can run sudo tcpdump -i cni0 to listen on a specific interface. When I do that, the packets appear only once
- tcpdump happens at the "beginning" of the network stack. I think that means that packets are captured by tcpdump when packets enter a network interface

What does "enter a network interface" actually mean, though?
I tried to look at this 20,000 word article on the Linux network stack and I think I have a workable theory!

What happens when a packet is created?

Okay, so I skimmed Monitoring and Tuning the Linux Networking Stack: Receiving Data and I think I have a working hypothesis for how packets

- get assigned network interfaces
- get captured by tcpdump
- can be assigned more than one network interface

First things first, this document refers to "network interfaces" as "network devices". I think those are the same thing. So!! Let's say I create a packet on my computer.

step 0: iptables prerouting rules

step 1: the packet gets routed. Routing a packet means "assigning it a network device". Let's do a tiny experiment in routing - I have 3 interfaces on my computer right now

$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:ef:ab:0d:ac
        inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
enp0s25 Link encap:Ethernet HWaddr 3c:97:0e:55:b3:7g
        inet addr:192.168.1.213 Bcast:192.168.1.255 Mask:255.255.255.0
lo      Link encap:Local Loopback
        inet addr:127.0.0.1 Mask:255.0.0.0

and here are the routes:

$ sudo ip route list table all
default via 192.168.1.1 dev enp0s25 proto static metric 100
169.254.0.0/16 dev docker0 scope link metric 1000 linkdown
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev enp0s25 proto kernel scope link src 192.168.1.213 metric 100
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
local 172.17.0.1 dev docker0 table local proto kernel scope host src 172.17.0.1

So - if I make a request to 172.17.0.1 (curl 172.17.0.1:8080), it seems like that would end up on the docker0 device. Right? Wrong, apparently. If I run tcpdump -i lo, packets to 172.17.0.1 show up, and if I run tcpdump -i docker0, the packets don't show up. So it seems right now, on my machine, packets sent to 172.17.0.1 go through the lo device.
The reason they get sent to lo instead of docker0 is that there’s a route for 172.17.0.1 in my route table that says local – the same reasons that packets to 127.0.0.1 get sent to lo. step 2 tcpdump gets the packet This is pretty straightforward – once there’s a network device attached to the packet, then tcpdump gets the packet. That’s all I know for now! ok so what do we know about network interfaces? Here’s what I think so far: - they can be physical network interfaces (like eth0) or virtual interfaces (like loand docker0) - you can list them with ifconfigor ip link list - if you don’t have any network interfaces, your packets don’t enter the linux network stack at all really. To go through the network stack you need network interfaces. - When you send a packet to an IP address, your route table decides which network interface that packet goes through. This is one of the first things that happens in the network stack. - tcpdump captures packets after they’re routed (assigned an interface) Though there’s a PREROUTINGchain in iptables that happens before routing!` Some of this is probably wrong, let me know what! I’m on Twitter as always ()
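The routing decision in step 1 can be emulated with Python's ipaddress module. This sketch flattens the kernel's separate local/main tables into one longest-prefix match - an approximation, but it gives the same answers for the addresses discussed here: the /32 host routes stand in for the local-table entries, which is why 172.17.0.1 lands on lo rather than docker0.

```python
import ipaddress

# (prefix, device) pairs mirroring the `ip route list table all` output above;
# the /32 entries are the host routes from the `local` table.
ROUTES = [
    ("127.0.0.1/32", "lo"),
    ("172.17.0.1/32", "lo"),      # local table: docker0's own address
    ("127.0.0.0/8", "lo"),
    ("172.17.0.0/16", "docker0"),
    ("192.168.1.0/24", "enp0s25"),
    ("0.0.0.0/0", "enp0s25"),     # default route
]

def route(dest):
    """Longest-prefix match: the most specific matching route picks the device."""
    ip = ipaddress.ip_address(dest)
    nets = [(ipaddress.ip_network(p), dev) for p, dev in ROUTES]
    matches = [(net, dev) for net, dev in nets if ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route("172.17.0.1"))   # lo  -- not docker0!
print(route("172.17.0.5"))   # docker0
print(route("8.8.8.8"))      # enp0s25 (falls through to the default route)
```

The real kernel consults the local table before the main one rather than merging them, but since host routes are always the most specific, longest-prefix match reproduces the behavior observed with tcpdump above.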
https://jvns.ca/blog/2017/09/03/network-interfaces/
CC-MAIN-2019-09
refinedweb
1,339
72.87
In the last article Python SMTP Send Email Example we learned how email is transferred over the internet to the receiver’s email address, and we also learned the basic source code to send email to an SMTP server in a Python program. In this article, we will tell you how to send more complex email content such as Html format content, image, and attachment through an SMTP server. 1. Send HTML Content In Email. What if we want to send HTML messages instead of plain text in email? The method is simple. When you construct the MIMEText object, pass in the HTML string, and change the second parameter from ‘plain’ to ‘html’; the other code stays the same. When you send the email again, you can see the Html content in the email body. from email.MIMEText import MIMEText msg = MIMEText('<html><body><h1>Hello World</h1>' + '<p>this is hello world from <a href="">Python</a>...</p>' + '</body></html>', 'html', 'utf-8') 2. Send Email With Attachments. Mail with attachments can be regarded as mail with several parts: text and attachments. So, we can construct a MIMEMultipart object to represent the mail itself, then add a MIMEText object that contains the email content to the mail body, and then add a MIMEBase object to the mail body to represent the attachment. Then, follow the normal send process to send the message ( MIMEMultipart object ) to an SMTP server, and the email with attachments will be received. # these imports cover everything the snippet below uses import smtplib from email.MIMEMultipart import MIMEMultipart from email.MIMEText import MIMEText from email.MIMEBase import MIMEBase from email.Header import Header from email import encoders # get user input # input sender email address and password: from_addr = input('From: ') password = input('Password: ') # input receiver email address.
to_addr = input('To: ') # input smtp server ip address: smtp_server = input('SMTP server: ') # email object that has multiple parts: msg = MIMEMultipart() msg['From'] = from_addr msg['To'] = to_addr msg['Subject'] = Header('hello world from smtp server', 'utf-8').encode() # attach a MIMEText object to save email content msg_content = MIMEText('send with attachment...', 'plain', 'utf-8') msg.attach(msg_content) # adding an attachment is just adding a MIMEBase object that reads a picture from the local disk. with open('/Users/jerry/img1.png', 'rb') as f: # set attachment mime and file name, the image type is png mime = MIMEBase('image', 'png', filename='img1.png') # add required header data: mime.add_header('Content-Disposition', 'attachment', filename='img1.png') mime.add_header('X-Attachment-Id', '0') mime.add_header('Content-ID', '<0>') # read attachment file content into the MIMEBase object mime.set_payload(f.read()) # encode with base64 encoders.encode_base64(mime) # add MIMEBase object to MIMEMultipart object msg.attach(mime) server = smtplib.SMTP(smtp_server, 25) server.set_debuglevel(1) server.login(from_addr, password) server.sendmail(from_addr, [to_addr], msg.as_string()) server.quit() 3. Send Email With Image. What if you want to embed an image in the body of the mail? Is it okay to link the image address directly in HTML mail? The answer is that most email service providers automatically block pictures with external links because they don’t know whether these links point to malicious websites or not. To embed the image into the body of the mail, we just need to add the image as an attachment in the way of sending attachments. Then, in HTML, we can embed the attachment as an image by referring to it with src="cid:0". If you have multiple pictures, number them in turn and then refer to each image with a different cid:x (where x is the image number).
from email.MIMEText import MIMEText msg.attach(MIMEText('<html><body><h1>Hello</h1>' + '<p><img src="cid:0"></p>' + '</body></html>', 'html', 'utf-8')) 4. Support Both HTML And Plain Text Formats. If we send HTML mail, the recipient can browse the content of the mail normally through the browser or software such as Outlook, but what if the device used by the recipient is too old to view HTML mail? The solution is to attach a plain text version while sending HTML. If the recipient can’t view the mail in HTML format, the client automatically falls back to the plain text version. Using MIMEMultipart, you can combine HTML and plain text content in one email. Note that the subtype should be alternative. from email.MIMEMultipart import MIMEMultipart from email.MIMEText import MIMEText # Pass 'alternative' to the MIMEMultipart constructor. msg = MIMEMultipart('alternative') msg['From'] = input('From: ') msg['To'] = input('To: ') msg['Subject'] = input('Subject: ') msg.attach(MIMEText('hello this is a plain text version.', 'plain', 'utf-8')) msg.attach(MIMEText('<html><body><h1>Hello this is html version</h1></body></html>', 'html', 'utf-8')) 5. Encrypted SMTP. When connecting to an SMTP server on the standard port 25, the transmission is in plain text, and the whole process of sending mail may be eavesdropped. To send email more safely, you can encrypt the SMTP session, which essentially means creating a secure TLS connection before sending mail using the SMTP protocol. # define smtp server domain and port number. smtp_server = 'smtp.yahoo.com' smtp_port = 587 # 587 is the standard submission port used with STARTTLS # create smtp server object. server = smtplib.SMTP(smtp_server, smtp_port) # use the STARTTLS command to secure the smtp session connection, all the data transferred by the session will be encrypted. server.starttls() # send the email as normal. server.set_debuglevel(1) 6. Send Both Html, Image & Alternative Text Example.
This example will use Python to send an email with HTML content; if the email client is too old to support HTML content, it also sends an alternative text content with it. The below source code also embeds an image in the email Html content. # Import smtplib library to send email in python. import smtplib # Import MIMEText, MIMEImage and MIMEMultipart module. from email.MIMEImage import MIMEImage from email.MIMEMultipart import MIMEMultipart from email.MIMEText import MIMEText # Define the source and target email address. strFrom = '[email protected]' strTo = '[email protected]' # Create an instance of MIMEMultipart object, pass 'related' as the constructor parameter. msgRoot = MIMEMultipart('related') # Set the email subject. msgRoot['Subject'] = 'This email contain both Html, text and one image.' # Set the email from email address. msgRoot['From'] = strFrom # Set the email to email address. msgRoot['To'] = strTo # Set the multipart email preamble attribute value. msgRoot.preamble = '=====================================================' # Create an 'alternative' MIMEMultipart object. We will use this object to save the plain text format content. msgAlternative = MIMEMultipart('alternative') # Attach the above object to the root email message. msgRoot.attach(msgAlternative) # Create a MIMEText object, this object contains the plain text content. msgText = MIMEText('This object contains the plain text content of this email.') # Attach the MIMEText object to the msgAlternative object. msgAlternative.attach(msgText) # Create a MIMEText object to contain the email Html content. There is also an image in the Html content. The image cid is image1. msgText = MIMEText('<b>This is the <i>HTML</i> content of this email</b> it contains an image.<br><img src="cid:image1"><br>', 'html') # Attach the above html content MIMEText object to the msgAlternative object.
msgAlternative.attach(msgText) # Open a file object to read the image file, the image file is located at the file path provided. fp = open('/usr/var/test.jpg', 'rb') # Create a MIMEImage object with the above file object. msgImage = MIMEImage(fp.read()) # Do not forget to close the file object after using it. fp.close() # Add 'Content-ID' header value to the above MIMEImage object to make it refer to the image source (src="cid:image1") in the Html content. msgImage.add_header('Content-ID', '<image1>') # Attach the MIMEImage object to the email body. msgRoot.attach(msgImage) # Create an smtplib.SMTP object to send the email. smtp = smtplib.SMTP() # Connect to the SMTP server. smtp.connect('smtp.code-learner.com') # Login to the SMTP server with username and password. smtp.login('hello', 'haha') # Send email with the smtp object sendmail method. smtp.sendmail(strFrom, strTo, msgRoot.as_string()) # Quit the SMTP server after sending the email. smtp.quit()
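The snippets above use the legacy Python 2 module paths. On Python 3.6+ the standard library's email.message.EmailMessage builds the same multipart/alternative structure with much less ceremony; a sketch (the addresses are placeholders):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"    # placeholder address
msg["To"] = "receiver@example.com"    # placeholder address
msg["Subject"] = "hello world"

# set_content creates the text/plain part; add_alternative then
# converts the message into multipart/alternative with an html part.
msg.set_content("hello, this is the plain text version.")
msg.add_alternative("<h1>Hello, this is the html version</h1>", subtype="html")

print(msg.get_content_type())  # multipart/alternative
```

Such a message can then be handed to smtplib.SMTP.send_message() instead of sendmail().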
https://www.code-learner.com/python-send-html-image-and-attachment-email-example/
Count Method Calls Using a Metaclass Introduction After you have hopefully gone through our chapter Introduction into Metaclasses you may have asked yourself about possible use cases for metaclasses. There are some interesting use cases and it's not - like some say - a solution waiting for a problem. We have already mentioned some examples. In this chapter of our tutorial on Python, we want to work through an example metaclass, which will decorate the methods of the subclass. The decorated function returned by the decorator makes it possible to count the number of times each method of the subclass has been called. This is one of the tasks we usually expect from a profiler. So we can use this metaclass for simple profiling purposes. Of course, it will be easy to extend our metaclass for further profiling tasks. Preliminary Remarks Before we actually dive into the problem, we want to call to mind again how we can access the attributes of a class. We will demonstrate this with the random module. We can get the list of all the non-private attributes - in our example those of the random module - with the following construct. import random cls = "random" # name of the class as a string all_attributes = [x for x in dir(eval(cls)) if not x.startswith("__") ] print(all_attributes) ['BPF', 'LOG4', 'NV_MAGICCONST', 'RECIP_BPF', 'Random', 'SG_MAGICCONST', 'SystemRandom', 'TWOPI', '_BuiltinMethodType', '_MethodType', '_Sequence', '_Set', '_acos', '_ceil', '_cos', '_e', '_exp', '_inst', '_log', '_pi', '_random', ''] Now, we are filtering the callable attributes, i.e. the public methods of the class. methods = [x for x in dir(eval(cls)) if not x.startswith("__") and callable(eval(cls + "." + x))] print(methods) ['Random', 'SystemRandom', '_BuiltinMethodType', '_MethodType', '_Sequence', '_Set', '_acos', '_ceil', '_cos', '_exp', '_log', ''] Getting the non-callable attributes of the class can be easily achieved by negating callable, i.e.
adding "not": non_callable_attributes = [x for x in dir(eval(cls)) if not x.startswith("__") and not callable(eval(cls + "." + x))] print(non_callable_attributes) ['BPF', 'LOG4', 'NV_MAGICCONST', 'RECIP_BPF', 'SG_MAGICCONST', 'TWOPI', '_e', '_inst', '_pi', '_random'] In normal Python programming it is neither recommended nor necessary to apply methods in the following way, but it is possible: lst = [3,4] list.__dict__["append"](lst, 42) lst The previous Python code returned the following output: [3, 4, 42] Please note the corresponding remark in the Python documentation. A Decorator for Counting Function Calls Finally, we will begin to design the metaclass, which we have mentioned as our target in the beginning of this chapter. It will decorate all the methods of its subclass with a decorator, which counts the number of calls. We have defined such a decorator in our chapter Memoization and Decorators: def call_counter(func): def helper(*args, **kwargs): helper.calls += 1 return func(*args, **kwargs) helper.calls = 0 helper.__name__= func.__name__ return helper We can use it in the usual way: @call_counter def f(): pass print(f.calls) for _ in range(10): f() print(f.calls) 0 10 It is better to recall the alternative notation for decorating a function; we will need it in our final metaclass: def f(): pass f = call_counter(f) print(f.calls) for _ in range(10): f() print(f.calls) 0 10 The "Count Calls" Metaclass Now we have all the necessary "ingredients" together to write our metaclass.
We will include our call_counter decorator as a staticmethod: class FuncCallCounter(type): """ A Metaclass which decorates all the methods of the subclass using call_counter as the decorator """ @staticmethod def call_counter(func): """ Decorator for counting the number of function or method calls to the function or method func """ def helper(*args, **kwargs): helper.calls += 1 return func(*args, **kwargs) helper.calls = 0 helper.__name__= func.__name__ return helper def __new__(cls, clsname, superclasses, attributedict): """ Every method gets decorated with the decorator call_counter, which will do the actual call counting """ for attr in attributedict: if callable(attributedict[attr]) and not attr.startswith("__"): attributedict[attr] = cls.call_counter(attributedict[attr]) return type.__new__(cls, clsname, superclasses, attributedict) class A(metaclass=FuncCallCounter): def foo(self): pass def bar(self): pass if __name__ == "__main__": x = A() print(x.foo.calls, x.bar.calls) x.foo() print(x.foo.calls, x.bar.calls) x.foo() x.bar() print(x.foo.calls, x.bar.calls) 0 0 1 0 2 1 Next Chapter: Abstract Classes
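For comparison (this variant is not part of the tutorial): when only a single class needs its methods counted, a plain class decorator achieves the same effect without a metaclass:

```python
def count_calls(cls):
    """Class decorator: wrap every public method of cls with a call counter."""
    def make_counter(func):
        def helper(*args, **kwargs):
            helper.calls += 1
            return func(*args, **kwargs)
        helper.calls = 0
        helper.__name__ = func.__name__
        return helper
    # vars(cls) holds only attributes defined directly on the class
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("__"):
            setattr(cls, name, make_counter(attr))
    return cls

@count_calls
class B:
    def foo(self):
        pass

b = B()
b.foo()
b.foo()
print(b.foo.calls)  # 2 -- the counter lives on the class, shared by all instances
```

The metaclass version pays off when the counting behaviour should be inherited by every subclass automatically; the decorator must be applied to each class by hand.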
https://python-course.eu/python3_count_function_calls.php
Project description Rex: an open-source quadruped. Related repositories rexctl - A CLI application to bootstrap and control Rex running the trained Control Policies. rex-cloud - A CLI application to train Rex on the cloud. Rex-gym: OpenAI Gym environments and tools This repository contains a collection of OpenAI Gym Environments used to train Rex, the Rex URDF model, the learning agent implementation (PPO) and some scripts to start the training session and visualise the learned Control Policies. This CLI application allows batch training, policy reproduction and single training rendered sessions. Installation Create a Python 3.7 virtual environment, e.g. using Anaconda conda create -n rex python=3.7 anaconda conda activate rex PyPI package Install the public rex-gym package: pip install rex_gym Install from source Clone this repository and run from the root of the project: pip install . CLI usage Run rex-gym --help to display the available commands and rex-gym COMMAND_NAME --help to show the help message for a specific command. Use the --arg flag to eventually set the simulation arguments. For a full list check out the environments parameters. To switch between the Open Loop and the Bezier controller (inverse kinematics) modes, just append either the --open-loop or --inverse-kinematics flags. rex-gym COMMAND_NAME -ik rex-gym COMMAND_NAME -ol For more info about the modes check out the learning approach.
Policy player: run a pre-trained agent To start a pre-trained agent (play a learned Control Policy): rex-gym policy --env ENV_NAME Train: Run a single training simulation To start a single agent rendered session ( agents=1, render=True): rex-gym train --playground True --env ENV_NAME --log-dir LOG_DIR_PATH Train: Start a new batch training simulation To start a new batch training session: rex-gym train --env ENV_NAME --log-dir LOG_DIR_PATH Robot platform Mark 1 The robot used for this first version is the Spotmicro made by Deok-yeon Kim. I've printed the components using a Creality Ender3 3D printer, with PLA and TPU+. The hardware used is listed in this wiki. The idea is to extend the robot adding components like a robotic arm on the top of the rack and a LiDAR sensor in the next versions alongside fixing some design issue to support a better (and easier) calibration and more reliable servo motors. Simulation model Base model Rex is a 12 joints robot with 3 motors ( Shoulder, Leg and Foot) for each leg. The robot base model is imported in pyBullet using an URDF file. The servo motors are modelled in the model/motor.py class. Robotic arm The arm model has the open source 6DOF robotic arm Poppy Ergo Jr equipped on the top of the rack. To switch between base and arm models use the --mark flag. Learning approach This library uses the Proximal Policy Optimization (PPO) algorithm with a hybrid policy defined as: a(t, o) = a(t) + π(o) It can be varied continuously from fully user-specified to entirely learned from scratch. If we want to use a user-specified policy, we can set both the lower and the upper bounds of π(o) to be zero. If we want a policy that is learned from scratch, we can set a(t) = 0 and give the feedback component π(o) a wide output range. By varying the open loop signal and the output bound of the feedback component, we can decide how much user control is applied to the system. 
A twofold approach is used to implement the Rex Gym Environments: Bezier controller and Open Loop. The Bezier controller implements a fully user-specified policy. The controller uses the Inverse Kinematics model (see model/kinematics.py) to generate the gait. The Open Loop mode consists, in some cases, in letting the system learn from scratch (setting the open loop component a(t) = 0), while in others it just provides a simple trajectory reference (e.g. a(t) = sin(t)). The purpose is to compare the learned policies and scores using these two different approaches. Tasks This is the list of tasks this experiment wants to cover:
- Basic controls:
  - Static poses - Frame a point standing on the spot.
    - [x] Bezier controller
    - [ ] Open Loop signal
  - Gallop
    - forward
      - [x] Bezier controller
      - [x] Open Loop signal
    - backward
      - [ ] Bezier controller
      - [ ] Open Loop signal
  - Walk
    - forward
      - [x] Bezier controller
      - [x] Open Loop signal
    - backward
      - [x] Bezier controller
      - [ ] Open Loop signal
  - Turn
    - on the spot
      - [x] Bezier controller
      - [x] Open Loop signal
  - Stand up
    - from the floor
      - [ ] Bezier controller
      - [x] Open Loop signal
- Navigate uneven terrains:
  - [x] Random heightfield, hill, mount
  - [ ] Maze
  - [ ] Stairs
- Open a door
- Grab an object
- Fall recovery
- Reach a specific point in a map
- Map an open space
Terrains To set a specific terrain, use the --terrain flag. The default terrain is the standard plane. This feature is quite useful to test the policy robustness. Random heightfield Use the --terrain random flag to generate a random heightfield pattern. This pattern is updated at every 'Reset' step. Hills Use the --terrain hills flag to generate an uneven terrain. Mounts Use the --terrain mounts flag to generate this scenario. Maze Use the --terrain maze flag to generate this scenario. Environments Basic Controls: Static poses Goal: Move Rex base to assume static poses standing on the spot.
Inverse kinematic The gym environment is used to learn how to gracefully assume a pose avoiding too fast transitions. It uses a one-dimensional action space with a feedback component π(o) with bounds [-0.1, 0.1]. The feedback is applied to a sigmoid function to orchestrate the movement. When the --playground flag is used, it's possible to use the pyBullet UI to manually set a specific pose altering the robot base position ( x, y, z) and orientation ( roll, pitch, jaw). Basic Controls: Gallop Goal: Gallop. Bezier controller This gym environment uses a feedback component π(o) with bounds [-0.3, 0.3]. The feedback component is applied to two ramp functions used to orchestrate the gait. A correct start helps avoid the drift effect the gait would otherwise generate in the resulting learned policy. Open Loop signal This gym environment is used to let the system learn the gait from scratch. The action space has 4 dimensions, two for the front legs and feet and two for the rear legs and feet, with the feedback component output bounds [−0.3, 0.3]. Basic Controls: Walk Goal: Walk. Bezier controller This gym environment uses a feedback component π(o) with bounds [-0.4, 0.4]. The feedback component is applied to two ramp functions used to orchestrate the gait. A correct start helps avoid the drift effect the gait would otherwise generate in the resulting learned policy. Forward Backwards Open Loop signal This gym environment uses a sinusoidal trajectory reference to alternate the Rex legs during the gait. leg(t) = 0.1 cos(2π/T*t) foot(t) = 0.2 cos(2π/T*t) The feedback component has very small bounds: [-0.01, 0.01]. A ramp function is used to start and stop the gait gracefully. Basic Controls: Turn on the spot Goal: Reach a target orientation turning on the spot. In order to make the learning more robust, the Rex start orientation and target are randomly chosen at every 'Reset' step. Bezier controller This gym environment is used to optimise the step_length and step_rotation arguments used by the GaitPlanner to implement the 'steer' gait. It uses a two-dimensional action space with a feedback component π(o) with bounds [-0.05, 0.05].
Open loop This environment is used to learn a 'steer-on-the-spot' gait, allowing Rex to move towards a specific orientation. It uses a two-dimensional action space with a small feedback component π(o) with bounds [-0.05, 0.05] to optimise the shoulder and foot angles during the gait. Basic Controls: Stand up Goal: Stand up starting from the standby position This environment introduces the rest_position, ideally the position assumed when Rex is in standby. Open loop The action space has a single dimension, with a feedback component π(o) with bounds [-0.1, 0.1] used to optimise the signal timing. The signal function applies a 'brake', forcing Rex to assume a halfway position before completing the movement. Environments parameters PPO Agent configuration You may want to edit the PPO agent's default configuration, especially the number of parallel agents launched during the simulation. Use the --agents-number flag, e.g. --agents-number 10. This configuration will launch 10 agents (threads) in parallel to train your model. The default value is set up in the agents/scripts/configs.py script: def default(): """Default configuration for PPO.""" # General ... num_agents = 20 Credits Papers Sim-to-Real: Learning Agile Locomotion For Quadruped Robots and all the related papers. Google Brain, Google X, Google DeepMind - Minitaur Ghost Robotics. Inverse Kinematic Analysis Of A Quadruped Robot Leg Trajectory Planning for Quadruped Robots with High-Speed Trot Gait Robot platform v1 Deok-yeon Kim, creator of SpotMini. The awesome Poppy Project. SpotMicro CAD files: SpotMicroAI community. Inspiring projects The kinematics model was inspired by the great work done by Miguel Ayuso.
https://pypi.org/project/rex-gym/0.2.5/
So im new to programing and need help solving my program because my teacher never taught us about how to solve the issue. it works and compiles as of right now but if you run it. the issue is when i input for ex. .92 cents for the total bill and i input 1 dollar for tendered i need 8 cents but i only get 1 nickel and 2 pennies. she said because it rounds down the last penny. /** * Name: Salvatore LoCricchio * Date January 22, 2013 * Class/Section: CIT160-03 * Problem: Calcualte change * * * Sample Input: Enter bill total for .92 cents * Enter tendered in dollars 1 dollar * * Sample Output: Change = 8 cents */ import java.util.Scanner; public class Locricchiochangmaker { public static void main(String args[]) { float spent, tendered; int dollars,quarters, dimes, nickels, pennies,change; Scanner keyboard = new Scanner(System.in); System.out.println("Bill Total"); spent = keyboard.nextFloat( ); System.out.println("Tendered."); tendered = keyboard.nextFloat( ); change = (int) ((tendered - spent)*100); dollars = (change/100); change = change%100; quarters = (change/25); change = change%25; dimes = (change/10); change = change%10; nickels = (change/5); change = change%5; pennies = (change); System.out.println(dollars + " dollars"); System.out.println(quarters + " quarters"); System.out.println(dimes + " dimes"); System.out.println(nickels + " nickels"); System.out.println(pennies + " pennies"); }//end main }//end class
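The teacher's explanation points at the fix: (1.00f - 0.92f) * 100 is not exactly 8 in float arithmetic (it comes out just under 8), and the (int) cast truncates toward zero, losing the last penny. Rounding to the nearest whole cent before breaking the amount into coins solves it. A minimal sketch of the idea, not the original assignment code:

```java
public class ChangeFix {
    public static void main(String[] args) {
        float spent = 0.92f, tendered = 1.00f;

        // Truncating cast: 7.9999... becomes 7, so a penny disappears.
        int truncated = (int) ((tendered - spent) * 100);

        // Math.round: rounds to the nearest whole cent, giving 8.
        int change = Math.round((tendered - spent) * 100);

        System.out.println(truncated + " vs " + change); // prints: 7 vs 8
    }
}
```

In the original program only the line computing change needs to switch from the (int) cast to Math.round; everything after that (the quarters/dimes/nickels/pennies divisions) stays the same.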
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/23220-java-change-maker-need-help-rounding-up-pennies-printingthethread.html
How to solve this error? "AttributeError: 'module' object has no attribute 'wrapAffine'" I am trying to run the following code: import cv2 import numpy as np img = cv2.imread('images/input.png') num_rows, num_cols = img.shape[:2] translation_matrix = np.float32([[1, 0, 70], [0, 1, 110]]) img_translation = cv2.wrapAffine(img, translation_matrix, (num_cols, num_rows)) #I get the error in this line cv2.imshow('Translation', img_translation) cv2.waitKey() But when I run the code I get this "C:\Python27\python.exe C:/OpenCV_Python/Python_Test.py Traceback (most recent call last): File "C:/OpenCV_Python/Python_Test.py", line 8, in <module> img_translation = cv2.wrapAffine(img, translation_matrix, (num_cols, num_rows)) AttributeError: 'module' object has no attribute 'wrapAffine'" I am using OpenCV 3.1.0 and Python 2.7.11 Please help me. Thanks a lot in advance. This code is from a book named "OpenCV with Python by Example" by Prateek Joshi. This book was first published in September 2015. hmm, why did you try to "correct" the error in your example ? it does not make any sense, and will confuse readers I mistakenly asked the question with a typo that's why i had to correct it. I am extremely sorry. don't be sorry, just edit it back to the original, and let's keep the "follow-up" problems with the answer, right ? okay. I am editing back to its original form.
https://answers.opencv.org/question/95851/how-to-solve-this-error-attributeerror-module-object-has-no-attribute-wrapaffine/
We are building a 2D game like this - I've been using Unity for a while, but have never had to build a grid system like this. Ideally this would be our pipeline: 1) We use orthographic camera 2) All art is broken up into power of two dimensions 3) We draw all the art in perspective 4) My level designers just drag and drop buildings, road etc, and they snap to a grid perfectly Any thoughts on bringing a game like this to pass in Unity (grid based)? 1) We use orthographic camera No problem with this part. I have an orthographic camera positioned at ( 0, 5, -15 ), rotated at ( 45,0,0 ), with camera-size of 6 to get the effect in the screenshot. 2) All art is broken up into power of two dimensions I am not exactly sure what you are talking about, but all the visible objects should have their size adhere to a specific unit-scale. For example, a basic grid will be 1 unit X 1 unit, a small building might be 5 unit X 3 unit; there should be no 1.5 units or 1.2 units. 3) We draw all the art in perspective To add on that, you can have 3D objects; they will seem 2D when viewed by the orthographic camera. A good thing about having a 3D object is that you can rotate it around to show a different side of the building, so a building can be reused to look like 4 slightly different buildings (4 slightly different sides) if done correctly. 4) My level designers just drag and drop buildings, road etc, and they snap to a grid perfectly When you move stuff around in the editor, if you hold Ctrl while dragging, the object will snap to a 1x1 grid.
If you need a larger grid, or you want to not use the Ctrl, you will need to write your own editor script to do the calculation and snapping in the editor (a topic that I am not familiar with) Answer by sotirosn · Jun 14, 2013 at 05:54 AM You can make a script that executes in edit mode.
[ExecuteInEditMode]
public class EditModeGridSnap : MonoBehaviour {
    public float snapValue = 1;
    public float depth = 0;

    void Update() {
        float snapInverse = 1/snapValue;
        float x, y, z;
        // if snapValue = .5, x = 1.45 -> snapInverse = 2 -> x*2 => 2.90 -> round 2.90 => 3 -> 3/2 => 1.5
        // so 1.45 to nearest .5 is 1.5
        x = Mathf.Round(transform.position.x * snapInverse)/snapInverse;
        y = Mathf.Round(transform.position.y * snapInverse)/snapInverse;
        z = depth; // depth from camera
        transform.position = new Vector3(x, y, z);
    }
}
You may need to adjust the script depending on whether objects are odd or even dimension, because it changes the centers, i.e. a cube .75 wide needs different math than a cube 1 unit wide when snapping to the nearest .25 units. Awesome script, I just used it for my level editor! thanks! I'd like to say 'thanks' for this code, it works great and fits what I need. Thank you for the code; [ExecuteInEditMode] is fancy! It's the end of 2016 and Unity still doesn't have a grid snapping feature! Thank you for your script, very useful!! :) This needs to be a feature in Unity. Answer by pozzoe · Sep 09, 2014 at 03:32 PM I see there is a feature called Unit Snapping. Click Ctrl while dragging and the object will move by the amount defined in Edit->Snap Settings Source Yes, but in Unity 5.5, you have to select the movement tool for this to take effect (the tool bound to the "W" key by default).
https://answers.unity.com/questions/474418/make-grid-and-snap-to-it-in-unity-editor-2d-game.html
Your Live View manages the other half of your playground book's user interface as well as the interactions between that interface and the data. It doesn't take much to create the live view. Your playground template contains a LiveView.swift file. You could write all of your live view code inside the LiveView.swift file, but then you'd need to copy all of your code to another playground page live view if you wanted to make multiple pages that share the same live view behavior. So with these few lines of code, I can share the same View Controller in the Sources folder. Here's what it looks like under the hood: import PlaygroundSupport let viewController: ViewController = ViewController(1) PlaygroundPage.current.liveView = viewController First, we import the PlaygroundSupport framework: import PlaygroundSupport Then, we mirror a view controller that's in the Sources folder: in the live view we assign an instance of a view controller from the Sources folder to a local constant: let viewController: ViewController = ViewController(1) Then, when the playground page looks for the current page live view, we give the page an instance of ViewController.swift that's located in the Sources folder: PlaygroundPage.current.liveView = viewController That's it! This live view is "always on": it runs in a separate process from the code editor on the left, whose code runs when it is compiled.
The only values that can be used are: • VisibleByDefault - Shows the live view when the playground opens. • HiddenByDefault - Hides the live view until the playground is run. The base name of an image file that is shown centered and unscaled in the live view before the live view runs. Your image can be located in any Resource folder of your playground book.
https://learn.adafruit.com/create-a-swift-playgroundbook-with-bluetooth-le/playground-live-view
Like many concepts, syntactic noise is both loose and subjective, which makes it hard to talk about. A while ago Gilad Bracha tried to illustrate his perception of syntactic noise during a talk at JAOO. Here I'm going to have a go at a similar approach and apply it to several formulations of a DSL that I'm using in my current introduction in my DSL book. (I'm using a subset of the example state machine, to keep the text a reasonable size.) In his talk he illustrated noise by coloring what he considered to be noise characters. A problem with this, of course, is this requires us to define what we mean by noise characters. I'm going to side-step that and make a different distinction. I'll distinguish between what I'll call domain text and punctuation. The DSL scripts I'm looking at define a state machine, and thus talk about states, events, and commands. Anything that describes information about my particular state machine - such as the names of states - I'll define as domain text. Anything else is punctuation and I'll highlight the latter in red. I'll start with the custom syntax of an external DSL.

events
  doorClosed D1CL
  drawOpened D2OP
  lightOn L1ON
end

commands
  unlockDoor D1UL
  lockPanel PNLK
end

state idle
  actions {unlockDoor lockPanel}
  doorClosed => active
end

state active
  drawOpened => waitingForLight
  lightOn => waitingForDraw
end

A custom syntax tends to minimize noise, so as a result you see a relatively small amount of punctuation here. This text also makes clear that we need some punctuation. Both events and commands are defined by giving their name and their code - you need the punctuation in order to tell them apart. So punctuation isn't the same as noise; I would say that the wrong kind of punctuation is noise, or too much punctuation is noise. In particular I don't think it's a good idea to try to reduce punctuation to the absolute minimum; too little punctuation also makes a DSL harder to comprehend.
Let's now look at an internal DSL carrying the same domain information in Ruby.

    event :doorClosed,  "D1CL"
    event :drawOpened,  "D2OP"
    event :lightOn,     "L1ON"

    command :lockPanel,  "PNLK"
    command :unlockDoor, "D1UL"

    state :idle do
      actions :unlockDoor, :lockPanel
      transitions :doorClosed => :active
    end

    state :active do
      transitions :drawOpened => :waitingForLight,
                  :lightOn    => :waitingForDraw
    end

Now we see a lot more punctuation. Certainly I could have made some choices in my DSL to reduce punctuation, but I think most people would still agree that a Ruby DSL has more punctuation than a custom one. The noise here, at least for me, is the little things: the ":" to mark a symbol, the "," to separate arguments, the '"' to quote strings.

One of the main themes in my DSL thinking is that a DSL is a way to populate a framework. In this case the framework is one that describes state machines. As well as populating a framework with a DSL, you can also do it with a regular push-button API. Let's color the punctuation on that.

    Event doorClosed = new Event("doorClosed", "D1CL");
    Event drawOpened = new Event("drawOpened", "D2OP");
    Event lightOn    = new Event("lightOn",    "L1ON");
    Command lockPanelCmd  = new Command("lockPanel",  "PNLK");
    Command unlockDoorCmd = new Command("unlockDoor", "D1UL");
    State idle = new State("idle");
    State activeState = new State("active");
    StateMachine machine = new StateMachine(idle);
    idle.addTransition(doorClosed, activeState);
    idle.addCommand(unlockDoorCmd);
    idle.addCommand(lockPanelCmd);
    activeState.addTransition(drawOpened, waitingForLightState);
    activeState.addTransition(lightOn, waitingForDrawState);

Here's a lot more punctuation: all sorts of quotes and brackets as well as method keywords and local variable declarations. The latter present an interesting classification question.
I've counted the declaring of a local variable as punctuation (as it duplicates the name) but its later use as domain text.

Java can also be written in a fluent way, so here's the fluent version from the book.

    public class BasicStateMachine extends StateMachineBuilder {
      Events doorClosed, drawOpened, lightOn;
      Commands lockPanel, unlockDoor;
      States idle, active;

      protected void defineStateMachine() {
        doorClosed.code("D1CL");
        drawOpened.code("D2OP");
        lightOn.code("L1ON");
        lockPanel.code("PNLK");
        unlockDoor.code("D1UL");

        idle.actions(unlockDoor, lockPanel)
            .transition(doorClosed).to(active);

        active.transition(drawOpened).to(waitingForLight)
              .transition(lightOn).to(waitingForDraw);
      }
    }

Whenever two or three are gathered together to talk about syntactic noise, XML is bound to come up.

    <stateMachine start="idle">
      <event name="doorClosed" code="D1CL"/>
      <event name="drawOpened" code="D2OP"/>
      <event name="lightOn" code="L1ON"/>
      <command name="lockPanel" code="PNLK"/>
      <command name="unlockDoor" code="D1UL"/>
      <state name="idle">
        <transition event="doorClosed" target="active"/>
        <action command="unlockDoor"/>
        <action command="lockPanel"/>
      </state>
      <state name="active">
        <transition event="drawOpened" target="waitingForLight"/>
        <transition event="lightOn" target="waitingForDraw"/>
      </state>
    </stateMachine>

I don't think we can read too much into this particular example, but it does provide some food for thought. Although I don't think we can make a rigorous separation between useful punctuation and noise, the distinction between domain text and punctuation can help us focus on the punctuation and consider what punctuation serves us best. And I might add that having more characters of punctuation than you do of domain text in a DSL is a smell.

(Mikael Jansson has put out a lisp version of this example. Mihailo Lalevic did one in JavaScript.)
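The punctuation-to-domain-text ratio suggested above as a smell test can be roughly automated. The sketch below is my own illustration (not from the article): any character inside an occurrence of a caller-supplied domain word counts as domain text, and every other non-whitespace character counts as punctuation.

```python
import re

def noise_ratio(script, domain_words):
    """Return (punctuation_chars, domain_chars) for a DSL snippet.
    Characters inside occurrences of domain words count as domain text;
    all other non-whitespace characters count as punctuation."""
    domain = 0
    for word in domain_words:
        domain += len(word) * len(re.findall(re.escape(word), script))
    non_ws = len(re.sub(r"\s", "", script))
    return non_ws - domain, domain

ruby = 'event :doorClosed, "D1CL"'
punct, domain = noise_ratio(ruby, ["event", "doorClosed", "D1CL"])
print(punct, domain)  # prints: 4 19
```

This Ruby line passes the smell test easily (4 punctuation characters against 19 of domain text); running the same counter over the XML version tips the balance much further toward punctuation.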
http://martinfowler.com/bliki/SyntacticNoise.html
ANE-Google-Analytics is a native extension for using Google Analytics on the iOS and Android platforms. Unlike other Google Analytics wrappers, this extension is not JavaScript based, so it works with AIR apps. The extension follows the original Analytics SDK methods and functionality closely, so it should be easy to use and understand. Among other features, the extension allows the tracking of page views and events. All tracking is performed within the scope of a session, so the developer must explicitly start and stop the session. All tracking data must be sent while the session is active.

Sample

    import eu.alebianco.air.extensions.analytics.GATracker;

    if (GATracker.isSupported()) {
        var tracker:GATracker = GATracker.getInstance();
        tracker.startNewSession("UA-00000000-0", interval);
        tracker.trackPageView("/custom/view/url");
    }

Comments

"Great! Thanks a lot. This will definitely be used a lot."

- You're welcome! I was in need of something like that too.

"Hi, ANE-Google-Analytics is great and easy to use, but I have a question. The code above shows an interval, which is the duration of the session; I'm wondering what happens to the tracker object after the session times out. Do we need to create another tracker instance on every timeout? I've used 20 as the interval and after about the first 20 seconds my app crashes without any warning, but if I set a huge number, or -1, it doesn't crash."

- Hi! As far as I could tell, the "interval" param is used to automatically send your tracking data to Google's servers. If your app is crashing, that's probably because it's unable to send the data (maybe a missing internet permission or something like that?). If you use -1 for "interval", you have to manually send the tracking data using dispatch(). Try to contact the ANE author to get more info or report a bug.

"The extension simply doesn't work for me and I can't understand why. I use AIR SDK 3.6."

- You can open an issue at the GitHub repository or you can ask @alebianco for help.
"The code above doesn't work and there is no GATracker."

- Hi! The code samples I provide are usually not functional, since they are there just to give the developer a glimpse of the tool's API. Please check the tool's documentation/website for more information (including a working code sample).
https://www.as3gamegears.com/air-native-extension/ane-google-analytics/
Seeds
Issue 1

Credits

Editor: Jupiter Hadley - @Jupiter_Hadley
Production Assistant: Chris Bowes - @contralogic

Contributors: Jonathan Pagnutti, Gillian Smith, Tommy Thompson, Jonas Delleske, Balint Mark, Joao Oliviera, Marek Skudelny, Lena Werthmann, @TearOfTheStar, Oliver Carson, Luke O'Connor, Todd Furmanski, Sam Geen, Isaac Karth, David Murphy, Ciro Duran, David Morrison, Eggy Interactive, Davide Aversa, Kevin Chapelier, Ahmed Khalifa, Max Kreminski, Gabriella A. B. Barros, Tim Stoddard, Martin Černý, Jo Mazeika, Kate Rose Pipkin, Aidan Dodds, Ex Utumno, Afshin Mobramaein, Scott Redrup, Mark Johnson, Kate Compton, Jim Whitehead, Mark Bennett, Niall Moody, Grégoire Duchemin, Dave Griffiths, Rune Skovbo Johansen

Organisers: Michael Cook, Azalea Raad
Press Officer: Jupiter Hadley
Art By: Khalkeus, Tess Young

Speakers: Gabriella Barros, Joris Dormans, Becky Lavender, Mark Nelson, Emily Short, Tanya Short, Adam Summerville, Jamie Woodcock

Thanks to: Heidi Ball, Simon Colton, Mark Nelson, Blanca Pérez Ferrer, The Metamakers Institute, Falmouth University, Sekrit Inc. and everyone in the PROCJAM community.

Cover art by: Kevin Chapelier
Some header/footer patterns from Eduardo Lopes' Procedural Fabrics Generator.

ProcJam: Make Something That Makes Something

The ProcJam is not like most other game jams. This jam is aimed at making procedural generation accessible to more people and at showing off projects that push the boundaries of generative software. This jam is easy to enter, laid-back, and fun to be a part of. We are building a community of friends and peers across disciplines, all interested in procedural generation. The jam takes place across nine days, including two weekends. You can enter anything you'd like - art, board games, tools, games, anything you can think of - as long as it has something to do with procedural generation, random generation, generative software, etc.
You can even take an existing thing and add some generative magic to it for the jam! If you start before the jam or want to finish it later, that is fine too. We have a kickoff day at the start of the jam, taking place in Falmouth this year, where loads of awesome speakers are going to talk about procedural generation. This unconference is livestreamed on the day, and put up online to be watched in the future. ProcJam is happening as part of the Metamakers Institute's 'Games as Arts/Arts as Games' festival. This zine was made by the ProcJam community. We hope you enjoy it!

How is Music Like A Spring?
By Jonathan Pagnutti

Music is weird. Once upon a time, I did a lot of music theory. I actually ended up with a music performance minor rather than a major because I took two extra classes of music theory rather than the required music history classes of my undergrad (nerd alert). My final project as an undergrad was a music generator, which I wrote in C and which only exists on a rapidly aging desktop computer collecting dust.

One of the wonderful things about procedural generation is that if we can encode a theory into the computer, we can have the computer generate endless examples of stuff that follows the theory. Even better, if we line up our metaphors, the magic box can present something in terms of something else. So, the question at hand: how is music like a spring?

When you push or pull on a spring, you add tension to it. Let go, and the spring releases that tension. I'm sure this tension has a special name, but I only ever took freshman physics. Music carries and releases tension too, according to a music theory called Functional Harmony. But first, we need to take a whirlwind tour of sheet music. Ok, so collections of notes played at the same time (called chords) can get special labels based on their lowest note in a scale.
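Those chord labels and the spring metaphor can be combined in a toy sketch. This is my own illustration, not the author's prototype: the tension numbers assigned to each Roman-numeral chord are a simplified assumption, and the "spring" is a discretized stretch-and-release cycle.

```python
# Rough tension levels for chords in a major key (an illustrative,
# assumed assignment): the tonic is most relaxed, dominant-function
# chords carry the most tension.
TENSION = {"I": 0, "vi": 1, "IV": 2, "ii": 2, "V": 3, "viio": 3}

# One "bounce" of the spring: stretch to full tension, then release.
PATTERN = [0, 1, 2, 3, 3, 2, 1, 0]

def progression(steps):
    """Follow the spring's tension cycle, emitting at each step the
    first chord whose tension matches the spring's current level."""
    chords = []
    for t in range(steps):
        target = PATTERN[t % len(PATTERN)]
        chord = next(c for c, v in TENSION.items() if v == target)
        chords.append(chord)
    return chords

print(progression(8))  # prints: ['I', 'vi', 'IV', 'V', 'V', 'IV', 'vi', 'I']
```

As the spring stretches, the generator climbs toward dominant-function chords; as it relaxes, it releases back to the tonic - the same push-and-pull the article describes.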
The cool thing is that those special labels can tell us which chords carry lots of tension they want to release, and which chords are less stressful. They can also tell us how to go from chord to chord to increase and decrease tension. (The chord chart in the zine was ruthlessly stolen from Dmitri Tymoczko's 'A Geometry of Music'; the annotations are mine. The degree symbol by the vii is the same as the (dim) marking by the vii in the earlier picture.)

So, then, we can push and pull our spring and play chords at various tension levels. In theory, as the spring expands and contracts, we'll get 'pleasing' chord progressions. There is a bunch more to consider to turn these Roman numerals into sound, but the raw idea comes from this theory. I encoded this theory and, using P5.js, made a tiny generative audio prototype. Music generation prototypes don't make the best screenshots, but you can try it out for yourself.

Now, why make this? Music, even generative music, seems to be tethered to events or notes, but it doesn't have to be. Music is just as pervasive and continuous as, well, physics. I grounded this tension-release model in Functional Harmony because I know it, but music generates tension in so many other ways. With this tension-release model, we're starting to get at musical velocity. And if that has a nice metaphor, maybe we can describe a collision musically? Music is more than the notes on the page. Music is weird.

Why Do We PCG?
By Gillian Smith

I want to talk about "replayability" and meaning and depth. We often talk about how PCG can give people different experiences each time they play. We are drawn to the promise of infinite (or at least, really huge) amounts of content to explore. And then we, inevitably, are disappointed by the infinite: there's a lot of it, but it all feels so similar. It is unrealistic to hope for constant, infinite beauty. Why do we replay games?
Why do we re-read books, or re-watch movies, or re-listen to music? We don't hope for books to be infinitely long. Sometimes we want to hear more about the characters after reading a favorite novel, but ultimately it's probably for the best that we don't. It's better to yearn for and imagine what happens next. Stories that have potential futures have a power to them. Would an infinitely long story, where our urge to know what happens next is always fulfilled, be enjoyable to read forever?

We don't need a beloved movie to have different content every time we watch it in order to find it fulfilling. With some movies, there is satisfaction in finding things we missed in earlier viewings: elements of foreshadowing, interesting background character behavior, and clever cinematic tricks. Sometimes we find our experience of watching the movie has changed, because though the movie's content has remained static, we have changed as viewers and are reacting differently to the same material.

We don't ask composers to write music that adapts in realtime to our mood. We instead work to build and curate playlists (though sometimes with AI assistance) for a huge variety of contexts, from needing motivation for a bleary-eyed 6am workout to setting the desired ambience for an intimate dinner party. There is a satisfaction to finding unexpected new music while putting in effort to explore an enormously varied space of all the potential music in the world.

We are happy to revisit the same piece of art multiple times, if it has sufficient depth. And we are happy to put in effort to explore enormous spaces, when the act of exploring is satisfying and comes with the promise that sometimes we will find extraordinary beauty in that space. We put in effort to find and re-engage with art that we find emotionally resonating. So what does focusing on this notion of emotional resonance mean to me when it comes to content generation in games? I'm not entirely sure yet.
Maybe it means creating games that acknowledge and meaningfully engage with the machine's ability to create huge amounts of similar content (something that I think No Man's Sky is incredibly successful at) in a context that resonates with the player, instead of presenting machine-generated content as an infinite number of individual levels that become boring over time. Or maybe we could try writing new kinds of generators that aim to create smaller amounts of content that are individually more meaningful to players - maybe content, or even games, that have multiple layers of depth and complexity.

The joy that comes from hitting the generate button over and over to see something new is intense but fleeting, and then the joy turns to boredom, frustration, and disappointment. I want to stop thinking about content generators as being powerful because they can create a lot of things, and start thinking about ways to harness them for creating new kinds of emotionally resonant experiences.

Sure Footing
By Tommy Thompson, Supreme Science Overlord, Table Flip Games Ltd. @GET_TUDA_CHOPPA

Hello to our fellow Procjammers - or is that Seeders? I'm really pleased to be able to report back on the current status of Sure Footing, which you may recall was the focus of a talk delivered by yours truly on behalf of Table Flip Games at PROCJAM in 2015. In our talk (which you can find on YouTube), I gave an overview of our infinite runner that transformed from a small research project into a fully-fledged game that we planned on launching.
At the time we had just finished building our core procedural generation framework: a system derived from research in platforming games by my peers in the academic community. While the core game worked, there was still a lot to be done to ensure the stability of the systems and the validity of the content it makes. Ultimately we still needed to trust this thing to run for hours at a time without breaking, but also to create platforming levels that aren't going to prove impossible for players to traverse.

So here we are a year later, and the game has come a long way. Firstly, we've completed the generative framework, which now allows us to 'swap out' different PCG systems on the fly. Our game has two 'tiers' of PCG: one which considers the actions the players will be forced to make to survive, and one which translates that action sequence into a playable 'sprint'. The real trick is that we can build multiple generators that look after each tier - and we now have almost a dozen generators either in development or currently in the game itself.

Given this is partially a research project, we've been testing and experimenting with the framework since we finished it and presented the game at the playable experience track of the 2015 conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE). We subsequently published our first full academic paper at the Procedural Content Generation workshop at the 1st DiGRA/FDG joint conference in August of 2016. We write about how our system works, and quantify how expressive and flexible it is. This led to us adopting a large number of metrics in the game that allow us to effectively measure content as it is created and build an understanding of how long, how intense, and how varied each level will be. The research hasn't stopped there. We continued to devise new geometry generators reliant upon genetic algorithms to create new and unique interpretations of the action space.
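The two-tier split described above can be illustrated with a toy sketch (my own, not Table Flip's actual generators): tier one emits an abstract sequence of player actions, and tier two realizes each action as a piece of platform geometry.

```python
import random

# Tier 1: generate an abstract sequence of player actions.
ACTIONS = ["run", "jump", "drop"]

def action_sequence(length, rng):
    return [rng.choice(ACTIONS) for _ in range(length)]

# Tier 2: translate each action into a platform segment (x, y, width).
def realize(actions):
    segments, x, y = [], 0, 0
    for act in actions:
        if act == "run":
            width = 4              # flat ground to sprint across
        elif act == "jump":
            x += 2                 # leave a gap the player must clear
            y += 1                 # landing platform sits higher
            width = 3
        else:                      # "drop"
            y -= 2                 # fall down to a lower ledge
            width = 3
        segments.append((x, y, width))
        x += width
    return segments

rng = random.Random(7)
actions = action_sequence(5, rng)
print(actions)
print(realize(actions))
```

The appeal of the split is that either tier can be swapped independently - a different action generator, or a different geometric interpretation of the same action sequence, which is roughly what the article's "swap out different PCG systems" means.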
Also, we're using our level metrics combined with player testing to see if we can learn how to 'fit' the generative system around a player's performance. There are now over 20 parameters in the PCG system that allow us to define the starting difficulty of the game. We're keen to see if we can learn about our players to create new difficulty settings dynamically that will play against their weaknesses but without being unfair. We have a long way to go before any of this is complete, but hopefully we can give you an update next year!

But enough about the research: what about the game? Well, why don't you see for yourself! Sure Footing launched in early access on the itch.io Refinery in
If you want to know more or get yourselves an early access copy, head over to: tableflipgames.itch.io/sure-footing Sector A23 Procedural world generation at run-time By Jonas Delleske, Balint Mark, Joao Oliviera, Marek Skudelny, Lena Werthmann. This student project started with the simple idea that the world only exists when you see it. To build this game we started out creating an algorithm that would generate our cave system. Simultaneously we developed alien looking plants and strange creatures to live in this cave. Then we enhanced our algorithm to place those entities in the world. The world generation was also built in a way that we can add new world parts seamlessly to an existing world. So all we needed to do was delete the parts of the game world that the player does not see and generate a completely new piece of world when the player is about to turn. And thus the world only exists when you see it. Summarized we successfully created a highly confusing game. The evil thing is that the player doesn’t see when the world changes. And since everything looks alien and new the player doesn’t even recognize that something is missing when he turns around. The change becomes part of the world and the world without orientation is accepted. Try the free game: 11 An Easy Way To Generate Fictional Alphabets By @TearOfTheStar “...that one can use simple binary masks to generate a grid based maze inside it.â€? Maze generation is something every disciple in the field of procedural generation goes through at some point. The concept is extremely easy to understand (there are some excellent articles by Jamis Buck and Walter D. Pullen)*, but it is an endlessly powerful tool in the world of procedural generation. While tinkering with maze generation, an idea came to me: that one can use simple binary masks to generate a grid based maze inside it. From this idea, the way to generate an alphabet was born. 
One makes/generates a binary mask (0/black - can't maze, 1/white - can maze here) like this: [mask image] ...and generates small mazes using this mask. The result will look like this: [maze image] Because this maze generation is grid-based, one can remove the corners, and this will result in a generated alphabet. As shown here, only the most limited of masks do not have variations.

So here's how to do it. My grid-based mazes are generated with 1px cells, mostly because I like pixel art, so that's how they look. However, one can generate them in any way one would like, even with bitmasking, curves, etc.

In(finite) Content: Level Design for Games with Procedural Generation
By Oliver Carson @OhCarson | ocarson.itch.io

The lure of procedural generation is an attractive one to game developers and game players alike. A game that can generate assets eases the amount of content developers have to create, and adds potentially limitless variance to aspects of a game. Procedural content could be almost any part of your game - 2D, 3D, audio, AI, level design - all to varying degrees. Let's examine probably my favorite aspect of procedural generation, 3D world design, and how this links into level design. The first thing you'll need to figure out is: what defines a level? Let's look at an example from one of my current projects.

Level Design in Dispatch! I'm in Pursuit!

Dispatch is a pet project of mine; it's what first got me into using procgen for 3D models. The gameplay in Dispatch sees hotshot rookie cop Lt Blaze cruise the city, stopping crimes and getting into hot pursuits on a future-tech jet bike. Various buildings are littered around the city: criminal hideouts, raidable buildings such as jewelry stores, and neutral buildings.
So the gameplay sees various heists happening in buildings throughout the city, and Lt Blaze has to race to each point and intercept getaway drivers before they escape to a hideout. To create this city procedurally, we need to generate a road network with traffic, a pavement for pedestrians, different building types, and navigation data for the AI. Phew, that's a whole bunch of things!

The Tech Part

The road network is core to the gameplay of Dispatch, and it can be simplified down to a series of connecting lines. The first thing we need to do is create these lines in 2D. One way to do this is to distribute random points in a 2D space of a set size, with each point having a radius within which no other points can be placed. These points are then connected using a Voronoi diagram. The Voronoi diagram creates 2D polygons out of these points, and we can use these polygons to define the world geometry. Each edge from these polygons, in gameplay terms, defines a road. We need to "grow" the road geometry from these edges. If we copy a polygon and shrink it we can create geometry for a road; however, we have to shrink this polygon down in a specific way. A standard scaling operation would look wrong - we need to bring each edge of the island in towards the center, and discard overlapping segments.

From there we've got enough data to begin extruding the road geometry with triangles. We can create additional polygons to add further details, such as a pavement, a building, or a field. Let's think about each polygon as being an area. We could perform additional operations in the innermost "building" area to add more detail. Here we subdivide the building area and use these subdivisions to create separate buildings and back alleys. Each building is then assigned a type, such as jewelry store or criminal hideout.
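The first two steps above - scatter points with a minimum-separation radius, then partition space into Voronoi cells - can be sketched as follows. This is a toy illustration, not Dispatch's actual code, and the "Voronoi diagram" here is a discrete nearest-point assignment on a sample grid rather than a true polygon diagram:

```python
import random

def scatter_points(n, size, radius, rng):
    """Rejection-sample up to n points in a size-by-size square, each at
    least `radius` away from every previously accepted point."""
    pts = []
    for _ in range(n * 50):  # cap the attempts
        p = (rng.uniform(0, size), rng.uniform(0, size))
        if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= radius**2 for q in pts):
            pts.append(p)
            if len(pts) == n:
                break
    return pts

def voronoi_cells(points, size, step=1.0):
    """Assign each grid sample to its nearest seed point: a discrete
    Voronoi partition.  Returns {seed_index: [samples...]}."""
    cells = {i: [] for i in range(len(points))}
    y = 0.0
    while y < size:
        x = 0.0
        while x < size:
            nearest = min(range(len(points)),
                          key=lambda i: (x - points[i][0])**2 + (y - points[i][1])**2)
            cells[nearest].append((x, y))
            x += step
        y += step
    return cells

rng = random.Random(42)
pts = scatter_points(8, 100.0, 20.0, rng)
cells = voronoi_cells(pts, 100.0, step=5.0)
print(len(pts), sum(len(c) for c in cells.values()))
```

The boundaries between cells play the role of the article's roads; a real implementation would compute the actual Voronoi polygons (e.g. via Fortune's algorithm) to get explicit edges to shrink and extrude.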
As a benefit of creating all these lines, we can store them and use them for pathfinding. We also know the context of each area, so we can put cars on the road lines and pedestrians in the pavement area, and back alleys might have more undesirables.

When To Use Procedural Generation

The biggest issue with procgen is also its biggest strength: variance. Creating a generator to make buildings in the city could be as simple as making a cube with some windows on it, but this could get pretty samey. Creating large amounts of variance in any generator will undoubtedly take a longer time to code, so it's worth questioning whether procedural generation is a good choice for certain assets. Getting an artist to work on some discrete aspects may be the better choice for some of your content. It may take significantly less time too. As someone who has spent a lot of time going down the rabbit hole of procgen, sometimes it's worth standing back from your work and asking: would making a generator for this content be worthwhile to the player? Can I afford the time? I feel procgen is generally best used for level design and random events, but people are making some amazing stuff in everything from 2D, 3D, and audio to narrative out there, so focus on what's important for your game and audience. Procgen is a powerful thing, but it is just one of your many tools to further your vision. Use it wisely and appropriately. Please look out for Cyglide, a procedurally generated cyborg-animal hang-gliding game, this winter, and Dispatch! I'm in Pursuit next year.

Grow Your Worlds
By Luke O'Connor

We, as humans, are great at inventing new things to reach the unreachable. Boats, planes and rockets are all pretty good examples, but so are tanks of compressed air and crampons. Even something as simple as rope is a pretty handy tool for getting someplace new.
Why then, when making games, would we start with the rope, the plane and the rocket, and then start creating worlds that need them? If all you have is a hammer, every problem looks like a nail. When it comes to procedurally generated levels, it can take a lot of effort to make sure the levels that pop out of the algorithm are traversable using the tools at hand. In other words, what we end up aiming for is a generator that outputs nails of different shapes and sizes. Which is kind of boring.

What if, instead of trying to ensure that your generator only pops out levels that are traversable given a set of movement mechanics, the movement mechanics were designed around the generated worlds? Why not make our worlds first, then figure out a way to explore them? Better yet, make our worlds, then allow the player to invent new ways to explore them. A more genuine experience of exploration awaits! This opens up all sorts of possibilities, and allows us to generate more natural worlds. Worlds that just emerge from the universe inside our computers. Worlds that don't have the artificial limitations that gameplay may impose. Worlds that are as novel and as unknown as our own, and that beg us to consider what is over the horizon, and how we might get there.

Forska
By Todd Furmanski

Forska has been my current project for the last two Procedural Game Jams and, for reasons implied below, will be on my schedule for 2016 as well. The project can be described as a navigable, procedural landscape painting, where a user simply clicks on a static image to move, with the "painting" updating to that implied location. I mean to talk a little about why I developed the project, and go into a little bit of detail on how a few of its aspects work.
Various versions of Forska can be found at

A Sketchbook, Portfolio, Toolbox, and Zoo

Forska (Swedish for "research") is meant to be a place where I can implement different procedural techniques and see how they play together, without having to worry too much about a specific goal. Specific goals can be achieved later in external projects, where I can transplant code fragments and modify them to a more specialized end. Within Forska itself, each technique has a demonstrative effect on the virtual space, and I try to generalize it to make transplantation to a different project as easy as I can. Having them all in one place can be very handy as well. A lot of my experimentation with simulations and agent behaviors, for instance, proved slow going until I realized I needed an interesting enough environment for the agents to inhabit, sense, and react to. Terrains, agent behaviors, graphical effects, and other dynamics can be hard to develop in a vacuum, so placing them all into one project seemed like the logical choice for an ongoing coding jam. I use Unity as the primary engine, which has helped me port code to a variety of different platforms, as well as allowing me to view parameters and world states easily. A certain amount of traffic between the Processing Java libraries and my own venerable C++ codebase has also been known to happen.

Navigation

The idea behind Forska's navigation is quite simple - you click on the image, and you're teleported to where you clicked. In many ways this calls back to adventure games like Myst, but instead of having a limited database of images, I take the appropriate image using a virtual camera and a 3D space. It's a simple matter of raycasting from the mouse cursor to the corresponding point in the landscape. I realize that many VR experiences have adopted a similar approach.
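The click-to-teleport raycast can be sketched in a few lines (my own illustration, not Forska's Unity code): unproject the mouse position into a world-space ray, intersect it with the terrain (here simplified to a flat ground plane at y = 0), and move the camera to the hit point.

```python
def ray_ground_hit(origin, direction, ground_y=0.0):
    """Intersect a ray with the horizontal plane y == ground_y.
    Returns the hit point, or None if the ray never reaches it."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy == 0:
        return None                  # ray parallel to the ground
    t = (ground_y - oy) / dy
    if t <= 0:
        return None                  # plane is behind the camera
    return (ox + t * dx, oy + t * dy, oz + t * dz)

def click_to_teleport(camera_pos, click_dir, eye_height=2.0):
    """Teleport the camera above wherever the clicked ray hits the ground."""
    hit = ray_ground_hit(camera_pos, click_dir)
    if hit is None:
        return camera_pos            # clicked the sky: stay put
    return (hit[0], hit[1] + eye_height, hit[2])

# Camera at (0, 10, 0) looking down-and-forward.
print(click_to_teleport((0.0, 10.0, 0.0), (0.0, -1.0, 1.0)))  # → (0.0, 2.0, 10.0)
```

In an engine you would obtain `click_dir` by unprojecting the mouse coordinates through the camera, and intersect against the real terrain mesh or heightfield rather than a flat plane - which is exactly why one click can cover a single step or a mile.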
This idea of "click to move" came from a need to quickly explore large virtual spaces without slogging back and forth, or rocketing too fast past small destinations. With the paradigm Forska uses, taking a single step or walking a mile can both be done in one click. This approach has also proven to work very well with touchscreens and similar interfaces. I have watched far too many people struggle with a game controller while exploring a space at 24-60 frames per second. My countless hours of gaming in my youth have given me the dexterity to use a joystick or gamepad - many people have not had this tacit education. I wanted to experiment with removing this barrier of entry to exploring a virtual space.

Non-Photoreal Rendering

The painted effect I give each rendered image starts with a typical 3D rendered camera shot, which then has a heavily modified blur shader applied to it. A separate, "noisy" texture input gives the blur a series of offset distances - the final result mimics brushstrokes, and it is this image input that controls stroke size, direction, etc. I tend to blur more in the midground, keeping the foreground and background relatively detailed, since a faraway point of interest can be the same relative size on screen as an object close to the camera, and faraway features can be obliterated if one simply does a "more distance = more blur" calculation. Making blurs proportional to the sine of the distance can be helpful! In keeping with the "sketchbook" approach, I've developed several methods that mimic styles like oil paints, pastels, woodcuts, mosaics, and the like. Other procedural elements like terrain generation and dynamic skies give these shaders good subjects to work from, and can give even identical scenes their own sense of character.

Future

I mean to continue adding to this menagerie of techniques, as well as methods to explore and view them.
I’ve done a few smaller works using components from Forska, and I mean to do more, each with a mixture of handcrafted and procedural components. I do not mean to add things like narrative, puzzles, encounters, or real time graphics to Forska itself, but I certainly plan to develop them for a variety of the project’s offspring! Simulations By Sam Geen @eegnsma I am an astrophysicist, and I make universes, I tell people. Or... well, maybe. I suppose it's about the story I want to tell. Let me tell you a story. Human history is often landmarked by the machines it creates. In earlier times, the binary stars Mizar and Alcor were used as a test of eyesight. Ancient astronomy relied on the optics in our own heads. Today, we put telescopes in space to observe galaxies billions of years old in light our eyes cannot see. We don't limit ourselves to observing the skies. We remake them. Where the namesake of my first computer, Archimedes, once traced geometric proofs in the sand, today we build model universes in silicon. We trace gigayears of galaxy evolution in humming boxes, bits representing stars, interstellar gas and invisible matter leaping from machine to machine at the speed of light. Here's an example. This week I used a small supercomputer somewhere near Paris to simulate a bubble of hydrogen ions heated to ten thousand degrees by radiation from two massive stars. I wanted to understand a particular piece of how nebulae expand – what happens when the radiation increases tenfold as new stars are born. The question was whether an equation written in 1978 to describe this expansion still holds. It does. “We don't limit ourselves to observing the skies. We remake them.” What does a simulation mean? An equation is a story. Each part is laid out in sequence, every variable clear in its role. In this example, I used a simulation to see whether this story was true or not – if we put all the same characters in the same setting, will they reach the same ending as the story? 
“A simulation is a landscape, unexplored until it is plotted, visualised, reduced.” A simulation isn't a story itself. A simulation is a landscape, unexplored until it is plotted, visualised, reduced. It is not a real landscape, but one we choose to generate, with its own choices and limitations. If our model for the violent death of massive stars is wrong, or we cannot resolve these vast explosions in our simulated galaxies, say, does our simulation tell us anything useful about them? As Deep Thought told Loonquawl and Phouchg, an answer without a question is pointless, and it's the question that takes the most thought. Simulations have value because they are constructed to answer certain questions. They are expensive and time-consuming, both for the machine and for our limited time on the planet. We must, without biasing ourselves towards a certain result, ask what kind of story we want to tell before we ask the computer what the ending is. As scientists we must ask ourselves why we're doing what we do. Science is a toolkit for understanding the universe, and simulations are a powerful new tool in that kit, growing in power each time a new processor is printed by workers in Asia, each time designers push the physical limits of these intricate machines. It's tempting to see this as an end in itself, the march of technology accelerating us to a utopian future. But the only warmth a supercomputer gives is waste heat, dumping entropy into the slow heat death of the universe. Is this all we're doing, making bigger numbers until the universe grows cold and dark? We must teach ourselves how to tell stories. Science is not an algorithm, a handle to be turned until the whole universe unfurls before us. It is a human act, people sharing narratives, trying to come to a deeper understanding. Scientific discoveries are human joys, whether it's finding a new creature deep under the ocean or trying to fit the cosmos into our heads. Games, then. 
Computers can be sterile things. Procedural landscapes stretching for infinity with no life or purpose. But the computer was only ever a canvas, something for humans to impart meaning to. I follow a simulated person going about their day in a cityscape I built. I watch two siblings bond over their simulated person finding love in a dressing gown. I see a colonist nurse a raider back to simulated health, and become friends. “Or do we want to deepen our understanding of the world around us, to foster beauty, to bring each other closer…” Systems in games tell stories. The simplicity necessitated by simulations means we must choose the underlying models. Do we want to tell a story of endless conflict, capital accumulation, ecological collapse, nationalism, fear of the other, shooting someone for the last tin of beans in the shattered world of our own making? Or do we want to deepen our understanding of the world around us, to foster beauty, to bring each other closer, to imagine worlds where exploitation, want and cruelty are not necessary and eternal. As Einstein said, “human beings are not condemned, because of their biological constitution, to annihilate each other or to be at the mercy of a cruel, self-inflicted fate”. The simulators have only interpreted the world, in various ways. The point, procjammers, is to change it. Style Transfers By Isaac Karth procedural-generation.tumblr.com Some of my recent experiments with the NeuralDoodle style transfer neural network. “St Michael at the North Gate, Oxford, England, in the style of The Rocks by Vincent van Gogh” “A comparison of the source photo and the result: a photo of the woods in the style of The Cemetery by Carl Fredrik Hill” “A photo of the woods, in the style of Part of a Crucifix with the Ascent of Christ, 13th century, artist unknown” “A photo of Stonehenge, in the style of a Mughal painting of Bibi Ferzana, c. 
1675, artist unknown” “A photo of a train station, in the style of View of Toledo by Aureliano de Beruete” Space Noise Machine By David Murphy “The result is a splotchy noise, large patches of +1 with short edge fades to -1.” I created my own noise library, named Space Noise Machine. It borrows heavily from the "module" concept of libnoise by Jason Bevins. I found libnoise was great, but I wanted to create my own. One reason is that my goal is a game made entirely of my own code. The second reason was that I thought libnoise could do with more modules. The Earth-like planet you see was generated through a couple of spheres of noise: the ground, and the clouds. The ground layer is produced by combining two Perlin Noise generators, one through a Ridged Multi-Fractal modifier and the other through a Fractional Brownian Motion modifier. This gives a nice rough surface with mountain ranges; we'll call it Noise A. A third Perlin Noise is generated which is also put through a Fractional Brownian Motion modifier. This is used as a selector between two constant modules, -1 and +1. The result is a splotchy noise, large patches of +1 with short edge fades to -1. Or put another way, landmasses (with no detail) surrounded by beaches. Call it Noise B. Noise A and B are then combined by using Noise B to decide whether to sample from a Constant -1 value (the water) or a value from Noise A. The end result is randomly generated land masses with ridged mountains. The combination was done this way to avoid the constant "mountains in the middle" of the landmasses that you see when just taking Perlin Noise. It takes a lot more computational power, but the result is much better. The cloud layer is generated in much the same way. A Perlin Noise is put through a Contrast Curve modifier and is used to generate clouds. A second Perlin Noise is used to act as a selector between a Constant -1 "no clouds" generator and the Clouds. 
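The selection step used for both layers can be sketched with libnoise-style function modules. A toy sketch (the sine “control” stands in for the third Perlin generator, and all names are illustrative, not Space Noise Machine's API):

```python
import math

def select(control, source_a, source_b, threshold=0.0, falloff=0.25):
    """libnoise-style Select module: output source_a where the control
    noise is above the threshold, source_b below it, and blend linearly
    inside the falloff band (the "short edge fades")."""
    def module(x, y, z):
        c = control(x, y, z)
        lo, hi = threshold - falloff, threshold + falloff
        if c <= lo:
            return source_b(x, y, z)
        if c >= hi:
            return source_a(x, y, z)
        t = (c - lo) / (hi - lo)  # 0..1 across the blend band
        return source_b(x, y, z) + t * (source_a(x, y, z) - source_b(x, y, z))
    return module

def constant(v):
    """Constant module, like the -1 'water' and +1 'land' sources."""
    return lambda x, y, z: v

# Splotchy land/water mask: sin() stands in for the selector Perlin.
land_mask = select(lambda x, y, z: math.sin(x), constant(+1.0), constant(-1.0))
print(land_mask(math.pi / 2, 0, 0))  # deep inside a landmass → 1.0
```

Swapping `constant(+1.0)` for a detailed terrain module is exactly the Noise A/Noise B combination above: the selector decides, per point, which source to sample.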
The final, tricky part is spiraling this noise around randomly distributed points by random amounts. This is what gives the clouds that swirly nature. One of the most interesting things in this is that these aren't 2D images that are being generated, but rather cubemaps. This is done by all of the noise being generated in three dimensions and then being sampled along the surface of a sphere before being projected onto the cubemap. This results in no distortion at the poles. “I needed to find a way to have pretty graphics without knowing how to draw.” The moons you see are low-res works in progress. The hardest part is the creation of craters. Their size, distribution and nature are difficult; I've yet to create the modules to give them high ridges that quickly fall off into low valleys. This is all a work in progress for a game I'm working on. As a programmer, not an artist, I needed to find a way to have pretty graphics without knowing how to draw. By letting my programming generate the art, it absolves me of a need for artistry and also gives me access to an endless amount of content. Postmortem On Generating TOWNs By Grégoire Duchemin As my end-of-studies project, I teamed with four other students to release TOWN, our Tiny prOcedural World geNerator. You can check it out online (); feel free to contact us at pcg.town@gmail.com if you have any questions. We were originally aiming for a village generator with 4 different themes to choose from, including "Countryside Village" and "Seaside City". Features like hills, lakes, forests and flat lands were required, and we used noise-generating functions parameterized to give us the terrain we wanted. Optionally, a river can pathfind its way from one side of the map to another. To avoid floating buildings as much as possible, the village is placed on a large and flat area, upon which randomly-scattered points have been used to generate a Voronoi graph. 
Each face of the graph is assigned a lot type (deciding if it will be filled with buildings or decorative props) and some building templates, while roads are drawn along the edges. Other decorative features like a main road, street lights and utility poles were added to frame the village as part of a bigger world. We also have on-the-fly generated music playing while visiting the village. Having not been involved in it, I sadly cannot say much about it, other than that it uses common phrases from jazz music and can produce really cool pieces with a bit of luck. My role in this project was to fill the housing lots with buildings, a work I did in two parts. The first was writing a house model builder accessible via a small script language and a generator to create said scripts, and the second was about delimiting spaces in our lots to place the generated buildings. I learned a lot working on these tasks, and wanted to highlight some of my favourite takeaways from this project.
- Do not be afraid to use simple methods. The few articles I found on house and building generation were all using 3D maths to intersect solids. Since that was not trivial to implement from scratch, I decided to first try a grid-based approach. While the result feels blockier, it helped us when choosing a vision for the finished project (we tried to emulate a Godus concept art), and we eventually settled on it as an art style. Similarly, when we realized that our music sounded more "classical" (to untrained ears) when played with harpsichord samples, it removed the need for a more complex method.
- As with handcrafting assets, following a reference is a must. As mentioned earlier, we used references and concept art from the Internet to define our visual style. This holds for non-visual things too, like having an example script for a DSL or the outline of a complex algorithm to help stay focused on bite-sized pieces of the code base. 
- Setting up an out-of-code, easy way to test your generator encourages more frequent tests when implementing new features; this was the reasoning behind my script-based house builder. That way, testing out new features and building styles is easier, does not require knowledge of the code base, and paves the way for presets and theme-based generation.
“By tweaking ours to force a very mountainous terrain, we managed to make a village spawn on top of a mountain...”
- Breaking the parameters' limits can lead to unexpected yet interesting outputs. By tweaking ours to force a very mountainous terrain, we managed to make a village spawn on top of a mountain, which was a scenario we wanted to avoid, and it turned out much nicer than we expected. We liked it so much it ended up in our presentation video!
- An organic look is nice, but can quickly devolve into a chaotic mess. Adding some smaller, repeating patterns helps with making the output feel less totally random and is a first step toward constrained/themed generation. “...a sheep field and handmade detailed houses to serve as known landmarks for the observer.” In TOWN, we contrasted the organic feel of a Voronoi graph with a grid-based placement for houses in the middle of a lot. We also added some staple locations like a marketplace, a sheep field and handmade detailed houses to serve as known landmarks for the observer.
- As a follow-up to the last point, regularly getting exterior feedback is a must, especially on a group project where your vision of the finished product is not the only one to shape the development. Had I worked alone, there would be much more grid-based placement in TOWN, which would have detracted from our aesthetic goals.
Overall, while there are things I would do differently were I to reimplement them, I think we managed to get the key points right and produced a result we were proud of. 
Maybe our major mistake was not having someone dedicated to making the visuals look nicer to the eye... Growing self representational life forms & some dusty software archaeology By Dave Griffiths Sometimes you stumble over a dusty collection of source code you haven't thought about for years and can't even really remember writing. This article is about a bit of software archaeology, Moore’s law and procedurally generating alien lifeforms. GEO was a free/open source software game I wrote around 10 years ago. I made it a couple of years after I started working in my first job at a games company, and was obviously influenced by that experience. At the time I remember it was a little demanding for graphics hardware, so I moved on to other things and forgot all about it, but it turns out that in the intervening years processing power has caught up. It was an attempt at a purely procedural game, with no assets at all – influenced by how demosceners built vast procedural worlds only with code. The main thing about GEO is that while being a slightly awkward 2D space shooter, the difficulty curve is a side effect of artificial evolution that happens as you play, and learns from your actions. The game is set in an expanding region of space inhabited by lifeforms built from component parts with different purposes – squares for generating energy, triangles for defence and pentagons which can be used to spawn copies when conditions are right. The lifeforms grow over time according to a genetic code which is copied to descendants with small errors, giving rise to evolution. The lifeforms have mass, and your role is to collect keys which orbit around gravitational wells in order to progress to the next level, which is repopulated by copies of the most successful individuals from the previous level. 
Each game begins at level 1 with a population of randomly generated individuals, and the first couple of levels are quite simple to complete, as they mostly consist of dormant or self-destructive species – but after 4 or 5 generations the surviving lifeforms are the ones that have started to reproduce, and by level 10 one or two species will generally have emerged to become highly invasive conquerors of space. It becomes a race against the clock to find all the keys before the gravitational effects are too much for your ship’s engines to escape, and the growth becomes too fast for your collection of weapons to ‘prune’ the emergent structures. “...a program trying to imitate something complex like a human brain, it really just represents itself...” AI in games is mostly considered to be about emulating humans. What I like about this form of more humble AI (or Artificial Life) is that instead of a program trying to imitate something complex like a human brain, it really just represents itself – challenging you to exist in its utterly alien but consistent world. I wonder why the dominant cultural concept of sentient AI is a supercomputer deliberately designed, usually by a millionaire or a huge company. It seems to me far more likely that some form of life will arise – perhaps even already exists – by accident in the wild variety of online spambots and malware mainly talking to each other, and will be unnoticed – at first, and perhaps forever – by us. What I’ve enjoyed most about playing and tinkering with this rather daft game is exploring and attempting to shape the possibilities of the Artificial Life while observing and categorising the common solutions that emerge during separate games – cases of parallel evolution. There is a fitness function that grades individuals which is used to bootstrap the population before they are good enough to survive (e.g. 
they get points for simply shooting at the player or generating an energy surplus), but most of the evolution after the first couple of levels tends to occur ‘naturally’ while you are playing. The species which takes over is the one that manages to reproduce most effectively, defends itself and repairs damage the best. 'How it works' Each individual in the population carries around a text description of itself, a Lindenmayer system, which contains an axiom (its starting condition) and 4 replacement rules. We start with the axiom and the rules are repeatedly run on the output string to 'grow' the life form. This is an example of a successful organism “grown in the wild” and its L-system description:
axiom: "[[sp3ss"
'0' → "][p3t]]]]"
'1' → "t][[[[["
'2' → "2[s]t[tt"
'3' → "ps[spp0s0"
At each growth step, the numbers are replaced by the corresponding strings – which can contain their own numbers and provide recursive self similarity. These are the first 5 growth steps for this organism, starting with the axiom:
0: [[sp3ss
1: [[spps[spp0s0ss
2: [[spps[spp][p3t]]]]s][p3t]]]]ss
3: [[spps[spp][pps[spp0s0t]]]]s][pps[spp0s0t]]]]ss
4: [[spps[spp][pps[spp][p3t]]]]s][p3t]]]]t]]]]s][pps[spp][p3t]]]]s][p3t]]]]t]]]]ss
These strings are then parsed to convert them into structures: 't', 's' and 'p' represent triangle, square and pentagon. When one of these is found, the parser searches for following blocks of characters enclosed by square brackets. These are attached to the sides of the shape in order to provide a tree topology. There are many things that could be done to improve or expand this game, make it 3D, get it on different platforms and so on. I've recently uploaded the source here with some tweaks to make it easier to compile: Let me know if you get anything interesting out of it, or develop it in new directions. About the author: Dave Griffiths is an award-winning game designer, programmer and livecoding algoraver based in Cornwall. 
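The string-rewriting step described under 'How it works' takes only a few lines to reproduce. A sketch (the parser that attaches triangles, squares and pentagons to each other is omitted):

```python
def grow(axiom, rules, steps):
    """Expand an L-system: each step replaces every character that has
    a replacement rule; everything else is copied through unchanged."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# The organism from the article:
rules = {'0': "][p3t]]]]",
         '1': "t][[[[[",
         '2': "2[s]t[tt",
         '3': "ps[spp0s0"}
print(grow("[[sp3ss", rules, 1))  # → [[spps[spp0s0ss
print(grow("[[sp3ss", rules, 2))  # → [[spps[spp][p3t]]]]s][p3t]]]]ss
```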
In 2014 he co-founded Foam Kernow, an independent research institution for exploring uncharted regions of art/science and designing speculative cultures. Previously he worked in the games (Sony Europe) and film computer graphics industry (Moving Picture Company), and has credits on feature films including Troy and Kingdom of Heaven. Chance A: Me By Ciro Duran “It seems that ideas come from what you most usually do or live.” I've been a frequent commuter since I was old enough to take public transport by myself. Subway, bus, train, you name it. Some people hate the idea of taking public transport, but I sincerely love it. Most of my game ideas revolve around experimenting with traffic, something I only noticed once a friend pointed it out to me. It seems that ideas come from what you most usually do or live. This is a realisation that I can live with, and it still keeps giving me ideas, even if I can't get them together for a game. One of the most curious things I've found during my commute is the "love connection" section of the free newspapers I get before I board the train. This section contains very short messages from people who saw other people and wish to see them again to start talking. I've seen teenagers reading the section's messages in funny voices to pass the time on the train, so I guess getting your message out there like that seems kind of desperate. Still, some of these texts are clever, some sweet, and most are not specific enough to be creepy; you could even argue they are the product of some intern’s mind and their desires, written in a corner at the newspaper headquarters. Anyhow, I started wondering about a game that happens while commuting but is not about traffic; rather, it is about the people who commute regularly. 
There are some unwritten rules about commuting: do not chit-chat too long with someone who is visibly annoyed by your attempts to talk with them, do not stare too long at people, DEFINITELY do NOT talk to someone who is wearing their earphones, among others. At the same time, if you're looking to connect with someone, you need to somehow work around these rules. This forms the basis for a short game, to which we can add a bit of procedural generation to produce stories. What if the love of your life is with you in the same carriage at this very moment? They're sending a message to you; you just have to figure out who is telling you that, and try to find their gaze. Look at them too briefly, and they won't notice; look at them too long, and you'll scare them away. I'm currently experimenting to see where this premise leads. On the technical side, I'm using OpenFL for drawing some faces and animating them, and I'm using Tracery () for building the messages. I'm specifically using a Haxe port I made () from the original Javascript. The game generates a character from a series of parameters (you can see an example here -), and then builds a grammar from those features to compose a message. Since the person must describe you, you should also create your own character. You can notice that the hair is still work in progress. :) The idea of the game is to have very simple controls, just move the mouse and click a button, using the gaze as the main verb in the game, but this is all still experimental. For now, I just have a simple way to display faces, and a way to generate silly descriptions. In order to get the ball rolling, I fed the Tracery grammar to a bot with these descriptions, which you can see at @chancea_me thanks to Cheap Bots, Done Quick! (). Hopefully the bot will explore some ways a relationship could start (or crash and burn). 
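A toy version of what Tracery does when it flattens a grammar fits in a few lines (real Tracery adds modifiers, actions and saved data; the grammar below is a made-up example in the spirit of the "love connection" messages, not the actual @chancea_me grammar):

```python
import random

def expand(text, grammar, rng):
    """Recursively flatten a Tracery-style rule: every #name# token is
    replaced by a random expansion of that rule."""
    out = []
    for i, part in enumerate(text.split("#")):
        if i % 2 == 1:  # odd chunks sat between # marks: rule names
            out.append(expand(rng.choice(grammar[part]), grammar, rng))
        else:
            out.append(part)
    return "".join(out)

grammar = {
    "origin": ["I saw you on the #line# line, #detail#."],
    "line":   ["red", "blue", "circle"],
    "detail": ["reading #book#", "humming to yourself"],
    "book":   ["a worn paperback", "tomorrow's newspaper"],
}
print(expand("#origin#", grammar, random.Random(1)))
```

Because rules can reference other rules (`#detail#` pulls in `#book#`), a handful of lists multiplies into many distinct messages.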
I hope you have fun, and that you find some inspiration for your bots/procedural generation/stories in your day to day. Overworld Forever By David Morrison Overworld Forever is a tile-based adventure game created by training a level generation system on the overworld map from The Legend of Zelda. The game uses an n-gram based approach that can create overworlds that are statistically similar in layout to the original game. By increasing and decreasing the length of the n-gram, the system can be made to produce maps that vary from having no coherence between adjacent tiles to being so constrained that they are identical to the original Zelda map. Somewhere in the middle of these two extremes are overworlds that are similar enough to the source map to be playable while providing enough variation to be interesting. They are often hard or impossible to traverse, with paths that lead nowhere, rivers that flow into deserts and then stop, and horizontal bands of hedgerows and boulders where the system gets stuck in probabilistic cul-de-sacs. Like all Machine Learning based generative algorithms, the system works in two phases; first it trains, then it generates. In the training phase, the system works out the probability that each tile will appear at the end of a sequence of other tiles in the original map. Any sequence of tiles can be represented as a string of their indices concatenated together. These can then be put into a hash table. Each sequence hash is keyed to a list of possible successor tiles. The algorithm iterates over the map and every time it encounters a sequence, it records what the next tile is and places it into that sequence's successor list. To generate overworlds, the system puts down a random tile at the map's origin, then looks for the sequence containing only that tile in the hash table. It then selects the next tile from the list of successor tiles for that sequence. 
This process continues until the system has generated enough tiles to fill the new map. In the future, I’m hoping to expand the game to include dungeon levels and limit the space of possible overworlds to ones that satisfy basic playability constraints, like making sure the player can walk to every room on the map. Perhaps hybrid systems that blend grammar-based approaches with machine learning might be a good way to train dungeon generators with similar flows to the original Zelda while making sure they can be traversed and completed. The n-gram technique described above can be applied to any tile based game or image. Statistical learning approaches such as this provide designers with new ways to explore the parameter spaces around their designs without explicitly formulating them as generative systems. There is the potential to integrate them into level editors, allowing designers to train their design tools on collections of their previous work. The broader family of techniques is not limited to grids. For example, stochastic graph and shape grammars can be trained on level topologies. This increases the kinds of artifacts and forms that can be generated to include most game genres. Machine Learning also opens up the possibility of discovering structures and patterns inside existing games and directly transferring them to future ones. The topology of the levels in a game like Pacman could be used to train a model for generating first person shooter levels, for example. Perhaps that’s getting a bit far out, but even relatively straightforward techniques can yield unique and interesting results based on existing games and content that’s just sitting around waiting to be mined! Overworld Forever is implemented in Processing using sprites and map data stolen from The Legend of Zelda. It is available on Github at erworld-forever. 
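The train/generate loop described above can be sketched in a few lines. This is a 1D sketch over a string of tile characters (the real system works over the 2D Zelda map and tile indices):

```python
import random

def train(tiles, n):
    """Key every length-n tile sequence to the tiles observed right
    after it in the source map."""
    table = {}
    for i in range(len(tiles) - n):
        table.setdefault(tuple(tiles[i:i + n]), []).append(tiles[i + n])
    return table

def generate(table, n, length, rng):
    """Seed with a random learned sequence, then keep sampling a
    successor of the last n tiles; reseed on a dead end (the
    'probabilistic cul-de-sacs' mentioned above)."""
    out = list(rng.choice(list(table)))
    while len(out) < length:
        successors = table.get(tuple(out[-n:]))
        if not successors:
            out += list(rng.choice(list(table)))  # restart the chain
        else:
            out.append(rng.choice(successors))
    return out[:length]

source = list("~~~..##..~~~..#.#..~~")  # ~ water, . ground, # rock
table = train(source, n=2)
print("".join(generate(table, 2, 30, random.Random(7))))
```

Raising `n` makes the output hew closer to the source, exactly the no-coherence-to-identical spectrum the article describes.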
“Machine Learning also opens up the possibility of discovering structures and patterns inside existing games and directly transferring them to future ones.” David Morrison is a research assistant in the St Andrews Human-Computer Interaction Research Group. A long time ago he used to work in the games industry. Moai By Eggy Interactive An Introduction Moai is a procedurally-generated low-poly exploration game where you play as a moai, or a sentient stone being, with the power of infinite patience, allowing you to fast-forward time and watch trees sprout and days and nights pass. This game was made as an undergraduate senior project at the University of California, Santa Cruz by a team named Eggy Interactive. Our vision from the beginning was to create a beautiful, vibrant, and vast world that players could lose themselves in. To create a world with the size and complexity that we wanted, we turned to something called procedural generation. This basically means that before the player hits PLAY, the world doesn’t exist yet. The game creates the world on the spot as soon as you hit that button, based on pseudo-random numbers and values that the computer generates, resulting in something completely different every time the player starts a new game. Creating a World The world created by the game comes with many systems, such as a day/night cycle, weather, vegetation, and more. The most fundamental system is the terrain generator. The world is generated in square units we call chunks. The topography of these chunks is determined with Perlin noise maps, which decide the height, kind of topography, and biome of each chunk, resulting in various formations like hills, valleys, and mountains. What about the objects? If we placed objects randomly into the world, the system would sometimes end up putting all the trees in one spot, similar to how, when you flip a coin multiple times, you sometimes get heads every time. 
Even if the objects were placed spaced out randomly, the area would look more like a messy room than a forest. Instead, we divided up the chunk into smaller areas, designated a spot in each area to place a cluster of objects, and then placed objects at random points inside a circle around that spot. This way we get nice clusters of objects, and the area looks more like a natural forest. A Natural Low-Poly Look When it comes down to it, there was only so much we could leave to the system to generate. That being said, not every single aspect of the world was left for our generators to decide. While the “when’s,” “where’s,” and “how’s” were decided by the generators, the “what’s” were designed by us. One of the key elements the system needs to know when deciding what to put in each chunk is what biome that particular chunk is going to be. We designed eight unique biomes that the system can choose from, each with their own set of weather, vegetation, and scenery. These things all need to be manually designed by our art director, who decided on the low-poly aesthetic. Low-poly is a very popular aesthetic taken on by many games, especially indie games, because of its simple yet elegant look that is easy on the eyes. During prototyping, we attempted the “super simple” style that many low-poly games opt for, but while the visuals were very nice and clean, we felt the hard edges made the world feel less vivid, natural, and “alive,” so we tried a different approach. Instead of using the polygonal shapes to define just the generic shape of each object, we also used the vertices and polygons to create texture in each object, adding more detail to each object and giving it a more natural look while still keeping the elegant style of the low-poly aesthetic. We have to give props to our very talented art director who somehow found the perfect combination of order and chaos, incorporating a sense of flow for the eyes to follow in each design and animation. 
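The cluster placement described above can be sketched like this (a hypothetical sketch; names and parameters are mine, not Moai's code):

```python
import math
import random

def scatter_clusters(chunk_size, cells, per_cluster, radius, rng):
    """Split a chunk into a cells x cells grid, pick one cluster centre
    per cell, then drop each object at a random point inside a circle
    around its centre, instead of uniformly over the whole chunk."""
    cell = chunk_size / cells
    points = []
    for cx in range(cells):
        for cy in range(cells):
            centre_x = cell * (cx + rng.random())
            centre_y = cell * (cy + rng.random())
            for _ in range(per_cluster):
                angle = rng.random() * 2 * math.pi
                r = radius * math.sqrt(rng.random())  # uniform in a disc
                points.append((centre_x + r * math.cos(angle),
                               centre_y + r * math.sin(angle)))
    return points

trees = scatter_clusters(64.0, cells=4, per_cluster=6, radius=3.0,
                         rng=random.Random(3))
print(len(trees))  # → 96
```

Subdividing first guarantees every part of the chunk gets a cluster, avoiding the all-the-trees-in-one-spot problem of pure random placement while keeping each cluster's interior organic.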
Speaking of animation, the world actually grows right before your eyes. Every large plant is animated to grow as you fast-forward time, allowing you to witness the life cycle of entire forests. “Instead of using the polygonal shapes to define just the generic shape of each object, we also used the vertices and polygons to create texture in each object...” Additionally, the vegetation and scenery are each programmed to have their own ambient animations, such as vines swinging, flowers waving, and water stirring. When you stand still, the world isn’t frozen, but full of life. With all of this, we’ve made a vivid and beautiful world with absolutely nothing to do in it. In fact, with procedural generation, it’s very easy to make something extremely pretty, but also extremely boring. Our biggest design obstacle was now: what do we do with this world? We’ve made the interesting to see; now how do we make the interesting to do? The Interesting to Do Our solution to that is Points of Interest. The primary goal of points of interest is to promote exploration - or in other words, get the player to walk around the pretty world we worked so hard to make. In addition to placing the vast amounts of vegetation and scenery, we also have our system strategically place shrines and obelisks, which have cryptic symbols on them used for puzzles. These structures are made to be very noticeable - they have glowing beams and react to player interaction. These are meant to lead the player from point to point in the world. Our biggest points of interest are giant floating islands in the sky. We’ve designed these islands to be the ultimate end-goal of the game, so they’re really high in the sky and you can see them from practically anywhere, acting as a visual goal that the player strives to reach. All of these things that we’ve designed and generated amount to a vast and beautiful world that the player can explore endlessly. 
Moai was a game designed over a short period of five months by a small team of five people as an ambitious senior game design studio project, part of the University of California, Santa Cruz undergraduate Computer Science: Computer Game Design program. The current release of the game is only the beginning of the vast vision we have for this game. During the two quarters we had to make this game, we had to scrap many amazing ideas because we had many deadlines to meet in an unfavorable amount of time, but now that the idea has become a tangible, working game, we're excited to continue working on it and turn it into the vision we've always wanted Moai to be. Until then, the game can be downloaded at eggyinteractive.itch.io/moai. We hope that you have as much fun playing it as we did making it!

Moai by Eggy Interactive
Brian Lin – Creator, Designer, Programmer
Yunyi Ding – Art Director
Ryan Lima – Developer, Programmer
Nathan Irwin – Producer, Programmer
Anderson Tu – Composer, Audio Designer, Coordinator
eggyinteractive@gmail.com

PCG without a Computer: Combinatorial Literature
By Davide Aversa @thek3nger

"The reader can, therefore, 'generate' a different poem by changing each of the 14 verses with one of the 10 variations."

For us computer scientists and game developers, Procedural Content Generation is directly connected with computers and algorithms. It seems such a modern thing! In reality, the exploration of the "combinatorial nature of art and human thoughts" is a much older concept. Probably the oldest and most interesting example is Raymond Queneau's "Cent mille milliards de poèmes" (1961), a book of sonnets in which each of the 14 verses can be replaced with one of 10 variations. In the end, the book contains 10^14 different combinations: a hundred thousand billion poems, precisely.
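To make the combinatorics concrete, the book's structure is easy to sketch in a few lines of Python. The verse strings below are placeholders standing in for the real interchangeable verses; only the counts (14 slots, 10 variants each) come from the description above.

```python
import random

VERSES = 14
VARIANTS = 10

# The number of distinct poems the structure encodes.
total_poems = VARIANTS ** VERSES  # 10**14, a hundred thousand billion

def generate_poem(variant_table, rng=random):
    """Pick one of the interchangeable variants for each verse slot."""
    return [rng.choice(options) for options in variant_table]

# Placeholder variants standing in for the actual verses.
table = [["verse %d, variant %d" % (v, i) for i in range(VARIANTS)]
         for v in range(VERSES)]
poem = generate_poem(table)
```

Each call to `generate_poem` is exactly the reader's act of flipping one strip per verse: a single draw from a space of 10^14 possibilities.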
Probably, Marc Saporta, another French writer, thought that 10^14 was not enough, because in 1962 he published the book "Composition n° 1", a book composed of 150 unnumbered pages that can be shuffled at will by the reader, producing 150! (factorial) different books.

Another example is Georges Perec's "La Vie mode d'emploi" (Life: A User's Manual), in which the rooms of an apartment building form a grid of squares, each square associated with lists of constraints (an ordered pair). These squares are then explored "randomly" by the L-shaped narration, producing a list of objects that the author has to include in the current chapter. I know, it is quite confusing right now, but I really encourage you to look in more detail at the mathematical structure of this book! It is worth your time.

Another author fascinated by the use of mathematical rules to generate novels was Italo Calvino, an Italian novelist. The influence of the combinatorial authors is clear in books such as Le città invisibili (Invisible Cities), in which the author (as Marco Polo) describes 45 cities according to 9 thematic groups, in such a way that each part of the description can be exchanged with each other, so that "the reader can create its own path in the book". Or in Il castello dei destini incrociati (The Castle of Crossed Destinies), in which the author uses tarot cards to weave the characters' stories together.

Every author I mentioned is French (except for Calvino), but they have another point in common. They all (except for Saporta) belong to the same literary group: the Oulipo (Ouvroir de littérature potentielle, workshop of potential literature). If you are interested in this complex experimentation with narrative and human language, you definitely have to take a look at the Oulipo's authors. Ah, by the way, the group is still active and has a nice website (). Check it out.
3D Cellular Automata
By Kevin Chapelier

I started investigating offline rendering of 3D cellular automata after my work on Öde ( ) for PROCJAM 2015, which used 3D cellular automata to create abstract skyscraper-like structures in a huge simplex/Perlin noise desert. Nowadays I use MagicaVoxel and a custom command line interface tool ( ) to model the volume on the GPU by applying several CA rules iteratively. In this particular workflow, each cellular automata rule can be thought of as a simple volumetric hue or paint and, with enough practice, the user develops a general intuition of how those paints will behave when mixed together.

Working directly with MagicaVoxel (developed by ) offers a lot of advantages: quick previews of the volume while working on it, the ability to 'undo/redo', a path tracing rendering engine with a lot of options including a marching cubes renderer which is perfect for cellular automata, and the tool is frequently updated with new features.

ProcEngine: An Open Source Procedural Map Generation Engine
By Ahmed Khalifa

Every time I start thinking about designing a roguelike or a game that uses procedural generation for maps, I start googling to see what the different generation techniques are. After selecting the best one, I start writing code for it from scratch or copying it. This process is tiring and cumbersome, especially during the prototyping phase. In prototyping, I just need to test the idea as quickly as possible, which is a problem when one of these ideas depends on procedurally generated maps. After creating a couple of roguelike prototypes, I couldn't take it anymore. I decided to write my own library that I can use in prototyping. I called this library ProcEngine.
ProcEngine is an open source procedural map generation engine that allows the user to select from a bunch of different generating algorithms and tune them. ProcEngine is inspired by Nicky Case (Simulating the world (in Emoji)) and Kate Compton (tracery.js). The current version of ProcEngine (v1.1.0) supports the following features:

● Different techniques to divide the map into rooms. Only two techniques are implemented: equal division and tree division. Equal division divides the map into a grid then selects rooms from this grid, while tree division divides the whole map along its longest dimension until it reaches the required number of rooms.
● Define different tiles and define their maximum count.
● Define different neighborhoods in the form of a 2D matrix of 1's and 0's. 1's are the places to check, while 0's are ignored.
● Define any number of cellular automata that the system will apply one after the other.
● Specify where to apply the cellular automata. The system supports two positions: either applied on the whole map regardless of the room structure (useful for smoothing the whole map or generating game objects) or on the generated rooms (useful for designing dungeons).
● Connect/delete the generated islands after applying each cellular automata.
● Cellular automata rules can have multiple conditions and replacing values.

The engine allows the users to modify the underlying generator through the following functions:

● procengine.initialize(data): to initialize the system with your rules.
● procengine.generateMap(): to generate a level (you have to call initialize beforehand).
● procengine.toString(): to get a string that shows the current data saved in the system.
● procengine.testing.isDebug: set to true to allow console printing after each step in the system.

In order to use the system you need to call the procengine.initialize(data) function first; then you can call procengine.generateMap() as many times as you want. Each time you get a newly generated map.
For more details about how to use the engine, refer to GitHub (). Here are a bunch of examples that show the capabilities of the system. The first example is a very simple generator. It should generate a map of 36x24 with 10 rooms using the equal division technique.

var data={
  "mapData":["36x24", "solid:empty"],
  "roomData":["equal:4x4:10", "empty:1"],
  "names":["empty:-1", "solid:-1"],
  "neighbourhoods":{"plus": "010,101,010"},
  "generationRules":[
    {"genData":["0", "map:-1", "connect:plus:1"], "rules":[]}
  ]
};

Here are four different generated maps from the previous data, where white is empty and black is solid:

The second example is more complicated: it generates a map of 36x24 with 5 rooms using the tree division technique. It also uses three cellular automata in the following order:
1. Generate the solid structure of the rooms.
2. Connect the rooms together all over the whole map.
3. Add objects (1 player, 10 gold pieces (at most), and 15 enemies (at most)).

var data={
  "mapData":["36x24","solid:empty"],
  "roomData":["tree:8x8:5","empty:2|solid:1"],
  "names":["empty:-1","solid:-1","player:1","gold:10","enemy:15"],
  "neighbourhoods":{
    "plus": "010,101,010",
    "all": "111,111,111"
  },
  "generationRules":[
    {"genData":["3","room:-1","connect:plus:1"],
     "rules":["empty,all,or,solid>5","solid:4|empty:1"]},
    {"genData":["1","map:-1","connect:plus:1"],
     "rules":[]},
    {"genData":["1","room:-1","connect:plus:1"],
     "rules":["empty,plus,or,empty>2","player:1|empty:8|gold:2|enemy:2"]}
  ]
};

Here are four different maps generated from the previous data, where black is solid, white is empty, blue is the player, red is an enemy, and yellow is gold:

Gardening Games
By Max Kreminski

A lot of procgen-heavy games ask players to explore: to go out into the game world and actively seek out surprises amidst the procedurally generated landscape.
Exploration of this kind tends to monopolize the player's attention; as you explore, you have to pay close attention to the terrain you're traversing, the landmarks you encounter, and the dangers that beset your path. You must keep your wits about you as you venture ever deeper into parts unknown.

In exploration games that feature large expanses of procedurally generated terrain, this often entails spending a whole lot of time looking at "samey", repetitive content: the connective tissue that fills the gaps between sparsely distributed points of interest. With nothing to distinguish one massive flat expanse of desert from the next, the novelty of scale rapidly gives way to the tedium of picking your painstaking way across another hundred dunes.

What happens once you finally do find something – a temple in the desert? In many exploration games, there's no real reason to ever visit the same place twice. The loop goes something like this: you travel until you discover an interesting place; investigate it as thoroughly as you like; take from it any resources you might want or need; and then keep pushing steadily onward, away from the clean-picked remains of your past.

This, as a format, is hostile to narrative. Stories are fundamentally about change, and you can't witness change in anything or anyone besides yourself unless you observe that thing or person repeatedly over a period of time. If you never encounter the same character twice, none of the characters will ever have any chance to undergo long-term change. This limits the stories that can be told about them to the scope of however much change they can undergo in the course of a single encounter.

* * *

The surprises of the garden are nothing as monumental as isolated temples in the desert.
Instead, they are narrative surprises: surprises of cause and effect, of pushing on one small part of an interconnected system and watching the effects reverberate throughout the whole. The player can use a variety of tools to exert influence on the garden, but the ultimate outcome is always shaped by forces entirely outside of the player's control. Each tree becomes a character in an ongoing story, with a personal narrative arc all its own.

* * *

What games are gardening games? Neko Atsume is a gardening game. Animal Crossing is the quintessential gardening game. Stellaris, when played in certain non-expansionist ways, has something of the gardening game about it. Epitaph (), an idle game I made for the fermi paradox jam, was initially conceived as – and largely remains – a gardening game.

Twitter bots, too, are garden-like in nature. You set up a generator and let it run, stopping by occasionally to search through its recent output for a harvest of surprising content. Although the underlying generative structure of a twitter bot is often painfully evident from only a small sample of its tweets, there is a great deal of pleasure to be had in seeing how the different elements of this structure sometimes conspire to produce funny or startling results.

Let a thousand gardening games bloom!

Procedural Generation in Super-W-Hack!
By Ahmed Khalifa and Gabriella A. B. Barros

Super-W-Hack! is a synchronous ( erent-time-systems/) roguelike game where everything is automatically generated: levels, weapons, bosses, and sounds. It is a tribute to the roguelike genre, and is inspired by various games, such as NetHack, Super Crate Box, The Binding of Isaac, Spelunky, Sproggiwood, and more. In Super-W-Hack!, the player explores five levels laid out on a 2D map similar to those in The Binding of Isaac. To proceed, one needs to clear each room of enemies. After killing all the enemies, the player must accept a crate that will replace their current weapon with a new one.
In this game, weapons are represented as patterns. One will not see the actual weapon causing damage, but will see its effect on the map before choosing to trigger it. Our intention behind weapon generation was to (hopefully) increase diversity, since a player may never get the same weapon twice, and to encourage strategic planning.

Weapon generation starts with a pattern that is filled arbitrarily in a grid of random size. Patterns can be centered on the player or appear in front of them. If placed in front, they may also be infinite patterns, which repeat themselves until they reach a wall or enemy. Additionally, playtesting showed us that most players had a hard time predicting a pattern's behavior when it was too "noisy". Our solution was mirroring patterns in relation to the player.

Finally, the weapon's name is generated using a combination of adjectives and nouns, and its sound effects are generated using sfxr. Both sounds and name are based on how good a given weapon is, which is calculated from how large the area of attack is and how protected the player is (can they attack from a distance?). A set of 100 weapons is generated at the beginning of the game, and sorted according to their power. Whenever the player gets a new weapon, one is selected and removed from the set, based on the level difficulty.

Levels in Super-W-Hack! are generated in multiple steps. First, the game chooses the dungeon name in the following format: "The #dungeonType of the #adjective #bossType". #dungeonType consists of 5 categories of dungeons, which affect tile colors. #adjective is a list of funny adjectives added to the #bossType. #bossType is a list of different objects, animals, and jobs.
After that, the game selects the map dimensions and generates a 2D maze of rooms using breadth-first search, then assigns a type to each room. Breadth-first search is a search algorithm that, starting from a certain room, explores all unvisited neighboring spaces and may transform them into rooms, then repeats the process from the new rooms. Possible room types are the starting room, an enemy room, an empty room, or a boss room (only on the fifth level of the dungeon).

Finally, as soon as the player enters, the game starts generating the room based on its room type. The first step is using cellular automata to generate the room structure. Cellular automata is a technique inspired by Conway's Game of Life: each map tile has a probability of being either solid or empty based on its surrounding neighbors. After that, if the room type is an empty room, the generator produces a crate; if it is an enemy room, it adds enemies based on the level number and the generated gun power.

Super-W-Hack! has 4 enemy types: a (C)haser chases the player, a (P)atrol moves horizontally or vertically and shoots a laser if the player is in front of it, a (S)pinner rotates in the middle of the room and shoots a laser if the player is in front of it, and a (M)iner moves randomly, leaving a mine trail behind it. Enemies can attack each other if an enemy is in between the player and another enemy.

Bosses contain two or three behavior strategies, which can be either movement strategies (moves randomly, teleports, chases the player), attack strategies (leaves a mine on the floor, charges towards the player, shoots a single spot or a laser in front of it) or a special strategy (spawns enemies, heals itself). The generator selects strategies and creates the boss based on the #bossType in the level name.
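The breadth-first room-maze step described above can be sketched like this. This is an illustrative reconstruction, not Super-W-Hack!'s actual code: the grid size, room count, and the 0.7 "may transform it into a room" probability are assumptions.

```python
import random
from collections import deque

def bfs_rooms(width, height, num_rooms, start=(0, 0), rng=random):
    """Grow a connected set of rooms on a width x height grid using
    breadth-first search: starting from `start`, visit unvisited
    neighbors, randomly turn some of them into rooms, and repeat
    from the new rooms until `num_rooms` have been placed."""
    rooms = {start}
    frontier = deque([start])
    while frontier and len(rooms) < num_rooms:
        x, y = frontier.popleft()
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        rng.shuffle(neighbors)
        for nx, ny in neighbors:
            if len(rooms) >= num_rooms:
                break
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in rooms:
                if rng.random() < 0.7:  # may transform it into a room
                    rooms.add((nx, ny))
                    frontier.append((nx, ny))
    return rooms
```

Because new rooms are only ever added next to existing ones, the layout is connected by construction, which is exactly why BFS is a convenient fit for room mazes.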
Super-W-Hack! was an ambitious project. We aimed at implementing many interesting features in a short time span. Although we didn't have enough time to enhance the game to its full potential, it received positive feedback for its nostalgic art style, generated weapons, fast-paced gameplay and short respawn time. On the other hand, some found the game confusing: as a synchronous roguelike, the enemies move at the same time as the player rather than after them, which is not what players expected, and the current tutorial didn't clarify it enough. It was also a hard game, due to one-hit kills and enemies spawning too close to the player, among other reasons.

Caverns, Gems and Plenty of Text
By Tim Stoddard

Back in September 2014, I decided that my final year thesis was going to involve procedural generation. I had so enjoyed games with random yet functional levels, such as Spelunky and Rogue Legacy, that I wanted to explore the idea of creating levels from parameters. It has been two years, and I'm still learning several approaches to procedural generation, and I even created my own to use in my current game, Gemstone Keeper, a twin-stick shooter roguelike with a heavy ASCII art style, procedural levels and gemstones!

It started with my thesis project, the Procedural Level Editor. This is a program and library combined that lets you generate multi-roomed procedural levels where you can adjust the parameters and preview the result at runtime. I wrote the library so the level generation can be separated into multiple parts, which also means the editor itself can preview each stage. Since graduating from university, I've updated the library and tool to include pathfinding, make overriding the parameters easier, and even export individual levels in both text and image formats. I still use the Procedural Level Editor to generate the levels in Gemstone Keeper.
The Procedural Gemstones were something I created for PROCJAM 2016 back in February, as a means to display 3D gemstone graphics without creating and loading in models. I was inspired by methods of generating snowflakes through symmetry. By controlling the shape of the gemstone with the number of lines of symmetry, I could create the vertices needed for the gemstones I wanted. I originally created the procedural gemstones in Unity, but Gemstone Keeper is written in C++ with SFML, so there was the fun task of writing a software 3D renderer. In the end I found creating procedural meshes to be a fun challenge that became very useful.

Since then I still find myself using procedural generation to solve problems. Recently I've been using Worley and Perlin noise to add some ice and fire effects to the caverns, and using Markov chains to generate the names for the gemstones. Ever since I started work on Gemstone Keeper, I've often enjoyed the challenge of making something from almost nothing, which is why almost all the graphics are made from a single text font file. When the object gets more challenging, the process of creating that object gets more creative.

Stop Worrying And Love Constraint Solvers
By Martin Černý

A few months ago, I read a nice post by Kate Compton on creating generators (). However, one thing was bugging me: the post says that constraint solvers are not something you could easily use for PCG. Here, I want to convince you of the opposite: constraint solvers are great tools for PCG, and implementing your own is easy.

So what are constraint solvers for? Let's say you have a dungeon map and want to decide what goes into individual rooms (enemies, loot …).
You also have some idea of how the dungeon should be composed: "There is always a healing item close to strong enemies"; "Total strength of all enemies is less than 200"; or "No two adjacent rooms have the same content". Or you develop an open-world game and you want to generate "bring me an item" side-quests using existing NPCs, so you need to choose an NPC as a quest giver, the item it wants, and an NPC that has the item. You want the NPCs to be within a reasonable distance of each other, and the item must be something the quest-giver wants.

Both examples can be modelled as a bunch of variables (contents of the individual rooms / quest-giver, item and item-owner) where each variable is associated with a domain. A domain is simply a list of possible values (enemies and loot / existing NPCs / item types), and every variable may have a different domain. Your design requirements then form constraints that say which combinations of values are OK. Constraints can concern a single variable ("the quest giver must like the player"), a pair of variables ("the quest giver does not have the item") or even multiple variables. The solution is then an assignment of the variables from their respective domains that satisfies all constraints. This forms a constraint satisfaction problem [1] which is solved by a constraint solver. The nice part here is that you don't have to know how to find what you are looking for; you only need to be able to recognize a valid result when you have found it.

So how do we implement a simple constraint solver for our generator? We combine two things: search (try all possible combinations) and inference (quickly eliminate obviously wrong possibilities). The search part (also called backtracking) goes like this:

_______________________________________________________________________
[1] Some of the terminology I use in this article may seem arcane, but it is used because of its Googleability.
1. Start: no variables are assigned a value; choose the 1st variable as currentVariable.
2. Repeat [2]:
   a. If all variables up to currentVariable are assigned values that satisfy all constraints, move to the next variable (currentVariable++).
      i. If there are no more unassigned variables, the current assignment is a solution.
   b. Else, assign the next value to currentVariable.
   c. If all values for currentVariable have been tried, unassign currentVariable and return to the previous variable (currentVariable--). This is called a "backtrack".
      i. If all values for the 1st variable have been exhausted, there is no solution.

Once you implement this you can add inference techniques, until the generator is fast enough. The beginner's menu consists of:

● Node consistency: Before the search, check all constraints that concern only one variable and remove the failing values once and for all.
● Forward checking: After moving to a new variable in step 2.a, scan the domains of the remaining unassigned variables (one at a time) and remove values that do not satisfy constraints involving the already assigned variables. Note that you have to remember which values were removed in this step, because they need to be returned on backtrack. Bitmasks are an efficient way of storing which values should not be tried.
● Backjumping: Upon backtrack, you can safely skip multiple variables back as long as the skipped variables are not involved in a constraint with the variable that caused the backtrack.

These three weird tricks are sufficient to solve small problems (as in the side-quests example) in microseconds! Adding more juice (links below) can get you solutions for problems with a few dozen variables (as in the dungeon generator) in milliseconds.

_______________________________________________________________________
[2] Some sources describe the algorithm in recursive form. The forms are equivalent.
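The same backtracking search is easy to write in its recursive form (which, as the footnote says, is equivalent). The sketch below is my own minimal illustration, not code from the article; the representation of variables as domain lists and constraints as (indices, predicate) pairs is an assumption.

```python
def solve(domains, constraints):
    """Backtracking search. `domains` is one list of candidate values
    per variable; `constraints` is a list of (indices, predicate)
    pairs. A constraint is checked as soon as all of its variables
    have been assigned."""
    assignment = []

    def consistent():
        for indices, pred in constraints:
            # Only check constraints whose variables are all assigned.
            if max(indices) < len(assignment):
                if not pred(*(assignment[i] for i in indices)):
                    return False
        return True

    def backtrack():
        if len(assignment) == len(domains):
            return True  # every variable assigned: solution found
        for value in domains[len(assignment)]:
            assignment.append(value)
            if consistent() and backtrack():
                return True
            assignment.pop()  # this value failed: backtrack
        return False

    return list(assignment) if backtrack() else None
```

For different results every run, shuffle each domain list before calling `solve`, exactly as the article suggests. Forward checking and backjumping can then be layered on top of this skeleton.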
You should also not forget to randomize the order of values in the domains prior to running the algorithm, to get different results every run. And that's it! You have a solver!

Further reading:
● "How to build a constraint propagator in a weekend" by Ian Horswill and Leif Foged (includes C# code) o.pdf, also a related academic paper, "Fast procedural level population with playability constraints", describing CSPs for filling in a dungeon. ewFile/5466/5691
● My work with CSPs for Kingdom Come: Deliverance is described in detail in Chapter 6 of my thesis (C source code available), or in a more condensed version in an academic paper: w/8995

Figure: Debugging CSPs in Kingdom Come: Deliverance (finding tuples of NPCs for short events).

Three Lenses For Generation
By Jo Mazeika

These are three different lenses to use when looking at your shiny new procedural generation system. These are not intended to be the only or best ways of thinking about a system, but are things that might be useful to keep in mind, regardless of the system's domain or generation method.

i. Ontology

To generate something, we need to know what it's composed of. Songs are made of chords, which are made of notes; stories are made of characters and actions (all of these are reductive). But when we generate something, there are always atomic units. We can't break a music note into parts, and characters have traits, but typically we don't see traits as having subtraits. Defining the ontology for the generator (or the set of all possible concepts that exist within the generator) is a critical part of building the generator, since nothing that is outside of the ontology can be output by the generator.

ii. Mereology

With the set of things in the world in place, we need some way of describing how to combine them. Mereology, the study of parts and wholes, is the foundation for this.
In order to make things from their constituent parts (events in a narrative, furniture in a room layout, organisms in a planet's ecosystem) we need to be able to describe how things are composed of which subparts. For any given artifact, there can be multiple ways of breaking it into component parts: a place setting is made of cutlery, plates, bowls and cups; or a place setting is made of a central dish, with some things to the left, some to the right and some above. This framework gives us a way of not only describing how our generated objects are comprised of their parts, but also a way of describing the set of things that can become parts of other things.

iii. Semiotics

Semiotics isn't a new lens, at least in academia. It's the field of signs and symbols (the field that lets us say that this is not a pipe, but just a visual representation of one). It's easy to run right down the rabbit hole of saying that no things are ever generated, only representations of things. But that's not useful if you aren't concerned with the philosophical implications and are more concerned with making stuff that makes stuff.

Where semiotics comes in is as follows: humans are very good at pattern matching and meaning making (thank you, evolution). It's hard to look at :) without seeing the smile. It's hard to not find contexts and connections between any sort of generated materials. Thinking about the semiotics of a system allows the designer to not only avoid unfortunate consequences of symbol combinations (insert your favorite example of unfortunate implications here) but also to leverage the power of useful symbols in context.
Often this will involve an extra layer of design on top of the main generator, or careful planning on the ontology/mereology level.

Mirror Lake
By Kate Rose Pipkin

Tips for Terrain: Define Your Function in World Space
By Rune Skovbo Johansen

Judging from activity in the PCG community, procedural terrain is one of the most popular forms of procedural generation. There's no denying that to a lot of us, creating a terrain that you can immerse yourself in and explore is very appealing. I've given this a go a few times myself, and I want to pass on a nice tip for working with terrain functions that has helped make things easier for me. I'll skip the basics and assume you've gotten a simple bumpy terrain up and running based on a noise function such as Perlin noise, simplex noise or similar.

The tyranny of ranges

It's likely that your framework requires inputs or outputs to be in certain ranges. For example, for a heightfield that accepts height values between 0 and 1, you might populate your height data like this:

for (int i = 0; i < resolution; i++) {
    for (int j = 0; j < resolution; j++) {
        // Pass array index coordinates i,j to the terrain function
        data[i, j] = TerrainFunction(i, j);
    }
}

This easily leads you to define your functions such that they match those ranges. Your terrain function might look like this:

// Function takes x and z index into terrain data and
// returns height in the 0-1 range.
float TerrainFunction(int x, int z) {
    // Base height (0.4) is a bit lower than the middle.
    // Make each noise bump average to being 50 vertices wide
    // and 10% high (+/-).
    return 0.4 + Noise(x / 50.0, z / 50.0) * 0.1;
}

While this can work fine for a while, it creates friction down the line. One of the first things you'll do to make your terrain more interesting might be to add together multiple noise functions with different scales. And at one point they might exceed the 0-1 range that the terrain accepts.
Now you have to scale the output of all your noise functions down to compensate. If you have any calculations that take slopes into account, those need to be adjusted as well. The same problem might occur if you find out you want the terrain to be more or less detailed, or if you decide to make the overall terrain area coverage smaller or larger. All the values in your function are also rather arbitrary, which makes them harder to visualize.

Using world space units

The solution to all this is to not let your terrain functions depend on arbitrary ranges, but to define them in world space. Just define everything in meters or feet, or whatever unit you use in your world.

// Function takes x and z coordinates in meters and
// returns height in meters.
float TerrainFunction(float x, float z) {
    // Base height is at 10 meters above 0 (sea level).
    // Make each noise bump average to being 40 meters wide
    // and 20 meters high (+/-).
    return 10.0 + Noise(x / 40.0, z / 40.0) * 20.0;
}

Defining terrain bounds

To be able to do things this way you need to define your world space terrain bounds. These are just the bounds in world space your terrain was already taking up. You can derive those bounds from the existing size of your terrain, or you can define the bounds first and scale your terrain to fit. Either way you'll end up having bounds values you can make use of.
For example like this (assuming the y axis is upwards):

var minX = 0.0;
var maxX = 1000.0;
var minZ = 0.0;
var maxZ = 1000.0;
var minHeight = -20.0;
var maxHeight = 40.0;

Converting coordinates from array indices to world space

You can convert from array indices to world space coordinates like this:

for (int i = 0; i < resolution; i++) {
    var x = minX + (maxX - minX) * i / resolution;
    for (int j = 0; j < resolution; j++) {
        var z = minZ + (maxZ - minZ) * j / resolution;
        // Pass world coordinates x, z to the terrain function.
        height = TerrainFunction(x, z);
    }
}

Converting heights from world space to 0-1 range

And as a very final step, you can scale your world space height into a 0-1 range using:

for (int i = 0; i < resolution; i++) {
    var x = minX + (maxX - minX) * i / resolution;
    for (int j = 0; j < resolution; j++) {
        var z = minZ + (maxZ - minZ) * j / resolution;
        // Pass world coordinates x, z to the terrain function.
        height = TerrainFunction(x, z);
        heightValue = (height - minHeight) / (maxHeight - minHeight);
        data[i, j] = heightValue;
    }
}

Now that your data format and your terrain function are completely uncoupled, you can change the terrain resolution, or the area the terrain covers, without having to change your functions. And you can mess about with your functions in meters (or whatever) without thinking about fitting them into a specific range. If they go out of range, just increase the range accordingly in your defined bounds, and everything is well again.

You can read more about procedural generation on Rune's blog at

"And you can mess about with your functions in meters (or whatever) without thinking about fitting them into a specific range."
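To see the whole pipeline in one place, here is a minimal Python sketch of the same idea; the sine-product stands in for a real noise function, and the names (`terrain_height`, `build_heightfield`) are mine, not Rune's:

```python
import math

# World-space bounds for the terrain (example values from the article).
MIN_X, MAX_X = 0.0, 1000.0
MIN_Z, MAX_Z = 0.0, 1000.0
MIN_HEIGHT, MAX_HEIGHT = -20.0, 40.0

def terrain_height(x, z):
    """Height in metres for a world-space (x, z) position.
    A sine product stands in for a real noise function here."""
    # Base height 10 m above sea level; bumps ~40 m wide, +/-20 m tall.
    return 10.0 + math.sin(x / 40.0) * math.cos(z / 40.0) * 20.0

def build_heightfield(resolution):
    """Fill a resolution x resolution grid with 0-1 height values."""
    data = [[0.0] * resolution for _ in range(resolution)]
    for i in range(resolution):
        x = MIN_X + (MAX_X - MIN_X) * i / resolution
        for j in range(resolution):
            z = MIN_Z + (MAX_Z - MIN_Z) * j / resolution
            h = terrain_height(x, z)
            # Normalise the world-space height into the 0-1 range.
            data[i][j] = (h - MIN_HEIGHT) / (MAX_HEIGHT - MIN_HEIGHT)
    return data
```

The terrain function itself never sees the grid resolution or the 0-1 range, which is the whole point of the tip.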
Be Less Random with rand()
By Aidan Dodds @Aidan_Dodds

Introduction

"Procedural content however is less about randomness, and rather more about building upon sources of randomness to create unique and artistic content."

It should come as no real surprise that most procedural content generation (PCG) systems are underpinned by a good random number generator. Most programming languages provide a means to generate random numbers, traditionally via a rand() function. This function typically generates uniform random numbers, which is to say, any number has the same likelihood of being returned as any other. If you are looking to implement your own, then a nice starting point may be the Xorshift PRNG.

Procedural content, however, is less about randomness, and rather more about building upon sources of randomness to create unique and artistic content. In this article, we will look at rand() and see how it can be extended into something more versatile.

I feel a quick disclaimer is in order, however: I am a programmer, not a statistician, so while the following techniques have served me well for my PCG needs, there is a good chance my math or terminology is wrong…

Starting point

As a starting point, let's assume that we have a function randf() that returns a uniform number in the range [-1, +1]. Such a function may already be provided by your language but is generally trivial to implement, for instance:

```
function randf()
    # where rand() returns a random unsigned integer
    return 1.0f - float(rand() % 4096) / 2048;
end
```

Uniform distributions however are often not the best fit artistically for a game or PCG system. It can be very useful to have control over the probability of our generated values.
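For readers following along in Python, the pseudocode above translates roughly as follows; `getrandbits` stands in for the unsigned `rand()` (my substitution, not the author's):

```python
import random

def randf():
    """Uniform float in (-1.0, 1.0], mirroring the article's pseudocode."""
    # getrandbits(32) plays the role of rand() returning an unsigned integer.
    return 1.0 - float(random.getrandbits(32) % 4096) / 2048.0
```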
So let's look at some alternatives to the uniform distribution, and how to produce them (in pseudo code form):

1D distributions

Triangular distribution

By taking the average of two random numbers, it can be shown that there is a much stronger chance of a value near 0.0 being produced than of 1.0 or -1.0. Such a distribution can be useful when you want to add some variance to data, with a few large variations and substantially more small variations.

```
function rand_triangle()
    return (randf() + randf()) / 2.0
end
```

"The pinch distribution as I call it (because I do not know the correct term) is somewhat like an extreme version of the triangle distribution."

Pinch distribution

The pinch distribution, as I call it (because I do not know the correct term), is somewhat like an extreme version of the triangle distribution. Values near 0.0 are very probable, whereas it is rare that values near 1.0 and -1.0 will be returned. This distribution has great results when used for adding a little variation to firing lines, for example. I have also had nice results using this distribution to affect the direction of each element in a particle system.

```
function rand_pinch()
    return randf() * abs(randf())
end
```

Gaussian distribution

A Gaussian distribution (or normal distribution, as it's also known) can be constructed, and is very distinctive with its bell-like appearance. Interestingly, the higher the number of rounds, the better the approximation becomes. This distribution can be nice when you want a good range of values with a few larger outliers. This could make a nice basis for generating good looking star systems.

2D and 3D distributions

Random 2D vector in a circle

When writing procedural generation systems it is often desirable to be able to generate a 2D or 3D vector that falls uniformly within a circle or sphere. That is to say, the vector's direction is random, and its magnitude ranges over (0.0, 1.0].
This can be useful for applications such as random sampling around a point, making random walks, and stochastic approximations like ambient occlusion.

```
function rand_circle()
    float x = 0.0, y = 0.0
    while (True)
        x = randf()
        y = randf()
        if ((x*x + y*y) <= 1.0)
            return (x, y)
        end
    end
end
```

Random unit 2D vector

Generating a good random unit vector (a vector with length 1.0) can be a little trickier than it first seems. The most obvious solution would be to randomize the x and y components and then normalize the vector; this however produces a less than ideal vector, since it will be biased towards the diagonals.

"...which however produces a less than ideal vector since it will be biased towards diagonals."

We can generate an unbiased vector by starting with our rand_circle() function before normalizing its result.

```
function rand_unit_vector()
    return vector_normalize(rand_circle())
end
```

Random 3D vectors in a sphere

The same approach we took for generating 2D vectors can easily be extended to three dimensions as follows.

```
function rand_sphere()
    float x = 0.0, y = 0.0, z = 0.0
    while (True)
        x = randf()
        y = randf()
        z = randf()
        if ((x*x + y*y + z*z) <= 1.0)
            return (x, y, z)
        end
    end
end
```

Like we did before, if we normalize this vector then we can produce an unbiased unit 3D vector.

In closing

A few relatively simple techniques to generate more interesting random numbers have been presented. Where and how they are applied is still firmly where the artistic element of procedural generation lies. Like an artist, however, it's always good to have more brushes to paint with.
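Gathering the recipes above into runnable Python (the Gaussian variant, which the article describes but does not list code for, is sketched here under the usual central-limit assumption of averaging several uniform rounds):

```python
import math
import random

def randf():
    """Uniform float in (-1.0, 1.0], mirroring the article's pseudocode."""
    return 1.0 - float(random.getrandbits(32) % 4096) / 2048.0

def rand_triangle():
    """Triangular distribution: average of two uniforms, peaked at 0."""
    return (randf() + randf()) / 2.0

def rand_pinch():
    """'Pinch' distribution: strongly concentrated near 0."""
    return randf() * abs(randf())

def rand_gaussian(rounds=4):
    """Approximate a Gaussian by averaging several uniform rounds
    (central limit theorem); more rounds, better bell shape."""
    return sum(randf() for _ in range(rounds)) / rounds

def rand_circle():
    """Rejection-sample a point uniformly inside the unit circle."""
    while True:
        x, y = randf(), randf()
        if x * x + y * y <= 1.0:
            return (x, y)

def rand_unit_vector():
    """Unbiased unit 2D vector: sample in the circle, then normalise."""
    while True:
        x, y = rand_circle()
        length = math.hypot(x, y)
        if length > 1e-9:  # reject the (rare) near-zero sample
            return (x / length, y / length)
```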
Voxel Models
By Ex Utumno

Voxel models generated with the wave function collapse algorithm
n/WaveFunctionCollapse

Challenges and Experiences in Generalized Level and Content Generation
By Afshin Mobramaein, UC Santa Cruz

In the world of procedural content generation (PCG) there have been a myriad of systems and algorithms that create new and interesting content such as levels, assets and even complete games. But one of the limitations of these generators is that of domain specificity. Level generators, for instance, focus either on one very specific type of game (e.g. Super Mario Bros) or on assets (terrains, trees, textures). Recently there has been increased interest in generalized content generation; an example of this is the GVGAI [1] competition on level generation. Removing the constraints of domain specificity might lead to a new series of generators that can create content adapting to innovative ideas of gameplay.

"Since AGD systems create the context for level and asset generation, generators should adapt to create compelling content regardless of whatever context is thrown at them."

One field in which generalized content generation might come in handy is that of automated game design (AGD), with systems capable of creating new game rules and mechanics. Building larger-scale AGD systems implies going beyond rules and mechanics into the generation of gameplay spaces and assets that "make sense" with the new types of game rules and mechanics that are being generated. Since AGD systems create the context for level and asset generation, generators should adapt to create compelling content regardless of whatever context is thrown at them. This invariably leads to a series of very interesting research questions and design considerations. The first question to arise is: how do we set up an architecture to create context-independent generators for levels in AGD systems?
Assuming we already have a rule generation mechanism in place, how do we generate levels that best fit the context generated by our rule generator? An initial suggestion would be to integrate an intermediate layer that can associate specific patterns in game rules with a notion of genre (as different types of games yield different level geometry), and from there choose an appropriate level generation mechanism that better suits the understood context from our rule generation process. This notion is explored by Zook and Riedl [2] in their paper "AI as a Game Producer", in which creative direction is given by a producer layer that has knowledge about genre and can lead a series of generative systems to create a game.

But how do we map rules to a notion of genre? From this point, we could imagine using several different approaches. For instance, we could build a ruleset that maps certain elements of game rules (player affordances, goals, camera relationships, world physics) into the game archetype genre that fits best. This can be seen as a starting point, and one that will require a large knowledge engineering effort. On the other hand, we could picture using a data-driven approach in which we cluster different types of games from a standardized corpus (think VGDL or PuzzleScript) to learn a notion of genre, and then apply a classification model to new observations generated by our system.

Another important question that arises is which generative method to use after a type of context has been determined. One strategy that could work is to leverage the wealth of great generative systems that the PCG community has produced, and to choose a generator that best fits the type of game implied by the generated rules. While this sounds like a very viable option, it also implies a standardized knowledge representation of what a game level looks like.
A more modern approach could also involve using deep learning techniques, such as generative models trained on a corpus of video game levels such as the VGLC by Summerville et al. [3]. Finally, an approach such as the one explored by Sorenson and Pasquier [4] in their paper "Towards a Generic Framework for Automated Video Game Level Creation", which relies on breaking levels down into building blocks called "design elements" drawn from different types of games, can be used. A concern here, as with the ruleset approach to genre mapping, is the knowledge engineering needed to encode a large set of different design element families.

"...that relies on breaking down levels into building blocks called "design elements" from different types of games can be used."

The questions above are some of the emerging issues surrounding generalized level and content creation, and there are clearly more questions that are just as interesting, such as those involving evaluating the quality of generated artifacts. But for now, this seems to be a promising area in PCG research, and the future seems to hold some interesting systems being developed, such as the ones that competed in the GVGAI content generation track this year. Hopefully, we will see a large wealth of great generalized level generators in the near future, and with that a myriad of lessons that we can learn from them.

References

[1] Perez-Liebana, D. et al. "General Video Game AI: Competition, Challenges and Opportunities", 2016. In AAAI 2016 Proceedings.
[2] Zook, A. and Riedl, M. "AI as Game Producer", 2013. In CIG (Computational Intelligence in Games) 2013 Proceedings.
[3] Summerville, A. et al. "The VGLC: The Video Game Level Corpus", 2016. In Proceedings of FDIGRA 2016, 7th Workshop on Procedural Content Generation.
[4] Sorenson, N. and Pasquier, P. "Towards a Generic Framework for Automated Video Game Level Creation", 2010. In International Conference on Evolutionary Computation in Games, EvoGame Proceedings.
Lucidity
By Scott Redrup

Watch online at

Who I am

I am Scott Redrup and I have spent the past three years completing a BSc (Hons) degree in Computing and Game Development at Plymouth University. In my final year I completed a substantial project that explored procedural level design, and created a game called Lucidity.

What I did

The project had two components: a research phase, and an implementation stage with a final demonstration. I analysed current work within the fields of procedural generation, level design, procedural level design and game design patterns. This allowed me to identify current issues and propose a solution. My final product is a 3D dungeon crawler with seven different level objectives that demonstrate how game design patterns can be used with procedural generation to create interesting levels.

Research

I had little knowledge about the current work within level design, so I spent a lot of time reading the works of Togelius, Dahlskog and Björk, amongst other notable game AI researchers. Significant time was invested in analysing design patterns, starting with Alexander's pattern language, compared against the Gang of Four's approach to software engineering, and finally Björk's collection of patterns. This led to interesting discussions about how patterns can be identified, classified and categorised. All worthy research projects in their own right.

Implementation

Levels were generated by first creating the basic level structure. Levels feature two heights of elevation, the ground and mountain layers, both of which are flat. Players can access the mountain layer via stairs. The level is built by carving the ground into mountains using a random walk algorithm and then searching for a suitable location to place stairs. The level is then split into a grid of 5 x 5 chunks. Each chunk represents an area of space within the level and contains information for how the area should look, i.e. what scenery is generated and its arrangement.
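The random-walk carving step described above can be sketched like this; the grid values, walk length and clamping behaviour are illustrative assumptions, not Lucidity's actual code:

```python
import random

GROUND, MOUNTAIN = 0, 1

def carve_mountains(size, steps, seed=None):
    """Start with an all-mountain grid and carve ground cells
    with a random walk, clamped to the grid edges."""
    rng = random.Random(seed)
    grid = [[MOUNTAIN] * size for _ in range(size)]
    x, y = size // 2, size // 2
    for _ in range(steps):
        grid[y][x] = GROUND
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), size - 1)
        y = min(max(y + dy, 0), size - 1)
    return grid
```

With far fewer steps than cells, the walk is guaranteed to leave mountain terrain behind, which is where the stair-placement search would then look for transitions.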
The third step is to tailor the basic level in order to create the seven different level types. To create 'Arena Battles' the levels had to be searched for spaces to place an arena, whereas for levels like 'Lots O Enemies', enemy spawn rates had to be increased. Finally, basic gameplay elements had to be added, including a win/loss condition (e.g. being killed), as well as a menu, tutorial screens etc.

Evaluation

Three main issues were encountered with my solution:

○ Code Structure: Procedural projects are really fun and it is easy to make something quickly, without paying attention to the structure of the back end. I fell for this trap, and by the end of the project trying to add or remove features proved impossible.

○ Performance: I relied on the Unity Asset Store to make a game that looked good. With a limited range of free assets available, Lucidity ended up with lots of poorly optimised assets, causing significant lag on most low-to-mid-range machines. Levels with specific spawning requirements such as Arena Battles caused 1-3 seconds of initial lag.

○ Difficulty: I envisioned that levels were to be assessed on how easily the difficulty rating could be adjusted. This would link to procedural enemies that would vary in health, damage, speed etc., set by a difficulty class. Due to the aforementioned code structure issues, as well as a lack of time, I failed to implement the feature.

Discussion and Going Forward

I'd be interested to see my approach applied within a large scale application, as I'm unsure if the levels would remain interesting or become repetitive. I'm also unsure if the approach is optimised to handle large scale generation.

"So ensure that you go into the project knowing what you want to achieve, use version control and commit often!"

I managed to achieve an illusion of variety in Lucidity by using a forest themed environment, and vegetation appeared to hide repeating patterns effectively.
In an urban environment, I predict this would be more noticeable. One thing I recognised with most of the research I carried out was that a lot of the work is still only concerned with simple game genres such as platformers and dungeon crawlers. I'd want to see my work applied to a more complex genre.

Conclusion

To conclude, 80% of this project was spent thinking that I'd make a procedural thing that did nothing, and there were certainly a few moments where I'd hit play in Unity and a small change had messed up the whole generation. So ensure that you go into the project knowing what you want to achieve, use version control and commit often! BUT! Procedural projects are scary, yet definitely give it a go and explore! There is something strangely satisfying about generating unique complex levels at the click of a button!

Ultima Ratio Regum
By Mark Johnson

This year in the ongoing development of Ultima Ratio Regum, a ten-year experimental roguelike project focused on the procedural generation of culture and cultural behaviours, my focus has been almost entirely on people. The world has been notoriously devoid of human life for several years despite the tremendous social, religious and political detail that has gone into the worldbuilding, and it was finally time – with all these foundational elements in place – to change that.

Firstly, what should they look like? I wound up creating an interwoven two-part model of biological and cultural NPC elements. On the biological front, we have a range of variations: different genetic groups have different randomly-selected shapes of eyes, chins, necks, ears, noses, and so forth, alongside different colours for their hair and their eyes. Skin tone of course varies with how close to the equator a particular person's family originally hails from, with an appropriate range of variation from the darkest black to the palest white.
I then combined these with cultural elements, which take two distinct forms: cultural elements that are applied to an NPC's face (the only part of their "body" you can see in-game), and those applied to the items (clothing, weapons, etc.) that a character happens to carry with them. On the faces of NPCs we find a massive range of hairstyles for both women and men which vary with culture, along with sets of distinctive cultural practices: scarification, tattooing, specific kinds of jewellery, turbans, paint markings, and many others.

"...project focused on the procedural generation of culture and cultural behaviours, my focus has been almost entirely on people."

This was then joined by clothing styles, for which I found myself building a rather detailed procedural clothing style generator. Clothing styles can have shirts and trousers, waistcoats and skirts, dresses, or togas, or anything in-between, with additional variation in style and appearance determined by the overall aesthetic preferences of the nation in question for certain shapes, certain colours, and so forth. Styles are distinctive either to entire cultures, or to niche demographics within a culture, such as the religious clergy, or soldiers. Each clothing style then breaks down into multiple tiers, helping the player identify the status of an unknown NPC and adding far greater variation to this part of the game's visuals.

"...so the player must instead rely on their knowledge of that particular generated world in order to draw conclusions..."

This therefore allowed for the interesting intersection of biological and cultural traits, and the ability for the player to play detective. Consider an empire from an equatorial region – the player is used to encountering characters with a dark skin tone wearing a certain set of clothing and jewellery.
At some point, however, the player happens to bump into a pale-skinned character with a different hair colour, who nevertheless possesses the same clothing styles (so biological difference, cultural similarity). Does this person represent a conquered colony? A trader trying to fit in? A slave or servant? Or something else? Nothing of this sort is ever explicitly told to the player, and so the player must instead rely on their knowledge of that particular generated world in order to draw conclusions based on their physical appearance, their clothes and any facial cultural traits, as well as their actions and patterns of speech – which brings us to our latter point: what NPCs actually do.

Developing NPC behaviours means deciding how they spend their day, and how they talk to the player. The NPCs in URR now range from mercenaries to priests, guards to merchants, farmers to inquisitors, and arena fighters to servants and eunuchs. Each NPC class spawns and lives in a different part of the map and has a very different set of rules for their average everyday behaviours – take, for example, this screenshot of priests and worshippers (standard humans, shown with an "h") going about their day.

These highly active NPCs are then married with a pretty unusual speech system. Joining us now in this last part of 2016's URR journey will be Orangejaw Moonblizzard, my profoundly procedurally-generated and facially-tattooed playtesting character, who has travelled with me for over a month now – which is to say, I haven't in this time needed to generate a new world to experiment with, and thereby expunge brave Orangejaw from existence. The goal was to create a speech system where the player could ask a tremendous range of questions without having to resort to programming it as a "chatbot", to create realistic (or at least realistic-ish) human conversations, and to allow the player to uncover large volumes of information about the game world simply by speaking to its inhabitants.
As things stand now, I feel very confident this objective is almost complete. With all of these elements (almost) complete, URR now finds itself replete with a procedurally-generated cast of culturally-detailed characters, ready for the player to discover, watch, talk to, and perhaps find out crucial clues from...

Elision
By Isaac Karth
procedural-generation.tumblr.com

Intending to travel by road to Naissus, Virgil left Ulpiana. It was at least 80 miles. He passes another milestone. Along the road are graves, and a cenotaph. An oxcart passes, loaded with grain. The road narrows here, an orchard wall encroaching on it. There a spring wells up, and around about it is a meadow.

* * *

Intending to travel by road to Naissus, Virgil left Bononia (Moesia). It was at least 76 miles. A cloud passes in front of the sun. As they go up from Bononia (Moesia), they see the ruined walls. A grove of Minerva is hard by the road, a grove of poplar trees. The sun beats down. Now the road is quieter. Not far from the road is a grave, on which is mounted a soldier standing by a horse. Who it is I do not know, but both horse and soldier were carved by Praxiteles. Workers are raising the level of the road. This is a smooth road, by which many wagons were bringing wood to Naissus.

* * *

From Ancona to Iader is a journey of about 107 miles when travelling by ship down the coast. Out of the clouds bursts fire fast upon fire. Dubious days of blind darkness we wander on the deep, nights without a star. Then comes the creak of cables and the cries of seamen. Frequent flashes light the lurid air. All nature, big with instant ruin, frowned destruction. The oars are snapped. Piteous to see, it dashes on shoals and girdles with a sandbank. The helmsman is dashed away and rolled forward headlong.

"Out of the clouds bursts fire fast upon fire. Dubious days of blind darkness we wander on the deep, nights without a star."
Then was land at last seen to rise, discovering distant hills and sending up wreaths of smoke.

"The generator picks a fraction of phrases from the list and joins them together."

Within a long recess there lies a bay: an island shades it from the rolling sea and forms a port secure for ships to ride. Two towering crags, twin giants, guard the cove, and threat the skies. Betwixt two rows of rocks a sylvan scene appears above, and groves for ever green. Beneath a precipice that fronts the wave, with limpid springs inside, and many a seat of living marble, lies a sheltered cave. Ships within this happy harbor meet, the thin remainders of the scattered fleet. They lay their weary limbs still dripping on the sand.

* * *

For Virgil's Commonplace Book, which I generated for National Novel Generation Month 2015, I made use of elision, a literary trick I learned from Nick Montfort's 1K Story Generators. Each kind of connection has a list of evocative sentences describing the journey. The generator picks a fraction of phrases from the list and joins them together. Many of the phrases are atmospheric and imply relationships while not relying on the existence of any of the other phrases. The reader fills in the gaps left by the missing phrases. Additionally, this technique let me borrow many of the phrases from Roman travel literature or Virgil's own poetry, lending another layer of structure, allusion, and meaning.

Is Self Similarity Too Similar?
By Mark Bennett

A question: how does procedurally generated terrain compare to the real thing? Having had opportunities to travel and see a variety of landscapes, my conclusion is: not very well. Real terrain is very varied and often has distinctive features, which is what can make a particular terrain striking to the eye.
I have decided to use Outerra to compare real terrains with their corresponding fractal terrains, as Outerra uses heightmaps of real terrain (Earth) at a specific resolution (30 m) and interpolates between height points using fractal methods (described in more detail here), therefore allowing a direct comparison. This article is not a critique of Outerra, which is an outstanding piece of software. I have also performed some experiments using the midpoint displacement algorithm and shown the results below.

Some Examples Of Real World Terrains

Comparison of Real vs. Procedural Terrain

Below are images of real terrains, compared with their fractalised versions using Outerra from as close a viewpoint as possible. Some of the character of the original terrains is lost when fractal methods are used to interpolate between height points.

Jungfrau (Switzerland)
Jungfrau (Outerra)

The Jungfrau loses the beautiful Silberhorn to the right of the main summit.

Monument Valley
Monument Valley (Outerra)

The distinctive towers which make Monument Valley in the USA such a big tourist attraction disappear when fractalized, as can be seen above.

Prabella Mountain
Prabella Mountain (Outerra)

Prabella mountain in Russia has a very distinctive central ridge from which it gets its name. This also does not survive the fractal sledgehammer.

El Capitan
El Capitan (Outerra)

El Capitan, in the Yosemite Valley in California, gains a series of jagged peaks on its summit which are not present on the original; by contrast, the huge cliff face loses all of its features.

Experiments

I implemented the simplest possible fractal algorithm, the Midpoint Displacement Algorithm, and set about tweaking some of its parameters. All these experiments use the same seed for the random number generator. The Python code is available at:

Standard Midpoint Displacement

The output of the unadorned algorithm.
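For readers who want to experiment without the linked code, here is a compact 1D midpoint-displacement sketch; halving the amplitude each pass is one common choice, not necessarily the article's exact parameters:

```python
import random

def midpoint_displacement(iterations, roughness=1.0, seed=None):
    """1D midpoint displacement: start with two flat endpoints and
    repeatedly displace each segment's midpoint by a shrinking
    random amount."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]
    amplitude = roughness
    for _ in range(iterations):
        new_heights = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2.0 + rng.uniform(-amplitude, amplitude)
            new_heights += [a, mid]
        new_heights.append(heights[-1])
        heights = new_heights
        amplitude /= 2.0  # halving each pass fixes the lacunarity
    return heights
```

After k iterations this yields 2^k + 1 height samples; the experiments below amount to perturbing the midpoint position, the halving schedule, or the displacement amplitude in this loop.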
Random Lacunarity

Here the lacunarity has been randomised (using a separate RNG) by only allowing a 50% chance of displacement at each iteration.

Random Midpoint Placement

Here the position of the midpoint has been randomised (separate RNG again). This allows cliffs and scale variations to appear.

Random (Triangular) Midpoint Placement

Randomised midpoint again, but using the triangular random function to get more consistent output.

Random Midpoint and Random Lacunarity

Both lacunarity and midpoint randomised.

Some additional experiments were performed using simple functions to modulate the amount of random displacement. Again, the same random seed is used each time. The output of the function is normalised to as close to 0 and 1 as possible and clamped so as not to go below 0. The output is on the left, the function shown below and the graph of the function shown on the right. Some of the functions produce output which exceeds the upper scale of the graph; however, the full value is used to modulate the RNG, which sometimes exceeds one, giving heights above the given height parameter. Also, some of the finer details may be lost due to resizing the images.

sin(x)
sin(x/2-2)*8
sin(x/2+6)*10
sin(x/2+7)*10
sin(x)*10
sin(x+3)*50
sin(x+6)*30
log(x)
log(x*3)

Conclusions

This seems to demonstrate that varying parameters can have a significant effect on the output and give greater variety to terrains. The greatest effect is made by randomising the midpoint, though this process needs more control. Using functions as parameters allows a great deal of control and gives a good variation in output, although a lot of number tweaking is required. It is not yet clear how these methods will work with other PCG algorithms such as Perlin noise.

Further Work

Future experiments will look at bringing more control to the randomised midpoint, perhaps by using functions to modulate the output of the RNG. It is also important to adapt these approaches to 3D.
The functions used can be applied in 3D, as seen in the graph below, although it is less clear how that can be applied to a 3D terrain. Possibly the fractal terrain could be summed with the function, or each x, z point of the function could be applied to a fractal parameter of the terrain at that point. It would also be useful to be able to smoothly transition between one function and another over the width, depth and height dimensions. This might be possible by having multiple terms and varying the constants used; when a term is multiplied by a constant of zero it is effectively removed from the equation. The constants themselves could be varied by a fractal algorithm. It would also be useful to try to adapt these methods to other PCG algorithms.

Hummingbird
By Niall Moody

These are pages from the zine I made for my game Hummingbird. Hummingbird is an infinite procedural musical exploration game. Built around a complex set of synthesizers and procedural behaviours and colour palettes, the world you encounter will be different each time you play.

I made the zine by hand-writing/drawing hundreds of sentences generated by the game's text generators (which are a combination of custom grammars, and rudimentary Markov chain generators whose inputs are Wassily Kandinsky's Concerning the Spiritual in Art, Walt Whitman's Leaves of Grass, and Robert Kirk's A Secret Commonwealth).

To download this cut-and-play please check the PDF zine at Procjam.com

This is the first issue of the ProcJam Zine!
Published on Oct 24, 2016
Coding Custom Unit Tests Using the Unit Testing Framework

To help provide a clearer overview of the Unit Testing Framework, this section organizes the elements of the UnitTesting namespace into groups of related functionality.

Elements Used for Data-Driven Testing

Use the following elements to set up data-driven unit tests. For more information, see How to: Create a Data-Driven Unit Test and Walkthrough: Using a Configuration File to Define a Data Source.

A code element decorated with one of the following attributes is called at the moment you specify. For more information, see Anatomy of a Unit Test.

Every test class must have the TestClass attribute, and every test method must have the TestMethod attribute. For more information, see Anatomy of a Unit Test.

Unit tests can verify specific application behavior by their use of various kinds of Assert statements, exceptions, and attributes. For more information, see Using the Assert Classes.

Some of these attributes appear as columns in the Test Manager window and Test Results window, which means that you could use one to store an indicator of the kind of test it is: [TestProperty("TestKind", "Localization")]. The property you create by using this attribute, and the property value you assign, are both displayed in the Visual Studio Properties window under the heading Test specific.

The attributes in this section relate the test method that they decorate to entities in the project hierarchy of a Team Foundation Server team project.

As described in Using Publicize to Create a Private Accessor.
The diff and patch utilities can be intimidating to the newcomer, but they are not all that difficult to use, even for the non-programmer. If you are at all familiar with makefiles, you might find yourself frequently wanting to patch a file, either to correct an error that you've found or to add something that you need to the makefile.

After I began using the mrxvt terminal, I wanted to give it Japanese capability. My main O/S is FreeBSD. It manages packages with its ports and package system. To install a package from a port, one uses the port's Makefile, which will download and compile the source code, in a manner familiar to those who use Gentoo's portage (which was inspired by FreeBSD ports) or ArchLinux's makepkg. In this case, I wanted to edit the port's Makefile to enable Japanese support. To do this, I simply had to add a line to the Makefile.

CONFIGURE_ARGS+= --enable-xim --enable-cjk --with-encoding=eucj

Ok, this is simple. However, after doing this, I thought that perhaps I should submit a patch to the port's maintainer, giving others the opportunity to include Japanese support. This was a little more complicated, because the change to the Makefile meant that I should include a message when they installed the port, telling them what to do if they wished to include Japanese capability. So, I had to add the following lines, in their proper place in the port's Makefile.

.if defined(WITH_JAPANESE)
CONFIGURE_ARGS+= --enable-xim --enable-cjk --with-encoding=eucj
.endif # WITH_JAPANESE

pre-everything::
	@${ECHO_MSG} "=========================================>"
	@${ECHO_MSG} "For Japanese support use make -DWITH_JAPANESE install"
	@${ECHO_MSG} "=========================================>"

To create the patch, I ran

diff -uN Makefile Makefile.new > patch.Makefile

The diff command has various flags.
Simply doing a diff between two files shows something like (I'm just showing a few lines here)

5c5
< # $FreeBSD: /repoman/r/pcvs/ports/x11/mrxvt/Makefile,v 1.7 2005/07/22 22:38:58 pav Exp $
---
> # $FreeBSD: ports/x11/mrxvt/Makefile,v 1.7 2005/07/22 22:38:58 pav Exp $
22a23,26
> .if defined(WITH_JAPANESE)
> CONFIGURE_ARGS+= --enable-xim --enable-cjk --with-encoding=eucj
> .endif # WITH_JAPANESE
>
25a30,37

The 5c5 means that there is a difference in the 5th line. The c means something would have to be changed for them to match. The 25a30,37 means that text would be added at line 25. In this case, we don't have it, but there is also use of the letter d for text to be deleted. This is a bit hard to read, especially if there are many differences. Therefore, most people prefer unified diffs, diff with the -u flag. This gives us something like (again, with many lines snipped)

--- Makefile.orig Sat Sep 10 17:16:53 2005
+++ Makefile Fri Sep 16 03:13:52 2005
@@ -20,9 +20,21 @@
 USE_X_PREFIX= yes
 GNU_CONFIGURE= yes
 USE_REINPLACE= yes
+.if defined(WITH_JAPANESE)
+CONFIGURE_ARGS+= --enable-xim --enable-cjk --with-encoding=eucj
+.endif # WITH_JAPANESE

This shows a few lines before and after the change, which helps define context. (I've snipped the lines below this change, but you can see that three lines above it are included.) Let's examine this a bit. The first lines are fairly straightforward: they have --- and the old file's name, then +++ and the name of the new file. Each also contains the ctime (the time the file was last modified.)

Next is what is known as the hunk. This line will start with @@, then have the old file's starting line, the old number of lines, the new start and the new number of lines, then another @@. Understand that the three lines above and below the change remain as they are. The 3 lines are simply to give context. In this case, including that context, the change starts at line 20. Lines 20-23 will remain unchanged.
Including the 3 lines above and below the differences, the change will go for 9 lines. So, we are changing 9 lines, starting from line 20 (which will include 3 lines above and 3 lines below the actual change). Therefore, this is shown with a minus sign. Following that is the plus sign. The first number, 20, is the first line of the new file, and the change, including the 3 lines above and below, will continue for 21 lines. Note that I have not shown the entire patch, and also some of those lines may simply be blank lines. So, the hunk starts with

@@ -20,9 +20,21 @@

Next comes the actual patch itself, the 3 lines of context and the change. Note that in the patch, there is a space before the 3 lines of context, and then the lines below have a plus sign. A space before a line means that nothing will be changed. A plus sign means the line will be added. If there had been lines to be deleted, they would have had a minus sign in front of them.

Let's create two files to make this a little clearer. Using your favorite text editor, create patchtest.txt and patchtest1.txt. The patchtest.txt will read

This is a file.
These first three lines are lines of context.
They will remain unchanged. They will have
spaces in front of them.
Here are the lines that will be changed. They will begin with
minus signs, because they are being deleted.
Now, we will add three more lines
that are only context. They will
have spaces in the patch

Now, patchtest1.txt

This is a file.
These first three lines are lines of context.
They will remain unchanged. They will have
spaces in front of them.
These lines have been changed. They will have plus
signs in front of them.
Now, we will add three more lines
that are only context. They will
have spaces in the patch
This is yet another line that is different.

Create the patch.
diff -uN patchtest.txt patchtest1.txt > patch.txt

View the patch:

less patch.txt

You will see

--- patchtest Sun Feb 26 19:35:43 2006
+++ patchtest1 Sun Feb 26 19:35:14 2006
@@ -2,8 +2,9 @@
 These first three lines are lines of context.
 They will remain unchanged. They will have
 spaces in front of them.
-Here are the lines that will be changed. They will begin with
-minus signs, because they are being deleted.
+These lines have been changed. They will have plus
+signs in front of them.
 Now, we will add three more lines
 that are only context. They will
 have spaces in the patch
+This is yet another line that is different.

You can see the first line, This is a file, wasn't included in the patch--that's because it was outside of the three lines of context. Now that we've made our patch, we can apply it.

patch patchtest.txt < patch.txt

You will see

Hmm... Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- patchtest Sun Feb 26 19:35:43 2006
|+++ patchtest1 Sun Feb 26 19:35:14 2006
--------------------------
Patching file patchtest using Plan A...
Hunk #1 succeeded at 2.
done

Now, patchtest.txt has been patched. If you now do a diff between patchtest and patchtest1, you'll just be put back at your command prompt, showing that there are no differences.

This is the simplest form of creating diffs and using patches. Sometimes, you patch an entire directory--those who compile their own kernels may have done this. Rather than downloading an entire new tarball of the new kernel, there are often patches, especially of minor revision numbers. The README in /usr/src/linux has instructions for using these patches. When you are applying a patch to an entire directory tree, you may need to use the -p1 option. The p[number] basically helps determine the path to the file or files being patched. See man(1) patch for details and examples.
For instance, if you had a patch for the entire Linux kernel source tree, and were in /usr/src, you might do

patch -p1 < mylinuxsource.patch

As this varies, depending not only upon your location when applying the patch, but also upon what is in the patch, it's best to see the man page. However, just keep in mind that if trying to apply a patch that covers several files in a directory doesn't work, it may be the p[number] that is causing the difficulty.

Although this becomes more complex when making patches consisting of many hunks, or patching many files in a directory (such as the Linux kernel source tree), the basic concept is the same. It is hoped that this article gives the reader a better understanding of diff and patch, and will help them to read and understand patches. This can be very handy--sometimes, a patch has something you don't want, so it's always good to look at it before applying it.

Patches can also be reversed with the -R flag. Suppose you try an experimental patch and it breaks something. You can then patch the file again with the -R flag. Take our patchtest and patchtest1. Let's run patch again with the -R flag.

patch -R patchtest.txt < patch.txt

Again, you'll see the Hmm... Looks like a unified diff to me message and a message that it succeeded. Actually, if you forget the -R flag, patch often catches it. Patch the file one more time with

patch patchtest.txt < patch.txt

and it should succeed. Once again, patchtest and patchtest1 are identical. Now, try it again, without the -R flag. You'll see a message

Hmm... Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- patchtest Sun Feb 26 19:35:43 2006
|+++ patchtest1 Sun Feb 26 19:35:14 2006
--------------------------
Patching file patchtest using Plan A...
Reversed (or previously applied) patch detected! Assume -R? [y]

If you type y then you should once again see that it succeeded.
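As an aside for Python users (this example is mine, not from the article): the same unified-diff format discussed above can be produced programmatically with the standard library's difflib module, which is handy when a script needs to generate a patch.

```python
import difflib

old = ["line one\n", "line two\n", "line three\n"]
new = ["line one\n", "line 2\n", "line three\n"]

# unified_diff yields the same ---/+++/@@ structure that
# `diff -u` prints, including context lines around each change.
diff = list(difflib.unified_diff(old, new,
                                 fromfile="patchtest.txt",
                                 tofile="patchtest1.txt"))
print("".join(diff))
```

The result can be written to a file and applied with patch exactly like the output of diff -u.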
If anyone is interested, my patch for mrxvt was accepted, and the port is now available with the option to enable Japanese.

i am trying to merge 2 files using patch/diff :

file1.txt
--------
hello my name is ABC

file2.txt
---------
hello my name is DEF

and i want the output :
----------------------
hello my name is ABCDEF

is this possible ??
If you prefer the video walkthrough of this post, check it out here or below.

Step 1: First, create a directory named repo_stats for holding project files. Open up a terminal and cd into repo_stats.

$ mkdir repo_stats
$ cd repo_stats

Step 2: Install virtualenv

Zappa requires the use of a Python virtual environment, and using virtual environments is a good practice in general. Virtual environments separate dependencies between projects, so those dependencies need not be maintained in the system Python installation. This way, separate projects can depend on separate versions of the same dependencies. You can read more about virtual environments here. To make sure you have virtual environments installed, type the following command in your terminal:

$ pip install virtualenv

Step 3: Create a virtual environment

After that command completes, create a virtual environment with:

$ virtualenv venv

A directory named venv now exists inside repo_stats. Next, activate the virtual environment with:

$ source venv/bin/activate (Unix)
$ venv\Scripts\activate (Windows)

Your command prompt should reflect your virtual environment being active by having the name of the virtual environment (venv) to the left of the prompt, like this:

(venv) jwheeler:repo_stats$

The virtual environment can be deactivated at any time by typing deactivate or closing the terminal, but for now, keep it active.

Step 4: Install dependencies

Lastly for this step, install all the project dependencies into the virtual environment by typing:

$ pip install flask-ask zappa requests awscli

The preceding command installs Flask-Ask, Zappa, and two additional dependencies used in this tutorial. Requests is a Python module for making HTTP requests, and the AWS Command Line Interface (CLI) is for generating the AWS configuration Zappa uses to deploy to AWS Lambda. Before generating the AWS configuration with AWS CLI, let's create the skill and a user in the AWS console.
We are now ready to write the Lambda function for our skill using Flask-Ask. We'll be using the GitHub REST API (v3) to get statistics for the Flask-Ask repository. Please note that since this is a demonstration, the code is intentionally simple. The skill we'll build is great for personal use but not general enough for a public release, so do not submit it for certification.

Step 5: Open up a code editor, and create a file named repo_stats.py inside the directory repo_stats. Type or copy-and-paste the code below:

import logging
from operator import itemgetter

import requests
from flask import Flask
from flask_ask import Ask, statement

REPOSITORY = 'johnwheeler/flask-ask'
ENDPOINT = 'https://api.github.com/repos/{}'.format(REPOSITORY)

app = Flask(__name__)
ask = Ask(app, '/')
logger = logging.getLogger()


@ask.launch
def launch():
    return stats()


@ask.intent("StatsIntent")
def stats():
    r = requests.get(ENDPOINT)
    repo_json = r.json()
    if r.status_code == 200:
        repo_name = ENDPOINT.split('/')[-1]
        keys = ['stargazers_count', 'subscribers_count', 'forks_count']
        stars, watchers, forks = itemgetter(*keys)(repo_json)
        speech = "{} has {} stars, {} watchers, and {} forks. " \
            .format(repo_name, stars, watchers, forks)
    else:
        message = repo_json['message']
        speech = "There was a problem calling the GitHub API: {}.".format(message)
    logger.info('speech = {}'.format(speech))
    return statement(speech)

In the stats method, the Requests library invokes the GitHub API. The response returned is a JSON structure with details about the repository, including its statistics. Before collecting the statistics, we make sure the response status is 200 (i.e. successful). Then, a message with the statistics is returned as an Alexa response. If the API call isn't successful, an error message is returned instead.

Now that the code is complete, let's get ready to deploy to AWS Lambda using Zappa. The first step in preparing for deployment is creating an AWS IAM user. Creating this user is mostly a point-and-click operation.
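Before moving on to the AWS setup, one detail in the code above is worth pausing on: itemgetter(*keys)(repo_json) pulls several values out of a dict in one call, in key order. A standalone illustration (the sample dict here is made up):

```python
from operator import itemgetter

repo = {'stargazers_count': 10, 'subscribers_count': 4, 'forks_count': 2}
keys = ['stargazers_count', 'subscribers_count', 'forks_count']

# itemgetter(*keys) builds a callable that returns a tuple of the
# values for those keys, in the order the keys were given.
stars, watchers, forks = itemgetter(*keys)(repo)
print(stars, watchers, forks)  # → 10 4 2
```

This is equivalent to three separate dict lookups, but keeps the key list and the unpacking in one place.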
First, you'll need to have an AWS account. If you don't have one, head over to https://aws.amazon.com, click "Create an AWS Account", and follow the instructions to create your account. Once your account is created:

1. Open the IAM Console.
2. In the navigation pane, choose Users.
3. Click the Add User button.
4. Name the user zappa-deploy, choose Programmatic access for the Access type, then click the "Next: Permissions" button.
5. On the permissions page, click the Attach existing policies directly option.
6. A large list of policies is displayed. Locate the AdministratorAccess policy, select its checkbox, then click the "Next: Review" button.
7. Finally, review the information that displays, and click the Create User button.
8. Once the user is created, its Access key ID and Secret access key are displayed (click the Show link next to the Secret access key to unmask it).
9. Copy and paste these credentials into a safe location for later reference. You will need these in Deployment Step 2. Also, treat these with the same care as you do your AWS username and password, because they have the same privileges.

The IAM user for Zappa has been created. Back in your terminal, make sure you're inside the directory repo_stats, then type:

$ aws configure

Follow the prompts to input your Access key ID and Secret access key. For Default region name, type: us-east-1. For Default output format, accept the default by hitting the Enter key.

The aws configure command installs credentials and configuration in an .aws directory inside your home directory. Zappa knows how to use this configuration to create the AWS resources it needs to deploy Flask-Ask skills to Lambda. We're now ready to deploy our skill with Zappa. In the terminal, create a zappa configuration file by typing:

$ zappa init

The initialization process is interactive. Accept the defaults to all the questions by hitting Enter for each of them.
Once initialization is complete, deploy the skill by typing:

$ zappa deploy dev

The initial deployment process can take a few minutes while Zappa creates API gateways and bundles and uploads the code and dependencies. Releasing code updates doesn't recreate the API gateways and is a bit faster. Such updates are handled through a separate command:

$ zappa update dev

After deploying or updating, Zappa outputs the URL your skill is hosted at. The URL looks similar to this one: https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev

With the code deployed and URL available, we're ready to configure the skill in the Alexa developer console and test it. Like creating the IAM user above, configuring the Alexa skill is mostly a point-and-click operation done through a web console. Alexa has its own developer console separate from AWS. Working with the Alexa console also requires its own separate developer account. First, go to https://developer.amazon.com and click Sign In. If you've never signed in, you'll be prompted to fill out a profile and registration details. Otherwise:

1. Go to the list of Alexa skills.
2. Click the Add a New Skill button.
3. The skill configuration screen opens.

Skill configuration is broken into multiple sections. Fill out each section using the instructions below.

On the Skill Information section: Set the Name of the skill to Repo Stats and use an Invocation Name of repo stats. Accept the defaults for the rest of the section, and click the Next button.

On the Interaction Model section: Paste in the following Intent Schema:

{
  "intents": [
    {
      "intent": "StatsIntent"
    }
  ]
}

Then add these Sample Utterances:

StatsIntent what my stats are
StatsIntent give me my stats
StatsIntent about my stats

Click the Next button.

On the Configuration section: Select HTTPS as the Service Endpoint Type. You might have noticed the other option, AWS Lambda ARN, seems more appropriate, but since Zappa uses API gateways for WSGI emulation, we're going to select HTTPS here.
Select the checkbox for the geographical region that is closest to your customers and enter the URL Zappa output during the deploy step in the text field. Click the Next button.

On the SSL Certificate section: Select the option that reads:

My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority

Click the Next button.

The skill is ready to test. Using an Echo device or the Service Simulator available in the Test section, tell Alexa to "launch repo stats". Alexa should respond by telling you how many stars, watchers, and forks the repository has. Your Alexa skill has now been deployed to AWS Lambda using Zappa.

To tail Amazon CloudWatch logs, type:

$ zappa tail dev

You can limit the output returned and eliminate the HTTP noise in the logs by using the --since 1m and --non-http command options:

$ zappa tail dev --since 1m --non-http

If you want to remove the AWS Lambda function, API gateway, and CloudWatch logs, type:

$ zappa undeploy dev

After undeploying, updates no longer work and the HTTPS endpoint in the Alexa console is invalid. You'll have to issue another zappa deploy dev to recreate the Lambda function and gateway. Also, undeploying does not delete the S3 bucket the code is stored in; you must delete it in the AWS console manually.

Zappa isn't just for Flask-Ask. It can be used to deploy to AWS Lambda with Flask itself, Django, and other popular Python web frameworks that conform to WSGI. In addition, using Zappa makes code more portable; for example, the same Python code running on Lambda can run on Amazon EC2 without modification. For complete Zappa documentation, check out Zappa's extensive README file on GitHub.

That wraps up this tutorial. Congratulations for making it through! You've taught yourself how to deploy Flask-Ask and how to use Zappa, which are technical skills that are useful together and independently of one another.
While Zappa is fantastic for deploying to production environments, I wouldn't recommend it for local development and testing. To master that development scenario, head over to the first post where we rapidly create an Alexa skill using Flask-Ask and ngrok. Happy coding!
The Mapplets API adds to this event model by defining custom events for Mapplet objects. It is important to note that the Mapplets API events are separate and distinct from the standard DOM events. However, as different browsers implement different DOM event models, the Mapplets API also provides mechanisms to listen for and respond to these DOM events without needing to handle the various cross-browser peculiarities.

Events in the Mapplets API are handled by using utility functions within the GEvent namespace to register event listeners. Each Mapplets API object exports a number of named events. For example, the GMap2 object exports click, dblclick, and move events, and a host of others as well. Each event happens within a given context, and can pass arguments that identify that context. For example, the dragstart event fires when the user clicks and holds down the mouse button to begin dragging the map. The following example adds a marker where the user clicks, or removes an existing marker when it is clicked:

var map = new GMap2();
GEvent.addListener(map, "click", function(marker, point) {
  if (marker) {
    map.removeOverlay(marker);
  } else {
    map.addOverlay(new GMarker(point));
  }
});

Run this example (event-simple.xml)

Listeners are also able to capture the context of the event. In the following example code, we display the latitude and longitude of the center of the map after the user drags the map.

var map = new GMap2();
GEvent.addListener(map, "moveend", function() {
  map.getCenterAsync(function(center) {
    map.openInfoWindowHtml(center, center.toString());
  });
});
map.setCenter(new GLatLng(37.4419, -122.1419), 13);

Run this example (event-context.xml)

The following example assigns sequential number values to a set of markers. Clicking on each marker will display a different value, which is not contained within the marker itself.
var map = new GMap2();

// Creates a marker at the given point with the given number label
function createMarker(point, number) {
  var marker = new GMarker(point);
  GEvent.addListener(marker, "click", function() {
    marker.openInfoWindowHtml("Marker #" + number);
  });
  return marker;
}

// Add 10 markers to the map at random locations
map.getBoundsAsync(function(bounds) {
  var southWest = bounds.getSouthWest();
  var northEast = bounds.getNorthEast();
  var lngSpan = northEast.lng() - southWest.lng();
  var latSpan = northEast.lat() - southWest.lat();
  for (var i = 0; i < 10; i++) {
    var point = new GLatLng(southWest.lat() + latSpan * Math.random(),
                            southWest.lng() + lngSpan * Math.random());
    map.addOverlay(createMarker(point, i + 1));
  }
});

Run this example (event-closure.xml)

Many events in the Mapplets API event system pass arguments when the event is triggered. For example, the GMap2 "click" event passes two arguments (overlay, point) in the event. You can access these arguments by passing the specified symbols directly to the functions within the event listeners. For example, the "click" event on a map passes an overlay and point argument. We pass these arguments directly within the callback function to invoke within the event listener. Notice that we need to make two asynchronous requests in this example to retrieve the pixel coordinates and current zoom level. We use GAsync() to process these requests in parallel before we initiate the callback method based on those return values.

var map = new GMap2();
map.setCenter(new GLatLng(37.4419, -122.1419), 13);
GEvent.addListener(map, "click", function(overlay, point) {
  if (point) {
    GAsync(map, 'fromLatLngToDivPixel', [point], 'getZoom',
           function(divPixel, zoom) {
      var myHtml = "The GPoint value is: " + divPixel +
                   " at zoom level " + zoom;
      map.openInfoWindow(point, myHtml);
    });
  }
});

Run this example (event-arguments.xml)

The Mapplets API event model creates and manages its own custom events. However, the DOM also creates and dispatches its own events, according to the particular browser event model in use. If you wish to capture and respond to these events, the Mapplets API provides browser-neutral wrappers to listen and bind DOM events, without the need for custom code. Note that a mapplet can only listen to DOM events within its own <iframe>.
Removing listeners that are no longer needed is especially important in Mapplets, as the listeners introduce extra communication overhead.

function MyApplication() {
  this.counter = 0;
  this.map = new GMap2();
  this.map.setCenter(new GLatLng(37.4419, -122.1419), 13);
  var myEventListener = GEvent.bind(this.map, "click", this,
      function(marker, point) {
    if (this.counter == 0) {
      if (point) {
        this.map.addOverlay(new GMarker(point));
        this.counter++;
      } else {
        this.map.removeOverlay(marker);
      }
    } else {
      GEvent.removeListener(myEventListener);
    }
  });
}

var application = new MyApplication();

Run this example (event-removal.xml)

Continue on to Mapplet Overlays.
Bayesian Updates and the Dice Problem

An interesting application of our Bayesian approach: the author of Think Bayes introduces a toy problem where we've got a bag of D&D dice, randomly select a die, and roll it. From here, we want to know the probability that the die was a d4, d6, ..., d20. What makes this problem novel/interesting is how repeated rolls of the selected die allow us to update our estimation.

Rolled a 6

In the simple case, imagine that we pulled out a die and rolled a 6. Let's construct our table as before.

from IPython.display import Image
Image('images/die_6.PNG')

The hypothesis is simply all of the "d N" values, and our prior, P(Hyp), is simply a uniform distribution of 1/n where n is the number of different possible dice. Then, filling in the values for our likelihoods, P(Data | Hyp), is a matter of setting to 0 if the die can't attain the number (as with the d4), or 1/faces, per die. The last two columns are our usual plug-and-chug. Note that our most likely candidate is the d6 at this point.

Rolling again, 8

Imagine now that we re-roll that die and get an 8. Not only does this rule out the possibility of it being a d6, it also reinforces whatever posterior distribution we've formed to this point. What the author shows computationally, but doesn't belabor, is that we essentially redo our table method, but crucially, using the posterior distribution of having rolled a 6 as our prior.

Image('images/dice_6_then_8.PNG')

It shouldn't be a shocking characteristic, but perhaps merits mentioning, that if we'd instead rolled an 8 and then a 6, we'd have the same posterior distribution for "having rolled a 6 and an 8", regardless of the order.

Image('images/dice_8_then_6.PNG')

This is simply due to the commutative property that we get through independence of the results.

Rolling again, 13

Continuing, we re-roll the die a third time and it shows 13.
As you’d expect, this excludes d4, d6, d8, d10 and basically normalizes the probability of being d12 or d20 to sum to 1. Image('images/dice_8_then_6_then_12.PNG') Object Oriented By now, the “Table method” should be pretty straight-forward. The only trickiness lies in appropriate coding of the Likelihood column. To facilitate ease of computation and cut back on code reuse, the author builds out a Suite object to basically hold the contents of these tables. However, when he does it, he kind of dumps a bunch of code, scattered out over 15 or so pages– and never together. Reading examples is a mess without having the whole object in front of you (lots of references to object attributes, but without the self., and vice-versa). Basically, the idea is: - Instantiate a class with your priors - Code up the Likelihood function Then the class gives you access to a useful .Normalize() method, which will scale your prior probabilities such that they add up to 1. Then we get access to an .Update() method, which takes in observed “Data” Like in the third column of our Table Method, the class looks up the Likelihood of seeing our Data, given our Hypothesis. But then under the hood, it uses these values to multiply by our Priors, and then call .Normalize() once more, essentially doing all of the last 3 columns in one shot. Once you get the hang of the design (and upgrade from Python 2…), the value of building out the Suite object becomes obvious when calling such code from dicebag import DiceBag bag = DiceBag() for roll in [6, 8, 7, 7, 5, 4]: bag.Update(roll) and all of the posteriors are automatically carried forward as the next call’s priors, or can be output after all of your rolls are complete to get your final conjoint posterior probabilities.
Tkinter ("Tk Interface") is python's standard cross-platform package for creating graphical user interfaces (GUIs). It provides access to an underlying Tcl interpreter with the Tk toolkit, which itself is a cross-platform, multilanguage graphical user interface library.

Tkinter isn't the only GUI library for python, but it is the one that comes standard. Additional GUI libraries that can be used with python include wxPython, PyQt, and kivy. Tkinter's greatest strength is its ubiquity and simplicity. It works out of the box on most platforms (linux, OSX, Windows), and comes complete with a wide range of widgets necessary for most common tasks (buttons, labels, drawing canvas, multiline text, etc). As a learning tool, tkinter has some features that are unique among GUI toolkits, such as named fonts, bind tags, and variable tracing.

Tkinter is largely unchanged between python 2 and python 3, with the major difference being that the tkinter package and modules were renamed.

In python 2.x, the tkinter package is named Tkinter, and related packages have their own names. For example, the following shows a typical set of import statements for python 2.x:

import Tkinter as tk
import tkFileDialog as filedialog
import ttk

Although functionality did not change much between python 2 and 3, the names of all of the tkinter modules have changed. The following is a typical set of import statements for python 3.x:

import tkinter as tk
from tkinter import filedialog
from tkinter import ttk

Let's test our basic knowledge of tkinter by creating the classic "Hello, World!" program. First, we must import tkinter; this will vary based on version (see remarks section about "Differences between Python 2 and 3").

In Python 3 the module tkinter has a lowercase t:

import tkinter as tk

In Python 2 the module Tkinter has an uppercase T:

import Tkinter as tk

Using as tk isn't strictly necessary, but we will use it so the rest of this example will work the same for both versions.
Now that we have the tkinter module imported, we can create the root of our application using the Tk class:

root = tk.Tk()

This will act as the window for our application. (Note that additional windows should be Toplevel instances instead.)

Now that we have a window, let's add text to it with a Label:

label = tk.Label(root, text="Hello World!") # Create a text label
label.pack(padx=20, pady=20)                # Pack it into the window

Once the application is ready, we can start it (enter the main event loop) with the mainloop method:

root.mainloop()

This will open and run the application until it is stopped by the window being closed or by calling exiting functions from callbacks (discussed later) such as root.destroy().

Putting it all together:

import tkinter as tk # Python 3.x Version
#import Tkinter as tk # Python 2.x Version

root = tk.Tk()

label = tk.Label(root, text="Hello World!") # Create a text label
label.pack(padx=20, pady=20)                # Pack it into the window

root.mainloop()

And something like this should pop up:

The same program can also be structured as a class:

import tkinter as tk

class HelloWorld(tk.Frame):
    def __init__(self, parent):
        super(HelloWorld, self).__init__(parent)
        self.label = tk.Label(self, text="Hello, World!")
        self.label.pack(padx=20, pady=20)

if __name__ == "__main__":
    root = tk.Tk()
    main = HelloWorld(root)
    main.pack(fill="both", expand=True)
    root.mainloop()

Note: It's possible to inherit from just about any tkinter widget, including the root window. Inheriting from tkinter.Frame is at least arguably the most flexible in that it supports multiple document interfaces (MDI), single document interfaces (SDI), single page applications, and multiple-page applications.

Tkinter comes pre-installed with the Python installer binaries for Mac OS X and the Windows platform. So if you install Python from the official binaries for Mac OS X or Windows, you are good to go with Tkinter. For Debian versions of Linux you have to install it manually by using the following commands.
For Python 3:

sudo apt-get install python3-tk

For Python 2.7:

sudo apt-get install python-tk

Linux distros with the yum installer can install the tkinter module using the command:

yum install tkinter

Verifying Installation

To verify that you have successfully installed Tkinter, open your Python console and type the following command:

import tkinter as tk # for Python 3 version

or

import Tkinter as tk # for Python 2.x version

You have successfully installed Tkinter if the above command executes without an error. To check the Tkinter version, type the following commands in your Python REPL:

For Python 3.x:

import tkinter as tk
tk._test()

For Python 2.x:

import Tkinter as tk
tk._test()

Note: Importing Tkinter as tk is not required but is good practice, as it helps keep things consistent between versions.
https://riptutorial.com/tkinter
#include <wx/sizer.h>

Container for sizer item flags, providing readable names for them. Normally, when you add an item to a sizer via wxSizer::Add, you have to specify a lot of flags and parameters, which can be unwieldy. This is where wxSizerFlags comes in: it allows you to specify all parameters using named methods instead. For example, instead of

    sizer->Add(ctrl, 0, wxEXPAND | wxALL, 10);

you can now write

    sizer->Add(ctrl, wxSizerFlags().Expand().Border(wxALL, 10));

This is more readable and also allows you to create wxSizerFlags objects which can be reused for several sizer items. Note that, by specification, all methods of wxSizerFlags return the wxSizerFlags object itself, allowing chaining of multiple method calls as in the example above.

Sets the wxSizerFlags to have a border of the number of pixels specified by borderInPixels in the directions specified by direction. Prefer to use the overload below, or the DoubleBorder() or TripleBorder() versions, instead of hard-coding the border value in pixels, to avoid too-small borders on devices with high-DPI displays.

Sets the wxSizerFlags to have a border with the size returned by GetDefaultBorder().

Sets the object of the wxSizerFlags to center itself in the area it is given.

Same as CentreHorizontal(). Same as CentreVertical().

Centers an item only in the horizontal direction. This is mostly useful for 2D sizers, as for the 1D ones it is shorter to just use Centre(), since the alignment is only used in one direction with them anyhow. For 2D sizers, however, centering an item in one direction is quite different from centering it in both directions. Note that, unlike Align(), this method doesn't change the vertical alignment.

Centers an item only in the vertical direction. The remarks in the CentreHorizontal() documentation also apply to this function. Note that, unlike Align(), this method doesn't change the horizontal alignment.

Sets the border in the given direction to twice the default border size.

Sets the border in the left and right directions to twice the default border size.
Sets the object of the wxSizerFlags to expand to fill as much area as it can.

Set the wxFIXED_MINSIZE flag, which indicates that the initial size of the window should also be set as its minimal size.

Returns the border used by default in the Border() method. This value is scaled appropriately for the current DPI on the systems where physical pixel values are used for the control positions and sizes, i.e. not with wxGTK or wxOSX.

Returns the border used by default, with fractional precision, for example when the border is scaled to a non-integer DPI.

Sets the proportion of this wxSizerFlags to proportion.

Set the wxRESERVE_SPACE_EVEN_IF_HIDDEN flag. Normally wxSizers don't allocate space for hidden windows or other items. This flag overrides this behaviour so that sufficient space is allocated for the window even if it isn't visible. This makes it possible to dynamically show and hide controls without resizing the parent dialog, for example.

Set the wxSHAPED flag, which indicates that the element should always keep the fixed width-to-height ratio equal to its original value.

Sets the border in the given direction to thrice the default border size.
https://docs.wxwidgets.org/trunk/classwx_sizer_flags.html
My setup is the following: I have ssh access to a distant machine running the latest version of Ubuntu Server edition, and I have to run a piece of software on it "through" the HideMyAss VPN (because this machine's IP has to be spoofed when the software is running). But of course I still want to be able to access the machine via its "original" IP. My problem is that as soon as I start the hma-start script on the distant machine (which basically fetches a configuration file from their servers and runs openvpn with it), I lose the connection and I can't connect to it anymore. Is that solvable at all, and if so, how? Thanks!

If you only want your specific software to use the VPN connection, then you may use network namespaces. Basically, the command

ip netns add vpn

will create another namespace for network devices, routing tables, firewall rules and so on, so it's like running two network stacks.

A network device can only be in one namespace, so you will need a virtual device acting as a bridge between your two namespaces. That's exactly what virtual network interfaces are for:

ip link add veth0 type veth peer name veth1

Everything that goes in veth0 will go out of veth1, and vice versa. Now you only need to move one of the virtual interfaces to the other network namespace:

ip link set veth0 netns vpn

Now you have a situation similar to this network topology:

                .----------.           .------.
  [intertubes]--|   Host   |-----------| vpn  |
            eth0`----------`veth1 veth0`------`

You can apply whatever method you want to share the internet connection. Either do masquerading (if the VPN supports traversing NAT), routing/bridging (you will need another IP address or serious configuration) or whatever method you like. When you want to 'access' vpn, run

ip netns exec vpn bash

and you will end up in the vpn namespace.
You will see that this namespace only has the veth0 network interface, as well as an unconfigured lo interface that you may want to configure using

ip addr add 127.0.0.1/8 dev lo && ip link set lo up

Now just configure your veth0 interface so you can connect to the internet, then launch your VPN so it can reconfigure the network to go through the VPN. You will see that the main namespace will not use the VPN, while the vpn namespace will.

The connection breaks because the VPN will change the default route so everything goes into the VPN. You may change that routing table, but it can be tricky to get right, especially if you lose ssh access when things go wrong. One easy, simple solution is to tell your server to reach your own IP address via eth0 by setting a route:

ip route add your_ip_address via the_server's_gateway

and hoping that the VPN script won't touch it. If you also want to allow other hosts to access the server's original address, that is to say, if you want your server to answer both on its original IP address and on the VPN's IP address, you will need to alter how the VPN changes your routes, or at least know how it changes them to work around what it does. Basically, what you want is policy routing. You will have two routing tables: one will use the VPN, and the other will not use it. If the VPN script will only modify the main table, then you can add another routing table to be used for the original IP address.
So basically, before launching the VPN, you duplicate the main table's content into another table, for example table 2 (2 is an arbitrary number here; see /etc/iproute2/rt_tables to define a name alias):

ip route add (network)/(prefixlen) dev eth0 src (address) table 2
ip route add default via (gateway) dev eth0 src (address) table 2

Now add a rule to use that table if your server is accessed by its original IP address from the eth0 interface:

ip rule add to (address) iif eth0 table 2

Then you launch your VPN script. In theory, you should run ip rule add before adding the default route to your second table because otherwise the kernel will reject this rule, saying that it can't route to the gateway; but in your case it will work fine, as main can already route to the gateway.
http://superuser.com/questions/463649/linux-routing-and-hma
#include <OP_Bundle.h> Definition at line 87 of file OP_Bundle.h. Adds a node interest to the bundle. If the bundle changes, it will alert all the nodes that expressed interest in it by calling bundleChanged() on it. This is a separate notification mechanism from passing the OP_BundleEvent via myEventNotifier. Adds the node to the bundle and sends out a notification that a node has been added. Adds the nodes in the list to the bundle and sends out a notification that nodes have been added. If filter or a pattern is set, only the nodes that match them will be added. Adds a parameter interest to the bundle. If the bundle changes, it will alert all the parameter channels by calling parmChanged() on the node with parm id. This is a separate notification mechanism from passing the OP_BundleEvent via myEventNotifier. Builds a string that specifies all the members of the bundle. Processes a list of nodes that have just been added to some network. Any nodes that match the pattern (if set) will be added to the bundle. Changes (ie, increases or decreases) the reference count by the given amount. When the count decreases to zero, the bundle list (ie, the owner of all the bundles) will delete the bundle. Definition at line 251 of file OP_Bundle.h. Removes all the member nodes (or cached nodes) and dirties the bundle. Determines whether or not a node is contained in the bundle. If the check_representative flag is true, then the node's parents will be checked for containment inside the bundle. Converts the normal bundle to a smart bundle by using the members of the bundle to construct a pattern that will match all of the current members and only the current members. Returns the number of member nodes. Calculates and caches the member nodes that match the specified pattern, and other member values (such as myPatternSubnetInclusion). Obtains the union of the member nodes from all the given bundles. The nodes in the result list are unique. Returns the current pattern for nodes. 
The nodes that match the pattern are the members of the bundle. Definition at line 169 of file OP_Bundle.h. Returns an object that emits events originating from the bundle when something about the bundle changes. Definition at line 324 of file OP_Bundle.h. Returns the current node filter. Definition at line 150 of file OP_Bundle.h. Obtains all the bundle member ids. Obtains all the bundle members as a node list. Obtains the unique name of the bundle. Definition at line 93 of file OP_Bundle.h. Returns the i-th member of the bundle. The order is arbitrary, but the index should not exceed the number of total entries. Returns the root node at which the search begins when matching the pattern. Only the ancestors (children, grandchildren - that is, nodes contained in some way by the root) are considered when matching the pattern. Returns the current pick (selected) flag. Definition at line 319 of file OP_Bundle.h. Returns the current reference count. Definition at line 254 of file OP_Bundle.h. Returns the node used to resolve relative paths in the pattern. Returns the pattern originally set on the bundle. Definition at line 161 of file OP_Bundle.h. Obtains the subnet inclusion flag. Definition at line 195 of file OP_Bundle.h. Returns the touch time, which is an integer that gets incremented each time the bundle contents changes. Definition at line 219 of file OP_Bundle.h. Determines whether the bundle has been internally created. An internal bundle is created in C++ code, based on some pattern string obtained from a node's parameter. The non-internal bundles are explicitly created by the user and are all listed in the bundle pane. Definition at line 99 of file OP_Bundle.h. Returns a flag that indicates if the bundle tries to automatically add newly created nodes to itself. Definition at line 223 of file OP_Bundle.h. Returns true if the bundle is "smart". That is, if it is a non-internal bundle whose contents is determined by a pattern. 
Definition at line 183 of file OP_Bundle.h. Processes a new node that has been added to some network. The bundle may decide to add that node to itself, if it is a pattern bundle. Informs the bundle that some unspecified nodes have been added or deleted. The bundle will mark itself as dirty, if necessary. Processes a node that is about to be deleted from some network. If that node belongs to the bundle, it will be removed as a member. Processes the given group after it has changed. If any bundles reference this group, they will be marked as dirty. Informs the bundle that some other bundle's contents has changed. This bundle will mark itself as dirty, if necessary. Processes a node when its type (or representative type) has changed. The bundle gets marked as dirty if necessary. Removes the node interest from the bundle. Removes the node from the bundle and sends out a notification event. Removes the parameter interest from the bundle. Rename the bundle to a new name. Mark the bundle pattern as dirty. Definition at line 175 of file OP_Bundle.h. Sets a new node filter for the bundle, dirties the bundle, and sends out an event. Sets the flag on all the members of this bundle. Sets op visibility. This is smarter than just turning on the display flag. It also adds to the visible children parameter of the ancestors so that the bundle nodes become visible. Sets the picked (selected) flag to on/off. Sets the pattern and turns the bundle into a smart bundle, if it is not already smart. If the pattern is NULL, the bundle will no longer be smart (it will be converted into a normal bundle). The subnet inclusion flag determines whether a pattern includes subnet contents. This means that if a node is not explicitly a member of the bundle (ie, does not match the pattern), but its ancestor is, then that node is also a member. Sorts the member nodes alphanumerically by node path. Sorts the member nodes by numerical value of the node pointer. 
Syncs (if sync_flag == true) and unsyncs (if sync_flag == false ) the HDA definitions of the nodes contained in this bundle.
http://www.sidefx.com/docs/hdk/class_o_p___bundle.html
setpgrp(2) - set process group ID

#include <sys/types.h>
#include <unistd.h>

pid_t setpgrp(void);

If the calling process is not already a session leader, the setpgrp() function makes it one by setting its process group ID and session ID to the value of its process ID, and releases its controlling terminal. See Intro(2) for more information on process group IDs and session leaders. The setpgrp() function returns the value of the new process group ID. No errors are defined. See attributes(5) for descriptions of the attributes of this interface.

See also: setpgrp(1), Intro(2), exec(2), fork(2), getpid(2), getsid(2), kill(2), signal(3C), attributes(5), standards(5)
https://docs.oracle.com/cd/E19963-01/html/821-1463/setpgrp-2.html
An Introduction To Heap Sort With Examples.

Heapsort is one of the most efficient sorting techniques. This technique builds a heap from the given unsorted array and then uses the heap again to sort the array. Heapsort is a comparison-based sorting technique that uses a binary heap.

What Is A Binary Heap?

A binary heap is represented using a complete binary tree. A complete binary tree is a binary tree in which all levels are completely filled except possibly the last, and the nodes in the last level are as far to the left as possible. A binary heap, or simply a heap, is a complete binary tree where the items or nodes are stored in such a way that the root node is greater than its two child nodes. This is also called a max-heap. The items in the binary heap can also be stored as a min-heap, wherein the root node is smaller than its two child nodes.

We can represent a heap as a binary tree or an array. While representing a heap as an array, assuming the index starts at 0, the root element is stored at index 0. In general, if a parent node is at position I, then the left child node is at position (2*I + 1) and the right child is at (2*I + 2).

General Algorithm

Given below is the general algorithm for the heap sort technique.

- Build a max heap from the given data such that the root is the highest element of the heap.
- Remove the root, i.e. the highest element, from the heap and swap it with the last element of the heap.
- Then adjust the max heap so as not to violate the max heap properties (heapify).
- The above step reduces the heap size by 1.
- Repeat the above three steps until the heap size is reduced to 1.

As shown in the general algorithm, to sort the given dataset in increasing order, we first construct a max heap for the given data. Let us take an example to construct a max heap with the following dataset.

6, 10, 2, 4, 1

We can construct a tree for this data set as follows. 
In the above tree representation, the numbers in the brackets represent the respective positions in the array. In order to construct a max heap of the above representation, we need to fulfill the heap condition that the parent node should be greater than its child nodes. In other words, we need to “heapify” the tree so as to convert it to max-heap. After heapification of the above tree, we will get the max-heap as shown below. As shown above, we have this max-heap generated from an array. Next, we present an illustration of a heap sort. Having seen the construction of max-heap, we will skip the detailed steps to construct a max-heap and will directly show the max heap at each step. Illustration Consider the following array of elements. We need to sort this array using the heap sort technique. Let us construct a max-heap as shown below for the array to be sorted. Once the heap is constructed, we represent it in an Array form as shown below. Now we compare the 1st node (root) with the last node and then swap them. Thus, as shown above, we swap 17 and 3 so that 17 is at the last position and 3 is in the first position. Now we remove the node 17 from the heap and put it in the sorted array as shown in the shaded portion below. Now we again construct a heap for the array elements. This time the heap size is reduced by 1 as we have deleted one element (17) from the heap. The heap of the remaining elements is shown below. In the next step, we will repeat the same steps. We compare and swap the root element and last element in the heap. After swapping, we delete the element 12 from the heap and shift it to the sorted array. Once again we construct a max heap for the remaining elements as shown below. Now we swap the root and the last element i.e. 9 and 3. After swapping, element 9 is deleted from the heap and put in a sorted array. At this point, we have only three elements in the heap as shown below. 
We swap 6 and 3, delete the element 6 from the heap and add it to the sorted array. Now we construct a heap of the remaining elements and then swap the two with each other. After swapping 4 and 3, we delete element 4 from the heap and add it to the sorted array. Now we have only one node remaining in the heap as shown below. With only one node remaining, we delete it from the heap and add it to the sorted array. The array shown above is thus the sorted array we have obtained as a result of the heap sort.

In the above illustration, we have sorted the array in ascending order. If we have to sort the array in descending order, then we need to follow the same steps but with a min-heap.

The heapsort algorithm is similar to selection sort, in which we repeatedly select the smallest element and place it into a sorted array. However, heap sort is faster than selection sort as far as performance is concerned. In other words, heapsort is an improved version of selection sort.

Next, we will implement Heapsort in the C++ and Java languages. The most important function in both implementations is the function "heapify". This function is called by the main heapsort routine to rearrange the subtree once a node is deleted or when the max-heap is built. Only when we have heapified the tree correctly will the elements end up in their proper positions, and thus the array will be correctly sorted.

C++ Example

Following is the C++ code for heapsort implementation. 
#include <iostream>
using namespace std;

// function to heapify the tree
void heapify(int arr[], int n, int root)
{
    int largest = root;          // assume the root is the largest element
    int left = 2 * root + 1;     // left child index
    int right = 2 * root + 2;    // right child index

    // if the left child is larger than the root
    if (left < n && arr[left] > arr[largest])
        largest = left;

    // if the right child is larger than the largest so far
    if (right < n && arr[right] > arr[largest])
        largest = right;

    // if the largest is not the root
    if (largest != root)
    {
        // swap root and largest
        swap(arr[root], arr[largest]);

        // Recursively heapify the affected sub-tree
        heapify(arr, n, largest);
    }
}

// implementing heap sort
void heapSort(int arr[], int n)
{
    // build heap
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);

    // extracting elements from heap one by one
    for (int i = n - 1; i >= 0; i--)
    {
        // Move current root to end
        swap(arr[0], arr[i]);

        // again call max heapify on the reduced heap
        heapify(arr, i, 0);
    }
}

/* print contents of array - utility function */
void displayArray(int arr[], int n)
{
    for (int i = 0; i < n; ++i)
        cout << arr[i] << " ";
    cout << "\n";
}

// main program
int main()
{
    int heap_arr[] = {4, 17, 3, 12, 9, 6};
    int n = sizeof(heap_arr) / sizeof(heap_arr[0]);
    cout << "Input array" << endl;
    displayArray(heap_arr, n);
    heapSort(heap_arr, n);
    cout << "Sorted array" << endl;
    displayArray(heap_arr, n);
}

Output:

Input array
4 17 3 12 9 6
Sorted array
3 4 6 9 12 17

Next, we will implement heapsort in the Java language.

Java Example

// Java program to implement Heap Sort
class HeapSort
{
    public void heap_sort(int arr[])
    {
        int n = arr.length;

        // Build heap (rearrange array)
        for (int i = n / 2 - 1; i >= 0; i--)
            heapify(arr, n, i);

        // One by one extract an element from the heap
        for (int i = n - 1; i >= 0; i--)
        {
            // Move current root to end
            int temp = arr[0];
            arr[0] = arr[i];
            arr[i] = temp;

            // call max heapify on the reduced heap
            heapify(arr, i, 0);
        }
    }

    // heapify the sub-tree
    void heapify(int arr[], int n, int root)
    {
        int largest = root;          // Initialize largest as root
        int left = 2 * root + 1;     // left child index
        int right = 2 * root + 2;    // right child index

        // If the left child is larger than the root
        if (left < n && arr[left] > arr[largest])
            largest = left;

        // If the right child is larger than the largest so far
        if (right < n && arr[right] > arr[largest])
            largest = right;

        // If the largest is not the root
        if (largest != root)
        {
            int swap = arr[root];
            arr[root] = arr[largest];
            arr[largest] = swap;

            // Recursively heapify the affected sub-tree
            heapify(arr, n, largest);
        }
    }

    // print array contents - utility function
    static void displayArray(int arr[])
    {
        int n = arr.length;
        for (int i = 0; i < n; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }
}
class Main
{
    // main program
    public static void main(String args[])
    {
        int arr[] = {4, 17, 3, 12, 9, 6};
        HeapSort ob = new HeapSort();
        System.out.println("Input array: ");
        HeapSort.displayArray(arr);
        ob.heap_sort(arr);
        System.out.println("Sorted array:");
        HeapSort.displayArray(arr);
    }
}

Output:

Input array:
4 17 3 12 9 6
Sorted array:
3 4 6 9 12 17

Conclusion

Heapsort is a comparison-based sorting technique using a binary heap. It can be termed an improvement over selection sort, since both of these sorting techniques work with similar logic: repeatedly finding the largest or smallest element in the array and then placing it into the sorted array. Heap sort makes use of a max-heap or min-heap to sort the array.

The first step in heap sort is to build a min- or max-heap from the array data, and then to delete the root element recursively, heapifying the heap, until there is only one node present in the heap. Heapsort is an efficient algorithm and performs faster than selection sort. It may be used to sort an almost sorted array or to find the k largest or smallest elements in the array.

With this, we have completed our topic on sorting techniques in C++. From our next tutorial onwards, we will start with data structures one by one.
https://www.softwaretestinghelp.com/heap-sort/
Hey I'm just starting to learn how to program and right now I am making a C++ tool with a Command line interface in Xcode 3 and this is my code. #include <iostream> using namespace std; int main() { //declare input output variable double base=0.0; double height=0.0; double area=0.0; double perimeter=0.0; double side2=0.0; double side3=0.0; //input data cout<< "Enter Base of Triangle" <<endl; cin>>base; cout<< "Enter Height of Triangle" <<endl; cin>>height; cout>> "Enter side 2's length" <<endl; cin>>side2; cout>> "Enter side 3's length" <<endl; cin>>side3; //calculation area=(base*height)/2; perimeter=(base+side2+side3); //display the answer cout<< "Area of the triangle is" <<area<< "cm2"<<endl; cout<< "Perimeter of the triangle is" <<perimeter<< "cm" <<endl; return 0; } according to Xcode I have 2 errors but I don't see what's wrong... Please help, thank you! :)
https://www.daniweb.com/programming/software-development/threads/187358/errors-in-code
1. Executable programs or shell commands TMUXSection: User Commands (1) Index | Return to Main Contents BSD mandoc NAMEtmux - terminal multiplexer SYNOPSIStmux -words [-2CluvV ] [-c shell-command ] [-f file ] [-L socket-name ] [-S socket-path ] [command [flags ] ] DESCRIPTIONtm . Each session has one or more windows linked to it. A window occupies the entire screen and may be split into rectangular panes, each of which is a separate pseudo terminal (the pty Sx CONTROL MODE section). Given twice ( Fl CC ) Xc disables echo. - -c shell-command - Execute shell-command using the default shell. If necessary, configuration files in the first session created, and continues , as described in the following sections. If no commands are specified, the new-session command is assumed. KEY BINDING interactively. - t - Show the time. - w - Choose the current window interactively. - x - Kill the current pane. - z - Toggle zoom state of the current pane. - { - Swap the current pane with the previous pane. - } - Swap the current pane with the next pane. - ~ - Show previous messages from ,-vertical,SThis section contains a list of the commands supported by . Most commands accept the optional -t (and sometimes -s argument with one of target-client target-session target-window or target-pane These specify the client, session, window or pane which a command should affect. target-client is the name of the pty: - A session ID prefixed with a $. - An exact name of a session (as listed by the list-sessions command). - The start of a session name, for example `mysess' would match a session named `mysession' - special token, listed below. - A window index, for example `mysession:1' is window 1 in session `mysession' - A window ID, such as @1. - An exact window name, such as `mysession:mywindow' - The start of a window name, such as `mysession:mywin' - windows. Each has a single-character alternative form. 
- Token Ta Ta Meaning - {start} Ta ^ Ta The lowest-numbered window - {end} Ta $ Ta The highest-numbered window - {last} Ta ! Ta The last (previously current) window - {next} Ta + Ta The next window by number - {previous} Ta - Ta The previous window by number target Sx FORMATS section) and the display-message list-sessions list-windows or list-panes commands. shell-command arguments are sh(1) commands. This may be a single argument another" $ tmux kill-window -t :1 $ tmux new-window \; split-window -d $ tmux new-session -d 'vi /etc/passwd' \; split-window -d \; attach CLIENTS AND SESSIONSThe tmux server manages clients, sessions, windows and panes. Clients are attached to sessions to interact with them, either when they are created with the new-session command, or later with the attach-session command. Each session has one or more windows linked into it. Windows may be linked to multiple sessions and are made up of one or more panes, each of which contains a pseudo terminal. Commands for creating, linking and otherwise manipulating windows are covered in the Sx WINDOWS AND PANES section. The following commands are available to manage clients and sessions: - attach-session [-dEr [-c working-directory ] ] [-t target-session ] - If run from outside , create a new client in the current terminal, update-environment option will not be applied. - detach-client [-P ] [-a ] [-s target-session ] [-t target-client ] - Detach the current client if bound to a key, the client specified with -t or all clients currently attached to the session specified by -s The -a option kills all but the client given with -t If -P is given, send SIGHUP to the parent process of the client, typically causing it to exit. - has-session [-t target-session ] - Report an error and exit with 1 if the specified session does not exist. If it does exist, exit with 0. - kill-server - Kill the tmux server and clients and destroy all sessions. 
- kill-session [-a ] [-t target-session ] - Destroy the given session, closing any windows linked to it and no other sessions, and detaching all clients attached to it. If -a is given, all sessions but the specified one is killed. - list-clients [-F format ] [-t target-session ] - List all clients attached to the server. For the meaning of the -F flag, see the Sx FORMATS section. If target-session is specified, list only clients connected to that session. - list-commands - ] - (80 by 24 if not given). If run from a terminal, any termios `#{session_name}:' but a different format may be specified with -F If -E is used, update-environment option will not be applied. update-environment - refresh-client [-S ] [-t target-client ] - Refresh the current client if bound to a key, or a single client if one is given with -t If -S is specified, only update the client's status bar. - rename-session [-t target-session ] new-name - Rename the session to new-name - show-messages [-IJT ] [-t target-client ] - - Execute commands from path - space Ta W Ta - - Next space, end of word Ta E Ta - - Next word Ta w Ta - - Next word end Ta e Ta M-f - - Other end of selection Ta o Ta - - Paste buffer Ta p Ta C-y - - Previous page Ta C-b Ta Page `@' characters If append-selection copy-selection or start-named-buffer are given the -x flag, tmux will not exit copy mode after copying. copy-pipe copies the selection and pipes it to a command. For example the following will bind `C-w' not to exit after copying and `C Sx MOUSE SUPPORT ) . ] [-s src-pane ] [-t dst-pane ] -ep([-b buffer-name ] ) ] [-E end-line ] [-S start-line ] [-t target-pane ] - background selected interactively from a list. After a client is chosen, `%%' is replaced by the client pty(7) path in template and the result executed as a command. If template is not given, "detach-client -t '%%'" is used.. If -S is given will display the specified format instead of the default session format. 
If -W is given will display the ] - Display a visible indicator of each pane shown by target-client See the display-panes-time display-panes-colour and display-panes-active-colour session options. While the indicator is on screen, a pane may be selected with the `0' to `9' keys. - find-window [-CNT ] [-F format ] [-t target-window ] match-string - ] - generated. If -d is given, the newly linked window is not selected. - list-panes [-as ] [-F format ] [-t target ] - ] - ] - Create a new window. With -a the new window is inserted at the next index up from the specified target-window moving windows up if necessary, otherwise target-window is the new window location. If -d is given, the session does not make the new window the current'' for all programs running inside . New windows will automatically ] -,. If -t is present, key is bound in mode-table the binding for command mode with -c or for normal mode without. See the Sx WINDOWS AND PANES section and the list-keys command for information on mode key bindings. To view the default bindings and possible commands, see the list-keys command. - list-keys [-t mode-table ] [-T key-table ] - inherited. Window options are altered with the set-window-option command and can be listed with the show-window-options command. All window options are documented with the set-window-option command.s the `colors' entry for terminals which support 256 colours: "*256col*:colors=256,xterm*:XT" window linked to a session causes a bell in the current window of that session, none means all bells are ignored, current means only bells in windows other than the current window are ignored and other means bells in the current window histories are not resized and retain the limit at the point they were created. - supported: brightred brightgreen and so on), colour0 to colour255 from the 256-colour set, default or a hexadecimal RGB string such as `#ffffff' , which chooses the closest match from the default 256-colour set. 
The attributes are either none or a comma-delimited list of one or more of: bright (or bold), dim, underscore, blink, reverse, hidden or italics.
- set-titles-string string - String used to set the window title if set-titles is on. Formats are expanded; see the FORMATS section.
- status [on | off] - Show or hide the status line.
- status-interval interval - Update the status bar every interval seconds.
- status-left string - The string will be passed through strftime(3) and formats (see FORMATS) will be expanded. It may also contain any of the following special character sequences:
  - #[attributes] - Colour or attribute change
  - ## - A literal `#'
  For details on how the names and titles can be set, see the NAMES AND TITLES section.
- update-environment variables - Set a space-separated string containing a list of environment variables to be copied into the session environment when a new session is created or an existing session is attached. The default is "DISPLAY SSH_ASKPASS SSH_AUTH_SOCK SSH_AGENT_PID SSH_CONNECTION WINDOWID XAUTHORITY".
- word-separators string - Set the characters considered word separators. The default is ` -_@'.
- set-window-option [-agoqu] [-t target-window] option value - Set a window option. The -a, -g, -o, -q and -u flags work as for set-option.
- automatic-rename [on | off] - Control automatic window renaming. The name may also be set by a terminal escape sequence (\033k...\033\\). It may be switched off globally with: set-window-option -g automatic-rename off
- automatic-rename-format format - The format (see FORMATS) used when the automatic-rename option is enabled.
- main-pane-height height
- main-pane-width width - Set the width or height of the main (left or top) pane in the main-horizontal or main-vertical layouts.
- mode-keys [vi | emacs] - Use vi or emacs-style key bindings in copy and choice modes. As with the status-keys option, the default is emacs, unless VISUAL or EDITOR contains `vi'.
- mode-style style - Set window modes style. For how to specify style, see the message-command-style option.
- monitor-activity [on | off] - Monitor for activity in the window. Windows with activity are highlighted in the status line.
- show-window-options [-gv] [-t target-window] [option] - List the window options or a single option for target-window, or the global window options if -g is used. -v shows only the option value, not the name.

Mouse key names:
- MouseDown2, MouseUp2, MouseDrag2
- MouseDown3, MouseUp3, MouseDrag3
- WheelUp, WheelDown
A limit may be placed on the length of the resultant string by prefixing it by an `=', a number and a colon, so `#{=10:pane_title}' will include at most the first 10 characters of the pane title. In addition, the first line of a shell command's output may be inserted using `#()'. For example, `#(uptime)' will insert the system's uptime. When constructing formats, tmux does not wait for `#()' commands to finish.

The following variables are available, where appropriate:
- buffer_sample - Sample of start of buffer
- buffer_size - Size of the specified buffer in bytes
- client_activity - Integer time client last had activity
- client_activity_string - String time client last had activity
- client_created - Integer time client created
- client_created_string - String time client created
- client_control_mode - 1 if client is in control mode
- client_height - Height of client
- mouse_utf8_flag - Pane mouse UTF-8 flag
- session_alerts - List of window indexes with alerts
- session_attached - Number of clients session is attached to
- session_activity - Integer time of session last activity
- session_activity_string - String time of session last activity
- session_created - Integer time session created
- session_created_string - String time session created
- session_last_attached - Integer time session last attached
- session_last_attached_string - String time session last attached
- window_activity - Integer time of window last activity
- window_activity_string - String time of window last activity
- window_active - 1 if window active
- window_activity_flag - 1 if window has activity alert
- window_linked - 1 if window is linked across sessions
- window_name (#W) - Name of window
- window_panes - Number of panes in window
- window_silence_flag - 1 if window has silence alert
- window_width - Width of window
- window_zoomed_flag - 1 if window is zoomed
- wrap_flag - Pane wrap flag

NAMES AND TITLES
It is the same mechanism used to set, for example, the xterm(1) window title. A window name may be set with:
- A command argument (such as -n for new-window or new-session)
- An escape sequence: $ printf '\033kWINDOW_NAME\033\\'

- command-prompt [-I inputs] [-p prompts] [-t target-client] [template] - Open the command prompt in a client. A prompt is displayed, constructed from template if it is present, or `:' if not. Both inputs and prompts may contain the special character sequences supported by the status-left option.
- paste-buffer [-dr] [-b buffer-name] [-s separator] [-t target-pane] - Insert the contents of a paste buffer into the specified pane. If not specified, paste into the current one. With -d, also delete the paste buffer. When output, any linefeed (LF) characters in the paste buffer are replaced with a separator, by default carriage return (CR). A custom separator may be specified using the -s flag. The -r flag means to do no replacement, pasting the buffer as it is.

tmux understands some terminal control sequences:
- Set the cursor colour: $ printf '\033]12;red\033\\'
- Set or reset the cursor style: $ printf '\033[4 q' (an argument of 0 resets the cursor style)
- The Ms extension can be used by tmux to store the current buffer in the host terminal's selection (clipboard). See the set-clipboard option above and the xterm(1) man page.

CONTROL MODE

tmux offers a textual interface called control mode. This allows applications to communicate with tmux using a simple text-only protocol. In control mode, a client sends tmux commands or command sequences terminated by newlines, and receives notifications such as:
- %window-renamed window-id name - The window with ID window-id was renamed to name.

FILES
- ~/.tmux.conf - Default tmux configuration file.
- /etc/tmux.conf - System-wide configuration file.

SEE ALSO
pty(7)

AUTHORS
Nicholas Marriott <nicm@users.sourceforge.net>
https://eandata.com/linux/?chap=1&cmd=tmux
What would be the best way to store a student's vehicle in a database? They can select bike, scooter or car. Some say the best way is to store a hash in the database (what would the attribute type be?), while others say the best way is to create a has_many :through association. Thanks!

If the vehicles themselves won't have any data and you're using Postgres 9.2 or newer, then I'd recommend storing the vehicles in an array in a vehicles column on the students table. If the vehicles may ever have information about themselves, such as how many wheels they have, top speed, weight etc., then you'd be far better off creating a vehicles table with a belongs_to/has_many :through relationship with students.

Update: If you want to add a column that accepts arrays to your students table for holding vehicles, you can do the following migration:

```ruby
def change
  add_column :students, :vehicles, :string, array: true, default: []
end
```
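One practical detail with the array-column approach is validating what goes into it: checkbox params arrive as strings and may contain anything. A minimal plain-Ruby sketch (the method and constant names here are illustrative, not from the answer above):

```ruby
# Whitelist of vehicles the form offers (assumed from the question).
ALLOWED_VEHICLES = %w[bike scooter car].freeze

# Keep only recognised values before saving them to the array column.
# Array() tolerates nil (e.g. when no checkbox was ticked).
def sanitize_vehicles(params)
  Array(params).map(&:to_s) & ALLOWED_VEHICLES
end

puts sanitize_vehicles(["bike", "car", "tank"]).inspect  # prints ["bike", "car"]
```

In a Rails model the same check could live in a validation instead; the array intersection (`&`) also de-duplicates repeated submissions.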
https://codedump.io/share/7z3wPOw0Uppp/1/what39s-the-best-way-to-store-checkboxes-in-a-postgres-database
qscript 0.7.1 - a tiny, small, & fast scripting language.

QScript: A fast, static typed, scripting language, with a syntax similar to the D language.

Setting it up: To add QScript to your dub package or project, run this in your dub package's directory:

```
dub add qscript
```

After adding that, look at source/demo.d to see how to use the QScript class to execute scripts.

Getting Started: To get started on using QScript, see the following documents:
- spec/syntax.md - Contains the specification for QScript's syntax.
- spec/functions.md - Contains a list of predefined QScript functions.
- source/demo.d - A demo usage of QScript in the D language. Shows how to add new functions.
- examples/ - Contains some scripts showing how to write scripts.

The code is thoroughly documented. Separate documentation can be found here.

Building Demo: To be able to run basic scripts, you can build the demo using:

```
dub build -c=demo -b=release
```

This will create an executable named demo in the directory. To run a script through it, do:

```
./demo path/to/script
```

You can also use the demo build to see the generated NaVM byte code for your scripts using:

```
./demo "path/to/script" "output/bytecode/file/path"
```

Features:
- Simple syntax
- Dynamic arrays
- Fast execution
- Static typed
- Function overloading
- References

TODO for upcoming versions:
- add cast(Type)
- unsigned integers
- bitshift operators
- Structs
- Be able to load multiple scripts, to make it easier to separate scripts across files; something similar to D's import

Hello World: This is how a hello world would look like in QScript. For more examples, see examples/.

```
function void main(){
	writeln ("Hello World!");
}
```

Registered by Nafees Hassan. 0.7.1 released 14 days ago. Repository: Nafees10/qscript. License: MIT. Dependencies: utils, navm.
https://code.dlang.org/packages/qscript
Up today, I start experimenting with Angular.dart and Polymer.dart. For tonight, I would like to start with a simple Angular application and see what, if anything, I need to do in order to use my amazing <x-pizza> Polymer element. I actually need to go back and fix that Polymer up some, but I think the learning ground is more fertile at this point with Angular than with refactoring or converting to JavaScript. Still, I do need to do both before <x-pizza> is Patterns in Polymer worthy. Anyway… Angular. I love it. I am probably a horrible person to ask for an opinion on libraries and frameworks. Most people seem to be absolutely certain that some are better than others. I am unable to make decisions like that until I work with a library enough that it starts to affect how I think about other tools. By that time, I have almost always found some great use-cases for the tool at hand, leaving me hard pressed to dislike it outright. So, for a guy that co-authored Recipes with Backbone and who still loves coding in Backbone.js, I also love Angular. Mostly I am glad to know both, as I think they have overlap, but also different use-cases. The use case that I would like to explore in Angular tonight is an online pizza store. I have an <x-pizza> Polymer element, which seems like it could be put to nice use throughout such an app. I start with a pubspec.yaml that includes both the Angular and Polymer packages, but does not include the usual Polymer transformer:

```yaml
name: angular_example
dependencies:
  angular: any
  polymer: any
dev_dependencies:
  unittest: any
# transformers:
# - polymer:
#     entry_points: web/index.html
```

I am unsure how the transformer would interact with Angular, so I leave that out of the equation tonight.
While working with raw Polymer, I have needed to include initialization code in the web page:

```html
<!-- Load component(s) -->
<link rel="import" href="packages/angular_example/elements/x-pizza.html">
<!-- Load Polymer -->
<script type="application/dart">
  export 'package:polymer/init.dart';
</script>
```

The Angular page, meanwhile, loads the application script (main.dart):

```html
<!DOCTYPE html>
<html ng-app>
  <head>
    <title>Pizza Store</title>
    <script type="application/dart" src="main.dart"></script>
    <!-- Load component(s) -->
    <link rel="import" href="packages/angular_example/elements/x-pizza.html">
    <!-- Load Polymer -->
    <script type="application/dart">
      export 'package:polymer/init.dart';
    </script>
  </head>
  <body>
    <div class="container">
      <h1>Dart Bros. Pizza Shoppe</h1>
      <ng-view></ng-view>
    </div>
  </body>
</html>
```

But, when I load that page, I see the following in the console:

```
[ERROR] Only one Dart script tag allowed per document
```

Bother. But not too much of a bother. Back when I was learning to test Polymer, I needed to initialize Polymer without using that script. That turned out to be pretty easy. Instead of exporting that one script, I import the Polymer package into my main.dart Angular app and invoke initPolymer():

```dart
import 'package:polymer/polymer.dart';
import 'package:angular/angular.dart';
import 'package:angular/routing/module.dart';
import 'package:angular_example/store.dart';

main() {
  initPolymer();
  var store = new AngularModule()
    ..type(RouteInitializer, implementedBy: StoreRouter);
  ngBootstrap(module: store);
}
```

In addition to removing the export, I also need to ensure that the usual <link> imports of Polymer elements occur before the main.dart script:

```html
<!DOCTYPE html>
<html ng-app>
  <head>
    <title>Pizza Store</title>
    <link rel="import" href="packages/angular_example/elements/x-pizza.html">
    <script type="application/dart" src="main.dart"></script>
  </head>
  <body>
    <div class="container">
      <h1>Dart Bros. Pizza Shoppe</h1>
      <ng-view></ng-view>
    </div>
  </body>
</html>
```

With that, I am ready to take a look at my Angular application.
As I said, I am starting very simple here. So simple, in fact, that I have a router and nothing more. The router declares two pages: the default start page and the custom pizza builder page:

```dart
import 'package:angular/angular.dart';
import 'package:angular/routing/module.dart';

class StoreRouter implements RouteInitializer {
  Scope _scope;
  StoreRouter(this._scope);

  void init(Router router, ViewFactory view) {
    router.root
      ..addRoute(
        defaultRoute: true,
        name: 'start',
        enter: view('partials/home.html')
      )
      ..addRoute(
        name: 'custom-pizza',
        path: '/pizza/custom',
        enter: view('partials/custom.html')
      );
  }
}
```

I define my two partials. The home.html is a simple page that links to the route defined for building a custom pizza:

```html
<h3>This is the home of darty pizza</h3>
<p>
  Why not <a href="/pizza/custom">build your own, awesome pizza</a>!
</p>
```

When I first load the application, I see:

Similarly, I define the custom.html partial as:

```html
<h3>Oooh! Let's make beautiful pizza together</h3>
<p>
  <x-pizza></x-pizza>
</p>
```

Which results in:

That was rather easy to get working! Of course, I have yet to do anything real with this. I am not taking advantage of any of Angular's features and would very much like to explore binding values from Polymers into Angular applications. But that is grist for another day. For tonight, I am quite happy with the progress so far.

Day #1,001

Comments:

Inspired by this post, I tried a different route, of using Angular Dart directly inside a Polymer element. What this means is that you set this up as a regular Polymer Dart application, and include the directives inside the code for the Polymer element you create. I managed it with something similar to what you have, minus the pizza part. The trick is to include the element parameter in ngBootstrap:

```dart
ngBootstrap(module: store, element: $['entry']);
```

with entry being

```html
<div id="entry">
  <ng-view></ng-view>
</div>
```

The only downside is routes don't seem to work for me in the same way as they do for you.
I had to create a route with a path of '/index.html#/pizza/custom' before it would work. The upside is it can be done, with workarounds. At least for routes anyway. I also left the transformer in pubspec.yaml alone.

Cool! You're getting ahead of me. Good to know that it can be done :)

Well, I managed to get routes to work too, using a mutation observer on <ng-view> to get the list of AnchorElements and create a proxy AnchorElement in the main document, activating the proxy's click event whenever the AnchorElements are clicked in the Polymer element. I'm not sure this is the best thing to do, but it works. I think there are a whole lot of issues due to the Polymer element's shadowRoot hiding things. Angular, and the third-party libraries it uses, rely on elements being searchable in the main document. It doesn't deal with the shadow root at all; thus, I think, current solutions for Angular within Polymer will have to involve some sort of proxy, as one solution. Oh, and did I mention that any events from AnchorElements within the Polymer element look to Angular as if they are created by the Polymer element? route_hierarchical intercepts click events on the window and checks to see which element they are from before deciding what to do. It is looking for AnchorElements as the source of the event before it'll invoke the route.

Yikes. That seems pretty involved. I may give it a go in JavaScript first and use that to compare with your experiences. What you describe sounds doable, but maybe not something for “Patterns.”

If anything, it will have at least eliminated one possibility a bit more quickly for you, or put you on a side track :-)

I was going to look into this anyway, so I think you're helping me eliminate some work so that I can focus elsewhere. So thanks!

I forgot to mention I did the ngBootstrap(...) in created(). It also works in enteredView().
https://japhr.blogspot.com/2014/01/day-1001-getting-started-with-angular.html
return should be in place of break, I think. Also, rounding to 2 decimals does not make sense with such a small error tolerance; generally you do not round at all, you just format the printing.

line 15 return estimate should be outside the loop

I would write the code like this ...

```python
def newton_approx(x):
    """
    Newton's method to get the square root of x
    using successive approximations
    """
    tolerance = 0.000001
    estimate = 1.0
    while True:
        estimate = (estimate + x / estimate) / 2
        difference = abs(x - estimate ** 2)
        if difference <= tolerance:
            break
    return estimate

# test
x = 2
print("newton = %0.15f" % newton_approx(x))
print("x**0.5 = %0.15f" % (x**0.5))

'''
newton = 1.414213562374690
x**0.5 = 1.414213562373095
'''
```

Thank you everyone for your helpful inputs in solving my problem. The code now works since I moved the return outside of the loop. Now for phase 2 of my homework: converting the current function to a recursive function. Any suggestions?

I would add a parameter for the current estimate and return a value from the function once it reaches the level of tolerance (the tolerance could also be a parameter).
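Picking up the last reply's suggestion, one possible recursive rewrite (a sketch, not from the thread itself; the parameter layout is one of several reasonable choices):

```python
def newton_recursive(x, estimate=1.0, tolerance=0.000001):
    """Recursive Newton's method: each call refines the estimate once."""
    if abs(x - estimate ** 2) <= tolerance:
        return estimate
    # Recurse with the improved estimate, same update rule as the loop version.
    return newton_recursive(x, (estimate + x / estimate) / 2, tolerance)

print("newton = %0.15f" % newton_recursive(2))
```

The base case mirrors the loop's break condition, and the recursive call replaces the loop body; for well-behaved inputs the call depth stays small because Newton's method converges quadratically.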
http://www.daniweb.com/software-development/python/threads/442006/newton-function
First. Actually not, C was my First :'(

Solution in Clear category for I Love Python! by oLeBeLo

```python
def i_love_python():
    """
    I find it highly flexible in exchange for some performance, but I loved it
    from the start exactly for that: all the different ways of thinking Python
    was putting in my head because of this flexibility. I love programming in
    all other languages, as most of them limit you to a constricted set of
    choices within the boundaries of the language. And that's puzzling and fun,
    to find solutions. But Python is the opposite. The framework and set of
    choices is not limiting ... it pulls creativity and stretches the horizon
    of choices. I found it way more fun to play with and solve a problem in 100
    different ways. Mainly thanks to checkio and all Creative users. I gotta
    thank you guys ;)
    """
    return "I love Python!"


if __name__ == '__main__':
    # These "asserts" are used only for self-checking and not necessary for auto-testing
    assert i_love_python() == "I love Python!"
```

Dec. 28, 2019
https://py.checkio.org/mission/i-love-python/publications/oLeBeLo/python-3/first-actually-not-c-was-my-first/share/bd9f408beccf1e39868a1d32fa7b4d25/
Leverage Determinants in the Absence of Corporate Tax System: The Case of Non-Financial Publicly Traded Corporations in Saudi Arabia

by Sulaiman A. Al-Sakran, Department of Finance and Economics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia

Abstract

In Saudi Arabia the corporate tax code is unique in that taxes are based on total net worth. We used a sample composed of all publicly traded firms except the financial sectors to study the variations in leverage ratios and their determinants. It was found that leverage was employed with different variations across the studied sectors. The study examined a set of factors that determined leverage levels.

I. INTRODUCTION

Capital structure is defined as the relative amounts of debt and equity used to finance an enterprise. This issue is one of the most contentious issues, if not a puzzle, in finance. A number of theories have been advanced to explain the variation in debt ratios across firms. The theories suggest that firms select capital structure depending on attributes that determine the various costs and benefits associated with debt and equity financing. Explanations vary from the irrelevancy hypotheses (M&M, 1958) to an optimal capital structure where the cost of capital is minimized and the firm value is maximized, hence maximizing shareholders' wealth. This study investigates an economy that applies a unique tax system, to examine the implication of taxes on the capital structure of its publicly traded corporations. It is organized as follows: first, the relevant literature on capital structure is reviewed. Second, a look at the Saudi economy and the unique features of its tax code is presented. The third part presents the methodology and data. Fourth is a discussion of results. Finally, concluding remarks are given. A.
Literature Review A number of theories have been advanced to explain capital structure and to understand whether there is an optimal capital structure for a firm. In 1958, Modigliani & Miller published their seminal theory of investment where they classified firms into equivalent return classes assuming perfect market conditions. Their propositions state that the market value of any firm and its cost of capital are independent of its capital structure. However, it is dependent on the expected return appropriate to its class. Accordingly optimal capital structure does not exist. In 1963, they added to their findings that taxes could be an advantage and an increase in the after-tax yield on equity capital as leverage increases. This final conclusion urges the firms to use debt, even reaching to a 100% debt ratio if possible. Another theory in explaining capital structure is the agency theory which states that debt financing create an agency problems to firms. Barnea, Haugen, and Sanbet (1981) identified three problems that occur because of debt financing. First is the stockholders’ incentive to accept sub-optimal and high-risk projects, which transfer wealth from bondholders to stockholders. Second, The presence of debt in the capital structure causes the firm to forgo any investment with positive net market value being lower than the debt Volume 27 Number 10/11 2001 59 value. The third is the bankruptcy costs where bankruptcy probability increases with debt level since it increases the fear that the company might not be able to generate profits to pay back the interest and the loans. The need to balance gains and costs of debt financing emerged as a theory that was called the Static Trade-Off Theory by Myers (1984). It values the company as the value of the firm if unlevered plus the present value of the tax shield minus the present value of bankruptcy and agency costs. 
Alternatively, another theory, the pecking order theory, has emerged as an explanation for financing decisions by Myers (1984). It states that internal financing is preferred more than external financing. This is due to the transaction (floatation) costs and the resulting agency costs of issuing new securities. Internal financing is done through retained earnings. When retained earnings are not sufficient, debt financing is the next choice before considering offering new stocks. The reason is that the floatation costs of debt issuing are lower than those of equity issuing. The pecking order theory would indicate that the profitability of a firm affects its financing decisions. If it issues debt, this means that the firm has an investment opportunity that exceeds its internally generated funds. So, changes in the capital structure often serves as a signal to outsiders about the current situation of the firm as well as the managerial expectations concerning future earnings . This is called the signaling theory. The debt offering is believed to reveal information the management of a firm is expecting about future cash flows if it will cover the debt costs. However, the bankruptcy fears still impact the signal and intensify the cost of this signal. Such conclusions are supported by results of most empirical work — for example Asquith and Mullins (1986) and Eckbo’s (1986) - that documented a positive effect on stock prices when leverage increases while leverage-decreasing announcements have a negative effect. MacKie-Mason (1990) studied the tax effect on corporate financing decisions. The study provided evidence of substantial tax effect on the choice between debt and equity. He concluded that changes in the marginal tax rate for any firm should affect financing decisions. When already exhausted (with loss carry forwards) or with a high probability of facing a zero tax rate, a firm with high tax shield is less likely to finance with debt. 
The reason is that tax shields lower the effective marginal tax rate on interest deduction. The determinants of capital structure are studied in several papers. Titman and Wessels (1988) analyze the explanatory power of some of the theories of optimal capital structure that suggest attributes in determining the various costs and benefits associated with debt and equity financing. They applied a factor analysis technique for estimating the impact of unobservable attributes on the choice of corporate debt ratios. The results find that debt levels are negatively related to the “uniqueness” of a firm’s line of business and transaction costs is an important determinant of capital structure choice where shortterm debt ratios were shown to be negatively related to firm size. However, they failed to prove the effect of non-debt tax shields, future growth, volatility of earnings, and collateral value on debt ratios and the firm size on long-term debt. The theoretical relationship between earnings variability and financial leverage is ambiguous. Jaffe and Westerfield (1987) show that under certain conditions there would be a positive relationship. Castanias (1983) discussed conditions regarding bankruptcy cost, interest expenses and earnings variability necessary to derive a negative relationship. In a more recent study Thies and Klock (1992) found similar results that pertains to long term debt and common equity. The findings also refute claims that there is no cross- no Zakat is due. the adjusted net income for Saudi Income Tax and Zakat purposes is added to the Zakat base.Managerial Finance 60 sectional relationship between variability and capital structure and suggests that there are differences in the utilization of leverage across time and across firms. Moreover. which is. Johnson (1998) conducted a study on the effect of the existence of bank debt on a firm’s capital structure. It is important to note that there is no penalty for late payment of the Zakat. signaling. 
retained earnings or accumulated deficit. This part addresses the Zakat system and present major features of the stock market in Saudi Arabia. there is no exchange floor in Saudi Arabia. In studying signaling effect. Saudi Arabian Economy Unique Features Saudi Arabia is an oil dependent economy where more than a third of its GDP is generated from oil revenues. If both are negative. If a company has Saudi and non-Saudi owners. Share trading is consummated through local . There are more than 300. Our discussion will be mainly concerned with items related to capital structure included in the calculation of the Zakat base. Saudis pay Zakat based on their net worth. Deductions from the Zakat base include net fixed assets and properties under construction. they find a positive relation between leverage and size of the earnings increase. B. investments in other Saudi companies and Saudi government bonds. growth. Zakat & Corporate Taxation System in Saudi Arabia Zakat and Tax are managed by a government department of Zakat and Income Tax. If the Zakat base is negative or lower than the adjusted net income for the year. One of the basic features in the Saudi Arabian economy is the absence of income tax on citizens. The government own around 43% of traded shares. dividends distributed during the year not to exceed retained earnings at the beginning of the year. notes payable and advances if they are used to finance fixed assets. B. in most cases. Instead. Zakat is imposed on the adjusted net income. Saudis pay Zakat on their share of the Zakat base and non-Saudis pay income tax on their share of the taxable income.2.5% of the Zakat base. there is one form of tax that is called Zakat. The Saudi Stock Market Although Saudi companies represent about ten of the top thirty Middle East companies.000 registered companies from which there are only 74 publicly traded companies listed in the market and ten of these are banks. Barclay. 
Smith and Watts (1995) studied the effects of size. and regulation on debt levels. Zakat is 2. His findings are consistent with the proposition that firms can have higher optimal leverage if they borrow from banks. This is due to benefits from bank screening and monitoring. and adjusted deficits. The result matched the expectation that leverage increases with regulation. Finally. They expected that regulation effectively reduce the possibility for corporate under-investment agency problem simply by transferring much of management’s discretion over investment decision to regulatory authorities. The study reported a small economic effect of size on leverage level where results were mixed when regressing the leverage on total sales as a measure of size. According to Aljurad & Company Zakat base includes the share capital. based on payers’ net worth . It also includes long-term loans. Theoretical and empirical research suggested that bank debt mitigate the agency costs.1. Saudi Industrial Development Fund loans and Public Investment Fund loans. B. Its GDP has grown from around 120 billions in 1993 to around 150 billions in 1998. Using regression analysis. are open to the citizens of the other Gulf States. Share trading is restricted to Saudi nationals while some few companies. a further look into the effect of Zakat on leverage. Relevant information was available for a sample of 35 firms. Currently there are 74 joint stock companies listed in the stock market. Share ownership is transferred electronically from the seller to the buyer through the Saudi Shares Registration Company. The sources of data are mainly Bakheet Financial Advisors and annual reports of the companies chosen for the years between 1993 to 1997. Other sectors are the cement with 9% share and non-financial services represent 6% while electricity is 12% and agriculture is 1%. There were 171 ob- . 
Our study will identify and compare the different types of leverage ratios between sectors and the stock market as a whole. This exploratory analysis will try to identify if there are significant variations in capital structure of the sampled firms. Items selected from the balance sheets and income statements are shown in table 1. DATA AND RESULTS A. Figure 1: Saudi Stock Market Capitalization and Trading Volumes Saudi Stock Market 250 Billion SR.Volume 27 Number 10/11 2001 61 banks using a computerized network. As can be seen in figure 1 the market remains fairly illiquid with the value of shares traded being only 15% of the total market capitalization. excluding banks. The industrial stocks are representing 29% where the Saudi Arabia Basic Industrial Corporation has the largest share. if any. II. Data and Methodology The purpose of the paper is to study the capital structure of the publicly traded Saudi companies and to identify its determinants. Billion SR 200 150 100 50 0 1993 1994 1995 Year 1996 1997 Market Capitalization Trade Volume The government own nearly 43% of listed stocks that is dominated by the banking sector which represent 43% of total market capitalization of the market. In explaining variations leverage ratios will be regressed versus some selected items from firms’ balance sheets and income statements. RESEARCH METHODOLOGY. As in table 1 there are 74 companies of which ten are financial institutions. while small firms might be highly leveraged with shortterm debt . The eligibility of a company to get this subsidy increases its willingness to borrow. They proposed that Equity-controlled firms have a tendency to invest sub-optimally hence the cost associated with this agency relationship is likely to be higher for firms having higher growth. Natural log of total assets was used to measure the size factor. The more profits a company has. 
The more profits a company has, the less it is expected to use debt financing. Titman and Wessels (1988) and Thies and Klock (1992) used the operating profit rate of return (EBIT/assets) as an indicator of profitability. In this study, profitability was measured by two measures. This also means that expected future growth should be negatively related to long-term debt levels.

It is expected that Zakat makes no difference whether financing uses equity or debt, since both are included in the Zakat base. The effect of interest payments on loans, if any, would be small, since interest is deducted from income while the loan itself is included in the Zakat base. Hence Zakat will be considered irrelevant, and the other variables will be tested.

Since the Saudi government is a major shareholder in many companies, we will consider it a determinant of capital structure. Hence, a positive relationship is expected between government ownership, measured by the percentage of shares owned, and debt ratios. Also, the government subsidizes some industries; for example, it guarantees a 7% profit to electricity companies. The significance of this variable is tested via a dummy variable.

The independent variables developed here are government share, growth, firm size and profitability, together with government subsidy. The dependent variables in the models generated using regression analysis are the total debt to capital ratio, the long-term debt to capital ratio, and the short-term debt to capital ratio. It is important to note that short-term debt was computed as the sum of two balance sheet items: the current portion of long-term debt and short-term debt. The size factor, the natural log of total assets, is the same measure used by Titman and Wessels (1988).

In Saudi Arabia, the use of debt in financing firms' activities is fairly limited and below the international norm. Alsakran (1999) concluded that Saudi public firms tend to use equity as a financing tool for their operations. Basically, the reason is the absence of a bond market.
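As a quick illustration of how the three dependent variables are built from balance-sheet items, the following sketch uses made-up figures (not values from the paper's sample); only the definitions — short-term debt as current portion of long-term loans plus short-term borrowings, and capital as debt plus equity — come from the text above.

```python
# Hypothetical balance-sheet figures in SR millions (illustration only).
short_term_borrowings = 120.0   # short-term bank loans
current_portion_ltd = 30.0      # current portion of long-term loans
long_term_debt = 450.0          # long-term loans
equity = 2400.0                 # total shareholders' equity

# Short-term debt = current portion of long-term debt + short-term borrowings.
st_debt = short_term_borrowings + current_portion_ltd
total_debt = st_debt + long_term_debt
capital = total_debt + equity   # capital = debt + equity

total_leverage = total_debt / capital
lt_leverage = long_term_debt / capital
st_leverage = st_debt / capital

print(round(total_leverage, 3), round(lt_leverage, 3), round(st_leverage, 3))
# → 0.2 0.15 0.05
```

By construction the long-term and short-term ratios sum to the total leverage ratio.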
Under the pecking order theory, firms prefer internal financing over external equity or debt issuance. The correlation coefficients between the debt/capital ratios and Zakat support this hypothesis, as can be seen in Table 3, where all correlation coefficients are small and insignificant. On the other hand, government ownership would give lenders confidence to extend loans to a company.

Managerial Finance

There were 171 observations (see Table 2). All of these data were based on the book values of the selected items from the balance sheets and income statements. The capital structure variables used are the leverage ratios based on book values. The ratios are the total debt to capital (debt + equity) ratio and its long-term and short-term components. Profitability is measured by return on assets (EBZ/assets) and profit margin (EBZ/sales), where EBZ is earnings before Zakat. The growth factor is measured using the percentage change of assets. These are the same variables used by Titman and Wessels (1988). Titman and Wessels (1988) used the natural log of sales because they state that the size factor affects mainly very small firms. Large firms are expected to be highly leveraged.

It is worth mentioning that we included loans extended by the Saudi Industrial Development Fund (SIDF), a government institution that extends funds to industries at an extremely low cost, to be repaid in annual installments over 25 years. All industrial sector firms used both short-term and long-term financing.

Government subsidy has a relatively high correlation with government share in the company. Government subsidy also has a relatively high correlation with the size of the firm, and it has a positive correlation with total debt and long-term debt. There exists a high positive correlation between the size of the firm and both the total debt ratio and the long-term debt ratio. The highest negative correlations were between the profit margin and the long-term debt ratio.
Another reason for not using debt in Saudi Arabia is probably the limited sources of such financing. The SIDF loan is considered in our study as part of short-term loans.

There are five sectors examined in the study. First is the industrial sector, which includes 10 firms of the sample; the cement sector has eight companies listed in the market; there are four electricity companies; the services sector includes eight companies; and five firms are from the agricultural sector.

As stated earlier, our analysis will use multi-linear regression models to investigate the different relationships. These models are:

Total Debt/Capital = b0 + b1 G.Share + b2 G.Subsidy + b3 Growth + b4 Size + b5 ROA + b6 PM
Long-Term Debt/Capital = b0 + b1 G.Share + b2 G.Subsidy + b3 Growth + b4 Size + b5 ROA + b6 PM
Short-Term Debt/Capital = b0 + b1 G.Share + b2 G.Subsidy + b3 Growth + b4 Size + b5 ROA + b6 PM

where:
G.Share: the share of government ownership in the firm;
G.Subsidy: a dummy variable = 1 if the firm is in an industry receiving a subsidy from the government, and = 0 if not;
Growth: the growth of the firm, measured by the percentage change of assets;
Size: the size of the firm, measured by the natural log of total assets;
ROA: return on assets;
PM: profit margin.

B. Results and Findings

Our analysis will start by examining the correlation between all variables, dependent and independent. Table 4 shows the resulting correlation matrix; the analysis below would justify these correlation values. Government share has a positive correlation with size, indicating that the government holds shares only in large companies. This supports the expected effect of size on debt ratios, as does the correlation between the total debt ratio and government subsidy. Table 5 shows the characteristics of the sample in terms of the variables of concern.

Figure 2: Industrial Sector Average Ratios
[Figure 2 plots the industrial sector's total, long-term and short-term debt/capital ratios over 1993-1997.]

Table 6 shows the regression results for this sector.
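The models above are ordinary multiple regressions of a leverage ratio on the six firm characteristics. As a hedged sketch of how such coefficients are estimated — using the normal equations, in pure Python, on made-up numbers rather than the paper's data — the following solves for an intercept and slopes:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = (X'y).
    X: rows of regressors (without intercept); y: responses.
    Returns [b0, b1, ..., bk], with b0 the intercept."""
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    k = len(rows[0])
    # Augmented matrix [X'X | X'y]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    # Back substitution
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Made-up regressors and an exactly linear response, so the known
# coefficients (0.1, 0.5, -0.2) are recovered.
X = [(1, 2), (2, 1), (3, 5), (4, 3), (5, 8), (6, 2)]
y = [0.1 + 0.5 * a - 0.2 * c for a, c in X]
print([round(v, 6) for v in ols(X, y)])  # → [0.1, 0.5, -0.2]
```

In practice a statistics package would also report the t-statistics, F-statistic and R-squared quoted in the paper's tables; this sketch only shows where the coefficient estimates come from.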
B.1. Industrial Sector

The industrial sector sample represents 51% of the sample market capitalization. In Figure 2, it is clear that the industrial sector used both short-term and long-term financing. The nature of the industry is that the company sizes are relatively large, over SR 7 billion in total assets. SABIC, the largest company in Saudi Arabia, with a 70% government share and assets of SR 68 billion, is part of this sector; its leverage ratio has been around 21% for the last five years. Another important observation for this sector is the availability of interest-free loans provided by the SIDF. This might explain the high short-term debt ratio.

There are 47 observations for 10 firms, and all six variables were used. The debt-to-capital regression model is significant, with an explanatory power of 54.9%. The size of the firm has the highest positive relationship with all types of debt ratios. Although small, the growth factor has a significant positive relationship with the total debt ratio, but it is insignificant for its two components; this is in contradiction with the expectation implied by previous empirical work on capital structure. Also, the profitability variable showed a negative relationship, but with no significance. This conclusion is confirmed by the relationship between ROA and debt ratios, where results are insignificant except for the short-term debt ratio. Finally, the government share has a significant negative relationship with total debt. This may be due to the fact that this sector secures more loans from government agencies rather than issuing debt to commercial financial institutions.

There are 19 observations for four firms, and all six variables were used.
B.2. Cement Sector

The cement industry was a well-performing industry during the period of study, with an average ROA of around 21% and an average growth rate of 30%. This may be due to the increasing demand for its product, which led most cement companies to undergo expansion projects to increase their production capacity. This may explain the sharp increase in debt financing in the last three years, as shown in Figure 3 below. One important note on these results is the relative absence of short-term debt in the whole sector during the period of study.

Figure 3: Cement Sector Average Debt Ratios
[Figure 3 plots the cement sector's total, long-term and short-term debt/capital ratios over 1993-1997.]

There are 38 observations for eight firms. Since there is no government subsidy provided to this sector, the related variable was dropped, in addition to the profit margin measure, because of the unavailability of sales figures. Table 7 shows the regression results for the cement sector. Results show that all regression variables were insignificant.

B.3. Electricity Sector

All firms in this sector have long-term loans, but only one used short-term debt, twice, in the period of study, as shown in Figure 4 below. This sector is characterized by high leverage ratios for the following reasons. First, the government operates and runs these companies, owning between 50% and 98% of their shares. Second, the government subsidizes their operations. Third is the increasing demand for electricity kingdom-wide, as a result of which expansion projects were undertaken. Also, the nature of utility companies is that long-term loans normally exceed short-term loans. Table 8 shows the regression results for the electricity sector.

Figure 4: Electricity Sector Average Debt Ratios
[Figure 4 plots the electricity sector's total, long-term and short-term debt/capital ratios over 1993-1997.]

The overall F-statistic is significant for all types of leverage except short-term debt, due to the limited usage of such financing in this sector. Results reveal that growth, size and ROA are significant factors in explaining the variations in debt-equity ratios. The growth factor is consistent with expectation, having a significant negative relationship, while size also shows a negative one. The strong relationship between ROA and capital structure variations may be explained by the fact that the sector mostly has a negative ROA.

Figure 5: Agricultural Sector Average Ratios
[Figure 5 plots the agricultural sector's total, long-term and short-term debt/capital ratios over 1993-1997.]
B.4. Agricultural Sector

This sector is characterised by a very low ROA, around 1% on average, and a small average size. There are 24 observations for five firms. All the variables except government subsidy were included, because there was none during the period of study. Figure 5 shows the debt ratio averages for the last five years. Table 9 shows the regression results for the agricultural sector. The F-statistic is significant, but the model fails to explain the individual relationships with any significance, except for the size factor.

B.5. Services Sector

The services sector has the highest growth rate in the sample, as measured by the percentage change of sales (51%). Four of the eight companies used in our sample do not have any debt financing. The sector used about 7% total leverage, where about 4% is in long-term debt and the remaining 3% in short-term debt. An important fact to notice is the absence of direct government subsidy to all firms of this sector. Figure 6 shows the debt ratio history over the period of study. Table 10 shows the regression results for the services sector. The overall F-statistic is significant, but only the government share factor was documented to have a positive significant relationship.

Figure 6: Services Sector Average Leverage Ratios
[Figure 6 plots the services sector's total, long-term and short-term debt/capital ratios over 1993-1997.]

In summary, the type of business of the sampled firms showed an effect on their leverage ratios. The leverage ratios of the cement and services sectors were below the sample average, and the agricultural sector shows a very low leverage ratio.

C. Analysis by Size

In this part of the analysis, our sampled firms were classified into four categories based on their size, measured by total assets. Table 11 shows these categories with the averages of the studied variables. Figure 7 tries to depict the relationship between size and average leverage ratios; it seems to support the notion of a negative relationship between size and short-term leverage, and a positive relationship between size and both long-term and total leverage.

Figure 7: Average Leverage Ratios for Size Classes
[Figure 7 plots average D/C, LD/C and SD/C ratios for the four size classes: small, medium, large and very large.]

C.1. Small Firms

There are 21 observations in this category. The average debt ratio is 0.5%, mainly in the form of short-term debt; there are no firms in this category that financed their activities using long-term debt. This may be due to the high growth ratio of these companies. The model was able to explain about 31% of the variations in the capital structure of these firms. The relationship with growth is consistent with the expectation of being negative.
Also, the regression for short-term debt is the same as that for total debt, since no firms in this class used long-term debt. All variables were included in the sample, and results are presented in Table 12.

C.2. Medium Firms

There are 51 observations that fall into this class. The results are shown in Table 13. Apparently, the F-statistic is not significant, and the model could not explain the variations in capital structure for this class of firms.

C.3. Large Firms

There are 66 observations that fall into this category. The average debt-to-capital ratio is 12.66%, combining 6.5% of long-term debt and 6.2% of short-term debt. Table 14 shows the regression results for these observations. The overall F-statistic is significant. In this class, profitability measures showed a significant negative relationship with leverage. Further, firm size is in accordance with expectation, having a positive sign.

C.4. Very Large Firms

There are 33 observations for firms that we characterize as very large. The average debt-to-capital ratio shows that these companies used as much debt as equity. Most of the debt is in the form of long-term debt (95% of total debt). It is important to note that the electricity companies are part of this category, as well as SABIC. Also, the average ROA and PM are very low, and the government subsidy is very high. So, results are inclined to support the conclusion that the leverage ratio increases with size, but at a decreasing rate, and that there is a negative relationship between size and the short-term leverage ratio.

D. All Firms Analysis

Figure 8 below shows the trend of our sampled firms in terms of their leverage ratios.

Figure 8: All Firms Average Ratios
[Figure 8 plots the economy-wide average total, long-term and short-term debt/capital ratios over 1993-1997.]

It can be seen that the use of debt has increased over the years of study, especially in the last three years.
Over the last three years, the whole Saudi economy recovered from the consequences of the Gulf war, and the oil industry enjoyed good prices. The short-term debt usage remained almost constant over these years, while the changes were in the level of long-term debt.

For the very large firms, the six variables are included in the models shown in Table 15. The F-statistic is significant, and the variations in the variables of the model were able to explain about 90% of the variations in the capital structure of these firms. As expected, the government share is the highest in these firms.

For all firms together, all variables were used, and results are shown in Table 16. Size and government share are documented to show a positive relationship with leverage. Also, size has a significant positive effect on both total leverage and long-term leverage, and government subsidies have a significant positive effect on the long-term leverage ratio.

It was expected that, in the absence of taxes, the leverage level would be lower due to the absence of its benefit in the form of interest deductions and other tax advantages. We used a sample composed of all publicly traded firms except the financial sector to study the variations in leverage ratios and their determinants.

E. Single Factor Analysis

In this section, simple regression models were used to test the relationship between each factor and the total leverage ratio, using the whole sample, as follows:

Total Debt/Capital = b0 + b1 Government Share %
Total Debt/Capital = b0 + b1 Government Share
Total Debt/Capital = b0 + b1 Government Subsidy
Total Debt/Capital = b0 + b1 Growth
Total Debt/Capital = b0 + b1 Size
Total Debt/Capital = b0 + b1 ROA
Total Debt/Capital = b0 + b1 PM

Table 17 shows the results of these relationships. It was found that all seven variables except two, namely the government share percentage and the growth rate, are statistically significant factors in explaining the variations in leverage ratios. Certainly, the examined factors play a determinant role in determining the leverage level.
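Each single-factor model above is a simple regression with a closed-form slope and intercept. A minimal pure-Python sketch on made-up data (the size and leverage figures below are hypothetical, not taken from the sample):

```python
def simple_ols(x, y):
    """Single-factor regression y = b0 + b1*x via the closed-form OLS solution."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Slope: covariance of x and y over variance of x
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
         sum((xi - mx) ** 2 for xi in x)
    b0 = my - b1 * mx   # intercept passes through the means
    return b0, b1

# Made-up example: leverage rising with log-size
size = [5.0, 6.0, 7.0, 8.0, 9.0]
leverage = [0.05, 0.08, 0.11, 0.14, 0.17]   # exactly -0.10 + 0.03 * size
b0, b1 = simple_ols(size, leverage)
print(round(b0, 4), round(b1, 4))  # → -0.1 0.03
```

The paper's Table 17 additionally reports an F-statistic and R-squared per factor, which this sketch omits.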
III. SUMMARY & CONCLUSIONS

It was expected that, in the absence of taxes, leverage would be lower. In Saudi Arabia, the corporate tax code is unique in that taxes are based on total net worth. There were 171 observations.

The study documented a negative relationship between leverage ratios and growth, profitability and return on assets. Our results documented a negative relation of leverage ratios with ROA and growth rate, though not statistically significant. Also as expected, there were significant negative reactions to variations in the profit margin. It was found that leverage was employed more than average in the electricity sector, average in the industrial sector, and below average in the cement industry; however, it was almost negligible in the services and agricultural sectors. In the regression models, the model significantly explained the variations in the sampled firms' capital structure, particularly for long-term debt, where it reached an explanatory power of 80%.

References

1. Modigliani, F. & Miller, M., "The Cost of Capital, Corporation Finance and the Theory of Investment", The American Economic Review, June 1958, pp. 261-297.
2. Modigliani, F. & Miller, M., "Corporate Income Taxes and the Cost of Capital: A Correction", The American Economic Review, June 1963, pp. 433-443.
3. Myers, S., "The Capital Structure Puzzle", The Journal of Finance, Vol. XXXIX, July 1984, pp. 575-592.
4. Titman, S. & Wessels, R., "The Determinants of Capital Structure Choice", The Journal of Finance, Vol. XLIII, No. 1, 1988, pp. 1-19.
5. Asquith, P. & Mullins, D., Jr., "Equity Issues and Offering Dilution", Journal of Financial Economics, 15, 1986, pp. 61-89.
6. Eckbo, B. E., "Valuation Effects of Corporate Debt Offerings", Journal of Financial Economics, 15, 1986, pp. 119-151.
7. MacKie-Mason, J. K., "Do Taxes Affect Corporate Financing Decisions?", The Journal of Finance, Vol. XLV, No. 5, pp. 1471-1493.
8. Barnea, A., Haugen, R. & Senbet, L., "Market Imperfection, Agency Problems, and Capital Structure: A Review", Financial Management, summer 1981, pp. 7-22.
9. Thies, C. & Klock, M., "Determinants of Capital Structure", Review of Financial Economics, 1992, pp. 40-52.
10. Barclay, M., Smith, C. W. & Watts, R., "The Determinants of Corporate Leverage and Dividend Policy", Journal of Applied Corporate Finance, Vol. 7, No. 4, winter 1995, pp. 4-19.
11. Johnson, S., "The Effects of Bank Debt on Optimal Capital Structure", Financial Management, spring 1998, pp. 47-56.
12. Alsakran, S., "Fixed Income Securities as an Alternative Means of Financing Saudi Public Firms", The Middle East Business and Economic Review, Vol. 11, No. 1, June 1999.
13. "Zakat and Corporate Taxation in Saudi Arabia", Al-Juraid & Company.
Table 1. Saudi Joint Stock Companies Listed in the Saudi Stock Market, by sector (Banking, Industrial, Cement, Electricity, Agriculture, Services), with market capitalization in SR million.
Table 2. Selected Balance Sheet and Income Statement Items (government subsidy, total fixed assets, total assets, current portion of long-term loans, long-term loans, government loans, bank loans, total shareholders' equity, retained earnings, revenue (sales), earnings before Zakat, Zakat, earnings after Zakat (net income)).
Table 3. Correlation Coefficients Between Zakat and Capital Ratios.
Table 4. Correlation Matrix of the dependent and independent variables.
Table 5. Sample Characteristics by sector (average leverage ratios, government share, government subsidy, growth, size, ROA and PM).
Table 6. Industrial Sector Regression Results.
Table 7. Cement Sector Regression Results.
Table 8. Electricity Sector Regression Results.
Table 9. Agricultural Sector Regression Results.
Table 10. Services Sector Regression Results.
Table 11. Sample Classification by Size (Small: total assets < 440; Medium: 440-1200; Large: 1200-3250; Very Large: > 3250; SR million).
Table 12. Small Firms Regression Results.
Table 13. Medium Size Firms Regression Results.
Table 14. Large Size Firms Regression Results.
Table 15. Very Large Firms Regression Results.
Table 16. All Firms Regression Results.
Table 17. Total Debt/Capital Single-Factor Regression Results.
Created on 2015-05-25 23:28 by jab, last changed 2015-05-26 08:50 by rhettinger. This issue is now closed.

Is it intentional that the second assertion in the following code fails?

```
from collections import OrderedDict
d = dict(C='carbon')
o = OrderedDict(d)
assert d == o
assert d.viewitems() == o.viewitems()
```

Since d == o, I'm surprised that d.viewitems() != o.viewitems(). If that's intentional, I'd love to understand the rationale. Note: I hit this while testing a library I authored, which provides an implementation for Python, so I'm especially keen to understand all the subtleties in this area. Thanks in advance.

This question looks similar to: should a list compare equal to a set when the items are equal?

This looks like a bug in Python 2.7:

# Python 2.7
>>> from collections import Set
>>> isinstance({1: 2}.viewitems(), Set)
False

# Python 3.5
>>> from collections import Set
>>> isinstance({1: 2}.items(), Set)
True

I think the dictitems object needs to be registered as a Set. The fix looks something like this:

diff --git a/Lib/_abcoll.py b/Lib/_abcoll.py
--- a/Lib/_abcoll.py
+++ b/Lib/_abcoll.py
@@ -473,6 +473,7 @@
         for key in self._mapping:
             yield (key, self._mapping[key])

+ItemsView.register(type({}.viewitems()))

Will add a more thorough patch with tests later.

I don't know if it is worth backporting this feature (dict views were registered in 1f024a95e9d9), but the patch itself LGTM. I think the tests should be forward-ported to 3.x (if they don't exist in 3.x). Are there generic set tests similar to mapping_tests and seq_tests?

New changeset 9213c70c67d2 by Raymond Hettinger in branch '2.7':
Issue #24286: Register dict views with the MappingView ABCs.

New changeset ff8b603ee51e by Raymond Hettinger in branch 'default':
Issue #24286: Forward port dict view abstract base class tests.

> I don't know if it is worth backporting this feature

I don't think so either. The Iterator registry is a bit of a waste.
> Are there generic set tests similar to mapping_tests and seq_tests? Not that I know of. Also, I don't see the need.
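For context, the Python 3 behavior the 2.7 patch brings things in line with can be checked directly: the built-in dict view types ship registered with the `collections.abc` view ABCs, so equal mappings have equal, Set-like item views. The `FakeItemsView` class below is a made-up example used only to demonstrate the same `register()` mechanism the patch uses.

```python
from collections import OrderedDict
from collections.abc import ItemsView, KeysView, Set

d = dict(C='carbon')
o = OrderedDict(d)

# Equal mappings have equal item views in Python 3...
assert d == o
assert d.items() == o.items()

# ...because the concrete view types are registered with the
# Set-based view ABCs, giving them set comparison semantics.
assert isinstance(d.items(), ItemsView)
assert isinstance(d.keys(), KeysView)
assert isinstance(d.items(), Set)

# The patch above relies on the same ABC registration mechanism:
class FakeItemsView:
    pass

ItemsView.register(FakeItemsView)
assert issubclass(FakeItemsView, ItemsView)
print("ok")
```

Registration makes `FakeItemsView` a virtual subclass only — it gains no methods, just `isinstance`/`issubclass` membership, which is all the equality machinery in `_abcoll.py` needed.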