2. Sorting
 2.1 Insertion Sort
 2.2 Shell Sort
 2.3 Quicksort
 2.4 Comparison of Methods
3. Dictionaries
 3.1 Hash Tables
 3.2 Binary Search Trees
 3.3 Red-Black Trees
 3.4 Skip Lists
 3.5 Comparison of Methods
4. Code Listings
 4.1 Insertion Sort Code
 4.2 Shell Sort Code
 4.3 Quicksort Code
 4.4 Qsort Code
 4.5 Hash Table Code
 4.6 Binary Search Tree Code
 4.7 Red-Black Tree Code
 4.8 Skip List Code
5. Bibliography

Preface

This booklet contains a collection of sorting and searching algorithms. While many books on data structures describe sorting and searching algorithms, most assume a background in calculus and probability theory. Although a formal presentation and proof of asymptotic behavior is important, a more intuitive explanation is often possible. The way each algorithm works is described in easy-to-understand terms. It is assumed that you have knowledge equivalent to an introductory course in C or Pascal. In particular, you should be familiar with arrays and have a basic understanding of pointers. The material is presented in an orderly fashion, beginning with easy concepts and progressing to more complex ideas. Even though this collection is intended for beginners, more advanced users may benefit from some of the insights offered. In particular, the sections on hash tables and skip lists should prove interesting.

Santa Cruz, California
Thomas Niemann
March, 1995

1. Introduction

Arrays and linked lists are two basic data structures used to store information. We may wish to search, insert or delete records in a database based on a key value. This section examines the performance of these operations on arrays and linked lists.

Arrays
------

Figure 1.1 shows an array, seven elements long, containing numeric values. To search the array sequentially, we may use the algorithm in Figure 1.2. The maximum number of comparisons is 7, and occurs when the key we are searching for is in A[6]. If the data is sorted, a binary search may be done (Figure 1.3).
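Rendered in C, the binary search of Figure 1.3 looks roughly like the following. This is a minimal sketch with illustrative sample values; Lb and Ub are inclusive bounds.

```c
/* Binary search over a sorted array of ints; Lb and Ub are inclusive
 * bounds, as in Figure 1.3. Returns the index of Key, or -1 if the
 * key is not present. */
int BinarySearch(int *A, int Lb, int Ub, int Key) {
    while (Lb <= Ub) {
        int M = (Lb + Ub) / 2;   /* examine the middle element */
        if (Key < A[M])
            Ub = M - 1;          /* key must be below index M */
        else if (Key > A[M])
            Lb = M + 1;          /* key must be above index M */
        else
            return M;            /* found */
    }
    return -1;                   /* Lb > Ub: key is not present */
}
```

For a seven-element sorted array such as {4, 7, 16, 20, 37, 38, 43} (illustrative values), at most three iterations are needed, in line with the discussion of halving that follows.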
Variables Lb and Ub keep track of the lower bound and upper bound of the array, respectively. We begin by examining the middle element of the array. If the key we are searching for is less than the middle element, then it must reside in the top half of the array. Thus, we set Ub to (M - 1). This restricts our next iteration through the loop to the top half of the array. In this way, each iteration halves the size of the array to be searched. For example, the first iteration will leave 3 items to test. After the second iteration, there will be 1 item left to test. Thus it takes only three iterations to find any number.

Figure 1.1: An Array

This is a powerful method. For example, if the array size is 1023, we can narrow the search to 511 items in one comparison. Another comparison, and we're looking at only 255 elements. In fact, only 10 comparisons are needed to search an array containing 1023 elements.

However, inserting a number into a sorted array requires shifting all subsequent elements to make room. A similar problem arises when deleting numbers. To improve the efficiency of insert and delete operations, linked lists may be used.

int function SequentialSearch (Array A, int Lb, int Ub, int Key);
begin
  for i = Lb to Ub do
    if (A[i] = Key) then
      return i;
  return -1;
end;

Figure 1.2: Sequential Search

int function BinarySearch (Array A, int Lb, int Ub, int Key);
begin
  do forever
    M = (Lb + Ub) / 2;
    if (Key < A[M]) then
      Ub = M - 1;
    else if (Key > A[M]) then
      Lb = M + 1;
    else
      return M;
    if (Lb > Ub) then
      return -1;
end;

Figure 1.3: Binary Search

Linked Lists

In Figure 1.4 we have the same values stored in a linked list. Assuming pointers X and P, as shown in the figure, value 18 may be inserted as follows:

X->Next = P->Next;
P->Next = X;

Insertion (and deletion) are very efficient using linked lists. You may be wondering how P was set in the first place. Well, we had to search the list in a sequential fashion to find the insertion point for X. Thus, while we improved our insert and delete performance, it has been at the expense of search time.

Figure 1.4: A Linked List

Timing Estimates
Several methods may be used to compare the performance of algorithms. One way is simply to run several tests for each algorithm and compare the timings. Another way is to estimate the time required. For example, we may state that search time is O(n) (big-oh of n). This means that, for large n, search time is no greater than the number of items n in the list. The big-O notation does not describe the exact time that an algorithm takes, but only indicates an upper bound on execution time within a constant factor. If an algorithm takes O(n^2) time, then execution time grows no worse than the square of the size of the list. To see the effect this has, Table 1.1 illustrates growth rates for various functions. A growth rate of O(lg n) occurs for algorithms similar to the binary search. The lg (logarithm, base 2) function increases by one when n is doubled. Recall that we can search twice as many items with one more comparison in the binary search. Thus the binary search is an O(lg n) algorithm.

Table 1.1: Growth Rates

If the values in Table 1.1 represented microseconds, then an O(lg n) algorithm may take 20 microseconds to process 1,048,476 items, an O(n^1.25) algorithm might take 33 seconds, and an O(n^2) algorithm might take up to 12 days! In the following chapters a timing estimate for each algorithm, using big-O notation, will be included. For a more formal derivation of these formulas you may wish to consult the references.

Summary

As we have seen, sorted arrays may be searched efficiently using a binary search. However, we must have a sorted array to start with. In the next section various ways to sort arrays will be examined. It turns out that this is computationally expensive, and considerable research has been done to make sorting algorithms as efficient as possible.

Linked lists improved the efficiency of insert and delete operations, but searches were sequential and time-consuming. Algorithms exist that do all three operations efficiently, and they will be discussed in the section on dictionaries.

2. Sorting
2.1 Insertion Sort

One of the simplest methods to sort an array is sort by insertion. An example of an insertion sort occurs in everyday life while playing cards. To sort the cards in your hand you extract a card, shift the remaining cards, and then insert the extracted card in the correct place. This process is repeated until all the cards are in the correct sequence. Both average and worst-case time is O(n^2). For further reading, consult Knuth[1].

Theory

In Figure 2.1(a) we extract the 3. Then the above elements are shifted down until we find the correct place to insert the 3. This process repeats in Figure 2.1(b) for the number 1. Finally, in Figure 2.1(c), we complete the sort by inserting 2 in the correct place. Assuming there are n elements in the array, we must index through n - 1 entries. For each entry, we may need to examine and shift up to n - 1 other entries. For this reason, sorting is a time-consuming process.

The insertion sort is an in-place sort. That is, we sort the array in-place. No extra memory is required. The insertion sort is also a stable sort. Stable sorts retain the original ordering of keys when identical keys are present in the input data.

Figure 2.1: Insertion Sort

Implementation

An ANSI-C implementation for insertion sort may be found in Section 4.1 (page ). Typedef T and comparison operator CompGT should be altered to reflect the data stored in the table. Pointer arithmetic was used, rather than array references, for efficiency.

2.2 Shell Sort

Shell sort, developed by Donald L. Shell, is a non-stable in-place sort. Shell sort improves on the efficiency of insertion sort by quickly shifting values to their destination. Average sort time is O(n^1.25), while worst-case time is O(n^1.5). For further reading, consult Knuth[1].

Theory

In Figure 2.2(a) we have an example of sorting by insertion. First we extract 1, shift 3 and 5 down one slot, and then insert the 1. Thus, two shifts were required.
In the next frame, two shifts are required before we can insert the 2. The process continues until the last frame, where a total of 2 + 2 + 1 = 5 shifts have been made.

In Figure 2.2(b) an example of shell sort is illustrated. We begin by doing an insertion sort using a spacing of two. In the first frame we examine numbers 3-1. Extracting 1, we shift 3 down one slot for a shift count of 1. Next we examine numbers 5-2. We extract 2, shift 5 down, and then insert 2. After sorting with a spacing of two, a final pass is made with a spacing of one. This is simply the traditional insertion sort. The total shift count using shell sort is 1 + 1 + 1 = 3. By using an initial spacing larger than one, we were able to quickly shift values to their proper destination.

Figure 2.2: Shell Sort

To implement shell sort, various spacings may be used. Typically the array is sorted with a large spacing, the spacing reduced, and the array sorted again. On the final sort, spacing is one. Although shell sort is easy to comprehend, formal analysis is difficult. In particular, optimal spacing values elude theoreticians. Knuth[1] has experimented with several values and recommends that spacing (h) for an array of size N be based on the following formula:

  h_1 = 1
  h_(s+1) = 3 * h_s + 1
  stop with h_t when h_(t+2) >= N

Thus, values of h are computed as follows: 1, 4, 13, 40, 121, 364, ... To sort 100 items we first find an h_s such that h_s >= 100. For 100 items, h_5 = 121 is selected. Our final value (h_t) is two steps lower, or h_3 = 13. Therefore our sequence of h values will be 13-4-1. Once the initial h value has been determined, subsequent values may be calculated using the formula h_(s-1) = h_s / 3.

Implementation

An ANSI-C implementation of shell sort may be found in Section 4.2 (page ). Typedef T and comparison operator CompGT should be altered to reflect the data stored in the array. When computing h, care must be taken to avoid underflows or overflows. The central portion of the algorithm is an insertion sort with a spacing of h. To terminate the inner loop correctly, it is necessary to compare J before decrement.
Otherwise, the pointer value may wrap through zero, resulting in unexpected behavior.

2.3 Quicksort

Although the shell sort algorithm is significantly better than insertion sort, there is still room for improvement. One of the most popular sorting algorithms is quicksort. Quicksort executes in O(n lg n) on average, and O(n^2) in the worst-case. However, with proper precautions, worst-case behavior is very unlikely. Quicksort is a non-stable sort. It is not an in-place sort as stack space is required. For further reading, consult Cormen[2].

Theory

The quicksort algorithm works by partitioning the array to be sorted, then recursively sorting each partition. In Partition (Figure 2.3), one of the array elements is selected as a pivot value. Values smaller than the pivot value are placed to the left of the pivot, while larger values are placed to the right.

int function Partition (Array A, int Lb, int Ub);
begin
  select a pivot from A[Lb]...A[Ub];
  reorder A[Lb]...A[Ub] such that:
    all values to the left of the pivot are <= pivot
    all values to the right of the pivot are >= pivot
  return pivot position;
end;

procedure QuickSort (Array A, int Lb, int Ub);
begin
  if Lb < Ub then
    M = Partition (A, Lb, Ub);
    QuickSort (A, Lb, M - 1);
    QuickSort (A, M + 1, Ub);
end;

Figure 2.3: Quicksort Algorithm

In Figure 2.4(a), the pivot selected is 3. Indices are run starting at both ends of the array. Index i starts on the left and selects an element that is larger than the pivot, while index j starts on the right and selects an element that is smaller than the pivot. These elements are then exchanged, as is shown in Figure 2.4(b). QuickSort recursively sorts the two subarrays, resulting in the array shown in Figure 2.4(c).

Figure 2.4: Quicksort Example

As the indices run, exchanges are made so that correct ordering is maintained. In this manner, QuickSort succeeds in sorting the array. If we're lucky the pivot selected will be the median of all values, thus equally dividing the array. For a moment, let's assume that this is the case.
Since the array is split in half at each step, and Partition must eventually examine all n elements, the run time is O(n lg n).

To find a pivot value, Partition could simply select the first element (A[Lb]). All other values would be compared to the pivot value, and placed either to the left or right of the pivot as appropriate. However, there is one case that fails miserably. Suppose the array was originally in order. Partition would always select the lowest value as a pivot and split the array with one element in the left partition, and Ub - Lb elements in the other. Each recursive call to quicksort would only diminish the size of the array to be sorted by one. Thus, n recursive calls would be required to do the sort, resulting in an O(n^2) run time. One solution to this problem is to randomly select an item as a pivot. This would make it extremely unlikely that worst-case behavior would occur.

Implementation

An ANSI-C implementation of the quicksort algorithm may be found in Section 4.3 (page ). Typedef T and comparison operator CompGT should be altered to reflect the data stored in the array. Several enhancements have been made to the basic quicksort algorithm:

- The center element is selected as a pivot in Partition. If the list is partially ordered, this will be a good choice. Worst-case behavior occurs when the center element happens to be the largest or smallest element each time Partition is invoked.

- For short arrays, InsertSort is called. Due to recursion and other overhead, quicksort is not an efficient algorithm to use on small arrays. Consequently, an array with fewer than 12 elements is sorted using an insertion sort. The optimal cutoff value is not critical and varies based on the quality of generated code.

- Tail recursion occurs when the last statement in a function is a call to the function itself. Tail recursion may be replaced by iteration, which results in a better utilization of stack space. This has been done with the second call to QuickSort in Figure 2.3.
- After an array is partitioned, the smallest partition is sorted first. This results in a better utilization of stack space, as short partitions are quickly sorted and dispensed with.

- Pointer arithmetic, rather than array indices, is used for efficient execution.

Also included is a listing for qsort (Section 4.4, page ), an ANSI-C standard library function usually implemented with quicksort. For this implementation, recursive calls were replaced by explicit stack operations. Table 2.1 shows timing statistics and stack utilization before and after the enhancements were applied.

2.4 Comparison of Methods

In this section we will compare the sorting algorithms covered: insertion sort, shell sort and quicksort. There are several factors that influence the choice of a sorting algorithm:

Stable sort. Recall that a stable sort will leave identical keys in the same relative position in the sorted output. Insertion sort is the only algorithm covered that is stable.

Space. An in-place sort does not require any extra space to accomplish its task. Both insertion sort and shell sort are in-place sorts. Quicksort requires stack space for recursion, and thus is not an in-place sort. However, the amount required was considerably reduced by tinkering with the algorithm.

Time. The time required to sort a dataset can easily become astronomical (Table 1.1). Table 2.2 shows the relative timings for each method. The timing tests are described below.

Simplicity. The number of statements required for each algorithm may be found in Table 2.2. Simpler algorithms result in fewer programming errors.

The time required to sort a randomly ordered dataset is shown in Table 2.3.

3. Dictionaries

3.1 Hash Tables

A dictionary requires that search, insert and delete operations be supported. One of the most effective ways to implement a dictionary is through the use of hash tables. Average time to search for an element is O(1), while worst-case time is O(n). Cormen[2] and Knuth[1] both contain excellent discussions on hashing.
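As a concrete preview, a chained hash table with insert and find can be sketched in a few lines of C. This is an illustrative sketch, not the Section 4.5 listing; the names InsertNode, FindNode and CompEQ follow that listing's conventions, and the table size of 8 matches the example discussed below.

```c
#include <stdlib.h>

#define HashTableSize 8
#define CompEQ(a,b) ((a) == (b))

typedef struct Node_ {
    struct Node_ *Next;   /* next node on this chain */
    int Data;             /* data stored in node */
} Node;

Node *HashTable[HashTableSize];   /* each entry heads a linked list */

/* division method: the remainder indexes the table */
static int Hash(int Key) {
    return Key % HashTableSize;
}

/* insert at the beginning of the appropriate list */
void InsertNode(int Key) {
    int i = Hash(Key);
    Node *x = malloc(sizeof *x);
    x->Data = Key;
    x->Next = HashTable[i];
    HashTable[i] = x;
}

/* hash the key, then chain down the correct list */
Node *FindNode(int Key) {
    Node *p;
    for (p = HashTable[Hash(Key)]; p != NULL; p = p->Next)
        if (CompEQ(p->Data, Key)) return p;
    return NULL;
}
```

Inserting 11 places it on the chain at slot 3 (11 mod 8 = 3); keys 3 and 19 land on the same chain and are found by sequentially scanning that one short list.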
In case you decide to read more material on this topic, you may want to know some terminology. The technique presented here is chaining, also known as open hashing[3]. An alternative technique, known as closed hashing[3], or open addressing[1], is not presented. Got that?

Theory

A hash table is simply an array that is addressed via a hash function. For example, in Figure 3.1, HashTable is an array with 8 elements. Each element is a pointer to a linked list of numeric data. The hash function for this example simply divides the data key by 8, and uses the remainder as an index into the table.

To insert a new item in the table, we hash the key to determine which list the item goes on, and then insert the item at the beginning of the list. For example, to insert 11, we divide 11 by 8 giving a remainder of 3. Thus, 11 goes on the list starting at HashTable[3]. To find a number, we hash the number and chain down the correct list to see if it is in the table. To delete a number, we find the number and remove the node from the linked list.

If the hash function is uniform, or equally distributes the data keys among the hash table indices, then hashing effectively subdivides the list to be searched. Worst-case behavior occurs when all keys hash to the same index. Then we simply have a single linked list that must be sequentially scanned. Consequently, it is important to choose a good hash function. Several methods may be used to hash key values. To illustrate the techniques, I will assume unsigned char is 8-bits, unsigned short int is 16-bits and unsigned long int is 32-bits.

Division method (tablesize = prime). This technique was used in the preceding example. A HashValue, from 0 to (HashTableSize - 1), is computed by dividing the key value by the size of the hash table and taking the remainder. For example:

typedef int HashIndexType;

HashIndexType Hash(int Key) {
    return Key % HashTableSize;
}

Selecting an appropriate HashTableSize is important to the success of this method. For example, a HashTableSize of two would yield even hash values for even Keys, and odd hash values for odd Keys. This is an undesirable property, as all keys would hash to the same value if they happened to be even. If HashTableSize is a power of two, then the hash function simply selects a subset of the Key bits as the table index. To obtain a more random scattering, HashTableSize should be a prime number not too close to a power of two.

Multiplication method (tablesize = 2^n). The multiplication method may be used for a HashTableSize that is a power of 2. The Key is multiplied by a constant, and then the necessary bits are extracted to index into the table. Knuth[1] recommends using the golden ratio, or (sqrt(5) - 1)/2 = 0.6180339887..., as the constant. The following definitions may be used for the multiplication method:

/* 8-bit index */
typedef unsigned char HashIndexType;
static const HashIndexType K = 158;

/* 16-bit index */
typedef unsigned short int HashIndexType;
static const HashIndexType K = 40503;

/* 32-bit index */
typedef unsigned long int HashIndexType;
static const HashIndexType K = 2654435769;

/* w = bitwidth(HashIndexType), size of table = 2**m */
static const int S = w - m;
HashIndexType HashValue = (HashIndexType)(K * Key) >> S;

For example, if HashTableSize is 1024 (2^10), then a 16-bit index is sufficient and S would be assigned a value of 16 - 10 = 6. Thus, we have:

typedef unsigned short int HashIndexType;

HashIndexType Hash(int Key) {
    static const HashIndexType K = 40503;
    static const int S = 6;
    return (HashIndexType)(K * Key) >> S;
}

Variable string addition method (tablesize = 256). To hash a variable-length string, each character is added, modulo 256, to a total. A HashValue, range 0-255, is computed.

typedef unsigned char HashIndexType;

HashIndexType Hash(char *str) {
    HashIndexType h = 0;
    while (*str) h += *str++;
    return h;
}

Variable string exclusive-or method (tablesize = 256).
This method is similar to the addition method, but successfully distinguishes similar words and anagrams. To obtain a hash value in the range 0-255, all bytes in the string are exclusive-or'd together. However, in the process of doing each exclusive-or, a random component is introduced.

typedef unsigned char HashIndexType;
unsigned char Rand8[256];

HashIndexType Hash(char *str) {
    unsigned char h = 0;
    while (*str) h = Rand8[h ^ *str++];
    return h;
}

Rand8 is a table of 256 8-bit unique random numbers. The exact ordering is not critical. The exclusive-or method has its basis in cryptography, and is quite effective[4].

Variable string exclusive-or method (tablesize <= 65536). If we hash the string twice, we may derive a hash value for an arbitrary table size up to 65536. The second time the string is hashed, one is added to the first character. Then the two 8-bit hash values are concatenated together to form a 16-bit hash value.

typedef unsigned short int HashIndexType;
unsigned char Rand8[256];

HashIndexType Hash(char *str) {
    HashIndexType h;
    unsigned char h1, h2;

    if (*str == 0) return 0;
    h1 = *str; h2 = *str + 1;
    str++;
    while (*str) {
        h1 = Rand8[h1 ^ *str];
        h2 = Rand8[h2 ^ *str];
        str++;
    }

    /* h is in range 0..65535 */
    h = ((HashIndexType)h1 << 8) | (HashIndexType)h2;

    /* use division method to scale */
    return h % HashTableSize;
}

Assuming n data items, the hash table size should be large enough to accommodate a reasonable number of entries. As seen in Table 3.1, too small a table size considerably increases the time required to find a key, while beyond a certain point there is much leeway in the choice of table size.

size  time    size  time
   1   869     128     9
   2   432     256     6
   4   214     512     4
   8   106    1024     4
  16    54    2048     3
  32    28    4096     3
  64    15    8192     3

Table 3.1: Hash Table Time (ms), 4096 entries

Implementation

An ANSI-C implementation of a hash table may be found in Section 4.5 (page ). Typedef T and comparison operator CompEQ should be altered to reflect the data stored in the table.
HashTableSize must be determined and the HashTable allocated. The division method was used in the Hash function. InsertNode allocates a new node and inserts it in the table. DeleteNode deletes and frees a node from the table. FindNode searches the table for a particular value.

3.2 Binary Search Trees

In Section 1 we used the binary search algorithm to find data stored in an array. This method is very effective for searching; however, insertions and deletions in an array are expensive. Binary search trees store data in nodes that are linked in a tree-like fashion, allowing all three operations to be done efficiently. For randomly inserted data, search time is O(lg n). Worst-case behavior occurs when ordered data is inserted. In this case the search time is O(n). See Cormen[2] for a more detailed description.

Theory

A binary search tree is a tree where each node has a left and right child. Either child, or both children, may be missing. Figure 3.2 illustrates a binary search tree. Assuming Key represents the value of a given node, then a binary search tree also has the following property: all children to the left of the node have values smaller than Key, and all children to the right of the node have values larger than Key. The top of a tree is known as the root, and the exposed nodes at the bottom are known as leaves. In Figure 3.2, the root is node 20 and the leaves are the exposed nodes at the bottom.

To search a tree for a given value, we begin at the root and work down. For example, to search for 16, we first note that 16 < 20, so we traverse to the left child. The second comparison finds that 16 > 7, so we traverse to the right child. On the third comparison, we succeed.

Each comparison results in reducing the number of items to inspect by one-half. In this respect, the algorithm is similar to a binary search on an array. However, this is true only if the tree is balanced. For example, Figure 3.3 shows another tree containing the same values. While it is a binary search tree, its behavior is more like that of a linked list, with search time increasing proportional to the number of elements stored.

Insertion and Deletion

Let us examine insertions in a binary search tree to determine the conditions that can cause an unbalanced tree. To insert an 18 in the tree of Figure 3.2, we first search for that number; the search arrives at node 16 with nowhere to go. Since 18 > 16, we simply add node 18 as the right child of node 16 (Figure 3.4). Now we can see how an unbalanced tree can occur. If the data is presented in an ascending sequence, each node will be added to the right of the previous node. This will create one long chain, or linked list.
However, if data is presented for insertion in a random order, then a more balanced tree is possible.

Deletions are similar, but require that the binary search tree property be maintained. For example, if node 20 in Figure 3.4 is removed, it must be replaced by node 37. This results in the tree shown in Figure 3.5. The rationale for this choice is as follows. The successor for node 20 must be chosen such that all nodes to the right are larger. Thus, we need to select the smallest valued node to the right of node 20.

Implementation

An ANSI-C implementation of a binary search tree may be found in Section 4.6 (page ). Typedef T and comparison operators CompLT and CompEQ should be altered to reflect the data stored in the tree. Each Node consists of Left, Right and Parent pointers designating each child and the parent. Data is stored in the Data field. The tree is based at Root, and is initially NULL. InsertNode allocates a new node and inserts it in the tree. DeleteNode deletes and frees a node from the tree. FindNode searches the tree for a particular value.

3.3 Red-Black Trees

Binary search trees work best when they are balanced, or the path length from root to any leaf is within some bounds. The red-black tree algorithm is a method for balancing trees. The name derives from the fact that each node is colored red or black, and the color of the node is instrumental in determining the balance of the tree. During insert and delete operations, nodes may be rotated to maintain tree balance. Both average and worst-case search time is O(lg n).

This is, perhaps, the most difficult section in the book. If you get glassy-eyed looking at tree rotations, try skipping to skip lists, the next section. For further reading, Cormen[2] has an excellent section on red-black trees.

Theory

A red-black tree is a balanced binary search tree with the following properties[2]:

1. Every node is colored red or black.
2. Every leaf is a NIL node, and is colored black.
3. If a node is red, then both its children are black.
4.
Every simple path from a node to a descendant leaf contains the same number of black nodes.

The number of black nodes on a path from root to leaf is known as the black height of a tree. These properties guarantee that any path from the root to a leaf is no more than twice as long as any other. To see why this is true, consider a tree with a black height of two. The shortest distance from root to leaf is two, where both nodes are black. The longest distance from root to leaf is four, where the nodes are colored (root to leaf): red, black, red, black. It is not possible to insert more black nodes as this would violate property 4, the black-height requirement. Since red nodes must have black children (property 3), having two red nodes in a row is not allowed. Thus, the largest path we can construct consists of an alternation of red-black nodes, or twice the length of a path containing only black nodes. All operations on the tree must maintain the properties listed above. In particular, operations which insert or delete items from the tree must abide by these rules.

Insertion

To insert a node, we search the tree for an insertion point, and add the node to the tree. A new node will always be inserted as a leaf node at the bottom of the tree. After insertion, the node is colored red. Then the parent of the node is examined to determine if the red-black tree properties have been violated. If necessary, we recolor the node and do rotations to balance the tree.

By inserting a red node, we have preserved the black-height property (property 4). However, property 3 may be violated. This property states that both children of a red node must be black. While both children of the new node are black (they're NIL), consider the case where the parent of the new node is red. Inserting a red node under a red parent would violate this property. There are two cases to consider:

Red parent, red uncle: Figure 3.6 illustrates a red-red violation.
Node X is the newly inserted node, with both parent and uncle colored red. A simple recoloring removes the red-red violation. After recoloring, the grandparent (node B) must be checked for validity, as its parent may be red. Note that this has the effect of propagating a red node up the tree. On completion, the root of the tree is marked black. If it was originally red, then this has the effect of increasing the black-height of the tree.

Red parent, black uncle: Figure 3.7 illustrates a red-red violation, where the uncle is colored black. Here the nodes may be rotated, with the subtrees adjusted as shown. At this point the algorithm may terminate as there are no red-red conflicts and the top of the subtree (node A) is colored black. Note that if node X was originally a right child, a left rotation would be done first, making the node a left child.

Each adjustment made while inserting a node causes us to travel up the tree one step. At most 1 rotation (2 if the node is a right child) will be done, as the algorithm terminates in this case. The technique for deletion is similar.

Implementation

An ANSI-C implementation of a red-black tree may be found in Section 4.7 (page ). Typedef T and comparison operators CompLT and CompEQ should be altered to reflect the data stored in the tree. Each Node consists of Left, Right and Parent pointers designating each child and the parent. The node color is stored in Color, and is either Red or Black. The data is stored in the Data field. All leaf nodes of the tree are Sentinel nodes, to simplify coding. The tree is based at Root, and initially is a Sentinel node.

InsertNode allocates a new node and inserts it in the tree. Subsequently, it calls InsertFixup to ensure that the red-black tree properties are maintained. DeleteNode deletes a node from the tree. To maintain red-black tree properties, DeleteFixup is called. FindNode searches the tree for a particular value.
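The rotations used during fixup are mechanical. The following sketch shows a left rotation about a node X, using Left/Right/Parent fields and a shared Sentinel leaf as described above. It is an illustrative sketch, not the Section 4.7 listing.

```c
#include <stddef.h>

typedef struct Node_ {
    struct Node_ *Left, *Right, *Parent;
    int Data;
} Node;

Node *Root;       /* tree root */
Node *Sentinel;   /* shared NIL leaf, colored black */

/* Left rotation about X: X's right child Y takes X's place,
 * X becomes Y's left child, and Y's old left subtree moves
 * under X. In-order key ordering is preserved. */
void RotateLeft(Node *X) {
    Node *Y = X->Right;

    /* move Y's left subtree under X */
    X->Right = Y->Left;
    if (Y->Left != Sentinel) Y->Left->Parent = X;

    /* link Y to X's old parent */
    Y->Parent = X->Parent;
    if (X->Parent == NULL)
        Root = Y;
    else if (X == X->Parent->Left)
        X->Parent->Left = Y;
    else
        X->Parent->Right = Y;

    /* put X below Y */
    Y->Left = X;
    X->Parent = Y;
}
```

A right rotation is the mirror image; the insertion fixup applies one or the other depending on which side of the grandparent the red-red violation lies.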
3.4 Skip Lists

Skip lists are linked lists that allow you to skip to the correct node. Thus the performance bottleneck inherent in a sequential scan is avoided, while insertion and deletion remain relatively efficient. Average search time is O(lg n). Worst-case search time is O(n), but is extremely unlikely. An excellent reference for skip lists is Pugh[5].

Theory

The indexing scheme employed in skip lists is similar in nature to the method used to lookup names in an address book. To lookup a name, you index to the tab representing the first character of the desired entry. In Figure 3.8, for example, the top-most list represents a simple linked list with no tabs. Adding tabs (middle figure) facilitates the search. In this case, level-1 pointers are traversed. Once the correct segment of the list is found, level-0 pointers are traversed to find the specific entry.

The indexing scheme may be extended as shown in the bottom figure, where we now have an index to the index. To locate an item, level-2 pointers are traversed until the correct segment of the list is identified, and the search then proceeds through the level-1 and level-0 pointers as before.

During insertion, the number of pointers required for the new node must be determined. This is resolved with a probabilistic technique: a random number generator is used to toss a computer coin. When inserting a new node, the coin is tossed to determine if it should be level-1. If you win, the coin is tossed again to determine if the node should be level-2. Another win, and the coin is tossed to determine if the node should be level-3. This process repeats until you lose.

The skip list algorithm has a probabilistic component, and thus has a probabilistic bounds on the time required to execute. However, these bounds are quite tight in normal circumstances. For example, to search a list containing 1000 items, the probability that search time will be 5 times the average is about 1 in 1,000,000,000,000,000,000[5].

Figure 3.8: Skip List Construction

Implementation

An ANSI-C implementation of a skip list may be found in Section 4.8 (page ). Typedef T and comparison operators CompLT and CompEQ should be altered to reflect the data stored in the list. In addition, MAXLEVEL should be set based on the maximum size of the dataset. To initialize, InitList is called.
The list header is allocated and ini tialized. To indicate an empty list, all levels are set to point to the header. InsertNode allocates a new node and inserts it in the list. InsertNode first searches for the correct insertion point. While searching, the update array mai ntains pointers to the upper-level nodes encountered. This information is subse quently used to establish correct links for the newly inserted node. NewLevel i s determined using a random number generator, and the node allocated. The forw ard links are then established using information from the update array. DeleteN ode deletes and frees a node, and is implemented in a similar manner. FindNode searches the list for a particular value. a) Comparison of Methods We have seen several ways to construct dictionaries: hash tables, unbalanced bin ary. Thi s is especially true if many small nodes are to be allocated. For hash tables, only one forward pointer per node is required. In addition, th e hash table itself must be allocated. For red-black trees, each node has a left, right and parent pointer. In additio n, the color of each node must be recorded. Although this requires only one bit , more space may be allocated to ensure that the size of the structure is proper ly aligned. Thus, each node in a red-black tree requires enough space for 3-4 p ointers. For skip lists, each node has a level-0 forward pointer. The probability of hav ing a level-1 pointer is 1 2. The probability of having a level-2 pointer is 1 4. In general, the number of forward pointers per node is Time. The algorithm should be efficient. This is especially true if a large da taset is expected. Table 3.2 compares the search time for each algorithm. Note that worst-case behavior for hash tables and skip lists is extremely unlikely. Actual timing tests are described below. Simplicity. If the algorithm is short and easy to understand, fewer mistakes ma y be made. 
This not only makes your life easy, but the maintenance programmer entrusted with the task of making repairs will appreciate any efforts you make in this area. The number of statements required for each algorithm is listed in Table 3.2.

    method            statements   average time   worst-case time
    hash table            26          O(1)            O(n)
    unbalanced tree       41          O(lg n)         O(n)
    red-black tree       120          O(lg n)         O(lg n)
    skip list             55          O(lg n)         O(n)

    Table 3.2: Comparison of Dictionaries

Average time for insert, search and delete operations on a database of 65,536 (2^16) randomly input items may be found in Table 3.3. For this test the hash table size was 10,009 and 16 index levels were allowed for the skip list. While there is some variation in the timings for the four methods, they are close enough so that other considerations should come into play when selecting an algorithm.

    method            insert   search   delete
    hash table           18        8       10
    unbalanced tree      37       17       26
    red-black tree       40       16       37
    skip list            48       31

    Table 3.3: Average Time (ms), 65,536 Items, Random Input

Table 3.4 shows the average search time for two sets of data: a random set, where all values are unique, and an ordered set, where values are in ascending order. Ordered input creates a worst-case scenario for unbalanced tree algorithms, as the tree ends up being a simple linked list. The times shown are for a single search operation. If we were to search for all items in a database of 65,536 values, a red-black tree algorithm would take 0.6 seconds, while an unbalanced tree algorithm would take 1 hour.

                     count   hash table   unbalanced tree   red-black tree   skip list
    random input        16        4              3                2               5
                       256        3              4                4               9
                     4,096        3              7                6              12
                    65,536        6
    ordered input    4,096        3          1,033                6              11
                    65,536        7         55,019                9              15

    Table 3.4: Average Search Time (us)

4. Code Listings

4.1 Insertion Sort Code

typedef int T;
typedef int TblIndex;

#define CompGT(a,b) (a > b)

void InsertSort(T *Lb, T *Ub) {
    T V, *I, *J, *Jmin;

   /************************
    *  Sort Array[Lb..Ub]  *
    ************************/

    Jmin = Lb - 1;
    for (I = Lb + 1; I <= Ub; I++) {
        V = *I;

        /* Shift elements down until */
        /* insertion point found.
*/
        for (J = I-1; J != Jmin && CompGT(*J, V); J--)
            *(J+1) = *J;
        *(J+1) = V;
    }
}

4.2 Shell Sort Code

typedef int T;
typedef int TblIndex;

#define CompGT(a,b) (a > b)

void ShellSort(T *Lb, T *Ub) {
    TblIndex H, N;
    T V, *I, *J, *Min;

   /**************************
    *  Sort array A[Lb..Ub]  *
    **************************/

    /* compute largest increment */
    N = Ub - Lb + 1;
    H = 1;
    if (N < 14)
        H = 1;
    else if (sizeof(TblIndex) == 2 && N > 29524)
        H = 3280;
    else {
        while (H < N) H = 3*H + 1;
        H /= 3;
        H /= 3;
    }

    while (H > 0) {

        /* sort-by-insertion in increments of H */
        /* Care must be taken for pointers that */
        /* wrap through zero.                   */
        Min = Lb + H;
        for (I = Min; I <= Ub; I++) {
            V = *I;
            for (J = I-H; CompGT(*J, V); J -= H) {
                *(J+H) = *J;
                if (J <= Min) {
                    J -= H;
                    break;
                }
            }
            *(J+H) = V;
        }

        /* compute next increment */
        H /= 3;
    }
}

4.3 Quicksort Code

typedef int T;
typedef int TblIndex;

#define CompGT(a,b) (a > b)

T *Partition(T *Lb, T *Ub) {
    T V, Pivot, *I, *J, *P;
    unsigned int Offset;

   /*****************************
    *  partition Array[Lb..Ub]  *
    *****************************/

    /* select pivot and exchange with 1st element */
    Offset = (Ub - Lb)>>1;
    P = Lb + Offset;
    Pivot = *P;
    *P = *Lb;

    I = Lb + 1;
    J = Ub;
    while (1) {
        while (I < J && CompGT(Pivot, *I)) I++;
        while (J >= I && CompGT(*J, Pivot)) J--;
        if (I >= J) break;
        V = *I; *I = *J; *J = V;
        J--; I++;
    }

    /* pivot belongs in A[j] */
    *Lb = *J;
    *J = Pivot;

    return J;
}

void QuickSort(T *Lb, T *Ub) {
    T *M;

   /**************************
    *  Sort array A[Lb..Ub]  *
    **************************/

    while (Lb < Ub) {

        /* quickly sort short lists */
        if (Ub - Lb <= 12) {
            InsertSort(Lb, Ub);
            return;
        }

        /* partition into two segments */
        M = Partition (Lb, Ub);

        /* sort the smallest partition    */
        /* to minimize stack requirements */
        if (M - Lb <= Ub - M) {
            QuickSort (Lb, M - 1);
            Lb = M + 1;
        } else {
            QuickSort (M + 1, Ub);
            Ub = M - 1;
        }
    }
}

4.4 Qsort Code

#include <limits.h>

#define MAXSTACK (sizeof(size_t) * CHAR_BIT)

static void Exchange(void *a, void *b, size_t size) {
    size_t i;

   /******************
    *  exchange a,b  *
    ******************/

    for (i = sizeof(int); i <= size; i += sizeof(int)) {
        int t = *((int *)a);
        *(((int *)a)++) = *((int *)b);
        *(((int
*)b)++) = t;
    }
    for (i = i - sizeof(int) + 1; i <= size; i++) {
        char t = *((char *)a);
        *(((char *)a)++) = *((char *)b);
        *(((char *)b)++) = t;
    }
}

void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *)) {
    void *LbStack[MAXSTACK], *UbStack[MAXSTACK];
    int sp;
    unsigned int Offset;

   /********************
    *  ANSI-C qsort()  *
    ********************/

    LbStack[0] = (char *)base;
    UbStack[0] = (char *)base + (nmemb-1)*size;
    for (sp = 0; sp >= 0; sp--) {
        char *Lb, *Ub, *M;
        char *P, *I, *J;

        Lb = LbStack[sp];
        Ub = UbStack[sp];

        while (Lb < Ub) {

            /* select pivot and exchange with 1st element */
            Offset = (Ub - Lb) >> 1;
            P = Lb + Offset - Offset % size;
            Exchange (Lb, P, size);

            /* partition into two segments */
            I = Lb + size;
            J = Ub;
            while (1) {
                while (I < J && compar(Lb, I) > 0) I += size;
                while (J >= I && compar(J, Lb) > 0) J -= size;
                if (I >= J) break;
                Exchange (I, J, size);
                J -= size;
                I += size;
            }

            /* pivot belongs in A[j] */
            Exchange (Lb, J, size);
            M = J;

            /* keep processing smallest segment, and stack largest */
            if (M - Lb <= Ub - M) {
                if (M + size < Ub) {
                    LbStack[sp] = M + size;
                    UbStack[sp++] = Ub;
                }
                Ub = M - size;
            } else {
                if (M - size > Lb) {
                    LbStack[sp] = Lb;
                    UbStack[sp++] = M - size;
                }
                Lb = M + size;
            }
        }
    }
}

4.5 Hash Table Code

#include <stdlib.h>
#include <stdio.h>

/* modify these lines to establish data type */
typedef int T;
#define CompEQ(a,b) (a == b)

typedef struct Node_ {
    struct Node_ *Next;         /* next node            */
    T Data;                     /* data stored in node  */
} Node;

typedef int HashTableIndex;

Node **HashTable;
int HashTableSize;

HashTableIndex Hash(T Data) {

    /* division method */
    return (Data % HashTableSize);
}

Node *InsertNode(T Data) {
    Node *p, *p0;
    HashTableIndex bucket;

    /* insert node at beginning of list */
    bucket = Hash(Data);
    if ((p = malloc(sizeof(Node))) == 0) {
        fprintf (stderr, "out of memory (InsertNode)\n");
        exit(1);
    }
    p0 = HashTable[bucket];
    HashTable[bucket] = p;
    p->Next = p0;
    p->Data = Data;
    return p;
}

void DeleteNode(T Data) {
    Node *p0, *p;
    HashTableIndex bucket;

    /* find node */
    p0 = 0;
    bucket = Hash(Data);
    p =
HashTable[bucket];
    while (p && !CompEQ(p->Data, Data)) {
        p0 = p;
        p = p->Next;
    }
    if (!p) return;

    /* p designates node to delete, remove it from list */
    if (p0)
        /* not first node, p0 points to previous node */
        p0->Next = p->Next;
    else
        /* first node on chain */
        HashTable[bucket] = p->Next;
    free (p);
}

Node *FindNode (T Data) {
    Node *p;

    p = HashTable[Hash(Data)];
    while (p && !CompEQ(p->Data, Data))
        p = p->Next;
    return p;
}

4.6 Binary Search Tree Code

#include <stdio.h>
#include <stdlib.h>

/* modify these lines to establish data type */
typedef int T;
#define CompLT(a,b) (a < b)
#define CompEQ(a,b) (a == b)

typedef struct Node_ {
    struct Node_ *Left;         /* left child           */
    struct Node_ *Right;        /* right child          */
    struct Node_ *Parent;       /* parent               */
    T Data;                     /* data stored in node  */
} Node;

Node *Root = NULL;

Node *InsertNode(T Data) {
    Node *X, *Current, *Parent;

   /***********************************************
    *  allocate node for Data and insert in tree  *
    ***********************************************/

    /* setup new node */
    if ((X = malloc (sizeof(*X))) == 0) {
        fprintf (stderr, "insufficient memory (InsertNode)\n");
        exit(1);
    }
    X->Data = Data;
    X->Left = NULL;
    X->Right = NULL;

    /* find X's parent */
    Current = Root;
    Parent = 0;
    while (Current) {
        if (CompEQ(X->Data, Current->Data)) return (Current);
        Parent = Current;
        Current = CompLT(X->Data, Current->Data) ?
Current->Left : Current->Right;
    }
    X->Parent = Parent;

    /* insert X in tree */
    if (Parent)
        if (CompLT(X->Data, Parent->Data))
            Parent->Left = X;
        else
            Parent->Right = X;
    else
        Root = X;

    return(X);
}

void DeleteNode(Node *Z) {
    Node *X, *Y;

   /*****************************
    *  delete node Z from tree  *
    *****************************/

    if (!Z) return;

    /* Y will be removed from the parent chain */
    /* find tree successor */
    if (Z->Left == NULL || Z->Right == NULL)
        Y = Z;
    else {
        Y = Z->Right;
        while (Y->Left != NULL) Y = Y->Left;
    }

    /* X is Y's only child */
    if (Y->Left != NULL)
        X = Y->Left;
    else
        X = Y->Right;

    /* remove Y from the parent chain */
    if (X) X->Parent = Y->Parent;
    if (Y->Parent)
        if (Y == Y->Parent->Left)
            Y->Parent->Left = X;
        else
            Y->Parent->Right = X;
    else
        Root = X;

    /* Y is the node we're removing; Z is the data we're removing. */
    /* If Z and Y are not the same, replace Z with Y.              */
    if (Y != Z) {
        Y->Left = Z->Left;
        if (Y->Left) Y->Left->Parent = Y;
        Y->Right = Z->Right;
        if (Y->Right) Y->Right->Parent = Y;
        Y->Parent = Z->Parent;
        if (Z->Parent)
            if (Z == Z->Parent->Left)
                Z->Parent->Left = Y;
            else
                Z->Parent->Right = Y;
        else
            Root = Y;
        free (Z);
    } else {
        free (Y);
    }
}

Node *FindNode(T Data) {
    Node *Current = Root;

   /*******************************
    *  find node containing Data  *
    *******************************/

    while (Current != NULL)
        if (CompEQ(Data, Current->Data))
            return (Current);
        else
            Current = CompLT (Data, Current->Data) ?
Current->Left : Current->Right;
    return(0);
}

4.7 Red-Black Tree Code

#include <stdlib.h>
#include <stdio.h>

/* modify these lines to establish data type */
typedef int T;
#define CompLT(a,b) (a < b)
#define CompEQ(a,b) (a == b)

/* red-black tree description */
typedef enum { Black, Red } NodeColor;

typedef struct Node_ {
    struct Node_ *Left;         /* left child               */
    struct Node_ *Right;        /* right child              */
    struct Node_ *Parent;       /* parent                   */
    NodeColor Color;            /* node color (black, red)  */
    T Data;                     /* data stored in node      */
} Node;

#define NIL &Sentinel           /* all leafs are sentinels */
Node Sentinel = { NIL, NIL, 0, Black, 0};

Node *Root = NIL;               /* root of red-black tree */

Node *InsertNode(T Data) {
    Node *Current, *Parent, *X;

   /***********************************************
    *  allocate node for Data and insert in tree  *
    ***********************************************/

    /* setup new node */
    if ((X = malloc (sizeof(*X))) == 0) {
        printf ("insufficient memory (InsertNode)\n");
        exit(1);
    }
    X->Data = Data;
    X->Left = NIL;
    X->Right = NIL;
    X->Parent = 0;
    X->Color = Red;

    /* find where node belongs */
    Current = Root;
    Parent = 0;
    while (Current != NIL) {
        if (CompEQ(X->Data, Current->Data)) return (Current);
        Parent = Current;
        Current = CompLT(X->Data, Current->Data) ?
Current->Left : Current->Right;
    }

    /* insert node in tree */
    if (Parent) {
        if (CompLT(X->Data, Parent->Data))
            Parent->Left = X;
        else
            Parent->Right = X;
        X->Parent = Parent;
    } else
        Root = X;

    InsertFixup(X);
    return(X);
}

void InsertFixup(Node *X) {

   /*************************************
    *  maintain red-black tree balance  *
    *  after inserting node X           *
    *************************************/

    /* check red-black properties */
    while (X != Root && X->Parent->Color == Red) {
        /* we have a violation */
        if (X->Parent == X->Parent->Parent->Left) {
            Node *Y = X->Parent->Parent->Right;
            if (Y->Color == Red) {
                /* uncle is red */
                X->Parent->Color = Black;
                Y->Color = Black;
                X->Parent->Parent->Color = Red;
                X = X->Parent->Parent;
            } else {
                /* uncle is black */
                if (X == X->Parent->Right) {
                    /* make X a left child */
                    X = X->Parent;
                    RotateLeft(X);
                }
                /* recolor and rotate */
                X->Parent->Color = Black;
                X->Parent->Parent->Color = Red;
                RotateRight(X->Parent->Parent);
            }
        } else {
            /* mirror image of above code */
            Node *Y = X->Parent->Parent->Left;
            if (Y->Color == Red) {
                /* uncle is red */
                X->Parent->Color = Black;
                Y->Color = Black;
                X->Parent->Parent->Color = Red;
                X = X->Parent->Parent;
            } else {
                /* uncle is black */
                if (X == X->Parent->Left) {
                    X = X->Parent;
                    RotateRight(X);
                }
                X->Parent->Color = Black;
                X->Parent->Parent->Color = Red;
                RotateLeft(X->Parent->Parent);
            }
        }
    }
    Root->Color = Black;
}

void RotateLeft(Node *X) {

   /***************************
    *  rotate node X to left  *
    ***************************/

    Node *Y = X->Right;

    /* establish X->Right link */
    X->Right = Y->Left;
    if (Y->Left != NIL) Y->Left->Parent = X;

    /* establish Y->Parent link */
    if (Y != NIL) Y->Parent = X->Parent;
    if (X->Parent) {
        if (X == X->Parent->Left)
            X->Parent->Left = Y;
        else
            X->Parent->Right = Y;
    } else {
        Root = Y;
    }

    /* link X and Y */
    Y->Left = X;
    if (X != NIL) X->Parent = Y;
}

void RotateRight(Node *X) {

   /****************************
    *  rotate node X to right  *
    ****************************/

    Node *Y = X->Left;

    /* establish X->Left link */
    X->Left = Y->Right;
    if (Y->Right != NIL) Y->Right->Parent = X;

    /* establish Y->Parent link */
    if (Y != NIL) Y->Parent = X->Parent;
    if (X->Parent) {
        if (X ==
X->Parent->Right)
            X->Parent->Right = Y;
        else
            X->Parent->Left = Y;
    } else {
        Root = Y;
    }

    /* link X and Y */
    Y->Right = X;
    if (X != NIL) X->Parent = Y;
}

void DeleteNode(Node *Z) {
    Node *X, *Y;

   /*****************************
    *  delete node Z from tree  *
    *****************************/

    if (!Z || Z == NIL) return;

    if (Z->Left == NIL || Z->Right == NIL) {
        /* Y has a NIL node as a child */
        Y = Z;
    } else {
        /* find tree successor with a NIL node as a child */
        Y = Z->Right;
        while (Y->Left != NIL) Y = Y->Left;
    }

    /* X is Y's only child */
    if (Y->Left != NIL)
        X = Y->Left;
    else
        X = Y->Right;

    /* remove Y from the parent chain */
    X->Parent = Y->Parent;
    if (Y->Parent)
        if (Y == Y->Parent->Left)
            Y->Parent->Left = X;
        else
            Y->Parent->Right = X;
    else
        Root = X;

    if (Y != Z) Z->Data = Y->Data;
    if (Y->Color == Black)
        DeleteFixup (X);
    free (Y);
}

void DeleteFixup(Node *X) {

   /*************************************
    *  maintain red-black tree balance  *
    *  after deleting node X            *
    *************************************/

    while (X != Root && X->Color == Black) {
        if (X == X->Parent->Left) {
            Node *W = X->Parent->Right;
            if (W->Color == Red) {
                W->Color = Black;
                X->Parent->Color = Red;
                RotateLeft (X->Parent);
                W = X->Parent->Right;
            }
            if (W->Left->Color == Black && W->Right->Color == Black) {
                W->Color = Red;
                X = X->Parent;
            } else {
                if (W->Right->Color == Black) {
                    W->Left->Color = Black;
                    W->Color = Red;
                    RotateRight (W);
                    W = X->Parent->Right;
                }
                W->Color = X->Parent->Color;
                X->Parent->Color = Black;
                W->Right->Color = Black;
                RotateLeft (X->Parent);
                X = Root;
            }
        } else {
            Node *W = X->Parent->Left;
            if (W->Color == Red) {
                W->Color = Black;
                X->Parent->Color = Red;
                RotateRight (X->Parent);
                W = X->Parent->Left;
            }
            if (W->Right->Color == Black && W->Left->Color == Black) {
                W->Color = Red;
                X = X->Parent;
            } else {
                if (W->Left->Color == Black) {
                    W->Right->Color = Black;
                    W->Color = Red;
                    RotateLeft (W);
                    W = X->Parent->Left;
                }
                W->Color = X->Parent->Color;
                X->Parent->Color = Black;
                W->Left->Color = Black;
                RotateRight (X->Parent);
                X = Root;
            }
        }
    }
    X->Color = Black;
}

Node *FindNode(T Data) {
   /*******************************
    *  find node containing Data  *
    *******************************/

    Node *Current = Root;
    while (Current != NIL)
        if (CompEQ(Data, Current->Data))
            return (Current);
        else
            Current = CompLT (Data, Current->Data) ?
                Current->Left : Current->Right;
    return(0);
}

4.8 Skip List Code

#include <stdio.h>
#include <stdlib.h>

/* define data-type and compare operators here */
typedef int T;
#define CompLT(a,b) (a < b)
#define CompEQ(a,b) (a == b)

/* levels range from (0 .. MAXLEVEL) */
#define MAXLEVEL 15

typedef struct Node_ {
    T Data;                     /* user's data                  */
    struct Node_ *Forward[1];   /* skip list forward pointer    */
} Node;

typedef struct {
    Node *Hdr;                  /* list header                  */
    int ListLevel;              /* current level of list        */
} SkipList;

SkipList List;

#define NIL List.Hdr

void InitList() {
    int i;

   /**************************
    *  initialize skip list  *
    **************************/

    if ((List.Hdr = malloc(sizeof(Node) + MAXLEVEL*sizeof(Node *))) == 0) {
        printf ("insufficient memory (InitList)\n");
        exit(1);
    }
    for (i = 0; i <= MAXLEVEL; i++)
        List.Hdr->Forward[i] = NIL;
    List.ListLevel = 0;
}

Node *InsertNode(T Data) {
    int i, NewLevel;
    Node *update[MAXLEVEL+1];
    Node *X;

   /***********************************************
    *  allocate node for Data and insert in list  *
    ***********************************************/

    /* find where data belongs */
    X = List.Hdr;
    for (i = List.ListLevel; i >= 0; i--) {
        while (X->Forward[i] != NIL
            && CompLT(X->Forward[i]->Data, Data))
            X = X->Forward[i];
        update[i] = X;
    }
    X = X->Forward[0];
    if (X != NIL && CompEQ(X->Data, Data)) return(X);

    /* determine level */
    NewLevel = 0;
    while (rand() < RAND_MAX/2) NewLevel++;
    if (NewLevel > MAXLEVEL) NewLevel = MAXLEVEL;

    if (NewLevel > List.ListLevel) {
        for (i = List.ListLevel + 1; i <= NewLevel; i++)
            update[i] = NIL;
        List.ListLevel = NewLevel;
    }

    /* make new node */
    if ((X = malloc(sizeof(Node) + NewLevel*sizeof(Node *))) == 0) {
        printf ("insufficient memory (InsertNode)\n");
        exit(1);
    }
    X->Data = Data;

    /* update forward links */
    for (i = 0; i <= NewLevel; i++) {
        X->Forward[i] = update[i]->Forward[i];
        update[i]->Forward[i] = X;
    }
    return(X);
}

void DeleteNode(T Data) {
    int
i;
    Node *update[MAXLEVEL+1], *X;

   /*******************************************
    *  delete node containing Data from list  *
    *******************************************/

    /* find where data belongs */
    X = List.Hdr;
    for (i = List.ListLevel; i >= 0; i--) {
        while (X->Forward[i] != NIL
            && CompLT(X->Forward[i]->Data, Data))
            X = X->Forward[i];
        update[i] = X;
    }
    X = X->Forward[0];
    if (X == NIL || !CompEQ(X->Data, Data)) return;

    /* adjust forward pointers */
    for (i = 0; i <= List.ListLevel; i++) {
        if (update[i]->Forward[i] != X) break;
        update[i]->Forward[i] = X->Forward[i];
    }

    free (X);

    /* adjust header level */
    while ((List.ListLevel > 0)
        && (List.Hdr->Forward[List.ListLevel] == NIL))
        List.ListLevel--;
}

Node *FindNode(T Data) {
    int i;
    Node *X = List.Hdr;

   /*******************************
    *  find node containing Data  *
    *******************************/

    for (i = List.ListLevel; i >= 0; i--) {
        while (X->Forward[i] != NIL
            && CompLT(X->Forward[i]->Data, Data))
            X = X->Forward[i];
    }
    X = X->Forward[0];
    if (X != NIL && CompEQ(X->Data, Data)) return (X);
    return(0);
}

5. Bibliography

[1] Donald E. Knuth. The Art of Computer Programming, volume 3. Massachusetts: Addison-Wesley, 1973.

[2] Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algorithms. New York: McGraw-Hill, 1992.

[3] Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. Data Structures and Algorithms. Massachusetts: Addison-Wesley, 1983.

[4] Peter K. Pearson. Fast hashing of variable-length text strings. Communications of the ACM, 33(6):677-680, June 1990.

[5] William Pugh. Skip lists: A probabilistic alternative to balanced trees. Communications of the ACM, 33(6):668-676, June 1990.
Thinking Web Scale (TWS-3): Map-Reduce – Bring compute to data

In the last decade and a half, there has arisen a class of problems that is becoming very critical in the computing domain. These problems deal with computing in highly distributed environments. A key characteristic of this domain is the need to grow elastically with increasing workloads while tolerating failures without missing a beat. In short, I would like to refer to this as 'Web Scale Computing', where the number of servers runs into the hundreds and the data size is of the order of a few hundred terabytes to several exabytes.

There are several features that are unique to large-scale distributed systems:

- The servers used are not specialized machines but regular commodity, off-the-shelf servers
- Failures are not the exception but the norm. The design must be resilient to failures
- There is no global clock. Each individual server has its own internal clock with its own skew and drift rates. Algorithms exist that can create a notion of a global clock
- Operations happen at these machines concurrently. The order of the operations, things like causality and concurrency, can be evaluated through special algorithms like Lamport or vector clocks
- The distributed system must be able to handle failures where servers crash, a disk fails or there is a network problem. For this reason data is replicated across servers, so that if one server fails the data can still be obtained from copies residing on other servers
- Since data is replicated there are associated issues of consistency. Algorithms exist that ensure that the replicated data is either 'strongly' consistent or 'eventually' consistent. Trade-offs are often considered when choosing one of the consistency mechanisms
- Leaders are elected democratically. Then there are dictators who get elected through 'bully'ing
In some ways distributed systems behave like a murmuration of starlings (or a school of fish), where a leader is elected on the fly (pun unintended) and the starlings or fishes change direction based on a few (typically 6) closest neighbors. This series of posts, Thinking Web Scale (TWS), will be about Web Scale problems and the algorithms designed to address them. I would like to keep these posts more essay-like and less pedantic.

In the early days, computing used to be done on a single monolithic machine with its own CPU, RAM and disk. This situation was fine for a long time, as technology promptly kept its date with Moore's Law, which stated that computing power and memory capacity will double every 18 months. However this situation changed drastically as the data generated from machines grew exponentially – whether it was call detail records, records from retail stores, click streams, tweets, or the status updates of the social networks of today. These massive amounts of data cannot be handled by a single machine. We need to 'divide' and 'conquer' this data for processing. Hence there is a need for hundreds of servers, each handling a slice of the data.

The first post is about the fairly recent computing paradigm "Map-Reduce". Map-Reduce is a product of Google Research and was developed to solve their need to create an Inverted Index of Web pages, to compute the Page Rank etc. The algorithm was initially described in a white paper published by Google on the Map-Reduce algorithm. The Page Rank algorithm now powers Google's search, which is now almost indispensable in our daily lives.

Map-Reduce assumes that these servers are not perfect, failure-proof machines. Rather, Map-Reduce folds into its design the assumption that the servers are regular, commodity servers, each performing a part of the task. The hundreds of terabytes of data are split into 16 MB to 64 MB chunks and distributed into a file system known as a 'Distributed File System (DFS)'.
There are several implementations of the Distributed File System. Each chunk is replicated across servers. One of the servers is designated as the 'Master'. This 'Master' allocates tasks to 'worker' nodes. A Master Node also keeps track of the location of the chunks and their replicas.

When the Map or Reduce has to process data, the process is started on the server on which the chunk of data resides. The data is not transferred to the application from another server: the compute is brought to the data and not the other way around. In other words the process is started on the server where the data and intermediate results reside. The reason for this is that it is more expensive to transmit data. Besides, the latencies associated with data transfer can become significant with increasing distances.

Map-Reduce had its genesis in a Lisp construct of the same name, where one could apply a common operation over a list of elements and then reduce the resulting list of elements with a reduce operation. Map-Reduce was originally created by Google to solve the Page Rank problem. Now Map-Reduce is used across a wide variety of problems.

The main components of Map-Reduce are the following:
- Mapper: Converts all d ∈ D to (key(d), value(d))
- Shuffle: Moves all (k, v) and (k', v') with k = k' to the same machine
- Reducer: Transforms {(k, v1), (k, v2), ...} to an output D'_k = f(v1, v2, ...)
- Combiner: If one machine has multiple (k, v1), (k, v2) with the same k, then it can perform part of the Reduce before the Shuffle

A schematic of Map-Reduce is included below.

Map-Reduce is usually a perfect fit for problems that have an inherent property of parallelism. To these classes of problems the map-reduce paradigm can be applied simultaneously to large sets of data.

The "Hello World" equivalent of Map-Reduce is the Word Count problem. Here we simultaneously count the occurrences of words in millions of documents. The map operation scans the documents in parallel and outputs a key-value pair.
The key is the word and the value is the number of occurrences of the word. In this case 'map' will scan each word and emit the word and the value 1 as the key-value pair. So, if the document contained "All men are equal. Some men are more equal than others", Map would output

(all,1), (men,1), (are,1), (equal,1), (some,1), (men,1), (are,1), (more,1), (equal,1), (than,1), (others,1)

The Reduce phase will take the above output and sum all key-value pairs with the same key:

(all,1), (men,2), (are,2), (equal,2), (some,1), (more,1), (than,1), (others,1)

So we get the count of every word in the document.

In Map-Reduce the Master node assigns tasks to Worker nodes, which process the data on the individual chunks.

Map-Reduce also makes short work of dealing with large matrices and can crunch matrix operations like matrix addition, subtraction, multiplication etc.

Matrix-Vector multiplication

As an example consider a matrix-vector multiplication (taken from the book Mining of Massive Datasets by Jure Leskovec, Anand Rajaraman et al.). For an n x n matrix M with the value m_ij in the ith row and jth column, if we need to multiply it with a vector v with components v_j, then the matrix-vector product x = M x v is given by x_i = Σ_j (m_ij × v_j). Here the product m_ij × v_j can be performed by the map function and the summation can be performed by a reduce operation.

The obvious question is, what if the vector v or the matrix M did not fit into memory? In such a situation the vector and matrix are divided into equal-sized slices and the operation is performed across machines. The application would have to work on the data to consolidate the partial results.

Fortunately, several problems in Machine Learning, Computer Vision, Regression and Analytics require large matrix operations, and Map-Reduce can be used very effectively in matrix manipulation. Computation of Page Rank itself involves such matrix operations, which was one of the triggers for the Map-Reduce paradigm.
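The word-count flow described above can be sketched as a minimal, single-process Python simulation. The helper names (mapper, shuffle, reducer) are illustrative; a real Map-Reduce framework such as Hadoop runs these phases across many machines:

```python
from collections import defaultdict

def mapper(document):
    # Emit (word, 1) for every word in the document.
    return [(word.strip(".").lower(), 1) for word in document.split()]

def shuffle(pairs):
    # Group all values with the same key, as the shuffle phase would
    # after moving them to the same machine.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reducer(key, values):
    # Sum the occurrence counts for one word.
    return key, sum(values)

document = "All men are equal. Some men are more equal than others"
counts = dict(reducer(k, vs) for k, vs in shuffle(mapper(document)).items())
print(counts["men"], counts["equal"], counts["others"])  # 2 2 1
```

In a distributed run the mapper calls happen in parallel on the chunks, and the grouping done here by a dictionary is the network shuffle that routes each key to one reducer.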
Handling failures: As mentioned earlier, the Map-Reduce implementation must be resilient to failures, where failures are the norm and not the exception. To handle this the 'master' node periodically checks the health of the 'worker' nodes by pinging them. If the ping response does not arrive, the master marks the worker as 'failed' and restarts the task allocated to the worker to generate the output on a server that is accessible.

Stragglers: Executing a job in parallel brings forth the famous saying 'A chain is only as strong as its weakest link'. So if there is one node which is a straggler and is delayed in computation due to disk errors, the Master Node starts a backup worker and monitors the progress. When either the straggler or the backup completes, the master kills the other process.

Mining social networks and sentiment analysis of the Twitterverse also utilize Map-Reduce. However, Map-Reduce is not a panacea for all of the industry's computing problems (see "To Hadoop, or not to Hadoop"). But Map-Reduce is a very critical paradigm in the distributed computing domain, as it is able to handle mountains of data, can handle multiple simultaneous failures, and is blazingly fast.

To see all posts click 'Index of Posts'

Other posts in this category:
- A Cloud medley with IBM Bluemix, Cloudant DB and Node.js
- Presentation on the "Design principles of scalable, distributed systems" (also see my blog post on this topic, "Design principles of scalable, distributed systems")
- Technological hurdles: 2012 and beyond
- Technologies to watch: 2012 and beyond
- Cache-22: Since writes are asynchronous the data will tend to be "eventually consistent" rather than "strongly consistent", but this is a tradeoff that can be taken into account. Ideally it will be essential to implement the quorum protocol along with the "local reads & global writes" technique to ensure that you read your writes.
- Eliminating the Performance Drag
- To Hadoop, or not to Hadoop
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. – Jamie Zawinski

Introduction

The power of Spark, which operates on in-memory datasets, is the fact that it stores the data as collections using Resilient Distributed Datasets (RDDs), which are themselves distributed in partitions across clusters. RDDs are a fast way of processing data, as the data is operated on in parallel based on the map-reduce paradigm. RDDs can be used when the operations are low level. RDDs are typically used on unstructured data like logs or text. For structured and semi-structured data, Spark has a higher abstraction called Dataframes. Handling data through dataframes is extremely fast, as they are optimized using the Catalyst optimization engine and the performance is orders of magnitude faster than RDDs. In addition, Dataframes also use Tungsten, which handles memory management and garbage collection more effectively. The picture below shows the performance improvement achieved with Dataframes over RDDs.

Benefits from Project Tungsten

Note: The above data and graph are taken from the course Big Data Analysis with Apache Spark at edX, UC Berkeley.

This post is a continuation of my 2 earlier posts
1. Big Data-1: Move into the big league: Graduate from Python to Pyspark
2. Big Data-2: Move into the big league: Graduate from R to SparkR

In this post I perform equivalent operations on a small dataset using RDDs, Dataframes in Pyspark & SparkR, and HiveQL. As in some of my earlier posts, I have used the tendulkar.csv file for this post. The dataset is small and allows me to do most everything from data cleaning, data transformation and grouping etc. You can clone/fork the notebooks from Github at Big Data: Part 3. The notebooks have also been published and can be accessed below.

1.
RDD – Select all columns of tables 1b.RDD – Select columns 1 to 4 [[‘Runs’, ‘Mins’, ‘BF’, ‘4s’], [’15’, ’28’, ’24’, ‘2’], [‘DNB’, ‘-‘, ‘-‘, ‘-‘], [’59’, ‘254’, ‘172’, ‘4’], [‘8′, ’24’, ’16’, ‘1’]] 1c. RDD – Select specific columns 0, 10 [(‘Ground’, ‘Runs’), (‘Karachi’, ’15’), (‘Karachi’, ‘DNB’), (‘Faisalabad’, ’59’), (‘Faisalabad’, ‘8’)] 2. Dataframe:Pyspark –| +—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+ only showing top 5 rows 2a. Dataframe:Pyspark- Select specific columns |Runs| BF|Mins| +—-+—+—-+ | 15| 24| 28| | DNB| -| -| | 59|172| 254| | 8| 16| 24| | 41| 90| 124| +—-+—+—-+ 3. Dataframe:SparkR – Select all columns 3a. Dataframe:SparkR- Select specific columns 1 15 24 28 2 DNB – – 3 59 172 254 4 8 16 24 5 41 90 124 6 35 51 74 4. Hive QL – | +—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+ 4a. Hive QL – Select specific columns +—-+—+—-+ |15 |24 |28 | |DNB |- |- | |59 |172|254 | |8 |16 |24 | |41 |90 |124 | +—-+—+—-+ 5. RDD – Filter rows on specific condition [[‘Runs’, ‘Mins’, ‘BF’, ‘4s’, ‘6s’, ‘SR’, ‘Pos’, ‘Dismissal’, ‘Inns’, ‘Opposition’, ‘Ground’, ‘Start Date’], [’15’, ’28’, ’24’, ‘2’, ‘0’, ‘62.5’, ‘6’, ‘bowled’, ‘2’, ‘v Pakistan’, ‘Karachi’, ’15-Nov-89′], [‘DNB’, ‘-‘, ‘-‘, ‘-‘, ‘-‘, ‘-‘, ‘-‘, ‘-‘, ‘4’, ‘v Pakistan’, ‘Karachi’, ’15-Nov-89′], [’59’, ‘254’, ‘172’, ‘4’, ‘0’, ‘34.3’, ‘6’, ‘lbw’, ‘1’, ‘v Pakistan’, ‘Faisalabad’, ’23-Nov-89′], [‘8′, ’24’, ’16’, ‘1’, ‘0’, ’50’, ‘6’, ‘run out’, ‘3’, ‘v Pakistan’, ‘Faisalabad’, ’23-Nov-89′]] 5a. Dataframe:Pyspark – Filter rows on specific condition |Runs|Mins| BF| 4s| 6s| SR|Pos|Dismissal|Inns|Opposition| Ground|Start Date| +—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+ | 15| 28| 24| 2| 0| 62.5| 6| bowled| 2| | 35| 74| 51| 5| 0|68.62| 6| lbw| 1|v Pakistan| Sialkot| 9-Dec-89| +—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+ only showing top 5 rows 5b. 
Dataframe:SparkR – Filter rows on specific condition 5c Hive QL – Filter rows on specific condition |Runs|BF |Mins| +—-+—+—-+ |15 |24 |28 | |59 |172|254 | |8 |16 |24 | |41 |90 |124 | |35 |51 |74 | |57 |134|193 | |0 |1 |1 | |24 |44 |50 | |88 |266|324 | |5 |13 |15 | +—-+—+—-+ only showing top 10 rows 6. RDD – Find rows where Runs > 50 6a. Dataframe:Pyspark – Find rows where Runs >50 from pyspark.sql import SparkSession |Runs|Mins| BF| 4s| 6s| SR|Pos|Dismissal|Inns| Opposition| Ground|Start Date| +—-+—-+—+—+—+—–+—+———+—-+————–+————+———-+ | 59| 254|172| 4| 0| 34.3| 6| lbw| 1| v Pakistan| Faisalabad| 23-Nov-89| | 57| 193|134| 6| 0|42.53| 6| caught| 3| v Pakistan| Sialkot| 9-Dec-89| | 88| 324|266| 5| 0|33.08| 6| caught| 1| v New Zealand| Napier| 9-Feb-90| | 68| 216|136| 8| 0| 50| 6| caught| 2| v England| Manchester| 9-Aug-90| | 114| 228|161| 16| 0| 70.8| 4| caught| 2| v Australia| Perth| 1-Feb-92| | 111| 373|270| 19| 0|41.11| 4| caught| 2|v South Africa|Johannesburg| 26-Nov-92| | 73| 272|208| 8| 1|35.09| 5| caught| 2|v South Africa| Cape Town| 2-Jan-93| | 50| 158|118| 6| 0|42.37| 4| caught| 1| v England| Kolkata| 29-Jan-93| | 165| 361|296| 24| 1|55.74| 4| caught| 1| v England| Chennai| 11-Feb-93| | 78| 285|213| 10| 0|36.61| 4| lbw| 2| v England| Mumbai| 19-Feb-93| +—-+—-+—+—+—+—–+—+———+—-+————–+————+———-+ 6b. 
Dataframe:SparkR – Find rows where Runs >50 7 RDD – groupByKey() and reduceByKey() (‘Lahore’, 17.0), (‘Adelaide’, 32.6), (‘Colombo (SSC)’, 77.55555555555556), (‘Nagpur’, 64.66666666666667), (‘Auckland’, 5.0), (‘Bloemfontein’, 85.0), (‘Centurion’, 73.5), (‘Faisalabad’, 27.0), (‘Bridgetown’, 26.0)] 7a Dataframe:Pyspark – Compute mean, min and max | Ground| avg(Runs)|min(Runs)|max(Runs)| +————-+—————–+———+———+ | Bangalore| 54.3125| 0| 96| | Adelaide| 32.6| 0| 61| |Colombo (PSS)| 37.2| 14| 71| | Christchurch| 12.0| 0| 24| | Auckland| 5.0| 5| 5| | Chennai| 60.625| 0| 81| | Centurion| 73.5| 111| 36| | Brisbane|7.666666666666667| 0| 7| | Birmingham| 46.75| 1| 40| | Ahmedabad| 40.125| 100| 8| |Colombo (RPS)| 143.0| 143| 143| | Chittagong| 57.8| 101| 36| | Cape Town|69.85714285714286| 14| 9| | Bridgetown| 26.0| 0| 92| | Bulawayo| 55.0| 36| 74| | Delhi|39.94736842105263| 0| 76| | Chandigarh| 11.0| 11| 11| | Bloemfontein| 85.0| 15| 155| |Colombo (SSC)|77.55555555555556| 104| 8| | Cuttack| 2.0| 2| 2| +————-+—————–+———+———+ only showing top 20 rows 7b Dataframe:SparkR – Compute mean, min and max To see all posts click Index of Posts
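The groupByKey()/reduceByKey() distinction used in section 7 above can be sketched in plain Python (toy data and a hypothetical two-partition split, not actual Spark): reduceByKey pre-aggregates inside each partition before the shuffle, so only one partial result per key crosses the network, whereas groupByKey ships every raw value.

```python
# Toy (Ground, Runs) pairs split across two hypothetical partitions.
partitions = [
    [("Adelaide", 30), ("Adelaide", 35), ("Auckland", 5)],
    [("Adelaide", 33), ("Auckland", 5)],
]

def reduce_by_key(parts, op):
    # Pre-aggregate inside each partition (Spark calls this map-side
    # combining) so each partition emits at most one value per key...
    combined = []
    for part in parts:
        local = {}
        for k, v in part:
            local[k] = op(local[k], v) if k in local else v
        combined.append(local)
    # ...then merge the per-partition partials after the shuffle.
    merged = {}
    for local in combined:
        for k, v in local.items():
            merged[k] = op(merged[k], v) if k in merged else v
    return merged

totals = reduce_by_key(partitions, lambda a, b: a + b)
print(totals)  # {'Adelaide': 98, 'Auckland': 10}
```

With groupByKey the first partition would have shipped all three raw pairs across the shuffle; here it ships only two partials, which is why reduceByKey is preferred for associative aggregations like the sums and counts behind the mean/min/max tables above.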
https://gigadom.in/category/distributed-systems/
Debouncing buttons in AVR C++

On the face of it, reading the open/closed state of a button should be straightforward. You would just wire up the button circuit and tap the current flow into one of the Arduino digital pins configured for input. Unfortunately you will soon discover that it's just not that simple. At the point when a button's contacts are being physically closed or opened, the state of the electrical circuit is momentarily noisy. This means that your program will receive a rapid sequence of random HIGH/LOW signals which will certainly confuse it. You could spend money and solve the problem in hardware with some circuitry around your buttons, but it's far easier to just write some code to do it. The technique is straightforward enough: when a button state change is detected we do not act upon it unless the state change lasts for longer than a preset short period of time, known as the debounce delay.

A C++ class solves the problem

I've written a reusable C++ class to do the debouncing for you. Here's the header file.

#ifndef __99AC969B_B0C0_4ddb_BEDD_BF59AA339234
#define __99AC969B_B0C0_4ddb_BEDD_BF59AA339234

#include <stdint.h>

//
// Button implementation that does software debouncing
//

class DebouncedButton {

  private:

    // time to wait for bounce to clear
    static const uint32_t DEBOUNCE_DELAY_MILLIS=50;

    // Internal button state
    enum InternalState {
      Idle,           // nothing happening
      DebounceDelay,  // delaying...
    };

    // The digital pin where the button is connected
    uint8_t _digitalPin;

    // The pressed state (HIGH/LOW)
    uint8_t _pressedState;

    // Internal state of the class
    InternalState _internalState;

    // The last time we sampled our button
    uint32_t _lastTime;

  public:

    // Possible button states
    enum ButtonState {
      NotPressed,  // button is up
      Pressed,     // button is down
    };

    // Setup the class
    void setup(uint8_t digitalPin_,uint8_t pressedState_);

    // Get the current state of the button
    ButtonState getState();
};

#endif

DebouncedButton.h

To use this class you must first declare an instance of DebouncedButton for each button in your project. For example:

DebouncedButton theButton;
theButton.setup(2,HIGH);

This would declare a button to be on digital pin 2 and that its pressed state reads HIGH. This feature allows you to handle normally open and normally closed buttons in this same class. Next you simply poll the class whenever you want to know the current state of the button, for example:

if(theButton.getState()==DebouncedButton::Pressed) {
  // do something
}

The getState() member function is asynchronous and will not block the caller at all, even for the duration of the debounce delay.
Source Code

Here's the full source code to DebouncedButton.cpp so you can just copy and paste into your own project:

#include <wiring.h>
#include "DebouncedButton.h"

/*
 * Setup the class
 */

void DebouncedButton::setup(uint8_t digitalPin_,uint8_t pressedState_) {

  _digitalPin=digitalPin_;
  _pressedState=pressedState_;
  _internalState=Idle;

  // set up the pin
  pinMode(digitalPin_,INPUT);

  // activate the internal pull-up resistor
  digitalWrite(digitalPin_,HIGH);
}

/*
 * Get the current state
 */

DebouncedButton::ButtonState DebouncedButton::getState() {

  uint32_t newTime;
  uint8_t state;

  // read the pin and flip it if this switch reads high when open
  state=digitalRead(_digitalPin);
  if(_pressedState==LOW)
    state^=HIGH;

  // if state is low then wherever we were then
  // we are now back at not pressed
  if(state==LOW) {
    _internalState=Idle;
    return NotPressed;
  }

  // sample the clock
  newTime=millis();

  // act on the internal state machine
  switch(_internalState) {

    case Idle:
      _internalState=DebounceDelay;
      _lastTime=newTime;
      break;

    case DebounceDelay:
      if(newTime-_lastTime>=DEBOUNCE_DELAY_MILLIS) {
        // been high for at least the debounce time
        return Pressed;
      }
      break;
  }

  // nothing happened at this time
  return NotPressed;
}

DebouncedButton.cpp

Test Project

For our test, let's wire up the following simple circuit. On the breadboard it looks like this: the red wire goes to +5V, blue to Arduino digital #2, and the 10K resistor goes to ground. This breadboard connects horizontal holes together as a strip.

Here's some test code to demonstrate the button class. This example will flash the LED on Arduino pin 13 when the button is pressed. The layout of this code is designed for Eclipse users. If you are using the Arduino IDE then you should just be able to copy and paste the setup() and loop() functions into the IDE window.
#include <wiring.h>
#include <avr/wdt.h>
#include "DebouncedButton.h"

// Compatibility stub for undefined pure virtual
extern "C" void __cxa_pure_virtual() { for(;;); }

// forward declarations (needed when building outside the Arduino IDE)
void setup();
void loop();

// the button class
DebouncedButton theButton;

/*
 * Main entry point
 */

int main(void) {
  init();
  setup();

  for(;;)
    loop();
}

/*
 * Setup before loop
 */

void setup() {

  // setup the LED
  pinMode(13,OUTPUT);
  digitalWrite(13,LOW);

  // setup the button on pin 2 (pressed state reads HIGH)
  theButton.setup(2,HIGH);
}

/*
 * Main loop
 */

void loop() {

  uint8_t i;

  // check if the button is pressed
  if(theButton.getState()==DebouncedButton::Pressed) {

    // flash it 3 times
    for(i=0;i<=6;i++) {
      digitalWrite(13,i & 1);
      delay(200);
    }
  }
}

main.cpp
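The debounce state machine above can be checked off-chip with a small simulation. This is a Python sketch of the same logic, with hypothetical millisecond timestamps standing in for millis() and 1/0 standing in for HIGH/LOW:

```python
DEBOUNCE_DELAY_MILLIS = 50

class SimulatedDebouncedButton:
    """Software debounce: report Pressed only after the input has
    stayed high for at least the debounce delay."""

    IDLE, DEBOUNCE_DELAY = range(2)

    def __init__(self):
        self._state = self.IDLE
        self._last_time = 0

    def get_state(self, reading, now_millis):
        # Any low reading resets the state machine to Idle.
        if reading == 0:
            self._state = self.IDLE
            return "NotPressed"
        if self._state == self.IDLE:
            # First high sample: start the debounce timer.
            self._state = self.DEBOUNCE_DELAY
            self._last_time = now_millis
        elif now_millis - self._last_time >= DEBOUNCE_DELAY_MILLIS:
            # High continuously for the full delay: a real press.
            return "Pressed"
        return "NotPressed"

button = SimulatedDebouncedButton()
# Contact bounce: high blips shorter than the delay are ignored.
print(button.get_state(1, 0))   # NotPressed (timer started)
print(button.get_state(0, 10))  # NotPressed (bounce resets timer)
print(button.get_state(1, 20))  # NotPressed (timer restarted)
print(button.get_state(1, 80))  # Pressed (high for >= 50 ms)
```

The simulation shows why the class is non-blocking: each call just compares timestamps and returns immediately, so the 50 ms wait costs the caller nothing.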
http://andybrown.me.uk/2010/11/21/debouncing-buttons-in-avr-c/
New Data Source Page (Report Manager)

Use the New Data Source page to create a shared data source item. A shared data source defines a connection to an external data source. With a shared data source, you can create and maintain the settings for the data source connection separately from the reports that use the data source. To open this page, click New Data Source from a Contents page.

- Name: Type a name for the shared data source, which is used to identify the item within the report server namespace.
- Description: Provide information about the shared data source. This description appears on the Contents page.
- Hide in list view: Select this option to hide the shared data source from users who are using list view mode in Report Manager. List view mode is the default view format when browsing the report server folder hierarchy. In list view, item names and descriptions flow across the page. The alternate format is details view. Details view omits descriptions, but includes other information about the item. Although you can hide an item in list view, you cannot hide an item in details view. If you want to restrict access to an item, you must create a role assignment.
- Enable this data source: Select to enable or disable the shared data source. You can disable the shared data source to prevent report processing for all reports that reference the item.
- Connection Type: Specify the data processing extension that is used to process data from the data source. The report server includes data processing extensions for SQL Server, Analysis Services, Oracle, SQL Server Integration Services (SSIS), SAP, XML, ODBC, and OLE DB. Additional data processing extensions may be available from third-party vendors. Note that if you are using SQL Server 2005 Express Edition with Advanced Services, you can only use SQL Server and Analysis Services data sources.
- Connection String: Specify the connection string that the report server uses to connect to the data source.
The following example illustrates a connection string used to connect to the SQL Server AdventureWorks database (illustrative; substitute your own server name for (local)):

data source=(local); initial catalog=AdventureWorks

- Connect using: Specify options that determine how credentials are obtained.
- The credentials supplied by the user running the report (Connect using): Each user is prompted to type in a user name and password to access the data source. You can define the prompt text that requests user credentials. The default text string is "Enter a user name and password to access the data source."
- Credentials stored securely in the report server (Connect using): Store an encrypted user name and password in the report server database. Choose this option to run a report unattended (for example, reports that are initiated by schedules or events instead of user action).
- Use as Windows credentials when connecting to the data source (Connect using): Select if the credentials are Windows Authentication credentials. Do not select this check box if you are using database authentication (for example, a SQL Server logon).
- Impersonate the authenticated user after a connection has been made to the data source (Connect using): Allows delegation of credentials, but only if a data source supports impersonation. For SQL Server databases, this option sets the SETUSER function.
- Windows integrated security (Connect using): Use the Windows credentials of the current user to access the data source. Choose this option when the credentials that are used to access a data source are the same as those used to log on to the network domain. This option works best when Kerberos is enabled for your domain, or when the data source is on the same computer as the report server. If Kerberos authentication is not enabled, Windows credentials can be passed to only one other computer. If additional computer connections are required, you will get an error instead of the data you expect. Do not use this option to run unattended reports or reports that are available for subscription.
The report server initiates the running of unattended reports. The account that the report server runs under cannot be used to access external data sources. - Credentials are not required (Connect using) Specify that credentials are not required to access the data source. Note that if a data source requires a user logon, choosing this option will have no effect. You should only choose this option if the data source connection does not require user credentials. When you configure a data source to use no credentials, you must perform additional steps if the report that uses the data source is to support subscriptions, scheduled report history, or scheduled report execution. Specifically, you must create a low privileged account that the report server uses when running the report. This account is used in place of the service account that the report server normally runs under. For more information about this account, see Configuring an Account for Unattended Report Processing. - Apply Click Apply to save your changes.
http://technet.microsoft.com/en-US/library/ms180077(v=sql.90)
Uncyclopedia:QuickVFD/archive8 From Uncyclopedia, the content-free encyclopedia March 4th Sod off Bryan Uy Category:Mars Personal Speak Category:Mandarin Phonetic Symbols Babel:Zh-hant/ㄅㄆㄇㄈ Babel talk:Zh-hant/ㄅㄆㄇㄈ Babel:Zh-hant/孔乙己 Babel:Zh-hant項少龍 Category:Mandarin Phonetic Symbol Category:Mandarin Phonetic symbol Template:Zh-mps/火星文2 Template:Zh-hant/火星文2 Bald Howie Mandel Roni558@walla.co.il PlayStation 3 (broken redirect page) March the III HowTo:Delete pagesNRV'd - Sir Sikon [formerly known as Guest] 18:39, 4 March 2006 (UTC) Sally Ezra Cacacola No-one Flamewar Montclair high school Zork Π/upstairs Aluba Counter Strike: Barbie Edition Muzzaland Redefined my life, I now know Abortion is a necessity -- TD Francis Jeffers Gamecube -contains 1 sentence. Information Super Highway -Reappeared unimproved Netbeans March Secundus Making up Matrix Quotes - Redirect to page deleted by Flammable Gay Unit Francis Jeffers Tom Petty Darth cucumber 0 AD It really kicks the llama's ass Muci Fleshtra J-Ho page needs deletion. It is a worthless vandal page. --Filmcom Why dont polar bears shit in the woods? Help A Brother Out Foundation Hospitals Eternity Space Ghost Dhaj Jaded Swai Arrested Development Stoke-on-Trent Trip Cheez-It Werner Wanker Traduko 500000000 Chalga music Expression Masta Gangsta Yangsta Liz shaw Wear a hat Gah'Lrah Tristan Winstone Stop abusing QVFD in this way. Only things that are vandalism, unallowable vanity, or crap should be put here. Why dont polar bears shit in the woods? is very much NOT QVFD material. Don't make us check every damn entry you put on. Don't put on pages that just happen to offend you or that you don't find funny. Use VFD/NRV for that. (Did not check them all, that is submitter's job) --Splaka 01:40, 2 March 2006 (UTC) Disney Dongs Canadien army Why Dr. Feelgood Ru Ad hominem March the First .hack Eviscerate User:TD The Zig User talk:69.134.205.98User's page, they are allowed (and working on it). 
Adams Grammar School Please delete Babel/Es:Dios. Recreated for someone who didnt heard about the exodus. Hour Masta Gangsta Yangsta Css arkansas Brasil Sexually transmitted disesased Ad hominem February The Last Lapland L (element) Kids Say The Darndest Things IHaveABadFeelingAboutThis Harmonica Flabby Anus David loves Karla Dave Breakcore Aberdeen Angus 60s Three dimensional The Random Game Neutral Poem The O.C Carmaggedon Whitney Russell VMware Lance Bass Nymark Pan flute Frobnicate Fluff - Worthless -- User:TD Strangle Kneeches Pet Rock Arrr, stop abusing QVFD in) - PS: I did not check all of these, there may well be some QVFD worthy, but the submitter should determine that before submitting them mixed in with obviously non-QVFD worthy. 23:10, 28 February 2006 (UTC) Life of piAs much as I personally dislike the book, this is certainly not what Uncyc is for Onimusha Adam Boyarko Heeavh De-breasting World War XI Grotsneezer Category:Purple Anonymity AAAAAY! Theory Baldie Fairies Hampshire 77777777777777 long number, short article Babel:Es/Frikipedia taken long ago Babel:Es/Simios - crap, not going to spanish sister Babel:Es/Canon - to spanish sister Babel:Es/Refranero_popular - to spanish sister Babel:Es/Tenerife - to spanish sister 冬甩坊 兩槍 Babel:Zh-hant/朝鮮民主主義民人民共和國 Babel:Zh-hant/北韓 Inskwi UnNews:Bush admits he is an idiot Linköping --Brigadier General Sir Zombiebaron 12:47, 28 February 2006 (UTC) Tante_Emma_Laden --Brigadier General Sir Zombiebaron 12:47, 28 February 2006 (UTC) Venezuelan Beaver Cheese February The XXVII Neutral Milk Hotel Pt:Nerd Metal Pt:Curiosidade Zoloft Rune Dahl My rules That old hippy chick who'se always collecting money (Page moved, links fixed; unnecessary redirect, sorry about the mix-up) --Some user 04:59, 27 February 2006 (UTC) Myth Useless, I say User:TD Babel:Es/Margarita Ochohijos February 26th No-one Kumquat Administrators Y helo thar Mulleteers Maria Sharapova Strip clubs Source of all evil Mount Doom Everybody Loves Raymond Tor 
Heyerdal February 25th No dice - This has been deleted already, apparently. --379reppoHdnalsI 23:29, 25 February 2006 (UTC) CHEAP WAY TO BIGGER UR SHORT & THIN D11CK taken Babel talk:Es/Teologia Argentina Not taking it--Rataube 09:34, 25 February 2006 (UTC) Babel:Es/Jesús de Chamberí taken Babel:Es/SGAE taken Babel:Es/Sigmund Freud taken Babel:Es/Jesús de Neanderthal taken Babel:Es/Cosa taken Babel:Es/Bloc de notas taken Babel:Es/Cyberjesús taken Babel:Es/José María Aznar and taken... Babel:Es/Cómo evitar ser denunciado por la SGAE ...to spanish siter--Rataube 18:09, 25 February 2006 (UTC) Urodele 88 position Valium Entertainment Power Cut The Future Republic of Novostograd Babel:Zh/香港高登討論區 Babel:Zh-hant/魁!!李塾 Babel:Zh-hant/熱!!! Babel:Zh-hant/桃花補典 - blanked by the only contributor. Babel:Es/Windows_Fenix - Moved to spanish sister. Babel:Es/Sigmund Freud - Moved to spanish sister. Babel:Es/Cómo evitar ser denunciado por la SGAE - Moved to spanish sister. Babel:Es/Cúpula directiva de la SGAE - Moved to spanish sister. Babel:Es/Pedro Farré - Moved to spanish sister. Babel:Es/Sir Teddy Bautista - Moved to spanish sister. Babel:Es/Universidad - Moved to spanish sister. Babel:Es/Sudoku - Moved to spanish sister. Babel:Es/Matemáticas - Moved to spanish sister. Image:Hallar la X.JPG - Moved to spanish sister. February XXIV Congress of Vienna Fingers 769z Rich-ass 8-bit man Ghost Rider Affen-SS Holmestrand Chuk Cell churchSee the talk page. See. The. Talk. Page. VFD if you wish, but not QVFD. - Sir Sikon [formerly known as Guest] 16:28, 28 February 2006 (UTC) 4004 B.C. Lokers Sentence Fragments Wayne's World 2 Inspector Gadget George Wiltshire Bush Chaz murray, the cheese prophet - VANITY Chris_perkins - Vanity. Have a mirror, narcissist. Babel:Es/SGAE moved to spanish sista. 
Little Chef Secretary General of the World -list G-d Hitler Youth Butty van Demon Summoning Darth Bush Lol apua Cat Peeling Kidz Bop Kasim Symonds-Slandanity SuperNoobs Square-Enix Leroy Brown Orla mcdonald-vanity Fr Cheating Micheal Klimczewski Shane flynn -slandanity Home Base Evoluon - No, actually, I think I'll just nominate this for QVFD. Mupas - SUCKSUCKSUCKSUCKSUCKSUCKS HgraN and Nargh - Looping redirect Spawn - Can we turn respawn off? People_who_are_better_than_you - Read, understood, hated 'cause it's a list, QVFD'd. Yay, circle of Wikilife. Acre - If you don't know what it is, either don't make the article or just make shit up. Dragobete - (And by shit, I don't mean bad articles like this one.) Square-Enix somebody was stupid enough to PWN my attempt at making a Square-Enix article, so fo schizzle this one isn't going to see the light of day. Babel:Es/PSOE moved to spanish sista. Babel:Es/COPE moved to spanish sista. Babel:Es/Jiménez Losantos moved to spanish sista. Babel:Es/Hangla Mangla Andgleber moved to spanish sista. Babel:Es/David Bravo moved to spanish sista. Babel:Es/Diego Armando Maradona moved to spanish sista. Babel:Es/Teologia Argentina moved to spanish sista. Babel:Es/Argentina moved to spanish sista. Babel:Es/El Hombre del Saco moved to spanish sista. 
--Emedeme 11:25, 24 February 2006 (UTC) Game:Zork- Accidental duplicate Babel:Es/Barrapunto moved to spanish sista--Rataube 04:50, 24 February 2006 (UTC) Spintherism - it's only one sentence User:Clorox/sig2 I don't need it anymore ----Clorox MUNMUN February 23rd Babel:Es/Ratón gone to spanish sister --Rataube 23:57, 23 February 2006 (UTC) Babel:Es/La Iglesia gone to spanish sister--Rataube 23:53, 23 February 2006 (UTC) Babel:Es/Historia Argentina gone to spanish sister--Rataube 23:53, 23 February 2006 (UTC) Babel:Es/León gone to spanish sister--Rataube 23:53, 23 February 2006 (UTC) Babel:Es/Francia gone to spanish sister--Rataube 23:35, 23 February 2006 (UTC) Babel:es/Colombia gone to spanish sister--Rataube 23:18, 23 February 2006 (UTC) Babel:es/Alboraya gone to spanish sister--Rataube 23:18, 23 February 2006 (UTC) Babel:es/París gone to spanish sister--Rataube 23:18, 23 February 2006 (UTC) :Template:Apesta Not taking this one.--Rataube 22:56, 23 February 2006 (UTC) Mr. Pibb World War 3D Peanut cows Category:Rehacer We are not taking this one.--Rataube 19:59, 23 February Puggers please tell me you saw this coming... Ryan brannon is a pathetic waste of hydrocarbons. --The King In Yellow (Talk to the Dalek.) 19:05, 23 February 2006 (UTC) Smurfguy oo, how shall I put this? Ah yes, get fucked. Babel:Es/Perl - To spanish sista, man Babel:es/Reggaeton - To spanish sister Babel:es/Windous 98 - To spanish sister--213.229.186.67 17:31, 23 February 2006 (UTC) Babel:es/Windows 98 - To spanish sister--213.229.186.67 17:31, 23 February 2006 (UTC) Babel:es/Estados Unidos - To spanish sister--213.229.186.67 17:29, 23 February 2006 (UTC) User_talk:Hinoa - Guest/Sikon put it in the wrong place (my username's Hinoa4). Frodo - *sporks eyes out* Babel:es/Abuela - To spanish sister--213.229.186.67 17:14, 23 February 2006 (UTC) Major_midget - TURN OFF YOUR GODDAMN CAPS LOCK. HK-47 - *barf* Urban turban - *gag* Inedda Shite - I'm tempted to call this vanity, just for kicks. 
Some person standing in the corner. Youth Military Forces Big trouble Police offers Quantum_Replay_Man - Do you really need to repeat the same sentence over and over again? (Hint: NO.) Chihuahua - .sdrawkcab etirw nac uoY .uoy rof dooG Babel:Es/Anorexia gone with spanish wind--Rataube 13:58, 23 February 2006 (UTC) Babel:Es/Artículos requeridos We are not moving this crap!--Rataube 12:47, 23 February 2006 (UTC) Babel:Es/George W. Bush Spanish sista.--Rataube 12:25, 23 February 2006 (UTC) Babel:Es/gringo Emigrated to spanish sister--Rataube 12:21, 23 February 2006 (UTC) Category:Rehacer We are not moving this crap--Rataube 12:00, 23 February Babel:Es/Bolígrafo We are not moving this crap--Rataube 12:00, 23 February 2006 (UTC) Babel:Es/Lápiz We are not moving this crap--Rataube 12:00, 23 February 2006 (UTC) Babel:Es/Taco Bell We are not moving this crap--Rataube 12:00, 23 February 2006 (UTC) Babel:Es/Internet Exploder Emigrated to spanish sister.--85.48.138.63 10:01, 23 February 2006 (UTC) Babel:Es/Guardia Civil Emigrated to spanish sister.--85.48.138.63 10:01, 23 February 2006 (UTC) Babel:Es/Delfín Emigrated to spanish sister.--85.48.138.63 10:01, 23 February 2006 (UTC) Babel:Es/Dan'up de fresa y plátano Emigrated to spanish sister.--85.48.138.63 10:01, 23 February 2006 (UTC) Babel:Es/Ser humano Emigrated to spanish sister.--85.48.138.63 10:01, 23 February 2006 (UTC) Babel:Es/Chile Emigrated to spanish sister.--Vate 09:29, 23 February 2006 (UTC) Babel:Zh-hant/白雪公主 Babel talk:Zh-hans/首页 Babel talk:Zh-hant/灰色的腐女 Babel talk:Zh-hant/薔薇馬戲團 Babel talk:Zh-hant/香港 Babel talk:Zh-hant/黃耀銓 Babel talk:Zh/首頁 Category talk:三國無雙 Template talk:Zh-hant/人民日報 Template talk:Zh-hant/中華民國 :Template:Stib-blanked by author Feburary 22nd Bob Ross - Please, for the love of God, get rid of this crap! Al Sharpton - YOu know the "Be funny and not just stupid"? Yeah. Not seeing it. 
Capitalization - this article sucks and is devoid of any grammar whatsoever not to mention facts Ian Ashley Stairsliding a 40+ entry list of RED LINKS. Need I say more? God save the queen Jerilderie Malakas slander... Please do not abuse QVFD in this way. VFD these if you hate them that much. These are mostly NOT QVFD worthy. Review the QVFD rules above. --Splaka 10:55, 23 February 2006 (UTC) Phaistos_disc - THE FACTS!!!! THEY BURN!!!!!! Babel:Es/Bilbao Emigrated to spanish sister. --Emedeme 19:46, 22 February 2006 (UTC) Babel:Es/iPod Emigrated to spanish sister. --Emedeme 19:16, 22 February 2006 (UTC) Tim Howell WHO? --The King In Yellow (Talk to the Dalek.) 17:52, 22 February 2006 (UTC) Babel:Es/Peheta Emigrated to spanish sister. --Emedeme 17:46, 22 February 2006 (UTC) Babel:Es/Lero Emigrated to spanish sister. --Emedeme 17:46, 22 February 2006 (UTC) Ren Hoek - 3 sentences Les_invalides - Le suck. Wumpscut - Take your editorials to The Forgotten Wiki. Weird_al_yankovich - LEARN HOW TO SPEEL. Babel:Es/Estrella Emigrated to spanish sister. --Emedeme 16:10, 22 February 2006 (UTC) Rowing No, that honor goes to laser tag, foo! --The King In Yellow (Talk to the Dalek.) 16:02, 22 February 2006 (UTC) Perkele AWW PO BABY IDIOT!!!1!!1one! --The King In Yellow (Talk to the Dalek.) 15:59, 22 February 2006 (UTC) Knox grammar school external link Dennis Hastert Sumit tiwari Babel:Es/La respuesta a la Vida, el Universo y Todo Emigrated to spanish sister.--Emedeme 15:31, 22 February 2006 (UTC) Babel:Es/Windows 95 Emigrated to spanish sister.--Emedeme 15:31, 22 February 2006 (UTC) Babel:Es/Valladolid Emigrated to spanish sister.--Emedeme 15:31, 22 February 2006 (UTC) Babel:Es/Steve Jobs Emigrated to spanish sister.--Emedeme 15:31, 22 February 2006 (UTC) Babel:Es/Socio-listo Rotten tomatoes Arnold A. Striven slander... utter shit. Image:Tom thurlow.jpg NNP, went with double-huffed slander shite. Spanish Inquisiton how bloody original. --The King In Yellow (Talk to the Dalek.) 
14:57, 22 February 2006 (UTC) Babel:Es/España Emigrated to spanish sister.--Rataube 11:13, 22 Babel:Es/Madrid Emigrated to spanish sister.--Rataube Babel:Es/Dios Emigrated to spanish sister.--Rataube Babel:Es/Benedicto XVI Emigrated to spanish sister.--Rataube Babel:Es/Universo Emigrated to spanish sister.--Rataube Babel:Es/Badabín badabán Emigrated to spanish sister.--Rataube Babel:Es/Inframundo Emigrated to spanish sister.--Rataube Babel:Es/Tierra Emigrated to spanish sister.--Rataube Babel:Es/Bar Emigrated to spanish sister.--Rataube Babel:Es/Luna Emigrated to spanish sister.--Rataube Babel:Es/Bujero negro Emigrated to spanish sister.--Rataube Babel:Es/Pantallazo Azul Emigrated to spanish sister.--Rataube Babel:Es/Linux Emigrated to spanish sister.--Rataube Babel:Es/Richard Stallman Emigrated to spanish sister.--Rataube Babel:Es/Fachas y rojos Emigrated to spanish sister. Emedeme 13:32, 22 February 2006 (UTC) Babel:Es/Polla Emigrated to spanish sister. --Emedeme 13:42, 22 February 2006 (UTC) Babel:Es/XD Emigrated to spanish sister. --Emedeme 13:55, 22 February 2006 (UTC) Babel:Es/Ñaflas Emigrated to spanish sister. --Emedeme 13:55, 22 February 2006 (UTC) Babel:Es/Lunes Emigrated to spanish sister. --Emedeme 13:55, 22 February 2006 (UTC) Gerudo obscure video game references aren't funny. :Template:Ja-共産主義者 :Template:Ja-真理省 Babel talk:Zh-hant/蘭蘭 Feburary 21st Simon Cowell's inner child Tres Cruel David Irving NRV'd Kenny McCormick Arrest Forskare Man Faye Help:Como hacer Broken redirect.--Rataube 19:55, 21 February 2006 (UTC) Desciclopedia:Cómo ser divertido y no estúpido Same.--Rataube 20:02, 21 February 2006 (UTC) Mneh - If you want to copyright your article, don't post it here! It sucks anyway! Green cheese Taking Back Sunday lol ghey. Glynn Robinson NNP vanity CVP'd Eoin User vanity Loop Spiral ha ha, self references are teh fnuny. Hæstkuk Gabber Luke frost is gay Batman is not a ninja Catherine called birdy Cathrine called birdy get fucked.
Callum fowers Osiris see below Inifinite (can't they even fucking spell it?) see below Kittyslasher - all three are utter shit or NNP vanity (or both.) --The King In Yellow (Talk to the Dalek.) 14:46, 21 February 2006 (UTC) Babel:Es/Cyberjesús has been vandalized. Please, revert it. -- Largo Caballero Gloop -Shiit- --Simulacrum Caputosis<NRV'd Mr. Povlish -vanity most foul- --Simulacrum Caputosis Category:真理部認可 Feburary 20th Hightower_Trail_Middle_School - Extremely long, and well thought out VANITY Wakeman School - Vanity Scottlogan - Was just whacked. :Template:Idiomas I alredy copy-pasted it to our new spanish sister-- Rataube 17:07, 20 February 2006 (UTC) Babel:Es/Desciclopedia same Babel:Es/Como hacer same Babel:Es/Desciclopedia, cómo ser divertido y no estúpido Babel:Es/Wikipedia same. We are having technical problems, but the massive migration will come soon.--Rataube 17:13, 20 February 2006 (UTC) Babel:Es/Chuck NorrisSame.--Rataube 17:27, 20 February 2006 (UTC) Category:Zh templates :Template:Zh-GJ認可 :Template:Zh-真理部 :Template:Zh-火星文 :Template:Zh-閃光彈 :Template:Zh-中華民國 :Template:Zh-hant-stub :Template:Zh-hant/stub :Template talk:Zh-中華民國 Wilde:Al_Qaeda - Unless there's a rule about this that I don't know, then one Wilde quote doesn't make an article! Feburary 19th J-Ho - Wasn't this just whacked? Titan_A.E. - Guess who's back. Back again. Hoder Past it's kill date. Anal stretching --Simulacrum Caputosis Windows RG 2007 --Simulacrum Caputosis Divided States of America Glue-sniffing Iceberg lettuce Master Cheif Ultimate Overlord of the Digimon Babel:-Zh-hant/去死去死團 Zh-hant/無線電視台 amigacho -- Largo Caballero Osiris --Simulacrum Caputosis Inifinte --Simulacrum Caputosis February 18th Page%2A A malicious rapist Jigglypuff Aberdeen Angus --Simulacrum Caputosis Shithead McFuck --Simulacrum Caputosis Cornul_si_Laptele 趙雲 中华人民共和国 泽民江 Uncyclopedia:搜索 February 17th Ill tonkso GOURANGA - One sentence. 
Neoconservative --Simulacrum Caputosis Never --Simulacrum Caputosis Annoying asian girls that sit in the back and look at cute guys FUCK OFF. --The King In Yellow (Talk to the Dalek.) 20:24, 17 February 2006 (UTC) Escudo de armas Jonothan O'Brien FUCK OFF SOME MORE. February 16th ZZ ZomBri Richard madeley - Un-make. Jeb_Asuncion -- slandity James_May - SLAND0R!! Swings Spear of Destiny Ricardo Contreras Brian_Park Weasel Popping--Simulacrum Caputosis Garage bands --Simulacrum Caputosis URRSRSS Ugoff Krispy --Simulacrum Caputosis Auto-erotic_asphyxiation - It's only a matter of time... Ilya Gulayev - person of a year --Simulacrum Caputosis House_of_Crunk - Thumbs down. Gamel_abdel_nasser - I see baleetion in this article's future. Richard_Komar_Study_of_2006 - I should put the boring template on this one. Richard_Komar - I can disprove that with one inequality: Admins > you. Liquid Jesus - What? I totally didn't drink him. It was RC. Feglian get over your rejection elsewhere. Digg.com - Perhaps here. SuperNoobs - Sucks Springfield, Missouri Blanket 張如城 P⃠ Bday:RecentMeta -- is just a redirect, template moved to where it should be Induktion More_Help No%2C_your_Mom Lonnie_Sima -- looks non notable and is formatted lik no one cares Sam waterston - Definition: Vanity. (Hinoa) 1chan --Simulacrum Caputosis Choate --Simulacrum Caputosis Shelly Roberts --Simulacrum Caputosis Osiris -was previously huffed- Simulacrum Caputosis Inifinite --Simulacrum Caputosis 《錦城秋色草堂春》 Babel talk:Zh-hant/首頁 - not related to the page Babel:Zh-hant/《錦城秋色草堂春》 Babel:Zh-hant/泥菩薩過江 Babel:Zh-hant/牛奶 Shelly Roberts - NO U Kwyjibo -- wasn't this just deleted? You_Know_You%27re_Croatian_When....ima Dane Cook - first edit is factual. second edit is blank. Image:Example.jpgKept as text, to prevent people uploading to it. -Spl 02:24, 17 February 2006 (UTC) Febreeze 15th Pepperoni - YOU'RE a round ailment that writes like shit. Here - A tear, a tear. 
(Apple Logo Symbol) - Watch me: ZZZZZZZZZZ - AAAAAAAAA! clone Abgrund -- appears to be non-notable Shiminaha Penn State Nittany Lions Metal gear Kittybat - Fails the "Kitten and Hammer" morals test and the Funny Test. Haku Bode Miller Masacure Midgar wow, it really is being sucked into a black hole. Ahmet_Necdet_Sezer - "Hey, watch me press random keys on my keyboard!" Recep Tayyip Erdogan fdgh you too. Jenny Lewis umm... yeah. Sure. Fuck off. Free conservatives eat a DIIIIIIIIIIIICK!! (.wav file fo ma rokkit launcha!) --The King In Yellow (Talk to the Dalek.) 19:59, 15 February 2006 (UTC) Joonas Reini NNP slander... ah. Monster Giuseppe Zangara A hollow voice says: "Yesterday I added these Zork ones at the bottom instead of the top. Sorry." Game:Zork/Underworld4 Game:Zork/weed Game:Zork/sub Game:Zork/mental Game:Zork/eat Game:Zork/fyeah Game:Zork/wgs Game:Zork/tea Game:Zork/jam Game:Zork/bitches Game:Zork/beers Game:Zork/lab7stolic Game:Zork/lab7suit Huntsville - Too lame to NRV Black_Is_Out - And so are you. Henry_winkler - Sucks more than below article, and that is saying something. User:Cheftw/TheDuelOfTompkins - GET ALONG OR I'LL NRV BOTH OF YOUR USERPAGES. I vote Keep. for the above:59, 15 February 2006 (UTC) - Really, Tompkins, it degenerated into a lamefest, and it was both of your doing. Tell me I'm wrong. --—Hinoa KUN (talk) - I also vote keep you lose Hanoi! - You are wrong! Cheftw 03:03, 15 February 2006 (UTC) - You're supposed to leave comments out of the box. AND YOU"RE WRONG!!!:05, 15 February 2006 (UTC) - You DON'T. This is not a forum. Knock it off, k? --The King In Yellow (Talk to the Dalek.) 21:05, 15 February 2006 (UTC) - Stuff that needs voting belongs on VFD, not QVFD. It was innapropriate to ever) 00:52, 16 February 2006 (UTC) Delig The Big Bad Wolf - The QVFD pwnz your article. Go home! End of story. TheDuelOfTompkins you don't wanna know. 
Michael ellman Babel:Zh-hant/學生 Babel:Zh-hant/電車痴漢 February 14th Brendan_tapp - And now more crap in the same vain. Somewhere_Over_The_Rainbow - (singing) If you only had a brain... Vanessa_feltz - Never heard of her. -1 -- created blank 20xx -- created blank 2223 -- created blank 3265 -- created blank 4098230440 -- created blank Asteroid -- created blank Bender -- created blank Bobby Brown -- created blank Easter egg -- I lied. These aren't blank, they are advert spam clerverly hidden. Flavor Flav -- ditto Giuseppe Piazzi --ditto I Wish Today Was As Popular As Yesterday Day --ditto Zombie John F. Kennedy -- same Invisible Man - same League of Nations - same Stalingrad -- same National Discrimination Day -- same Darin - Vanity. Here's a mirror, you narsiccist. Cocktease - You like pie? Here's your pie: 3.141592654. Gerbils - This TOP SECRET article TOP SECRET sucks.Someone got to it. I R SLOW. Why - Everyone knows the right answer is "because I said so." Nigga_please - Deletion, please. Cheese is a Dairy Product thanks Captain FuckingObvious. Kick_therapy - Kick this page in the nuts. HARD. THAT's your kick therapy right there. Page ... yeah, I think Some User was right.... Haska King_Tut - Tut, tut, you fail at life. Raping ...like what I'm doing to your limp article. Lemonaid Spelling.... ah, who cares? Olympe De Gouges Starr Jones this is masturbation, pure and simple. Really, just tell her how you feel. Robert chiles SLAN-DAN-I-TY! Thank you, Vanessa! Wilde_E._Coyote - I believe you're looking for this article. Matt_%22The_Wilde%22_Pyle - THREE WORDS ARE NOT AN ARTICLE. Vytautas Lansbergis Daniel_Edwards - Stealing a gimmick from KT, Nobody cares. Million years The year ... it became apparent that this author has a kumquat for a brain! Burnz0r! Water Based Computer shhh.... it's a fucking secret! Viet Kong Tower of Instability uh.... no. World War 3 no, it's Waaaaa UR teh coxsuxx0rz, monkeynuts. 
Bolshevik revolution in QVFD, The King In Yellow (Talk to the Dalek.) huffs YOU! Darren coles yeah, that's an Indian name.... fuckwit. Foreign - Damn dirty bad writers... Ayatollah_Ruhollah_Khomeini - The author-a is an ass-a-hole-a. Barenaked_Ladies - Not to be confused with a decent article. Kuwait - Smashy! Silver_Spoons - PHAILZ. Anne_jagielski - SLAN-DOR. 03 - Wow. This redefines SUCK. Joseph Haydn - To be continued? I think not. Slandanity, as well. Eric_brown -- slandity ZW: 七人の侍 Babel:Ja/中国(地名) Babel:ZW Babel:Zh-hant/電視廣播有限公司 Babel:Zh-hant/孫中山 Babel:Zh-hant/英文 Doctor Strange Imperial_march - Darth Hinoa is displeased with you. February 13rd Eric_brown and Black Ops - Related articles, and both vanity (the latter barely less so) Last_enemy - Nothing funny. Funding Fathers Liquid Jesus Dino Jesus CAPS_VIRUS - PLEASE go away. UnNews:Clark is okay after sizure Go away. UnNews:Peterson dies Steven Delianites (Tontes) Kill.... kill. Simon_the_killer_boy - TOTAL SHIT. Toad_Vreek - NNP, if I'm not sorely mistaken. Shepton_mallet - Did a dog crap on your article? Because it's FULL OF SHIT.Has been expanded Gewgaw - Not even good enough for Undictionary.I'm beginning to dislike WotD... Luana - *sigh* Dino_Jesus - Incoherance, not even relating to the topic. Wah. His_Noodly_Appendage - This makes the baby Flying Spaghetti Monster cry. Http Alcoholism - INTERVENTION! The_Pentagon - NRV is too good for this. Xenosaga Hanh - HAAAAAAAAAAAAAAAAAAAAAAANH!!!!! Bad Translaion Bad Translating move redirects.. Cigarrette butts Matt_mooney - Try saying it to his face. Hundscheidt - Hulk HATE scatalogical nonhumor! Es:Tercermundista I alredy redirected it to the Babel namespace--Rataube 02:45, 13 February 2006 (UTC) Junming Elasto mania don't vote for pedro. Quahog Ruffalo Pedsnenting Luana Brasil Profanity Gewgaw Panda martin Zachary martin Motley Crue Screamo fuck off, kid. Stick to MySpace. Nala Plastic spoons College du Leman mine eyes doth bleed now... ARGH! Mr. 
Flibble The Great Foreskin Revolt clearly authored by a botched circumcision victim subsequently raised as female. Dozen of F***uary Practically everyone in Norway Unreal Tournament Rajbir_Basran - exactly the same as en.wp article Masturbates redir to Masturbation The old woman with the kleenex Ja:Engrish Ja:オダ・ノブナガ Ja:オダ幕府 Ja:コナミ Ja:ハ ハ ハ! Ja:メインページ Ja:七人の侍 Ja:中国(地名) Ja:日本 (Japan) Ja:碁 Ja:バ科事典について Category:台灣) Babel:Zh-hant/中國共產黨多黨合作制下華夏人民不便當家作主的共和國 Babel:Zh-hant/吃洨 Austin_Nunn - Wasn't this whacked already? Penal_colony - I'm SJ-G (Shit-joke Genocidal) Carcinogen - Say it with me. TWO WORDS ARE NOT AN ARTICLE. Ave_Maria - TEH SUXXOR (sorry, don't kill me) Pie_%28language%29 Reality_Check_NY - Reality check: THIS ARTICLE SUCKS. [H4]Not anymore. I suspect vandals caused the suckage. Daniel Radcliffe Harry Potter deserves better --Simulacrum Caputosis Regular Polish Notation --Simulacrum Caputosis The Used Hot Topic Goth Old Man -- Wild Weasel Otto Skorzeny - neo-nazi bullshit February 11st Craig_Stoakes - SLAN-DANITYYYYYYYYYY King of porno --Simulacrum Caputosis Aerith Aeris Aksel_Sandemose - BORING. Yo_momma - Whoever wrote this: Yo momma so dumb, she had you! The Meaning Of Life Tuusula Omega Age November Odin(captain)- 15:05, 11 February 2006 (UTC) Providence country day school vanity? need a 2nd opinion... --DW III 04:16, 11 February 2006 (UTC)NRV'd Frenetic_dyscouchism - [Coherence + funny = -11.] Chebend - Phallic nonhumor. Saint Vorderman--Winston 07:46, 11 February 2006 (UTC) Whinnie Fembebruary 10th Azimuth - Phails. Al-Bundy_Brigade - Too short to be worth anything. Indiwhatdie - NOT FUNNY. Frood - See below. Hoopy - I hate - HATE - to do this, but it sucks. 
`1234567890 %D0%AF George Bush Snr Georgi Markov German salute Groom Harvey Wallbanger Hugh Mitchell Infinitillionkept for expansion Irak John White Just Say Yes KGB-KGB/KGB/KGB King Mindaugas Lancelot Laser Blue Logician Maryam Mellotron Merlion Microsoft Patch Day Monkey stomp New Michigan OMFG Oracle Ouisuki Petia Po stikliuk%C4%85 Politics of Galiza Porthtowan Press 3kept (part of a series) Prime Problem Quail Quantum Hijacking Sneezium Spides Substitute President S%C3%A5gr The bureau of udder bullshitmoved & NRV extended Thunderbirds Tiramisu Tourist Training Camp User interface Vorarlberg Vulture Balls Wade Fulp War of the Wurlds White%2C Anglo-Saxon%2C heterosexual Catholic males Windpumps Yippy Kai Yay Mothafucka Yog-Sothoth Great Pasta Bowl 1000 BC 19th Century} flagged for merging 19th century} " " " A lot Amerindian An Post Austria salzburg Auto-erotic asphyxiation BNRflagged for eating BlixFish Bromine Chick flick Chow Zhi Wan Dolittle College Doneness Duchy of Truffles Duke Flaubert Ethnic European Cup Flq Cornul_si_Laptele - if (words == 3) article = false; Gloria_holbrook - We don't care if she's weird. What's the word... "Slandanity?" J._T._Ripper - Content: "...is cool." Well, you aren't for making a 2 word article! Hijo_de_puta - Tres palabras no son un artículo. Especially if not in the Spanish namespace. Preston_and_Steve - THE FACTS! THEY BURN MY EYES!!! Cole_Tirpak - SUCKS MORE Roshbert - SUCKSNever mind. Jacques_Brel - Not funny, too short, the list goes on and on. [H4] Babel:Zh-hant/空白 Babel:Zh-hant/空黑 U.S._Presidential_Elections - List; not funny Dutch wife - would you be my girl, would you be my girl... Adam Croyne link to exterior pic The Chindizian War RARRRRRRR — 2ND LT. Sir David, Grizzly of Wild KUN VFH FP (Oh my God! Grizzly Bear! Nooooo!) Marius Chris Carter Dutch Army Quantico Martin David Novin J. T. Ripper Mr. Povlish keep this site slander free! February Nein-th Robby brod BLAH!!! Hernando De Soto FACTS!!! 
Whoppers templates redirect created after move. unamercia I'm an idiot. --Bloodrage 19:05, 9 February 2006 (UTC) unamericia I'm a dyslexic retarded idiot. --Bloodrage 19:05, 9 February 2006 (UTC) Babel:Zh-hans/大便 Babel:ZH-hans\金刚 George W. Bush has been vandalized with it's evil wikipedia counterpart, needs immediate reversion.Not QVFD material - Sir Sikon [formerly known as Guest] 16:22, 10 February 2006 (UTC) Babel:Zh-hans\傻逼猴子王 United Spades Supeme Coat -- Simulacrum Caputosis Dragoons --Winston 04:32, 9 February 2006 (UTC) DHTML Joseph Goebbels 3dgames argentina Eyeshield 21 yawn... go away. The Crips John Von Goethe The Day ZB Lost IRC or Febuary 8th Everywhere -- Simulacrum Caputosis Osiris -- Simulacrum Caputosis Antipope animal -- Simulacrum Caputosis Worldwide Jewish conspiracy -- Simulacrum Caputosis Pinthongtha Shinawatra Panthongtae Shinawatra Paul knight Beehive Malin agnethe simonsen Chiquitistani Es:Río de la Plata The page up for deletion Swizzler - Vanity --—Hinoa KUN (talk) 20:36, 8 February 2006 (UTC) Babel:Zh-hant/迷戀女高中生三神器的癡漢 Lord Vader of Trollhättan Darth Traya Earth Girls Are Easy Leper Messiah Zork/lobby Shazbot Baby carrot M.C Jesus God doesn't beleive in atheists Roberta Bondar I smell slander, tasteless slander! I smell slander, and vandal, and shit! --The King In Yellow (Talk to the Dalek.) 18:19, 8 February 2006 (UTC) Blow Jobs Carter cole who gives a fuck? Hasin Syed PARRICIDA What I Didn't know About Romania go away Soccer riots 4x4 February 7 Monkey tale Fuzi0n Margaritaville Mongheaver Lord_Vader_of_Trollh%C3%A4ttan -- blanked by author Earth_Girls_Are_Easy Darth_Traya Kevinchowism. Skydome Cows go moooooooo!!! 
National Sports Anarchy Zork/lobby Seb Leper Messiah Queen (gay) Olde English Jandis -- slandity Olde English -please insta-huff Uncyclopedia Sucks Zh:用觸手侵犯女高中生 Zh:用觸手纏繞侵犯裸體美少女 Zh:用觸手纏繞侵犯女學生 Zh:用觸手調教 Zh:用觸手侵犯裸體高中女學生 Zh:伪基百科 Zh:首页 Zh:董建華 Zh:簡體字 Zh:中國 Zh:日本 伪基百科 Nanobiotechnolgy - fixed spelling, please delete redirect Scrambled eggs - merged into Egg, please make a redirect Cows go moooooooo!!! Jellyfish Skydome Serbian Orthodox Uncyclopedia Action Figures for that page, i ought to be drawn, hanged, and quartered. Jsonitsac 00:28, 7 February 2006 (UTC) Queen (gay) Zaeem Mahmood -- how many times can this be huffed? ALOT APPARENTLY!!! KILL IT! Polaquian Tom Cruise's Asshole Members of the Satan's Naughty Spawning Pool Who Like The Chocolate Too Much can you tell??? February 6 Square-Enix The Warriors John Von Goethe Software Development Superb owl Frink irc H4cko Vanity Redirect. Eazy E Yoshi's Island Laundry basket Muck Shoryuken Green's Theorem Blame Canada Gunt Drammen Uisky Babel:Zh/用觸手侵犯裸體高中女學生 Referee_Riley - originally NRV'd in December. appear substantially the same if not shorter Stijn_Klessens - blanked by author following an NRV Michelle_burdett - slandity Newtown High School Interwebbernets Graham Jones Hitlerbear CWAP. --The King In Yellow (Talk to the Dalek.) 14:06, 6 February 2006 (UTC) Kim Deal Michael warner smith Clangers 13375p34k3r Glass house Daveman The Quotes Casey Dillon Paul hunter New Jamie Scott previously huffed vanity shite. --The King In Yellow (Talk to the Dalek.) 20:27, 6 February 2006 (UTC) Uisky Eazy E NTL George Washington Colonials Horshoe February 5 Fayt Leingodnot a QVFD candidate -F 3912 The_Wombles Melanie_tracey Morten_meland Bra Wolverhampton Board Te:Genesis Irrational exuberance Lauren Savoie Helen ahn Ucows Delete this, and all subpages --LoogieNRV'd February 4 Cave menWIP'ed by owner. Sparing, although it makes me sad. 
-F Hillbilly Dan Gardner Sir Teddy Bautista QUUUFDHJEIFBNGBGAKSFHKKJHRUVRNFUBVHDKJFKJVGDKJSDJV Hāttānusvārāpājjatārattāmahattārhāmachātta Makkara Excellion Empire ZH:蘿莉控 Zh:DVD後援會 Zh:HKGolden Zh:MK文化 Zh:三國人 Zh:三屍十一焗 Zh:三皇五帝 Zh:三萌主義 Zh:世界十強武者 Zh:中出 Zh:中出妖怪 Zh:中國經濟史 Zh:中華人民共和國 Zh:九龍 Zh:全鐵聯 Zh:共匪 Zh:卡波提耶榨汁姬 Zh:台巴子 Zh:台灣 Zh:同人誌 Zh:同志 Zh:呂布 Zh:大硬 Zh:安祿山 Zh:宮崎勤 Zh:尾行癡漢 Zh:御宅族 Zh:思覺文化 Zh:愛洨會 Zh:懂建華 Zh:懂趙紅聘 Zh:戰你娘親 Zh:拜物教撲滅組織 Zh:指搗 Zh:支那 Zh:支那XX Zh:支那人 Zh:支那健畜 Zh:支那問候語 Zh:新年 Zh:新石器年代 Zh:曹操 Zh:曾飪豚 Zh:東瀛女優國 Zh:機動廚房 ─ 種廚的命運 Zh:檯灣 Zh:死亡筆記 Zh:水銀黨 Zh:江憐福田 Zh:泡泡狗無差別逆天帝國軍 Zh:洨者鬥惡龍 Zh:漢奸 Zh:漫遊逆天周報 Zh:火星大王 Zh:煤坑 Zh:煽風點火指南 Zh:熊貓 Zh:熱血漢奸Online Zh:瑞士 Zh:用觸手侵犯裸體妙齡美少女 Zh:癡漢 Zh:禽流感 Zh:第七次火星獨立戰爭 Zh:第二次愛麗絲大戰 Zh:簡化字 Zh:網路上沒人會知道你是一條狗 Zh:網路的目的 Zh:老師 Zh:老師評論雷特伊海戰 Zh:股票 Zh:胸部奧林匹克 Zh:腐林泡瓦基夫 Zh:腐田 Zh:致敬 Zh:苦狗公司 Zh:薔薇少女 Zh:薔薇少女之愛麗絲計劃 Zh:薔薇的鍊金術士 Zh:蘿莉 Zh:蘿莉控 Zh:費達拿 Zh:赤兔 Zh:運動貴族 Zh:邪教 Zh:鄭先生 Zh:鄭欣宜 Zh:鍊金術 Zh:鐵拳無敵孫中山 Zh:雪蛤雞精 Zh:電車痴漢 Zh:顏福偉 Zh:香港 Zh:香港高登討論區 Zh:體帝比爾 Zh:高達武鬥會 Zh:魁!!李塾 Zh:黃耀銓 三屍十一焗 曾蔭權 漫遊逆天周報 禽流感 費達拿 香港 香港高登討論區 Babel:Zh/DVD後援會 Babel:Zh/MK文化 Babel:Zh/三國人 Babel:Zh/三屍十一焗 Babel:Zh/三皇五帝 Babel:Zh/三萌主義 Babel:Zh/世界十強武者 Babel:Zh/中出 Babel:Zh/中國經濟史 Babel:Zh/中華人民共和國 Babel:Zh/九龍 Babel:Zh/伪基百科 Babel:Zh/共匪 Babel:Zh/台巴子 Babel:Zh/同志 Babel:Zh/呂布 Babel:Zh/大硬 Babel:Zh/安祿山 Babel:Zh/御宅族 Babel:Zh/恶搞 Babel:Zh/懂建華 Babel:Zh/懂趙紅聘 Babel:Zh/戰你娘親 Babel:Zh/支那 Babel:Zh/支那XX Babel:Zh/支那人 Babel:Zh/支那問候語 Babel:Zh/新石器年代 Babel:Zh/曹操 Babel:Zh/曾飪豚 Babel:Zh/東瀛女優國 Babel:Zh/檯灣 Babel:Zh/水銀黨 Babel:Zh/泡泡狗無差別逆天帝國軍 Babel:Zh/漫遊逆天周報 Babel:Zh/煤坑 Babel:Zh/煽風點火指南 Babel:Zh/熊貓 Babel:Zh/熱血漢奸Online Babel:Zh/用觸手侵犯裸體妙齡美少女 Babel:Zh/癡漢 Babel:Zh/第二次愛麗絲大戰 Babel:Zh/簡化字 Babel:Zh/網路上沒人會知道你是一條狗 Babel:Zh/老師 Babel:Zh/股票 Babel:Zh/胸部奧林匹克 Babel:Zh/腐林泡瓦基夫 Babel:Zh/苦狗公司 Babel:Zh/薔薇少女 Babel:Zh/薔薇少女之愛麗絲計劃 Babel:Zh/薔薇的鍊金術士 Babel:Zh/蘿莉 Babel:Zh/轉換 Babel:Zh/邪教 Babel:Zh/鄭先生 Babel:Zh/鍊金術 Babel:Zh/雪蛤雞精 Babel:Zh/電車痴漢 Babel:Zh/顏福偉 Babel:Zh/香港 Babel:Zh/香港高登討論區 Babel:Zh/體帝比爾 Babel:Zh/高達武鬥會 Babel:Zh/魁!!李塾 Babel:Zh/黃耀銓 February Part 3: Revenge of the CRAP! 
Zac soloman Completely incoherent; 1 line. --Xiao Li 01:03, 4 February 2006 (UTC) Paz 2 lines --Xiao Li 01:01, 4 February 2006 (UTC) Babel:Zh-hant/尾行癡漢 Translation: "I have a classmate who is like an idiot. We call him Orangutan Wang." --Xiao Li 00:55, 4 February 2006 (UTC) Blind people awareness day Pope Caroline III Einzelwortbeantwortungsneigung The Great Bunny of Christ Ressurection (redirect) Catting Robert j m More Robert Morris shit... fuckemkillemneatem. --The King In Yellow (Talk to the Dalek.) 15:43, 3 February 2006 (UTC) Hoffing Foot fungus Social Morons Loren hein Makkara Chris Bamford UnNews:Arkit Zorkatt's guide to moron-proofing your house Thomas Rawlings vanity? slander? Crap - YES --The King In Yellow (Talk to the Dalek.) 18:08, 3 February 2006 (UTC) Fucking up Recapitate It really kicks the llama's ass Fenerbahce Oliver's Army Intentionally blank IMHO, Nihilism does it much better. --The King In Yellow (Talk to the Dalek.) 18:54, 3 February 2006 (UTC) Helminen Cornish Great Shit Epidemic yeah, this is obviously shit... --The King In Yellow (Talk to the Dalek.) 19:22, 3 February 2006 (UTC) Star Wars: Battlefront 3 Helminen Bagard Indefiniitti Uncyclopedia v2.2.06 User:Nerd42/chat idea not working Zoidberg Mark johnson Kid A Rock opera Oral Claudiu Metalcore The beginning of space and time TV Trwam Mark johnson The White Person of the Year Award World Garlic Festival Puff the Magic Yongtastic Timothy McVeigh- VFD material, maybe, but not QVFD Joanna newsom Xiayi Supers Blunt Wools Wiccapedia Xmen Kade sherlock Time Lord WTF Radiation Hugh jackman Munich Gem of Immortality Kid vid Super Asian War Turbo Breakdance no, it eatz. February 0.99999th Jean_Sibelius Doors_OS Eurojesii Nauseacaa t3h m0v3h Thine mommah Hedemark Mein Kampf, the movie Lillehammer Nauseacaa Passes a Valley of Wind Kyle Orton Warriors of teh Nauseacaa Jean Sibelius Teal Symphony of a Thousandth - NRV Pain Fairy utter shit. --The King In Yellow (Talk to the Dalek.) 
18:15, 1 February 2006 (UTC) The pora Micheal miller Michael miller - NRV because I don't know who it is Lumines Cunt_cunt_cunt_cunt_crap_crap_shit - Kept an Uncyclopedia institution Pierre elias Pyromania Doors OS Illegal immigrant Spat This page does not exist Mail Droppings Maher michel Womanfolk Being shot in the head with a 12-gauge Thermos Kristi Seks McNugget Heavy airship Banuary 31st Greenwich Mean Time War Tancos Pasi pyyppönen Communes Uncle Slappy's Fun-Tank Galaxy Fighters Egged Elbonics Devil mice Soviet Super Secret Spies Can Taustin - Slandity, deep in the heart of meeee...! Jennings Cyrus the Great US of Canada Jennings/ Frost Valley Supo No! Fifteenth Century Would you like to cybur? seriously... can we ban this fuck? --The King In Yellow (Talk to the Dalek.) 19:34, 31 January 2006 (UTC) Narutard Wery Dark Priest - from slander to vanity... --The King In Yellow (Talk to the Dalek.) 20:17, 31 January 2006 (UTC) Urho Kekkonen's corpse Wes Borland Muhammed - double redirect, all links resolved --GOD! 20:04, 31 January 2006 (UTC) Get rich or die tryin' Shankism Liberal Democrats Return to Castle Wolfenstein The lovechild of Mark Twain and Oscar Wilde Kilroy NNP vanity shite. Jamie Scott thanks for the connection. Pitchfork media Michael Hoyle ... and, I'm spent. --The King In Yellow (Talk to the Dalek.) 21:36, 31 January 2006 (UTC) Yanuary Dirtietht Zh:%E6%9B%BE%E9%A3%AA%E8%B1%9A Flied_Lice Image:Scivsnorse.gif Please delete. I accidentally uploaded a GIF, now replaced with a PNG version. For parody use in the Unintelligent Design article. --Lt. Sir Orion Blastar (talk) 19:01, 30 January 2006 (UTC) Euronintendo - it was only mildly funny the first time, same lame joke twice makes it crap Uncyclpedia wow... how clever. Anton Fucker Tim quaedackers Superhans Absolute zeroHas been expanded... let's see where it winds up. --The King In Yellow (Talk to the Dalek.) 15:34, 30 January 2006 (UTC) Sex addiction yeah, clearly this was written by a sufferer... 
right. Jedi civil wars KOTOR was fun... your "article" isn't. Spinestealers Stew the hell? Stu ok, now this smells like slandanity... Ascetic Matthew Lythell Word association Frump no, that's a Fanzanun! ELizabeth Wilkins Lose then why am I not on this page? Temperature The you lose game Jimp The Fear Power cut Timmy this isn't South Park. --The King In Yellow (Talk to the Dalek.) 18:56, 30 January 2006 (UTC) Therushforum.com John Techno's Blitzkrieg Allstars nor here, for that matter Vandread moved to undictionary. Bubble Mutant Guinea Pigs Thomas Oland SUJETO EXTREMADAMENTE GUAPO Bent ballestad Maroon 5 Quarter knight Becky pierce Robert Bentall crap, NNP Ghoti Dganooweary 29 David Crowder MerCuryRisIng Homsexual --Xiao Li 05:47, 29 January 2006 (UTC) Slime volleyball --Xiao Li 05:47, 29 January 2006 (UTC) Dominic tolan --Xiao Li 05:47, 29 January 2006 (UTC) Loas Visual C plus plus Tony-Tony Chopper Rubber chicken (though I did laugh) Dark Priest I love this company Marta Maes Hughes IntellectualPanther This Granary 28 Notary Public - NRV Babel:Ru/Бля obscenity, two sentences Compy 386 The complete list of integers between 1 and 20 Joseph ThomasJoseph thomas & Template:Joseph Thomas (!!) William Blake A.R.E. camp Fry-day #Esc (27) Page52 Eddie Cahill, it's rubbish. Pommie bastards Fizz Matt price Junglist The recent news page has come under attack by hordes upon hordes of red text. The text has offered no conditions for surrender. asshat... Simon Facer crap, how dare you slander God Bowie? GUNI-CUNI recreated, previously huffed United states of Fatbastard Blood Gulch hah... somehow I doubt that on so many levels --King In Yellow 16:40, 27 January 2006 (UTC) Talk:Simon Facer leftover from a deleted article Eddie Cahill Geta Calm down, Gorski... have another onion. --The King In Yellow (Talk to the Dalek.) 18:35, 27 January 2006 (UTC) Neighbor of the Beast uh... wha? --The King In Yellow (Talk to the Dalek.) 19:59, 27 January 2006 (UTC) Zelda3908 site plugging... 
Code lyoko Now jill is at yahoo and is enjoying sybersex right now. Jill Fat Females Bow-chica-bow-wow Your own cum Bryant Gumbel - crappy crappy crap crap --biggy 22:22, 27 January 2006 (UTC) Buttsex --Xiao Li 23:13, 27 January 2006 (UTC) Clock Crew --Xiao Li 23:14, 27 January 2006 (UTC) Thusday the 26th of Zombiebaron Imperial Bastards Of That Time MS-Word - redir to MS Word * Keep. What's wrong with having an extra redirect? * Nothing links to the redirect (except this QVFD link) so it won't do anything at the moment. --Rcmurphy Imatra Paul Ciancarine ...vanity... Daniel ...vanity... Bandwagon ...factual... Nabraska duuuuuuuuh Hammer Time Cassie Lamison Cassie Lamison and her friends CHAD HAS A BLOWN OUT HOLE IN HIS UNDERPANTS!!! woo fuckin hoo. Adam Curry Crap Adam Levine Troo Crap Adam Lopez More crap Antler Youth duh- NRVed Awards/MetaAward/Hack didn't work, obviously Bday:RecentMeta - might be in use by someone? Blur/test Dungeons master- redirected to Dungeons and Dragons Haseeb's camel MyLatestScript Sorry, just showing some writerguy how wiki could work for him. Monkeynucleosis - it has returned from the dead, disintegrate it! Seapoose - NRV L.A. Woman tagged for rewrite long ago, no action on either this page or creator's. -Also semifactual- Kala Jones this kid needs therapy. Joanne alce more of the same Pokeyman Edward pache Jan Brady 25 Zaeem mahmood Zaeem Mahmood Quartugals Of Portugal Piers Slandanity Alan Smith Template:Quantum An experiment in templating gone horribly wrong. Sucatraps Iain Duncan Smith - NRV "Sheep Shagger" Spike Dirk Emmer - Qua? Midgar Meccano Imatra Bronks Jelly recreated Slick owen CRAP The Potatoe-man Dan Quayle lives Watskeburt I nominate the Uncyc article John Sparrow David Thompson for quick deletion. 
The only notable John Sparrow David Thompson is the Canadian prime minister who served in the late 1800s -Wikipedia:John Sparrow David Thompson - and the Uncyc article has nothing to do with that famous Canadian and is totally random drivel about a sailor/pirate. It has no business hogging up the name space, especially since that's a name that the Canadian prime minister template Template:CanPM links to.--Ogopogo 04:45, 25 January 2006 (UTC) Self referential humour
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:QuickVFD/archive8
Posted 22 Sep 2011 Link to this post

D:\CI\eZone_Working\eZoneUITests\eZoneUITests.csproj(91,3): error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Telerik\WebUITestStudio\Telerik.WebUITestStudio.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
Failed to start MSBuild. External Program Failed: C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe (return code was 1)

Posted 23 Sep 2011 Link to this post

This is basically being caused by this line contained in the .csproj file:

<Import Condition="true" Project="$(MSBuildExtensionsPath)\Telerik\WebUITestStudio\Telerik.WebUITestStudio.targets" />

The Ultimate Collection is basically a set of installers, and our licensing allows you to install them on only one machine at a time. Thus you will consume one full license for all components of your Ultimate Collection by installing it on your build server. That sounds like a heavy price to pay to get Test Studio builds to work on your build server (unless it's a license you don't need elsewhere). I understand your frustration and can assure you that we value you and all our customers and will correct the wording on our website. I'd like to take this conversation offline - I will send you an email shortly.

Posted 26 Sep 2011 Link to this post

C:\Program Files (x86)\MSBuild\Telerik\WebUITestStudio\Telerik.WebUITestStudio.targets(9,5): error MSB4062: The "ArtOfTest.WebAiiVSIP.CodeGeneration.GenerateElementsTask" task could not be loaded from the assembly C:\Program Files (x86)\MSBuild\Telerik\WebUITestStudio\PrivateAssemblies\ArtOfTest.WebAiiVSIP.dll. Could not load file or assembly ':\Program Files (x86)\MSBuild\Telerik\WebUITestStudio\PrivateAssemblies\ArtOfTest.WebAiiVSIP.dll' or one of its dependencies. The system cannot find the file specified.
Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. [D:\CI\eZone_Working\eZoneUITests\eZoneUITests.csproj]

Posted 29 Sep 2011 Link to this post

Just to summarize the outcome of our GoToMeeting today... we discovered that you do not have a full version of Visual Studio installed on your CC.NET build server. As a result the Test Studio installer did not install the support for compiling Test Studio test projects via MSBuild. Right now you have two choices:

1) Install a full version of Visual Studio (Professional edition will suffice), then re-install Test Studio Run-Time
2) Split out the Test Studio project into its own VS solution. Don't ask CC.NET to build it. Instead execute the tests via our Scheduling server or using our ArtOfTest.Runner as a task in your CC.NET build script.

I have taken this feedback and forwarded it to our product manager as a feature request. Hopefully someday in the not too distant future we can add support for your current environment.

Posted 30 Sep 2011 Link to this post

Hi Cody,

Ok, I installed VS 2010 Pro and the project is now building. I do get this warning, but I am sure it is not an issue:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1360,9): warning MSB3247: Found conflicts between different versions of the same dependent assembly. [D:\CI\eZone_Working\eZoneUITests\eZoneUITests.csproj]

Anyway, I have two other questions:

1. Whenever someone logs into our build server we get a UAC message asking "Do you want to allow the following program from an unknown publisher to make changes to this computer" Program name: Telerik.TestStudio.Scheduling.Setup.exe
2. Do you have an example of how I can get ccnet to run a test after a build?
Chances are the MSB3247 warning is the result of having both Visual Studio and "Microsoft Visual C++ Compiler" installed at the same time. For item 1), if you have no plans to use our Scheduling Server you can uncheck that feature during the install of the Run-Time. Then that feature won't get installed and it will stop that warning message. For item 2), this KB article should help. I have filed a feature request here about not requiring full Visual Studio to be able to build Test Studio projects in your environment.

Posted 04 Oct 2011 Link to this post

<!-- Delete the test results file first. This is required as MSTest will not create the file if it exists; this could be merged with the mstest action in a single batch file -->
<exec>
  <executable>$(windir)\system32\cmd</executable>
  <baseDirectory>D:\CI\eZone_Working\eZoneUITests</baseDirectory>
  <buildArgs>/c if exist Faculty_TestResults.trx del Faculty_TestResults.trx /f</buildArgs>
  <buildTimeoutSeconds>30</buildTimeoutSeconds>
</exec>
<!-- Call mstest to run the tests contained in the TestProject -->
<exec>
  <executable>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe</executable>
  <!-- testcontainer: points to the DLL that contains the tests -->
  <!-- runconfig: points to the solution's testrunconfig that is created by vs.net, lists what tests to run -->
  <!-- resultsfile: normally the test run log is written to the uniquely named testresults directory;
       this option causes a fixed name copy of the file to be written as well -->
  <buildArgs>/testcontainer:bin\debug\eZoneUITests.dll /test:Faculty /resultsfile:Faculty_TestResults.trx</buildArgs>
</exec>
</tasks>
<publishers>
  <!-- to get the test results in the dashboard we have to merge the results XML file -->
  <merge>
    <files>
      <file>D:\CI\eZone_Working\eZoneUITests\Faculty_TestResults.trx</file>
    </files>
  </merge>
  <xmllogger />
</publishers>

The build log showed:

Microsoft (R) Test Execution Command Line Tool Version 10.0.30319.1
Test Faculty cannot be found.
Starting execution...
No tests to execute.

When using MSTest to execute tests you actually need to use a command line syntax like this: mstest
/testcontainer:.\GoogleSearch.tstest

Note you do NOT point to the .dll that the test will be using. If you create a Visual Studio test list you want to use this syntax:

mstest /testlist:SampleTestList /testmetadata:..\MSTest-Tutorial.vsmdi

This is documented here. NOTE: MSTest cannot be used to execute a Test Studio test list (i.e. a xxxx.aiilist file).

Posted 05 Oct 2011 Link to this post

.\Faculty.tstest
Property accessor 'Name' on object 'ArtOfTest.WebAiiVSIP.WebAiiTest' threw the following exception: 'Object reference not set to an instance of an object.'

We neither recommend nor discourage anyone from using MSTest (with CC.NET or any other build system). It is simply one way of running tests that works just fine. There are other ways of running tests; which you select depends on what is and isn't important to you. For example, some customers like the ability to publish the results from an MSTest run back into TFS. Others don't care (or don't have TFS). Can you run MSTest from a Visual Studio command prompt? Try manually running something like this at the command line and tell me what happens:

mstest /testcontainer:.\GoogleSearch.tstest

If that gives you problems I'd like to look at this problem directly on your computer via GoToMeeting.

Posted 06 Oct 2011 Link to this post

Would you share with me the <exec> section from your ccnet config so I can confirm it is set up correctly?

<!-- this option causes a fixed name copy of the file to be written as well -->
/testcontainer:.\Faculty.tstest /resultsfile:eZone_TestResults.trx

Posted 11 Oct 2011 Link to this post

I am confused by the results compared to the command line. What confuses me is this line from the results:

<message>Loading bin\debug\eZoneUITests.dll...</message>

This implies you used a command line like: "MSTest /testcontainer:bin\debug\eZoneUITests.dll" We do not want to point MSTest to the dll.
We want to point it to the .tstest file, the way you have stated in your last message "/testcontainer:.\Faculty.tstest". Would you mind trying it one more time? It just doesn't make sense you would get the results you got given that command line.

/testcontainer:.\Faculty.tstest /resultsfile:eZoneTestResults.trx

Posted 12 Oct 2011 Link to this post

I'm going to set up a CC.NET server here and see if I can figure out what's going on. Please give me until Monday to work on this.

Posted 20 Oct 2011 Link to this post

Posted 24 Oct 2011 Link to this post

I apologize for the delay getting back to you. I finally got to the bottom of this. Turns out there's a very subtle bug in Test Studio that is causing the error:

Failed to queue test run '<test run name here>': Value cannot be null. Parameter name: path1

I have filed a high priority bug here. Fortunately there are a couple of easy workarounds.

Workaround A:
1) Load the test in Test Studio
2) Add a coded step
3) Delete the coded step

This modifies the test definition such that it is now compatible with MSTest.

Workaround B:
1) Load the test in Visual Studio
2) Make any change to the test
3) Undo the change (unless you meant to keep the change)
4) Save the test

This also modifies the test definition such that it is now compatible with MSTest.

Yes, running CC.NET as a service will cause a problem with dialog handling and/or any test steps that need to move the mouse or simulate typing at the keyboard. If your tests do any of these things you will need to run CC.NET via the command line. Lastly, I found that the configuration of a project in CruiseControl has changed significantly since I wrote our documentation. I'll work on creating a new document showing how it's done in the current version.
In short this is the project configuration I used in my config.xml:

<project name="Test_Studio_Project_A">
  <modificationset quietperiod="30">
    <!-- touch any file in Test_Studio_Project_A project to trigger a build -->
    <filesystem folder="projects/${project.name}"/>
  </modificationset>
  <schedule interval="10">
    <exec command="C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe"
          workingdir=""
          args='"/testcontainer:C:\Program Files (x86)\cruisecontrol-bin-2.8.4\projects/${project.name}\WebTest Yahoo.tstest"'/>
  </schedule>
</project>

Thanks. Also, we have ccnet running as a service under the local system account with "interact with desktop" on, and I get this issue:

------------------------------------------------------------
'24/10/2011 9:58:53 AM' - Starting execution....
'24/10/2011 9:58:55 AM' - LOG: Unexpected dialog encountered. Closing the dialog, and halting execution.
'24/10/2011 9:58:56 AM' - 'Pass' : 1. Navigate to : ''
'24/10/2011 9:58:56 AM' - 'Fail' : 2. Wait for element 'ContentPlaceHolder1LgnLoginUserNameText' 'is' visible.
Failure Information:
~~~~~~~~~~~~~~~
Unexpected dialog: Privacy

I am thinking this is because IE has never been run under this account and is prompting for first-run settings. Do you know how to fix this? Sorry, I am not sure what you mean here: "Yes, running CC.NET as a service will cause a problem with dialog handling and/or any test steps that need to move the mouse or simulate typing at the keyboard" - is this when you have "SimulateRealClick" and "SimulateRealTyping" checked?

Posted 26 Oct 2011 Link to this post

First, PITS bug 8268 was just fixed. The fix will be included in our next internal build due out on Friday this week. To answer your next question, yes, "SimulateRealClick" and "SimulateRealTyping" is exactly what I'm referring to. Plus there's also the mouse actions as described in the middle of this documentation page. No, I am sorry, I do not know how to fix the error you have posted.
I am not aware of anyone successfully running Test Studio tests using CC.NET running as a service. That's the reason I instruct all my customers to run CC.NET via command line. I tell our TFS customers the same thing when they're trying to run Test Studio tests as part of their TFS builds.
http://www.telerik.com/forums/cruisecontrol-net
Modular plugin programming

Hi All, I'm writing a plugin that imports, processes, and exports 3d files (whatever Cinema 4D supports). It has two ways of working:
- using parameter -scene, it merges all the top-level objects, checks if there are duplicates, and then checks if the object is already stored in a database
- using parameter -object, it just merges everything in the scene (removing cameras, lamps, etc.) and then exports the object

I wanted to keep the code in modules, storing the first part in a scene_exporter.py and the second one in object_exporter.py, and here come the problems. I don't know if it's because the plugin is in a .pyp file, but the import just doesn't work. What should I do to keep the code modular?

Hello, there should be no problem importing *.py files from a *.pyp file. What do you mean with "import just doesn't work"? For example, you can have a formulas.py file (next to your *.pyp) that looks like this:

    def SomeCalculation():
        return 5

You can import it in your *.pyp just with

    import formulas

and then use

    number = formulas.SomeCalculation()

It gets a little bit more complicated if you want to register plugins in your sub-modules. In that case you would have to hand over the __res__ structure to these modules.
best wishes, Sebastian

Let me share an example. This is a simple hello world program. In 'hello.pyp' I have (how do I format the text to code?):

    import c4d
    import sys
    import hello_function

    def PluginMessage(id, data):
        if id==c4d.C4DPL_COMMANDLINEARGS:
            hello_function.say('Hello World')
            return True
        return False

In 'hello_function.py' I have:

    def say(word):
        print(word)

When I run this from the command line, in the console I get the error that the hello_function package has not been found. They're in the same folder. I don't see any reason why this is not working.

Hello, you find information on how to format code in this thread: How to Post Questions.
best wishes, Sebastian

Thanks. Any hint why the hello world code I wrote is not working?
Hello, you might have to add the pyp file's path to the system path using:

    sys.path.append(os.path.dirname(__file__))

so

    import sys
    import os

    sys.path.append(os.path.dirname(__file__))

    import hello_function

best wishes, Sebastian
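A minimal stand-alone sketch of the same fix, runnable outside Cinema 4D (the module is written to a temporary directory standing in for the plugin folder, and the `say` stand-in returns its argument rather than printing):

```python
import os
import sys
import tempfile

# Stand-in for the plugin folder that holds hello_function.py next to hello.pyp.
plugin_dir = tempfile.mkdtemp()
with open(os.path.join(plugin_dir, "hello_function.py"), "w") as f:
    f.write("def say(word):\n    return word\n")

# The fix from above: put the plugin's own directory on sys.path
# before importing sibling modules from the .pyp file.
sys.path.append(plugin_dir)

import hello_function

print(hello_function.say("Hello World"))
```

Without the `sys.path.append` line the `import hello_function` fails, because the interpreter's working directory is not the plugin's directory when Cinema 4D loads a .pyp file.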
https://plugincafe.maxon.net/topic/11083/modular-plugin-programming
A package is a name for a group of related classes and interfaces. In Chapter 3, we discussed how Java uses package names to locate classes during compilation and at runtime. In this sense, packages are somewhat like libraries; they organize and manage sets of classes. Packages provide more than just source code-level organization. They create an additional level of scope for their classes and the variables and methods within them. We'll talk about the visibility of classes later in this section. In the next section, we discuss the effect that packages have on access to variables and methods among classes.

The source code for a Java class is organized into compilation units. A simple compilation unit contains a single class definition and is named for that class. The definition of a class named MyClass, for instance, could appear in a file named MyClass.java. For most of us, a compilation unit is just a file with a .java extension, but theoretically in an integrated development environment, it could be an arbitrary entity. For brevity, we'll refer to a compilation unit simply as a file. The division of classes into their own files is important because the Java compiler assumes much of the responsibility of a make utility. The compiler relies on the names of source files to find and compile dependent classes. It's possible to put more than one class definition into a single file, but there are some restrictions we'll discuss shortly.

A class is declared to belong to a particular package with the package statement. The package statement must appear as the first statement in a compilation unit. There can be only one package statement, and it applies to the entire file:

    package mytools.text;

    class TextComponent {
        ...
    }

In this example, the class TextComponent is placed in the package mytools.text. Package names are constructed hierarchically, using a dot-separated naming convention.
Package-name components construct a unique path for the compiler and runtime systems to locate files; however, they don't create relationships between packages in any other way. There is really no such thing as a "subpackage"; the package namespace is, in actuality, flat, not hierarchical. Packages under a particular part of a package hierarchy are related only by convention. For example, if we create another package called mytools.text.poetry (presumably for text classes specialized in some way to work with poetry), those classes won't be part of the mytools.text package; they won't have the access privileges of package members. In this sense, the package-naming convention can be misleading. One minor deviation from this notion is that assertions, which we described in Chapter 4, can be turned on or off for a package and all packages "under" it. But that is really just a convenience and not represented in the code structure.

By default, a class is accessible only to other classes within its package. This means that the class TextComponent is available only to other classes in the mytools.text package. To be visible elsewhere, a class must be declared as public:

    package mytools.text;

    public class TextEditor {
        ...
    }

The class TextEditor can now be referenced anywhere. A compilation unit can have only a single public class defined and the file must be named for that class. By hiding unimportant or extraneous classes, a package builds a subsystem that has a well-defined interface to the rest of the world. Public classes provide a facade for the operation of the system. The details of its inner workings can remain hidden, as shown in Figure 6-6. In this sense, packages can hide classes in the way classes hide private members. Nonpublic classes within a package are sometimes called package private for this reason. Figure 6-6 shows part of the hypothetical mytools.text package.
The classes TextArea and TextEditor are declared public so that they can be used elsewhere in an application. The class TextComponent is part of the implementation of TextArea and is not accessible from outside of the package.

Classes within a package can refer to each other by their simple names. However, to locate a class in another package, we have to be more specific. Continuing with the previous example, an application can refer directly to our editor class by its fully qualified name of mytools.text.TextEditor. But we'd quickly grow tired of typing such long class names, so Java gives us the import statement. One or more import statements can appear at the top of a compilation unit, after the package statement. The import statements list the fully qualified names of classes and packages to be used within the file. Like a package statement, an import statement applies to the entire compilation unit. Here's how you might use an import statement:

    package somewhere.else;

    import mytools.text.TextEditor;

    class MyClass {
        TextEditor editBoy;
        ...
    }

As shown in this example, once a class is imported, it can be referenced by its simple name throughout the code. It is also possible to import all the classes in a package using the * wildcard notation:

    import mytools.text.*;

Now we can refer to all public classes in the mytools.text package by their simple names. Obviously, there can be a problem with importing classes that have conflicting names. The compiler prevents you from explicitly importing two classes with the same name and gives you an error if you try to use an ambiguous class that could come from two packages imported with the package import notation. In this case, you just have to fall back to using fully qualified names to refer to those classes. You can either use the fully qualified name directly, or you can add an additional, single-class import statement that disambiguates the class name.
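For example, here is a sketch of such a tie-breaking import (the Appointment class and its fields are our own illustration, not from the text):

```java
import java.util.*;     // one Date lives in java.util
import java.sql.*;      // another Date lives in java.sql
import java.util.Date;  // the single-class import disambiguates "Date"

class Appointment {
    Date scheduled = new Date();  // resolves to java.util.Date
    java.sql.Date dbDate;         // the other Date still needs its full name
}
```

The explicit java.util.Date import takes precedence over both wildcard imports, so the simple name Date becomes unambiguous again.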
It doesn't matter whether this comes before or after the package import. Other than the potential for naming conflicts, there's no penalty for importing many classes. Java doesn't carry extra baggage into the compiled class files. In other words, Java class files don't contain information about the imports; they only reference classes actually used in them.

One note about conventions: in our efforts to keep our examples short, we'll sometimes import entire packages (.*) even when we use only a class or two from it. In practice, it's usually better to be specific when possible and list individual, fully qualified class imports if there are only a few of them. Some people (especially those using IDEs that do it for them) avoid using package imports entirely, choosing to list every imported class individually. Usually, a compromise is your best bet. If you are going to use more than two or three classes from a package, consider the package import.

A class that is defined in a compilation unit that doesn't specify a package falls into the large, amorphous, unnamed package. Classes in this nameless package can refer to each other by their simple names. Their path at compile time and runtime is considered to be the current directory, so packageless classes are useful for experimentation and testing (and for brevity in examples in books about Java).

The static import facility is new in Java 5.0. Using a variation of the import statement, you can import static members of a class into the namespace of your file so that you don't have to qualify them when you use them. The best example of this is in working with the java.lang.Math class. With static import, we can get an illusion of built-in math "functions" and constants like so:

    import static java.lang.Math.*;

    // usage
    double circumference = 2 * PI * radius;
    double length = sin( theta ) * side;
    int bigger = max( a, b );
    int positive = abs( num );

This example imports all of the static members of the java.lang.Math class.
We can also import individual members by name:

    import static java.awt.Color.RED;
    import static java.awt.Color.WHITE;
    import static java.awt.Color.BLUE;

    // usage
    setField( BLUE );
    setStripe( RED );
    setStripe( WHITE );

To be precise, these static imports are importing a name, not a specific member, into the namespace of our file. For example, importing the name "foo" would bring in any constants named foo as well as any methods named foo( ) in the class. Static imports are compelling and make life easier in Java 5.0. Using them too much, however, could quickly make your code difficult to read.
https://flylib.com/books/en/4.122.1.55/1/
IRC log of tagmem on 2002-11-04 Timestamps are in UTC. 19:43:21 [RRSAgent] RRSAgent has joined #tagmem 19:43:24 [Zakim] Zakim has joined #tagmem 19:43:44 [Stuart] So what is the magic incantation to get these two along? 19:45:08 [Ian] "/invite Zakim" 19:45:12 [Ian] "/invite RRSAgent" 19:51:06 [Ian] Ian has changed the topic to: W3C TAG 4 Nov 19:54:19 [Stuart] I tried that without success... got "INVITE :Not enough parameters" in the notices window 19:54:44 [Stuart] Maybe you need operator priv's too...? 19:57:29 [timmit] You need to specify the channel name explicitly in mIRC 19:57:43 [DanConn] DanConn has joined #tagmem 19:57:47 [Stuart] aha 19:58:14 [timmit] "/invike Zaim #tagmem" 19:58:21 [timmit] owtte 19:58:24 [RRSAgent] See 19:58:25 [Stuart] Dan, do you have a a long-distance carrier this week ;-) 19:58:30 [timmit] Zakim, this is tag 19:58:31 [Zakim] ok, timmit 19:58:36 [DanConn] RRSAgent, stop 19:58:42 [DanConn] RRSAgent, start 19:58:47 [DanConn] RRSAgent, pointer? 19:58:47 [RRSAgent] See 19:58:51 [Norm] Norm has joined #tagmem 19:59:15 [Zakim] +??P2 19:59:19 [Zakim] -TimBL 19:59:19 [Zakim] +TimBL 19:59:20 [DanConn] yes, phone problem got cleared up later that day, Stuart 19:59:53 [Stuart] zakim, ??P2 is me 19:59:54 [Zakim] +Stuart; got it 20:00:03 [timmit] Zakim, who is here? 20:00:04 [Zakim] On the phone I see TimBL, Stuart 20:00:05 [Zakim] On IRC I see Norm, DanConn, Zakim, RRSAgent, Stuart, timmit, Ian 20:00:10 [DanC] DanC has joined #tagmem 20:00:15 [timmit] Zakim, where is everyone? 20:00:15 [Zakim] sorry, timmit, I do not understand your question 20:00:17 [DanCon] DanCon has joined #tagmem 20:01:36 [Zakim] +Ian 20:01:59 [Zakim] +Norm 20:02:11 [Ian] Regrets: DO, CL 20:03:29 [Zakim] +DanC 20:03:31 [Ian] zakim, who's here? 
20:03:32 [Zakim] On the phone I see TimBL, Stuart, Ian, Norm, DanC (muted) 20:03:33 [Zakim] On IRC I see DanCon, DanC, Norm, DanConn, Zakim, RRSAgent, Stuart, timmit, Ian 20:03:52 [Zakim] +??P5 20:04:06 [Ian] zakim, ??P5 is Paul 20:04:07 [Zakim] +Paul; got it 20:05:32 [Zakim] +??P6 20:05:40 [Ian] zakim, ??P6 is TBray 20:05:41 [Zakim] +TBray; got it 20:06:30 [Ian] Unknown: RF 20:06:35 [Ian] Regrets: DO, CL 20:06:48 [Ian] Present: TBL, SW (Chair), DC, PC, TB, NW, IJ 20:07:00 [Ian] 28 Oct minutes accepted: 20:07:01 [Ian] 20:07:18 [DanC] DanC has left #tagmem 20:07:28 [Ian] Agenda: 20:08:37 [DanCon] re today's agenda, my action denoted "* Action DC 2002/09/26" was actually from 26Aug, not 26Sep. threw me off for a bit. 20:08:47 [Ian] ok 20:08:52 [Ian] ======= 20:08:54 [Ian] Meeting prep 20:09:00 [Ian] Confirm TAG summary: 20:09:01 [Ian] 20:09:05 [TBray] TBray has joined #tagmem 20:09:13 [Ian] PC: Ok by me. 20:09:17 [Ian] 20:09:52 [Ian] TB: I approve as well. 20:09:55 [DanCon] looks ok to me. $Date: 2002/11/01 13:55:55 $ 20:10:01 [Ian] SW: Ok by me. 20:10:06 [Ian] Summary accepted. 20:10:17 [DanCon] RESOLVED. 20:10:18 [Ian] --- 20:10:27 [Ian] SW: Slides due next week! 20:10:33 [Ian] -- 20:10:45 [Ian] TAG ftf meeting agenda? 20:10:56 [Ian] SW: Four segments: namespace documents. 20:11:27 [Ian] TB: Bad news is that Jonathan won't attend meeting; Good news is that JB and I have reached agreement and I will be posting something about RDDL in the next few days. 20:12:01 [Ian] SW: Second segment - review material for AC meeting. 20:12:09 [Ian] SW: 3? 4? 20:12:24 [DanCon] umm... yeah... chapter 3 on doc formats. 20:12:27 [Ian] SW: Please email me input to the ftf agenda by the end of this week. 20:13:07 [Ian] DC: Arch doc doesn't say much about a self-describing Web (following your nose from one doc to another to build context). 20:13:30 [Ian] DC: TBL, do you believe this is web arch doc? Can we discuss this at the ftf meeting? 
TBL can you write something in advance of the ftf meeting? 20:13:45 [Ian] SW: See RF's posting from today, which I think touches on this topic somewhat. 20:14:02 [Ian] DC: I hadn't read it from that angle. 20:14:22 [Ian] RF posting: 20:14:30 [Ian] DC to TBL: Have you written on this? 20:14:56 [timmit] 20:15:10 [DanCon] i.e. on learning what document X means by following links from X->Y, where you know what Y is. 20:15:19 [Ian] Action TBL: Find or write something about the self-describing Web. 20:15:36 [Ian] TBL: I think that's captured by "grounded documents" in "Meaning". 20:15:36 [timmit] "Grounded documents" in the above 20:15:59 [Ian] Action TBL deleted. 20:16:12 [Ian] Action DC: Review TBL's text to see if there's any part of self-describing Web for the arch doc. 20:16:21 [Ian] ------------------------------ 20:16:27 [Ian] * Potential TAG issue re consistency XQuery/XSchema from Tim Bray 20:16:31 [Ian] 20:16:34 [Ian] See reply from PC: 20:16:59 [Ian] (tag only) 20:17:00 [Ian] 20:17:43 [Ian] [TB summarizes his issue.] 20:18:20 [Ian] TB: I have some technical issues with directions proposed by Query WG. Sharpest aspect is that parts of Query require XML Schema semantics. 20:18:43 [Ian] TB: I sent info to XML Query WG; received a short reply; since then I've received longer replies from individuals of the group. 20:18:54 [Ian] TB: My latest message is an attempt to break down the problem: 20:19:32 [timmit] 20:20:22 [Ian] q+ 20:20:32 [Ian] ack DanCon 20:20:33 [Zakim] DanCon, you wanted to ask timbl about formats 20:20:47 [Ian] TB: This may be a process issue rather than an arch issue. 20:21:54 [Ian] TB: We are moving beyond DTDs (after decades) into new territories of schemas. It seems to me at this point highly architecturally unsound for any really important Recommendation to bet the farm on a particular schema language. 20:22:43 [Ian] TB: XQuery is bigger than it needs to be. 
The WG has done the sensible thing of defining Basic Query (leaving out most of schema bits). There needs to be architectural pressure on groups to do less; ship sooner; ship simpler. 20:23:24 [Ian] ack Ian 20:24:02 [Ian] PC: Sorry for not replying in a more timely fashion to TB's points. 20:24:30 [TBray] q+ 20:24:48 [Ian] PC: On the topic of required integration: WG chartered (twice) to use XML Schema. 20:25:06 [Ian] PC: There haven't been comments prior saying that this is a bad thing. 20:25:24 [Ian] PC: If this dependency is to be changed, then Query WG needs to be rechartered. 20:25:31 [timmit] Firstly, I am surprised that TimBray is not encouraging interdependence between w3c specs - see HTML and Xlink discussion - PC 20:26:06 [timmit] PC: This makes this a process issue 20:26:25 [Ian] PC: IMO, the primary concern in public fora is not dependency on xschema. But rather whether update language is critical (public split 50/50) 20:26:59 [Ian] PC On living in a multiple-schema world: 20:27:53 [Ian] Just because someone waves a standards banner does not mean that 20:27:53 [Ian] the XML Query WG has to change its plans and delay its work to pay 20:27:53 [Ian] attention to such a banner waver. 20:28:04 [timmit] q+ to say that this is primarily an architectural issue in the sense of high-level modular design. It is a question of whether a flexible interface to the schema language should be provided. Of course the process and social issues are intertwined. 20:28:28 [Ian] PC: Perhaps the XML world needs an abstraction that would include the various schema languages. I think there's a work item in the schema charter that covers this item. 20:28:54 [Ian] From charter: "interoperability with other schema languages such as RELAX-NG and 20:28:54 [Ian] Schematron" 20:29:00 [Ian] 20:29:52 [Norm] brb 20:30:22 [Ian] PC: On item three on simplicity: We have worked hard to meet our requirements. To come along and say that the requirements are too big surprises me. 
I don't think that WGs at W3C should be constrained to pursuing only small specs. 20:31:31 [Ian] PC: Basic Query handles Schema Part 2. If we publish Basic Query as our only deliverable, we would not meet our requirements. I don't think that at this point in time we should split our deliverables given the progress we've made on the document. 20:31:35 [Stuart] q? 20:32:43 [Ian] PC: I think it's ok that the query spec is big. Some of the size has to do with clearer expectations about interoperability. 20:33:12 [Ian] PC: TB has identified a long-term goal -- clearer relationships among schema specs -- but I don't think that this should affect Query 1.0. 20:33:53 [Ian] PC: There are a number of XQuery 1.0 implementations, even prior to last call (both Member and non-Member implementers). 20:34:26 [Ian] PC: So TB's arguments sway me less since we have so much implementation experience that suggests we are doing the right thing. 20:34:54 [Ian] DC: Is PC arguing that this or is not a TAG issue? 20:35:28 [Ian] PC: Could be that the TAG issue is on multiple schema languages. Perhaps we could synthesize an abstract model for PSVI processors. 20:36:00 [Ian] DC: Is there an issue in the first place? 20:36:12 [Ian] DC: I'm convinced there's an issue given the substantive email exchanged. 20:36:24 [Ian] ack TBray 20:36:56 [Ian] TB: Tie-in to XLink is a big bogus; the arguments in that case were purely technical, not about it being a W3C spec. 20:37:41 [Ian] TB: In the community of Web designers, there is a wave of horror at the astounding complexity of schema and xpath 2.0. A strong feeling that something has gone amiss somewhere. 20:37:44 [Ian] DC: I have heard similar. 20:38:01 [Ian] TB: I am not simply running off at the mouth here, but I think accurately representing a feeling that's out there. 
20:38:04 [Ian] ack DanCon 20:38:05 [Zakim] DanCon, you wanted to share concerns from the public about XML schema "leaking" into other specs; mostly XPath and to say that nearing last call is *exactly* the time to revisit 20:38:07 [Zakim] ... and confirm or reconsider requirements 20:38:09 [Ian] ack timmit 20:38:10 [Zakim] Timmit, you wanted to say that this is primarily an architectural issue in the sense of high-level modular design. It is a question of whether a flexible interface to the schema 20:38:12 [Zakim] ... language should be provided. Of course the process and social issues are intertwined. 20:38:34 [Ian] TBL: The question is architectural (whatever the charter said). 20:39:25 [Ian] TBL: Modularity is a good thing; can the specs be more modular? 20:39:54 [Ian] TBL: PC and TB do talk to different people (and it's good to hear from all of those people). 20:40:07 [Ian] TBL: It would be obviously costly to do anything to XQuery. 20:40:11 [TBray] q+ 20:40:30 [TBray] q- 20:40:35 [Ian] TBL: I read Xquery and it seemed pretty straightforward to me. 20:40:55 [Stuart] q+ 20:40:58 [Ian] TBL: PC's social point holds (cost of change). 20:41:19 [Norm] Norm has joined #tagmem 20:41:23 [Ian] TB: Query allows querying by types. Allowing query by those 19 data types seems reasonable. 20:41:45 [Ian] [TBL summarizes that TB's concern is about the dependency on part 1 of XML Schema.] 20:41:49 [Ian] ack Stuart 20:42:33 [Ian] SW: Is the focus on a dependency on a single schema language or more specifically on XML Schema? 20:42:55 [Stuart] s/Schema/Query 20:43:08 [timmit] q+ 20:43:16 [Ian] TB: I think that PC is correct -- there's a key technical question about whether XML Schema is a cornerstone of future XML specs. 20:43:32 [DanCon] well, tim, techincally, XML Schema part 2 depends on XML Schema part 1. 20:43:37 [DanCon] timbl 20:43:55 [Zakim] +??P8 20:43:58 [Ian] PC: I think the issue is more about multiple schema languages. 20:44:25 [Stuart] q? 
20:44:27 [Norm] q+ 20:45:02 [Norm] q- 20:45:25 [Ian] zakim, ??P8 is Roy 20:45:26 [Zakim] +Roy; got it 20:45:31 [timmit] q+ to say that this sort of choice has to be made in each case on its merits. 20:45:33 [Ian] ack timmit 20:45:34 [Zakim] Timmit, you wanted to say that this sort of choice has to be made in each case on its merits. 20:46:04 [TBray] q+ 20:46:16 [Roy] Roy has joined #tagmem 20:46:18 [Ian] TBL: I am concerned by extreme stances such as "one should one always use w3c specs"; each case is different. 20:46:45 [Ian] TBL: Several good principles here - reuse stuff; modularity. Need to consider each case. 20:47:15 [Norm] q+ 20:47:20 [Ian] TBL: Just talking about the schema, case I think that it's not interesting to reset the Query WG. What is possible is for someone to find a clever way of achieving what is required. 20:47:31 [Ian] TBL: I haven't understood whether "Basic" is what TB needs. 20:47:51 [Ian] TBL: Is Basic what TB prefers, or is Basic not adequate (and needs tweaking). 20:48:21 [Ian] SW: Please frame comments in terms so we can define this issue. 20:48:39 [Ian] TB: I think that it's a good thing to have lots of schema languages out there since this area is new. 20:48:56 [Ian] TB: We don't have enough experience to know what schema meets which needs. 20:49:04 [Ian] (what schema language) 20:49:21 [Ian] TB: I highly approve of XQuery Basic and would strongly recommend that the WG release that on a separate Rec track. 20:49:45 [Ian] TB: It might even shorten time to Recommendation (for that part of the spec). 20:50:13 [Ian] TB: I have argued (with specifics) about how query/schema can be decoupled. I haven't heard substantive replies to my specific syntax. 20:50:39 [Ian] TB: issue proposal: "Schema languages: What can be said about multiple existing schema languages and their appropriate uses in W3C and the Web more generally?" 20:50:47 [Stuart] q? 20:50:53 [Ian] TBL: More specific than "What can be said about...?" 
20:51:00 [Stuart] ack TBray 20:51:37 [Ian] TB: "Given the existence of more than one XML schema languages; what architectural implications does the use of a particular language have? To what extent is it useful to bind to all schema languages or a particular one?" 20:52:08 [Ian] DC: I'd be happy to consider "To what extent should schema be integrated into xpath and xquery?" 20:52:15 [Ian] DC: That's the concern I hear at confs. 20:52:18 [Ian] q? 20:52:21 [DanCon] xpath, that is 20:52:27 [Ian] ack Norm 20:52:32 [Stuart] ack Norm 20:52:43 [Ian] NW: I have a lot of the same concerns as TB. Though I'm not sure what the issue is, exactly. 20:52:54 [Ian] NW: I think the pragmatic issue will be setting the conformance levels right. 20:53:43 [Ian] NW: Substitution groups and inheritence look like they'd be hairy to decouple. 20:53:52 [DanCon] sigh. conformance levels are evil. This was a priniciple of XML 1.0 (which XML 1.0 didn't quite meet, actually) and it continues to be important. 20:54:39 [Ian] PC: What about extending DC's proposal to xforms and wsdl? 20:54:51 [Ian] DC: Not concerned about those as much as xpath, and xquery. 20:54:56 [Ian] NW: I'd support DC's proposal 20:55:06 [Ian] PC: I vote against the issue as proposed. 20:56:34 [Norm] q+ 20:56:39 [Ian] PC: XQuery 1.0 handles DTD and XML Schema. It's not been on the WG's work plan to handle other schema languages. 20:57:00 [Ian] PC: And it seems that the XQuery WG charter has as a work item addressing additional schema languages. 20:57:14 [Ian] PC: I don't understand why the TAG has to take this up since the WGs have items on their work plans. 20:57:38 [Ian] NW: I don't think that there's evidence that xquery and xpath will support xml schema and dtds equally well. 20:58:56 [Ian] RF: There seems to be an awful lot of support for Relax 20:59:23 [Ian] Proposed: Adopt as a new issue "To what extent should xml schema be integrated into xpath and xquery?" 
20:59:56 [Ian] PC: I oppose this as an issue; I don't see what the architectural issue is from this wording. 21:00:26 [Ian] For: DC, TB, NW. 21:00:35 [Ian] Abstain: TBL, RF, SW 21:00:57 [Ian] q? 21:01:00 [Ian] ack Norm 21:01:32 [Ian] PC: If there an arch issue, I think it's about how schema languages interrelate. I'd like to take offline with TB and refine this. 21:03:07 [Norm] Yes, please 21:03:33 [Ian] [No action item assigned.] 21:03:38 [Ian] -------------------- 21:03:54 [Ian] * Use of frags in SVG v. in XML 21:03:54 [Ian] o Action DC 2002/09/26: Describe this issue in more detail for the TAG. Done 21:03:59 [Ian] 21:04:23 [DanCon] 21:05:38 [Norm] q+ 21:05:48 [Ian] DC proposed issue: "Use of fragment identifiers in XML". I think that CL might disagree with me, but I take that as evidence that there is an issue. 21:07:11 [Ian] TB: Is there not already an architectural slam dunk: RFC2396 says that what comes after # is up to the spec. 21:07:24 [Ian] DC: There are cases where two specs define what happens. 21:07:44 [Ian] DC: It seems to me that it means something, but it doesn't have to be exhaustive or exclusive. 21:07:57 [timmit] q+ 21:08:31 [Ian] TB: I could almost see a principle that says "When there is a language that might be served wtih one of multiple media types, inconsistencies in meaning for frag ids is harmful." 21:08:47 [Ian] SW: RFC2396 also discourages inconsistency. 21:08:49 [Ian] ack Norm 21:08:50 [Norm] q- 21:08:52 [Ian] ack Timmit 21:10:26 [Ian] TBL: We can ack the inconsistency in the architecture (e.g., when coneg is used). You can serve an HTML page as text/plan. You could serve up, similarly, a bag of bits using the appropriate mime type to give the meaning of a dog or car. 21:10:48 [Ian] TBL: I have resisted bringing in mime types. I've become more comfortable with the idea of using mime types to give a particular view on data. 21:11:27 [Ian] TBL: I think there is an issue here that we should write up. 
Fortunately, I think we can write it up and resolve it. 21:11:32 [Ian] q? 21:11:43 [Ian] [Straw poll] 21:11:58 [Ian] PC: I'm uncomfortable about doing this without Chris Lilley present. 21:12:36 [Ian] DC: That doesn't convince me that we shouldn't call the question, see if there's support today, and moving on. 21:12:49 [Ian] SW: Active support for the proposed issue? 21:12:56 [Ian] For: NW, TBL, DC, SW, RF 21:13:01 [Ian] Abstain: PC, TB 21:13:27 [Norm] People would like to be able to inject processing instructions (not PIs, but semantics) into fragment identifiers. That's where I'm feeling the pain today. 21:13:35 [Ian] Accepted: fragmentInXML-28. 21:13:41 [Ian] Action IJ: Add to issues lsit. 21:14:02 [Ian] ---- 21:14:04 [Ian] Findings versioning 21:14:10 [Ian] Proposal: 21:14:14 [Ian] 21:14:52 [Ian] DC: Formalizing this is burdensome. 21:15:13 [Ian] [DC: I feel differently for tech reports.] 21:15:29 [Ian] SW: I didn't want people to refer to things that would change. 21:15:35 [Ian] DC: Such is life. 21:15:40 [Ian] DC: Do other people really want to do this? 21:15:52 [Ian] SW: For me, this is what I'd like for findings. 21:15:53 [Ian] PC: Works for me. 21:16:03 [Norm] NW: It works for me, too. 21:16:26 [Ian] IJ: Number of findings per year (6 in 2002) seems manageable. 21:16:30 [Ian] SW: Ok, we 21:16:33 [Ian] will run with this. 21:16:35 [Ian] --------------------- 21:16:36 [Ian] Arch Doc 21:16:59 [Ian] Action IJ: Make this policy known to www-tag and link from findings page. 21:17:07 [Ian] --- 21:17:08 [Ian] Arch Doc 21:17:17 [Ian] 29 Oct draft: 21:17:33 [Ian] Is RF's action done? 21:17:38 [Ian] 1. Action RF 2002/09/25: Propose a rewrite of a principle (rationale -> principle -> constraint) to see whether the TAG prefers this approach. It was suggested that the example be about HTTP/REST, as part of section 4. 21:17:49 [Ian] 21:18:04 [DanCon] roy writes "I give up" as if to say "please withdraw this action" but I found his messag quite responsive to the action. 
21:18:06 [Ian] RF: Regarding earlier question: are xquery and xml schema orthogonal? 21:18:30 [Ian] q+ 21:18:40 [Ian] TB, DC: I found the approach appealing. 21:18:46 [Ian] IJ, SW: Same here. 21:18:54 [Ian] ack DanCon 21:20:18 [Ian] IJ: "Change is inevitable, and therefore evolution should be planned." 21:20:31 [Ian] IJ: Seems like "evolution shoudl be planned" is for agents, not the system. 21:21:03 [Ian] IJ: Does "requirements" mean requirement on the designers or the system? 21:21:14 [Stuart] ack Ian 21:21:23 [Ian] RF: "The system needs to be be able to evolve since change is inevitable." 21:21:42 [Ian] TB: "Evolution should be planned *for*; when change happens things should not fall apart." 21:21:55 [Ian] RF: Regrets for 11 Nov. 21:22:12 [DanCon] I'm avilable 11Nov 21:22:15 [Ian] Next meeting: 11 Nov. 21:22:20 [Ian] RF: Possible regrets for 18 Nov. 21:23:02 [Ian] TBL action regarding info hiding done. 21:23:13 [Ian] CL Action about chapter three not done. 21:23:13 [Ian] NW: # Write some text for a section on namespaces (docs at namespace URIs, use of RDDL-like thing). 21:23:14 [Ian] Not done. 21:23:23 [Ian] # Action DC 2002/10/31: Resend redraft of arch doc section 2.2.1 on URIEquivalence-15. DC and IJ discussed on 30 October. Should IJ incorporate those comments in next draft? 21:23:35 [Ian] DC: Yes, IJ please incorporate 21:25:52 [Ian] IJ: What are our expectations for doc before AC meeting? 21:26:09 [Ian] PC: I am more comfortable approving 29 Oct draft and approving a bigger change at the ftf meeting. 21:26:54 [Ian] DC: I'd like IJ to get as much done as possible by 13 Nov, with approval with one other TAG participant's review. 21:28:18 [Ian] Resolved: We might not get a doc out by 13 Nov, but ok for IJ + two other participants (for this draft) sufficient to get to TR page. 21:29:05 [Ian] IJ: I will try to get a draft with some of RF's proposals by Thursday. 
21:29:10 [DanCon] if it's out by Thu, I intend to read it by Monday 21:29:14 [Ian] TB, SW: Commit to read and give feedback. 21:29:21 [Ian] ------------------------------------- 21:29:28 [Ian] SW: Next week agenda priority IRIEverywhere-27. 21:30:26 [Ian] Action IJ: Invite Martin Duerst to the call next week. 21:30:42 [Ian] ADJOURNED 21:30:45 [Zakim] -Norm 21:30:47 [Zakim] -TimBL 21:30:48 [Zakim] -TBray 21:30:48 [Zakim] -Stuart 21:30:49 [Ian] RRSAgent, stop
http://www.w3.org/2002/11/04-tagmem-irc.html
…Studio glitches… so I pressed F5 and was proved wrong after getting the following runtime error:

Compiler Error Message: BC30560: 'ScriptManager' is ambiguous in the namespace 'System.Web.UI'

After doing some checking I actually found the following error in the output:

error CS0433: The type 'System.Web.UI.ScriptManager' exists in both 'c:\WINDOWS\assembly\GAC_MSIL\System.Web.Extensions\3.5.0.0__31bf3856ad364e35\System.Web.Extensions.dll' and 'c:\WINDOWS\assembly\GAC_MSIL\System.Web.Extensions\3.6.0.0__31bf3856ad364e35\System.Web.Extensions.dll'

Well, that was the end of it: as it turns out, I had installed both the Ajax that comes with .NET 3.5 and the Ajax that comes with the ASP.NET 3.5 Extensions, and I had referenced them both! All I needed to do was to remove one of the references, and that's it. So if you have both installed, make sure you reference only one of them throughout your project.

Amit.

Srikanth Said on April 16, 2008:
Hi Sir, I am also getting the same problem. Where can I remove these references? Can you provide more information on this? Thanks, Srikanth

Amit Said on April 16, 2008:
Hi Srikanth, it can happen for various reasons. You should check your GAC (C:\WINDOWS\assembly) to see how many installations of System.Web.Extensions you have, and what their versions are. Check it and get back to me; I will try to help. Amit

Srikanth Said on April 16, 2008:
Hi Amit, I have System.Web.Extensions versions 3.5.0.0, 1.0.61025.0 and 3.6.0.0. I have actually installed both Ajax 3.5 and the ASP.NET 3.5 Extensions controls. Please tell me the solution to my problem.

Amit Said on April 16, 2008:
That's the problem. They are both related to the same framework, so Visual Studio gets confused between them.
you should remove one of them. I recommend staying with the 3.5.0.0 one, that is what i am using and it works fine, though i think any of them will be OK. Amit srikanth Said on April 16, 2008 : How can i remove the 3.6.0.0?Can i remove the total asp.net 3.5 extension tool? Amit Said on April 16, 2008 : Use this application: it is for registering and removing dlls from the GAC Amit Fares Said on April 28, 2008 : Try modifying the web.config of the application, change the “System.Web.Extensions” assembly version from “3.6.0.0” to “3.5.0.0” or vice versa: The key will look like the following: <add assembly=”System.Web.Extensions, Version=3.5.0.0, …etc OR <add assembly=”System.Web.Extensions, Version=3.6.0.0, …etc Roberta Said on June 5, 2008 : I am not sure how to use this GAC to remove the extra versions. Where do I fing the Gacutil.exe. I did a search on C and did not locate it. Shahar Y Said on June 5, 2008 : Hi Roberta, It is usually located in C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin but you may have installed Visual Studio in a different folder. So, find the anydir:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin Roberta Said on June 5, 2008 : thanks I found the file but now how do I uninstall the extra versions? I tryed though dos but that did not work Shahar Y Said on June 5, 2008 : Roerta, You need to drag the gacutil.exe file into the command shell (cmd) and use the options you need. You can read about the available options here: Roberta Said on June 5, 2008 : Hi Shahar Y I did that and this is what I typed in C:\>”C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe”system. web.extensions.dll, version=1.061025.0,culture=”natural,PublicKeytoken=31bf3856a d364e35 and this is what I got ‘”C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe”system.web .extensions.dll’ is not recognized as an internal or external command, operable program or batch file. 
Roberta Said on June 5, 2008 : I am not sure if this makes a diff or not but when I run the gacutil.exe /l the system.web.extensions do not show up but when I go to c:windows/assembly there are two there. Shahar Y Said on June 5, 2008 : Roberta, 1) I see that you forgot to add a space between the gacutil.exe and your assembly name. 2) You need to use some flag. If you want to uninstalls an assembly from the global assembly cache, you need to write – ”C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe” /u yourAssemblyName Shahar Y Said on June 5, 2008 : Hi Roberta, About the fact that you can’t find the system.web.extensions assembly using the /l option – try to run “gacutil.exe /l system.web.extensions” and see if it can be found. Roberta Said on June 5, 2008 : Thanks I did that: C:\>gacutil.exe /u “system.web.extensions, version=1.0.61025.0,culture=”natural” ,PublicKeytoken=31bf3856ad364e35 this is what I get C:\>gacutil.exe /u “system.web.extensions, version=1.0.61025.0,culture=”natural” ,PublicKeytoken=31bf3856ad364e35 is there any other way to remove this? Roberta Said on June 5, 2008 : sorry this is what I get Microsoft (R) .NET Global Assembly Cache Utility. Version 1.0.3705.0 No assemblies found that match: system.web.extensions, version=1.0.61025.0,cultu re=natural,PublicKeytoken=31bf3856ad364e35 Number of items uninstalled = 0 Number of failures = 0 Roberta Said on June 5, 2008 : it is not listed in the /l at all Roberta Said on June 5, 2008 : this is what I get C:\>gacutil.exe /l “system.web.extensions Microsoft (R) .NET Global Assembly Cache Utility. Version 1.0.3705.0 The Global Assembly Cache contains the following assemblies: The cache of ngen files contains the following entries: Number of items = 0 Shahar Y Said on June 5, 2008 : Roberta, 1) You wrote natural instead of neutral. 2) Not sure if it matters, but you didn’t use spaces and capital letters. 
Try to write it like that: system.web.extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 Roberta Said on June 5, 2008 : ok thanks:~) I rewrote it and now it is telling me invalid file or assembly name that I need a .dll or .exe Just to make sure I am getting the assembly name from C:\WINDOWS\assembly is that right? Shahar Y Said on June 5, 2008 : Roberta, If ”C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe” /u system.web.extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 doesn’t work, it is weird and sorry but I have no ideas left… Roberta Said on June 5, 2008 : ok thanks for all your help. :~) Jeff Said on January 25, 2009 : Thanks so much! Stupid VWD 2008 adds the 3.5 reference automatically. Good old Microsoft Thanks again! omyfish Said on June 26, 2009 : This work for me. gacutil.exe /u “system.web.extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35″ H Selik Said on February 11, 2010 : May be there is a diffrent solution .Our web project developed on 2.0 V. then we installed upper version “3.0”,”3.5” on the machine. The project needed to be developed.When I drag dropped a script manager on any page of prj. then it started to fall into error. “‘ScriptManager’ is ambiguous in the namespace ‘System.Web.UI’” I could use this solution but we have projects that have all of versions . It could be dangerous for the other projects Then I found out there had been two lines on web config that they had contained different version informations . “<add assembly=”System.Web.Extensions, Version=1.0.61025.0″ And “<add assembly=”System.Web.Extensions, Version=3.5.0.0″ I removed the last version line .But still error… Then I found the same lines on aspx on the top of pages. “” Some of them were different “Version=3.5.0.0″ . I changed the correct one. It is working now. indianbill Said on March 13, 2010 : This happend to me, after I already had a working site. 
The problem seems to be duplicate extensions. AJAX and .net Somehow, MS VS added an assembly definition line to my web.config file automatically while I was coding….not sure how..must have been something I did while editing the page. Removing this line from my web.config file fixed it for me. The ajax extension i’m using is… indianbill Said on March 13, 2010 : Keep this line… add assembly=”System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35″ Remove this one… add assembly=”System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35″
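Pulling the fix described in the comments above together: the duplicate registration lives in the assemblies section of web.config. What follows is an illustrative sketch, not a complete file — the surrounding elements and the debug attribute are assumptions; the two assembly strings are the ones quoted in the comments. Keep exactly one System.Web.Extensions registration, whichever version your project actually targets.

```xml
<!-- web.config (fragment) -- illustrative sketch only.
     Keep ONE System.Web.Extensions entry; here 3.5.0.0 is kept. -->
<configuration>
  <system.web>
    <compilation debug="false">
      <assemblies>
        <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        <!-- A second <add> for System.Web.Extensions (e.g. Version=3.6.0.0
             or 1.0.61025.0) is what triggers BC30560/CS0433 -- delete it. -->
      </assemblies>
    </compilation>
  </system.web>
</configuration>
```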
http://www.dev102.com/2008/03/21/ajax-scriptmanager-error-bc30560/
Visual C++ 6.0 and GLUT?

LeprA 10-17-2000, 07:08 AM
I just wondered if Visual C++ 6.0 can use GLUT? If it works, how do I do it?

Inquisitor 10-17-2000, 07:17 AM
#include <GL\glut.h>
It's that simple.

LeprA 10-17-2000, 07:38 AM
Sure, I have done that ... =) but it was another problem I saw ... gahs... I picked up a demo about bump mapping and now I saw it was made with a main(...), and if I want to compile it I must reprogram a lot, I think ... or is there an easy way to do it? Sure, I could use another compiler, but I don't have one =)

humberto 10-18-2000, 01:34 PM
Hi LeprA, I don't know if I get your point, but you need to configure your VC++ 6.0 to compile GLUT. Check these 3 files:
x:\windows\system\glut32.dll
x:\...\VC\lib\glu32.lib
x:\...\VC\include\GL\Glut.h
If you can't find one of these files go to:
It's important to say that I create a project as a Win32 Console Application (MFC is disabled) when I am working with GLUT. I hope this info can be useful.

Antonio 10-18-2000, 03:10 PM
Don't forget to add glut32.lib to your libraries, otherwise you'll get errors when linking. Antonio

iss 10-18-2000, 03:53 PM
Both GLUT & GLUI work well with VC6. Another way to import the glut32.dll calls, besides using Project Settings... Link..., is to add the following among your #includes:
#pragma comment(lib,"glut32.lib")
You typically don't have to do this since it's already in glut32.h. I prefer the latter (and so did the folks who wrote glut32 for Win32) since I can encapsulate the lib calls right in the editor and I see what's going on in the code (just as with #include). Good luck.
PS: Take a look at GLUI: a GUI built with GLUT in mind.
[This message has been edited by iss (edited 10-22-2000).]

LeprA 10-19-2000, 04:20 AM
Thanks, you guys really helped me with that... I didn't know I could still do DOS programs in VC++ 6.0 ... but now I have a question for "iss" ...
with that command for including libraries: is it exactly what VC does in the linker, or does the library get into the *.exe file (does the *.exe file get larger, I mean)?

Antonio 10-19-2000, 05:51 AM
Where can I find GLUI? Antonio

LeprA 10-19-2000, 11:05 AM
Originally posted by Antonio: Where can I find GLUI?
You will find it at

BeatHam 10-20-2000, 03:43 AM
Hi, I'm trying to get GLUT working in VC++ 6.0. I've added the GLUT lib and include directories in the options, and I've added glut32.lib in Project -> Settings -> Link tab, and I get this error:
--------------------Configuration: lesson6 - Win32 Debug--------------------
Linking...
LIBCD.lib(wincrt0.obj) : error LNK2001: unresolved external symbol _WinMain@16
Debug/lesson6.exe : fatal error LNK1120: 1 unresolved externals
Error executing link.exe.
lesson6.exe - 2 error(s), 0 warning(s)
Could someone help me??? Please.

LeprA 10-20-2000, 04:12 AM
This error you get when you have not chosen Console Application, I think. Try to make a new workspace as a Console App and add the libraries like before.

Antonio 10-20-2000, 04:20 PM
You can still have a Win32 app and avoid writing the WinMain function:
Select Project -> Settings from the main menu;
Select the "Link" tab from the dialog box;
Select "Output" from the "Category" combo box;
In the "Entry-point symbol" textbox type "mainCRTStartup".
Antonio
https://www.opengl.org/discussion_boards/archive/index.php/t-134217.html
C++ Loop Statements: Repetition Revisited

Using OCD, design and implement a function that, given a menu, its first valid choice, and its last valid choice, displays that menu, reads a choice from the user, and returns a value guaranteed to be (i) a valid choice from the menu, and (ii) a choice chosen by the user. You may assume that the valid menu choices form a continuous sequence.

The tricky part is that the function must return a value that (i) is a valid menu choice; and (ii) was chosen by the user. One way to accomplish both goals is to use a loop that displays the menu, reads the user's choice, checks its validity, and if it is not valid, gives them another chance...

Description      Predefined?  Library?  Name
display strings  yes          iostream  <<
read a char      yes          iostream  >>
check validity   no           built-in  <=, &&
repeat steps     yes          built-in  ?
terminate loop   yes          built-in  ?
return a char    yes          built-in  return

Note that our algorithm terminates the repetition at the bottom of the loop. To code such loops conveniently and readably, C++ provides the do loop. The do loop tests its condition at the end of the loop, making it useful for any problem in which the body of the loop must be performed at least once.

This function seems general enough to be reusable by any menu-using program, so it should be stored in a library. Once our function is stored in a library, a programmer can write something like this:

// menu.cpp
#include "Menu.h"

int main()
{
  const string MENU = "Please enter:\n"
                      "  a - to do this\n"
                      "  b - to do that\n"
                      "  c - to do the other\n"
                      "--> ";
  char choice = GetValidMenuChoice(MENU, 'a', 'c');
  // ...
}

Revised algorithm:
0. Receive MENU, firstChoice, lastChoice.
1. Loop:
   a. Display MENU via cout.
   b. Read choice from cin.
   c. If firstChoice <= choice and choice <= lastChoice: terminate repetition.
   d. Display error message.
   End loop.
2. Return choice.

Our algorithm no longer terminates the repetition at the bottom of the loop.
Instead, it terminates repetition in the middle of the loop, suggesting a forever loop. Which loop is best used depends on where execution leaves the loop in one's algorithm.

If a programmer now writes the same thing:

#include "Menu.h"

int main()
{
  const string MENU = "Please enter:\n"
                      "  a - to do this\n"
                      "  b - to do that\n"
                      "  c - to do the other\n"
                      "--> ";
  char choice = GetValidMenuChoice(MENU, 'a', 'c');
  // ...
}

A Counting Loop
The for loop is most commonly used to count from one value first to another value last:
for (int count = first; count <= last; count++)
  Statement

Other Loops
C++ also provides the forever loop: a for loop without expressions:
for (;;)
{
  StatementList1
  if (Expression) break;
  StatementList2
}
Repetition continues so long as Expression is false!

Pretest Loops
If StatementList1 is omitted from a forever loop, we get a test-at-the-top or pretest loop:
for (;;)
{
  if (Expression) break;
  StatementList2
}

The while Loop
For such situations, C++ provides the more readable while loop, whose pattern is:
while (Expression)
  Statement
Statement can be either a single or compound C++ statement. Repetition continues so long as Expression is true!

Post-test Loops
If StatementList2 is omitted in a forever loop, we get a test-at-the-bottom or post-test loop:
for (;;)
{
  StatementList1
  if (Expression) break;
}

The do Loop
For such situations, C++ provides the more readable do loop, whose pattern is:
do
  Statement
while (Expression);
Statement can be either a single or compound C++ statement. Repetition continues so long as Expression is true!

With four loops at our disposal, how do we know which one to use?
#include <iostream>   // <<, >>, cout, cin
using namespace std;

int main()
{
  const double SMALL_NUMBER = 1.0e-3;   // 1 millimeter

  cout << "This program computes the number and height\n"
       << "of the rebounds of a dropped ball.\n";

  cout << "\nEnter the starting height (in meters): ";
  double height;
  cin >> height;

  cout << "\nStarting height: " << height << " meters\n";

  int bounce = 0;
  while (height >= SMALL_NUMBER)
  {
    height /= 2.0;
    bounce++;
    cout << "Rebound # " << bounce << ": " << height << " meters" << endl;
  }
}

Sample run:

This program computes the number and height
of the rebounds of a dropped ball.

Enter the starting height (in meters): 15

Starting height: 15 meters
Rebound # 1: 7.5 meters
Rebound # 2: 3.75 meters
Rebound # 3: 1.875 meters
Rebound # 4: 0.9375 meters
Rebound # 5: 0.46875 meters
Rebound # 6: 0.234375 meters
Rebound # 7: 0.117188 meters
Rebound # 8: 0.0585938 meters
Rebound # 9: 0.0292969 meters
Rebound # 10: 0.0146484 meters
Rebound # 11: 0.00732422 meters
Rebound # 12: 0.00366211 meters
Rebound # 13: 0.00183105 meters
Rebound # 14: 0.000915527 meters

The four C++ loops provide very different behaviors:
- The while and for loops have their tests at the top, implying that if the loop's condition is initially false, the body of the loop will not execute, which is called zero-trip behavior.
- The do loop has its test at the bottom, implying that the body of the loop will execute at least once, regardless of the value of the loop's condition, which is called one-trip behavior.
- The forever loop has its test in the middle: this might be called half-trip behavior.

C++ provides four repetitive execution statements: the for loop, the while loop, the do loop, and the forever loop. Which loop you use to solve a given problem should be determined by your algorithm for that problem.

Working in pairs, solve the following problem...

/* Reverse()
 * Receive: number, an int.
 * PRE: number >= 0.
 * Return: the int consisting of number's digits reversed.
 */
int Reverse(int number)
{
  int answer = 0;                 // our result
  int rightDigit;                 // rightmost digit

  while (number > 0)              // while digits remain
  {
    rightDigit = number % 10;     // get rightmost digit
    answer *= 10;                 // L-shift answer's digits
    answer += rightDigit;         // add in new digit
    number /= 10;                 // chop rightmost digit
  }
  return answer;
}

Avoid declaring variables within loops, e.g.

for (;;)
{
  int rightDigit = number % 10;
  if (number <= 0) break;
  answer *= 10;
  answer += rightDigit;
  number /= 10;
}

Processing a declaration consumes time. Processing a declaration in the body of a loop consumes time every repetition of the loop, which can significantly slow one's program.
http://www.slideserve.com/dallon/c-loop-statements
- What is the use of a JUnit @Before and @Test annotation in Java? How can I use it with NetBeans?
- Can I have more than one method with @Parameters in a JUnit test class which is running with the Parameterized class? @RunWith(value = Parameterized.class) public class JunitTest6 { private String str; public JunitTest6(String region, ...
- Is it possible to test for multiple exceptions in a single JUnit unit test? I know for a single exception one can use, for example, @Test(expected=IllegalStateException.class)
- Uri's answer got me thinking about what limitations JUnit 4 acquired by using annotations instead of a specific class hierarchy and interfaces the way JUnit 3 and earlier did. I'm ...
- We have developed some code which analyzes annotated methods and adds some runtime behaviour. I would like to test this. Currently I am hand-coding stubs with certain annotations for setting up ...
- I have written a few JUnit tests with the @Test annotation. If my test method throws a checked exception and I want to assert the message along with the exception, is there ...
- I wish to launch the GUI application 2 times from a Java test. How should we use @annotation in this case? @annotation public class Toto { @BeforeClass public static void setupOnce() { final Thread ...
- I am using JUnit 4.8.1. The following is the code. I am getting a "NullPointer" exception. I suspect that the "setUp" code under @Before is not being executed before the other methods. Request the ...
- I've got the following test: @Test(expected = IllegalStateException.class) public void testKey() { int key = 1; this.finder(key); }
- I just used MyEclipse to automatically generate some JUnit test cases. One of the generated methods looks like this: @Ignore("Ignored") @Test public void testCreateRevision() { fail("Not yet implemented"); // TODO }
- I encountered the TestDox tool that reads JUnit tests and processes them to support BDD-style documentation as follows: Test class: public class FooTest extends TestCase { public void testIsASingleton() ...
- What is the equivalent of using the @RunWith annotation for JUnit 3.8? I've searched for a while on this, but JUnit 3.8 is much older and I haven't been able to ...
- In general I prefer to have annotation tags for methods, including @Test ones, on the line before the method declaration, like this: @Test public void testMyMethod() { // Code }
- I've been looking for resources on how to extend JUnit 4 with my own annotations. I'd like to be able to define an @ExpectedProblem with which I could label some of my tests. ...
- I'm trying to use the timeout parameter for annotation type Test in a unit test within an IntelliJ IDEA project: "The second optional parameter, timeout, causes a test to fail ..."
- I am very new to Java programming. I have a unit test file to be run. It has annotations of @Before and @Test. I have tried to understand these concepts using ...
- In an effort to design components that are as reusable as possible, I got to thinking recently about the possibility of so-called "adapter annotations." By this, I mean the application ...
- I wanted to create a custom JUnit annotation, something similar to the expected tag in @Test, but I want to also check the annotation message. Any hints on how to do that? ...
- I would like my @Before method to know the currently executing test's annotations, so that the @Before method can do various things. Specifically, right now our @Before always does various initialization ...
- I am attempting to create a utility method that uses reflection to test getters/setters. My idea is to allow the caller to specify a set of test values and the expected ...
- We are using org.mule.tck.FunctionalTestCase for test cases. It's an abstract JUnit test case. This is how the dependencies are declared in the pom.xml: ...
- Is there a way to say: import org.junit.Test; public interface ITest { @Test public void runTest(); } public class ...

Answer: Is the test marked as not passing (i.e. red)? That should not happen. Do you by chance have two different classes with the name "MyCustomException"? Edit: an alternative cause is that you are running the test as a JUnit 3 TestCase. This happens when you extend TestCase. If you want to use JUnit 4 features, then make sure that the class is recognized as ...
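Several of the questions above (custom annotations, analyzing annotated methods at runtime, @Test(expected=...)) come down to how an annotation-driven runner works under the hood. The sketch below is a toy runner, not JUnit itself: MyTest, None, Sample, and MiniRunner are all made-up names, but the reflection pattern is the same one JUnit 4 uses to discover @Test methods and check expected exceptions.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Sentinel meaning "no exception expected" (JUnit 4 uses Test.None the same way).
class None extends Throwable {}

// A hypothetical @Test-like annotation, retained at runtime so reflection can see it.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MyTest {
    Class<? extends Throwable> expected() default None.class;
}

// A sample "test class" exercising both a plain pass and an expected exception.
class Sample {
    @MyTest
    public void passes() {}

    @MyTest(expected = IllegalStateException.class)
    public void throwsExpected() { throw new IllegalStateException("boom"); }
}

public class MiniRunner {
    // Reflect over the class, invoke each @MyTest method, and count passes.
    public static int runAll(Class<?> c) {
        int passed = 0;
        try {
            Object instance = c.getDeclaredConstructor().newInstance();
            for (Method m : c.getDeclaredMethods()) {
                MyTest t = m.getAnnotation(MyTest.class);
                if (t == null) continue;                       // not a test method
                try {
                    m.invoke(instance);
                    if (t.expected() == None.class) passed++;  // ran cleanly, none expected
                } catch (InvocationTargetException e) {
                    // Reflection wraps the real exception; compare against expected()
                    if (t.expected().isInstance(e.getCause())) passed++;
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return passed;
    }

    public static void main(String[] args) {
        System.out.println(MiniRunner.runAll(Sample.class)); // prints 2
    }
}
```

This also illustrates why a custom annotation like @ExpectedProblem needs a custom runner (or JUnit rule): something has to read the annotation reflectively and act on it.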
http://www.java2s.com/Questions_And_Answers/Java-Testing/junit/Annotation.htm
Published by Devin Sarge, modified about 1 year ago

Chapter 10: The Social Discount Rate, Cost of Public Funds, and the Value of Information
© Harry Campbell & Richard Brown, School of Economics, The University of Queensland
Benefit-Cost Analysis: Financial and Economic Appraisal using Spreadsheets

Three reasons why NPV > 0 may not be the appropriate rule to identify projects which are efficient from a social viewpoint:
- the social discount rate may be lower than the market rate of interest;
- the marginal cost of public funds may exceed unity (i.e. $1 of public funds costs more than $1);
- undertaking an irreversible investment involves a loss of option value.

The Social Discount Rate

Why does the market discount future benefits and costs?
- impatience: people value utility today more highly than utility tomorrow. In making choices, future utility is discounted in comparison with utility in the present;
- diminishing marginal utility of consumption: people expect to be wealthier in the future. An extra dollar in the future will add less to utility than an extra dollar today.

The observed market rate of interest is the sum of the utility discount factor (reflecting impatience) and the utility growth factor (reflecting diminishing marginal utility of consumption). Example:
- economic growth rate: 2%
- elasticity of marginal utility of income: 1.5
- utility growth factor: 1.5 x 2% = 3%
- utility discount factor: 1%
- real market rate of interest: 3% + 1% = 4%

Why do people argue that a social discount rate, lower than the market rate of interest, should be used to discount public projects? We should not be discounting the utility of future generations, who are not able to participate in the markets which determine levels of current investment and, hence, future utility levels.
It is argued that there is, in effect, a 'missing market' and we need to use non-market methods to determine the appropriate price (in this case an inter-temporal price in the form of a discount factor).

What is the appropriate discount rate for public projects? It is reasonable to employ a utility growth factor in discounting public projects: if future generations are going to be wealthier than us, we should take this into account in sacrificing present consumption to make provision for the future. It is not reasonable to employ a utility discount factor in discounting public projects: we should not treat the utility of future generations as any less important than that of the present generation.

Developing our simple example: instead of using the real market rate of interest of 4% as the discount rate for public projects, we would adjust it downwards by the amount of the utility discount factor (1%) to get a social discount rate equal to the utility growth factor (3%). Using a social discount rate would tend to make investment projects more attractive, but the 1% difference in discount rate would be crucial in only a few cases.

The Marginal Cost of Public Funds

Raising public funds to undertake investment projects involves three types of costs:
- collection costs: the costs of running the tax office;
- compliance costs: costs incurred by taxpayers;
- deadweight loss: the costs of misallocation of resources as people respond to prices distorted by taxes.

Compliance and collection costs are largely fixed costs: they do not change when the amount of tax collected changes by a small amount. Since any given project will involve relatively small changes in the flow of public funds, compliance and collection costs can be ignored in social benefit-cost analysis. The amount of deadweight loss tends to rise (fall) as the amount of public funds raised rises (falls).
A project which requires additional public funds imposes an additional deadweight loss on the economy, and a project which contributes to public funds reduces the amount of deadweight loss.

When the additional deadweight loss is taken into account, the NPV rule becomes:

NPV = B - C - D > 0

where B is the PV of project benefits, C is the PV of project costs, and D is the additional deadweight loss. The NPV rule could also be written as B - C[(C+D)/C] > 0, or B/C > (C+D)/C, where (C+D)/C is the marginal cost of public funds.

There are three main ways of raising additional public funds:
- borrowing from the public, i.e. selling government bonds in the market;
- borrowing from the central bank, i.e. printing money;
- raising tax rates.

If the required quantity of public funds is raised at minimum cost, the marginal cost of public funds from each source will be the same.

The deadweight loss resulting from selling bonds to the public. Suppose $100 worth of government bonds is sold on the market, and that $50 is diverted from private consumption spending and $50 from private investment spending. The tax rate on the returns to private investment is around 1/3. Since the after-corporation-tax rate of return on private investment must equal the government bond rate r, the before-tax rate of return on private investment must be r* = 1.5r. (Why? Because r*(1 - 1/3) = r.)

Now we can work out the cost to the economy of displacing $50 worth of private consumption and $50 worth of private investment:
- the loss of $50 worth of private consumption costs $50;
- the $50 worth of private investment would have yielded an annual before-tax return of $50r*. The present value of this return (at the market rate of interest) is $50(r*/r) = $50 x 1.5 = $75.

The cost to the economy of raising $100 of public funds by borrowing from the public is $125.
The deadweight loss is $25, and the marginal cost of public funds is $1.25 per dollar, i.e. $125/$100.

The deadweight loss resulting from collecting additional tax revenues. There is a wide range of taxes in our economy: personal and business income taxes, the goods and services tax, excise duties on petrol, tobacco, alcohol, etc. It can be argued that eventually all these taxes are borne by households. We can consolidate all these taxes into a single tax which can be regarded as a tax on the labour supply of households.

When the government wishes to raise additional tax revenue, it has a wide range of choice as to which tax rates to increase. However, we can argue that the effect is simply to increase the consolidated rate of tax on labour supply by households. Assuming that the aggregate labour supply curve of households is upward sloping, an increase in the rate of tax will cause a reduction in labour supply.

Figure 10.1: Taxation and Labour Supply. [The figure plots the wage ($) against the quantity of labour (hours): an upward-sloping supply curve S, the after-tax wage falling from W0 to W1, the quantity of labour falling from L0 to L1, and points A, B, C, D, E, F, and G marking the areas referred to below.]

In Figure 10.1:
- the after-tax wage falls from W0 to W1 as a result of the increase in the tax rate;
- the quantity of labour supplied falls from L0 to L1 because of the upward-sloping supply curve.

The cost to households of the tax increase is the loss of producer surplus: ABDE + BCD. The extra tax revenue to government is measured by: ABDE - FGCB. The cost per dollar of additional revenue is the ratio of the cost to households to the extra tax revenue.

Summary:
- cost to the economy of additional public funds: area ABDE + BCD;
- quantity of additional public funds: area ABDE - FGCB;
- cost per additional dollar of public funds:

MC = (ABDE + BCD) / (ABDE - FGCB)
   = [(ABDE - FGCB) + (BCD + FGCB)] / (ABDE - FGCB)
   = 1 + FGCD / (ABDE - FGCB)

How to interpret the formula for the marginal cost of public funds: the cost of an extra dollar of public funds is $1 plus the additional deadweight loss per dollar of extra funds.
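The bond-financing arithmetic can be written as a compact worked check of the marginal cost formula, using the $100/$25 figures from the example above:

```latex
\[
MC \;=\; \frac{C + D}{C} \;=\; \frac{\$100 + \$25}{\$100} \;=\; 1.25
\]
```

That is, each dollar of public funds raised by selling bonds costs the economy $1.25: the dollar itself plus 25 cents of deadweight loss from displaced private investment.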
Why does area FGCD represent an additional deadweight loss? The effect of the tax rate increase is to divert a quantity of labour (L0 - L1) from work to leisure. In work, that quantity of labour would have produced output with a value measured by area FGL0L1. The value of the corresponding extra leisure time is measured by area DCL0L1. The difference between these two measures is a loss to the economy, termed a deadweight loss. The marginal cost of public funds is:

MC = 1 + (additional deadweight loss) / (additional quantity of funds)

Two complications:
1. When household labour supply falls, household earned income falls. When earned income falls, the household's eligibility for social security payments rises. Some of the extra tax revenue raised will have to be used to fund increased social security payments rather than the public project the extra tax revenues are intended to fund.
2. Some public projects will, by their nature, cause a shift of the labour supply curve, and this may tend to increase or decrease the effects of the tax rate increase on the quantity of labour supplied and the quantity of leisure demanded.

Estimates of the marginal cost of public funds: in Australia and other OECD countries, most estimates of the marginal cost of public funds are around 1.25. In other words, there is an additional deadweight loss of around 25 cents per dollar of extra tax revenue raised.

Implications of the marginal cost of public funds for social benefit-cost analysis: all flows of public funds resulting from a project should be shadow-priced (at around 1.25 in Australia). This increases the cost of outflows and increases the benefits of inflows of funds as a result of a project.

The Value of Information

Suppose that you have undertaken a social benefit-cost analysis and find that NPV > 0. Is there any reason (other than a budget constraint) why you would recommend that the project should not go ahead immediately?
There might be uncertainty about the values of some of the variables used to calculate the NPV, e.g. future prices. Delaying the project might resolve these uncertainties.

To investigate the value of delaying the project, we compare the NPV (at time 0) of undertaking the project immediately (at time 0) with the NPV (at time 0) of delaying the start of the project until time 1. In the example considered, it is assumed that, while we know the present price of output (at time 0), we don't know whether the price of the project output is going to be high or low from time period 1 onwards. If we delay the project until time 1, we will get this information prior to deciding whether to undertake the project.

In the example:
- K is the project cost;
- R0 is the net benefit in year 0 (known with certainty);
- RH is the net benefit from year 1 onwards if the high price prevails;
- RL is the net benefit from year 1 onwards if the low price prevails;
- q is the probability of the high price prevailing;
- (1-q) is the probability of the low price prevailing;
- r is the rate of interest.

The expected value of information is the expected project NPV (at time 0) if we delay the project for 1 year minus the expected project NPV (at time 0) if we undertake the project at time 0. Figure 10.2 illustrates the expected NPVs of the two options.

Figure 10.2: The benefit and cost of delaying an investment. [Invest now: NPV = R0 + (qRH/r + (1-q)RL/r) - K. Wait: with probability q the high price is revealed and the project is undertaken, yielding RH/r - K/(1+r); with probability (1-q) the low price is revealed and the project is abandoned, yielding 0, since RL/r - K/(1+r) < 0.]
When we subtract the expected NPV of undertaking the project immediately from the expected NPV if the project is delayed for one year, we get an estimate of the value of information. The value of information rises as:
- the initial capital cost, K, rises;
- the return in the low-price environment, RL, falls;
- the probability of a low price, (1-q), rises;
- the return in the current period, R0, falls;
- the interest rate falls.

The 'bad news principle': the level of the return in the high-price environment, RH, does not affect the value of information.

The NPV Rule

If there is no additional information to be obtained by delaying the project, the NPV rule is to undertake the project immediately if:

NPV = E(R)/r + R0 - K > 0,

where E(R) = qRH + (1-q)RL.

If additional information can be obtained by delay, the NPV rule is to undertake the project immediately only if:

E(R)/r + R0 - K > b, where b > 0.

The NPV rule when additional information can be obtained says that the project should be undertaken only when the option of delay is worthless. The condition which makes the option of delay (the value of waiting) worthless is the revised NPV rule:

NPV = E(R)/r + R0 - K > b,

where b = q{RH/r - K/(1+r)}.
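To make the invest-now versus wait comparison concrete, here is a worked example with made-up numbers (not from the slides): K = 100, R0 = 10, RH = 20, RL = 5, q = 0.5, and r = 0.1.

```latex
\begin{align*}
\text{Invest now: } NPV &= R_0 + \frac{qR_H + (1-q)R_L}{r} - K
  = 10 + \frac{0.5(20) + 0.5(5)}{0.1} - 100 = 35. \\[4pt]
\text{Wait: } NPV &= q\left(\frac{R_H}{r} - \frac{K}{1+r}\right)
  = 0.5\left(\frac{20}{0.1} - \frac{100}{1.1}\right) \approx 0.5(200 - 90.9) \approx 54.5.
\end{align*}
```

With these numbers, b is approximately 54.5, which exceeds the immediate NPV of 35, so the option of delay is valuable and the project should wait for the price information. Note, consistent with the bad news principle stated above, that it is the poor low-price outcome (RL/r falling short of K) that drives the gap.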
http://slideplayer.com/slide/3880331/
I'm really new to Java and need help with a class. Here is what I'm trying to do:

Write a class that accepts a user's hourly rate of pay and the number of hours worked. Display the user's gross pay (gross pay = hours worked * hourly rate), the tax withheld (tax withheld = gross pay * tax rate), and the net pay (net pay = gross pay - tax withheld). Use a named constant for storing the tax rate of 0.15.

Here is my code:

import java.util.Scanner;

class Tutorial {
    public static void main(String[] args);
    {
        Scanner kb = new Scanner(sysem.in);
        double = hourlyrate;
        double = hoursworked;
        double = grosspay;
        double = netpay;
        double = taxrate 0.15;
        double = taxwithheld;
        System.out.println("Please enter rate of pay?" );
        hourlyrate = kb.nextdouble();
        System.out.println("Hours Worked?" );
        hoursworked = kb.nextdouble();
        System.out.println("Gross pay:" )+(grosspay = hoursworked*hourlyrate);
        System.out.println("Tax Withheld:" )+(taxwithheld= grosspay*taxrate);
        System.out.println("netpay:" )+(netpay= Grosspay-taxwithheld);
    }
}
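For reference, a corrected version of the class might look like the sketch below (class and method names are my choice). The main fixes: the stray semicolon after main(), sysem.in corrected to System.in, the malformed declarations (double = hourlyrate; should be double hourlyRate;), the tax rate made a named constant, nextdouble() corrected to nextDouble(), string concatenation moved inside the println() calls, and the Grosspay/grosspay capitalization mismatch. The calculations are pulled into small static methods so they can be checked independently of console input.

```java
import java.util.Scanner;

public class Payroll {
    // Named constant for the tax rate, as the assignment requires
    static final double TAX_RATE = 0.15;

    static double grossPay(double hourlyRate, double hoursWorked) {
        return hourlyRate * hoursWorked;      // gross pay = hours worked * hourly rate
    }

    static double taxWithheld(double grossPay) {
        return grossPay * TAX_RATE;           // tax withheld = gross pay * tax rate
    }

    static double netPay(double grossPay) {
        return grossPay - taxWithheld(grossPay);  // net pay = gross pay - tax withheld
    }

    public static void main(String[] args) {
        Scanner kb = new Scanner(System.in);  // "sysem.in" corrected to System.in
        System.out.println("Please enter rate of pay:");
        if (!kb.hasNextDouble()) return;      // exit quietly if no input is available
        double hourlyRate = kb.nextDouble();  // nextDouble, not nextdouble
        System.out.println("Hours worked:");
        double hoursWorked = kb.nextDouble();

        double gross = grossPay(hourlyRate, hoursWorked);
        // Concatenate inside the println call, not outside it
        System.out.println("Gross pay: " + gross);
        System.out.println("Tax withheld: " + taxWithheld(gross));
        System.out.println("Net pay: " + netPay(gross));
    }
}
```

For example, $10.00/hour for 40 hours gives a gross pay of $400.00, $60.00 withheld, and a net pay of $340.00.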
http://www.javaprogrammingforums.com/whats-wrong-my-code/35942-payroll-code-problem.html
Build a Basic Ticket Sales App With ASP.NET Core, Angular, and Stripe

Learn to use these great frameworks along with a free Okta developer account to create and secure a full-stack web application.

Internet shopping is about more than just Amazon. It's become a daily activity for most Americans, and e-commerce is a required feature for many projects a developer may encounter. In this tutorial, you'll learn how to build an e-commerce site to sell tickets using an Angular 6 single-page app (SPA) and an ASP.NET Core 2.1 backend API. You'll build both the Angular and ASP.NET Core applications and run them from within VS Code. Let's get to it!

Upgrade to Angular 6

I love to use the latest and greatest when starting a new project. But when you use a project generator (like the Angular CLI, or the DotNet CLI), you may be at the mercy of the latest version the authors of those libraries have added. Right now, dotnet new angular generates an Angular app at about version 4.5, which is about two versions behind the latest. Let me show you how to upgrade the templates and the generated application so that you're using Angular 6, which is the latest as of the time of this article.

Upgrade the Angular App Template

Update the DotNet command line tools with:

dotnet new --install Microsoft.DotNet.Web.Spa.ProjectTemplates::2.1.0

Then run:

dotnet new --install Microsoft.AspNetCore.SpaTemplates::2.1.0-preview1-final

Generate the ASP.NET Angular App

Now you can scaffold a new project:

dotnet new angular -o ticket-sales-example

Upgrade the Angular App to 6

The closest that gets you is Angular v5.2.0. To update Angular to v6.0.9 (as of this writing), switch to the ClientApp directory. If you want to get to zero vulnerabilities, you would have to hunt each one down and fix it manually.
Create a Stripe Account

One of the easiest ways to take payments on the web is to use Stripe. You can create a free developer account on Stripe's registration page. Once you've registered, make sure that you go to your dashboard and, on the left-hand menu, click the toggle to ensure you are viewing test data. Then click on the Developers menu item and then click API Keys. Copy down the Publishable key to use in your Angular app.

Add Stripe to Your Angular 6 App

In your index.html file, add a script tag for Stripe's JavaScript library, right below the app-root component:

<script type="text/javascript" src="" />

Also add your publishable key to the Stripe object:

<script type="text/javascript">
  Stripe.setPublishableKey('{yourPublishableKey}');
</script>

Make sure that your publishable key starts with pk_test_. If it doesn't, you're using the production key, and you don't want to do that yet.

Create the Stripe Ticket Registration Page

You can easily scaffold the base registration component with the Angular CLI. Go to a command line and change directories into the src/app directory. Then run the command:

ng generate component registration

The shorthand for the CLI is:

ng g c registration

The generate command will generate a folder called registration, and inside that a registration.component.css, a registration.component.html, a registration.component.spec.ts, and a registration.component.ts file. These are all the basic files for an Angular 6 component. I won't be covering testing in this tutorial, so you can ignore or delete the registration.component.spec.ts file. First, add some basic HTML to your registration.component.html file for displaying tickets.
So the final file contents look like this (the scraped version of this template lost its *ngIf directives and #template-reference variables, which are restored below; the card fields are bound to the component's card object to match the TypeScript that follows):

<h1>Register for SuperDuperConf</h1>

<div class="ticket conf-only">
  <span class="title">Conference Only Pass</span>
  <span class="price">$295</span>
  <button (click)="selectTicket('Conference Only', 295)">Register Now!</button>
</div>
<div class="ticket full">
  <span class="title">Full Conference + Workshop Pass</span>
  <span class="price">$395</span>
  <span class="value">Best Value!</span>
  <button (click)="selectTicket('Full Conference + Workshop', 395)">Register Now!</button>
</div>
<div class="ticket work-only">
  <span class="title">Workshop Only Pass</span>
  <span class="price">$195</span>
  <button (click)="selectTicket('Workshop Only', 195)">Register Now!</button>
</div>

<div class="alert alert-success" *ngIf="successMessage">{{successMessage}}</div>
<div class="alert alert-danger" *ngIf="errorMessage">{{errorMessage}}</div>

<div *ngIf="model.ticket.price">
  <form (submit)="purchaseTicket()" class="needs-validation" novalidate #regForm="ngForm">
    <div class="form-group">
      <label for="firstName">First Name:</label>
      <input type="text" class="form-control" name="firstName" id="firstName" [(ngModel)]="model.firstName" required #firstName="ngModel">
      <div [hidden]="firstName.valid || firstName.pristine" class="text-danger">First Name is required.</div>
    </div>
    <div class="form-group">
      <label for="lastName">Last Name:</label>
      <input type="text" class="form-control" name="lastName" id="lastName" [(ngModel)]="model.lastName" required #lastName="ngModel">
      <div [hidden]="lastName.valid || lastName.pristine" class="text-danger">Last Name is required.</div>
    </div>
    <div class="form-group">
      <label for="email">Email Address:</label>
      <input type="text" class="form-control" name="email" id="email" [(ngModel)]="model.emailAddress" required #email="ngModel">
      <div [hidden]="email.valid || email.pristine" class="text-danger">Email Address is required.</div>
    </div>
    <div class="form-group">
      <label for="password">Password:</label>
      <input type="password" class="form-control" name="password" id="password" [(ngModel)]="model.password" required #password="ngModel">
      <div [hidden]="password.valid || password.pristine" class="text-danger">Password is required.</div>
    </div>
    <div class="form-group">
      <label for="cardNumber">Card Number:</label>
      <input type="text" class="form-control" name="cardNumber" id="cardNumber" [(ngModel)]="card.number" required>
    </div>
    <div class="form-group form-inline">
      <label for="expiry">Expiry:</label>
      <br/>
      <input type="text" class="form-control mb-1 mr-sm-1" name="expiryMonth" id="expiryMonth" [(ngModel)]="card.exp_month" required> /
      <input type="text" class="form-control" name="expiryYear" id="expiryYear" [(ngModel)]="card.exp_year" required>
    </div>
    <div class="form-group">
      <label for="cvc">Security Code:</label>
      <input type="text" class="form-control" name="cvc" id="cvc" [(ngModel)]="card.cvc" required>
    </div>
    <button type="submit" class="btn btn-success" [disabled]="!regForm.form.valid">Pay ${{model.ticket.price / 100}}</button>
  </form>
</div>

I know it seems like a lot, but there is a lot of repetition here. The first section lists three tickets that a user can buy to register for the "SuperDuperConf". The second section is just a form that collects the information needed to register an attendee for the conference.

The important thing to take note of here is the [(ngModel)]="model.some.thing" lines of code. That weird sequence of characters around ngModel is just parentheses inside of square brackets. The parentheses tell Angular that there is an action associated with this field. You see this a lot for click event handlers. It usually looks something like (click)="someEventHandler()". It is the same, in that the ngModel is the handler of the event when the model changes. The square brackets are used for updating the DOM when something on the model changes. It is usually seen in something like disabling a button, as you did above with [disabled]="!regForm.form.valid". It watches the value on the form, and when it is not valid, the button is disabled.
Once the form values become valid, the disabled property is removed from the DOM element. Now that you have all the fields on the page, you will want to style that ticket section up a bit so that it looks like tickets.

.ticket {
  text-align: center;
  display: inline-block;
  width: 31%;
  border-radius: 1rem;
  color: #fff;
  padding: 1rem;
  margin: 1rem;
}

.ticket.conf-only,
.ticket.work-only {
  background-color: #333;
}

.ticket.full {
  background-color: #060;
}

.ticket span {
  display: block;
}

.ticket .title {
  font-size: 2rem;
}

.ticket .price {
  font-size: 2.5rem;
}

.ticket .value {
  font-style: italic;
}

.ticket button {
  border-radius: 0.5rem;
  text-align: center;
  font-weight: bold;
  color: #333;
  margin: 1rem;
}

These are just three basic ticket types I regularly see for conference registrations. Now for the meat of the registration page: the TypeScript component. You will need a few things to make the page work: a model to store the values that the user enters, a way for the user to select a ticket, and a way for the user to pay for the ticket they have selected.
import { Component, ChangeDetectorRef, Inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-registration',
  templateUrl: './registration.component.html',
  styleUrls: ['./registration.component.css']
})
export class RegistrationComponent {
  public model: any;
  public card: any;
  public errorMessage: string;
  public successMessage: string;

  constructor(
    private http: HttpClient,
    private changeDetector: ChangeDetectorRef,
    @Inject('BASE_URL') private baseUrl: string
  ) {
    this.resetModel();
    this.successMessage = this.errorMessage = null;
  }

  resetModel(): any {
    this.model = {
      firstName: '',
      lastName: '',
      emailAddress: '',
      password: '',
      token: '',
      ticket: { ticketType: '', price: 0 }
    };
    this.card = { number: '', exp_month: '', exp_year: '', cvc: '' };
  }

  selectTicket(ticketType: string, price: number) {
    this.model.ticket = { ticketType, price: price * 100 };
  }

  purchaseTicket() {
    (<any>window).Stripe.card.createToken(
      this.card,
      (status: number, response: any) => {
        if (status === 200) {
          this.model.token = response.id;
          this.http
            .post(this.baseUrl + 'api/registration', this.model)
            .subscribe(
              result => {
                this.resetModel();
                this.successMessage = 'Thank you for purchasing a ticket!';
                console.log(this.successMessage);
                this.changeDetector.detectChanges();
              },
              error => {
                this.errorMessage = 'There was a problem registering you.';
                console.error(error);
              }
            );
        } else {
          this.errorMessage = 'There was a problem purchasing the ticket.';
          console.error(response.error.message);
        }
      }
    );
  }
}

Even if you're familiar with Angular, some of this may look foreign. For instance, the BASE_URL value that is getting injected into the component. It comes from the main.ts file that the Angular CLI generated.
If you look at that file, right below the imports, there is a function called getBaseUrl() and below that is a providers section that provides the value from the getBaseUrl() function, which is just a simple way to inject constant values into components. The other thing that might look strange is the purchaseTicket() function. If you’ve never used Stripe before, the createToken() method creates a single-use token that you can pass to your server to use in your server-side calls, that way you don’t have to send credit card information to your server, and you can let Stripe handle the security of taking online payments! Add the ASP.NET Registration Controller Now that your Angular app can get a token from Stripe, you’ll want to send that token and the user’s information to the server to charge their card for the ticket. Create a controller in the Controllers folder in the server-side application root. The contents of the file should be: using System; using System.Linq; using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc; using Okta.Sdk; using Stripe; using ticket_sales_example.Models; namespace ticket_sales_example.Controllers { [Produces("application/json")] [Route("api/[controller]")] public class RegistrationController : ControllerBase { [HttpPost] public async Task<ActionResult<Registration>> CreateAsync([FromBody] Registration registration) { ChargeCard(registration); var oktaUser = await RegisterUserAsync(registration); registration.UserId = oktaUser.Id; return Ok(registration); } private async Task<User> RegisterUserAsync(Registration registration) { var client = new OktaClient(); var user = await client.Users.CreateUserAsync( new CreateUserWithPasswordOptions { Profile = new UserProfile { FirstName = registration.FirstName, LastName = registration.LastName, Email = registration.EmailAddress, Login = registration.EmailAddress, }, Password = registration.Password, Activate = true } ); var groupName = ""; if (registration.Ticket.TicketType == "Full 
Conference + Workshop") { groupName = "FullAttendees"; } if (registration.Ticket.TicketType == "Conference Only") { groupName = "ConferenceOnlyAttendees"; } if (registration.Ticket.TicketType == "Workshop Only") { groupName = "WorkshopOnlyAttendees"; } var group = await client.Groups.FirstOrDefault(g => g.Profile.Name == groupName); if (group != null && user != null) { await client.Groups.AddUserToGroupAsync(group.Id, user.Id); } return user as User; } private StripeCharge ChargeCard(Registration registration) { StripeConfiguration.SetApiKey("sk_test_uukFqjqsYGxoHaRTOS6R7nFI"); var options = new StripeChargeCreateOptions { Amount = registration.Ticket.Price, Currency = "usd", Description = registration.Ticket.TicketType, SourceTokenOrExistingSourceId = registration.Token, StatementDescriptor = "SuperDuperConf Ticket" }; var service = new StripeChargeService(); return service.Create(options); } } } It seems like there is a bit here, but there is only the HttpPost method CreateAsync() that is the API endpoint for a POST to /api/registration. The other methods are helpers to the endpoint. The ChargeCard() method does just as the name implies, it charges the user’s credit card using the token that the Angular app got from Stripe and sent to the API. Even though I am setting the Stripe API key with a simple string here for demonstration purposes, you might want to store the key in an environment variable, in a configuration file that doesn’t get checked into source control, or in a key management service like Azure’s Key Vault. This will mitigate the chances that you will accidentally check the test key into your source control and have that end up being deployed to production! The RegisterUserAsync() method handles registering a user with Okta and putting them into a group that corresponds to the ticket that the user is purchasing. 
This is done in two steps: by creating the user, then finding the group that corresponds with the ticket purchased and adding that group's ID to the newly created Okta user.

Set Up Okta for Your Angular and ASP.NET Core Applications

Dealing with user authentication in web apps is a massive pain for every developer. This is where Okta shines: it helps you secure your web applications with minimal effort. Create an application in Okta named TicketSalesApp, setting the Base URIs and Login redirect URIs to point at your application. You can leave the other values unchanged, and click Done. Now that your application has been created, copy down the Client ID and Client secret values on the following page; you'll need them soon.

Even though you have a method for registering users, you'll need to create the groups for the tickets, set up your API to use Okta, and configure it to receive access tokens from users of the Angular app for authorization.

Start by creating a group for each of the three tickets you'll be selling. From the Okta dashboard, hover over the Users menu item until the drop-down appears and choose Groups. From the Groups page, click the Add Group button. In the Add Group modal that pops up, add a group for each ticket type.

Now you'll need to add these newly created groups to the ticket sales application. Click on the Applications menu item and choose the TicketSalesApp from the list of apps. It should open on the Assignments tab. Click on the Assign button and choose Assign to Groups from the button's drop-down menu. From here, assign each group you just created to the Ticket Sales app.

Add Groups to the ID Token

Add Okta to Your Angular Application

To set up your Angular application to use Okta for authentication, you'll need to install the Angular SDK and the rxjs compatibility package.
    npm install @okta/okta-angular rxjs-compat@6 --save

Add the components to your app.module.ts file in src/app by first importing them:

    import {
      OktaCallbackComponent,
      OktaAuthModule,
      OktaAuthGuard
    } from '@okta/okta-angular';

Now add a configuration variable right below the import statements, filling in your own issuer and redirect URI values:

    const config = {
      issuer: '',
      redirectUri: '',
      clientId: '{clientId}'
    };

Add the callback route to the routes in the imports section of the @NgModule declaration:

    { path: 'implicit/callback', component: OktaCallbackComponent }

That's all for now in the Angular application. Now let's get the ASP.NET Core app set up.

Add Okta to Your ASP.NET Core API

Now you need to let the API know two things: how to get the user's identity from an access token (when one is sent) and how to call Okta for user management. Start by adding the Okta Nuget package:

    dotnet add package Okta.Sdk

Store your Okta API token in the okta.yaml configuration file:

    token: "{yourApiToken}"

In the ConfigureServices() method, configure JWT bearer authentication with your Okta authorization server:

    options.Authority = "";
    options.Audience = "api://default";

And in the Configure() method, before the app.UseMvc() line, add:

    app.UseAuthentication();

That's it! Now your ASP.NET Core app will take that bearer token, get the user's information from Okta, and add them to the User object so you can get the currently requesting user's data. It will also use the API token stored in the okta.yaml file when registering users.

Show the Tickets in Your Angular App

Now that users can purchase a ticket, you'll want them to be able to log in and see their purchased ticket. To do this, generate a profile component using Angular's CLI. From the src/app folder of the client app, run:

    ng g c profile

Again, this is just shorthand for ng generate component profile, which will generate all the base files for the profile component.
The profile.component.ts file should have the following contents:

    import { Component, OnInit } from '@angular/core';
    import { OktaAuthService } from '@okta/okta-angular';
    import 'rxjs/Rx';

    @Component({
      selector: 'app-profile',
      templateUrl: './profile.component.html',
      styleUrls: ['./profile.component.css']
    })
    export class ProfileComponent implements OnInit {
      user: any;
      ticket: string;

      constructor(private oktaAuth: OktaAuthService) {}

      async ngOnInit() {
        this.user = await this.oktaAuth.getUser();
        if (this.user.groups.includes('FullAttendees')) {
          this.ticket = 'Full Conference + Workshop';
        } else if (this.user.groups.includes('ConferenceOnlyAttendees')) {
          this.ticket = 'Conference Only';
        } else if (this.user.groups.includes('WorkshopOnlyAttendees')) {
          this.ticket = 'Workshop Only';
        } else {
          this.ticket = 'None';
        }
      }
    }

This does two things: it gets the currently logged-in user and translates the group name into a displayable string representation of the purchased ticket type. The profile.component.html file is straightforward:

    <h1>{{user.name}}</h1>
    <p>
      Your Purchased Ticket: {{ticket}}
    </p>

The last thing to do is to add a protected route to the profile page in app.module.ts. I added mine right above the callback route:

    { path: 'profile', component: ProfileComponent, canActivate: [OktaAuthGuard] },

You can now sell tickets, and the users can log in and see which ticket they have once they've purchased one. You're ready to hold your event!
Learn More About ASP.NET

Check out our other Angular and .NET posts on the Okta developer blog:

- Ibrahim creates a CRUD app with an ASP.NET Framework 4.x API in his post
- Build a basic CRUD app using Angular and ASP.NET Core
- If you would like to use React instead of Angular for your CRUD app, I've got you covered
- Get nitty-gritty on token authentication in ASP.NET Core
- Get your project out into the world by deploying it to Azure, the right way

As always, if you have any comments or questions, feel free to leave a comment below. Don't forget to follow us on Twitter @oktadev and on Facebook!

Build a SPA with ASP.NET Core 2.1, Stripe, and Angular 6 was originally published to the Okta developer blog on August 8, 2018.

If you enjoyed this article and want to learn more about ASP.NET, check out this collection of tutorials and articles on all things ASP.NET.

Published at DZone with permission of Lee Brandt, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/build-a-basic-ticket-sales-app-with-aspnet-core-an
Deprecation policy

From Biopython

As bioinformatics and computational biology are developing quickly, some previously developed Biopython modules may no longer be relevant in today's world. To keep the code base clean, we aim to deprecate and remove such code from Biopython, while avoiding any nasty surprises for users who may be relying on older code. We keep a plain text file in the Biopython source code to record these changes, available here or on github.

This is the current policy for deprecating and removing code from Biopython:

- First, ask on the biopython and biopython-dev mailing lists whether a given piece of code has any users. Please keep in mind that not all users are following the biopython-dev mailing list.
- Consider declaring the module as "obsolete" for a release before deprecation. No code changes, just a note in the documentation and the DEPRECATED file.
- To deprecate the code, note this in the DEPRECATED file and add a DeprecationWarning to the code:

    import warnings
    warnings.warn("Bio.SomeModule has been deprecated, and we intend to remove it"
                  " in a future release of Biopython. Please use the SomeOtherModule"
                  " instead, as described in the Tutorial. If you would like to"
                  " continue using Bio.SomeModule, please contact the Biopython"
                  " developers via the mailing list.", DeprecationWarning)

- In principle, we require that two Biopython releases carrying the deprecation warning are made before the code can be actually removed.
- In addition, at least one year should pass between the first Biopython release carrying the deprecation warning and the first Biopython release in which the code has been actually removed.

See here for the discussion on the mailing list:
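During the grace period, deprecated code still works; users just see the warning. The sketch below shows how such a warning surfaces and how a user can silence it while migrating. Note that Bio.SomeModule and use_some_module() are stand-ins for illustration, not real Biopython code; a real deprecated module would emit the warning at import time.

```python
import warnings

def use_some_module():
    # Stand-in for importing or calling the deprecated Bio.SomeModule.
    warnings.warn(
        "Bio.SomeModule has been deprecated, and we intend to remove it"
        " in a future release of Biopython.",
        DeprecationWarning,
    )
    return "result"

# Confirm the warning is emitted during the two-release grace period:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = use_some_module()

assert result == "result"
assert any(issubclass(w.category, DeprecationWarning) for w in caught)

# A user who has read the warning and wants to keep using the module
# for now can silence just this message:
warnings.filterwarnings(
    "ignore", message="Bio.SomeModule", category=DeprecationWarning
)
```

The `message` argument to `filterwarnings` is a regular expression matched against the start of the warning text, which is why matching on the module name is enough here.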
http://biopython.org/w/index.php?title=Deprecation_policy&direction=prev&oldid=3575
pulling my hair out on this because I don't understand… I have 2 methods/actions in the same 'placements' controller that I am concerned with…

find works properly… (view code from find.rhtml)

new throws me an error:

    undefined method `clwholename' for #<Placement:0xb78e0dc0>

…and it has the same exact view code as above. clwholename is an aggregation defined in the Client class which combines the client first_name, middle_initial, and last_name.

The controller code for new and find doesn't seem to impact this…

    def find
      @placements = Placement.find(:all)
      @clients = Client.find(:all)
    end

    def new
      @placement = Placement.new
      @client = Client.find(:all)
    end

but just to make sure, I added the 2 lines from the 'find' definition into the 'new' definition and the result is the same. And I do have the method in my controller that works for 'find' but not 'new':

    def auto_complete_for_placement_clwholename
      auto_complete_responder_for_clients params[:placement][:clwholename]
    end

Why would 'find' work but not 'new'?

Craig
https://www.ruby-forum.com/t/undefined-method-and-auto-complete/80130
Education technology: Catching on at last

Technology is a tool. It can be used appropriately. Or it can be misused and abused, and even become a source of addiction. The internet is a valuable research tool and source of knowledge. But it is also the biggest time waster ever devised: gaming, social media, shopping, cat videos, and pornography. Using the internet for education depends on individual motivation and self-discipline. A self-motivated student will do well in any regime: books and pencil or supercomputer-powered workstation. To paraphrase Pink Floyd: in order to have education, you must first have self-control. No amount of technology is going to make a future burger-flipper into college material. Sorry. But an involved parent who sits with their child every day, making sure that all reading and homework assignments are completed correctly (without doing it for them), is much, much more valuable and effective. Many teachers I know have stressed this (two in my family). America will spend billions of dollars on technology and it will be mostly wasted. In a few years we will be having this conversation again, wondering why all this technology did not turn little Johnny into a high-paid engineer or doctor. And some politician will offer another guaranteed "program" if only you will vote for him. Meanwhile, back at the ranch, Tiger moms (and many other types of moms) will have turned out millions of capable and educated children who did not need an expensive whiz-bang collection of technology toys . . . but they may be the ones designing them for your child!

P.S. Did you sit and read with your child today???
From Wikipedia regarding MOOC Distance Learning: " Before the Digital Age, distance learning appeared in the form of written correspondence courses, broadcast courses, and early forms of e-learning.[5] By the 1890s commercial and academic correspondence courses on specialized topics such as civil service tests and shorthand were promoted by door-to-door salesmen.[6] Over 4 million US citizens – far more than attended traditional colleges – were enrolled in correspondence courses by the 1920s, covering hundreds of practical job-oriented topics, with a completion rate under 3%.[7] Radio was the exciting new technology of the 1920s, with millions buying sets and tuning in. Universities quickly staked out their wavelengths. By 1922, New York University operated its own radio station, with plans to broadcast practically all its subjects. Other schools joined in, including Columbia, Harvard, Kansas State, Ohio State, NYU, Purdue, Tufts, and the Universities of Akron, Arkansas, California, Florida, Hawaii, Iowa, Minnesota, Nebraska, Ohio, Wisconsin, and Utah. Journalist Bruce Bliven pondered: "Is radio to become a chief arm of education? Will the classroom be abolished, and the child of the future be stuffed with facts as he sits at home or even as he walks about the streets with his portable receiving-set in his pocket?"[8] The students read textbooks and listened to broadcast lectures, but attrition rates were very high, and there was no way to collect tuition. By 1940 radio courses had virtually disappeared.[8] Talking motion pictures was the technology of choice in the 1930s and 1940s. They were used to train millions of draftees during World War II in how to operate all sorts of equipment. Any number of universities had televised classes starting in the late 1940s at the University of Louisville.[9] The Australian School of the Air has used two-way shortwave radio starting in 1951 to teach school children in remote locations. 
At many universities in the 1980s special classrooms were linked to a remote campus to provide closed-circuit video access to specialized advanced courses for small numbers of students, and many continue to operate. But this trend should not be disconnected from the more general and historical process of industrialization of education, in particular through teaching machines, the textbook industry, and educational networks.[10] There are striking anticipations of the MOOC of the 2010s in the CBS TV series Sunrise Semester, broadcast from the 1950s to the 1980s with cooperation between CBS and NYU. Course credit was even offered for participants in those early video courses.[11] In 1994, James J. O'Donnell of the University of Pennsylvania taught an Internet seminar, using gopher and email, on the life and works of St. Augustine of Hippo, attracting over 500 participants from around the world.[12] By 1994 hundreds of colleges had distance education undergraduate degree programs, and there were 150 leading to advanced degrees.[13] The short lecture format used by many MOOCs developed from "Khan Academy's free archive of snappy instructional videos."[14] In April 2007, Irish-based ALISON (Advance Learning Interactive Systems Online) launched its massively free online courses for basic education and workplace skills training supported by advertising."[15] […] [29][41] Nevertheless, by early 2013, questions emerged about whether MOOCs were undergoing a hype cycle and whether academia was "MOOC'd out."[40][42]"

____________________________

Sometimes the future is just a fad. And the past is littered with 'revolutions in education'.

The great thing about technology in education is that it allows certain learners (i.e. students who don't learn to their full potential from a teacher's lecture) to learn at their own pace, and figure things out on their own.
If you present a student with an online video on math or history, she can rewind the video and watch the parts that she didn't understand. There would be no awkward pauses to ask a teacher "can you repeat that again?" After the video, the student can complete some assignments online, while the teacher goes and helps any students who have questions, giving more time for a one-on-one lesson.

The teacher also will have a lighter workload. Instead of talking nonstop for 6 hours a day, the teacher can assign some online material to the students. Using a data-tracking program (like the one mentioned in the article), the teacher can see whether a student has hit a learning curve, or if he is struggling in English or chemistry. The teacher will also learn how to use the new technology, and may find the job more fulfilling and less tedious, since they are not giving the same lecture 4 times a day.

Online education is also great for adults who want to continue their education. I have used Khanacademy for a year, and I am teaching myself calculus right now. But I was never good at math in school. In fact, I couldn't even do basic arithmetic when I started. I was horribly embarrassed that I was 22 and I couldn't divide and multiply. I had a shaky foundation in math in elementary school, and it gave me low confidence and fear whenever the topic of math was even mentioned. But the educational videos and problems online were a great way for me to sit down by myself, and fill in the gaps in my education without anyone around to judge me. I think a lot of adults considering going back to school have this fear of their peers looking down on them, and technology can help in part to alleviate this problem. Khanacademy's videos are subtitled in many languages, and can be used in a classroom anywhere around the world with an internet connection. Bright students from poor countries can learn by themselves, and teachers can use the videos and questions to supplement their lecture.
Education technology won't solve every problem. There is still the issue of underpaid, under-qualified teachers, dangerous school environments, income discrepancy, and unwilling students. But it is a huge step in the right direction. I think we will see a lot of changes in American (and even world) education over the next couple of years. Huzzah!

Let me share an anecdote: a few years ago a local high school bought 10 "smartboards" as they were called then -- essentially huge touch monitors. They were meant to provide interactive, tactile learning experiences for the classroom, they cost >$2000 each, and they were truly some pretty cool new tech. Students could write on them with whiteboard markers, and the board would capture their input as well as display information.

But because they did not come loaded with curriculum, and none of the staff had the time, much less the tech ability, to enter the curriculum, they were switched off and used as old-fashioned white boards. The kind you can buy for ~$10 at Home Depot.

The technophobe detractors all have valid points; yes, the child must have intrinsic motivation, yes, they must have good parental involvement, no, a computer can never completely replace a good teacher. But what they are missing is that it's 2013, there will be self-driving cars soon, and the majority of teachers -- and as a result, the majority of the teenagers -- I meet are close to tech-illiterate. Sure, they know how to use google, and they can text with their eyes closed and the phone in their pocket. But when I was their age my high school offered a class in Visual Basic. Most of the high schools in my area offer -- at best -- a class in computer literacy, where kids learn how to make really, really good Power Point presentations. In the poorer areas they don't even have that. It makes me want to chew my legs off.

I know that, as I am in my mid-thirties, I am officially a curmudgeon.
But to all the union anti-tech teachers out there: I don't want you to inspire the kids, I don't want you to be their friend, I don't want you to slave away as an underpaid 70-hour-a-week babysitter, jail warden, and ineffectual talking head. I want you to turn them into useful 21st century citizens. I want them to learn. how. to. code. And if the computer does that better and cheaper, or -- more plausibly, if the computer makes your job easier so you can concentrate on the actual teaching -- let me say it again ... huzzah!

Is that how the ownership of TE falls out? If so, I had not realized. But, yes, the entire article reads like an infomercial.

Our family invested heavily in a device that DID "allow continuous assessment of his [child's] abilities and shortcomings." The device understood "the pupil himself and the way human beings learn." It was called "Mom." It required no batteries or external power source other than the occasional bagel. The only peripherals it required were a kitchen table, a pencil and a bunch of scrap paper. It booted up every evening -- and then gave our son a quick boot too (which, believe me, was exactly what he needed!) (A server linked "Mom" to "Dad," a second device that scanned for written errors and faulty grammar.) The only major failing of "Mom" was its inadequacy in terms of Retail Mark-Up. But, Learning Egg seems like quite an advance over "Mom" -- I'll bet it even gives a child a pellet for a correct answer.

Whether this new technology is a genuine boost to education must be clarified in future study. I have seen these trends come and go (is there anybody out there who remembers the "Teaching Machine" nonsense of the late 1950s? That was another "revolutionary" learning device flogged through the schools by for-profit vendors.) Regardless, an offshoot of this technology may be to further devolve education from the public classroom.
I suspect there are many readers who do not realize how utterly disaffected some parents are with the public schools. Reasons for their disdain include safety, abysmal test scores and moral ambiguity in the instruction given there. For the school at which I teach I 'guesstimate' that at least one-quarter, and more like one-third, of the students were home-schooled prior to middle school. Homeschooling steadily becomes easier. There is a plethora of texts designed for home instruction and most areas have parent-support groups. It is not the isolating experience some may assume. Access to on-line courses is slowly rendering college professors less valuable. One can "take" a full lecture course from "home," ask questions and submit work and tests for grading. The Learning Egg concept just accelerates this de-normalization of public education. Mom can handle the software just as well as Teacher -- the added value of being a captive audience for an Education grad is immediately diminished. Profession after profession has been modified or abolished by computers. Some of us remember draftsmen and their slanty desks and swivel lamps. They were essential and then extinct. Bank tellers? Don't you use an ATM? How about all those tens of thousands who used to work in back offices in banks and insurance companies? Now, it may be the turn of education. Three or four home-schooled students in Mom's living room using Learning Egg or dozens of other programs don't need Teacher. The administrators will have less and less to administer. The public school education model worked reasonably well for about 160 years. But, maybe it was just a phase. More and more it seems as though the brick-and-mortar, tax-payer supported schools will just be masonry holding pens for minorities, the unmotivated and the none-too-bright. It seems the potential now exists to decouple education from an on-site school.
The first to utilize this will likely be religious parents, highly-motivated and involved parents, and those who feel their children are not safe at P.S. 110. There won't be a mass exodus. After all, there wasn't a mass exodus from drafting either.

I wonder how long it will be before the computers become self-aware and unionize? Maybe the older, slower ones will demand more pay and benefits than the newer, faster ones and will be given preferential treatment....

It always amuses me to note that teachers are forever proclaiming their dedication towards their pupils - and then are the first to block anything that might improve their pupils' scholastic outcomes. This is just one reason why it may be a very good idea to abandon the notion of a physical "school" altogether and move towards more effective and efficient ways of helping kids to learn.

Imaginary scenarios always fit perfectly into someone's dogma. I took a look at the Mathletics web-site. From a review of the Top 100 students world-wide, only a small number were American; very consistent with the slide in global ratings reported by the Article. However, when I took a look at the Top 100 USA, by my count, 71 out of the Top 100 were from Church (mostly Catholic) schools. My immediate thoughts are: 1) Given equal resources, Church schools apparently can outperform public schools; 2) There may be other factors that can contribute to student performance besides technical dazzle.

I had as much control over the teaching process as I did when using a textbook. I controlled which topics the computer taught, in what order and at what level of difficulty--factors I could vary by class and student in order to best meet their needs. If I felt that the computer was deficient at teaching any one topic, then I could take over and teach that topic myself. The most frequent objection I heard to using this technology was the opposite of your concern--not will the teacher lose control, but will the teacher be lazy and do nothing.
People would ask: "What is the teacher going to do?" as if keeping teachers busy were the most important outcome of our educational spending. Besides which, students of a lousy teacher are better off using an adaptive program and being ignored by their teacher than they are with a traditional textbook being taught by that same teacher. And when a good teacher has this technology, she can spend her class time working with individual students right at the point of their need. The most important thing is that our students get the best possible education. Every single math problem that my students did this year was graded immediately. The results were shown to them, and every time they got a problem wrong, the computer showed them exactly how to do it right. The computer then moved them on to the next topic if they had shown mastery, or guided them through more practice if they had not. The computer notified me when its digital interventions were not working, and then I tutored the student. It would be logistically impossible for me to do each day for my 160 students what the I CAN Learn math program and 36 refurbished desktops did. Our students should be returning home from the first day of school with a tablet full of interactive, adaptive apps, not a backpack full of worn textbooks. Think of how much more interesting a History or Science book would be as an interactive app with text, graphics, video, self-paced learning and the ability to interact with the teacher and students all over the world just as we are doing right now. California teacher.

While the capacity to individualize and customize learning experiences for students is one powerful role that tech can play in the classroom, its real power is the ability to connect students with one another and with authentic problems and resources around the world.
Using technology to connect students in NYC and rural Arkansas as they work together to design new water filtration solutions with partner schools in the developing world, all under the supervision of virtual mentors who are real working engineers -- THIS is the potential that technology holds for real learning. Let's raise the bar beyond personalized tutorials and create real transformative change in schools. -Sarah Field, Digital Learning Designer, New Tech Network, twitter.com/sfieldnewtech

My kids had to have a compulsory laptop starting in 1996. And even though there was a lack of software for teaching purposes, their laptops became a tool for every subject. So I sent them to a school where they taught manually - and they did not need specialised software. The headmaster there told me that he was against it because he had done a lot of research on it and found that the kids hardly used it for lessons.

The human brain is getting smaller. It was before the advent of computers. Now people can hardly write. They hardly count. At the supermarket you can cheat them and get away with it. At business meetings it is easy to fool them - people just can't do the figures in their heads any more. But it is easy for the teachers - marking lessons is as easy as ABC. Humans have lost their basic abilities.

Well said, and probably the reason (if true) 71 of the top 100 in Mathletics (cited by one commentator) were from private schools. Could it be that success is tied to also incorporating concepts like discipline into the learning process?

Well, hopefully the most intransigent can be disintermediated. We just need to get them (student teachers) early enough so they don't get sucked into a "them and us" mindset. The problem can be serious, as a bright 10-year-old is often more insightful than an average educator, who is much better than computer code.
And the 10-year-old knows the difference between facts and knowledge, and understands that knowing the number of rivets in the Eiffel Tower, or the exact date of Lincoln's birthday, is meaningless. So the student will be thrown a series of poorly stated lessons, all wrong when looked at in detail. Example: an atom is taught as a ball of protons and neutrons surrounded by electrons. Wrong: that is not an atom, it's a model of an atom, and the difference is vital to a physicist - which is why it's called the "Standard Model." So the child will be marked down again for knowing the correct answer. That will not be understood by a teacher, and most certainly not by a computer program. Then the student will be directed to remedial lessons - and that will really mess things up.

Ed Tech's effects on an individual student's knowledge acquisition and educational skills enrichment are not well understood. The data is a mixed bag. K-12 students who are enrolled in online schools most often perform poorly on tests and in other pursuits: college completion, military service, employment continuity, etc. Programs that appear very similar (one-on-one laptop, flipped classroom paradigms) can work well in one environment but not another. Ed tech is not a silver bullet. If the United States wants higher scores, then smaller class sizes, a smaller homeless student population, a higher minimum wage, universal health care, better food security, and applause for academic excellence are better investments for improved outcomes. Give every child a mentor along with their apps. Kids with a meth dealer as a parent will not flourish with Khan's flipped classroom, but other students may benefit. Gates's obsession with measurement needs a more balanced approach.
Your formulation of "anything that might improve their pupils' scholastic outcomes" is also at war with reality. Tablet-based accountability made the ninth graders in my building cry last year, and damaged their "scholastic outcomes". The young TFA boosters described the punishment-based classroom management strategies we must now implement to keep our students buckled down to their iPads. Pearson Flipped Learning Network and Nellie Mae Foundation have decreed that it will be extended to the whole school next year, and we were gifted a two-day training in how to implement their project. It's a hoax. The digital emperor is naked, arrogant, and dishonest. The kids are begging to get his heel off their throats. In the above geography example, a Korean student is more likely to get the "right" answer, 4, than an American student living in Minnesota who has take a boat directly from Minnesota to Isle Royal National park in Michigan who knows that the answer is 5. Then headlines will say "Koreans understand US geography better than Americans." That is just humorous, but the real problem is if the student is marked down for knowing the right answer, 5. As computer programs are rigid and maintained poorly (the Economist blog editor still can not handle edits or deletions properly) the student will have an incorrect grade forever. What will that do? He or she will either learn to con the system, or develop contempt for it.
http://www.economist.com/node/21580136/comments
#include <cel_license.h>

This class manages the cartridges. For more information, see CartridgeManager.

Commit the reserve identified by the GUID.

Create a cartridge from unencrypted script. This method is reserved for internal use. For internal usage, see the documents in cel_baselib/opt/licenseSystem.

Obtain the list of activated products.

Obtain an instance of CartridgeManager.

Obtain the product status.

Install the new cartridge.

Re-initializes the cartridge database. All the information in the database will be cleared, but cartridge installation history is not cleared. This method is provided for database recovery purposes only and should not be called for normal use.

Revert the reserve identified by the GUID.
https://www.cuminas.jp/sdk/classCelartem_1_1License_1_1CartridgeManager.html
22 June 2009 This article assumes that you have a working knowledge of ActionScript 3, and that you know how to use FTP for sending files to a web server. You may want to start by reading Chris Charlton's article, Building a Drupal site in 10 steps, which provides more in-depth information about installing and setting up Drupal in general. It would also be beneficial to look at the SWFAddress site because I discuss using SWFAddress toward the end of the article. Intermediate This article takes you through the process of using Adobe Flash or Adobe Flex to build a site with Drupal, an extremely popular open-source content management system (CMS) written in PHP. Along the way you'll get a better understanding of the benefits of this technique, also known as "Druplash" or "Druplex." Many successful sites created with Adobe Flash use XML files, or no external files at all, for their content. Here's my list of reasons why you may want to consider combining the Flash platform with Drupal: The process of using Adobe Flash with Drupal is pretty simple; in fact there are really only four steps to follow: The first step is to install the Drupal CMS onto a PHP-enabled server. I strongly recommend using your local machine for testing if possible. It's true there's quite a bit of groundwork needed to get this set up, but after the first time it can be done really quickly—and it pays you back many times over once you're up and running. For more detailed information on installation, visit the Drupal 6 section of the Drupal online documentation. This article assumes you are installing to the web root folder. You can install to any location, however, such as a subfolder or a subdomain without modification. Here are the basic steps to installing Drupal on a PHP-enabled server:
Here are the basic steps to installing Drupal on a PHP-enabled server:

Note: Make sure you extract the Drupal ZIP file into the folder directly, because you may miss the invisible .htaccess file if you just use Windows Explorer or the Mac OS X Finder to drag the files. Alternatively, enable "show hidden files" on your system and copy this file manually.

To do this on a Mac/Linux/Unix machine, open a command prompt, change to the sites folder, and type chmod a+w default. In Windows you can set the file permissions for the "Internet Guest Account" via the File Properties dialog box. Alternatively, if you are using FTP to access files on a remote server, you can usually add the write permissions using your FTP client. The Drupal site has detailed instructions on how to do this for most systems.

When installing Drupal, you may have noticed an option for enabling "clean URLs." If not, don't panic; you can enable this via the Drupal administration screens later on. Either way, this option deserves some explanation. When you access a page (or node) in Drupal in a web browser, the URL is in a query-string format such as /?q=node/1, which is quite nondescriptive. Clean URLs do not contain query string variables; instead, they just contain an easy-to-read path made up of words and slashes only—for example, /about-us. When clean URLs are enabled, the server needs to dynamically rewrite them into the query-string format that PHP understands. This makes the URLs much more descriptive to both users and search engines alike but still allows the underlying PHP engine to interpret the request correctly.

Follow the instructions on the Clean URLs page in Drupal via Administer > Site Configuration > Clean URLs. In most cases you just need to enable/install the mod_rewrite Apache module on your server if it is not already switched on, and/or allow .htaccess files to set the rules for your site's folder.

You'll be using the Path and Pathauto modules to allow you to specify custom paths, or aliases, for your nodes.
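The rewriting itself is handled by a handful of Apache directives in the .htaccess file that ships with Drupal. As a rough sketch (the real file contains additional directives, so treat this as illustration rather than a drop-in replacement):

```apacheconf
# Enable the rewrite engine (requires the mod_rewrite Apache module).
RewriteEngine on

# Don't rewrite requests for files or directories that really exist.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d

# Internally map a clean path like /about-us to index.php?q=about-us.
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
```

The rewrite happens server-side, so the browser (and Flash Player) only ever sees the clean path.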
In this context, a path is the portion of the URL that appears after the slash that immediately follows the domain name. The paths in Drupal will match directly with the (SWFAddress) paths in ActionScript, giving you a one-to-one mapping between human-readable paths and the node IDs that uniquely identify the content. This is why the clean URLs option must be enabled if you want deep linking in your site. (There are ways to work around this if you absolutely cannot enable clean URLs.)

To use Drupal with content that you create in Adobe Flash or Adobe Flex Builder, you'll need to add a few very useful modules. Just extract the files found in the ZIP file into your sites/all/modules folder to create a new folder for each module. Install the following modules:

Note: The official 6.x-0.14 release has a bug that prevents us from generating a sitemap, so please download the 6.x-2.x-dev development snapshot instead (unless the official release is newer than 6.x-0.14 when you read this note, of course).

After installing the modules, you need to activate (or enable) them. I recommend turning them on one or two at a time. Drupal can choke if you check all of them at once because the first time you enable a module it runs an installer script for it. Enable the following modules using the Administer > Site Building > Modules page. I've also noted any included submodules that you'll need to enable:

There are several other modules that you can enable to add more power to your website. For example, when you specify images in your nodes, the IMCE CCK image module allows the user to upload and pick files in one step, rather than pasting the known file path of an already uploaded image file.
However, I do not recommend using these additional modules until you are already comfortable with how everything is working, because they can take a lot of time to play with and tweak to get just right:

After you enable all the relevant modules, you need to enable anonymous access to the Services module so that you can call its methods from your application. To do this, visit the Administer > User Management > Permissions page (see Figure 1) and click the anonymous and authenticated access check boxes for the following:

Finally, save your changes when done.

Note: You can use this Permissions screen to enable all sorts of rights for anonymous and authenticated (logged-in) users, as well as other types of roles you create, such as moderators or content creators. As the administrator, you always have access to all features.

You need some content for Flash Player to be able to display anything. Specifically, you need a node—Drupal's term for a page, blog post, or any other content that you might create for your site. Later in this tutorial you'll be creating custom "content types," more advanced versions of the simple Page and Story types that come standard with Drupal. For now, you can start by creating a Page with some text so you'll have something to look at.

Note: A great way to generate age-old placeholder text ("Lorem ipsum dolor sit amet, consectetur adipiscing elit") is the Lorem Ipsum generator.

Create a test page by going to Administer > Create Content > Page in your Drupal site. This node will have node ID "1" (you can click Edit and check the URL to verify this, but it's not important). A Page has a title and a body field by default. The body can be HTML, and you can configure Drupal to allow all or just some HTML tags (it will strip any disallowed tags when you save). You may be thinking that HTML isn't much use to Flash Player, unless perhaps you're writing XHTML that you want to parse as XML.
Even that is quite limiting because it requires the person entering the content to type XML manually and know exactly what to write. Later on I'll cover adding extra fields to nodes so that your pages contain videos, arrays of buttons, and information that will enable your SWF application to identify these fields individually by name. But for now, you just need to see how to access the contents of a Drupal node from within a SWF application.

If you visit the Administer > Content Management > Content page, you'll notice that this new page is now listed; you can always come back here to edit or remove it.

You have a Page in the site, but right now it's only accessible as HTML in the web browser, so how do you get at this data in ActionScript? The first step is to visit the Administer > Site Building > Services page and uncheck the Use Key and Use Session ID options. This allows you to access the services from the comfort of Flash without putting the SWF file on the server.

Next, create a new Flash FLA (or Flex Builder project) and add the Flash Drupal library to the ActionScript 3 class path via the project settings panel. You now need to make a request of the Drupal Services module, so if you are using Flash, open up the Actions panel, select frame 1 on the Timeline, and enter the ActionScript code below.
If you are using Flex Builder, enter this in a suitable place in your class:

import uk.co.richardleggett.drupal.services.NodeService;
import uk.co.richardleggett.drupal.model.Node;
import uk.co.richardleggett.net.services.events.*;

var nodeService:NodeService = new NodeService("");
nodeService.addEventListener(ServiceResultEvent.RESULT, nodeServiceResultHandler);
nodeService.addEventListener(ServiceFaultEvent.FAULT, nodeServiceFaultHandler);
nodeService.loadNodeData("1", Node);

function nodeServiceResultHandler(event:ServiceResultEvent):void {
    trace("result: " + (event.result as Node));
}

function nodeServiceFaultHandler(event:ServiceFaultEvent):void {
    trace("fault occurred: " + event.message);
}

Note: The URL in the NodeService constructor should point to your Drupal installation.

Test the SWF file. You should see the contents of the Page along with a few other properties in the output panel. If you like, dig into the Node class file to see what you get as standard.

Take a close look at what this code does. First of all, it imports the classes I'm going to use and creates a new NodeService instance, passing it the AMFPHP gateway location in the constructor (you can get your AMFPHP gateway address from the Services page in Drupal; here I installed Drupal in localhost, the root of my machine). The code also registers two event handlers, one for a result and one for a failure. Then it calls loadNodeData(), passing it the node ID ("1" in my case) and the class to use to parse the result (the basic Node class here). Finally, the two handlers for the result and failure simply output the outcome; in the case of a result, the result object is a Node.

I think you'll agree that requesting one node at a time like this would be cumbersome for a site, and hard-coding the nodes' IDs is really bad practice. Fear not.
I'll be using the Views module in Drupal to get a sitemap that lists all the nodes in the site—giving me their IDs, so I can load the data—and their paths (the portion of the URL after the domain; for example, "about-us" or "products/cars") so I don't have to hard-code IDs.

Note: For debugging, you can use Charles or Service Capture to view the AMF request and response objects as they are sent and received. If you see NetConnection.Call.Failed in the Output panel, make sure you've enabled anonymous access to the Services module, as outlined in the "Setting permissions" section of this article, and that you can visit your AMFPHP gateway in the browser without error. If you see NetConnection.Call.BadVersion in the Output panel, it could be a PHP error. Check your PHP/server error logs (turn on log_errors and set the error_log file in php.ini if it isn't already enabled). On Mac OS X you can view your PHP error log using the built-in Console application; it appears as /var/log/apache2/error_log in the list.

It was quite simple to load a node's data by using NodeService along with a known ID. But the SWF application shouldn't need to know these node IDs, because that limits the person using the CMS and makes it hard to add new nodes without updating the application. You can introduce a sitemap to solve this problem. Think of this as a list of all the nodes in the site (I'll cover filtering later). In this list you get just a few details about each node: its title, ID (nid), type, and path. By default you also get language, parent, and order_by, among others. With this information you can display site menus, look up a node by path, and so on. For example:

You can also view the hierarchy, which will be discussed later in the "Node hierarchy (parenting nodes)" section:

trace( sitemap.getNodeByPath("about-us").childNodes );

In effect, you have access to the structure of a Drupal site and all of the content within it.
One of the first things I do in an application is load the sitemap and store it so that I can use it throughout the site to figure out what data to load, and even what to display (by inspecting the Node.type property). There's only one problem: Where do you get this magical sitemap? The answer is that you need to set up a sitemap View in Drupal using the Views module.

A View in Drupal is like a database query. You can choose which nodes to pull out and list based on certain criteria. Views can be used to grab all kinds of information out of a Drupal site—anything from the sitemap to a list of products for an online shop.

Create this View by going to Administer > Site Building > Views > Add. Type sitemap for the name, Sitemap for the description, and leave Node as the view type; then click Next. The key thing to note on the following screen is that while you can preview your view, you need to click Save to apply any changes. This may sound obvious, but with the way the UI works, you can sometimes be testing your application and wondering why it isn't updating. It's easy to forget to save.

Finally, change the Row Style to Node in the Basic Settings and change the Items To Display to 0. That way it won't limit how many results you can see. Click Preview to see whether there is a node in the results. Make sure to click Save (this screen doesn't make it too obvious) and then use the Administer > Site Building > Services screen to test this out. Simply click views.getView and type sitemap as the View Name, followed by nid,type,path,title,parent as the Fields. You should then see a dump of the objects returned when you click Call Method.

If you don't see views.getView on the Services screen (see Figure 2), enable the Views Service module from the Administer > Site Building > Modules screen. Now you're ready for a practical example of loading the sitemap from ActionScript.
Create another project and enter the following code:

import uk.co.richardleggett.drupal.services.SitemapService;
import uk.co.richardleggett.drupal.model.*;
import uk.co.richardleggett.net.services.events.*;

var sitemapService:SitemapService = new SitemapService("");
sitemapService.addEventListener(ServiceResultEvent.RESULT, sitemapServiceResultHandler);
sitemapService.addEventListener(ServiceFaultEvent.FAULT, sitemapServiceFaultHandler);
sitemapService.loadSitemap();

function sitemapServiceResultHandler(event:ServiceResultEvent):void {
    trace( (event.result as Sitemap).nodes );
}

function sitemapServiceFaultHandler(event:ServiceFaultEvent):void {
    trace("a fault occurred: " + event.message);
}

With any luck you'll see an array of nodes in the output panel (each will display the basic Node fields). You now have full access to all the nodes in the site, new or old, and you can use methods such as sitemap.getNodeByPath() to pull out specific nodes. If you've been using the Node Hierarchy module in Drupal, the childNodes arrays should also have been populated. I talk more about that in the "Node hierarchy (parenting nodes)" section. Either way, you can inspect the node.type property in the sitemap to determine what type of node you have, and you can attach or load the appropriate content to render that node in Flash Player (most likely giving the Node object itself as the data provider to your view).

The SWFAddress Drupal module passes the Drupal path that the user initially requests into the Flash SWF as a FlashVar, with a value like "about-us" or "products/someproduct". You can use that FlashVar to request the node's content (by using the sitemap to map the path to a node ID). Using the SWFAddress library is also a great way to handle navigation within your site because it updates the browser's address bar and allows people to copy and paste links in e-mail messages or IM. It also supports browser bookmarking.
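To tie the sitemap and the FlashVar together, here is a rough ActionScript sketch. The FlashVar name ("path"), the "home" fallback path, and the assumption that Node exposes its ID as nid are all illustrative; check the SWFAddress Drupal module and the Flash Drupal library for the actual names:

```actionscript
// Sketch: translate the initially requested deep-link path into a node load.
// Assumes the sitemap and a NodeService have already been created as above.

// The FlashVar name "path" is an assumption for this sketch.
var initialPath:String = loaderInfo.parameters["path"] as String;

// Fall back to a hypothetical front-page path if no deep link was given.
if (!initialPath) initialPath = "home";

// Map the human-readable path to a node via the sitemap...
var requestedNode:Node = sitemap.getNodeByPath(initialPath);

// ...and load that node's full data (nid is assumed to hold the node ID).
if (requestedNode) {
    nodeService.loadNodeData(requestedNode.nid, Node);
}
```

The same lookup can be repeated on every SWFAddress change event, so navigation and deep links go through one code path.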
Note: For more information, view the presentation by Lee Brimelow to see a demo of using the SWFAddress library to enable direct and deep linking of a SWF file.

Those of you who are experienced in building data-driven websites in Flash may be thinking that my approach so far has been incredibly basic and inflexible. But this is actually where it gets interesting. The next few sections describe how to expand the technique to support more complex types of content in the CMS.

Now that you have the ability to populate a site with nodes and load them in ActionScript, the next logical step is to figure out a way of storing the information in a more SWF-friendly format. At the moment the two fields you can use to store information are the title and the body, but the body is just a chunk of HTML. Custom content types (enabled by the CCK module) allow you to define new Page types that can contain any fields you wish. For example, I could have an AboutUsPage content type with the following fields:

In Drupal your custom fields are prefixed with field_ automatically, so these extra fields would appear in the AMF result object as field_partner_link and field_image, respectively.

To create this AboutUsPage content type, navigate to the Administer > Content Management > Content Types screen and click Add Content Type. For the Name, type something readable by humans, such as About Us Page (see Figure 3). For the Type, use a string that you can reference in code, for example aboutUsPage (remember that Node includes the type property when you retrieve one from the NodeService). I recommend disabling comments in the Comment Settings. If you intend to have pages under this one, be sure to select the Can Be Parent option. Click Save when you are done.

Back on the Content Types screen, click Add Fields. Follow the wizard, typing partner_link as the Field Name and Partner Link as the Name. Select Text as the Field Type and click Continue. Select Textfield for the Widget Type.
On the settings screen for this field type, select Unlimited for the Number Of Values. When you create a new AboutUsPage, this field will have a button to add multiple entries, and in ActionScript you'll get an array of field_partner_link values. Follow the same steps for field_image.

Now it's time to create a node using the AboutUsPage content type. On the Create Content screen, create a new AboutUsPage, filling in a few values for the newly added fields. When you get to URL Path Settings, you'll see that Automatic Alias is already selected. I'll explain what this means shortly, but for now deselect it and type about-us for the path. This means when you save your content it will be accessible at http://localhost/about-us (with localhost being whatever server and path you are using).

Note: When you go back and edit content, Drupal has the habit of re-enabling the Automatic Alias option, which will change what you see if the automatic alias is set up to use a different value. Be sure to deselect it if you aren't sure that the automatic alias settings are set up for this content type. Also, I recommend you apply the patch that is available to help fix this. It adds a new "Update Setting" option in your Pathauto settings page to make sure it doesn't automatically generate a new alias for content that already has one, but you must choose this option for it to take effect. (See the Drupal documentation for help on applying patches.)

By default, all content/nodes you create will have a path of "/node/X", where X is the node's ID. (If you do not have the clean URL option enabled, it will be "/?q=node/X".) You can always visit /node/X to view a node, but it's more useful to use a human-friendly alias instead, so "/node/5" might become "/about-us". The Path module enables the textfield you saw when creating your content. It allowed you to enter "about-us", and it will display that node to the user without displaying "/node/X" in the URL.
See the Path and Pathauto pages in the Drupal online documentation for more information.

Recall that you deselected the Automatic Alias option when creating the AboutUsPage node. The Pathauto module allows you to set up rules for each content type that dictate what path should be given automatically to any content/nodes created. In this case I had you disable the functionality because you hadn't yet set up the automatic alias pattern for the AboutUsPage content type. An automatic alias is just a path that is named based on the content type you are creating, and it can also include the node ID or other information about the node you are creating.

On the Administer > Site Building > URL Aliases page you'll see an alias named "about-us", which you gave to the AboutUsPage when you created it. If you click Automated Alias Settings you can choose to specify the default path names for all known content types. For example, if you type about-us for the Pattern For All AboutUsPage Paths, then from now on if you create another About Us page, the path will be given as "about-us". Of course, this would cause a problem if you had two About Us pages because they'd have the same path, but Pathauto lets you use replacement patterns. For example, you could use "about-us/[nid]", where "[nid]" is replaced with the unique node ID, for instance "about-us/2". This solves the problem of having many pages that use the same content type.

Note: One common problem is that once a piece of content has been assigned an alias, as it has already in this case, you must delete that alias from the URL Alias > List screen, go back to the Automated Alias screen, select the Bulk Generate Aliases For Nodes That Are Not Aliased option, and click Save Configuration to regenerate the aliases for any content that doesn't have one.
In practice, you shouldn't encounter this problem if you make sure you set up the automatic alias pattern for a content type before you create any content using it (or simply don't use automatic aliases at all, for example on sites where new content is rarely added).

Having a flat sitemap doesn't make for a natural site structure. Sure, you could fake it in ActionScript, or you could just rely on your paths to reflect a hierarchy, but that's the long route. The easy route is to use the Node Hierarchy module in Drupal. After enabling this module, you can edit any existing content via Administer > Content Management > Content, scroll down to Node Hierarchy, and select a parent. If you don't see this option, make sure you have enabled the Can Be Child option in the content type for the content you are editing, and the Can Be Parent option for the one you want to be the parent (use the Administer > Content Management > Content Types page).

Note: I usually create a Home node in Drupal, promote it to the front page, and set it as the parent of my top-level Section pages so that I can easily build a menu in ActionScript from the Home node's children.

If you now run the Sitemap application that you created earlier, you'll see that the child node now has a parentNodeId value. The SitemapService class automatically builds the hierarchy if this is found, so someNode.parent will give you the parent Node object and parentNode.childNodes will give you the array of child Node objects. See the Node Hierarchy page in the Drupal online documentation for details.

Being able to add fields to a content type, and even having the ability to set them to "multiple" (so that you can allow more than one entry, producing an array), is great, but it soon breaks down when you want to represent a complex or compound object. This is something I do all the time in ActionScript by writing classes or value objects—for example, a Button—which might be defined as having a label and a link (or path).
Consider a case in which you want to display three buttons on your home page. One way to do this is to create a HomePage content type in Drupal and add two multiple-value text fields to it: field_buttonlabel and field_buttonlinkpath. What you'd get back in ActionScript is two separate arrays of values that you can loop through in order to build one array of Button objects. This is an unnatural way of working; it quickly gets confusing to someone editing the node in the CMS, and you end up with lots of fields.

The solution is to use the Flexifield module. This gives you a new field type to choose from—namely "flexifield". Unlike other types, Flexifield allows you to have multiple input fields for a single field without writing any PHP (the normal route is to write custom field types using CCK). It does this by allowing you to choose a content type to act as the template for your field type. See the Flexifield page in the Drupal online documentation for details. Remember that up until now you have only used content types to act as the template for an entire node/page.

Go to Administer > Content Types and create a new content type named FieldType_Link. Add two fields to it: field_title and field_path (try to name your field types so that they can be reused in many situations). Both fields should be set to textfield, single line, single entry (both are default).

Note: You'll notice I prefixed my new content type with FieldType_. I do this to make it clear that this content type is not to be used to create nodes/content; it's specifically for the Flexifield used in other content types.

With the new content type defined, go back into your AboutUsPage content type (Administer > Content Types > AboutUsPage > Edit) and add a new Flexifield that uses FieldType_Link. Name it field_link and set it to Multiple to allow multiple links in a page. Now create a new page via Create Content > About Us Page.
You'll see that you now have a Link field which contains within it two fields: field_title and field_path. You can enter as many Link fields as you like because you set the field_link field to allow multiple entries. In ActionScript you get back a single array called field_link, and each entry in this array contains two values, one for field_path and one for field_title. So to access the third link's title you may use trace(node.data.field_link[2].field_title).

I hope this article provided you with some insights into how to use Adobe Flash and Flex Builder with Drupal. In practice, I would expect you to integrate this raw technique into the site-building framework you already use, such as PureMVC, Gaia, Mate, and so on. (Jeremy Wischusen summarizes the most popular frameworks currently available in his article, Choosing a Flex framework.) This way you can automate many of the tasks involved with getting content from the CMS when the user navigates around your site, enabling you to interact with Drupal at a higher level.

I haven't touched on how Drupal displays the nodes that you are reading into ActionScript when it encounters a search engine or device that does not have Flash Player installed (essentially, where SWFObject fails to embed the content that is played back in Flash Player). There are several options here. They range from simply outputting the node's fields one after another as HTML (this is what Drupal does by default), to hiding certain fields via the Content Type settings (for example, hiding all the Flash Player-only fields and just displaying a "body" and "title" field), to writing PHP or CSS that reads the custom fields and outputs them nicely in HTML. Read Chris Charlton's article, Using Drupal themes with Dreamweaver CS4, to get started.

As for Drupal itself, version 7 is currently in development and looks to be very promising for rich media.
In particular there's a lot going into making the CCK module part of the core, which could mean that fields become first-class objects. That opens up the possibility of having Flex-based editors for content types inside Drupal itself, and most likely not needing Flexifields that use content types for their structure. Either way, the future looks great for Drupal-backed sites built with the Adobe Flash platform.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License
http://www.adobe.com/devnet/flash/articles/drupal_flash.html
def method_name
   expr..
end

You can represent a method that accepts parameters like this:

def method_name (var1, var2)
   expr..
end

You can set default values for the parameters, which will be used if the method is called without passing the required parameters:

def method_name (var1 = value1, var2 = value2)
   expr..
end

Whenever you call the simple method, you write only the method name as follows:

method_name

However, when you call a method with parameters, you write the method name along with the parameters, such as:

method_name 25, 30

For example, a method with default parameter values can be called both with and without arguments:

def test(a1 = "Ruby", a2 = "Perl")
   puts "The programming language is #{a1}"
   puts "The programming language is #{a2}"
end
test "C", "C++"
test

This will produce the following result:

The programming language is C
The programming language is C++
The programming language is Ruby
The programming language is Perl

Return Values from Methods

Every method in Ruby returns a value by default. This returned value will be the value of the last statement. For example:

#!/usr/bin/ruby

def test
   i = 100
   j = 200
   k = 300
   return i, j, k
end
var = test
puts var

This will produce the following result:

100
200
300

Ruby Class Methods

A method can also be defined as a class method, as in the following code:

class Accounts
   def reading_charge
   end
   def Accounts.return_date
   end
end

See how the method return_date is declared. It is declared with the class name followed by a period, which is followed by the name of the method. You can access this class method directly as follows:

Accounts.return_date

To access this method, you need not create objects of the class Accounts.

Ruby alias Statement

This gives an alias to methods or global variables. Aliases cannot be defined within the method body. The alias of the method keeps the current definition of the method, even when the method is later overridden:

alias bar foo

Ruby undef Statement

This cancels a method definition:

undef bar
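The features described above can be pulled together into one short, runnable sketch. The return value of Accounts.return_date is a made-up placeholder:

```ruby
# Default parameter values are used when the caller omits arguments.
def describe(language = "Ruby")
  "The programming language is #{language}"
end

# A method returns the value of its last statement; an explicit
# `return` with several values hands the caller an array.
def totals
  i = 100
  j = 200
  k = 300
  return i, j, k
end

# A class method is declared with the class name followed by a period,
# and is called directly on the class, without creating an instance.
class Accounts
  def Accounts.return_date
    "2009-06-22" # hypothetical value for illustration
  end
end

# `alias` gives a second name to an existing method;
# `undef` then cancels that name (the original still works).
alias sketch describe
undef sketch

puts describe("C")  # => The programming language is C
puts describe       # => The programming language is Ruby
p totals            # => [100, 200, 300]
puts Accounts.return_date
```

Note that `alias` takes the new name first, which is the opposite order from `alias_method`.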
https://www.tutorialspoint.com/ruby/ruby_methods.htm
Patron Saint of the Internet

Posted by Hemos on Tuesday June 15, 1999 @09:50AM

Quite a number of people have been writing with the news that the Catholic Church is considering naming a patron saint of the Internet. The strongest current contender is St. Isidore, a 7th-century Spanish saint, who is credited with making one of the first databases - a 20-volume encyclopedia.

Re:thats nifty? (Score:1)

OLIPN (Score:1)

Re:Ever lived in Belgium? (Score:1)
-Dean

Re:Patron saint? (Score:1)
Case in point, prostitutes. Prostitutes do wonderful things for society, yet am I correct in thinking the church isn't overjoyed by them? Yet for a couple of quid (pounds), a bloke (as it generally is) is satisfied (for a while), and doesn't get overcome by his feelings of lust/natural desires that he has to rape some (almost) defenceless person (possibly a girl so young as to almost be a child)?!?!? Anyway, I wouldn't say indulging in the fantasy is indulging in the bad thought; I would say actually going out and following/stalking/raping the TV star would be indulging (and I agree wrong). You mention double standards, but it seems as though the church has more of them than me! Man, I must appear like some kind of anti-religious nut!

Re:Patron saint? (Score:1)
But the fact that many people think it is alright doesn't mean it is right. "Release" becomes an excuse, and while many people could still somewhat control themselves before going too far, for many others sexual fantasies lead down a slippery slope. St. Maria Goretti (11 years old) was murdered by Alessandro (19 years old) when she refused Alessandro's rape attempt. Maria was pure at heart, while Alessandro was full of impure thoughts... his room was full of pornographic magazines and posters...
Impure thoughts and pornography, as illustrated in this tragedy, are not releases but rather fuel to Alessandro's sexual desire, to the point that he tried to rape little Maria, and when she refused, he stabbed her 14 times and left her to die. Would he even have thought of raping her had he not been mesmerized, and his morals desensitized, by pornography?

Regarding prostitutes: The Church does not shun them. (At least we shouldn't.) There are quite a few canonized saints who were once prostitutes before their conversion. The most famous of all is probably St. Mary Magdalen, the Penitent. You might know her story in the Bible: She was nearly stoned to death when the Pharisees caught her in the very act of adultery. When they brought her to Jesus, Jesus asked them whoever has no sin can cast the first stone. The Pharisees hesitated and finally escaped one by one. Jesus then forgave her sin, and said, "Go now in peace, and sin no more." From then on, St. Mary Magdalen left her old sinful way of life and became a devout follower of Jesus. Just as Jesus had loved her unconditionally, St. Mary Magdalen poured out her love for God too. Prostitution is sin... but we hate the sin and love the sinner. In many aspects, prostitutes are victims of our society. I don't think the Church is being double-standard in this regard.

The Church is rather consistent, actually, and some would even say "radical" or "extreme". Raping is wrong, that we all know. Premarital sex? Why not? It is just casual fun, right? Fantasizing about sex is a sin? You've gotta be kidding! And yet, the Church is not budging to public pressure. After all, the Church cannot teach against what Jesus taught us: "Whenever a man looks upon a woman with lust, he has already committed adultery with her in his heart." Yes, I realize that perhaps over 90% of Slashdot readers would disagree with what I wrote above. However, to me, to my family, and to many of my friends, Jesus' teachings make perfect sense.

Anthony

P.S.
Well, there are lots of people who are anti-religious, so if you are indeed one, you are not alone. However, I do hope that you were just kidding about being an anti-religious nut. Re:Patron saint? (Score:1) Re:God no... (Score:1) Re:The internet and religion (Score:1) I'll have to admit I was quite surprised by this. I was rather expecting a condemnation of the Internet as a vile tool of Satan rife with pornography and atheism. Also, let's all take joy in the fact that Jerry Falwell has not discovered push technology. "Shove down throat" technology perhaps. The Great Schism Part Deaux (Score:1) Could be fun to watch though... -Subotai Re:You gotta... (Score:1) Church might not approve... (Score:2) 1. Sacrificing AOL disks to the god of Packet Storms 2. Chanting the names of great hackers to ensure that code will compile without errors. 3. Building a shrine to the god of Greater Bandwidth entirely out of MSN CD-ROMs. 4. Imploring the High Priestess of IT for a larger disk quota. 5. Daemon processes. 'Nuff said. Re:GOD? (Score:1) Hmm. God for sale. How ironic. paul Re:You gotta... (Score:1) Re:BBC News == Supermarket Tabloid of the Internet (Score:1) Re:GOD? (Score:1) ---Jason Re:God no... (Score:1) I say unto you, check thy facts and thy history. Read the Apology of Socrates. I quote: "This you must recognize, the god has commended me to do. And I think that no greater good has ever befallen you in the state than my service to the god. For I spend my whole life in going about and persuading you all to give your first and greatest care to the improvement of your souls, and not till you have done that to think of your bodies or your wealth." Socrates was very religious. (And I have no clue where you got this, man-as-god BS.) 
thats nifty (Score:1) What patron saints are (Score:2) Anyone interested in looking up patron saints should try saints.catholic.org [catholic.org] -- it contains an index of the officially-recognized patron saints, plus some good background information. I will quote their explanation of patron saints here: Some things to note -- the news article simply mentioned a popular movement to have the Vatican declare St. Isidore the patron saint of the Internet. These popular movements happen all the time within the Roman Catholic Church. Some receive official approval, some do not. Of course, any Catholic (or anyone else) can request the intercession of any saint in any matter. No one needs to wait for Vatican approval. Personally, while I can see why St. Isidore would show an interest in the Internet, there are some other saints I would nominate: GOD? (Score:1) natural law (Score:1) --Paul Saint IGNUcius! (Score:1) Patron Saint by the Pope?!?! (Score:1) Re:GOD? (Score:1) Hoorah! (Score:2) And we _all_ know that if anything needs a patron saint right now, it's the Internet. An omnipotent God just doesn't cut it when the backbone goes down. We need somebody who really cares. (All in the name of good humor, folks. :) ) Re:Saint IGNUcius! (Score:1) and convenience stores. Why Not? (Score:2) Being able to get a patron saint medal that can be stuck to the front of a server isn't a bad idea at all, IMHO. Seriously, most sysadmins can use all the help they can get! Re:Patron Saint by the Pope?!?! (Score:1) What about Al Gore? (Score:1) Re:God no... (Score:1) The internet, as well as many other things, is a result of human ingenuity. Clearly not all people believe this, but I believe that human ingenuity is not something we made, but something we were given. I am confident you will disagree with me, and I thoroughly don't mind at all. How long has there been a Vatican political agenda? Not long; the Vatican has not been a sovereign nation until this century.
This has been very good. It has made some official separation between the Italian political arena and the Church. Clearly, after hundreds of years of the Roman Catholic Church, it will be a while before the Italian part has a chance to fade. The Catholic Church is OLD. When you have been around long enough, people will sometimes do really dumb things. This is no exception. I hope that other people are more forgiving of your decision making than you are of organized religion. I am not exactly sure why I responded... knee-jerk reaction, I guess. Not so much the aspect of faith, but the historical half-truths and bitter spin you put on the topic. Clearly, your convictions are deep-seated and I am not trying to "win you over." I'm just thinking and letting my fingers click away until I feel better. This is Slashdot; you can do that. Internet Saints Up a Couple Levels (Score:3) St. Marconi of Unlimited Bandwidth St. Turing the Mystic St. Hopper of Transubstantiation of Bugs St. Ada the Inscrutable St. Stallman of Hoofed Mammals St. Torvalds the Flightless and from Jimhotep: St. Tesla the Enabler Big deal. There's lots of patron saints. (Score:3) St. Isidore's already listed. Re:St. Turing (Score:2) So? St. Mary of Magdala was a prostitute, and they canonized her. (At least, I think she's a saint. There are about a gazillion different Marys in the Bible. I might be confusing her with a different one). John Postel? (Score:1) For those who speak spanish... (Score:1) Jaculatoria.... San Isidro de Sevilla, sabio y escritor, Que mi correo no traiga un virus destructor... St. Linus? (Score:1) The miracles are probably pretty easy to take care of. Anyone who can understand kernel-level code obviously has some divine powers... but they also have to be dead, and I don't think that anyone wants to make Linus a martyr right now. Now Bill Gates... maybe if we sacrificed him.... St. Vidicon of the Cathode (Score:1) It should be St. Vidicon of the Cathode...
Unfortunately I'm drawing a blank as to the series of books that's from, or what the real name of the character was. Re: I think you'll find that... (Score:2) It's all too easy to bring up the Church's missteps throughout the centuries, but these are human errors, some graver than others. That they were wrongly committed in the name of God does not repudiate the value of the religion's message or its true core doctrines, IMHO. And for centuries the concept of personal freedom was largely unknown to the masses, who knew only the Church as the starting and ending points of most aspects of their lives. I think for far too long religion got bogged down in the details of things like the Bible, a fascinatingly confusing document which led to the justification for all sorts of terrible deeds. Recently there have been shifts away from organized religions to "personal faith", a more direct connection to one's deity of choice. A lot of right-wing fundamentalist Christian groups emphasize this, as a result of their disillusionment with Lutherans, Methodists, Baptists, etc. All that aside, today you and I have the freedom to cheerfully ignore religion or complain about it as we see fit. That freedom comes from the labors of generations of our ancestors, Christian, Jew, Muslim, or none of the above. While acknowledging the fact that organized religions have made mistakes, their importance should not be so wantonly dismissed. While I am a Christian (Lutheran specifically), I'm quite liberal, and if you want to be a heathen, hey, that's fine with me. I wonder if the fierce reprisals against religion are because the online demographics are much different than the real world... i.e., a higher concentration of agnostics and atheists in the online population. Who knows? I would also not be surprised (if you are Caucasian) if you owe your existence to the 'Catholic heritage' at some point way back in history. Actually. . . (Score:1) A Clear Error in BBC article.
(Score:1) Pliny the Elder was the first known encyclopedia compiler in the European setting. Isidore's work is regarded as inferior to Pliny's in quality and quantity. And there are some Chinese candidates for the title "The 1st Encyclopedist", let alone other civilizations, though I believe that title must go to D'Alembert & Diderot. Now I wonder what happened to BBC writers' and editors' intelligence. When had this decline begun? St. Beuno, Patron of Computer Technicians (Score:1) Pope Lx Streetmentioner Re:You gotta... (Score:1) I also have a difficult time with the belief that people are just like all other mammals in the act of procreation, since the human is the only mammal without a penile bone. Somehow, there is a significant difference in the procreation of humans compared to other mammals. This seems to lessen the role of a precedent that other mammals might set for us humans. Re:You gotta... (Score:1) Re:God no... (Score:1) And no arguments that there are other places and other times for religion. Because there are other places and other times for atheism as well: the public schooling system. And anyways, just because some organization says that they're going to name someone as the protector of all who travel the 'info superhighway', doesn't mean you have to observe that naming, or wear a medallion or anything. What about Saint Dogbert? (Score:1) whoa! (Score:1) There has been no official statement from Rome (Score:2) This is very significant. Unless there is an official statement from Rome, this is just a rumor. I'm not saying it won't happen... I'm just saying that it's not definite yet. At all. Re:thats nifty (Score:1) The use of the 'net for distributing pornography and quasi-legal purposes also goes pretty well with the characteristics of Hermes. Re:You gotta... (Score:1) Of course it is intended to be pleasure (intended by whom? I have to think you believe in a god), or else nobody would do it! It's just a simple fact of evolution!
So when humans reached a level of intelligence, connecting sex with babies, they also began to control it. See the Bible, Onan for example! Szo Re:You gotta... (Score:1) Every sperm is great If a sperm is wasted God gets quite irate. SAINT BILL??? (Score:1) The patron saint of the blue screen perhaps? Re:Patron saint? (Score:1) Brainwashed? If you believe that 1.1 billion people have been brainwashed and that you're not, you need to take a very hard look at your reality. EXAMPLE: Tear up a $100 bill. I mean into a thousand untapeable pieces. Go ahead, right now. You won't, because you BELIEVE it's worth something. You're as "brainwashed" as anyone else, my friend. As for Windoze, say what you will. To the winner go the spoils. That's capitalism. If Red Hat or someone else can wrest control, great! In 10 years others will be complaining about the lack of choice in Linux, and how much BETTER OSDEJURE is because it's cool because it's not as popular as the fascist Red Hat. WAKE UP AND SMELL THE COFFEE! Ahhha, this will come in handy (Score:1) Needless to say, I am very pleased at this initiative. All I would need to do is light a candle to St. Isidore to cleanse and protect me from the nasty little viruses, trojan horses and security holes that are clearly the work of Satan. Hell, the Vatican was ahead even of the Discordians on this one. All hail St. Isidore! Re:Patron saint? (Score:1) Last count: ~5.7 billion Ergo, % RCC = ~19.3% ...which is still a heck of a lot of folks. Ethnocentricity has no place on the internet? Who's saying that there is any? If the RCC says St. So-and-so is now the patron of the Internet, would it change the Net any more than the "Our Lady of the Highways" shrine changes the Jersey Turnpike? (read: it doesn't) I don't think so. Re:Burn THEM!
(Score:1) Really, they've un-excommunicated Galileo, and have done quite a bit since Vatican II (early 60s conference in Rome) to reverse the oppressiveness and backwardsness that were the hallmarks of the church from the inquisition through the industrial revolution. Bottom Line: Hey, the RCC isn't perfect. But they're trying. Are you? Re:One's Not Quite Enough (Score:1) You forget St Jon! (Score:1) Alas, the canonisation of St Jon the creator has been lost in the Postel. (Collapses into hysterical laughter) Re:BBC News == Supermarket Tabloid of the Internet (Score:1) if the Vatican offends you, just pray to BOB, or Discordia, Cthulhu, etc... it's all in good fun... nmarshall #include "standard_disclaimer.h" R.U. SIRIUS: THE ONLY POSSIBLE RESPONSE Top X Lines Uttered by the Internet Saint (Score:5) 2) "That'll be 20 Hail Marys and 5 lines of assembly code" 3) "Thou shall not covet thy cubicle neighbor's video card" 4) "And God shall smite thee by sending a power surge through your CPU" 5) "God is compassionate, my child...everyone is tempted by the Fruit of the Tree of Microsoft once or twice" 6) "And Apple begat Macintosh, Macintosh begat the PowerMac, and PowerMac begat iMac..." 7) "And on the Seventh Day, Torvald created Linux. And Torvald saw that it was good. Re:St. Alan? (Score:1) -- Re:St. Vidicon of the Cathode (Score:1) Re:Patron saint? (Score:1) Anyway, this is the wrong place for a spiritual debate. Re:God no... (Score:1) You neglect the influence of the ancient Greeks on those religious cultures, and on modern cultures despite the religions. You imply that the religions are responsible for the intellectual foundation of Western society. Wouldn't it be more accurate to say the church scholars are responsible for it? It is the nature of those who would choose that life to treasure knowledge and history, regardless of religious teachings. With religion so dominant, where do you think academic-minded people would gravitate? 
Now consider this: where is the scientific method in this religious tradition? The internet and religion (Score:3) If the Catholic Church were to declare a patron saint for the internet, that means the church either does not understand the internet, or that there may be hope yet for it to become less of a conservative patriarchal hierarchical institution. Re:Internet Saints (Score:2) Re:Whatever happened to... (Score:1) I wonder if he'll see this post... [OT] Kibo (was Re:Whatever happened to...) (Score:1) Oh, sure, they claim it's Japanese for "hope," but we know better... Eric -- My favorite part... (Score:1) It seems to me that anyone capable of witnessing such a feat should have an equal claim to the spot. Re:God no... (Score:1) Re:The Vatrican has a linux kernel site (Score:1) The only two machines which have an FTP service on them both give back: 555-You are not permitted to use the ftp operation. 555-Please contact your system administrator. 555- 555 Now I don't know what OS they're using on their WWW server, but it's running Netscape Enterprise Server. You might, of course, mean either vatican.org or vatican.com, neither of which has anything to do with the Holy See. Even their search doesn't say anything about Linux, although it does mention Compaq and AltaVista... And it gives some mighty weird junk back if you simply ask it for the HEADer of '/'... Sorry for pissing on your fire, and all. Meanwhile, Isidore (soon to be known, I hope, as Izzy) only gets a mention in the footnotes of Vatican II, in relation to the celibacy of the priesthood. More St. Internet nominations (Score:1) They will save us from the mess Intel left us with, and allow the Internet to spread the gospel of big-endianness. Re:The Vatrican has a linux kernel site (Score:1) Not sure if the site is physically located in VC, but it makes sense that it would be. Re:Hoorah!
(Score:1) | in the United States, the Catholic Church | doesn't consider the Internet to be a cesspool | of paganism and various other Bad Things. You're confusing the Catholics with the Southern Baptists - the Southern Baptists think everything is evil. Re:The Vatrican has a linux kernel site (Score:1) Upon inspection I found that the site is actually hosted in the UK. ??? Re:thats nifty (Score:1) Arch Angel Gabriel (Score:1) wouldn't the Internet fall under that? Re:Patron saint? (Score:1) I personally am GLAD the RCC moves slowly. Society needs an anchor, a set of ideals that keep it civilized. Imagine had the church gone pro-eugenics in the 1900s. Many of us would no doubt be dead. (unless you're PERFECT in every possible way. Yeah, right). Et cetera. The point is that the church is actually very good about keeping up with the times, all things considered. Why NOT a patron saint for the 'Net? It HAS kept up with modern technology... ever go to the Vatican website? Remember the flood of fax machines, computers and other stuff the church smuggled into Poland during the Reagan years? Santa Tecla (Score:1) Re:Whatever happened to... (Score:2) Can someone refresh me on this one? Arch Angel Gabriel (Score:1) (really, look it up!) . Wouldn't the Internet fall under his domain? Re:What about Saint Dogbert? (Score:1) With his right paw, he heals broken technology, and, with the scepter in his left paw, he drives out the Demons of Stupidity. He also has a cute little hat (which is actually modeled after a fancy folded napkin). Eric (*) OK, so it's just the little punch-out thing from a Dilbert calendar. Deal with it. -- OT: Nitpick (Score:1) The Vatican has their own TLD and domain name; vatican.va. Re:God no... (Score:1) The fact Algebra came out the Moslem world is unlikely a freak of nature. I find it ironic that Newton concluded that being able to describe all physical movement with five simple equations was evidence of God. 
Re:Internet Saints Up a Couple Levels (Score:1) Re:You gotta... (Score:1) Well, most of us wouldn't... Interestingly enough... (Score:2) I'm not declaring my stance on birth control here. I'm just saying that nothing is set in stone, not even what the Catholic Church teaches. --Another practicing roman catholic/linux geek Isn't this a little late? (Score:1) another article on this... (Score:1) Wired has this too, here [wired.com]. St. Coupertino[sic] == Cupertino, Ca? (Score:1) Tom Byrum Re:Patron Saint by the Pope?!?! (Score:1) Commercial organization? (Score:2) Heaven is a commercial organization? Guess that explains where all the cash I've dropped into collection plates over the years has gone ;-) God no... (Score:3) Oh dear god no. I'm happy being a heathen without further indoctrination from a fucking organised religion such as Catholicism, which has traditionally been responsible for the alienation and persecution of many people advocating doctrines which did not fall within the Vatican's political agenda. There is no room for God here. We're a product in spite of the Catholic heritage, certainly not as a result of it. If I want to pray, it sure as hell will not be to what I am told is permissible by a body which murdered and desecrated scientists, philosophers, astronomers, witches... I recant! Re:The internet and religion (Score:2) Nah; if they did that, they'd have to condemn television and the printed media for exactly the same reasons. Whatever happened to... (Score:2) And for that matter, Legba? Though I suppose Isidore is appropriate for his accomplishments. Glad to see the Vatican is more techno-savvy than the extreme right-wing. Hang on St. Christopher, on the passenger side (Score:2) great line by Tom Waits. He's gone on tour again, and is better than ever. the AntiCypher Ia! Ia! Shub-Internet!
(Score:3) The church has no idea what peril they are entering; they live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that they should voyage far. Sending your prayer packets to this so-called "St. Isisdore" only helps to draw attention to both the source and destination addresses. "But whose attention?" you ask. Well, perhaps it would be better to ask "What's attention?" There are impossibly ancient hungers that lurk out there, furtively waiting in the dark until the comm satellites are right. And when the time comes, it will be both swift and agonizingly slow at the same time. A swift tentacle probing here, a ping packet there, and then you will be beset by the true horror: Shub-Internet, the black beast of the 'Net with a thousand bastard processes! We already have a patron ... thing. (I guess calling it a "saint" wouldn't quite be right, huh?) Better to leave well enough alone, and pray (quietly to yourself, where nothing can snoop your prayer) that the dawn of Its era comes long after you are safely in the grave. The Vatrican has a linux kernel site (Score:2) It's funny to tell newbies they can download the latest linux kernel from the vatican's ftp site. Here! The Greek Pantheon (Score:2) I was not able to fit the whole post in the Reply form (how do you do that?). For now here is an excerpt, but it well worth the effort fetching the original article from UseNet: -Eos (goddess of dawn): goddess of the bootstrap processes (lilo, Drive A:, BootManager, boot.ini, IO.SYS, etc). -Nyx (goddess of night): goddess of shutdown -h, screen blanking, and Jolt. -Morpheus (god of dreams): god of vaporware. -Muses (nine sisters, goddesses of respective arts and sciences): goddesses of Yahoo, and related Internet directories; goddesses of multimedia and multimedia plugins. -Hestia (goddess of the hearth): goddess of servers and standalone units; patron of proxy servers and (with Aesculapius, see below) firewalls. 
-Titans (various important, antecedent gods): Ada, Babbage, Turing, Hopper (goddess of _software_ programming**), Thompson, Kernighan & Ritchie, GHades (giving the devil his due), and many more. -Ares (god of destructive war): god of flamers and flaming; also, patron god of all that is M$; god of Doom, Quake, etc. -Pan (god of flocks & shepherds): god of NNTP; also, along with Demeter, protects databases; patron god of tarballs and PKWare. -Hymen (first name, "Buster"; god of marriage): patron of device drivers; god of application suites (MS Office, Corel WP suite, StarOffice, etc.); god of Java. -Eris (goddess of strife & discord--she began the Trojan War): another patron of Usenet; goddess of software copyright infringement. -Priapus (god of fertility): god of the Internet; patron of the viruses that work by loading up one's hard drive; god of unending Usenet strings and cascades; god of software bloat; god of AOL & MSN disks. -Hermes (messenger of the gods; also, patron of thieves, highwaymen, and, I believe, of commerce): god of spam. -Athena (goddess of wisdom, and all that is noble in war): (with Tux) Linux; patron goddess of GNU. -Aesculapius (born mortal, deified as god of Medicine): patron god of Unix gurus; god of UPSs, spike protectors, firewalls, etc. -Chaos: god of random # generators; patron of trolls; god of Error 404. Re:The internet and religion (Score:2) Just because the Internet is not a Catholics-only club does not mean that the Catholic users of it can not have a patron saint for it. St. Christopher is the patron saint of travelers. I do not hear travelers of non-Catholic faiths decrying this - or worse yet, refusing to travel to avoid the accidental labeling as Catholics by proxy. Most non-Catholics simply do not care.
As a recovering Catholic, I am encouraged to see the Church trying to look forward (albeit through ancient rose-colored glasses) rather than ignorantly overlooking the importance of the net or labeling it a fad or, worse still, the vehicle of Satan. Also, let's all take joy in the fact that Jerry Falwell has not discovered push technology. Patron Saint (Score:2) Al Gore was rejected because he isn't Catholic, and even if he gets elected, he'll only have one miracle to claim. [smile] This really seems like joke material. I had to check the date to make sure it wasn't April 1. All kidding aside, does the internet really need a patron saint? Maybe so. You see, this may actually help some technophobes overcome their instinctive Luddite fear of the net (remember the kids being "talked to" because they admitted to playing DOOM?). The technology can be seen as being "blessed", as it were, by the Vatican. For its part, the Vatican has been keeping tabs on the internet, with a web presence. Actually, only the Church of Scientology comes to mind as being more net savvy, although the stories associated with the Scientologists are usually negative with respect to the net. The presence of the Vatican may be even more beneficial, as the internet currently has an image problem (maybe rightfully so) as being awash with pornography, weapons how-to's, and other negative things. It's nice to know there is a major organized religion that may actually champion this technology and help get it seen as acceptable for families, etc. -- St. Alan? (Score:2) Surely he'd be "St. Alan"? It seems that saints tend to be referred to by their first names.
http://slashdot.org/story/99/06/15/1357226/patron-saint-of-the-internet
Improving .NET Core Kestrel performance using a Linux-specific transport By Tom Deseyn July 24, 2018 August 9, 2018 Transport abstraction Kestrel supports replacing the network implementation thanks to the Transport abstraction. ASP.NET Core 1.x uses libuv for its network implementation. libuv is the asynchronous I/O library that underpins Node.js. The use of libuv predates .NET Core, back when cross-platform ASP.NET was called ASP.NET 5. The scope then broadened to the cross-platform .NET implementation that we now know as .NET Core. As part of .NET Core, a network implementation became available (using the Socket class). ASP.NET Core 2.0 introduced the Transport abstraction in Kestrel to make it possible to change from the libuv to a Socket-based implementation. For version 2.1, many optimizations were made to the Socket implementation and the Sockets transport has become the default in Kestrel. The Transport abstraction allows other network implementations to be plugged in. For example, you could leverage the Windows RIO socket API or user-space network stacks. In this blog post, we'll look at a Linux-specific transport. The implementation can be used as a direct replacement for the libuv/Sockets transport. It doesn't need privileged capabilities and it works in constrained containers, for example, when running on Red Hat OpenShift. For future versions, Kestrel aims to become more usable as a basis for non-HTTP servers. The Transport and related abstractions will still change as part of that project. Benchmark introduction Microsoft is continuously benchmarking the ASP.NET Core stack. The results are published online. The benchmarks include scenarios from the TechEmpower web framework benchmarks. It is easy to get lost watching the benchmark results, so let me give a short overview of the TechEmpower benchmarks. There are a number of scenarios (also called test types).
The Fortunes test type is the most interesting, because it includes using an object-relational mapper (ORM) and a database. This is a common use-case in a web application/service. Previous versions of ASP.NET Core did not perform well in this scenario. ASP.NET Core 2.1 improved it significantly thanks to optimizations in the stack and also in the PostgreSQL driver. The other scenarios are less representative of a typical application. They stress particular aspects of the stack. They may be interesting to look at if they match your use-case closely. For framework developers, they help identify opportunities to optimize the stack further. For example, consider the Plaintext scenario. This scenario involves a client sending 16 requests back-to-back (pipelined) for which the server knows the response without needing to perform I/O operations or computation. This is not representative of a typical request, but it is a good stress test for parsing HTTP requests. Each implementation is categorized by class. For example, ASP.NET Core Plaintext has a platform, micro, and full implementation. The full implementation uses the MVC middleware. The micro implementation is implemented at the pipeline level, and the platform implementation is built directly on top of Kestrel. While the platform class provides an idea of how powerful the engine is, it is not an API that application developers program against. The benchmark results include a Latency tab. Some implementations achieve a very high number of requests per second, but at a considerable latency cost. Linux transport Similar to the other implementations, the Linux transport makes use of non-blocking sockets and epoll. Like .NET Core's Socket, the event loop is implemented in managed (C#) code. This is different from the libuv loop, which is part of the native libuv library.
Two Linux-specific features are used: SO_REUSEPORT lets the kernel load-balance accepted connections over a number of threads, and the Linux AIO API is used to batch send and receive calls. Benchmark For our benchmark, we'll use the JSON and Plaintext scenarios at the micro class. For the JSON benchmark, the web server responds with a simple JSON object that is serialized for each request. This means that for each request, our web server does a tiny amount of useful work, which makes the transport's share of the cost significant. For the Plaintext scenario, the server responds with a fixed string. Due to the pipelining (per 16 requests), only 1/16 of the requests need to do network I/O. Each transport has a number of settings. Both the libuv and Linux transports have a property to set the number of threads for receiving/sending messages. The Sockets transport performs sends and receives on the ThreadPool. It has an IOQueueCount setting that we'll set instead. The graphs below show the HTTP requests per second (RPS) for varying ThreadCount/IOQueueCount settings. We can see that each transport is initially limited by the number of allocated threads. The actual handling happens on the ThreadPool, which is not fully loaded yet. We see Sockets has a higher RPS because it is also using the ThreadPool for network sends/receives. We can't compare it with the other transports because it is constrained in a different way (it can use more threads for transporting). Transport is CPU-constrained When we increase the ThreadCount sufficiently, the transport is no longer the limiting factor. Now the constraint becomes either the CPU or the network bandwidth. The TechEmpower Round 16 benchmark hit the network bandwidth limit for the Plaintext scenario. If you look at the benchmark results, you see the top results are all about the same value. These benchmarks indicate underutilized CPU. For our benchmark, the CPU is fully loaded. The processor is busy sending/receiving and handling the requests.
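The settings discussed above are configured per transport when building the WebHost. The following is a sketch only; the option names match the 2.1-era packages as I understand them (ThreadCount on the libuv and Linux transports, IOQueueCount on Sockets), so treat them as assumptions if you are on a different version:

```csharp
// Sketch: configuring each transport's degree of parallelism.
// Only one UseXxx transport call should be active at a time;
// the alternatives are shown commented out.
WebHost.CreateDefaultBuilder(args)
    // libuv transport: dedicated event-loop threads.
    .UseLibuv(options => options.ThreadCount = 4)
    // Sockets transport: number of I/O queues feeding the ThreadPool.
    // .UseSockets(options => options.IOQueueCount = 4)
    // Linux transport: dedicated epoll threads.
    // .UseLinuxTransport(options => options.ThreadCount = 4)
    .UseStartup<Startup>()
    .Build();
```

A reasonable starting point is one thread (or queue) per CPU core, then measuring, since the benchmark above shows RPS is thread-limited only until the transport stops being the bottleneck.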
The difference we see between the scenarios is due to the different workload per network request. For Plaintext, we receive 16 pipelined HTTP requests with a single network request. For JSON, there is an HTTP request per network request. This makes the transport matter much more in the JSON scenario compared to the Plaintext scenario. App is CPU-constrained Using the Linux transport The Kestrel Linux transport is an experimental implementation. You can try it by using the 2.1.0-preview1 package published on myget.org. If you try this package, you can use this GitHub issue to give feedback and to be informed of (security) issues. Based on your feedback, we'll see if it makes sense to maintain a supported 2.1 version published on nuget.org. Do this to add the myget feed to a NuGet.Config file: <?xml version="1.0" encoding="utf-8"?> <configuration> <packageSources> <add key="rh" value="" /> </packageSources> </configuration> And add a package reference in your csproj file: <PackageReference Include="RedHat.AspNetCore.Server.Kestrel.Transport.Linux" Version="2.1.0-preview1" /> Then we call UseLinuxTransport when creating the WebHost in Program.cs: public static IWebHost BuildWebHost(string[] args) => WebHost.CreateDefaultBuilder(args) .UseLinuxTransport() .UseStartup<Startup>() .Build(); It is safe to call UseLinuxTransport on non-Linux platforms. The method will change the transport only when the application runs on a Linux system. Conclusion In this blog post, you've learned about Kestrel and how its Transport abstraction supports replacing the network implementation. We took a closer look at the TechEmpower benchmarks and explored how CPU and network limits affect benchmark results. We've seen that a Linux-specific transport can give a measurable gain compared to the default out-of-the-box implementations. For information about running .NET Core on Red Hat Enterprise Linux and OpenShift, see the .NET Core Getting Started Guide.
The first preview of ASP.NET Core 2.2 is due (very) soon, and it will be our first chance to test the changes and new features expected to be released by the end of this year. Endpoint Routing is also integrated with the latest ASP.NET Core 2.2 MVC functionality, allowing MVC to work on top of this new Endpoint Routing feature. MVC in ASP.NET Core 2.2 includes code changes to support building up a list of available Endpoints. In 2008, Microsoft released the first version of MVC, i.e. the Model-View-Controller web programming framework. It was one of the biggest revolutionary releases by Microsoft in the recent past, because before this era web developers mainly used Web Forms, which are built around HTML templates maintained through server controls, together with CSS and scripting languages as required. The concept of Web Forms is very simple and easy for web developers, especially for beginners.
But in the case of MVC, the concept is a little harder, because web developers need to take full responsibility for all the markup in their applications. In MVC, developers normally do not use web controls in their applications; instead, Microsoft introduced three helper objects (HtmlHelper, UrlHelper and AjaxHelper) for generating web controls in the application. These helper objects simplify and shorten the developer's work when designing a web interface. In the MVC pattern, all server-side code in Razor views starts with the @ sign. In this way, the MVC framework always has a clear separation between server-side code and client-side code. WHY TAG HELPERS? Microsoft introduced a new feature in the MVC Razor engine with the release of ASP.NET Core, known as Tag Helpers. This feature lets web developers use conventional HTML tags in their web applications when designing the presentation layer. With the help of Tag Helpers, developers can design the presentation layer using HTML tags while still writing the business logic in C# code that runs on the web server. So, with Tag Helpers, one of Microsoft's new features in ASP.NET Core, developers can replace the cryptic Razor syntax with the @ symbol with a more natural, HTML-like syntax. The first question that always arises is: why do we need Tag Helpers? The simple answer is that Tag Helpers reduce the amount of markup we need to write and create an abstraction layer between our UI and server-side code. We can extend existing HTML elements or create custom elements that look just like HTML elements with the help of Tag Helpers. We can write server-side code in Razor files to create new elements or to render HTML elements, and we can define custom element names, attributes, or parent names just like HTML elements by using Tag Helpers.
But we need to remember that Tag Helpers do not replace HTML Helpers, so we can use both of them side by side as required. In the example below, we can clearly see the difference between the two helper approaches:
// HTML Helper
@Html.ActionLink("Click", "CheckData", "Controller1")
// Tag Helper
<a asp-controller="Controller1" asp-action="CheckData">Click</a>
In the sample above, it is easy to see how similar to plain HTML the Tag Helper syntax looks. In fact, the data attribute we would use in HTML is also optional in this case. TAG HELPERS ADVANTAGES Now we need to understand why Tag Helpers are important, and what their advantages are over the HTML Helper objects, so that we can compare the two. Before going into a deeper discussion of Tag Helpers, let's look at their main advantages over HTML Helpers: Tag Helpers use server-side binding without any server-side code in the markup. They are very useful when the UI is designed by HTML developers who have no knowledge of Razor syntax. They provide the experience of working in a basic HTML environment. They support a rich IntelliSense environment for creating markup that mixes HTML and Razor. BUILT-IN TAG HELPERS Microsoft provides many built-in Tag Helper objects to boost our development. The table below lists the built-in Tag Helpers available in ASP.NET Core. CUSTOM TAG HELPERS In ASP.NET Core, several built-in Tag Helper objects are available, but in spite of that, we can create custom Tag Helpers and include them in an ASP.NET Core project. We can add a custom Tag Helper to our MVC Core project very easily, either in a separate project or in the same project. There are three easy steps to create a custom Tag Helper. Step 1 First, we need to decide which tag we want to target with the custom Tag Helper, and then create a class for it.
So, I want to create our first tag helper named "hello-world", so I add a class named HelloWorldTagHelper. We also need to remember that a tag helper class must inherit from the TagHelper class. Step 2 To perform the operations, we need to override the Process or ProcessAsync method. This method receives two parameters as input: 1. TagHelperContext – Tag Helpers receive information about the HTML element they are transforming through an instance of the TagHelperContext class. It mainly contains three values: - AllAttributes – a dictionary of the attributes that we can use in our Tag Helper. - Items – a dictionary which is used to coordinate between tag helpers. - UniqueId – this property returns a unique identifier for the HTML element that is being transformed. 2. TagHelperOutput – this parameter represents the output of our tag helper. It starts out describing the HTML element as it appears in the Razor view, and it can be modified through the properties below: TagName – provides the root name of the tag element. Attributes – provides a dictionary of items for the output attributes. PreElement – returns a TagHelperContent object that is rendered in the view before the output element. PostElement – returns a TagHelperContent object that is rendered in the view after the output element. PreContent – returns a TagHelperContent object that is rendered before the output element's content. PostContent – returns a TagHelperContent object that is rendered after the output element's content. TagMode – specifies whether the tag is a SelfClosing, StartTagAndEndTag, or StartTagOnly tag. SuppressOutput() – calling this method prevents the element from being included in the view.
namespace CustomTagHelper.TagHelpers
{
    using Microsoft.AspNetCore.Razor.TagHelpers;
    using System.Text;

    [HtmlTargetElement("hello-world")]
    public class HelloWorldTagHelper : TagHelper
    {
        public string Name { get; set; }

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagName = "CustomTagHelper";
            output.TagMode = TagMode.StartTagAndEndTag;
            var strSb = new StringBuilder();
            strSb.AppendFormat("<span>Hello! My Name is {0}</span>", this.Name);
            output.PreContent.SetHtmlContent(strSb.ToString());
        }
    }
}
Step 3 Use the tag in your view and pass the required attribute, for example: <hello-world name="Shanu"></hello-world> SUMMARY With tag helpers, we can extend existing elements or create new ones. We can develop custom reusable attributes or elements with the help of Tag Helpers, and they follow the MVC naming conventions. The most important thing is that Tag Helpers can contain presentation logic only; business logic must remain in models or business service classes. They can also be used as objects in strongly typed views. Also remember that they are not a new way to create web controls: Tag Helpers do not contain any event model or view state. Tag Helpers give us more flexibility over the content in a very maintainable fashion. Use the new HttpClientFactory to create HttpClient objects in ASP.NET Core. Learn how to create Named or Typed HttpClient instances. With .NET Core 2.1, the HttpClientFactory was introduced. The HttpClientFactory is a factory class which helps with managing HttpClient instances. Managing your own HttpClient correctly was not so easy; the HttpClientFactory gives you a number of options for easy management of your HttpClient instances. In this post I’ll explain how to use the HttpClientFactory in your ASP.NET Core application. HttpClient The HttpClient enables you to send HTTP requests and receive HTTP responses. The HttpClient is intended to be instantiated once for each URI and reused throughout the life of the application.
It implements IDisposable, which seduces many developers into putting it in a using block. “Although HttpClient does indirectly implement the IDisposable interface, the standard usage of HttpClient is not to dispose of it after every request. The HttpClient object is intended to live for as long as your application needs to make HTTP requests. Having an object exist across multiple requests enables a place for setting DefaultRequestHeaders and prevents you from having to re-specify things like CredentialCache and CookieContainer on every request as was necessary with HttpWebRequest.” — From the book ‘Designing Evolvable Web APIs with ASP.NET’ HttpClient is probably the only IDisposable that should not be put into a using block. Creating unneeded HttpClient objects can lead to a SocketException caused by open TCP/IP connections: when Dispose is called, the connection stays open for up to 240 seconds to correctly handle remaining network traffic. A good explanation can be found here: You’re using HttpClient wrong and it is destabilizing your software. However, when you try to solve this with a static member, a new problem is introduced: it is not possible to do blue-green deployments, caused by the behavior of the HttpClient, which only renews its DNS entries when the connection is lost. Read Singleton HttpClient? Beware of this serious behaviour and how to fix it for more information on this. Then, in ASP.NET Core 2.1, the HttpClientFactory was introduced to manage the life cycle of HttpClient instances. This makes developing good software so much easier: all life cycle management is done by the factory class, and it also adds a very nice way to configure your HttpClients in your Startup. You can choose to create an empty client with the factory class, or use Named Clients or Typed Clients. Create a HttpClient The simplest way to get a HttpClient object is to create one with the HttpClientFactory.
The first thing to do is add it to your services configuration in Startup: services.AddHttpClient();. Next, you can use dependency injection to inject the IHttpClientFactory into your class (here called MyService) and call the CreateClient method on the factory to create a new HttpClient:
public class MyService
{
    private readonly IHttpClientFactory _httpClientFactory;

    public MyService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public Task<string> PingResult()
    {
        var client = _httpClientFactory.CreateClient();
        client.BaseAddress = new Uri("");
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        return client.GetStringAsync("/ping");
    }
}
As you can see in the code, the HttpClient has no configuration yet. When you configure the object, it will create a HttpClientHandler under the hood. This object is pooled within the HttpClientFactory. Named Clients The second option to get a HttpClient with the HttpClientFactory is by using Named Clients. The HttpClientFactory is injected into your classes as before, and by passing a name into the CreateClient method you get the Named Client. The Named Clients are pre-configured in the Startup method: public void ConfigureServices(IServiceCollection services) { services.AddHttpClient("MyCustomAPI", client => { client.BaseAddress = new Uri(""); client.DefaultRequestHeaders.Add("Accept", "application/json"); }); (...) } Then the HttpClient is accessible by its name ‘MyCustomAPI’:
public Task<string> PingResult()
{
    var client = _httpClientFactory.CreateClient("MyCustomAPI");
    return client.GetStringAsync("/ping");
}
Named Clients give you more control over how the clients used in your program are configured, which already makes your life a lot easier. Typed Clients Typed Clients are even better than Named Clients. They are strongly typed and do not need the HttpClientFactory to be injected; Typed Clients can be injected directly into your classes. In Startup you configure them like: public void ConfigureServices(IServiceCollection services) { services.AddHttpClient<MyCustomClient>(client => { client.BaseAddress = new Uri(""); client.DefaultRequestHeaders.Add("Accept", "application/json"); }); (...)
} The HttpClient is configured in ConfigureServices and then used in an implementation like:
public class MyCustomClient
{
    private readonly HttpClient _httpClient;

    public MyCustomClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public Task<string> PingResult()
    {
        return _httpClient.GetStringAsync("/ping");
    }
}
The MyCustomClient implementation encapsulates the HttpClient by exposing only the functional methods. This hides all implementation details from the actual user of the class.
public class MyApiWrapper
{
    private readonly MyCustomClient _customClient;

    public MyApiWrapper(MyCustomClient customClient)
    {
        _customClient = customClient;
    }

    public Task<string> PingResult()
    {
        return _customClient.PingResult();
    }
}
Finally With the introduction of the HttpClientFactory, the usage of the HttpClient has become a lot easier. Besides what I have shown in this post, it is very easy to add extra functionality to a Named or Typed Client, for example a retry policy or circuit breaker. Probably nice content for a later post. Introduction In this article, we will see how to create a simple CRUD application for ASP.NET Core Blazor using Entity Framework and Web API. Blazor is a new framework introduced by Microsoft. I love to work with Blazor as it makes our SPA full-stack application development much simpler, and yes, now we can use only one language: C#. Before Blazor, we were using ASP.NET Core in combination with Angular or ReactJS. Now, with Blazor support, we can create our own SPA application directly with C# code. If you start your SPA application development using Blazor, surely you will love it; it is so simple and fun to work with. The only drawback for now is that because Blazor is a newly introduced framework, it is still in the experimental phase. Once we get the complete version, it will be even more fun to work with.
In this article, we will see how to create a CRUD web application using ASP.NET Core Blazor: - C (Create): insert new student details into the database using ASP.NET Core, Blazor, EF and Web API. - R (Read): select student details from the database using ASP.NET Core, Blazor, EF and Web API. - U (Update): update student details in the database using ASP.NET Core, Blazor, EF and Web API. - D (Delete): delete student details from the database using ASP.NET Core, Blazor, EF and Web API. We will be using Web API and EF to perform our CRUD operations. Web API has the following four methods, Get/Post/Put and Delete, where: - Get is to request data. (Select) - Post is to create data. (Insert) - Put is to update data. (Update) - Delete is to delete data. (Delete) Prerequisites Make sure you have installed all the prerequisites on your computer. If not, then download and install them all, one by one. Note that since Blazor is.
USE MASTER
GO
-- 1) Check whether the database exists; if it does, drop it and create a new DB
IF EXISTS (SELECT [name] FROM sys.databases WHERE [name] = 'StudentsDB')
DROP DATABASE StudentsDB
GO
CREATE DATABASE StudentsDB
GO
USE StudentsDB
GO
-- 1) //////////// StudentMasters
IF EXISTS (SELECT [name] FROM sys.tables WHERE [name] = 'StudentMasters')
DROP TABLE StudentMasters
GO
CREATE TABLE [dbo].[StudentMasters](
    [StdID] INT IDENTITY PRIMARY KEY,
    [StdName] [varchar](100) NOT NULL,
    [Email] [varchar](100) NOT NULL,
    [Phone] [varchar](20) NOT NULL,
    [Address] [varchar](200) NOT NULL
)
-- insert sample data into the StudentMasters table
INSERT INTO [StudentMasters] ([StdName],[Email],[Phone],[Address])
VALUES ('Shanu','syedshanumcain@gmail.com','01030550007','Madurai,India')
INSERT INTO [StudentMasters] ([StdName],[Email],[Phone],[Address])
VALUES ('Afraz','Afraz@afrazmail.com','01030550006','Madurai,India')
INSERT INTO [StudentMasters] ([StdName],[Email],[Phone],[Address])
VALUES ('Afreen','Afreen@afreenmail.com','01030550005','Madurai,India')
SELECT * FROM [StudentMasters]
After creating the ASP.NET Core Blazor application, wait for a few seconds; you will see the below structure in Solution Explorer. What is new in the ASP.NET Core Blazor solution? When we create our new ASP.NET Core Blazor application, we can see three projects automatically created in the Solution Explorer. Client Project The first project created is the client project, Solutionname.Client; here we can see our solution name as “BlazorASPCORE”. This project is mainly focused on all the client-side views. Here we will be adding all our page views to be displayed on the client side in the browser. We can see a few sample pages have already been added here, and we can also see a Shared folder, like in an MVC application, containing the layout page as the master page; here in Blazor we have the MainLayout. Server Project The server project works like getting/setting the data from the database; from our client project we bind or send the results to this server project to perform the CRUD operations in the database. Shared Project As the name indicates, this project contains the code shared between the client and server projects, such as the model classes. The default sample pages and menus will be displayed in our Blazor web site; we can use those pages or remove them and start with our own pages. Now let's see how to add a new page. In the Package Manager Console, in the Default project combo box, select your Shared project. You can see the PM> prompt; copy and paste the below line to install the database provider package. This package is used to set the database provider to SQL Server. Install-Package Microsoft.EntityFrameworkCore.SqlServer We can see that the package is installed in our Shared project. Install Entity Framework You can see the PM> prompt; copy and paste the below line to install the EF package.
Install-Package Microsoft.EntityFrameworkCore.Tools Create the DB context and set the DB connection string You can see the PM> prompt; copy and paste the below line to set the connection string and create the DB context. This is an important part, as we give our SQL Server name, database name, and SQL Server UID and password to connect to our database for performing the CRUD operations. We also give our SQL table name to create the model class in our Shared project. Scaffold-DbContext "Server= YourSqlServerName;Database=StudentsDB;user id= YourSqlUID;password= YourSqlPassword;Trusted_Connection=True;MultipleActiveResultSets=true" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models -Tables StudentMasters Press Enter to create the connection string, model class, and database context. We can see that the StudentMasters model class and the StudentsDBContext class have been created in our Shared project. We will be using this model and DbContext in our Server project to create our Web API to perform the CRUD operations. Creating the Web API for CRUD Run the program and paste the API path /api/StudentMasters/ to test our output. Now we will bind all this Web API JSON result in our view page from our client project. Working with the Client project
@using BLAZORASPCORE.Shared
@using BLAZORASPCORE.Shared.Models
@page "/Students"
@using Microsoft.AspNetCore.Blazor.Browser.Interop
@inject HttpClient Http
HTML design and data bind part Next, we design our student details page to display the student details from the database, and create a form to insert and update the student details; we also have a Delete button to delete student records from the database.
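The server-side Web API controller itself is not shown above; a minimal sketch of what the StudentMasters controller could look like, using the scaffolded StudentsDBContext (the DbSet and class names are assumptions matching the scaffolded classes, and the method bodies are illustrative):

```csharp
// Sketch of the Web API controller assumed by the client-side calls above.
[Route("api/[controller]")]
public class StudentMastersController : Controller
{
    private readonly StudentsDBContext _context;
    public StudentMastersController(StudentsDBContext context) => _context = context;

    [HttpGet]          // GET /api/StudentMasters/  -> all students (Select)
    public IEnumerable<StudentMasters> Get() => _context.StudentMasters.ToList();

    [HttpGet("{id}")]  // GET /api/StudentMasters/5 -> one student
    public StudentMasters Get(int id) => _context.StudentMasters.Find(id);

    [HttpPost]         // POST -> Insert
    public void Post([FromBody] StudentMasters student)
    {
        _context.StudentMasters.Add(student);
        _context.SaveChanges();
    }

    [HttpPut("{id}")]  // PUT -> Update
    public void Put(int id, [FromBody] StudentMasters student)
    {
        _context.Entry(student).State = EntityState.Modified;
        _context.SaveChanges();
    }

    [HttpDelete("{id}")] // DELETE -> Delete
    public void Delete(int id)
    {
        var student = _context.StudentMasters.Find(id);
        if (student != null)
        {
            _context.StudentMasters.Remove(student);
            _context.SaveChanges();
        }
    }
}
```

This maps each of the four Web API methods (Get/Post/Put/Delete) onto the corresponding EF operation.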
For binding in Blazor we use bind="@stds.StdId", and to call a method we use onclick="@AddNewStudent". Function part In the functions part we call all the Web API methods to bind data in our HTML page and also perform the client-side business logic to be displayed in the view page. Here we create separate functions to add, edit and delete the student details, calling the Web API Get, Post, Put and Delete methods to perform the CRUD operations, and in the HTML we call each function and bind the results.
@functions {
    StudentMasters[] student;
    StudentMasters stds = new StudentMasters();
    string ids = "0";
    bool showAddrow = false;

    protected override async Task OnInitAsync()
    {
        student = await Http.GetJsonAsync<StudentMasters[]>("/api/StudentMasters/");
    }

    void AddNewStudent()
    {
        stds = new StudentMasters();
    }

    // Add new student details method
    protected async Task AddStudent()
    {
        if (stds.StdId == 0)
        {
            await Http.SendJsonAsync(HttpMethod.Post, "/api/StudentMasters/", stds);
        }
        else
        {
            await Http.SendJsonAsync(HttpMethod.Put, "/api/StudentMasters/" + stds.StdId, stds);
        }
        stds = new StudentMasters();
        student = await Http.GetJsonAsync<StudentMasters[]>("/api/StudentMasters/");
    }

    // Edit method
    protected async Task EditStudent(int studentID)
    {
        ids = studentID.ToString();
        stds = await Http.GetJsonAsync<StudentMasters>("/api/StudentMasters/" + Convert.ToInt32(studentID));
    }

    // Delete method
    protected async Task DeleteStudent(int studentID)
    {
        ids = studentID.ToString();
        await Http.DeleteAsync("/api/StudentMasters/" + Convert.ToInt32(studentID));
        student = await Http.GetJsonAsync<StudentMasters[]>("/api/StudentMasters/");
    }
}
Navigation menu Now we need to add this newly added Students Razor page to our left navigation. To do this, open the Shared folder, open the NavMenu.cshtml page and add the menu.
<li class="nav-item px-3">
    <NavLink class="nav-link" href="/Students">
        <span class="oi oi-list-rich" aria-hidden="true"></span> Students Details
    </NavLink>
</li>
Build and run the application Conclusion Note that when creating the DbContext and setting the connection string, don't forget to add your SQL connection string. Hope you all like this article. In the next article, we will see more examples of working with Blazor. It's really very cool and awesome to work with Blazor. - When a user requests a page from the server, first it will check the cache memory to see whether the page is present in the cache or not. - If the page is found in the cache, the next step is to check its expiration time. If it has already expired, the server regenerates the page. - If it has not expired, then the cached content is served to the user. - On the other hand, if the page or content is not found in the cache, the server generates the page, stores it in cache memory, and then forwards it to the client. This is how the entire caching process works. Here we will learn about Azure Redis Cache. Azure Redis Cache is based on the popular open-source Redis cache. It gives you access to a secure, dedicated Redis cache, managed by Microsoft, and accessible from any application within Azure. To work with Azure Redis Cache, we have to create a Redis cache in the Azure portal. Please follow the steps below to create a Redis cache in the portal. Why is caching fast? A cache is a small storage area that is closer to the application needing it than the original source, and accessing it is typically faster than accessing the original source. A cache is typically stored in memory or on disk. A memory cache is normally faster to read from than a disk cache, but a memory cache typically does not survive system restarts.
This is very much like a library (considered as a DB): searching for a book in this vast collection is difficult, so once an item has been found, putting it on the study table makes it easy to find next time. To use Azure Redis Cache, log in to your Azure portal with your user ID and password. Once you are logged in, it will forward you to the dashboard. From the search menu, search for Database in the portal as shown below. Once you select the Database option, under that you can search for Redis Cache as shown in the picture below. Select Redis Cache and create a new Redis Cache in the portal. While creating the Redis cache, on the basis of the pricing tier you will find three options: - Basic – Single node. Multiple sizes up to 53 GB. - Standard – Two-node Primary/Replica. Multiple sizes up to 53 GB. 99.9% SLA. - Premium – Two-node Primary/Replica with up to 10 shards. Multiple sizes from 6 GB to 530 GB. All Standard tier features and more, including support for Redis cluster, Redis persistence, and Azure Virtual Network. 99.9% SLA. Here I have shown the Premium one with Redis cluster in the image below. Once you create your cache, just pin it to the dashboard so that it will be easy to work with. In this way you are able to create a Redis Cache in the Azure portal. Now let's create a client MVC application that will use this cache to store and retrieve the data to boost performance. So open Visual Studio and create a new MVC project as shown below. Once you create the project, go to NuGet Package Manager, search for StackExchange.Redis and include it in the project as shown below. Once that is completed, you will find the references in the References folder. Here is my Solution Explorer with all the settings. Now go to the Azure portal and check the access key for the Redis cache. Once you get the connection string and key, copy it and add it to the config file of your application.
<appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="RedisCachekey" value="Debendra.redis.cache.windows.net:6980,password=###########KtWeHK9IRK3kmb7uIsdeben&&&&=,ssl=True,abortConnect=False" />
</appSettings>
Note: here I have obscured my password. In the section below I have pasted the connection string; the database for this application is hosted in a Web App in Azure.
<connectionStrings>
    <add name="Connect" connectionString="Server=tcp:debendra.database.windows.net,1433;Initial Catalog=EmployeeDetails;User ID=Debendra;Password=Password#####;" providerName="System.Data.SqlClient"/>
</connectionStrings>
Now create an Employee model to define the properties for the CRUD operations.
namespace RedisCache.Models
{
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
        public string Company { get; set; }
        public string MobileNo { get; set; }
    }
}
As I will work in the Code First approach, I have defined my DbContext class as follows.
public class DebendraContext : DbContext
{
    public DebendraContext() : base("Connect")
    {
    }

    public DbSet<Employee> Employee { get; set; }
}
Now create an Index controller and read the connection string and Redis Cache connection value in it. Now create action methods in the same controller to add some employee data.
[HttpPost]
public ActionResult AddEmployee(Employee model)
{
    mydbcontext = new DebendraContext();
    mydbcontext.Employee.Add(model);
    mydbcontext.SaveChanges();
    return View();
}

[HttpGet]
public ActionResult AddEmployee()
{
    return View();
}
Create the AddEmployee view. Here is the attached view code.
@model RedisCache.Models.Employee

@{
    Layout = null;
}

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>AddEmployee</title>
</head>
<body>
    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/jqueryval")

    @using (Html.BeginForm())
    {
        @Html.AntiForgeryToken()

        <div class="form-horizontal">
            <h4>Employee</h4>
            <hr />
            @Html.ValidationSummary(true, "", new { @class = "text-danger" })
            <div class="form-group">
                @Html.LabelFor(model => model.Name, htmlAttributes: new { @class = "control-label col-md-2" })
                <div class="col-md-10">
                    @Html.EditorFor(model => model.Name, new { htmlAttributes = new { @class = "form-control" } })
                    @Html.ValidationMessageFor(model => model.Name, "", new { @class = "text-danger" })
                </div>
            </div>

            <div class="form-group">
                @Html.LabelFor(model => model.Address, htmlAttributes: new { @class = "control-label col-md-2" })
                <div class="col-md-10">
                    @Html.EditorFor(model => model.Address, new { htmlAttributes = new { @class = "form-control" } })
                    @Html.ValidationMessageFor(model => model.Address, "", new { @class = "text-danger" })
                </div>
            </div>

            <div class="form-group">
                @Html.LabelFor(model => model.Company, htmlAttributes: new { @class = "control-label col-md-2" })
                <div class="col-md-10">
                    @Html.EditorFor(model => model.Company, new { htmlAttributes = new { @class = "form-control" } })
                    @Html.ValidationMessageFor(model => model.Company, "", new { @class = "text-danger" })
                </div>
            </div>

            <div class="form-group">
                @Html.LabelFor(model => model.MobileNo, htmlAttributes: new { @class = "control-label col-md-2" })
                <div class="col-md-10">
                    @Html.EditorFor(model => model.MobileNo, new { htmlAttributes = new { @class = "form-control" } })
                    @Html.ValidationMessageFor(model => model.MobileNo, "", new { @class = "text-danger" })
                </div>
            </div>

            <div class="form-group">
                <div class="col-md-offset-2 col-md-10">
<input type="submit" value="Create" class="btn btn-default" />
                </div>
            </div>
        </div>
    }

    <div>
        @Html.ActionLink("Back to List", "Index")
    </div>
</body>
</html>
Now run the application to add some employee details to the database, which will later be cached and shown. Here I have saved some data in the database in Azure. Now here is the main logic to retrieve the data from the database and put it in the cache.
public ActionResult Index()
{
    var connect = ConnectionMultiplexer.Connect(cacheConnectionstring);
    mydbcontext = new DebendraContext();
    IDatabase Rediscache = connect.GetDatabase();
    if (string.IsNullOrEmpty(Rediscache.StringGet("EmployeeDetails")))
    {
        var liemp = mydbcontext.Employee.ToList();
        var emplist = JsonConvert.SerializeObject(liemp);
        Rediscache.StringSet("EmployeeDetails", emplist, TimeSpan.FromMinutes(2));
        return View(liemp);
    }
    else
    {
        var detail = JsonConvert.DeserializeObject<List<Employee>>(Rediscache.StringGet("EmployeeDetails"));
        return View(detail);
    }
}
CODE EXPLANATION - As you know, we have installed StackExchange.Redis for using the Redis Cache in our client application. - The central object in StackExchange.Redis is the ConnectionMultiplexer class. - The connection to the Azure Redis Cache is managed by the ConnectionMultiplexer class. - This class should be shared and reused throughout your client application; it does not need to be created on a per-operation basis. - In these examples abortConnect is set to false, which means that the call succeeds even if a connection to the Azure Redis Cache is not established.
<appSettings>
    <add key="RedisCachekey" value="Debendra.redis.cache.windows.net:6980,password=###########KtWeHK9IRK3kmb7uIsdeben&&&&=,ssl=True,abortConnect=False" />
</appSettings>
One key feature of the ConnectionMultiplexer class is that it automatically restores connectivity to the cache once the network issue or other cause is resolved.
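Because the ConnectionMultiplexer should be shared and reused, a common way to do so is a lazily initialized static instance. This is a sketch following the pattern recommended in the StackExchange.Redis documentation; the RedisConnection class name is an assumption for illustration, and the "RedisCachekey" setting is the one from the appSettings shown above:

```csharp
// Sketch: share one ConnectionMultiplexer for the whole application instead of
// calling ConnectionMultiplexer.Connect on every request.
public static class RedisConnection
{
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                ConfigurationManager.AppSettings["RedisCachekey"]));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Usage inside an action:
// IDatabase cache = RedisConnection.Connection.GetDatabase();
```

Lazy<T> guarantees the connection is created only once, even under concurrent requests.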
Accessing a Redis database is as simple as:

- IDatabase db = redis.GetDatabase();

Once you have the IDatabase, it is simply a case of using the Redis API. Note that all methods have both synchronous and asynchronous implementations.

Here I have used two methods, StringSet and StringGet, to store and retrieve data. Add and retrieve objects from the cache:

- // If key1 exists, it is overwritten.
- Rediscache.StringSet("key1", "value1");
-
- string value = Rediscache.StringGet("key1");

where Rediscache is an object of IDatabase. Here we are defining the expiration time while setting the data:

- Rediscache.StringSet("EmployeeDetails", emplist, TimeSpan.FromMinutes(2));

Now just run the Index method and check how it works. The first time, Rediscache.StringGet("EmployeeDetails") will be null, so we go inside the if statement, fetch the data, and set it in the cache object. This object is valid for two minutes. If you make any request within that time period, it will go to the else part and simply fetch the data from the cache without regenerating it. So let's request it again.

NOTE: The second request must be within two minutes, otherwise it will go inside the if statement again.

Here is the complete flow. By using caching we can save a lot of time. You can also set rules to get email alerts when a particular threshold is reached for your cached object. Similarly, you can monitor your cache health as follows.

Conclusion

In this way we can work with Azure Redis Cache and keep our web application responsive. Please let me know if you have any doubts or if I made any mistakes, so I can correct them and learn from that.

Introduction

Let's have a quick review of the ASP.NET MVC architecture. When a request arrives at our application, the MVC Framework hands it off to an action in a controller. Most of the time this action returns a view, which is then parsed by the Razor view engine, and eventually HTML markup is returned to the client.
So in this approach the HTML markup is generated on the server and then returned to the client. There is an alternative: we can generate the markup on the client. Instead of our actions returning HTML markup, they can return raw data.

What is the benefit of this approach? There are a number of benefits to generating markup on the client:

- It requires fewer server resources (it potentially improves the scalability of the application, because each client is responsible for generating its own views).
- Raw data often requires less bandwidth than HTML markup, so the data potentially arrives faster at the client. This can improve the perceived performance of the application.
- This approach supports a broad range of clients, such as mobile and tablet apps. These apps simply call endpoints, get the data, and generate the views locally.

We call these endpoints data services (Web APIs) because they just return data, not markup. Web APIs are not limited to supporting other devices; they are also widely used in web applications to add new features. Many popular websites like YouTube, Facebook, and Twitter expose public data services which we can consume in our web applications. We can merge their data with the data in our application and provide new experiences to our users. These are the benefits.

These data services are not only for getting data; we can also have services that modify data, like adding a customer. The framework we use to build these data services is called ASP.NET Web API. This framework was developed after ASP.NET MVC, but it follows the same architecture and principles, so it has routing, controllers, actions, action results, and so on. There are also a few minor differences that we'll see here. In .NET Core, Microsoft has merged both frameworks (ASP.NET MVC and ASP.NET Web API) into a single framework.

Restful Convention

So now you know what HTTP services are and what a Web API is.
Here we'll develop an application which supports a few different kinds of requests:

GET /api/customers (to get the list of customers)
GET /api/customers/1 (to get a single customer)
POST /api/customers (to add a customer, with the customer data in the request body)

Don't be confused by the GET and POST requests: we use a GET request to read a resource or list of resources, and a POST request to create a new one. To update a customer we use a PUT request:

PUT /api/customers/1

So the id of the customer is in the URL, and the actual data or properties to update are in the request body. And finally, to delete a customer:

DELETE /api/customers/1

We send an HTTP DELETE request to the endpoint. What you see here, in terms of request types and endpoints, is a standard convention referred to as REST (Representational State Transfer).

Building An API

This class derives from ApiController, as opposed to Controller. If you're working with an existing project, just add a new folder inside the Controllers folder and add the API controller there. Before defining the actions of the API, this is my Customer model class:

public class Customer
{
    public int Id { get; set; }

    [Required]
    [StringLength(255)]
    public string Name { get; set; }

    public bool IsSubscribedToNewsLetter { get; set; }

    [Display(Name = "Date of Birth")]
    public DateTime? Birthdate { get; set; }

    [Display(Name = "Membership Type")]
    public byte MembershipTypeId { get; set; }

    // it allows us to navigate from 1 type to another
    public MembershipType MembershipType { get; set; }
}

And here is my DbContext class:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection", throwIfV1Schema: false)
    {
    }

    public static ApplicationDbContext Create()
    {
        return new ApplicationDbContext();
    }

    public DbSet<Customer> Customers { get; set; }
    public DbSet<MembershipType> MembershipTypes { get; set; }
}

Now it is easy to write actions for the API:

public IEnumerable<Customer> GetCustomers() { }

Because we're returning a list of objects, this action by convention will respond to

// GET /api/customers

This is the convention built into ASP.NET Web API. Now, in this action we're going to use our context to get the customers from the database.

namespace MyAPI.Controllers.Api
{
    public class CustomersController : ApiController
    {
        private readonly ApplicationDbContext _context;

        public CustomersController()
        {
            _context = new ApplicationDbContext();
        }

        // GET /api/customers
        public IEnumerable<Customer> GetCustomers()
        {
            return _context.Customers.ToList();
        }
    }
}

For the action that returns a single customer, if the resource isn't found we return a Not Found HTTP response; otherwise we return the object.

For creating a customer, the customer argument will be in the request body, and the ASP.NET Web API framework will automatically initialize it. We should mark this action with HttpPost because we're creating a resource. If we follow the naming convention we don't even need to place the action verb attribute on the action, but that is not a good approach: suppose you refactor the code in the future and rename your action, then your code will surely break. So always prefer to put the HTTP verb attributes on top of the actions. Now let's insert the customer object into the database with the POST action of the API.
// POST /api/customers
[HttpPost]
public Customer CreateCustomer(Customer customer)
{
    if (!ModelState.IsValid)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }
    _context.Customers.Add(customer);
    _context.SaveChanges();
    return customer;
}

Another action: let's suppose we want to update a record.

// PUT /api/customers/1
[HttpPut]
public void UpdateCustomer(int id, Customer customer)
{
    if (!ModelState.IsValid)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }
    var custmr = _context.Customers.SingleOrDefault(x => x.Id == id);
    // Might be user sends invalid id.
    if (custmr == null)
    {
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }
    custmr.Birthdate = customer.Birthdate;
    custmr.IsSubscribedToNewsLetter = customer.IsSubscribedToNewsLetter;
    custmr.Name = customer.Name;
    custmr.MembershipTypeId = customer.MembershipTypeId;
    _context.SaveChanges();
}

In this scenario different people have different opinions about whether to return void or the updated object. And here is the delete action of the API:

// DELETE /api/customers/1
[HttpDelete]
public void DeleteCustomer(int id)
{
    var custmr = _context.Customers.SingleOrDefault(x => x.Id == id);
    // Might be user sends invalid id.
    if (custmr == null)
    {
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }
    _context.Customers.Remove(custmr); // Now the object is marked as removed in memory
    _context.SaveChanges(); // Now it is done
}

This is how we use the RESTful convention to build the API.

Testing the API

If we run the application and request the API controller, we can see the list of customers in XML.

ASP.NET Web API has what we call media formatters: what we return from an action (in our case the list of customers) is formatted based on what the client asks for. Let me explain what I mean. Inspect the browser on the above screen and refresh the page; here you'll see the customers request. Look at the content type: if the request does not ask for a specific format, the server here responds with application/xml.

Note: "General" belongs to the request, and "Response Headers" is our response header here. As you can see in the request headers, we don't have any content type.
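If you want to see that negotiation from code rather than the browser, you can request JSON explicitly by setting the Accept header. This is a sketch, not part of the original article; the port number is a placeholder for whatever your local IIS Express instance assigned:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

class Program
{
    static void Main()
    {
        var client = new HttpClient();

        // Ask the API for JSON instead of the default XML.
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        // Hypothetical local URL; replace 12345 with your project's port.
        var json = client.GetStringAsync(
            "http://localhost:12345/api/customers").Result;

        Console.WriteLine(json);
    }
}
```

The same request without the Accept header set would typically come back as XML, which is exactly the behavior described above.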
Now let me show you the best way to test the API and get the data in JSON. Install the Postman desktop app on your machine, copy the browser link with the localhost port number, and paste it into Postman. When we put the URL of the request with localhost here, the response comes back in JSON. And if we click on the Headers tab, we'll see that the content type is application/json.

Most of the time we'll be using JSON because it is native to JavaScript code and much faster to process than XML. The XML media format is largely used in large organizations, such as government bodies, that are behind on modern technology. The JSON format is more lightweight because it doesn't have redundant opening and closing tags like XML.

A common point of confusion: sometimes when people work with APIs or with Postman, they get confused by the Postman interface because they have never used Postman before. It is very simple; just keep this in mind: if you're working with the request and trying to change some of its information, focus on the request pane, and if you're monitoring the response, watch the results in the response pane. They look similar, and sometimes when you scroll down the request pane vanishes. So don't let this confuse you.

Now let's insert a customer into the database with the API POST action. Select the POST request from the dropdown and open the request body tab. You can insert the customer with key-value pairs by clicking form-data, but most of the time we use the JSON format, so click on raw and write the JSON there. Don't put the Id property in the JSON; it is a hard and fast rule that when we insert data into the database, the id is automatically generated on the server. Now click on the Send button. Here I've successfully inserted the data with the POST API action and got the response. The upper block is the request block and the lower block is the response block.

You might face some kind of error here, like this.
Read the error message: "The request entity's media type 'text/plain' is not supported for this resource". To resolve this error, click on the Headers tab and add the value for Content-Type ('application/json'). Once the value has been added, the status code for the request is 200 OK and we can see the response body below.

Now let's update the customer entity. And look, it has been updated. Now let's delete one record similarly: just select DELETE in the dropdown, specify the id after a forward slash in the URL, and click on the Send button. It will be deleted.

Best practice: when you build an API, it is better to test it through Postman before consuming it in an application.

Data Transfer Objects (DTO)

So now we've built this API, but there are a couple of issues with this design. Our API receives and returns Customer objects. You might be thinking: what's wrong with this approach? The Customer object is part of the domain model of our application. It is an implementation detail, which can change frequently as we implement new features, and these changes can break existing clients that depend on the Customer object. For example, if we rename or remove a property, this can impact the clients that depend on that property. So we want to make the contract of the API as stable as possible, and this is where we use DTOs.

A DTO is a plain data structure used to transfer data from the client to the server or vice versa; that's why we call it a data transfer object. By creating DTOs, we reduce the chances of our API breaking as we refactor our domain model. Of course, we should remember that changing these DTOs can still be costly. The most important thing is that our API should not receive or return Customer domain model objects.

Another issue with using a domain object in the API is that we're opening security holes in our application.
A hacker can easily pass additional data in the JSON, and it will be mapped onto our domain object. What if one of those properties should not be updated? A hacker could easily bypass this, but if we use a DTO we can simply exclude the properties that should not be updated.

So add a new folder to your project named DTOs, add the class CustomerDTO, copy all the properties of the Customer domain model class with their data annotation attributes, and paste them into CustomerDTO. Now remove the navigation property from CustomerDTO, because it creates a dependency on the MembershipType domain model class.

namespace MyAPI.DTOs
{
    public class CustomerDTO
    {
        public int Id { get; set; }

        [Required]
        [StringLength(255)]
        public string Name { get; set; }

        public bool IsSubscribedToNewsLetter { get; set; }

        public DateTime? Birthdate { get; set; }

        public byte MembershipTypeId { get; set; }
    }
}

The next step is to use CustomerDTO in our API instead of the Customer domain class object. To avoid writing a lot of code binding the properties one by one, we use AutoMapper.

Automapper

Install the AutoMapper package from the Package Manager Console.

PM> Install-Package Automapper -version:4.1

Now add a new class in App_Start (MappingProfile.cs) and inherit it from Profile.

using AutoMapper;

namespace MyAPI.App_Start
{
    public class MappingProfile : Profile
    {
    }
}

Now create the constructor and add the mapping configuration between the two types.

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        Mapper.CreateMap<Customer, CustomerDTO>();
        Mapper.CreateMap<CustomerDTO, Customer>();
    }
}

The first argument of CreateMap is the source and the second one is the destination. When we use the CreateMap method, AutoMapper uses reflection to scan these types: it finds their properties and maps them based on their names. This is why AutoMapper is called a convention-based mapping tool; it uses the property names as the convention to map objects.
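As a miniature illustration of that convention (a sketch of my own, reusing the Customer and CustomerDTO classes defined above; nothing here beyond what the profile already configures):

```csharp
// Properties are matched purely by name: Id -> Id, Name -> Name, etc.
var customer = new Customer { Id = 7, Name = "Jane" };

// One call copies every same-named property; no manual assignments.
CustomerDTO dto = Mapper.Map<Customer, CustomerDTO>(customer);

// dto.Id and dto.Name now carry the values from customer.
```

If a property exists on Customer but not on CustomerDTO (like the MembershipType navigation property we removed), it is simply skipped.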
So that is our mapping profile; now we need to load it when the application starts. Open the Global.asax.cs file and update Application_Start():

protected void Application_Start()
{
    Mapper.Initialize(c => c.AddProfile<MappingProfile>());
    GlobalConfiguration.Configure(WebApiConfig.Register);
    AreaRegistration.RegisterAllAreas();
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);
}

Now open the CustomersController of the API and let's start making changes.

// GET /api/customers
public IEnumerable<CustomerDTO> GetCustomers()
{
    return _context.Customers.ToList();
}

We now want to return the CustomerDTO type instead of Customer objects, so we need to map each Customer object to a CustomerDTO. We use a LINQ extension method:

// GET /api/customers
public IEnumerable<CustomerDTO> GetCustomers()
{
    return _context.Customers.ToList()
        .Select(Mapper.Map<Customer, CustomerDTO>);
}

The delegate Mapper.Map<Customer, CustomerDTO> does the mapping. Notice that we do not add the parentheses of a function call, because we're not calling the function here; we only pass a reference to it. The mapping function is called automatically when Select executes.

// GET /api/customers/1
public Customer GetCustomer(int id)
{
    var customer = _context.Customers.SingleOrDefault(x => x.Id == id);

    // This is part of the RESTful Convention
    if (customer == null)
    {
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }

    return customer;
}

In this function we are returning a single object, so we don't use the Select extension method.
Here we use the mapper directly:

// GET /api/customers/1
public CustomerDTO GetCustomer(int id)
{
    var customer = _context.Customers.SingleOrDefault(x => x.Id == id);

    // This is part of the RESTful Convention
    if (customer == null)
    {
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }

    return Mapper.Map<Customer, CustomerDTO>(customer);
}

Now let's move on to the CreateCustomer action:

// POST /api/customers
[HttpPost]
public CustomerDTO CreateCustomer(CustomerDTO customerDto)
{
    if (!ModelState.IsValid)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }
    var customer = Mapper.Map<CustomerDTO, Customer>(customerDto);
    _context.Customers.Add(customer);
    _context.SaveChanges();
    // When SaveChanges runs, the new record is written
    // to Customer table (id is assigned to it)
    // & Now we assigned this id to customerDto
    customerDto.Id = customer.Id;
    return customerDto;
}

This is how we work with DTOs and AutoMapper. Now let's update the UpdateCustomer API action method:

// PUT /api/customers/1
[HttpPut]
public void UpdateCustomer(int id, CustomerDTO customerDto)
{
    if (!ModelState.IsValid)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }
    var custmr = _context.Customers.SingleOrDefault(x => x.Id == id);
    // Might be user sends invalid id.
    if (custmr == null)
    {
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }
    Mapper.Map<CustomerDTO, Customer>(customerDto, custmr);
    //custmr.Birthdate = customerDto.Birthdate;
    //custmr.IsSubscribedToNewsLetter = customerDto.IsSubscribedToNewsLetter;
    //custmr.Name = customerDto.Name;
    //custmr.MembershipTypeId = customerDto.MembershipTypeId;
    _context.SaveChanges();
}

So this is how we map objects using AutoMapper. AutoMapper has a few more features that you may find useful in certain situations: if your property names don't match you can override the default convention, you can exclude some properties from mapping, or you can create custom mapping classes. If you want to learn more, see the AutoMapper documentation.

IHttpActionResult

Now look again at the CreateCustomer action method above. It simply returns a customerDto, which eventually results in a plain 200 OK response. But in the RESTful convention, when we create a resource the status code should be 201 Created. So we need more control over the response returned from an action, and to make this happen, instead of returning a CustomerDTO we return IHttpActionResult.
This interface is similar to the ActionResult we have in the MVC framework, so it is implemented by a few different classes, and in ApiController we have a set of helper methods to create an instance of one of the classes that implement the IHttpActionResult interface.

Now, if the model is not valid, instead of throwing an exception we use the helper method BadRequest():

// POST /api/customers
[HttpPost]
public IHttpActionResult CreateCustomer(CustomerDTO customerDto)
{
    if (!ModelState.IsValid)
    {
        return BadRequest();
    }
    var customer = Mapper.Map<CustomerDTO, Customer>(customerDto);
    _context.Customers.Add(customer);
    _context.SaveChanges();
    customerDto.Id = customer.Id;
    return Created(new Uri(Request.RequestUri + "/" + customer.Id), customerDto);
}

As we can see, if the ModelState is not valid we return BadRequest, and once the customer has been added we return Created() with the URI of the new resource (including its id) together with the object we've just created. Look: we created one more resource and now the status is 201 Created. And if we look at the Location header, that's the URI of the newly created customer. This is part of the RESTful convention. So in Web APIs, prefer IHttpActionResult as the return type of your actions.

Now let's make the same changes in the rest of the actions of this Web API. Here is the complete code of our API:

public class CustomersController : ApiController
{
    private readonly ApplicationDbContext _context;

    public CustomersController()
    {
        _context = new ApplicationDbContext();
    }

    // GET /api/customers
    public IHttpActionResult GetCustomers()
    {
        return Ok(_context.Customers.ToList()
            .Select(Mapper.Map<Customer, CustomerDTO>));
    }

    // GET /api/customers/1
    public IHttpActionResult GetCustomer(int id)
    {
        var customer = _context.Customers.SingleOrDefault(x => x.Id == id);

        // This is part of the RESTful Convention
        if (customer == null)
            return NotFound();

        return Ok(Mapper.Map<Customer, CustomerDTO>(customer));
    }

    // POST /api/customers
    [HttpPost]
    public IHttpActionResult CreateCustomer(CustomerDTO customerDto)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest();
        }
        var customer = Mapper.Map<CustomerDTO, Customer>(customerDto);
        _context.Customers.Add(customer);
        _context.SaveChanges();
        customerDto.Id = customer.Id;
        return Created(new Uri(Request.RequestUri + "/" + customer.Id), customerDto);
    }

    // PUT /api/customers/1
    [HttpPut]
    public IHttpActionResult UpdateCustomer(int id, CustomerDTO customerDto)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest();
        }
        var custmr = _context.Customers.SingleOrDefault(x => x.Id == id);
        // Might be user sends invalid id.
        if (custmr == null)
        {
            return NotFound();
        }
        Mapper.Map(customerDto, custmr);
        _context.SaveChanges();
        return Ok(custmr);
    }

    // DELETE /api/customers/1
    [HttpDelete]
    public IHttpActionResult DeleteCustomer(int id)
    {
        var custmr = _context.Customers.SingleOrDefault(x => x.Id == id);
        // Might be user sends invalid id.
        if (custmr == null)
        {
            return NotFound();
        }
        _context.Customers.Remove(custmr); // Now the object is marked as removed in memory
        _context.SaveChanges(); // Now it is done
        return Ok(custmr);
    }
}

Let me mention one point here: as you can see, we have two parameters in UpdateCustomer. If a parameter is a primitive type, like our int id, then we place it in the route URL or in the query string. If we want to initialize a complex type, like our CustomerDTO, then we always initialize it from the request body in Postman. So don't be confused by this.

Now let's update and delete the JSON object through Postman. If you focus on the UpdateCustomer action parameters, the first parameter is the record id and the second parameter is the DTO object. It works when the Id is included in the request body, because the entity is complete there. But if we don't provide the Id in the request body, we'll get an error, and the exceptionMessage is "The property 'Id' is part of the object's key information and cannot be modified."

This exception happens on this line:

Mapper.Map<CustomerDTO, Customer>(customerDto, custmr);

because customerDto doesn't contain the Id, but custmr (which is the object variable of the Customer model class) has an Id property. So we need to tell AutoMapper to ignore Id when mapping customerDto to custmr.
So, go to the MappingProfile:

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        Mapper.CreateMap<Customer, CustomerDTO>();
        Mapper.CreateMap<CustomerDTO, Customer>()
            .ForMember(c => c.Id, opt => opt.Ignore());
    }
}

And look, it is working now.

Consuming the Web API

After properly testing the API, it is time to consume it. The most important thing I would like to mention here: now that our API is ready, you can consume it from any client. Here we're showing a sample of consuming it from a Visual Studio application, but if you've built this API with us you can consume it from PHP, Python, or an application in any framework with the help of jQuery AJAX. Now we'll use jQuery to call our API.

Look at this screen; here I'm showing some customers. What we want is to delete a row on clicking its Delete button. We're rendering the items on the screen with a foreach loop. On the Delete button click we also want the record id, so we can pass it to the Web API delete action and, on success, remove the row.

@foreach (var customer in Model)
{
    <tr>
        <td>@Html.ActionLink(customer.Name, "Edit", "Customers", new { id = customer.Id }, null)</td>
        @if (customer.Birthdate != null)
        {
            <td> @customer.Birthdate </td>
        }
        else
        {
            <td>Not Available</td>
        }
        <td>
            <button data-customer-id="@customer.Id" class="js-delete">Delete</button>
        </td>
    </tr>
}

This is the HTML. Now I want to call my API through AJAX.

@section scripts{
    <script>
        $(document).ready(function () {
            $('#customers .js-delete').on('click', function () {
                var button = $(this);
                if (confirm('Are you sure you want to delete this client?')) {
                    $.ajax({
                        url: '/api/customers/' + button.attr('data-customer-id'),
                        method: 'DELETE',
                        success: function () {
                            button.parents('tr').remove();
                        }
                    });
                }
            });
        });
    </script>
}

This is how we work with AJAX and APIs. You might be wondering why I'm just passing the id to the customers API, targeting the delete action, and removing the row directly in the success handler.
You might approach this scenario in a different way, because every developer has their own taste. You might think: first delete the record with the DELETE method, then get all the records from the GetCustomers() method of the Web API, and then render all those items through a jQuery each loop. But that takes too much time and effort. When I click on the Delete link and look at the result in the browser's inspector, the status is 200 OK. It means everything is working fine; our delete action works as we expect. So there is no need to fetch all the items from the database again and re-render them through an each loop. Keep the scenario simple and just remove the row, as I do here.

Conclusion

The conclusion is: always follow the RESTful convention when you're working with Web APIs. Web APIs are much more lightweight than SOAP-based web services, and they are cross-platform. The RESTful HTTP verbs help the application insert, delete, update, and get records. We also saw how to use Postman and its two panes, the request pane and the response pane. Developers are often confused about how to call Web API actions with jQuery AJAX; here we consumed an action that way as well.
Provides Jenkins notification integration with Slack or Slack compatible applications like RocketChat and Mattermost. Install Instructions for Slack Get a Slack account: Configure the Jenkins integration: Install this plugin on your Jenkins server: - From the Jenkins homepage navigate to Manage Jenkins - Navigate to Manage Plugins, - Change the tab to Available, Pipeline" Additionally you can pass attachments or blocks (requires bot user) in order to send complex messages, for example: Attachments: def attachments = [ [ text: 'I find your lack of faith disturbing!', fallback: 'Hey, Vader seems to be mad at you.', color: '#ff0000' ] ] slackSend(channel: "#general", attachments: attachments) Blocks (this feature requires a 'bot user' and a custom slack app): blocks = [ [ "type": "section", "text": [ "type": "mrkdwn", "text": "Hello, Assistant to the Regional Manager Dwight! *Michael Scott* wants to know where you'd like to take the Paper Company investors to dinner tonight.\n\n *Please select a restaurant:*" ] ], [ "type": "divider" ], [ "type": "section", "text": [ "type": "mrkdwn", "text": "*Farmhouse Thai Cuisine*\n:star::star::star::star: 1528 reviews\n They do have some vegan options, like the roti and curry, plus they have a ton of salad stuff and noodles can be ordered without meat!! They have something for everyone here" ], "accessory": [ "type": "image", "image_url": " "alt_text": "alt text for image" ] ] ] slackSend(channel: "#general", blocks: blocks) For more information about slack messages see Slack Messages Api, Slack attachments Api and Block kit Note: the attachments API is classified as legacy, with blocks as the replacement (but blocks are only supported when using a bot user through a custom slack app). File upload You can upload files to slack with this plugin: node { sh "echo hey > blah.txt" slackUploadFile filePath: "*.txt", initialComment: "HEY HEY" } This feature requires botUser mode. 
Threads Support You can send a message and create a thread on that message using the pipeline step. The step returns an object which you can use to retrieve the thread ID. Send new messages with that thread ID as the target channel to create a thread. All messages of a thread should use the same thread ID. Example: def slackResponse = slackSend(channel: "cool-threads", message: "Here is the primary message") slackSend(channel: slackResponse.threadId, message: "Thread reply #1") slackSend(channel: slackResponse.threadId, message: "Thread reply #2") This feature requires botUser mode. Messages that are posted to a thread can also optionally be broadcasted to the channel. Set replyBroadcast: true to do so. For example: def slackResponse = slackSend(channel: "ci", message: "Started build") slackSend(channel: slackResponse.threadId, message: "Build still in progress") slackSend( channel: slackResponse.threadId, replyBroadcast: true, message: "Build failed. Broadcast to channel for better visibility." ) If you wish to upload a file to a thread, you can do so by specifying the channel, and the timestamp of the thread you want to add the file to, separated by a colon. For example: def slackResponse = slackSend(channel: "cool-threads", message: "Here is the primary message") sh "echo hey > blah.txt" slackUploadFile(channel: "cool-threads:" + slackResponse.ts, filePath: "*.txt", initialComment: "A file, inside a thread.") Update Messages You can update the content of a previously sent message using the pipeline step. The step returns an object which you can use to retrieve the timestamp and channelId NOTE: The slack API requires the channel ID for chat.update calls. Example: def slackResponse = slackSend(channel: "updating-stuff", message: "Here is the primary message") slackSend(channel: slackResponse.channelId, message: "Update message now", timestamp: slackResponse.ts) This feature requires botUser mode. 
Emoji Reactions Add an emoji reaction to a previously-sent message like this: Example: def slackResponse = slackSend(channel: "emoji-demo", message: "Here is the primary message") slackResponse.addReaction("thumbsup") This may only work reliably in channels (as opposed to private messages) due to limitations in the Slack API (See "Post to an IM channel"). This does not currently work in a situation where Jenkins is restarted between sending the initial message and adding the reaction. If this is something you need, please file an issue. This feature requires botUser mode and the reactions:write API scope. Unfurling Links You can allow link unfurling if you send the message as text. This only works in a text message, as attachments cannot be unfurled. Example: slackSend(channel: "news-update", message: " sendAsText: true) User Id Look Up There are two pipeline steps available to help with user id look up. A user id can be resolved from a user's email address with the slackUserIdFromEmail step. Example: def userId = slackUserIdFromEmail('spengler@ghostbusters.example.com') slackSend(color: "good", message: "<@$userId> Message from Jenkins Pipeline") A list of user ids can be resolved against the set of changeset commit authors with the slackUserIdsFromCommitters step. Example: def userIds = slackUserIdsFromCommitters() def userIdsString = userIds.collect { "<@$it>" }.join(' ') slackSend(color: "good", message: "$userIdsString Message from Jenkins Pipeline") This feature requires botUser mode and the users:read and users:read.email API scopes. Colors Warning: This functionality is not supported if you are using the blocks layout mode Any hex triplet (i.e. '#AA1100') can be used for the color of the message. There are also three builtin color options: Freestyle job - Configure it in your Jenkins job (and optionally as global configuration) and add it as a Post-build action. 
Install Instructions for Slack compatible application

- Log into the Slack compatible application.
- Create a Webhook (it may need to be enabled in the system console) by visiting Integrations.
- You should now have a URL with a token, where xxxx is the integration token, plus the Slack compatible app URL.
- Install this plugin on your Jenkins server.
- Follow the freestyle or pipeline instructions above for the Slack installation.

Security

Use Jenkins Credentials and a credential ID to configure the Slack integration token. It is a security risk to expose your integration token using the previous Integration Token setting.

Create a new Secret text credential:

Select that credential as the value for the Credential field:

Direct Message

You can send messages to channels or you can notify individual users via their slackbot. In order to notify an individual user, use the syntax @user_id in place of the project channel. Mentioning users by display name may work, but it is not unique and will not work if it is an ambiguous match.

User Mentions

Use the syntax <@user_id> in a message to mention users directly. See User Id Look Up for pipeline steps to help with user id look up.

Configuration as code

This plugin supports configuration as code. Add to your yaml file:

credentials:
  system:
    domainCredentials:
      - credentials:
          - string:
              scope: GLOBAL
              id: slack-token
              secret: '${SLACK_TOKEN}'
              description: Slack token

unclassified:
  slackNotifier:
    teamDomain: <your-slack-workspace-name> # i.e. your-company (just the workspace name not the full url)
    tokenCredentialId: slack-token

For more details see the configuration as code plugin documentation:

Bot user mode

There are two ways to authenticate with Slack using this plugin:

- Using the "Jenkins CI" app written by Slack. It is what is known as a 'legacy app', written directly into the Slack code base and not maintained anymore.
- Creating your own custom Slack app and installing it to your workspace.
The benefit of using your own custom "Slack app" is that you get to use all of the modern features that Slack has released in the last few years to Slack apps and not to legacy apps. These include:

- Threading
- File upload
- Custom app emoji per message
- Blocks

The bot user option is not supported if you use the Slack compatible app URL option.

Creating your app

Note: These docs may become outdated as Slack changes their website; if they do become outdated, please send a PR here to update the docs.

- Go to https://api.slack.com/apps and click "Create New App".
- Pick an app name, i.e. "Jenkins", and a workspace that you'll be installing it to.
- Click "Create App". This will leave you on the "Basic Information" screen for your new app.
- Scroll down to "Display Information" and fill it out. You can get the Jenkins logo from the Jenkins project's artwork page.
- Scroll back up to "Add features and functionality".
- Click "Permissions" to navigate to the "OAuth & Permissions" page.
- Scroll down to "Scopes". Under "Bot Token Scopes":
  - Add the chat:write scope.
  - (optional) Add the files:write scope if you will be uploading files.
  - (optional) Add the chat:write.customize scope if you will be sending messages with a custom username and/or avatar.
  - (optional) Add the reactions:write scope if you will be adding reactions.
  - (optional) Add the users:read and users:read.email scopes if you will be looking users up by email.
- (optional) Click "App Home" in the sidebar and edit the Slack display name for the bot.
- Return to the "OAuth & Permissions" page.
- At the top of the page, click "Install App to Workspace". This will generate a "Bot User OAuth Access Token".
- Copy the "Bot User OAuth Access Token".
- On Jenkins: Create a Secret text credential containing that token and select it as the credential for the Slack integration.
- On Jenkins: Tick the "Custom slack app bot user" option.
- Invite the Jenkins bot user into the Slack channel(s) you wish to be notified in.
- On Jenkins: Click test connection. A message will be sent to the default channel / default member.
Troubleshooting connection failure

When testing the connection, you may see errors like:

WARNING j.p.slack.StandardSlackService#publish: Response Code: 404

There are a couple of things to try:

Have you enabled bot user mode? If you've ticked Custom slack app bot user then try unticking it; that mode is for when you've created a custom app and installed it to your workspace instead of using the default Jenkins app made by Slack.

Have you set the override URL? If you've entered something into Override url then try clearing it out; that field is only needed for Slack compatible apps like Mattermost.

Enable additional logging: add a log recorder for the StandardSlackService class. This should give you additional details on what's going on.

If you still can't figure it out, please raise an issue with as much information as possible about your config and any relevant logs.

Developer instructions

Install Maven and JDK.

$ mvn -version | grep -v home
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00)
Java version: 1.7.0_79, vendor: Oracle Corporation
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.4.0-65-generic", arch: "amd64", family: "unix"

Run unit tests:

mvn test

Create an HPI file to install in Jenkins (the HPI file will be in target/slack.hpi):

mvn clean package
https://plugins.jenkins.io/slack/
How to Get the Time of a Python Program's Execution

In this article, we will learn to calculate the time taken by a program to execute in Python. We will use some built-in functions along with some custom code. Let's first have a quick look at how a program's execution time matters in Python.

Programmers often run into a "Time Limit Exceeded" error while building program scripts. In order to resolve this issue, we must optimize our programs to perform better. For that, we need to know how much time the program takes to execute. Let us discuss the different functions supported by Python for calculating the running time of a program.

The measured execution time of a Python program can be inconsistent, for several reasons:

- The same program can be evaluated using different algorithms
- Running time varies between algorithms
- Running time varies between implementations
- Running time varies between computers
- Running time is not predictable based on small inputs

Calculate Execution Time using time() Function

We calculate the execution time of the program using the time.time() function. We import the time module, which can be used to get the current time. The below example stores the starting time before the for loop executes, then stores the ending time after the print line executes. The difference between the ending time and the starting time is the running time of the program. time.time() is best used on *nix.

import time

# starting time
start = time.time()

for i in range(3):
    print("Hello")

# end time
end = time.time()

# total time taken
print("Execution time of the program is- ", end-start)

Hello
Hello
Hello
Execution time of the program is-  1.430511474609375e-05

Calculate execution time using timeit() function

We calculate the execution time of the program using the timeit() function. It requires importing the timeit module. The result is the execution time in seconds.
This approach assumes that your program takes at least a tenth of a second to run. The below example creates a variable and wraps the entire code, including imports, inside triple quotes; the test code acts as a string. Now, we call the timeit.timeit() function. The timeit() function accepts the test code as an argument, executes it, and records the execution time. The value of the number argument is set to 200 cycles.

import timeit

test_code = """
a = range(100000)
b = []
for i in a:
    b.append(i+2)
"""

total_time = timeit.timeit(test_code, number=200)
print("Execution time of the program is-", total_time)

Execution time of the program is- 4.26646219700342

Calculate execution time using time.clock() Function

Another function of the time module for measuring a program's execution time is time.clock(). time.clock() measures CPU time on Unix systems, not wall time. This function is mainly used for benchmarking purposes or timing algorithms. time.clock() may return slightly better accuracy than time.time(). It returns the processor time, which allows us to calculate only the time used by this process. It is best used on Windows.

import time

t0 = time.clock()
print("Hello")
t1 = time.clock() - t0
print("Time elapsed: ", t1)  # CPU seconds elapsed (floating point)

The program prints Hello, followed by the elapsed CPU time (a small floating-point number).

Note: time.clock() is "Deprecated since version 3.3". The behavior of this function depends on the platform. Instead, we can use perf_counter() or process_time(), depending on the requirements, to get well-defined behavior.

time.perf_counter() - It returns the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide.

time.process_time() - It returns the value (in fractional seconds) of the sum of the system and user CPU time of the current process. It does not include time elapsed during sleep.

For example:

start = time.process_time()
# ...
# do something
elapsed = time.process_time() - start

Calculate execution time using datetime.now() Function

We calculate the elapsed time using datetime.datetime.now() from the datetime module available in Python. It does not require wrapping the script in a multi-line string as timeit() does. This solution is slower than timeit(), since calculating the difference in time is included in the execution time. The output is represented as days, hours, minutes, etc.

The below example saves the current time in a variable before any execution. Then it calls datetime.datetime.now() again after the program execution to find the difference between the end and start times.

import datetime

start = datetime.datetime.now()
list1 = [4, 2, 3, 1, 5]
list1.sort()
end = datetime.datetime.now()
print(end-start)

0:00:00.000007

Calculate execution time using %%time

We use the %%time command to calculate the time elapsed by the program. This command is basically for users who are working in a Jupyter Notebook. It will only capture the wall time of a particular cell.

%%time
[x**2 for x in range(10000)]

Why is timeit() the best way to measure the execution time of Python code?

1. You can also use time.clock() on Windows and time.time() on Mac or Linux. However, timeit() will automatically use either time.clock() or time.time() in the background depending on the operating system.
2. timeit() disables the garbage collector, which could otherwise skew the results.
3. timeit() repeats the test many times to minimize the influence of other tasks running on your operating system.

Conclusion

In this article, we learned to calculate the execution time of any program by using functions such as time(), clock(), timeit(), %%time, etc. We also discussed optimizing Python scripts, and learned about the various functions and their uniqueness.
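To round out the note above, here is a small runnable sketch of the two recommended replacements side by side (the busy_work function is just an arbitrary workload invented for this example):

```python
import time

def busy_work():
    # Arbitrary CPU-bound workload, used only for demonstration.
    return sum(i * i for i in range(100_000))

wall_start = time.perf_counter()   # high-resolution wall-clock timer
cpu_start = time.process_time()    # CPU time of this process only

busy_work()

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

print(f"Wall time: {wall_elapsed:.6f} s")
print(f"CPU time:  {cpu_elapsed:.6f} s")
```

Both values are in fractional seconds; for a CPU-bound loop like this they will usually be close, while a time.sleep() call would show up only in the wall time.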
https://www.studytonight.com/python-howtos/how-to-get-time-of-a-python-programs-execution
How to create Class in Python

In this article, we will learn to create a class in Python. We will look at the methodology, syntax, keywords, and associated terms with some simple approaches, and some custom code as well, to better understand the topic of the class. Let's first have a quick look at what a class is and how it is used and defined in the Python language.

Class Real World Example

Let us understand a real-world example to know the importance of an object-oriented class. Assume there are 50 students in one class and the teacher wants to store, manage, and maintain the marks of each subject scored by the 50 students. In order to maintain this large amount of data, classes are introduced, because they bundle data together and provide data organization. The class creates objects for its functioning. The class will define properties related to students like name, roll no, marks, etc. under one roof, and the information is then accessed using the objects created.

Now, let us see how classes are created, what important points one should keep in mind before creating a class, and what other functionality classes provide to us.

Creating a Class in Python

A program using a class is generally easy to read, maintain, and understand. A class functions as a template that defines the basic characteristics of a particular object.

Important Points

- Classes are created using the class keyword.
- A colon (:) is used after the class name.
- The class is made up of attributes (data) and methods (functions).
- Attributes that apply to the whole class are defined first and are called class attributes.
- Attributes can be accessed using the dot (.) operator.

Let us understand the concept of the 'Dog' class using a simple code.

Example: Creating Python Class

# a class is defined using the class keyword
class Dog:
    # data members of the class
    color = "black"  # attribute 1
    name = "Polo"    # attribute 2

    # class constructor
    def __init__(self):
        pass

    # user-defined method of the class
    def func(self):
        pass

We created a class and named it "Dog".
We defined two attributes (class variables) of the Dog class that store a color and a name. This is the simplest template of a class. Further, we defined a constructor, which uses __init__ for its declaration. After this, users can create their own functions, called member functions (methods) of the class, and perform different operations on the attributes defined inside the class. We left these two functions empty and will learn more about them in another article.

As you can see in the above example, all the different attributes or properties of the dog are placed together as a bundle. These attributes are like class variables that are local to the class. When a class is defined, a new namespace is created and used as the local scope; thus, all assignments to local variables (the attributes here) go into this new namespace, and they are then accessed with the help of an object. We will learn more about class objects in the next article.

Conclusion

In this article, we learned to create a class in Python by using the class keyword. We used the Dog class to better understand the topic. We learned about object-oriented programming and its importance, and how classes are used in our daily lives.
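To make the dot-operator access concrete, here is a minimal sketch of using the class above (object creation is covered fully in the next article; my_dog is just a name chosen for this example):

```python
class Dog:
    color = "black"  # attribute 1
    name = "Polo"    # attribute 2

my_dog = Dog()       # create an object (instance) of the class
print(my_dog.color)  # black  -- attributes are accessed with the dot operator
print(my_dog.name)   # Polo
```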
https://www.studytonight.com/python-howtos/how-to-create-class-in-python
By Steve Smith

Mobile apps can easily communicate with ASP.NET Core backend services.

View or download sample backend services code

The Sample Native Mobile App

This tutorial demonstrates how to create backend services using ASP.NET Core MVC to support native mobile apps. It uses the Xamarin Forms ToDoRest app as its native client, which includes separate native clients for Android, iOS, Windows Universal, and Windows Phone devices. You can follow the linked tutorial to create the native app (and install the necessary free Xamarin tools), as well as download the Xamarin sample solution. The Xamarin sample includes an ASP.NET Web API 2 services project, which this article's ASP.NET Core app replaces (with no changes required by the client).

Features

The ToDoRest app supports listing, adding, deleting, and updating To-Do items. Each item has an ID, a Name, Notes, and a property indicating whether it's been Done yet.

The main view of the items lists each item's name and indicates if it is done with a checkmark. Tapping the + icon opens an add item dialog. Tapping an item on the main list screen opens up an edit dialog where the item's Name, Notes, and Done settings can be modified, or the item can be deleted.

This sample is configured by default to use backend services hosted at developer.xamarin.com, which allow read-only operations. To test it out yourself against the ASP.NET Core app created in the next section running on your computer, you'll need to update the app's RestUrl constant. Navigate to the ToDoREST project and open the Constants.cs file. Replace the RestUrl with a URL that includes your machine's IP address (not localhost or 127.0.0.1, since this address is used from the device emulator, not from your machine). Include the port number as well (5000). In order to test that your services work with a device, ensure you don't have an active firewall blocking access to this port.
// URL of REST service (Xamarin ReadOnly Service)
//public static string RestUrl = "{0}";

// use your machine's IP address (the address below is a placeholder)
public static string RestUrl = "http://<your-ip-address>:5000/api/todoitems/{0}";

Creating the ASP.NET Core Project

Create a new ASP.NET Core Web Application in Visual Studio. Choose the Web API template and No Authentication. Name the project ToDoApi.

The application should respond to all requests made to port 5000. Update Program.cs to include .UseUrls("http://*:5000") to achieve this:

var host = new WebHostBuilder()
    .UseKestrel()
    .UseUrls("http://*:5000")
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

Note: Make sure you run the application directly, rather than behind IIS Express, which ignores non-local requests by default. Run dotnet run from a command prompt, or choose the application name profile from the Debug Target dropdown in the Visual Studio toolbar.

Add a model class to represent To-Do items. Mark required fields using the [Required] attribute:

using System.ComponentModel.DataAnnotations;

namespace ToDoApi.Models
{
    public class ToDoItem
    {
        [Required]
        public string ID { get; set; }

        [Required]
        public string Name { get; set; }

        [Required]
        public string Notes { get; set; }

        public bool Done { get; set; }
    }
}

The API methods require some way to work with data.
Use the same IToDoRepository interface the original Xamarin sample uses: using System.Collections.Generic; using ToDoApi.Models; namespace ToDoApi.Interfaces { public interface IToDoRepository { bool DoesItemExist(string id); IEnumerable<ToDoItem> All { get; } ToDoItem Find(string id); void Insert(ToDoItem item); void Update(ToDoItem item); void Delete(string id); } } For this sample, the implementation just uses a private collection of items: using System.Collections.Generic; using System.Linq; using ToDoApi.Interfaces; using ToDoApi.Models; namespace ToDoApi.Services { public class ToDoRepository : IToDoRepository { private List<ToDoItem> _toDoList; public ToDoRepository() { InitializeData(); } public IEnumerable<ToDoItem> All { get { return _toDoList; } } public bool DoesItemExist(string id) { return _toDoList.Any(item => item.ID == id); } public ToDoItem Find(string id) { return _toDoList.FirstOrDefault(item => item.ID == id); } public void Insert(ToDoItem item) { _toDoList.Add(item); } public void Update(ToDoItem item) { var todoItem = this.Find(item.ID); var index = _toDoList.IndexOf(todoItem); _toDoList.RemoveAt(index); _toDoList.Insert(index, item); } public void Delete(string id) { _toDoList.Remove(this.Find(id)); } private void InitializeData() { _toDoList = new List<ToDoItem>(); var todoItem1 = new ToDoItem { ID = "6bb8a868-dba1-4f1a-93b7-24ebce87e243", Name = "Learn app development", Notes = "Attend Xamarin University", Done = true }; var todoItem2 = new ToDoItem { ID = "b94afb54-a1cb-4313-8af3-b7511551b33b", Name = "Develop apps", Notes = "Use Xamarin Studio/Visual Studio", Done = false }; var todoItem3 = new ToDoItem { ID = "ecfa6f80-3671-4911-aabe-63cc442c1ecf", Name = "Publish apps", Notes = "All app stores", Done = false, }; _toDoList.Add(todoItem1); _toDoList.Add(todoItem2); _toDoList.Add(todoItem3); } } } Configure the implementation in Startup.cs: public void ConfigureServices(IServiceCollection services) { // Add framework services. 
services.AddMvc();
    services.AddSingleton<IToDoRepository, ToDoRepository>();
}

At this point, you're ready to create the ToDoItemsController.

Tip: Learn more about creating web APIs in Building Your First Web API with ASP.NET Core MVC and Visual Studio.

Creating the Controller

Add a new controller to the project, ToDoItemsController. It should inherit from Microsoft.AspNetCore.Mvc.Controller. Add a Route attribute to indicate that the controller will handle requests made to paths starting with api/todoitems. The [controller] token in the route is replaced by the name of the controller (omitting the Controller suffix), and is especially helpful for global routes. Learn more about routing.

The controller requires an IToDoRepository to function; request an instance of this type through the controller's constructor. At runtime, this instance will be provided using the framework's support for dependency injection.

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using ToDoApi.Interfaces;
using ToDoApi.Models;

namespace ToDoApi.Controllers
{
    [Route("api/[controller]")]
    public class ToDoItemsController : Controller
    {
        private readonly IToDoRepository _toDoRepository;

        public ToDoItemsController(IToDoRepository toDoRepository)
        {
            _toDoRepository = toDoRepository;
        }

This API supports four different HTTP verbs to perform CRUD (Create, Read, Update, Delete) operations on the data source. The simplest of these is the Read operation, which corresponds to an HTTP GET request.

Reading Items

Requesting a list of items is done with a GET request to the List method. The [HttpGet] attribute on the List method indicates that this action should only handle GET requests. The route for this action is the route specified on the controller. You don't necessarily need to use the action name as part of the route. You just need to ensure each action has a unique and unambiguous route.
Routing attributes can be applied at both the controller and method levels to build up specific routes.

[HttpGet]
public IActionResult List()
{
    return Ok(_toDoRepository.All);
}

The List method returns a 200 OK response code and all of the ToDo items, serialized as JSON. You can test your new API method using a variety of tools, such as Postman.

Creating Items

By convention, creating new data items is mapped to the HTTP POST verb. The Create method has an [HttpPost] attribute applied to it, and accepts a ToDoItem instance. Since the item argument will be passed in the body of the POST, this parameter is decorated with the [FromBody] attribute.

Inside the method, the item is checked for validity and prior existence in the data store, and if no issues occur, it is added using the repository. Checking ModelState.IsValid performs model validation, and should be done in every API method that accepts user input.

[HttpPost]
public IActionResult Create([FromBody] ToDoItem item)
{
    try
    {
        if (item == null || !ModelState.IsValid)
        {
            return BadRequest(ErrorCode.TodoItemNameAndNotesRequired.ToString());
        }
        bool itemExists = _toDoRepository.DoesItemExist(item.ID);
        if (itemExists)
        {
            return StatusCode(StatusCodes.Status409Conflict, ErrorCode.TodoItemIDInUse.ToString());
        }
        _toDoRepository.Insert(item);
    }
    catch (Exception)
    {
        return BadRequest(ErrorCode.CouldNotCreateItem.ToString());
    }
    return Ok(item);
}

The sample uses an enum containing error codes that are passed to the mobile client:

public enum ErrorCode
{
    TodoItemNameAndNotesRequired,
    TodoItemIDInUse,
    RecordNotFound,
    CouldNotCreateItem,
    CouldNotUpdateItem,
    CouldNotDeleteItem
}

Test adding new items using Postman by choosing the POST verb, providing the new object in JSON format in the Body of the request. You should also add a request header specifying a Content-Type of application/json. The method returns the newly created item in the response.

Updating Items

Modifying records is done using HTTP PUT requests.
Other than this change, the Edit method is almost identical to Create. Note that if the record isn't found, the Edit action will return a NotFound (404) response. [HttpPut] public IActionResult Edit([FromBody] ToDoItem item) { try { if (item == null || !ModelState.IsValid) { return BadRequest(ErrorCode.TodoItemNameAndNotesRequired.ToString()); } var existingItem = _toDoRepository.Find(item.ID); if (existingItem == null) { return NotFound(ErrorCode.RecordNotFound.ToString()); } _toDoRepository.Update(item); } catch (Exception) { return BadRequest(ErrorCode.CouldNotUpdateItem.ToString()); } return NoContent(); } To test with Postman, change the verb to PUT. Specify the updated object data in the Body of the request. This method returns a NoContent (204) response when successful, for consistency with the pre-existing API. Deleting Items Deleting records is accomplished by making DELETE requests to the service, and passing the ID of the item to be deleted. As with updates, requests for items that don't exist will receive NotFound responses. Otherwise, a successful request will get a NoContent (204) response. [HttpDelete("{id}")] public IActionResult Delete(string id) { try { var item = _toDoRepository.Find(id); if (item == null) { return NotFound(ErrorCode.RecordNotFound.ToString()); } _toDoRepository.Delete(id); } catch (Exception) { return BadRequest(ErrorCode.CouldNotDeleteItem.ToString()); } return NoContent(); } Note that when testing the delete functionality, nothing is required in the Body of the request. Common Web API Conventions As you develop the backend services for your app, you will want to come up with a consistent set of conventions or policies for handling cross-cutting concerns. For example, in the service shown above, requests for specific records that weren't found received a NotFound response, rather than a BadRequest response. 
Similarly, commands made to this service that passed in model bound types always checked ModelState.IsValid and returned a BadRequest for invalid model types. Once you've identified a common policy for your APIs, you can usually encapsulate it in a filter. Learn more about how to encapsulate common API policies in ASP.NET Core MVC applications.
https://docs.microsoft.com/en-us/aspnet/core/mobile/native-mobile-backend
Complete Roguelike Tutorial, using python3+pysdl2, part 1

Graphics

Setting it up

Okay, now that we've got that out of the way, let's get our hands dirty!

Your project folder

Now create your project's folder. Inside it, create an empty file with a name of your choice. We're using manager.py. It'll make the tutorial easier to just use the same name, and you can always rename it later.

+-pysdl2-roguelike-tutorial/
| +-manager.py

If you chose to keep the SDL2 dlls at the project folder, it should now look like this:

+-pysdl2-roguelike-tutorial/
| +-manager.py
| +-README-SDL.txt
| +-SDL2.dll

You're ready to start editing manager.py!! :)

Wait, manager? I thought we were making a game...

SDL2 is a C library. PySDL2 is a python wrapper for that library. But remember we've said at the tutorial introduction that SDL2 is not a game engine? We're going to create some classes to make our lives easier - make it game engine'ish. It will take a few lines before we can see our character on the screen, but it will save us lots of time in the future.

If you don't care about the implementation of the Manager class, just download manager.py and constants.py and skip to the next section. The code should be reasonably well described, with lots of docstrings and comments, so that you may be able to understand it just by looking at it.

Showing the @ on screen

This first part will be a bit of a crash-course. The reason is that you need a few lines of boilerplate code that will initialize and handle the basics of an SDL2 window. And though there are many options, we won't explain them all or this part will really start to drag out. Fortunately the code involved is not as much as in many other libraries!

First we import the library sdl2.

import sdl2

Then, a couple of important values. It's good practice to define constants, special numbers that might get reused.
Constants are usually defined on a module level and written in all capital letters with underscores separating words, according to Python's style guide - it's not required, but it should make your code more readable for other people, so we're following it:

SCREEN_WIDTH = 25
SCREEN_HEIGHT = 18
TILE_SIZE = 32
LIMIT_FPS = 30

In this tutorial we're using a bitmap created from a regular font. I've done this myself and you can download it here.
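A quick sanity check on what these numbers mean (plain Python arithmetic, no SDL calls — the derived names below are just for illustration): the screen is measured in tiles, so the window's pixel size and the per-frame time budget follow directly from the constants:

```python
SCREEN_WIDTH = 25   # screen width, in tiles
SCREEN_HEIGHT = 18  # screen height, in tiles
TILE_SIZE = 32      # pixels per tile
LIMIT_FPS = 30      # frames-per-second cap

window_width = SCREEN_WIDTH * TILE_SIZE    # 800 pixels
window_height = SCREEN_HEIGHT * TILE_SIZE  # 576 pixels
frame_budget_ms = 1000 // LIMIT_FPS        # ~33 ms available per frame

print(window_width, window_height, frame_budget_ms)
```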
http://roguebasin.com/index.php?title=Complete_Roguelike_Tutorial,_using_python3%2Bpysdl2,_part_1&oldid=44286
Introduction

You are reading the documentation for Vue 3!

- Vue 2 documentation has been moved to v2.vuejs.org.
- Upgrading from Vue 2? Check out the Migration Guide.

What is Vue?

Vue (pronounced /vjuː/, like view) is a JavaScript framework for building user interfaces. It builds on top of standard HTML, CSS and JavaScript, and provides a declarative and component-based programming model that helps you efficiently develop user interfaces, be it simple or complex.

Here is a minimal example:

import { createApp } from 'vue'

createApp({
  data() {
    return {
      count: 0
    }
  }
}).mount('#app')

<div id="app">
  <button @click="count++">
    Count is: {{ count }}
  </button>
</div>

The above example demonstrates the two core features of Vue:

Declarative Rendering: Vue extends standard HTML with a template syntax that allows us to declaratively describe HTML output based on JavaScript state.

Reactivity: Vue automatically tracks JavaScript state changes and efficiently updates the DOM when changes happen.

You may already have questions - don't worry. We will cover every little detail in the rest of the documentation. For now, please read along so you can have a high-level understanding of what Vue offers.

Prerequisites

The rest of the documentation assumes basic familiarity with HTML, CSS and JavaScript. If you are totally new to frontend development, it might not be the best idea to jump right into a framework as your first step - grasp the basics then come back! Prior experience with other frameworks helps, but is not required.

The Progressive Framework

Vue is a framework and ecosystem that covers most of the common features needed in frontend development. But the web is extremely diverse - the things we build on the web may vary drastically in form and scale. With that in mind, Vue is designed to be flexible and incrementally adoptable.
Depending on your use case, Vue can be used in different ways:

- Enhancing static HTML without a build step
- Embedding as Web Components on any page
- Single-Page Application (SPA)
- Fullstack / Server-Side-Rendering (SSR)
- Jamstack / Static-Site-Generation (SSG)
- Targeting desktop, mobile, WebGL or even the terminal

If you find these concepts intimidating, don't worry! The tutorial and guide only require basic HTML and JavaScript knowledge, and you should be able to follow along without being an expert in any of these.

If you are an experienced developer interested in how to best integrate Vue into your stack, or you are curious about what these terms mean, we discuss them in more details in Ways of Using Vue.

Despite the flexibility, the core knowledge about how Vue works is shared across all these use cases. Even if you are just a beginner now, the knowledge gained along the way will stay useful as you grow to tackle more ambitious goals in the future. If you are a veteran, you can pick the optimal way to leverage Vue based on the problems you are trying to solve, while retaining the same productivity. This is why we call Vue "The Progressive Framework": it's a framework that can grow with you and adapt to your needs.

Single-File Components

In most build-tool-enabled Vue projects, we author Vue components using an HTML-like file format called Single-File Component (also known as *.vue files, abbreviated as SFC). A Vue SFC, as the name suggests, encapsulates the component's logic (JavaScript), template (HTML), and styles (CSS) in a single file. Here's the previous example, written in SFC format:

<script>
export default {
  data() {
    return {
      count: 0
    }
  }
}
</script>

<template>
  <button @click="count++">Count is: {{ count }}</button>
</template>

<style scoped>
button {
  font-weight: bold;
}
</style>

SFC is a defining feature of Vue, and is the recommended way to author Vue components if your use case warrants a build setup.
You can learn more about the how and why of SFC in its dedicated section - but for now, just know that Vue will handle all the build tools setup for you.

API Styles

Vue components can be authored in two different API styles: Options API and Composition API.

Options API

With Options API, we define a component's logic using an object of options such as data, methods, and mounted. Properties defined by options are exposed on this inside functions, which points to the component instance:

<script>
export default {
  // Properties returned from data() become reactive state
  // and will be exposed on `this`.
  data() {
    return {
      count: 0
    }
  },

  // Methods are functions that mutate state and trigger updates.
  // They can be bound as event listeners in templates.
  methods: {
    increment() {
      this.count++
    }
  },

  // Lifecycle hooks are called at different stages
  // of a component's lifecycle.
  // This function will be called when the component is mounted.
  mounted() {
    console.log(`The initial count is ${this.count}.`)
  }
}
</script>

<template>
  <button @click="increment">Count is: {{ count }}</button>
</template>

Composition API

With Composition API, we define a component's logic using imported API functions. In SFCs, Composition API is typically used with <script setup>. The setup attribute is a hint that makes Vue perform compile-time transforms that allow us to use Composition API with less boilerplate. For example, imports and top-level variables / functions declared in <script setup> are directly usable in the template.

Here is the same component, with the exact same template, but using Composition API and <script setup> instead:

<script setup>
import { ref, onMounted } from 'vue'

// reactive state
const count = ref(0)

// functions that mutate state and trigger updates
function increment() {
  count.value++
}

// lifecycle hooks
onMounted(() => {
  console.log(`The initial count is ${count.value}.`)
})
</script>

<template>
  <button @click="increment">Count is: {{ count }}</button>
</template>

Which to Choose?

First of all, both API styles are fully capable of covering common use cases. They are different interfaces powered by the exact same underlying system. In fact, the Options API is implemented on top of the Composition API! The fundamental concepts and knowledge about Vue are shared across the two styles.
The Options API is centered around the concept of a "component instance" (this, as seen in the example), which typically aligns better with a class-based mental model for users coming from OOP language backgrounds. It is also more beginner-friendly by abstracting away the reactivity details and enforcing code organization via option groups.

You can learn more about the comparison between the two styles and the potential benefits of Composition API in the Composition API FAQ.

If you are new to Vue, here's our general recommendation:

- For learning purposes, go with the style that looks easier to understand to you. Again, most of the core concepts are shared between the two styles. You can always pick up the other one at a later time.
- For production use:
  - Go with Options API if you are not using build tools, or plan to use Vue primarily in low-complexity scenarios, e.g. progressive enhancement.
  - Go with Composition API + Single-File Components if you plan to build full applications with Vue.

You don't have to commit to only one style during the learning phase. The rest of the documentation will provide code samples in both styles where applicable, and you can toggle between them at any time using the API Preference switches at the top of the left sidebar.

Still Got Questions?

Pick Your Learning Path

Different developers have different learning styles. Feel free to pick a learning path that suits your preference - although we do recommend going over all content if possible!
https://vuejs.org/guide/introduction.html
I created a program that accepts input from the keyboard and stores it in a string. Then it outputs the string to a file. This works, and is as follows:

Code:
#include <fstream>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string sentence;
    ofstream file;
    char filename[100];

    cout << "Enter text to be written to file: ";
    getline(cin, sentence);
    cout << endl;
    cout << "The text will be written to the current directory (contains the .cpp file)." << endl;
    cout << "The file will be called Output.txt" << endl;

    file.open("Output.txt", ios::out);
    file << sentence;
    file.close();

    cout << endl;
    cin.get();
    cout << endl;
    return 0;
}

The problem: In Unix, I have to redirect stdin to a file (a.out < file.txt). So, I want this to read all the contents of "file.txt" into the string. However, only the first line of the file is being read into the string. I think it probably has something to do with getline, but I'm not sure how to fix it. I thought getline was supposed to include all whitespace, but maybe it has something to do with Unix (\n or endl?). By the way, I'm trying to use a string instead of a char array. This is an example of the file:

Code:
This is sample text that needs to be read correctly.
More sample text is as follows.

Only the first line reads into the string. Thanks.
https://cboard.cprogramming.com/cplusplus-programming/104034-multiple-lines-string.html
This project was written using the same method as is recommended when using the Microsoft DirectX framework, although there is no library to build in DirectX 8 as there was in DirectX 7, so the files have to be included in the project. These files are now located in the folder mssdk\samples\multimedia\common, which includes both a src and an include directory. The best way to use these is to add the include directory to your DevStudio path through the Tools|Options Directories tab and then include the source files directly in your project.

One important note here is that DirectX 8 is much stricter about what it will let you do, due to its greater hardware dependency. In DirectX 7 I was always able to run a debug app in full screen and then add break points with no problems at all. I could even run the code on a laptop with no 3D card present. This is possible with DirectX 8, but this project runs in a window at five frames a second, so it would be hell to try and develop anything serious on it. This means that I'm going to have to recommend that all development takes place in a windowed environment and on a computer with a proper 3D card.

This project is meant as an introduction to DirectX 8, and the classes in the project are intended to be reused with the minimum of fuss, while the demo code is clearly identified so as to be easily removable. One-off initialization takes place in OneTimeSceneInit(), while the virtual function InitDeviceObjects() is where the main objects for the code are set up; it is re-entered whenever the device is re-created:

if( !bInitialScreen )
{
    bInitialScreen = true;
    if( FAILED( SetScreenSize( 800, 600 ) ) )
        return E_FAIL;
}

See "setting the screen size" below for a description of what this function does.
If you open the 3dApp.h file you'll notice a private member in the class: CD3DFont *m_pFont; This is used in this project to show the stats that appear in the top left hand corner of the screen, and if we trace it through the code we can see not only how it works but also get a quick recap of the way that objects should be treated in the code.

In the constructor the font is initialized:

m_pFont = new CD3DFont( _T( "Arial" ), 12, D3DFONT_BOLD );
m_pFont->InitDeviceObjects( m_pd3dDevice );

In the DeleteDeviceObjects function the font calls its own DeleteDeviceObjects function. This is done when the device needs to be deleted, for example when the project is closing down.

The next time we see it is in the render function, when it draws the application frame and device stats. It then calls its own version of InvalidateObjects in the application's invalidate objects function, which is called when the current devices are changed - for instance, when switching between windowed and full screen, or between the windowed application and the debugger. Again, in RestoreDeviceObjects, when the application once again has the main focus, the font object calls its own version of the RestoreDeviceObjects function. Finally, in the FinalCleanup function the font object is deleted.

While, hopefully, not being the most exciting class you'll ever come across, it does provide a useful template for how to design classes to work within the framework provided by DirectX. The CD3DFont class itself has three functions that are of interest apart from its use of the DirectX framework format.
These are:

HRESULT DrawText( FLOAT x, FLOAT y, DWORD dwColor,
                  TCHAR* strText, DWORD dwFlags=0L );
HRESULT DrawTextScaled( FLOAT x, FLOAT y, FLOAT z,
                        FLOAT fXScale, FLOAT fYScale, DWORD dwColor,
                        TCHAR* strText, DWORD dwFlags=0L );
HRESULT Render3DText( TCHAR* strText, DWORD dwFlags=0L );

In the current code we only use the DrawText function, but it might be interesting to look at what the others do in more detail at a later date.

The SetScreenSize function has had to be completely rewritten in the change to DirectX 8. This is due to the introduction of the notion of adapters and the changes to the way the bit depth works.

HRESULT SetScreenSize( int nHoriz, int nVert, BOOL bWindowed = FALSE,
                       D3DFORMAT d3dFormat = D3DFMT_X8R8G8B8 );

The final parameter has been changed to a D3DFORMAT parameter, which is a value representing the different types of (in the case of the render target) pixel format. We will only be concerned with two values here: the D3DFMT_X8R8G8B8 format, a 32-bit RGB (Red, Green, Blue) format where eight bits are reserved for each colour, and the D3DFMT_R5G6B5 format, a 16-bit RGB format. The code is set up to default to the 32-bit value.

The rest of the SetScreenSize function is shown below.
HRESULT hResult = S_OK;
int nWidth, nHeight;

/// Get access to the newly selected adapter, device, and mode
DWORD dwDevice;
dwDevice = m_Adapters[ m_dwAdapter ].dwCurrentDevice;

for( int i = 0; i < ( int )m_Adapters[ m_dwAdapter ].devices[ dwDevice ].dwNumModes; i++ )
{
    if( m_Adapters[ m_dwAdapter ].devices[ dwDevice ].modes[ i ].Format == d3dFormat )
    {
        nWidth  = ( int )m_Adapters[ m_dwAdapter ].devices[ dwDevice ].modes[ i ].Width;
        nHeight = ( int )m_Adapters[ m_dwAdapter ].devices[ dwDevice ].modes[ i ].Height;

        if( nWidth == nHoriz && nHeight == nVert )
        {
            m_Adapters[ m_dwAdapter ].devices[ dwDevice ].dwCurrentMode = i;
            m_Adapters[ m_dwAdapter ].devices[ dwDevice ].bWindowed = bWindowed;
            m_bWindowed = bWindowed;
            break;
        }
    }
}

/// Release all scene objects that will be re-created for the new device
InvalidateDeviceObjects();
DeleteDeviceObjects();

/// Release display objects, so a new device can be created
if( m_pd3dDevice->Release() > 0L )
{
    return DisplayErrorMsg( D3DAPPERR_NONZEROREFCOUNT, MSGERR_APPMUSTEXIT );
}

/// Inform the display class of the change. It will internally
/// re-create valid surfaces, a d3ddevice, etc.
hResult = Initialize3DEnvironment();
if( FAILED( hResult ) )
    return DisplayErrorMsg( hResult, MSGERR_APPMUSTEXIT );

The function changes the screen size to the requested size and sets it full screen or not depending on the value passed in bWindowed, which must be TRUE if running in debug mode. The main difference is the use of the notion of adapters, which to you and me means video card, though it is possible for a video card to have more than one adapter if it has built-in multi-monitor support. The devices that each adapter supports are the video modes - e.g. 640x480 in 16-bit, 800x600 in 32-bit, etc. - that the card can run.

The object displayed on screen is an .x file, which is basically a list of vectors for the DirectX code to draw.
This file does also contain texture information, but for now just loading it and getting it to the screen correctly should be good enough. The object will be loaded into a CD3DMesh class, which is located in the header file d3dfile.h. This class will contain all the information needed for manipulating the .x file that the code will load. The mesh is loaded with the code:

HRESULT hResult = m_pObjectMesh->Create( m_pd3dDevice, _T( "dolphin.X" ) );
if( FAILED( hResult ) )
    return D3DAPPERR_MEDIANOTFOUND;

This code loads the dolphin.x file, although there is nothing to prevent the name of any other file being used here. It should be noted that at the moment the file has to be in the current directory, and that due to the different sizes of the meshes it is possible that the object loaded will at first be too small, too big, or not visible at all, which will mean that some changes to the view will have to be made - these will be discussed in a while.

If you look through the code in the 3dapp.cpp file you can see that the control flow of the CD3DMesh is exactly the same as that for the font object, in that in each of the overloaded functions a function is called by the m_pObjectMesh object that will keep the object synchronised with the rest of the application.

There are three things that need to be covered now. At the moment we have a computer screen and an object that we want to draw on it in 3D, but how does the computer know where to draw the object? Simple: we tell it.
D3DXMATRIX matWorld;
D3DXMATRIX matView;
D3DXMATRIX matProj;

/// Set the rotation speed
D3DXMatrixRotationY( &matWorld, timeGetTime()/550.0f );
m_pd3dDevice->SetTransform( D3DTS_WORLD, &matWorld );

D3DXMatrixLookAtLH( &matView, &m_vecEye, &vecAt, &vecUp );
m_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );

D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI/4, 1.0f, 1.0f, -1.0f );
m_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );

In the Render function we define three objects of type D3DXMATRIX: one for the world, one for the view, and one for the projection. We then set these matrices the way we want them by calling the DirectX 8 helper functions, and finally call SetTransform on m_pd3dDevice (which is the drawing device).

The world matrix (matWorld) specifies the way the world will look. Here the code tells it to set the matrix to rotate on the y axis, and because this is the Render function, which gets called a number of times a second, the code will tell it to rotate a bit more every single time through. Then, when the SetTransform function is called, any objects in the world will rotate on the y axis. Raising or lowering the value that divides the return value of timeGetTime (the number of milliseconds since Windows started) will slow down or speed up the rotation of the object respectively.

The view matrix (matView) specifies the way that the screen is looking in on the world. This calls the function D3DXMatrixLookAtLH, which sets the way the world is looked at in a left-handed world, meaning that the x axis of the screen will be 0 at the leftmost point and its highest value at the rightmost point. SetTransform is then called with the D3DTS_VIEW constant to tell DirectX that this is the viewing style that is wanted.

The projection matrix (matProj) deals with the way that objects are perceived and scaled within the 3D world created. This is why things shrink when they are moved further away.
When the code for this project is run, the object at first comes in from the left hand side of the screen and then moves backwards and forwards a bit, sliding slightly to the left as it moves forward and slightly to the right as it moves backwards. Well, it doesn't. The object remains perfectly stationary throughout the entire program. The only code that deals with moving the object in this example is the world matrix code that rotates it a little bit every time the render function is called. In fact, when the code starts, the object is behind you. As you move back and to the left, the object comes into view and centers itself in the middle of your screen, and when it appears to move towards you and to your left, it is because you are moving forwards and to the right. This is all done in the FrameMove function with the code:

static bool bForward = true;

if( bForward )
{
    if( m_vecEye.z > 1000.0f )
        bForward = false;
    m_vecEye.x += 3.0f;
    m_vecEye.z += 10.0f;
}
else
{
    if( m_vecEye.z < 400.0f )
        bForward = true;
    m_vecEye.x -= 3.0f;
    m_vecEye.z -= 10.0f;
}

The changes are made to the m_vecEye D3DXVECTOR3 (note: when I say vector I mean a D3DXVECTOR3, which inherits from a vector and adds some functionality), a class member that stores the eye position of the view when the matView matrix is created:

D3DXMatrixLookAtLH( &matView, &m_vecEye, &vecAt, &vecUp );

A vector contains the x, y and z coordinates for a position: x is the position from the left of the center of the world position, with a negative x moving to the left; y is the position from the base of the world position, with a positive y value moving towards the top of the screen and a negative y value moving down towards the bottom; and z is the depth. Of the vectors passed to the function above, the eye position is the screen position within the world, which is currently the center of the screen.
The at vector is the position that the screen is facing, and the up vector tells the matrix where up is. The whole effect is controlled by the value contained in m_vecEye.z: when it goes above 1000 the view stops and starts to move backwards, until the value drops below 400, where it stops and starts to go forwards again. These values should be experimented with to get the same effects when using different models.

There should be no problems for anyone building the sample, as the .cpp files are included in the zip file. This has been done due to me adding the line

#include "stdafx.h"

in all of the .cpp files. This is because I prefer to include MFC, and I probably will use some of it at some point. It's not done for technical reasons but because this is how I work. You can remove them and use the standard .cpp files if you prefer. The only possible problem you should have is the need to include the path to the common header files in the Tools\Options\Directories section of Developer Studio. These can be found under mssdk\samples\multimedia\common\include.

The demo code will start at

/// DEMO CODE

and end with

/// END DEMO CODE

By removing the code between these lines you will be left with a template to start developing your own.
http://www.codeproject.com/Articles/1402/DirectX-Template?msg=151596
Qt5 & invokeMethod

J.Hilk (Moderator):

Hello everyone, I have a - hopefully - simple question: is there a Qt5-syntax-conform way to call QMetaObject::invokeMethod? For example, I use something like this to call a function in the next event loop cycle:

QMetaObject::invokeMethod(this, "myFunction", Qt::QueuedConnection);

But it is highly inconvenient:

- The function call is not listed when you search for it!
- Because "myFunction" is essentially a string, it's not affected by simple refactoring, e.g. renaming.
- And of course there is no compile-time validation check.

I know I could potentially use a QTimer:

QTimer::singleShot(0, this, &myClass::myFunction);

But that feels like circumventing the problem, and also I'm not sure if those two would always behave in the same way!

raven-worx (Moderator):

@J.Hilk the 2 methods behave the same. QMetaObject::invokeMethod() only accepts the method as a string because it is using the object's metaObject, which also only stores the methods as strings. You can check the return value of invokeMethod() and add an assertion:

bool check = QMetaObject::invokeMethod(...);
Q_ASSERT( check );

Just make sure not to wrap the invokeMethod() call itself in Q_ASSERT, otherwise it won't be executed in release mode. But you could add your custom assertion macro for this as well:

#ifdef QT_DEBUG
#define MY_ASSERT(CODE) Q_ASSERT(CODE)
#else
#define MY_ASSERT(CODE) CODE
#endif

MY_ASSERT( QMetaObject::invokeMethod(...) );

or alternatively for both debug AND release (the original post's version was untested and did not compile; a working form might look like this):

#define MY_ASSERT(CODE) do { if( !(CODE) ) qFatal( "%s wasn't successful", #CODE ); } while( 0 )

MY_ASSERT( QMetaObject::invokeMethod(...) );

Chris Kawa (Moderator):

In Qt 5.10 (currently in alpha) there's gonna be an overload taking a pointer to function, so you'll be able to do:

QMetaObject::invokeMethod(this, &myClass::myFunction, Qt::QueuedConnection);

For now I'd use the timer.
The 0 timeout special case is documented, so there's nothing wrong in using it.

J.Hilk (Moderator):

Thanks @raven-worx and @Chris-Kawa for the quick answers.

@raven-worx said in Qt5 & invokeMethod:

> the 2 methods behave the same.

I know that they seem to behave in the same way. But creating a timer object, running and processing the timer/timeout, and deleting the object afterwards feels like it should be more work than queuing a function call via Q_Object macro magic. However, the compiler may break it down to the same thing after all.

@raven-worx said in Qt5 & invokeMethod:

> You can check the return value of invokeMethod() and add an assertion.

That is actually a good idea; that way one would always get an immediate notification when the call becomes invalid due to changes made. Still inconvenient, but much less so.

@Chris-Kawa said in Qt5 & invokeMethod:

> In Qt 5.10 (currently in alpha) there's gonna be an overload taking a pointer to function

YES, thanks for the info, I was not aware of that change. Now I'm hyped for Qt 5.10 even more than before! With that said, I would consider my question answered. Thanks everyone.
https://forum.qt.io/topic/83278/qt5-invokemethod
On Fri, May 11, 2012 at 08:09:52PM -0500, Rob Landley wrote:
> However, 95% of this use case is already covered by FAT, considering
> that most of these people are going to want to interchange with windows
> and mac, neither of which are necessarily happy with an ext3 formatted
> USB stick. (Sadly, that's what I normally do. My usb keychain is fat
> formatted, because otherwise I can't use it to give a PDF to the guy at
> kinko's to print out. I suspect this is why it hasn't previously come up
> much.)

The other reason why I suspect it hasn't come up often is that USB
sticks are so painfully slow that the file system really isn't a
bottleneck.

I would expect this might be different if you were using a removable
HDD (or even an SSD) with a USB 3.0 interface. In that case you
really might want a better file system than VFAT, especially if you are
interchanging with another Linux system with an incompatible uid/gid
namespace. That's not nearly as common a use case, though.

 - Ted
https://lkml.org/lkml/2012/5/13/2
On Sat, Apr 24, 1999 at 01:37:28PM -0700, Robert Woodcock wrote:
> There is no practical reason why every piece of UNIX configuration can't be
> put into a hierarchical tree with its own unique namespace. [1]

Oh yes, you have this idea from Microsoft, right? :)

> It's overweight if you have each program implement it all over again, as is
> currently done. It's not overweight as a dynamically linked library that's
> done right.

And the Parse Tree which is generated at runtime over the complete root element?

> Those structures and tag names are *documentation*.

Yes, just like the config files are documented right now... where is the difference?

Greetings
Bernd
https://lists.debian.org/debian-devel/1999/04/msg01478.html
Scrape Twitter data with Python

A complete guide on collecting and storing Tweets.

A couple of weeks back, I was working on a project that required me to analyze data from Twitter. After a quick Google search, I realized that the most popular way to do this was with the Twitter API, typically accessed through Tweepy, a Python library that wraps it. There are various levels of access you can get depending on what you want to use it for.

However, Tweepy has its limitations. Firstly, you will need to create a Twitter Developer Account and apply for API access. You'll need to answer a series of questions to do this, which is incredibly time consuming. Even once you get approved, there is a limit on the number of Tweets you can scrape. To get around this, I started looking at alternatives to Tweepy.

Twint

Twint is an advanced Twitter scraping tool written in Python that allows for scraping Tweets from Twitter profiles without using Twitter's API. While the Twitter API only allows you to scrape 3200 Tweets at once, Twint has no limit. It is very quick to set up, and you don't need any kind of authentication or access permission.

Start scraping

First, install the Twint library:

pip install twint

Then, run the following lines of code to scrape Tweets related to a topic. In this case, I'm going to scrape every Tweet that mentions Taylor Swift:

import twint

c = twint.Config()
c.Search = ['Taylor Swift']           # topic
c.Limit = 500                         # number of Tweets to scrape
c.Store_csv = True                    # store Tweets in a csv file
c.Output = "taylor_swift_tweets.csv"  # path to csv file

twint.run.Search(c)

Finally, all you need to do is read the .csv file back into a data frame:

import pandas as pd

df = pd.read_csv('taylor_swift_tweets.csv')

Taking a look at the head of the data frame, you will see that the contents of all Tweets are stored in the 'tweet' column:

df['tweet']

Running the above line of code will render the content of all Tweets. The code above just scratches the surface of what you can do with Twint.
You can tailor the output according to your needs - you can even filter Tweets by time frame or language.
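For instance, once the Tweets are in a data frame, the time-frame or language filtering can be done with plain pandas. (A sketch only - the `date` and `language` column names here are assumptions about the CSV layout; check your own file's headers.)

```python
import pandas as pd

# Toy rows standing in for a scraped CSV; the column names are assumptions.
df = pd.DataFrame({
    "date": ["2022-01-05", "2022-03-15", "2022-06-30"],
    "language": ["en", "es", "en"],
    "tweet": ["first tweet", "segundo tweet", "third tweet"],
})
df["date"] = pd.to_datetime(df["date"])

# Keep only English Tweets posted before June 2022.
mask = (df["language"] == "en") & (df["date"] < "2022-06-01")
filtered = df[mask]
print(filtered["tweet"].tolist())
```

The same mask-based filtering works on the real scraped file after `pd.read_csv`.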
https://www.natasshaselvaraj.com/how-to-scrape-twitter/
Chapter 4 (From IPRE Wiki)

Sensing the World

Cole Sear in Shyamalan's Sixth Sense is not referring to dead bodies lying in front of him (for those who have not seen the movie). The five senses that most humans relate to are: touch, vision, balance, hearing, and taste or smell. In all cases our bodies have special sensory receptors that are placed on various parts of the body to enable sensing. For example, the taste receptors are concentrated mostly on the tongue, and the touch receptors are most sensitive on the hands and the face and least on the back and limbs, although they are present all over the body. Besides the differences in the physiology of each kind of receptor, there are also different neuronal pathways, and thereby different sensing mechanisms, built into our bodies. Functionally, we can say that each type of sensory system starts with the receptors, which convert the thing they sense into electrochemical signals that are transmitted over neurons. Many of these pathways lead to the cerebral cortex in the brain where they are further processed (like, "Whoa, that jalapeno is hot!!").

Sensing is an essential component of being a robot, and every robot comes built with sensors that are capable of sensing different environmental conditions. It is not uncommon, for example, to find sensors that are capable of sensing light, temperature, touch, distance to another object, etc. Robots employ electromechanical sensors, and there are different types of devices available for sensing the same physical quantity. For example, one common sensor found on many robots is a proximity sensor. It detects the distance to an object or an obstacle. Proximity sensors can be made using different technologies: infrared light, sonars, or even lasers. Depending upon the type of technology used, their accuracy, performance, and cost vary: infrared (IR) being the cheapest, and laser being on the expensive side.
Your Scribbler robot has a small set of sensors that can be used to detect certain features of its environment.

Scribbler Sensors

The picture below shows two of the Scribbler's four sensors. In this section we will learn about their behavior and the physical quantities they detect.

List of Scribbler Sensors:

- Light: There are three light sensors present on the robot, located in the three holes on the front. These sensors can detect levels of brightness (or darkness). Using these, the Scribbler can be made to detect variations in ambient light in a room.
- Proximity: At the front of the robot you will notice two tiny lamps inserted as "headlights" on either side of the robot. These are IR emitters. The light emitted by these, if reflected by the presence of an obstacle, will bounce back towards the robot and is captured by the IR sensor present in the tiny notch in the middle of the two IR emitters (see figure above).
- Line: If you tip over your Scribbler and look at the front portion of the chassis (away from the third wheel), you will notice two pairs of tiny holes. These are also IR emitters and receivers, and can be used to detect lines on the floor.
- Stall: There is also an internal sensor in the robot that tells it whether it is stalled or stuck when it is trying to move.

Getting to know the sensors

Sensing using the sensors provided in the Scribbler is easy. Once you are familiar with the details of the sensor behaviors, you will be able to use them in your programs to design interesting creature-like behaviors for your Scribbler. But first, we must spend some time getting to know these sensors. This is a hands-on exercise, so be sure to bring your Scribbler with you (and make sure that your batteries are still fresh). Start Python, connect to the robot, and then enter the command:

>>> joyStick(1)

Earlier, in Chapter 1, when you used this command, you did not supply any parameters.
But when you supply a parameter (the value 1 shown above), the joystick window opens up with an additional display of all the sensor values being reported by the robot. These values are updated as the robot moves, or at least once every second. A sample joystick window with the sensor display is shown below.

The display under the joystick contains four sets of sensor values being reported by the Scribbler. These are described in more detail below.

Scribbler Sensor Values:

- Line: The line sensors are capable of detecting a bright edge against a dark edge (one that forms a line). The display shows the values of the left and right sensors (both are 1). If the right sensor is on a dark area of an edge or a line and the left sensor is on a lighter area or edge, the sensor values would be (left=0, right=1). If the edge is flipped, the values would be (left=1, right=0).
- IR: The IR values displayed are also left and right (in the same left-right orientation). Their values will be either 1 or 0: if an obstacle is detected, the value will be 0, and 1 otherwise.
- Stall: The stall sensor detects that the robot is not moving even though the motors are - that is, it is stuck somewhere. Its values can be 0 or 1: 0 implies it is not stalled, and 1 implies it is.

Do This: Since you have the robot connected and showing the joystick display with sensor values, try to move the robot around (either with your hand or with the joystick) and observe the sensor values. For the light sensors, try covering the sensors, turning the lights on or off in the room (if feasible), or picking up the robot and turning it towards or away from the light. Again, observe the values reported. Shine a bright flashlight into the sensor holes and observe the values (they should be really low). Similarly, place the robot so that it is on a thick black line. Also try the edge of your mouse pad (mouse pads tend to be of darker colors). Notice the values that are reported.
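To make the 0/1 conventions above concrete, here is a small pure-Python sketch (no Myro calls - the inputs stand in for the values shown in the joystick display) that turns a pair of IR readings into a description, remembering that 0 means an obstacle was detected:

```python
def describe_ir(left, right):
    """Interpret Scribbler IR readings: 0 = obstacle detected, 1 = clear."""
    if left == 0 and right == 0:
        return "obstacle straight ahead"
    elif left == 0:
        return "obstacle on the left"
    elif right == 0:
        return "obstacle on the right"
    else:
        return "path is clear"

print(describe_ir(1, 1))  # path is clear
print(describe_ir(0, 1))  # obstacle on the left
```

A real program would feed such a function with live readings from the robot instead of hard-coded values.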
Place various objects in front of the robot and look at the values of the IR proximity sensors. Take your notebook and place it in front of the robot, about two feet away. Slowly move the notebook closer to the robot. Notice how the value of the IR sensor changes from a 1 to a 0, and then move the notebook away again. Can you guess how near the obstacle has to be before it is detected (or cleared)? Try moving the notebook from side to side. Again, notice the values of the IR sensors.

Using the joystick, move the robot around and, on purpose, make the robot go into a wall or an obstacle so it gets stuck. Read the value of the stall sensor. Make sure you do this enough times so that you are comfortable with the performance of your robot's sensors. If the left-right orientation/labelling is confusing, feel free to mark the sensors with a pen on the robot itself.

In the remainder of this chapter, we will learn how we can access these sensor values through commands/functions in the Myro library. Then we will design many interesting behaviors for your Scribbler.

Exercise 1: Place your Scribbler on the floor, turn it on, start Python, and connect to it. Issue the joystick command (shown above) to get the joystick display. Now, our objective here is to really "get into the robot's mind" and drive it around without ever looking at the robot. You can use the information displayed by the sensors to navigate the robot. Try driving it to a dark spot, or to the brightest spot in the room. Try driving it so it never hits any objects. Can you detect when it hits something? If it does get stuck, try to maneuver it out of the jam! This exercise will give you a pretty good idea of what the robot senses, how it can use its sensors, and the range of behaviors it may be capable of. Place a sheet of paper with a thick black line on it (your instructor will provide one for the lab) and, without looking at the robot or the line, see if you can make the robot follow the line.
You will find this exercise a little hard to carry out, but it will give you a good idea as to what should go into the brains of such robots when you actually try to design them. We will try to revisit this scenario as we build various robot programs.

Proprioception

Proprioception is the term used to refer to the phenomenon of internal sensing. Here is an illustration of what this means:

Do This: Get something really delicious to eat, like a cookie, or a piece of chocolate, or candy (whatever you fancy!). Hold it in your right hand, and let your right arm hang naturally at your side. Now close your eyes, real tight, and try to eat the thing you are holding. Piece of cake! (Well, whatever you picked to eat. :-) Without fail, you were able to pick it up and bring it to your mouth, right? Give yourself a Snickers moment and enjoy the treat. Then read on...

This is because of proprioception. Your body's sensory system also keeps track of the internal state of your body parts, how they are oriented, etc. Proprioception in robots refers to the measurement of movement relative to the robot's internal frame of reference. Sometimes also called dead reckoning, it can be a useful additional sensing mechanism that you can use to design robot behaviors. In fact, the sense that the robot has stalled is a kind of proprioception. Another useful proprioceptory mechanism that exists is time. You have already seen how you can use the wait command to do something for a specific amount of time. In the next exercise, we will ask you to use time to accomplish a robot task.

Exercise 2: Design a robot program for the Scribbler to draw a square (say, with sides of 6 inches). To accomplish this, you will have to experiment with the movements of the robot and correlate them with time. The two movements you have to pay attention to are the rate at which the robot moves when it travels in a straight line, and the degree of turn with respect to time.
You can write a function for each of these:

def travelStraight(distance):
    # Travel in a straight line for distance inches
    ...

def degreeTurn(angle):
    # Spin a total of angle degrees
    ...

That is, figure out by experimentation on your own robot (the results will vary from robot to robot) what the correlation is between distance and time for each type of movement above, and then use that to define the two functions. For example, if a robot (hypothetical case) seems to travel at the rate of 25 inches/minute when you issue the command translate(1.0), then to travel 6 inches you will have to translate for a total of (6*60)/25 = 14.4 seconds.

Try moving your robot forward for varying amounts of time at the same fixed speed. For example, try moving the robot forward at speed 0.5 for 3, 4, 5, and 6 seconds. Record the distance travelled by the robot for each of those times (you will notice a lot of variation in the distance even for the same set of commands, so you may want to average several runs). Given this data, you can estimate the average amount of time it takes to travel an inch. Similarly for turning: try turning the robot at the same speed for varying amounts of time. Experiment to find how long it takes the robot to turn 360 degrees, 720 degrees, etc. Again, average the data you collect to get the amount of wait time you will need for each degree. Once you have figured out the details, use them to write the two functions above. Then use the following main program:

def main():
    # Transcribe a square of sides = 6 inches
    for side in range(4):
        travelStraight(6.0)
        degreeTurn(90.0)
    print "Done."

main()

Run this program several times. It is unlikely that you will get a perfect square each time. This has to do with the calculations you performed as well as with the variation in the robot's motors. They are not precise. Besides, it generally takes more power to start moving from a standstill than to keep moving. Since you have no way of controlling this, at best you can only approximate this type of behavior.
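The calibration arithmetic above can be captured in a small sketch. The constants and function names below are our own made-up examples; each robot needs its own measured values, obtained exactly as described in the exercise:

```python
# Hypothetical calibration constants -- illustrative values only; measure
# your own robot's rates as described in the exercise.
INCHES_PER_SECOND = 25.0 / 60.0    # e.g. 25 inches per minute at full speed
SECONDS_PER_DEGREE = 4.0 / 360.0   # e.g. 4 seconds for one full 360-degree spin

def straightDuration(distance):
    # Seconds of straight travel needed to cover `distance` inches.
    return distance / INCHES_PER_SECOND

def turnDuration(angle):
    # Seconds of spinning needed to turn `angle` degrees.
    return angle * SECONDS_PER_DEGREE
```

With these, travelStraight(6.0) would translate for straightDuration(6.0) = 14.4 seconds, matching the worked example in the text.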
Over time, you will also notice that the error will accumulate. This will become evident in doing the exercise below.

Exercise 3: Building on the ideas from the previous exercise, we could further abstract the robot's drawing behavior so that we can ask it to draw any regular polygon (given the number of sides and the length of each side). Write the function:

def drawPolygon(SIDES, LENGTH):
    # Draw a regular polygon with SIDES number of sides,
    # each side of length LENGTH.

Then, we can write a regular-polygon-drawing robot program as follows:

def main():
    # Given the number of sides and the length of each side, draw a regular polygon
    # First ask the user for the number of sides and side length
    nSides = input("Enter the number of sides in the polygon you want me to draw: ")
    sideLength = input("Enter the length of each side (in inches): ")
    # Draw the polygon
    drawPolygon(nSides, sideLength)
    print "Done."

main()

To test the program, first try drawing a square with sides of 6 inches as in the previous exercise. Then try a triangle, a pentagon, a hexagon, etc. Try a polygon with 30 sides of length 0.5 inches. What happens when you give 1 as the number of sides? What happens when you give zero (0) as the number of sides? Note your robot's behavior in each of these cases and write a short report.

Exercise 4: Write a robot program to make your Scribbler draw a five-point star. [Hint: Each vertex in the star has an interior angle of 36 degrees.]

Exercise 5: Experiment with the Scribbler movement commands and learn how to make it transcribe a path of any given radius. Using this, write a program to draw a circle of any input diameter.

Exercise 6: Try writing a program to draw other shapes: the outline of a house, a stadium, or create art by inserting pens of different colors (you can write the program so that the robot stops and asks you for a new color).
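To draw a regular polygon the robot must turn through the exterior angle, 360/SIDES degrees, at each vertex. A hardware-free sketch of that logic follows; the plan-list representation and both function names are our own illustration, not part of Myro, and a real drawPolygon would call travelStraight and degreeTurn instead of building a list:

```python
def polygonTurnAngle(sides):
    # Exterior angle the robot turns at each vertex of a regular polygon.
    return 360.0 / sides

def drawPolygonPlan(sides, length):
    # Return the sequence of moves as (action, amount) pairs, so the
    # geometry can be checked without driving a robot.
    plan = []
    for side in range(sides):
        plan.append(("straight", length))
        plan.append(("turn", polygonTurnAngle(sides)))
    return plan
```

For a square this yields four straight segments with 90-degree turns, the same numbers used in Exercise 2.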
Exercise 7: If you had an open rectangular lawn (with no trees or obstructions in it) you could use a Zamboni-like strategy to mow the lawn: start at one end of the lawn, mow the entire length of it along the longest side, turn around and mow the entire length again next to the previously mowed area, and so on until you are done. Write a program for your Scribbler to implement this strategy (make the Scribbler draw its path as it goes).

Random Walks

One way you can do some interesting things with robot drawings is to inject some randomness into how long the robot does something. Python, like most programming languages, provides a library for generating random numbers. Generating random numbers is an interesting process in itself, but we will save that discussion for a later time. Random numbers are very useful in all kinds of computer applications, especially games, and in simulating real-life phenomena (as in estimating how many cars might enter an already crowded highway at the peak of rush hour). In order to access the random number generating functions in Python you have to import the random library:

from random import *

There are lots of features available in this library but we will restrict ourselves to just two functions for now: random and randrange. These are described below:

- random() Each time you invoke this function it returns a random number between 0.0 and 1.0.
- randrange(A, B) Returns a random number in the range [A..B-1].

Here is a sample interaction with these two functions. As you can see, using the random number library is easy enough, similar to using the Myro library for robot commands. Given the two functions, it is entirely up to you how you use them.
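As a sketch, a session with these two functions might go like the following; the exact values will differ every run, since they are random:

```python
from random import random, randrange

r = random()
print(r)            # some value in the range [0.0, 1.0)

n = randrange(3, 8)
print(n)            # one of 3, 4, 5, 6, or 7
```

Note that randrange(3, 8) can return 7 but never 8, per the [A..B-1] convention described above.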
Look at the program below:

def main():
    # generate 10 random polygons
    for poly in range(10):
        # generate a random polygon and draw it
        userInput = input("Place a new color in the pen port and enter any number: ")
        sides = randrange(3, 8)
        size = randrange(2, 6)
        drawPolygon(sides, size)
    # generate a random walk of 20 steps
    for step in range(20):
        travelStraight(random())
        degreeTurn(randrange(0, 360))

The first loop in the program draws 10 random polygons. The second loop carries out a random walk of 20 steps.

Python: Asking Questions?

Let us say you wanted to design a Scribbler artist (see Exercise 8 below). As you can see from above, it is easy to program various kinds of movements into the Scribbler. If there is a pen in the pen port, the Scribbler draws a path. However, as an artist, that would restrict the Scribbler to drawing in only one color: whatever it is you place in its pen port. As you can see in the example above, though, we can stop the program temporarily, pretend that we are taking some input, and use that as an opportunity to change the pen and then go on. Above, we used the Python input command to accomplish this. There is a better way to do this, and it uses a function provided in the Myro library:

>>> askQuestion("Are you ready?")

When this function is executed, a dialog window pops up as shown below. When you press your mouse on any of the choices (Yes/No), the window disappears and the function returns the name of the button selected by the user as a string. That is, if in the above window you pressed the Yes button, the function will return the value:

>>> askQuestion("Are you ready?")
'Yes'

The askQuestion command can be used in the program above as follows:

askQuestion("Change my pen to a different color and press 'Yes' when ready.")

While this is definitely more functional than our previous solution, we can actually do better. For example, what happens when the user presses the No button in the above interaction?
One thing you know for sure is that the function will return the string 'No'. However, the way we are using this function, it really does not matter which button the user presses. askQuestion is designed so it can be customized: you can specify how many button choices you want to have in the dialog window as well as what the names of those buttons should be. Here is an illustration of how you would write a better version of the above command:

askQuestion("Change my pen to a different color and press 'OK' when ready", ["OK"])

Now this is certainly better. Notice that the function askQuestion can be used with either one parameter or two. If only one parameter is specified, then the default behavior of the function is to offer two button choices: 'Yes' and 'No'. However, using the second parameter you can specify, in a list, any number of strings that will become the choice buttons. For example:

askQuestion("What is your favorite ice cream flavor?", ["Vanilla", "Chocolate", "Mango", "Hazelnut", "Other"])

This will be a very handy function to use in many different situations. In the next exercise, try to use this function so that you become familiar with it.

Exercise 8: Write a Scribbler program of your own that exploits the Scribbler's movements to make random drawings. Make sure you generate drawings with at least three or more colors. Because of random movements, your robot is likely to run into things and get stuck. Help your robot out by picking it up and placing it elsewhere when this happens.

Speaking of...

In writing robot programs you have seen how placing print commands can help improve the interface of your programs. The messages printed on the screen are quite informative and are thus an essential component of the user interface of the program. Most programs are really written not just to be used by their programmers but by other people as well.
Most of the time, when other people use your program they may not be interested in how your program works or in its inner details. Thus, by printing informative statements from your program you can create a more usable interface. The input commands also take messages that are printed out as user prompts. The same is the case for the askQuestion command. In general, your goal in writing robot programs should be to make the interface as friendly, useful, and informative as possible. In that vein, the Myro library also includes some additional functions that can be used to extend the ways in which your program/robot interacts with you or the user. You can actually use speech as an output modality. For example, try the command:

>>> speak("Top of the morning to you!")

Your computer has a speech generation system built into it. Though it is rudimentary, you will find it quite useful (and entertaining). When you issue the above command, a default voice assigned to the system is used to speak out the text you specify. You can find out the name of the default speaker in your system:

>>> getVoice()
'MSSam'

That is, 'MSSam' is currently the assigned speech model. On most systems, you get a choice of several speakers. You can find out what other options are available:

>>> getVoices()
['MSMike', 'MSMary', 'MSSam', 'LHMICHAEL', 'LHMICHELLE']

On my computer, I have the above 5 choices available. You can select a specific speaker out of these using the setVoice command:

>>> setVoice('MSMary')
>>> speak("Change my pen to a different color and press 'OK' when ready")

You can even make this a choice for the user by defining a new function:

def pickVoice():
    setVoice(askQuestion("Pick a voice for me, please.", getVoices()))

Take a careful look at the above definition and make sure you understand how it works.
Each time you call pickVoice, you will get a dialog window. Then, depending on which button you select, the string returned by askQuestion is used to set the voice. Try writing your robot programs so that they make good use of these different output modalities.

Talk among yourselves

What kinds of things can your robot talk about? A robot could report back its state, for example, by saying things like "I see light on the left" or "There is an obstacle in front of me." But the robot can also "talk" about other things, like the time or the weather. If you wanted to get the current time and date, the easiest way might be to import another Python library called "time". You can do that with:

>>> import time

You can then use the function called "localtime" like so:

>>> time.localtime()

localtime returns all of the following in order:

- year
- month
- day
- hour
- minute
- seconds
- weekday
- day of the year
- whether it is using daylight savings time, or not

>>> time.localtime()
(2007, 5, 29, 12, 15, 49, 1, 149, 1)

In this example, it is May 29, 2007 at 12:15pm and 49 seconds. The weekday value is 1 (Python counts Monday as 0, so this is a Tuesday), it is the 149th day of the year, and daylight savings time is in effect.

Exercise 9: Modify your program from Exercise 8 to make use of speech. Make the robot, as it is carrying out its random movements, speak out what it is doing. As a result you will have created a robot artist!

Proprioception: Internal Clock

As you saw earlier, proprioception or internal sensing mechanisms are built into humans as well as robots. The wait command, for instance, causes the robot to wait for the specified number of seconds before proceeding to the next command in a program. This also implies that there must be an internal clock mechanism built into the robot. In fact, internal clocks are an essential component of any computing device.
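The value returned by localtime can also be accessed by field name rather than by position, which is often clearer. A quick sketch:

```python
import time

t = time.localtime()
# The same nine values listed above are available as named attributes.
print("Year:", t.tm_year)
print("Month:", t.tm_mon)
print("Weekday (Monday is 0):", t.tm_wday)
print("Day of year:", t.tm_yday)
```

A talking robot could feed these values into speak() to announce the date aloud.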
Most programming languages also allow you to access this internal clock to keep track of time, or of time elapsed (as in a stopwatch), or in any other way you may want to make use of time (as in the case of the wait function). The Myro library provides a simple function that can be used to retrieve the current time:

>>> currentTime()
1169237231.836

The value returned by currentTime is a number that represents the number of seconds elapsed since some earlier reference time (whatever that is). Try issuing the command several times and you will notice that the difference between the values returned by the function represents the real elapsed time in seconds. For example:

>>> currentTime()
1169237351.5580001
>>> 1169237351.5580001 - 1169237231.836
119.72200012207031

That is, 119.722... seconds had elapsed between the two commands above. This provides another way for us to write robot behaviors. For example, if you wanted your robot to go forward for 3 seconds, you could either do:

forward(1.0)
wait(3.0)

or

startTime = currentTime()   # record start time
while (currentTime() - startTime) < 3.0:
    forward(1.0)

The second solution uses the internal clock. First, it records the start time. Next it enters the loop, which says: get the current time and see if the difference between the current time and the start time is less than 3.0 seconds. If so, repeat the command forward. As soon as the elapsed time goes over 3.0 seconds, the loop terminates. This is another way of using the while-loop that you learned about in the previous chapter. In the last chapter, you learned that you could write a loop that executes forever as:

while True:
    do something

The more general form of the while-loop is:

while <some condition>:
    do something

That is, you can specify any condition in <some condition>. The condition is tested, and if it results in a True value, the loop performs one more iteration and then tests the condition again, and so on.
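Outside of Myro, Python's standard time.time() plays the same role as currentTime(), so the timed-loop pattern can be sketched and tried without a robot. The function runFor and its action argument are our own illustrative names:

```python
import time

def runFor(duration, action):
    # Repeat action() until `duration` seconds have elapsed, using the
    # same clock-difference pattern as the currentTime() loop above.
    startTime = time.time()          # record start time
    steps = 0
    while (time.time() - startTime) < duration:
        action()
        steps += 1
    return steps
```

On the robot, action would be something like a call to forward(1.0); here it can be any function at all.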
In the example above, we use the expression:

(currentTime() - startTime) < 3.0

as the condition. If this condition is true, it implies that the elapsed time since the start is less than 3.0 seconds. Once more than 3.0 seconds have elapsed, the expression results in a False value and the loop stops. Learning to write such conditions is essential to writing smarter robot programs, and we will return to this topic in the next chapter. While the first solution, using wait, may seem simple enough (and it is!), you will soon discover that being able to use the internal clock as shown above provides more versatility and functionality in designing robot behaviors. This, for example, is how one could program a vacuum cleaning robot to clean a room for 60 minutes:

startTime = currentTime()
while (currentTime() - startTime)/60.0 < 60.0:
    cleanRoom()

Now you have seen how to write robot programs that have behaviors or commands that can be repeated a fixed number of times, or forever, or for a certain duration:

# do something N times
for step in range(N):
    do something...

# do something forever
while True:
    do something...

# do something for some duration
duration = <some time in seconds>
startTime = currentTime()
while (currentTime() - startTime) < duration:
    do something...

All of the above are useful in different situations.

Exercise 10: Rewrite your program from Exercise 9 so that the random behavior using each different pen is carried out for 30 seconds.

Summary

In this chapter you have become familiar with all of the sensors your Scribbler robot has. You also learned about proprioception, or internal sensing. The wait command is one internal sensing mechanism built into the Scribbler; sensing that it has stalled is another. You also learned about generating random numbers and how to use them in your programs to create random behaviors.
Additionally, you learned some new Python commands that enable dialog-window interaction as well as speech output. By making use of the internal clock, you can also use this proprioception mechanism to design behaviors that are duration-specific. That is, you now know how to write robot programs that contain behaviors that are repeated a fixed number of times, repeated forever, or carried out for a specific duration. In the next chapter, we will revisit the sensors and learn how they can be used to help the robot make decisions.

Myro Review

Python Review

More Exercises

Exercise 11: The Myro library also provides a function called randomNumber() that returns a random number in the range 0.0 to 1.0. This is similar to the function random() from the Python library random that was introduced in this chapter. You can use either, based on your own preference; you will have to import the appropriate library depending on the function you choose. Experiment with both to convince yourself that they are equivalent.

Exercise 12: In reality, you only need the function random() to generate random numbers in any range. For example, randrange(1, 7) gives you a random number between 1 and 6, but the same effect can be obtained using only random(). Given this, write a new function called myRandRange() that works just like randrange():

def myRandRange(A, B):
    # generate a random number between A..B-1 (just as defined for randrange)

Previous Chapter: Chapter 3, Up: Introduction to Computer Science via Robots, Next Chapter: Chapter 5
http://wiki.roboteducation.org/Chapter_4
disabled indentation of namespace content in both the MonoDevelop and project-specific settings. However, pressing Enter after the opening brace of a namespace declaration still inserts an indent. What's worse, if I fix this manually and later press Enter after a statement in a method, the next line is indented as if the namespace content were indented, that is, it is indented one level more than the previous statement. The "Format Code" action in the menu correctly respects the namespace indentation setting. *** Bug 4838 has been marked as a duplicate of this bug. *** In recent versions of MonoDevelop (tested with 4.0.1) this bug is even more severe since it also affects indentation on paste operations, no matter if Smart Indentation is even turned on or not. See case 13031 for details. 4.0.1 is 5 months old, have you tried 4.0.9? I downloaded Xamarin Studio 4.0.9 to see, and I can confirm that both this bug and the paste bug (case 13031) are still there. fixed with the new engine.
https://bugzilla.xamarin.com/40/4096/bug.html
Provided by: aolserver4-dev_4.5.1-15_amd64 NAME Ns_InfoAddress, Ns_InfoBootTime, Ns_InfoBuildDate, Ns_InfoConfigFile, Ns_InfoErrorLog, Ns_InfoHomePath, Ns_InfoHostname, Ns_InfoLabel, Ns_InfoNameOfExecutable, Ns_InfoPid, Ns_InfoPlatform, Ns_InfoServerName, Ns_InfoServerVersion, Ns_InfoServersStarted, Ns_InfoShutdownPending, Ns_InfoStarted, Ns_InfoTag, Ns_InfoUptime, Ns_PageRoot - Get server information SYNOPSIS #include "ns.h" char * Ns_InfoAddress(void) int Ns_InfoBootTime(void) char * Ns_InfoBuildDate(void) char * Ns_InfoConfigFile(void) char * Ns_InfoErrorLog(void) char * Ns_InfoHomePath(void) char * Ns_InfoHostname(void) char * Ns_InfoLabel(void) char * Ns_InfoNameOfExecutable(void) int Ns_InfoPid(void) char * Ns_InfoPlatform(void) char * Ns_InfoServerName(void) char * Ns_InfoServerVersion(void) int Ns_InfoServersStarted(void) int Ns_InfoShutdownPending(void) int Ns_InfoStarted(void) char * Ns_InfoTag(void) int Ns_InfoUptime(void) char * Ns_PageRoot(char *server) _________________________________________________________________ DESCRIPTION These functions return information about the server. Many of the functions return pointers to strings or other types of information which, in most cases, you must not free. These are denoted as "read-only" in the sections below. Ns_InfoAddress() Return the server IP address of the server. The IP address is defined in the server configuration file. The IP address is returned as a string pointer which you must treat as read-only. If you want to alter the string, you must use ns_strdup to copy the string to another location in memory and modify that instead. Ns_InfoBootTime() Return the time that the server was started as an int. Treat the result as time_t. Ns_InfoBuildDate() Return the date and time that this server was compiled as a string pointer. Treat the result as read-only. Ns_InfoConfigFile() Return the absolute path name of the configuration file in use as a string pointer. Treat the result as read-only. 
Ns_InfoErrorLog() Return the name of the error log as a string pointer. Treat the result as read- only. The name may be just a name, a relative path or an absolute path depending on how it is defined in the server configuration file. Ns_InfoHomePath() Return the absolute directory path where AOLserver is installed as a string pointer. Treat the result as read-only. Ns_InfoHostname() Return the hostname of the host that AOLserver is running on as a string pointer. The gethostname(2) function is used. If gethostname(2) fails to return a hostname, "localhost" is used instead. Treat the result as read-only. Ns_InfoLabel() Return the source code label for AOLserver as a string pointer. Statically defined in the source code. If no label was used, "unlabeled" is returned. You can use these functions to provide the source code label when you report problems with the server. Treat the result as read-only. Ns_InfoNameOfExecutable() Return the name of the running executable as a string pointer. Treat the result as read-only. Ns_InfoPid() Return the pid of the running AOLserver executable as an int. Ns_InfoPlatform() Return the platform name as a string pointer, e.g. "linux". Treat the result as read-only. Ns_InfoServerName() Return the AOLserver name string, e.g. "AOLserver". Statically defined in the source code. Treat the result as read-only. Ns_InfoServerVersion() Return the AOLserver version string, e.g. "3.5.2". Statically defined in the source code. Treat the result as read-only. Ns_InfoServersStarted() Return TRUE if the server has started, i.e., if initialization and module loading is complete. This is a compatibility function that calls Ns_InfoStarted. Ns_InfoShutdownPending() Return TRUE if there is there a shutdown pending, i.e. if an INTR signal has been received or if ns_shutdown has been called. Ns_InfoStarted() Return TRUE if the server has started, i.e., if initialization and module loading is complete. 
Ns_InfoTag() Return the CVS tag of this build of AOLserver. Statically defined in the source code. The value may be meaningless. Treat the result as read-only. Ns_InfoUptime() Return how long, in seconds, AOLserver has been running. Ns_PageRoot(server) Return the path name of the AOLserver pages directory for a particular server as a string pointer. The server argument is not used. Treat the result as read-only. SEE ALSO nsd(1), info(n)
http://manpages.ubuntu.com/manpages/precise/man3/Ns_InfoAddress.3aolserver.html
The pImpl idiom is a useful idiom in C++ to reduce compile-time dependencies. Here is a quick overview of what to keep in mind when we implement and use it.

What is it?

The pImpl idiom moves the private implementation details of a class into a separate structure. That includes private data as well as non-virtual private methods. The key to this idiom is to only forward-declare the implementation struct in the class header and to own one instance via a pointer. With the naming convention of prefixing pointers with p, the pointer is often named pImpl, giving the idiom its name. The naming convention may differ, e.g. in Qt it's d. Sticking to one name is useful to make the idiom recognizable.

//MyClass.h
#include <memory>

class MyClass {
public:
  explicit MyClass(int i);
  //...
  int getSomething() const;
  void doSomething();

private:
  struct Impl;
  std::unique_ptr<Impl> pImpl;
};

//MyClass.cpp
#include <MyClass.h>

struct MyClass::Impl {
  int i;
  void twice() { i *= 2; }
  void half() { i /= 2; }
};

MyClass::MyClass(int i) : pImpl{new Impl{i}} {}

int MyClass::getSomething() const { return pImpl->i; }

void MyClass::doSomething() {
  if (pImpl->i % 2 == 0) {
    pImpl->half();
  } else {
    pImpl->twice();
  }
}
//...

What is it used for?

The use of the pImpl idiom is twofold: it can greatly reduce compile-time dependencies and stabilize the ABI of our class.

Compile time firewall

Because of the reduced dependencies, the pImpl idiom is sometimes also called a "compile time firewall": Since we move all data members into the opaque Impl struct, we need to include the headers that declare their classes only in the source file. The classes of function parameters and return types need only be forward-declared. This means that we need only include <memory> for the unique_ptr, headers of base classes, and the occasional header of typedefs for which forward declarations are not possible. In the end, translation units that include MyClass.h have potentially fewer headers to parse and compile.
ABI stability

Changes to private implementation details of a class usually mean that we have to recompile everything. Changes in data members mean that the layout and size of objects change; changes in methods mean that overload resolution has to be reevaluated. With pImpl, that is not the case. The class will always have the one opaque pointer as its only member. Private changes do not affect the header of our class, so no clients have to be recompiled.

How to impl the pImpl

The example above shows a sketch of how we can implement the pImpl idiom. There are some variations and caveats, and the //... indicates that I've left some things out.

Rule of 5

The Impl struct is only forward-declared. That means the compiler can not generate the destructor and other member functions of the unique_ptr for us. So, we have to declare them in the header and provide an implementation in the source file. For the destructor and move operations, defaulting them should suffice. The copy operations should either be explicitly deleted (they are implicitly deleted due to the unique_ptr) or implemented by performing a deep copy of the Impl structure.

MyClass::MyClass(MyClass&&) = default;
MyClass::MyClass(MyClass const& other)
  : pImpl{std::make_unique<Impl>(*other.pImpl)} {}
MyClass::~MyClass() = default;
MyClass& MyClass::operator=(MyClass&&) = default;
MyClass& MyClass::operator=(MyClass const& other) {
  *pImpl = *other.pImpl;
  return *this;
}

The Impl struct

The Impl struct should be simple. Its only responsibility is to be a collection of the private details of the outer class. That means it should not contain fancy logic in itself, only the private methods of the outer class. It also means that it does not need its own header, since it is used in one place only. Having the struct in another header would enable other classes to include it, needlessly breaking encapsulation.

Inner class or not?
The Impl struct can be either an inner class of the actual class, or it can be a properly named standalone class, e.g. MyClassImpl or MyClassPrivate. I usually choose the private inner structure so that access to its name is really restricted to the implemented class, and there are no additional names in the surrounding namespace. In the end, the choice is mostly a matter of preference – the important thing is to stick to one convention throughout the project.

What not to do

Don't derive from the Impl struct

I've heard of deriving from the Impl struct as an argument to put it in its own header. The use case of deriving would be overriding parts of the implementation in a derived class of the outer class. This will usually be a design smell, since it mixes the aggregation of private details with polymorphism by making those details not so private at all. If parts of the base class behavior have to be overridden, consider using the strategy pattern or similar behavioral patterns, and provide a protected method to exchange the strategy.

Don't overuse it

The pImpl idiom comes at a cost: allocating memory is relatively costly in terms of performance. It's possible to use specialized allocators, but that only trades the performance cost for complexity, and it's not scalable to a large number of classes. That's why using the pImpl idiom everywhere just because we can is a bad idea.

Comments

Much more details, incl. about const propagation, to be found here: Article is old, but not, yet, outdated. </shameless plug>

The point is to have the Impl declaration outside of the class header, so how can the Impl class be an inner class?

This basically mimics virtual methods (like intended by using a pure base class as an interface) but in a more expensive way. Also, since the constructor of the base class still needs to know the private implementation, the separation isn't total either.
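Putting the pieces from the article together, here is a self-contained sketch of the idiom with deep-copy semantics. The Counter class and all its members are illustrative names of our own; in real code the Impl definition and the member implementations would live in the .cpp file, exactly as described above:

```cpp
#include <memory>

// Single-file sketch: "header" part of the class.
class Counter {
    struct Impl;                  // forward declaration only, as in the header
    std::unique_ptr<Impl> pImpl;
public:
    explicit Counter(int start);
    Counter(const Counter& other);             // deep copy (rule of 5)
    Counter& operator=(const Counter& other);
    Counter(Counter&&) noexcept;
    Counter& operator=(Counter&&) noexcept;
    ~Counter();
    void increment();
    int value() const;
};

// "Source file" part: Impl is fully defined here.
struct Counter::Impl {
    int n;
};

Counter::Counter(int start) : pImpl{std::make_unique<Impl>(Impl{start})} {}
Counter::Counter(const Counter& other)
    : pImpl{std::make_unique<Impl>(*other.pImpl)} {}
Counter& Counter::operator=(const Counter& other) {
    *pImpl = *other.pImpl;        // deep copy of the implementation
    return *this;
}
Counter::Counter(Counter&&) noexcept = default;
Counter& Counter::operator=(Counter&&) noexcept = default;
Counter::~Counter() = default;    // defined where Impl is complete

void Counter::increment() { ++pImpl->n; }
int Counter::value() const { return pImpl->n; }
```

Note that the destructor and the special members are defined after Impl is complete, which is exactly why they must be declared in the header and defined in the source file.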
Additionally, how does the smart pointer (or any other means of destruction of the implementation) know how to properly destruct the implementation structure? Without an explicit destructor taking care of it, or an implementation base class enforcing a virtual destructor (both additionally weakening the separation), this is sooner or later a likely cause of resource leakage. Note that a class with only non-virtual methods and an opaque data pointer would provide full compile-time independence with zero runtime and far less code overhead, as all dependency resolving gets moved to the link or even dynamic loading (in case of a DLL) stage. Proper destruction is simpler there as well. Thus, pImpl is far from a great tool for just reducing build-time dependencies. However, as an additional separation mechanism for more complex demands (like Qt) or to provide inter-compiler compatibility it can be pretty useful.

About the const issue: Indeed, references are your friend here. A small "smart reference" helper template as a non-zeroable with transitive const usage is a pretty useful tool.

I don't agree with the suggested implementation due to the use of the std::unique_ptr. The reason is that the unique_ptr does not respect the const-ness of the function or object. You can test this by changing the getSomething function as follows:

int MyClass::getSomething() const { return ++pImpl->i; }

Thus the function states (promises) that it does not change the object, but it actually does change the underlying data in the object. Then "const" on the function does not mean anything. Personally, if I see a unique_ptr being used as a pImpl pointer, I know I can ignore the contract/promises of "const" on any function or interface on that object.
Permalink it should be better when std::experimental::propagate_const comes around. Permalink int MyClass::getSomething() const { return pImpl->i; } danger: it breaks the const (ie pImpl->i=10; will compile here) it should to be class foo { struct impl; public: ~foo() noexcept ; // interface here.. private: std::unique_ptr pimpl_; impl& pimpl() { return pimpl_; } const impl& pimpl() const {return pimpl_; } }; and inside foo’s use only pimpl(), pimpl_ can be used only in ctor. in all const methods it is pimpl_inside * const insteed const pmpl_inside* Permalink In the Microsoft Windows DLL world, there is another reason to use pImpl – to force all class allocations and deallocations to occur in the DLL. Because of the vagaries of the various C runtime libraries, it is dangerous to provide a DLL that returns allocated memory that the caller is expected to free. Unfortunately, one can also not use unique_ptr<>, even with a custom DLL-provided deleter. There is no guarantee that the unique_ptr<> the DLL was compiled with has the same memory layout as the unique_ptr<> the client was compiled with. So, you are safer using a raw pointer</holds nose>. Permalink I once created a generic Pimpl implementation which was essentially a wrapper around the unique_ptr you have but it specifies the gang of 5 depending on whether you want them or not. This way, the class having a pimpl could still use the rule of 0 which is very convenient. And by means of a template parameter you could control whether the pimpl struct was copyable and/or moveable. I’m paraphrasing a bit, but the essence is here: The caveat of this approach is that the constructor of the class becomes inlined. So if the class is part of a dll interface, this may be a bad idea Permalink Qt actually gives a reason for deriving from the private/Impl class: build a hierarchy of private classes, matching the hierarchy of public classes, and installing the pImpl pointer just in the topmost class of the hierarchy. 
For instance, you have QObject, which holds the pImpl pointer (to a QObjectPrivate object¹). QWidget inherits from QObject, and of course needs its own data, stored in a QWidgetPrivate object. Where do you put this data? You can add another pImpl member to QWidget, pointing to QWidgetPrivate. But this comes with drawbacks: it adds an extra memory allocation (creating a QWidget means allocating QObjectPrivate, for the base QObject class, plus QWidgetPrivate; and if we repeat this pattern, the more we derive, the more allocations we perform), and it causes a name clash for the data member in QWidget (either we come up with a clever naming scheme, or we have to use awkward syntax to access the pImpl pointer of the base classes).

Qt instead makes QWidgetPrivate inherit from QObjectPrivate². The pImpl pointer in the base QObject class is reused, and there's only one allocation for the private class. A few macros (Q_DECLARE_PRIVATE, Q_D, etc.³) are used to downcast the pImpl pointer to the right type needed by a derived class. The glue for all of this is a protected constructor for QObject that takes a QObjectPrivate pointer (and installs it as the pImpl pointer)⁴: QWidget constructors use this protected constructor for the base class, passing a QWidgetPrivate⁵. In turn, QWidget has a similar constructor to be used by its subclasses⁶.
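The scheme described above can be sketched in standalone C++. The names imitate Qt's, but this is not Qt code — just an illustration of the single pImpl pointer, the derived private class, and the protected constructor that installs it:

```cpp
#include <memory>

struct ObjectPrivate {                     // topmost private class
    virtual ~ObjectPrivate() = default;    // safe deletion through the base
};

struct WidgetPrivate : ObjectPrivate {     // derived private class
    int width = 0;
};

class Object {
public:
    Object() : d_ptr(new ObjectPrivate) {}
    virtual ~Object() = default;
protected:
    explicit Object(ObjectPrivate* d) : d_ptr(d) {}  // subclasses install theirs
    std::unique_ptr<ObjectPrivate> d_ptr;            // the one pImpl pointer
};

class Widget : public Object {
public:
    Widget() : Object(new WidgetPrivate) {}          // a single allocation
    int width() const {
        // Downcast to the right private type, as Qt's Q_D macro does.
        return static_cast<const WidgetPrivate*>(d_ptr.get())->width;
    }
    void setWidth(int w) {
        static_cast<WidgetPrivate*>(d_ptr.get())->width = w;
    }
};
```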
https://arne-mertz.de/2019/01/the-pimpl-idiom/
I want to make some modifications to a few selected tick labels in a plot. For example, if I do:

label = axes.yaxis.get_major_ticks()[2].label
label.set_fontsize(size)
label.set_rotation('vertical')

the font size and the orientation of the tick label are changed. However, if I try:

label.set_text('Foo')

the tick label is not modified. Also, if I do:

print label.get_text()

nothing is printed. Here's some more strangeness. When I tried this:

from pylab import *
axes = figure().add_subplot(111)
t = arange(0.0, 2.0, 0.01)
s = sin(2*pi*t)
axes.plot(t, s)
for ticklabel in axes.get_xticklabels():
    print ticklabel.get_text()

only empty strings are printed, but the plot contains ticks labeled as '0.0', '0.5', '1.0', '1.5', and '2.0'.

Answer #1:

Caveat: Unless the ticklabels are already set to a string (as is usually the case in e.g. a boxplot), this will not work with any version of matplotlib newer than 1.1.0. If you're working from the current github master, this won't work. I'm not sure what the problem is yet… It may be an unintended change, or it may not be…

Normally, you'd do something along these lines:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()

# We need to draw the canvas, otherwise the labels won't be positioned and
# won't have values yet.
fig.canvas.draw()

labels = [item.get_text() for item in ax.get_xticklabels()]
labels[1] = 'Testing'
ax.set_xticklabels(labels)

plt.show()

To understand the reason why you need to jump through so many hoops, you need to understand a bit more about how matplotlib is structured. Matplotlib deliberately avoids doing "static" positioning of ticks, etc., unless it's explicitly told to. The assumption is that you'll want to interact with the plot, and so the bounds of the plot, ticks, ticklabels, etc. will be dynamically changing.
Therefore, you can't just set the text of a given tick label. By default, it's re-set by the axis's Locator and Formatter every time the plot is drawn.

However, if the Locators and Formatters are set to be static (FixedLocator and FixedFormatter, respectively), then the tick labels stay the same. This is what set_*ticklabels or ax.*axis.set_ticklabels does.

Hopefully that makes it slightly clearer why changing an individual tick label is a bit convoluted. Often, what you actually want to do is just annotate a certain position. In that case, look into annotate, instead.

Answer #2:

One can also do this with pylab and xticks:

import matplotlib
import matplotlib.pyplot as plt
x = [0,1,2]
y = [90,40,65]
labels = ['high', 'low', 37337]
plt.plot(x,y, 'r')
plt.xticks(x, labels, rotation='vertical')
plt.show()

Answer #3:

In newer versions of matplotlib, if you do not set the tick labels with a bunch of str values, they are '' by default (and when the plot is drawn, the labels are simply the tick values). Knowing that, to get your desired output would require something like this:

from pylab import *
axes = figure().add_subplot(111)
a = axes.get_xticks().tolist()
a[1] = 'change'
axes.set_xticklabels(a)
[<matplotlib.text.Text object at 0x539aa50>, <matplotlib.text.Text object at 0x53a0c90>, <matplotlib.text.Text object at 0x53a73d0>, <matplotlib.text.Text object at 0x53a7a50>, <matplotlib.text.Text object at 0x53aa110>, <matplotlib.text.Text object at 0x53aa790>]
plt.show()

And now if you check the xticklabels, they are no longer a bunch of '':

[item.get_text() for item in axes.get_xticklabels()]
['0.0', 'change', '1.0', '1.5', '2.0']

It works in the versions from 1.1.1rc1 to the current version 2.0.

Answer #4:

It's been a while since this question was asked.
As of today (matplotlib 2.2.2) and after some reading and trials, I think the best/proper way is the following:

Matplotlib has a module named ticker that "contains classes to support completely configurable tick locating and formatting". To modify a specific tick from the plot, the following works for me:

import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import numpy as np

def update_ticks(x, pos):
    if x == 0:
        return 'Mean'
    elif pos == 6:
        return 'pos is 6'
    else:
        return x

data = np.random.normal(0, 1, 1000)
fig, ax = plt.subplots()
ax.hist(data, bins=25, edgecolor='black')
ax.xaxis.set_major_formatter(mticker.FuncFormatter(update_ticks))
plt.show()

Caveat! x is the value of the tick and pos is its relative position in order on the axis. Notice that positions start at 1, not at 0 as usual when indexing.

In my case, I was trying to format the y-axis of a histogram with percentage values. mticker has another class named PercentFormatter that can do this easily without the need to define a separate function as before:

import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import numpy as np

data = np.random.normal(0, 1, 1000)
fig, ax = plt.subplots()
weights = np.ones_like(data) / len(data)
ax.hist(data, bins=25, weights=weights, edgecolor='black')
ax.yaxis.set_major_formatter(mticker.PercentFormatter(xmax=1.0, decimals=1))
plt.show()

In this case xmax is the data value that corresponds to 100%. Percentages are computed as x / xmax * 100, which is why we fix xmax=1.0. Also, decimals is the number of decimal places to place after the point.

Answer #5:

The axes class has a set_yticklabels function which allows you to set the tick labels, like so:

# ax is the axes instance
group_labels = ['control', 'cold treatment', 'hot treatment', 'another treatment', 'the last one']
ax.set_xticklabels(group_labels)

I'm still working on why your example above didn't work.
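Since the function handed to FuncFormatter in Answer #4 is plain Python, its logic — and the percentage rule quoted above — can be checked in isolation without drawing anything. The helper names and sample values below are hypothetical:

```python
# The tick-formatting callback from Answer #4, testable without matplotlib.
def update_ticks(x, pos):
    if x == 0:
        return 'Mean'
    elif pos == 6:
        return 'pos is 6'
    else:
        return x

# PercentFormatter's documented rule: percentages are x / xmax * 100.
def to_percent(x, xmax=1.0, decimals=1):
    return f"{x / xmax * 100:.{decimals}f}%"

print(update_ticks(0, 3))     # -> Mean
print(update_ticks(2.5, 6))   # -> pos is 6
print(to_percent(0.25))       # -> 25.0%
```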
Answer #6:

This works:

import matplotlib.pyplot as plt

fig, ax1 = plt.subplots(1,1)
x1 = [0,1,2,3]
squad = ['Fultz','Embiid','Dario','Simmons']
ax1.set_xticks(x1)
ax1.set_xticklabels(squad, minor=False, rotation=45)

Answer #7:

This also works in matplotlib 3:

x1 = [0,1,2,3]
squad = ['Fultz','Embiid','Dario','Simmons']
plt.xticks(x1, squad, rotation=45)

Answer #8:

If you do not work with fig and ax and you want to modify all labels (e.g. for normalization), you can do this (note that plt.yticks() returns the locations first, then the label objects):

locs, labels = plt.yticks()
plt.yticks(locs, locs / max(locs))
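Under the hood, calls like set_xticklabels and plt.xticks install a FixedFormatter, which simply looks the stored strings up by tick position. A plain-Python mimic of that lookup (illustrative only, not matplotlib's real implementation):

```python
def fixed_format(labels, x, pos):
    # A FixedFormatter ignores the tick value x and returns the stored
    # string for position pos, or '' when there is no stored label.
    if pos is None or pos >= len(labels):
        return ''
    return labels[pos]

squad = ['Fultz', 'Embiid', 'Dario', 'Simmons']
print(fixed_format(squad, 2.0, 1))   # -> Embiid (looked up by position, not value)
print(fixed_format(squad, 9.9, 9))   # -> '' (no stored label past the end)
```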
https://discuss.dizzycoding.com/modify-tick-label-text/
05-24-2012 07:18 AM - edited 05-24-2012 07:23 AM

This is a related post to thread. I noticed that most IVI-C drivers created with Pacific Mindworks Nimbus cannot be detected for their IVI class when converting the IVI-C driver's .fp file with the LabVIEW Instrument Driver Import Wizard 2.0 (used with LabVIEW 2011). For example, when converting the AgN57xx IVI driver (Agilent N5700/8700 series DC Power Supply) with the Wizard in Advanced Mode, the 4th page of the Wizard says the driver to be imported looks like a VXIpnp driver! But the driver is in fact of the IviDcPwr class.

I tried to reverse-engineer how the Wizard detects the driver's IVI class, and I noticed that the Wizard checks whether the <prefix>.h contains the following #include lines:

#include <ivi.h>
#include <ividcpwr.h> // or can be any other class header file

However, for most Nimbus-generated drivers, the .h file does not contain such lines, but simply has the line:

#include <IviVisaType.h>

The Wizard simply looks for the <ivi.h> line to recognize the driver as IVI rather than Plug&Play, and for the <ividcpwr.h> line to recognize the driver as the IviDcPwr class rather than custom (similarly for other classes). In fact, after customizing the .h file to have these 2 lines, the Wizard detected that the driver is IVI and of the appropriate class by default.

Is it LabVIEW's spec? I wonder why the LabVIEW Instrument Driver Import Wizard looks at those lines rather than checking the IviConfigurationStore.xml content, where the driver details are described.

Makoto

05-28-2012 10:27 PM - edited 05-28-2012 10:33 PM

Hi Makoto,

I believe the behavior of the LabVIEW Driver Import Wizard is exactly what you mentioned above. Maybe it looks a little bit inflexible to you. But consider that the only feature of this tool is converting a C driver to a LabVIEW driver, and it should be able to run in any environment without any dependency limitation (e.g.
install IVI Shared Components or ICP before using this tool...). So some users may run it on a machine even without IVI (no IviConfigurationStore.xml either). That may be the reason it just parses the C driver as text to determine the driver type and IVI classes. Actually, the problem you are wondering about is a compatibility issue between Nimbus and the Driver Import Wizard. I am also hoping it will be fixed in the next version of LDIW.

Thanks,
Charles

05-29-2012 12:01 AM - edited 05-29-2012 12:02 AM

Hi All,

I just found that the C drivers generated by Nimbus do not have class driver information in the .c, .h, .fp and .sub files. Maybe we could parse configStore.xml to get the driver class, but not all drivers provide the related info in this xml (normally the xml file is updated during the installation phase). As we already provide a workaround for such drivers (selecting the driver class manually), I think it's better to keep it as it was.

05-29-2012 01:03 AM - edited 05-29-2012 01:04 AM

Thanks for the reply. I know that I can explicitly select a specific IVI class rather than leaving VXIpnp when importing a Nimbus IVI-C driver, and of course I know the Import Wizard covers not only IVI-C but also legacy VXIpnp and other CVI instrument drivers. And as for me, this "manual select" workaround is acceptable.

But as an instrument manufacturer that provides IVI drivers, I think many driver users will operate the Import Wizard in Basic Mode, which leaves the detected driver type as is. The resulting driver is still usable because the LV wrapper operates normally, but in this case many of the IVI-defined and IVI-class-defined VIs will have the default icon, confusing the user.

I can't decide which tool (Import Wizard or Nimbus) is the true cause of this issue. It might be an Import Wizard issue that it only checks <prefix>.h without looking at the ConfigStore.xml. And it might be a Nimbus issue that it never embeds the #include <ivi.h> and <other ivi class .h> lines in <prefix>.h when machine-generating the code.
(At least Nimbus 2.x does not embed these #include lines, and insertion by hand will be prevented by Nimbus on rebuild. I don't know whether Nimbus 3.x fixes this because I don't have it yet.) Anyway, it may be better if the step-by-step instructions for the Import Wizard described this pitfall.

Makoto

05-29-2012 04:43 AM

Dear Makoto,

Your consideration makes sense. How about this:
a) We provide a step to parse the configstore.xml if the include parsing fails, but users may be required to enter the driver session name.
b) We add some notes in the fp converter to notify users about changing the driver class manually and getting the driver class info from the driver provider.
Which one do you think is better?

05-29-2012 08:32 PM - edited 05-29-2012 08:34 PM

aTammy-san,

Thanks for the reply. I think solution [a] is better, but I have one issue with it.

>> but users may be required to enter the driver session name

I think that is a bit of a strange query for the user. As for the Session Name, it only exists after the user has created a virtual instrument entry (Logical Name) in NI-MAX, so it is probably absent when importing. If you are talking about the Software Module instead, you can assume that it surely exists and that its name is the same as <prefix>. So the user does not have to explicitly specify the Software Module name, because the Import Wizard already knows the name once the user has selected an .fp file.

The IVI-3.17 Installation Requirement Specification, section 5.1.4, requires that an "IVI-COM driver packaged with IVI-C wrapper" create a Software Module entry with the IVI-C's <prefix> name. Nimbus-based IVI-COM/IVI-C hybrid drivers are of this type.

So finally, I think the following approach is appropriate. After the user selects a <prefix>.fp file (and the Import Wizard then infers the .h, .sub, and .dll):

(1) 1st try: Parse <prefix>.h for whether #include <ivi.h> and other class .h lines are included. If any are included, decide the driver is IVI, and decide the appropriate class if any class .h was included.
If not, go to the 2nd try.

(2) 2nd try: Look for the Software Module entry named <prefix> in the Configuration Store XML. If one is found, parse deeply to determine which Published APIs (IviDriver IVI-C, IviDcPwr IVI-C, etc.) are supported. If the Software Module is not found, leave the driver type VXIpnp.

In any case, the manual selection of the driver type should remain available for cases where the Import Wizard cannot detect the correct type.

Regards,
Makoto

05-29-2012 08:56 PM

Hi Dear Makoto,

Thanks for your kind feedback! Only two things:
1. As the software module name can differ from the prefix (that's why we have a Prefix field in SoftwareModule), and configStore settings may be missing, such a solution could only cover part of the scenarios.
2. Parsing xml would lower the efficiency, bring in extra effort and might break our original design.

We will try to contact Pacific Mindworks to find a balanced solution. Your suggestion will definitely be under consideration. Thank you again for all your comments!

05-30-2012 01:25 AM - edited 05-30-2012 01:34 AM

aTammy-san,

>> Only two things: 1. As the software module name can differ from the prefix (that's why we have a Prefix field in SoftwareModule), and configStore settings may be missing, such a solution could only cover part of the scenarios. <<

Yes, I know there is a Prefix property under Software Module, and they can be different. So a more general approach might be to iterate the SoftwareModules collection and check each entry and its Prefix name to see whether it matches the desired <prefix> name. As far as I have seen, for Nimbus drivers the SoftwareModule name always matches the Prefix.

Plus, NI-made IVI-C drivers (created with CVI) and our (Kikusui) hand-made IVI-COMs with C wrapper (created with VC++/ATL) will not reach the 2nd try, because they all have the #include <ivi.h> lines.
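The two-stage detection proposed in this exchange can be sketched in plain Python. Everything here is illustrative: the function names are made up, and a simple list of dicts stands in for the IVI Configuration Store:

```python
import re

def includes_of(header_text):
    # Collect the names from "#include <...>" lines in a header.
    return [n.lower() for n in re.findall(r'#include\s*<([^>]+)>', header_text)]

def detect_driver_type(prefix, header_text, config_store):
    # 1st try: the existing heuristic -- look for <ivi.h> in <prefix>.h.
    if 'ivi.h' in includes_of(header_text):
        return 'IVI (from header)'
    # 2nd try: look the software module up by prefix. Real code would walk
    # the SoftwareModules collection of the IVI Configuration Server.
    for module in config_store:
        if module.get('Prefix') == prefix:
            return 'IVI (%s)' % module.get('Class', 'unknown')
    # Otherwise leave the default driver type.
    return 'VXIpnp'

store = [{'Name': 'AgN57xx', 'Prefix': 'AgN57xx', 'Class': 'IviDcPwr'}]
print(detect_driver_type('AgN57xx', '#include <IviVisaType.h>', store))
# -> IVI (IviDcPwr)
```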
But as for legacy PnP and CVI drivers that do not have the <ivi.h> lines, the Wizard will reach the 2nd try with no success, so it would be better to consider that case as well...

>> 2. Parsing xml would lower the efficiency, bring in extra effort and might break our original design. <<

Yes, it will take more time to scan the XML through the IVI Configuration Server engine. But the FP-to-VI import work is a one-time task for the user, so I think it would be accepted even if it takes a couple of seconds more.

Makoto

05-31-2012 03:24 AM - edited 05-31-2012 03:30 AM

Hello,

I have another issue with Import Wizard 2.0 when importing a Nimbus-based IVI-C wrapper.
-- The conditions are LabVIEW 2011 32-bit, using a 32-bit IVI driver.

On the Import Wizard (Advanced Mode) 4th page, we can see that "Shared Library or DLL" shows <prefix>_32.* as the default. This is fine when importing VXIpnp and IVI-C drivers that have a DLL file name with the _32 suffix. But all 32-bit drivers created with Nimbus do not have the bitness suffix (they are just <prefix>.dll). This means that the Import Wizard will only search for the DLL with the <prefix>_32.* wildcard match. (The IVI 3.1 spec says 32-bit IVI-C drivers can be either <prefix>.dll or <prefix>_32.dll, whereas a 64-bit DLL shall only be <prefix>_64.dll.)

The default _32.* DLL wildcard does not work for finding the Nimbus IVI-C DLL, so the user has to specify the exact DLL file name by clicking the BROWSE button. (The bad points are that the file-open dialog opened by BROWSE often cannot finish selecting the correct DLL with the OK button, and that specifying a more generic <prefix>*.* wildcard directly in the Shared Library or DLL textbox does not work. There is no way other than manually ***TYPING*** the exact DLL name.)

In the next update, is it possible to fix this part so that DLL names both with and without the _32 suffix can be found without problems?

Thanks

06-03-2012 09:30 PM - edited 06-03-2012 09:39 PM

Hello Makoto,

I used to think Import Wizard 2.0 could not accept a <prefix>.dll file as the "Shared Library or DLL".
But I finally figured out how to select a <prefix>.dll-style file in the prompted file-selection dialog. I admit the behavior of the dialog is a little bit weird and confusing.
1. When you select the <prefix>.dll file for the first time in the dialog (e.g. hp34401a.dll), the dialog displays "*.dll" in the file name and the OK button does not work.
2. But if you click another file and then click hp34401a.dll again, both the file name display and the OK button work correctly.
3. The wizard's "Shared Library or DLL" editor also shows the correct file name after that.

I hope these tips are helpful, and I look forward to hearing whether you have any issues when trying them.

Thanks,
- Chenchen
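The file-name mismatch described in this thread is easy to demonstrate outside LabVIEW. The sketch below is purely illustrative (the helper and file names are hypothetical); it shows why a single <prefix>_32.* wildcard misses suffix-less Nimbus DLLs and why two patterns are needed:

```python
import fnmatch

def find_driver_dlls(prefix, filenames):
    # IVI 3.1 allows a 32-bit driver DLL to be named either <prefix>.dll
    # or <prefix>_32.dll, so a single "<prefix>_32.*" pattern is not enough.
    patterns = [prefix + '_32.*', prefix + '.*']
    return [f for f in filenames
            if any(fnmatch.fnmatch(f.lower(), p.lower()) for p in patterns)]

files = ['AgN57xx.dll', 'hp34401a_32.dll', 'AgN57xx_64.dll']
print(find_driver_dlls('AgN57xx', files))    # -> ['AgN57xx.dll']
print(find_driver_dlls('hp34401a', files))   # -> ['hp34401a_32.dll']
```

Note that the `<prefix>.*` pattern does not match `<prefix>_64.dll`, because the literal dot in the pattern must follow the prefix directly.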
http://forums.ni.com/t5/Instrument-Control-GPIB-Serial/LabVIEW-Driver-Import-Wizard-does-not-recognize-IVI-class/m-p/2011648
* importing other classes and using classpath

donald croswell, Greenhorn
Joined: Mar 23, 2001  Posts: 6
posted Mar 28, 2001 19:54:00

Howdy. I am having trouble setting my classpath to include an extra class file. Some of my problems could be:
1. Is the classpath the same path that I set in DOS (i.e. path=c:/jdk1.3/bin)?
2. What sort of file is the new class supposed to be (.class, .java, .jar)?
3. What is a package, and does my file have to be in one?
4. When I include java.lang.*;, where is this package located and what is it called?
I know this is a big question, but I have been trying to get this stupid thing to work all day and I'm going crazy. I have the book "Java First Contact" but I can't figure it out with that either. Help please!

Terry McKee, Ranch Hand
Joined: Sep 29, 2000  Posts: 173
posted Mar 28, 2001 21:11:00

There are two variables that you need to set up, the CLASSPATH and the PATH. Somewhere in the PATH statement you have to add the bin folder of your newly installed Java Development Kit. For example:

PATH=%PATH%;C:\jdk1.3\bin

(The %PATH% keeps any previously defined directories in the path statement, while the C:\jdk1.3\bin adds the bin directory to the path statement.)

The CLASSPATH tells the JDK where to find the libraries of programs that it needs to utilize in order to compile and run your programs. You need to make the lib directory and the . current directory available to the CLASSPATH statement. For example:

SET CLASSPATH=%CLASSPATH%;C:\jdk1.3\lib;.

See if this works.

Mike Curwen, Ranch Hand
Joined: Feb 20, 2001  Posts: 3695
posted Mar 28, 2001 21:35:00

Terry answered 1. pretty well, so I'll take a crack at the others.

2. When you compile *.java files, they turn into *.class files (sometimes more than one of them).

3. Just like file directories on your computer, packages are a way to group your classes in logical... well, packages. Sun does this, and you can tell this whenever you import something.
If you open up src.jar with WinZip, you will see all of the source files for Java. Examine the path information in the WinZip window, and you'll see that all of the classes you import with the command "import java.awt.event.*;" have source files in the path java\awt\event.

You don't need to have your classes inside a package. There is something known as 'the default package', which all classes are part of when you don't specify any other package. Package stuff doesn't become important until you begin to make applications that have many classes, and potentially re-use classes you've previously written (like when you import classes that Sun has written, when you import javax.swing.*).

4. java.lang.* is imported by default in any Java source file. You don't need to explicitly import it. The entire package name is "the java.lang package".

[This message has been edited by Mike Curwen (edited March 28, 2001).]

donald croswell, Greenhorn
posted Mar 28, 2001 23:03:00

Hi. Thanks for the quick reply. I am a little clearer, but the classpath is not working for me. If I am in DOS and type "path", I get the path. But if I type "classpath", I get an error message that says it is not recognizable as a command or batch file. So I am assuming that you don't enter the classpath in DOS; you do it somewhere else. Also, when I enter the new path, I am still getting the message that the package does not exist. I am assuming this is because my classpath is still not set. Arghhhhhh! I know it is simple but I am missing something.

donald croswell, Greenhorn
posted Mar 29, 2001 00:47:00

Hey again. I think I am getting closer now, because when I compile the program with the imported package, it works! But then when I try to run the program:

java Chapter4n1

I get an error:

Exception in thread "main" java.lang.NoClassDefFoundError : Chapter4n1/java

Now what am I doing wrong?? This is the code I am using.
import java.lanks.* ;

public class Chapter4n1 {
    public static void main( String [] args) throws Exception {
        // read in and output a String
        BasicIo.prompt("please type in a string ") ;
        String string1 = BasicIo.readString() ;
        System.out.println("the string you typed was **" + string1 + "**") ;
        System.out.println() ;

        // read in and output an integer
        BasicIo.prompt("please type in an integer ") ;
        int intValue = BasicIo.readInteger() ;
        System.out.println("the integer you typed was " + intValue) ;
    } // end of method main
} // end of class Chapter4n1

Mike Curwen, Ranch Hand
posted Mar 29, 2001 08:06:00

When you set classpath in your autoexec.bat, you cannot open a DOS window, type "c:>classpath", and have it print out what the classpath is, unlike "c:>path", which prints the path. I recall way back when I was being taught DOS that you could do this for any environment variable, but it's never worked for me in Windows. Setting classpath is not really necessary, unless you have multiple packages imported that are not rooted in the current directory.

****************

This is one of those things about Java that I had the hardest time with, and here is how I've come to think of it. When you use an IDE and click on compile, it will construct the following string and send it to a DOS prompt...

javac -classpath <> -sourcepath <> -d <> *.java

In between each of the <> is an absolute path, or paths separated by a ;. classpath and sourcepath are self-explaining, and -d is the destination of compiled class files.

Here's where it gets interesting. If in MyClass.java you import the following package:

"com.MyPackage.MySubPackage.MyClass5"

*AND* the actual place on your hard drive that this package is 'rooted' at is:

"c:\javawork\packages"

(so a full path to MyClass5.class would be: c:\javawork\packages\com\MyPackage\MySubPackage\MyClass5.class), then your Classpath MUST contain "c:\javawork\packages".
This is one of the places the java compiler will start to look for the import "com.MyPackage.MySubPackage.*", and it will find it there. If your classpath only contains "c:\javawork", then the import would be looking for:

"c:\javawork\com\MyPackage\MySubPackage\MyClass5.class"

But this file does not exist! So you would get a Class Not Found error.

In summary, when the java compiler attempts an import, it will turn each '.' in the import into a '\', and then, in turn, prepend each entry in CLASSPATH to this path. If the file is found, it is imported. If it's not, it will try the next entry in CLASSPATH. If it's still not found, you get a compiler error.

*************

As for the error you do get... I've never seen a main function being declared as throwing an exception... Or perhaps BasicIo is in a deeper package than java.lanks.*. (One thing about packages that sort of sucks is this: you can only import from one folder at a time with one import statement.) So if you had packages like this:

com\MyPackage\*.class
com\MyPackage\SubPackage\*.class
com\MyPackage\SubPackage\AnotherPackage

If you want to import a class from the first, second and third package, you can't just say "import com.MyPackage.*;"; you would need to specify all three:

import com.MyPackage.*;
import com.MyPackage.SubPackage.*;
import com.MyPackage.SubPackage.AnotherPackage.*;

Mark Savory, Ranch Hand
posted Mar 29, 2001 08:13:00

Exception in thread "main" java.lang.NoClassDefFoundError : Chapter4n1/java

The above tells me that you're running your program like so:

java Chapter4n1.java

You should be running it without the .java extension:

java Chapter4n1

Mike Curwen, Ranch Hand
posted Mar 29, 2001 08:23:00

hehehe. silly me. LOL I agree.
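Mike's summary — turn each '.' into a path separator, then prepend each CLASSPATH entry in turn — can be sketched as a small standalone program. This is only an illustration of the lookup order, not javac's actual code; the class name is invented, and forward slashes are used so it runs anywhere:

```java
import java.util.ArrayList;
import java.util.List;

public class ImportResolver {

    // Turn an import name plus CLASSPATH entries into the candidate file
    // locations a compiler would probe, in order.
    static List<String> candidates(String importName, List<String> classpath) {
        String relative = importName.replace('.', '/') + ".class";
        List<String> result = new ArrayList<>();
        for (String entry : classpath) {
            result.add(entry + "/" + relative);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> cp = List.of("c:/javawork/packages", "c:/javawork");
        for (String c : candidates("com.MyPackage.MySubPackage.MyClass5", cp)) {
            System.out.println(c);
        }
        // prints:
        // c:/javawork/packages/com/MyPackage/MySubPackage/MyClass5.class
        // c:/javawork/com/MyPackage/MySubPackage/MyClass5.class
    }
}
```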
http://www.coderanch.com/t/388422/java/java/importing-classes-classpath
RSS is an acronym for both RdfSiteSummary and ReallySimpleSyndication.

The history of RSS covers several periods:

prior to March 1999 -- development of technologies used in RSS
March 1999 to July 1999 -- RDF Site Summary 0.9
July 1999 to Late 1999 -- Rich Site Summary 0.91, Netscape leaves the scene
Late 1999 to August 2000 -- Discussion of RSS direction and technology, namespaces proposal, some RDF discussion
August 2000 to September 2002 -- RDF Site Summary 1.0, start of the "RSS flamewar", RSS 0.92, 0.93, and 0.94
September 2002 to now -- Really Simple Syndication 2.0 (RSS 0.91 with namespaces), transfer of RSS 2.0 to Berkman

As of August 2000 there are two branches, variants, flavors, formats, or forks of RSS: RdfSiteSummary (RSS 1.0) and ReallySimpleSyndication (RSS 2.0).

Links

RSS: Introducing Myself -- DanLibby
RSS-Classic, RSS 1.0 and a historical debt -- DanBrickley
RSS Links -- collected links from March 1999 to August 2000, KenMacLeod
History of the RSS fork -- selected links from June to November 2000, MarkPilgrim
http://www.intertwingly.net/wiki/pie/RssHistory?action=highlight&value=DanBri
PHP files. Also, using the closing tag to generate HTML output within a method is omitted; instead use heredoc syntax if needed.

Function names may only contain alphanumeric characters. Underscores are not permitted. Numbers are not allowed in function names.

Variable names may only contain alphanumeric characters. Underscores or numbers are not permitted.

Unlike PHP's documentation, the Zend Framework uses lowercase for both boolean values and the "null" value.

PHP code must always be delimited by the full-form, standard PHP tags (although you should see the note about the closing PHP tag). Short tags are only allowed within view scripts.

When declaring associative arrays with the array construct, it is encouraged to break the statement into multiple lines. In this case, each successive line must be padded with whitespace such that both the keys and the values are aligned.

In class files, a blank line must separate the class from any additional PHP code in the file. This is an example of an acceptable class declaration:

The "elseif" construct is not allowed in favor of the "else if" combination.

It is sometimes useful to write a "case" statement which falls through to the next case by not including a "break" or "return". To distinguish these cases from bugs, such "case" statements must contain the comment "// break intentionally omitted".

Usage of the global keyword is not allowed. Use $GLOBALS['xxx'] instead.

All classes/files which are required must contain a @see clause. If a file contains a deprecated class, it must have the optional @deprecated clause in the same format as the @since clause.

Each function must have a function header. The header has to look like this: All parameters of the function must be documented. The following types are allowed:

Documentation within a method is good practice and should be done to increase readability of the code.
The only acceptable syntax is phpdoc ("/*") or Perl-style ("//"). The usage of the ("#") or the ("/**") syntax is not allowed.

Exceptions must be lazy-loaded before they are thrown.

pardon me, but what operator coding standards do you use? we didn't find any information, and in the Zend FW code there is no single style. For our corporation we agreed to use the following:

B.4.8 Operators

All monadic (unary) operations (like "!") must be close to their operand, and all binary (like "&&") operations must be separated from both operands by a single space.

if (!$b)
if ($a && $b)
if ($a && !$b)

But actually the Zend team is our leader, and if you describe something like that in your Coding Standards we will be happy.

Controller and such things use other directory and file conventions. Should they be mentioned? Should option keys be mentioned ('doSomething' or 'do_something')?

@functions and methods (using pattern): It seems more useful to define the pattern instead of "method should contain name of pattern", because especially Singleton does not use a method "singleton", but "getInstance".

I don't know what I'm expecting, but I'd like to see more conventions for the docblock. It seems to me that there are many files with broken docblocks.

I'm not finished with defining the API doc standard. Please be patient until I start the discussion in the generic mailing list. And yes, you are right... actually the API doc is more or less completely ignored and follows no standard. This is what I want to change.

Three things:
1) Doesn't the docblock parser already know which parameters are optional? It's already in the function definition, and the less we duplicate, the less the docblock can be out of date.
2) There is no such thing as void in PHP. The default return value/type of a function is null.
3) Do we have a list of docblock errors? It's easier for me to adapt if I see what I did wrong. The SVN diffs can help, but a separate page/log, maybe generated each day or week, would IMO be easier to check.
1) I'm not sure if all parsers know these, or show that they are optional... but from a usability point of view it's always good to have it visible in the comment too.
2) But when the function does not return at all, it's not even null. We can handle this as we want... but I think a @return should always be defined, even if nothing/null is returned, for usability's sake.

I added a new issue where the actual result of the coding standard check for core and incubator can be found. The number of tests can be found on this page, as described in the appendix.

Just a couple of comments (I couldn't find an email address to send these to, and I can't comment on ZF issues): If you would like to discuss these comments, please contact me at gsherwood at squiz dot net.

OK, I've grepped through all of trunk and I can't find any of these files in Subversion. I assume they've been removed?

The Zend sniffs in the ZF incubator have been brought to my attention because the package name (Zend) conflicts with the existing Zend Framework package name distributed with PHP_CodeSniffer.

"The Zend sniffs in the ZF incubator have been brought to my attention because the package name (Zend) conflicts with the existing Zend Framework package name distributed with PHP_CodeSniffer."

That's an odd way to put it. Your namespace scheme is strange to begin with, in that you have no claim to several of the namespaces you use (Zend, for example). Wouldn't a more logical structure be something like: where standards aggregate various sniffs? Well, anyway.

Yes, they were removed. Thanks to Thomas for
Also, the testbed in phpcs is not the official ZF testbed. But you would have noticed this reading through the comments added here.

And my reply was: If you guys do want to get the ZF standard out into the PHP community, I'm more than happy to include it in the PEAR release and get you access to maintain it.

Matthew has been working on a ZF CodeSniffer implementation of the ZF coding standards in svn (standard\incubator\tools\codingstandard); perhaps that one is a nice point to start off with.

How did you come to the idea that this is Matthew's work? I have spent much more than 50 hours on this code. It's definitely mine. And you should also have mentioned that this document is an RC and not official. This is the reason why my testbed (standard\incubator\tools\codingstandard) is still in the incubator and not released. Several things have to be cleared up, and the testbed has to be changed to reflect this. This document and the testbed will change. So don't rely on this for now.
http://framework.zend.com/wiki/display/ZFDEV/ZF+Coding+Standards+%28RC%29
Agenda
See also: IRC log

MikeS: I posted a draft of the spec that I have been working on. It would be helpful as a starting point.
<MikeSmith> <pimpbot> Title: HTML: The Markup Language (at)
MikeS: I realize very few have looked at it. Has anyone initial comments?
... Gives outline of abstract
... The doc defines authors, producers and consumers differently.
... Gives further details. No normative criteria; web browsers are not defined in terms of how they parse HTML. It is not intended to be an authoring guide.
... HTML syntax is described. Various MIME types are discussed. It's the same prose as defined in the current draft, pretty much. Optional BOMs are mentioned, etc.
... DOCTYPE, character encoding, etc. are defined. The remaining part of the spec is a list of HTML elements and their content models, attributes and values, etc.
<takkaria> I had a brief look, looked reasonable, but I would be worried people take it for normative
MikeS: In addition there is a section on common content models; phrase and prose content match block and inline content. Then definitions of sets of common attributes. Similar to the HTML 4 draft and other markup specs.
... The last part deals with ARIA markup, attribute sets, enumerated values for ARIA attributes. Semantics are undefined, as they are in the ARIA spec. Then an exhaustive list of named character references.
MM: Test kit being built.
MikeS: Not a schema?
MM: It's a grammar to build a parser.
MikeS: Interesting
MM: I will ask him to join the WG.
MS: Will you have more info next week?
MM: Yes
Adrian: Do you have a view as to how having this doc changes what the HTML 5 spec is/does?
MS: Right now, as far as content models and syntax description go, this matches what is in the HTML 5 draft. We want to keep things that way. We need to decide whether the current parts on semantics, content models, etc. should be kept there. We need to keep them in sync. As different docs have different editors there may not always be agreement.
... We want this to be normative.
If we were to go forward with a separate normative markup-language spec, there can only be one spec, so it would necessarily need to supersede anything else.
Adrian: This looks like a good start, in terms of a descriptive doc that talks about the language and not its use. However, how practical is this? How much of the text has been taken from the HTML 5 draft?
MS: This spec should not have a lot of non-normative content.
... It should not describe rendering behaviors normatively, or have too much description of rendering behavior, etc. Many say the current draft conflates the authoring and rendering domains. These are separate, so there is confusion. I'd like to have the markup spec not do this anymore. Separate some of the under-the-hood stuff from the user-manual aspect. I want to see the spec defined as an abstract language without processing assumptions.
Adrian: That is a good goal.
<DanC> (trying to construct a proof in my head that the language defined in Mike's draft is smaller than the language in Hixie's draft; hmm... don't think there is one... I think it's not actually a theorem. I think there are counter-examples)
Joshue thinks this may make it easier to understand for all concerned.
DanC: It's not smaller than the language Hixie defines as conformant.
DanC: In that docs conforming to his spec are conforming to yours.
MS: It is.
DanC: I don't think so.
<DanC> (other way around)
MS: You are right.
<DanC>
DanC: e.g. documents that misuse headings, cite, etc. are prohibited by the HTML 5 spec
MS: Discusses schemas, parsing of schemas, attribute model and pattern definitions. RelaxNG, etc.
... Programmatic extracts/additions of certain content via Schematron.
Josh unable to parse some statements.
Cynthia: I am curious why this is done that way?
<DanC> (I think having feedback between validation tools and the spec is good... though this is something of an extreme approach)
MS: It is circular. Not ideal.
Changes to the spec will go the other way, or at least not be one-way from validator to spec. If changes are made, the assertions that validator.nu makes will have to be changed to match the spec. At this point they are one-way.
Cynthia: It is reasonable to do this in order to get the spec out.
MS: It's about having a formal description of the language. Formalisms are currently prose descriptions in order not to lock in people who write a conformance checker. A high-level language is used so a tool can be designed around it loosely.
<DanC> (publishing the schema as a note is an interesting idea.)
MS: Hixie feels there should not be a normative schema for the language. Other builders would have a disincentive to build anything. All of these normative schemas for a language seemed to stop others from developing their own. We want to avoid this, having only one tool.
Cynthia: Yes, some need this behind a firewall.
MS: This can be done and works well.
Cynthia: We don't want to give an advantage to one set of schemas.
MM: When developing a formalism for HTML, we can build a grammar, define constraints, etc. It depends on what you are trying to do.
... The grammar needs to be correct. Stuff taken from different namespaces can be dealt with. Others have more rigorous purposes, and may not be public-facing. If the grammar can be examined as a more liberal version that conformance checkers want to use, then good stuff.
MS: Existing validators, and HTML 4, XHTML 1.0 and 1.1 (DTD-based validation tools)
MM: When you produce a DTD, the doc that accompanies it is produced alongside it. There are better formalisms to do this, etc.
MS: I understand. Validator.nu is doing a lot more than just conformance checking.
MM: Your claim that it does that is false.
<takkaria> you have to be very very careful that people don't start trying to consume HTML via a grammar rather than an implementation of the parsing algorithm
MS: I concede that; however, when a decent schema is available, with validation against a schema etc., there are more sophisticated tools. But the problem is that many see passing the validator as meaning their content is fit for purpose. Schema checking alone does not always mean your doc can be processed the way you want it to be.
MM: Again, this is false.
MS: I hear what you are saying. Other comments?
Cynthia: This is a good idea. It will be helpful.
MS: I think of the authoring guide as a way to make it clear to authors how to have their docs work on the web. It also needs to cover the DOM interface for scripting purposes. Real-world use cases, etc. This will keep the spec minimal. Move informative stuff into the authoring guide, etc.
MM: Then call it something else.
MS: No
Cynthia: It could have a subtitle?
MS: We have talked to developers and they want this.
+q
MM: How about a browsers guide, developers guide, etc.?
MS: We need a normative guide for browsers.
MM: You can't have that.
... I am not understanding this.
DanC: You said this was a spec for how UAs behave.
MM: Strong objection
-q
<MikeSmith> Joshue: some document that is specifically for authors, that cuts out a lot of the under-the-hood stuff, is in principle a good idea
MM: I am going to make this an issue.
<DanC> issue-61?
<trackbot> ISSUE-61 -- Conformance depends on author's intent -- RAISED
<trackbot>
<pimpbot> Title: ISSUE-61 - HTML Issue Tracking Tracker (at)
<DanC> maybe that's not so close to what Murray wanted on the issues list after all
<DanC> action-77?
<trackbot> ACTION-77 -- Michael(tm) Smith to lead HTML WG to response to TAG discussion and report back to TAG -- due 2008-10-30 -- OPEN
<trackbot>
<pimpbot> Title: ACTION-77 - HTML Issue Tracking Tracker (at)
MS: I did want to talk to the TAG list about this.
Let them know we have followed up on the discussion. I have an item to do this. This should take place on the public HTML list.
MM: I don't follow
MS: The action item is complete.
<DanC> ISSUE-59?
<trackbot> ISSUE-59 -- Should the HTML WG produce a separate document that is a normative language reference and if so what are the requirements -- RAISED
<trackbot>
<pimpbot> Title: ISSUE-59 - HTML Issue Tracking Tracker (at)
<DanC> (maybe that's closer)
MS: Let's take the rest of the discussion to the public HTML list.
@headers?
<pimpbot> Joshue: Huh?
<DanC> (just briefly, who has the ball on headers?)
<DanC> (the actions listed in seem stale.)
<pimpbot> Title: ISSUE-20 - HTML Issue Tracking Tracker (at)
<MikeSmith> Joshue: we are talking with PF about @headers and discussing how to move this along a little farther
<DanC> (hmm... so it sounds like anybody/somebody/nobody has the ball.)
<MikeSmith> ACTION: Joshue to prepare status report on @headers discussion by next week [recorded in]
<trackbot> Created ACTION-84 - Prepare status report on @headers discussion by next week [on Joshue O Connor - due 2008-11-20].
waves bye
<MikeSmith> we will have the telcon at the regular time next week, probably with ChrisWilson chairing
<MikeSmith> [adjourned]
http://www.w3.org/2008/11/13-html-wg-minutes.html
Has the transform changed since the last time the flag was set to 'false'?

A change to the transform can be anything that can cause its matrix to be recalculated: any adjustment to its position, rotation or scale. Note that operations which can change the transform will not actually check if the old and new value are different before setting this flag. So setting, for instance, transform.position will always set hasChanged on the transform, regardless of there being any actual change.

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    void Update()
    {
        if (transform.hasChanged)
        {
            print("The transform has changed!");
            transform.hasChanged = false;
        }
    }
}
https://docs.unity3d.com/kr/2017.1/ScriptReference/Transform-hasChanged.html
Unlike vsprintf(), the maximum number of characters that can be written to the buffer is specified in vsnprintf().

vsnprintf() prototype

int vsnprintf( char* buffer, size_t buf_size, const char* format, va_list vlist );

The vsnprintf() function writes the string pointed to by format to the character string buffer. The maximum number of characters that can be written is buf_size. After the characters are written, a terminating null character is added. If buf_size is equal to zero, nothing is written and buffer may be a null pointer.

The string format may contain format specifiers starting with % which are replaced by the values of variables that are passed as a list vlist.

It is defined in the <cstdio> header file.

vsnprintf() Parameters
- buffer: Pointer to a character string to write the result.
- buf_size: Maximum number of characters to write.
- format: Pointer to a null-terminated string specifying how to interpret the data.
- vlist: A variable argument list containing the data to print.

vsnprintf() Return value
- If successful, the vsnprintf() function returns the number of characters written.
- On failure it returns a negative value.
- When the length of the formatted string is greater than buf_size, it needs to be truncated. In such cases, the vsnprintf() function returns the total number of characters, excluding the terminating null character, which would have been written if the buf_size limit were not imposed.

Example: How the vsnprintf() function works

#include <cstdio>
#include <cstdarg>

void write(char* buf, int buf_size, const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, buf_size, fmt, args);
    va_end(args);
}

int main()
{
    char buffer[100];
    char fname[20] = "Bjarne";
    char lname[20] = "Stroustrup";
    char lang[5] = "C++";
    write(buffer, 27, "%s was created by %s %s\n", lang, fname, lname);
    printf("%s", buffer);
    return 0;
}

When you run the program, the output will be:

C++ was created by Bjarne
https://www.programiz.com/cpp-programming/library-function/cstdio/vsnprintf
[Date Index] [Thread Index] [Author Index]

Re: namespaces

- To: mathgroup at smc.vnet.net
- Subject: [mg121550] Re: namespaces
- From: "Harvey P. Dale" <hpd1 at nyu.edu>
- Date: Mon, 19 Sep 2011 07:06:21 -0400 (EDT)
- Delivered-to: l-mathgroup@mail-archive0.wolfram.com
- References: <201109180813.EAA06414@smc.vnet.net>

Many, but not all, of the functions in Combinatorica have been added to Mathematica's core. Subsets is one example.

Best,
Harvey

-----Original Message-----
From: Alan [mailto:alan.isaac at gmail.com]
Sent: Sunday, September 18, 2011 4:13 AM
To: mathgroup at smc.vnet.net
Subject: [mg121550]

- References:
- namespaces
- From: Alan <alan.isaac@gmail.com>
http://forums.wolfram.com/mathgroup/archive/2011/Sep/msg00389.html
Learn to Use ITensor

Version 1.x to 2.0 Transition Guide

Below are the major interface changes in ITensor version 2.0 versus 1.x. Most of this interface is already supported, although optional, in the 1.x branch. Following version 2.0 these changes are mandatory.

General Tips for Updating

When updating, remove your old options.mk file and create a new one from the options.mk.sample file. You can save a backup of your old file to recall your BLAS/LAPACK settings. It may not hurt to completely re-clone ITensor from github. For example, the include/ folder is no longer used in version 2.0 but may stay around on your machine if you upgrade by just doing a git pull.

Changes to Basic Interface

Note that some of these changes already work under version 1.3.x. However, they are mandatory following version 2.0.x.

There is now an "all.h" header which gives a convenient way to include the entire library.

    #include "itensor/all.h"

Include statements must now include paths to header files. Old-style version 1.x code

    #include "iqtensor.h"

should be replaced by

    #include "itensor/iqtensor.h"

The appropriate path to use is the actual location of the file under the ITensor source directory. MPS and DMRG related codes are in the mps/ subfolder, for example

    #include "itensor/mps/dmrg.h"

Use the .real and .cplx methods to access tensor elements. Old-style version 1.x code

    auto A = ITensor(i,j);
    ... //make changes to A
    Real val = A(i(2),j(3));

should be replaced by

    auto A = ITensor(i,j);
    ... //make changes to A
    //If A is known to be real
    auto val = A.real(i(2),j(3));
    //Or if A is complex
    auto val = A.cplx(i(2),j(3));

Note that the .cplx method always succeeds even if the tensor is purely real. If the tensor is a scalar (no indices) use .real() or .cplx() to retrieve its value.

Use the .set method to set tensor elements.
Old-style version 1.x code

    auto A = ITensor(i,j);
    A(i(2),j(3)) = 4.56;

should be replaced by

    auto A = ITensor(i,j);
    A.set(i(2),j(3),4.56);

One advantage of the new .set approach is that one can pass a real or complex number to .set, whereas it was more cumbersome to create a complex ITensor before.

Many previous ITensor and IQTensor class methods are now free functions.

- T.norm() is now norm(T)
- T.randomize() is now randomize(T)
- Prefer rank(T) to T.r()

The Vector and Matrix classes now have zero-indexed element access. If you prefer a 1-indexed interface you can use the Vector1 and Matrix1 classes.

The header svdalgs.h has been renamed to decomp.h.

Older codes may still be using the names "Opt" and "OptSet" for passing optional named arguments to functions. From version 2.0 on, these older names have been removed in favor of a single class called "Args". For more on using the Args system view this Args tutorial.

Changes to Advanced Features

The ITensor and IQTensor constructors taking a set of IndexVals (or IQIndexVals) and setting the corresponding element to 1.0 have been removed. Instead use the setElt function to make such tensors. For example, if i and j are Index objects

    auto P = setElt(i(1),j(2));

makes an ITensor P with the i(1),j(2) element set to 1.0 and the rest set to zero.

Combiner and IQCombiner are no longer distinct types, but just a type of sparse ITensor or IQTensor. To create a combiner which combines indices i, j write the code

    auto C = combiner(i,j);

To use the combiner, just contract it with a tensor having indices i and j.

    auto T = ITensor(i,k,j,l);
    auto S = C * T;

Creating IQCombiners works the same way, except i and j are of type IQIndex.
http://itensor.org/docs.cgi?page=v2transition_guide
kig

#include <kig_commands.h>

Detailed Description

This class monitors a set of DataObjects for changes and returns an appropriate ChangeObjectImpsCommand if necessary.

E.g. MovingMode wants to move certain objects, so it monitors all the parents of the explicitly moving objects. It then moves them around, and when it is finished, it asks to add the KigCommandTasks to a KigCommand, and applies that.

Definition at line 153 of file kig_commands.h.

Constructor & Destructor Documentation

All the DataObjects in objs will be watched.

Definition at line 211 of file kig_commands.cpp.

Definition at line 384 of file kig_commands.cpp.

Definition at line 243 of file kig_commands.cpp.

Member Function Documentation

Add the generated KigCommandTasks to the command comm. Monitoring stops after this is called.

Definition at line 227 of file kig_commands.cpp.

Add objs to the list of objs to be watched, and save their current imps.

Definition at line 217 of file kig_commands.cpp.
https://api.kde.org/4.14-api/kdeedu-apidocs/kig/html/classMonitorDataObjects.html
updated copyright years updated copyright-blacklist (added libltdl) updated distributed files (don't distribute files without distribution terms) added copyright to preforth.in and build-ec.in \ paths.fs path file handling 03may97jaw \ Copyright (C) 1995,1996,1997,1998,2000,2003,2004. \ : path-allot ( umax -- ) \ gforth \G @code{Allot} a path with @i{umax} characters capacity, initially empty. chars dup , 0 , allot ; [IFUNDEF] +place : +place ( adr len adr ) 2dup >r >r dup c@ char+ + swap move r> r> dup c@ rot + swap c! ; [THEN] [IFUNDEF] place : place ( c-addr1 u c-addr2 ) 2dup c! char+ swap move ; [THEN] \ create sourcepath 1024 chars , 0 , 1024 chars allot \ !! make this dynamic 0 avalue fpath ( -- path-addr ) \ gforth : make-path ( -- addr ) $400 chars dup 2 cells + allocate throw >r 0 swap r@ 2! r> ; : os-cold ( -- ) make-path to fpath pathstring 2@ fpath only-path init-included-files ; \ The path Gforth uses for @code{included} and friends. : also-path ( c-addr len path-addr -- ) \ gforth \G add the directory @i{c-addr len} to @i{path-addr}. >r \ len check r@ cell+ @ over + r@ @ u> ABORT" path buffer too small!" \ !! grow it \ copy into tuck r@ cell+ dup @ cell+ + swap cmove \ make delimiter 0 r@ cell+ dup @ cell+ + 2 pick + c! 1 + r> cell+ +! ; : clear-path ( path-addr -- ) \ gforth \G Set the path @i{path-addr} to empty. 0 swap cell+ ! ; : cell+ dup cell+ swap @ ; : next-path ( addr u -- addr1 u1 addr2 u2 ) \ addr2 u2 is the first component of the path, addr1 u1 is the rest 2dup 0 scan dup 0= IF 2drop 0 -rot 0 -rot EXIT THEN >r 1+ -rot r@ 1- -rot r> - ; : previous-path ( path^ -- ) \ !! "fpath previous-path" doesn't work dup path>string BEGIN tuck dup WHILE repeat ; : ; Create ofile 0 c, 255 chars allot Create tfile 0 c, 255 chars allot : pathsep? dup [char] / = swap [char] \ = or ; : need/ ofile dup c@ + c@ pathsep? 0= IF s" /" ofile +place THEN ; : extractpath ( adr len -- adr len2 ) BEGIN dup WHILE 1- 2dup + c@ pathsep? 
IF EXIT THEN REPEAT ; : remove~+ ( -- ) ofile count s" ~+/" string-prefix? IF ofile count 3 /string ofile place THEN ; : expandtopic ( -- ) \ stack effect correct? - anton \ expands "./" into an absolute name ofile count s" ./" string-prefix? IF ofile count 1 /string tfile place 0 ofile c! includefilename 2@ extractpath ofile place \ care of / only if there is a directory ofile c@ IF need/ THEN tfile count over c@ pathsep? IF 1 /string THEN ofile +place@ cmove r> endif endif + nip over - ; \ test cases: \ s" z/../../../a" compact-filename type cr \ s" ../z/../../../a/c" compact-filename type cr \ s" /././//./../..///x/y/../z/.././..//..//a//b/../c" compact-filename type cr : reworkdir ( -- ) remove~+ ofile count compact-filename nip ofile c! ; : open-ofile ( -- fid ior ) \G opens the file whose name is in ofile expandtopic reworkdir ofile count r/o open-file ; : check-path ( adr1 len1 adr2 len2 -- fid 0 | 0 ior ) 0 ofile ! >r >r ofile place need/ r> r> ofile +place place open-ofile dup 0= IF >r ofile count r> THEN EXIT ELSE r> -&37 >r path>string BEGIN next-path dup WHILE r> drop 5 pick 5 pick check-path dup 0= IF drop >r 2drop 2drop r> ofile count ;
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/kernel/paths.fs?rev=1.34;sortby=rev;f=h;only_with_tag=v0-7-0
08 March 2011 06:15 [Source: ICIS news]

By Junie Lin

Spot MMA prices broke free from 24 weeks of stagnation in late February, according to ICIS data. Surplus inventories of downstream light-guided panels (LGP) for the light emitting diode (LED) TV industry had kept MMA demand in the doldrums since September last year. But most producers anticipate a revival of demand that should support higher prices going forward.

On 4 March, MMA cargoes of 500 tonnes or more were assessed at $2,380-2,450/tonne (€1,714-1,764/tonne) CFR (cost and freight) SE (southeast) Asia, while cargoes of less than 500 tonnes were at $2,530-2,560/tonne CFR SE Asia, according to ICIS.

MMA sellers said they had successfully implemented an average price hike of $50-100/tonne on small to mid-sized buyers from the cast sheet and emulsion sectors this month. Initial resistance to aggressive price increases faded in the face of soaring costs of MMA feedstocks like methyl tertiary butyl ether (MTBE), market sources said.

"Prices should still have some room to go up in April, as feedstock costs had risen so much," said a trader.

MTBE prices had increased by more than 6% week on week to $1,050-1,100/tonne FOB (free on board) Singapore in the week ended 4 March, ICIS data showed. The surge in MTBE prices, which were firmly above the $1,000/tonne mark, was due to robust global oil futures, amid oil supply disruption from an OPEC producer.

MMA prices had to adjust accordingly for producers to generate margins, market sources said. Most producers were mulling raising prices in April by $100/tonne from March, expecting strong demand as flat-screen television manufacturers gear up production. Demand from this sector has been driving up demand for MMA and its downstream PMMA over the past years. PMMA has strong applications in the light-guided panels of flat-screen televisions that use the latest light emitting diode (LED) technology.
Meanwhile, tight supply due to outages at regional plants and scheduled turnarounds through the second quarter would also help push prices higher, market sources said. Fresh supply was expected in Additional reporting by Felicia Loo (
http://www.icis.com/Articles/2011/03/08/9441178/asia-mma-may-extend-gains-thru-april-on-high-feedstock-costs.html
About

Inko is an object-oriented programming language, focusing on making it fun and easy to write concurrent programs, without the headaches. It tries to achieve this by combining various features, such as its error handling model, a high performance garbage collector, the ability to easily perform concurrent tasks, and much more.

Inko draws inspiration from many other languages, such as: Smalltalk, Self, Ruby, Erlang, and Rust. Some of Inko's features are borrowed from these languages. For example, the concurrency model is heavily inspired by Erlang, and the use of message passing for if and the likes is taken from Smalltalk.

Inko is free and open source software, licensed under the Mozilla Public License version 2.0. This means you can not only install and use Inko, but you are also free to modify and redistribute it.

Features

Inko has a variety of features that make it stand out compared to other programming languages.

Writing concurrent tasks is done using lightweight processes. Each process has its own heap, and processes communicate via message passing.

    import std::process
    import std::stdio::stdout

    let sender = process.channel!(String) lambda (receiver) {
      # This will print "Hello world!" to STDOUT.
      stdout.print(receiver.receive)
    }

    sender.send('Hello world!')

The virtual machine uses preemptive multitasking, ensuring every process is given a fair and equal amount of time to do its work. This prevents a single process from blocking an OS thread indefinitely.

    import std::process

    let mut remaining = 100

    # This will spawn 100 processes, all spinning forever, without blocking OS
    # threads indefinitely.
    { remaining > 0 }.while_true {
      process.spawn {
        {}.loop
      }

      remaining -= 1
    }

Inko's error handling model prevents unexpected runtime errors from occurring, forcing you to handle errors directly at the call site. Blocks (methods, closures, and lambdas) can only throw an error of a single type.
This drastically simplifies error handling, as you no longer need to catch potentially dozens of radically different errors. Sending a message that might throw requires you to start the expression with the try keyword.

    import std::fs::file
    import std::stdio::stdout

    def read_file(path: String) -> String {
      # If file.read_only() throws, we simply return an empty String.
      let handle = try file.read_only(path) else return ''

      # handle.read_string might fail, in which case we will again return an
      # empty String.
      try handle.read_string else ''
    }

    stdout.print(read_file('README.md'))

Inko also lets you terminate the program immediately upon encountering an error; this is known as a "panic". Panics can be useful if there is no proper way of responding to an error during runtime, such as a division by zero error. This can be done using the try! keyword.

    import std::fs::file
    import std::stdio::stdout

    let handle = try! file.read_only(path)

    stdout.print(try! handle.read_string)

Class-like objects can be defined, and traits can be used to define reusable behaviour and requirements that must be met by objects. Inheritance is not supported, preventing objects from being coupled together too tightly.

    trait Greet {
      # This method is required, and must be implemented by objects that
      # implement this trait.
      def name -> String

      # This method comes with a default implementation. Objects are free to
      # redefine it, as long as the signature is still compatible.
      def greet -> String {
        'Hello ' + name
      }
    }

    object Person impl Greet {
      def init(name: String) {
        # This is an instance attribute, called an "instance variable" in
        # languages such as Ruby and Smalltalk. These variables are available
        # to instances of the object that defines them (a Person instance in
        # this case).
        #
        # Instance attributes can not be accessed outside of an object.
        # Instead, you have to define a method that returns an instance
        # attribute, should you want to expose the value.
        let @name = name
      }

      def name -> String {
        @name
      }
    }

    let alice = Person.new('Alice')

    alice.greet # => 'Hello Alice'

Traits can be implemented for previously defined objects, allowing you to extend their behaviour.

    import std::conversion::ToString

    object Person {
      def init(name: String) {
        let @name = name
      }

      def name -> String {
        @name
      }
    }

    impl ToString for Person {
      def to_string -> String {
        @name
      }
    }

    let alice = Person.new('Alice')

    alice.to_string # => 'Alice'

Instead of using statements, Inko uses message passing for (almost) everything. This means there are no if or while statements; instead you send messages to objects. This allows objects to determine how these messages should behave, making it easy and natural to implement patterns such as the Null Object pattern.

    import std::stdio::stdout

    object NullUser {
      def if_true!(R)(block: do -> R) -> ?R {
        Nil
      }

      def if_false!(R)(block: do -> R) -> ?R {
        block.call
      }

      def if!(R)(true: do -> R, false: do -> R) -> R {
        false.call
      }
    }

    let user = NullUser.new

    # This would print "nay" to STDOUT.
    user.if true: { stdout.print('yay') }, false: { stdout.print('nay') }

Inko uses gradual typing, with static typing being the default. This means that by default you get all the benefits (e.g. safety) of a statically typed language, but are free to trade this for the flexibility of a dynamically typed language. This allows you to, for example, build a prototype using dynamic typing, then switch to static typing when you have a better understanding of the variables involved.

Last but not least, most of Inko is written in Inko itself. For example, this is the implementation of String.starts_with?:

    def starts_with?(prefix: String) -> Boolean {
      prefix.length > length
        .if_true { return False }

      slice(0, prefix.length) == prefix
    }

This makes it easier to contribute changes, debug problems, optimise code, and test the capabilities of Inko as a language.
Overall we believe this leads to a better programming language, compared to implementing most of it in a different language (e.g. Rust, the language the virtual machine is written in).
https://inko-lang.org/about/
On Sep 25, 2002 15:44 -0400,. One known problem - don't use it with NFS on top - it will overflow the stack in do_split (it may even overflow without NFS, but in the presence of interrupts or something). A sub-optimal fix is below (the "optimal" fix is not yet working, but avoids the allocation and the potential panic ;-). The diff is only approximately a real diff, but you can get the general idea (it may even apply, I don't know). Cheers, Andreas ======================================================================= --- linux/fs/ext3/namei.c.orig 25 Jul 2002 21:35:21 -0000 1.10 +++ linux/fs/ext3/namei.c 23 Sep 2002 23:44:13 -0000 1.12 @@ -778,7 +778,7 @@ u32 newblock; unsigned MAX_DX_MAP = PAGE_CACHE_SIZE/EXT3_DIR_REC_LEN(1) + 1; u32 hash2; - struct dx_map_entry map[MAX_DX_MAP]; + struct dx_map_entry *map; char *data1 = (*bh)->b_data, *data2, *data3; unsigned split; ext3_dirent *de, *de2; @@ -798,6 +798,9 @@ data2 = bh2->b_data; + map = kmalloc(sizeof(*map) * MAX_DX_MAP, GFP_KERNEL); + if (!map) + panic("no memory for do_split\n"); count = dx_make_map ((ext3_dirent *) data1, blocksize, map); split = count/2; // need to adjust to actual middle dx_sort_map (map, count); @@ -828,6 +831,7 @@ brelse (bh2); ext3_journal_dirty_metadata (handle, frame->bh); dxtrace(dx_show_index ("frame", frame->entries)); + kfree(map); return de; } #endif -- Andreas Dilger
https://www.redhat.com/archives/ext3-users/2002-September/msg00146.html
#include <sys/cdefs.h> #include <arch/types.h> Go to the source code of this file. Stack traces. The functions in this file deal with doing stack traces. These functions will do a stack trace, as specified, printing it out to stdout (usually a dcload terminal). These functions only work if frame pointers have been enabled at compile time (-DFRAME_POINTERS and no -fomit-frame-pointer flag). Do a stack trace from the current function. This function does a stack trace from the the specified frame pointer, printing the results to stdout. This could be used for doing something like stack tracing a main thread from inside an IRQ handler.
http://cadcdev.sourceforge.net/docs/kos-2.0.0/stack_8h.html
python-cephclient 0.1.0.5

A client library in python for the Ceph REST API.

python-cephclient is a python module to communicate with Ceph’s REST API (ceph-rest-api). This is currently a work in progress.

ABOUT

Client

The cephclient class takes care of sending calls to the API through HTTP and handles the responses. It supports queries for JSON, XML, plain text or binary.

Wrapper

The wrapper class extends the client and provides helper functions to communicate with the API. Nothing prevents you from calling the client directly exactly like the wrapper does. The wrapper exists for convenience.

Development, Feedback, Bugs

Want to contribute? Feel free to send pull requests!

Have problems, bugs, feature ideas? I am using the github issue tracker to manage them.

HOW TO USE

Installation

Install the package through pip:

pip install python-cephclient

Installation does not work? python-cephclient depends on lxml, which itself depends on some packages. To install lxml’s dependencies on Ubuntu:

apt-get install python-dev libxml2-dev libxslt-dev

Instantiate CephWrapper:

from cephclient.wrapper import *

wrapper = CephWrapper(
    endpoint = '',
    debug = True  # Optionally increases the verbosity of the client
)

Do your request and specify the response type you are expecting. Either json, xml, text (default) or binary are available.
json:

response, body = wrapper.get_fsid(body = 'json')
print('Response: {0}, Body:\n{1}'.format(response, json.dumps(body, indent=4, separators=(',', ': '))))
====
Response: <Response [200]>, Body:
{
    "status": "OK",
    "output": {
        "fsid": "d5252e7d-75bc-4083-85ed-fe51fa83f62b"
    }
}

xml:

response, body = wrapper.get_fsid(body = 'xml')
print('Response: {0}, Body:\n{1}'.format(response, etree.tostring(body, pretty_print=True)))
====
Response: <Response [200]>, Body:
<response>
  <output>
    <fsid><fsid>d5252e7d-75bc-4083-85ed-fe51fa83f62b</fsid></fsid>
  </output>
  <status>
    OK
  </status>
</response>

text:

response, body = wrapper.get_fsid(body = 'text')
print('Response: {0}, Body:\n{1}'.format(response, body))
====
Response: <Response [200]>, Body:
d5252e7d-75bc-4083-85ed-fe51fa83f62b

binary:

response, body = wrapper.mon_getmap(body = 'binary')

RELEASE NOTES

0.1.0.5

dmsimard:
- Add missing dependency on the requests library
- Some PEP8 and code standardization cleanup
- Add root “PUT” methods
- Add mon “PUT” methods
- Add mds “PUT” methods
- Add auth “PUT” methods

Donald Talton:
- Add osd “PUT” methods

0.1.0.4
- Fix setup and PyPi installation

0.1.0.3
- GET API calls under ‘/tell’ have been implemented.
- GET API calls that are in root (/) have been renamed to be coherent with incoming future development

0.1.0.2
- Implemented or fixed missing GET calls (All API GET calls that are not under the ‘/tell’ namespace are now supported)
- Client can optionally raise an exception when requesting an unsupported body type for a provided API call (ex: requesting json through the wrapper for a call that is known to only return binary will raise an exception)
- Client now supports binary type responses (ex: crush map, mon map, etc)
- Improved the README (!)
0.1.0.1
- First public release of python-cephclient

- Author: David Moreau Simard
- Bug Tracker:
- Keywords: ceph rest api ceph-rest-api client library
- License: Apache License, Version 2.0
- Categories
- Package Index Owner: dmsimard
- DOAP record: python-cephclient-0.1.0.5.xml
https://pypi.python.org/pypi/python-cephclient
A computer algebra system written in pure Python. To get started with contributing

Hi there, novice SymPy user here. (Starting to use it as part of teaching maths at a uni.) Before posting this as an issue: I suspect that log_to_real() does not quite convert complex logarithms to real functions with the same derivative, and - consequently - some anti-derivatives in the real domain are not real. The issue seems to be that it uses SymPy's log, which is the principal complex branch extended from the left upper quadrant to the negative half-axis, in other words: log(-1) = I*pi. In particular:

x = symbols("x", real=True)
integrate(1/x, x)
log(x)
integrate(1/x, x).subs(x, -1)
I*pi

This seems to come from the fact that log_to_real() returns log for log, which is not a real function having the same derivative: that would be log(abs). Both functions differ by a locally (!) constant non-real function and (thus) have the same derivative. Only log(abs) is a real function. Now, SymPy is either not really happy with abs or I'm holding it wrong:

diff(log(abs(x)), x)
sign(x)/Abs(x)

And this should simplify to 1/x (in the reals) but does not. Are my assumptions wrong?

Hello everyone, I am having trouble preventing expressions with brackets getting evaluated, would greatly appreciate any help on this issue thread on github

Hi all, I am trying to port this - - from Maxima to SymPy. Most of the work is done, but I need some help to write a function "isatom" that works correctly... here is how it should work: after running this,

from sympy import Function, Symbol, dsolve
f = Function('f')
x = Symbol('x')
od = f(x).diff(x, x) + f(x)

then isatom(5), isatom("foo"), isatom(x), and isatom(f) should all return True, but isatom(od) should return False...
This almost works:

from sympy import Function, Symbol, AtomicExpr
isatom = lambda o: isinstance(o, int) or isinstance(o, str) or isinstance(o, AtomicExpr)

but in the test above f is an "UndefinedFunction", and all my attempts to make a version of isatom that recognizes UndefinedFunctions as atoms have failed. Any hints?...
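One version that treats undefined functions as atoms (a sketch, checked against recent SymPy, where Function('f') builds a class whose metaclass is UndefinedFunction — so the isinstance check has to target the metaclass):

```python
from sympy import Function, Symbol, AtomicExpr
from sympy.core.function import UndefinedFunction

def isatom(o):
    # Plain Python ints and strings count as atoms.
    if isinstance(o, (int, str)):
        return True
    # f = Function('f') creates a *class* whose metaclass is UndefinedFunction,
    # so isinstance(f, UndefinedFunction) is the check that usually trips people up.
    if isinstance(o, UndefinedFunction):
        return True
    # Symbols, numbers, etc. are instances of AtomicExpr; compound
    # expressions like f(x).diff(x, x) + f(x) are not.
    return isinstance(o, AtomicExpr)

f = Function('f')
x = Symbol('x')
od = f(x).diff(x, x) + f(x)

print([isatom(o) for o in (5, "foo", x, f)])  # [True, True, True, True]
print(isatom(od))                             # False
```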
https://gitter.im/sympy/sympy?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
CC-MAIN-2022-40
refinedweb
370
60.24
We now have a ReSharper plugin (for version 7.1) for Essential Studio ASP.NET MVC that provides some excellent tips and warnings on how to fix errors as you develop using our MVC controls. Here are some scenarios where our plugin provides warnings as well as fixes for some issues.

Ensure appropriate HttpHandlers and Handlers

The plugin ensures that the appropriate HTTP handlers are added to the web.config file when the Syncfusion.Shared.MVC.dll is referenced. If the HTTP handlers are missing in the web.config file, it reports an error in the Solution Errors window. The error can then be resolved using quick fixes, which will add the necessary handlers to web.config automatically. There are also warnings in your CSHTML code from where you can initiate the fix.

The screenshot below illustrates error highlighting for missing HTTP handlers.

The screenshot below illustrates the quick fix option to add the appropriate HTTP handlers.

Ensure compatible MVC and Syncfusion references

When you reference Syncfusion assemblies, you will face issues if you don’t reference the right variant of the Syncfusion assembly matching the MVC framework version you have used in your project. For example, Syncfusion assembly version “11.144.0.21” should be used with MVC framework 4, while “11.134.0.21” should be used with MVC framework 3. Using it the other way around will cause errors during runtime. The plugin now detects this conflict and warns you in web.config when a discrepancy like this is detected. Quick fixes are also available which will update the version numbers to the right one.

The screenshot below shows the error highlighting for an invalid assembly version number.

The screenshot below shows the quick fix option to correct the issue.

Ensure all required assemblies are referenced

Usually ReSharper adds an assembly reference and a namespace via the quick fix for the “Cannot resolve symbol” warning on an unidentified type, as shown below:

You can then use the quick fix to add the missing reference.
This plugin will also add all the dependent assemblies to your project. For example, PagingParams is an unidentified Syncfusion class, and ReSharper provides a quick fix for this as shown below:

When executing the quick fix, the plugin adds all of the required dependencies:

· Syncfusion.Core
· Syncfusion.Grid.Mvc
· Syncfusion.Shared.Mvc
· Syncfusion.Linq.Base
· Syncfusion.Theme.Base

The dependencies and namespaces are added to the project automatically.

The current plugin version is 1.0. It has been tested with Visual Studio 2012 and 2010, and supports ReSharper 7.1.

Download the plugin here: Syncfusion ReSharper Plugin
https://blog.syncfusion.com/post/plugin-for-resharper-supports-syncfusion-mvc-projects.aspx
AIO_CANCEL(3)              Linux Programmer's Manual             AIO_CANCEL(3)

NAME
       aio_cancel - cancel an outstanding asynchronous I/O request

SYNOPSIS
       #include <aio.h>

       int aio_cancel(int fd, struct aiocb *aiocbp);

       Link with -lrt.

DESCRIPTION
       The aio_cancel() function attempts to cancel outstanding asynchronous
       I/O requests for the file descriptor fd. If aiocbp is NULL, all such
       requests are canceled; otherwise, only the request described by the
       control block pointed to by aiocbp is canceled.

ERRORS
       EBADF  fd is not a valid file descriptor.

       ENOSYS aio_cancel() is not implemented.

ATTRIBUTES
       ┌──────────────┬───────────────┬─────────┐
       │ Interface    │ Attribute     │ Value   │
       ├──────────────┼───────────────┼─────────┤
       │ aio_cancel() │ Thread safety │ MT-Safe │
       └──────────────┴───────────────┴─────────┘

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008.

NOTES
       See aio(7).

SEE ALSO
       aio_error(3), aio_fsync(3), aio_read(3), aio_return(3),
       aio_suspend(3), aio_write(3), lio_listio(3), aio(7)

COLOPHON
       This page is part of release 5.11 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at.

Linux                             2021-03-22                     AIO_CANCEL(3)

Pages that refer to this page: aio_error(3), aio_fsync(3), aio_read(3),
aio_return(3), aio_suspend(3), aio_write(3), lio_listio(3), aio(7),
system_data_types(7)
https://man7.org/linux/man-pages/man3/aio_cancel.3.html
I’m trying this simple idea:

l = lambda { yield + 1 }
l.call { 2 } # I was expecting 3

Expecting the result to be 3, but I get LocalJumpError: no block given (yield). So I can’t pass a block ({ 2 } in the example) to a lambda in the same way I would do with a method. The same example with methods would be:

def m
  yield + 1
end

m { 2 } # 3

What I’m doing for now is to make my lambda receive a callback:

l = lambda { |defer| defer.call + 1 }
l.call(lambda { 2 }) # 3

But it would have been so nice to use the block instead. So, does anybody know if this is all there is to say about this, or is it actually possible and I’m making a mistake?
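For what it’s worth, a bare yield really is method-only, but a lambda can still receive a block if it names one explicitly with an & parameter (standard Ruby since 1.9; sketched here):

```ruby
# A lambda can't use a bare `yield`, but it can capture the block it is
# called with by declaring an explicit &block parameter.
l = lambda { |&blk| blk.call + 1 }
puts l.call { 2 }   # => 3

# The same works with the stabby-lambda syntax:
m = ->(&blk) { blk.call + 1 }
puts m.call { 2 }   # => 3
```

So the callback workaround isn’t strictly necessary; the & parameter is just the callback made implicit.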
https://www.ruby-forum.com/t/can-procs-or-lambdas-receive-blocks/261547
Swish.QName

Description

This module defines an algebraic datatype for qualified names (QNames), which represents a URI as the combination of a namespace URI and a local component (LName), which can be empty. Although RDF supports using IRIs, the use of URI here precludes this, which means that, for instance, LName only accepts a subset of valid characters. There is currently no attempt to convert from an IRI into a URI.

Synopsis

Documentation

A qualified name, consisting of a namespace URI and the local part of the identifier, which can be empty.

The serialisation of a QName is formed by concatenating the two components.

Prelude> :set prompt "swish> "
swish> :set -XOverloadedStrings
swish> :m + Swish.QName
swish> let qn1 = "" :: QName
swish> let qn2 = "" :: QName
swish> let qn3 = "" :: QName
swish> let qn4 = "" :: QName
swish> let qn5 = "" :: QName
swish> map getLocalName [qn1, qn2, qn3, qn4, qn5]
["","bob","fred","x","fred:joe"]
swish> getNamespace qn1
swish> getNamespace qn2
swish> getNamespace qn3
swish> getNamespace qn4

Instances

A local name, which can be empty.

At present, the local name can not contain a space character and can only contain ascii characters (those that match isAscii). In version 0.9.0.3 and earlier, the following characters were not allowed in local names: '#', ':', or '/' characters.

This is all rather experimental.

Instances

emptyLName :: LName

The empty local name.

Arguments

Create a new qualified name.

getNamespace :: QName -> URI

Return the URI of the namespace stored in the QName. This does not contain the local component.

getLocalName :: QName -> LName

Return the local component of the QName.

getQNameURI :: QName -> URI

Returns the full URI of the QName (ie the combination of the namespace and local components).

qnameFromFilePath :: FilePath -> IO QName

Convert a filepath to a file: URI stored in a QName.
If the input file path is relative then the current working directory is used to convert it into an absolute path. If the input represents a directory then it *must* end in the directory separator - so for Posix systems use "/foo/bar/" rather than "/foo/bar". This has not been tested on Windows.
http://hackage.haskell.org/package/swish-0.9.0.10/docs/Swish-QName.html
The Silverlight Toolkit includes a couple new panels, one of which is the incredibly handy WrapPanel.

[ Updated Jan. 4 to include working example ]

In its most straightforward use, the WrapPanel allows you to add UIElements which it positions sequentially (typically left to right) until there is insufficient space, at which time it creates a new row beneath the most recent; that is, it wraps.

Creating a simple test program reveals a few interesting aspects about the wrap panel. To do this, open Visual Studio and create a new Silverlight Application. Be sure to add a reference to the Microsoft.Windows.Controls.dll that came with the Toolkit (the trick is remembering where you’ve put them!) You’ll also need to create a namespace at the top of Page.xaml, but IntelliSense is prepared to be of some assistance.

That done, my sample program creates a wrap panel with a single button in it; here’s the complete Xaml listing:

<UserControl x:Class="WrapPanel.Page"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:controls="clr-namespace:Microsoft.Windows.Controls;
        assembly=Microsoft.Windows.Controls"
    Width="800" Height="600">
  <Grid x:Name="LayoutRoot">
    <controls:WrapPanel x:Name="WPanel">
      <Button x:Name="Add" Content="Add" />
    </controls:WrapPanel>
  </Grid>
</UserControl>

The goal of the code that supports this page is that when the user clicks on the Add button one of four controls is randomly chosen, created and added to the panel.

using System;
using System.Windows;
using System.Windows.Controls;

namespace WrapPanel
{
   public partial class Page : UserControl
   {
      private Random rand = new Random();

      public Page()
      {
         InitializeComponent();
         Add.Click += new RoutedEventHandler( Add_Click );
      }

To support that goal, a private member variable of type Random is added, as is an event handler for the "Add" button that was created in the Xaml. All the interesting work happens in the event handler.
If you start by creating controls all of the same size,

void Add_Click( object sender, RoutedEventArgs e )
{
   Button b = new Button();
   b.Content = DateTime.Now.ToLocalTime().ToString();
   b.Width = 120;
   b.Height = 35;
   b.Margin = new Thickness( 5 );
   this.WPanel.Children.Add( b );
}

you get a very uniform appearance as each control is added until there is not enough room and then a new row is begun. (In the example shown I’ve made the control somewhat smaller and the Add button the same size as the buttons I’m creating.)

If, instead, you create different objects, then different things begin to happen. In the next example I’ll create one of four types of objects:

void Add_Click( object sender, RoutedEventArgs e )
{
   int choice = rand.Next( 0, 4 );
   Control c;
   switch ( choice )
   {
      case 0:
         // make a button
      case 1:
         // make a text box
      case 2:
         // make a short list
      case 3:
      default:
         // make a password control
   }

   // set the font size
   // set the width and height and margin
   // add the control to the wrap panel
}

A few interesting questions arise. What if I set their font size randomly, and what if I set the height to auto (you do this programmatically by setting it to the value double.NaN)?
void Add_Click( object sender, RoutedEventArgs e )
{
   /*
   Button b = new Button();
   b.Content = DateTime.Now.ToLocalTime().ToString();
   b.Width = 120;
   b.Height = 35;
   b.Margin = new Thickness( 5 );
   this.WPanel.Children.Add( b );
   */

   int choice = rand.Next( 0, 4 );
   Control c;
   switch ( choice )
   {
      case 0:
         Button b = new Button();
         b.Content = DateTime.Now.ToLocalTime().ToString();
         c = b;
         break;
      case 1:
         TextBox t = new TextBox();
         t.Text = rand.Next( 0, int.MaxValue - 1 ).ToString();
         c = t;
         break;
      case 2:
         ItemsControl i = new ItemsControl();
         i.Items.Add( "Cormac McCarthy" );
         i.Items.Add( "Neal Stephenson" );
         i.Items.Add( "Marcel Proust" );
         i.Items.Add( "Virginia Woolfe" );
         c = i;
         break;
      case 3:
      default:
         PasswordBox p = new PasswordBox();
         p.Height = 35;
         p.Password = "Secret";
         c = p;
         break;
   }

   c.FontSize = rand.Next( 6, 18 );
   if ( c is ItemsControl )
   {
      c.FontSize = Math.Max( c.FontSize, 10 );
   }
   c.Width = Double.NaN;
   c.Height = Double.NaN;
   c.Margin = new System.Windows.Thickness( 5 );
   this.WPanel.Children.Add( c );
}

I find the results interesting, though not necessarily what I want. One approach is to fix the height of the buttons, which gives another, also interesting effect. A quick way to try this is to make a small code modification:

if ( c is ItemsControl )
{
   c.FontSize = rand.Next( 8, 18 );
   c.Height = Double.NaN;
   // c.FontSize = Math.Max( c.FontSize, 10 );
}
else
{
   c.FontSize = 12;
   c.Height = 30;
}

This causes the other controls not to resize but to center on the larger list.

By using panels within panels, you can of course get just about any effect you like. (I’ve not posted the source code because it’s all here.)
http://jesseliberty.com/2009/01/02/silverlight-toolkit-wrappanel/
{-# LANGUAGE ScopedTypeVariables #-}

-- | Basic pipe combinators.
module Control.Pipe.Combinators (
  -- ** Control operators
  tryAwait,
  forP,
  -- ** Composition
  ($$),
  -- ** Producers
  fromList,
  -- ** Folds
  -- | Folds are pipes that consume all their input and return a value. Some of
  -- them, like 'fold1', do not return anything when they don't receive any
  -- input at all. That means that the upstream return value will be returned
  -- instead.
  --
  -- Folds are normally used as 'Consumer's, but they are actually polymorphic
  -- in the output type, to encourage their use in the implementation of
  -- higher-level combinators.
  fold,
  fold1,
  consume,
  consume1,
  -- ** List-like pipe combinators
  -- The following combinators are analogous to the corresponding list
  -- functions, when the stream of input values is thought of as a (potentially
  -- infinite) list.
  take,
  drop,
  takeWhile,
  takeWhile_,
  dropWhile,
  intersperse,
  groupBy,
  filter,
  -- ** Other combinators
  pipeList,
  nullP,
  feed,
  ) where

import Control.Applicative
import Control.Monad
import Control.Pipe
import Control.Pipe.Exception
import Data.Maybe
import Prelude hiding (until, take, drop, concatMap, filter, takeWhile, dropWhile, catch)

-- | Like 'await', but returns @Just x@ when the upstream pipe yields some value
-- @x@, and 'Nothing' when it terminates.
--
-- Further calls to 'tryAwait' after upstream termination will keep returning
-- 'Nothing', whereas calling 'await' will terminate the current pipe
-- immediately.
tryAwait :: Monad m => Pipe a b m (Maybe a)
tryAwait = catch (Just <$> await) $ \(_ :: BrokenUpstreamPipe) -> return Nothing

-- | Execute the specified pipe for each value in the input stream.
--
-- Any action after a call to 'forP' will be executed when upstream terminates.
forP :: Monad m => (a -> Pipe a b m r) -> Pipe a b m ()
forP f = tryAwait >>= maybe (return ()) (\a -> f a >> forP f)

-- | Connect producer to consumer, ignoring producer return value.
infixr 5 $$

($$) :: Monad m => Pipe x a m r' -> Pipe a y m r -> Pipe x y m (Maybe r)
p1 $$ p2 = (p1 >> return Nothing) >+> fmap Just p2

-- | Successively yield elements of a list.
fromList :: Monad m => [a] -> Pipe x a m ()
fromList = mapM_ yield

-- | A pipe that terminates immediately.
nullP :: Monad m => Pipe a b m ()
nullP = return ()

-- | A fold pipe. Apply a binary function to successive input values and an
-- accumulator, and return the final result.
fold :: Monad m => (b -> a -> b) -> b -> Pipe a x m b
fold f = go
  where go x = tryAwait >>= maybe (return x) (go . f x)

-- | A variation of 'fold' without an initial value for the accumulator. This
-- pipe doesn't return any value if no input values are received.
fold1 :: Monad m => (a -> a -> a) -> Pipe a x m a
fold1 f = tryAwait >>= maybe discard (fold f)

-- | Accumulate all input values into a list.
consume :: Monad m => Pipe a x m [a]
consume = pipe (:) >+> (fold (.) id <*> pure [])

-- | Accumulate all input values into a non-empty list.
consume1 :: Monad m => Pipe a x m [a]
consume1 = pipe (:) >+> (fold1 (.) <*> pure [])

-- | Act as an identity for the first 'n' values, then terminate.
take :: Monad m => Int -> Pipe a a m ()
take n = replicateM_ n $ await >>= yield

-- | Remove the first 'n' values from the stream, then act as an identity.
drop :: Monad m => Int -> Pipe a a m r
drop n = replicateM_ n await >> idP

-- | Apply a function with multiple return values to the stream.
pipeList :: Monad m => (a -> [b]) -> Pipe a b m r
pipeList f = forever $ await >>= mapM_ yield . f

-- | Act as an identity as long as inputs satisfy the given predicate.
-- Return the first element that doesn't satisfy the predicate.
takeWhile :: Monad m => (a -> Bool) -> Pipe a a m a
takeWhile p = go
  where go = await >>= \x -> if p x then yield x >> go else return x

-- | Variation of 'takeWhile' returning @()@.
takeWhile_ :: Monad m => (a -> Bool) -> Pipe a a m ()
takeWhile_ p = takeWhile p >> return ()

-- | Remove inputs as long as they satisfy the given predicate, then act as an
-- identity.
dropWhile :: Monad m => (a -> Bool) -> Pipe a a m r
dropWhile p = (takeWhile p >+> discard) >>= yield >> idP

-- | Yield Nothing when an input satisfying the predicate is received.
intersperse :: Monad m => (a -> Bool) -> Pipe a (Maybe a) m r
intersperse p = forever $ do
  x <- await
  when (p x) $ yield Nothing
  yield $ Just x

-- | Group input values by the given predicate.
groupBy :: Monad m => (a -> a -> Bool) -> Pipe a [a] m r
groupBy p = streaks >+> createGroups
  where
    streaks = await >>= \x -> yield (Just x) >> streaks' x
    streaks' x = do
      y <- await
      unless (p x y) $ yield Nothing
      yield $ Just y
      streaks' y
    createGroups = forever $
      takeWhile_ isJust >+>
      pipe fromJust >+>
      (consume1 >>= yield)

-- | Remove values from the stream that don't satisfy the given predicate.
filter :: Monad m => (a -> Bool) -> Pipe a a m r
filter p = forever $ takeWhile_ p

-- | Feed an input element to a pipe.
feed :: Monad m => a -> Pipe a b m r -> Pipe a b m r
-- this could be implemented as
-- feed x p = (yield x >> idP) >+> p
-- but this version is more efficient
feed _ (Pure r) = return r
feed _ (Throw e) = throw e
feed a (Free c h) = case go a c of
    (False, p) -> p >>= feed a
    (True, p) -> join p
  where
    go a (Await k) = (True, return $ k a)
    go _ (Yield y c) = (False, yield y >> return c)
    go _ (M m s) = (False, liftP s m)
http://hackage.haskell.org/package/pipes-core-0.0.1/docs/src/Control-Pipe-Combinators.html
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 4.1, “How to create a primary constructor in a Scala class.”

Problem

You want to create a primary constructor for a Scala class, and you quickly find that the approach is different than Java.

Solution

The primary constructor of a Scala class is a combination of the constructor parameters, the methods that are called in the body of the class, and the statements and expressions that are executed in the body of the class.

The following class demonstrates constructor parameters, class fields, and statements in the body of a class:

class Person(var firstName: String, var lastName: String) {

  println("the constructor begins")

  // some class fields
  private val HOME = System.getProperty("user.home")
  var age = 0

  // some methods
  override def toString = s"$firstName $lastName is $age years old"

  def printHome { println(s"HOME = $HOME") }
  def printFullName { println(this) }  // uses toString

  printHome
  printFullName
  println("still in the constructor")
}

Because the methods in the body of the class are part of the constructor, when an instance of a Person class is created, you’ll see the output from the println statements at the beginning and end of the class declaration, along with the call to the printHome and printFullName methods near the bottom of the class:

scala> val p = new Person("Adam", "Meyer")
the constructor begins
HOME = /Users/Al
Adam Meyer is 0 years old
still in the constructor

Discussion

If you’re coming to Scala from Java, you’ll find that the process of declaring a primary constructor in Scala is quite different. In Java it’s fairly obvious when you’re in the main constructor and when you’re not, but Scala blurs this distinction. However, once you understand the approach, it also makes your class declarations more concise than Java class declarations.

In the example shown, the two constructor arguments firstName and lastName are defined as var fields, which means that they’re variable, or mutable; they can be changed after they’re initially set. Because the fields are mutable, Scala generates both accessor and mutator methods for them.
As a result, given an instance p of type Person, you can change the values like this:

p.firstName = "Scott"
p.lastName = "Jones"

and you can access them like this:

println(p.firstName)
println(p.lastName)

Because the age field is declared as a var, it’s also visible, and can be mutated and accessed:

p.age = 30
println(p.age)

The field HOME is declared as a private val, which is like making it private and final in a Java class. As a result, it can’t be accessed directly by other objects, and its value can’t be changed.

When you call a method in the body of the class — such as the call near the bottom of the class to the printFullName method — that method call is also part of the constructor. You can verify this by compiling the code to a Person.class file with scalac, and then decompiling it back into Java source code with a tool like the JAD decompiler. After doing so, this is what the Person class constructor looks like:

public Person(String firstName, String lastName) {
    super();
    this.firstName = firstName;
    this.lastName = lastName;
    Predef$.MODULE$.println("the constructor begins");
    age = 0;
    printHome();
    printFullName();
    Predef$.MODULE$.println("still in the constructor");
}

This clearly shows the printHome and printFullName method calls in the Person constructor, as well as the initial age being set. When the code is decompiled, the constructor parameters and class fields appear like this:

private String firstName;
private String lastName;
private final String HOME = System.getProperty("user.home");
private int age;

Anything defined within the body of the class other than method declarations is a part of the primary class constructor. Because auxiliary constructors must always call a previously defined constructor in the same class, auxiliary constructors will also execute the same code.
A comparison with Java

The following code shows the equivalent Java version of the Person class:

// java
public class Person {

    private String firstName;
    private String lastName;
    private final String HOME = System.getProperty("user.home");
    private int age;

    public Person(String firstName, String lastName) {
        super();
        this.firstName = firstName;
        this.lastName = lastName;
        System.out.println("the constructor begins");
        age = 0;
        printHome();
        printFullName();
        System.out.println("still in the constructor");
    }

    public String firstName() { return firstName; }
    public String lastName() { return lastName; }
    public int age() { return age; }

    public void firstName_$eq(String firstName) { this.firstName = firstName; }
    public void lastName_$eq(String lastName) { this.lastName = lastName; }
    public void age_$eq(int age) { this.age = age; }

    public String toString() {
        return firstName + " " + lastName + " is " + age + " years old";
    }

    public void printHome() { System.out.println(HOME); }
    public void printFullName() { System.out.println(this); }
}

As you can see, this is quite a bit lengthier than the equivalent Scala code. With constructors, I find that Java code is more verbose, but obvious; you don’t have to reason much about what the compiler is doing for you.

Those _$eq methods

The names of the mutator methods that are generated may look a little unusual:

public void firstName_$eq(String firstName) { ...
public void age_$eq(int age) { ...

These names are part of the Scala syntactic sugar for mutating var fields, and not anything you normally have to think about. For instance, the following Person class has a var field named name:

class Person {
  var name = ""
  override def toString = s"name = $name"
}

Because name is a var field, Scala generates accessor and mutator methods for it. What you don’t normally see is that when the code is compiled, the mutator method is named name_$eq.
You don’t see that because with Scala’s syntactic sugar, you mutate the field like this:

p.name = "Ron Artest"

However, behind the scenes, Scala converts that line of code into this code:

p.name_$eq("Ron Artest")

To demonstrate this, you can run the following object that calls the mutator method in both ways (not something that’s normally done):

object Test extends App {
  val p = new Person

  // the 'normal' mutator approach
  p.name = "Ron Artest"
  println(p)

  // the 'hidden' mutator method
  p.name_$eq("Metta World Peace")
  println(p)
}

When this code is run, it prints this output:

name = Ron Artest
name = Metta World Peace

Again, there’s no reason to call the name_$eq method in the real world, but when you get into overriding mutator methods, it’s helpful to understand how this translation process works.

Summary

As shown with the equivalent Scala and Java classes, the Java code is verbose, but it’s also straightforward. The Scala code is more concise, but you have to look at the constructor parameters to understand whether getters and setters are being generated for you, and you have to know that any method that’s called in the body of the class is really being called from the primary constructor. This was a little confusing when I first started working with Scala, but it quickly became second nature.

The Scala Cookbook

This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly. You can find the Scala Cookbook at these locations:
https://alvinalexander.com/scala/how-to-create-primary-class-constructors-in-scala
wolkenkit-eventstore

wolkenkit-eventstore is an open-source eventstore for Node.js that is used by wolkenkit.

Installation

$ npm install wolkenkit-eventstore

Quick start

To use wolkenkit-eventstore first you need to add a reference to your application. You also need to specify which database to use:

const eventstore = ;

The following table lists all currently supported databases:

Once you have created a reference, you need to initialize the instance by running the initialize function. Hand over the connection string to your database as well as a namespace:

await eventstore;

For the in-memory database there is no need to hand over the connection string and the namespace, so in this case you only need the following call:

await eventstore;

To handle getting disconnected from the database, subscribe to the disconnect event. Since wolkenkit-eventstore does not necessarily try to reconnect, it's probably best to restart your application:

eventstore;

Please note that since the in-memory database does not make use of an external data source it does not support the disconnect event.

To manually disconnect from the database call the destroy function:

await eventstore;
eventstore;

To limit the number of events returned you may use the fromRevision and toRevision options:

const aggregateId = 'd3152c91-190d-40e6-bf13-91dd76d5f374';
const eventStream = await eventstore;
eventstore;

Saving events

To save events use the saveEvents function and hand over an array of events you want to save. For each event, you also have to provide a snapshot that represents the state of the aggregate the events refer to.
To create the events use the Event constructor function of the commands-events module, and add a revision property to the event's metadata property:

const eventStarted = ...;
const eventJoined = ...;
eventStarted.metadata.revision = 1;
eventJoined.metadata.revision = 2;
const stateStarted = // ...;
const stateJoined = // ...;
const savedEvents = await eventstore;

Please note that the revision starts at 1, not – as you may expect – at 0.

Whenever the revision of an event is divisible by 100, a snapshot for the appropriate aggregate is written based on the state that is attached to that event.

Saving a single event

If you only want to save a single event you may omit the brackets of the array and directly specify the event as parameter:

const eventStarted = ...;
eventStarted.metadata.revision = 1;
const stateStarted = // ...;
const savedEvents = await eventstore;

eventstore;
eventstore;
eventstore;

Saving a snapshot

To manually eventstore;
eventstore;

To limit the number of events returned you may use the fromPosition and toPosition options:

const replayStream = await eventstore;

Running the build

To build this module use roboter.

$ npx roboter

Please note that wolkenkit-eventstore uses the Microsoft SQL Server on Linux Docker image to run SQL Server for the tests. To run this image you need to assign at least 3.25 GByte of RAM to Docker for Mac or Docker for Windows.

GNU Licenses.
https://preview.npmjs.com/package/wolkenkit-eventstore
Hiya, bullets do the talking:

- How to audit which versions of NTLM are being used
- What to do now that you have Windows Server 2008 R2 Forest Functional Level in place
- USMT security-ntlm-lmc.man XmlException error
- FRS events missing in KB 308406 for Windows Server 2003 and later
- Why you can’t have “non-admin admins” on writable domain controllers
- Does DFSR’s built-in auditing impact performance?
- Netware volumes as DFS Namespace targets
- Support for the Group Policy Best Practices tool
- More Comic-Con cosplay

Question

We are trying to move away from NTLM in our Active Directory environment. I read your previous post on NTLM Auditing for later blocking. However, the blog posting does not differentiate between the two versions of NTLM. What would be the best way to audit for only NTLMv1 or LM? Also, will Microsoft ever publish those two TechNet articles?

Answer

I still suggest you give up on this, unless you want to spend six months not succeeding. If you want to try though, add security event logging on your Win2008 R2 servers/DCs for 4624 Logon events:

KB 977519: Description of security events in Windows 7 and in Windows Server 2008 R2

Those will capture the Package Name type. For example:

Best I can tell, those two TechNet articles are never going to be published. Jonathan is trying yet again as I write this. Maybe Win8…? We’ll see…

Question

I am now 100% Windows Server 2008 R2 in my domains and am ready to move my Domain and Forest Functional Levels to 2008 R2. What does that and my new schema buy me, and are there any steps I should do in special order?

Answer

Nothing has to happen in any special order.
Some of your new AD-related options include:

- AD Recycle Bin ( )
- DFSR for SYSVOL ( )
- V2 DFS Namespaces ( ) and migrate existing V1 namespaces ( )
- Last Interactive Logon ( )
- Fine Grain Password Policies – ( )
- Virtual Desktops ( )
- Managed Service Accounts with automatic SPN management ( )
- Other things we recommend at the end of the upgrade ( )

With your awesome Win2008 R2 servers, you can also:

- Use new PKI features ( )
- Use Starter GPOs and new GP Preferences and other GP stuff ( )
- All this other craziness ( )

Question

Our USMT scanstate log shows error:

Error [0x08081e] Failed to load manifest at C:\USMT\x86\dlmanifests\security-ntlm-lmc.man: XmlException: hResult = 0x0, Line = 18, Position = 31; A string literal was expected, but no opening quote character was found.

But nothing bad seems to happen and our migration has no issues we can detect. It looks like the quotation marks in the XML are incorrect. If I correct that, it runs without error, but am I making something worse and is this supported?

Answer

Right you are. Note the quotation marks – looks like some developer copied them out of a rich text editor at some point:

But no matter – you can change it or delete that MAN file, it makes no difference. That manifest file does not have a USMT scope set, so it is never used even when syntactically correct. In order for USMT to pick up a manifest file during scanstate and loadstate, it must have this set:

<migration scope=“Upgrade,MigWiz,USMT“>

If not present, the manifest is skipped with the message “filtered out because it does not match the scope USMT”:

Roughly two thirds of the manifests included with USMT are not used at all for this very same reason.

Question

I am looking for a full list of Event IDs for FRS. KB 308406 only seems to include them for Windows 2000 – is that list accurate for later operating systems like Windows Server 2003 or 2008 R2?

Answer

That KB article has a few issues, I’ll get that ironed out.
In the meantime:

Windows Server 2003 added events:

Event ID: 13569
Severity: Error
The File Replication Service has skipped one or more files and/or directories during primary load of the following replica set. The skipped files will not replicate to other members of the replica set. Replica set name is : “%1” A list of all the files skipped can be found at the following location. If a directory is skipped then all files under the directory are also skipped. Skipped file list : “%2” Files are skipped during primary load if FRS is not able to open the file. Check if these files are open. These files will replicate the next time they are modified.

Event ID: 13570
Event Type: Error
The File Replication Service has detected that the volume hosting the path %1 is low on disk space. Files may not replicate until disk space is made available on this volume. The available space on the volume can be found by typing “dir /a %1”. For more information about managing space on a volume type “copy /?”, “rename /?”, “del /?”, “rmdir /?”, and “dir /?”.

Event ID: 13571
Event Type: Error
The File Replication Service has detected that one or more volumes on this computer have the same Volume Serial Number. File Replication Service does not support this configuration. Files may not replicate until this conflict is resolved. Volume Serial Number : %1 List of volumes that have this Volume Serial Number: %2 The output of the “dir” command displays the Volume Serial Number before listing the contents of the folder.

Event ID: 13572
Event Type: Error
The File Replication Service was unable to create the directory “%1” to store debug log files. If this directory does not exist then FRS will be unable to write debug logs. Missing debug logs make it difficult, if not impossible, to diagnose FRS problems.

Windows Server 2008 added no events.

Windows Server 2008 R2 added events:

Event ID: 13574
Event Type: Error.
Event ID: 13575
Event Type: Error
This domain controller has migrated to using the DFS Replication service to replicate the SYSVOL share. Use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated and therefore, the service has been stopped. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

Event ID: 13576
Event Type: Error
Replication of the content set “%1” has been blocked because use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

All OSes included event:

Event ID: 13573
Event Type: Warning
File Replication Service has been repeatedly prevented from updating: File Name : “%1” File GUID : “%2”. For more information on troubleshooting please refer to.

Win2008 should have had those 13574-13576 events as they are just as applicable, but $&% happens.

Question

Why isn’t it possible to grant local admin rights to a domain controller without adding them to the built-in Administrators or Domain Admins groups? It can be done on RODCs, after all.

Answer

It’s with good intentions – if I am a local administrator on a DC, I own that whole domain. I can directly edit the AD database, or even replace it with my own copy. I can install a filter driver that intercepts all password communications between LSASS and the database. I can turn off all auditing and group policy. I can add a service that runs as SYSTEM and therefore, runs as the DC itself – then impersonate the DC. I can install a keyboard logger that captures the “real” domain admins as they logon. My power is almost limitless.

The reasons we added the functionality for non-domain admin administrators on RODC are:

- RODCs are not authoritative for anything and cannot originate any data out to any other DC or RODC.
So the likelihood of damage or compromise is lower – although theoretically, not removed.
- RODCs are for branch offices that don’t have dedicated IT staff and which may not even be reliably network connected to the main IT location – so having a “local admin” makes sense for management.

Question

You have talked about how to track individual DFSR file replication using its built-in “enable audit” setting. Does this impact server performance?

Answer

Yes and no. The additional DFSR logging impact is negligibly low on any OS. The object access auditing impact ranges from medium (by default) to high (if you have added many custom SACLs). You have to enable the object access auditing to use the DFSR logging on Win2003 though, so the net result there is medium to high impact when compared to other auditing categories. It’s worth noting that overall, auditing impact in Win2008+ is lower, as the audit system was redesigned for greater scalability and performance. You also have a much less disruptive security audit option, which is to enable only the subcategory:

Category: Object Access
Subcategory: Application Generated

That way you don’t have to enable the completely ridiculous set of Object Access auditing in order to track only DFSR file changes. And the impact is greatly lowered. And besides, to run Win2008+, you need much faster hardware anyway. ^_^

Question

Can Netware volumes be DFSN Link Targets?

Answer

Good grief, someone still has Netware servers? Yes, with caveats:

KB 824729: Novell 6 CIFS pass-through authentication failures

Novell also created a DFS service, to act as a root instead of simply a link target like above: Using DFS to Create Junctions

Generally speaking, if a target can provide SMB/CIFS shares, they can be a link target. To connect to a DFS target, your OS needs a DFS client: Can Apple, Linux, and other non-MS operating systems connect to DFS Namespaces?

Bring on the Banyan Vines questions!
Question

There is no later version of the Group Policy Best Practices Analyzer tool, and it finds no updates when it starts. Is it going to be updated for Windows Server 2008 or later? The tool was even mentioned by Tim on this very blog years ago, but since then, nothing. [This “question” came from a continued conversation about a specific aspect of the tool – Ned]

Answer

- This tool has no updates or development team and is effectively abandoned. It was not created by the Group Policy Windows developer group nor is it maintained by them – it doesn’t have a dev team at all. It probably should have been released on CodePlex instead of the download center. The genie cannot be put back in the bottle now though, as people will just grab copies from elsewhere on the internet, likely packed with malware payloads.
- This tool is not supported – it’s provided as-is. When Tim talked about it, the tool had a bright future. Now it is gooey dirt.
- This tool’s results and criteria are questionable, bordering on dangerous. It gives a very false sense of security if you pass, because it checks very little. It also incorrectly flags issues that do not exist – for example, it states that the Enterprise Domain Controllers group does not have Apply GP permissions to the Default Domain Controllers policy, and this is an error. The DCs are all members of Authenticated Users though, and that’s how they get Apply permissions. And why doesn’t it raise the same flag for the default domain policy? Who knows! The developers were not correct in this design or assumptions. The tool recommends you add more RPC ports for invalid reasons, which is silly. It talks about a few security settings, ignoring hundreds of others and giving no warning that changing these can break your entire environment. Gah!
If you are looking for security-related best practice recommendations for group policy, you should be using the Security Compliance Manager tool:

Microsoft Security Compliance Manager v1 (release)
Microsoft Security Compliance Manager v2 (beta)

That tool at least has best effort support and a living dev team that is providing vetted recommendations.

More Comic-Con Cosplay

As you know, I spent last week at San Diego Comic-Con and even showed some pictures I snagged. Here is more amazing cosplay, courtesy of the rad Comicvine.com (click thumbnails to make with the bigness). And check out the eyes on Scorpion. Comicvine.com – go there now, unless you hate awesomeness.

Until next time.

Ned “I should go as a Keebler Elf next year” Pyle

We'll be powering off our last few Netware servers soon(tm). Funny thing is, the move to Server 2008 from Netware essentially launched March 10, 2008 when domain planning began. Boy we move slow… 😉

Lemme know when you are ready to tackle those DOS 5.0 Lantastic machines. ^_^

Boy, how I miss Netware. First they made me abandon it for Vines, then for, of all things, Windows NT. I want my VMS back. I want my RTE and that great database system HP Image. The first relational engine; even before Oracle. I remember when they made us run Oracle on Vines. Now that was a challenge. If we keep making things easy no one will want to play with us any more. Have a great weekend and keep up on the excellent and extremely useful articles.

Well, I'm trying to avoid admitting that we still have an NT4 box (doing almost nothing) and some 2000 servers still kicking around, too. "Soon", always soon.
https://blogs.technet.microsoft.com/askds/2011/07/29/friday-mail-sack-anchors-aweigh-edition/
mount.ecryptfs_private is broken on arm

Bug Description
=======
SRU Justification:
1. Impact: mount.ecryptfs_private hangs in an infinite loop on arm
2. Development fix: the infinite loop happens because an unsigned char is being compared to -1 as a condition to end the loop
3. Stable fix: same as development fix
4. Test case: run 'mount.ecryptfs_private'
5. Regression potential: I used this updated code (on arm) for a week with no issues. If there were a regression it should have been seen (at argument parsing time) during regular use of mount.ecryptfs_private
=======

char is unsigned on arm (signed on x86). fetch_sig compares the result of fgetc to EOF (-1), which of course never succeeds, causing an infinite loop.

Related branches

The attachment "debdiff with proposed fix, works for me." itself looks fine, but oneiric has released. I assume you still think this should be fixed in oneiric (it does seem to meet the SRU criteria), so could you please attach a patch for precise if necessary and also a patch targeted at oneiric-proposed? I wouldn't want to upload an SRU before it's been fixed in precise.

This bug was fixed in the package ecryptfs-utils - 93-0ubuntu2
---------------
ecryptfs-utils (93-0ubuntu2) precise; urgency=low

  * fix infinite loop on arm: fgetc returns an int, and -1 at end of options. Arm makes char unsigned. (LP: #884407)

 -- Serge Hallyn <email address hidden> Tue, 08 Nov 2011 10:47:03 -0600

Thanks for the patches! As you can see, I have uploaded this to precise. I also uploaded to oneiric-proposed. Please follow https:/

Hello Serge, or anyone else affected,

Accepted ecryptfs-utils into oneiric-proposed, the package will build now and be available in a few hours. Please test and give feedback here.
See https:/

Failed to reproduce on a Pandaboard running Oneiric (not -proposed):

ubuntu@ Bad file
Error reading configuration file
ubuntu@ ecryptfs-utils 92-0ubuntu1
ubuntu@ Linux panda-test 3.0.0-1206-omap4 #13-Ubuntu SMP PREEMPT Wed Nov 23 17:50:31 UTC 2011 armv7l armv7l armv7l GNU/Linux
ubuntu@

ecryptfs seems to work fine. Some help please?

Thanks for trying, Robie! Could you try the following program and show the result?

#include <stdio.h>
int main()
{
    char i = -1;
    if (i < 0)
        printf("char is signed\n");
    else
        printf("char is unsigned\n");
}

Serge,

ubuntu@
ubuntu@ char is unsigned

Comparing a char to -1 is undefined, so perhaps it's being optimised away in this case? I tried building ecryptfs-tools from oneiric with optimisation turned off, and that doesn't cause a hang either. I've also tried the -proposed package and that also appears to work fine. Storing the result of fgetc in a char and then comparing to EOF is clearly wrong (which is why fgetc returns an int in the first place). So although I cannot reproduce the original symptom, this change does not appear to cause a regression but does fix a bug for somebody, so I think it would be fine to go into -updates unless there is a policy that prevents this.

Quoting Robie Basak (<email address hidden>):
> Serge,
>
> ubuntu@
> ubuntu@ char is unsigned
>
> Comparing a char to -1 is undefined, so perhaps it's being optimised
> away in this case?

No, run it on an x86 and you'll get 'char is signed'. I was hoping your arm toolchain was somehow different from mine. Perhaps my test case was too brief.
Could you try:

mkdir /home/ubuntu/
mkdir /home/ubuntu/a /home/ubuntu/b
cat > /home/ubuntu/
/home/ubuntu/a /home/ubuntu/b ecryptfs none 0 0
EOF
ecryptfs- (type 'x\n' twice)

then to try the mounting:

ecryptfs- (type 'x\n')

mount.ecryptfs_@
Passphrase to wrap:
Wrapping passphrase:
ubuntu@ Passphrase:
Inserted auth tok with sig [1830fcc608085939] into the user session keyring
ubuntu@ fopen: No such file or directory
keyctl_search: Success

Perhaps try the interactive 'ecryptfs-

ubuntu@ ecryptfs-utils 92-0ubuntu1
libecryptfs0 92-0ubuntu1

Quoting Robie Basak (<email address hidden>):
>@

sorry, that should be " ecryptfs-

This worked. Reproduced the bug in oneiric (evidently an infinite loop), the problem went away in -proposed. Normal ecryptfs operation (via ecryptfs-

# dpkg -l|grep ecryptfs
ii ecryptfs-utils 92-0ubuntu1.1 ecryptfs cryptographic filesystem (utilities)
ii libecryptfs0 92-0ubuntu1.1 ecryptfs cryptographic filesystem (library)

Yay! Thanks, Robie, sorry about all the time that took.

This bug was fixed in the package ecryptfs-utils - 92-0ubuntu1.1
---------------
ecryptfs-utils (92-0ubuntu1.1) oneiric-proposed; urgency=low

  [ Serge Hallyn ]
  * fix infinite loop on arm: fgetc returns an int, and -1 at end of options. Arm makes char unsigned. (LP: #884407)

  [ Michael Terry ]
  * debian/ - Backport ecryptfs-verify from version 93. Required to support gnome- the autologin controls. LP: #576133

 -- Michael Terry <email address hidden> Thu, 10 Nov 2011 10:33:01 -0500

The mount.ecryptfs_private.c changes look good to me.
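The root cause discussed in this thread (fgetc returns an int, but the result was stored in a plain char, which is unsigned on arm) boils down to one pattern. Here is a minimal sketch; count_chars is a hypothetical helper for illustration, not the actual fetch_sig code from mount.ecryptfs_private.c:

```c
#include <stdio.h>

/* The buggy pattern stored fgetc()'s result in a plain char and compared
 * it to EOF (-1). On arm, plain char is unsigned, so (char)0xFF is 255,
 * the comparison with -1 never succeeds, and the loop never terminates.
 *
 * The fix: keep the result in an int until after the EOF test. */
int count_chars(FILE *fp)
{
    int c;   /* must be int: EOF is an out-of-band value outside char's range */
    int n = 0;

    while ((c = fgetc(fp)) != EOF)
        n++;

    return n;
}
```

On x86 the broken version happens to work because plain char is signed there, which is why the bug only surfaced on arm.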
https://bugs.launchpad.net/ubuntu/oneiric/+source/ecryptfs-utils/+bug/884407
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 17 Feb 2004 at 17:25, Roman Yakovenko wrote: > 1. To ask David Abrahams how we should export unnamed enums ( defined > within class and within some namespace ). Believe it or not we have done our best on this - six months ago unnamed enums just didn't work at all. If you search the archives, all is explained there. Unfortunately pyste starts with a zero based index, so if it is called on two separate occasions it reuses the same index for two quite different enums in separate compilation units. This is a bug but Nicodemus knows about it. Cheers, Niall -----BEGIN PGP SIGNATURE----- Version: idw's PGP-Frontend 4.9.6.1 / 9-2003 + PGP 8.0.2 iQA/AwUBQDKb8MEcvDLFGKbPEQLGVQCfXESQvBBPPelYobm0cfMD6oyszc4AoMDt WXMbHghaT3UM1CryNYXI9Qpo =gA9f -----END PGP SIGNATURE-----
https://mail.python.org/pipermail/cplusplus-sig/2004-February/006522.html
3. Addition of x86_64 vector math functions to Glibc

3.1. Goal

The main goal is to utilize SIMD constructs in OpenMP 4.0 (#2.8 in ) and Cilk Plus (#6-8 in ) on x86_64 by adding vector implementations of several vector math functions (float and double versions).

3.2. What vector math functions are

Vector math functions are vector variants of corresponding scalar math operations implemented using SIMD ISA extensions (e.g. SSE or AVX). They take packed vector arguments, perform the operation on each element of the packed vector argument, and return a packed vector result. Using vector math functions is faster than repeatedly calling the scalar math routines. However, these:

a. Functions may not raise exceptions as required by the C language standard. Functions may raise spurious exceptions. This is considered an artifact of SIMD processing and may be fixed in the future on a case-by-case basis.
b. Functions may not change errno in some of the required cases, e.g. if the SIMD-friendly algorithm is done branch-free without a libm call for that value. This is done for performance reasons.
c. As the implementation is dependent on libm, some accuracy and special case problems may be inherent to this fact.
d. Functions do not guarantee fully correct results in computation modes different from the round-to-nearest one.

3.3. Integration and usage model

Currently, a call to a vector math function can be created by GCC (from version 4.9.0) if the developer uses OpenMP SIMD constructs and -fopenmp is passed. These functions don't set errno and have lower accuracy, so we need to require -ffast-math (which sets -fno-math-errno) to use them (or maybe set -ffast-math under -fopenmp). The name of a vector function created by GCC is based on the Vector Function ABI (), with a few differences at the moment (mainly b, c, d instead of x, y, Y), but we will fix that mangling.
For example, the following code in file cos.c:

#pragma omp declare simd
extern double cos(double);

int N = 300;
double b[300];
double a[300];

int main(void)
{
  int i;
#pragma omp simd
  for (i = 0; i < N; i += 1)
  {
    b[i] = cos(a[i]);
  }
  return 0;
}

being built by gcc 4.9.0 with the following command:

gcc ./cos.c -I/PATH_TO_GLIBC_INSTALL/include -L/PATH_TO_GLIBC_INSTALL/lib/ -O1 -fopenmp -lm -mavx2

produces a binary with:

nm a.out | grep ZGV
U _ZGVdN4v_cos@@GLIBC_2.21

(here PATH_TO_GLIBC_INSTALL is the path to a Glibc already built with _ZGVdN4v_cos).

3.4. Testsuite

We plan to test the vector functions based on the testsuite for the scalar ones. It will be done by wrappers which combine argument values into vector registers, pass them to the vector function, check equality of the elements in the result vector, and return the extracted value. Only the round-to-nearest rounding mode will be checked.

3.5. Consensus

a. Put new functions into glibc or into gcc?
- Consensus is that we will put them into glibc.
- We wouldn't want to put them into gcc because that would restrict users of other compilers, e.g. llvm.

b. Add new functions to libm or libmvec?
- Consensus is that we will put the new functions in libmvec.
- We would like to integrate the new functions with libm because it is easier for developers to employ vectorization and should improve acceptance. The libmvec case affects compiler options, and it seems a new header would need to be included instead of math.h; or is it OK to include it in math.h?
- The use of libm.so as a linker script with AS_NEEDED for libmvec.so does a lot to alleviate the need to link against libmvec. A new libmvec as a distinct API/ABI from libm allows the project to deploy an experimental libmvec for use by application developers and compiler writers without needing the same stability guarantees as libm. The libmvec library symbols could eventually be moved to libm if the project chooses, and thus a conservative approach is to use a distinct library.

3.6. Open questions

c.
Glibc build requirements?
- For the SSE4, AVX and AVX2 implementations it has been claimed that we don't need to change the Glibc build requirements. However, AVX2 support was added in binutils 2.22, so we need either to decide if it's OK to increase the binutils requirement, or to have a reason to expect the code to work with older versions.

d. How to handle other architectures having different sets of vectorized functions and possibly not having the same set of vector ISAs for each function?
- Maybe with the help of some wrappers?

e. How to handle the case when Glibc has no vector function while the application has it inside (due to an old Glibc version being installed)?
- This should never happen. An application must always run with the glibc it was compiled against or newer.
https://sourceware.org/glibc/wiki/libm
As part of my ongoing adventures in C#, I decided to write a game, largely because people seem to like them, and I had an Asteroids game already written using C++/DirectX, and I figured that meant I had the program logic and the graphics, I just needed the parts I wanted to learn, such as handling keyboard input, resources, etc. Someone else beat me to the punch though, so I decided to go for something a little more simple - a sideways scrolling game in which the object is simply to avoid oncoming asteroids.

For those who don't remember, parallax scrolling was a method of making things look cool in the days prior to 3D. Basically it involves scrolling a number of different bitmaps at differing rates, which gives the illusion of 3D, in the same way that two objects moving at the same rate would appear to travel at different speeds if they were different distances from you.

The first step in an action based game is to make it run the same speed regardless of the computer it's running on. Using C++, I would do this by catching WM_IDLE, and drawing my objects whenever the processor is idle, but moving them based on time elapsed since I last moved them. In this manner the speed is always the same, and we get the highest frame rate possible for the processor. A slower machine slows down the frame rate, not the game. Well, I don't know if C# has an OnIdle message, although I know it's possible to catch the message using some fancy footwork as documented in the Petzold book. However, for the sake of the exercise, I've decided to use a timer instead.

In C#, we set up a timer like this:

private Timer timer = new Timer();

timer.Tick += new EventHandler(OnTimer);
timer.Enabled = true;
timer.Interval = 1000/60;

The variable is set as a class member and the rest is done on initialisation. We are defining an event handler for the timer, turning it on and setting it to go off 60 times a second.
A timer will only fire if the system is not busy, so we are not guaranteed that we will get precisely 60 shots a second. To get a better degree of accuracy, we will use a DateTime object to time our timer. We create a member DateTime called m_DateTime and set it using DateTime.Now. Then at the start of our timer function we do this:

TimeSpan ts = DateTime.Now - m_DateTime;
if (ts.Milliseconds > 1000/m_nFPS)
{
    // AddMilliseconds does not mutate the DateTime; its result must be assigned
    m_DateTime = DateTime.Now.AddMilliseconds(ts.Milliseconds - (1000/m_nFPS));
    // ... do the drawing here ...
}

In other words, if the time that has passed is more than the time between our desired frames per second value, we spring into action, do our drawing and also reset the variable to be Now() again, plus any remaining offset. As we will see later, although I tested this code using some simple drawing, as soon as we scroll the bitmap it is moot, because after turning the timer off, so it goes flat out, I get about 2-3 fps. C# is not fast enough for action games, or at least it does not provide any way I can see to perform the scrolling on the bitmap quickly.

Resources

The .NET platform has an interesting way of dealing with resources. Basically a resource is added by selecting Project | Add Existing Item, and then by clicking on that item in the Solution Explorer, it can be changed to a Build Action of 'Embedded Resource', i.e. it becomes part of your .exe. To load a bitmap resource, I use this line:

m_bmPlanets = new Bitmap(GetType(), "Planets.jpg");

Now my bitmap is loaded from the resources. Depending on your default namespace, you may need to specify a namespace name, such as Collision in this case, in order to load a resource. I must admit I spent an hour on this before it started to work, and I'm not sure what I did....
My initial strategy was to create a system where the background was constantly changing, and so I've built two resource bitmaps, one that holds star patterns of the same resolution in a row, and one that holds a planet per tile, all tiles again the same size. Then I built a bitmap that was a tilewidth wider than the screen, and filled it every time the screen had scrolled a tilewidth, drawing new images off screen so they scrolled into view. I soon discovered it was very slow. The only way I found to scroll a bitmap without writing an image filter was to make a clone, and copy it back to itself offset by a certain number of pixels. For speed I now have a repeating system, where I draw both bitmaps twice over, and use the TranslateTransform to move the aspect of the Graphics object before drawing my two bitmaps. Sadly, it is also very slow, leaving me to conclude that there is no quick way to do what I am trying to do. The timing code I presented before is turned off in the demo, because it doesn't get used.

The other thing I found is that in order to draw the planets over the stars with transparency, I needed to use the ImageAttributes object. Bitmaps have a MakeTransparent method, which takes a colour to make transparent, but as I used resources saved as a jpeg, I found this did not work. JPEG is a lossy compression, which means what you get out is not what you put in. Specifically, instead of all being hard black, my transparent area was in a range of black, such that I needed to filter 0,0,0 through to 35,35,35 to get the masking effect I needed.

ImageAttributes iattrib = new ImageAttributes();
iattrib.SetColorKey(Color.FromArgb(255, 0, 0, 0), Color.FromArgb(255, 35, 35, 35));
iattrib.SetWrapMode(System.Drawing.Drawing2D.WrapMode.Tile, Color.Black, false);
gr.DrawImage(m_bmPlanetLayer, new Rectangle(0, 0, 640, 320), m_nPlanetPos, 0, 640, 320, GraphicsUnit.Pixel, iattrib);

Having found that GDI+ is too slow to do what I wanted, I have decided to simplify.
The next installment will have planets which I track on the screen, and a ship I can move in it. It will not scroll the stars anymore, hopefully providing a speed increase sufficient to prepare for the last installment, where I will be doing a per pixel hit test to know when I've flown into a planet.
http://www.codeproject.com/KB/game/collision.aspx
- XQuery Advantages
- Working with XQuery Methods
- Conclusion

It has probably taken longer for XML to become a mainstream part of today’s applications than many developers first thought. XML is now becoming more widely used as wireless devices are becoming more secure and gaining better bandwidth. New technologies such as XQuery for SQL Server 2005 are making it much easier to convert relational data to XML and then query the XML data, much as you would relational data, because XQuery is a query language very similar to the SQL query language for relational data. This article gives you a crash course on becoming familiar with XQuery methods and how you can use them in certain situations to retrieve and update XML data stored in your SQL05 database. These methods also contain functions and operators that I’ll touch on a bit with a few examples where necessary. These functions and operators, and the combinations in which they can be used, are extensive and could cover the scope of an entire book.

XQuery Advantages

Microsoft originally included XQuery in the .NET framework for use with client-side XML processing. They have since removed it from the .NET framework and kept XQuery native only to the SQL Server environment. The reasons for this vary from being more secure to better performance by having it at the SQL Server level. The .NET framework does still include XML client-side parsing capability through the use of the System.Xml namespace, however. This namespace includes XPath 2.0, a language for locating and extracting parts of an XML document. XQuery can also use XPath expressions. The advantage of using XQuery 1.0 over XSLT, another XML parsing language, is considerably less code and that means lower maintenance costs according to Microsoft. The queries also have better performance than XSLT when parsing a strongly typed XML document because XQuery can be used as a strongly typed language.
Because XQuery is in the process of becoming a W3C standard, XQuery tools are and will be available with many different software packages from many different vendors, not just Microsoft products.
http://www.informit.com/articles/article.aspx?p=468058&amp;seqNum=3
> Greetings,
> Can anyone tell me how to separate a number from a string?
> E.g., how do I reduce "abc 12.3 cde" to "12.3"? From there on, it's
> trivial..

char *str = "abc 12.3 cde";
double d = atof( str + 4 );

atof will stop converting when it hits the "space" following the 3. If you DON'T know where the number is then you might try several calls to strtok():

/* note: strtok() writes into its argument, so in real code str must
   not point at a string literal */
char *str = "abc 12.3 cde";
char *tok;
char *endp;
double d;

tok = strtok( str, " " );
while (1) {
    /* try to convert the token */
    d = strtod( tok, &endp );
    /* if the token was not a number then endp will still point to the
       beginning of the current token in which case we need to get the
       next token. */
    if ( endp == tok ) {
        tok = strtok( NULL, " " );
    } else {
        break;
    }
}

Of course, in *this* code if there IS NO number then you'll be waiting for a long time :) Hope that helps. I wouldn't be surprised if there's something more "elegant". Anyone?

Richard
--

{
    /* same caveat: this code writes into str, so don't use a literal */
    char *str = "abc 12.3 cde";
    char *numStr;

    while (*str && !isdigit(*str)) /* skip non-digits (don't accept .23) */
        str++;
    numStr = str;
    while (*str && (isdigit(*str) || (*str == '.'))) /* check for digit and DOT */
        str++;
    *str = 0;
    printf("The numberstring is '%s'\n", numStr);
}

Hope this helps,
Pieter
--

> Greetings,
> Can anyone tell me how to separate a number from a string?
> E.g., how do I reduce "abc 12.3 cde" to "12.3"? From there on, it's
> trivial...

But the solution is similar.. If you know where the number begins:

char *num;
char *str = "abc 12.3 cde";

num = strtok( str + 4, " " );

Be aware that this effectively "chops" your original string apart by putting a '\0' right after the 3 - but num will point to a null terminated string containing your number. Obviously, if you don't want the original string ripped apart like this then you can always just make a copy of it to work with.
If you don't know where the number begins then you can still use several calls to strtok() and test the initial character of the returned string with isdigit(). If you get a positive return from isdigit() then you know you've found what is likely the number you want.

    char *num;
    char str[] = "abc 12.3 cde";

    num = strtok( str, " " );
    while ( num != NULL && !isdigit(*num) ) {
        num = strtok( NULL, " " );
    }
    if ( num == NULL ) {
        /* you don't have what you want. */
    } else {
        /* party on! */
    }

I hope that's better.
--

    char *WorkPtr = YourString;

    while( *WorkPtr != '\0' && !isdigit( *WorkPtr ) )
        WorkPtr++;

Assuming I did not make a stupid mistake -- I haven't used up my daily quota for that _completely_ ;-) -- WorkPtr should afterwards point to the first digit in YourString or to the end of it.

Greetings from Auke Reitsma, Delft, The Netherlands.
--

Assuming:
1. This is a line taken from a file with fgets or a similar function of your own making.
2. The line consists of three fields, the first of characters, the second of digits (+ decimal point), the third of characters.
3. The fields are separated by spaces and the character fields do NOT have internal spaces, so the following string is not valid: "abc def 12.3 ghi"

    #include <string.h>

    #define MAX_LEN 50

    size_t len = strlen(input_string);
    size_t i = 0;
    char *p = input_string;
    char field_1[MAX_LEN];
    char field_2[MAX_LEN];
    char field_3[MAX_LEN];

    for( i = 0; *p != ' ' && *p != '\0' && i < len; ++i )
        field_1[i] = *p++;
    field_1[i] = '\0';
    ++p;                     /* skip the separating space */

    for( i = 0; *p != ' ' && *p != '\0' && i < len; ++i )
        field_2[i] = *p++;
    field_2[i] = '\0';
    ++p;

    for( i = 0; *p != ' ' && *p != '\0' && i < len; ++i )
        field_3[i] = *p++;
    field_3[i] = '\0';

Bob Wightman
http://computer-programming-forum.com/17-c-language/f330fb43b76f8fda.htm
XCTest Testing Framework

Overview
Qualified supports the XCTest testing framework.

XCTest Quick Start
All tests start with a subclass of XCTestCase. You can then add one or more test case methods to that class, each of which must start with test. You must also include the main entry point method to start the tests.

Assertions
Use XCTAssert* functions to create your assertions, for example, XCTAssertEqual(_ actual:, _ expected:, _ message:).

Example

    import XCTest

    class PersonTest: XCTestCase {
        static var allTests = [
            ("testGreet", testGreet),
        ]

        func testGreet() {
            let person = Person("Jorge")
            XCTAssertEqual(person.greet("Aditya"),
                           "Hello, Aditya, I am Jorge, it's nice to meet you!")
        }
    }

    XCTMain([
        testCase(PersonTest.allTests)
    ])

Learn More
You can learn more on the Apple XCTest website.
https://www.qualified.io/kb/languages/swift/xctest
Python client for parsing SCOTUS cases from the granted/noted and orders dockets.

Getting started

    pip install nyt-docket

Using nyt-docket

Demo app: run the demo app.

    python -m docket.demo

Modules: use the docket loader manually from within your Python script.

Grants (new cases):

    from docket import grants
    g = grants.Load()
    g.scrape()
    for case in g.cases:
        print case.__dict__

Slip opinions (decisions):

    from docket import slipopinions
    o = slipopinions.Load()
    o.scrape()
    for case in o.cases:
        print case.__dict__

Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/nyt-docket/0.0.6/
"mop" stands for "Meta-Object Protocol", and it's a term closely related to CLOS. I've mentioned getting annoyed at a certain piece of it last time, when I needed to iterate over CLOS instance slots for some weird reason. It turns out that due to the way MOP support is implemented, this is a non-trivial thing to do portably. Last week, I got into a situation where I needed a temporary copy of an object. What I really wanted was an object with most slots mirroring an existing instance, but with changed values in two slots. For reasons related to the layout of the surrounding code, I did not want to destructively modify the object itself because it was unclear whether the old values would be expected on a subsequent call. So I googled around a bit, and found that the situation for copying is pretty much the same as it is for iterating. There isn't a built-in, general way of making a copy of a CLOS instance, shallow or otherwise, and implementing it myself in a semi-portable way would require doing all the annoying things that I had to pull with slot iteration earlier. So, being that I occasionally profess to be a non-idiot programmer, I figured I'd take a stab at solving the problem in a semi-satisfactory way. And here we are. That implements slot-names (which takes a CLOS instance or class and returns a list of its slot names), map-slots (which takes a (lambda (slot-name slot-value) ...) and an instance, and maps over the bound slots of that instance), shallow-copy (which does exactly what it sounds like it would do) and deep-copy (which is tricky enough that I hereby direct you to the documentation and/or code if you're sufficiently curious about it). I did cursory testing in GNU Clisp, and fairly extensive testing (followed by some production use) in SBCL, though the :shadowing-import directive should work properly in a number of others as well. Now, I realize that due to the kind of crap you can pull using CLOS by design, this isn't a complete solution. 
That said, it did solve the problems I was staring down, and I think I've made it portable/extensible enough that you'll be able to do more or less what you want in a straight-forward way. For basic use cases, it solves the problem outright, which should save me a bit of time in the coming weeks. For more complex cases, each of the exported symbols is a method, which means you can easily def your own if you need to treat a certain class differently from others.
http://langnostic.blogspot.com/2012/08/cl-mop-or-yak-shaving-for-fun-and.html
When [P1858R1] was presented to EWGI in Prague [EWGI.Prague], that group requested that the structured bindings extension in that proposal was split off into its own paper. This is that paper, and the original paper continues on as an R2 [P1858R2].

Assuming the original paper gets adopted, and we end up with facilities allowing both declaring packs and indexing into them, it becomes a lot easier to implement something like tuple and opt it into structured bindings support:

    template <typename... Ts>
    class tuple {
        Ts... elems;
    public:
        template <size_t I>
        constexpr auto get() const& -> Ts...[I] const& {
            return elems...[I];
        }
    };

    template <typename... Ts>
    struct tuple_size<tuple<Ts...>>
        : integral_constant<size_t, sizeof...(Ts)>
    { };

    template <size_t I, typename... Ts>
    struct tuple_element<I, tuple<Ts...>> {
        using type = Ts...[I];
    };

That’s short, easy to read, easy to write, and easy to follow - dramatically more so than the status quo without P1858. But there’s quite a bit of redundancy there. And a problem with the tuple-like protocol here is that we need to instantiate a lot of templates. A structured bindings declaration over such a tuple requires 2N+1 template instantiations (one for std::tuple_size, N for std::tuple_element, and another N for all the gets). That’s pretty wasteful.

Additionally, the tuple-like protocol is tedious for users to implement. There was a proposal to reduce the customization mechanism by dropping std::tuple_element [P1096R0], which was… close. 13-7 in San Diego.

What do tuple_size and tuple_element do? They give you a number of types and then each of those types in turn. But we already have a mechanism in the language that provides this information more directly: we can provide a pack of types.

Currently, there are three kinds of types that can be used with structured bindings [P0144R2]:

- Arrays (specifically T[N] and not std::array<T, N>).
- Tuple-like: those types that specialize std::tuple_size, std::tuple_element, and either provide a member or non-member get().
- Types where all of their members are public members of the same class (approximately).

This paper suggests extending the Tuple-like category by allowing types to opt in by either providing a member pack alias named tuple_elements or, if not that, then the status quo of specializing both std::tuple_size and std::tuple_element. In other words, a complete opt-in to structured bindings for our tuple would become: This would also help those cases where we need to opt-in to the tuple protocol in cases where we do not even have a pack: Note that the whole pair_get implementation on the left can be replaced by introducing a pack alias as on the right anyway. And if that’s already a useful thing to do to help implement a feature, it’d be nice to go that extra one step and make that already useful solution even more useful.

Change 9.6 [dcl.struct.bind]/4:

Otherwise, if either
- (4.1) the qualified-id E::tuple_elements names an alias pack, or
- (4.2) the qualified-id std::tuple_size<E> names a complete class type with a member named value,
then the number and types of the elements are determined as follows. In the first case, the number of elements in the identifier-list shall be equal to the value of sizeof...(E::tuple_elements) and let Ti designate the type E::tuple_elements...[i]. Otherwise, the expression std::tuple_size<E>::value shall be a well-formed integral constant expression, the number of elements in the identifier-list shall be equal to the value of that expression, and let Ti designate the type std::tuple_element<i, E>::type.

Let i be an index prvalue of type std::size_t corresponding to vi. The unqualified-id get is looked up in the scope of E by class member access lookup ([basic.lookup.classref]), and if that finds at least one declaration that is a function template whose first template parameter is a non-type parameter, the initializer is e.get<i>().
Otherwise, the initializer is get<i>(e), where get is looked up in the associated namespaces ([basic.lookup.argdep]). In either case, get<i> is interpreted as a template-id. [ Note: Ordinary unqualified lookup ([basic.lookup.unqual]) is not performed. — end note ] In either case, e is an lvalue if the type of the entity e is an lvalue reference and an xvalue otherwise. Given the type Ti designated above and Ui designated by either Ti& or Ti&&, where Ui is an lvalue reference if the initializer is an lvalue and an rvalue reference otherwise, variables are introduced with unique names ri as follows: Each vi is the name of an lvalue of type Ti that refers to the object bound to ri; the referenced type is Ti.

[EWGI.Prague] EWGI. 2020. EWGI Discussion of P1858R1.
[P0144R2] Herb Sutter. 2016. Structured Bindings.
[P1096R0] Timur Doumler. 2018. Simplify the customization point for structured bindings.
[P1858R1] Barry Revzin. 2020. Generalized pack declaration and usage.
[P1858R2] Barry Revzin. 2020. Generalized pack declaration and usage.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2120r0.html
Technical Support
On-Line Manuals
CARM User's Guide (Discontinued)

    #include <string.h>

    unsigned char *strrchr (
        const unsigned char *string,   /* string to search */
        char c);                       /* character to find */

The strrchr function searches string for the last occurrence of c. The null character terminating string is included in the search.

The strrchr function returns a pointer to the last character c found in string, or a null pointer if no matching character was found.

See also: strchr, strcspn, strpbrk, strpos, strrpbrk, strrpos, strspn

    #include <string.h>
    #include <stdio.h>    /* for printf */

    void tst_strrchr (void) {
        unsigned char *s;
        unsigned char buf [] = "This is a test";

        s = strrchr (buf, 't');
        if (s != NULL)
            printf ("found the last 't': %s\n", s);
    }
http://www.keil.com/support/man/docs/ca/ca_strrchr.htm
Macros are simple string replacements

Macros are simple string replacements. (Strictly speaking, they work with preprocessing tokens, not arbitrary strings.)

    #include <stdio.h>

    #define SQUARE(x) x*x

    int main(void) {
        printf("%d\n", SQUARE(1+2));
        return 0;
    }

You may expect this code to print 9 (3*3), but actually 5 will be printed because the macro will be expanded to 1+2*1+2. You should wrap the arguments and the whole macro expression in parentheses to avoid this problem.

    #include <stdio.h>

    #define SQUARE(x) ((x)*(x))

    int main(void) {
        printf("%d\n", SQUARE(1+2));
        return 0;
    }

Another problem is that the arguments of a macro are not guaranteed to be evaluated once; they may not be evaluated at all, or may be evaluated multiple times.

    #include <stdio.h>

    #define MIN(x, y) ((x) <= (y) ? (x) : (y))

    int main(void) {
        int a = 0;
        printf("%d\n", MIN(a++, 10));
        printf("a = %d\n", a);
        return 0;
    }

In this code, the macro will be expanded to ((a++) <= (10) ? (a++) : (10)). Since a++ (0) is smaller than 10, a++ will be evaluated twice, which makes both the value of a and what is returned from MIN differ from what you may expect. This can be avoided by using functions, but note that the types will be fixed by the function definition, whereas macros can be (too) flexible with types.

    #include <stdio.h>

    int min(int x, int y) {
        return x <= y ? x : y;
    }

    int main(void) {
        int a = 0;
        printf("%d\n", min(a++, 10));
        printf("a = %d\n", a);
        return 0;
    }

Now the problem of double evaluation is fixed, but this min function cannot deal with double data without truncating, for example.

Macro directives can be of two types:

    #define OBJECT_LIKE_MACRO               followed by a "replacement list" of preprocessor tokens
    #define FUNCTION_LIKE_MACRO(with, arguments)   followed by a replacement list

What distinguishes these two types of macros is the character that follows the identifier after #define: if it’s an lparen, it is a function-like macro; otherwise, it’s an object-like macro.
If the intention is to write a function-like macro, there must not be any white space between the end of the name of the macro and the (.

In C99 or later, you could use static inline int min(int x, int y) { … }. In C11, you could write a ‘type-generic’ expression for min.

    #include <stdio.h>

    #define min(x, y) _Generic((x), \
        long double: min_ld, \
        unsigned long long: min_ull, \
        default: min_i \
        )(x, y)

    #define gen_min(suffix, type) \
        static inline type min_##suffix(type x, type y) \
        { return (x < y) ? x : y; }

    gen_min(ld, long double)
    gen_min(ull, unsigned long long)
    gen_min(i, int)

    int main(void) {
        unsigned long long ull1 = 50ULL;
        unsigned long long ull2 = 37ULL;
        printf("min(%llu, %llu) = %llu\n", ull1, ull2, min(ull1, ull2));

        long double ld1 = 3.141592653L;
        long double ld2 = 3.141592652L;
        printf("min(%.10Lf, %.10Lf) = %.10Lf\n", ld1, ld2, min(ld1, ld2));

        int i1 = 3141653;
        int i2 = 3141652;
        printf("min(%d, %d) = %d\n", i1, i2, min(i1, i2));

        return 0;
    }

The generic expression could be extended with more types such as double, float, long long, unsigned long, long, and unsigned, with appropriate gen_min macro invocations written.
https://essential-c.programming-books.io/macros-are-simple-string-replacements-f15a42d0d010487394a011cab0b7fdef
Asked by: how to exchange data between two property pages in the same program

i am using Visual C++, i want to exchange data between two property pages in the same program......... means i need the data from a member variable in first property page to the another property page. can anybody help me.
Monday, March 26, 2012 11:51 AM

All replies

> i am using Visual C++, i want to exchange data between two property pages in the same program......... means i need the data from a member variable in first property page to the another property page.

Put the data you need to share in the class that is the parent of the pages (usually a property sheet); that way both pages can access it by using GetParent or having a reference to the data in the parent's instance. e.g. From a property page method, you can access the property sheet like this:

    CMyPage1::Method()
    {
        CMySheet * pSheet = (CMySheet *) GetParent();
    }

Dave
Monday, March 26, 2012 1:19 PM

i have two property pages with classes MetalTemperatures.cpp and Rupture.cpp. In the MetalTemperatures class i have one variable like "m_PipeInnerRadius". i need this variable in Rupture.cpp also. how should i access or transfer data from one page to another page - means i need the data given to "m_PipeInnerRadius" in MetalTemperatures.cpp for some calculation in Rupture.cpp. "m_PipeInnerRadius" is a local variable in MetalTemperatures.cpp. how should i exchange that variable to Rupture.cpp? will u please help me... its very urgent.
ppdev
Tuesday, March 27, 2012 4:36 AM

hi sir, i am a beginner to programming, i am unable to get the terminology. i explain to you what i understand: my data is not in the propertysheet, it is in the propertypage, then how can i share the data between property pages?
Tuesday, March 27, 2012 8:42 AM
The key is to put the data you need to share somewhere you can share it easily. Either 1. Move the (to be shared) data from the property page to the property sheet class and modify code that references the variables so it accesses them from the property sheet. or: 2. Add references (or pointers) to each page so that page1's class has a reference to page2 and vice versa. If you're a beginner, then you're unlikely to find either option easy - but learning new tricks is rarely easy. DaveTuesday, March 27, 2012 9:49. error C2227: left of '->m_E11' must point to class/struct/unionTuesday, March 27, 2012 10:48. Which seems pretty clear to me. The variable "page1" isn't declared at the point where you're using it. DaveTuesday, March 27, 2012 12:09 PM can u explain me clearly...................Tuesday, March 27, 2012 12:13 PM hi sir, i declared that CPage1* page1; in my CPage2.cpp and also include the CPage1.h file in the CPage2.cpp. now the program is compiled but it does not transfer data from one page to another means i have two edit boxes with variable names as m_E11,m_E12 as CString Type in page1 and another two edit boxes with variable names as m_E21,m_E22 as CString Type in page2. now in my CPage2.cpp i did as follows: CPage1 *page1; m_E21 = page1->m_E11; but in the output when i enter a string to the editbox in the page1 (m_E11) it doesnt shows in the second page m_E21; i missed something to transfer data........ can u help me......................Tuesday, March 27, 2012 12:53 PM Your approach with page1->m_E11 is not going to work. Please see your other post on this same question at... Tuesday, March 27, 2012 1:10 PM - ya now only i saw the post, i can try and anydoubt i will ask you................Tuesday, March 27, 2012 1:12 PM >but in the output when i enter a string to the editbox in the page1 (m_E11) it doesnt shows in the second page m_E21; If you've got 2 edit controls in separate pages then you need to use SetWindowText to get the text to update. 
Having said that, it would seem to me that your UI design is odd - your shared variable/edit control doesn't really appear as though it should be on the page - instead it should be a control on a parent window.
Dave
Tuesday, March 27, 2012 1:13 PM

CMySheet means -> is it the propertysheet for the property pages? can i declare it in my propertysheet.cpp?
Tuesday, March 27, 2012 1:15 PM

    // PropSheet.cpp: implementation of the CPropSheet class.
    //
    //////////////////////////////////////////////////////////////////////

    #include "stdafx.h"
    #include "dataexchange.h"
    #include "PropSheet.h"
    //#include "Page1.h"
    //#include "Page2.h"

    #ifdef _DEBUG
    #undef THIS_FILE
    static char THIS_FILE[]=__FILE__;
    #define new DEBUG_NEW
    #endif

    //////////////////////////////////////////////////////////////////////
    // Construction/Destruction
    //////////////////////////////////////////////////////////////////////

    IMPLEMENT_DYNAMIC(CPropSheet, CPropertySheet)

    CPropSheet::CPropSheet(LPCTSTR szCaption, CWnd *pParent, UINT iSelectPage):
        CPropertySheet(szCaption, pParent, iSelectPage)
    {
        AddPage(&P1);
        AddPage(&P2);

        CPropSheet* pSheet = (CPropSheet*)GetParent();
        pSheet->m_E11_1 = m_E11;
        pSheet->m_E12_1 = m_E12;
        pSheet->m_E21_1 = m_E21;
        pSheet->m_E22_1 = m_E22;
        UpdateData(FALSE);
    }

    CPropSheet::~CPropSheet()
    {
    }

    BEGIN_MESSAGE_MAP(CPropSheet, CPropertySheet)
    END_MESSAGE_MAP()

    // PropSheet.h: interface for the CPropSheet class.
    //
    //////////////////////////////////////////////////////////////////////

    #if !defined(AFX_PROPSHEET_H__81F3D431_D802_4C15_BAD7_FD52576935E0__INCLUDED_)
    #define AFX_PROPSHEET_H__81F3D431_D802_4C15_BAD7_FD52576935E0__INCLUDED_

    #if _MSC_VER > 1000
    #pragma once
    #endif // _MSC_VER > 1000

    #include "Page1.h"
    #include "Page2.h"

    class CPropSheet : public CPropertySheet
    {
        DECLARE_DYNAMIC(CPropSheet)

    public:
        CPropSheet(LPCTSTR szCaption, CWnd *pParent = NULL, UINT iSelectPage = 0);
        virtual ~CPropSheet();

        CPage1 P1;
        CPage2 P2;

        CString m_E11_1;
        CString m_E12_1;
        CString m_E21_1;
        CString m_E22_1;

    protected:
        DECLARE_MESSAGE_MAP()

    private:
        // CPage1 P1;
        // CPage2 P2;
    };

    #endif // !defined(AFX_PROPSHEET_H__81F3D431_D802_4C15_BAD7_FD52576935E0__INCLUDED_)

i inserted both code blocks of PropSheet.h and PropSheet.cpp. while compiling i got the following errors:

    Compiling...
    PropSheet.cpp
    C:\Temp\dataexchange\PropSheet.cpp(31) : error C2065: 'm_E11' : undeclared identifier
    C:\Temp\dataexchange\PropSheet.cpp(32) : error C2065: 'm_E12' : undeclared identifier
    C:\Temp\dataexchange\PropSheet.cpp(33) : error C2065: 'm_E21' : undeclared identifier
    C:\Temp\dataexchange\PropSheet.cpp(34) : error C2065: 'm_E22' : undeclared identifier
    Error executing cl.exe.
    dataexchange.exe - 4 error(s), 0 warning(s)

m_E11, m_E12 are variables in page1 and m_E21, m_E22 are variables in page2.
Wednesday, March 28, 2012 8:10 AM

>> CPropSheet* pSheet = (CPropSheet*)GetParent();
>> pSheet->m_E11_1 = m_E11;

It makes no sense to put this in the CPropSheet class. The pages are child windows of the sheet, so the pages can use GetParent to access variables in the sheet.
Wednesday, March 28, 2012 1:49 PM

In this case, variables are not declared in my sheet; the variables are declared in the pages themselves. how should i get the variables from the sheet without variables in the sheet?
ppdev
Thursday, March 29, 2012 4:12 AM

Add some variables to the sheet.
Those variables can be accessed from any page.
Thursday, March 29, 2012 2:04 PM
http://social.msdn.microsoft.com/Forums/en-US/38b4cb21-6607-4bae-9b0b-24de2a296ce0/how-to-exchange-data-between-two-property-pages-in-the-same-program?forum=vcmfcatl
Bouncing a Ball with Mixed Integer Programming

Edit: A new version.

Here I made a bouncing ball using mixed integer programming in cvxpy. Currently we are just simulating the bouncing ball internal to a mixed integer program. We could turn this into a control program by making the constraint that you have to shoot a ball through a hoop and have it figure out the appropriate initial shooting velocity.

    import numpy as np
    import cvxpy as cvx
    import matplotlib.pyplot as plt

    N = 100
    dt = 0.05

    x = cvx.Variable(N)
    v = cvx.Variable(N)
    collision = cvx.Variable(N-1, boolean=True)

    constraints = []
    M = 20  # Big M trick

    # initial conditions
    constraints += [x[0] == 1, v[0] == 0]

    for t in range(N-1):
        predictedpos = x[t] + v[t] * dt
        col = collision[t]
        notcol = 1 - collision[t]
        constraints += [-M * col <= predictedpos, predictedpos <= M * notcol]
        # enforce regular dynamics if col == 0
        constraints += [-M * col <= x[t+1] - predictedpos,
                        x[t+1] - predictedpos <= M * col]
        constraints += [-M * col <= v[t+1] - v[t] + 9.8*dt,
                        v[t+1] - v[t] + 9.8*dt <= M * col]
        # reverse velocity, keep position the same if it would collide with x = 0
        constraints += [-M * notcol <= x[t+1] - x[t],
                        x[t+1] - x[t] <= M * notcol]
        constraints += [-M * notcol <= v[t+1] + 0.8*v[t],
                        v[t+1] + 0.8*v[t] <= M * notcol]  # 0.8 restitution coefficient

    objective = cvx.Maximize(1)
    prob = cvx.Problem(objective, constraints)
    res = prob.solve(solver=cvx.GLPK_MI, verbose=True)

    print(x.value)
    print(v.value)
    plt.plot(x.value, label='x')
    plt.plot(v.value, label='v')
    plt.plot(collision.value, label='collision bool')
    plt.legend()
    plt.xlabel('time')
    plt.show()

Pretty cool. The trick I used this time is to make boolean indicator variables for whether a collision will happen or not. The big M trick is then used to actually make the variable reflect whether the predicted position will be outside the wall at x = 0. If it isn't, it uses regular gravity dynamics.
If it will, it uses velocity-reversing bounce dynamics.

Just gonna dump this draft out there since I've moved on (I'll edit this if I come back to it). You can embed collisions in mixed integer programming. I did it below using a strong acceleration force that turns on when you enter the floor. What this corresponds to is a piecewise linear potential barrier. Such a formulation might be interesting for the trajectory optimization of shooting a hoop, playing Pachinko, Beer Pong, or Pinball.

    using JuMP
    using Cbc
    using Plots

    N = 50
    T = 5
    dt = T/N

    m = Model(solver=CbcSolver())

    @variable(m, x[1:N])
    @variable(m, v[1:N])
    @variable(m, f[1:N-1])
    @variable(m, a[1:N-1], Bin)

    @constraint(m, x[1] == 1)
    @constraint(m, v[1] == 0)

    M = 10
    for t in 1:N-1
        @constraint(m, x[t+1] == x[t] + dt*v[t])
        @constraint(m, v[t+1] == v[t] + dt*(10*(1-a[t])-1))
        #@constraint(m, v[t+1] == v[t] + dt*(10*f[t]-1))
        @constraint(m, M * a[t] >= x[t+1])    # if the next step projects into the earth
        @constraint(m, M * (1-a[t]) >= -x[t+1])
        #@constraint(m, f[t] <= M*(1-a[t]))   # we allow a bouncing force
    end

    k = 10
    # @constraint(m, f .>= 0)
    # @constraint(m, f .>= - k * x[2:N])
    # @constraint(m, x[:] .>= 0)

    E = 1  # sum(f) # sum(x) # + 10*sum(x) # sum(a)
    @objective(m, Min, E)

    solve(m)
    println(x)
    println(getvalue(x))

    plotly()
    plot(getvalue(x))
    #plot(getvalue(a))
    gui()

More things to consider:

- Is this method trash? Yes. You can actually embed the mirror law of collisions directly without needing to use a funky barrier potential.
- You can extend this to a ball trapped in a polygon, or a ball that is restricted from entering obstacle polygons.
- Check out the IRIS project - break up the region into convex regions.
- Gives good support for embedding conditional variables. On a related note, gives a good way of defining piecewise linear functions using Mixed Integer programming.
- Pajarito is another interesting Julia project: a mixed integer convex programming solver.
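As a sanity check on what the MIP is supposed to reproduce, the same hybrid dynamics can be forward-simulated directly with no solver at all. This is my own sketch using the cvxpy model's constants (dt = 0.05, g = 9.8, restitution 0.8), not code from the post:

```python
# Plain forward simulation of the bounce dynamics encoded in the
# mixed-integer model above: Euler gravity steps, and a velocity-reversing
# bounce (restitution 0.8) whenever the predicted position would cross x = 0.
def simulate_bounce(x0=1.0, v0=0.0, dt=0.05, g=9.8, restitution=0.8, steps=100):
    xs, vs = [x0], [v0]
    for _ in range(steps - 1):
        x, v = xs[-1], vs[-1]
        predicted = x + v * dt
        if predicted >= 0:
            # regular dynamics: advance position, apply gravity
            xs.append(predicted)
            vs.append(v - g * dt)
        else:
            # collision: keep position, reverse and damp velocity
            xs.append(x)
            vs.append(-restitution * v)
    return xs, vs

xs, vs = simulate_bounce()
print(len(xs), min(xs))
```

Comparing this trajectory against `x.value` from the MIP is a quick way to confirm the big-M constraints are actually selecting the intended branch at each step.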
- Russ Tedrake papers - break up obstacle objects into Delaunay-triangulated things.
https://www.philipzucker.com/bouncing-a-ball-with-mixed-integer-programming/
Twitter Follower Value, revisited

April 1, 2010

In my last post, I presented a Groovy class for computing Twitter Follower Value (TFV), based on Nat Dunn's definition of the term (number of followers / number of friends). That worked just fine. Then I moved on to calculating Total Twitter Follower Value (TTFV), which sums the TFVs of all your followers. My solution ground to a halt, however, when I ran into a rate limit at Twitter.

It turns out I didn't read the API carefully enough. I thought that to calculate TTFV, I would have to get all the follower IDs for a given person and loop over them, calculating each of their TFVs. That's actually not the case. There is a call in the Twitter API to retrieve all of an individual's followers, and the returned XML lists the number of friends and followers for each. It's therefore time to redesign my original solution.

I first added a TwitterUser class to my system.

    package com.kousenit.twitter

    class TwitterUser {
        def id
        def name
        def followersCount
        def friendsCount

        def getTfv() {
            followersCount / friendsCount
        }

        String toString() {
            "($id,$name,$followersCount,$friendsCount,${this.getTfv()})"
        }
    }

Putting the computation of TFV in TwitterUser makes more sense, since the two counts are there already. The TwitterFollowerValue class has also been redesigned. First of all, it expects an id for the user to be supplied, and stores that as an attribute. It also keeps the associated user instance around so that doesn't have to be recomputed all the time.

    package com.kousenit.twitter

    class TwitterFollowerValue {
        def id
        TwitterUser user

        def getTwitterUser() {
            if (user) return user
            def url = ""
            def response = new XmlSlurper().parse(url)
            user = new TwitterUser(id:id, name:response.name.toString(),
                friendsCount:response.friends_count.toInteger(),
                followersCount:response.followers_count.toInteger())
            return user
        }

        // ... more to come ...
The getTwitterUser method checks to see if we've already retrieved the user, and if so returns it. Otherwise it queries the Twitter API for a user, converts the resulting XML into an instance of the TwitterUser class, saves it locally, and returns it.

The next method is something I knew I'd need eventually.

    // ... from above ...
    def getRateLimitStatus() {
        def url = ""
        def response = new XmlSlurper().parse(url)
        return response.'remaining-hits'.toInteger()
    }
    // ... more to come ...

Twitter limits the number of API calls to 150 per hour, unless you apply to be on the whitelist (which I may do eventually). The URL shown in the getRateLimitStatus method checks on the number of calls remaining in that hour. Since the XML tag is <remaining-hits>, which includes a dash in the middle, I need to wrap it in quotes in order to traverse the XML tree.

I added one simple delegate method to retrieve the user, which also initializes the user field if it hasn't been initialized already.

    def getTfv() {
        user?.tfv ?: getTwitterUser().tfv
    }

This uses both the safe dereference operator ?. and the cool Elvis operator ?: to either return the user's TFV if the user exists, or find the user and then get its TFV if it doesn't. I'm not wild about relying on the side effect of caching the user in my get method (philosophically, any get method shouldn't change the system's state), but I'm not sure what the best way to do that is. Maybe somebody will have a suggestion in the comments.

(For those who don't know, the Elvis operator is like a specialized form of the standard ternary operator from Java. If the value to the left of the question mark is not null, it's returned; otherwise the expression to the right of the colon is executed. If you turn your head to the side, you'll see how the operator gets its name. Thank you, thank you very much.)

Next comes a method to retrieve all the followers as a list.
    def getFollowers() {
        def slurper = new XmlSlurper()
        def followers = []
        def next = -1
        while (next) {
            def url = ""
            def response = slurper.parse(url)
            response.users.user.each { u ->
                followers << new TwitterUser(id:u.id, name:u.name.toString(),
                    followersCount:u.followers_count.toInteger(),
                    friendsCount:u.friends_count.toInteger())
            }
            next = response.next_cursor.toBigInteger()
        }
        return followers
    }

The API request for followers only returns 100 at a time. If there are more than 100 followers, the <next_cursor> element holds the value of the cursor parameter for the next page. For users with lots of followers, this is going to be time consuming, but there doesn't appear to be any way around that. The value of next_cursor seems to be a randomly selected long value, so I just went with BigInteger to avoid any problems. Note we're relying on the Groovy Truth here, meaning that if the next value is not zero, the while condition is true and the loop continues.

Finally we have the real goal, which is to compute the Total TFV. Actually, it's pretty trivial now, but I do make sure to check to see if I have enough calls remaining to do it.

    def getTTFV() {
        def totalTTFV = 0.0

        // check if we have enough calls left to do this
        def numFollowers = user?.followersCount ?: getTwitterUser().followersCount
        def numCallsRequired = (int) (numFollowers / 100)
        def callsRemaining = getRateLimitStatus()
        if (numCallsRequired > callsRemaining) {
            println "Not enough calls remaining this hour"
            return totalTTFV
        }

        // we're good, so do the calculation
        getFollowers().each { TwitterUser follower ->
            totalTTFV += follower.tfv
        }
        return totalTTFV
    }

That's all there is to it. Here's my test case, which shows how everything is supposed to work.
package com.kousenit.twitter;

import static org.junit.Assert.*;

import org.junit.Before;
import org.junit.Test;

class TwitterValueTest {
    TwitterFollowerValue tv

    @Before
    public void setUp() throws Exception {
        tv = new TwitterFollowerValue(id: '15783492')
    }

    @Test
    public void testGetTwitterUser() {
        TwitterUser user = tv.getTwitterUser()
        assertEquals '15783492', user.id
        assertEquals 'Ken Kousen', user.name
        assertEquals 90, user.friendsCount
        assertEquals 108, user.followersCount
    }

    @Test
    public void testGetTFV() {
        assertEquals 1.2, tv.tfv, 0.0001
    }

    @Test
    public void testGetFollowers() {
        def followers = tv.getFollowers()
        assertEquals 109, followers.size()
    }

    @Test
    public void testGetTTFV() {
        assertEquals 135.08, tv.getTTFV(), 0.01
    }
}

As you can see, my TTFV as of this writing is a little over 135, though my TFV is only about 1.2.

I also put together a script to use this system for a general user and to output more information:

package com.kousenit.twitter

import java.text.NumberFormat;

NumberFormat nf = NumberFormat.instance
TwitterFollowerValue tfv = new TwitterFollowerValue(id: 'kenkousen')
total = 0.0
tfv.followers.sort { -it.tfv }.each { follower ->
    total += follower.tfv
    println "${nf.format(follower.tfv)}\t$follower.name"
}
println total

I need to supply an id when I instantiate the TwitterFollowerValue class. That id can either be numeric, as I used in my test cases, or just the normal Twitter id used with an @ sign (e.g., @kenkousen). The cool part is the sort method applied after retrieving the followers. The sort method takes a closure to do the comparison. If this were Java, that would be the “int compare(T o1, T o2)” method from the java.util.Comparator interface, likely implemented by an anonymous inner class. I think you’ll agree this is better. 🙂 Incidentally, I used a minus sign because I wanted the values sorted from highest to lowest.
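As a cross-language aside (not from the original post), the same negate-the-key trick works in Python’s sorted(); the follower data below is made up for illustration.

```python
# Python analog of the Groovy idiom `followers.sort { -it.tfv }`:
# negating the numeric sort key yields a highest-to-lowest ordering.
followers = [("alice", 2.4), ("bob", 9.6), ("carol", 3.1)]  # hypothetical data
by_value_desc = sorted(followers, key=lambda f: -f[1])
print([name for name, _ in by_value_desc])  # ['bob', 'carol', 'alice']
```

(For non-numeric keys, `sorted(..., reverse=True)` is the more general way to get a descending order.)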
My result is:

12.135 Dierk König
10.077 Graeme Rocher
9.621 Glen Smith
4.667 Kirill Grouchnikov
3.89 Mike Loukides
3.1 Christopher M. Judd
3.01 Robert Fischer
3 Marcel Overdijk
2.847 Andres Almiray
2.472 jeffscottbrown
2.363 Dave Klein
2.322 GroovyEclipse
2.238 James Williams
2.034 Safari Books Online
...
0.037 HortenseEnglish
0.007 Showoff Cook
135.0820584094

Since this was all Nat’s idea, here’s his value as well:

6.281 Pete Freitag
5.933 CNY ColdFusion Users
3.085 Barbara Binder
2.712 Mike Mayhew
2.537 Jill Hurst-Wahl
2.406 Andrew Hedges
2.333 roger sakowski
2.138 Raquel Hirsch
1.986 TweetDeck
...
0.1 Richard Banks
0.092 Team Gaia
0.05 AdrianByrd
0.043 OletaMullins
0.039 SuzySharpe
122.8286508850

My TTFV is higher than his, but his TFV is higher than mine. Read into that whatever you want.

The next step is to make this a web application so you can check your own value. I imagine that’ll be the subject of another blog post.
https://kousenit.org/2010/04/
Python, in computer science, relies heavily on libraries. One of these libraries is NumPy, which is essential for numerical coding. It is generally used to work with arrays, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.

About NumPy

The ancestor of NumPy, Numeric, was created by Jim Hugunin with the help of several other developers. In 2005, Travis Oliphant created NumPy by incorporating the features of the competing Numarray into Numeric, with extensive modifications. NumPy is open-source software and has many contributors.

Uses of NumPy

- An alternative to lists and arrays in Python: Arrays in NumPy play the role of lists in Python, but a NumPy array is a homogeneous set of elements. This feature distinguishes NumPy arrays from Python lists, and it keeps mathematical operations uniform in a way that would not be possible with heterogeneous elements.
- NumPy keeps memory use minimal: NumPy has several features that avoid memory wastage in the data buffer. It provides operations such as copies, views, and indexing that help save a lot of memory; indexing can return a view of the original array, which allows the underlying data to be reused rather than duplicated.
- Using NumPy for multi-dimensional arrays: Multi-dimensional arrays, with multiple rows and columns, can be created in NumPy. A multi-dimensional array therefore makes it possible to represent matrices, which are easy to work with, and the resulting code is also memory efficient.
- Mathematical operations with NumPy: NumPy includes easy-to-use functions for mathematical computations on array data sets.
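The last two points above can be seen in a quick sketch (illustrative, with made-up data): arithmetic on NumPy arrays applies element-wise across the whole array, with no explicit Python loop.

```python
# Element-wise arithmetic on homogeneous NumPy arrays.
import numpy as np

prices = np.array([10.0, 20.0, 30.0])
quantities = np.array([3, 1, 2])
totals = prices * quantities       # element-wise multiplication
print(totals.tolist())             # [30.0, 20.0, 60.0]
print(float(totals.sum()))         # 110.0
```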
NumPy Array Applications

There are a number of applications of NumPy, as well as areas where the NumPy library is used in Python. Some of them are listed below:

Shape Manipulations

If the result has the same total number of elements, the user can change array dimensions at runtime. The np.reshape(…) function is applied to an array to reshape it. This is useful for performing various operations, and it can also be used when we want to broadcast two arrays of different shapes.

Array Generation

We generate array data sets to exercise various functions. We can also generate a predefined sequence of numbers for the array elements using the np.arange(…) function. The reshape function is useful for producing a different set of dimensions, and arrays filled with random values can be generated as well.

Array Dimension

NumPy supports both one-dimensional and multi-dimensional arrays. Some functions have restrictions on multi-dimensional arrays, so it is sometimes necessary to transform those arrays into one-dimensional arrays.

Data Types of NumPy

NumPy has some extra data types, each referred to by a one-character code: for example, “i” for integers, “u” for unsigned integers, and so on. Here is the list of all the data types in NumPy and the characters used to represent them:

i – integer
b – boolean
u – unsigned integer
f – float
c – complex float
m – time delta
M – DateTime
O – object
S – string
U – Unicode string
V – a fixed chunk of memory for another type (void)

NumPy Example in Python

import numpy as np

# First we create an array object
arr = np.array([[9, 8, 7],
                [6, 5, 4]])

# Print the type of the arr object
print("Array is of type: ", type(arr))

# Print the array dimensions (axes)
print("No. of dimensions: ", arr.ndim)

# Print the shape of the array
print("Shape of array: ", arr.shape)

# Print the size (total number of elements) of the array
print("Size of array: ", arr.size)

# Print the type of elements in the array
print("Array stores elements of type: ", arr.dtype)

Output of the Code

Array is of type: <class 'numpy.ndarray'>
No. of dimensions: 2
Shape of array: (2, 3)
Size of array: 6
Array stores elements of type: int64
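The shape-manipulation and array-generation functions mentioned above can be sketched in a few lines. Note the spelling of the range generator: it is np.arange (one “r”), not np.arrange.

```python
# Generating a sequence with np.arange and reshaping it.
import numpy as np

a = np.arange(6)           # array of 0, 1, 2, 3, 4, 5
m = a.reshape(2, 3)        # reinterpret as 2 rows x 3 columns
print(m.shape)             # (2, 3)
print(m.ravel().tolist())  # back to one dimension: [0, 1, 2, 3, 4, 5]
```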
https://www.developerhelps.com/numpy/
When reading a nullable column from a data reader, the typical pattern looks like this:

if (!DBNull.Value.Equals(reader["Age"]))
{
    age = (int)reader["Age"];
}

This conversion seems ugly and it is all over the code. Is there any other elegant solution to this?

You could solve this with an extension method. The extension method would look something like this:

public static TValue GetNullableValue<TValue>(this DbDataReader reader, string name)
{
    object value = reader[name];
    if (DBNull.Value.Equals(value))
    {
        return default(TValue);
    }
    return (TValue)value;
}

And your data access code will be:

int? age = customerReader.GetNullableValue<int?>("Age");

I’ve encountered the same ugly convention as well, and was thinking along the lines of an extension method similar to the previous poster’s. Although he missed the part about it not being a nullable type (easy enough to tweak his method to be similar to SQL’s IsNull function). When it is a nullable type, I’ve done this in the past:

int? age = dr["Age"] as int?;

This problem (DBNull) has been around since well before .NET, and Microsoft has never done anything to make it any better; nullable types don’t really bring us any closer to a way to map between a database null and a data type. Maybe the Entity Framework will finally fix this issue.

I also noticed this issue when I was using VB2005 to retrieve data from SQL2005. I created a class named DBNullable(Of T) to deal with it. But I think M$ need to do something to fix this problem because it’s confusing and fussy when developing a data-based system.

I love Trygve’s idea, but it doesn’t apply to DataSets (yes, some of us still need to use DataSets).

…and farrio, re: ‘M$’, are you like 4 years old or something? There’s no reason why this shouldn’t work for DataSets. You just need another extension method. You can even take the table name as a parameter in this method to resolve which table of the dataset you want to get data from.

I like the extension method idea as it will work for both nullable and non-nullable types.

int? age = dr["Age"] as int?;

also works. It works because .NET is unable to cast it and assigns a NULL instead, which ends up being what we desire in this case. I think this depends on whether you want to keep -1 as the default value or whether you want to keep the null value.

One possibility is to use the as operator for a safe type cast:

int age = reader["Age"] as int? ?? -1;
int? age = reader["Age"] as int?;

Unfortunately, this has negative consequences that can be difficult to catch. If you mistakenly cast a smallint to byte, for example, the as operator will silently swallow the InvalidCastException. I think a more reasonable solution, with some measure of safety, is to use a nullable cast, either with or without the null coalescing operator:

int age = (int?) reader["Age"] ?? -1;
int? age = (int?) reader["Age"];

The code is easy to read and it has the advantage that InvalidCastException is thrown early in case you mistakenly cast to the wrong type, which I’ve found is easy to do with large tables.

Thanks – the as operator with ?? <defaultvalue> works perfectly when using nullable types for the database tier. I’ve been working like mad to get my 3-tier construction to work. As I’m not all that great in C# yet,

Check out the SafeDataReader class from CSLA.NET. I customized it and used it to solve the repeated DBNull check.

How do you make these conversions using VB2005? I was initially developing my database system in C# but had problems because I’m a beginner; I switched back to VB and the NullReferenceException and InvalidCastExceptions are still giving me hell!

Can’t we use Convert.ToString() as compared to DBNull.Value? I think it’s better. Is it?
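As an editorial aside (not from the original thread), the same NULL-mapping concern shows up in other database APIs; in Python’s sqlite3 module, SQL NULL simply arrives as None, so the “DBNull check” collapses to a None check with a default.

```python
# SQL NULL maps to Python's None; coalescing to a default mirrors `?? -1`.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT, age INTEGER)")
conn.execute("INSERT INTO customer VALUES ('Ann', 34), ('Bob', NULL)")

ages = {}
for name, age in conn.execute("SELECT name, age FROM customer"):
    ages[name] = age if age is not None else -1  # like `?? -1` in C#
print(ages)  # {'Ann': 34, 'Bob': -1}
```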
https://blogs.msdn.microsoft.com/thottams/2008/06/30/dbnull-and-nullable-types/
Have you wanted to start playing with Perl 6 but find yourself wondering what to write? I use Pugs, a Perl 6 implementation being written in Haskell, and have been tremendously enjoying Perl 6. Like many, I’m impatient, but the work on Perl 6 has been progressing quite well and I’m quite keen to see the alpha. However, if you’re like me, you probably do better with a new language by actually writing something in it. Well, not only do I have something for you to write, you can actually help out the Perl 6 effort!

Recently I stumbled across 99 Problems in Lisp, which was in turn apparently borrowed from 99 Problems in Prolog. I’ve started 99 Problems in Perl 6. I started out by writing a program which would take the text of the “99 Problems” and split them into separate test files for Perl 6. The first one looks like this:

use v6-alpha;
use Test;

plan 1;

# P01 (*) Find the last box of a list.
#
# Example:
# * (my-last '(a b c d))
# (D)

is <a b c d>.[-1], 'd', 'Find the last box of a list.';

Each test file contains the entire text of the problem, though some refer you to previous problems. As of this writing, only the first 24 have been “solved” and the rest contain “skip” tests. If you can solve a problem, feel free to send an email to one of the Pugs mailing lists. Commit bits are handed out very liberally. Then you can add the solution and work on other problems.
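For comparison (my addition, not part of the original test suite), the P01 check above maps almost one-to-one onto Python, where a negative index reads from the end of a list, just like Perl 6’s `.[-1]`:

```python
# P01: find the last box of a list, via a negative index.
items = ['a', 'b', 'c', 'd']
assert items[-1] == 'd', 'Find the last box of a list.'
print(items[-1])  # d
```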
To give you an example of how some of the languages compare, here’s the “lotto” problem (Draw N different random numbers from the set 1..M) in Lisp:

(defun range (ini fim)
  (if (> ini fim)
      (if (eql ini fim)
          (cons fim nil)
          (cons ini (range (- ini 1) fim)))
      (if (eql ini fim)
          (cons fim nil)
          (cons ini (range (+ ini 1) fim)))))

(defun remove-at (org-list pos &optional (ini 1))
  (if (eql pos ini)
      (cdr org-list)
      (cons (car org-list) (remove-at (cdr org-list) pos (+ ini 1)))))

(defun rnd-select (org-list num &optional (selected 0))
  (if (eql num selected)
      nil
      (let ((rand-pos (+ (random (length org-list)) 1)))
        (cons (element-at org-list rand-pos)
              (rnd-select (remove-at org-list rand-pos) num (+ selected 1))))))

(defun lotto-select (num-elem max-elem)
  (rnd-select (range 1 max-elem) num-elem))

Wow! That’s hideously verbose and I suspect there’s a better way. Here it is in Prolog:

range(I,I,[I]).
range(I,K,[I|L]) :- I < K, I1 is I + 1, range(I1,K,L).

remove_at(X,[X|Xs],1,Xs).
remove_at(X,[Y|Xs],K,[Y|Ys]) :- K > 1, K1 is K - 1, remove_at(X,Xs,K1,Ys).

rnd_select(_,0,[]).
rnd_select(Xs,N,[X|Zs]) :- N > 0, length(Xs,L), I is random(L) + 1,
    remove_at(X,Xs,I,Ys), N1 is N - 1, rnd_select(Ys,N1,Zs).

lotto(N,M,L) :- range(1,M,R), rnd_select(R,N,L).

That’s better, but not much. The Haskell solution isn’t much better. :) Here it is in Perl 6:

subset Positive::Int of Int where { $_ > 0 };

sub lotto (Positive::Int $count, Positive::Int $range) returns List {
    return (1 .. $range).pick($count);
}

(The current implementation of Pugs does not yet support a 'subset', so that’s not yet included in the test problem.)

Needless to say, that’s much easier to read. And look! A better type mechanism. :) Part of the reason why that works so well is that Perl 6 is heavily focused on solving problems that programmers really face. Now, rather than testing whether your 'Int' is in the allowable range, you can carefully define a special 'subset' which details what that allowable range is.
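For yet another point of comparison (my addition, not from the post), the same problem in Python is also a single library call, like Perl 6’s `(1 .. $range).pick($count)`; the explicit range check stands in for the `Positive::Int` subset constraint.

```python
# Lotto: draw `count` different random numbers from the set 1..rng.
import random

def lotto(count, rng):
    if not 0 < count <= rng:
        raise ValueError("count must be between 1 and range")
    return random.sample(range(1, rng + 1), count)  # sampling w/o replacement

picks = lotto(6, 49)
print(len(picks), len(set(picks)))  # 6 6
```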
So go out there and do my homework! Things which will help: - You might want to check the README first. - Read the latest Perl 6 documentation, - Hang out on #perl6 on irc.freenode.net - For Firefox users, create a “perl6doc” keyword search using Here is a version in OCaml. Not as short as the Perl 6, but not as long as the others either. Don't mean to troll... but as soon as I read the title the following popped in to my head... Havin' Perl Problems? I feel bad for you son. I got 99 problems but Perl aint one.... Tip of the hat to Jay-Z. This is how I would solve the problem in C: lotto takes m numbers from an array of n, placed at the start of the array. example usage: There's probably a much nicer way to rewrite the Lisp version. Also would like to note, the Perl 6 version is short because the main algorithm is already implemented by the pick method. shaurz: your code has a bug. Note that the Perl 6 version forbids integers less than 1. If you call lotto with negative numbers, your code breaks. I think it's fair to ask that functionality be equivalent when comparing programming languages :) The bug is in the caller ;-) I guess the point was to show that the problem is trivial, even in a terrible language like C. Anonymous: that's a fair point, but there are two things to consider. First, Perl 6 is heavily focused on providing built-in solutions to common problems. Thus, .pick might sound like I'm cherry picking (hah!) solutions which look better in Perl 6, but it's common. Want to know if all elements in one list are contained in another? Also note that some examples posted don't have the nice "Positive::Int" constraint, thus contain bugs. That's another feature which is harder to concisely duplicate. Side note: Positive::Int should have been Int::Positive. It's better to go from the general to the specific rather than the other way around. 
But, as the anonymous commenter said, this is just because the problem has already been solved in the language’s standard library.

michele: nothing wrong with the problem being solved by a standard library or a built-in feature. Regular expressions are one of the many built-in things which first drove people to Perl. However, what happens to your code if you pass in values less than one? The code’s not functionally equivalent. Of course, I just noticed that my code has the bug that I didn’t validate that $count <= $range :)

(sorry, anonymous was me)

A bit nicer CL version:

Ovid, I think it does: if either of the values is < 0, or count > range, a ValueError exception will be raised (in the first case, it will have a misleading “sample larger than population” message, though).

this is a lame example. first off it doesn’t work. second off it just relies on a builtin function called “pick”. big whoop.

It seems to me that the Lisp / Prolog / Haskell versions take pains to be an efficient implementation and not construct a complete list of integers 1 .. $range before picking some. What is the time and space complexity of the different algorithms?

Ed Avis: it’s tough for me to answer that, but I do know that Perl 6 also attempts to automatically make lists lazy, so it should be relatively efficient with large lists.

Too bad most of these problems seem like academic exercises: compute prime numbers, compute other math values, sort lists of lists.

Adam, those are problems I encounter all the time while programming. Just the other day I needed to find the least common multiple between two numbers (useful when you need to synchronize timing events, for example), and that requires finding the prime factors of numbers. Sorting lists of lists is also extremely useful in a number of contexts, including reporting.
Other issues such as filtering and manipulating lists, generating combinations and permutations, working with trees and graphs, etc., are things that I find myself using now and then. While a number of the items are certainly things I haven’t used, I certainly have used most of these concepts at one time or another. By the way:

lcm(a, b) = a * b / gcd(a, b)
lcm(0, 0) = 0
gcd(a, 0) = a
gcd(a, b) = gcd(b, a mod b)

And, for what it’s worth, here’s one implementation of the lotto function in Lua, which avoids creating the temporary array, mostly.

You’re being a little harsh on the other solutions, frankly. Line count / character count is not a measure of code quality. Your argument is in fact: “Look, perl6 has more useful libraries than these other languages!” Which is a good argument, although a single example is hardly convincing. CPAN might be, though! (for those who find it useful) If you want to make claims about readability, then go and define the pick function in your example & then make the comparison...

I am totally new to Perl package installation. Currently I want to install the Net::Pcap package. Can anyone please let me know the complete procedure for how to do it? I have tried cpan.org (through CPAN I am facing a build problem on Cygwin) and tried ppm (facing problems with proxies). Please can anyone tell me how I can do it? Thanks in advance,
http://www.oreillynet.com/onlamp/blog/2006/12/99_problems_in_perl_6.html
Am 16.04.12 23:57, schrieb Reimar Döffinger: > On 16 Apr 2012, at 23:43, Thilo Borgmann <thilo.borgmann at googlemail.com> > wrote: >> Am 16.04.12 21:51, schrieb Reimar Döffinger: >>> On 16 Apr 2012, at 17:36, Thilo Borgmann <thilo.borgmann at googlemail.com> >>> wrote: >>>> Am 14.04.12 18:05, schrieb Reimar Döffinger: >>>>> On Sat, Apr 14, 2012 at 11:46:03AM +0200, Michael Niedermayer wrote: >>>>>> On Wed, Apr 11, 2012 at 12:50:34PM +0200, Thilo Borgmann wrote: >>>>>>>> this will update teh file even if it matches, this breaks >>>>>>>> caching of the logo >>>>>>> >>>>>>> Updated Patch attached. >>>>>> >>>>>> from a quick look, its ok maybe reimar can double check ? >>>>> >>>>> I don't see any real issues with it. Though if you're pedantic >>>>> there's a questions whether "test -r" is the best condition, copying >>>>> over the file to my knowledge won't fix any permissions which is why >>>>> I went with -f originally. I don't think it really matters, when it >>>>> makes a different things are already seriously broken anyway and will >>>>> need manual intervention. >>>> >>>> I don't really get it. >>>> >>>> "test -r" or "test -f" - what does that have to do with fixing >>>> permissions? >>> >>> What is the point of overwriting the file when the problem is that the >>> file exists but is not readable, when that will be the case afterward >>> still? >> >>> That's why I chose -f since it avoids pointlessly copying, but as said I >>> think it doesn't matter really. >> >> I might misunderstand test, but -f would pointlessly copy if the file is >> not readable. -r would at least not do that but could still fail if the >> file is not writable. So if we check the file, we should have to check r&w >> permissions. > > -------------- next part --------------. 
--- Makefile | 14 ++++++++++++-- 1 files changed, 12 insertions(+), 2 deletions(-) diff --git a/Makefile b/Makefile index 9c8110d..dbc8dd4 100644 --- a/Makefile +++ b/Makefile @@ -7,8 +7,15 @@ TARGETS = $(addsuffix .html,$(addprefix htdocs/,$(SRCS))) htdocs/main.rss PAGE_DEPS = src/template_head1 src/template_head2 src/template_footer +DATE := $(shell date +%m%d) -all: $(TARGETS) +ifneq ($(wildcard src/logik/$(DATE)-standard),) + LOGO_SRC := htdocs/FFmpeg_standard.png +else + LOGO_SRC := $(wildcard src/logik/$(DATE).png) +endif + +all: htdocs/ffmpeg-logo.png $(TARGETS) clean: rm -f $(TARGETS) @@ -35,5 +42,8 @@ X' >> $@ echo '</channel>' >> $@ echo '</rss>' >> $@ +htdocs/ffmpeg-logo.png: $(LOGO_SRC) + test -z $< || cmp $< $@ || cp $< $@ + test -e $@ || cp htdocs/FFmpeg_standard.png $@ -.PHONY: all clean +.PHONY: all clean htdocs/ffmpeg-logo.png -- 1.7.4.3
http://ffmpeg.org/pipermail/ffmpeg-devel/2012-April/123398.html
Caution: The documentation you are viewing is for an older version of Zend Framework. You can find the documentation of the current version at docs.zendframework.com

WSDL Accessor — Zend Framework 2 2.3.9 documentation

Each Web Service method is described using two messages:

- an input message with the name $methodName . 'Request';
- an output message with the name $methodName . 'Response'.

See for the details.

Here $name is a class name for the Web Service definition mode using a class, and a script name for the Web Service definition mode using a set of functions. See for the details.

The Zend\Soap WSDL accessor implementation uses the following type mapping between PHP and SOAP types, where xsd: is the "" namespace, soap-enc: is the "" namespace, and tns: is the "target namespace" for a service. The getType($type) method may be used to get the mapping for a specified PHP type.

The '/definitions/binding/soap:binding' element of the 'wsdl:document' element is used to signify that the binding is bound to the SOAP protocol format. See for the details.
https://framework.zend.com/manual/2.3/en/modules/zend.soap.wsdl.html
BigQuery python library

Project description

bqlib - BigQuery python library

A BigQuery python library. This library is a wrapper for bigquery_client.py.

Requirements

- Python 2.6 or later (no support for 3.x)

Setup

$ pip install bqlib

How to use

Single Query - BQJob

BQJob is a class to start a BigQuery job and fetch the result. You can use either the run_sync (synchronous) or run_async (asynchronous) method.

from bqlib import BQJob

project_id = 'example_project'
query = 'SELECT foo FROM bar'
http = authorized_http

bqjob = BQJob(project_id=project_id, query=query, http=http)

# run synchronously
job_result = bqjob.run_sync()

# or run asynchronously
bqjob.run_async()
# ... do other things ...
job_result = bqjob.get_result()

print job_result
# [{u'foo': 10}, {u'foo': 20}, ...]

Multiple Queries - BQJobGroup

BQJobGroup is a class for putting multiple BQJobs into one group. All BQJobs in the group are executed concurrently.

from bqlib import BQJob, BQJobGroup

bqjob1 = BQJob(project_id=project_id, query=query, http=http)
bqjob2 = BQJob(project_id=project_id, query=query, http=http)

job_group = BQJobGroup([bqjob1, bqjob2])

# synchronously
results = job_group.run_sync()

# or asynchronously
job_group.run_async()
results = job_group.get_results()

print results
# [[{'foo': 10}, {'foo': 20}], [{'bar': 'test'}]]

Note - Concurrent Requests to BigQuery

- Concurrent requests to BigQuery are restricted to 20 requests by the Quota Policy.
- If you want to allow up to 20 concurrent requests, you also have to configure traffic controls on the API console page.

License

This library is distributed under the MIT license.

History

2013-10-22 bqlib 0.0.1 - First release
https://pypi.org/project/bqlib/0.0.1/
@tubular/time

Not all days are 24 hours. Some are 23 hours, or 25, or even 23.5 or 24.5 or 47 hours. Some minutes are 61 seconds long. How about a Thursday followed directly by a Saturday, giving Friday the slip? Or a September only 19 days long? This is a date/time library for handling both day-to-day situations (so to speak) and some weird ones too.

Key features

- Mutable and immutable DateTime objects supporting the Gregorian and Julian calendar systems, with settable crossover.
- IANA timezone support, with features beyond formatting using timezones, such as parsing, accessible listings of all available timezones (single-array list, grouped by UTC offset, or grouped by region), and live updates of timezone definitions.
- Supports leap seconds and conversions between TAI (International Atomic Time) and UTC (Universal Coordinated Time).
- Supports and recognizes negative Daylight Saving Time.
- Extensive date/time manipulation and calculation capabilities.
- Many features available using a familiar Moment.js-style API.
- Astronomical time conversions among TDT (Terrestrial Dynamic Time), UT1, UTC and TAI, as well as local mean time, by geographic longitude, to one minute (of time) resolution.
- Internationalization via JavaScript’s Intl API, with additional built-in i18n support for issues not covered by Intl, and US-English fallback for environments without Intl support.
- Package suitable for tree shaking and Angular optimization.
- Full TypeScript typing support.
@tubular/time is a collection of date and time classes and functions, providing extensive internationalized date/time parsing and formatting capabilities; date/time manipulations such as field-specific add/subtract, set, and roll; calendar computations; support for live-updatable IANA time zones; and a settable Julian/Gregorian calendar switchover date. This library was originally developed for an astronomy website, and has some features of particular interest for astronomy and historical events, but has been expanded to provide many features similar to the now-legacy-status Moment.js. Unlike Moment.js, IANA timezone handling is built in, not a separate module, with a compact set of timezone definitions that reach roughly five years into the past and five years into the future, expanded into the past and future using Daylight Saving Time rules and/or values extracted from Intl.DateTimeFormat. Unlike the Intl API, the full list of available timezones is exposed, facilitating the creation of timezone selection interfaces. Two alternate large timezone definition sets, of approximately 280K each, are available, each serving slightly different purposes. These definitions can be bundled at compile time, or loaded dynamically at run time. You can also download live updates when the IANA Time Zone Database is updated.
- Installation
- Basic usage
- Formatting output
- Format string tokens
- Moment.js-style localized formats
- @tubular/time Intl.DateTimeFormat shorthand string formats
- Pre-defined formats
- Parsing with a format string, and optionally a locale
- Converting timezones
- Converting locales
- Defining and updating timezones
- The YMDDate and DateAndTime objects
- Reading individual DateTime fields
- Modifying DateTime values
- Time value
- Timezone offsets from UTC
- Validation
- Comparison and sorting
- Monthly calendar generation
- Dealing with weird months
- Dealing with weird days
- Leap Seconds, TAI, and Julian Dates
- Global default settings
- The DateTime class
- The Calendar class
- The Timezone class
- Other functions available on ttime
- Constants available on ttime

Installation

Via npm

npm install @tubular/time

import { ttime, DateTime, Timezone... } from '@tubular/time'; // ESM

...or...

const { ttime, DateTime, Timezone... } = require('@tubular/time/cjs'); // CommonJS

Documentation examples will assume @tubular/time has been imported as above.

Via <script> tag

To remotely download the full code as an ES module:

<script type="module">
  import('').then(pkg => {
    const { ttime, DateTime, Timezone} = pkg;
    // ...
  });
</script>

For the old-fashioned UMD approach (which can save you from about 560K of extra data):

<script src=""></script>
<script src=""></script>

The script element just above the index.js URL is an example of optionally loading extended timezone definitions. Such a script element, if used, should precede the index.js script. The @tubular/time package will be available via the global variable tbTime. tbTime.ttime is the default function, and other functions, classes, and constants will also be available on this variable, such as tbTime.DateTime, tbTime.julianDay, tbTime.TIME_MS, etc.
Basic usage

While there are a wide range of functions and classes available from @tubular/time, the workhorse is the ttime() function, which produces immutable instances of the DateTime class.

function ttime(initialTime?: number | string | DateAndTime | Date | number[] | null, format?: string, locale?: string | string[]): DateTime

Creating immutable DateTime instances with ttime()

DateTime instances can be created in many ways. The simplest way is to create a current-time instance, done by passing no arguments at all. Dates and times can also be expressed as strings, objects, and arrays of numbers.

When dealing with Daylight Saving Time, and days when clocks are turned backward, some hour/minute combinations are repeated. The time might be 1:59, go back to 1:00, then forward again to 1:59, and only after hitting 1:59 for this second time during the day, move forward to 2:00. By default, any ambiguous time is treated as the earlier time, the first occurrence of that time during a day. You can, however, use either an explicit UTC offset, or a subscript 2 (₂), to indicate the later time.

ttime('11/7/2021 1:25 AM America/Denver', 'MM/DD/YYYY h:m a z').toString()
  → DateTime<2021-11-07T01:25:00.000 -06:00§>

ttime('11/7/2021 1:25₂ AM America/Denver', 'MM/DD/YYYY h:m a z').toString()
  → DateTime<2021-11-07T01:25:00.000₂-07:00>

ttime('2021-11-07 01:25 -07:00 America/Denver').toString()
  → DateTime<2021-11-07T01:25:00.000₂-07:00>

Formatting output

Dates and times can be formatted in many ways, using a broad selection of format tokens, described in the table below. For the greatest adherence to localized formats for dates and times, you can use the IXX format strings, which call directly upon Intl.DateTimeFormat (if available) to create localized dates, times, and combined date/times. You can also produce more customized, flexible formatting, specifying the order, positioning, and style (text vs.
number, fully spelled out or abbreviated, with or without leading zeros) of each date/time field, with embedded punctuation and text as desired. For example:

ttime().format('ddd MMM D, y N [at] h:mm A z')
  → Wed Feb 3, 2021 AD at 8:59 PM EST

ttime().toLocale('de').format('ddd MMM D, y N [at] h:mm A z')
  → Mi 02 3, 2021 n. Chr. at 9:43 PM GMT-5

Please note that most unaccented Latin letters (a-z, A-Z) are interpreted as special formatting characters, as well as the tilde (~), so when using those characters as literal text they should be surrounded with square brackets, as with the word "at" in the example above.

Special CJK date formatting options

A few of the formatting tokens below can have an optional trailing tilde (~) added. This is for special handling of Chinese, Japanese, and Korean (CJK) date notation. The ~ is replaced, where appropriate, with 年, 月, or 日 for Chinese and Japanese, and with 년, 월, or 일 for Korean. Korean formatting also adds a space character when the following character is a letter or digit, but not when punctuation or the end of the format string comes next. For all other languages, ~ is replaced with a space character when the following character is a letter or digit, or simply removed when followed by punctuation or the end of the format string. For example:

ttime().toLocale('zh').format('MMM~YYYY~')
  → 8月2021年

ttime().toLocale('es').format('MMM~YYYY~')
  → ago 2021

Format string tokens

Moment.js formats not supported by @tubular/time: DDDo, Wo, wo, yo

@tubular/time formats not supported by Moment.js: KK, K, kk, k, ZZZ, V, v, R, r, n, IXX (IFF, IFL, IFM...
IxM, IxS)

Moment.js-style localized formats

@tubular/time Intl.DateTimeFormat shorthand string formats

These start with a capital letter I, followed by one letter for the date format, which corresponds to the dateStyle option of Intl.DateTimeFormat, and one letter for the time format, corresponding to the timeStyle option. The capital letters F, L, M, and S correspond to the option values 'full', 'long', 'medium', and 'short'. ILS thus specifies a long style date and a short style time. IL is a long style date alone, without time. IxS is a short style time without a date.

Examples

You can also augment these formats with brace-enclosed Intl.DateTimeFormatOptions, such as:

IMM{hourCycle:h23}

...which will start with whatever the localized time formatting is and force it into 24-hour time, whether the standard localized form is a 12- or 24-hour format. Note that no quotes are placed around the option values, as they would be in JavaScript/TypeScript code.

Pre-defined formats

ttime.DATETIME_LOCAL = 'Y-MM-DD[T]HH:mm';
ttime.DATETIME_LOCAL_SECONDS = 'Y-MM-DD[T]HH:mm:ss';
ttime.DATETIME_LOCAL_MS = 'Y-MM-DD[T]HH:mm:ss.SSS';
ttime.DATE = 'Y-MM-DD';
ttime.TIME = 'HH:mm';
ttime.TIME_SECONDS = 'HH:mm:ss';
ttime.TIME_MS = 'HH:mm:ss.SSS';
ttime.WEEK = 'GGGG-[W]WW';
ttime.WEEK_AND_DAY = 'GGGG-[W]WW-E';
ttime.WEEK_LOCALE = 'gggg-[w]ww';
ttime.WEEK_AND_DAY_LOCALE = 'gggg-[w]ww-e';
ttime.MONTH = 'Y-MM';

Parsing with a format string, and optionally a locale

(As viewed via formatted output)

Converting timezones

ttime('2005-10-10 16:30 America/Los_Angeles').tz('Europe/Warsaw').toString() → DateTime<2005-10-11T01:30:00.000 +02:00>

Please note that if you pass a second argument of true, the timezone is changed, but the wall time stays the same.
This same option to preserve wall time is available for the utc() and local() methods, where the optional boolean value will be the one and only argument.

ttime('2005-10-10 16:30 America/Los_Angeles').tz('Europe/Warsaw', true).toString() → DateTime<2005-10-10T16:30:00.000 +02:00>
ttime('2005-10-10 16:30 America/Los_Angeles').utc().toString() → DateTime<2005-10-10T23:30:00.000 +00:00>
ttime('2005-10-10 16:30 America/Los_Angeles').utc(true).toString() → DateTime<2005-10-10T16:30:00.000 +00:00>
// Local zone is America/New_York
ttime('2005-10-10 16:30 America/Los_Angeles').local().toString() → DateTime<2005-10-10T19:30:00.000 -04:00>

Converting locales

ttime('7. helmikuuta 2021', 'IL', 'fi').toLocale('de').format('IL') → 7. Februar 2021

Defining and updating timezones

These functions define the size and behavior of the IANA timezone definitions used by @tubular/time:

ttime.initTimezoneSmall();
ttime.initTimezoneLarge();
ttime.initTimezoneLargeAlt();

By default, @tubular/time is set up using initTimezoneSmall(). This covers explicitly-defined timezone information for roughly the release date of the version of @tubular/time you’re using, +/- five years, supplemented by rules-based extensions (i.e. knowing that for a particular timezone, say, “DST starts on the last Sunday of March and ends on the last Sunday of October”), and further supplemented by information extracted from Intl, when available. With proper tree-shaking, the code footprint of @tubular/time should be less than 150K when using the small timezone definitions.

Using initTimezoneLarge() provides the full IANA timezone database. Using this will increase code size by about 280K, presuming that your build process is smart enough to have otherwise excluded unused code in the first place. initTimezoneLargeAlt() provides a slight variant of the full IANA timezone database, and is also roughly 280K.
This variant rounds all timezone offsets to full minutes, and adjusts a small number of fairly old historical changes by a few hours so that only the time-of-day ever goes backward, never the calendar date. It’s generally more than enough trouble for software to cope with missing and/or repeated hours during a day; initTimezoneLargeAlt() makes sure the date/time can’t be, say, the 19th of the month, then the 18th, and then the 19th again, as happens with the unmodified America/Juneau timezone during October 1867.

For browser-based inclusion of timezone definitions, if not relying on a tool like webpack to handle such issues for you, you can also include full timezone definitions this way:

<script src=""></script>

...or...

<script src=""></script>

Either of these should appear before the script tag that loads @tubular/time itself.

Live timezone updates

Timezone definitions can be updated live as well. Different polling methods are needed for Node.js code or browser-hosted code, since both environments access web resources in very different ways (and browsers have CORS issues, which Node.js does not).

To be informed when a live timezone update takes place, add and remove update listeners using these functions:

function addZonesUpdateListener(listener: (result: boolean | Error) => void): void;
function removeZonesUpdateListener(listener: (result: boolean | Error) => void): void;
function clearZonesUpdateListeners(): void;

The result received by a callback is true if an update was successful and caused changes in timezone definitions, false if successful but no changes occurred, or an instance of Error, indicating an error (probably an HTTP failure) has occurred. For example:

const listener = result => console.log(result); // Keep in a variable if removal is needed later
ttime.addZonesUpdateListener(listener);
// Later on in the code...
ttime.removeZonesUpdateListener(listener);

Why use a listener?
Because you might want to recalculate previously calculated times, which possibly have changed due to timezone definition changes. For example, imagine you have a video meeting scheduled for 10:00 in a client’s timezone, which, when you first schedule it, was going to be 15:00 in your timezone. Between the time you scheduled the meeting, however, and when the meeting actually takes place, the switch to Daylight Saving Time is cancelled for the client’s timezone. If you still intend to talk to your client at 10:00 their time, you have to meet at 16:00 in your timezone instead.

To poll for timezone updates at a regular interval, use:

function pollForTimezoneUpdates(zonePoller: IZonePoller | false, name: ZoneOptions = 'small', intervalDays = 1): void;

zonePoller: Either zonePollerBrowser (from tbTime.zonePollerBrowser) or zonePollerNode (using import or require, from '@tubular/time'). If you pass the boolean value false, polling ceases.
name: One of 'small', 'large', or 'large-alt'. Defaults to 'small'.
intervalDays: Frequency of polling, in days. Defaults to 1 day. The fastest allowed rate is once per hour (~0.04167 days).

You can also do a one-off request:

function getTimezones(zonePoller: IZonePoller | false, name: ZoneOptions = 'small'): Promise<boolean>;

zonePoller and name are the same as above. Any periodic polling done by pollForTimezoneUpdates() is canceled. You can get a response via registered listeners, but this function also returns a Promise. The promise either resolves to a boolean value, or is rejected with an Error.
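The listener contract above can be sketched with a simple registry (an illustration of the callback shape only, not the library's internal implementation):

```javascript
// Minimal sketch of the zones-update listener contract: each registered
// listener receives true (definitions updated), false (update succeeded but
// nothing changed), or an Error (e.g. an HTTP failure).
const zoneListeners = new Set();

function addZonesUpdateListener(listener) { zoneListeners.add(listener); }
function removeZonesUpdateListener(listener) { zoneListeners.delete(listener); }
function clearZonesUpdateListeners() { zoneListeners.clear(); }

// Invoked when an update attempt completes (hypothetical trigger point).
function notifyZonesUpdate(result) {
  for (const listener of zoneListeners)
    listener(result);
}
```

A listener added before an update fires receives every subsequent result until it is removed, which is why keeping the listener in a variable (as in the example above) matters for later removal.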
The YMDDate and DateAndTime objects

YMDDate: {
  y: 2021, // short for year
  q: 1, // short for quarter
  m: 2, // short for month
  d: 4, // short for day
  dow: 4, // short for dayOfWeek (output only)
  dowmi: 1, // dayOfWeekMonthIndex (output only)
  dy: 35, // short for dayOfYear
  n: 18662, // short for epochDay
  j: false, // short for isJulian
  year: 2021,
  quarter: 1, // quarter of the year 1-4
  month: 2,
  day: 4,
  dayOfWeek: 4, // Day of week as 0-6 for Sunday-Saturday (output only)
  dayOfWeekMonthIndex: 1, // Day of week month index, 1-5, e.g. 2 for 2nd Tuesday of the month (output only)
  dayOfYear: 35,
  epochDay: 18662, // days since January 1, 1970
  isJulian: false, // true if a Julian calendar date instead of a Gregorian date
  yw: 2021, // short for yearByWeek
  w: 5, // short for week
  dw: 4, // short for dayByWeek
  yearByWeek: 2021, // year that accompanies an ISO year/week/day-of-week style date
  week: 5, // week that accompanies an ISO year/week/day-of-week style date
  dayByWeek: 4, // day that accompanies an ISO year/week/day-of-week style date
  ywl: 2021, // short for yearByWeekLocale
  wl: 6, // short for weekLocale
  dwl: 5, // short for dayByWeekLocale
  yearByWeekLocale: 2021, // year that accompanies a locale-specific year/week/day-of-week style date
  weekLocale: 6, // week that accompanies a locale-specific year/week/day-of-week style date
  dayByWeekLocale: 5, // day that accompanies a locale-specific year/week/day-of-week style date
  error: 'Error description if applicable, otherwise undefined'
}

DateAndTime, which extends the YMDDate interface: {
  hrs: 0, // short for hour
  min: 18, // short for minute
  sec: 32, // short for second
  hour: 0,
  minute: 18,
  second: 32,
  millis: 125, // 0-999 milliseconds part of time
  utcOffset: -18000, // offset (in seconds) from UTC, negative west from 0°, including DST offset when applicable
  dstOffset: 0, // DST offset, in minutes - usually positive, but can be negative (output only)
  occurrence: 1, // usually 1, but can be 2 for the second occurrence of
  // the same wall clock time during a single day, caused by the clock being turned back for DST
  deltaTai: 37, // How much (in seconds) TAI exceeds UTC or UT1 at given moment in time (output only)
  /* In the well-defined range for UTC, deltaTai is always an integer value. Outside that range it can be a non-integer with millisecond precision. */
  jde: 2459249.722008264, // Julian days, ephemeris
  mjde: 59249.22200826416, // Modified Julian days, ephemeris
  jdu: 2459249.7212051502, // Julian days, UT
  mjdu: 59249.22120515024 // Modified Julian days, UT
}

When using a YMDDate or DateAndTime object to create a DateTime instance, you need only set a minimal number of fields to specify the date and/or time you intend. You can use either short or long names for fields (if you use both, the short form takes priority). At minimum, you must specify a date or a time. If you only specify a date, the time will be treated as midnight at the start of that date. If you only specify a time, you can create a special dateless time instance. You can also, of course, specify both date and time together.

In specifying a date, the date fields have the following priority:

n/ epochDay: Number of days before/after epoch day 0, which is January 1, 1970.
y/ year: A normal calendar year. Along with the year, you can specify:
- Nothing more, in which case the date is treated as January 1 of that year.
m/ month: The month (a normal 1-12 month, not the weird 0-11 month the JavaScript Date uses!).
- If nothing more is given, the date is treated as the first of the month.
d/ day: The date of the month.
dy/ dayOfYear: The 1-based number of days into the year, such that 32 means February 1.
yw/ yearByWeek: An ISO week-based calendar year, where each week starts on Monday. This year is the same as the normal calendar year for most of the calendar year, except for, possibly, a few days at the beginning and end of the year. Week 1 is the first week which contains January 4.
Along with this style of year, you can specify: - Nothing more, in which case the date is treated as the first day of the first week of the year. w/ week: The 1-based week number. - If nothing more, the date is treated as the first day of the given week. dw/ dayByWeek: The 1-based day of the given week. ywl/ yearByWeekLocale, etc.: These fields work the same as yw/ yearByWeek, etc., except that they apply to locale-specific rules for the day of the week on which each week starts, and for the definition of the first week of the year. In specifying a time, the minimum needed is a 0-23 value for hrs / hour. All other unspecified time fields will be treated as 0. Astronomical time fields will supersede any of the above date fields. As discussed earlier, concerning parsing time strings, ambiguous times due to Daylight Saving Time default to the earlier of two times. You can, however, use occurrence: 2 to explicitly specify the later time. An explicit utcOffset can also accomplish this disambiguation. Reading individual DateTime fields As an output from a DateTime instance, such as what you get from ttime().wallTime, all DateAndTime fields will be filled in with synchronized values. ttime().wallTime.hour provides the hour value, ttime().wallTime.utcOffset provides the UTC offset in seconds for the given time, etc. ttime().wallTimeShort returns a DateAndTime object with all available short-form field names, and ttime().wallTimeLong only long-form field names. ttime().wallTimeSparse returns a DateAndTime object with a minimal set of short-form field names: y, m, d, hrs, min, sec, millis. 
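The epochDay, dayOfYear, and ISO week fields described above can be checked with plain date arithmetic. The following is only a sketch of the underlying math, using the built-in Date object rather than @tubular/time, and valid only for Gregorian dates:

```javascript
const DAY_MS = 86400000;

// Days since 1970-01-01 (the n/epochDay field).
const epochDay = (y, m, d) => Date.UTC(y, m - 1, d) / DAY_MS;

// 1-based day of the year (the dy/dayOfYear field); 32 means February 1.
const dayOfYear = (y, m, d) => epochDay(y, m, d) - epochDay(y, 1, 1) + 1;

// ISO week date (the yw/w/dw fields): weeks run Monday-Sunday,
// and week 1 is the week containing January 4.
function isoWeekDate(y, m, d) {
  const e = epochDay(y, m, d);
  const dw = ((e + 3) % 7 + 7) % 7 + 1;  // ISO day of week, 1 = Monday
  const thursday = e - dw + 4;           // Thursday of the same ISO week
  const yw = new Date(thursday * DAY_MS).getUTCFullYear();
  const w = Math.floor((thursday - epochDay(yw, 1, 1)) / 7) + 1;
  return { yw, w, dw };
}
```

For the sample date used in the object above (2021-02-04), this yields epochDay 18662, dayOfYear 35, and ISO week date 2021-W05-4, matching the n, dy, yw, w, and dw fields shown.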
Modifying DateTime values

There are six main methods for modifying a DateTime value:

add(field: DateTimeField | DateTimeFieldName, amount: number, variableDays = true): DateTime
subtract(field: DateTimeField | DateTimeFieldName, amount: number, variableDays = true): DateTime
roll(field: DateTimeField | DateTimeFieldName, amount: number, minYear = 1900, maxYear = 2099)
set(field: DateTimeField | DateTimeFieldName, value: number, loose = false): DateTime
startOf(field: DateTimeField | DateTimeFieldName): DateTime
endOf(field: DateTimeField | DateTimeFieldName): DateTime

Before going further, it needs to be mentioned that DateTime instances can be either locked, and thus immutable, or unlocked. Instances generated using ttime(...) are locked. Instances created using the DateTime constructor (covered later in this document) are created unlocked, but can be locked after creation. When you use the add/subtract/roll/set methods on a locked instance, a new modified and locked instance is returned. When used on an unlocked instance, these methods modify that instance itself, and a reference to the modified instance is returned.

Using add (and subtract)

subtract() is nothing more than a convenience method which negates the amount being added, and then calls add(). The documentation that follows is in terms of the add() method alone, but applies, with this negation, to the subtract() method as well.

An example of using add():

ttime().add('year', 1) or ttime().add(DateTimeField.YEAR, 1)

The above produces a date one year later than the current time. In most cases, this means that the resulting date has the same month and date, but in the case of a leap day:

ttime('2024-02-29').add('year', 1).toIsoString(10) → 2025-02-28

...the date is pinned to 28 so that an invalid date is not created.
Similarly, when adding months, invalid dates are prevented:

ttime('2021-01-31').add(DateTimeField.MONTH, 1).toIsoString(10) → 2021-02-28

You can add using the following fields: MILLI, SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR, YEAR_WEEK, and YEAR_WEEK_LOCALE, as provided by the DateTimeField enum, or their string equivalents ('milli', 'millis', 'millisecond', 'milliseconds'... 'day', 'days', 'date', 'month', 'months', etc.). (There are further fields defined for dealing with leap seconds and TAI, described later.)

For fields MILLI through HOUR, fixed units of time, multiplied by the amount you pass, are applied. When dealing with months, quarters, and years, the variable lengths of months, quarters, and years apply. DAY amounts can be handled either way, as variable in length (due to possible effects of Daylight Saving Time), or as fixed units of 24 hours. The default for variableDays is true.

DST can alter the duration of days, typically adding or subtracting an hour, but other amounts of change are possible (like the half-hour shift used by Australia’s Lord Howe Island), so adding fixed-length days can possibly cause the hour (and even minute) fields to change:

ttime('2021-02-28T07:00 Europe/London').add('days', 100, false).toIsoString() → 2021-06-08T08:00:00.000+01:00 (note shift from 7:00 to 8:00)
ttime('2021-02-28T07:00 Australia/Lord_Howe').add('days', 100, false).toIsoString() → 2021-06-08T06:30:00.000+10:30 (note shift from 7:00 to 6:30)

By default, however, hour and minute fields remain unchanged.

ttime('2021-02-28T07:00 Australia/Lord_Howe').add('days', 100).toIsoString() → 2021-06-08T07:00:00.000+10:30

Even with the default behavior, however, it is still possible for hours and minutes to change, in just the same way adding one month to January 31 does not yield February 31. When clocks are turned forward, some times of day simply do not exist, so a result might have to be adjusted to a valid hour and minute in some cases.
ttime('2000-04-27T00:30 Africa/Cairo').add('day', 1).toString() → DateTime<2000-04-28T01:30:00.000 +03:00§> (clock turned forward at midnight to 1:00)

Using roll()

You can use the roll() method to roll, or “spin”, through values for each date/time field. This operation can be used, for example, in a user interface where you select a field and use up/down arrows to change the value, and the value changes in a wrap-around fashion, e.g. ...58 → 59 → 00 → 01..., etc. While seconds and minutes wrap at 59, hours at 23, and dates wrap at the length of the current month, there are no natural wrapping boundaries for years. The wrap-range defaults to 1900-2099, but you can pass optional arguments to change this range (this only affects rolling of years, not other time units).

You can roll using the following fields: MILLI, SECOND, MINUTE, HOUR, AM_PM, DAY, DAY_OF_WEEK, DAY_OF_WEEK_LOCALE, DAY_OF_YEAR, WEEK, WEEK_LOCALE, MONTH, YEAR, YEAR_WEEK, YEAR_WEEK_LOCALE, ERA.

For the purpose of the roll() method, AM_PM and ERA are treated as numeric values. AM and BC are 0, PM and AD are 1. If you roll by an odd number, the value will be changed. If you roll by an even value, the value will remain unchanged.

Examples of using roll():

ttime('1690-09-15').roll('month', 5).toIsoString(10) → 1690-02-15
ttime('1690-09-15').roll('era', 1).format('MMM D, y N') → Sep 15, 1690 BC
ttime('10:15').roll('ampm', 1).format('h:mm A') → 10:15 PM

Using set()

This method sets date/time fields to explicit values. In the default mode, you can only use valid values for each particular field. In the loose mode, some leeway is given, such as allowing the date to be set to 31 when the month is September (resulting in October 1), or allowing the month to be set to 0 (meaning December of the previous year) or 13 (January of the next year). Using these loose values means, of course, that other fields besides the one field being set might change.
You can set using the following fields: MILLI, SECOND, MINUTE, HOUR, AM_PM, DAY, DAY_OF_WEEK, DAY_OF_WEEK_LOCALE, DAY_OF_YEAR, WEEK, WEEK_LOCALE, MONTH, YEAR, YEAR_WEEK, YEAR_WEEK_LOCALE, ERA.

Examples of using set():

ttime('1690-09-15').set('month', 5).toIsoString(10) → 1690-05-15
ttime('1690-09-15').set('month', 13, true).toIsoString(10) → 1691-01-15

There is a corresponding get() method which returns the numeric value of a field, or undefined if the field does not exist.

Using startOf() and endOf()

These functions transform a DateTime to the beginning or end of a given unit of time.

ttime('2300-05-05T04:08:10.909').startOf(DateTimeField.MINUTE).toIsoString(23) → 2300-05-05T04:08:00.000
ttime('2300-05-05T04:08:10.909').startOf('hour').toIsoString(23) → 2300-05-05T04:00:00.000
ttime('2300-05-05T04:08:10.909').startOf(DateTimeField.WEEK).format(ttime.WEEK_AND_DAY) → 2300-W18-1
ttime('2300-05-05T04:08:10.909').startOf('year').toIsoString(23) → 2300-01-01T00:00:00.000
ttime('2300-05-05T04:08:10.909').endOf('day').toIsoString(23) → 2300-05-05T23:59:59.999
ttime('2300-05-05T04:08:10.909').endOf(DateTimeField.MONTH).toIsoString(23) → 2300-05-31T23:59:59.999

Time value

In milliseconds: ttime().utcMillis
In seconds: ttime().utcSeconds
As a native JavaScript Date object: ttime().toDate()

Timezone offsets from UTC

Offset from UTC for a given DateTime in seconds, negative for timezones west of the Prime Meridian, including any change due to Daylight Saving Time when applicable: ttime().utcOffsetSeconds

Offset from UTC for a given DateTime in minutes: ttime().utcOffsetMinutes

Change in seconds from a timezone’s standard UTC offset due to Daylight Saving Time. This will be 0 when DST is not in effect, or always 0 if DST is never in effect.
While usually a positive number, some timezones (like Europe/Dublin) employ negative DST during the winter: ttime().dstOffsetSeconds

Change in minutes from a timezone’s standard UTC offset due to Daylight Saving Time: ttime().dstOffsetMinutes

Returns true when a moment in time is during DST, false otherwise: ttime().isDST()

Validation

When an invalid DateTime instance is created, the valid property returns false, and the error property (which is otherwise undefined) returns a description of the error.

ttime('1234-56-78').valid → false
ttime('1234-56-78').error → 'Invalid month: 56'

If you prefer that an exception be thrown, you can do this: ttime('1234-56-78').throwIfInvalid()

If a DateTime is valid, throwIfInvalid() returns that instance, so you can use the result as the DateTime itself.

Parsing of dates and times specified as strings is somewhat loose. When no format string is provided, dates are parsed according to ISO-8601, with leniency about leading zeros when delimiters are used. Pseudo-months 0 and 13 are accepted, as are days of the month from 0 to 32, regardless of the length of a given month. Years can be in the range -271820 to 275759.

When parsing using a format string, especially formats where months are numeric, not textual, strict matching of delimiters is not required. For example, even where proper localized output formatting is done with dots, input using dashes instead of dots is acceptable:

ttime('2021-02-08', null, 'de').format('IS') → 08.02.21
ttime('08.02.21', 'IS', 'de').format('IS') → 08.02.21
ttime('008-2-21', 'IS', 'de').format('IS') → 08.02.21

Except in compact, delimiter-free ISO formats like 20210208, leading zeros are never required. Extra, unexpected leading zeros are generally ignored, although an ISO date month should have no more than two digits, and when a two-digit year is expected, a 3-digit year such as 021 will be treated as 21 AD, not 2021. Future releases may offer options for stricter parsing.
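To illustrate the kind of validity checking and date coercion the library provides, here is a Gregorian-only sketch in plain JavaScript. It is not the library's implementation, which additionally handles the Julian calendar and switch-over dates:

```javascript
// Last day of a 1-based month, via the Date rollover trick:
// day 0 of the next month is the last day of this month.
function daysInMonth(year, month) {
  return new Date(Date.UTC(year, month, 0)).getUTCDate();
}

// Valid only for purely Gregorian dates (years >= 100, to avoid the
// two-digit-year quirk of Date.UTC).
function isValidDate(year, month, day) {
  return month >= 1 && month <= 12 && day >= 1 && day <= daysInMonth(year, month);
}

// Coerce an out-of-range date into a valid one, e.g. September 31 → October 1.
function normalizeDate(year, month, day) {
  const t = new Date(Date.UTC(year, month - 1, day));
  return { y: t.getUTCFullYear(), m: t.getUTCMonth() + 1, d: t.getUTCDate() };
}
```

With this sketch, isValidDate(1234, 56, 78) is false (matching the invalid-month example above), and normalizeDate(1690, 9, 31) yields { y: 1690, m: 10, d: 1 }.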
There are also functions for checking if a year, month, and day-of-month together constitute a valid date. The following functions can be imported/required from '@tubular/time', or, in a browser script, found on the tbTime global:

function isValidDate_SGC(yearOrDate: YearOrDate, month?: number, day?: number): boolean;
function isValidDateGregorian(yearOrDate: YearOrDate, month?: number, day?: number): boolean;
function isValidDateJulian(yearOrDate: YearOrDate, month?: number, day?: number): boolean;

The yearOrDate argument can be just a number for the year (in which case month and day should also be provided), a YMDDate object, or a [year, month, day] numeric array.

There is also this method, available on instances of Calendar or DateTime, which determines the validity of a date according to that instance’s Julian/Gregorian switch-over:

isValidDate(year: number, month: number, day: number): boolean;
isValidDate(yearOrDate: YMDDate | number[]): boolean;

A related method takes a possibly invalid date and coerces it into a valid date, such as turning September 31 into October 1.

normalizeDate(year: number, month: number, day: number): YMDDate;
normalizeDate(yearOrDate: YMDDate | number[]): YMDDate;

Comparison and sorting

You can test whether moments in time expressed as DateTime instances are before, after, or the same as each other. By default, this comparison is exact to the millisecond. You can, however, pass an optional unit of time for the resolution of the comparison.
ttime('2020-08-31').isBefore('2020-09-01') → true
ttime('2020-08-31').isBefore('2020-09-01', 'year') → false
ttime('2020-08-31').isSameOrBefore('2020-08-03') → false
ttime('2020-08-31').isSameOrBefore('2020-08-03', 'month') → true
ttime('2020-08-31 07:45').isAfter('2020-08-31 07:43') → true
ttime('2020-08-31 07:45').isAfter('2020-08-31 07:43', 'hour') → true

The full list of functions for these comparisons is as follows: isBefore, isSameOrBefore, isSame, isSameOrAfter, isAfter.

You can also check if a DateTime instance is chronologically, non-inclusively between two other DateTime instances:

ttime().isBetween('1776-06-04', '1809-02-12') → false

There are two general comparison methods which, when comparing two DateTime instances, return a negative number if the first is less than the second, 0 if the two are equal at the given resolution, or a positive number if the first instance is greater than the second. This is the style of comparison function that works with JavaScript sort.

ttime().compare('1776-06-04') → 7721460952408
ttime().compare(ttime(), 'minute') → 0
ttime().compare('3776-06-04') → -55392442920503
DateTime.compare(ttime('1776-06-04'), ttime('1809-02-12')) → -1031616000000

Sorting an array of dates

ttime.sort(dates: DateTime[], descending = false): DateTime[]

This sort modifies the array which is passed in, and returns that same array.

min/max functions

ttime.min(...dates: DateTime[]): DateTime
ttime.max(...dates: DateTime[]): DateTime

Monthly calendar generation

The DateTime method getCalendarMonth() returns an array of YMDDate objects, the zeroth date object being on the locale-specific first day of the week (possibly from the preceding month), with a multiple-of-7 length of dates to represent a full month. As an example (filtered down to just the day-of-month for visual clarity):

ttime().getCalendarMonth().map(date => date.m === ttime.FEBRUARY ?
date.d : '-') → [ '-', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, '-', '-', '-', '-', '-', '-' ]

For the above example, the current date was February 11, 2021, so the calendar was generated for that month. The locale was 'en-us', so each week starts on Sunday.

Dealing with weird months

The utility of the getCalendarMonth() method is more evident when viewing the calendar generated for October 1582, when (by default) the Julian calendar ends, and the Gregorian calendar begins:

ttime('1582-10-01', null, 'fr').getCalendarMonth().map(date => date.d) → [ 1, 2, 3, 4, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 ]

By using the locale 'fr', the calendar generated above starts on Monday instead of Sunday. Notice how the 4th of the month is immediately followed by the 15th.

One of the last switches to the Gregorian calendar was enacted by Russia in 1918. The month of February didn’t even start with the 1st, but started on the 14th:

new DateTime('1918-02', null, '1918-02-14').getCalendarMonth(1).map(date => date.m === ttime.FEBRUARY ? date.d : '-') → [ '-', '-', '-', 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, '-', '-', '-' ]

Given such examples, here are some things to consider which might defy ordinary expectations about how calendar months work:

- A month does not necessarily start on the 1st.
- A month might be missing days in the middle.
- Because of the previous possibilities, the last numeric date of the month (in the above example, 28) is not necessarily the same thing as the number of days in the month (in the example above, only 15 days).
- There are timezone changes which eliminate both a day-of-the-month number and a day of the week.

The getCalendarMonth() method shows all of these effects together, but there are additional functions to examine each issue separately.
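For an ordinary, fully Gregorian month, the shape of a getCalendarMonth() result can be sketched in plain JavaScript. This is a simplified, Sunday-first illustration only; unlike the library, it ignores locale-specific week starts, Julian/Gregorian switch-over, and dropped days:

```javascript
// Build a Sunday-first calendar grid for a 1-based month, padding cells
// outside the month with '-', as in the filtered example above.
function calendarMonth(year, month) {
  const first = new Date(Date.UTC(year, month - 1, 1));
  const leadingBlanks = first.getUTCDay();                  // 0 = Sunday
  const days = new Date(Date.UTC(year, month, 0)).getUTCDate();
  const cells = [];

  for (let i = 0; i < leadingBlanks; ++i) cells.push('-');
  for (let d = 1; d <= days; ++d) cells.push(d);
  while (cells.length % 7 !== 0) cells.push('-');           // multiple-of-7 length

  return cells;
}
```

calendarMonth(2021, 2) produces one leading '-' (for Sunday, January 31), the numbers 1 through 28, and six trailing '-' cells, matching the February 2021 example shown earlier.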
These methods are available on both the DateTime class, and the Calendar class. Arguments which are optional when using the DateTime class are required when using the Calendar class, because instances of the Calendar class have no internal year, month, or day values available as defaults.

Methods

Total number of days in a month, as affected by leap years and Julian/Gregorian switch-over. If a day is missing due to a timezone issue, that day is still counted as a day in the month, albeit a special 0-length day:

getDaysInMonth(year?: number, month?: number): number;

The range of dates excluded due to Julian/Gregorian switch-over only. If no days are excluded, the result is null. If days are excluded, a two-element array is returned. result[0] is the first day dropped, result[1] is the last day dropped:

getMissingDateRange(year?: number, month?: number): number[] | null;

The first date in a month. Usually 1, of course, but possibly different, as in the previous example for Russia, February 1918:

getFirstDateInMonth(year?: number, month?: number): number;

The last date in a month. Usually 28, 29, 30, or 31. This method is provided mainly because this number can be different from the getDaysInMonth() value:

getLastDateInMonth(year?: number, month?: number): number;

Another way to drop a day

In December 2011, the nation of Samoa jumped over the International Dateline (or, since no major tectonic shifts occurred, perhaps it’s better to say the International Dateline jumped over Samoa). The Pacific/Apia timezone was changed from UTC-10:00 to UTC+14:00. As a result, Friday, December 30, 2011 did not exist for Samoans. Thursday was followed immediately by Saturday, a type of discontinuity that doesn’t happen with days dropped by switching from the Julian to the Gregorian calendar. @tubular/time handles this situation by treating that skipped-over Friday as a day that exists, but one that is 0 seconds long.
The getCalendarMonth() method makes this 0-length status apparent by rendering the day-of-the-month for that day as a negative number.

new DateTime('2011-12', 'Pacific/Apia').getCalendarMonth().map(date => date.m === ttime.DECEMBER ? date.d : '-') → [ '-', '-', '-', '-', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, -30, 31 ]

Here’s what that month looks like, as rendered by the drop-down date picker at skyviewcafe.com.

The following section about weird days provides another method for detecting days like the missing Friday above.

Dealing with weird days

A day is, of course, usually 24 hours long, which is also 1440 minutes, or 86400 seconds. Two things can change the length of a day, however:

- Daylight Saving Time rules, which typically subtract or add one hour, but DST changes are not always one hour.
- Changes in a timezone’s base offset from UTC, such as the Samoa example above.

The biggest of these changes have been due to timezones switching back and forth over the International Dateline, resulting in days as short as 0 hours, and days as long as 47 hours (1969-09-30, Pacific/Kwajalein).

These two methods tell you how long a particular day is, in either seconds or minutes:

getSecondsInDay(yearOrDate?: YearOrDate, month?: number, day?: number): number;
getMinutesInDay(yearOrDate?: YearOrDate, month?: number, day?: number): number;

It is possible, but highly unlikely (no timezone is currently defined this way, or is ever likely to be), for getSecondsInDay() to return a non-integer value. getMinutesInDay() is always rounded to the nearest integer minute. Despite this rounding, the value will nearly always be precisely correct anyway. Except for a few late 19th century/early 20th century timezone changes away from local mean time, UTC offset changes are otherwise in whole minutes, typically whole hours, with most fractional hour changes being in multiples of 15 minutes.
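The arithmetic behind day lengths is simple once a day's starting and ending UTC offsets are known. Here is a sketch in plain JavaScript; the offsets in the examples are approximate historical values supplied by hand, not looked up from the tz database as the library would do:

```javascript
// A wall-clock day from 00:00 to 24:00 lasts 86400 seconds, minus however
// much the UTC offset (in seconds east of UTC) increases during the day.
function secondsInDay(utcOffsetAtStart, utcOffsetAtEnd) {
  return 86400 - (utcOffsetAtEnd - utcOffsetAtStart);
}

secondsInDay(-18000, -14400); // → 82800: a 23-hour "spring forward" day (-05:00 to -04:00)
secondsInDay(-36000, 50400);  // → 0: Samoa's dropped 2011-12-30 (-10:00 to +14:00)
secondsInDay(39600, -43200);  // → 169200: Kwajalein's 47-hour 1969-09-30 (+11:00 to -12:00)
```

The Kwajalein case shows why getMinutesInDay() can exceed 1440 so dramatically: a westward jump across the International Dateline stretches a single calendar day by nearly a full extra day.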
This next method provides a description of any discontinuity in time during a day caused by Daylight Saving Time or other changes in UTC offset. It provides the wall-clock time when a clock change starts, the number of milliseconds applied to that time to turn the clock either forward or backward, and the ending wall-clock time. The notation “24:00:00” refers to midnight of the next day. If there is no discontinuity, as with most days, the method returns null:

getDiscontinuityDuringDay(yearOrDate?: YearOrDate, month?: number, day?: number): Discontinuity | null;

Typical Daylight Saving Time examples

new DateTime('2021-03-14', 'America/New_York').getDiscontinuityDuringDay() → { start: '02:00:00', end: '03:00:00', delta: 3600000 } // spring forward!
new DateTime('2021-07-01', 'America/New_York').getDiscontinuityDuringDay() → null
new DateTime('2021-11-07', 'America/New_York').getDiscontinuityDuringDay() → { start: '02:00:00', end: '01:00:00', delta: -3600000 } // fall back!

Examples of big UTC offset shifts due to moving the International Dateline

// As soon as it’s midnight on the 30th, it’s instantly midnight on the 31st, erasing 24 hours:
new DateTime('2011-12-30', 'Pacific/Apia').getDiscontinuityDuringDay() → { start: '00:00:00', end: '24:00:00', delta: 86400000 }

// As soon as it’s midnight at the end of the 30th, turn back to 1AM on the 30th, adding 23 hours to the day:
new DateTime('1969-09-30', 'Pacific/Kwajalein').getDiscontinuityDuringDay() → { start: '24:00:00', end: '01:00:00', delta: -82800000 }

Here’s a skyviewcafe.com image for that extra-long day in the Marshall Islands, with two sunrises, two sunsets, and 24 hours, 6 minutes worth of daylight packed into a 47-hour day.

Leap Seconds, TAI, and Julian Dates

UTC (Universal Coordinated Time) is not a uniform timescale.
It is currently defined to track closely with another time standard, UT1, which is based on the slightly variable, and (in the long run) slowly lengthening rotation time of the Earth. Each single second of UTC is equal to one standard, atomically-defined second, but whole seconds are occasionally inserted (and, theoretically, might occasionally be deleted) to keep UTC within 0.9s of UT1. These seconds are called leap seconds. The current system for UTC was adopted in 1970 and implemented in 1972. UTC is only strictly defined relative to TAI starting in 1972 and going forward in time until the next announced omission or addition of a leap second. You can, however, create a @tubular/time DateTime instance using UTC while also using year values in the distant past or future. So how are dates and times outside the well-defined range of UTC handled? The answer is that DateTime uses both extended UTC and UT1 outside the well-defined UTC range. This works as follows: - For all dates prior to 1957, estimated UT1 is in effect. This is most accurate back to 1600, for which there is sufficient astronomical data for reasonable approximate conversions from UT1 to TAI and dynamical time. Further back in time, less accurate approximations are in effect. - From 1957 to 1958, using a sliding weighted average, UT1 transitions to proleptic UTC. - From 1958 to 1972, proleptic UTC, as proposed by Tony Finch, is used, with the first non-official leap second occurring at 1959-06-30 23:59:60. - From 1972 up until the latest updates provided by the International Bureau of Weights and Measures, well-defined UTC prevails, with the first official leap second occurring at 1972-06-30 23:59:60. - For a year to 18 months after the current time, or after the last defined leap second, whichever is later, a presumed leap-second-free span of UTC is projected to occur. - A sliding weighted average transition from UTC to estimated UT1 follows for the next 365 days.
- Formulaic predicted UT1 is used for all later dates and times. Note: It is possible (no sooner than 2023) that the use of leap seconds might be abandoned, depending on the results of the World Radiocommunication Conference that year. One possible outcome is that UTC will become locked to TAI, and allowed to drift further and further out of synchronization with UT1. All timezones other than TAI, ZONELESS, and DATELESS, such as Europe/London or Asia/Tokyo, are handled the same way as described for UTC above — simply at varying timezone offsets from UTC. Leap second time values DateTime instances generally behave as if leap seconds do not exist. DateTime instances which express leap seconds can be created as follows: - By being directly parsed: new DateTime('1972-06-30 23:59:60Z') → "DateTime<1972-07-01T00:00:10.000 TAI>" Note that this only works for defined leap seconds. new DateTime('2021-04-15 23:59:60Z'), not a valid leap second, is treated as 2021-04-16 00:00:00Z. - From TAI, Julian date, or modified Julian date values.
For example: new DateTime('1972-07-01T00:00:10 TAI', 'UTC').toString() → "DateTime<1972-06-30T23:59:60.000 +00:00>" new DateTime({ jde: 2450630.5007242477 }, 'UTC').toIsoString(19) → "1997-06-30T23:59:60" - By add/subtract operations using TAI quantities: new DateTime('2016-12-31 18:59:59 EST').add('seconds_tai', 1).toString() → "DateTime<2016-12-31T18:59:60.000 -05:00>" - Using the set operation (this only works if the result is considered a valid leap second): new DateTime('2016-12-31 18:59:59 EST').set('second', 60).toString() → "DateTime<2016-12-31T18:59:60.000 -05:00>" - Using the setUtcMillis method, with the optional second leapSecondMillis argument: new DateTime('utc').setUtcMillis(252460799999, 701).toString() → "DateTime<1977-12-31T23:59:60.700 +00:00>" TAI field values You can add and subtract TAI quantities using the following fields: MILLI_TAI, SECOND_TAI, MINUTE_TAI, HOUR_TAI, and DAY_TAI, as provided by the DateTimeField enum, or their string equivalents ('milli_tai', 'millis_tai', 'millisecond_tai', 'milliseconds_tai', 'second_tai'... etc.). Converting from UT/UTC to TAI, and back again @tubular/time DateTime instances maintain UT/UTC time using integer millisecond values (sometimes along with an ancillary integer count of milliseconds during leap seconds). Starting with version 3.8.0 of @tubular/time, DateTime TAI time values, likewise measured in milliseconds, can be integer or non-integer values. This difference is because integer TAI values, as previously used, do not have a unique one-to-one correspondence with UT/UTC integer values. Without fractional precision for TAI, a UT/UTC value converted to TAI, then converted back to UT/UTC, would not reliably be restored to its original value.
Over the range of time starting from January 1, 1958, up until roughly six months beyond present realtime, time is maintained specifically as UTC (not UT1 or UT1/UTC transitional) and UTC integer milliseconds are reliably converted to TAI integer milliseconds. Astronomical DateAndTime fields jde: Julian Date (ephemeris) — Time measured in fixed-length days of dynamical time from noon, January 1, 4713 BCE (-4712-01-01T12:00) Terrestrial Dynamical Time (TDT), defined to be exactly 32.184 seconds ahead of TAI. mjde: Modified Julian Date (ephemeris) — Same as Julian Date (ephemeris) minus 2400000.5, moving time 0 to midnight, November 17, 1858 (1858-11-17T00:00). jdu: Julian Date (UT) — Time measured in variable-length days of earth rotation time from mean solar noon on the Prime Meridian, January 1, 4713 BCE (-4712-01-01T12:00). mjdu: Modified Julian Date (UT) — Same as Julian Date (UT) minus 2400000.5, moving time 0 to mean solar midnight, November 17, 1858 (1858-11-17T00:00). epochMillis, utcMillis, and taiMillis getters/setters The epochMillis getter/setter returns, or allows you to modify, the fundamental core value for a DateTime instance. For a TAI instance, epochMillis is the same as taiMillis, with utcMillis providing a conversion to or from UTC (or UT1 outside the well-defined UTC range). For a non-TAI instance epochMillis is the same as utcMillis, with taiMillis performing conversions. During a leap second the epochMillis/utcMillis value is pinned 59 seconds, 999 milliseconds into the minute in which the leap second occurs. The taiMillis value, however, still varies over the course of that second. In the unlikely event a negative leap second is ever declared, the epochMillis/utcMillis value for a non-TAI DateTime instance will simply skip over the leap second, while taiMillis advances contiguously. epochSeconds, utcSeconds, and taiSeconds are essentially the same as epochMillis, utcMillis, and taiMillis, but functioning at one-second resolution.
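The pinning behavior described above can be illustrated with a toy conversion. The leap-second table below is a hypothetical two-entry stand-in for the real list (the genuine data covers every declared leap second), and the function is a sketch of the idea, not the library’s implementation:

```typescript
// TAI−UTC (in seconds) in force *after* each positive leap second; utcMillis
// marks the start of the minute containing the inserted second.
// Hypothetical, illustration-only table holding just the first two official
// leap seconds (1972-06-30 and 1972-12-31).
interface LeapEntry { utcMillis: number; deltaAfter: number; }

const LEAPS: LeapEntry[] = [
  { utcMillis: Date.UTC(1972, 5, 30, 23, 59), deltaAfter: 11 },
  { utcMillis: Date.UTC(1972, 11, 31, 23, 59), deltaAfter: 12 },
];

// During a positive leap second, utcMillis stays pinned at :59.999 of the
// minute; leapSecondMillis (1-1000) carries progress through the inserted second.
function utcToTaiMillis(utcMillis: number, leapSecondMillis = 0): number {
  let delta = 10; // TAI−UTC at the start of well-defined UTC, 1972-01-01

  for (const leap of LEAPS) {
    if (utcMillis >= leap.utcMillis + 60000) // past the end of the leap minute?
      delta = leap.deltaAfter;
  }

  return utcMillis + delta * 1000 + leapSecondMillis;
}

// 1972-07-01T00:00:00 UTC, just after the first leap second → TAI is 11s ahead
utcToTaiMillis(Date.UTC(1972, 6, 1)) - Date.UTC(1972, 6, 1); // → 11000
// 700 ms into the 1972-06-30 leap second itself → 1972-07-01T00:00:10.700 TAI
utcToTaiMillis(Date.UTC(1972, 5, 30, 23, 59, 59, 999), 701) - Date.UTC(1972, 6, 1); // → 10700
```

Note how the pinned UTC value plus leapSecondMillis reproduces the continuously advancing TAI value, matching the "DateTime<1977-12-31T23:59:60.700 +00:00>" example shown earlier.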
Sorting and comparison with TAI and non-TAI DateTime instances - TAI instances are compared to each other by taiMillis. - Non-TAI instances are compared to each other by utcMillis, but if the utcMillis values are identical, comparison is done using leapSecondMillis. - Mixed types are compared by taiMillis. - Coarse-resolution comparison (e.g. only comparing to a resolution of whole seconds or whole days) between mixed TAI and non-TAI instances is not well-defined and should be avoided. Global default settings The next two methods get or set the first year of the one hundred-year range that will be used to interpret two-digit year numbers. The “default default” is 1970, meaning that 00-69 will be treated as 2000-2069, and 70-99 will be treated as 1970-1999: ttime.getDefaultCenturyBase(): number; ttime.setDefaultCenturyBase(newBase: number): void; Get/set the default locale (or prioritized array of locales). This defaults to the value provided either by a web browser or the Node.js environment: ttime.getDefaultLocale(): string | string[]; ttime.setDefaultLocale(newLocale: string | string[]): void; Get/set the default timezone. The “default default” (if you don’t use setDefaultTimezone()) is: - The default timezone provided by the Intl package, if available. - The timezone determined by the Timezone.guess() function. - The OS timezone, a special @tubular/time timezone created by probing the JavaScript Date class to determine the rules of the unnamed JavaScript-supported local timezone. ttime.getDefaultTimezone(): Timezone; ttime.setDefaultTimezone(newZone: Timezone | string): void; The DateTime class The main ttime() function works by creating instances of the DateTime class. You can also use new DateTime(...) to create instances of DateTime directly. This is necessary for taking advantage of support for variable switch-over from the Julian to the Gregorian calendar, which by default is set at October 15, 1582.
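The practical difference between the Julian and Gregorian calendars comes down to their leap-year rules. A self-contained sketch of those two rules (for illustration only; DateTime applies them automatically, based on the switch-over date):

```typescript
// Julian calendar: every year divisible by 4 is a leap year.
function isJulianLeapYear(year: number): boolean {
  return year % 4 === 0;
}

// Gregorian calendar: the same, except century years must be divisible by 400.
function isGregorianLeapYear(year: number): boolean {
  return year % 4 === 0 && (year % 100 !== 0 || year % 400 === 0);
}

isJulianLeapYear(1900);    // → true
isGregorianLeapYear(1900); // → false (this difference is why the calendars drift apart)
isGregorianLeapYear(2000); // → true
```

The roughly three-days-per-four-centuries disagreement between these rules is what made the 1582 (and later, per-country) calendar switch-overs necessary.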
Constructor constructor(initialTime?: DateTimeArg, timezone?: Timezone | string | null, gregorianChange?: GregorianChange); constructor(initialTime?: DateTimeArg, timezone?: Timezone | string | null, locale?: string | string[], gregorianChange?: GregorianChange); All arguments to the constructor are optional. When passed no arguments, new DateTime() will return an instance for the current moment, in the default timezone, default locale, and with the default October 15, 1582 Gregorian calendar switch-over. initialTime: This can be a single number (for milliseconds since 1970-01-01T00:00 UTC), an ISO-8601 date string, an ECMA-262 date string, an ASP.NET JSON date string, a JavaScript Date object, a DateAndTime object, an array of numbers (in the order year, month, day, hour, etc.), or null, which causes the current time to be used. timezone: This can be a Timezone instance, a string specifying an IANA timezone (e.g. 'Pacific/Honolulu'), a UTC offset (e.g. 'UTC+04:00'), or null to use the default timezone. locale: a locale string (e.g. 'fr-FR'), an array of locale strings in order of preference (e.g. ['fr-FR', 'fr-CA', 'en-US']), or null to use the default locale. gregorianChange: The first date when the Gregorian calendar is active, the string 'J' for a pure Julian calendar, the string 'G' for a pure Gregorian calendar, the constant ttime.PURE_JULIAN, the constant ttime.PURE_GREGORIAN, or null for the default of 1582-10-15. A date can take the form of a year-month-day ISO-8601 date string (e.g. '1752-09-14'), a year-month-day numeric array (e.g. [1918, 2, 14]), or a date as a YMDDate object. As a string, initialTime can also include a trailing timezone or UTC offset, using the letter Z to indicate UTC (e.g. '1969‑07‑20T20:17Z'), or a specific timezone (e.g. '1969‑07‑20T16:17 EDT', '1969‑07‑20T16:17 America/New_York', or '1969‑07‑20T16:17-0400').
If the timezone argument is itself null or unspecified, this embedded timezone will become the timezone for the DateTime instance. If the timezone argument is also provided, the time will be parsed according to the embedded timezone, then transformed to the timezone given by the argument. new DateTime('2022-06-01 14:30 America/Chicago', 'Europe/Paris', 'fr-FR').format('IMM ZZZ') → 1 juin 2022 à 21:30:00 Europe/Paris The following is an example of using the gregorianChange parameter to apply the change from the Julian to Gregorian calendar that was used by Great Britain, including what were the American colonies at the time: new DateTime('1752-09', null, '1752-09-14').getCalendarMonth(1).map(date => date.m === ttime.SEPTEMBER ? date.d : '-') → [ '-', 1, 2, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, '-' ] Locking and cloning The lock() method takes a mutable DateTime instance and makes it immutable, returning that same instance. This is a one-way trip. Once locked, an instance cannot be unlocked. The clone() method creates a copy of a DateTime instance. By default, the copy is either locked or unlocked, the same as the original. You can, however, use clone(false) to create an unlocked, mutable copy of a locked original. DateTime astronomical time functions Converts milliseconds from the 1970-01-01T00:00 UTC epoch into Julian days: DateTime.julianDay(millis: number): number; Converts Julian days into milliseconds from the 1970-01-01T00:00 UTC epoch: DateTime.millisFromJulianDay(jd: number): number; Given a year, month, day according to the standard Gregorian calendar change (SGC) of 1582-10-15, and optional hour, minute, and second UTC, returns a Julian day number.
DateTime.julianDay_SGC(year: number, month: number, day: number, hour = 0, minute = 0, second = 0): number; DateTime static constant static INVALID_DATE; DateTime static methods Compares two DateTime instances, or a DateTime instance and another date form, returning a negative value when the first date is less than the second, 0 when the two are equal (for the given resolution), or a positive value when the first date is greater than the second: static compare(d1: DateTime, d2: DateTime | string | number | Date, resolution: DateTimeField | DateTimeFieldName = DateTimeField.FULL): number; Determine if a value is an instance of the DateTime class: static isDateTime(obj: any): obj is DateTime; // boolean DateTime getters dstOffsetMinutes: number; dstOffsetSeconds: number; error: string | undefined; // Explanation of why a DateTime is considered invalid, undefined if valid. leapSecondMillis: number; // Number of milliseconds into a leap second (normally 0) // 'DATETIME' is the usual type, but a DateTime instance can be DATELESS (time-only) // or an abstract date/time with no real-world timezone. type: 'ZONELESS' | 'DATELESS' | 'DATETIME'; utcOffsetMinutes: number; utcOffsetSeconds: number; valid: boolean; wallTimeLong: DateAndTime; wallTimeShort: DateAndTime; DateTime getter/setters locale: string | string[]; epochMillis: number; epochSeconds: number; taiMillis: number; taiSeconds: number; timezone: Timezone; utcMillis: number; // utcTimeMillis has been deprecated utcSeconds: number; // utcTimeSeconds has been deprecated wallTime: DateAndTime; Other DateTime methods computeUtcMillisFromWallTime(wallTime: DateAndTime): number; format(fmt = fullIsoFormat, localeOverride?: string | string[]): string; // For questions like “What date is the second Tuesday of this month?” // `dayOfTheWeek` 0-6 for Sun-Sat, index is 1-based.
You can use the constant `ttime.LAST` // for `index` to get the last occurrence of a particular day of the month, be it the 4th or 5th // (or even earlier, as in some unusual Julian-Gregorian transition months). getDateOfNthWeekdayOfMonth(year: number, month: number, dayOfTheWeek: number, index: number): number; getDateOfNthWeekdayOfMonth(dayOfTheWeek: number, index: number): number; // Number of days from 1970-01-01 getDayNumber(yearOrDate: YearOrDate, month?: number, day?: number); // Day of the week for date, 0-6 for Sun-Sat getDayOfWeek(): number; getDayOfWeek(year: number, month: number, day: number): number; getDayOfWeek(date: YMDDate | number[]): number; // As number[]: [year, month, day] // How many times does a given day of the week (0-6 for Sun-Sat) occur during this month? getDayOfWeekInMonthCount(year: number, month: number, dayOfTheWeek: number): number; getDayOfWeekInMonthCount(dayOfTheWeek: number): number; // Is the date the 1st, 2nd, 3rd, 4th, or 5th occurrence of its day of the week // during the given month? getDayOfWeekInMonthIndex(year: number, month: number, day: number): number; getDayOfWeekInMonthIndex(date: YMDDate | number[]): number; getDayOfWeekInMonthIndex(): number; // Returns the date (day-of-the-month only) of the first occurrence of a given day // of the week on or after a given date. For example, election day in the // United States is the first Tuesday on or after November 2, so election day // in 2024 is getDayOnOrAfter(2024, 11, ttime.TUESDAY, 2). getDayOnOrAfter(year: number, month: number, dayOfTheWeek: number, minDate: number): number; getDayOnOrAfter(dayOfTheWeek: number, minDate: number): number; // Returns the date (day-of-the-month only) of the first occurrence of a given day // of the week on or before a given date. getDayOnOrBefore(year: number, month: number, dayOfTheWeek: number, maxDate: number): number; getDayOnOrBefore(dayOfTheWeek: number, maxDate: number): number; // Number of days in a given year.
Typically 365 or 366, but it can be smaller // for years when days were dropped in the transition from the Julian to the // Gregorian calendar. getDaysInYear(year?: number): number; // Returned date is an arbitrary distant future for a pure Julian calendar, distant past // for pure Gregorian, otherwise the first-used Gregorian date. getGregorianChange(): YMDDate; // This method is for finding the date of the first day of the first week of a week-based // calendar, which can be a few days before or after January 1, depending on how weeks // are defined. For ISO weeks, this date is the Monday at the beginning of a week which // contains January 4, e.g. getStartDateOfFirstWeekOfYear(year, 1, 4). getStartDateOfFirstWeekOfYear(year: number, startingDayOfWeek?: number, minDaysInCalendarYear?: number): YMDDate; // UTC millisecond value at the start of a given day, per the `DateTime` instance’s // timezone and calendar rules. getStartOfDayMillis(yearOrDate?: YearOrDate, month?: number, day?: number): number; // Convert a UTC millisecond value into a `DateAndTime` object, per the `DateTime` instance’s // timezone and calendar rules. getTimeOfDayFieldsFromMillis(millis: number): DateAndTime; // Display short name for `DateTime` instance’s timezone, such as "EDT" or "PST". getTimezoneDisplayName(): string; // Typically 52 or 53, the number of weeks in a week-based year, ISO by default. getWeeksInYear(year: number, startingDayOfWeek = 1, minDaysInCalendarYear = 4): number; // Typically 52 or 53, the number of weeks in a locale-specific week-based year. getWeeksInYearLocale(year: number): number; // For a given standard calendar date, return the week-based year, week number, and day // number, according to startingDayOfWeek and minDaysInCalendarYear, defaulting to ISO.
getYearWeekAndWeekday(year: number, month: number, day: number, startingDayOfWeek?: number, minDaysInCalendarYear?: number): number[]; getYearWeekAndWeekday(date: YearOrDate | number, startingDayOfWeek?: number, minDaysInCalendarYear?: number): number[]; // For a given standard calendar date, return the locale-specific week-based year, // week number, and day. getYearWeekAndWeekdayLocale(year: number, month: number, day: number): number[]; getYearWeekAndWeekdayLocale(date: YearOrDate | number): number[]; // Check if a given date is before this DateTime’s switch to the Gregorian calendar. isJulianCalendarDate(yearOrDate: YearOrDate, month?: number, day?: number): boolean; // `true` if the given year is a leap year, according to this `DateTime` instance's // calendar rules. For example: // // new DateTime(null, null, 'G').isLeapYear(1900) → false // new DateTime(null, null, 'J').isLeapYear(1900) → true isLeapYear(year?: number): boolean; isPureGregorian(): boolean; isPureJulian(): boolean; // Is the DateTime instance TAI? isTai(): boolean; // Is the DateTime instance UTC, or a timezone offset from UTC? // (Anything other than TAI, DATELESS, and ZONELESS.) isUtcBased(): boolean; // Sets the first date when the Gregorian calendar starts. Pass 'J' as the first argument to get // a perpetual Julian calendar, or 'G' for always-Gregorian (extending even before the Gregorian // calendar existed; the fancy word for that is a "proleptic" Gregorian calendar). You can // also pass a string date (e.g. '1752-09-14'), a numeric array (e.g. [1752, 9, 14]), or a YMDDate // object. If you pass a numeric year alone for the first argument, include two more arguments // for the month and date as well. setGregorianChange(gcYearOrDate: YearOrDate | string, gcMonth?: number, gcDate?: number): DateTime; // If pureGregorian is true, calendar becomes pure, proleptic Gregorian. If false, standard change date of 1582-10-15 is applied.
setPureGregorian(pureGregorian: boolean): DateTime; // If pureJulian is true, calendar becomes pure Julian. If false, standard change date of 1582-10-15 is applied. setPureJulian(pureJulian: boolean): DateTime; // Throws an exception if the `DateTime` is invalid, otherwise returns the instance itself. throwIfInvalid(): DateTime; // Convert DateTime to a JavaScript Date. toDate(): Date; // Format as hour and minute, using the format 'HH:mm', or 'HH:mmv' if includeDst is true. toHoursAndMinutesString(includeDst = false): string; // Format as 'Y-MM-DDTHH:mm:ss.SSSZ', trimming to an optional maxLength that *does not* count // any leading + or - sign. If maxLength is negative, remove that many characters from the // end of the full string. Base length for positive years <= 9999 is 24 characters. toIsoString(maxLength?: number): string; // Create a clone of a DateTime instance with a different locale. toLocale(newLocale: string | string[]): DateTime; // Convert to a string, such as 'DateTime<2017-03-02T14:45:00.000 +01:00>'. // When dateless: 'DateTime<20:17:15.000>' // When zoneless: 'DateTime<2017-03-02T14:45:00.000>' // When TAI: 'DateTime<2017-03-02T14:45:00.000 TAI>' toString(): string; // Format as 'Y-MM-DD HH:mmv'. toYMDhmString(): string; The Calendar class This is the superclass of the DateTime class. It stores no internal date value, however. It merely implements calendar calculations, and holds the Julian/Gregorian change date to be used for the calendar. Most of the purely date-related methods of DateTime exist on Calendar. Calendar does not have any methods that refer to an internal date value (such as add, roll, etc.), formatting, locale, or timezone.
The constructor takes the same arguments as the setGregorianChange() method: constructor(gcYearOrDateOrType?: YearOrDate | CalendarType | string, gcMonth?: number, gcDate?: number); This Calendar method adds a given number of days to a date: addDaysToDate(deltaDays: number, yearOrDate: YearOrDate, month?: number, day?: number): YMDDate The Timezone class Static Timezone constants static OS_ZONE: Timezone; // Local timezone as derived from analyzing values returned by JavaScript `Date`. static TAI_ZONE: Timezone; // International Atomic Time (TAI). static UT_ZONE: Timezone; // Universal Coordinated Time (AKA UTC, UCT, GMT, Zulu Time, etc.) static ZONELESS: Timezone; // A pseudo timezone for abstract date/time instances. static DATELESS: Timezone; // A pseudo timezone for abstract dateless, time-only `DateTime` instances. Static Timezone getter version: string; // Current timezone version, e.g. 2021a LeapSecondInfo interface This defines the moment immediately after the insertion or deletion of a leap second. export interface LeapSecondInfo { utcMillis: number; taiMillis: number; dateAfter: YMDDate; deltaTai: number; isNegative: boolean; // Optional flag indicating if a specific moment in time is during a leap second. // When a query is a UTC value, this flag is true for the 59th second of a minute. inLeap?: boolean; // For an unlikely, but theoretically possible, negative leap second, this optional flag // is true if a query time is in the 58th second of a minute, preceding an omitted 59th // second. 
inNegativeLeap?: boolean; } Static Timezone methods Check if a given IANA zoneName is associated with an ISO Alpha-2 (two-letter) country code: static doesZoneMatchCountry(zoneName: string, country: string): boolean; Find the officially-defined, or proleptic, difference in seconds between TAI and UTC at the given TAI time. Find the officially-defined, or proleptic, difference in seconds between TAI and UTC at the given UTC time. Take a duration, offsetSeconds, and turn it into a formatted UTC offset, e.g. -18000 → '-05:00'. If noColons is set to true (it defaults to false if not specified), colons will be omitted from the output, e.g. '-0500'. If the duration is not in whole minutes, seconds will be added to the output, e.g. '+15:02:19': static formatUtcOffset(offsetSeconds: number, noColons = false): string; Return a timezone matching name, if available. If no such timezone exists, a clone of Timezone.OS_ZONE is returned, but with the given name, and with result.error containing an error message. name can be "DATELESS", "TAI", or "ZONELESS", as well as an IANA timezone name, or common name like "UTC" or "GMT": static from(name: string): Timezone; Get all timezone names which can be treated as aliases for the given zone name. All equivalent timezones are treated as aliases for each other by this method, with no particular regard given to which zone name is the actual root name as opposed to being a link. static getAliasesForZone(zone: string): string[] This method returns a full list of available IANA timezone names. Does not include names for the above static constants: static getAvailableTimezones(): string[]; Get a Set of ISO Alpha-2 (two-letter) country codes associated with a given IANA zoneName: static getCountries(zoneName: string): Set<string>; The last known, declared leap second.
This can be null if no leap seconds are declared in the current timezone data: static getDateAfterLastKnownLeapSecond(): YMDDate Get the symbol ( ^, §, #, ❄, or ~) @tubular/time associates with various Daylight Saving Time offsets, or an empty string for dstOffsetSeconds of 0: static getDstSymbol(dstOffsetSeconds: number): string; Get the full list of leap seconds, including 10 non-official, proleptic leap seconds defined from 1959 to 1971, and all officially declared leap seconds thereafter (up to the latest software update). This can be null if no leap seconds are declared in the current timezone data: static getLeapSecondList(): LeapSecondInfo[]; This method returns a list of available IANA timezone names in a structured form, grouped by standard UTC offset and Daylight Saving Time offset (if any), e.g. +02:00, -05:00§, etc. The “MISC” timezones, and the various IANA “Etc” timezones, are filtered out: export interface OffsetsAndZones { offset: string; offsetSeconds: number; dstOffset: number; zones: string[]; } static getOffsetsAndZones(): OffsetsAndZones[] Get a rough estimate, if applicable and available, for the population of an IANA zoneName, otherwise 0: static getPopulation(zoneName: string): number; This method returns a full list of available IANA timezone names in a structured form, grouped by regions (e.g. “Africa”, “America”, “Etc”, “Europe”, etc.). The large “America” region is broken down into three regions, “America”, “America/Argentina”, and “America/Indiana”. There is also a “MISC” region that contains a number of redundant, deprecated, or legacy timezones, such as many single-name-no-slash timezones and SystemV timezones: export interface RegionAndSubzones { region: string; subzones: string[]; } static getRegionsAndSubzones(): RegionAndSubzones[]; If a shortName such as 'PST' or 'EET' is available, return information about that timezone, or undefined if not available. 
Please keep in mind that some short timezone names are ambiguous, so you might not get the desired result: export interface ShortZoneNameInfo { utcOffset: number; dstOffset: number; ianaName: string; } static getShortZoneNameInfo(shortName: string): ShortZoneNameInfo; Return a timezone matching name, if available. If no such timezone exists, a clone of Timezone.OS_ZONE is returned, but using the given name, and with result.error containing an error message. If the name 'LMT' (for Local Mean Time) is used, then include the optional longitude in degrees (negative west of the Prime Meridian), and a timezone matching Local Mean Time for that longitude will be returned, with a UTC offset at a resolution of one (time) minute (as opposed to angular minutes): static getTimezone(name: string, longitude?: number): Timezone The same as getDateAfterLastKnownLeapSecond(), but returns null if that date is in the past: static getUpcomingLeapSecond(): YMDDate; This method returns the name of the IANA timezone that best matches your local timezone. If the Intl package is available, it’s not a guess at all, but a proper system-reported value. Otherwise, the guess() method finds the most populous timezone that most closely matches OS_ZONE. If recheck is true, a fresh check is forced instead of using a cached result: static guess(recheck = false): string; Check if there is a timezone matching name: static has(name: string): boolean; Check if a shortName for a timezone, such as 'PST' or 'EET', is available: static hasShortName(name: string): boolean; Timezone getters aliasFor: string | undefined; // undefined for a primary timezone name countries: Set<string>; // ISO Alpha-2 country codes, empty set if no associated countries dstOffset: number; // in seconds dstRule: string | undefined; // undefined, or textual representation of last start-of-DST rule in effect.
error: string | undefined; // undefined if no error population: number; // 0 if inapplicable or unknown stdRule: string | undefined; // undefined, or textual representation of last end-of-DST rule in effect. usesDst: boolean; utcOffset: number; // in seconds zoneName: string; Timezone methods For a given utcTime, find the most recent change in the timezone, on or before utcTime. The change can be a DST “spring forward” or “fall back” change, a change in the standard UTC offset, or even just a change in the short-form name of the timezone: export interface Transition { transitionTime: number; // in milliseconds utcOffset: number; // in seconds dstOffset: number; // in seconds name?: string; deltaOffset?: number; // in seconds, compared to previous transition utcOffset dstFlipped?: boolean; // true if dstOffset has changed from 0 to non-0, or from non-0 to 0, since the previous transition baseOffsetChanged?: boolean; wallTime?: number; // in milliseconds wallTimeDay?: number; } findTransitionByUtc(utcTime: number): Transition | null; For a given wallTime, expressed in milliseconds, find the most recent change in the timezone, on or before wallTime. For an ambiguous wall time, the later time applies: findTransitionByWallTime(wallTime: number): Transition | null; Get all transitions in a timezone. Returns null for simple, single-UTC-offset timezones which have no transitions. getAllTransitions(): Transition[] | null; Get the short-form name for the timezone, dependent upon utcTime in milliseconds, such as the America/New_York timezone returning 'EST' during the winter, but 'EDT' during the summer. getDisplayName(utcTime: number); Return the formatted UTC offset for a given moment in time, specified in milliseconds, formatted as per Timezone.formatUtcOffset: getFormattedOffset(utcTime: number, noColons = false): string; Return the UTC offset in seconds (with effect of DST, if applicable, included) for the timezone at utcTime in milliseconds.
The day parameter is usually not needed, but for the edge case of a 0-length day, passing the wall-time day value helps distinguish between the two overlapping midnights of one day and the instantaneous next day: getOffset(utcTime: number, day = 0): number; For a given wallTime, in milliseconds for this timezone, return the UTC offset in seconds at that time. Where wall time is ambiguous, wallTime refers to the later time: getOffsetForWallTime(wallTime: number): number; For a given utcTime, in milliseconds, return the UTC offset in seconds, and DST offset in seconds, at that time for this timezone. The UTC offset includes the effect of the DST offset, if any. The result is a two-element numeric array, [utcOffset, dstOffset]: getOffsets(utcTime: number): number[]; Check if the given utcTime, in milliseconds, is during Daylight Saving Time for this timezone: isDuringDst(utcTime: number): boolean; Check if this timezone explicitly supports the given country, specified as a two-letter ISO Alpha-2 code: supportsCountry(country: string): boolean; Other functions available on ttime Get the minimum number of days within a given calendar year needed for a week to be considered part of a locale’s week-based calendar for that year: ttime.getMinDaysInWeek(locale: string | string[]): number; Day number (0-6 for Sunday-Saturday) considered the first day of a week for a locale: ttime.getStartOfWeek(locale: string | string[]): number; Day numbers (0-6 for Sunday-Saturday) considered to comprise weekend days for a locale: ttime.getWeekend(locale: string | string[]): number[]; Determine if a value is an instance of the Date class: ttime.isDate(obj: any): obj is Date; // boolean Determine if a value is an instance of the DateTime class: ttime.isDateTime(obj: any): obj is DateTime; // boolean Converts milliseconds from the 1970-01-01T00:00 UTC epoch into Julian days: ttime.julianDay(millis: number): number; Converts Julian days into milliseconds from the 1970-01-01T00:00 UTC epoch:
ttime.millisFromJulianDay(jd: number): number;

Given a year, month, and day according to the standard Gregorian calendar change (SGC) of 1582-10-15, and an optional hour, minute, and second UTC, returns a Julian day number:

ttime.julianDay_SGC(year: number, month: number, day: number, hour = 0, minute = 0, second = 0): number;

For a given TDT Julian Date (ephemeris time), return the number of seconds that TDT is ahead of Universal Time (UT1):

ttime.getDeltaTAtJulianDate(timeJDE: number): number;

For a given TAI millisecond value (1970 epoch), return the corresponding UT1 or UTC milliseconds:

ttime.taiToUtMillis(millis: number, forUtc = false): number;

For a given TDT Julian Date (ephemeris time), return the Julian Date in Universal Time (UT1):

ttime.tdtToUt(timeJDE: number): number;

For a given UT1 or UTC millisecond value (1970 epoch), return the corresponding TAI milliseconds:

ttime.utToTaiMillis(millis: number, asUtc = false): number;

For a given UT1 Julian Date (Universal Time), return the Julian Date in ephemeris time (TDT):

ttime.utToTdt(timeJDU: number): number;

Create new Intl.DateTimeFormat instances with more flexibility for mixing options. For instance:

new DateTimeFormat('ja', { timeStyle: 'short' })

...shows hours with no leading zero for single-digit hours. If you try to add the leading zero like this:

new DateTimeFormat('ja', { timeStyle: 'short', hour: '2-digit' })

...an exception is thrown. By using newDateTimeFormat, however, @tubular/time will attempt to override dateStyle and timeStyle options with specific variations which are otherwise disallowed:
ttime.newDateTimeFormat(locales?: string | string[], options?: DateTimeFormatOptions)

Constants available on ttime

// Locale
ttime.defaultLocale;

// Feature flags
ttime.hasDateTimeStyle: boolean;
ttime.hasIntlDateTime: boolean;

// Formats
ttime.DATETIME_LOCAL: string;
ttime.DATETIME_LOCAL_SECONDS: string;
ttime.DATETIME_LOCAL_MS: string;
ttime.DATE: string;
ttime.TIME: string;
ttime.TIME_SECONDS: string;
ttime.TIME_MS: string;
ttime.WEEK: string;
ttime.WEEK_AND_DAY: string;
ttime.WEEK_LOCALE: string;
ttime.WEEK_AND_DAY_LOCALE: string;
ttime.MONTH: string;

// Calendar
ttime.PURE_JULIAN: CalendarType;
ttime.PURE_GREGORIAN: CalendarType;

ttime.SUNDAY: number;
ttime.MONDAY: number;
ttime.TUESDAY: number;
ttime.WEDNESDAY: number;
ttime.THURSDAY: number;
ttime.FRIDAY: number;
ttime.SATURDAY: number;

ttime.JANUARY: number;
ttime.FEBRUARY: number;
ttime.MARCH: number;
ttime.APRIL: number;
ttime.MAY: number;
ttime.JUNE: number;
ttime.JULY: number;
ttime.AUGUST: number;
ttime.SEPTEMBER: number;
ttime.OCTOBER: number;
ttime.NOVEMBER: number;
ttime.DECEMBER: number;

ttime.LAST: number; // For use with getDateOfNthWeekdayOfMonth()
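The Julian day conversions described above follow well-known calendar math. Below is a minimal, independent sketch of the same two ideas in Python: the Fliegel-Van Flandern algorithm for converting a Gregorian calendar date to a Julian day number, and the fixed relationship between Julian days and the 1970 epoch (JD 2440587.5 is 1970-01-01T00:00 UTC). This is an illustration of the math only, not @tubular/time's implementation:

```python
def julian_day_number(year, month, day):
    # Fliegel-Van Flandern: Gregorian calendar date -> Julian day number
    # (the integer JDN labels the day beginning at the preceding noon UTC)
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

def millis_from_julian_day(jd):
    # JD 2440587.5 corresponds to 1970-01-01T00:00 UTC, the epoch used here
    return (jd - 2440587.5) * 86400000

print(julian_day_number(2000, 1, 1))      # → 2451545
print(julian_day_number(1970, 1, 1))      # → 2440588
print(millis_from_julian_day(2440587.5))  # → 0.0
```

The integer-division form keeps the whole computation exact, which is why Julian day numbers are a convenient intermediate for calendar arithmetic.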
https://www.npmjs.com/package/@tubular/time
Problem code: POINTS

You are given a set of points in the 2D plane. You start at the point with the least X and greatest Y value, and end at the point with the greatest X and least Y value. The rule for movement is that you cannot move to a point with a lesser X value as compared to the X value of the point you are on. Also, for points having the same X value, you need to visit the point with the greatest Y value before visiting the next point with the same X value. So, if there are 2 points, (0,4) and (4,0), we would start with (0,4) - i.e. least X takes precedence over greatest Y. You need to visit every point in the plane.

You will be given an integer t (1<=t<=20) representing the number of test cases. A new line follows; after which the t test cases are given. Each test case starts with a blank line followed by an integer n (2<=n<=100000), which represents the number of points to follow. This is followed by a new line. Then follow the n points, each being a pair of integers separated by a single space, followed by a new line. The X and Y coordinates of each point will be between 0 and 10000, both inclusive.

For each test case, print the total distance traveled by you from start to finish, keeping in mind the rules mentioned above, correct to 2 decimal places. The result for each test case must be on a new line.

Input:
3

2
0 0
0 1

3
0 0
1 1
2 2

4
0 0
1 10
1 5
2 2

Output:
1.00
2.83
18.21

For the third test case above, the following is the path you must take:
0,0 -> 1,10
1,10 -> 1,5
1,5 -> 2,2
= 18.21

The java.util.Comparator interface helps a lot here... i don't have to implement any sorting algorithm. sort() works just fine in C C (cpp, that is - why did my double '+' get stripped?)

The debugger is stating the answer comes out to be correct. But here on submitting it says wrong answer. Unable to understand what is wrong. In order to optimize i tried using float, but the thing works with double.

@pratik vakil float has lesser precision than double.
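The intended traversal described above (sort by ascending X, breaking ties by descending Y, then sum the Euclidean distances between consecutive points) can be sketched in a few lines of Python. This is an illustrative outline of the approach, not any particular contestant's submission:

```python
import math

def tour_length(points):
    # Visit order: ascending x; for equal x, greatest y first
    ordered = sorted(points, key=lambda p: (p[0], -p[1]))
    # Sum Euclidean distances between consecutive points on the path
    return sum(math.dist(a, b) for a, b in zip(ordered, ordered[1:]))

# The three sample test cases from the problem statement
print('%.2f' % tour_length([(0, 0), (0, 1)]))                   # → 1.00
print('%.2f' % tour_length([(0, 0), (1, 1), (2, 2)]))           # → 2.83
print('%.2f' % tour_length([(0, 0), (1, 10), (1, 5), (2, 2)]))  # → 18.21
```

With n up to 100000, the O(n log n) sort dominates; the O(n^2) selection loops discussed later in this thread are what cause the TLE verdicts.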
there is a number format exception that is thrown on one of the 'test case' integers. i had to wrap the parsing of the 'test case' individually to get a successful run. i guess i should be wrapping them all in try-catch clauses individually anyway. i've gotten lazy... still, it is strange that the test data has an error in it. especially since the format instructions were very specific.

stuck! unable to generate a test case for which my submission will fail :( admin or moderator.... plz provide some hint.

Try some more :)

suppose the total distance is 1.256 What should be the output? 1.25 or 1.26?

1.26.

sorry for the last post.... it should be 1.26 ....evident frm the second sample test case.....

i m giving up.... i have tried for all the test cases i can think of.... and none of them fail :(

how can I round upto 2 decimal points?

double t = 45.493847;
int x = t*100;
t = x / 100.0;
printf("%.2lf", t);

Something like this?

this is my code :

#include<iostream>
#include<math.h>
using namespace std;

int main()
{
    char c;
    int n, i, *x, *y, j, t, k, z;
    float s;
    cin >> t;
    for ( z = 0 ; z < t ; z++ )
    {
        i = 0; k = 0;
        cin >> n;
        x = new int[n+1];
        y = new int[n+1];
        for ( i = 0 ; i < n ; i++ )
            cin >> x[i] >> y[i];
        s = 0;
        for ( j = 0 ; j < n - 1 ; j++ )
            s = s + sqrt( pow((y[j+1]-y[j]),2) + pow((x[j+1]-x[j]),2) );
        printf("%.2f\n", s);
        delete []x;
        delete []y;
    }
}

can anyone tell the error? its showing wrong answer.
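On the rounding sub-thread above: the int-cast trick truncates rather than rounds, so it can produce output that differs from the required "1.256 → 1.26" behavior, while printf-style %.2f formatting already rounds to the nearest hundredth. A small Python illustration (2.8359375 is chosen because it is exactly representable in binary floating point, so the comparison is not muddied by representation error):

```python
t = 2.8359375  # exactly representable in binary floating point

# The int-cast trick: multiplies, truncates toward zero, divides back
truncated = int(t * 100) / 100
print('%.2f' % truncated)  # → 2.83

# Plain %.2f formatting rounds to nearest, which is what the judge expects
print('%.2f' % t)          # → 2.84
```

The same distinction holds for C's printf("%.2f", ...) versus an intermediate (int) cast.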
Java version on my laptop:
___________________________
java version "1.6.0_14"
Java(TM) SE Runtime Environment (build 1.6.0_14-b08)

I'm using Ubuntu 9.04. Could you pls tell me if I can personally send you my src file, so that you can compile and run it, & let me know if I'm making some critical mistake as per Directi program evaluation norms.. :) u know how it makes me feel, when it's all correct and still it appears not.

Well, if you are getting a wrong answer, then your code is not correct and is producing the wrong result for some of the test cases. :)

Dear Admin, I am getting TLE every time. My answers seem to be correct. I have tried using merge sort and then again insertion sort, but all times I am getting TLE. Can u provide any hint what sorting algo to apply??

it keeps saying run time error. i think its the array size of 10000 which i am using giving the problem. admin, can you tell if i am correct?

@Ravi Looks like you solved it... mind answering your own question for us? I had the same question; I think the problem needs to be worded more clearly. When read literally, I think it would produce 28 for your problem, but I don't understand the motivation for designing the problem that way unless it is solely to try and confuse people about the rules.

if input is (1,10),(2,19),(1,6),(2,20),(1,8),(1,18), wat ll be the visiting order...? is this correct: (1,10),(1,8),(1,6),(2,20),(2,19),(1,18)..?

You have to start with (1,18). Read the problem statement carefully.

can I declare a 2-D array "int xy[RANGE][RANGE]" for each test case? I generate input and it seems to work fine on my laptop, but chef gives a runtime error.

No, 10000*10000 ints is about 380mb, which is far too much memory.

Hi, I have defined a number of reasonably small test cases to run and all succeeded. I then set up a test case generator to comply with codechef specifications.
The number of tests is user defined at the command line, but all other parameters are generated by random number selection conforming with the directions set forth. The output from this program becomes the input for the POINTS program. Randomly selecting x number of tests within the test file and solving those by hand, I could ensure the answers are correct. They were. I am still, after 5 runs, getting a runtime error of NZEC. Since I am using Python, the types of errors to produce this result are finite. I think they possibly reflect an error in one of the test cases. I would like to know if some error was built into the test cases. If this is correct, I think you have violated your own standards. What I have always liked best about CodeChef is that the directions for each problem are exceedingly well defined. We know the inputs and we know exactly what criteria govern their creation. Therefore, if an error has been deliberately introduced, this is a violation of your standards and should certainly not be used under the assumption that a 'good programmer' knows to code for invalid data. That is true, but in a controlled environment such as codechef or SPOJ this is an invalidated exercise, and either the test data should be recreated and firmly tested or the problem should be dropped. Again, if I am in error, I apologize for my rant and would like to submit to you both my solution and my test deck, because suddenly this has become a most unusual and perplexing situation. Thank you, Robert

The problem has been solved by 133 people, and there is nothing wrong with the input. If you are getting a runtime error then something is wrong with your solution, rather than codechef 'violating their standards'.

I'm storing the sum in a long double, so I'm loath to think that I'm losing precision there. Still, I'm coming up with WA. What a puzzle.

I've solved the problem, but it is giving me wrong answer. It might be some case that i'm omitting some of the test cases.
Can anybody suggest me any test case? Some important one which can exhaustively be applicable for the whole problem. Thanks in advance...!!!

Here's a testcase generator that might help. It helped me find a bug in my program. Good luck!

#!/usr/bin/python
import sys, random
testcases, points = 20, 100000
print testcases
for i in range(testcases):
    print ''
    print points
    for point in xrange(points):
        print random.randint(0, 10000), random.randint(0, 10000)

oops, sorry, bad formatting, here goes again:

#!/usr/bin/python
import sys, random
testcases, points = 20, 100000
print testcases
for i in range(testcases):
    print ''
    print points
    for point in xrange(points):
        print random.randint(0, 10000), random.randint(0, 10000)

For those who are having problems and are getting Wrong Answer, are you compiling in Windows? I was, and I got 3 Wrong Answers on this even though all my program seemed to be correct. Compile on Linux (I use Ubuntu) and it might show you what's wrong with your program; at least it did for me.

I am getting Runtime Error while submitting my Java code. The error code as sent in the email to me is NZEC. From... link, I came to know that it can be because of some unhandled exception thrown by a Java program. I have tried testing my program with boundary conditions on my own. I am unable to somehow crash my program. Can I get the exception class??? or any kind of hint???

ok i got the issue.... just a case to take into consideration.... there can be same set of points more than once in the input :)

I'm using the Comparable interface in java to sort my points. However, my result turns out to be Time limit exceeded... I was wondering if anyone had successfully used this approach or whether I should try another more efficient sorting algorithm, because I see no other way of optimizing my program.

Scanners are very very very slow. Never use them.

Hi everybody, In my C program, I am using math.h for sqrt.
So when i am compiling it in my system, I need to use the following command, "gcc -lm test.c", only then the program runs. However when I am submitting here, I am getting "wrong answer". Is it because I am using math.h that I am getting the "wrong answer"? If so, what do you suggest me to use for the sqrt function? Regards, Vignesh

No, your code is not giving a compiler error. Wrong answer means you are printing out the wrong thing, which means there is a test case that you are giving the wrong answer for. Not that the sqrt function isn't working.

i don't understand the problem. for the last case, why can't i go directly to 2,2 from 0,0 ... the problem statement says only u can't go to a point having a lesser value of X ... so i guess i can go anywhere except that???

oops didnt read last line :)

missed return statement in main...!!! and wasted time.. where do i get more information on this

i checked with the above test cases, program is running fine and also tested it on sample test cases generated by the python script given above (for sorting). still i am getting wrong answer :( , any hints ?? or some corner cases that i may be missing

You may have another bug, but floats aren't very precise; they are only accurate to about 7 significant digits. Adding 100000 of them will erode that accuracy further; you should really always be using doubles for accuracy.
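The single-versus-double precision point above can be seen directly. The sketch below (Python, using struct to round-trip values through 32-bit floats) shows that a 32-bit float cannot even represent 16777217 = 2**24 + 1 exactly, which is why roughly seven significant digits is all you get, while a 64-bit double keeps the distinction:

```python
import struct

def to_f32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_f32(16777216.0))        # → 16777216.0 (2**24 fits in a float exactly)
print(to_f32(16777217.0))        # → 16777216.0 (2**24 + 1 does not!)
print(to_f32(0.1) == 0.1)        # → False: 0.1 picks up ~1.5e-9 of error in 32 bits
print(16777217.0 == 16777216.0)  # → False: a 64-bit double keeps them distinct
```

Every addition in a float accumulator rounds to this coarser grid, which is how a long sum of distances can drift in the second decimal place and turn an otherwise-correct solution into a WA.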
got it, was using float instead of double :(

plz provide some hint at where i m goin wrong. i m getting runtime error. thanks

i am getting time limit exceeded.... even though i hav used scanf and printf... plz help. here is my code:

#include <iostream>
#include <stdlib.h>
#include <stdio.h>
#include <vector>
#include <algorithm>
#include <math.h>
using namespace std;

int main()
{
    int cases, points, final;
    vector<int> x, chkx;
    vector<int> y, chky, v;
    int x1, y1, k, max, size, t = 0;
    vector<int> answerx[100];
    vector<int> answery[100];
    scanf("%d", &cases);
    for (int i = 0; i < cases; i++) {
        scanf("%d", &points);
        for (int j = 0; j < points; j++) {
            scanf("%d", &x1);
            scanf("%d", &y1);
            chkx.push_back(x1);
            x.push_back(x1);
            y.push_back(y1);
        }
        int flag = 0;
        sort(chkx.begin(), chkx.end());
        for (int flag = 0; flag < points; flag++) {
            for (int j = 0; j < chkx.size(); j++) {
                if (chkx[0] == x[j]) {
                    v.push_back(j);
                }
            }
            k = 0;
            max = 0;
            size = v.size();
            if (v.size() > 1) {
                while (size != 0) {
                    if (y[v[k]] > max) {
                        max = y[v[k]];
                        final = v[k];
                    }
                    k++;
                    size--;
                }
            }
            else if (v.size() == 1) {
                final = v[0];
            }
            else {}
            answerx[i].push_back(x[final]);
            answery[i].push_back(y[final]);
            v.erase(v.begin(), v.end());
            x.erase(x.begin() + final);
            y.erase(y.begin() + final);
            chkx.erase(chkx.begin());
        }
        //t++;
    }
    double ans;
    for (int i = 0; i < cases; i++) {
        ans = 0.00;
        for (int j = 0; j < answerx[i].size() - 1; j++) {
            int xx1, yy1, yy2, xx2;
            xx1 = answerx[i][j];
            yy1 = answery[i][j];
            xx2 = answerx[i][j+1];
            yy2 = answery[i][j+1];
            ans = ans + sqrt((xx1-xx2)*(xx1-xx2) + (yy1-yy2)*(yy1-yy2));
        }
        printf("%.2f\n", ans);
    }
    system("pause");
    return 0;
}

Will the input consist of all of the points sorted, or if i need to sort them in my solution, i would have to do it explicitly?

The problem statement doesn't say anything about the order the points can appear in, so you shouldn't assume anything.

Could anybody help me out: I am getting a runtime error(other), and despite reading the FAQ, I am unable to figure out where I am going wrong. Link to my code is: It has been bugging me. Would be really grateful if someone could help.
Have you tried a test case with 100000 points? Have you read the section called 'Other common mistakes' in the FAQ? (The answer to both is no ;))

@Stephen: I did try out my code on a test case generator of 100000 points. Seemed to work fine. Can you please help a little more?

As mentioned in the FAQ, a new line is \r\n, which you aren't handling properly. I may have been mistaken on the first point, let me look again.

@stephen: Oh no, I am reading 30000 characters at a time. I guess my problem is with the newline character. I am fixing it.

@stephen: any suggestion as to how to handle \r\n? I meant how to handle the newline character?

can anyone tell me what's the problem here... it's giving wrong answer!!! but i couldn't find any problem... please help.

#include<stdio.h>
#include<math.h>

long x[100010], y[100010];
long i, j, k, t, n, p, q, r, temp1, temp2;
double sum;

int main() {
    //freopen("input.txt","r",stdin);
    scanf("%ld", &t);
    for (p = 0; p < t; p++) {
        scanf("%ld", &n);
        for (q = 0; q < n; q++) {
            scanf("%ld", &x[q]);
            scanf("%ld", &y[q]);
        }
        for (i = 0; i < n; i++) {
            for (j = i; j < n; j++) {
                if (x[i] > y[j]) {
                    temp1 = x[i]; x[i] = x[j]; x[j] = temp1;
                    temp2 = y[i]; y[i] = y[j]; y[j] = temp2;
                }
                else if (x[i] == y[j]) {
                    temp1 = y[i]; y[j] = temp1;
                    temp2 = x[i]; x[j] = temp2;
                }
            }
        }
        sum = 0;
        for (i = 0; i < n-1; i++) {
            sum = sum + sqrt(pow((double(x[i+1]) - double(x[i])), 2) + pow((double(y[i+1]) - double(y[i])), 2));
        }
        printf("%.2lf\n", sum);
    }
    return 0;
}

i have no other choices but to post my code. plz have a look.... i have tried lots of test cases... but my answer is incorrect.. plz help

#define p 100001
long long int a[p][2], u = -1, v = -1, n;
void quicksort(long long int, long long int);

int main()
{
    int t, i;
    long long int j, p1, q, r, s1, k;
    double s;
    scanf("%d", &t);
    for (i = 1; i <= t; i++)
    {
        k = 0;
        s = 0.0;
        scanf("%lld", &n);
        for (j = 0; j < n-k; )
        {
            scanf("%lld", &r);
            scanf("%lld", &s1);
            if ((r != 0) || (s1 != 0))
            {
                a[j][0] = r;
                a[j][1] = s1;
                j = j + 1;
            }
            else if ((r == 0) && (s1 == 0))
            {
                u = r;
                v = s1;
                k = k + 1;
            }
        }
        n = n - k;
        quicksort(0, n-1);
        for (j = 0; j < (n-1); j = j+1)
        {
            p1 = a[j][0] - a[j+1][0];
            q = a[j][1] - a[j+1][1];
            s = s + sqrt(pow(p1, 2) + pow(q, 2));
        }
        if ((u == 0) && (v == 0))
            s = s + sqrt(pow(a[0][0], 2) + pow(a[0][1], 2));
        printf("%.2f\n", s);
    }
    return (0);
}

void quicksort(long long int left, long long int right)
{
    long long int pivot, i, j;
    long long int temp;
    if (left < right)
    {
        i = left;
        j = right + 1;
        pivot = a[left][0];
        do
        {
            do {
                i = i + 1;
            } while (a[i][0] < pivot);
            do {
                j = j - 1;
            } while (a[j][0] > pivot);
            if (i < j)
            {
                temp = a[i][0]; a[i][0] = a[j][0]; a[j][0] = temp;
                temp = a[i][1]; a[i][1] = a[j][1]; a[j][1] = temp;
            }
        } while (i < j);
        temp = a[left][0]; a[left][0] = a[j][0];
        temp = a[left][1]; a[left][1] = a[j][1];
        quicksort(left, j-1);
        quicksort(j+1, right);
        for (j = 0; j < right; j++)
            if (a[j][0] == a[j+1][0])
                if (a[j][1] < a[j+1][1])
                {
                    temp = a[j][1]; a[j][1] = a[j+1][1]; a[j+1][1] = temp;
                }
    }
}

frnds, can anybody help me out on this problem? its clearly written that You start at the point with the least X and greatest Y value, and end at the point with the greatest X and least Y value. and for the last case the answer turns out to be the following order (0,10)->(1,5)->(1,2)->(2,0), for which the answer turns out to be 10.33

hi, i have used mergesort for sorting and i am TLE. @people who hav submitted code... which algo u used for sorting?? i used quicksort too..

guys, is blank line and new line the same? i mean '\n'? tell me :D :(

Internal Error Occurred in the system... Do any1 know why this error comes up???

Hi, can we have 3 values of Y for the same value of X?

Points can be anywhere they want.

well guys i have done my code quite simply & shortly using struct... everythings are ok except the time limit!
i m damn dead for knowing what the problem really is... please help somebody! here goes my code:

#include<stdio.h>
#include<math.h>

typedef struct {
    int x;
    int y;
} point;

point points[10000];

void sort(int n)
{
    int temp;
    for (int i = (n-1); i >= 1; i--) {
        for (int j = 0; j <= (i-1); j++) {
            if (points[j].x > points[j+1].x) {
                temp = points[j].x; points[j].x = points[j+1].x; points[j+1].x = temp;
                temp = points[j].y; points[j].y = points[j+1].y; points[j+1].y = temp;
            }
            else if (points[j].x == points[j+1].x) {
                if (points[j].y < points[j+1].y) {
                    temp = points[j].y; points[j].y = points[j+1].y; points[j+1].y = temp;
                }
            }
        }
    }
}

void countDistance(int n)
{
    double result = 0.0;
    for (int i = 0; i < (n-1); i++) {
        result += sqrt((points[i+1].x - points[i].x) * (points[i+1].x - points[i].x)
                     + (points[i+1].y - points[i].y) * (points[i+1].y - points[i].y));
    }
    printf("%.2f\n", result);
}

int main()
{
    int test, iterator;
    scanf("%d", &test);
    for (int i = 0; i < test; i++) {
        scanf("%d", &iterator);
        for (int j = 0; j < iterator; j++) {
            scanf("%d %d", &points[j].x, &points[j].y);
        }
        sort(iterator);
        countDistance(iterator);
    }
    return 0;
}

oh... my struct point is something like point[100000]... either it shows runtime error! but what is making it so time consuming?

Your sort uses a nested loop, and therefore takes around 100000*100000 iterations. That doesn't have a hope of running quickly. There are much faster sorting methods.

thanks, i understood it after i had closed my pc in Anger! hehehe :P

hi friends, i tried a few submitted solutions, and i guess everyone is missing this condition: for points having the same X value, you need to visit the point with the greatest Y value before visiting the next point with the same X value. because for the following test case:
4
0 0
0 2
0 4
0 8
output should be:
(0,8)-->(0,4)
(0,4)-->(0,8)
(0,8)-->(0,2)
(0,2)-->(0,8)
(0,8)-->(0,0)
the answer should be: 28, but the programs i tried from the submissions are giving the answer: 8, as it is clearly written in the question that if u visit a point with the same value of x, we have to visit the point with maximum y.
i think by visiting that point we have to add the distance from the current point to that point, and then from the point with maximum y to the next point. admin, if i m wrong please correct me

The problem is written in pretty poor english; taken literally you are correct. The problem intends you to visit points (where x is the same) from the highest y value down to the lowest value.

Stephen: ya you are correct. later i also realised that for coordinates with the same x we just have to take the minimum and max value of y and leave the coordinates in b/w. thanx anyway

I kept getting wrong answer until i changed the way I formatted my output. I still don't know what's wrong with my original output method. can someone help? originally i had this

DecimalFormat twoDForm = new DecimalFormat("#.##");
returnVal = Double.valueOf(twoDForm.format(distance)).toString();
if (returnVal.endsWith(".0")) {
    returnVal = returnVal.concat("0");
}
System.out.println(returnVal);

when i changed it to

returnVal = Double.valueOf(twoDForm.format(distance));
System.out.print(String.format("%.2f\n", returnVal));

it suddenly worked with everything else the same. What was wrong with the first approach?
The toString() method of Double will show any value which is at least 10^7 (possible here with the right input) in exponential notation, which is what is causing the wrong answer. The simplest correct code doesn't involve any DecimalFormat class at all:

System.out.printf("%.2f\n", distance);

ah thanks for the explanation. I realized i could just go with System.out.printf("%.2f\n", distance); right after i pressed save on the previous comment

To Admin: When I upload my program, it goes on running and running without showing any message. What do you think should have gone wrong?? Please help!

Is there anyone to answer my query????

Hello everyone... I am Aafreen, a student of B.C.A. I am just a beginner in the field of programming... and find myself unable to understand the logic of many problems... I have very little knowledge about programming... can someone tell me how I can learn programming in C and become a better programmer?
Changed float to double, worked like a charm ;)

Can anyone tell me please what is wrong in my code:

#define d(x,y,a,b) (sqrt((x-a)*(x-a)+(y-b)*(y-b)))
void calc(int [][2], int);
void mergesort(int [][2], int, int);
void merge(int [][2], int, int, int);
void mergesort1(int [][2], int, int);
void merge1(int [][2], int, int, int);
int b[100001][2];

main()
    int t, i, m, j, k, num[100001][2], o;
    char waste;
    for (o = 1; o <= t; o++)
    {
        do { waste = getchar(); } while (waste < '0' || waste > '9');
        m = waste - '0';
        for (j = 1; j <= m; j++)
            scanf("%d%d", &num[j][0], &num[j][1]);
        mergesort1(num, 1, m);
        j = 1;
        while (j < m)
        {
            i = num[j][0];
            for (k = j+1; num[k][0] == i && k <= m; k++);
            mergesort(num, j, k-1);
            j = k;
        }
        calc(num, m);
    }
    return 0;

void calc(int num[][2], int m)
{
    int i, j = 1, k, l;
    double total = 0;
    if (k > j+1 && k <= m)
        { total += fabs(-num[k-1][1] + num[j][1]) + d(num[k-1][0], num[k-1][1], num[k][0], num[k][1]); }
    else if (k > j+1 && k > m)
        { total += fabs(-num[k-1][1] + num[j][1]); }
    else if (k == j+1)
        { total += d(num[j][0], num[j][1], num[k][0], num[k][1]); }
    double t = total*100 + 0.5;
    int y = floor(t);
    t = (double)y / 100;
    printf("%.2lf\n", t);
}

void mergesort1(int num[][2], int i, int j)
    int mid;
    {
        mid = (i+j)/2;
        mergesort1(num, i, mid);
        mergesort1(num, mid+1, j);
        merge1(num, i, mid, j);
    }

void merge1(int num[][2], int low, int mid, int high)
    int i, j, k, l;
    i = low;
    j = mid + 1;
    for (k = low; i <= mid && j <= high; )
        if (num[i][0] < num[j][0])
        { b[k][0] = num[i][0]; b[k++][1] = num[i++][1]; }
        else
        { b[k][0] = num[j][0]; b[k++][1] = num[j++][1]; }
    if (i > mid)
        for (l = j; l <= high; l++)
        { b[k][0] = num[l][0]; b[k++][1] = num[l][1]; }
    else if (j > high)
        for (l = i; l <= mid; l++)
    for (l = low; l <= high; l++)
        { num[l][0] = b[l][0]; num[l][1] = b[l][1]; }

void mergesort(int num[][2], int i, int j)
    mergesort(num, i, mid);
    mergesort(num, mid+1, j);
    merge(num, i, mid, j);

void merge(int num[][2], int low, int mid, int high)
    if (num[i][1] > num[j]
http://www.codechef.com/problems/POINTS
The due dates stored in the database are naive datetimes. That means they do not have a time zone associated with them. So while we can see a date and time, the datetime object is unaware of *when* that datetime is. What time zone at almost midnight are we talking about? Standard practice is to store all due dates in UTC format, then change them to the relevant time zone as needed. **pytz** is a 3rd-party package that helps with time zones, including daylight savings.

You will need basic Python programming and datetime skills for this lab:
– [Certified Associate in Python Programming Certification]()
– [Python’s datetime]()
– [pytz]()

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

- Make Due Dates Timezone Aware

Using the SQLite applications you have already developed, you pull all the datetimes and book titles from the db. Run python timezones.py. This will result in an AssertionError. There are two sections that must be fixed to pass the assertion. In this first section, you must make the due dates aware datetime objects by adding time zone information to the datetime object. The time zone added should be the time zone indicated for a book title. You create a time zone object with pytz:

my_timezone = pytz.timezone(<time zone name>)

Then make the due date aware of that time zone. This is done by replacing the None tzinfo with the real time zone. This does not change the due date, just makes it aware of its time zone:

aware_due_date = due_date.replace(tzinfo=my_timezone)

We need to install pytz, so run pip3 install pytz. Now make those times aware of their time zone:

import pytz
from datetime import datetime

# due date data from db
title_due_dates = [
    ['Oh Python! My Python!', '2020-11-15 23:59:59'],
    ['Fun with Django', '2020-06-23 23:59:59'],
    ['When Bees Attack! The Horror!', '2020-12-10 23:59:59'],
    ["Martin Buber's Philosophies", '2020-07-12 23:59:59'],
    ['The Sun Also Orbits', '2020-10-31 23:59:59']
]

# dictionary matching the timezone value to book title
title_time_zones = {
    'Oh Python! My Python!': 'US/Central',
    'Fun with Django': 'US/Pacific',
    'When Bees Attack! The Horror!': 'Europe/London',
    "Martin Buber's Philosophies": 'Australia/Melbourne',
    'The Sun Also Orbits': 'Europe/Paris'
}

# when the db due_date string is converted to a datetime object it is naive
# it does not contain timezone info
# make the due_date timezone aware with `<due_date>.replace(tzinfo=timezone)`
# make the timezone the same as indicated in `title_time_zones`
# this makes the book due just before midnight local time
aware_title_due_dates = {}
for book in title_due_dates:
    book_timezone = pytz.timezone(title_time_zones[book[0]])
    naive_date_due = datetime.strptime(book[1], "%Y-%m-%d %H:%M:%S")
    aware_date_due = naive_date_due.replace(tzinfo=book_timezone)
    aware_title_due_dates[book[0]] = aware_date_due

# remaining code omitted - used in step 2

- Turn All Due Dates to UTC

We now have a dictionary of aware datetimes by book author timezone. These aware datetimes are set for essentially midnight in the author’s timezone. When storing due dates, it is standard to store the datetime in UTC and convert to the user’s timezone on the fly. So now we need to change the timezone on the author’s due date to UTC and make it a text string for storage in the database.
# code prior to this omitted

# aware_title_due_dates has the due date in the author's timezone
# following good db practice we will store the dates as UTC and
# only convert when necessary to the time zone needed
# update `title_due_dates` with the due_date in UTC time
for book in title_due_dates:
    utc_due_date = aware_title_due_dates[book[0]].astimezone(pytz.utc)
    utc_due_date_string = utc_due_date.strftime("%Y-%m-%d %H:%M:%S %Z%z")
    book[1] = utc_due_date_string

# we can now use aware_title_due_dates for updating the db

Run python timezones.py.

Congrats! You have shown a basic understanding of datetimes and timezones. This skill will be necessary if you work for a company that has an application used worldwide.
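One caveat worth knowing: the pytz documentation itself recommends localize() rather than replace(tzinfo=...), because a pytz timezone attached via replace() can carry the zone's historical local-mean-time offset instead of the expected standard offset. The lab's flow can be reproduced with only the standard library (Python 3.9+), where replace(tzinfo=...) is the correct idiom; this sketch reuses the lab's data names and assumes system tz data is available:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

title_due_dates = [
    ['Oh Python! My Python!', '2020-11-15 23:59:59'],
    ['Fun with Django', '2020-06-23 23:59:59'],
]
title_time_zones = {
    'Oh Python! My Python!': 'US/Central',
    'Fun with Django': 'US/Pacific',
}

for book in title_due_dates:
    naive = datetime.strptime(book[1], '%Y-%m-%d %H:%M:%S')
    # with zoneinfo (unlike pytz), attaching tzinfo via replace() is correct
    aware = naive.replace(tzinfo=ZoneInfo(title_time_zones[book[0]]))
    book[1] = aware.astimezone(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z%z')

print(title_due_dates[0][1])  # → 2020-11-16 05:59:59 UTC+0000
```

November 15 is after the US DST change, so US/Central is UTC-6 and the almost-midnight due date lands at 05:59:59 the next day in UTC.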
https://acloudguru.com/hands-on-labs/working-with-time-zones-in-python
.Net Server and Perl Client (Code Snippets)

Hello,
I hope these code snippets are useful. I was new to Perl and SOAP::Lite before I started, so I will also describe the problems I faced and how they got solved.

.Net Service
-----------------------------------------
<WebMethod()> _
Public Function methodXXX(ByVal param1 As String, _
                          ByVal param2 As String) As Result()
    .......
    .......
End Function

SOAP Client
-------------------------------------------------------------
my ($param1, $param2) = ('aaa', 'bbb');

my $soap = SOAP::Lite
    -> uri('')
    -> on_action( sub { sprintf '%s/%s', @_ })
    -> proxy('');

my $method = SOAP::Data->name('methodXXX')
    ->attr({xmlns => ''});

my @params = (
    SOAP::Data->name(param1 => $param1)->type('string'),
    SOAP::Data->name(param2 => $param2)->type('string'),
    SOAP::Data->name(param3 => $param3)->type('string')
);

my $som = $soap->call($method => @params);

my @result_list = $som->valueof('//Result');
foreach my $result (@result_list) {
    ...
    ...
}

Here are the problems that I faced.

1. The .Net service was not receiving the values of the method parameters. It was receiving blank data.
Solution:
a) Used SOAP::Data instead of just $param1.

2. The Perl client received no data.
Solution:
a) Added a namespace attribute to the $method. This namespace should be the same as the namespace declared for the .Net web service. I found this by trial and error after seeing the SOAP messages.

3. The .Net service returned a SOAP Fault message.
Solution: Added the on_action method call to SOAP::Lite:
-> on_action( sub { sprintf '%s/%s', @_ })

4. The client did not receive the entire list. Only the last item in the result was being printed.
Solution: Used the valueof() method instead of the result() method on the $som.

There were some other problems because of improper Perl syntax. I am learning Perl and I used $ instead of @ and didn't get the correct result. But, finally it worked. I hope the above list is useful. Any feedback is welcome.
Ajay - I wrote the following article for perl.com in March of this year. It covers several of the points you learned on your own, as well as a few others. Sorry that I didn't get this to you before you learned the hard way, but I haven't been attending to e-mail much this weekend. The article is at: Hope this helps! Randy -- Randy J. Ray Campbell, CA rjray@... Silicon Valley Scale Modelers:
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/3124?source=1&var=1
CC-MAIN-2017-30
refinedweb
369
77.84
This is an assignment that I'm having trouble with; these are the 2 questions I'm having trouble with.
2. Write a loop that counts only the odd numbers out of the first five numbers entered from the keyboard, but does not count (skips) the number 7. Use the keyword continue to skip 7 inside the loop.
3. Write the same program above, but exit the loop using break if the number 7 is entered.
So my program works just fine but I'm not sure how to implement the two 7 parts? Any help would be appreciated.

#include <iostream>
using namespace std;

int main() {
    int nNums = 5;
    int count = 0, i;
    int number;
    for (i = 0; i < nNums; i++) {
        cout << "Please Enter a Number: ";
        cin >> number;
        if (number % 2 != 0)
            count++;
    }
    cout << "Number of odd numbers: " << count << endl;
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/362221/help-with-loop
CC-MAIN-2017-34
refinedweb
142
77.67
Base class for handles to an AMQP session. #include <qpid/client/SessionBase_0_10.h> Subclasses provide the AMQP commands for a given version of the protocol. Close the session. A session is automatically closed when all handles to it are destroyed. Get the channel associated with this session. Get the session ID. Resume a suspended session with a new connection. Suspend the session - detach it from its connection. Synchronize the session: sync() waits until all commands issued on this session so far have been completed by the broker. Note: sync() is always synchronous, even on an AsyncSession object, because that's almost always what you want. You can call AsyncSession::executionSync() directly in the unusual event that you want to do an asynchronous sync. Set the timeout for this session. Definition at line 103 of file SessionBase_0_10.h.
http://qpid.apache.org/apis/0.18/cpp/html/a00296.html
CC-MAIN-2013-20
refinedweb
146
61.33
Opaque handle to a map object. A map object is used to manipulate key-value pairs using the am_map_* interface. Map objects are used by the policy interface in the C SDK to return any policy decision results and advices from the Access Manager policy service, and to pass any environment variables to the policy interface for policy evaluation. #include "am_map.h" typedef struct am_map *am_map_t; This is an opaque structure and therefore has no members accessible by the C SDK user. This function creates an instance of the am_map_t structure and returns the pointer to the structure to the caller. Memory Concerns: You should free the allocated structure by calling am_map_destroy. See am_policy_test.c in the C SDK samples for an example of how to use am_map_t.
http://docs.oracle.com/cd/E19636-01/819-2140/adobh/index.html
CC-MAIN-2014-42
refinedweb
127
53.51
Quote: Ok, I found this great article. Now, my question is: should I use a game engine to create games? If I create a game and use a game engine, I can't reuse its functionality, because that's only for the specific engine that I've used. So in the end, I didn't make an engine at all. Honestly, I'm really confused about where to start. I've made many games in 2D using the SDL API and reuse some of its code, like creating text, images, menus, sound, etc. Now I've moved to 3D. I want to make games here too, but I also want to make an engine so I can use it to create my next game. #1 Members - Reputation: 100 Posted 31 January 2009 - 03:21 PM #2 Members - Reputation: 2747 Posted 31 January 2009 - 03:49 PM The article does not expressly say "Don't use an engine", nor does it advise you to "never write an engine". It's really advising you to have the humility to recognize when you are not yet ready to write an engine up-front, and to be sure of the requirements of your game, so that you are writing code to fit the game rather than writing some misshapen code and then trying to shoehorn your game into it. Go ahead and use an engine if you like; exposure to an established engine will be enlightening -- though be careful to view it realistically. Every engine has things it does well, and things it does not do so well. Under no circumstances should you take an engine like, say, OGRE, or even a commercial engine like Unreal Engine 3, and assume its construction is doctrine. #3 Members - Reputation: 194 Posted 31 January 2009 - 04:00 PM Quote: The idea behind that article is that simply writing a game produces an engine anyway - you don't need to "write an engine", because if you write a game, you get an engine for free. The first game you write will need a menu, a way to get from that into the game, a way to render 3D models and play sounds, for example. So you have scenes and scene management, resource management and use in there. 
That's an engine :) Sure, it might be a simple one with 4 classes, but if that's all you need then bam, you're done. Maybe copy it into a /src/Engine/ subdirectory so you know never to put game-specific code into any file in that directory. Next game, just cut+paste /engine/ and you're good to go. Next game you start, you'll have so much code already there -> that IS your engine. Engines are NOT something mythical or magical; an engine is just a set of source code that does what you want and which you can reuse. #4 Members - Reputation: 102 Posted 31 January 2009 - 04:16 PM But it is also a great learning experience, if a long path to success... I would hazard that most of the now very successful and long-lived engines have come out of the quagmire of finding and solving problems for the original game concept they were used for. If you take a close look at their history you'll see that the owners/designers of those engines are often the ones leading trends, and the people who license them are often stuck one iteration behind, often having to augment the engines to get what they need. Personally I have not found a game engine yet that does everything I need. I have also never been entirely happy with any engine I have produced. If you have the time then I would always suggest having a go yourself. But if you are producing something that fits the kind of game produced with a particular engine that is available to you, then why not take advantage of that? YOU can always augment that engine with specific modules you require. However, with most of the cheap and readily accessible engines out there - Torque, Unity, etc. - I have always found that learning to use them, and adapting to their methods, takes almost as long as writing those things yourself. And you are then stuck with those methods forever, often not fully understanding what's actually going on deep under the hood. So if that is the case with you then perhaps you are ready to write your own. 
#5 Moderators - Reputation: 2286 Posted 31 January 2009 - 04:26 PM Quote: Yea, pretty much. The grand, overarching idea is, of course, to get shit done. So certainly, if the option exists to use an existing engine (either a full-spectrum game engine or something more specific, such as a rendering engine like Ogre), by all means, go ahead. You can still build your own "engine" of reusable bits and frameworks of code around something like Ogre just as well as you can build it around something like Direct3D, too. Josh Petrie | Lead Tools Engineer, ArenaNet | Microsoft C++ MVP #6 Members - Reputation: 100 Posted 31 January 2009 - 09:46 PM #7 Members - Reputation: 285 Posted 31 January 2009 - 10:40 PM After this, if you want to progress onto bigger and better games (not necessarily the same thing ;)), choose an off-the-shelf engine like Ogre, Torque or something similar. On the other hand, if you prefer to write an engine and aren't too interested in games, go ahead and write one, but remember the chances of you being able to produce something better than a dedicated middleware company are very slim. Even AAA games companies think twice now before developing in-house tech, as the cost in man-hours to create an engine will likely end up being more than licensing an off-the-shelf solution. #8 Members - Reputation: 37 Posted 17 February 2009 - 11:41 AM

#include "engines.h"

int main() {
    CGame* game = new CGame();
    game->Run();
    return 0;
}

#9 Members - Reputation: 135 Posted 17 February 2009 - 11:52 AM This is what I suggest you do: make games, and when you want to try something new, make the engine. That's what I would do and that's what I will certainly do in the near future. #10 Members - Reputation: 1084 Posted 17 February 2009 - 12:17 PM Quote: If you use an existing engine, you don't need to write your own - because you can use that same engine for your next game too. What's the problem here? 
:) #11 Members - Reputation: 176 Posted 17 February 2009 - 01:17 PM Start it small, keep it modular and extensible, and build it up over time. Eventually you'll have enough re-usable code to make flexible sub-systems that have passed "trial by fire" by being used in multiple games. That's extremely important, and it's why the big engines have so many licensees -- they've been proven to work. So while you shouldn't set out to "write an engine", you should actively be re-using your code, and building up your framework of helper functions, utilities, tools, algorithms and sub-systems. You'll eventually end up with something you could call a Game Engine. [Edited by - ThrustGoblin on February 17, 2009 7:17:32 PM] #12 Members - Reputation: 301 Posted 07 November 2011 - 04:59 AM #13 Members - Reputation: 138 Posted 07 November 2011 - 05:16 AM Well, care must be taken when someone is writing a game in order to get an engine for free, because someone could write a "hardcoded game" by accident, with no reusability in mind. Refactoring - if you're an agile developer you're probably doing it anyway, many times, while developing - becomes the step that turns the meat of your asteroids clone into the engine for your breakout clone. #14 Moderators - Reputation: 4818 Posted 07 November 2011 - 07:00 AM Sloperama Productions Making games fun and getting them done. Please do not PM me. My email address is easy to find, but note that I do not give private advice.
http://www.gamedev.net/topic/522990-write-games-not-engines/?p=4393421
CC-MAIN-2013-20
refinedweb
1,348
65.35
Hello everyone, I'm obviously having trouble with a program that wants me to create a house with just five mouse clicks. My program is simple: create a house using only 5 mouse clicks. This problem is from the book Python Programming by John Zelle. It's on page 162. This is what the book says on the problem: You are to write a program that allows the user to draw a simple house using five mouse clicks. The first two clicks will be the opposite corners of the rectangular frame of the house. The third click will indicate the center of the top edge of a rectangular door. The door should have a total width that is 1/5th of the width of the house frame. The sides of the door should extend from the corners of the top down to the bottom of the frame. The fourth click will indicate the center of a square window. The window is half as wide as the door. The last click will indicate the peak of the roof. The edges of the roof will extend from the point at the peak to the corners of the top edge of the house frame. Thanks for the help in advance. PS I'm using Python version 2.2.3. So far this is what I have come up with:

from graphics22 import *

def main():
    win = GraphWin("house.py", 500, 500)
    win.setCoords(0, 0, 4, 4)
    p1 = win.getMouse()
    p2 = win.getMouse()
    p3 = win.getMouse()
    p4 = win.getMouse()
    p5 = win.getMouse()
    house = Rectangle(p1, p2)
    house.setFill("Red")
    house.draw(win)
    roof = Polygon(p1, p3, p4)
    roof.setFill("Black")
    roof.draw(win)
    #door = Rectangle()
    #door.setFill("Brown")
    #door.draw(win)
    #window = Rectangle()
    #window.setFill("White")
    #window.draw(win)
    win.getMouse()
    win.close()

main()

I can create the rectangle for the base of the house and the roof but then I'm already at 4 mouse clicks.
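The key insight from the book's description is that the door, window, and roof are all *derived* from the five clicks rather than drawn from extra clicks. Here is a sketch of the geometry only, using plain (x, y) tuples so the arithmetic can be checked without the graphics library; with Zelle's graphics you would read coordinates via p.getX() / p.getY() and build Point objects from the returned corners:

```python
def house_shapes(p1, p2, p3, p4, p5):
    # p1, p2: opposite corners of the frame; p3: center of the door's top
    # edge; p4: center of the window; p5: peak of the roof.
    (x1, y1), (x2, y2) = p1, p2
    width = abs(x2 - x1)
    bottom = min(y1, y2)
    top = max(y1, y2)

    # Door: 1/5 of the frame width, extending down to the frame's bottom.
    dx, dy = p3
    half_door = width / 5.0 / 2.0
    door = ((dx - half_door, dy), (dx + half_door, bottom))

    # Window: square, half as wide as the door, centered on click 4.
    wx, wy = p4
    half_win = half_door / 2.0
    window = ((wx - half_win, wy + half_win), (wx + half_win, wy - half_win))

    # Roof: from the peak to the two top corners of the frame.
    roof = (p5, (min(x1, x2), top), (max(x1, x2), top))
    return door, window, roof

door, window, roof = house_shapes((0, 0), (10, 5), (5, 5), (2.5, 3), (5, 8))
```

Each returned pair of corners can be fed straight to Rectangle (and the roof triple to Polygon), so the whole house really does come from exactly five getMouse() calls.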
https://www.daniweb.com/programming/software-development/threads/94059/need-help-with-the-5-mouse-click-house-program
CC-MAIN-2018-05
refinedweb
322
85.28
Natural Bohemia Weaving Rattan Wall Decor Home Living Room Hanging Product Burlywood Sunburst Round Wall Mirror Decoration US $6.80-$19.00 / Piece 5.0 Pieces (Min. Order) Top Sponsored Listing Furniture Mirrors Decor Wall Home Living Room Fish Rattan & Grass Mirror US $2.79 / Piece 100 Pieces (Min. Order) Decorative African Wall Basket Extralarge Wicker Baskets Hand Woven Rattan Wooden Hanging Decoration Home Decor Accessories Art US $2.00-$4.00 / Piece 12 Pieces (Min. Order) Elegant Rattan Wooden Serving Trays Rattan Tabletop Home Decor Round Square Black Nature Bamboo Wooden Serving Decorative Tray US $11.00-$11.50 / Piece 2 Pieces (Min. Order) New arrival Home creative decorative supplies Rattan Woven Plastic Fruit Basket US $1.59-$3.89 / Piece 1000.0 Pieces (Min. Order) Simulation plant Plastic Butterfly Flowers Home decoration Wedding ceiling rattan US $0.60-$3.00 / Piece 50.0 Pieces (Min. Order) Patio Furniture Home Garden Outdoor Rattan / Wicker Poweder Coated Metal or Aluminum Structure US $550.00-$990.00 / Set 1 Set (Min. Order) Rattan Knitting Storage Basket Hotel Rattan Knitting Laundry Basket Home Hand-woven Bathroom Laundry Basket US $5.24-$6.13 / Piece 100.0 Pieces (Min. Order) Sea Grass Wickerwork Basket Home Garden Rattan Woven Hanging Flower Pot Planter US $2.99-$10.99 / Piece 500 Pieces (Min. Order) Rattan Woven Wall Basket Home Decor Seagrass Fruit Bowl Rattan Hanging Decorative for Home Livingroom US $5.45-$7.58 / Piece 2 Pieces (Min. Order) Cheap For Home Storage Toronto Wicker Rattan Lined Picnic Basket US $18.70-$19.70 / Piece 300.0 Pieces (Min. Order) Home Decor Rattan Bread Basket Woven Storage Basket Kitchen Rattan Toys Storage Boxes Fruit Plate Trays US $3.99-$4.99 / Piece 100 Pieces (Min. Order) Cooking pancake plate party placement decoration hot sale home dining table plate rattan fruit plate handmade rattan US $5.35-$5.50 / Piece 2.0 Pieces (Min. 
Order) Macaroon 100ml new design home fragrance wood lid rattan sticks reed diffuser in pvc box US $1.65-$2.40 / Piece 48 Pieces (Min. Order) Hand Woven Plastic Rattan Imitate Rush Grass Home Storage Baskets Bins Organizer With Handles US $11.50-$12.39 / Set 100 Sets (Min. Order) Tabletop storage basket rattan wicker make cosmetics storage box garden home rattan make storage basket US $1.87-$1.97 / Piece 500 Pieces (Min. Order) Home Fragrance Scented Rattan Stick Dried Flowers Reed Diffuser Wood Sola Flowers For Air Freshening US $0.35-$0.80 / Piece 3000.0 Pieces (Min. Order) Home decoration round shape rattan and wood wall shelf rack for cloth hanger US $18.43-$20.38 / Set 500.0 Sets (Min. Order) Advanced hand-woven home decoration rattan woven rectangular toy storage baby laundry basket with lid US $4.25-$34.30 / Piece 2.0 Pieces (Min. Order) Hot selling desktop home decoration Artificial rattan Fake Grass of 36 leaves, 12 Rattans in one OPP bag US $3.10-$3.48 / Set 10 Sets (Min. Order) Original Storage Toy Laundry Woven Shelves Wicker Bamboo Home Decor Wall Baskets Rattan US $6.50-$10.00 / Piece 100.0 Pieces (Min. Order) Wholesale Hand-woven home storage pp plastic wicker rattan 3set Storage Basket US $9.40-$9.60 / Piece 100 Pieces (Min. Order) Small Children Clothes Storage Basket Vegetables Or Fruits Baskets Rattan Round Laundry hamper With Handle US $48.89-$60.32 / Piece 2 Pieces (Min. Order) Woven storage basket, flower plastic rattan children's shopping home decoration hanging basket US $0.30-$0.50 / Piece 2 Pieces (Min. Order) Artificial rattan with flower plastic eucalyptus rattan home decoration simulation eucalyptus vine Amazon hot sale US $0.21-$3.00 / Piece 100.0 Pieces (Min. Order) Hot Sale High Quality Hammock Home Garden Rattan/Wicker white egg hanging chair US $95.00-$100.00 / Set 50 Sets (Min. 
Order) Wholesale New arrival Home creative decorative supplies Rattan Woven Plastic Fruit Basket Storage Boxes & Bins for kids US $3.30-$3.90 / Set 300 Sets (Min. Order) Wholesale Fabric Artificial Rose Cane Rattan Plants Daisy Home Party Wall Wedding Decor Rose Flower Artificial Plant Rattan US $1.25-$1.55 / Piece 12 Pieces (Min. Order) Custom Home Decoration Rattan Braided Hanging Basket US $0.90-$7.90 / Piece 500 Pieces (Min. Order) import cheap wicker patio rattan 6 home garden / furniture set modern outdoor garden indonesia US $500.00-$2500 / Piece 1.0 Pieces (Min. Order) Sustainable Home storage basket rattan 2021 eco friendly basket storage US $2.88-$3.00 / Set 200 Sets (Min. Order) 2020 handmade home laundry basket organizer made rattan US $15.00 / Piece 300 Pieces (Min. Order) Nordic Style Home Square Artificial Rattan Harden Storage Basket Organizer Food Fruit Bins US $14.66 / Piece 2 Pieces (Min. Order) import cheap wicker patio rattan 6 home garden / furniture set modern outdoor garden indonesia US $1850-$2150 / Set 1 Set (Min. Order) Couture Jardin Curl Unique Home Decor Accessories Rattan Curl Planter 1 Piece US $510.00-$540.00 / Piece 1 Piece (Min. Order) - About product and suppliers: Alibaba.com offers 100,566 home rattan products. About 11% of these are garden sets, 4% are storage baskets, and 1% are other home decor. A wide variety of home rattan options are available to you, such as modern, european, and minimalist. You can also choose from rattan / wicker, metal, and wood. As well as from outdoor, hotel, and courtyard. And whether home rattan is aluminum, iron, or stainless steel. There are 100,566 home rattan suppliers, mainly located in Asia. The top supplying country or region is China, which supply 100% of home rattan respectively.
https://www.alibaba.com/countrysearch/CN/home-rattan.html
CC-MAIN-2022-05
refinedweb
979
62.24
Consider the following text: "Mr. McCONNELL. yadda yadda jon stewart is mean to me. The PRESIDING OFFICER. Suck it up. Mr. McCONNELL. but noooo. Mr. REID. Really dude?" And a list of words to split on: ["McCONNELL", "PRESIDING OFFICER", "REID"] I want to have the output be the dictionary {"McCONNELL": "yadda yadda jon stewart is mean to me. but noooo.", "PRESIDING OFFICER": "Suck it up.", "REID": "Really dude?"} So I need a way to split by elements of a list (on any of those names), and then be aware of which one it split on and be able to map that to the chunk of text in that split. In the case of more than one chunk of text having the same speaker ("McCONNELL", in the example), just concatenate the strings. Edit: Here is the function I have been using. It works on the example, but is not robust when I try it on a much larger scale (and it isn't clear why it messes up):

import re

def split_by_speaker(txt, seps):
    '''
    Given raw text and a list of separators (generally possible speaker
    names), splits based on those names and returns a dictionary of text
    attributable to that name
    '''
    speakers = []
    default_sep = seps[0]
    rv = {}
    for sep in seps:
        if sep in txt:
            all_occurences = [m.start() for m in re.finditer(sep, txt)]
            for occ in all_occurences:
                speakers.append((sep, occ))
            txt = txt.replace(sep, default_sep)
    temp_t = [i.strip() for i in txt.split(default_sep)][1:]
    speakers.sort(key=lambda x: x[1])
    for i in range(len(temp_t)):
        if speakers[i][0] in rv:
            rv[speakers[i][0]] = rv[speakers[i][0]] + " " + temp_t[i]
        else:
            rv[speakers[i][0]] = temp_t[i]
    return rv
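One reason the original approach breaks at scale is that txt.replace() shifts every later offset whenever a name is not the same length as default_sep, so the recorded positions stop matching the split order. A more robust sketch (not the poster's code; the honorific-stripping at the end is an assumption about the "Mr. X." transcript format) lets re.split do the bookkeeping by splitting on an alternation with a capturing group, which keeps each matched name paired with the text that follows it:

```python
import re

def split_by_speakers(txt, names):
    # One alternation pattern; the capturing group makes re.split keep
    # each matched name in the output list.
    pattern = "(" + "|".join(re.escape(n) for n in names) + ")"
    parts = re.split(pattern, txt)
    # parts looks like: [preamble, name, text, name, text, ...]
    rv = {}
    for name, text in zip(parts[1::2], parts[2::2]):
        text = text.strip(". ")
        # Drop a dangling honorific left over from the *next* speaker tag,
        # e.g. the "Mr" in "... but noooo. Mr. REID."
        for title in ("Mrs", "Mr", "Ms", "The"):
            if text.endswith(title):
                text = text[: -len(title)].rstrip(". ")
        rv[name] = (rv[name] + " " + text) if name in rv else text
    return rv

txt = ("Mr. McCONNELL. yadda yadda jon stewart is mean to me. "
       "The PRESIDING OFFICER. Suck it up. "
       "Mr. McCONNELL. but noooo. Mr. REID. Really dude?")
names = ["McCONNELL", "PRESIDING OFFICER", "REID"]
result = split_by_speakers(txt, names)
```

Because no offsets are recorded and nothing is mutated, repeated speakers concatenate naturally, and the punctuation handling is the only part that needs tuning for a real transcript.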
http://www.developersite.org/question-88479
CC-MAIN-2019-22
refinedweb
281
73.58
Hi there. Great program, I'll give you some money when I'm not broke. I've noticed a fun little problem when I try to compile this on Windows 7 Ultimate 32-bit edition with Python 2.5.4 and all the associated dependencies for Python 2.5.4. Once the dist directory is created with the python setup.py py2exe command, I can run the resulting code just fine on the computer I compiled it on. However, when I take the dist directory to another computer running Vista and/or XP Pro SP3, I get errors in the log file to the tune of:

Traceback (most recent call last):
  File "keylogger.pyw", line 54, in <module>
  File "detailedlogwriter.pyc", line 35, in <module>
  File "win32api.pyc", line 12, in <module>
  File "win32api.pyc", line 10, in __load
ImportError: DLL load failed: The specified procedure could not be found.

My first thought was the DLLs py2exe was complaining about at the end of its output:

***32.dll - C:\Windows\system32\ole32.dll
ntdll.dll - C:\Windows\system32\ntdll
msvcrt.dll - C:\Windows\system32\msvcrt.dll
WS2_32.dll - C:\Windows\system32\WS2_32.dll
GDI32.dll - C:\Windows\system32\GDI32.dll
VERSION.dll - C:\Windows\system32\VERSION.dll
KERNEL32.dll - C:\Windows\system32\KERNEL32.dll
SETUPAPI.dll - C:\Windows\system32\SETUPAPI.dll
RPCRT4.dll - C:\Windows\system32\RPCRT4.dll

Using the shotgun approach to problem solving, I used to include ALL the DLLs there, and then re-ran the py2exe bit. It then spit out two more DLLs that I forget the names of, which I added to the includes list, and re-ran py2exe again. I took THAT massive thing over to the WinXP machine, ran it, and got the following in the error log (last two lines; I misplaced the rest of it, but it was about as long as the first DLL error message and complaining about tk instead of win32api):

File "_tkinter.pyc", line 10, in __load
ImportError: DLL load failed: The specified procedure could not be found. 
My guess is that this is probably some esoteric Windows 7 problem with py2exe and DLLs, and I have a few more ideas to try before passing out. I wanted to shoot this off into the void while I'm still coherent enough to make some kind of sense. I understand I could probably get around it by just py2exe'ing the thing on an XP machine, but I'd like to figure out WHY it doesn't work on Windows 7. I've had similar problems with py2exe and Vista before too. I have a decent amount of Python experience and a very minimal amount of py2exe experience. I'll update this thread with any progress. : Hi, thanks a lot for posting this, and in such detail. I personally only have access to WinXP, so I haven't dealt with this problem and cannot test any of this. One thing I can suggest is to include /all/ DLLs, by defining your DLL test function in setup.py so that it always returns 0, as follows:

def isSystemDLL(pathname):
    return 0
py2exe.build_exe.isSystemDLL = isSystemDLL

This will produce a /giant/ distribution, but at least you'll know everything is there. ;) Give that a try and see what happens… Anonymous 2010-07-03 I have the exact same message as you do and I was wondering if you have discovered a workaround for this yet. William Kreider 2010-07-17 Yes, nanotube, I have, and the file is very large. I've installed it, and when I try to run it now it complains again; most of the complaints deal with tkinter. Well… then maybe you should try building on the same OS as your target comp… since I don't have anything other than WinXP and can't play around with this stuff to test, I haven't any further nuggets of wisdom to suggest… Anonymous 2012-01-31 Solved with the help of this: or you can just delete the powrprof.dll from the install folder and that will do the trick.
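The last comment's powrprof.dll hint can also be expressed declaratively: py2exe's dll_excludes option tells the bundler to leave OS-specific system DLLs out of dist entirely, so there is nothing to delete by hand afterwards. A sketch of such a setup.py (a config fragment; the keylogger.pyw name comes from the traceback above, and the extra exclude entries are common suggestions rather than from this thread):

```python
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(
    windows=["keylogger.pyw"],
    options={
        "py2exe": {
            # System DLLs that vary by Windows version; bundling the Win7
            # copies is what breaks the exe on XP/Vista.
            "dll_excludes": ["powrprof.dll", "w9xpopen.exe"],
        }
    },
)
```

Excluded DLLs are then resolved from the target machine's own system32 at runtime, which is usually what you want for OS-provided libraries.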
http://sourceforge.net/p/pykeylogger/discussion/493189/thread/63d41d38/
CC-MAIN-2014-23
refinedweb
681
68.81