An exception is an abnormal condition that arises in a code sequence at run time, such as reading a non-existent file.
A Java exception is an object that describes an exceptional condition that has occurred in a piece of code.
Java exception handling is managed via five keywords: try, catch, throw, throws, and finally.

A try block contains the program statements that you want to monitor for exceptions. If an exception occurs within the try block, it is thrown. A catch statement can catch the exception and handle it in a rational manner. To throw an exception manually, use the keyword throw. Any exception that is thrown out of a method must be specified as such by a throws clause. Any code that absolutely must be executed after a try block completes is put in a finally block.
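A minimal sketch tying all five keywords together (the class and method names here are illustrative, not from the tutorial):

```java
public class KeywordDemo {
    // 'throws' declares that this method may propagate a checked exception.
    static void risky(boolean fail) throws Exception {
        if (fail) {
            // 'throw' raises an exception manually.
            throw new Exception("something went wrong");
        }
    }

    static String run(boolean fail) {
        try {
            risky(fail);                 // code monitored for exceptions
            return "ok";
        } catch (Exception e) {          // handler for the thrown exception
            return "caught: " + e.getMessage();
        } finally {
            // always executed, whether or not an exception occurred
            System.out.println("cleanup");
        }
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // ok
        System.out.println(run(true));  // caught: something went wrong
    }
}
```

Note that the finally block runs in both cases, even though the try block returns.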
To handle an exception, we put the code that might throw it inside a try...catch statement:

```java
try {
    // block of code to monitor for errors
} catch (ExceptionType1 exOb) {
    // exception handler for ExceptionType1
} catch (ExceptionType2 exOb) {
    // exception handler for ExceptionType2
}
```
Program statements that might throw exceptions are contained within a try block. The exception handler is coded using a catch statement. Here, ExceptionType is the type of exception that has occurred. Enclose the code that you want to monitor inside a try block, and follow it with a catch clause.
The following program includes a try block and a catch clause that processes the ArithmeticException generated by the division-by-zero error:
```java
public class Main {
    public static void main(String args[]) {
        int d, a;
        try { // monitor a block of code
            d = 0;
            a = 42 / d;
            System.out.println("This will not be printed.");
        } catch (ArithmeticException e) { // catch divide-by-zero error
            System.out.println("Division by zero.");
        }
        System.out.println("After catch statement.");
    }
}
```
This program generates the following output:

    Division by zero.
    After catch statement.
Once an exception is thrown, program control transfers out of the try block into the catch block. Execution never returns to the try block from a catch.
The following code handles an exception and then moves on:

```java
import java.util.Random;

public class Main {
    public static void main(String args[]) {
        int a = 0, b = 0, c = 0;
        Random r = new Random();
        for (int i = 0; i < 32000; i++) {
            try {
                b = r.nextInt();
                c = r.nextInt();
                a = 12345 / (b / c);
            } catch (ArithmeticException e) {
                System.out.println("Division by zero.");
                a = 0; // set a to zero and continue
            }
            System.out.println("a: " + a);
        }
    }
}
```
The code above generates a long stream of output; whenever b / c happens to be zero, the ArithmeticException is caught, a is reset to zero, and the program continues instead of stopping.
First, let me answer a question left open by the previous post, "Be careful when using HashMap": how HashMap in JDK 8 solves the infinite-loop problem.
The linked-list part of JDK 8's resize() corresponds to the old transfer code:

```java
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
    next = e.next;
    if ((e.hash & oldCap) == 0) {   // stays in the low half
        if (loTail == null)
            loHead = e;
        else
            loTail.next = e;
        loTail = e;
    }
    else {                          // moves to the high half
        if (hiTail == null)
            hiHead = e;
        else
            hiTail.next = e;
        hiTail = e;
    }
} while ((e = next) != null);
if (loTail != null) {
    loTail.next = null;
    newTab[j] = loHead;
}
if (hiTail != null) {
    hiTail.next = null;
    newTab[j + oldCap] = hiHead;
}
```
Since the capacity doubles on expansion (n grows to 2n), each old bucket's elements split into a low part (indices 0 to n-1) and a high part (indices n to 2n-1); hence the low list (loHead/loTail) and the high list (hiHead/hiTail).
From the analysis above, it is not hard to see that the cycle arose because the new linked list ended up in exactly the reverse order of the old one; as long as the new list is built in the original order, no cycle can form.

JDK 8 uses the head and tail pointers to keep the linked list in its original order, so no circular reference is created.
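Outside of HashMap, the head/tail technique can be sketched as a standalone split of a singly linked list into a low list and a high list in one pass, preserving relative order (the Node class and method names below are illustrative, not JDK code):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitDemo {
    static class Node {
        final int hash;
        Node next;
        Node(int hash) { this.hash = hash; }
    }

    // Split a chain the way JDK 8's resize() does: nodes whose
    // (hash & oldCap) bit is zero go to the low list, the rest to the
    // high list. Appending at the tail preserves relative order.
    static Node[] split(Node head, int oldCap) {
        Node loHead = null, loTail = null;
        Node hiHead = null, hiTail = null;
        Node next;
        for (Node e = head; e != null; e = next) {
            next = e.next;
            e.next = null;
            if ((e.hash & oldCap) == 0) {
                if (loTail == null) loHead = e; else loTail.next = e;
                loTail = e;
            } else {
                if (hiTail == null) hiHead = e; else hiTail.next = e;
                hiTail = e;
            }
        }
        return new Node[] { loHead, hiHead };
    }

    static List<Integer> toList(Node n) {
        List<Integer> out = new ArrayList<>();
        for (; n != null; n = n.next) out.add(n.hash);
        return out;
    }

    public static void main(String[] args) {
        // hashes 1 and 2 are "low" for oldCap 16; 17 and 18 are "high"
        Node a = new Node(1), b = new Node(17), c = new Node(2), d = new Node(18);
        a.next = b; b.next = c; c.next = d;
        Node[] parts = split(a, 16);
        System.out.println(toList(parts[0])); // [1, 2]
        System.out.println(toList(parts[1])); // [17, 18]
    }
}
```

Because each node is appended at the tail, both halves keep the order they had in the original chain, which is exactly what rules out the JDK 7 cycle.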
Disadvantages of traditional HashMap
Before JDK 1.8, HashMap was implemented as an array of linked lists. Even with a good hash function, it is hard to guarantee that elements are evenly distributed.

When a large number of elements land in the same bucket, a long linked list hangs off that bucket and the HashMap degenerates into a singly linked list: with n elements in one chain, traversal takes O(n) time, and the structure loses its advantage entirely.
To solve this problem, the red-black tree (O(log n) lookup) was introduced in JDK 1.8 to optimize this case.
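The effect can be observed from the outside with a key type whose hashCode always collides. Because the key also implements Comparable, JDK 1.8 can keep the colliding keys in a tree, and lookups stay correct and fast even in this worst case; the BadKey class below is made up for this demo:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Every instance hashes to the same bucket, forcing worst-case collisions.
    static final class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public int compareTo(BadKey other) {
            return Integer.compare(id, other.id);
        }
    }

    static Map<BadKey, Integer> fill(int n) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < n; i++) map.put(new BadKey(i), i);
        return map;
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = fill(10_000);
        // All 10,000 entries share one bucket, yet lookups still work.
        System.out.println(map.get(new BadKey(9_999))); // prints 9999
    }
}
```

On a pre-1.8 JDK the same program would still be correct, but each get would walk a 10,000-node chain.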
New data structure - red black tree
In JDK 1.8, HashMap still has its ordinary linked-list node:
```java
static class Node<K,V> implements Map.Entry<K,V> {
    // hash value, used to locate the bucket
    final int hash;
    // key
    final K key;
    // value
    V value;
    // pointer to the next node
    Node<K,V> next;
    // ...
}
```
There is also another kind of node: TreeNode, newly added in 1.8. It is the red-black tree node from the classic data structure:
```java
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
    TreeNode<K,V> parent;  // red-black tree links
    TreeNode<K,V> left;
    TreeNode<K,V> right;
    TreeNode<K,V> prev;    // needed to unlink next upon deletion
    boolean red;
}
```
As you can see, it is a red-black tree node, with a parent, left and right children, a link to the previous element, and a color flag.
In addition, because it inherits from LinkedHashMap.Entry, which in turn inherits from HashMap.Node, it carries six more fields:
```java
// inherited from LinkedHashMap.Entry
Entry<K,V> before, after;
// inherited from HashMap.Node
final int hash;
final K key;
V value;
Node<K,V> next;
```
Three key parameters of red black tree
There are three key parameters about red black tree in HashMap:
- TREEIFY_THRESHOLD
- UNTREEIFY_THRESHOLD
- MIN_TREEIFY_CAPACITY
The values and functions are as follows:
```java
// Treeify threshold of a bucket.
// When the number of elements in a bucket exceeds this value,
// its linked-list nodes are replaced with red-black tree nodes.
// The value must be at least 8; smaller values would cause
// frequent, inefficient conversions.
static final int TREEIFY_THRESHOLD = 8;

// Untreeify (restore-to-list) threshold of a bucket.
// During a resize, if the number of elements in a tree bucket drops
// to this value or below, the tree is cut back into a linked list.
// It should be smaller than the value above, at most 6,
// to avoid frequent back-and-forth conversions.
static final int UNTREEIFY_THRESHOLD = 6;

// Minimum table capacity for treeification.
// Buckets may be treeified only when the hash table is at least this big;
// otherwise a bucket with too many elements triggers a resize instead.
// To avoid conflicts between resizing and treeification,
// this value must be at least 4 * TREEIFY_THRESHOLD.
static final int MIN_TREEIFY_CAPACITY = 64;
```
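The interplay of the three constants can be restated as a small decision function mirroring the checks described above (a sketch of the policy, not actual JDK code):

```java
public class ThresholdDemo {
    static final int TREEIFY_THRESHOLD = 8;
    static final int UNTREEIFY_THRESHOLD = 6;
    static final int MIN_TREEIFY_CAPACITY = 64;

    // What happens to a bucket whose chain reached the given length,
    // in a table of the given capacity?
    static String onLongChain(int tableCapacity, int chainLength) {
        if (chainLength < TREEIFY_THRESHOLD) return "keep list";
        // treeifyBin() resizes instead of treeifying while the table is small
        if (tableCapacity < MIN_TREEIFY_CAPACITY) return "resize";
        return "treeify";
    }

    // During a resize, a tree bucket that shrank to the untreeify
    // threshold or below is cut back to a linked list.
    static String onSplit(int elementsLeft) {
        return elementsLeft <= UNTREEIFY_THRESHOLD ? "untreeify" : "keep tree";
    }

    public static void main(String[] args) {
        System.out.println(onLongChain(16, 8)); // resize
        System.out.println(onLongChain(64, 8)); // treeify
        System.out.println(onSplit(6));         // untreeify
    }
}
```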
New operation: treeifyBin()
In Java 8, if the number of elements in a bucket exceeds TREEIFY_THRESHOLD (8 by default), the linked list is replaced with a red-black tree to improve speed.

The replacement is performed by treeifyBin():
```java
// Replace all linked-list nodes in a bucket with red-black tree nodes
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // If the hash table is empty, or smaller than the minimum
    // treeify capacity (64 by default), create/expand it instead
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        // Otherwise treeify: e walks the bucket's linked list,
        // starting from the first node
        TreeNode<K,V> hd = null, tl = null; // head and tail of the tree list
        do {
            // Create a tree node with the same content as the list node e
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null) // establish the head node
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        // Point the bucket at the new head node; from now on the bucket
        // holds a red-black tree rather than a linked list
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}

TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
    return new TreeNode<>(p.hash, p.key, p.value, next);
}
```
The above operations do these things:
- Determine whether to expand or tree based on the number of elements in the hash table
- If it's treelization
- Traverse the elements in the bucket, create the same number of tree nodes, copy the content, and establish a connection
- Then let the first element of the bucket point to the new tree head node, and replace the bucket's chain content with tree content
But notice that the operations so far never set any node's color; at this point we only have a doubly linked list of tree nodes. Finally, hd.treeify(tab) is called on the head node to actually build the red-black tree.
```java
final void treeify(Node<K,V>[] tab) {
    TreeNode<K,V> root = null;
    for (TreeNode<K,V> x = this, next; x != null; x = next) {
        next = (TreeNode<K,V>)x.next;
        x.left = x.right = null;
        if (root == null) {
            // First pass through the loop: establish the root, which is black
            x.parent = null;
            x.red = false;
            root = x;
        }
        else {
            // Subsequent passes: x points to the node being inserted
            K k = x.key;
            int h = x.hash;
            Class<?> kc = null;
            // Inner loop: walk down from the root, comparing each node
            // with the current node x to find its insertion point
            for (TreeNode<K,V> p = root;;) {
                int dir, ph; // dir holds the comparison result (direction)
                K pk = p.key;
                if ((ph = p.hash) > h)
                    // the compared node's hash is larger than x's: dir = -1
                    dir = -1;
                else if (ph < h)
                    // the compared node's hash is smaller than x's: dir = 1
                    dir = 1;
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0)
                    // hashes are equal and the keys cannot be compared:
                    // fall back to a tie-breaking rule
                    dir = tieBreakOrder(k, pk);

                // Make the current node the parent of x: if x compares
                // smaller, x becomes the left child, otherwise the right
                TreeNode<K,V> xp = p;
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    x.parent = xp;
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    root = balanceInsertion(root, x);
                    break;
                }
            }
        }
    }
    moveRootToFront(tab, root);
}
```
As you can see, turning the list into a red-black tree requires an order. There is a double loop here: every node already in the tree is compared with the current node by hash value (and, when the hashes are equal, by key, so the order is not total in the strict sense), and the comparison result determines the node's position in the tree.
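That comparison chain can be sketched as a standalone ordering function: first the hash, then compareTo when both keys are the same comparable class, and finally an identity-hash tie-breaker (a simplification of the real logic; the method name order is made up):

```java
public class TreeOrderDemo {
    // Simplified version of the ordering treeify()/putTreeVal() use.
    // Returns a negative or positive value like a comparator.
    static int order(Object a, int hashA, Object b, int hashB) {
        if (hashA != hashB)
            return hashA < hashB ? -1 : 1; // hash decides first
        // same hash: try Comparable if both keys are the same class
        if (a instanceof Comparable && a.getClass() == b.getClass()) {
            @SuppressWarnings("unchecked")
            int d = ((Comparable<Object>) a).compareTo(b);
            if (d != 0) return d;
        }
        // last resort: tieBreakOrder() falls back to identity hash codes
        return System.identityHashCode(a) <= System.identityHashCode(b) ? -1 : 1;
    }

    public static void main(String[] args) {
        System.out.println(order("a", 1, "b", 2) < 0); // hash decides: true
        System.out.println(order("a", 7, "b", 7) < 0); // compareTo decides: true
    }
}
```

As in the JDK, this is only a consistent rule for placing nodes, not a total order over all possible keys.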
New operation: add element putTreeVal() in red black tree
The above describes how to change the linked list structure in a bucket into a red black tree structure.
When adding, if a bucket already has a red black tree structure, the red black tree's add element method putTreeVal() should be called.
```java
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
                               int h, K k, V v) {
    Class<?> kc = null;
    boolean searched = false;
    TreeNode<K,V> root = (parent != null) ? root() : this;
    // Each insertion starts from the root and walks down, comparing hashes
    for (TreeNode<K,V> p = root;;) {
        int dir, ph; K pk;
        if ((ph = p.hash) > h)
            dir = -1;
        else if (ph < h)
            dir = 1;
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            // The current node's hash and key match the one being added:
            // return the current node
            return p;
        else if ((kc == null &&
                  (kc = comparableClassFor(k)) == null) ||
                 (dir = compareComparables(kc, k, pk)) == 0) {
            // Hashes are equal but the keys are not mutually comparable:
            // search both subtrees once for an existing equal key
            if (!searched) {
                TreeNode<K,V> q, ch;
                searched = true;
                if (((ch = p.left) != null &&
                     (q = ch.find(h, k, kc)) != null) ||
                    ((ch = p.right) != null &&
                     (q = ch.find(h, k, kc)) != null))
                    // An equal node was found in a subtree: return it directly
                    return q;
            }
            // Hashes are equal and the keys cannot be compared,
            // so a result is produced in a special way
            dir = tieBreakOrder(k, pk);
        }

        // The comparison gives an ordering between the current node and
        // the node to insert: smaller goes to the left subtree, larger
        // to the right. If the current node lacks the corresponding child,
        // insert here; otherwise continue down on the next iteration.
        TreeNode<K,V> xp = p;
        if ((p = (dir <= 0) ? p.left : p.right) == null) {
            Node<K,V> xpn = xp.next;
            TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
            if (dir <= 0)
                xp.left = x;
            else
                xp.right = x;
            xp.next = x;
            x.parent = x.prev = xp;
            if (xpn != null)
                ((TreeNode<K,V>)xpn).prev = x;
            // Rebalance after insertion and keep the root at the
            // front of the bucket
            moveRootToFront(tab, balanceInsertion(root, x));
            return null;
        }
    }
}

// Used when two hashes are equal but the keys cannot be compared:
// fall back to class names and identity hash codes.
// As the source comment says, the tree need not be totally ordered,
// as long as insertion applies consistent rules to stay balanced.
static int tieBreakOrder(Object a, Object b) {
    int d;
    if (a == null || b == null ||
        (d = a.getClass().getName().
         compareTo(b.getClass().getName())) == 0)
        d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
             -1 : 1);
    return d;
}
```
From the above code, we can know that when adding a new node n to the red black tree in the HashMap, there are the following operations:
- Start from the root node to traverse the element P in the current red black tree, and compare the hash values of n and p;
- If the hash values are the same and the keys are equal, the element already exists and the existing node is returned (note that the value is not compared here);
- If the hashes are equal but the keys cannot be compared, other information such as the reference address (identity hash code) is used to produce a rough ordering; as you can see, the red-black tree's ordering is not exact, and the source comments say only that a consistent rule is needed to keep the tree balanced;
- Finally, after the hash value comparison results are obtained, if the current node p has no left child or right child, it can be inserted, otherwise it will enter the next cycle;
- After the element is inserted, the routine balance adjustment of the red black tree is also required, and the leading position of the root node is ensured.
New operation: find element getTreeNode() in red black tree
The search method of HashMap is get():
```java
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}
```
After computing the hash value of the specified key, it calls the internal method getNode():
```java
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check the first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
```
This getNode() method is to get the head node of the bucket where the key is located according to the number of hash table elements and the hash value (the formula used is (n - 1) & hash). If the head node happens to be a red black tree node, call the getTreeNode() method of the red black tree node, otherwise traverse the linked list node.
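The index formula works because the table length n is always a power of two, so n - 1 is a bit mask and (n - 1) & hash equals hash % n for non-negative hashes. A quick check:

```java
public class IndexDemo {
    // Bucket index computation used by getNode()/putVal().
    static int bucketIndex(int hash, int tableLength) {
        return (tableLength - 1) & hash;
    }

    public static void main(String[] args) {
        int n = 16; // power-of-two capacity
        for (int hash : new int[] { 0, 1, 15, 16, 42, 12345 }) {
            // mask and modulo agree for non-negative hashes
            System.out.println(bucketIndex(hash, n) == hash % n); // true
        }
    }
}
```

The bit mask avoids a division, which is why HashMap insists on power-of-two capacities.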
```java
final TreeNode<K,V> getTreeNode(int h, Object k) {
    return ((parent != null) ? root() : this).find(h, k, null);
}
```
The getTreeNode() method delegates the search to the tree node's find() method:
```java
// Search from the root by hash value and key
final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
    TreeNode<K,V> p = this;
    do {
        int ph, dir; K pk;
        TreeNode<K,V> pl = p.left, pr = p.right, q;
        if ((ph = p.hash) > h)
            p = pl;
        else if (ph < h)
            p = pr;
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            return p;
        else if (pl == null)
            p = pr;
        else if (pr == null)
            p = pl;
        else if ((kc != null ||
                  (kc = comparableClassFor(k)) != null) &&
                 (dir = compareComparables(kc, k, pk)) != 0)
            p = (dir < 0) ? pl : pr;
        else if ((q = pr.find(h, k, kc)) != null)
            return q;
        else
            p = pl;
    } while (p != null);
    return null;
}
```
Since order was already guaranteed when elements were added, the search is essentially a binary search and is efficient.

As when inserting, if a node's hash equals the one being searched for, the keys are compared: if they are equal, the node is returned directly (the value is never examined); if not, the subtrees are searched recursively.
New operation: tree structure pruning split()
In HashMap, the resize() method initializes or grows the hash table. During a resize, if a bucket's elements form a red-black tree, split() is called: it prunes the tree and, when the number of remaining elements drops to the restore threshold UNTREEIFY_THRESHOLD (6 by default) or below, restores (cuts) the tree back into a linked-list structure:
```java
// Parameters:
// map   - the HashMap itself
// tab   - the hash table holding the bucket head nodes
// index - the bucket being split
// bit   - the hash bit to split on (the old capacity)
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
    TreeNode<K,V> b = this;
    // Relink into lo and hi lists, preserving order
    TreeNode<K,V> loHead = null, loTail = null;
    TreeNode<K,V> hiHead = null, hiTail = null;
    int lc = 0, hc = 0;
    for (TreeNode<K,V> e = b, next; e != null; e = next) {
        next = (TreeNode<K,V>)e.next;
        e.next = null;
        // If the bit of the hash selected by 'bit' is zero,
        // the node belongs to the low (lo) list
        if ((e.hash & bit) == 0) {
            if ((e.prev = loTail) == null)
                loHead = e;
            else
                loTail.next = e;
            loTail = e; // loTail tracks the current tail
            ++lc;       // count nodes in the lo list
        }
        else {
            // Otherwise the node belongs to the high (hi) list
            if ((e.prev = hiTail) == null)
                hiHead = e;
            else
                hiTail.next = e;
            hiTail = e;
            ++hc;       // count nodes in the hi list
        }
    }

    if (loHead != null) {
        // If the lo list has at most 6 nodes, untreeify it: the bucket
        // at 'index' becomes a plain linked list again
        if (lc <= UNTREEIFY_THRESHOLD)
            tab[index] = loHead.untreeify(map);
        else {
            // Otherwise point the bucket at the lo list and rebuild
            // the (now smaller) tree
            tab[index] = loHead;
            if (hiHead != null) // (else is already treeified)
                loHead.treeify(tab);
        }
    }
    if (hiHead != null) {
        // Likewise, the hi elements go to index + bit,
        // restored to a list or rebuilt as a pruned tree
        if (hc <= UNTREEIFY_THRESHOLD)
            tab[index + bit] = hiHead.untreeify(map);
        else {
            tab[index + bit] = hiHead;
            if (loHead != null)
                hiHead.treeify(tab);
        }
    }
}
```
From the above code, it can be seen that the pruning of the red black tree node during the expansion of HashMap is mainly divided into two parts. First, it is classified, and then it is determined whether to restore to the linked list or to simplify the elements and still retain the red black tree structure according to the number of elements.
1. Classification

Using the given position and bit: an element with (hash & bit) == 0 goes into the lo list, and any other element goes into the hi list.
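This classification has a neat consequence: after the table doubles, an element either stays at its old index or moves up by exactly bit (the old capacity) slots. A small check, with made-up hashes:

```java
public class ResizeIndexDemo {
    // New index after the table grows from oldCap to 2 * oldCap.
    static int newIndex(int hash, int oldCap) {
        int oldIndex = (oldCap - 1) & hash;
        // (hash & oldCap) == 0 -> the element stays in the "low" half;
        // otherwise it moves up by exactly oldCap slots
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16;
        for (int hash : new int[] { 5, 21, 100, 12345 }) {
            // must agree with recomputing the index against the new mask
            System.out.println(
                newIndex(hash, oldCap) == ((2 * oldCap - 1) & hash)); // true
        }
    }
}
```

This is why split() only needs to look at one bit of each hash rather than rehashing every element.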
2. Determine the processing according to the number of elements
For the lo list: when it has 6 or fewer elements, it is restored to a linked list, and the bucket tab[index] in the hash table points at the result; when it has more, the red-black tree is kept, merely pruned of the departed nodes.

The hi list is handled the same way, except that the result is placed outside the old range, at tab[index + bit].
Summary
Since JDK 1.8, the hash table's add, delete, find, and resize operations all gained handling for TreeNode nodes:
- When adding: once a bucket's linked list exceeds 8 nodes, it is converted into a red-black tree;
- When deleting or resizing: if a bucket holds a red-black tree and the tree's element count drops low enough, the tree is pruned or restored outright to a linked-list structure;
- Even with a poor hash function that concentrates many elements in one bucket, performance stays acceptable thanks to the red-black tree structure.
This article walked through the key TreeNode methods that HashMap gained in JDK 1.8, following the source code. As you can see, since 1.8 HashMap combines the strengths of the hash table and the red-black tree: it is not only fast, but also keeps its performance in extreme cases. The designer's care shows in every detail. When will I be able to write code this good!
If you like my articles, you can follow my personal subscription account. Feel free to leave a message and chat any time.
Welcome back to another exciting Friday Q&A. This week I'll be taking Jonathan Mitchell's suggestion to talk about code injection: the various ways to do it, why you'd want to, and why you wouldn't want to.
What It Is
Let's start with a real easy example:
```
Fear:~/shell mikeash$ cat injectlib.c
#include <stdio.h>

void inject_init(void) __attribute__((constructor));
void inject_init(void)
{
    fprintf(stderr, "Here's your injected code!\n");
}
Fear:~/shell mikeash$ gcc -bundle -o injectlib injectlib.c
Fear:~/shell mikeash$ gdb attach `ps x|grep Safari|grep -v grep|awk '{print $1}'`
GNU gdb 6.3.50-20050815 (Apple version gdb-962) (Sat Jul 26 08:14:40 UTC 2008)
[snip]
0x93e631c6 in mach_msg_trap ()
(gdb) p (void *)dlopen("/Users/mikeash/shell/injectlib", 2)
$1 = (void *) 0x28f3b5b0
(gdb)
```
```
[0x0-0x17f97f8].com.apple.Safari[23558]: Here's your injected code!
```
How to Do It
Of course, using gdb to inject code isn't exactly what one might call practical. For one thing, gdb is unlikely to be present on your users' systems, as it's a developer tool.

However, there are better alternatives, some part of Mac OS X and some third-party tools.
Input Managers
Input Managers are intended to provide keyboard input mechanisms for allowing custom ways to translate keystrokes into text on the screen, for example a custom Chinese input method.
They aren't very useful for their stated purpose on Mac OS X because they only work in Cocoa apps, not Carbon apps. But because they work by loading the input manager directly into every Cocoa application, they're great for code injection. All you need to do is build a bundle with the right layout, put it in the right place, and suddenly you're loaded into every Cocoa app.
Of course Apple isn't too keen on code injection and they've threatened that they might take our toys away at some point. Input Managers still work on Leopard, although they've been restricted and now require root access to install them. They may or may not still be around on Snow Leopard, it's hard to say yet.
Input Managers are a bit troublesome. First, on Leopard, they have to be installed with some fairly precise permissions and that's annoying. Second, they load into every Cocoa app even if you only want to fiddle with one of them. The third-party SIMBL helps with both of these problems. It will load standard bundles placed in standard locations (although SIMBL itself still needs the special magic installation to function), and it allows plugins to provide a list of applications they want to load into.
Mach ports
In a previous edition, I briefly mentioned that mach ports allow injecting code into other processes. This works because mach ports can allow essentially full control over another process. If you can get your hands on the right port (see the task_for_pid function for how to do that) you can do things like map new memory into the process with custom contents and create a new thread in the target process that executes that memory. Set things up right and you have code injection.
This is pretty hard to pull off, as it ends up being a pretty complex bootstrapping process. Fortunately, the third-party mach_inject does all the hard work for you.
There are, of course, some downsides. One is that you need to run as root (or as part of the procmod group) to get the necessary task port, even if you're injecting code into another process owned by the same user. Another is that the time of injection is non-deterministic. Input Managers load at a fairly well-defined point in the application startup process, but mach_inject loads whenever your process can make the call, which could be much later, and potentially earlier, before things are really set up properly yet.
APE
Application Enhancer is a third-party injection mechanism. It's kind of like a better SIMBL which can load code into any application, not just Cocoa apps, and which loads it a little earlier than SIMBL does (which is an advantage for certain kinds of code).
Once again, there are downsides. Probably the biggest downside is that APE is by far the largest offender in the code injection war. A lot of people out there know APE by name, think that APE is evil, and refuse to use it.
Another big downside is that the company which makes APE is no longer maintaining it in a timely fashion. The first non-beta release of APE that supported Leopard was made in August 2008, a year after that OS version shipped. It's currently unknown whether they even plan to support Snow Leopard at all, let alone how long it will take them to release Snow Leopard support if they do. At this point, APE is good for experimentation but I can't recommend basing an actual product on it.
Lastly, APE is non-free and requires a license fee for commercial/shareware products, although that fee is quite reasonable.
Miscellaneous
Those are the main mechanisms, but the system provides a few more, of varying utility:
- Contextual menu plugins. These are Finder plugins intended to extend the contextual menu in the Finder. Of course once they're loaded, they can do whatever they want. The downside is that, as I understand it, they're loaded lazily on Leopard so you don't get your shot until and unless the user actually brings up the plugins section of the contextual menu. And of course it only gets you into the Finder.
- Scripting Additions. These are meant to be used to extend the capabilities of AppleScript on the system, but they actually work by loading into an application which is responding to the appropriate Apple Event. For example, run this command in your shell:

  osascript -e 'tell app "Finder" to display dialog "I just injected some code into Finder!"'

  Replace that standard scripting additions command with your own and off you go.
- WebKit plugins. These are rarely useful unless you really are implementing a browser plugin, but it's a way to potentially get code into Safari and other WebKit-using applications.
- Kernel extensions. Not really an injection mechanism, but once you're in the kernel you rule the system and can do whatever you want to anything.
- Buffer overflows. Don't laugh too hard, people have done this! One of the older iPhone jailbreaking mechanisms used a buffer overflow in Safari to get in and do its dirty work. Of course these are absolutely not something to rely on, as vendors have this weird idea that they ought to fix them once they're discovered.
Code injection is a powerful tool for extending applications you don't control. For example, my own LiveDictionary uses the Input Manager mechanism to load into Safari so that it can monitor the user's keyboard and mouse inputs in that app, and read the text under the mouse cursor at the appropriate times. (This is something that could be done using the Accessibility API today, but at the time LiveDictionary was written it wasn't yet functional enough.)
For more examples, just take a look at Unsanity. They have a whole line of products based around APE and code injection, doing things ranging from GUI themes to mouse cursor customization to custom menus.
Basically, any time you need control over objects inside another application, and that application doesn't expose a mechanism to get at them from the outside (such as AppleScript or a plugin interface), code injection is how you do it.
How do you accomplish your task once you're inside? Well that all depends on exactly what you want to do. It's much like writing code in your own application, except you have much less control about how things work and much less information about how things are structured. It's the kind of thing where if you don't know how to write the injected code, it's probably something you shouldn't be doing in the first place. Since you're running code in a foreign process, you're in an unforgiving environment where mistakes are much more dire than usual.
What's Bad About It?
Code injection is dangerous and nasty and very special care needs to be taken when doing it.
There are two fundamental reasons for this. First, when you're in another process, you have much less control over the environment than usual. It's also easy for that process to make certain assumptions about how things work. For example, you might pop up a window, while the application assumes that all windows are ones that it created. Crash, boom, game over.
The second reason is more of a political one. It's extremely rude to crash another process. Users hate it when you crash their other programs. Developers hate it when they get crash reports and support requests caused by your code. While crashing is never a good thing, crashing somebody else's program is an order of magnitude worse than crashing your own standalone program.
Practical Advice
Given the dangers, how should you proceed? Here are some very general guidelines:
- Avoid code injection if at all possible. Take another look at your options. Can you use Accessibility to do what you want? AppleScript? Is there an official plugin interface? Sometimes you really have no choice, but exhaust all other options first.
- Load into as few programs as possible. If you're using a mechanism like Input Managers that loads into a lot of applications but you only want to hit a few, be sure to restrict your module to just the ones you want. This reduces the chances of affecting an application you didn't even need to be in. For Input Managers, you can use SIMBL to accomplish this.
- Do as little in the injected code as possible. If you do a lot of complicated work in your code, move that into a background process and talk to it using Distributed Objects or another IPC mechanism. That way, if something in the background process crashes, it won't take other applications down with it. (LiveDictionary is a good example of this, the Input Manager itself basically just grabs input and text out of the target application, and all of the dictionary parsing, lookup, and display is done in an LSUIElement application.)
- Modify your environment as little as possible. Got a handy Cocoa programming trick that involves posing as a common AppKit class? Don't do it. Need to install some category methods with common names on NSObject? Forget about it. Want to load some enormous framework that you don't really need? Best to avoid it if you can. Any large application is going to have hidden assumptions about the environment it runs it, often completely inadvertently. Try to modify as little as possible to avoid make it crash when you step over one of those invisible lines.
- Program defensively. I mean really defensively. Check every potential failure spot thoroughly, and fail as gracefully as possible. Remember, it's much better for your injected code to stop working or disable itself but leave the application running than it is to crash the application.
Wrapping Up
That's it for this week's Friday Q&A. Come back next week for another show. Bring a friend and get 50% off the price of admission!
Did I overlook something important? Forget to mention your favorite technology? Do you passionately hate code injection in every form? Post your comments below.
And as always, Friday Q&A is powered by your suggestions. If you have a topic you'd like to see discussed here, post it below or e-mail me. (Your name will be used unless you ask me not to.)
`ps x | awk '/[S]afari/ {print $1}'`
- task_for_pid() requires a process running as the root user or in the procmod group since 10.4 for Intel (not 10.4 for PPC). Additionally, code signed with a trusted certificate can use task_for_pid() after the user enters an admin password at an elevation prompt.
- InputManagers in 10.5 are deprecated; they can still be used so long as they're placed in /Library/InputManagers and owned by root with 0644/0755 permissions.
- DYLD_INSERT_LIBRARIES in the ~/.MacOSX/Environment.plist file (that describes the environment variables for all apps launched through LaunchServices for a particular user) is silently filtered away.
Code injection is a security risk that can be used for a lot of interesting and evil things: see -- or just think of circumvention of Keychain ACLs.
I made a mach_inject high-level wrapper that runs on 10.5 and "adapts" SIMBL-style extenders while abstracting all the mach_injectness away. It's called PlugSuit.
Furthermore, on 10.5, it is no longer required to be root (or procmod) to access task_for_pid(). All you need to do is add SecTaskAccess to your Info.plist and fork over money for an official code signing certificate with which to sign your process. Somehow Apple thinks that if you have a "real" certificate you can't possibly use these facilities for evil.
With a flat namespace, you can't have loaded multiple libraries that define the same symbol. This would mean a link failure at build time, and it used to mean a crash at runtime. Since 10.3, however, a runtime symbol conflict in a flat namespace will result in one definition silently overwriting the other.
So now you don't have to worry only about how your injected code affects the target app and its libraries. You're also causing each library in the process to potentially overwrite bits of other libraries.
Quite a while ago I submitted a patch to dyld that would allow INSERT without FORCE_FLAT, but Apple never really did anything with it. (They basically recommended against using DYLD_INSERT_LIBRARIES, because it messes with the dyld cache.) I have no idea how well the patch applies to modern dyld, but if anybody's interested:
I believe that the interposing functionality that you can get in conjunction with DYLD_INSERT_LIBRARIES does force flat namespace, but it's not required to use that functionality when you use DYLD_INSERT_LIBRARIES.
The folks working on product security at Apple aren't complete idiots. By forcing people to obtain a valid certificate from a CA trusted by Apple, they make use of task_for_pid() something that the developer is accountable for. If someone releases an app that abuses task_for_pid(), Apple can have the certificate revoked to limit the scope of its damage.
- An organization creating malware will have little trouble getting an official certificate that doesn't lead back to them. Certificate authorities don't check credentials particularly thoroughly, and there's an unlimited supply of willing fronts to be found on the net.
- Even if Apple could convince a certificate authority to revoke a certificate on the basis of malware (which I'm dubious about), this will take a huge amount of time. I know of at least one extremely serious security hole in Leopard that Apple has known about, and has been sitting on, for several months. If it takes them this long to patch a flagship product, how fast are they going to move when a piece of malware hits a few users?
- Even if Apple moves fast, and the CA moves fast, we're still talking about days. By the time the certificate is revoked, the large part of the damage will have been done.
- An activity defined by Apple as "non-malicious" could be something I personally find "malicious". I certainly know that my decision to allow or deny code injection is completely unrelated to the author's ability to obtain a trusted code signing certificate. I'll happily trust code injection done by tiny indie companies, but I wouldn't touch such a thing coming from Microsoft or Adobe with a ten-foot pole. Requiring root for task_for_pid() on Tiger allowed the user to decide which programs could use it, which was a good thing. On Leopard they have now opened it up so that anyone with a bit of cash and an ID card can completely bypass my controls on it. They have, in essence, opened the hole back up that they had previously closed. This is bad.
In conclusion, requiring a trusted signing certificate adds zero security, and allowing code signed with such a certificate to use this API opens up a huge hole in the system. (One might wonder at the relevance of this hole, given the number of other injection mechanisms outlined above, but it's still big.) Somebody either wasn't thinking when they made this decision or, worse, they were thinking but they didn't have our best interests in mind.
Revoking a certificate doesn't require cross-functional engineering coordination or QA testing. Issuing an update takes more resources, so I don't really think the turnaround on one is a predictor for the turnaround on the other.
And why would they have trouble convincing a CA to revoke a certificate that signed malicious code?
Not at all. Worms spread at a geometric rate through the majority of their lifetimes and only slow toward the end. A lot of the more famous worms were running rampant for over a week. Revoking the certificate on Day 2 or Day 3 would make a huge difference.
And a certificate lets you positively identify a piece of code as coming from Adobe or Microsoft.
This is a more convincing argument. But my overall point is that the people working in security at Apple know what code signing is and what its uses are. And this restriction allows them to leverage the ability to revoke certificates, which is a change that propagates much faster than distributing a security patch.
Nice article. Months ago I was looking for the opposite.
How can I prevent someone from creating code and injecting it into my app, which holds confidential information? How can I stop input managers from altering an app?
"And a certificate lets you positively identify a piece of code as coming from Adobe or Microsoft." It's not like Adobe or Microsoft try to hide their products, I can just look for the "Adobe" or "Microsoft" in the name. And then of course there's the part where you've missed the whole point: it does me no good to be able to identify their products if the system leaves itself wide open for them.
"And this restriction allows them to leverage the ability to revoke certificates...." But it's not a restriction. This API is less restricted on Leopard than it was on Tiger. That's not a restriction, that's a hole.
Lord Anubis: Run your app as a user which is known to be clean of any such software, on a system with no input managers or any other such thing installed in /Library. UNIX's entire security model is based on preventing one user from messing with a different user. It has no real security against protecting one process of one user from meddling with another process of that same user. Things like the task_for_pid() restriction are really just small patches over this fact.
If you want to prevent code from being injected while running on a user's system you have no control over, give it up. Can't be done. Convince him to run it in a secure environment.
Why? Anti-virus companies have issued updated virus definitions within a day or two of an outbreak in the past. Big organizations can move quickly when needed.
Not true. You can preemptively not trust certificates from them and/or their issuing CAs.
It restricts who can use the API, so it's a restriction. There is a security regression, but that doesn't mean there is no restriction in place.
Yes, I did give up then, but I was, and am, still hoping for a trick.
LA
Damien, that's a great idea! Maybe we should delete every system root cert in our keychains!
And then let's say, just for the sake of argument, that Apple identifies and reacts to the threat instantaneously, and that the CA responds to Apple's request instantaneously. What, exactly, is the mechanism for communicating the revocation to my computer? Is my machine phoning home to all of the CAs every day without my knowledge?
As for "You can preemptively not trust certificates from them and/or their issuing CAs." please don't be stupid. I appreciate intelligent discussion here, but I'm afraid this has lost it completely. I don't have a master list of "evil companies" to blacklist. What I would like is for my system not to trust software just because it has been signed. I personally do not take signing as any indication of trustworthiness. Apple apparently does, but the answer to this disagreement is not "oh, just blacklist the companies you don't like". What would be acceptable would be to whitelist the companies I do like. That's what the old Tiger mechanism provided, and making it use signing information to prove exactly who is requesting privileges would be a reasonable enhancement. But making it use signing information to give carte blanche to anyone with a hundred bucks and a fake ID is not reasonable.
If you still think the people working on product security at Apple aren't complete idiots, then I would love to read your explanation for that policy.
You don't have to do such a thing to override a symbol. The dynamic linker also provides some facility to automatically override a symbol when the injected library is loaded.
This is called interposition.
Basically, to override NSApplicationMain, you can do something like this:
#define DYLD_INTERPOSE(_replacment,_replacee) \
__attribute__((used)) static struct{ const void* replacment; const void* replacee; } _interpose_##_replacee \
__attribute__ ((section ("__DATA,__interpose"))) = { (const void*)(unsigned long)&_replacment, (const void*)(unsigned long)&_replacee };
DYLD_INTERPOSE(_SAApplicationMain, NSApplicationMain);
And then:
static
int _SAApplicationMain(int argc, const char **argv) {
// My interposed code.
// call original implementation
return NSApplicationMain(argc, argv);
}
And it's a little worse than that: /usr/bin/nc (among others) is signed and will listen on any port it's asked to.
Nice job, Apple!
This firewall is still foolishly holey, but it turns out that it's not that holey.
/usr/bin/nc has world read permissions set on it, which means it can be copied elsewhere. Doing so retains the code signature. However, since the firewall exclusion list is path-based, it won't catch the new copy of nc in its new location. So all a malicious app has to do is copy it somewhere else (like /tmp, or ~/Desktop) and execute it there, and it'll bypass the firewall.
You'd think that someone would have told Apple that maintaining an "evil list" was a bad idea....
(Thanks to Jeff Johnson for figuring this one out.)
Signed apps with trusted certs must acquire "system.privilege.taskport" authorization to use task_for_pid.
Which style is preferable?
Style A:

def foo():
    import some_module
    some_module.something

Style B:

import some_module

def foo():
    some_module.something
Indeed, as already noted, it's usually best to follow the PEP 8 recommendation and do your imports at the top. There are some exceptions though. The key to understanding them lies in your embedded question in your second paragraph: "at what stage does the import ... happen?"
Import is actually an executable statement. When you import a module, all the executable statements in the module run. "def" is also an executable statement; its execution causes the defined name to be associated with the (already-compiled) code. So if you have:
def f():
    import something
    return None
in a module that you import, the (compiled) import and return statements get associated with the name "f" at that point. When you run f(), the import statement there runs.
If you defer importing something that is "very big" or "heavy", and then you never run the function (in this case f), the import never happens. This saves time (and some space as well). Of course, once you actually call f(), the import happens (if it has already happened once, Python uses the cached result, but it still has to check), so you lose your time advantage.
Hence, as a rule of thumb, "import everything at the top" until after you have done a lot of profiling and discovered that importing "hugething" is wasting a lot of time in 90% of your runs, vs saving a little time in 10% of them.
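As a runnable sketch of that caching behavior (using csv as a stand-in for a "heavy" module; whether it is preloaded can vary by interpreter):

```python
import sys

def parse_row(line):
    # Deferred import: executes on the first call; later calls hit
    # the cache in sys.modules (but still pay the lookup check).
    import csv
    return next(csv.reader([line]))

print(parse_row("a,b,c"))    # first call triggers the import
print("csv" in sys.modules)  # True: the module is now cached
```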
> I think /*@dependent@*/ or /*@shared@*/ could both work, i.e.,
> "static /*@dependent@*/ char *statictest(){..."
>
> at memory that splint uses. That is understandable. There is
> something with the splint manual that makes it hard to
> understand. At least I think so.

I was thinking about saying *precisely the same thing* in my first post, but I thought that I would be flamed with comments like "If you don't know what is the problem then shut up". I also have read it multiple times, and find it hard to understand.

In my real program (not the one I put in my first post) I had already tried to use /*@shared@*/, but splint complained:

"Fresh storage returned as shared (should be only): ret
 Fresh storage (newly allocated in this function) is transferred in a way that
 the obligation to release storage is not propagated. Use the /*@only@*/
 annotation to indicate that a return value is the only reference to the
 returned storage. (Use -freshtrans to inhibit warning)
 arrumanomes.c:56:46: Fresh storage ret created"

The *only* storage was being returned by the function basename. It turns out that basename does not seem to have an annotated version for splint; therefore, splint assumes it returns *only* storage. I have now worked around the problem with the following code at the beginning of the program:

#ifdef S_SPLINT_S
/* If this macro is defined, the program is being analyzed by splint.
   splint doesn't seem to understand that the functions dirname and basename
   return pointers to static storage; the annotations below tell this to splint. */
/*@shared@*/
extern char *dirname(char *path);
/*@shared@*/
extern char *__xpg_basename (char *__path);
#define basename __xpg_basename
#else
#include <libgen.h> /* basename, dirname */
#endif /* S_SPLINT_S */

Perhaps I would have seen this workaround earlier if the splint warning told me that, if the storage should not have been created with *only*, I should annotate the function that creates it with /*@shared@*/, since /*@only@*/ is implicit. Perhaps this could be emphasized in the splint manual too: it should be emphasized that *only* is the default, and that if splint does not understand that some memory does not need to be deallocated, perhaps the culprit is a function that is not annotated.

Anyway, the biggest problem is that, since I was using +unixlib (also tried with +posixlib), I thought that basename and dirname would be correctly annotated. At first I didn't even consider the possibility that {base,dir}name lacked annotations. I was trying to solve the problem by annotating my variables and functions, and even considered the possibility that {base,dir}name did in fact return dynamically allocated memory that should be freed (I have studied them and found out that the storage they return should *not* be freed).

Perhaps splint's unixlib should be enhanced?

PS: How would owned/dependent work? splint would still expect the memory to be deallocated, wouldn't it?

--
Software is like sex: it is better when it is free.
If the root causes are simple (Memory footprint, high CPU load, high IO) then the stats available from Task Manager can help indicate if there are some misbehaving services or applications. Open Task Manager, go to Processes, on the view menu click "Select Columns" and enable tracking for "CPU Time", "Peak Working Set", IO Reads, IO Writes and Image Path (dunno if the latter is available in XP). Leave it running over a typical working session and take a look at what processes have the highest values. Now ask if these make sense. Typically 90%+ of the activity should be related to user apps. If something you don't recognize is very high or top of one of the lists then some more investigation is called for, there may be a good reason (e.g. a mandatory AV\Security service might appear to have very high numbers) and there may not be (e.g. DodgyApp.exe consuming 75% of total memory).
More complicated problems can be very hard to diagnose and require some smarts. The various Sysinternals tools like Process Explorer and Process Monitor can be used to drill into quite a bit more detail to find problem areas but using them effectively takes time and a bit of expertise. On W2K8 & Vista the XPerf tools can be used to take detailed traces of the entire system behaviour while investigating issues.
A lot of overall performance issues can be caused by network problems (e.g. poor name resolution, persistent connections to shares that have very large numbers of files, or simply dodgy networks throwing lots of errors). Troubleshooting network issues could fill a book, but checking ping times to your key servers is a good start - on a LAN everything should be <1ms; your WAN latencies will be longer, but they should be consistent, and if any of them are >100ms then there should be a very good reason why. Netstat -e will show you whether there are any discards\errors, both of which are bad at any level; if non-unicast packets exceed unicast packets by any significant margin then that is probably a problem.
Tracking down more esoteric problems can be quite hard. For example, Windows Explorer has the ability to support third party namespace\shell extensions (e.g. extensions that provide better metadata on media files, source-control repositories and so on). Installable File System Filter drivers are used to provide additional features (and occasionally restrictions e.g. DRM) and there are quite a few other places where 3rd party extensions to the user interface can be inserted by vendors. All of these can cause significant problems in terms of user interface performance (when they misbehave) because they can be triggered by many actions that appear to be relatively benign (e.g. opening a file dialog and browsing for a file). Mark Russinovich has a good article about tracing down just such a misbehaving component on his blog here a few years back. That blog post is a great place to start as a guide to finding a root cause when you know something is seriously sick.
Give your sysadmin the machine with a detailed description of the problem, including when it happens and what you are doing on the machine at the time.
Try Everest: it can generate VERY nice reports on the whole system configuration
All of the previous answers are good, so I won't try and cover the same ground. Rather, a few things i'd do before gathering further data on performance. These tips assume that the source of the issue is the disk I/O subsystem, which in my experience is generally responsible for the most obvious slowdowns, even if indirectly.
Firstly - clean your machine up. By that, I mean remove any apps that you don't actually use, empty %TEMP% and the various other temp folders your applications might use (including browser caches and other application temporary files), delete all but the most recent system restore point, etc. Running Windows disk cleanup can sometimes find further files which you might overlook during a manual tidy up. I actually automate a bit of this with a script which deletes files from various temp folders on boot; I'll leave that up to you.
Next, defrag. Use a defragger like the free edition of Ultimate Defrag which can move files you commonly access to the faster parts of the disk. By default, it moves files which Windows uses on startup to the faster portion of the disk, but you can also tell it to move user files - for example, I ensure my Outlook PST files are contiguous and on the faster part of the disk.
Finally, if you're able to, move your pagefile to its own partition. This will ensure it doesn't get fragmented, resulting in a performance improvement in paging operations.
Doing this will eliminate "wear and tear" from your investigations by eliminating some of the cruft that Windows installations amass over time, allowing your efforts to focus on specific performance problems.
This is the final codelab in the Kotlin Bootcamp. In this codelab you learn about annotations and labeled breaks. You review lambdas and higher-order functions, which are key parts of Kotlin. You also learn more about inlining functions, and Single Abstract Method (SAM) interfaces. Finally, you learn more about the Kotlin Standard Library.
- The basics of lambdas and higher-order functions
What you'll learn
- The basics of annotations
- How to use labeled breaks
- More about higher-order functions
- About Single Abstract Method (SAM) interfaces
- About the Kotlin Standard Library
What you'll do
- Create a simple annotation.
- Use a labeled break.
- Review lambda functions in Kotlin.
- Use and create higher-order functions.
- Call some Single Abstract Method interfaces.
- Use some functions from the Kotlin Standard Library.
Annotations are a way of attaching metadata to code, and are not something specific to Kotlin. The annotations are read by the compiler and used to generate code or logic. Many frameworks, such as Ktor and Kotlinx, as well as Room, use annotations to configure how they run and interact with your code. You are unlikely to encounter any annotations until you start using frameworks, but it's useful to be able to know how to read an annotation.
There are also annotations that are available through the Kotlin standard library that control the way code is compiled. They're really useful if you're exporting Kotlin to Java code, but otherwise you don't need them that often.
Annotations go right before the thing that is annotated, and most things can be annotated—classes, functions, methods, and even control structures. Some annotations can take arguments.
Here is an example of some annotations.
@file:JvmName("InteropFish")
class InteropFish {
    companion object {
        @JvmStatic fun interop() {}
    }
}
This says that the exported name of this file is InteropFish, set with the JvmName annotation; the JvmName annotation takes an argument of "InteropFish". In the companion object, @JvmStatic tells Kotlin to make interop() a static function in InteropFish.
You can also create your own annotations, but this is mostly useful if you are writing a library that needs particular information about classes at runtime, that is, reflection.
Step 1: Create a new package and file
- Under src, create a new package, example.
- In example, create a new Kotlin file, Annotations.kt.
Step 2: Create your own annotation
- In Annotations.kt, create a Plant class with two methods, trim() and fertilize().

class Plant {
    fun trim() {}
    fun fertilize() {}
}
- Create a function that prints all the methods in a class. Use ::class to get information about a class at runtime. Use declaredMemberFunctions to get a list of the methods of a class. (To access this, you need to import kotlin.reflect.full.*)

import kotlin.reflect.full.* // required import

class Plant {
    fun trim() {}
    fun fertilize() {}
}

fun testAnnotations() {
    val classObj = Plant::class
    for (m in classObj.declaredMemberFunctions) {
        println(m.name)
    }
}
- Create a main() function to call your test routine. Run your program and observe the output.
fun main() { testAnnotations() }
⇒ trim fertilize
- Create a simple annotation, ImAPlant.
annotation class ImAPlant
This doesn't do anything other than say it is annotated.
- Add the annotation in front of your Plant class.

@ImAPlant
class Plant {
    ...
}
- Change testAnnotations() to print all the annotations of a class. Use annotations to get all the annotations of a class. Run your program and observe the result.
fun testAnnotations() {
    val plantObject = Plant::class
    for (a in plantObject.annotations) {
        println(a.annotationClass.simpleName)
    }
}
⇒ ImAPlant
- Change testAnnotations() to find the ImAPlant annotation. Use findAnnotation() to find a specific annotation. Run your program and observe the result.
fun testAnnotations() {
    val plantObject = Plant::class
    val myAnnotationObject = plantObject.findAnnotation<ImAPlant>()
    println(myAnnotationObject)
}
⇒ @example.ImAPlant()
Step 3: Create a targeted annotation
Annotations can target getters or setters. When they do, you can apply them with the @get: or @set: prefix. This comes up a lot when using frameworks with annotations.
- Declare two annotations: OnGet, which can only be applied to property getters, and OnSet, which can only be applied to property setters. Use @Target(AnnotationTarget.PROPERTY_GETTER) or PROPERTY_SETTER on each.

annotation class ImAPlant

@Target(AnnotationTarget.PROPERTY_GETTER)
annotation class OnGet

@Target(AnnotationTarget.PROPERTY_SETTER)
annotation class OnSet

@ImAPlant
class Plant {
    @get:OnGet
    val isGrowing: Boolean = true

    @set:OnSet
    var needsFood: Boolean = false
}
Annotations are really powerful for creating libraries that inspect things both at runtime and sometimes at compile time. However, typical application code just uses annotations provided by frameworks.
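Custom annotations can also carry arguments, which you can read back through reflection, just like ImAPlant above. Here is a small sketch; the Species annotation and Monstera class are made up for illustration:

```kotlin
import kotlin.reflect.full.findAnnotation

// A hypothetical annotation that takes an argument.
@Target(AnnotationTarget.CLASS)
annotation class Species(val latinName: String)

@Species(latinName = "Monstera deliciosa")
class Monstera

fun main() {
    // Read the argument back at runtime via reflection.
    val species = Monstera::class.findAnnotation<Species>()
    println(species?.latinName)
}
```

⇒ Monstera deliciosa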
Kotlin has several ways of controlling flow. You are already familiar with return, which returns from a function to its enclosing function. Using a break is like return, but for loops.

Kotlin gives you additional control over loops with what's called a labeled break. A break qualified with a label jumps to the execution point right after the loop marked with that label. This is particularly useful when dealing with nested loops.

Any expression in Kotlin may be marked with a label. Labels have the form of an identifier followed by the @ sign.
- In Annotations.kt, try out a labeled break by breaking out from an inner loop.

fun labels() {
    outerLoop@ for (i in 1..100) {
        print("$i ")
        for (j in 1..100) {
            if (i > 10) break@outerLoop // breaks to outer loop
        }
    }
}

fun main() {
    labels()
}
- Run your program and observe the output.
⇒ 1 2 3 4 5 6 7 8 9 10 11
Similarly, you can use a labeled continue. Instead of breaking out of the labeled loop, a labeled continue proceeds to the next iteration of the loop.
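A labeled continue works the same way as a labeled break; here is a minimal sketch you could drop into Annotations.kt:

```kotlin
fun labeledContinue() {
    outerLoop@ for (i in 1..3) {
        for (j in 1..3) {
            if (j == 2) continue@outerLoop // skip ahead to the next i
            print("$i,$j ")
        }
    }
    println()
}

fun main() {
    labeledContinue()
}
```

⇒ 1,1 2,1 3,1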
Lambdas are anonymous functions, which are functions with no name. You can assign them to variables and pass them as arguments to functions and methods. They are extremely useful.
Step 1: Create a simple lambda
- Start the REPL in IntelliJ IDEA, Tools > Kotlin > Kotlin REPL.
- Create a lambda with an argument, dirty: Int, that does a calculation, dividing dirty by 2. Assign the lambda to a variable, waterFilter.

val waterFilter = { dirty: Int -> dirty / 2 }
- Call waterFilter, passing a value of 30.
waterFilter(30)
⇒ res0: kotlin.Int = 15
Step 2: Create a filter lambda
- Still in the REPL, create a data class, Fish, with one property, name.
data class Fish(val name: String)
- Create a list of 3 Fish, with names Flipper, Moby Dick, and Dory.
val myFish = listOf(Fish("Flipper"), Fish("Moby Dick"), Fish("Dory"))
- Add a filter to check for names that contain the letter 'i'.
myFish.filter { it.name.contains("i")}
⇒ res3: kotlin.collections.List<Line_1.Fish> = [Fish(name=Flipper), Fish(name=Moby Dick)]
In the lambda expression, it refers to the current list element, and the filter is applied to each list element in turn.
- Apply joinToString() to the result, using ", " as the separator.
myFish.filter { it.name.contains("i")}.joinToString(", ") { it.name }
⇒ res4: kotlin.String = Flipper, Moby Dick
The joinToString() function creates a string by joining the filtered names, separated by the string specified. It is one of the many useful functions built into the Kotlin Standard Library.
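joinToString() combines nicely with other collection helpers such as map(). As a sketch, still using the myFish list from above, and taking advantage of its optional prefix and postfix parameters:

```kotlin
// Extract the names, keep only those longer than 4 characters,
// and join them into one bracketed string.
val summary = myFish
    .map { it.name }
    .filter { it.length > 4 }
    .joinToString(separator = ", ", prefix = "[", postfix = "]")
println(summary)
```

⇒ [Flipper, Moby Dick]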
Passing a lambda or other function as an argument to a function creates a higher-order function. The filter above is a simple example of this: filter() is a function, and you pass it a lambda that specifies how to process each element of the list.
Writing higher-order functions with extension lambdas is one of the most advanced parts of the Kotlin language. It takes a while to learn how to write them, but they are really convenient to use.
Step 1: Create a new class
- Within the example package, create a new Kotlin file, Fish.kt.
- In Fish.kt, create a data class Fish, with one property, name.
data class Fish (var name: String)
- Create a function fishExamples(). In fishExamples(), create a fish named "splashy", all lowercase.

fun fishExamples() {
    val fish = Fish("splashy") // all lowercase
}
- Create a main() function which calls fishExamples().

fun main() {
    fishExamples()
}
- Compile and run your program by clicking the green triangle to the left of main(). There is no output yet.
Step 2: Use a higher-order function
The with() function lets you make one or more references to an object or property in a more compact way, referring to the supplied object inside the lambda with this. with() is actually a higher-order function, and in the lambda you specify what to do with the supplied object.
- Use with() to capitalize the fish name in fishExamples(). Within the curly braces, this refers to the object passed to with().

fun fishExamples() {
    val fish = Fish("splashy") // all lowercase
    with (fish.name) {
        this.capitalize()
    }
}
- There is no output, so add a println() around it. And this is implicit and not needed, so you can remove it.

fun fishExamples() {
    val fish = Fish("splashy") // all lowercase
    with (fish.name) {
        println(capitalize())
    }
}
⇒ Splashy
Step 3: Create a higher-order function
Under the hood,
with() is a higher-order function. To see how this works, you can make your own greatly simplified version of
with() that just works for strings.
- In Fish.kt, define a function, myWith(), that takes two arguments: the object to operate on, and a function that defines the operation. The convention for naming the function argument is block. In this case, that function returns nothing, which is specified with Unit.

fun myWith(name: String, block: String.() -> Unit) {}
Inside myWith(), block() is now an extension function of String. The class being extended is often called the receiver object, so name is the receiver object in this case.
- In the body of myWith(), apply the passed-in function, block(), to the receiver object, name.

fun myWith(name: String, block: String.() -> Unit) {
    name.block()
}
- In fishExamples(), replace with() with myWith().

fun fishExamples() {
    val fish = Fish("splashy")   // all lowercase
    myWith (fish.name) {
        println(capitalize())
    }
}
fish.name is the name argument, and println(capitalize()) is the block function.
- Run the program, and it operates as before.
⇒ Splashy
Step 4: Explore more built-in extensions
The with() extension lambda is very useful, and is part of the Kotlin Standard Library. Here are a few of the others you might find handy: run(), apply(), and let().
The run() function is an extension that works with all types. It takes one lambda as its argument and returns the result of executing the lambda.
- In fishExamples(), call run() on fish to get the name.

fish.run { name }
This just returns the name property. You could assign that to a variable or print it. This isn't actually a useful example, as you could just access the property directly, but run() can be useful for more complicated expressions.
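To illustrate, here is a small sketch (not from the codelab, reusing the Fish class defined earlier) where run() builds a whole string from the receiver:

```kotlin
data class Fish(var name: String)

fun main() {
    val fish = Fish("splashy")
    // run() returns the result of its lambda, so the whole expression
    // below evaluates to a String built from the fish's properties.
    val label = fish.run {
        "${name.capitalize()} has ${name.length} letters"
    }
    println(label)  // => Splashy has 7 letters
}
```

Because run() returns the lambda's result, the whole fish.run { ... } expression can be assigned to a variable or printed directly.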
The apply() function is similar to run(), but it returns the changed object it was applied to instead of the result of the lambda. This can be useful for calling methods on a newly created object.
- Make a copy of fish and call apply() to set the name of the new copy.

val fish2 = Fish(name = "splashy").apply { name = "sharky" }
println(fish2.name)
⇒ sharky
The let() function is similar to apply(), but instead of returning the object, it returns the result of the lambda. This makes it useful for chaining manipulations together.
- Use let() to get the name of fish, capitalize it, concatenate another string to it, get the length of that result, add 31 to the length, then print the result.

println(fish.let { it.name.capitalize() }
    .let { it + "fish" }
    .let { it.length }
    .let { it + 31 })
⇒ 42
In this example, the object type referred to by it is Fish, then String, then String again, and finally Int.
- Print fish after calling let(), and you will see that it hasn't changed.

println(fish.let { it.name.capitalize() }
    .let { it + "fish" }
    .let { it.length }
    .let { it + 31 })
println(fish)

⇒ 42
Fish(name=splashy)
Lambdas and higher-order functions are really useful, but there is something you should know: lambdas are objects. A lambda expression is an instance of a Function interface, which is itself a subtype of Object. Consider the earlier example of myWith().
myWith(fish.name) { capitalize() }
The Function interface has a method, invoke(), which is overridden to call the lambda expression. Written out longhand, it would look something like the code below.
// actually creates an object that looks like this
myWith(fish.name, object : Function1<String, Unit> {
    override fun invoke(name: String) {
        name.capitalize()
    }
})
Normally this isn't a problem, because creating objects and calling functions doesn't incur much overhead, that is, memory and CPU time. But if you're defining something like myWith() that you use everywhere, the overhead could add up.
Kotlin provides inline as a way to handle this case and reduce runtime overhead by adding a bit more work for the compiler. (You learned a little about inline in the earlier lesson about reified types.) Marking a function as inline means that every time the function is called, the compiler will actually transform the source code to "inline" the function. That is, the compiler will replace each call to the function with the function's body, and replace the lambda with the instructions inside the lambda.
If myWith() in the above example is marked with inline:

inline fun myWith(name: String, block: String.() -> Unit) {
    name.block()
}

then when the compiler sees a call like:

myWith(fish.name) { capitalize() }
it is transformed into a direct call:
// with myWith() inline, this becomes fish.name.capitalize()
It is worth noting that inlining large functions does increase your code size, so inlining is best used for simple functions that are used many times, like myWith(). The extension functions from the libraries you learned about earlier are marked inline, so you don't have to worry about extra objects being created.
Single Abstract Method just means an interface with one method on it. Such interfaces are very common when using APIs written in the Java programming language, so there is an acronym for them: SAM. Some examples are Runnable, which has a single abstract method, run(), and Callable, which has a single abstract method, call().
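As a quick sketch (not part of the codelab), Kotlin's SAM conversion lets a lambda stand in for Callable's single call() method, just as it does for Runnable:

```kotlin
import java.util.concurrent.Callable

fun main() {
    // SAM conversion: the lambda body becomes the implementation of call().
    val answer: Callable<Int> = Callable { 6 * 7 }
    println(answer.call())  // => 42
}
```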
In Kotlin, you have to call functions that take SAMs as parameters all the time. Try the example below.
- Inside the example package, create a Java class, JavaRun, and paste the following into the file.

package example;

public class JavaRun {
    public static void runNow(Runnable runnable) {
        runnable.run();
    }
}
Kotlin lets you instantiate an object that implements an interface by preceding the type with object:. This is useful for passing parameters to SAMs.
- Back in Fish.kt, create a function runExample(), which creates a Runnable using object:. The object should implement run() by printing "I'm a Runnable".

fun runExample() {
    val runnable = object : Runnable {
        override fun run() {
            println("I'm a Runnable")
        }
    }
}
- Call JavaRun.runNow() with the object you created.

fun runExample() {
    val runnable = object : Runnable {
        override fun run() {
            println("I'm a Runnable")
        }
    }
    JavaRun.runNow(runnable)
}
- Call runExample() from main() and run the program.
⇒ I'm a Runnable
A lot of work to print something, but a good example of how a SAM works. Of course, Kotlin provides a simpler way to do this—use a lambda in place of the object to make this code a lot more compact.
- Remove the existing code in runExample(), change it to call runNow() with a lambda, and run the program.

fun runExample() {
    JavaRun.runNow({
        println("Passing a lambda as a Runnable")
    })
}
⇒ Passing a lambda as a Runnable
- You can make this even more concise using the last parameter call syntax, and get rid of the parentheses.
fun runExample() {
    JavaRun.runNow {
        println("Last parameter is a lambda as a Runnable")
    }
}
⇒ Last parameter is a lambda as a Runnable
That's the basics of a SAM, a Single Abstract Method. You can instantiate, override and make a call to a SAM with one line of code, using the pattern:
Class.singleAbstractMethod { lambda_of_override }
This lesson reviewed lambdas and went into more depth with higher-order functions—key parts of Kotlin. You also learned about annotations and labeled breaks.
- Use annotations to specify things to the compiler. For example:
@file:JvmName("Foo")
- Use labeled breaks to let your code exit from inside nested loops. For example:
if (i > 10) break@outerLoop // breaks to outerLoop label
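A minimal runnable version of that fragment (the loop bounds here are invented for illustration) looks like this:

```kotlin
fun main() {
    // The label names the outer loop so break can exit both loops at once.
    outerLoop@ for (i in 1..5) {
        for (j in 1..5) {
            if (i * j > 6) break@outerLoop
            println("$i * $j = ${i * j}")
        }
    }
    println("done")
}
```

Without the label, break would only leave the inner loop and the outer loop would keep running.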
- Lambdas can be very powerful when coupled with higher-order functions.
- Lambdas are objects. To avoid creating the object, you can mark the function with inline, and the compiler will put the contents of the lambda in the code directly.
- Use inline carefully, but it can help reduce resource usage by your program.
- SAM, Single Abstract Method, is a common pattern, and made simpler with lambdas. The basic pattern is:
Class.singleAbstractMethod { lambda_of_override }
- The Kotlin Standard Library provides numerous useful functions, including several SAMs, so get to know what's in it.
There's lots more to Kotlin than was covered in the course, but you now have the basics to begin developing your own Kotlin programs. Hopefully you're excited about this expressive language, and looking forward to creating more functionality while writing less code (especially if you're coming from the Java programming language). Practicing and learning as you go is the best way to become an expert in Kotlin, so continue to explore and learn about Kotlin on your own.
Kotlin documentation
If you want more information on any topic in this course, or if you get stuck, the Kotlin documentation is your best starting point.
- Kotlin coding conventions
- Kotlin idioms
- Annotations
- Reflection
- Labeled breaks
- Higher-order functions and lambdas
- Inline functions
Kotlin Standard Library
The Kotlin Standard Library provides numerous useful functions. Before you write your own function or interface, always check the Standard Library to see if someone has saved you some work. Check back occasionally, because new functionality is added frequently.
Question 1
In Kotlin, SAM stands for:
▢ Safe Argument Matching
▢ Simple Access Method
▢ Single Abstract Method
▢ Strategic Access Methodology
Question 2
Which one of the following is not a Kotlin Standard Library extension function?
▢ elvis()
▢ apply()
▢ run()
▢ with()
Question 3
Which one of the following is not true of lambdas in Kotlin?
▢ Lambdas are anonymous functions.
▢ Lambdas are objects unless inlined.
▢ Lambdas are resource intensive and shouldn't be used.
▢ Lambdas can be passed to other functions.
Question 4
Labels in Kotlin are indicated with an identifier followed by:
▢ :
▢ ::
▢ @:
▢ @
Congratulations! You've completed the Kotlin Bootcamp for Programmers codelab.
For an overview of the course, including links to other codelabs, see "Kotlin Bootcamp for Programmers: Welcome to the course."
If you're a Java programmer, you may be interested in the Refactoring to Kotlin codelab. The automated Java to Kotlin conversion tools cover the basics, but you can create more concise, robust code with a little extra work.
If you're interested in developing apps for Android, take a look at Android Kotlin Fundamentals. | https://developer.android.com/codelabs/kotlin-bootcamp-sams?hl=pt-br | CC-MAIN-2021-31 | refinedweb | 3,046 | 57.87 |
\ , _ _ |,' _ ' / (_)\\|`,_;,' ,', ,'- \ ,#&&&&&&&&--'| \_;#&&&& '- #&%%%%%%%%%&& #&&%%%& #&&&&&&&&&& #&oo%&&&&&%%%o& #&&%%%%%& #&%%%%%%%%%&& #&ooo& ##&oooo&#&&o%ooo%%& #&%%%%%%%%%%&& #&ooo&&&&&ooooo&#&&ooooooo& #&oo%&&&%%%%%%& #&%ooooooooooo&#&&ooo%&%ooo&#&ooo&##&oooo%& #&%%%%&&&&&!%%&#&&%%%%& &%%%%&&ooo& #&ooooo& #&%%%%& ##&%%&#&&%%%&&&&&%%%&#&oo&&&oooooo& #&%%%%&&&&&%%%%&&%%%%%%%%%%%%%&&o%%%%%%%%%%& #&&%%%%%%%%%%%%&&%%%%&&&&%%%%%&&%%%%%%%%%%&$ #&&&&&&%%%%%%%&&&%%%&####&%%%%&#&%%%%%%%%&$ ##$$$$$&&&&&&&$&&&&&## ##&&&&&#&&&&&&&&&$ $$$$$$$$$$$$ $$$$$ ##$$$$#$$$$$$$$$ #&&&&&&&&&&& # &&&& #&&&&& #&&&&&&&& #&%%%%%%%%%%&&&%%%& #&%%%& #&%%%%%%%& #&%%%%%%%%%%%&&%%%& #&%%%& #&%%&&&%%%& #&%oooo&&&&&&$&%%o& #&ooo& #&o%& &%%%& #&oooo&$ #&ooooo& #&oooo& #&oo&&&oo%& #&ooo%%&&&&& #&oooooo&&&&oooooo&#&oooooo&& #&%%%%%%%%%& #&oooooooooooooooo&#&%o&&ooo%& #&%%%%&&&&&$ #&oo%%%%%%%%%%%%%%&#&%%&#&%%%%& #&%%%%&$$$$ #&%%%%%%%%%%%%%%%%&#&%%& #&%%%& #&%%%%%& #&%%%%%%%%%%%%%%&##&%%& #&%%& #&%%%%%& #&%%%%%%%%%%%%&$##&%%& #&%%& #&&&&&&& #&&&&&&&&&&&&$ ##&&&& #&&&& #$$$$$$$$ #$$$$$$$$$$$ #$$$$$ #$$$$ #&&&&&&&&&& #&&&&&##&&&& &&&& #&%%%%%%%%%&& #&%%%%%&##&%%%& &%%%& #&%%%%%%%%%%%&&#&%%%%%%&,##&%%%&&&%%%& #&oo%&&&%%%%%%%&%%%%%%%%& ##&ooooooo& #&ooo&##&ooooo%&%%ooooo%o&,##&ooooo& #&ooo& #&oooooo&oooo&oooo& #&ooooo& #&oo&&&ooooooo&ooo& &ooo&, #&%%%& #&o%%%%%%%%%%%&%o&&&&&%oo& #&%%%& #&%%%%%%%%%%%&$%%%%%%%%%%&, #&%%%& #&%%%%%%%%%&$%%%%&&&%%%%%& #&%%%& #&&&&&&&&&&$&&&&& ##&&&&&& #&&&&& #$$$$$$$$$$#$$$$$ #$$$$$$ #$$$$$ "Conker's Bad Fur Day" o------------------------------------------------------------------------------o | ~~FAQ/Walkthrough~~ | | ~~Created October 7, 2007~~ | +-----------------------+----------------------------+--------------+----------+ | RATED M (FOR MATURE) | "By KrocTheDoc" | Version 1.00 |NINTENDO64| +-----------------------+----------------------------+--------------+----------+ ==========================----------------------------========================== | 
=~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | | Table of Contents | | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +----+-----------------------+--------+----------------------------------------+ |Ch.#| Name/Description | Search | Description | +----+-----------------------+--------+----------------------------------------+ |(01)| Version History |VERSIONS| Descriptions of updates to the FAQ. | +----+-----------------------+--------+----------------------------------------+ |(02)| Introduction | INTROD | The introduction to this FAQ. | +----+-----------------------+--------+----------------------------------------+ |(03)| Game Basics | BASICS | Basic information about Conker's BFD. | +----+-----------------------+--------+----------------------------------------+ | 3a | Story | STOR | The story behind the game. | | 3b | Controls | CTRL | How to control Conker. | | 3c | Moves | MOVE | Conker's techniques and skills. | | 3d | Items/Weapons | ITEM | The items/weapons found in the game. | | 3e | Enemy List | ENEM | A list of the enemies in Conker's BFD. | | 3f | Character List | CHARAC | A list of characters in Conker's BFD. | +----+-----------------------+--------+----------------------------------------+ |(04)| I. Hungover |CHAPTER1| The game's training level. | +----+-----------------------+--------+----------------------------------------+ | 4a | Scaredy Birdy |CHPT1P01| Curing Conker's hangover. | | 4b | Pan Handled |CHPT1P02| Getting the frying pan. | | 4c | Gargoyle |CHPT1P03| Getting rid of the gargoyle. | +----+-----------------------+--------+----------------------------------------+ |(05)| II. Windy (Pt. 1) |CHPT2PT1| The first part of chapter two. | +----+-----------------------+--------+----------------------------------------+ | 5a | Mrs Bee |CHPT2P01| Returning the beehive. | +----+-----------------------+--------+----------------------------------------+ |(06)| III. 
Barn Boys |CHAPTER3| The third chapter of the game. | +----+-----------------------+--------+----------------------------------------+ | 6a | Marvin |CHPT3P01| Feeding marvin the mouse cheese. | | 6b | Mad Pitchfork |CHPT3P02| Meeting Franky the pitchfork. | | 6c | Sunny Days |CHPT3P03| Making bees tickle a sunflower. | | 6d | Barry + Co |CHPT3P04| Rescuing Franky. | | 6e | Buff You |CHPT3P05| Attacking the haystack. | | 6f | Haybot Wars |CHPT3P06| Defeating the robot. | | 6g | Frying Tonight |CHPT3P07| Getting out of the robot area. | | 6h | Slam Dunk |CHPT3P08| Opening a gate in the area. | +----+-----------------------+--------+----------------------------------------+ |(07)| II. Windy (Pt. 2) |CHPT2PT2| The second part of chapter two. | +----+-----------------------+--------+----------------------------------------+ | 7a | Poo Cabin |CHPT2P02| Heading through poo cabin. | | 7b | Pruned |CHPT2P03| Pouring prune joice into a trough. | | 7c | Yee Haa! |CHPT2P04| Killing the cows. | | 7d | Sewage Sucks |CHPT2P05| Swimming through the pile of poo. | | 7e | Great Balls of Poo |CHPT2P06| Putting your poo balls to work. | +----+-----------------------+--------+----------------------------------------+ |(08)| IV. Bats Tower |CHAPTER4| The fourth chapter of the game. | +----+-----------------------+--------+----------------------------------------+ | 8a | Mrs. Catfish |CHPT4P01| Meeting the catfish. | | 8b | Barry's Mate |CHPT4P02| Heading up Bats Tower. | | 8c | Cogs' Revenge |CHPT4P03| Finding the missing cogs. | | 8d | The Combination |CHPT4P04| Opening the safe. | | 8e | Blast Doors |CHPT4P05| Doing a pinwheel challenge. | | 8f | Clang's Lair |CHPT4P06| Swimming through an underwater tunnel. | | 8g | Pisstastic |CHPT4P07| Defeating the flames. | | 8h | Brass Monkey |CHPT4P08| Defeating the boiler. | | 8i | Bullfish's Revenge |CHPT4P09| Swimming back to the start. | +----+-----------------------+--------+----------------------------------------+ |(09)| V. 
Sloprano |CHAPTER5| The fifth chapter of the game. | +----+-----------------------+--------+----------------------------------------+ | 9a | Corn Off the Cob |CHPT5P01| Feeding the Great Mighty Poo corn. | | 9b | Sweet Melody |CHPT5P02| Defeating the Great Mighty Poo. | | 9c | U-Bend Blues |CHPT5P03| Swimming past the spinning blades. | | 9d | The Bluff |CHPT5P04| Getting past the two guards. | +----+-----------------------+--------+----------------------------------------+ |(10)| VI. Uga Buga |CHAPTER6| The sixth chapter of the game. | +----+-----------------------+--------+----------------------------------------+ |10a | Drunken Gits |CHPT6P01| Getting into the dino-idol room. | |10b | Sacrifice |CHPT6P02| Sacrificing a dino to the idol. | |10c | Phlegm |CHPT6P03| Going through the dinosaur idol. | |10d | Worship |CHPT6P04| Getting rid of the rock guys. | |10e | Rock Solid |CHPT6P05| Breaking Berri out of her cage. | |10f | Bomb Run |CHPT6P06| Taking a bomb through the idol. | |10g | Mugged |CHPT6P07| Getting your money back from cavemen. | |10h | Raptor Food |CHPT6P08| Eating cavemen with a raptor. | |10i | Buga the Knut |CHPT6P09| Defeating Buga the Knut. | +----+-----------------------+--------+----------------------------------------+ |(11)| II. Windy (Pt. 3) |CHAPTER2| The third part of chapter three. | +----+-----------------------+--------+----------------------------------------+ |11a | Wasps' Revenge |CHPT2P07| Returning the hive again. | |11b | Mr. Barrel |CHPT2P08| Riding Mr. Barrel down the hill. | +----+-----------------------+--------+----------------------------------------+ |(12)| VII. Spooky |CHAPTER7| The seventh chapter of the game. | +----+-----------------------+--------+----------------------------------------+ |12a | Mr Death |CHPT7P01| Getting a shotgun and killing zombies. | |12b | Count Batula |CHPT7P02| Dropping mice in Batula's grinder. | |12c | Zombies |CHPT7P03| Getting out of Count Batula's mansion. | |12d | Mr. 
Barrel |CHPT7P04| Riding Mr. Barrel out of the house. | +----+-----------------------+--------+----------------------------------------+ |(13)| VIII. It's War |CHAPTER8| The eight chapter of the game. | +----+-----------------------+--------+----------------------------------------+ |13a | It's War |CHPT8P01| Meeting the general. | |13b | Power's Off |CHPT8P02| Restoring the area's power. | |13c | TNT |CHPT8P03| Blowing up the giant jet. | |13d | The Assault |CHPT8P04| Heading through the beach. | |13e | Sole Survivor |CHPT8P05| Getting two guns from a dead soldier. | |13f | Casualty Dept. |CHPT8P06| Heading through a big building. | |13g | Saving Private Rodent |CHPT8P07| Saving Rodent, a soldier. | |13h | Chemical Warfare |CHPT8P08| Going through the area with Rodent. | |13i | The Tower |CHPT8P09| Destroying the tower's supports. | |13j | Little Girl |CHPT8P10| Destroying submarines firing missiles. | |13k | The Experiment |CHPT8P11| Killing the Little Girl's robot. | |13l | Countdown |CHPT8P12| Heading through lasers and the beach. | |13m | Peace at Last! |CHPT8P13| Leaving the war chapter. | +----+-----------------------+--------+----------------------------------------+ |(14)| IX. Heist |CHAPTER9| The ninth chapter of the game. | +----+-----------------------+--------+----------------------------------------+ |14a | The Windmill's Dead |CHPT9P01| Going in the remains of the windmill. | |14b | Enter the Vertex |CHPT9P02| Matrixing your way through the bank. | |14c | The Vault |CHPT9P03| Defeating the final boss. | |14d | End Cutscene |CHPT9P04| The ending of Conker's Bad Fur Day. | +----+-----------------------+--------+----------------------------------------+ |(15)| Appendicies | APPEND | The last chapters of the guide. | +----+-----------------------+--------+----------------------------------------+ |15a | Tail Locations | TAIL | Where the 1-up tails area. | |15b | Cheat Codes | CHEATS | The cheat codes of the game. 
| |15c | Legal Disclaimer | LEGALD | The legal notice of this FAQ. | |15d | Contact Information |CONTACTI| READ THIS BEFORE CONTACTING! | |15e | Credits | CREDIT | I would like to thank... | +----+-----------------------+--------+----------------------------------------+ ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (01) | Version History | VERS | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +------------------+--------------+ | October 13, 2007 | Version 1.00 | +------------------+--------------+--------------------------------------------- | The FAQ is complete, meaning all the cash is in there and | the guides for all nine chapters are complete. The extra | chapters are also done. +------------------------------------------------------------ +---------------+--------------+ | March 1, 2008 | Version 1.05 | +---------------+--------------+------------------------------------------------ | I added two extra tails I missed. These were pointed out to me | thanks to cellosolo777@gmail.com. Thanks cello! +--------------------------------------------------------------- ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (02) | Introduction | INTROD | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- Welcome to my FAQ for Conker's Bad Fur Day. This is my eighth FAQ for GameFAQs and my fifth Nintendo 64 guide. Rareware has made yet another top-quality, hilarious game for the Nintendo 64 FAQ. Originally, the game was to be called Twelve Tails and was going to be another 3d platform like Banjo-Kazooie. When criticism claiming it would be "another cutesy platformer" reeled its ugly head, the game was transformed into a poo humor-filled, violent, swearfest. And thus, Conker's Bad Fur Day was born. 
Before the guide starts, let me dish out a little warning. If you're not at least, shall we say, 14 years old (or at least mature enough to be able to watch shows like Family Guy), then **** off, as any character in this game would say. Parents, if you don't like the sound of BFD, don't let your kids play it. It's not the type of thing you want your little boy to be playing. Conker's Bad Fur Day is totally different from N64 games like Super Mario 64 and Banjo-Kazooie. It's not where there are worlds you must open by collecting items , working your way towards the boss. BFD is much more open-ended. Basically, you progress through the game doing various tasks for characters, rather than collecting huge amounts of garbage. In fact, the only thing you have to collect is money. The VERSION HISTORY chapter explains changes made in each update of the FAQ. The GAME BASICS tells you basic information about the game. If you read it, you'll be able to skip a lot of training/cut-scenes that tell you controls and other such information. The WALKTHROUGH explains how to complete every task in the game and will tell you where all the money wads are as well. The APPENDICIES includes credit and legal information as well as extra information like cheat codes. Enjoy! ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (03) | Game Basics | BASICS | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 3a Story STOR | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+." stand once more. "I'm definitely going now. Good-bye!" "I think you just hit the nail on the head," said Conker,... 
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 3b Controls CONTROLS | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Control Stick ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~Move~~ The control stick is used to manuever Conker anywhere within 360 degrees. This allows you to move and head to different areas. Holding the control stick lightly will cause Conker to tip-toe. Push it further and he will walk. Hold it all the way and Donkey will run. Running is the most efficient way to explore the worlds, so you should always do so. However, when moving across narrow bridges, tip-toeing is safer and more advisable. Underwater, the control stick allows you to move as well. However, it is much more difficult to swim to precise locations or to objects underwater. Pushing the control stick up causes Conker to go down while holding it down causes him to go up. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A Button ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~Jump~~ Like in pretty much all N64 adventure games, the A button is used to jump. Press A and Conker will jump, allowing you to reach higher ledges. You'll need to use this pretty much all the time throughout the game. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ B Button ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~Attack/Dive~~. While on the water (after swallowing the confidence pills in Windy part 2), press B to submerge. Conker's face will appear on the screen and his expression will become worse and worse as you stay under longer and longer. A little bit after his face becomes blue you'll start losing your health, so you'll want to be extremely careful. 
Sometimes you'll find bubble pockets to restore your air. To swim around underwater, move the control stick while holding B. The last use of the B button is. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Z Button ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~Crouch~~ Hold the Z button while on the land and Conker will crouch. You can then press the A button to make him do a highjump. While crouching, you can't do anything else but move the control stick to turn in a circle. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C Buttons ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~Camera~~ Conker's Bad Fur Day has one of the simplest camera systems, which is probably all for the better. Hold left or right C to rotate the camera sideways. If you press up C, you can toggle between two different views. One is further away from Conker, letting you see more at once. If you hold the down C button, you can center the camera behind Conker, making things a bit easier to see. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ L Button ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~Skip Cutscenes~~ After watching a cut-scene once, you can press the L button to skip it (a select few cut-scenes cannot be skipped). If you haven't seen the scene before, you won't be able to skip it. Also, some of the context zones come with manuals to explain the more complex stuff. If you want to use the zone for a second time but need the manual again, press L and B at the same time. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ R Button ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~First-Person~~ If you hold the R button, you'll be able to see a first-person view of the area. 
If you're unfamilar with a level, this is a great way to look around and see where you're going. You can't do anything else while holding R. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 3c Moves MOVE | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ High Jump ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~Z+A~~ Conker will remember this move at the top of the waterfall in the training area. While on the ground, hold Z to crouch and press A. Conker will do an extra high jump. You can now reach much higher ledges and platforms. You'll need to do this a lot throughout the game, as the normal jump won't cut it for many ledges. Unlike the high jump in a lot of other N64 games, you can get a bit of distance, but usually you'll be using... ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Helicoptery Tail Thing ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~A+A~~ Conker will remember this move at the top of the waterfall in the training area. Press A to jump, then press and hold A again to make Conker spin his tail. You can now cross rather large gaps as the tailspin will make Conker float for a while. After a bit, his tail will slow down and stop. Be careful, because if you fail to cross the gap before Conker stops spinning his tail, you'll fall straight down and possibly die. While the most important use of the tailspin is to cross gaps, you can also use it to break falls. Hold A just before landing from a fall and you'll float safely down. If you time it incorrectly, you'll get hurt anyway. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Crawl ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~Z+Joystick~~ This move makes Conker look like a baby, which he is sometimes. Hold Z to crouch down, then start moving the control stick. Conker will start crawling around like a little baby. Since he moves EXTREMELY slowly while crawling, you can use this move to get across very narrow ledges. If you have difficulty controlling Conker on ledges, the crawl may make it a bit easier. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 3d Items/Weapons ITEM | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ ================================================================================ Collectible Items ================================================================================ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Anitgravity Chocolate ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ After leaving the training area and entering the Windy chapter for the first time, you'll start seeing pieces of chocolate all over the place. Pick them up to restore your life bar (you only get six pieces throughout the whole game). If an enemy hits you, you'll lose one or more pieces of chocolate. After losing them all, you'll die. Pieces of chocolate can be found all over the place, so stocking up isn't hard. A short while after picking up a piece of chocolate, the piece will respawn, so you can always head back and refill your health with ease. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Cash ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Throughout the game, you'll encounter packs of bills with eyes. Walk near one and Conker's eyes will turn into dollar signs. 
Conker will grab and pocket the bills and say something funny. You'll need money to pay some of the characters in the game for their services. Fortunately for you, Conker whistles his money back to him, so you'll never really lose any bills. You can find cash in alcoves and on roofs as well as in many other places. If you press the start button, you can view how much money you have. The cash wads usually come in packs of 100. However, one pack is only worth 10 dollars. ================================================================================ Pads, Switches, and Misc. ================================================================================ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Context Buttons/Zones ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ After meeting Mr. Birdy, you. ================================================================================ Weapons ================================================================================ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Frying Pan ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Found in: Room with key in training area. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Slingshot ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Found in: Several locations You'll have to pay Birdy ten dollars for this (though Conker whistles it back). Press B while on the context button to take them out and shoot the nearby beetles. To shoot, press Z. Use the control stick to aim. You have an unlimited amount of shots. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Anvil ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Found in: Various locations Sometimes, usually when Conker jumps off of a seemingly dangerous plank or ledge, a lightbulb will appear. Press B to turn into an anvil and crash downward. This can sometimes hurt enemies, smash open ledges, or activate switches. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Flamethrower ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Found in: Top of Franky's barn/inside Bats Tower This is your weapon that you'll use against Barry and the other bats in certain areas. Press B when the lightbulb appears just after you hear a squeak and Conker will torch the bat with a flamethrower. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Knives ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Found in: Area above Franky's barn/under the barn after killing the Haybot While in the correct context zones in the barn areas, press B to take out some knives. You have an unlimited supply. Press Z to throw them, allowing you to cut ropes and cords. Use the control stick to aim. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Toilet Paper ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Found in: Great Mighty Poo battle In the Great Mighty Poo battle, stand on the context buttons not covered with poo and press B to take out a roll of toilet paper. When the Great Mighty Poo opens his mouth to sing, press B to throw the roll into his mouth and cause damage. If you want to put the toilet paper away, press Z. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shotgun
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Found in: Beginning of Spooky chapter

When you meet Gregg trying to kill the catfish in the Spooky chapter, he'll
give you a shotgun to deal with the zombies up ahead. Press B to take it out,
hold R to go into aiming mode, and press Z to shoot. You can't hurt yourself,
so it's all good. Press A to reload. While in aiming mode, hold Z to use a
laser-targeting feature. If you want to put the gun away, press B.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Crossbow
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Found in: Count Batula's mansion

After killing Count Batula, press B while standing on context buttons in a
couple places in the mansion to take out a bow, letting you kill bats. Press
Z to fire, hold Z to use the laser targeting, and press B to put the bow
away.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dual Shotguns
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Found in: Beach in It's War! chapter

At the end of the beach in the It's War! chapter, Conker will loot two
shotguns off a dead soldier's body. Press B to take out the guns and Z to
shoot. You can hold R to aim and hold Z to use the laser targeting. Press B
to put the guns away. The shotguns will let you kill the Tediz throughout the
chapter.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Machine Gun
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Found in: Room in It's War! chapter

You'll have to shoot the Tediz operating the machine gun chair with a
bazooka. Then, you can hop into the chair and press Z to fire powerful
bullets at the Tediz while using the control stick to turn the chair.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Bazooka ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Found in: Several times in It's War! chapter There are a couple parts of chapter eight in which you'll get to use a bazooka. When you can, press B to take it out/put it away, hold R to aim, and press Z to fire. The Bazooka fires considerably slower than the shotgun. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 3e Enemy List ENEM | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Gargoyle ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : * You'll encounter a huge, living gargoyle at the top of the first chapter. Once you get the frying pan, whack him to make him fall off the bridge, allowing you to continue. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Wasps ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : ** You'll find a group of wasps guarding the beehive in the Windy chapter. If you want to return the hive, you'll have to keep moving or the wasps will sting you and force you to start over. There are also wasps guarding the ladders leading up to the top of the Barn Boys chapter. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dung Beetles ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : ** Dung beetles hang out on ledges in the Windy area and around the ledge leading up Poo Mountain. The mountain ones you can just avoid, but the ledge ones you must shoot with your slingshot to head up the slope they're guarding. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Worms ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : *** Worms will pop out of nowhere on the path leading up to the windmill in Windy and the path leading to Count Batula's house. If you run quickly, you can easily run into them and get hurt, so move slowly and highjump over them when they pop out of the ground. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Megablox/Crates ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : ** Large metal blocks guard the path to the cheese farm in the Barn Boys chapter. Wait for them to jump, then rush under them before they land to avoid getting hurt. You'll find a crate variation of these at the start of the It's War! chapter. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Barry and Buddies ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : *** Along very narrow ledges at the top of the Barn Boys barn and the path leading up the inside of Bats Tower, you'll encounter Barry the bat and his bat friends. When you reach the context zone, press B after you hear a squeak to torch the bats and kill them. Larger bats are also found guarding keys in Count Batula's house after killing the count. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Bullfish ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : ** In the Bats Tower chapter, you'll find a bullfish at the end of the river. If you get too close, he'll bite you. After he gets loose later on, you'll have to swim through the river before he bites you. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Imp
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : **

Imps are found in various locations, mostly in Bats Tower and Windy. They
look like green dudes in metallic armor. You can't kill them with your frying
pan or anything, so just avoid them (sometimes you can kill them with a
special weapon or item).

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Clang
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : ****

Clangs, which are eyeballs swimming around in Clang's lair, an area in Bats
Tower, will try to bite you. You can point your flashlight helmet at them to
blind them for a VERY short time, so swim past them quickly.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fire Imps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : **

Fire Imps are found at the end of the safe in the Bats Tower chapter. To kill
them, drink beer from the keg and press B to start pissing on them. They'll
try to run under you and burn you, so you'll have to be careful. If you want
to extend your piss stream, hold Z. Use the control stick to turn around and
aim. After a while, Conker will get a hangover, so you'll need to get some
alka-seltzer from the medicine cabinet.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Uga Buga
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : *

Uga Bugas are the cavemen patrolling the Uga Buga chapter of the game.
There's no way to kill them, so just avoid them (it's easy as they're quite
slow).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Raptor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : *****

Raptors patrol the building at the start of the Uga Buga chapter. You can
only avoid them. You'll also be attacked by one (though you'll have to
hypnotize it to eat the cavemen) in an arena at the end of the chapter. Press
B to headbutt. Walk into a caveman to grab him by the mouth, then press Z to
swallow.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Zombie
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : ****

Zombies are found in the Spooky chapter (in the graveyard and in Count
Batula's house after destroying said count). They'll try to run towards you
and attack you. The only way to kill them is to use the shotgun Gregg gives
you to shoot them in the head. If you shoot them anywhere else, they'll
merely bounce back.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Villagers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : ***

Villagers are mice patrolling Count Batula's house. As a bat (hold B to fly),
press Z to drop bat poo on them. This will stun them for a while and allow
you to pick them up. You can then fly to the room where Count Batula is
hanging and drop them in the grinder, allowing Batula to feed on their blood.
Once he's had too much, he'll fall off his cord and be ground up himself.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tediz
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : ****

Tediz are evil teddy bears found throughout the It's War! chapter. Use your
dual shotguns to shoot them in the head and kill them. If you shoot them
anywhere else, they won't die.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Submarines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : ****

You'll find subs in the waters surrounding the area where the little girl is
in the It's War! chapter. Shoot them with a bazooka while they're not firing
missiles at you to get rid of them.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Bank Guards
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : ***

Bank Guards are found throughout the Feral Reserve Bank in the Heist chapter.
Use the B button pads behind the pillars to Matrix glide through the air and
shoot them with a crosshair in slow motion. Of course, you can be shot too.
Since you have no control over Conker's pre-determined flying motion, you
have to press B on the pad at the right time.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
|                           3f Character List                           CHARAC |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Conker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Conker is a heavy-drinking, materialistic red squirrel that you play as in
the game. He drinks beer, gets the hot girls, and grabs all the money he can
find.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Berri
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Berri, Conker's curvaceous girlfriend, is seen in the opening sequence. You
don't see her again until a cut-scene about halfway through the game. In the
Rock Solid club, you'll have to break her cage open. The last time you see
her is at the Feral Reserve Bank, where she'll assist you in robbing said
bank.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Panther King
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Panther King, ruler of all the land and master of his castle, treats his
subjects with cruelty. His table, which is missing a leg, falls over whenever
he places his milk on it. This creates a problem for Conker because a red
squirrel is the only thing that can possibly fill the gap. Of course, a rich
king cannot be arsed to go out and buy another damn table.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Professor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The professor, a personal scientist of the Panther King, lives only to sort
out the king's problems with the power of science. He flies around in a
hovering chair with no legs. The professor is the one who determines that the
only thing that can fill the gap in the king's table is a red squirrel.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Panther King's Guards
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Panther King's guards watch over the entrance to the Uga Buga chapter and
appear in a few cut-scenes. Conker fools one of them into thinking that he is
an elephant, allowing him to get past them.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Birdy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Birdy is a scarecrow found in a couple places in the beginning of the game.
He will teach you how to use context pads and sell you a manual for the more
complex stuff.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Gregg the Grim Reaper ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Gregg, the very short grim reaper, will appear when you die to explain the life system (grab squirrel tails for extra lives to prevent yourself from dying). He also appears in the Spooky chapter to give you a shotgun. ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (04) | I. Hungover | CHAPTER1 | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 4a Scaredy Birdy CHPT1P01 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ In the opening cut-scene, Conker leaves his girlfriend, Berri, a trick message saying that he's with some guys who are off to "fight some war or something". Conker proceeds to get drunk and wander off in the direction opposite his home. In a throne room of a distant castle, the Panther King is mad because his table has a missing leg, causing his milk to spill. Conker wakes up in some forest or something with a horrible hangover, giving you your first objective. At the start, head forward and around the farm patch's fence. You'll move very slowly while hungover and you can barely jump either. At the other side, go through the opening in the fence and head to the middle of the patch, where you will meet Birdy the scarecrow. Conker asks him for help, to which Birdy replies that Conker should step over behind him. Head to the B button pad behind Birdy and the scarecrow will explain that they're called context sensitive. He'll tell you to press B when a lightbulb appears and makes a ting noise. A lightbulb will appear above Conker's head, so press B. Conker takes out a beer, which Birdy snatches and drinks. 
He'll tell you to try it again outside the patch or do it again here. You can press B two more times on the same pad to take out helium and beer (Birdy will go to bed after drinking both). Head back outside the farm patch using the exit Birdy opens for you and press B on the pad. Conker takes out an alka-seltzer, which cures his hangover. He'll realize that these give him just what he needs in that moment in time. Understand now? It's sensitive to the context of what you need. Clevuh. Conker tells you you can skip cut-scenes, assuming you've already seen them, by pressing L. You can now run at normal speed and press A to jump at a normal height. Run over to the water and swim to the ledge at the top of the waterfall. You can't get hurt at all in the training area, so don't worry if you fall off the ledge. Conker will remember that you can hold Z and press A to do a highjump and press A while in the air to do a tailspin, letting you hover to cross gaps and break falls. Be warned, Conker can't tailspin forever, so you can fall. Okay, tailspin to the log nearby. Head up the path and tailspin across a few more gaps. At the top, you'll reach a bridge. Ignore it for now and highjump to pull the key near the bridge, opening a door. Head back across the gaps and go into the door once you find it. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 4b Pan Handled CHPT1P02 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Chase the key around in this room until the door closes, at which point Conker will remember that you can press B to swipe a frying pan. You'll stop for a second when you do so, so you'll need to be near your target. Run up to the key and smash it with the pan to stun it, then pick it up and take it towards the door to open it up. Head back outside the door. Tailspin across the gaps until you reach the top of the path, then head to the bridge, where you'll meet a gargoyle. 
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
|                            4c Gargoyle                              CHPT1P03 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

The gargoyle will explain that he's been sitting on a piece of gothic
architecture for 200 years and only just got comfy on the bridge, so he's not
moving to let you cross. Conker will remark that it's a bit early in the day
to be talking about gothic architecture, so the gargoyle says if he comes a
bit closer, they can discuss things of another nature.

Approach the gargoyle and press B to whack it with your frying pan. The
gargoyle will laugh at the fact that you actually tried a frying pan, but
this causes him to lose his balance and fall off the bridge to the bottom of
the waterfall. He dies, and a huge boulder falls and blocks the opening at
the end of the bridge.

Highjump onto the top of the rock, then tailspin to the wooden ledge on your
right. Use the context button to take out some dynamite with a plunger.
Conker will use it to blow up the rocks, so tailspin back and head into the
passage.

==========================----------------------------==========================
|      |  =~=~=~=~=~=~=~=~=~=~=~=~=~=~  |          |
| (05) |  II. Windy (Pt. 1)             | CHPT2PT1 |
|      |  ~=~=~=~=~=~=~=~=~=~=~=~=~=~=  |          |
--------------------------============================--------------------------

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
|                             5a Mrs Bee                              CHPT2P01 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

--------------------------------------------------------------------------------
NOTE: This is the first area where Conker will be able to collect health.
However, you won't be able to get extra lives until you die. This FAQ assumes
you have already died or killed yourself, allowing you to collect squirrel's
tails.
--------------------------------------------------------------------------------

After entering the chapter, a cut-scene will show a professor in a floating
chair going to the Panther King's throne room. The king demands that the
professor fix his three-legged table. The professor says he will do what he
can, but will require a bit of time. The king tells the professor not to take
too long and threatens him with duct tape. In the professor's lab, he swears
about the king and looks for something that might help. He knocks a bunch of
antigravity chocolate out the window.

The chocolate will land directly in front of Conker. Head down the path and
collect the chocolate to boost your health up to the maximum of six. At the
bottom of the path you'll find a sign with two directions, naughty and nice.
Behind the sign is a squirrel tail. Follow the nice path to the right and
you'll meet Mrs. Bee. She's crying because a group of wasps stole the
beehive. Conker agrees to get it for her and asks where it is. Mrs. Bee
merely tells him to follow the signs.

Head backward and take the naughty path this time. Go up the slope to where
you'll find some yellow goop surrounding the grass. You'll walk slowly on
this. Head up the path until you reach an area with a beehive. Grab it and
three wasps will come out to try and attack you. Rush down the path (don't
stop moving baby, don't stop moving, wiggle, wiggle or you'll get hurt and
the wasps will most likely steal the hive back) and avoid the sticky crap.
Continue down the path until you reach Mrs. Bee again.

Conker will throw the hive to her and she'll fly into it. From the hive, she
will shoot the three wasps and kill them. Mrs. Bee thanks "Mr. Squirrel" (get
used to this, as Conker doesn't tell anyone his name) for his efforts and
comments that none of it would have happened if it weren't for her
good-for-nothing husband, who, not surprisingly, ran off with another woman.
As a reward for your good services to the bee community, Mrs. Bee presents
Conker with...a big fat wad of CASH ($100).

After you have your money, head over to the sign again. Follow the naughty
path until you reach the place where you found the hive. Tailspin across the
river below to where you'll find a gray metallic B button pad. The beetles on
the ledges up the slope are angered by your presence, but decide to wait for
you. Stand on the pad and Birdy will appear. He'll tell you that you need a
manual to use this particular B button pad. However, it'll cost you ten
dollars. Birdy will hand you the manual after Conker forks over ten bucks. As
Birdy leaves, the bills run back to Conker.

Now press B and Conker will start reading the manual. Manuals tell you how to
use the more complex zones. If you use the zone again and want to read the
manual, press L and B; to skip it, just press B. Use the control stick to aim
your slingshot and press Z to fire.

Now shoot all the beetles on the ledges on the slope ahead. After shooting
one, it will fly up. Shoot it again to kill it before it hurts you. Once
they're all gone, a door at the top of the slope opens. Head up the slope and
Conker will discover a fork in the path. Take the right path.

==========================----------------------------==========================
|      |  =~=~=~=~=~=~=~=~=~=~=~=~=~=~  |          |
| (06) |  III. Barn Boys                | CHAPTER3 |
|      |  ~=~=~=~=~=~=~=~=~=~=~=~=~=~=  |          |
--------------------------============================--------------------------

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
|                             6a Marvin                               CHPT3P01 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

In his lab, the professor is putting together an experiment to figure out ze
problem with ze table. Clevuh. The professor sees ze problem and realizes
zhere is a... a... (a gap, maybe?). He decides to do some experiments to sort
it out.
And when the Tediz are ready, the king and professor will see who uses ze
duct tape.

Go forward at the start of this barn area and cross the river. Turn right,
then look on your left and talk to the two blocks sitting on top of each
other. Jack, the man on the bottom, demands that Conker make the fat ass
bitch on top of him get off pronto. Conker asks what he'll do for him. Jack
replies that he'll maybe help him if Conker can also get rid of the smelly
rat polluting (no joke) the small area with his toxic farts and burps. Jack
also tells Conker that if he runs into Burt, he should just mention his name
and everything will be good.

Take the left path from the start now. Head along the ledge above the
greenish river and run under the blocks bouncing up and down when they jump
to avoid getting hit. Talk to Burt, the block on the ground by the cheese
farm, at the end and he'll open the gate for you just because you mentioned
Jack. Whack one of the pieces of cheese here and pick it up.

Now head back along the ledge with the bouncing blocks. If you get hit,
you'll drop the cheese, so stick to the right side of the ledge where there's
a small safe spot. Continue along the path under the area with the four posts
until you reach Jack again. Walk over to the mouse and feed him the cheese.
Marvin, the mouse, will ask for another. Keep going back and feed Marvin two
more pieces of cheese. After he's had three total, he will explode, which
makes the lady on top of Jack fall off.

Jack thanks you and tells you there's something "real neat" inside the barn.
You've just got to open it first. Hop up the two blocks, then jump up the
pipes. Jump to the roof and run to the wooden part, where you'll find a
switch. Step on it to open the barn. Now grab the CASH ($200) in front of
you. Jump off the edge of the roof and tailspin to avoid losing health.
You'll see a wooden crate hopping around a circular platform. Go to your
right and you should see the barn door on your left. Head inside.
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
|                          6b Mad Pitchfork                           CHPT3P02 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Conker will ask what is "real neat" inside the barn, as he can't see it
himself. Unless some guys jumping around stinking of horse poo is neat, which
of course it isn't. One haystack says this is pretty neat. The doors close
behind Conker and a paint pot over in the back of the room will tell Franky
the pitchfork that it's his turn to kick this guy's ass. However, Franky
wants you to come over to him. Run over to where the paint pot and brush are
and Conker will insult Franky, starting the battle.

Head over to the haystacks. Position yourself so you, Franky, and a haystack
are lined up and Franky will charge forward and destroy the haystack. If you
line up improperly, he'll poke Conker and you'll lose a piece of chocolate.
Some haystacks are smaller than others, making them harder to kill. After
they're all gone, the paint pot and brush will tell Franky that his
ass-kicking sucked. Franky takes their suggestion of killing himself.
Unfortunately, his attempt to hang himself doesn't work, as he doesn't have a
neck of any description. His "friends" continue to mock him for his idiocy.

Now head over to the wall opposite the entrance. Go to the corner and
highjump to pull the switch, opening a door near the ceiling and causing a
bee to fall outside. Leave the barn. After you leave, a cut-scene shows a
giant haystack entering the room. The haystack claims his nemesis is
defeated.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
|                            6c Sunny Days                            CHPT3P03 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Head over to the circular platform where a wooden crate is bouncing around
and talk to the bee that fell out of the barn. He asks Conker if he could
spare him a dime, but gets ignored.
The bee tells Conker that in his own country he's a king, but his wife (Mrs.
Bee) threw him out. The hive keeps getting stolen, but he doesn't care and
he's been reduced to a bum. Conker decides to leave and not help the bee, but
the bee tells Conker about the big-breasted babe, which gets his attention.
The bee wants to "pollinate" the babe (who is a sunflower, as a cut-scene
will show). The bee gets Conker to help him by offering him cash if he
succeeds.

Go past the wooden crate hopping around and follow the narrow path above the
water until you reach the sunflower, who tells Conker to stop tickling her
with his bushy tail. After the cut-scene, go back to the platform where the
wooden crate is. Talk to the tickly bees in the center to collect BEES (1).
We're going to get all five sets of bees before going to the sunflower.

The bees follow you, so head past the wooden crate and over to the cheese
farm. Jump onto the big cheese piece and start jumping around the giant
cheeses around the farm. You'll find some BEES (2). Now head over to the
metal blocks past the cheese farm. Get past them and you'll be by the start,
so highjump just by the start to find more BEES (3).

Turn around and head to where you fed Marvin the cheese. Climb up the blocks
and pipes, then jump to the roof where you opened the barn. Just by where you
got the cash you'll find some BEES (4). Now turn around and you'll see a
steeper roof. Highjump to the top from the less steep part and head near the
edge, where you'll find a ladder. Climb it and you'll reach the top of a
giant bucket. Carefully head around to the other side where you'll find more
BEES (5).

Go back and climb down the ladder. Head to the less steep roof and tailspin
to where Jack is. Opposite Jack you'll see the platform where the wooden
crate is bouncing around. Highjump to it, then follow the path by the water
to reach the sunflower.
All the bees will tickle her and the bee will come to pollinate her, complete
with disturbing sex sounds. Now use the flower's breasts to bounce up into
the alcove above. You need to bounce several times in succession, so I
recommend you just tailspin right after bouncing. In the alcove you'll find
some CASH ($300).

Follow the path back to where the wooden crate is. Highjump on top of the
crate when it's near the barn, then highjump to the ledge on the barn wall
and head into the opening.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
|                           6d Barry + Co                             CHPT3P04 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

You'll meet a bunch of bats who are here to kill you. One of them is named
Barry. Carefully head across the very narrow beams. A lightbulb will appear.
After hearing a squeak, press B and Conker will take out a flamethrower. If
you time it correctly, you'll torch the bat. Head along the left beam once
you kill the three bats.

Now use the B button pad to take out some knives. Press Z to throw them,
trying to cut the rope hanging Franky. Once you hit it, Franky will fall to
the bottom of the barn. Tailspin down to where he is after putting away the
knives by pressing B.

The paint pot points out that the brush repeats everything he says and Franky
thanks Conker for helping him. Conker asks what they're going to do about the
giant haystack man you saw when you left the barn. Conker puts the brush in
the paint pot, then agrees to hop on Franky to fight the haystack.
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 6e Buff You CHPT3P05 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Boss - Haybot ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : 3/10 After hopping on Franky, head over to where the giant haystack is. Try to circle him until you get behind him, then press B to have Franky stick him with his fork. If you try it from his front, it won't work and he'll punch you back. You will also get knocked off Franky, forcing you to jump back on him. After sticking him, he'll be engulfed in flames. After you hit him again, he'll show his robot eye and target Conker. Give him a third poke and the haybot will throw a fit. He'll slam the ground, causing the floor to break. Conker, Franky, and the haybot will all fall into the pit below. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 6f Haybot Wars CHPT3P06 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Conker falls to the ground and busts his leg. Franky celebrates that they defeated the haybot, but now you'll have to face his fully robotic form. The haybot takes out suzie 9mm's and will start firing missiles at you. They home in on you, so it can be a bit tricky to avoid them. A sure-fire way to avoid getting hit is to stand behind one of the giant tanks in the room. You'll also find chocolate behind one of them. When the battle begins, get away from the haybot (he chases you when you're near). As the missiles come, jump just before they hit. Now go near the haybot and look around for a pipe with water coming out of it. Lure the haybot over to the water and he'll malfunction. He'll retreat to the center of the room and start spinning around. 
Go near it and wait for its back to face you, then jump and press B to smash the DO NOT PUSH button. One of the robot's arms will explode. Continue dodging missiles as they fire and lure the haybot to the water. If you need some more chocolate, head behind the pipes to get some. You'll have to dodge a few missiles before one of the pipes starts leaking again. The number of missiles you have to dodge increases each time. After pressing the red button two more times, the haybot will explode. Though the robot is gone, Franky has been split in two by the explosion. Fortunately for him, Conker patches the pitchfork back together with tape. However, the entire place floods with water. Franky abandons Conker, leaving you to escape on your own. There are also electrical wires you'll have to be careful of. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 6g Frying Tonight CHPT3P07 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Rush over to the pipe where you found the two chocolate pieces. Climb the ladder behind it and you'll find a B button pad. The water level will soon rise to where you are. Use the pad to take out your knives, then cut the wire just next to you. Now cut the wire dangling from the pipe behind you. On your right side you'll find another wire. After cutting it, swim across from the context pad and climb the ladder on the next tank. Take out your knives and cut the wires on your left and right, followed by the wire all the way across the room. The water rises one last time, so swim over to the other side of the room and look for a ledge. Hop onto it and exit this area. You'll meet the monk dude with the stone that Conker puked on in the opening sequence. Hop onto his stone and he'll get mad, propelling you upward to a ledge. Get the chocolate if you need it, then go to the left side and grab the CASH ($400). Now leave the barn.
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 6h Slam Dunk CHPT3P08 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Climb the ladder just next to you (it's very long) to reach a platform. Climb past the platform and you'll find a wasp. Rush up when the wasp flies away from the ladder to avoid getting stung; a sting will knock you to the platform below. Head up the next ladder past another wasp. Climb past a platform and past more wasps, where you'll reach a diving board (don't look down). Jump off it to where the chocolate pieces are and you'll see a lightbulb. Press B to turn into an anvil. Conker will slam down into the bucket below and hit the B button pad, which opens a gate near the bottom of the level. Climb the ladder in the bucket, then find the ladder leading down. Head down it to reach the roof. Hop to the less steep roof, then tailspin to where Jack is. Turn around and hop into the nearby green river. Follow the river westward and you'll see the gate that opened. Go into the tunnel and you'll find a squirrel's tail plus some CASH ($500). Continue following the tunnel and head past the valley under the blocks by the cheese farm. When the ground becomes low enough, hop to your right and go forward over to the exit. Leave the Barn Boys chapter. ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (07) | II. Windy (Pt. 2) | CHPT2PT2 | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 7a Poo Cabin CHPT2P02 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Make a left U-turn and take the left path that you ignored before. Head forward and Conker will take out a gas mask due to the horrible stink. Go over to the cabin and push the door open.
In the Panther King's castle, the professor barges into the throne room just as the king begins to become impatient. He has discovered that the gap in the table causes the milk to spill when it is placed on it. The only thing that can fit in the gap is a red squirrel. The king orders his guards to find him one. Inside the cabin, you'll meet a dung beetle. The beetle wants you to make the cows crap so he can make a ball of poo for you. Stand on the trapdoor and press B to make Conker smash it open as an anvil. Head forward and jump onto the rope at the end of the hallway. Climb upward and jump to the nearby rope, then jump and tailspin to the rope on the other side of the room (it's near a pipe pouring out poo). Jump to the next rope. Above the third rope is a beam. When the poo pouring down is out of the way (if you get hit by it, it will knock you to the bottom and you'll have to fall through the trapdoor again), tailspin to the beam and head into the new pathway. On your left you should see another opening. Tailspin to it and follow the path to the exit. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 7b Pruned CHPT2P03 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ You come to an outside area where a bull is sitting. The bull becomes enraged by Conker's red fur and starts chasing him. Make a left U-turn and you should see a path leading up. Poo balls are rolling down, so you'll have to be careful. Head up, jumping over the poo balls as you go. After highjumping up a few ledges, you'll reach a nearby barrel. Now go over to the metal circle nearby and run in the direction the arrow is pointing to make it turn. Prune juice will pour into a trough below. Conker realizes that anything that drinks that will have to take a crap. A target comes out of the wall below, giving Conker an idea. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 7c Yee Haa!
CHPT2P04 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Head back down the path (if you jump off from the prune juice circle, you'll die) until you reach the bottom. Now go around the area until you find the target. The bull is much faster than you, so you have to jump over it. Stand in front of the target and highjump over the bull when it comes to hit the target. A target on a wall will come out and a cow will walk out of a shed to graze on some grass. Head over to the target on the wall and stand in front of it. Highjump over the bull when it comes and it will get its horns stuck in the wall. Jump onto its back. To charge, press B. If you get too close to the edge, he'll throw you off, so be careful. Ride him to the cow and press B to charge into the cow. She'll decide to go over to the prune juice and have a drink. However, that makes her take a crap on the grate in the middle. Now charge her with the bull while she's shitting and she'll explode. Keep doing this. Each time, you have to hit another normal target to get the next cow out. The second cow takes two hits to get her to drink, while the third takes three. Once you've killed three cows total, the bull will break the poo grate and fall into the pit. Hop in there yourself. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 7d Sewage Sucks CHPT2P05 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ The whole place is pretty much flooded with crap now. Swim around the room, then head through the alcove you'll find to another watery room. Jump into the alcove here and you'll find a context button. Use it to swallow some confidence pills, which let Conker get rid of his swimmies. He takes out a manual. It explains that to dive under, press B. Hold B and move the control stick to swim around while keeping an eye on the Conker face that appears on the screen.
Once his face turns blue, he'll start losing health. Go back into the room you dropped into when you jumped into the poo grate and swim directly downward. You'll find the area with all the ropes. Look for a ledge near the bottom in the middle of a poo waterfall and look across from it to see an opening. Swim through it to reach the exit. At the end, turn around and you'll see some CASH ($600) in an alcove. Swim back into the rope room and into the pit below. You appear back at the start of the cabin, so head out the door. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 7e Great Balls of Poo CHPT2P06 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ The dung beetle outside will point to the poo ball next to the cabin. You can do whatever you want with it now. Take the poo ball and push it to the other side of the cabin. You should see a big slope leading up the giant mountain, called Poo Mountain. Start pushing the poo up the slope. Along the way, you'll encounter dung beetles. Wait for them to sleep in the alcoves, then push the poo balls past them. At the top of the slope, Conker will put a stick of TNT into the poo ball and push it onto the large beetle below. The beetle swallows the poo ball by accident, causing him to explode. Another poo ball appears outside the cabin. Head down Poo Mountain until you reach the bottom. Go to the other side of the cabin and start pushing the next poo ball. This time, go up the slope on the same side of the cabin as the poo ball. You'll have to turn around to see it. Push it up, rushing past the dung beetles when they go in their alcoves. Push the poo ball into the hole at the top and it will roll down inside the mountain and crash through a bunch of planks boarding up an entrance at the bottom. Highjump on top of the mountain, then quickly jump again to reach the top, where you'll find a wad of CASH ($700).
Take the path down Poo Mountain again to find your next poo ball. Push it past the slope you just went up, following the river around the bottom of the mountain. Push it off the edge of the poo river and it will land on the head of an armored imp standing on a ledge by a lake. Jump into the lake and head to your right, avoiding the imps swimming around. You'll find the poo-balled imp, so stand on the switch by him and press B to turn into an anvil. Conker will hit the switch and cause a drain in the water to open, killing the two imps in the water. Now head to the back of the lake and you'll find the entrance to the next chapter. ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (08) | IV. Bats Tower | CHAPTER4 | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8a Mrs Catfish CHPT4P01 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ You appear in an area with a long river. Conker meets a group of catfish who ask Conker to help them with the nasty bullfish who's stolen their fortune. The catfish ask Conker to get rid of him, but he only agrees when offered ten percent of the money. The safe with the money has a combination that the catfish will enter once it's safe. Head forward, following the river. Swim down until you reach the end, where you'll find the bullfish in a small area. Press B to dive under, then swim into the hole next to the bullfish. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8b Barry's Mate CHPT4P02 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Swim through the path until you reach a place where you can surface. Head onto the ground and you'll be in a huge circular tower.
Go to the right side of the room and talk to the cog on the wall. He tells you to either find his missing cogs or **** off, but Conker turns him around to reveal his nice, gay side. The gay side asks Conker to find the mean side's "friends" or else his life will be a misery. An elevator thing lowers nearby. Head over to it and ride it up to a ledge going around the room, where you'll find Barry and the other bats again. Head along the ledge, then cross the narrow plank. When you hear the squeak, press B while the lightbulb is above your head to torch the bat. At the end, go along a ledge and jump to the rope. Climb up to another plank. Keep going up the tower, killing the bats as you head along the planks. The planks after the first one are much narrower. When you reach a half plank at the top, climb the rope that got you there and at the top, you should see another rope near it. Jump to that rope, then climb up and tailspin to the top of the tower. Tailspin across the ledges on top of the tower to your right. Avoid the imps as you go until you find a wad of CASH ($800). Make your way back to the wooden ledge sticking off the tower, then jump to the rope and tailspin to the top platform. Go along the half plank and face the tower wall. You should see a switch above a cobweb. Tailspin to the switch, which Conker will pull. A gate will open in the water that you were in when you entered and you'll fall onto the cobweb. Tailspin to a platform below, then make your way down the tower. Once you reach the bottom, head to the water and swim down. Just under the floor is the gate you opened. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8c Cogs' Revenge CHPT4P03 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Swim through the pipe, avoiding the imps as you go. Once you surface, you'll be in a tiny circular room. Going around it is a red cog.
Try to run right into it, then slam it with your frying pan before it gets away. Pick it up and swim back through the pipe, avoiding the imps. Head to the floor, then go to the right side of the room and place the red lady COG (1) next to the evil cog. Swim back through the pipe with the imps to the room where you got the cog. This time, turn around and head through the pathway above the imp tunnel to find a larger room. Smack the green or blue cog with your pan (these ones are a bit harder), then go back and swim through the imp pipe. Head over to the mean cog and place the COG (2) by him. Head back through the imp pipe one more time. Go into the larger room with the two cogs and smack the other cog, then take it back through the imp tunnel. Head over to the mean cog and place the COG (3) near him. Now head to the center of the room and you'll find a big circle. Run around it to make it start spinning, like with the prune juice thing. Mr. Big Cog and all the other cogs will start spinning. After a while, the rope tying the bullfish will contract, trapping him in a very small area. The wheel inside begins to spin so fast that Conker can't control it anymore. He jumps and the mean cog falls off the wall. The lady cogs beat him up and put him on Mr. Big Cog as punishment. The mean cog turns around and the nice one tells Conker that the problem outside has been taken care of. The red cog thanks you and heads off to the Caribbean with her friends after kissing Conker. Jump into the water nearby and swim to the exit. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8d The Combination CHPT4P04 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Head along the river, swimming past the waterfalls and the bullfish. Once you reach the catfish, Conker will ask for the combination. However, the catfish insist on opening it themselves.
Swim through the river again to the bullfish as the catfish follow you from behind. A few imps are now in the river, so you'll have to be careful. Once you reach the safe, the catfish will switch the WRONG to RIGHT, which is the combination... Head into the safe now that it's open. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8e Blast Doors CHPT4P05 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Conker will wake up the money, who declares that Conker isn't his boss. The money hops into the water below to swim with the fish. A door closes over the water and a B button pad comes out, so use it to take out your slingshot. Face the end of the room, where you'll see a pinwheel. The object is to shoot the letters of the word OPEN in order. If you mess up, one of the imps will attack you, but you can shoot them with your slingshot to knock them back. While the letter you're aiming for is covered up, shoot around the wheel so you can figure out where to aim, then shoot once the letter is open. After spelling out OPEN, the trapdoor will open, so hop into the water. Use the B button pad to take out a flashlight helmet, then swim into the tunnel below. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8f Clang's Lair CHPT4P06 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ This can be pretty frustrating, as you can easily lose your oxygen while trying to navigate the confusing passages. You'll find Clangs, which are giant eyeballs, throughout the level. You can use the flashlight helmet to blind them for a very short time, so you can quickly rush past them. At the start, swim down a tiny bit and you should see an alcove. There's a bubble pocket here for you to refill your oxygen. Swim directly downward and blind the Clang, then go past it. You'll find a few more air pockets as you head down.
At the very bottom, use the air pocket and look around for two green light tunnels. Swim through the top one and blind the Clang, then surface at the top of this room. Use the context pad to replace the batteries for your helmet. Swim to the left side of the room to find two blue light tunnels. Head through the top one and surface in the next room. On your left you should see a switch. Pull it to open a nearby tunnel. Swim down and look for a green tunnel. Swim through it, avoiding the Clang. Surface at the top of the room and use the B button pad to replace your batteries again. Now hop into the water and swim to the northwest part of the room to find two yellow light tunnels. Swim through the top one, blind the Clang, then swim upward through the long shaft. Along the way, blind the Clang and use the air pockets to refill your oxygen. At the top, you'll finally reach land. There's a huge pit here, so drop into it. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8g Pisstastic CHPT4P07 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Head forward and you'll be shown a cut-scene of two fire imps who are smoking. Conker drops down and the fire imps decide to try and burn him as he looks kinda flaaammable. After the scene, you should see a B button pad where the flame guys were. Use it to drink beer from the keg, making Conker drunk. Now head forward into the open area and press B to start pissing. Hold Z to extend your piss stream. Try to pee as much as you can on the fire imps. When one jumps over you, quickly pee on him before he creeps up on you and burns you. Eventually, Conker will stop and get a hangover. Head to the left side of the room and look for a B button pad near the back. Use it to take out some alka-seltzer and cure your hangover. Get drunk again and continue peeing on the fire imps. Once there are only two of them left, they'll hop into a boiler to start the real boss fight. 
The boiler will grow giant brass balls. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8h Brass Monkey CHPT4P08 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Boss - Boiler ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : 2/10 This is quite easy, as you can see. Head to one of the corners of the room and you'll find a switch above the corner. Near the pipe thing that the switch is above you'll see a grate. Wait until the boiler comes over and stands on the grate, then highjump to pull the switch, pouring poo on the boiler. He'll be stunned and will retreat backward. Quickly run up to him and go near his crotch. Press B to smack his balls with bricks, which will raise the pressure dial. Keep doing this. After using one switch, it will disappear, so you'll have to use all four corners. Be careful of the boiler's flame breath that it will use on you when it gets near. After you've smacked it four times, Conker will whup its balls with bricks and his frying pan. The boiler collapses after its balls fall off. The fire imps try to escape but instead press the self-destruct button. Alright, now take the boiler's ball and push it over to the right side of the room. You should see a weird looking switch tile on the floor. Put the ball on it to open up a door near it. Remember to avoid the flame guys, as they're still here. Now take the other ball and push it over to the switch. This time, roll it down the newly opened path to kill the imp. Head through the path and to the exit. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 8i Bullfish's Revenge CHPT4P09 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Head forward and grab the CASH ($900), then leave the safe.
Once outside, the catfish ask for their money so they can give you your dollar. But that means... their fortune was only ten dollars? The new deal is you keep the lot ($910). While the catfish argue over this, the bullfish comes loose, creating a bigger problem. Hop into the water and quickly swim away from the fish. Swim past one of the catfish so that she's lined up between you and the bullfish. The bullfish will swim straight into her and eat the catfish. Hurry through the river in this manner, making sure to line yourself up with each catfish. All the catfish will be eaten, letting you escape safely. Once you reach the end, rush to the dock. Conker struggles to run to safety as the bullfish crashes through the dock. He gets stuck in the wall, so hop onto his back and highjump to the alcove above, where you'll find a whopping three packs of CASH ($1210). Now hop out and head over to the exit. Once back in the Windy area, swim across the lake and down the river. Hop over to your left to the context button, then head up the slope where you defeated the beetles and take the left path into the poo cabin area. Go over to the cabin and go along the edge of Poo Mountain. Once you find the entrance that you opened with your second poo ball, head inside. ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (09) | V. Sloprano | CHAPTER5 | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 9a Corn Off the Cob CHPT5P01 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ A dung beetle will tell you to take his advice and get out, as there's something really bad down in the poo ocean in this room. About two days ago, the beetle's friend Tezza was swiped up by a big hand... and he never came back. Then Bazza was next, while he was minding his own business.
He decided to escape the mountain before he was taken too. The beetle tells you there's some money in here if you can be arsed to get it. Head forward. Hey, there's some sweet corn! BRING ME SOME SWEET CORN! Chop chop, food delivery boy. Smack the piece of sweet corn around the hole with your pan. If you get hit by the hand that comes out of the hole, you'll drop the corn (wait, if his hand comes out of there, why can't the lazy bastard just get the damn corn for himself?). Once you have the corn, take it to the ledge overlooking the big lake of poo and Conker will throw it in, where it will be eaten. Head past the first hole and follow the path to another pit. You'll have to tailspin across a gap and avoid the poo raining down (watch for shadows). Smack the two kernels of corn here and take them to the ledge, then continue past the hole until you reach another. Smack the three pieces of corn and take them to the ledge to make a giant pile of crap with green eyes come out. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 9b Sweet Melody CHPT5P02 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Boss - The Great Mighty Poo ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : 2.5/10 The pile of poo is actually an opera singer and he will start his song. "I am the Great Mighty Poo and I'm going to throw my shit at you. A huge supply of tish come from my chocolate starfish. How about some scat you little twat?" All the Great Mighty Poo does is throw pieces of himself at you, which are pretty easy to avoid if you stay on the move. Head to the B button pad next to the nearby hole and dodge the poop while staying near the pad. Once the GMP opens his mouth to sing, press B on the pad to take out a roll of toilet paper. Quickly use the Z button to throw it into his mouth, causing damage.
The Great Mighty Poo will sing the next verse of his song. "Do you really think you'll survive in here? You don't seem to know which creek you're in. Sweet corn is the only thing that makes it through my rear. How do you think I keep this lovely grin? Have some more caviar." He'll throw a piece of poo that lands on the context pad. Go along the path so you're heading back to the start. Don't forget to watch for the raining poo and grab the chocolate. Once you reach the second pit, go over to the context button. You can either wait for him to stop throwing shit or you can throw the toilet paper at the poo balls to break them. He'll change his position at times, switching back and forth between two places. After he opens his mouth, throw a roll of toilet paper in. You'll need to throw in another roll before he sings the next verse. "Now I'm really getting rather mad. You're like a niggly tickly shitty little tag nut. When I've knocked you out with all your bab, I'm going to take your head and ram it up my butt." Conker: "Your butt?" "My butt!" "Your butt?" "My buuuuuuttttttt!" Some glass nearby will break. Now continue along the path until you reach the first hole you saw. The GMP will switch between three places, so dodge his poo balls until he opens his mouth. He switches places pretty quickly, so I recommend you just aim at one place until he appears there. After throwing three rolls of paper into his mouth, he'll sing so loudly that the glass shatters. Head back the other way so you're heading towards the other holes. After passing the gap in the path, look for a side path. Follow the side path and you'll find some CASH ($1310). Now pull the switch to flush the Great Mighty Poo. You'll see him drain into a pipe below. Backtrack to the second hole where you fed the GMP two pieces of corn. From the ledge, tailspin to a ledge where the Great Mighty Poo was. Continue tailspinning down a few ledges until you reach the exit.
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 9c U-Bend Blues CHPT5P03 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ In Berri's house, someone knocks on her door. She thinks it's Conker "standing her up" again, but it turns out to be some rock dude. Berri says she isn't interested in whatever he's selling, but that's not why he's here. The evil looking rock dude kidnaps Berri after punching her, then drags her away. After the cut-scene, drop into the circular room and get any chocolate you might need as well as the squirrel tail on the right. Jump into the water and dive under. Once you use the air pocket below, swim over to the tunnel with the spinning fans. They'll kill you instantly if you hit them, so swim as close as you can and turn the camera sideways, then swim past once it's safe. There's an air pocket before the third fan (they get faster as you go on). Once you make it past all three, swim to the surface of the next room and turn around. Head through the pipe below the ceiling fan and climb the rope. Turn to your right at the top and you should see a ladder. Once the spinning fan blades are out of the way, jump to the ladder and climb up to a metal bucket. Head around until you see a path, then run across the bridge to the panther king's guards. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 9d The Bluff CHPT5P04 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ One of the guards is crapping behind a rock, but the other demands that Conker pay the toll. Conker hands over the $1000 (you should have $1310), but the guard realizes that Conker is a squirrel, which is just what they need. He fits the description perfectly, but Conker explains that he's an elephant. Squirrles are big and gray with flappy ears and long and snouty noses. Conker passes to the exit as the other guard finishes crapping. 
He calls the other guard a stupid twat for letting Conker fool him. Conker whistles his $1000 back before leaving. At the start, head forward and around the stone structure to the other side. You'll have to highjump over raptors along the way. Go through the exit and you will reach the second level of the structure. Continue around to the next exit, which takes you to the top. You'll see a weird idol statue at the top. Near it is a wad of CASH ($1410). Highjump on top of the statue and press B to turn into an anvil, smashing it down. Do it three times to crash down into the next level. ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (10) | VI. Uga Buga | CHAPTER6 | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10a Drunken Gits CHPT6P01 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Conker will fall off the idol after it destroys a caveman. Head forward to where the chocolate is and turn around to see four rock piles on the ground. If you approach them, they'll wake up and attack you, so head around the very edge of the area. When you reach the guy guarding the entrance, push the rolled up rock dude around the area and to the idol that you smashed in here. Highjump on top of the idol and become an anvil again to open a door below. Push the rock through the newly opened hole and continue through to where you'll push it down a tunnel, killing some cavemen. Go through the tunnel and you'll find a giant dinosaur idol with fireballs flying around the room. After Conker asks the maestro to change the dramatic music to something with more of a beat, turn the camera so you're facing the idol. Go down the pathway on your right and continue heading around the right side of the room, past a few fire lakes.
At the end of the room, you'll find an exit on the right side of the idol. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10b Sacrifice CHPT6P02 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Carefully head along the narrow walkway while avoiding the cavemen until you reach the giant egg. Approach the egg and you'll see a monk, so jump on his stone to be propelled on top of the egg. Press B when the lightbulb appears and Conker will sit on the egg until "some time later", at which point the egg will hatch and reveal a baby dino. He'll jump onto the monk and squash him, then declare that Conker is his mama. The dinosaur follows you, so continue along the pathway. It's pretty long and winding, but you'll make it eventually. He'll eat the cavemen along the way, so don't worry about them. At the end of the walkway, you'll find the exit back to the dinosaur room. Head back to the front of the room, going down the slope next to the fire lake. Now head in front of the dino idol's mouth and you'll see four cavemen. Since the dino likes to eat the cavemen, this next part can be a bit frustrating. In front of a weird platform you should see a context button. Use it to take out your slingshot, then look up at the right wall to see an arrow switch pointing up. Shoot it to make the platform in front of you rise. Now quickly make your baby dino stand on the platform, then run back to the context pad and take out your slingshot. This time, shoot the button on the left, which lowers the platform and crushes the baby dinosaur. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10c Phlegm CHPT6P03 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ The idol is pleased with your offering, so he opens his mouth to reveal a mucous-covered tongue. A monk guy crawls out of his tongue as well, so hop onto his stone to get shot up on top of the idol.
Head forward over his eyes to his back and you will find some chocolate, a squirrel tail, and some CASH ($1510). Go over to the front of the idol and find his nostrils on the sides. When smoke isn't coming out of a nostril, hop into it and press B to fill it with pepper. Now pepper the other nostril when it's safe. The idol will sneeze, getting rid of the mucous and sending Conker flying. Head up his tongue and enter the mouth, avoiding the uvula. Go past a uvula, then take the right path and tailspin across a couple gaps. Go past the uvula on your right and to the exit. Once outside, Conker will steal a dead caveman's hat. Head back into the idol. Go through his throat, crossing the gaps and avoiding the uvula, then go back outside the idol's mouth. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10d Worship CHPT6P04 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ When Conker reaches the bottom of the idol's tongue, the cavemen will begin to worship him. Conker asks them if they like the rock dudes that you saw before, to which they reply no. It's time to get rid of them. Head over to the left or right side of the front and go up the ramp leading to the tunnel by which you entered the room. Head up the tunnel and through the place where you entered the chapter. Now go over to the rock guys. Approach one of them and it will unroll itself. Give one a whack with the frying pan from behind (the first pop, as Conker puts it) and wait for the other cavemen to come and kill it. Once all four of them are gone, head to the guard in the back. He tells you that sneakers aren't allowed, but then decides that Conker can go in. The cavemen tell him the password, though Conker can't understand it. He manages to get lucky and is allowed in.
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10e Rock Solid CHPT6P05 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ A bunch of crazy techno music is playing when you enter the club. In here, you'll want to avoid the dancers or they might smack you. At the start, you will see a switch. If you stand on it, a door below the cage where Berri is trapped will open, but the minute you step off, the door closes. Jump off the left side of the platform you're on at the start and tailspin down. In this part of the club you'll find a table and three cavemen guys. Behind the table you'll find a rolled up rock guy. Start pushing him away from the table and around the whole club, past the rock dancers. Push him up the ramp leading to the stand and onto the switch, which will open the door below Berri. Jump back down to the table and look near where you found the rock guy. You should see a context button. Use it to drink from the keg above it and become drunk, then head over to the front of the club. Pee on the guy near the middle door and try to aim so that you pee him into the hole. He'll fall on a ledge above where you are. Turn around and you should see a medicine cabinet (marked with a +) on the wall of the platform where the entrance is. There's a ramp next to it that you can use to reach it, where you can press B to cure your hangover. Now drop into the middle door where the rock dude fell. Hold A quickly so you can tailspin and avoid taking damage, then start pushing the rock guy around the ledge. You'll encounter two female dancers along the way. When they're on the edge of the ledge, push the rock past them (this way they'll only push you into the alcove instead of knocking you off if you don't make it past them). For the second one, you'll have to push the rock in front of her. After making it to the end, place the rock guy on another switch to open the right door.
Tailspin to below where you are and head over to the context pad again. Drink from it to get drunk, then go to the dancer near the right door. Pee him into the door and he'll land on Berri's cage. After curing your hangover, get drunk again and head to the left side of the room. Pee the last male dancer into the last hole. He'll drop on the cage and break it, but Berri will run off, not recognizing Conker in his hat. Conker remarks that he wants to leave, as it's getting a bit noisy. First, drop into the left or right door to appear in Berri's cage. Now go to where Berri was standing to find some CASH ($1610). Tailspin down and head up the ramp to reach the entrance. Talk to the rock guy and he'll tell you that you have to see the boss for taking the money. You'll see Berri on a weasel's back; he's sitting at a table with a bunch of other weasels in a Godfather parody. The weasel accuses Conker of stealing his dough, to which Conker asks Berri how she's doing. The weasel asks her if she knows this guy, but Berri asks why she would associate with a caveman. She is dismissed. The weasel kills one of his men for "showing no respect", then goes back to Conker. He is willing to forgive you and let you keep the money as long as you do him a job. The weasel, in a different room, explains that cavemen have come and invaded "his patch". He wants you, with your disguise, to go through the area, through the mouth of the dinosaur idol, and drop the bomb once you make it through its throat. However, if that bomb explodes before Conker makes it, then mama's gonna buy him a mockingbird, I mean the weasel suggests Conker leave town. He also tells him to leave town after the bomb goes off. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10f Bomb Run CHPT6P06 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ You appear in front of the door outside the Rock Solid club.
Head around the area so you can avoid touching the rock dudes and dying, then go over to the statue that you made drop in here. Go through the tunnel underneath it and avoid the cavemen, then head through the next tunnel into the idol room. Go down the ramp on either side and drop off it after passing the caveman. Head to the idol's tongue, then run up it and go inside his mouth, avoiding the uvula. Go past the uvulas as they swing from side to side. If you hit one, the bomb will immediately explode, so be careful as hell. You'll have to take the left path this time, as you obviously can't jump with a bomb in your hand. Once you make it to the end, go through the exit. Outside the idol, head to the edge of the ledge you're on and Conker will automatically drop in the bomb, causing a massive lava flow. After the cut-scene, hop along the stone platforms (be quick, as once you step on a platform, it will sink into the lava) and to the exit. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10g Mugged CHPT6P07 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ When Conker enters the place, he will be knocked out by a caveman. Later, he'll wake up to see four cavemen on hoverboards above the lava. They've stolen all your money ($0)! What is the world coming to when a squirrel can't even go to a dinosaur-themed world without getting mugged by a bunch of prehistoric brats? One of the cavemen falls off in laughter and dies, so the others challenge you to a hoverboard race in an attempt to get the money back. Once the cut-scene has ended, head forward along the bridge above the lava and to an opening. You appear just above some lava, so walk straight forward and drop onto the hoverboard. This isn't an actual race, as you can take as long as you want. You'll have to head around a course at intense speeds, whacking the cavemen with your frying pan once you get close.
If you crash into a wall or a dinosaur leg, you die instantly. If you skid along a wall, you'll lose a piece of chocolate. At the start of the course, go forward and dodge the dino's legs, then hop at the end of the cliff to a lava waterfall. If you don't jump, there's a chance you'll crash into the fall and die. Go through the cave, avoiding the pillar. Continue past a dinosaur leg and you should see a caveman. Here, there is a gate on your right and an open path on the left. Take the left path so you don't crash and continue past a bridge to the start. Whack the first caveman when you can ($536). Continue heading around the course, whacking the second caveman ($1073) and the third ($1610). After whacking the second one, you'll have to start taking the right path instead of the left, as the gate switches. Once you've killed all the cavemen, take the left path again (the gates switch for the second time) and head under the bridge. When you see a ramp leading up, go up the ramp and jump for some CASH ($1710) plus an exit. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10h Raptor Food CHPT6P08 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Conker rides through a tunnel and crashes into an arena, where he thinks the people are cheering for him. A big caveman, called Buga the Knut, sits up in a chair above with his girlfriend. He commands that Fangy be sent in. After regaining control of Conker, head forward to the door that he points out. The drawbridge opens and a raptor enters with the goal of taking down Conker. After the raptor comes out, go over to the very middle of the arena as quick as ****, where you'll find a context pad. Quickly use it (make sure you're facing the raptor) to take out a watch, which will hypnotize the dinosaur and allow you to ride him. Press B to headbutt and Z to swallow.
Head around the place and walk into each caveman to grab him by the mouth, then press Z to swallow him. These guys are unarmed, so it's easy to take them down. After killing them all, Buga will send in a group of infantry. You'll have to kill about 16 cavemen. Getting hit by their clubs will cause you to lose control of the dinosaur. If that happens, rehypnotize him and jump on his back when you hear a weird noise. The trick is to run into the cavemen and spread them out, as they all get scared. Now head towards one of them so you can deal with him when he's not backed up by his buddies. Alternatively, you can grab one by the mouth, then run to safety and eat him. Continue using either method until they're all gone, at which point another group will come out. This time, the cavemen have spears, so you'll have to worry about them throwing their weapons at you. It's hard to swallow one when there are others nearby without getting hit, so try to swallow them when they're on their own. Once you have gotten rid of all of them, Buga's girlfriend will tease that Conker has a bigger boner than him, enraging Buga and causing him to drop to the arena to fight you himself. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 10i Buga The Knut CHPT6P09 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Boss - Buga the Knut ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Difficulty : 4/10 When Buga drops down, head forward, though you should keep your distance. When you see him jump, jump after a second or so to avoid getting hit by the shockwave and falling off the dinosaur/taking damage. The dinosaur trusts you now, so you won't have to rehypnotize him should you fall off. When Buga raises his bone (be careful of this), bite his balls (press B).
Buga will cover himself in shame as his loincloth drops, so go to his backside and bite his ass, turning his cheek red. Continue doing this, biting his balls when he raises his bone. After biting him three times, his secret (he has a tiny boner) is revealed and he runs off in shame. Conker reckons that now it's "babe time", but the dinosaur doesn't want him to leave. Conker throws a bone from a dead caveman to the drawbridge and shuts it to get the dinosaur away. Turn to your left and go northwestish to find the ledge where the cavemen infantry came from. Jump over to that ledge and go through the opening. You'll appear in front of Buga's girlfriend, who turns out to be a hundred times larger than Conker. She picks him up and "breaks up" with him, then puts him on a higher ledge. Go through the new opening, following the cash that goes in. Go over to the bridge on your right and head to the other side. Run through the tunnel where you'll find the exit. Grab the CASH ($1810), then head rightward through the valley until you reach a dropoff. Walk straight forward and you'll fall into the water where you found the three spinning pipes leading to the king's guards. Head around the area and through the exit. Once in the Great Mighty Poo area, simply walk into the pit below. ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (11) | II. Windy (Pt. 3) | CHPT2PT3 | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 11a Wasps' Revenge CHPT2P08 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ You appear in the poo cabin where you met the dung beetle. Go through the door to leave. A cut-scene will show that the evil wasps have stolen the hive once more and are taking it inside the honeycombs.
Once the scene ends, head out of the Poo Mountain area and down the slope where you defeated the dung beetles. Go left and talk to Mrs. Bee, who will tell you that the hive has been stolen once more. Conker agrees to it only after Mrs. Bee promises to pay him quadruple the amount paid last time. She tells you that you'll have to go further this time. Head back to the sign and take the "Naughty" path, following the trail past the yellow goo area and to the honeycombs. Before going in, highjump to the left honeycomb, then tailspin from the right honeycomb to the middle one. You appear above the river where some CASH is ($1910). Go back into the comb and drop down, then enter the bottom middle honeycomb. Head forward along the path until you reach the hive. It opens up, so jump into the machine gun chair. At the top of the screen is a radar showing the bees around you as blue dots. If a dot turns red, it's close to the center and will sting if you don't kill it quickly. Turn around, following the radar. As you go on, the number of bees will increase until the whole place is surrounded by bees, making it a bit difficult. Once they're all gone, Mrs. Bee will come and tell you to hurry. Press A to jump out, then grab the hive and head back out of the honeycomb as fast as you can (three wasps will come to chase you). After getting outside, head down the path. When you reach the sticky yellow goo, head around it to avoid being slowed down. Continue down the path until you reach Mrs. Bee. Conker throws the hive, then Mrs. Bee uses missiles to kill the wasps again. After it's over, she'll give you your CASH ($2210). +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 11b Mr. Barrel CHPT2P09 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ From here, hold down C to center the camera and go left to cross the bridge. After crossing, look to your left and you should see a slope leading up a vast mountain.
Go upward (be very slow) until a worm pops out of the ground. Go near it and highjump over it, then continue slowly up the mountain. There are a bunch of these worms, so if you go too quickly, you'll run into them and lose a piece of chocolate. After making it to the top, you'll meet Mr. Barrel, who requires Conker to show him a large amount of money (you've got enough though). Hop onto him and ride him down the mountain. Hold the joystick left or right to turn him (you kill the worms as you go down, so don't worry about them). At the bottom, the barrel will crash into a bunch of boards blocking an opening in the river, and Conker passes out. After a while, he wakes up during the nighttime. Jump into the river and go through the opening. ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (12) | VII. Spooky | CHAPTER7 | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 12a Mr. Death CHPT7P01 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Rocks will fall, preventing you from leaving the way you came. Head forward through the tunnel until you come to an open water area. Follow the current of the river. If you go left against the current, you'll find a stream going down, though you can't go up it. Continue following the current (you'll pass Gregg the Grim Reaper on a dock trying to kill some catfish) until you reach the end, where you'll find another tunnel. Go through the exit. From here, jump to the switch above the waterfall to open a gate past the Grim Reaper. You appear in a side room of the starting tunnel, so head rightward to the end of the tunnel. Swim forward with the current until you reach the dock, then go up and talk to Gregg.
He thinks you want to go up to where you opened the gate, but tells you there are zombies there. To help you, he gives you a shotgun and tells you that only a shot in the head will kill the zombies. Press B to take out/put away your gun, Z to fire, R to aim, and hold Z to use a laser targeting feature. Head up the dock until you find the gate you opened. Go into the graveyard beyond the gate and head through it until some zombies pop up. Highjump onto a gravestone, where you can safely pick off the four zombies. Go a little further along the fence on the left and three more zombies will appear. Shoot them down, then hop over the fence and go over to where you'll find another zombie. Kill it, then go to the middle of the path just near the fence, where more zombies will pop up. Shoot them, then head along the left side. Near the end, a bunch of zombies will pop up. Blast them until Gregg appears in front of the exit. Put the gun away and talk to Gregg, who will open the gate for you. Go inside. Head up the long winding path at a slow pace. There are skeleton worms here, so if you go too fast, you'll just run right into one as it pops out of the ground. When one pops out, highjump over it to avoid taking damage. Once you reach the top, enter the house you'll find. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 12b Count Batula CHPT7P02 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ You vill be immediately greeted by a Dracula-like vampire dude who invites Conker into his home. He thinks Conker is in need of "sustenance" and invites him into his dining room. Half of ze dining room floor is ripped apart, but ze table is still intact. Conker munches on chicken and drinks vine, then asks ze guy about the person on the picture on the vall. He explains that it is his forefather, who vas a crusader in a var of long ago, vhen zhey were allies with ze squirrels and panthers. Alas, zat union did not vork out.
The count is awed by ze music of ze children of ze night, but zen he hears ze noise of ze villagers braying on his door, and zhere are more of zem zis time. He says he had planned to kill Conker and drink his blood, but it looks as if he'll be needing his help. He looks as if he is about to bite Conker. Just then, ze villagers break into ze house. In another room, the Count dangles by a rope from ze ceiling. He realizes, judging from ze taste of ze blood, zat Conker is his great, great, great, great, great grandson. He explains zat ze villagers occasionally pop into his establishment to try and kill him. He has had a few... minor alterations to ze household. Zhere is a grinder to kill ze villagers (who are mice), pumps to pump blood up a pipe, and some other bits and pieces. He asks Conker to put the villagers in the grinder and allow him to feed, then it is revealed that Conker has been transformed into a bat. To fly, hold B and move the control stick. Press Z to drop bat shit, which will stun the villagers long enough for you to drop down and grab them. Once you have one, fly it over the grinder. Conker will automatically drop it in. The count drinks blood every two times you drop in mice. From the start, you'll want to fly to the northeast corner and go into the door on your right. This will lead to the library. Between the two shelves you'll find three mice. From the nearby doorway in the same corner you can reach the foyer, where you'll find three mice at the bottom of the stairs and another at the top. From the ledge you start on, fly to the left to the bottom floor just above the grinder. There's a door here that leads to a hedge garden with three mice lurking throughout the hedges. If you need some chocolate, fly to the very top of the grinder room. In one of the corners you'll find three chocolate pieces on a ledge. After dropping seven mice into the grinder, the count will become so fat that he drops into the grinder and dies.
Conker detransforms, but the whole house is now full of zombies and the front door is locked. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 12c Zombies CHPT7P03 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Head around the room to your left, being careful not to fall to the floor. Go through the doorway on the other side to reach the library. Go down the path on your right and tailspin to the top of the middle bookshelf. Equip the crossbow using the context button, then start shooting the bats in the corners of the room. Wait for them to pause before shooting (the controls are the same as the shotgun). Once they're all gone, tailspin back to the path and head up to the grinder room. Go around the room to your left until you reach the other side, then enter the top of the dining room. Quickly shoot the zombie that pops out, then look on the left side of the ledge to kill another zombie. Go along the plank to the context zone, then shoot the three bats using the crossbow. Carefully tiptoe along the planks to reach the very back of the room, where you will find a key. Pick it up, then take it back along the floorboards and into the grinder room. Go around to your right to reach the library, then head right and go all the way down the path until you reach the bottom of the library. Avoid the zombies and go to the start of the left side. Head through the hallway, avoiding the zombies, then go right once you reach the foyer. Go past the stairs and over to the front door, where Conker will place the first KEY (1) in. A bridge outside will appear above a gap. Turn so that Conker's back is facing the wall, then go right and through the doorway. Ignore the many zombies that flood the hallway until you reach the dining room. Now hop onto the table and blast away all the zombies lurking in here. Head to the side opposite the hallway you came through and go through the door to reach the bridge.
Cross it, then continue to reach the outside garden. Highjump onto the hedges so you can safely blast the zombies. Go around the hedges and shoot all the zombies in the garden, then hop to the fountain in the center and grab the key behind it. Head around the hedges until you reach the bridge again, then cross it to reach the dining room. Go around the table and head through the hallway until you reach the foyer, where you should go to the front door and place the next KEY (2) in. A staircase will ascend to the place where the three pieces of chocolate are. Head up the stairs across from the door and go right to the grinder room. There's a gap leading northward, so go to the edge of the wooden beam and tailspin to the ledge. Head to the corner, then climb the ladder on the wall to reach the ledge full of chocolate. Now look to the right so you're looking at the other side of the room. Tailspin to the ledge here and highjump to the next ledge, where you will find a bone key. Jump to pull it, which opens a couple of doors. Now go back to the chocolate ledge and head leftward. Jump across the pipe and to the ledge, where you'll find a key. After grabbing it, head around the ledge and across a beam, then go through the doorway. You appear near the staircase, so go along the stairs in the foyer over to the main door to put the last KEY (3) in. The front door opens, but there are skeleton worms guarding the path. Fortunately, Conker takes notice of Mr. Barrel nearby and gets an idea. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 12d Mr. Barrel CHPT7P04 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Hop onto Mr. Barrel, then go through the mansion door. Once outside, carefully roll down the path. You'll have to steer carefully, or you'll fall into the endless abyss below. As you go down, you'll automatically destroy the skeleton worms.
At the end, head through the graveyard where you killed the zombies and through the gate near Gregg. Head into the river and go right, passing Gregg the Grim Reaper. Look around for the waterfall going downward, then head in that direction. With the barrel, you can roll up the mini-waterfall. At the top, the barrel will break, but you're on land. Head through the exit. You appear behind the gate behind the waterfall in the hungover area. Get the CASH here ($2310). Head across from the waterfall and go through the passage. This leads to the river next to the farm patch at the top. Head across the farm patch where Birdy is and go over to the ledge atop the waterfall. Tailspin across the gaps until you reach the top, then go across the bridge and enter Windy. ==========================----------------------------========================== | =~=~=~=~=~=~=~=~=~=~=~=~=~=~ | | (13) | VIII. It's War | CHAPTER8 | | ~=~=~=~=~=~=~=~=~=~=~=~=~=~= | --------------------------============================-------------------------- +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 13a It's War CHPT8P01 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ You'll see an advertisement for soldier recruits. The gray squirrels are apparently in a war with evil teddy bears. As the general says, it is not known where they came from, but it is known that they must be sent back to that place. Head down the slope once in the Windy chapter. At the signs, follow the naughty path leading to the wasp nest. Head up the path along the river until you come to the yellow sticky crap surrounding the grass. Highjump over the electric fence on your left here and go through the doorway.
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 13b Power's Off CHPT8P02 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Go forward until you see a cut-scene of a jet being shot by a submarine and crashing into a beach area due to an... incompetent pilot. A general comes and tells Conker that a boat must arrive, so Conker must clear the jet out. First, we've got to turn the power back on. Head forward when the scene is over and hop into the water. There's an electric eel swimming here, so you'll have to be careful. Swim through the half-circle tires under the water, avoiding the eel. There are three of them positioned around the center. You'll have to grab the eel's attention. Make it follow you through the half circle and keep swimming. Once the eel goes through all three tires, the power turns back on and the eel dies. A context pad also appears above the center platform in the lake. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 13c TNT CHPT8P03 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Swim to the right side of the central platform and go up the stairs. Push the block on your left so it's lined up with the slope ahead, then go up the slope and go to your left to find a men's bathroom door. Press B in front of the door and Conker will knock to reveal a lizard with a giant TNT barrel on his back. You'll have to push him to get him where you want. Once you regain control of Conker, push the lizard past the bathroom and down the slope. He will hit the block that you pushed, stopping him from landing in the ocean. Now push him over to the right. You'll have to take him all the way around the area, but there are mines that pop up here. This is the order they appear in (inside means closer to the wall and outside means closer to the edge): outside, outside, inside, inside, outside, outside, inside, outside.
Push the lizard so that he weaves around the mines and doesn't hit them. He'll sit down and rest once you reach the end of the path. Now go back around the mines and head to where you pushed the block. Go forward to the central platform above the lake and use the context button to take out a flaming slingshot. Use it to shoot the TNT barrel all the way across the lake and he'll ignite, blowing away part of the jet. Go back and head up the slope, then head left and open the men's door again to find another lizard with a TNT barrel. Push him back to the right and down the slope, where he'll be stopped by the block. This time, push him to the left and you will find some green crates hopping around. When the crate on the inside turns around and moves away from you, push the lizard along behind it. The crate will turn back to the inside at the end of the path, letting you go past. Next up is a crate dropping down and being pulled up by a rope. Let the box drop down, then quickly push the lizard past it once it's pulled up. Now all you have to do is make it past another set of green crates and another metal crate. The lizard will sit down and rest at the end, so head back past the metal crates and the wooden crates until you reach the start. Head to the platform above the lake and take out the flame slingshot again. Shoot the lizard you placed and he'll ignite, blowing away another part of the plane. The jet will sink underwater, clearing the way. Put the slingshot away and then drop down to the stairs to the right of the central platform. Talk to the general, who punches Conker, knocking him out. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 13d The Assault CHPT8P04 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ Conker wakes up to find he's on a boat with gray squirrel soldiers, who are being shipped to battle. Conker has an army hat himself.
The boat stops and opens up, followed by the soldiers being blasted to their deaths by Tediz, evil teddy bears (they were mentioned by the professor, who created them, a long time ago). Conker jumps into the water as several more squirrels are killed while swimming. He manages to make it to the beach, where several more squirrels are killed. After the cut-scene ends, head to your right until you find an opening in the fence. Keep going around the fencing, as the place where the opening is alternates, so you'll have to wind around. If you keep moving, you have a good chance of avoiding the fire. You can also use the metal bar barriers until the Tediz stop firing for a second. +=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+ | 13e Sole Survivor CHPT8P05 | +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+ After going to the left for the final time, Conker will jump and meet a soldier, who explains that Tediz have been shooting them for ten hours. He asks you to clear out the machine gun nests up in the building. The gray squirrel dies, so press B to take out dual shotguns (I'm not sure if they're shotguns, as they fire a hell of a lot faster than the gun Gregg gives you). You can use the C buttons to move around like with the shotgun. The controls are the same, except you get a crosshair instead of a laser. If you need some chocolate, go forward. Now shoot the lock on the door next to the dead soldier and enter the building. Like with the zombies, you can only kill Tediz by shooting them in their heads. Go forward a bit after the cut-scene and a bunch of Tediz will pop out from behind boxes. Shoot the Tediz on the boxes on the left side, then shoot the ones ahead. Go forward more and shoot the last few Tediz, which opens the door ahead to reveal lasers. Crawl under the lasers, then go under the high parts of the next few lasers.
Grab the piece of chocolate behind the box, then take out your guns and turn the corner. Go forward, shooting the three Tediz behind the stacks of boxes. Kill the Tediz hiding behind the next corner and the one on top of the boxes, then tailspin at the right time through the lasers. Shoot a trio of Tediz that pop out, then head forward and shoot two Tediz that fall from the ceiling. Go under the lasers, shooting the two Tediz from the ceiling and one on the box. Shoot the next two Tediz that drop from the ceiling plus one behind a box, then approach the elevator. Some mines will come out of nowhere and chase Conker, who makes it into the elevator just in time. Hop over the lasers, getting the two chocolates, then shoot the Tediz that drop from the ceiling and two that come out of nowhere. Turn the corner and wait for the flamethrower to stop, then go past it and shoot the Tediz that come from behind the boxes. Head past two more flamethrowers and turn the corner, where you will have to kill four Tediz. Shoot another Tediz that drops from the ceiling, then head through the lasers and blast another ceiling Tediz. Shoot a couple box Tediz, then continue to go past a flamethrower and tailspin over some lasers. Pass two more flamethrowers, then shoot the five Tediz lurking behind the corner. Shoot two ceiling Tediz and you'll see what looks like a really complex laser pattern. It's actually three laser sets, so tailspin over the two lower ones and walk under the next two to the exit.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 13f Casualty Dept.                                                   CHPT8P06 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Two doctor Tediz are discussing what would happen if they gave the game twenty intelligent characters, but they get back into character once they spot you. Stay where you are and hold R to aim. When the Tediz pop out from behind the counters, shoot them (wait for them to get close to you).
Continue blasting Tediz until they stop popping out of the counters, then head to the other end of the room. Go to your right to meet a gray squirrel trapped in an electric chair, who tells you one switch will free him but the other is... Pull either switch on the wall and it will shock him. Now pull the other to open the door (tough luck for him). Head through. You'll find a rather large Tediz who is operating a machine gun chair. He spots you and starts firing, so quickly take cover behind a box. When he stops to reload (you'll see a cut-scene of this the first time), head over to him and climb the rope on your right. Jump to the big stack of boxes and head to the context pad at the end. Quickly take out the bazooka when he's almost out of shots and blast him when he's reloading, getting rid of him. Now jump down and get into the chair yourself. Look to your right and start blasting down the Tediz behind the boxes. Once they're all gone, look to your left and shoot behind more boxes. If you ever need to grab chocolate, press A to hop out of the chair and grab the chocolate around it. The reload time of the machine gun is significant compared to your normal weapons. You'll need to shoot two sets for each side, then two sets on both sides at the same time and a set on the right. Once you're done, a door at the end of the right side will open. Hop out of the chair, then head through the right side. You'll find a conveyor belt past the door that opened. Head to your left and you'll find a small path to the exit.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 13g Saving Private Rodent                                            CHPT8P07 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

You'll find a room in which a soldier is being held captive by a group of Tediz, who apparently plan to kill him. They shoot at him, but he uses his special suit to protect himself from the bullets.
Once the scene is over, go into aiming mode and blast the Tediz. There's only like five of them, so it's not too hard. Once they're all gone, approach the soldier, who Conker knows (his name is Rodent). Rodent explains his suit is made of titanium and makes him indestructible, so Conker can hide behind him for protection. Once the scene is over, head forward and you should see a pathway tunnel. Go through it and Conker will see bomber planes up in the sky. A bomb will immediately drop, so hide behind Rodent. You are forced to go first, despite what Conker said, which means you can't actually stay behind Rodent at all times. When you hear a bomb about to drop, quickly take cover behind Rodent to avoid getting hurt. Also watch for the shadows of the bombs. A few mines are also lurking in the tunnel, so when they approach, hide behind Rodent. It's a long way, but eventually you will make it to the end of the tunnel. Rodent says he'll go and wait by the door while Conker shoots the locks off. Head all the way to the right until you reach the end. Now look to your right and you'll see a big lake. From the dock, jump to the lifeboat and use the context button to take out a bazooka. Turn around and look for the giant door Rodent mentioned. There are four orange dots holding the lock, so blast them down and the door will open. Put away the bazooka and make a run for the door, avoiding the mass laser fire and Tediz. Head through the door. If you die after blasting the locks, you won't have to do it again.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 13h Chemical Warfare                                                 CHPT8P08 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Conker will discover a tank, which Rodent realizes is a "class 22", which he's always wanted to be in. Go forward and hop into the tank, which has horrible controls. Holding the control stick forward moves you in whatever direction you face.
Holding it left or right will make the tank spin, and holding it backward will make it move backward. You can use the C buttons to rotate the gun, press Z to fire, and R to aim. Use the up and down C buttons to zoom in and out while aiming. What you want to do is head across from where you found the tank and shoot the small door with the toxic symbol on it. Hop out of the tank (A) and go through the door. Go forward. Throughout this tunnel, you'll find mines and toxic waste puddles. Tailspin over the toxic waste, then tailspin back and lure the mine into the waste, where it will explode. Keep using this strategy until you reach the end of the tunnel, where you should pull the switch, making the toxic waste rise. Quickly rush back through the tunnel, tailspinning over the puddles. At the end, highjump to the exit.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 13i The Tower                                                        CHPT8P09 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Once outside again, go forward and hold down C to turn the camera. Now head left to discover that the giant door has opened. You should see an open area with a giant tower in the middle. The guns at the top of the tower are scanning around. Head forward and tailspin over the gap, then go up the metal bridge to find a context button. Use it to turn into an anvil and smash the bridge down. Continue along the narrow path like this, avoiding the Tediz's grenades and the laser fire. You'll have to smash down a couple bridges along the way. Look to your left to find the third bridge, which leads to three Tediz. After smashing the final bridge, quickly head back along the path. When you reach the end, go through the door and hop into the tank. Drive the tank back through the door. The grenades can hurt the tank, but not the tower blasts. Use the sniper to blast away Tediz from afar, which lets you kill them without being hurt.
At the start, aim at the tower and you should see a caution strip bar supporting it. Blast it, loosening the tower's support. Continue heading around the area, killing Tediz from as far away as you can. Keep looking at the tower to blast away its four supports. Some of the paths are extremely narrow. If you drive off the path and start slipping, quickly hop out of the tank to avoid falling. You can then head back to where you found the tank to jump back into it. You'll need to go to the three Tediz end area to blast the final support off. The tower will collapse, leaving behind a giant crater. Hop out of the tank and cross the wooden log, then jump into the crater.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 13j Little Girl                                                      CHPT8P10 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

After dropping down, cross the bridge to find a little girl in the center of the area, who is apparently trapped in the ground. Submarines are patrolling the water around the area trying to fire Teddyfunkin U47 intercontinental ballistic missiles at you. After the cut-scene, look around the area. There are three arch structures: one on the left, one on the right, and one in the back. Head to the one on the left (this is assuming you're facing the little girl) and grab the chocolate inside the arch if you need it. Go behind the arch and you'll see a context button. After the sub fires a missile, use the context pad to take out a bazooka. Quickly shoot the submarine. It will be destroyed, but is replaced by another. Keep shooting the subs, putting the bazooka away when the missile comes. You can also use your bazooka to shoot down the missiles before they kill you. Once they're gone, head to the arch in the back. This time, you'll have to deal with two submarines. I suggest trying to blast them both, then putting the bazooka away and hiding behind the arch. Once they're gone, head to the final arch, the one on the right.
Again, shoot two submarines until they're all gone. It can be a bit difficult, but just remember to take cover when necessary. Once they're all gone, head to the center and talk to the little girl. She seems happy that she's going to get to see mummy and daddy again.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 13k The Experiment                                                   CHPT8P11 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Rodent drops into the area with his tank and quickly warns Conker to stay away from the girl, but Conker doesn't listen. He tries to get the girl out of the trap, but she turns evil. The hatch opens and a giant Tediz robot comes out, picking up the little girl.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Boss - Tediz Experiment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : 7/10

Quickly head backward and jump into the tank where Rodent is, as you'll need it to compete with this thing (its slicer can kill Conker in one hit). Go into one of the arches while the robot goes to the center. He'll take out mini-guns and start blasting you with bullets. Just hide behind the arch until he has to reload. Get out of the arch quickly as **** (you want to be as close as possible to the outside of it while he's firing) and blast one of the guns away. If you're quick, you can blast away the other gun. The robot will come toward you, so blast the little girl out of its hand. He'll turn around to fetch her while she shows her apparent anger issues. A hatch on the back of the robot opens to reveal a red button, so blast the button to inflict damage. Drive behind the nearest arch and take cover. The robot will begin using magneto laser electroshockers. There's no reloading here, so simply drive out of the arch and blast the two lasers away before he hurts you. As the robot approaches you, blast the little girl away again.
When the robot turns to pick her up, blast the red button in his back again. Go behind the nearest arch to take cover. The Tediz robot will take out cannons to start firing missiles like the damn submarines did, only a shit load will blast towards you at a time. Wait for a break in the fire, then blast the robot's missile launchers away. Shoot the girl out of his hand, then blast his red button when he turns to pick her up. The robot collapses and is commanded to get up, but does not rise. A few mines come out of the red button, attach to the tank, then explode.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The tank has been reduced to rubble, as Conker sees. Conker survived, but Rodent... no, Rodent died. After Conker's tribute to Rodent, the little girl will press a button. A four minute, thirty second timer starts, giving you a limited amount of time to get the **** out.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 13l Countdown                                                        CHPT8P12 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Once you regain control of Conker, hop into the pit ahead. You land in the corridor just past the first few Tediz you killed after swiping the guns off the dead soldier. However, there are some crazily complicated laser patterns in here, making this almost inarguably the most difficult part of the game. Tailspin a very small distance over the first set, then jump over a laser. Now crawl under three horizontal lasers and tailspin through the small gap in the vertical ones. Jump through some vertical lasers so you land between them, then tailspin through the horizontal ones at the right moment. Take out your guns and kill the Tediz lurking behind the corner. This next laser pattern is difficult. If you don't manage to tailspin through it at the exact right moment, you'll die. I recommend crawling and purposely getting hit by the bottom laser.
You lose two pieces of chocolate, so you'll need at least three. Go forward and shoot the Tediz behind the corner, then hop onto the box. Crawl under the laser and drop down, then jump over the next laser. Head to the end of the hallway and crawl under the laser to reach the first room you saw Tediz in. Head to the end of the room and the door will be blocked by blue lasers. Jump onto the nearest box and take out a bazooka, then shoot all the Tediz in the room. I recommend you start with the one on the box. The last one will land in the blue lasers, destroying them, so head through the exit. You appear on the beach at the start. The timer resets to two minutes no matter how much time you had left when you went past the blue laser exit. Head forward through the beach until you reach the fence. Turn to your right and there will be a Tediz lurking on the beach. These Tediz fire missiles that kill you in one damn hit, and they're accurate as hell. Take out your bazooka and blast the Tediz, then head forward to the end of the row, where you should look on your right to find a Tediz in the corner (the one on your left on the other side of the fence will be destroyed by the lasers). After killing it, make a U-turn left and go down the path. Shoot the Tediz up ahead, then go forward. Eventually, about three or four Tediz will pop out of nowhere. There's no way you can kill them all, so blast the one directly ahead, then rush past the Tediz, making your movement as random as possible. When you reach the end, Conker will rush towards the shore and jump into the boat. The boat starts off, bringing them to safety.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 13m Peace at Last!                                                   CHPT8P13 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

The general takes Conker by his side and gives him a talk. War is a terrible thing, as Conker points out.
The general says it's sad that all these fine young men are sent off to do the dying while those guys who never see a single bullet whizz past their heads, those so-called generals, twenty miles behind enemy lines, tell them to go and die. Meanwhile, Rodent wakes up! He's not dead, he's alive!!! But... the countdown... The whole island blows up and the building collapses as Conker and the general watch. Rodent goes flying above the boat and the other soldiers celebrate his making it out. After the boat lands, head up the stairs and go forward to the exit. Leave the most difficult chapter forever.

==========================----------------------------==========================
|                        =~=~=~=~=~=~=~=~=~=~=~=~=~=~                          |
| (14)                          IX. Heist                             CHAPTER9 |
|                        ~=~=~=~=~=~=~=~=~=~=~=~=~=~=                          |
--------------------------============================--------------------------

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 14a The Windmill's Dead                                              CHPT9P01 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

The door behind you closes, so you can't ever return (not that you'd want to). Jump over the electric fence and head up the naughty path until you reach the honeycomb place. Tailspin down to the context button you used to kill the beetles and look ahead to find a piece of the windmill in flames. Highjump on top of it, then jump to the path leading up the windmill. There are no longer any worms, so just run up until you reach the top, where Rodent will come and show his respect for Conker. His speech "implies" (it's pretty obvious) that he crashed into the windmill and destroyed it. Once he's gone, head into the windmill and drop down. Go forward through a doorway at the bottom. If you ever looked near the poo cabin, you might have seen a ladder leading up to an exit. This leads to a bunch of signs telling Conker to leave. If you try to go across the rickety bridge, it will fall apart and you'll drop into the pit below.
This secret entrance leads to the other side. Go to your left and jump over the wall. Go over to Don Weaso, who tells Conker that he needs you to do a little job. Conker begrudgingly accepts, followed by Berri arriving, in a leather outfit just like Trinity and Neo's in the Matrix (the next part is an excellent and obvious parody of the Matrix). Weaso says he thought Berri didn't know Conker, but Berri explains that Conker is her boyfriend. Don Weaso explains that his escapades with the cavemen put him out of business, so he needs Conker and Berri to rob the Feral Reserve Bank in order to replenish his funds. Conker agrees only on the condition that he gets a leather outfit as well, which he does. Once the scene is over, head forward and through the bank's revolving door.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 14b Enter the Vertex                                                 CHPT9P02 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Conker will, in a Matrix fashion, enter the matrix, I mean the bank, and place his luggage on the checking system. The guard tells him to place any metallic objects in the tray, followed by the alarm going off. Conker kicks the guard and then shoots several others, followed by Berri shooting the last one and standing by Conker's side. Once the scene is over, head forward to the two pillars blocked off by lasers. Hide behind one of the two pillars to find a context button. Two bank guards will start shooting from behind the lasers. Just stay behind the pillar (make sure you're as close as possible to it) and you won't get hit. When the fire ceases, press B to make Conker spin through the air in slow-motion. As the bank guards are regrouping, shoot them. Now wait until the second set of two appears, then shoot them down while they regroup. Stop gliding through the air after they're gone and Berri will hop over the lasers, then turn them off. Progress over to the next pillar.
This time, you'll need to deal with three guards at once. If you screw up and get shot while gliding through the air, you might get your head blown off while getting back up and die. Get rid of two sets of three while they're regrouping. Conker will use Neo's classic bullet dodge move to avoid the guards' shots, then spin on his finger. Berri will throw a knife at him. Go forward to the next pillar and take cover behind it. The guards' bullets will blast away the pillar, so you'll have to be careful. When they regroup, press B and blast them all away. Get rid of the next set and Conker will jump into the air in slow motion, then kick the guard. Quickly head forward to the next pillar. There are four guards this time, meaning there's quite a good chance you'll get killed if you screw up. Get rid of the first set (you'll need some fast aiming this time), then kill the second once they regroup. Berri will slow-motion kick the guards into the lasers, getting rid of them. Head forward and to the elevator, which Berri and Conker will take to the second floor.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 14c The Vault                                                        CHPT9P03 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

There's a huge complex pattern of lasers, but Berri just turns them all off, opening the door ahead. Go forward and through the tunnel to reach the vault, where Conker will be awed by the huge amount of money. You'll need to smack these with your frying pan to get them. After collecting three packs of CASH ($1,000,000), Conker is a millionaire. As he celebrates, Berri notices that the Panther King is sitting on a ledge above. The king is happy that he's finally found a red squirrel, which Conker realizes is him. Conker doesn't recognize him at first, but soon realizes that the fabled Panther King in the stories his mum used to tell him is real.
Don Weaso comes to the king's side and is given his bounty, revealing that the whole thing was a setup. Berri thinks that she can intimidate the king, but instead, Don Weaso shoots her in the chest. Berri dies choking in Conker's arms. The king begins to feel a bit sick. The professor comes to his side and then goes to Conker, saying he's going to take him. The king's problem is getting worse. He suddenly can't breathe, and the professor says the incubation period is almost complete. Since Conker got rid of the Tediz, the latest addition to the professor's plans takes shape as a Xenomorph pops out of the king's chest, killing him. The professor is awed by the Xenomorph's beauty and decides to go into space, as he is fed up with the outdated castle and lack of technology. He commands the alien to attack Conker.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Final Boss - Xenomorph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Difficulty : 9.5/10

Go over to your right where the Panther King was sitting. On the wall of the ledge he was sitting on is a switch. Jump to pull it, opening up an air duct that sucks the air into space. The Panther King's body is sucked into space and a door opens to reveal a space suit. Go to the side of the room and hop into the space suit. The professor will be sucked into space followed by Berri. Head up to the Xenomorph and it will spin its tail around (it can also bite you). To avoid the tailspin, hold A to hover for a while. You can hold Z to block, which lets you avoid getting hurt by the bite. Hop over a tailspin, then press B a bunch of times to punch him until you uppercut him, stunning him. Fly over to the Bowser, I mean the Xenomorph's tail, which Conker will grab. Start spinning him around (move the joystick in slow circles or it won't work) until he stops scratching the floor. When the time is right, press B to throw him into the air duct.
He comes back out, though, so the battle isn't over. This time, the Xenomorph is faster and also can dodge your punches. There's no chocolate in the room, so you'll need to conserve energy. The Xenomorph also can jump over you or back up, so beware. Go near it and dodge two tailspins. Right after the second spin, punch it until it's stunned. Glide over it and grab its tail, then spin it. Once it stops scratching the ground, throw it into the air duct. It comes back once more, so the battle presses on. This time, the Xenomorph jumps over you a lot and is quite good at dodging your punches, so things get quite difficult. I recommend getting close to it. If it spins its tail, jump, but you actually want it to bite. If it bites, hold Z and block it, which stuns it for a second. Punch it until he's stunned, then glide over him and pick up the tail. Throw the Xenomorph into the air duct for the third time.

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 14d End Cutscene                                                     CHPT9P04 |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

It looks like it was over, but the Xenomorph leaps out of the air duct. Just as he is about to kill Conker, he freezes in midair. Conker jumps out of the spacesuit and realizes the game has locked up. He calls out to the developers, who start IMing him. Conker agrees to keep the lockup a secret if they help him out a bit. First, they transport him to a blank white background and give him a bunch of weapons. Conker chooses a double-barreled gun and a sword. The developers take Conker back to the throne room, where Conker unfreezes the game and uses the sword to decapitate the Xenomorph. Franky the pitchfork and the Panther King's guards enter the throne room. The guards decided to make Conker king, but Conker doesn't really want to be king. He realizes he forgot to ask the developers to bring Berri back to life, but they are gone now.
Characters from the game come back to visit Conker, including Marvin the mouse, the paint pot and brush, Rodent, and the red lady cog. The characters cheer "Long live the king!" So... there he is. King. King of all the land. He guesses you know the characters surrounding him now, because he certainly does. Conker may be king, and has all the money in the world, plus the land, but he doesn't really want it. All he wants is to go home with Berri and have a bottle of beer. It's true... the grass is always greener. You don't really know what you've got until it's gone. The credits roll. In the Cock and Plucker Bar Conker was in during the opening sequence of the game, Conker orders scotch, single malt, Speyside, no ice. The bartender notices Conker doesn't look too good, but he doesn't want to talk about it. He goes outside a bit drunk and sees it isn't looking too good out.

==========================----------------------------==========================
|                        =~=~=~=~=~=~=~=~=~=~=~=~=~=~                          |
| (15)                          Appendices                              APPEND |
|                        ~=~=~=~=~=~=~=~=~=~=~=~=~=~=                          |
--------------------------============================--------------------------

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 15a Tail Locations                                                       TAIL |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
II. Windy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Behind the NAUGHTY/NICE sign at the bottom of the slope at the start.

2. Inside the poo cabin, go to the rope that you use to jump to the top beam. From the top beam, instead of heading into the alcove, go to the other end of the beam, where you'll find a tail.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
III. Barn Boys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1.
In the cheese farm where you find the cheese for Marvin, head along the huge cheese ledges around the farm. Highjump off one of them towards the left side and a lightbulb will appear. Press B to turn into an anvil, destroying the cheese and revealing a tail.

2. After feeding Marvin the mouse enough cheese to kill him, jump onto the fatass bitch that falls off the block. Jump onto the pipe and tailspin to the barn ledge to the left. Keep heading around the sides of the building, jumping gaps, until you find a tail.

3. In the cave in the "moat" around the barn that is unlocked after defeating Haybot with cash in it, there is a tail.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IV. Bats Tower
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Along the river to the bullfish, there are waterfalls on the left wall. Behind the first one you'll find a tail.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
V. Sloprano
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- After going into the air duct that the Great Mighty Poo is flushed into, you'll find a pool of water. Around the pool is a tail.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
VI. Uga Buga
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. On the back of the dinosaur idol. After sacrificing the baby dinosaur, you can use the monk that comes out of the idol's mouth to get on top of it.

2. After completing the chapter and getting to Uga Buga's girlfriend, head into the opening behind her. You'll have to cross a bridge leading to a long path. You would normally go right to leave the chapter, but instead, highjump on top of the entrance, then tailspin to the left. Keep heading around the left until you find the tail, which is worth five tails.
+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 15b Cheat Codes                                                        CHEATS |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

BOVRILLBULLETHOLE - 50 Lives
WELDERSBENCH      - Unlocks all chapters
PRINCEALBERT      - Unlocks Barn Boys chapter
CLAMPIRATE        - Unlocks Bats Tower chapter
ANCHOVYBAY        - Unlocks Sloprano chapter
MONKEYSCHIN       - Unlocks Uga Buga chapter
SPANIELSEARS      - Unlocks Spooky chapter
BEELZEBUBSBUM     - Unlocks It's War! chapter
CHOCOLATESTARFISH - Unlocks Heist chapter

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 15c Legal Disclaimer                                                   LEGALD |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

Ethan Alwaise

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 15d Contact Information                                              CONTACTI |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

+=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~+
| 15e Credits                                                            CREDIT |
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=+

wikipedia - The wikipedia website had the name I used for the final boss.

bananagirl - I got the story from her FAQ as I lost my instruction manual. Her FAQ also informed me of the use of the down C button.

Dallas - His FAQ was helpful for the weapons section, Clang's Lair, and a few strategies in different sections.

Nemesis/Pyro Vesten - Their guide made me realize the Haybot trick.

coldryon/HoOteYhOo/Starky27 - They contributed the cheats I put in.
I absolutely love the idea of Smart Clients. The more exposure I have to them, the more I see the vast potential they offer. In my first article on Smart Clients, I had a personal goal of tackling the main tenets of Smart Client design. There are two aspects of Smart Client design I touched on in that article but wanted to explore in greater depth. The topics in question are the purpose of a rich (WinForms) UI and the role of local resources.
Like the previous article, all the source for this project is based on the 2.0 framework.
Every time you go to a FedEx or UPS store, they have 5 or 6 clocks on the wall showing the time in major cities around the world. I thought it would be fun to put together a Smart Client using the same notion of showing the current time in multiple time zones. This Smart Client uses a third-party web service. The web service has two methods: one to get a list of time zones, and one to return the current time in a given time zone. This remote data source is synchronized with US Naval atomic clocks and gives the user the ability to query any time zone, not just the 24 conventional areas.
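As a rough sketch of what consuming such a service looks like from a 2.0-era client, the two calls map naturally onto a generated web reference. The proxy class and method names below are assumptions for illustration, not the provider's actual WSDL:

```csharp
// Hypothetical proxy generated by "Add Web Reference" in Visual Studio 2005.
// WorldTimeService, GetTimeZones, and GetTimeInTimeZone are illustrative names.
WorldTimeService service = new WorldTimeService();

// One call returns the list of available time zones...
string[] zones = service.GetTimeZones();

// ...and a second call returns the current time for a given zone.
DateTime arizonaTime = service.GetTimeInTimeZone("US/Arizona");
Console.WriteLine("Current time in Arizona: {0:T}", arizonaTime);
```

In practice you would bind the zone list to a combo box so the user can pick any zone the service knows about, not just the familiar 24.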
The advantage of using a WinForms UI for Smart Client development is that you have all the power of tools like GDI+ which add flare to your UI. In the case of this application I created two custom controls to create a rich environment. The first control is a custom panel that displays a caption with a gradient background (similar to the FotoVision control). There is no inherent advantage to using a custom control, but it makes the UI much more appealing. And yes, aesthetics and usability are a concern with Smart Clients.
The second control is a Clock control that�s fully customizable. The clock allows for all of its display properties to be configured at design or run time. For this application I�ve set up a small �design environment� for a user to create and customize a clock to their liking. The clock is tied to the time in a given time zone, displayed on the main form, and then it just keeps ticking. Of course, since the focus is on a rich UI, the clock has tooltip information, context menus, etc� that set it apart form a web-based alternative.
The other advantage of Smart Clients is their ability to use local resources (unlike browser-based applications). The first local resource I�m using is simply the local system time. The problem with calling a web service to get time in a time zone is that you have no idea how long the request will take. A slow network connection or heavy traffic on the site may cause latency on the return from the web service.
For example, let�s say you�re on the East coast, it�s 4:00:00 pm, and you�re calling the web service to get the time in Arizona. (You�re doing this because Arizona doesn�t use daylight savings time and you can�t remember if they�re 1 or 2 hours ahead.) If there is any lag in the web service, say 15 seconds, you still get back a value of 5:00:00. The problem is that now your local time is 4:00:15. That means the time returned by the web service is off by 15 seconds. It wouldn�t make sense if each clock you add showed a different value for the seconds.
I�m using the local system time to fix any latency problems by adjusting the returned time to reflect the current seconds on the local machine. It�s not complex by any means, but it�s a great example of why a Smart Client shouldn�t rely purely on remote resources.
public DateTime GetCurrentTime(string timeZone) { // Call the web service to get the time in a given time zone WorldTime.WebClock worldTime = new WorldTime.WebClock(); worldTime.GetTime(timeZone); // It may take several seconds to get a response from // the web service. This adjusts the returned time to reflect // the seconds on the local machine. DateTime adjustedTime = new DateTime(zoneTime.Year, zoneTime.Month, zoneTime.Day, zoneTime.Hour, zoneTime.Minute, DateTime.Now.Second); return adjustedTime; }
The next local resource is user settings. The app allows a user to add an unlimited number of clocks. When the application is exiting, a custom Settings object persists the clock layout and time zone settings. Those settings are loaded the next time the app starts and the user�s clocks are recreated. Because the difference between the local time and the clock�s time zone is persisted, it�s not necessary to call the web service again to check the times. (Is the advantage of using local resources starting to make sense? The point here is not additional calls out to the internet.)
The settings are stored using the new
ApplicationSettingsBase class. I created a custom class to hold my settings, a generic
List<> that contains the custom Settings class.
// Class to hold setting information public class ClockSetting { private int _someValue;public ClockSetting() { } public int SomeValue { get{ return _someValue; } set{ _someValue = value; } } } // This class inherits from ApplicationSettingsBase. // The only property being persisted // is a collection of ClockSetting objects. // This allows the settings for each clock // created by the user to be stored and loaded with minimal coding. public class TimePieceSettings : ApplicationSettingsBase { [UserScopedSetting()] public List<ClockSetting> ClockSettings { get { return (List<ClockSetting>)this["ClockSettings"]; } set { this["ClockSettings"] = (List<ClockSetting>)value; } } }
// Loading the settings _settings = new TimePieceSettings(); foreach (ClockSetting setting in _settings.ClockSettings) { AddClock(setting); }
//Saving the settings foreach(Clock clock in _layoutPanel.Controls) { ClockSetting instance = CreateSetting(clock); _settings.ClockSettings.Add(instance); } _settings.Save();
There�s almost nothing to it. Less than 5 lines of code for loading or saving custom user settings locally.
Well, there you have it. Another Smart Client example to experiment with. I hope I�ve shed some light on the question of why local resources are an important part of Smart Client development, and what advantages rich UIs have.
Enjoy!
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/smart/TimePiece.aspx | crawl-002 | refinedweb | 1,040 | 63.09 |
Here's the concept.
class TheCompositeThing(Builder): attribute1 = SomeItem("arg0") attribute2 = AnotherItem("arg1") more_attributes = MoreItems("more args")
The idea is that when we create an instance of TheCompositeThing, inWe've looked at three ways to build composite objects:
- As independent attributes with an flexible but terse implementation.
- As attributes during __init__() using sequential code that doesn't assure independence.
- As properties using wordy code.
I find that The Depedency Builder is more like a configuration item than it is like code. In Python, particularly, we can change the class freely without the agony of a rebuild.
We might do something like this:
This allows us to lazily build the composite object by stepping through the dictionary defined at the class level and filling in values for each item. This could be done via __getattr__() also.
A Build ImplementationWe're reluctant to provide a concrete implementation for the above examples because it could go anywhere. It could be done eagerly or lazily. One choice for a lazy implementation is to use a substitute() method. Another choice is to use the __init__() method.
We might do something like this:
def substitute(self, config): class_dict= self.__class__.__dict__ for name in class_dict: if name.startswith('__') and name.endswith('__'): continue setattr(self, name, class_dict[name].get(config))
This allows us to lazily build the composite object by stepping through the dictionary defined at the class level and filling in values for each item. This could be done via __getattr__() also. | http://slott-softwarearchitect.blogspot.com/2016/03/the-composite-builder-pattern-example.html | CC-MAIN-2018-51 | refinedweb | 248 | 58.58 |
Overview
This is part two of a five-part series of tutorials about making games with Python 3 and Pygame. In part one, I introduced the series, covered the basics of game programming, introduced Pygame, and examined the game architecture.
In this part, we'll look at the
TextObject class used to render text on the screen. We'll create the main window, including a background image, and then we'll learn how to draw objects like bricks, the ball, and the paddle.
The TextObject Class
The
TextObject class is designed to display text on the screen. The case could be made from a design point of view that it should be a sub-class of
GameObject as it is also a visual object and you may want to move it. But I didn't want to introduce deep class hierarchies, when all the text that Breakout displays on the screen stays put.
The
TextObject creates a font object. It renders the text into a separate text surface that is then blitted (rendered) onto the main surface. An interesting aspect of the
TextObject is that it doesn't have any fixed text. Instead, it gets a function called
text_func() that is called every time it renders.
This allows us to update the display of lives and score in Breakout just by providing a function that returns the current lives and current score, instead of keeping track of which text objects display lives and score and updating their text every time they change. This is a neat trick from functional programming, and for larger games it can help you keep everything nice and tidy.
import pygame class TextObject: def __init__(self, x, y, text_func, color, font_name, font_size): self.pos = (x, y) self.text_func = text_func self.color = color self.font = pygame.font.SysFont(font_name, font_size) self.bounds = self.get_surface(text_func()) def draw(self, surface, centralized=False): text_surface, self.bounds = \ self.get_surface(self.text_func()) if centralized: pos = (self.pos[0] - self.bounds.width // 2, self.pos[1]) else: pos = self.pos surface.blit(text_surface, pos) def get_surface(self, text): text_surface = self.font.render(text, False, self.color) return text_surface, text_surface.get_rect() def update(self): pass
Creating the Main Window
Pygame games run in windows. You can make them run fullscreen too. Here is how you display an empty Pygame window. You can already see many of the elements I discussed earlier. First, Pygame
init() is called, and then the main drawing surface and the clock are created.
Next is the main loop, which consistently fills the screen with uniform gray and calls the clock
tick() method with the frame rate.
import pygame pygame.init() screen = pygame.display.set_mode((800, 600)) clock = pygame.time.Clock() while True: screen.fill((192, 192, 192)) pygame.display.update() clock.tick(60)
Using a Background Image
Usually, a uniform color background is not very exciting. Pygame does images very well. For Breakout, I splurged and went for a fancy real space image from NASA. The code is very similar. First, just before the main loop, it loads the background image using the
pygame.image.load() function. Then, instead of filling the screen with color, it "blits" (copy the bits) the image to the screen at position (0,0). The effect is that the image is displayed on the screen.
import pygame pygame.init() screen = pygame.display.set_mode((800, 600)) clock = pygame.time.Clock() background_image = pygame.image.load('images/background.jpg') while True: screen.blit(background_image, (0, 0)) pygame.display.update() clock.tick(60)
Drawing Shapes
Pygame can draw anything. The
pygame.draw module has functions for drawing the following shapes:
- rect
- polygon
- circle
- ellipse
- arc
- line
- lines
- anti-aliased line
- anti-aliased lines
In Breakout, all the objects (except the text) are just shapes. Let's look at the draw() method of the various Breakout objects.
Drawing Bricks
Bricks are bricks. They are just rectangles. Pygame provides the
pygame.draw.rect() function, which takes a surface, a color, and a Rect object (left, top, width and height) and renders a rectangle. If the optional width parameter is greater than zero, it draws the outline. If the width is zero (which is the default), it draws a solid rectangle.
Note that the
Brick class is a subclass of
GameObject and gets all its properties, but it also has a color it manages itself (because there may be game objects that don't have a single color). Ignore the
special_effect field for now.
import pygame from game_object import GameObject class Brick(GameObject): def __init__(self, x, y, w, h, color, special_effect=None): GameObject.__init__(self, x, y, w, h) self.color = color self.special_effect = special_effect def draw(self, surface): pygame.draw.rect(surface, self.color, self.bounds)
Drawing the Ball
The ball in Breakout is just a circle. Pygame provides the
pygame.draw.circle() function that takes the color, center, radius and the options width parameter that defaults to zero. As with the
pygame.draw.rect() function, if the width is zero then a solid circle is drawn. The Ball is also a derived class of GameObject.
Since the ball is always moving (unlike the bricks), it also has a speed that is passed on the
GameObject base class to be managed. The Ball class has a little twist because its x and y parameters denote its center, while the x and y parameters passed to the
GameObject base class are the top-left corner of the bounding box. To convert from center to top-left corner, all it takes is subtracting the radius.
import pygame from game_object import GameObject class Ball(GameObject): def __init__(self, x, y, r, color, speed): GameObject.__init__(self, x - r, y - r, r * 2, r * 2, speed) self.radius = r self.diameter = r * 2 self.color = color def draw(self, surface): pygame.draw.circle(surface, self.color, self.center, self.radius)
Drawing the Paddle
The paddle is yet another rectangle that is indeed moving left and right in response to the player's pressing the arrow keys. That means that the position of the paddle may change from one frame to the next, but as far as drawing goes, it is just a rectangle that has to be rendered at the current position, whatever that is. Here is the relevant code:
import pygame import config as c from game_object import GameObject class Paddle(GameObject): def __init__(self, x, y, w, h, color, offset): GameObject.__init__(self, x, y, w, h) self.color = color self.offset = offset self.moving_left = False self.moving_right = False def draw(self, surface): pygame.draw.rect(surface, self.color, self.bounds)
Conclusion
In this part, you've learned about the TextObject class and how to render text on the screen. You've also got familiar with drawing objects like bricks, the ball, and the paddle.
In the meantime, remember we have plenty of Python content available for sale and for study in the Envato Market.
In part three, you'll see how event handling works and how Pygame lets you intercept and react to events like key presses, mouse movement, and mouse clicks. Then, we'll cover gameplay topics like moving the ball, setting the ball's speed, and moving the paddle.
><< | https://code.tutsplus.com/tutorials/building-games-with-python-3-and-pygame-part-2--cms-30082 | CC-MAIN-2021-04 | refinedweb | 1,201 | 68.06 |
Details
Description
AjaxFormComponentUpdatingBehavior works with Firefox, but doesn't work when using IE7.
I also observed similar problems when using AjaxEventBehavior as well.
The error returned via the ajax debug console was: "Could not locate ajax transport. Your browser does not support the required XMLHttpRequest object or wicket could not gain access to it".
Below is a simple example that should demonstrate the problem.
Example.java
============================================================
package example.page;
import org.apache.wicket.ajax.AjaxRequestTarget;
import org.apache.wicket.ajax.form.AjaxFormComponentUpdatingBehavior;
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.markup.html.form.Form;
import org.apache.wicket.markup.html.form.TextField;
import org.apache.wicket.model.CompoundPropertyModel;
import org.apache.wicket.model.PropertyModel;
public class Example extends WebPage {
private int age;
public Example() {
Form form = new Form("form", new CompoundPropertyModel(this));
add(form);
final Label label = new Label("label", new PropertyModel(this, "age"));
label.setOutputMarkupId(true);
add(label);
final TextField age = new TextField("age");
age.add(new AjaxFormComponentUpdatingBehavior("onblur") {
/**
*/
private static final long serialVersionUID = 1L;
protected void onUpdate(AjaxRequestTarget target){ System.out.println("onUpdate triggered"); target.addComponent(label); }
});
form.add(age);
}
public int getAge(){ return age; }
public void setAge(int age){ this.age = age; }
}
Example.html
==================================================
<html>
<head>
<title>Test</title>
</head>
<body>
<div>
<label wicket:My Label Goes Here</label>
</div>
<form wicket:
<div>Age: <input type="text" wicket:</div>
<div>
<input type="submit" value="Submit" />
</div>
</form>
</body>
</html>
Activity
- All
- Work Log
- History
- Activity
- Transitions
Oh, don't get me wrong. I'm not saying the 'file://' issue is anywhere close to the showstopper in this case. I personally couldn't imagine a scenario where I'd use that in a production application.
The issue is that right now we use either the native XMLHttpRequest or the ActiveX implementation. I'm saying that 'either/or' isn't the right way to go. We need to try to use the best one for the browser (based on capability detection, not browser sniffing), then fall back to the other if our first choice isn't available.
I referenced the jQuery example because it's simple and it works. As of now the code checked into subversion doesn't work on a large number of computers (as posted above, anywhere with restricted IE settings). On top of that, our stuff used to work until a fairly recent change that added the 'Wicket.Browser.isIE7() && ' between 1.3.3 and current trunk (I noticed it in 1.4-M1/M2).
I say at least revert back to the 1.3.3 code. That didn't appear to have any compatibility issues and the changed code doesn't appear to have any benefits.
Just in case that link dies later, the issue was that the 'file://' protocol with the IE7 native XMLHttpRequest doesn't work (and does work with the ActiveX object).
So I guess at this point, what reason is there to use the native implementation on IE7 other than if ActiveX is disabled? I don't see any benefits.
Brian: The comment you made about JQuery is interesting. I have checked the jquery svn () and it points to this issue:
Good info. Thanks Lonnie.
I was concerned about compatibility of the MSXML object, so I looked this up:
To keep it here for posterity:
-
- isn.).
This is what i currently do. I followed microsofts documentation on how to fallback to using a ActiveX object if native XMLHTTP is disabled () and simply added the following to wicket-ajax.js. You just need to make sure this snippet is inserted before Wicket.Ajax = { ... is called.
/**
- Check to see if the native XMLHttpRequest has been disabled for IE7,
- if so fallback to using the ActiveXObject.
*/
if (Wicket.Browser.isIE7() && !window.XMLHttpRequest) {
window.XMLHttpRequest = function()Unknown macro: { try { return new ActiveXObject('MSXML2.XMLHTTP.3.0'); } catch (ex) { return null; } }
}
I have been using this for awhile on our production server and have had no problems. I hope this can be incorporated into 1.3.4.
So what about something like:
Wicket.Ajax = {
// Creates a new instance of a XmlHttpRequest
createTransport: function() {
var transport = null;
if (window.ActiveXObject){ transport = new ActiveXObject("Microsoft.XMLHTTP"); }
if (transport == null && window.XMLHttpRequest){ transport = new XMLHttpRequest(); }
if (transport == null){ Wicket.Log.error("Could not locate ajax transport. Your browser does not support the required XMLHttpRequest object or wicket could not gain access to it."); }
return transport;
},
Still tries to default to ActiveX, but if it's not available reverts to native?
Unfortunately the code that fixed Jarmar's IE7 broke ours.
We're on a similarly locked down configuration of IE 7 within our organization. In this case this is going to break in any large organization that has ActiveX objects disabled.
Removing 'Wicket.Browser.isIELessThan7() &&' (as in Wicket 1.3.3) fixes the issue.
Because of how a lot of organizations lock down IE, it really should try as many options as possible before failing. For instance, here's how a few ajax libraries do it:
MooTools:
return $try(function(){ return new XMLHttpRequest(); }
, function(){ return new ActiveXObject('MSXML2.XMLHTTP'); }
);
- '$try' looks weird because it's a cross browser abstraction
- this will try to use a native XMLHttpRequest first, if that fails in any way it will revert to the ActiveXObject
JQuery:
// Create the request object; Microsoft failed to properly
// implement the XMLHttpRequest in IE7, so we use the ActiveXObject when it is available
var xhr = window.ActiveXObject ? new ActiveXObject("Microsoft.XMLHTTP") : new XMLHttpRequest();
- those comments are from the jQuery devs... maybe they know something we don't? They get a lot more eyes on their ajax stuff than we do...
I determined that the cause of my issue was related to a browser setting. Native XMLHttp was not enabled.
Unfortunately, this option is disabled for everyone in my company by default. Would it be possible to try and use
native XMLHttp for IE7 by default, but if the XMLHttpRequest object is null fallback to the old approach? The code in question
is in wicket-ajax.js (see snippet below).
/**
- The Ajax class handles low level details of creating and pooling XmlHttpRequest objects,
- as well as registering and execution of pre-call, post-call and failure handlers.
*/
Wicket.Ajax = {
// Creates a new instance of a XmlHttpRequest
createTransport: function() {
var transport = null;
if (Wicket.Browser.isIELessThan7() && window.ActiveXObject) { transport = new ActiveXObject("Microsoft.XMLHTTP"); }
else if (window.XMLHttpRequest){ transport = new XMLHttpRequest(); }
if (transport == null){ Wicket.Log.error("Could not locate ajax transport. Your browser does not support the required XMLHttpRequest object or wicket could not gain access to it."); }
return transport;
},
i think that you have security settings or policy settings in IE that is a bit to restrictive
all examples i test for IE7 work fine.
It should work now, but it of cause needs to be tested on other browsers to make sure that we haven't introduced some new weirdness. | https://issues.apache.org/jira/browse/WICKET-1646 | CC-MAIN-2016-26 | refinedweb | 1,153 | 50.33 |
I'm currently building a GPS logging geiger counter, using a raspberry pi zero w and a Ublox Neo 6M GPS module.
Connected via serial, not USB
Interfacing is happening fine, but I have a weird issue that I cannot find any reference to anywhere.
My waypoints are "clustering", and by that I mean that when I log points, they don't change locations from the first fix of that session
even when i move kilometers from that point, and it seeing up to 9 satellites at times.
The memory battery on the board is shot and I have spares to change that over later.
I'm using pynmea2 to glean the gps data from the NMEA serial data at 9600 baud.
Originally I had though that using the latitude output gave me a decimal degrees output
however when I saw this anomaly I realized that maybe I was just getting the degrees component as a decimal, and not the rest of the location.
I would have to move a very long way to see the location change
So I used the dm_to_sd function of pynmea2 to translate it to a float for decimal degrees.
on the surface and for the first few static points this was spot on correct.
so I went for a walk around the block, and all my coords log within a 20sq/m area - any identical in reading.
despite the fact that I walked almost 1000 meters.
What do you guys this? is this a programming issue, a hardware problem, or an idiot (Me) problem?
I'm using python to do the logging and here;s the code:
Code: Select all
import threading import time import os) print (msg.altitude_units) serialPort = serial.Serial("/dev/ttyAMA0", 9600, timeout=0.5) while True: str = serialPort.readline() parseGPS(str) | https://www.raspberrypi.org/forums/viewtopic.php?p=1539659 | CC-MAIN-2019-51 | refinedweb | 301 | 70.23 |
Tutorial 1d: Introducing randomness¶
In the previous part of the tutorial, all the neurons start at the same values and proceed deterministically, so they all spike at exactly the same times. In this part, we introduce some randomness by initialising all the membrane potentials to uniform random values between the reset and threshold values.
We start as before:
from brian import * tau = 20 * msecond # membrane time constant Vt = -50 * mvolt # spike threshold Vr = -60 * mvolt # reset value El = -49 * mvolt # resting potential (same as the reset) G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt', threshold=Vt, reset=Vr) M = SpikeMonitor(G)
But before we run the simulation, we set the values of the
membrane potentials directly. The notation
G.V refers
to the array of values for the variable
V in group
G. In
our case, this is an array of length 40. We set its values
by generating an array of random numbers using Brian’s
rand function. The syntax is
rand(size) generates an
array of length
size consisting of uniformly distributed
random numbers in the interval 0, 1.
G.V = Vr + rand(40) * (Vt - Vr)
And now we run the simulation as before.
run(1 * second) print M.nspikes
But this time we get a varying number of spikes each time we run it, roughly between 800 and 850 spikes. In the next part of this tutorial, we introduce a bit more interest into this network by connecting the neurons together. | https://brian.readthedocs.io/en/1.4.3/tutorial_1d_introducing_randomness.html | CC-MAIN-2019-30 | refinedweb | 247 | 51.89 |
etcd4s alternatives and similar packages
Based on the "Database" category.
Alternatively, view etcd4s alternatives based on common mentions on social networks and blogs.
Slick9.4 8.4 etcd4s VS SlickScala Language Integrated Connection Kit. Slick is a modern database query and access library for Scala
Elastic4s9.2 8.9 etcd4s VS Elastic4sElasticsearch Scala Client - Reactive, Non Blocking, Type Safe, HTTP Client
Quill9.1 8.3 etcd4s VS QuillCompile-time Language Integrated Queries for Scala
doobie9.1 8.9 etcd4s VS doobieFunctional JDBC layer for Scala.
PostgreSQL and MySQL asyncAsync database drivers to talk to PostgreSQL and MySQL in Scala.
ScalikeJDBC8.6 8.9 etcd4s VS ScalikeJDBCA tidy SQL-based DB access library for Scala developers. This library naturally wraps JDBC APIs and provides you easy-to-use APIs.
Phantom8.5 1.7 etcd4s VS PhantomSchema safe, type-safe, reactive Scala driver for Cassandra/Datastax Enterprise
scala-redis8.4 0.6 etcd4s VS scala-redisA scala library for connecting to a redis server, or a cluster of redis nodes using consistent hashing on the client side.
ReactiveMongo8.3 6.8 etcd4s VS ReactiveMongo:leaves: Non-blocking, Reactive MongoDB Driver for Scala
rediscala7.9 0.0 etcd4s VS rediscalaNon-blocking, Reactive Redis driver for Scala (with Sentinel support)
Slick-pg7.9 5.3 etcd4s VS Slick-pgSlick extensions for PostgreSQL
Squeryl7.4 7.3 etcd4s VS SquerylA Scala DSL for talking with databases with minimum verbosity and maximum type safety
Casbah7.4 0.0 etcd4s VS CasbahCasbah is now officially end-of-life (EOL).
Salat7.2 0.0 etcd4s VS SalatSalat is a simple serialization library for case classes.
gremlin-scala6.8 4.8 etcd4s VS gremlin-scalaScala wrapper for Apache TinkerPop 3 Graph DSL
mongo-scala-driver6.8 0.6 etcd4s VS mongo-scala-driverA modern idiomatic MongoDB Scala Driver.
Scanamo6.4 9.1 etcd4s VS ScanamoSimpler DynamoDB access for Scala
Activate5.9 0.0 etcd4s VS ActivateAbandoned: Pluggable persistence in Scala
Scala ActiveRecord5.9 3.5 etcd4s VS Scala ActiveRecordActiveRecord-like ORM library for Scala
Anorm5.4 7.7 etcd4s VS AnormThe Anorm database library
Sorm5.3 0.0 etcd4s VS SormA functional boilerplate-free Scala ORM
Relate5.1 1.8 etcd4s VS RelatePerformant database access in Scala
SwayDB4.9 9.6 etcd4s VS SwayDBNon-blocking persistent & in-memory key-value storage engine for JVM.
scredis4.8 0.0 etcd4s VS scredisNon-blocking, ultra-fast Scala Redis client built on top of Akka IO, used in production at Livestream
Pulsar4s4.7 7.0 etcd4s VS Pulsar4sIdiomatic, typesafe, and reactive Scala client for Apache Pulsar
Scala-Forklift4.6 0.0 etcd4s VS Scala-ForkliftType-safe data migration tool for Slick, Git and beyond.
AnormCypher4.5 0.0 etcd4s VS AnormCypherNeo4j Scala library based on Anorm in the Play Framework
Troy4.2 0.0 etcd4s VS TroyType-safe and Schema-safe Scala wrapper for Cassandra driver
Scruid4.0 0.1 etcd4s VS ScruidScala + Druid: Scruid. A library that allows you to compose queries in Scala, and parse the result back into typesafe classes.
neotypes3.9 9.2 etcd4s VS neotypesScala lightweight, type-safe, asynchronous driver for neo4j
rethink-scala3.8 0.0 etcd4s VS rethink-scalaScala Driver for RethinkDB
Shade3.6 0.0 etcd4s VS ShadeMemcached client for Scala
Clickhouse-scala-clientClickhouse Scala Client with Reactive Streams support
longevity3.4 0.0 etcd4s VS longevityA Persistence Framework for Scala and NoSQL
scala-sql3.2 0.4 etcd4s VS scala-sqlscala SQL api
Tepkin3.1 0.0 etcd4s VS TepkinReactive MongoDB Driver for Scala built on top of Akka IO and Akka Streams.
Morpheus3.0 0.0 etcd4s VS MorpheusReactive type-safe Scala driver for SQL databases
CouchDB-Scala3.0 0.0 etcd4s VS CouchDB-ScalaA purely functional Scala client for CouchDB
laserdisc2.9 9.1 etcd4s VS laserdiscA Future-free Fs2 native pure FP Redis client
ReactiveCouchbase2.9 0.0 etcd4s VS ReactiveCouchbasePlay 2 plugin for ReactiveCouchbase
ScalaRelational2.8 0.0 etcd4s VS ScalaRelationalType-Safe framework for defining, modifying, and querying SQL databases
ReactiveNeo2.7 0.0 etcd4s VS ReactiveNeo[DISCONTINUED] Reactive type-safe Scala driver for Neo4J
Couchbase2.6 9.3 etcd4s VS CouchbaseThe Couchbase Monorepo for JVM Clients: Java, Scala, io-core…
lucene4s2.6 4.1 etcd4s VS lucene4sLight-weight convenience wrapper around Lucene to simplify complex tasks and add Scala sugar.
Memcontinuationed2.4 0.0 etcd4s VS MemcontinuationedMemcached client for Scala
d4s2.3 8.9 etcd4s VS d4sDynamo DB Database Done Scala-way
scala-migrations2.1 0.0 etcd4s VS scala-migrationsDatabase migrations written in Scala
neo4akka1.8 0.0 etcd4s VS neo4akkaNeo4j Scala client using Akka-Http
GCP Datastore Akka Persistence Pluginakka-persistence-gcp-datastore is a journal and snapshot store plugin for akka-persistence using google cloud firestore in datastore mode.
MapperDao1.4 0.0 etcd4s VS MapperDaoA Scala ORM library
Scout APM - Leading-edge performance monitoring starting at $39/month
Do you think we are missing an alternative of etcd4s or a related project?
Popular Comparisons
README
etcd4s
A Scala etcd client implementing V3 API using gRPC and ScalaPB with optional Akka Stream support. This project is in beta stage with basic test coverage and usable APIs.
Overview
This repo is a client library of etcd implementing V3 APIs using gRPC under the hood with optional Akka Stream support for stream APIs. This library implement the complete set of the APIs in the V3 protoal. More information about the APIs can be found here:
Note that this library do not support gRPC json gateway and use raw gRPC call instead (underlying is java-grpc). This project cross build against Scala 2.11, 2.12 and 2.13 and also tested against etcd 3.2.x, 3.3.x but fail under 3.4.x.
Getting Started
The core lib
libraryDependencies += "com.github.mingchuno" %% "etcd4s-core" % "0.3.0"
To include akka stream support for stream API
libraryDependencies += "com.github.mingchuno" %% "etcd4s-akka-stream" % "0.3.0"
Usage
import org.etcd4s.{Etcd4sClientConfig, Etcd4sClient} import org.etcd4s.implicits._ import org.etcd4s.formats._ import org.etcd4s.pb.etcdserverpb._ import scala.concurrent.ExecutionContext.Implicits.global // create the client val config = Etcd4sClientConfig( address = "127.0.0.1", port = 2379 ) val client = Etcd4sClient.newClient(config) // set a key client.setKey("foo", "bar") // return a Future // get a key client.getKey("foo").foreach { result => assert(result == Some("bar")) } // delete a key client.deleteKey("foo").foreach { result => assert(result == 1) } // set more key client.setKey("foo/bar", "Hello") client.setKey("foo/baz", "World") // get keys with range client.getRange("foo/").foreach { result => assert(result.count == 2) } // remember to shutdown the client client.shutdown()
The above is wrapper for simplified APIs. If you want to access all underlying APIs with full options. You can use the corresponding api instance to have more control.
client.kvApi.range(...) client.kvApi.put(...) client.leaseApi.leaseGrant(...) client.electionApi.leader(...)
If you want the Akka Stream support for the stream APIs, you should add the etcd4s-akka-stream dependency to your build.sbt.
import org.etcd4s.akkasupport._
import org.etcd4s.implicits._
import org.etcd4s.pb.etcdserverpb._
import akka.NotUsed

// assume you have the implicit values and client needed in scope
val flow: Flow[WatchRequest, WatchResponse, NotUsed] = client.watchApi.watchFlow

val request: WatchRequest = WatchRequest().withCreateRequest(WatchCreateRequest().withKey("foo"))

Source.single(request)
  .via(flow)
  .runForeach { resp =>
    println(resp)
  }
More example usage can be found under the test directory in the repo.
Development
Requirements
- Java 8+, Scala 2.12.x+, sbt and docker
# to start a background etcd for development
docker-compose up -d
How to start?
Simple! Just
sbt test
Publish
This is to remind me how to publish; I may switch to sbt-release later.
- make sure you have ~/.sbt/gpg/ ready with pub/sec key pairs
- make sure you have ~/.sbt/1.0/sonatype.sbt ready with credentials
sbt "+clean" "+compile"
sbt "+publishSigned"
sbt sonatypeReleaseAll | https://scala.libhunt.com/etcd4s-alternatives | CC-MAIN-2021-25 | refinedweb | 1,298 | 52.15 |
I'm not saying we should do exactly what I outline here, or indeed that we must do it at all, but it's the culmination of quite a lot of research into asynchronous support, what it means, and how we can add it to Django in a feasible way rather than rewriting everything.
Firstly, let's start with what this is not. It's not, in any form, an attempt to bring WebSocket or any other protocol handling into Django - but it will probably help their implementation in other projects. Channels will still exist for WebSocket handling (that's covered below).
Instead, the goal is to make Django a world-class example of what async can enable for HTTP requests, such as:
- Doing ORM queries in parallel
- Allowing views to query external APIs without blocking threads
- Running slow-response/long-poll endpoints alongside each other efficiently
And, on top of all this, bringing easy performance improvements to any project that spends a majority of time blocking on databases or sockets (which is most projects!)
Channels has been a great proving ground for a lot of the ideas and techniques to use, and I think will continue to serve a purpose for quite a while as the place for WebSockets, broadcast groups, channel layers and related functionality, but it can only do so much sitting underneath an entirely synchronous Django.
At the same time, it's imperative that we keep Django backwards-compatible with existing code and, especially, don't make it any more complicated for beginners to dive in and get started. We shouldn't expect every developer to know about, or want, asynchronous code support.
This roadmap lays out plans for a phased introduction of async functionality into Django in a way that can remain backwards-compatible, and gradually make the core of Django fully asynchronous over the course of several releases.
The end goals are:
- To end up with a majority of the blocking parts of Django (such as sessions, auth, the ORM, handlers, etc.) being async-native with a synchronous wrapper exposed on top where needed (for user simplicity and/or backwards compatibility)
- To keep Django's familiar layout of models/views/templates/middleware, albeit with a few changes as necessary.
- To have Django be at least the same speed, if not faster, than it was before, and not cause significant performance regressions at any stage of this plan.
- To allow people to write fully-async websites if they desire, from the request handler all the way down to the view and ORM, but to not force this as the default.
- To adopt new talent into the Django team by ensuring the changes are done in a way where we can have new contributors helping out on large-scale features, the kind that traditionally caused people to be added to the core team but have been more scarce recently.
This is no small endeavour, and I expect the overall effort to take on the order of years. That said, I believe it can be done incrementally in a way that provides benefits in every release.
Timing
Why now?
- Django 2.1 will be the first release that only supports Python 3.5 and up, and so this provides us the perfect place to start working on async-native code, as async def and similar native support for coroutines was not present in Python before.
- Asynchronous database backends for Python are starting to appear, and even ones that aren't natively async have been proven to work well in a threadpool.
- The Web is slowly shifting to use cases that prefer high-concurrency workloads and large parallelisable queries (especially API patterns like GraphQL).
Why not now?
- asyncio still has growing pains, and there are alternative async frameworks for Python that propose to replace it but are incompatible.
- There is sufficient momentum behind asyncio that I think it is going to end up staying. I would like to explore ways that we can help gradually evolve it, though, rather than replacing it wholesale, and maybe ways we can provide common async operations as safe higher-level primitives.
- Developers are still unfamiliar developing Python applications with async support, and there's a lack of documentation, tutorials and tooling to help with it.
- Django could prove to be a good catalyst for helping these materials get made, and by default we'll still try and keep people away from most async stuff (apart from maybe await-ing a few ORM queries).
Changes
I'll split Django into its main features and address them individually, as we ideally want to work on several things in parallel (though I imagine Request Path and ORM come first - see "Timeline" below).
Request Path
The Django request path is approximately:
- A WSGI Handler, which turns the raw request into an HTTPRequest object, and an HTTPResponse into a raw response
- A URL routing layer, which maps incoming requests to a view
- Middleware, which processes requests and responses on the way to and from a view
- Views, which run business logic
The first two parts - handlers and the URL routing layer - are internal to Django and expose very few direct public APIs, and so rewriting these to be natively async-capable is easier. Middleware and views are, however, provided by our users, and so backwards compatibility must be maintained. Especially important is that views are kept simple - we should not require users to build something like Channels-style consumers if they don't need the power.
This would be achieved by changing the middleware and view flow to be fully asynchronous, and wrapping any synchronous middleware in a threadpool-based wrapper that allows it to still execute. We may even change to just have ASGI middleware rather than Django middleware - the two are now quite similar in concept after the middleware changes in the last few Django releases - but there would be some extra work needed to make them line up.
The bottom layer of this middleware stack would then inspect to see if the thing it was calling (the item pointed to by the URL routing) was a synchronous view, asynchronous view or maybe even an ASGI application (if we want to support those as well as views) and execute it appropriately.
Currently, detecting if something is an async view or ASGI application would not be possible as both would just look like a callable that returns a coroutine; there are a couple of potential solutions to this. We can detect if something is a synchronous View, however, and preserve backwards compatibility (and run it in a threadpool).
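The threadpool wrapping described here can be sketched with just the Python standard library. This is a rough illustration, not actual Django code; all names are invented:

```python
import asyncio

def wrap_sync_middleware(sync_middleware):
    """Run a blocking (request -> response) callable on the event loop's
    default thread pool, so it can block without stalling other coroutines."""
    async def async_middleware(request):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, sync_middleware, request)
    return async_middleware

# A blocking "middleware" that just tags the request dict.
def add_header(request):
    request["X-Handled-By"] = "sync-middleware"
    return request

async def main():
    handler = wrap_sync_middleware(add_header)
    result = await handler({"path": "/"})
    print(result["X-Handled-By"])  # prints: sync-middleware

asyncio.run(main())
```

Under an async server, many such wrapped calls can be in flight at once, each occupying a pool thread only for as long as it actually blocks.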
Even if there is a fully asynchronous path through the handler, WSGI compatibility has to also be maintained; in order to do this, the WSGIHandler will coexist alongside a new ASGIHandler, and run the system inside a one-off eventloop - keeping it synchronous externally, and asynchronous internally.
This will allow async views to do multiple asynchronous requests and launch short-lived coroutines even inside of WSGI, if you choose to run that way. If you choose to run under ASGI, however, you will then also get the benefits of requests not blocking each other and using less threads.
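That "synchronous outside, asynchronous inside" arrangement can be sketched roughly like this (standard library only; the names are illustrative):

```python
import asyncio

async def async_view(request):
    # Pretend to do two non-blocking I/O calls concurrently.
    async def fetch(name):
        await asyncio.sleep(0)  # stand-in for real async I/O
        return f"{name}-data"
    a, b = await asyncio.gather(fetch("users"), fetch("orders"))
    return {"body": [a, b]}

def wsgi_style_handler(request):
    """Synchronous on the outside, asynchronous on the inside.

    A one-off event loop runs the async view to completion, so the
    handler can still be called from plain WSGI code.
    """
    return asyncio.run(async_view(request))

response = wsgi_style_handler({"path": "/"})
print(response["body"])  # prints: ['users-data', 'orders-data']
```

The view gets concurrency for its own sub-requests even under WSGI; only under a fully async server do separate requests also stop blocking each other.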
ORM
The ORM is arguably the biggest and most complex part of Django, and thus will be the one that takes the most work to convert - especially as async database backends are still an area of active research and development in Python.
Fortunately, most of that complexity is hidden behind a relatively small public API. The only real blocking operations you can do against the ORM are:
- Evaluating a QuerySet
- Introspecting tables, columns or indexes
- Calling save/update/etc. on a model instance
Sadly, it is not possible to have asynchronous attribute access in Python 3 and so we cannot preserve the lazy ForeignKey attribute traversal. However, everything else can be kept:
- QuerySets gain both __await__ (which returns a full result list of all rows) and __aiter__ (paginated/cursor-based results)
- Introspection APIs just become awaitable functions
- Save, update and similar model instance methods become awaitable functions
- Related field references on model instances only work if they were included in a select_related/prefetch_related statement
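A minimal sketch of an object supporting both protocols (purely illustrative; this is not the real QuerySet):

```python
import asyncio

class FakeQuerySet:
    """Sketch of a queryset that is both awaitable and async-iterable."""
    def __init__(self, rows):
        self._rows = rows

    async def _fetch_all(self):
        await asyncio.sleep(0)  # stand-in for an async database round trip
        return list(self._rows)

    def __await__(self):
        # `await qs` evaluates the whole query and returns all rows.
        return self._fetch_all().__await__()

    def __aiter__(self):
        # `async for row in qs` yields rows one at a time (cursor-style).
        async def gen():
            for row in self._rows:
                await asyncio.sleep(0)  # stand-in for cursor paging
                yield row
        return gen()

async def main():
    qs = FakeQuerySet([1, 2, 3])
    print(await qs)                # [1, 2, 3]
    print([r async for r in qs])   # [1, 2, 3]

asyncio.run(main())
```

Under this shape, a backend without an async driver could still satisfy `_fetch_all` by delegating to a thread, while an async-native backend implements it directly.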
Internally, we would first run most of the existing ORM code - the query builder and database backends - inside a threadpool, providing asynchronous support over the top. Then, we would progressively move the "synchronous barrier" down towards the cursor wrappers, allowing databases that don't have async backends to then decide to run that in a thread themselves, while async-native backends can have full control over what they do.
Migrations would initially remain unchanged, as they aren't designed to run in a request environment. They could potentially be made async at a later date, though it's unclear if this has any benefits.
Templates
The Django template renderer is difficult to change, as we have found previously, but many uses of QuerySets come from templates and so we need to think about them.
Initially, we would just leave it as a synchronous render inside of a threadpool, which will run those ORM queries in blocking mode inside a thread, effectively working as they do now but providing an async-capable render method for use in asynchronous views.
As a second phase, we would look at what it would take to make it async-capable, and whether we want to do that work rather than fully recommending another template language (the most popular alternative, Jinja2, is async-capable). The tricky part would be determining when things are called sync vs async; how do we dereference variables and handle things like the {% for %} tag? Should we even allow querysets to be called in templates any more?
Forms
Form validation and saving would have to go fully async. In this case, we could probably solve the "is it sync or async?" problem with a constructor argument to the form which says if it's async or sync. If we ever wanted to make async the default, we could change this argument from default-False to default-True.
Caching
Django's cache layer is reasonably simple, and we would simply make a parallel, asynchronous version of the existing API. This would, unfortunately, have to be namespaced or prefixed, since you can't have a function that's both synchronously callable and that also returns an awaitable.
Sessions
Session backends will need to become async-capable; since they're only called directly by Django, we can have them advertise if they are async or not via an attribute, and then run them in a threadpool if needed. Eventually, we would assume all backends are async-capable and ship a "wrapper backend" that takes a sync backend and turns it async, so people can still write them synchronously if needed.
The few end-user session functions will gain asynchronous versions. Most interaction with sessions is reading and writing from request.session, which does not need to be async as it's not saved as soon as you write to it.
Authentication
Similarly to session backends, authentication backends will also need to be optionally-async with an attribute advertising their status. End-user functions, like login/logout, will need optional-async versions as well. Namespacing here is again tricky; we'll need to provide both sync and async versions of these functions for a good while, and they're top-level in a module.
Admin
The core admin classes can be rebuilt to be more async once the rest of the work they would need is complete, but initially it would be left alone and serve as a decent test of the backwards-compatibility of the changes.
Email

The core email sending code will initially remain synchronous-in-threadpool, but gain async interfaces to trigger email sending. Later on, we can then investigate switching it to use a fully asynchronous SMTP transport.
Static files
The only part here that needs changing would be the static file serving code, which can just run under the backwards-compatibility synchronous layer initially and then be upgraded to full-async at leisure.
Signals
Signals would have to change to have the ability to be called asynchronously - their async-ness would have to be part of their register call, and they would have to be called asynchronously if they had asynchronous listeners.
Other Areas
There are other assorted parts of Django that will need some async work, but none of them present significant challenges like the above sections. In general, we would only convert function calls that have the potential to block (or to call user-supplied code) to be asynchronous, and any converted function would have a deprecation plan and name change.
Timeline
This is very rough, but the idea is to make sure we don't disrupt an LTS release. It is also… ambitious, but a good roadmap always is.
It's also worth noting that the work can be done iteratively so there's no risk of the whole thing being some mammoth 3-year project/branch that might not land. The idea would be to keep this work as close to master as possible.
- Django 2.1: Current in-progress release. No async work.
- Django 2.2: Initial work to add async ORM and view capability; everything stays sync by default, and async support is mostly threadpool-based.
- Django 3.0: Rewrite the internal request handling stack to be entirely asynchronous, add async middleware, forms, caching, sessions, auth. Start deprecation process for any APIs that are becoming async-only.
- Django 3.1: Continue improving async support, potential async templating changes
- Django 3.2: Finish deprecation process and have a mostly-async Django.
The ORM and request handling portions are the important things to get working initially; most of the other areas covered above can be done as and when people become available to work on them.
I would, in fact, be reasonably happy if we only converted the request handling path as a first move; this is the only thing we would need to do to unblock people who wish to do their own asynchronous development on Django.
Safety
Perhaps the main flaw of Python's async implementation is the fact that you can accidentally call synchronous functions from asynchronous contexts - and they'll happily work fine and block the entire event loop until they're finished.
Since Django is meant to provide a safe environment for our users, we'll add detection code to all synchronous user-called endpoints in Django that are blocking/have async alternatives, and alert if they are called on a thread with an active async event loop. This should prevent most accidental usage of synchronous code, which is likely to happen most during initial porting of existing code from a sync to an async context.
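The detection idea can be sketched like this (illustrative names only; Django's real implementation may differ):

```python
import asyncio
import warnings

def guard_blocking_call(name):
    """Warn if a known-blocking helper is invoked while an event loop runs.

    If no loop is running on this thread, the call is plain synchronous
    code and is allowed through silently.
    """
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return  # no event loop on this thread: fine
    warnings.warn(
        f"{name}() was called from an async context and would block "
        f"the event loop; use its async alternative instead.",
        RuntimeWarning,
    )

def blocking_query():
    guard_blocking_call("blocking_query")
    return "rows"

# Called synchronously: no warning.
blocking_query()

# Called from a coroutine: the guard fires.
async def main():
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        blocking_query()
        print(len(caught))  # prints: 1

asyncio.run(main())
```

The check itself is cheap (one exception-guarded lookup), so it can sit on every blocking entry point without noticeable cost.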
This problem also extends to overriding key lookup (__getitem__), attribute lookup (__getattr__), and other operators - you cannot do blocking operations inside these with Python's async solution, as they cannot be called in an asynchronous way. Django fortunately does very little of this, but those uses that exist - such as lazily resolving ForeignKeys, as mentioned above - will have to fundamentally change.
Funding
We cannot expect such a large endeavour to happen purely on the back of volunteer work. We should expect to have to raise funds for this project and to pay the bigger contributors for their time (and not just coding, but also other aspects like technical writing, project management, etc.)
This could come in several forms - including Kickstarter-style campaigns, direct corporate sponsorship of the development, or directing more money to DSF donations generally with this outlined as a concrete funding result.
What does this mean for Channels?
I would propose that Channels continues to exist as a place for WebSocket handling code, multiplexing, and similar challenges, but stops having to include things like the ASGI handler and authentication/session support.
The core routing and protocol switching parts of Channels would likely move into Django itself as the layer that exists underneath Views; the generic consumers and channel layer support would not.
Daphne would continue to be maintained until enough alternative, mature ASGI servers existed, at which point we would seek to either find new permanent maintainers or sunset it.
Further Discussion
This isn't just some plan I expect us to go with and stick to perfectly - it's meant to be a concrete starting point for discussing what this would mean for Django, if we should invest in it, and what it means for us to do that.
To that end, I have started a discussion on the django-developers mailing list about this post; please chime in there if you can with feedback and comments. If you don't want to post publicly on the list, you are also more than welcome to email comments to me at andrew@aeracode.org and I will aggregate and relay them to everyone anonymously.
I think this presents an exciting way to take Django into the future and lead the way in terms of building a world-class web framework with asynchronous support - but I still want to make sure everyone is on board first! | http://www.aeracode.org/2018/06/04/django-async-roadmap/ | CC-MAIN-2018-34 | refinedweb | 2,872 | 52.63 |
Ok I’ll admit Part 2 to my “Beginner’s Guide To Using Google Plus .NET API” has been on the back-burner for some time (or maybe it’s because I completely forgot). After getting quite a few email recently on the subject, I thought now would be the best time to continue with Part 2.
It’s recommended you take a gander at Part 1 before proceeding to this post.
As the title suggests, I will be showing how to output user’s publicly view posts. The final output of what my code will produce can be seen on my homepage under the “Google+ Posts” section.
Create Class Object
We will create a class called "GooglePlusPost" to allow us to easily store each item of post data within a Generic List.
public class GooglePlusPost { public string Title { get; set; } public string Text { get; set; } public string PostType { get; set; } public string Url { get; set; } }
Let’s Get The Posts!
I have created a method called “GetPosts” that accepts a parameter to select the number of posts of your choice.
public class GooglePlus
{
    private static string ProfileID = ConfigurationManager.AppSettings["googleplus.profileid"].ToString();

    public static List<GooglePlusPost> GetPosts(int max)
    {
        try
        {
            var service = new PlusService();
            service.Key = GoogleKey;

            var profile = service.People.Get(ProfileID).Fetch();
            var posts = service.Activities.List(ProfileID, ActivitiesResource.Collection.Public);
            posts.MaxResults = max;

            List<GooglePlusPost> postList = new List<GooglePlusPost>();

            foreach (Activity a in posts.Fetch().Items)
            {
                GooglePlusPost gp = new GooglePlusPost();

                // If the post contains your own text, use this, otherwise look for
                // text contained in the post attachment.
                if (!String.IsNullOrEmpty(a.Title))
                {
                    gp.Title = a.Title;
                }
                else
                {
                    // Check if post contains an attachment
                    if (a.Object.Attachments != null)
                    {
                        gp.Title = a.Object.Attachments[0].DisplayName;
                    }
                }

                gp.PostType = a.Object.ObjectType; // Type of post
                gp.Text = a.Verb;
                gp.Url = a.Url; // Post URL
                postList.Add(gp);
            }

            return postList;
        }
        catch
        {
            return new List<GooglePlusPost>();
        }
    }
}
By default, I have ensured that my own post comment takes precedence over the contents of the attachment (the title check at the top of the loop). If I decide to just share an attachment without a comment, the display text from the attachment will be used instead.
An attachment contains quite a few facets of information, and this only becomes apparent when you add a breakpoint and step through the attachment-handling code. For example, if the attachment has an object of type "video", you will get a wealth of information to embed a YouTube video along with its thumbnails and description.
So there is room to make your Google+ feed much more intelligent. You just have to make sure you cater for every event so your feed displays something useful without breaking. I'm in the process of redoing my own Google+ feed to allow full access to content directly from my site.
Recommendation
It is recommended that you cache your collection of posts so you are not constantly making requests to Google+. You don't want to exceed your daily request limit now, do you?
I’ve set my cache duration to refresh every three hours. | https://www.surinderbhomra.com/Blog/2013/01 | CC-MAIN-2020-16 | refinedweb | 515 | 54.73 |
TinyWorld – Part 7
Under the hood
Hello!
In Introduction to the TinyWorld application we introduced the TinyWorld muti-part tutorial, describing how to develop applications for SAP HANA and XS Advanced, using the SAP Web IDE for SAP HANA (Web IDE).
Now it is time to dive under the hood, and understand a little bit more of the concepts related to the development of multi-module applications.
As discussed in the introduction of this tutorial, business applications usually consist of multiple modules, e.g. a database model, Java or Node.js business logic, a UI app that are deployed to different target runtimes.
Development of such applications requires careful coordination of APIs and dependencies. Deploying such applications is even more challenging due to the need to orchestrate and provision the different parts and targets.
The SAP Web IDE supports the development, testing and deployment of entire multi-module applications in the context of a so-called multi-target application (MTA) project. The development process is governed by a special meta-file, which we have already met: the MTA descriptor (mta.yaml).
The MTA project
The following illustration shows the structure of the fully expanded TinyWorld project (a few hidden “system files” are intentionally not shown):
The Local folder is the root folder for all projects, and represents your “workspace”. There can be more than one project per workspace. A project is simply a folder structure with multiple modules. Here we can see three modules:
In addition to the above structure, there may be additional, normally hidden, special files: a folder called .git/ (used by the version control system), a folder called .che/ (used to record workspace state), and within HDB source folders, two files respectively called .hdbconfig, and .hdbnamespace. You normally don’t need to be concerned with any of these files.
Each project root folder also contains the MTA descriptor, mta.yaml, discussed in more details below.
The MTA descriptor
The MTA descriptor file is automatically created and maintained by the Web IDE. When we created the application project in part 2 of this tutorial, a small skeleton was created. Initially, it didn’t have a lot of interesting content:
As we started adding modules to the project, the MTA descriptor file automatically changed. Here is what it looked like at the end of part 3 of this tutorial:
Let’s take a closer look.
Lines 1-3 are like a header, providing the application unique id and its current version. The ID and version properties uniquely represent the application version in the production environment. By convention, ID is a reverse-URL dot-notation, e.g. com.acme.demo.tinyworld. In this tutorial, we keep it simple: just tinyworld. Version follows the semantic versioning standard ().
Lines 5-31 describe the modules of the application. Each module has a type (hdb, nodejs or ui5) and a path to its source code, relative to the MTA root. TinyWorld has three modules.
Lines 33-35 describe the resources on which the application depends. Here, we have only one resource, an HDI container (of type "com.sap.xs.hdi-container") laconically named hdi-container.
Dependencies
One of the most complex aspects of developing interacting application modules in XS Advanced is the question of how a module knows the access point of another module.
How can you write a UI that calls an OData service, when you don’t know what the URL of that service will be at runtime, as it can theoretically be deployed on any host under XS Advanced management? Or, how can you write a database module that must run inside an HDI container that may later be established by a HANA administrator on a productive database? Or how can you specify that a certain business logic module should only be deployed after the database module it depends on has been deployed?
The role of the MTA descriptor is to describe the logical dependencies between all the modules of an MTA. It does this with variables in “requires” and corresponding “provides” sections, and with predefined (reserved) variables, like ${default-url}. The value of this variable will be determined and substituted when the application is deployed (a topic we will discuss in more detail in part 9 of this tutorial).
Let’s examine the dependencies of our small application.
On line 10 you can see that the tinydb module uses a "requires" section to declare the specific database container to which the code will be deployed. The MTA descriptor is tracked by the development and deployment tools. For instance, when you "build" the tinydb module, the tools will automatically create and provision the necessary database container and associated XS Advanced services, technical users and permissions, both during development and later during deployment.
Line 16 shows how the tinyjs module has a dependency on tinydb (so you won’t be able to run tinyjs unless you first build tinydb) and on the HDI container which its code accesses (line 17).
The tinyjs module has a “provides” section (lines 18-21), containing a variable tinyjs_api with a property named service_url that has the reserved value of ${default-url}. So how does this work?
Check out the definition of tinyui. As we already know, this module is going to call the OData interface exposed by tinyjs, and thus needs to know its URL. It therefore “requires” the variable tinyjs_api.
The actual “binding” to the URL of tinyjs is performed by the XS Advanced “approuter” application.
This application expects to access an environment variable called destinations, which has a predefined "grouped" structure of name-value pairs. We create this grouped structure by defining a property tinyjs_be whose value is the local variable ~{service_url}, i.e. tinyjs's runtime URL. To close the loop, the same property must also be listed as a route in the approuter's configuration file, xs-app.json, as already discussed in part 3 of this tutorial:
{“source”: “^/euro.xsodata/.*$”, “destination”: “tinyjs_be”}
To summarize: the “approuter” will use this information to route euro.xsodata to the destination URL defined by the tinyjs_be property, which is automatically mapped to the OData service of the tinyjs module.
Yes, it looks a little complicated at first, but the benefits far outweigh this. As mentioned, the MTA descriptor is tracked by the development and deployment tools which automatically create and provision the necessary dependencies and associated XS Advanced services, technical users and permissions, both during development, and later during deployment. This beats, hands-down, the need to hard-code absolute values in various “manifest” files, and modify these hard-coded values for each landscape that you deploy your application to!
Matching CDS conventions
CDS artifacts are created in a context, which adheres to certain conventions. While this is not a CDS tutorial, the following table will help you keep things consistent when creating and deploying CDS artifacts with the Web IDE:
Summary of part 7
XS Advanced is based on the cloud principles of the Cloud Foundry architecture. This imposes new challenges when it comes to developing the multiple parts of a business application, in a way that is independent of the actual system and landscape in which the application will be deployed. Here we took a look under the hood of MTAs – the muti-target source specification that makes this possible. More details can be found in the SAP HANA developer documentation.
You can continue to explore the more advanced parts of this TinyWorld tutorial, covering things like the use of version control, adding authentication and authorization control to our application, and how to manage the life cycle of our application, from development to deployment, in the following parts of the TinyWorld tutorial:
Part 8: Source control
Part 9: Application life cycle
Part 10: Add authentication
Part 11: Add authorization
"Eric M. Ludlam" <eric@...> writes:
> Your below example was excellent. It exposed an area of namespaces
> that wasn't yet supported. I was able to fix the second problem in
> semantic-c.el, and semantic-analize.el such that it worked for your
> below example.
Sweet!
> If you could get the latest from CVS and let me know how it goes,
> I would be most appreciative.
An attempt to build...
# Because of "Makefile is out of date! It needs to be regenerated by EDE.
# If you have not modified Project.ede, you can use touch to update
# the Makefile time stamp." message
touch **/Makefile
make clean-autoloads
make clean-all
make EMACS=/usr/local/emacs-cvs/bin/emacs
...the CVS version results to:
"/usr/local/emacs-cvs/bin/emacs" -batch --no-site-file -l grammar-make-script -f semantic-grammar-batch-build-packages semantic-grammar.wy
Compiling Grammars from: /home/foo/cedet/semantic/semantic-grammar.el
Package `semantic-grammar-wy' is up to date.
In toplevel form:
semantic-grammar-wy.el:365:14:Error: Apparently circular structure being printed
make[1]: *** [metagrammar] Error 1
make[1]: Leaving directory `/home/foo/cedet/semantic'
make: *** [semantic] Error 2
The sad part is that I actually faced the same problem the previous
time and somehow managed to get rid of it but I no longer remember how...
> A good way to check when you encounter an issue is with:
> M-x semantic-analyze-current-context RET
>
> If the Prefix slot doesn't have any type information, then the rest of
> the system will fail.
Thanks for the tip.
--
Hannu
I am using Aspose.words for the first time. Its easy to add the export formats, but when I export to word(docx) all the headlines are shown in vertical. Please help me solving this issue.
Hi<?xml:namespace prefix = o
Thanks for your request. Could you please attach your RDL report for testing? We will investigate the problem and provide your more information.
Best regards.
I have also attached the sample report after exporting it to Word document. I am using evaluation version and we need to test prior to make any decision regarding getting a license.
In some cases we do not know the length of string in textbox.
Thanks for help.
Hi,
Thanks for additional info. I will proceed to looking into your issue shortly. | https://forum.aspose.com/t/aspose-words-for-reporting-services-showing-text-in-vertical/99363 | CC-MAIN-2022-40 | refinedweb | 128 | 70.29 |
Agenda
See also: IRC log
Tom: Telecon next week, Guus to chair.
PROPOSED to accept minutes of the last telecon:
RESOLVED to accept the minutes
ACTION: Ed to investigate what text could be added to primer re. concept co-ordination [recorded in] [CONTINUES]
ACTION: Guus to write primer text re: broaderGeneric and equivalence w/r/t subclass [recorded in] [CONTINUES]
ACTION: Alistair to check the old namespace wrt dereferencing [recorded in] [CONTINUES]
ACTION: Antoine and Ed to add content to Primer about irreflexivity [recorded in] [CONTINUES]
<Ralph> Antoine's proposed text
ACTION: Sean to write a proposal to indicate to OWL WG our requirements for annotation properties [recorded in] [DONE]
<Ralph> Sean's proposed text
<Ralph> Sean: I plan to discuss this with folks in the OWL WG who have offices near me
<Ralph> ... if we had rich annotations, that's what we would use for SKOS
<Ralph> ... it's not clear how much benefit we'd get from just labels and the documentation properties; hard to reason with these
<Ralph> ... hard to see much benefit from defining complex classes using the documentation properties
<Ralph> ... both Alistair and I think of these things as being annotations
<Ralph> ... I'll post this to the OWL WG in a week or so after collecting comments from SWD WG
<Ralph> ... the OWL WG is interested in our comments as they see SKOS as a use case for their annotation and punning work
ACTION: Sean to post comment to OWL WG re annotation requirements. [recorded in]
ACTION: Alistair to update the history page adding direct link to latest version of rdf triple [recorded in [recorded in] [CONTINUES]
ACTION: Editors of the Use Cases to clean up the lists of requirements in light of resolutions [recorded in] [CONTINUES]
Antoine: put notes about
requirements up on wiki.
... do we change the list of requirements?
... good starting points
Tom: reminder. Requirements to be
published as WG Note in December.
... The document as is is a record of our thinking. Should we edit it to
... reflect our requirements?
Antoine: Main thing to check whether we have met the resolutions
Ralph: Suggest we don't delete
requirements. If there are reqs that we've decided not
... to meet, should say that explicitly.
... Wouldn't bother adding requirements to document. Don't feel so
... strongly, but not necessary to put in there detail about other things that we ended
... up doing.
... Would mark anything additional as additional
... As Tom suggested, more of a historical record. How we resolved those
<TomB> +1 on Ralph's approach to Use Cases
Ralph: things listed as potential requirements.
<aliman> +1 on what ralph said, say explicitly if don't meet stated requirements, don't need to add requirements
Tom: Basically cleaning up, not necessarily adding things but making things neat.
Ralph: Have we gone through and identified everything?
TOm: Ongoing task to get Use case + requirement as WG Note.
Antoine: Would rather consider this action done.
Ralph: another action may be needed.
Antoine: Mostly Guus who wanted
this thing done. Spotting requirements for which
... we haven't done the jobs.
<Ralph> status of SKOS requirement [Antoine 2008-06-24]
TOm: Action is to get use cases
doc as a whole into shape. Then have two people read
through
... and provide views, then declare as note. Could continue action as it
... covers what needs to be done.
ACTION: Guus to mail his position on issues 72, 73 and 75 to the list [recorded in] [CONTINUES]
<TomB>
Tom: Agenda shows two issues. Antoine raised ISSUE 41 which he thought was closed.
Antoine: Use of language tags in
examples in all documents. Can't remember original comment.
Long
... time ago. Examples changed in primer to fit the comment. Idea was to check with original author?
... is this why issue is pending? Confident that the required changes were made.
Ralph: Issue tracker suggests that this was sorted and we just need to get in touch with the commentor
Alistair: Can't remember if email
was sent.
... sure that commentor was happy.
PROPOSED to declare ISSUE 41 closed.
RESOLVED to close ISSUE 41.
Tom: Two remaining issues
... 84.
<Ralph> ISSUE 84; ConstructionOfSystematicDisplaysFromGroupings
Antoine: Ongoing. No time to
check this. Diego sent proposal for algorithm a while ago.
No one
... has checked it. Could decide to postpone?
Tom: Would propose to postpone. If this is posted to list, then we could decide this next week.
Antoine: Even if Diego's algorithm is really cool (which there's little doubt about :-)
scribe: should it be in documents?
Diego: Based on how thesaurus
should be displayed. Don't have access to ISO
... standard, so work based on things that Alistair included in wiki page.
<Ralph> in February, Alistair wrote "ISSUE-84 ConstructionOfSystematicDisplaysFromGroupings) --
<Ralph> important, but arguably out of scope.
<Ralph> "
Diego: can't be considered a complete implementation
<Ralph> Issues Review [Alistair 2008-02-21]
Diego: if someone can help or
provide information about systematic display, would be happy
to
... extend implementation. Don't think the algorithm should be in the documents. It's a toy/example.
Tom: We are agreeing that this is
out of scope for inclusion in specs. Independent of whether
we
... can work with Diego to publish in some form.
... Suggest that in the interest of closing issues, Antoine takes an action to propose postponement.
ACTION: Antoine to propose that we postpone ISSUE 84. [recorded in]
Tom: Issue 86.
<Ralph> ISSUE 86; SkosURIDereferenceBehaviour
Alistair: We just need a proposal
here which will be something along the lines of adding text to
primer/reference following
... Cool URIS, semantic Web recipes etc.
Tom: Basically linking to
external resources.
... Is this reference, primer or both?
Ralph: Makes sense to have it as
a reference issue. It's
... best practice for use of SKOS, so reference.
Tom: Add a sentence or two plus links in reference.
Ralph: Particularly if there are
any minimum required behaviours. E.g.
... you're conforming if you do the following.
Alistair: would be reluctant to
bring that into the reference. Additional level of
... conformance.
Ralph: Makes sense to give some advice.
Alistair: Happy to have no minimum requirements. But would like to encourage good practice.
<TomB> +1 Alistair
Ralph: Would like to see a
proposal on what the recommended behaviour would be,
... Are concepts different from other stuff?
ACTION: SKOS Reference Editors to propose a recommended minimum URI dereference behaviour [recorded in]
Tom: Comment on change of
namespace.
... Do we need to do anything?
... Has anyone pointed out this is a major versioning?
Antoine: Comment is slightly different from the first one. While we might have a new version,
<Ralph> SKOS comment: change of namespace [Laurent LE MEUR 2008-06-17]
Antoine: we could have used the original as it wasn't really official.
<Ralph> Sean: could be argued either way; we decided to make the change
Tom: This posting puts the
emphasis on the status of the vocabulary. The answer really
involves explaining that this is
...
a major versioning. Simply recording the justification that the previous version didn't have this status.
scribe: Would be useful to respond along those lines.
Antoine: Maybe a good response
would be that although the previous version wasn't a
standard,
... it was "de facto"
Ralph: Refrain from using the
words "de facto", but this is roughly the reasons from the face
to face.
... Discussions from f2f were that changing existing vocabs would be a lot of work.
... Meeting record should show that we considered pain for authors and developers.
... Should respond or it may turn into Last Call comment. Feel a bit
... bad that we didn't highlight this in the status of the document. In retrospect we might have
... written a sentence calling this to people's attention
Tom: This will come up, so let's
formulate a response now. Can someone take an action to
formulate a response
... on the list.
... Look at f2f Washington record to reconstruct this.
<Ralph> SKOS Namespace discussion of 2008-05-06
<aliman> Ralph, I feel bad about that too, I can't believe I didn't think to add a note on this to the "changes" section of the reference.
Sean: is it right to do this on the list?
Tom: Yes. Final response will go to the commentor.
Ralph: Draft the response in the fish bowl.
ACTION: Sean to draft response to comment about namespace. [recorded in]
ACTION: Ben to prepare draft implementation report for RDFa (with assistance from Michael) [recorded in] [CONTINUES]
Tom: Anyone reporting re RDFa?
Ralph: Telecon last week dealing
with last minute LC comment which was resolved
... with editorial changes only. Substance of comment was uncertainty about the technical
... direction. Clarified language about use of doc type when DTD validation is considered important
... by the document author.
... Published CR. Now officially in CR. Would like to point out that
<Ralph> Pubrules update: XHTML+RDFa permitted as DTD in non-Recs
Ralph: pubrules now permit
XHTML+RDFa in documents.
... So anything up to CR can use RDFa. Important milestone.
... Have already met the CR. Two interoperable implementations.
... Hoping for more information about implementations. Only thing we really need to do is
... respond to comments.
ACTION: Ralph/Diego to work on Wordnet implementation [of Recipes implementations] [recorded in] [CONTINUES]
Ralph: Continues until infrastructure is available.
ACTION: Jon and Ralph to publish Recipes as Working Group Note [recorded in] [CONTINUES]
Ralph: Jon owes me some text.
Diego: Wondering about what Ralph
just said about RDFa in W3C documents. Does this apply to
notes?
... Should we add RDFa to our upcoming documents?
Ralph: Excellent question. Ian's
message to chairs says "all TR except for
Recommendations".
... so could do this for Notes.
Tom: Essentially taking header information
Ralph: Basic DC metadata.
... Idea why Recommendations are excluded is because any document that's expected to be
... updated can use RDFa, but Recs are hard to change, so need to wait until RDFa is Rec.
Diego: W3C has database with RDF data?
Ralph: yes
Diego: is it a good idea to add metadata inside document?
Ralph: Diego -- make a list of interesting metadata that could be included
Diego: For instance links between
current and previous versions
... would be interesting. So could be added as RDFa
Tom: In terms of process, we're
proposing to adopt a uniform approach for all the new technical
documents.
... Recipes, new drafts etc.
... Welcome to Daniel Maycock of Boeing.
... What we should probably do is work out what the metadata will say,
... maybe using recipes as test case, then adopt that approach for others. Otherwise will
... end up with inconsistency. If we can get this right, then it's an example we can point to.
... Worth taking a moment to look specifically at content of the metadata. Can someone take an action to
... make a proposal.
Diego: Considering adding a page to the wiki about using RDFa on W3C TR.
Tom: Excellent idea. Even if it's
just a page that's simple, thought through, then
... we could even publish it as a Note.
Ralph: No need to go hog wild
(!!). DC is a no-brainer. Maybe also some other
... things, but good to see a list.
ACTION: Diego to propose minimum RDFa metadata set for WG deliverables. [recorded in]
No actions, Vit and Elisa not on the call.
<TomB> vit, would you like to report on Vocabulary Management?
<vit> no news from me
Ralph: Objective next week is to decide that we have a LC document?
<Ralph> Ralph: regrets from me for 8 July
Ralph: I will not be here for July 8th.
Tom: Not available for July 8,
15, 22. Need to think about how to schedule further
calls.
... How close to LC?
Alistair: What do we need to do?
Ralph: Anything about which we
expect substantive comments, if there's something
... that we're likely to change, would be good to document those
... explicitly in the LC draft, which would then allow us to fix them.
Alistair: Flag anything that might change.
Ralph: Things where we anticipate there will be sufficient comment to make us change our position.
Alistair: e.g namespace. Don't want to chew up lots of time.
Tom: Should we also flag mapping
properties as at risk?
... Where would one flag this?
Ralph: Best place to do that
would be in the mapping properties section. "This section
... /part of section are features at risk"
Tom: Do we need to do this in announcement?
Ralph: Announcement would include
status. Don't have to enumerate, but should say if there are
some
... at risk features.
Tom: Who will write that announcment?
Ralph: Chairs and team contact
Alistair: Will need some input on
which sections or features should be marked. Please
Tom: Two mentioned: namespace and mapping properties. Are there any others?
Ralph: Suspect that minutes of f2f will show anything controversial.
<Ralph> ISSUE 71 ParallelMappingVocabulary
Ralph: Recent decisions more likely to be controversial as these issues have been around a while.
Alistair: Should we err on the side of caution?
Ralph: If we believe it's
controversial and new evidence could persuade us to change our
position, then yes.
... if it's controversial and we know we won't change our minds, then no.
Alistair: So w.r.t namespace, what new evidence would make us change our mind.
Ralph: If all the authors of
SKOS documents complained.
... this is primarily a deployment based decision.
Tom: Kind of hoping that won't
happen :-)
... inclined to avoid marking too many things as at risk. Looking at the Washington record would
... be useful
Ralph: e.g. resolution for mapping stated this explicitly.
ACTION: SKOS Reference Editors to specifically flag features at risk for Last Call. [recorded in]
meeting adjourned | http://www.w3.org/2008/06/24-swd-minutes.html | CC-MAIN-2021-49 | refinedweb | 2,281 | 68.97 |
Cairo is pretty much the main library used for drawing anything with vectors. It is used everywhere, sort of like SQLite. The website is
In the GNOME trifecta of things, Cairo provides the 2D drawing API, whereas Pango provides the Text Rendering API, and Gtk-Glib provide the toolkit/glue for all the above.
The neat thing about Cairo is that output is not limited to the screen. Backends for Cairo can be written for anything, so things drawn in Cairo can be outputted to an SVG file, OpenGL commands, a printer, or to a Pixbuf for use in a very popular web browser (Firefox anyone).
So today I'm going to use Cairomm (C++ bindings for Cairo) to do some basic drawing. Here is our header file justin_draw.h that defines our DrawingArea.
#ifndef SAMPLE04_JUSTIN
#define SAMPLE04_JUSTIN
#include <gtkmm/drawingarea.h>
class JustinDraw : public Gtk::DrawingArea
{
public:
JustinDraw();
virtual ~JustinDraw();
protected:
virtual bool on_expose_event(GdkEventExpose* event);
};
#endif
The only thing of note here is that we are going to define an on_expose_event. This is important because this is the method that is called when the window needs to draw itself. Either because the Window is new, or we've uncovered the window an exposed a part of it that was under another window.
Now let's implement our Gtk::DrawingArea with our justin_draw.cc file:
#include "justin_draw.h"
#include <cairomm/context.h>
JustinDraw::JustinDraw() {}
JustinDraw::~JustinDraw() {}
bool JustinDraw::on_expose_event(GdkEventExpose* event)
{
Glib::RefPtr<Gdk::Window> window = get_window();
if (window)
{
Gtk::Allocation a = get_allocation();
const int w = a.get_width();
const int h = a.get_height();
int xc, yc;
xc = w/2;
yc = h/2;
Cairo::RefPtr<Cairo::Context> cr = window->create_cairo_context();
cr->set_line_width(10.0);
cr->rectangle(event->area.x, event->area.y, event->area.width, event->area.height);
cr->clip();
cr->set_source_rgb(0.8, 0.0, 0.0);
cr->move_to(0,0);
cr->line_to(xc,yc);
cr->line_to(0,h);
cr->move_to(xc,yc);
cr->line_to(w,yc);
cr->stroke();
}
return true;
}
Notice a couple of things here before we move on. Drawing an entire screen with Cairo should be done very rarely. Not to say Cairo is slow, but there is no need to waste time on drawing. The cr->rectangle(...); cr->clip(); parts allow us to focus on only the part of the window that has changed. Pretty much all code that uses Cairo has something similar to this, so it's a good idea to include something like this in your code before you actually do any drawing.
Also notice that we are using a Cairo::RefPtr. Yes, Cairo has its own type of RefPtr that you must use to hold Cairo-related things. You have no idea how much time you can waste using the wrong type of RefPtr.
Finally, our program that uses our new Gtk::DrawingArea. main.cc
#include "justin_draw.h"
#include <gtkmm/main.h>
#include <gtkmm/window.h>
int main(int argc, char** argv)
{
Gtk::Main kit(argc, argv);
Gtk::Window win;
win.set_title("DrawingArea");
JustinDraw area;
win.add(area);
area.show();
Gtk::Main::run(win);
return 0;
}
And that's all there is to it! You can hit up basically any manual on Cairo to find all the 2D API commands. It's basically a 1:1 translation between the C and C++ bindings so you shouldn't have any problems there.
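If you want to try the three files above, a build command along these lines should work. This is my own sketch, not from the original post: it assumes the gtkmm 2.4 development packages are installed and that pkg-config knows them under the module name gtkmm-2.4 (the name may differ on your distribution).

```
# Compile both translation units and link against gtkmm,
# which pulls in cairomm transitively via pkg-config.
g++ -o drawing main.cc justin_draw.cc `pkg-config --cflags --libs gtkmm-2.4`
./drawing
```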
Tomorrow I'm going to cover making a throbbing image in Java, because I've gotten two emails over the last four days asking how to do that. Till then, Cheers!
Q-Learning Tic-Tac-Toe, Briefly
Sunday November 3, 2019
Tic-tac-toe doesn't call for reinforcement learning, except as an exercise or illustration. Recently, I saw several examples implementing Q-learning, all of which were rather long. I thought I'd give tic-tac-toe with Q-learning a try myself, using Python and TensorFlow, aiming for brevity.
The board is represented with a matrix, where zero means empty.
def new_board(size): return np.zeros(shape=(size, size))
Moves are represented by their coordinates, like `[0, 0]` for the upper left.
def available_moves(board): return np.argwhere(board == 0)
The first player is `+1` and the second player is `-1`, so having three in a row means getting a row, column, or diagonal to sum to `+3` or `-3`, respectively.
def check_game_end(board):
    best = max(list(board.sum(axis=0)) +    # columns
               list(board.sum(axis=1)) +    # rows
               [board.trace()] +            # main diagonal
               [np.fliplr(board).trace()],  # other diagonal
               key=abs)
    if abs(best) == board.shape[0]:  # assumes square board
        return np.sign(best)  # winning player, +1 or -1
    if available_moves(board).size == 0:
        return 0  # a draw
    # (otherwise, return None by default)
Q-learning will require some state, so a player will be an object with a `move` method that takes a board and returns the coordinates of the chosen move. Here's a random player:
class RandomPlayer(Player):
    def move(self, board):
        return random.choice(available_moves(board))
This is sufficient for the game loop, starting from any initial board:
def play(board, player_objs):
    player = +1
    game_end = check_game_end(board)
    while game_end is None:
        move = player_objs[player].move(board)
        board[tuple(move)] = player
        game_end = check_game_end(board)
        player *= -1  # switch players
    return game_end
So this plays out a game between two random players and gives the result:
play(new_board(3), {+1: RandomPlayer(), -1: RandomPlayer()})
Playing out 10,000 games, it's clear that the first random player is more likely to win, and also that results can vary a good deal even when averaging over 500 games.
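That estimate is easy to check by tallying many games. The sketch below repeats the definitions from above and adds a small tally loop of my own: the `play_random_game` helper, the 1,000-game count, and the `Counter` tally are not from the original post.

```python
import random
from collections import Counter

import numpy as np

# Definitions repeated from above.
def new_board(size): return np.zeros(shape=(size, size))
def available_moves(board): return np.argwhere(board == 0)

def check_game_end(board):
    best = max(list(board.sum(axis=0)) +    # columns
               list(board.sum(axis=1)) +    # rows
               [board.trace()] +            # main diagonal
               [np.fliplr(board).trace()],  # other diagonal
               key=abs)
    if abs(best) == board.shape[0]:
        return np.sign(best)  # winning player, +1 or -1
    if available_moves(board).size == 0:
        return 0              # a draw

def play_random_game(size=3):
    # One game between two random players; returns +1, -1, or 0.
    board, player = new_board(size), +1
    result = check_game_end(board)
    while result is None:
        move = random.choice(available_moves(board))
        board[tuple(move)] = player
        player *= -1
        result = check_game_end(board)
    return result

tally = Counter(play_random_game() for _ in range(1000))
print(tally)  # roughly 59% first-player wins, 29% second, 12% draws
```

Even at 1,000 games the first-player advantage is unmistakable, though as noted above the exact percentages wander from run to run.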
Here's a boring player that chooses the first available move from left to right, top to bottom:
class BoringPlayer(Player):
    def move(self, board):
        return available_moves(board)[0]
Games between boring players always end the same way. The comparison with the random player is more interesting. This table gives results for 10,000 games per row:
| Size | First Player | Second Player | Draw | |------|--------------|---------------|------| | 3x3 | Random: 59% | Random: 29% | 12% | | 3x3 | Boring: 78% | Random: 18% | 4% | | 3x3 | Random: 52% | Boring: 44% | 4% |
The boring player does better than a random player whether it plays first or second. Now we have multiple "baselines." A learning agent should do better than the baselines.
The Q-learning player starts with a neural network that takes a board as input and produces an estimate of how good each move is from that position: Q-values.
class Agent(Player):
    def __init__(self, size):
        self.size = size
        self.model = tf.keras.Sequential()
        self.model.add(tf.keras.layers.Dense(size**2))
        self.model.compile(optimizer='sgd', loss='mean_squared_error')
A couple helper methods make the interface nicer for using and training the neural net. (There may be a better way of achieving this.)
    def predict_q(self, board):
        return self.model.predict(
            np.array([board.ravel()])).reshape(self.size, self.size)

    def fit_q(self, board, q_values):
        self.model.fit(
            np.array([board.ravel()]),
            np.array([q_values.ravel()]),
            verbose=0)
The Q-learning agent preserves some history, which is reset when a new game starts.
    def new_game(self):
        self.last_move = None
        self.board_history = []
        self.q_history = []
The `move` method uses the Q-network to choose the best available move and then calls the `reward` method to operationalize Q-learning's Bellman equation.
    def move(self, board):
        q_values = self.predict_q(board)
        temp_q = q_values.copy()
        temp_q[board != 0] = temp_q.min() - 1  # no illegal moves
        move = np.unravel_index(np.argmax(temp_q), board.shape)
        value = temp_q.max()
        if self.last_move is not None:
            self.reward(value)
        self.board_history.append(board.copy())
        self.q_history.append(q_values)
        self.last_move = move
        return move
The `reward` method trains the Q-network, updating a previous estimate of Q-values with a new estimate for the last chosen move.
    def reward(self, reward_value):
        new_q = self.q_history[-1].copy()
        new_q[self.last_move] = reward_value
        self.fit_q(self.board_history[-1], new_q)
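Written out in standard notation, the backward-looking update these two methods implement together is a simplified Bellman backup. This restatement is mine: note that the network is fit toward the target rather than assigned to it, and there is no discount factor or learning rate schedule here.

```latex
Q(s_{t-1}, a_{t-1}) \leftarrow \max_{a\ \mathrm{legal}} Q(s_t, a),
\qquad
Q(s_T, a_T) \leftarrow r \in \{+1, -1\} \ \text{at game end.}
```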
Using a `play` function that resets games and delivers a reward of `+1` for wins and `-1` for losses and draws (see the full notebook), a Q-learning agent plays a random agent:
The Q-learning player quickly achieves much stronger play than the baselines.
| Size | First Player | Second Player | Draw | |------|--------------|---------------|------| | 3x3 | Random: 59% | Random: 29% | 12% | | 3x3 | Boring: 78% | Random: 18% | 4% | | 3x3 | Random: 52% | Boring: 44% | 4% | | 3x3 | Q-Learn: 93% | Random: 6% | 1% | | 3x3 | Random: 21% | Q-Learn: 78% | 1% |
(For Q-learning players, they play 10,000 games while learning and then another 10,000 games while frozen to evaluate their performance level after learning, which is what's shown in the table.)
And that's it! Tic-tac-toe with a Q-learning agent, comfortably under 100 lines of code.
Ideas for doing more
- The Q-learning player didn't achieve a "perfect" strategy in which it never loses. Why not? Can you fix it? Further ideas below may contribute to a solution.
- You can write your own perfect tic-tac-toe player, or use a package like xo. How does the Q-learning player do against a perfect player?
- For tic-tac-toe, it's also possible to implement tabular Q-learning: a table keeps track of the estimated Q-value for every board state/action combination. How does this compare to the neural net approach here, in terms of game performance, sample efficiency, and space/time/compute complexity? The network architecture here has just 90 parameters, which is a good deal less space than is needed for a full table. What are the trade-offs? (Thanks to Chris Marciniak for suggesting this topic.)
- The initialization of the network and the play of the random opponent vary from run to run. How consistent are results? How do they vary? What source of randomness matters more?
- How can you analyze and describe the "strategies" that the system learns? How much do these vary from run to run, or when varying other things?
- Draws require nine moves, but it's possible to win in five. How does game length vary by player strategy and over the course of training the Q-learning player?
- The Q-learning algorithm here is simplified, with no reward decay, for example. Can you re-complicate the algorithm and perhaps achieve shorter game lengths?
- There is no explicit exploration mechanism in the Q-learning algorithm here. Can you add one and improve performance?
- What happens when the Q-learning player plays against the boring player, and what does this suggest?
- What happens when a Q-learning player plays against one type of player, and then faces a different player? How can this vary and what does it suggest?
- None of the players in these implementations are ever told which player they are, and none of them explicitly work it out for themselves, though this is possible. How does a Q-learning player that has played as first player do when made to play as second player instead? Can you improve this?
- How effective is self-play (Q-learning player vs. another Q-learning player)?
- Can you quantify the sample efficiency of the Q-learning algorithm, which should capture how many games or moves the algorithm needs to learn a good strategy?
- There are lots of enhancements to Q-learning, for example experience replay. Can you improve sample efficiency using such a mechanism?
- There are symmetries in the tic-tac-toe board; can you take advantage of these?
- The neural network used here is very simple. It is not at all "deep," for example. Does a more sophisticated network architecture give better results?
- Convolutional filters are useful in neural networks for computer vision. Can this kind of domain-specific approach inform an architecture design specialized for tic-tac-toe?
- The form of the input to the neural network here has only nine elements, which can each be positive or negative. Especially with a simple network architecture, this may not be ideal. (For example, consider how the current setup manifests the exclusive or (XOR) problem.) What other board representations can you use as input to the neural net? Do some work better than others?
- Lots of defaults are used in the implementation here, for example for the neural network's learning rate. Can you find better values?
- The rewards for a loss and a tie are both `-1` in the implementation here. What happens when ties aren't as bad as losses? Is zero a good reward for a tie?
- The code shown here aims to be brief. Is this good or bad for readability, maintainability, and extensibility? Are other characteristics also important? How can you improve the code?
- Experiment with different board sizes:
| Size | First Player | Second Player | Draw | |------|--------------|---------------|------| | 3x3 | Random: 59% | Random: 29% | 12% | | 4x4 | Random: 31% | Random: 27% | 42% | | 5x5 | Random: 25% | Random: 15% | 60% |
As the board gets bigger, the advantage of going first decreases and the probability of a draw goes up considerably.
It also takes a good deal more games to learn a good strategy:
For the implementations here, some results are as follows:
| Size | First Player | Second Player | Draw | |------|--------------|---------------|------| | 4x4 | Random: 31% | Random: 27% | 42% | | 4x4 | Q-Learn: 83% | Random: 10% | 7% | | 4x4 | Random: 11% | Q-Learn: 80% | 9% | | 5x5 | Random: 25% | Random: 15% | 60% | | 5x5 | Q-Learn: 79% | Random: 5% | 16% | | 5x5 | Random: 18% | Q-Learn: 53% | 29% |
What is optimal for these larger boards? How well can you do? At what point is a Q-learning approach more effective (considering the trade with compute time) than exhaustive minimax search, if ever? | https://planspace.org/20191103-q_learning_tic_tac_toe_briefly/ | CC-MAIN-2021-04 | refinedweb | 1,648 | 57.47 |
Usage Guide
Overview
What is Twilio Studio?
Twilio Studio is a visual interface to design, deploy, and scale customer communications. Twilio Studio, a new addition to the Twilio Engagement Cloud, is the first visual interface that empowers millions of cross-functional team members to design, deploy, and scale customer communications. Companies can now fast-track their customer engagement roadmap using the creative talent of the entire organization.
When to use Twilio Studio?
Anyone on your team can use Twilio Studio to quickly and easily create and modify flows. Studio is designed for use by cross-functional teams, and it provides a common framework for everyone to do the work they need to do. Designers can make swift UX modifications, copywriters can implement their own changes to messaging, and developers can delegate work to others and focus on building more complex features (such as calling Functions).
Studio Features
- Trigger Flows via Inbound SMS, Inbound Voice Calls, and Webhooks
- Create, Modify, and Deploy Flows (Workflows)
- Import and Export Flows
- Add and Remove Transitions
- Manage Widget Settings with the Inspector Panel
- Define Transitions to Advance Users through Flows
- Create and Pass Variables
- View Executions (Individual runs through flows)
- Organize Use Cases and Logic in Separate Flows
Getting Started
You can start using Studio by logging in and visiting the Studio homepage in the Twilio console.
Glossary
Here are a few of the key terms that will help you as you get started with Studio:
Data Retention in Studio
Studio retains Execution logs for 30 days after Executions end. However, Executions that have not ended within 60 days of being created are considered stuck. Studio automatically ends and deletes stuck Executions 60 days after they were created. Once Executions are deleted, they are no longer accessible via Console or the API.
Log data is persisted in Studio for 30 days after an execution ends. This relates to all execution step data persisted by the application. Data relating to underlying products used via Studio, such as SMS or voice call logs, are not automatically deleted at the same time as execution data. Data generated by other products is retained / deleted in line with those products' data policies. Details of individual product data retention policies can be found in the specific product documentation, such as here for SMS and here for voice calls.
Handling concurrent calls from the same number in Studio
The concurrent calls setting only applies to Executions created via the Incoming Call trigger type. Incoming Message and REST API triggered Executions are not affected.
Note: This functionality changed on October 1st 2018. New flows created after this date will handle concurrent calls from the same caller ID by default. For flows created before that date, the switch must be made manually in the call trigger (see screenshot below).
By default, Studio flows can handle inbound concurrent calls from the same number. The Send and Wait For Reply widget cannot be attached anywhere downstream of the Incoming Call trigger point. This is because if multiple users are calling from the same number, Studio can't uniquely text back one user in an active execution and correctly identify a reply, because callers all share the same number.
For certain specific use cases, where you know callers to a Studio flow will always have a unique visible caller ID, you can disable the concurrent calls mode via the Advanced Features dropdown in the Trigger widget (see below). When disabled, you will be able to add the Send and Wait for Reply widget to incoming call trigger connected flows. This is only recommended for advanced users who know that their flow is triggered from visible unique caller IDs.
Note: Concurrent Calls does not apply to Executions triggered by an Incoming Message or the REST API. Those trigger types can only handle one concurrent Execution per Contact to ensure each session is uniquely tracked.
Using Studio
Creating Flows
Flows can be created in the console by clicking +
You can have many Flows, which can represent different use cases or serve as a way to organize logic.
Navigating the Canvas
The canvas is where you’ll build your Flows. It begins with the Trigger widget, and from there, you can drag and drop Widgets to build the exact Flow to meet your use case.
The canvas can sometimes get crowded (especially in complex Flows!) so it’s important to be able to control the area of focus. You can use the mouse to drag and reposition the canvas, and you can use the Zoom in / out links to change zoom. You can also pinch and squeeze to zoom if you are using a trackpad.
Working With Widgets
Widgets are the building blocks of the Flow. They allow you to handle incoming actions and respond accordingly, sending a message, routing the user to another part of the Flow, capturing information, and more.
The Widget Library panel can be found on the right side of the Flow canvas, and includes several drag-and-drop-ready Widgets. When you select a Widget, this panel transforms into the Inspector Panel.
Simply drag any of these Widgets onto the canvas, and connect it to your new or existing Flow. You’ll see configuration options in the same right-side panel when you click on an individual Widget.
Keyboard shortcuts are available if you'd like to duplicate widgets on the canvas. Simply select a widget (it will highlight in blue) and then copy (
Cmd + C on Mac or
Ctrl +C on Windows) and paste (
Cmd + V on Mac or
Ctrl + V on Windows).
You can configure Widgets via the Inspector Panel (in the same right panel as the Widget Library). Simply click on a Widget to show configuration options. You may give Widgets custom names; the type will always show in parentheses so you can always tell what a widget does. Note that widget names must be unique, must start with a letter and cannot include spaces or periods -- use the underscore character to separate words.
Do not use Personally Identifiable Information in Flow names or Widget names
You should not use directly identifying information (aka personally identifiable information or PII) like a person's name, home address, email or phone number, etc., in Flow names or Widget Names because the systems that will process this attribute assume it is not directly identifying information. You can read more about how we process your data in our privacy policy.
Some Widgets have required configuration settings -- these will be indicated with a red asterisk and in the Widget Library reference below. You won’t be allowed to save your Flow if any required configurations are missing or invalid.
Defining Transitions
A Transition defines how a flow advances from one widget to the next based on events and specified conditions. From the canvas, you can simply tap New from a Widget to call up Transition options.
You can also set Transitions in the right-side Inspector Panel by clicking on New Transition. Transitions are often pre-set based on the type of Widget, and usually reflect the state of a call or message -- Was a response received? Did the call time out? Did the message send successfully?
Each Transition connects to another Widget. You may choose to set different destination Widgets for each Transition, or have two or more Transitions point to the same Widget (for example, if you want a user saying the number “one” to map to the same Widget as that user pressing the 1 key on the keypad).
To remove a Transition, click on the widget that starts the Transition and drag the line away. You can also click on the widget, select the Transitions tab on the configuration pane and click on the "..." next to the Transition you would like to remove, then click on the Disconnect Transition option from the dropdown menu.
Triggering Flows
There are three ways to trigger a Flow’s start:
- Incoming Message
- Incoming Call
- REST API (Inbound Request)
All three of these appear in the Trigger Widget, and you can drag-and-drop from one or more to reflect the needs of your use case.
Incoming Message
The Incoming Message trigger is invoked when your Twilio Phone Number (or other message-based channel) receives a new message and sends it to your Studio Flow Webhook URL:{AccountSid}/Flows/{FlowSid}
By connecting Send Message or Send & Wait for Reply widgets, you can respond to those incoming messages and carry on a conversation with the Contact.
Studio maintains a unique session based on the Contact's identifier (usually a phone number). For a messaging Flow, Studio only allows a single active Execution per Contact. Thus, all messages from the Contact to that Flow during an active Execution will be handled by that same Execution.
Incoming Call
The Incoming Call trigger is invoked when your Twilio Phone Number (or other voice-based channel) receives a new call and sends it to your Studio Flow Webhook URL:{AccountSid}/Flows/{FlowSid}
By connecting voice widgets, such as Say/Play and Gather Input, you can guide the Contact through a series of interactive voice responses (IVR) and even connect the Contact to another party with Connect Call To or route them to Record Voicemail.
By default Studio maintains a unique voice session based on the combination of the Contact's identifier (usually a phone number) and the unique Call SID, ensuring every call is handled uniquely, even if concurrent calls use the same Caller ID.
REST API Trigger
Using the Studio REST API, you can trigger an outbound call or message to a Contact. Use this trigger type to initiate appointment reminders, surveys or alerts from your own application. Add widgets to the REST API trigger to control the conversation just as you would for Incoming Message or Incoming Call.
For outbound voice calls, use the Make Outgoing Call widget to initiate the call; for oubound messages, use either Send Message to fire and forget or Send & Wait for Reply to initiate a two-way conversation.
Studio maintains a unique session based on the Contact's identifier (usually a phone number). For a REST API Flow, Studio only allows a single active Execution per Contact. If an Execution is already active for a Contact, a new REST API request for that same Contact will simply return the existing Execution.
Tip: Be sure the
To and
From phone numbers are formatted as E.164 (e.g. +1xxxxxxxxxx) to ensure correct session tracking.
See the Studio REST API docs for more details.
Configuring Your Twilio Number for Studio
Once you’re happy with your Flow, you can connect it to a Twilio Number so people can start interacting with it! Simply navigate to the Console > Phone Numbers > Manage Numbers, choose the number you’d like to connect to the Flow, and select Studio Flow for when a call comes in (for Voice) and when a message comes in (for SMS). Choose your Flow from the dropdown, and save to connect.
You can also copy-paste the Webhook URL onto any Twilio resource that takes a callback URL, including Messaging Services, Short Codes, and Channels. Depending on the product, this can be done in the Console, via API, or both.
Note: A Twilio phone number can only route inbound messages and calls to a single Studio Flow (one-to-one), but that Flow can process messages and calls from multiple phone numbers (one-to-many).
Important: For Voice Flows, add the Studio Flow Webhook URL to both the Call Status Changes and the Primary Handler Fails fields to ensure Studio can always correctly detect the end of the call.
Now, try calling the number in the screenshot -- if you hear a message referencing this guide, it’s powered by a Studio Flow!
Working With Variables
As your flow executes, we save the state in what's called the Flow Context. Any data in the flow context can be accessed by your widgets as variables, either in configuration fields or in text areas as variable substitution.
Studio supports the Liquid template language, which supports both output tags and logic tags. For example, to send a text message and include the contact's name, you could use variables like so:
Hello {{flow.data.first_name}}
More sophisticated logic is also supported. In this example, assume you want to check if you actually know the contact's name before trying to reference it:
{% if flow.data.first_name %}
Hello {{flow.data.first_name}}
{% else %}
Hello friend
{% endif %}
Note: Liquid template variables can render up to 16KB strings max.
Context Variables
There are 4 types of data stored in the Context:
- Flow: data intrinsic to the flow, such as the phone number associated with it
- Trigger: data that gets set when a flow is initiated, such as the initial incoming message, phone call, or REST API.
- Widgets: data that each widget sets itself and adds to the context as it gets executed, such as the digits a user presses or the body of an incoming message.
- Contact: data about the current contact engaging with your flow, such as their phone number
Flow Variables include:
The Flow's Execution Sid:
flow.sid
The Flow's address (e.g. Twilio phone number):
flow.channel.address
Example Flow variables in JSON format:
{ "flow": { "flow_sid": "FWxxxx", "channel": { "address": "+1xxxxxxxxxx" }, "sid": "FNxxxxx" } }
Contact Variables include:
The user's address (e.g. handset phone number):
contact.channel.address
Widget Variables
After execution, many widgets publish data variables to the Execution context, and these variables may be referenced by subsequent widgets. For example, when the Send Message widget step completes, it will have stored the following variables:
Sid
widgets.MY_WIDGET_NAME.message.Sid
To
widgets.MY_WIDGET_NAME.message.To
From
widgets.MY_WIDGET_NAME.message.From
Body
widgets.MY_WIDGET_NAME.message.Body
Note the casing of variable names, and remember that widget names must be unique, must start with a letter and cannot include spaces or additional periods. Any variables that come from an external source, such as a status callback or Twilio API call, are cased according to the relevant spec for that callback. For example, an incoming message will have a "Body" parameter, where we keep the capitalized "Body" like in the Twilio SMS API. Variables specific to the
flow,
trigger, and
widgets context are lower cased.
If a Flow is triggered via an inbound request (REST API), variables can be passed in at request time.
Publishing Flows & Revision History
Changes are automatically saved but will not be made live for consumers until you explicitly click "Publish". This lets you safely make changes and when you're happy with the final product, publish them for everyone.
Studio also includes Revision History. You will be able to see a list of every change made to your flow and the differences between the currently published version and the latest draft.
Testing Draft Flows
Testing draft flows is easy with Studio -- you just need to make your phone number go through the latest draft version instead of the published version. Click the Trigger widget and you can add as many phone numbers as you would like, separated with commas, to experience the latest draft version of the flow.
When you are pleased with the result, you can click Publish to make it accessible to everyone.
Renaming Flows
To change the name of your Flow, click on the Trigger widget. The right-side configuration panel includes a field for Flow Name. Enter the desired name, and click Save to rename your Flow.
Duplicating Flows
To make a copy of an existing Flow, navigate to your list of Flows and locate the one you'd like to copy. Click on Duplicate Flow and a new copy of the Flow will be created and automatically opened.
Deleting Flows
If you'd like to delete a Flow, navigate to your list of Flows and locate the one you'd like to remove. Click on Delete Flow to remove the Flow.
Caution: Deleting a Flow is not recoverable. If you delete a Flow that is being used in production to handle calls or messages, you will need to rebuild the Flow to restore service.
Importing and Exporting Flows
Use this functionality to export flow definitions as JSON and import them to other Twilio accounts. It's best to use Duplicate flow for simply creating a copy of a Flow in the same account. Import / Export of Flows is intended for exporting a flow to store elsewhere, e.g. source control and/or to move flows between Twilio accounts.
Import/Export via the REST API v2Try the Quickstart
Import and export functionality for Flows is also available via the REST API v2. Check out the Flows API Quickstart to learn how to export the Flow JSON and import it into a new Flow via the REST API and helper libraries.
Your Studio Flow definition may reference other Twilio resources (like phone numbers, Functions, etc.). These references are not automatically copied when the Flow is imported to another account, and you may need to make manual updates to your Flow to refresh references to those resources.
Exporting Flow Data
Click on the Trigger widget and select Show Flow JSON.
This will display the JSON data that defines your flow. You can copy this data out and store it elsewhere.
Importing Flow Data
Create a new Flow, and select Import from JSON.
Click Next. You will be presented with a code window to paste valid Flow JSON.
Click Next to create the new Flow once the JSON definition is added.
Troubleshooting
Here are some common gotchas and things to look out for when troubleshooting Studio:
- Sometimes Executions become stuck for Inbound Calls. Follow our best practices to avoid stuck Executions.
- Only API versions 2010-04-01 and later are supported. If your account is configured to use the deprecated 2008 API by default, be sure to upgrade your phone numbers to use a later API.
- Infinite loops are possible! We have a built-in limit, so your Execution will end after 1,000 steps. But be careful when creating loops over a set of widgets.
- Returning custom TwiML from an HTTP Request widget isn't supported. Instead, follow this guide for returning custom TwiML from a Run Function widget or use the Add TwiML Redirect widget.
- Flows are limited to a maximum of 1,000 widgets for published Flows.
Need some help?
We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd browsing the Twilio tag on Stack Overflow. | https://www.twilio.com/docs/studio/user-guide | CC-MAIN-2021-10 | refinedweb | 3,096 | 60.75 |
I'd like to plot numerical data against non numerical data, say something like this:
import matplotlib.pyplot as pl x=['a','b','c','d'] y=[1,2,3,4] pl.plot(x,y)
However, with matplotlib plot packages you get a warning that the data is not float (ValueError: invalid literal for float(): a).
In their 'How-to', they suggest to put first the numerical data on the x axis and then format it. Is there a way to do it directly (as above)?
import matplotlib.pyplot as pl xticks=['a','b','c','d'] x=[1,2,3,4] y=[1,2,3,4] pl.plot(x,y) pl.xticks(x,xticks) pl.show()
import matplotlib.pyplot as plt x = ['a','b','c','d'] y = [1,2,3,4] plt.plot(y) plt.xticks(range(len(x)), x) plt.show()
On a side note, dates are numerical in this sense (i.e. they have an inherent order and spacing).
Matplotlib handles plotting temporal data quite well and very differently than the above example. There's an example in the matplotlib examples section, but it focuses on showing off several different things. In general, you just use either
plot_date or just plot the data and call
ax.xaxis_date() (or
yaxis_date) to tell matplotlib to use the various date locators and tickers. | https://pythonpedia.com/en/knowledge-base/6974847/plot-with-non-numerical-data-on-x-axis--for-ex---dates- | CC-MAIN-2020-40 | refinedweb | 221 | 68.47 |
pthread_join()
Join thread
Synopsis:
#include <pthread.h> int pthread_join( pthread_t thread, void** value_ptr );
Since:
BlackBerry 10.0.0 non-POSIX pthread_timedjoin() function is similar to pthread_join(), except that an error is returned if the join doesn't occur before a given time.
The target thread must be joinable. Multiple pthread_join(), pthread_timedjoin(), ThreadJoin(), and ThreadJoin_r() calls on the same target thread aren't allowed. When pthread_join() returns successfully, the target thread has been terminated.
Returns:
- EOK
- Success.
- EBUSY
- The thread thread is already being joined.
- EDEADLK
- The thread thread is the calling thread.
- EFAULT
- A fault occurred trying to access the buffers provided.
- EINVAL
- The thread thread isn't joinable.
- ESRCH
- The thread thread doesn't exist.
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_join.html | CC-MAIN-2015-11 | refinedweb | 138 | 62.85 |
-xml/perl-xml-faq
In directory sc8-pr-cvs1:/tmp/cvs-serv26784
Modified Files:
faq-style.xsl faq.css perl-xml-faq.xml
Log Message:
- added XML::Validator::Schema section
- updated XML::XSLT status
- rewrote Perl 5.8 section in the present tense :-)
- rewrote html form section
- fixed DOCTYPE for DocBook 4.2
- corrected SYSTEM URI for use with catalog
- added paragraph on pull parsing
- numerous minor tweaks
perl-xml-faq.xml
Index: faq-style.xsl
===================================================================
RCS file: /cvsroot/perl-xml/perl-xml-faq/faq-style.xsl,v
retrieving revision 1.4
retrieving revision 1.5
diff -u -d -r1.4 -r1.5
--- faq-style.xsl 19 Jun 2002 21:30:29 -0000 1.4
+++ faq-style.xsl 14 Oct 2003 09:13:47 -0000 1.5
@@ -4,10 +4,23 @@
xmlns="";
-<!-- This is where I chose to install the DocBook XSL Stylesheets -->
-<!-- from: -->
+<!--
-<xsl:import
+ This stylesheet merely imports the Docbook XSL stylesheets and sets a few
+ parameters. Download the stylesheets from:
+
+
+
+ Unpack them onto your system and set up a catalog entry to map the URI of
+ the 'current' release to the directory where you unpacked it, eg:
+
+ <rewriteURI
+ uriStartString="";
+
+-->
+
+
+<xsl:import
<!-- Parameter settings -->
@@ -21,8 +34,6 @@
</xsl:param>
-
-
<!-- Templates to override defaults -->
<xsl:template
@@ -33,11 +44,39 @@
<br /><b><xsl:value-of</b><br />
</xsl:template>
-<!-- Why didn't this work?
-<xsl:template
- <p>Revision History Here</p>
+<xsl:template
+ <div class="revhistory">
+ <p class="title"><b>Last updated:</b>
+ <xsl:text> </xsl:text>
+ <xsl:call-template
+ <xsl:with-param
+ </xsl:call-template>
+ <xsl:text> </xsl:text>
+ <xsl:value-of
+ <xsl:text>, </xsl:text>
+ <xsl:value-of
+ </p>
+ </div>
+</xsl:template>
+
+<xsl:template
+ <xsl:param0</xsl:param>
+ ><xsl:value-of</xsl:otherwise>
+ </xsl:choose>
</xsl:template>
--->
</xsl:stylesheet>
Index: faq.css
===================================================================
RCS file: /cvsroot/perl-xml/perl-xml-faq/faq.css,v
retrieving revision 1.2
retrieving revision 1.3
diff -u -d -r1.2 -r1.3
--- faq.css 17 Apr 2002 20:46:36 -0000 1.2
+++ faq.css 14 Oct 2003 09:13:47 -0000 1.3
@@ -1,45 +1,50 @@
BODY {
background: #FFFFFF;
font-family: Verdana, Arial, Helvetica, sans-serif;
- font-size: 10pt;
- font-weight: normal;
-}
-
-TD {
- background: #FFFFFF;
- font-family: Verdana, Arial, Helvetica, sans-serif;
- font-size: 10pt;
+ font-size: 90%;
font-weight: normal;
}
-
-TH {
- background: #FFFFFF;
- font-family: Verdana, Arial, Helvetica, sans-serif;
- font-size: 10pt;
- font-weight: bold;
-}
-
-.programlisting {
- padding-top: 10;
- padding-left: 8;
- background-color: #ffffe0;
-}
H1.title {
- padding: 6;
+ padding: 0.2em;
border-style: solid;
- border-width: 2;
+ border-width: 2px;
border-color: #eeeeee;
}
H3.title {
- padding: 4;
- margin-top: 20;
+ padding: 0.4em;
+ margin-top: 2em;
border-style: solid;
- border-width: 2;
+ border-width: 2px;
border-color: #eeeeee;
}
+DIV.abstract {
+ margin: 2em;
+ padding: 1em;
+ background-color: #eeeeee;
+}
+
+DIV.abstract P.title {
+ font-size: 120%;
+}
+
DIV.revhistory {
width: 400px;
}
+
+TR.question TD {
+ padding-top: 1.0em;
+}
+
+TT {
+ font-size: 120%;
+}
+
+.programlisting {
+ padding-top: 0.8em;
+ padding-left: 0.8em;
+ background-color: #ffffe0;
+}
+
Index: perl-xml-faq.xml
===================================================================
RCS file: /cvsroot/perl-xml/perl-xml-faq/perl-xml-faq.xml,v
retrieving revision 1.9
retrieving revision 1.10
diff -u -d -r1.9 -r1.10
--- perl-xml-faq.xml 19 Jun 2002 21:29:41 -0000 1.9
+++ perl-xml-faq.xml 14 Oct 2003 09:13:47 -0000 1.10
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8" standalone="no"?>
-<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.2b1//EN"
- "";
+<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+ "";
>
<article class="faq">
@@ -16,6 +16,7 @@
</author>
<year>2002</year>
+ <year>2003</year>
<holder>Grant McLean</holder>
@@ -32,7 +33,7 @@
most common question for beginners - "Where do I start?"</para>
<para>The official home for this document on the web is:
- <ulink url=""></ulink>.
+ <ulink url=""></ulink>.
The official source for this document is in CVS on <ulink
url="">SourceForge</ulink> at <ulink
url="";
@@ -365,6 +366,16 @@
stream style. You configure the parser and it gives you the document in
chunks (bits of the tree or 'twigs').</para>
+ <para>Finally, the latest trendy buzzword in Java and C# circles is
+ 'pull' parsing (see <ulink url="";
+ ></ulink>). Unlike SAX, which 'pushes' events at your
+ code, the pull paradigm allows your code to ask for the next bit when
+ it's ready. This approach is reputed to allow you to structure your code
+ more around the data rather than around the API. Eric Bohlman's
+ <classname>XML::TokeParser</classname> offers a simple but powerful
+ pull-based API on top of <classname>XML::Parser</classname>. There
+ are currently no Perl implementations of the XMLPULL API.</para>
+
</answer>
</qandaentry>
@@ -487,7 +498,7 @@
is no API for finding or transforming nodes. This module is also not
suitable for working with 'mixed content'.
<classname>XML::Simple</classname> has it's own <ulink
-frequently asked
+frequently asked
questions</ulink> document.</para>
<para>Although <classname>XML::Simple</classname> uses a tree-style, the
@@ -887,13 +898,13 @@
</question>
<answer>
- <para>This module aims to implement XSLT in Perl, so the good news is
- that so long as you have <classname>XML::Parser</classname> working you
- won't need to compile anything to install this module. The bad news is
- that it is not a complete implementation of the XSLT spec, it is still in
- 'alpha' state and it's not clear whether it is under active development.
- The <classname>XML::XSLT</classname> distribution includes a script you
- can use from the command line like this:</para>
+ <para>This module aims to implement XSLT in Perl, so as long as you have
+ <classname>XML::Parser</classname> working you won't need to compile
+ anything to install it. The implementation is not complete, but work is
+ continuing and you can join the fun at the project's <ulink
+SourceForge page</ulink>. The
+ <classname>XML::XSLT</classname> distribution includes a script you can
+ use from the command line like this:</para>
<programlisting><![CDATA[
xslt-parser -s toc-links.xsl perl-xml-faq.xml > toc.html
@@ -904,13 +915,6 @@
Introduction to Perl's XML::XSLT module</ulink> at <ulink
url="">linuxfocus.org</ulink>.</para>
- <para>Some people have experienced difficulty installing the latest
- version of this module - possibly since maintenance has been handled by
- multiple people. At the time of writing, the latest version was
- <filename>J/JS/JSTOWE/XML-XSLT-0.40.tar.gz</filename> although CPAN.pm
- would only find
- <filename>B/BR/BRONG/XML-XSLT-0.32.tar.gz</filename>.</para>
-
</answer>
</qandaentry>
@@ -1170,7 +1174,7 @@
<qandaentry id="utf_perl_5_6">
<question>
- <para>What can Perl do with a UTF8 string?</para>
+ <para>What can Perl do with a UTF-8 string?</para>
</question>
<answer>
@@ -1231,34 +1235,28 @@
<qandaentry id="utf_perl_5_8">
<question>
- <para>What will Perl 5.8 do with a UTF8 string?</para>
+ <para>What can Perl 5.8 do with a UTF-8 string?</para>
</question>
<answer>
- <para>The Unicode support in Perl 5.6 is not complete and many of the
- shortcomings will be fixed in Perl 5.8. One major leap forward in 5.8
- will be the move to Perl IO and 'layers' which will allow translations to
- take place>
+ <para>The Unicode support in Perl 5.6 had a number of omissions and bugs.
+ Many of the shortcomings were fixed in Perl 5.8 and 5.8.1. One major
+ leap forward in 5.8 was the move to Perl IO and 'layers' which allows
+ translations to take place transparently>
<programlisting><![CDATA[
-open($fh,'>:encoding(iso-8859-1)', $path) || die "open($path): $!";
+open($fh,'>:encoding(iso-8859-1)', $path) or die "open($path): $!";
$fh->print($utf_string);
]]></programlisting>
- <para>File handle operations will also be applicable to in-memory 'files'
- held in Perl scalars.</para>
-
- <para>New built-in functions will allow you to check the utf8 flag
- on scalars and convert utf-8 strings to and from byte strings.</para>
-
- <para>The core 5.8 distribution will also include a number of new modules
- in the Unicode:: namespace. Supported operations will include querying
- the Unicode Character Database, sorting using Unicode collating rules
- and normalising Unicode character forms.</para>
+ <para>The new core module 'Encode' can be used to translate between
+ encodings (but since that usually only makes sense during IO, you might
+ as well just use layers) and also provides the 'is_utf8' function for
+ accessing the UTF-8 flag on a string.</para>
</answer>
</qandaentry>
@@ -1322,9 +1320,8 @@
<formalpara>
<title>Perl 5.8 IO layers</title>
- <para>At the time of writing this document, Perl 5.8 had not been
- released but when it is you'll be able to specify an encoding
- translation layer as you open a file like this:</para>
+ <para>You can specify an encoding translation layer as you open a file
+ like this:</para>
</formalpara>
@@ -1333,8 +1330,8 @@
$fh->print($utf_string);
]]></programlisting>
- <para>You'll also be able to push an encoding layer onto an already
- open filehandle like this:</para>
+ <para>You can also push an encoding layer onto an already open filehandle
+ like this:</para>
<programlisting><![CDATA[
binmode(STDOUT, ':encoding(windows-1250)');
@@ -1502,13 +1499,15 @@
control characters with printable characters. For strict Latin1 text it
shouldn't matter, but if your text contains 'smart quotes', daggers,
bullet characters, the Trade Mark or the Euro symbols it's not
- iso-8859-1.</para>
+ iso-8859-1. <classname>XML::Parser</classname> version 2.32 and later
+ include a CP1252 mapping which can be used with documents bearing this
+ declaration:</para>
</formalpara>
-<!--
- <para>FIXME: Is there a cp1252 encoding map?</para>
--->
+ <programlisting><![CDATA[
+<?xml version='1.0' encoding='WINDOWS-1252' ?>
+ ]]></programlisting>
</answer>
@@ -1517,27 +1516,34 @@
<formalpara>
<title>Web Forms</title>
- <para>If your script accepts text from a web form, you have no way of
- knowing what encoding the client system was using. If you save the data
- to an XML file, random high characters in the data may cause the file to
- not be 'well-formed'.</para>
+ <para>If your Perl script accepts text from a web form, you are at the
+ mercy of the client browser as to what encoding is used - if you save the
+ data to an XML file, random high characters in the data may cause the
+ file to not be 'well-formed'. A common convention is for browsers to
+ look at the encoding on the page which contains the form and to translate
+ data into that encoding before posting. You declare an encoding by using
+ a 'charset' parameter on the Content-type declaration, either in the
+ header:</para>
</formalpara>
- <para>A good starting point is probably to include an XML declaration
- which specifies iso-8859-1 encoding. By doing this, you are stating your
- assumption that characters in the range 0x00-0x7F are ASCII and
- characters in the range 0xA0-0xFF are Latin1. It's probably not safe to
- stop there though.</para>
+ <programlisting><![CDATA[
+print CGI->header('text/html; charset=utf-8');
+ ]]></programlisting>
- <para>If the user submits characters in the range 0x80-0x9F they are
- unlikely to be ISO Latin1. You can't just assume this won't happen
- as it's remarkably common for users to prepare text in Microsoft
- Word and copy/paste>
+ <para>or in a meta tag in the document itself:</para>
+
+ <programlisting><![CDATA[
+<meta http-
+ ]]></programlisting>
+
+ <para>If you find you've received characters in the range 0x80-0x9F, they
+ are unlikely to be ISO Latin1. This commonly results from users
+ preparing text in Microsoft Word and copying/pasting>
<programlisting><![CDATA[
sub sanitise {
@@ -1551,6 +1557,10 @@
}
]]></programlisting>
+ <para>Note: It might be safer to simply reject any input with characters
+ in the above range since it implies the browser ignored your charset
+ declaration and guessing the encoding is risky at best.</para>
+
</answer>
</qandaentry>
@@ -1571,9 +1581,9 @@
<para>These days, there are a number of alternatives to the DTD and the
term validation has assumed a broader meaning than simply DTD
conformance. The most visible alternative to the DTD is the W3C's own
- XML Schema. <ulink
-Relax NG</ulink> is
- a popular alternative developed by OASIS.</para>
+ <ulink url="">XML Schema</ulink>.
+ <ulink url="">Relax
+ NG</ulink> is a popular alternative developed by OASIS.</para>
<para>If you design your own class of XML document, you are perfectly
free to select the system for defining and validating document
@@ -1689,7 +1699,21 @@
<para><classname>XML::Xerces</classname> provides a wrapper around the
Apache project's Xerces parser library. Installing Xerces can be
challenging and the documentation for the Perl API is not great, but it's
- the only tool offering Schema validation from Perl.</para>
+ the most complete offering for Schema validation from Perl.</para>
+
+ </answer>
+ </qandaentry>
+
+ <qandaentry id="validation_xml_validator_schema">
+ <question>
+ <para>W3C Schema Validation With <classname>XML::Validator::Schema</classname></para>
+ </question>
+ <answer>
+
+ <para>Sam Tregar's <classname>XML::Validator::Schema</classname> allows
+ you to validate XML documents against a W3C XML Schema. It does not
+ implement the full W3C XML Schema recommendation, but a useful
+ subset.</para>
</answer>
</qandaentry>
@@ -1947,7 +1971,7 @@
<title>Bad encoding declaration</title>
<para>An incorrect or missing encoding declaration can cause this. By
- default the encoding is assumed to be UTF8 so if your data is (say)
+ default the encoding is assumed to be UTF-8 so if your data is (say)
ISO-8859-1 encoded then you must include an encoding declaration. For
example:</para>
@@ -1999,7 +2023,7 @@
<para>You can find the definitions for <ulink
url="";
- >HTML Latin 1 characters entities</ulink> on the W3C Site.</para>
+ >HTML Latin 1 character entities</ulink> on the W3C Site.</para>
<para>You can include all these character entities into your DTD, so that
you won't have to worry about it anymore:</para>
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/perl-xml/mailman/perl-xml-commits/thread/E1A9LFV-0006yA-00@sc8-pr-cvs1.sourceforge.net/ | CC-MAIN-2016-36 | refinedweb | 2,339 | 57.57 |
At work (my real job) today, an interesting discussion came up about how
nice it is to be able to use the Windows Scripting Host Object Model
and/or JavaScript to parse and execute arbitrary strings of code "on
the fly". The .NET Platform doesn't have, in any of the supported
languages I've seen, an "Eval" method to do this. However, after
a little research, I found out that it's almost a trivial matter to "roll
your own"!
The key built-in .NET classes with which this can be so easily achieved
are the CodeProvider classes (VBCodeProvider and CSharpCodeProvider,
for example) in the System.CodeDom.Compiler namespace. Two entire book
chapters could be written about how to use these, along with Reflection,
and some authors probably already have, so I leave the research to you.
Here my objective is simply to illustrate how easily these namespaces
and classes can be put to use for something truly useful.
Think about this: Your application does a lot of business logic, some of
which requires complicated logical strings of code that may change over
time to meet certain business conditions or metadata. Wouldn't it be
great if you could pull the most current string of code to be run out
of your
database based on certain stored procedure input parameters, and be sure
it's run and you get back the desired result? In fact, the returned string
of code may even be dynamically created based on some of the input parameters
from the sproc itself... Well, that's what this exercise is all about!
Without further discussion, I present the basic framework for the class:
Imports Microsoft.VisualBasic
Imports System
Imports System.Text
Imports System.CodeDom.Compiler
Imports System.Reflection
Imports System.IO

Namespace PAB.Util
    Public Class EvalProvider
        Public Function Eval(ByVal vbCode As String) As Object
            Dim c As VBCodeProvider = New VBCodeProvider
            Dim icc As ICodeCompiler = c.CreateCompiler()
            Dim cp As CompilerParameters = New CompilerParameters
            cp.ReferencedAssemblies.Add("system.dll")
            cp.ReferencedAssemblies.Add("system.xml.dll")
            cp.ReferencedAssemblies.Add("system.data.dll")
            ' Sample code for adding your own referenced assemblies
            'cp.ReferencedAssemblies.Add("c:\yourProjectDir\bin\YourBaseClass.dll")
            'cp.ReferencedAssemblies.Add("YourBaseclass.dll")
            cp.CompilerOptions = "/t:library"
            cp.GenerateInMemory = True
            ' Wrap the caller's code in a class with a single EvalCode method.
            Dim sb As StringBuilder = New StringBuilder("")
            sb.Append("Imports System" & vbCrLf)
            sb.Append("Imports System.Xml" & vbCrLf)
            sb.Append("Imports System.Data" & vbCrLf)
            sb.Append("Imports System.Data.SqlClient" & vbCrLf)
            sb.Append("Namespace PAB " & vbCrLf)
            sb.Append("Class PABLib " & vbCrLf)
            sb.Append("public function EvalCode() as Object " & vbCrLf)
            'sb.Append("YourNamespace.YourBaseClass thisObject = New YourNamespace.YourBaseClass()")
            sb.Append(vbCode & vbCrLf)
            sb.Append("End Function " & vbCrLf)
            sb.Append("End Class " & vbCrLf)
            sb.Append("End Namespace" & vbCrLf)
            Debug.WriteLine(sb.ToString()) ' look at this to debug your eval string
            Dim cr As CompilerResults = icc.CompileAssemblyFromSource(cp, sb.ToString())
            Dim a As System.Reflection.Assembly = cr.CompiledAssembly
            Dim o As Object
            Dim mi As MethodInfo
            o = a.CreateInstance("PAB.PABLib")
            Dim t As Type = o.GetType()
            mi = t.GetMethod("EvalCode")
            Dim s As Object
            s = mi.Invoke(o, Nothing)
            Return s
        End Function
    End Class
End Namespace
What does it do? Well, it creates the CodeCompiler instance and the
ICodeCompiler interface, sets it all up, and takes your input string,
which needs to look just like valid code (VB.NET in this case, though
it's just as easy to use the CSharpCodeProvider). It creates a public
method, EvalCode(), with a return value of Object, compiles it, loads
the compiled assembly into memory, and, using Reflection, instantiates
an instance and calls your method, which is "wrapped" into the generic
EvalCode method. It then returns the result (whatever Object that may
be) and you are "good to go!".
An example of code that you could send into this method might be:
Dim strConn As String = "Server=(local);dataBase=Northwind;User id=sa;Password=;"
Dim cmd As New SqlCommand()
Dim cn As New SqlConnection(strConn)
cn.Open()
cmd.Connection = cn
cmd.CommandText = "select * from employees"
cmd.CommandType = CommandType.Text
Dim ds As DataSet = New DataSet()
Dim da As New SqlDataAdapter()
da.SelectCommand = cmd
da.Fill(ds)
Return ds
This code could come from your database, or wherever,
and be dynamically assembled before you send it in. The downloadable
solution has a Winforms test harness illustrating sample usage with
the code snippet shown above, and uses it to populate a DataGrid on
the form. Enjoy!
Download the code that accompanies this article
When you need to build something people can depend on, you start with a strong foundation. For OpenShift’s foundation, we have been building on Kubernetes for over a year.
We have enjoyed the journey, but are not simply along for the ride. OpenShift has tirelessly helped make Kubernetes one of the fastest growing and sought after container orchestration engines available. OpenShift 3.3 is leveraging Kubernetes 1.3, and for that reason, Red Hat has contributed to the following projects in the Kubernetes 1.3 release:
- Authentication
- PetSets
- Init Containers
- Rolling Update Status
- Disk Attach Controllers
- Pod Security Policies
- Pod Evictions
- Quota Controlled LoadBalancer Services
- Quota Controlled nodePorts
- Scale to 1,000 nodes
- Dynamic Provisioning of Storage
- Multiple Schedulers in Parallel
- Seccomp Policy Support
- Deployments
This work represents a significant investment from Red Hat in the Kubernetes community and technology. We truly feel this is the place to work on cloud native container enabled solutions.
While we are working upstream in Kubernetes, we also work upstream in Origin (the open source community for OpenShift) where we leverage a vibrant gathering of over 200 corporations that have been driving innovation into OpenShift. We tailor the features of Kubernetes towards enterprise class use cases and deliver an out-of-the-box experience that people can immediately start taking advantage of in order to increase their ability to leverage these popular technologies on private and public clouds. I'd like to call out a few new features that will grab your attention in OpenShift 3.3.
- Networking
- Security
- Cluster Longevity
- Framework Services
Networking
Four main features in OpenShift 3.3 open the door to easier usage of the solution in production environments. Due to the unbelievable popularity of OpenShift 3, the product has found itself in some critical production situations with customer revenue generating applications. People are leveraging OpenShift 3 to run critical business services…today. OpenShift 3.3 delivers the following four features to assist in that usage:
Controllable Source IP
Tenants on the platform would like to leverage data sources or end points that live outside of the platform such as HRM, CRM, or ERM systems. These could be anything from specialized hardware appliances to decade old deployments that simply have stood the test of time. Due to where they are located in the datacenter, customers have chosen to guard access to them by firewalling them off and only allowing for approved IP address connections. This access design was common before the age of API management and is still used today for many business services.
The problem with cloud architectures and containers, is as you increase the mobility and manageability of the container it has a higher chance of moving around the cluster. This means that the underlying source IP packet, that comes from the actual node level, can change as the container moves around the cluster. If that is changing, it becomes difficult to grant access from an application living on a cloud platform to a service that is behind corporate firewalls. Thus we have a problem getting to the good stuff!
In come controllable source IPs in OpenShift 3.3. Now a platform administrator can identify a node in the cluster and allocate a number of static IP addresses to the node (at the host level). If a tenant needs an unchanging source IP for his or her application service, they can request access to one during the process they use to ask for firewall access. The platform admin will then deploy an egress router from the tenant's project leveraging a nodeSelector in the deploymentConfig to ensure the pod lands on the host with the pre-allocated static IP address.
The egress pod’s deployment will declare one of the sourceIPs, the destinationIP of the protected service, and a gatewayIP to reach the destination. Once the pod is deployed, the platform admin can create a service to access the egress router pod. They will then add that sourceIP to the corporate firewall and close out the ticket. The tenant will now have access information to the egress router service that was created in their project (ie service.project.cluster.domainname.com).
When the tenant would like to reach the external, firewalled service they will call out to the egress router pod's service (ie service.project.cluster.domainname.com) in their application (ie the JDBC connection information) rather than the actual protected service url.
Router Sharding
Our customers have been leveraging the platform to offer a multi-tenant, docker compliant, platform. As such, they are placing thousands of tenants on the platform from all different walks of life. In some cases, the tenant are subsidiary corporations or have drastically different affiliations. With such diversity, often times business rules and regulatory requirements will dictate that tenants not flow through the same routing tier. To solve this issue, OpenShift 3.3 releases router sharding. With router sharding a platform administrator can group specific routes or namespaces into shards and then assign those shards to routers that may be up and running on the platform or be external to the platform. This allows tenants to have separation of egress traffic at the routing tiers.
Non-Standard Ports
OpenShift has always been able to support non-standard TCP ports via SNI routing with SSL. As the internet of things (IoT) has exploded, so too has the need to speak to dumb devices or aggregation points without SNI routing. At the same time, with more and more people running data sources (such as databases) on OpenShift, many more people want to expose ports other than 80/443 for their applications so that people outside of the platform can leverage their service.
Until today, the solution for this in Kubernetes was to leverage NodePorts or External IPs. The problem with NodePorts is that only 1 tenant can have the port on all the nodes in the cluster. The problem with External IPs is that duplications can be common if the admin is not carefully assigning them out.
OpenShift 3.3 solves this problem through the clever use of edge routers. What happens is the platform administrator will either select one or more of the nodes (more than one for high availability) in the cluster to become edge routers or they can just run additional pods on the HAProxy nodes. The additional pods we are going to run are ip failover pods. But this time, we will specify a pool of available Ingress IPs that are routable to the nodes in the cluster and resolvable externally via the corporate DNS.
This pool of IP addresses is going to be served out to tenants who want to use a port other than 80 and 443. In these use cases, we have services outside of the cluster trying to connect to services inside the cluster that are running on ports other than 80/443. This means they are coming into the cluster (ingress) as opposed to leaving the cluster (egress). By resolving through the edge routers, we are able to ensure each tenant gets the port they desire by pairing it with an Ingress IP from the available pool rather than giving them a random port.
In order to trigger this allocation of an IngressIP, the tenant will just declare 'LoadBalancer' as the type in their service json for their application. Afterwards they can use an 'oc get $servicename' in order to see what IngressIP was assigned to them.
A/B Service Annotation
This OpenShift 3.3 feature is one of my favorites. We have always been able to do A/B testing with OpenShift, but it was not an "easy to use" feature. Now in OpenShift 3.3 we have added service lists to routes. Each route can now have multiple services assigned to it and those services can come from different applications or pods. We then designed automation with HAProxy to be able to read weight annotations on the route for the services. A tenant can now very easily from the command line or webconsole declare that 70% of traffic will flow to appA and 30% will flow to appB.
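Conceptually, the weighted split is just weighted random selection over the route's backends. The sketch below is my own illustration of that idea (it is not how HAProxy implements it, and the service names are made up):

```python
import random

# Hypothetical services on a route, with the weight annotations a tenant set.
backends = ["appA", "appB"]
weights = [70, 30]

def pick_backend():
    # Weighted choice: appA should win roughly 70% of the time.
    return random.choices(backends, weights=weights, k=1)[0]

counts = {"appA": 0, "appB": 0}
for _ in range(10_000):
    counts[pick_backend()] += 1
print(counts)  # roughly {'appA': 7000, 'appB': 3000}
```

Shifting the split is then just a matter of editing the weight annotations; no application change is needed.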
Security
Three main improvements to the security of the cluster come in the form of stronger AUTH control, the ability to disable system calls for containers via security context constraints, and a easier way to keep track of and update CERTs we use for the SSL traffic between OpenShift framework pieces.
SCC Profiles for seccomp
seccomp is a relatively unknown feature in RHEL that has been enabled for docker 1.10 or higher. seccomp allows containers to define interactions with the kernel using syscall filtering. This will reduce the risk of a malicious container exploiting a kernel vulnerability, thereby reducing the guest attack surface. We have added an ability to create seccomp policies with OpenShift 3.3 security context constraints (SCC). This will allow platform administrators to set SCC policies on tenants that will impose a filter on their containers for linux level system calls.
Kerberos Support in the oc client for Linux
We can now recognize and handle the kinit process of generating a kerberos ticket during a tenant's interaction with the oc client on Linux.
$ kinit user1@MYDOMAIN.COM (password = 'password')
$ oc login <OPENSHIFT_MASTER>
CERT Maintenance
OpenShift leverages TLS encryption and token based authentication between its framework components. In order to accelerate and ease the installation of the product, we will self sign CERTs during a hands free installation. OpenShift 3.3 adds the ability to update and change those CERTs that govern the communication between our framework components. This will allow platform administrators to more easily maintain the life cycles of their OpenShift installations.
Cluster Longevity
Once you stand up a cluster and people start using it, your attention as a platform administrator will turn to care and feeding for the cluster. The platform should possess features that help it remain stable under frequent and constant use. In OpenShift 3.3 we spent some time focusing on features that will help. We take advantage of Kubernetes workload priority and eviction policies, we offer an ability to idle and unidle workloads, we increased the number of pods per node and nodes per cluster, and we help the tenant find the right persistent storage for their deployment's needs.
Pod Eviction
OpenShift 3.3 allows platform administrators more control over what happens over the life cycle of the workload on the cluster once the process (container) is started. By leveraging limits and request settings at deployment time, we can figure out automatically how the tenant wants us to treat their workload in terms of resources. We can take one of three positions. If the tenant declares no resource requirements (best effort), we can offer them slack resources on the cluster. But more importantly, that choice allows us to decide to re-deploy their workloads first should an individual node become exhausted. If the tenant tells us their minimum resource requirements but does not ask for a very specific range of consumption (burstable), we can offer them their min while also giving them an ability to eat slack resources should any exist. We will consider this workload more important than best effort in terms of re-deployment during a node eviction. Lastly, if a tenant tells us the minimum and maximum resource requirements (guaranteed), we will find a node with those resources and lock them in as the most important workload on the node. These workloads will remain as the last survivor on a node should it go into a memory starvation situation. The decision to evict is an intimate one to the platform administrator. With that in mind, we have made it configurable. It is up to the platform administrator to turn on the ability to hand a pod (container) back to the scheduler for re-deployment on a different node should out of memory errors start to occur.
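The three positions described above correspond to Kubernetes QoS classes (BestEffort, Burstable, Guaranteed). As a toy sketch of the ordering (my own illustration, not the actual kubelet eviction logic), an eviction pass prefers victims in this order:

```python
# Rank pods by QoS class: BestEffort evicted first, Guaranteed last.
QOS_RANK = {"BestEffort": 0, "Burstable": 1, "Guaranteed": 2}

# Hypothetical pods on a node under memory pressure.
pods = [
    {"name": "web-1",   "qos": "Guaranteed"},
    {"name": "batch-1", "qos": "BestEffort"},
    {"name": "cache-1", "qos": "Burstable"},
]

eviction_order = sorted(pods, key=lambda p: QOS_RANK[p["qos"]])
print([p["name"] for p in eviction_order])  # ['batch-1', 'cache-1', 'web-1']
```

The real implementation weighs actual usage against requests as well, but the class-based priority is the core intuition.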
Scale
For OpenShift 3.3 we have taken the time to qualify the solution on larger environments. You can see some of this work publicly via the Cloud Native Computing Foundation work we completed recently as well as increased information within the product documentation on expectations. We are now up to 1,000 nodes per cluster at 250 pods per node (with a recommendation of 10 pods per hyper-threaded core). That is a quarter of a million containers per cluster. A truly remarkable milestone considering we are not just talking about starting a container. We are talking about establishing developer projects, enforcing quota, running multi-tier application services, exposing public routes, offering persistent storage, and all the other intricacies of deploying real applications.
Idling/UnIdling
Wouldn’t it be great if we lived in a world where developers did not have to care about giving back resources from innovation projects they have paused while working on emergencies? OpenShift 3.3 delivers something that will help. New in OpenShift is an API to idle an application’s pods (containers). The idea is to have your monitoring solution call the API when a threshold to a metric of interest is crossed. The magic happens at the routing tier. The HAProxy will hold the declared route url, that is connected to the service, open and then we will shutdown the pods. Should someone hit this application URL, we will re-launch the pods on available resources in the cluster and connect them to the existing route.
Storage Labels
Ephemeral containers (ones that are erased once they are rebooted) are extremely powerful. But figure out how to give them persistence in a fluid manner across a 1,000 node cluster and you have got something to write home about. OpenShift has had an ability to offer remote persistent block and file based storage for over a year. In OpenShift 3.3, we increase the ability of the application designer or tenant to select a storage provider on the cluster in a more granular manner than stating just file or block. Storage labels can help people call out to a specific provider in a simple manner by adding a label request to their persistent volume claim (PVC).
Framework Services
OpenShift provides resource usage metrics and log access to tenants. These are native framework services that run on the platform that are based on the hawkular and elasticSearch open source projects. With every release of OpenShift, these services become stronger and more feature rich.
Logging Enhancements
We have delivered a log curator utility to help platform administrators deal with the storage requirements of storing tenant logs over time. We have also enhanced the integration with existing ELK stacks you might already own or be invested in by allowing logs to more easily be sent to multiple locations.
Metric Install Enhancement
We added network usage attributes to the core metrics we track for tenants in this release. But we also made metrics a core installation feature instead of a post-install activity. Now the OpenShift installer will guide you through the ansible playbooks required to successfully deploy metrics. Thus driving more usage of the feature in the user interface and CloudForms.
Conclusion
The point of cluster management is to enable the tenants to get the most out of the platform without knowing the details. We try to remove as many barriers that the underlying technologies, infrastructure, or runtimes may impose and allow developers and operators to focus on delivering business services in a high velocity pattern at low operational risk. We hope you enjoy OpenShift 3.3. Please be sure to check out the user interface enhancements and improved developer experience!
Related Posts
If you want to learn more about the new features of OpenShift 3.3 Don’t miss the following blog posts from our engineering team:
What’s New in OpenShift 3.3 – Developer Experience
What’s New in OpenShift 3.3 – Web Console
What’s New in OpenShift 3.3: Enterprise Container Registry | https://blog.openshift.com/whats-new-openshift-3-3-cluster-management/ | CC-MAIN-2017-43 | refinedweb | 2,652 | 51.89 |
In yesterday’s BlackBerry API Hidden Gems post, I showed you some of my favorite classes and methods oft overlooked within BlackBerry® APIs. I’ve got a few more in store for you today, so let’s get started!
IntVector
In the net.rim.device.api.util package there are a bunch of collections for storing primitive Java types, such as ‘int’, ‘byte’, and ‘long’, which mirror the equivalent java.util classes. Using these classes to store the primitive types is more efficient in both memory and time than storing wrapper objects in standard java.util collections.
My favourite example is IntVector. IntVector has the same methods as java.util.Vector but stores primitive ‘int’ values instead of Object references. Under the hood it uses an int[] array to store values instead of an Object[] array and therefore no conversions between ‘int’ and Integer are necessary. This makes IntVector much better for storing ‘int’ values than java.util.Vector as it is both faster and uses less memory. It is also fully synchronized, just like java.util.Vector.
Other adapted classes in net.rim.device.api.util include:
- ByteVector and LongVector: similar to IntVector but for ‘byte’ and ‘long’ types.
- IntHashtable, LongHashtable: adaptations of Hashtable that use primitive ‘int’ and ‘long’ values as the keys, and Objects as the values.
- ToIntHashtable, ToLongHashtable: similar to IntHashtable and LongHashtable but uses Objects for the keys and ‘int’ and ‘long’ for the values.
Here is an example usage of IntVector to store a list of high scores, with the highest scores at the lowest indices.
public class HighScores {
    private IntVector _scores;

    public HighScores() {
        _scores = new IntVector();
    }

    public void add(int score) {
        if (_scores.contains(score)) {
            return; // already there
        }
        boolean isAdded = false;
        for (int i = 0; i < _scores.size(); i++) {
            if (_scores.elementAt(i) < score) {
                _scores.insertElementAt(score, i);
                isAdded = true;
                break;
            }
        }
        if (!isAdded) {
            _scores.addElement(score);
        }
        while (_scores.size() > 10) {
            _scores.removeElementAt(_scores.size() - 1);
        }
    }

    public int getHighScore() {
        if (_scores.isEmpty()) {
            return 0;
        } else {
            return _scores.elementAt(0);
        }
    }

    public int[] getHighScores() {
        int[] array = new int[_scores.size()];
        _scores.copyInto(array);
        return array;
    }
}
Timer and TimerTask
Like weak references, this next gem is also defined in CLDC but is mostly overlooked for its utility. Suppose you want to perform background tasks in your application. You can either use Application.invokeLater() or devise a grandiose background thread implementation that cleverly uses Java® synchronization primitives to efficiently perform background event dispatching. The former consumes your application’s event thread, potentially causing UI lag, and the latter is just a lot of work.
I recommend whipping out Timer and TimerTask for background task processing. Each Timer object has exactly one background thread which processes TimerTasks sequentially. These tasks can be scheduled to occur immediately, after some delay, at a particular time, or repeatedly at a given interval.
The sample below shows how to use Timer and TimerTask to notify an object on a non-event thread about the user pressing the trackball.
public class MyScreen extends net.rim.device.api.ui.container.MainScreen {
    private Timer _timer;

    public MyScreen() {
        this.setTitle("Timer Demo");
        this._timer = new Timer();
    }

    public void onTrackballClick() {
        System.out.println("Quit pressing the trackball!");
    }

    protected boolean navigationClick(int status, int time) {
        this._timer.schedule(new ClickTask(), 0);
        return super.navigationClick(status, time);
    }

    private class ClickTask extends TimerTask {
        public void run() {
            onTrackballClick();
        }
    }
}
There are many more hidden gems in the BlackBerry® SDK but just not enough time here to share them all. I will be doing a talk on this topic at the 2009 BlackBerry Developer Conference and plan to talk about some hidden gems not mentioned here as well as some lesser-known cool features of the JDE itself. If you have found any hidden gems of your own please comment on this post to share your great discovery with the world! I’d love to know which APIs you find useful. | http://devblog.blackberry.com/2009/08/blackberry-api-hidden-gems-part-two/?relatedposts_to=12482&relatedposts_order=3 | CC-MAIN-2015-18 | refinedweb | 647 | 58.89 |
How to find a short form of recursively defined sequences?
Hi, I'm new to sagemath.
Is there any way to calculate/solve/find a short version of a recursively defined sequence?
E.g. I have a sequence like: (Fibonacci)
def f(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    if n == 2:
        return 1
    else:
        return f(n-1)+f(n-2)
How can I compute a short form of $f_n$?
In this example case $f_n$ would be:
$f_n=\frac{1}{\sqrt{5}} (\frac{1+\sqrt{5}}{2})^n - \frac{1}{\sqrt{5}} (\frac{1-\sqrt{5}}{2})^n$
Edit: Thanks to Emmanuel I found how to solve those equations in pdf:
from sympy import Function, rsolve
from sympy.abc import n
u = Function('u')
f = u(n-1) + u(n-2) - u(n)
rsolve(f, u(n), {u(0): 0, u(1): 1})
-sqrt(5)*(1/2 - sqrt(5)/2)**n/5 + sqrt(5)*(1/2 + sqrt(5)/2)**n/5
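As a quick numeric sanity check (my own addition, not from the original thread), the closed form rsolve returns can be compared against the recursive definition:

```python
from math import sqrt

def f(n):
    # the recursive definition from the question
    if n == 0:
        return 0
    if n == 1 or n == 2:
        return 1
    return f(n - 1) + f(n - 2)

def f_closed(n):
    # Binet's formula, i.e. the expression returned by rsolve
    s5 = sqrt(5)
    return (((1 + s5) / 2) ** n - ((1 - s5) / 2) ** n) / s5

for k in range(20):
    assert round(f_closed(k)) == f(k)
print("closed form agrees with the recursion for n = 0..19")
```

(In Sage itself, the built-in fibonacci(n) could serve as the exact reference instead of the floating-point check.)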
Now read the book, cover to cover, and keep it under your pillow... | https://ask.sagemath.org/question/48820/how-to-find-a-short-form-of-recursive-defined-sequences/?sort=votes | CC-MAIN-2021-17 | refinedweb | 180 | 76.25 |
Instrumenting and listening to your code
Paul is chief architect for SoftConcepts and author of Advanced C# Programming (McGraw-Hill, 2002), among other books. He can be contacted at pkimmel@softconcepts.com.
I like to watch The Discovery Channel television show American Chopper about the Teutul family () who build some amazing motorcycles. As well as being entertaining, the Teutuls build these very cool bikes in a short time from about 90 percent stock parts. According to the show, they turn around custom choppers in about a week. How do they do it and what can we learn from them?
The key to the success of Orange County Choppers is that they do use stock parts and most of the pieces and some of the labor are supplied by specialists. For instance, Paul Teutul, Jr. may start with a stock frame, or request a custom frame with some pretty basic specifications. On the "Fire Bike" episode, for example, they added a custom carburetor that looked like a fire hydrant. The key elements here are that a carburetor is a well-defined pattern and the engineers that built it are experts at building carburetors. Collectively, the Teutuls use stock parts and improvise selectively for the extra cool factor.
In our business, stock tools are existing components and frameworks. The more we use stock components, frameworks, and rely on experts, the less time we have to expend on meeting deadlines. In addition, this surplus of energy and time can be exploited to add the fit-and-finish that exceeds expectations and excites users. To this end, in this article I examine the open-source NUnit tool and the .NET Framework's TraceListeners that give you a means of easily and professionally eliminating bugs from code.
Instrument Code As You Write It
To instrument code means to add diagnostics code that lets you monitor and diagnose the code as it runs. One form of instrumenting code is to add Trace statements that tell you what the executing code is really doing. Granted, tracing has been around a while, but only recently has it been included in the broader concept referred to as "instrumenting" code.
Adding Trace statements as you progress is a lot easier and more practical than adding them after the solution code has been written. Instrumenting as you go is better because you know more about the assumptions you are making when you are implementing the solution.
The basic idea is simple: When you write a method or property, add a Trace statement that indicates where the instruction pointer is at and what's going on. This is easy to do. If you are programming in C#, add a using statement that refers to the System.Diagnostics namespace and call the static Trace.WriteLine method, passing a string containing some useful text.
The Trace class maintains a static collection of TraceListeners and one Trace statement multicasts to every listener in the collection. This implies (and is the case, in fact) that you can create a custom TraceListener. A default TraceListener sends information to the Output window in VS.NET, which is always in the listeners collection, but you need to be running VS.NET to see these messages.
In addition to supporting custom TraceListeners, the .NET Framework facilitates turning tracing on and off post-deployment. This means you can instrument your code with Trace statements, leave them in when you deploy your code, and turn them back on in the field by modifying the application's or machine's external XML config file.
Unit Test Frequently
The next thing you need is a process for testing chunks of code. Testing should occur early and often, and it is more practical and prudent to test in iterations, especially since NUnit makes it so easy. Returning to the custom chopper analogy, I guarantee you that the company supplying motors to the Teutuls turns them over and runs them a bit before they ever get on a bike. This is because the motor is an independent, testable entity, a component if you will.
NUnit () is built with .NET. NUnit 2.1 is a unit-testing framework for .NET languages. Although originally modeled after JUnit, NUnit is written in C# and takes advantage of numerous .NET language features. To test code, all you need do is download and install NUnit. Then create a class library and add some attributes to classes containing test code. .NET custom attributes defined by the nunit.framework namespace are used to tag classes as test fixtures and methods for initialization, deinitialization, and as tests.
For example, Listing One is the canonical HelloWorld.exe application, and Listing Two is a class library that implements tests. In Listing One, a sample class keeps down the noise level. Greetings shows you what you need to see, the Trace.WriteLine statement. (I use Trace statements in methods with this low level of complexity if the method is important to the solution domain.)
In Listing Two, I add a reference to the library containing the Greetings class and a using statement introducing its encompassing namespace. Next, I add a reference to the nunit.framework.dll assembly and its namespace. After that, you just need a class tagged with the TestFixtureAttribute (dropping the attribute suffix by convention) and the TestAttribute on public methods that return void and take no arguments.
If you load the test library in NUnit, it takes care of the rest. A green for "pass" and red for "fail" (see Figure 1) removes all ambiguity from the testing process.
The collective result is that NUnit facilitates testing while reducing the amount of scaffolding you have to write to run the tests, and eliminates the time between testing, modifying, and retesting code. You focus on the code that solves the problem and on the tests, not the testing utility itself.
Listening to Your Code
Once you have instrumented your code with Trace statements and written NUnit tests, wouldn't it be nice to be able to see the output from those Trace statements?
Remember that the Trace class writes to every listener in the Trace.Listeners collection. All you need to do is implement a custom TraceListener and NUnit tells you if a test passed or failed, and shows you what's going on behind the scenes. Listing Three shows how to implement a sufficient TraceListener for NUnit. (The new code is shown in bold font.) Inside the file containing the TestFixture, I added a custom TraceListener. The custom listener overrides the Write and WriteLine methods and sends the message to the Console. NUnit redirects standard output (the Console) to NUnit's Standard Out tab (Figure 2). To finish up, you stuff the listener in the Trace.Listeners collection. Now that you have NUnit listening for Trace messages, you can run the tests and all of your Trace statements are written to the Standard Out tab. When the tests are all green, things are going okay.
Conclusion
If you instrument your code with Trace statements, define a custom listener, and use NUnit tests, you have some powerful but easy-to-use code working on your behalf. Making this a regular part of your software development process goes a long way toward producing a better end result, faster.
DDJ
Listing One

using System;
using System.Diagnostics;

namespace TestMe
{
    public class Greetings
    {
        public static string GetText()
        {
            Trace.WriteLine("Greetings.GetText called");
            return "Hello, World";
        }
    }
}
Listing Two
using NUnit.Framework;
using TestMe;

namespace Test
{
    [TestFixture()]
    public class MyTests
    {
        [SetUp()]
        public void Init()
        {
            // pre-test preparation here
        }

        [TearDown()]
        public void Deinit()
        {
            // post-test clean up
        }

        [Test()]
        public void GreetingsTest()
        {
            Assertion.AssertEquals("Invalid text returned",
                "Hello, World", Greetings.GetText());
        }
    }
}
Listing Three
using System;
using System.Diagnostics;
using NUnit.Framework;
using TestMe;

namespace Test
{
    public class Listener : TraceListener
    {
        public override void Write(string message)
        {
            Console.Write(message);
        }
        public override void WriteLine(string message)
        {
            Console.WriteLine(message);
        }
    }

    [TestFixture()]
    public class MyTests
    {
        private static Listener listener = new Listener();

        [SetUp()]
        public void Init()
        {
            if (!Trace.Listeners.Contains(listener))
                Trace.Listeners.Add(listener);
        }

        [TearDown()]
        public void Deinit()
        {
            Trace.Listeners.Remove(listener);
        }

        [Test()]
        public void GreetingsTest()
        {
            Assertion.AssertEquals("Invalid text returned",
                "Hello, World", Greetings.GetText());
        }
    }
}
FWIW, php7 is about 5x faster than Python on the spectral norm benchmark.
There are two major reasons:
* PHP uses scalar types for float and int
* PHP uses type-specialized bytecode (PHP 8 will use a JIT, but PHP 7 doesn't)
Source code is here:
php:
Python:
The hottest function is eval_A():
```
def eval_A(i, j): # i and j are int.
ij = i + j
return ij * (ij + 1) // 2 + i + 1
```
And its bytecode:
```
Disassembly of <code object eval_A at 0x107fd8500, file "x.py", line 1>:
2 0 LOAD_FAST 0 (i)
2 LOAD_FAST 1 (j)
4 BINARY_ADD
6 STORE_FAST 2 (ij)
3 8 LOAD_FAST 2 (ij)
10 LOAD_FAST 2 (ij)
12 LOAD_CONST 1 (1)
14 BINARY_ADD
16 BINARY_MULTIPLY
18 LOAD_CONST 2 (2)
20 BINARY_FLOOR_DIVIDE
22 LOAD_FAST 0 (i)
24 BINARY_ADD
26 LOAD_CONST 1 (1)
28 BINARY_ADD
30 RETURN_VALUE
```
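For reference, a disassembly like the one above can be reproduced with the standard `dis` module (a minimal sketch, not part of the original report; the exact opcodes vary by CPython version):

```python
import dis

def eval_A(i, j):  # i and j are int
    ij = i + j
    return ij * (ij + 1) // 2 + i + 1

# Show the bytecode CPython compiles the function to.
dis.dis(eval_A)

print(eval_A(0, 0))  # → 1
```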
My thoughts:
* bytecode specialized for `int op int` would help some.
* there is a lot of incref/decref overhead.
* multi-operand bytecodes (e.g. BINARY_ADD_FAST_FAST, BINARY_ADD_FAST_CONST, etc.) would reduce the refcount overhead.
Getting Started with PHP Extension Development via Zephir.
Installation
To build a PHP extension and use Zephir you’ll need the following:
- gcc >= 4.x/clang >= 3.x/vc++ 9
- gnu make 3.81 or later
- php development headers and tools
- re2c 0.13 or later
- json-c
The installation instructions vary for every platform, so I trust you’ll know how to obtain them if you’re reading an article with a topic as advanced as this one. For the record – I recommend using a Linux based system for developing Zephir apps.
Once you obtain all the prerequisite software, download the latest version of Zephir from Github, then run the Zephir installer, like so:
git clone cd zephir && ./install -c
It should install automatically – try typing zephir help. If it is not working, add the "bin" directory to your PATH environment variable. In my case that is /home/duythien/app/zephir/bin, like so:
vi $HOME/.bash_profile
Append the following export command:
export PATH=$PATH:/home/duythien/app/zephir/bin
To verify the new path settings and test the installation, enter:
echo $PATH zephir help
You can find out about Zephir basics and syntax, as well as its typing system and see some demo scripts over at their website.
Programming with Zephir
Now we’ll use Zephir to re-work a mathematical equation that C and Fortran handle very well. The example is rather esoteric and won’t be explained into much detail, except to demonstrate the power of Zephir.
Time-Dependent Schrodinger Equation solved with Finite Difference
The time-dependent Schrödinger equation can be solved with both implicit (large matrix) and explicit (leapfrog) methods. I’ll use the explicit method.
Firstly, issue the following command to create the extension’s skeleton:
zephir init myapp
When this command completes, a directory called “myapp” is created on the current working directory. This looks like:
myapp/ |-----ext/ |-----myapp/ |-----config.json
Inside the “myapp” folder, create a file called “quantum.zep” (which will give us the
Myapp\Quantum namespace). Copy paste the following code inside:
namespace Myapp;

class Quantum
{
    const PI = 3.14159265358979323846;
    const MAX = 751;

    public function Harmos(double x)
    {
        int i, j, n;
        var psr, psi, p2, v, paramater, fp;
        double dt, dx, k0, item_psr, item_psi;

        let dx = 0.02,
            k0 = 3.0 * Myapp\Quantum::PI,
            dt = dx * dx / 4.0;
        let paramater = [dx, k0, dt, x];
        let i = 0, psr = [], psi = [], p2 = [], v = [], fp = [];

        let fp = fopen("harmos.txt", "w");
        if (!fp) {
            return false;
        }

        while i <= Myapp\Quantum::MAX {
            let item_psi = sin(k0 * x) / exp(x * x * 2.0),
                item_psr = cos(k0 * x) / exp(x * x * 2.0);
            let psr[i] = [item_psr],
                psi[i] = [item_psi],
                v[i] = [5.0 * x * x],
                x = x + dx,
                i++;
        }

        var tmp;
        let i = 1, j = 1, tmp = [2.0];

        for n in range(0, 20000) {
            for i in range(1, Myapp\Quantum::MAX - 1) {
                let psr[i][1] = psr[i][0] - paramater[2] * (psi[i + 1][0] + psi[i - 1][0] - tmp[0] * psi[i][0]) / (paramater[0] * paramater[0]) + paramater[2] * v[i][0] * psi[i][0],
                    p2[i] = psr[i][0] * psr[i][1] + psi[i][0] * psi[i][0];
            }
            for j in range(1, Myapp\Quantum::MAX - 1) {
                let psr[0][1] = 0, psr[Myapp\Quantum::MAX][1] = 0;
                let psi[j][1] = psi[j][0] + paramater[2] * (psr[j + 1][1] + psr[j - 1][1] - tmp[0] * psr[j][1]) / (paramater[0] * paramater[0]) - paramater[2] * v[j][0] * psr[j][1];
            }
            // output split
            if (n == 0 || n % 2000 == 0) {
                let i = 1;
                while i < Myapp\Quantum::MAX - 1 {
                    fprintf(fp, "%16.8lf %16.8lf %16.8lf \n", i * dx, n * dt, p2[i]);
                    let i = i + 10;
                }
                fprintf(fp, "\n");
            }
            // change new->old
            let j = 1;
            while j < Myapp\Quantum::MAX - 1 {
                let psi[j][0] = psi[j][1],
                    psr[j][0] = psr[j][1];
                let j++;
            }
        }
        return true;
    }
}
We’ve used many PHP functions such as fopen(), sin(), fprintf(), etc – feel free to study the syntax. I’ll also give you one more example. In the process of working with the Phalcon PHP framework, the function Phalcon\Tag::friendlyTitle() is invalid if you’re working in Vietnamese or German. This example, far simpler than the equation above, creates the file
normalizeChars.zep. Insert the following code into the file:
namespace Myapp;

class NormalizeChars
{
    public function trans(var s)
    {
        var replace;
        let replace = [
            "ế" : "e", "ề" : "e", "ể" : "e", "ễ" : "e", "ệ" : "e",
            //--------------------------------E^
            "Ế" : "e", "Ề" : "e", "Ể" : "e", "Ễ" : "e", "Ệ" : "e",
            //--------------------------------e
            "é" : "e", "è" : "e", "ẻ" : "e", "ẽ" : "e", "ẹ" : "e", "ê" : "e",
            //--------------------------------E
            "É" : "e", "È" : "e", "Ẻ" : "e", "Ẽ" : "e", "Ẹ" : "e", "Ê" : "e",
            //--------------------------------i
            "í" : "i", "ì" : "i", "ỉ" : "i", "ĩ" : "i", "ị" : "i",
            //--------------------------------I
            "Í" : "i", "Ì" : "i", "Ỉ" : "i", "Ĩ" : "i", "Ị" : "i",
            //--------------------------------o^
            "ố" : "o", "ồ" : "o", "ổ" : "o", "ỗ" : "o", "ộ" : "o",
            //--------------------------------O^
            "Ố" : "o",
            //--------------------------------y
            "ý" : "y", "ỳ" : "y", "ỷ" : "y", "ỹ" : "y", "ỵ" : "y",
            //--------------------------------Y
            "Ý" : "y", "Ỳ" : "y", "Ỷ" : "y", "Ỹ" : "y", "Ỵ" : "y",
            //--------------------------------DD
            "Đ" : "d", "đ" : "d",
            "Ụ" : "u", "Ư" : "u"
        ];
        return strtr(s, replace);
    }
}
Now, we need to tell Zephir that our project must be compiled and the extension generated:
cd myapp zephir build
The first time it is run, a number of internal commands are executed, producing the necessary code and configuration to export this class as a PHP extension. If everything goes well you will see the following message at the end of the output:
Compiling…
Installing…
Extension installed!
Add extension=myapp.so to your php.ini
Don’t forget to restart your web server
Note that since Zephir is still in its infancy, it's possible to run into bugs and problems. The first time I tried to compile this it didn't work. I tried the following commands and eventually got it to work:
zephir compile cd ext/ phpize ./configure make && sudo make install
The last command will install the module in the PHP extensions folder (in my case:
/usr/lib/php5/20121212/). The final step is to add this extension to your php.ini by adding the following line:
extension=/usr/lib/php5/20121212/myapp.so //or extension=myapp.so
Restart Apache, and we’re done.
Test the code
Now, create a new file called zephir.php :
<?php
$flow = new Myapp\Quantum();
$ok = $flow->Harmos(-7.5);
if ($ok == true) {
    echo "Write data Harmos success <br>";
}
$normalize = new Myapp\NormalizeChars();
echo $normalize->trans("Chào mừng bạn đến Sitepoint");
Finish up by visiting your zephir.php page. It should look similar to the following output:
If you're mathematically inclined, install gnuplot and run it with the .txt output we got from our Zephir extension:
gnuplot
splot './harmos.txt' w l
This command will draw the image using the data file harmos.txt, which will look like this, proving our equation was calculated properly.
Protected code
In some cases, compilation does not significantly improve performance, perhaps because the bottleneck is in the I/O-bound portion of the application (quite likely) rather than in computation or memory. However, compiling code can also bring a level of intellectual-property protection to your application. When producing native binaries with Zephir, you can hide the code from users or customers – Zephir allows you to write closed-source PHP applications.
Conclusion
This article gave a basic guide on how to create extensions in Zephir. Remember, Zephir wasn’t created to replace PHP or C, but as a complement to them, allowing developers to venture into code compilation and static typing. Zephir is an attempt to join the best things from the C and PHP worlds and make applications run faster, and as such competes rather directly with HHVM and Hack.
For more information on Zephir check out the online documentation. Did you enjoy this article? Let me know in the comments! | https://www.sitepoint.com/getting-started-php-extension-development-via-zephir/ | CC-MAIN-2019-18 | refinedweb | 1,309 | 61.77 |
In most demos you see that involve Entity Framework Code First and ASP.NET MVC 3, localization is not mentioned. What if you want to make your data multi-language friendly? Here is a simple example: I want the title column to be at most 30 characters in my entity:
[StringLength(30)]
public string Title { get; set; }
If you run the form with this configuration and you enter more than 30 characters, you will get the following message: The field Title must be a string with a maximum length of 30. This may be OK for some folks, but it does not follow proper English grammar. So, let’s fix the problem. First thing we need is create a resource file. I typically put resources into its own assembly, so I am going to create new class library project, add new item of type Resource, and create new string key Title30Characters and appropriate value:
Now, let’s customize the attribute in the following manner:
[StringLength(30,
ErrorMessageResourceName = "Title30Characters",
ErrorMessageResourceType = typeof(Resource))]
That is it. One important thing to notice is that most attributes in the Data Annotations namespace have the same two parameters for localization: ErrorMessageResourceName and ErrorMessageResourceType.
[StringLength(30,
ErrorMessageResourceName = "Title30Characters",
ErrorMessageResourceType = typeof(Resource))]
[Required(
ErrorMessageResourceName = "TitleRequired",
ErrorMessageResourceType = typeof(Resource))]
[Display(Name = "Title", ResourceType = typeof(Resource))]
public string Title { get; set; }
Above you see the final version of the Title field attributes. I am specifying maximum length, required field, and display name, all coming from resources, specifically the Resource class. One interesting thing to notice is that I am using the same attributes to define messages in ASP.NET MVC and the database structure. The database does not care about the messages I am adding to the StringLength attribute, but it does use the value of 30 to define the size of the field.
Now, here is one more use case: I want to handle parsing errors. For example, I have a Session Date field that maps to a date-time picker control in jQuery. If I enter the text 112121, here is the message that will be generated: The value '112121' is not valid for Session Date. Not friendly. Here is how I can customize these error messages in ASP.NET MVC as well. I can do it in the controller's Create or Update method. In the example below I am using the Create method, testing for errors, and putting in a custom error for the Session Date property:
public ActionResult Create()
{
var view = View(new Session());
view.ViewBag.Action = "Create";
return view;
}
[HttpPost]
public ActionResult Create(Session session)
{
if (ModelState.IsValid)
{
db.Sessions.Add(session);
db.SaveChanges();
return RedirectToAction("Index");
}
else
{
if (ModelState["SessionDate"].Errors.Count == 1)
{
string date = ModelState["SessionDate"].Value.AttemptedValue;
DateTime test;
if (!DateTime.TryParse(date, out test))
{
ModelState["SessionDate"].Errors.Clear();
ModelState["SessionDate"].Errors.Add(new ModelError(Resource.DateInvalidInvalidFormat));
}
}
}
var view = View(session);
view.ViewBag.Action = "Create";
return view;
}
As you can see above, I am checking for errors in ModelState, removing the default error only if my parsing fails on the proposed value, and putting in a custom message from resources.
Alternatively, you can use “buddy classes” or MetadataType attribute to define your metadata in a “buddy” class. I personally do not like this approach, it feels awkward to me.
That is all there is to it.
Thanks.
[Required(ErrorMessageResourceName = "TitleRequired", ErrorMessageResourceType = typeof(Resource))]
I used the above code in my case (WPF). But nothing happens. But if I use
[Required(ErrorMessage="hi")] it works perfectly. Why didn't localization from the resource file work?
The EF code I provided is for ASP.NET MVC. I have not tried to use the same in WPF. It should work in theory. Did you double-check to make sure the resource is available and public for your WPF app? If you need help, please email a sample project, and I will try to take a look. My email address is on the contacts page.
Hi Priya,
I’ve encountered the same problem when trying to use resources defined in other assembly.
I'm currently working on an MVC application with the model in a separate project. At first I also used a separate project for resources only. But it turned out that in that case the DataAnnotations such as Required or MaxLength were not applied in the database. My solution was to move the resources to the project with the model, and it worked like a charm! I suppose this is a kind of "feature" the Entity Framework team should look at :)
If it is also your case (resources in different assembly), try to define resources in the same assembly as your DataAnnotations.
HTH,
tiggris.
This is strange because in the demo I put together, having resources in a separate assembly worked just fine. I think it might be because you do not define your resource's visibility as public. There is a drop-down at the top of the resource editor that allows you to set visibility to public.
Swift - Environment
Local Environment Setup
Swift 4 provides a Playground platform for learning purposes and we are going to set up the same. You need the Xcode software to start your Swift 4 coding in Playground. Once you are comfortable with the concepts of Swift 4, you can use the Xcode IDE for iOS/OS X application development.
To start with, we assume you already have an account on the Apple Developer website. Once you are logged in, go to the following link − Download for Apple Developers
This will list down a number of software available as follows −
Now select Xcode and download it by clicking on the given link near the disc image. After downloading the dmg file, you can install it by simply double-clicking on it and following the given instructions. Finally, drop the Xcode icon into the Application folder.

Now you have Xcode installed on your machine. Next, open Xcode from the Application folder and proceed after accepting the terms and conditions. If everything is fine, you will get the following screen −
Select Get started with a playground option and enter a name for playground and select iOS as platform. Finally, you will get the Playground window as follows −
Following is the code taken from the default Swift 4 Playground Window.
import UIKit var str = "Hello, playground"
If you create the same program for OS X program, then it will include import Cocoa and the program will look like as follows −
import Cocoa var str = "Hello, playground"
When the above program gets loaded, it should display the following result in Playground result area (Right Hand Side).
Hello, playground
Congratulations, you have your Swift 4 programming environment ready and you can proceed with your learning vehicle "Tutorials Point". | https://www.tutorialspoint.com/swift/swift_environment.htm | CC-MAIN-2018-51 | refinedweb | 291 | 57.4 |
08 December 2010 03:45 [Source: ICIS news]
By Malini Hariharan
DUBAI (ICIS)--Global monoethylene glycol (MEG) markets are likely to remain robust in 2011, supported by strong demand from China.
“If the dynamics stay intact, there is nothing that we see now [to show] that 2011 will not be as good as 2010. And that is across the chain; our optimism is dependent on our customers remaining profitable,” Ramesh Ramachandran, CEO and president of MEGlobal, told ICIS.
This year has been a surprise for producers of MEG, a commonly used intermediate in the production of polyester fibres that go into clothes, as a widely expected industry downturn did not materialise. Prices and margins recovered throughout the year, supported by the strength of the polyester industry - especially in China.
Global MEG demand was expected to rise by around 10% to 21m tonnes by the end of this year, driven mainly by Chinese demand, which was expected to hit 9m tonnes, up from around 7m tonnes last year.
“You may debate about when this will happen, but it is only a matter of time,” Ramachandran said on the sidelines of the 5th Gulf Petrochemicals and Chemicals Association (GPCA) forum being held in Dubai, the United Arab Emirates (UAE).
Besides demand, supply-side factors also helped producers in 2010 as operating problems constrained availability.
Ramachandran stressed that while on paper global MEG capacity was in excess of demand, forecasts should not be based on this “simplistic view”.
“Not all capacity runs all the time; our take home message from last year has been that overcapacity does not mean oversupply,” he said.
He cautioned that although MEG was a commodity, one should not assume that it was easy to produce as it involves difficult technology.
“There needs to be an estimate of effective capacity; in our view there is a 10% swing between nameplate capacity and effective capacity over the course of the year,” he added.
With demand in key markets such as
“We need to build; we definitely need to bring economically viable capacity as our customers are growing. We have nothing to announce now, but we are looking,” he said.
The GPCA annual forum runs on 7-9 December.
MEGlobal is a joint venture between Dow Chemical and Petrochemical Industries Co (PIC) of Kuwait that was set up in 2004. | http://www.icis.com/Articles/2010/12/08/9417515/gpca-10-meg-to-remain-strong-in-2011-on-china-demand.html | CC-MAIN-2013-48 | refinedweb | 389 | 53.95 |
Let's say I have the following drawing:
Is it better to use an array or a List?
(The max of Px (p1, p2...) is 10)
If it's possible to do this in an array, can you please give me a quick explanation of how?
Can you re-post the picture?
I'm afraid to click that link.
Can you attach it to the message instead of posting a link?
Thanks!
That would (most likely) be a linked list with some of the elements terminating to null.
...or even a dictionary with Lists as members -- one of which is a linked-list.
I have one more question: is this code able to do the orientation that I've been asked to do?
(I know that this is not the full syntax, but let's say that from line 39 forward it's in the Main)
using System;

namespace ConsoleApplication1
{
    public class Page
    {
        private string name;
        private Page[] links = new Page[0];

        public Page(string name)
        {
            this.name = name;
        }

        public string GetName()
        {
            return this.name;
        }

        public void AddLink(Page link)
        {
            if (this.links.Length < 10)
            {
                Page[] a = new Page[this.links.Length + 1];
                for (int i = 0; i < this.links.Length; i++)
                {
                    a[i] = this.links[i];
                }
                a[a.Length - 1] = link;
                this.links = a;
            }
        }

        public Page[] GetAllLinks()
        {
            return this.links;
        }
    }
}

Page index = new Page("Index");
Page p1 = new Page("p1");
Page p2 = new Page("p2");
Page p3 = new Page("p3");
Page p4 = new Page("p4");
Page p5 = new Page("p5");
Page p6 = new Page("p6");

index.AddLink(p1);
p1.AddLink(p4);
index.AddLink(p2);
index.AddLink(p3);
p3.AddLink(p5);
p3.AddLink(p6);
p6.AddLink(index);
It might work, but is there a better way to do it?
Hi, I have the following little programme which prints out some numbers from an array I declared at the start, which holds 5 values. Once the 5 values are printed, the programme catches the exception and prints out the message, as there are no more numbers to print!
Here is the code.
Code :
package ClientServer; public class GoTooFar { public static void main(String[] args) { int dan[] = {26, 42, 55, 67, 43}; try { for (int counter = 0; counter <= dan.length; counter++) { System.out.println(dan[counter]); } } catch (ArrayIndexOutOfBoundsException e) { System.out.println("Youve gone too far"); } } }
As you can see, the numbers seem to be printed out all at the same time (although scientifically I guess there is a fraction of a second between each one).
My question is how can i make there be 1 second in between printing out each digit? | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/11566-printing-out-numbers-array-printingthethread.html | CC-MAIN-2014-52 | refinedweb | 141 | 63.19 |
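One common way to do this (a sketch, not from the thread) is Thread.sleep(), which pauses the current thread for a given number of milliseconds and can throw InterruptedException, so it needs a try/catch:

```java
public class GoSlowly {

    // Print each element, pausing delayMillis between prints.
    static void printSlowly(int[] values, long delayMillis) {
        for (int i = 0; i < values.length; i++) {
            System.out.println(values[i]);
            try {
                Thread.sleep(delayMillis); // pause before the next number
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag and stop
                return;
            }
        }
    }

    public static void main(String[] args) {
        int[] dan = {26, 42, 55, 67, 43};
        printSlowly(dan, 1000); // one second between numbers
    }
}
```

Note the loop condition uses < rather than <=, so no exception is needed just to end the loop.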
Conor MacNeill wrote:
>> 3. If you don't need namespaces, you'll not have to use them. I.e. if you
>> have a simple project and use few libs - you can still use the simpler
>> syntax.
>
> what is the "simpler syntax"? You mean no namespace qualifiers? How do you
> see both these mechanisms working at once? - defining all tasks in the
> default namespace and ignoring collisions?
The current syntax, with no namespaces. Just define the tasks with
<taskdef resource="my/antlib/package/ant.properties" />
or some new
<antlib name="my.antlib.package" />
The behavior will be to load the tasks just like today, and eventually
generate errors on name conflicts ( with a message that if you have
conflicts you must use namespaces - you are no longer in the "simple"
case ).
> This means the build file depends on the user's config to some extent. One
> user has no colision and uses <deploy> whilst another does and get the
> wrong <deploy> defined first. Not sure this is good.
Yes, if the user has a simple config with non-conflicting antlibs - he'll
have a simpler build file. He'll still declare what antlibs it uses, and
if we detect a conflict we can inform him that the simple times have ended
and he must use namespaces ( at least for the antlibs with conflicts ).
>>.
>>
>
> We need to have some mechanism to match the URI to a jar, whatever the URI
> format is. At this stage my pref would be to use a standard HTTP URL from
> which the library may be downloaded (not necessarily directly).
That's a good option too.
What about combining the 2 variants. My proposal was to use the URI to
identify the package, and use this to load the descriptor ( assuming
the jar is available and a catalog can help the download ).
We can use the URL, and require that each jar includes a
META-INF/ant/ENCODED_URL resource for the descriptor. This way you
have a simple way to match the URL against a .properties ( or .xml )
descriptor, without extra config ( catalog, etc ) or net access.
Costin
--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org> | https://mail-archives.eu.apache.org/mod_mbox/ant-dev/200301.mbox/%3Cav9k8t$o9b$1@main.gmane.org%3E | CC-MAIN-2021-25 | refinedweb | 374 | 75.1 |
I'm buying: vinegar, Worcestershire sauce, brown sugar and chili sauce.
What are we making?
The flowers role is easy - I'm trying shamelessly to impress Shira.
--Ben
Update: The answer: Sloppy Joe's. And boy was it good.
Anyone who's ever tried to find a domain name for a website they want to launch has known the frustration of trying to find an available name to purchase. It sometimes seems like all the good ones are gone.
Gavin pointed me to a really impressive site that helps overcome a bad case of namers-block. Check it out: bustaname.com.
The idea is simple: give Bustname.com a few words, and it will tell you which combinations of those words are available in the form of a domain name. Bustname.com also lets you easily add synonyms and tune other options like whether it should find .com domains only, or branch out to other domains.
The site makes use of Web 2.0 style AJAX - and it does so beautifully. The interaction with the site is nothing short of remarkable. This is yet another site to review just for the UI alone, not to mention the cool service it offers.
Let's look at an example of how it might be used. Suppose I've messed up with Shira (I know, unlikely, but it occasionally happens) and I'd like to put up a website that's sort of the geek's version of buying flowers. I need a domain name, but which one to choose? According to Bustaname.com, here are some options:
veryregrettable.com
wronghusband.com
iamregrettable.com
wrongiam.com
sorrywrong.com
wrongsorry.com
The site also offers a quick check facility for names - good news - wrong-and-sorry.com is also available.
Godaddy - you would be insane not to throw piles of money at this guy to buy his site. You'll sell way more domains if people have a really easy interface for finding them.
Hmmm, that apology website doesn't seem like such a bad idea now...
If all goes well, I should be seeing a lot more of Greg as he's just
moved to the area. He's thinking about settling down in Maryland, so I
have to talk him out of that decision.
Welcome Greg - enjoy the humidity and traffic, there's plenty to go
around.
--Ben
Corollary: If you take additional time to store said food container in
its own secure plastic bag, it will under no circumstances leak.
--Ben
For the record, here's a sample photo I took while playing with the T-mobile Wing.
The quality seems quite reasonable - far better than my Sidekick, but that's not saying much.
As much as I kvetch about the Wing, it does have pretty much every feature I could ever ask for. Including a nice camera - a must have for mobile blogging.
Good
I spend enough time around a keyboard that I've found once I start typing a familiar word, my brain goes into a sort of auto complete mode, typing out the rest of what I normally type. It's basically a muscle memory thing - I may know the word I want to type, but my fingers might type something else.
For example, if I've been working on a schema for the last few days, and I want to type a message about scheme, I find myself constantly typing: schema <backspace> e throughout the message.
Surely this is a common brain thing right?
Boy did it nearly get me in a heap of trouble tonight.
Shira's on travel, and I was IM'ing her a good night message. I wanted to say miss you tons - but there's a problem. I work with Tonya, and send her quite a few e-mails.
So what do you think my brain auto-completed: I miss you ton... to? I came awfully close to typing tonya, but ended up instead with tonys. Whew, that was a close one.
Shira knew exactly what I did. I really need to find an off switch to this feature before I really get myself in hot water.
Here's a really handy list of Postgres tips. It takes just a few minutes to browse through it, but has quite a few gems to offer.
Here's an example:
Ditch All Output Of Query by xinu on Jan 12, 2005 10:25 AM
If you want to run a query or a function and ditch the output, you can do something like this:
xinu=# select my_void_function() \g /dev/null
So, if you want to redirect the output of a command you can say: \g /path/to/a/file instead of the usual ;.
I just attempted to go shopping for a new cell phone. I wanted to check out the Wing. I know it's a product of the Evil Empire, but it has all the bells and whistles. Not to mention, my Sidekick lasted less than 9 hours today before running out of battery juice. I'm getting desperate and not loving Danger.
So I headed over to a T-mobile store in the Balston Mall.
I could quickly tell they had a Wing in stock. Whew, didn't waste my time getting out here.
Then I asked to handle one. You know, before I spend like $400 on a phone I'd like to use it.
The rep at the counter explained they didn't have a live one. I could play with a dummy phone. Excuse me? What about the live one secured to the counter in a Fort-Knox grade security device? He can't give me access to that one - against the rules.
I had no choice, I thanked the guy and walked away.
A cell phone is an incredibly tactile device, not to mention a complex one. How can I choose to buy one if I can't play with it? I need to feel its weight, notice the delay in the screen flip and shoot some video. What an absolutely absurd policy.
That's like going to a Toyota showroom and not being allowed to drive a Prius. Sure, you can sit in the one parked on the sales floor. But drive it? Please, that's against the rules.
It didn't help that when I asked him about the phone in general he said I needed to be more specific. Then I asked him if people had any complaints about it? Nope, not that he knew of.
This is a $400 computer we are talking about here - not a $30 cordless phone. Argh. I'm officially unimpressed with this shopping experience.
--Ben
About 10 minutes before we had to leave for Aaron and Michelle's wedding, I realized I had left the bow tie for my tuxedo on the floor of my bedroom. One small catch, my bedroom floor was 395 miles away from parents house, where I was getting dressed.
What to do?
I scrounged around looking for a bow tie - but had no luck. Shira's mom couldn't dig up one either. We didn't have time to stop at a store. Things were looking pretty grim.
Then I got a text message from my mom - the problem had been solved. She had one for me.
I arrived at the wedding, and sure enough, my mom handed me a black bow tie.
How did she pull this miracle off? She asked a waitress if she, or one of her fellow wait staffers had an extra. Sure enough, they did.
Brilliant! This is a hack I'm definitely going to remember. Just more proof - moms can fix anything.
While Shira and I were in Rochester this weekend, the topic of the blog came up. My Mom, Dad, Brother and Shira all agree - the Scratch Pad, my Twitter feed, has to go.
Obviously, I thought it was a clever idea. I thought, hey, it'll give me yet another way to blog. Apparently, I was wrong - very wrong.
Opinions on why the Scratch Pad was so bad range from the fact that it is just noise, to the fact that it makes me look less intelligent. Ouch.
It's interesting to note that while I've blogged about many topics, from relationship advice to Python-style generators, I've never had any opposition to any content from my family. But the Scratch Pad - yuck, useless.
The whole dislike of the twitter feed reminds me of a comment from Wired Magazine on the topic of why the Star Wars Prequels were such a bad idea:
Patton Oswalt: The prequels are like offering someone ice cream, then giving them a bag of rock salt and saying, "Eventually, you can turn this into ice cream." Star Wars is ice cream. Don't give us rock salt.
The Scratch Pad was the rock salt. Nobody cares.
Lesson learned. It's coming down in just a few minutes...
For those who enjoyed the Scratch Pad, you can still follow it on Twitter.
Yesterday Shira and I had the pleasure of attending Aaron and Michelle's wedding. We had an awesome time - the ceremony was beautiful and it was great seeing old friends. It didn't hurt that they served two kinds of cake. Mmmm...cake.
We wish them all the best - they should continue to know the kind of joy we all shared last night. Mazel Tov!
...because:
Luckily, I have just as many things to be thankful for.
Instead of walking back to my desk while toasting, and returning to find burnt remnants of lunch - I'm bringing my desk to the toaster.
As long as I don't melt any plastic, I should be good.
--Ben
This morning, on my way back to DC, I was happily blogging on the airplane. I had a bunch of messages queued and ready to be sent when we landed.
As we started our descent, I popped out the SD card I had inserted to upload a few photos from. Apparently, my timing was bad because the phone locked up.
When I rebooted, the device had reset itself. Pretty much everything was gone: contacts, e-mail, etc.
The good news is that the vast majority of my settings and data are on the server. So, when the device finally syncs, things should return to normal. I'll have to recreate the posts from earlier, and I'm without e-mail till the sync is complete.
However, if all goes well, things should be back to normal soon'ish.
Annoyances like this are custom made for Mondays. Argh.
--Ben
Nope, I've gone to Rochester.
OK, it's not *that* cold here - it's a pleasant 76 ish degrees. Compared
to DC, though, this feels like fall. Yesterday in DC, for instance, it
felt like 96 degrees at 8:00 pm.
We're in town for under 24 hours. We are attending a friend's wedding,
and tomorrow AM it's back to DC.
--Ben
Normally, I have no interest in Sudoku, but I've found it's a wonderful
way to pass time on the plane as we taxi to and from the gate.
Usually, I get part way through the puzzle - but not this time. I so
nailed the easy one presented in the in-flight magazine.
I'm not exactly sure why Sudoku is such a hit on the plane. Obviously,
there's the practical matters. It requires no electricity and doesn't
require the same mental investment as reading or working.
However, I think there's more to it than the pragmatic reasons. When I
first started playing, I would manually scan each square looking for a
match. Now, I find myself in a near Flow like state where I'm just
scanning the page, and poof, something clicks. I then get a little
mental boost at every correct answer.
So, basically, Sudoku is a drug. The best kind of drug too, as it's
using the chemicals I already have in my body.
Mmmmm....instant gratification, feels good.
--Ben
Another gem from my Mother-in-Law:
Armed with a soldering iron and energy drinks, a teenager who will soon be a freshman at Rochester Institute of Technology has developed a way to make the Apple iPhone available to a much wider audience.
From the Rochester Democrat & Chronicle
Let's see, as I was heading off to SUNY Buffalo for my freshman year of school, I was more concerned with picking classes, buying books and figuring out how to turn my vax username of VP24NYVM into an e-mail address (yes, that was my username - kids today have it so easy, what with their intelligible e-mail addresses and such).
Hacking the latest technology at the hardware level just wasn't on my radar.
If I were Apple, or better yet, Microsoft, I'd start wooing this kid today.
I printed out directions from Yahoo, and made sure to visit GPS Visualizer () to get the latitude and longitude of the store's address. I then punched it into the GPS when I got to the car.
The Yahoo maps got me to the area, but it was really the Garmin that allowed me to find the place. The lat/long steered me to the very front of the store (hiding in the back corner of a shopping center).
Sure, the eTrex is as basic as it gets. But I love the fact that it sips batteries, so it's always at the ready. It's no replacement for a real car GPS that provides verbal instructions, but it makes a terrific durable backup. Its basic map gave me just the info I needed to see I was headed in the right direction.
If you don't own a GPS, it might be a fun one to try on the cheap.
----
*Unexpected in the sense that Shira only told me a few times this week and I kept forgetting about it
--Ben
Here's another great Paul Graham article, on the topic of programming:
As usual, it's more propaganda than how-to, but it's certainly a good, fun, read. If you are a programmer, you'll either love or hate his opinion. I happen to enjoy it immensely.
My big question is, how much programming does Paul do these days? I wonder if he's in danger of talking the same talk as always, even though he's not in the trenches anymore.
Hope he's still hacking lisp late into the night.
--Ben
Update: I'm not particularly in love with my quick synopsis of this article. I think it was a bit too hasty. Paul actually does layout quite a few concrete things you can and should do to write great software. I'm also a believer in the fact that a few, expert programmers, can far outdo a large group of mediocre programmers. And, naturally, I agree with this bottom up (read: lisp friendly) approach.
One point where I might depart from Paul is that it isn't an organization vs. hacker issue that keeps great software from being written, but The Dip. It's easy to blame organizations, because, well, they are often to blame. But, even in an ideal environment, a programmer needs to cope with The Dip.
Rule #1 of using the office toaster: don't put something in and walk away. If you do, you'll end up with charcoal.
Rule #2 of using the office toaster: don't stand around watching the toaster, it'll never heat up with you staring at it. If you break this rule, you'll end up with cold food.
I opted to break rule #1. Mmmmm....carbon. Just like Ma used to have a reputation for making.
--Ben
Yesterday, a battery I ordered off eBay to fix my battery woes finally arrived.
At the moment, I'm at 90% battery life from charging all night. Hmm, it's been unplugged for just a few hours - that seems kinda unimpressive, no?
Is there some sort of ritual I should be going through to condition the battery? Perhaps some voodoo or a misheberach?
For now, I'll let it fully drain and then fully charge it a few times.
--Ben
Yesterday, as Beamer walked into my office, I decided to show him who was the real gun slinger around here. I pulled my cell from my holster.
The result, he had his out faster, and mine was on the ground in three pieces. Defeated. He even had his at the ready to snap this photo.
Beaten at my own game.
--Ben
I have this friend, we'll call him Gen. Gen is a real slob and keeps spilling food on himself. Of late, he keeps getting bits of candy, mostly chocolate on himself and his surroundings.
How can he get the stains out?
So far, he's tried: rubbing really hard, rubbing really hard with a damp cloth, and anger. No luck so far.
Please, for Gen's sake - how can he remove these blemishes.
I, uh, I mean he, will be forever thankful for such a solution.
--Ben
Recently, the topic of adding Python style generators to both Java and Scheme has come up. It's interesting to compare the two approaches taken.
Before we jump into the details - what the heck is a generator anyway?
Generators are an approach to encapsulated iteration, much like a java.util.Iterator object is. Here's a trivial Python example:
from __future__ import generators

def foo():
    print 'hello'
    yield 1
    print 'world'
    yield 2
Calling foo above will return a generator back. You can then call next() on that generator (much like an Iterator in Java) to return the values that are associated with the yield statement. Think of yield as being the same as return, with the benefit that calling next() will resume there again.
Why add generators to your language, especially if it already has iterators? Generators simplify your life: rather than manually tracking which value to return next, you have the interpreter handle all the details. This is much like garbage collection - you could manage memory manually, but isn't it nice to have the interpreter do all the book keeping for you. It is.
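To make the hand-off concrete, here's the same example in Python 3 spelling (the snippet above is Python 2, where print is a statement and the __future__ import was needed) showing what the next() calls actually do:

```python
def foo():
    print('hello')
    yield 1
    print('world')
    yield 2

gen = foo()            # calling foo() runs nothing yet; it just builds a generator
first = next(gen)      # executes up to the first yield, printing "hello"
second = next(gen)     # resumes after the first yield, printing "world"
print(first, second)   # -> 1 2

try:
    next(gen)          # the function body is exhausted
except StopIteration:
    print('done')
```

The interpreter is doing all the bookkeeping: the generator remembers exactly where it paused, which is the whole point.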
Here's a description of the technique used by infomancers to add generators to Java:
It works by using Java 1.5's facility for hooking user-defined class inspectors/transformers into the classloader, and rewriting the bytecode of a yieldNextCore() method containing calls to yieldReturn(foo) so that its local variables are promoted to members of an anonymous inner class, which also maintains the state of the iterator/generator.
Wow, that's slick. Who would have thought to use user-defined class inspectors/transformers to rewrite bytecode on the fly?
The Scheme approach was written iteratively, starting off by using the primitive call/cc operator and finishing up by using control.ss and a syntax-tweaking macro. Here's the final implementation:
(require (lib "control.ss"))

(define-syntax (define/y stx)
  (syntax-case stx ()
    [(_ (name arg ...) body ...)
     (with-syntax ((yield-name (datum->syntax-object stx 'yield)))
       (syntax
        (define (name arg ...)
          (define (yield-name x)
            (control resume-here
                     (set! name
                           (lambda ()
                             (prompt (resume-here 'dummy))))
                     x))
          (prompt body ...))))]))
Here's a sample Scheme usage:

(define/y (step)
  (display "hello\n")
  (yield 1)
  (display "world\n")
  (yield 2)
  'finished)
Here are the lessons I took away from this...
In Java, where there's a will, there's a way. It's pretty dang impressive that Java allows you to re-write bytecode on the fly. It may be a relatively crude way to extend the language, but it works.
In Scheme, mere mortals can add experimental concepts to the language. In about the same space it takes to describe the Java solution in English, you can actually write it in Scheme. The implementation doesn't require any low level hacking - just understanding a particular library of control operators, or, if you prefer the basics of how call/cc works. This lowers the bar for adding new language constructs and means that you can bend the language to your project's needs, instead of the other way around.
Scheme's macros allow you to make the new concept blend seamlessly with the language. Comparing the Python and Scheme samples, there isn't a whole lot of difference. Not only can you add new concepts to your language, but you can make them convenient for the programmers who will be using them. Again, the language bends to project, not vice versa.
What concept do you wish your programming language had? Have you tried adding it in 20 lines of Scheme code?
Beamer's got a new phone - but more importantly, he's got a new hip holster.
He demonstrated his quick draw skills. He's good, and has obviously been practicing in front of a mirror.
Surely, he will now be irresistible to the ladies. I stand in awe and will have to do some practicing later tonight myself.
--Ben
Try this with your significant other...
I'm not exactly sure why the above is a good idea (I think it has to do with both showing effort and a bit of mystery), but I'm pretty sure it should work. If it doesn't, there's always plan B.
Here's a sample:
Fuven,
Lbh ner zl gehr ybir. V'z fbeel gung V'z fhpu n cnva va gur ohgg naq gung V fcraq fb zhpu gvzr oybttvat.
Yvsr jvgubhg lbh vf n yvxr n freire jvgubhg ENZ.
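For the curious, the note above is ROT13-encoded. A quick sketch using Python's built-in rot_13 codec (standard library, nothing project-specific) — decoding just the salutation:

```python
import codecs

greeting = "Fuven,"
decoded = codecs.decode(greeting, "rot_13")
print(decoded)  # -> Shira,

# ROT13 is its own inverse: applying it twice round-trips the text,
# which is what makes it handy for casual obfuscation.
assert codecs.encode(decoded, "rot_13") == greeting
```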
Depending on the fallout from this post, this may very well be my one and only Romantic Tip Of The Day post...ever
Be prepared to appreciate what you meet
-Fremen saying
Here's an entire list of quotes from the book - many of them quite cool. It's like reading a novel with chunks of Pirkei Avot dropped in between the chapters.
Shira runs a tight ship. She's yet to make me walk the plank, but we aren't docked yet.
--Ben
I'm happy to report that to date Shira has not gotten sea sick. She's been a total trooper. It's even been a bit choppy and she's holding up.
Justin and I helped out Dave with tacking, and Jenna and Shira were spotters.
This whole sailing thing is quite relaxing - throw in a few beers and some Jimmy Buffet music and I could get used to this.
--Ben
This is just too cool!
--Ben
This whole standing at the front of a rocking boat and blogging business is just a bit tricky.
Time to walk back to a more stable part of the boat.
--Ben
Man, this is fun! I couldn't resist having a Titanic moment before heading out to the Bay.
Sure, it may not look like much...but it's the crew that matters.
--Ben
Even Wikipedia confirms this:
Q: Why is this interesting?
A: Does everything I post here have to be interesting? It just is.
--Ben
It's probably an urban legend - but I'll be damned if I'm going to let a
bunch of defenseless fish and birds be strangled by this
6-pack 8-pack ring.
What an odd habit to have, that I feel responsible for cutting these up.
Do I take the bus to work, make sure I recycle at every possible chance
or bring my own bags to the super market? Nope, but I certainly won't
let a Coke
6 pack 8-pack ring pass me by.
Why is that? Probably marketing...
--Ben
I'm a big fan of Bugzilla and couldn't agree more with Joel: a bug tracking system is an absolutely essential tool for a successful dev team.
Where Bugzilla seems to be lacking the most is on the reporting side. I can cut them a bit of slack on this, as reporting needs are going to differ from team to team.
For a while now, my team has been tracking some items manually both in Bugzilla and on a Wiki page. Last week, I finally got fed up enough to do something about it.
So, this morning, Kelly and I decided to peek behind the curtain and actually look into querying Bugzilla's database directly to get the tidbits of info we needed from it.
I'm not quite sure what I was expecting - some sort of hideous, hacked up mess or something. But I was completely wrong. The Bugzilla MySQL database is pretty much exactly what you'd expect. It has a bugs table, a products table, a components table and lots of other tables, exactly where you'd expect them. They are linked together by perfectly sane foreign keys.
In just a few minutes we had our first query written in PHP, and after an hour (with at least 10 minutes before the report was due), we had a working replacement for the report we had been working on manually.
Bugzilla even has a feature which seems custom made for querying stuff behind its back - keywords. Keywords are basically flickr/delicious style tags. You can add an arbitrary number of them to any bug report and then query against them later. Want to see all bugs that involved a problem with deployments - just tag them deployment and query for that after the fact. (Note: Tonya does our deployments, so we don't have bugs in this category.)
As an example, below is a query which will show all bug reports closed out in the last 7 days which have the tag foo.
SELECT b.bug_id, b.priority, b.resolution, b.short_desc,
       p.name as product, c.name as component, u.login_name as owner
  FROM bugs b, components c, products p, keywords k, keyworddefs d, profiles u
 WHERE b.bug_id = k.bug_id
   AND k.keywordid = d.id
   AND d.name = 'foo'
   AND b.product_id = p.id
   AND b.component_id = c.id
   AND (b.bug_status = 'CLOSED' OR b.resolution <> '')
   AND b.assigned_to = u.userid
   AND b.lastdiffed > (now() - interval 7 day)
 ORDER BY b.lastdiffed DESC
I just love the feeling of replacing manual effort with a script.
I'm a big believer in moving people around for efficient communications. If two people have to work closely on something, even if only for a week or two, then, I think they should sit side by side.
As a result of this philosophy, we had yet another team office shuffle. My final placement is decent - I can see the door, and passers by can't see my monitor. Though I have folks sitting behind me.
With my wide-angle mirror, though, I can at least get a glimpse of them back there. Theoretically, this should avoid a heart attack if they tap me on the shoulder while I'm deep in flow.
Mostly I needed the mirror because they were laughing behind me and I needed to be sure they weren't also gesturing at me at the same time.
--Ben
Note to self: don't drop your mostly empty, mostly sealed, Tupperware container of Corn Chowder soup in your briefcase.
Unless of course you like the idea of your laptop, power cord and brief case having a pleasant Corn Chowder smell.
Luckily, I don't carry much in my briefcase and it's easy to clean.
--Ben
I've always shied away from reading sci-fi. Even movies like Lord Of The Rings get me nervous - what with not only a cast of characters to learn, but different worlds and species as well. I took a chance though on Dune by Frank Herbert.
At first my concerns were well founded. There are 18 CDs in the book. It opens with an introduction of the cast - not a single author - who will be acting out the book. Then the clincher - this is just Book 1, they tell me. Gulp. How many books are we talking about?
Things got scarier from there as quotes from an alien religious text are read and the story begins.
As I feared, new species, worlds, and concepts were being introduced by the minute.
I'm now on disk 6. I have to say that I absolutely love the book. I love it for the reasons I like most books: the characters are ones I like and can learn from and the storyline is quite interesting. There's enough action to keep my attention too.
The book clearly has a soap opera quality to it. Try this experiment one day when you are home sick - watch an episode of Days Of Our Lives. Within 3 minutes they will drop enough clues that you'll be totally up to speed. By the end of an episode, you'll be a pro.
Same with Dune. While the book invents new science, history, religion, politics, warcraft and entertainment for an alternate set of worlds and creatures it does so in a way that you can indeed follow along with.
I'm glad I took a chance on this one - it's already paid off.
--Ben
A while back, I discovered the Windows XP Remote Assistance and my life of giving tech support advice to family and friends changed dramatically. All of a sudden, I had an easy way to get a view into what was on their screen. From giving HTML advice to figuring out why e-mail wasn't being downloaded, I was the Tech Support Man. All I lacked was the nifty headset phone and the possibly foreign accent.
However, I found that under some circumstances Remote Assistance wouldn't work. This was a major pain in the neck because without a view into a family member's screen, debugging was reduced to "OK, read me exactly what's on the screen again" - not fun at all.
I finally dug into why I might be having issues with Remote Assistance - it turned out to be obvious. Remote assistance does nothing fancy to deal with routers and firewalls. I thought for sure Microsoft had a really sophisticated approach to dealing with giving support to customers on random home networks. Turns out, they don't support this in the least. When I actually looked into the contents of an invite that wasn't working, I found that the IP address listed to connect to was 192.168.1.18. Well, that's useless.
I could have tried to get around this by having the tech-supportee set up a rule on their router to forward requests from port 3389 to their computer, but please. If I can't explain to them how to fix their e-mail, then there's no way I'm going to be able to explain to them how to set up firewall rules on a random router interface, that of course, I can't see.
Just as I was about to give up, I did one last Google search. Of course, LifeHacker to the rescue.
LifeHacker pointed me to ShowMyPC.com. This totally free service does exactly what I want. With the person on the phone, I have them click on the link labeled Show My PC to a remote user. They then click a button which gives them a unique password. On my side, I enter this password. Poof, a few seconds later, I'm staring at their desktop. All without a single thought about firewalls, NAT routing or a million other details.
No worrying about invitations, e-mail or hand generated passwords. It just works. This is my new preferred way to a share a desktop, hands down.
I've added a new piece to Ben's Journal - Ben's Scratch Pad. Folks will quickly recognize this as my twitter stream.
I've been thinking through twitter, trying to decide if it deserves a place in my publishing toolkit. On one hand, I really like how easy it is to publish with it. Quick thoughts, links, quotes and other small bits of content seem like better candidates for twitter than for my regular blog. Perhaps most importantly, the small format size should make my commitment of almost-daily posting an easier one to fulfill.
On the other hand, I didn't want to divide my already limited effort between two sites. I have enough trouble getting folks to read my blog - getting them to read my twitter stream seems like another big hurdle.
Then it hit me - I have total control over benjisimon.blogspot.com. Rather than thinking of Twitter as a secondary site, I decided to integrate it into my existing site. You can't read one without reading the other.
Now when I have a really busy day, and just have enough time to post a few snippets to twitter, my blog visitors will be treated to at least some fresh content. Mission accomplished. Almost.
What if you don't read my blog by visiting my website? What if you just read the feed? A quick Google search for feed splicing found this useful article on the topic. It turns out Yahoo offers Pipes, which, among other things, glues together RSS feeds.
As a side note, Yahoo Pipes really is remarkable. It makes use of a graphical approach to composing feeds. In fact, it's the closest thing to a Graphical Programming Language that I've ever seen work. Check out my pipe here for an example.
The end result is I now have a new feed URL. Click here to subscribe to it. This feed allows you to simultaneously read both my Journal and my Scratch Pad at the same time.
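The splicing Pipes does boils down to pooling the <item> entries of both feeds and sorting by pubDate. A minimal stdlib-only Python sketch (the feed snippets here are made-up placeholders, not my real feeds):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEED_A = """<rss><channel>
<item><title>Blog post</title><pubDate>Mon, 06 Aug 2007 09:00:00 -0400</pubDate></item>
</channel></rss>"""
FEED_B = """<rss><channel>
<item><title>Scratch pad note</title><pubDate>Mon, 06 Aug 2007 12:00:00 -0400</pubDate></item>
</channel></rss>"""

def items(xml_text):
    # yield (timestamp, title) for every <item> in the feed
    for item in ET.fromstring(xml_text).iter("item"):
        when = parsedate_to_datetime(item.findtext("pubDate"))
        yield when, item.findtext("title")

# Splice: pool every <item> from both feeds, newest first
merged = sorted(list(items(FEED_A)) + list(items(FEED_B)), reverse=True)
print([title for _, title in merged])
```

A real merge would also carry over links, descriptions and the channel metadata, but the core operation is just this pooled sort.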
Kostyantyn ran into this issue today. I was totally clueless till he got it sorted out.
Suppose you have a table like so:

  CREATE TABLE foo(timevalue timestamp);

You successfully run the following INSERT:

  INSERT INTO foo(timevalue) values ('2007-03-11 01:53:44');

You attempt to run the following INSERT:

  INSERT INTO foo(timevalue) values ('2007-03-11 02:53:44');

You get the error:

  error# 1292: Incorrect datetime value: '2007-03-11 02:53:44' for column timevalue
What's the problem? How do you fix it?
The value 3/11/2007 corresponds to a daylight savings time switchover date. On that particular day, there was no 2:00 - 2:59am.
This issue was caused by the fact that the database contained raw timestamp data from another timezone. When this data was loaded, it wasn't adjusted to the locale-specific timezone. The result was this one particular record, which can't really exist.
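You can reproduce the gap outside of MySQL. A short Python 3.9+ sketch using zoneinfo (America/New_York chosen as an example locale; adjust for your server's time zone):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

ny = ZoneInfo("America/New_York")

# 2007-03-11 02:30 never happened on US Eastern wall clocks:
# at 02:00 the clocks jumped straight to 03:00.
naive = datetime(2007, 3, 11, 2, 30)
local = naive.replace(tzinfo=ny)

# Round-tripping through UTC lands on a *different* wall-clock time,
# which is one way to detect a timestamp that falls in the DST gap.
roundtrip = local.astimezone(ZoneInfo("UTC")).astimezone(ny)
print(roundtrip.replace(tzinfo=None))  # 2007-03-11 03:30:00, not 02:30
```

One common fix, assuming MySQL: set the session time zone to UTC (SET time_zone = '+00:00') before loading raw timestamps, so no wall-clock gap applies to the data.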
I'm sitting on 66 going 0 miles per hour. Looks like I guessed wrong on the right path to work today.
In the past few days, I've been leaving at this same time and arriving at work in just over 23 minutes.
It's now 22 minutes into my commute and I've barely made it a few miles away from my house.
Argh. Nothing left to do but crawl forward when traffic starts up again and bail out of 66 as soon as possible.
Argh. Better blog rage than road rage.
--Ben
With Shira out of town, I was able to take a walk this morning at 6am'ish. While out, I caught some photos of the sunrise.
I can't remember the last time I watched the sun actually peek up over the horizon, so this turned out to be quite a treat. The Washington, D.C. skyline provided a nice backdrop for the whole event, too.
#include <IOStream_T.h>
Inheritance diagram for ACE_Streambuf_T:
We will be given a STREAM by the iostream object which creates us. See the ACE_IOStream template for how that works. Like other streambuf objects, we can be input-only, output-only or both.
Definition at line 39 of file IOStream_T.cpp.
  : ACE_Streambuf (streambuf_size, io_mode),
    peer_ (peer)
{
  // A streambuf allows for unbuffered IO where every character is
  // read as requested and written as provided. To me, this seems
  // terribly inefficient for socket-type operations, so I've disabled
  // it. All of the work would be done by the underflow/overflow
  // functions anyway and I haven't implemented anything there to
  // support unbuffered IO.

#if !defined (ACE_LACKS_UNBUFFERED_STREAMBUF)
  this->unbuffered (0);
#endif /* ! ACE_LACKS_UNBUFFERED_STREAMBUF */

  // Linebuffered is similar to unbuffered. Again, I don't have any
  // need for this and I don't see the advantage. I believe this
  // would have to be supported by underflow/overflow to be effective.
#if !defined (ACE_LACKS_LINEBUFFERED_STREAMBUF)
  this->linebuffered (0);
#endif /* ! ACE_LACKS_LINEBUFFERED_STREAMBUF */
}
[protected, virtual]
Reimplemented from ACE_Streambuf.
Definition at line 47 of file IOStream_T.i.
References peer_.
{
  return peer_ ? peer_->get_handle () : 0;
}
[virtual]
Implements ACE_Streambuf.
Definition at line 19 of file IOStream_T.i.
References ESUCCESS, ETIME, peer_, ssize_t, and ACE_Streambuf::timeout_.
{
  this->timeout_ = 0;
  errno = ESUCCESS;
  ssize_t rval = peer_->recv (buf, len, flags, tv);
  if (errno == ETIME)
    this->timeout_ = 1;
  return rval;
}
Definition at line 11 of file IOStream_T.i.
References ssize_t.
{
  return this->recv (buf, len, 0, tv);
}
Definition at line 33 of file IOStream_T.i.
{
  this->timeout_ = 0;
  errno = ESUCCESS;
  ssize_t rval = peer_->recv_n (buf, len, flags, tv);
  if (errno == ETIME)
    this->timeout_ = 1;
  return rval;
}
Stream connections and "unconnected connections" (ie -- datagrams) need to work just a little differently. We derive custom Streambuf objects for them and provide these functions at that time.
Definition at line 5 of file IOStream_T.i.
References peer_, and ssize_t.
{
  return peer_->send_n (buf, len);
}
[protected]
This will be our ACE_SOCK_Stream or similar object.
Definition at line 72 of file IOStream_T.h.
Referenced by get_handle, recv, recv_n, and send. | http://www.theaceorb.com/1.3a/doxygen/ace/classACE__Streambuf__T.html | CC-MAIN-2017-51 | refinedweb | 382 | 67.15 |
I want to use an external JavaScript library (MyLib) without typings in my XClient.ts file, so I created a dummy declaration file for the external lib (myLib.d.ts). When I import myLib into XClient, everything is OK in the XClient constructor, but the TypeScript compiler throws an error at the "readonly _lib: MyLib;" line of XClient.ts.
Compiler error: "error TS2304: Cannot find name 'MyLib'"
How can I properly use an external library without typings? And why does the compiler throw this error?
// XClient.ts
import * as $ from "jquery";
import * as MyLib from "myLib";
export class XClient {
readonly _modelId: string;
readonly _lib: MyLib;
constructor(modelId: string) {
this._modelId = modelId;
this._lib = new MyLib(this._modelId);
}
}
// myLib.d.ts
declare var inner: any;
declare module "myLib" {
export = inner;
}
Cannot find name 'MyLib'
It's there as a variable but not as a type. More on this:
Probably:
declare module "myLib" {
  var inner: any;
  type inner = any;
  export = inner;
}
Hello! This script only requires a bunch of .mp4s in the data folder. You'll see a collage of 9 videos (or duplicates) in different positions and transparencies. The thing runs OK, but even with the videos pre-loaded at the beginning, every time there's a change/swap in one of the videos (around every 4 seconds) the whole thing sometimes freezes for a second (try it with mp4s of different sizes).
I tried to use the thread() command for the switching of the videos, but nothing happened. I put a println in to see the thread, but it always shows the main thread, so I really don't know if I'm doing this right.
Thanks a LOT for any help!
import processing.video.*;

File path;
String[] files;
int numfiles = 0;
int timer = -5000;
int tcontrol = 2000;
int numVideos = 9;
VideoM[] v = new VideoM[numVideos];
Movie[] vv;

void setup() {
  size(1920, 1080, P2D);
  frameRate(60);
  files = files();
  numfiles = files.length;
  if (numfiles > 11) numfiles = 11;
  loadvideos();
  for (int i = 0; i < numVideos; i++) {
    v[i] = new VideoM();
  }
}

void draw() {
  background(0);
  for (int i = 0; i < numVideos; i++) {
    v[i].display();
  }
  if ((millis() - timer) > tcontrol) {
    thread("newvideo");
    timer = millis();
    tcontrol = int(random(2, 6)) * 1000;
  }
}

void loadvideos() {
  String video;
  vv = new Movie[numfiles];
  for (int i = 0; i < numfiles; i++) {
    video = files[int(random(numfiles))];
    vv[i] = new Movie(this, video);
    println("Loading ", video);
  }
}

class VideoM {
  Movie m;
  String video;
  int x;
  int y;
  int w;
  int h;
  int alpha;
  int alphai;
  int fadeout = 0;

  VideoM() {
    genera();
    println(numfiles);
    m = vv[int(random(numfiles))];
    m.loop();
    // m.volume(random(0.4, 0.6));
    // m.speed(random(0.6, 1.0));
    m.jump(random(m.duration()));
    // m.play();
  }

  void genera() {
    x = int(random(50, width/2 + width/4));
    y = int(random(50, height/2 + height/4));
    w = int(random(280, 820));
    h = int(w / 1.88);
    alpha = int(random(100, 255));
    alphai = 0;
  }

  void display() {
    tint(255, alphai);
    if (fadeout == 0) {
      alphai++;
      if (alphai > alpha) alphai = alpha;
    } else {
      alphai--;
      if (alphai < 0) { alphai = 0; fadeout = 0; this.newvid(); }
    }
    if (frameCount > 1) {
      image(m, x, y, w, h);
    }
  }

  void cambiavid() {
    fadeout = 1;
  }

  void newvid() {
    m = null;
    int j = int(random(numfiles));
    println("cambio: ", j);
    m = vv[j];
    m.loop();
    genera();
    m.jump(random(m.duration()));
    println(Thread.currentThread());
  }
}

void newset() {
  for (int i = 0; i < numVideos; i++) {
    println(i);
    v[i].newvid();
  }
}

void newvideo() {
  int i = int(random(numVideos));
  // v[i].nuevovid(this);
  v[i].cambiavid();
}

void movieEvent(Movie m) {
  m.read();
}

boolean bStop, bColor = true;
boolean bSave = false, bVidO = false;

void keyPressed() {
  int k = keyCode;
  // reset all videos
  if (key == 'n' || key == 'N') {
    newset();
  }
}

// load files in data
String[] files() {
  // The data path of the folder to look in (write your own)
  java.io.File folder = new java.io.File(dataPath(""));

  // let's set a filter (which returns true if the file's extension is .mp4)
  java.io.FilenameFilter pngFilter = new java.io.FilenameFilter() {
    public boolean accept(File dir, String name) {
      return name.toLowerCase().endsWith(".mp4");
    }
  };

  // list all the folders inside the main directory
  String[] listFolders = folder.list(new java.io.FilenameFilter() {
    public boolean accept(File current, String name) {
      return new File(current, name).isDirectory();
    }
  });

  // list the files in the data folder, passing the filter as parameter
  String[] filenames = folder.list(pngFilter);
  return (filenames);
}
Answers
There is a new forum
Please ask there too
Not sure if doing this through a different thread will help. If you are loading the videos, I would say that assigning the videos, the way you do, should work, assuming your computer can handle loading these many videos.
I have to say I am not sure your approach works at the end. You are assigning VideoM objects from your video list but I don't think you are ensuring the same video is not picked. Why is this important? Well, because you are calling jump(). If two VideoM objects are handling the same video, you are calling jump on the same handle. it is very likely only the last jump is applied and both handles would display the same video.
I have to say your code comments are missing? It is hard to understand what you are trying to accomplish in certain parts of your code. Adding comments would help. Also, the name of your arrays are not the best. For this bit of code, the array functionality is lost between the lines. Using better names like masterVideolist would make your code easier to follow.
I have two questions:
1. When the timer ends, do you change one video or all the videos?
2. The display function manages some tinting. What does it do? I can see the
alphaicounter going up and down but I didn't understand what you want to do.
I have added some code below but it is not a solution. I just want to capture some changes I did and I will change it as soon as you answer my questions.
Kf
Thanks a lot @kfrajer
This changes one video, calling the function newvideo (in here I used the thread function but I don't think is doing anything)
The alphai is the alpha value of the object, so it goes up or down to create a fadein/fadeput effect every time a new video is called.
In general the script works well, is just that the moment the newvideo() function is called sometimes there's a freezing in the other videos --or even stop. It should be some memory issue but I limited the preloaded videos and the videos shown and the script works better. The thing is that the script running won't take more than 300 Mb in memory (an there's much more available), it should work better, isn't?
That is good to know bc it wasn't running on my machine so it made me thing accessing the same reference was the issue. Can you make a quick test and only load one video and displaying 9 of them? Does it work for you?
One suggestion. When you load the videos, you could do something like, as soon as each gets created, to call their
play()followed by a
pause()called to get the videos ready to stream. When you change the video and before you drop the video handle, you could leave it ready to play when the video gets picked again in the next reading operation. This is the concept:
Video[] videoStack; ///All your available videos
This is an idea and not totally sure if it will work. I'll see if I can do some testing later today.
Kf
@grumo -- if this is an ongoing project I recommend continuing to post for advice about it on the new forum:
Thanks @jeremydouglass ! I tried but my account is in hold (user: prandam) since a week ago or more.. dunno what happened.
@grumo -- I've reactivated your account on the new forum. | https://forum.processing.org/two/discussion/28057/multiple-videos-smoothness-threaded-playback | CC-MAIN-2019-43 | refinedweb | 1,175 | 72.16 |
PMD is a utility for finding problems in
Java code. PMD does this using static analysis; that is, analyzing
the source code without actually running the program. PMD comes with a number
of ready-to-run rules that you can run on your own source code to find unused
variables, unnecessary object creation, empty catch blocks, and so forth. You
can also write your own rules to enforce coding practices specific to your
organization. For example, if you're doing EJB programming, you could write a
PMD rule that would flag any creation of
Thread or
Socket objects. If you're feeling generous, you can donate that
rule back to PMD for anyone to use.
PMD was initially written in support of Cougaar, a Defense Advanced Research Projects Agency (DARPA) project billed as "An Open Source Agent Architecture for Large-scale, Distributed Multi-Agent Systems." DARPA agreed that the utility could be open sourced, and since its release on SourceForge, it has been downloaded over 14,000 times and has garnered over 130,000 page views. More importantly, though, numerous PMD rule suggestions and IDE plugins have been written by open source developers and contributed back to the core PMD project.
You can download PMD in either a binary release or with all of the source code; both are available in .zip files on the PMD web site. Assuming you've downloaded the latest PMD binary release, unzip the archive to any directory. Then it's up to you how to use it--if you simply want to run PMD on a directory of Java source files, you can run it from a command line like this (the command should be all on one line):
C:\data\pmd\pmd>java -jar lib\pmd-1.02.jar c:\j2sdk1.4.1_01\src\java\util text rulesets/unusedcode.xml c:\j2sdk1.4.1_01\src\java\util\AbstractMap.java 650 Avoid unused local variables such as 'v' c:\j2sdk1.4.1_01\src\java\util\Date.java 438 Avoid unused local variables such as 'millis' // etc, etc, remaining errors skipped
You can also run PMD using Ant, Maven, or an assortment of Integrated Development Environments (IDEs) including jEdit, Netbeans, Eclipse, Emacs, IDEAJ, and JBuilder.
So what rules come with PMD? Well, here are some examples:
Unused code is always bad:
public class Foo { // an unused instance variable private List bar = new ArrayList(500); }
Why are we returning a concrete class here when an interface--i.e.,
List--would do just as well?
public ArrayList getList() { return new ArrayList(); }
Nothing's being done inside the
if success block ... this
could be rewritten for clarity:
public void doSomething(int y) { if (y >= 2) { } else { System.out.println("Less than two"); } }
Why are we creating a new
String object? Just use
String x =
"x"; instead.
String x = new String("x");
There are many other rules, but you get the idea. Static analysis rules can catch the things that would make an experienced programer say "Hmm, that's not good."
At the heart of PMD is the JavaCC parser generator, which PMD uses in conjunction with an Extended Backus-Naur Formal (EBNF) grammar and JJTree to parse Java source code into an Abstract Syntax Tree (AST). That was a big sentence with a lot of acronyms, so let's break it down into smaller pieces.
Java source code is, at the end of the day, just plain old text. As your compiler will tell you, however, that plain text has to be structured in a certain way in order to be valid Java code. That structure can be expressed in a syntactic metalanguage called EBNF and is usually referred to as a "grammar." JavaCC reads the grammar and generates a parser that can be used to parse programs written in the Java programming language.
There's another layer, though. JJTree, an add-on to JavaCC, enhances the
JavaCC-generated parser by decorating it with an Abstract Syntax Tree (AST)--a semantic layer on top of the stream of Java tokens. So instead of getting a
sequence of tokens like
System,
.,
out,
., and
println, JJTree serves up a tree-like hierarchy of
objects. Here's a simple code snippet and the corresponding AST:
public class Foo { public void bar() { System.out.println("hello world"); } }
CompilationUnit TypeDeclaration ClassDeclaration UnmodifiedClassDeclaration ClassBody ClassBodyDeclaration MethodDeclaration ResultType MethodDeclarator FormalParameters Block BlockStatement Statement StatementExpression PrimaryExpression PrimaryPrefix Name PrimarySuffix Arguments ArgumentList Expression PrimaryExpression PrimaryPrefix Literal
Your code can traverse this tree using the
Visitor pattern--the object tree generated by JJTree supports.
Here's a simple PMD rule that checks for empty
if
statements:
//base class.
ASTBlock, so we declared the method
visit(ASTBlock node, Object data).
ifstatements with empty bodies, so we look up the tree to ensure we are in an
ASTIfStatement, and then down the tree to see if there are any child nodes.
ASTIfStatementand then look down the tree to check for an empty block. Which way you do this is up to you; if you run into performance problems, consider rewriting your rule..
Return to ONJava.com. | http://www.onjava.com/2003/02/12/static_analysis.html | CC-MAIN-2014-52 | refinedweb | 846 | 53.1 |
Wiki
SCons / ReduceMemory
Memory Reduction Techniques
This page describes techniques for reducing the memory footprint of a SCons build. The first part addresses SCons users while the second part discusses techniques that can be applied to reduce the memory consumption in SCons itself.
User Guide
SCons implements some debugging facilities that can help to track down memory problems. If those are not sufficient, check out the SCons Heapmonitor branch.
scons --debug=memory --debug=count
If you got the impression that SCons allocates too much memory to build your project you should first make sure it is really SCons and not a build tool that is used. The easiest way is to run an up-to-date check of your project after a full build was performed. If you still think that SCons takes up too much memory run SCons with the above mentioned debug flags.
The Node and Executor objects in the
--debug=count output should correspond to the number of files (or other Node types). There is not much you can do about it. If you see a high number of Environment objects, try to reduce those in your SConscript files.
Environment Objects
Environment objects have a high memory footprint. Try to reuse existing environments if you can. Also note that you can override flags when invoking the builder:
env.Program('fastfoo.c', CFLAGS='-O3')
Developer Guide
Limit Lifetime
If you can formulate invariants stating that when reaching a specific condition an object is not needed anymore, delete it.
Caching
Caching is commonly employed to speed up access to values which are used more then once. Therefore, there is always a trade-off between memory consumption and runtime performance. SCons used
_memo dictionaries attached to each object which uses caching. These dictionaries allocate a considerable share of memory. It should always be asked if caching a specific object actually speeds up the build enough to sacrifice the additional memory used by the slot in the cache.
Lazy initialization
Empty sequences, dictionaries and sets consume memory. If such an attribute is only needed for a small subset of the instances of a class, lazy initialization can be used. The
ignore and
depends attributes of Node objects would be a classic example. Instead of:
def __init__(self): self.rarely_used = {} def append(self, k, v): self.rarely_used[k] = v
The
rarely_used dictionary can be assigned to an object only if it's used:
def __init__(self): pass def append(self, k, v): try: self.rarely_used[k] = v except AttributeError: self.rarely_used = {k: v}
If the
append method is not called for most of the existing objects, the alternative above might be used. However, every attempt to access the object must then be protected to not raise an
AttributeError.
slots
New-styles classes with slots might be used for helper classes which contain a fixed set of attributes. Some memory overhead can be avoided by using slots because it doesn't create a
__dict__ for the object.
class Color(object): __slots__ = ('red', 'green', 'blue')
Reuse strings with intern
String objects are not reused in general. Assigning the same string to another string actually makes a copy. This can be avoided by using the built-in function intern(). Using
intern() applies the Fly-weight pattern: It's most useful when a huge number of instances share a small number of unique attributes. For example, in a huge address database for a specific state, the
city attribute might be interned as it will most likely be reused by a large number of other address instances:
class Citizen: def __init__(self, name, address, city): self.name = name self.address = address self.city = intern(city)
In SCons, intern strings could be used for filenames, suffixes, any string that has a good chance of being reused.
Singleton pattern
Don't create a unique object if a singleton can be used instead.
Updated | https://bitbucket.org/scons/scons/wiki/ReduceMemory | CC-MAIN-2015-32 | refinedweb | 644 | 64 |
daemon rotates socket on restart
Bug Description
[Impact]
* Lack of option for disabling wsgi socket rotation leads to errors on graceful restarts, making them not as graceful.
* Specifically, when mod-wsgi is running in daemon mode (which uses
sockets), and a graceful restart ('sudo systemctl reload apache2')
happens, the socket filename changes, and upcoming HTTP requests
in a keep-alive connection (i.e., same connection is re-utilized
for multiple HTTP requests) initiated before the graceful restart
are failed (HTTP 503 error) because the socket file is not found.
* This change introduces a new config option WSGISocketRotation that allows to disable the rotation.
* The option is disabled by default, so the default behavior remains
consistent with the previous versions (ie, socket rotation occurs.)
* This is actually desired, and designed that way by upstream,
because disabling socket rotation requires no _wsgi_ config changes,
as they can impact/alter the upcoming HTTP requests (see patch link.)
[Test Case]
* Setup apache2 with mod-wsgi.
* Make sure there are some wsgi sockets open.
* Reload apache gracefully.
* (Detailed steps are provided in comments #9 and #10)
Expected result:
No errors related to sockets in the logs
Actual result:
There are error messages related to sockets in the logs.
[Regression Potential]
* Since the value is set to On by default any regressions would manifest only after explicitly setting it to Off.
* After it's set to off WSGI application behavior will change on reloads - connections should be resumed instead of cancelled.
[Other Info]
* The fix has been introduced in mod-wsgi 4.6.0,
thus already present in Disco/Eoan/Focal,
only needed in Xenial/Bionic.
* Original bug description:
On Apache reloads the WSGI daemon tries to rotate wsgi sockets causing unnecessary log entries, especially in OpenStack context.
This has been addressed in mod-wsgi upstream (4.6.0) and could be backported to Ubuntu.
SRU proposal for Xenial.
SRU proposal for bionic.
Hi Dariusz,
I reviewed the debdiffs, they look good overall.
I just had some minor changes to reflect review
practices/comments that I have been thru myself:
- Nice catch on updating the Maintainers field.
- Changelog order of patch file/description:
I changed to file first, description later,
as I've seen as more used/standard practice.
- DEP3 headers are present in the .patch; good.
I updated from 'Origin: upstream' to 'backport'
because the patch has a removal of (unneeded)
release notes file.
Even being uneeded, there were changes to the
upstream patch, so it's no longer clean apply/
cherry pick, thus the change to 'backport',
per Debian DEP-3 spec [1]:
"backport" (in the case of an upstream patch that had to be modified to apply on the current version)
I added a '[backport]' section to the .patch
file describing that. (and this other change:)
I also noted that on Xenial there are changes
actually needed to src/server/
arguably indeed a "true" backport this time.
Oh, and there's a digit missing in Bug-Ubuntu
number, just added that/goes to right URL now.
- Xenial fix on .patch file:
Still on Xenial there's an extra '+'/plus sign
in the commented line added to __init__.py, so
I fixed that one:
+++#
Bionic had that right:
++#WSGISocke
- Version numbers look good/upgrade path is OK:
- - Versions 4.3.0-1.1ubuntu1 and 4.5.17-1ubuntu1
never existed in package's publishing history [2].
- - Upgrade path is OK in the same release.
$ grep -m2 urgency= x/dput/
+mod-wsgi (4.3.0-1.1ubuntu1) xenial; urgency=medium
mod-wsgi (4.3.0-1.1build1) xenial; urgency=medium
$ dpkg --compare-versions 4.3.0-1.1build1 lt 4.3.0-1.1ubuntu1 ; echo $?
0
$ grep -m2 urgency= b/dput/
+mod-wsgi (4.5.17-1ubuntu1) bionic; urgency=medium
mod-wsgi (4.5.17-1) unstable; urgency=medium
$ dpkg --compare-versions 4.5.17-1 lt 4.5.17-1ubuntu1; echo $?
0
- - Upgrade path is OK across releases (t/x/b/e)
$ rmadison -a source mod-wsgi | grep -e trusty -e eoan
mod-wsgi | 3.4-4ubuntu2 | trusty | source
mod-wsgi | 3.4-4ubuntu2.
mod-wsgi | 3.4-4ubuntu2.
mod-wsgi | 4.6.5-1 | eoan
$ dpkg --compare-versions 3.4-4ubuntu2.
0
$ dpkg --compare-versions 4.3.0-1.1ubuntu1 lt 4.5.17-1ubuntu1; echo $?
0
$ dpkg --compare-versions 4.5.17-1ubuntu1 lt 4.6.5-1 ; echo $?
0
That's it!
cheers,
Mauricio
[1] https:/
[2] https:/
Attaching the updated debdiffs for reference.
They build successfully on a PPA for all architectures,
and have been used on steps to reproduce/verify (below.)
Dariusz,
As part of my learning homework to be able to
technically review the patch in this proposal,
I came across three things that may be useful
for the bug description/SRU template, so I'll
just add them, if you don't mind.
1) Understanding the scenario where the bug
happens (mod-wsgi running in daemon mode/not
embedded mode, only the former uses sockets;
and reload/not-restart apache2 between HTTP
requests in the same keep-alive connection)
[1, 2].
2) Detailed steps to reproduce/verify the bug.
3) Confirming that the default behavior with
the patch remains as in the previous version
(of course, as it should/is supposed to, but
it's worth mentioning in bug description :-)
With all those points now understood/
I'll proceed with the upload for SRU to B/X.
Thanks!
cheers,
Mauricio
[1] https:/
[2] https:/
Steps to reproduce/verify on Xenial:
===
References:
- https:/
- https:/
- https:/
- https:/
Create a container with apache2/mod-wsgi in daemon mode,
w/ keep-alive timeout long enough, and wsgi hello-world:
---
$ lxc launch ubuntu:xenial lp1863232x
$ lxc exec lp1863232x -- su - ubuntu
$ sudo apt update
$ sudo apt install -y apache2 libapache2-mod-wsgi
$ sudo sed -i '/^KeepAliveTim
$ cat <<EOF | sudo tee /etc/apache2/
WSGIScriptAlias /hello-world /var/www/
WSGIDaemonProcess 127.0.0.1 processes=2 threads=16 display-
WSGIProcessGroup 127.0.0.1
EOF
$ sudo a2enconf wsgi
$ sudo systemctl reload apache2
$ cat <<EOF | sudo tee /var/www/
def application(
status = '200 OK'
output = b'Hello World!\n'
response_
start_
return [output]
EOF
$ curl 127.0.0.
Hello World!
Check WSGI socket filename changes when
reloading apache2 (aka graceful restart):
---
$ ls -1 /var/run/
/var/run/
$ sudo systemctl reload apache2
$ ls -1 /var/run/
/var/run/
$ sudo systemctl reload apache2
$ ls -1 /var/run/
/var/run/
Create script and shell one-liner
to send two HTTP requests in the
same connection (keep-alive used)
---
$ cat <<EOF >http-request
GET /hello-world HTTP/1.1
Host: 127.0.0.1
Connection: keep-alive
EOF
One connection/One request:
$ (cat http-request; sleep 1) | telnet 127.0.0.1 80
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
HTTP/1.1 200 OK
Date: Mon, 17 Feb 2020 23:33:19 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 13
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/plain
Hello World!
Connection closed by foreign host.
One connection/Two requests (added timestamps to check timeout values)
$ (cat http-request; sleep 1; cat http-request; sleep 9) | telnet 127.0.0.1 80 2>&1 | while read line; do echo "$(date +'%T') == $line"; done
23:34:42 == Trying 127.0.0.1...
23:34:42 == Connected to 127.0.0.1.
23:34:42 == Escape character is '^]'.
23:34:42 == HTTP/1.1 200 OK
23:34:42 == Date: Mon, 17 Feb 2020 23:34:42 GMT
23:34:42 == Server: Apache/2.4.18 (Ubuntu)
23:34:42 == Content-Length: 13
23:34:42 == Keep-Alive: timeout=5, max=100
23:34:42 == Connection: Keep-Alive
23:34:42 == Content-Type: text/plain
23:34:42 ==
23:34:42 == Hello World!
23:34:43 == HTTP/1.1 200 OK
23:34:43 == Date: Mon, 17 Feb 2020 23:34:43 GMT
23:34:43 == Server: Apache/2.4.18 (Ubuntu)...
Steps to reproduce/verify on Bionic:
===
Similarly to steps described for Xenial.
Skipping the identical steps.
$ lxc launch ubuntu:bionic lp1863232b
$ lxc exec lp1863232b -- su - ubuntu
One connection/Two requests (added timestamps to check timeout values)
---
$ (cat http-request; sleep 1; cat http-request; sleep 9) | telnet 127.0.0.1 80 2>&1 | while read line; do echo "$(date +'%T') == $line"; done
00:03:46 == Trying 127.0.0.1...
00:03:46 == Connected to 127.0.0.1.
00:03:46 == Escape character is '^]'.
00:03:46 == HTTP/1.1 200 OK
00:03:46 == Date: Tue, 18 Feb 2020 00:03:46 GMT
00:03:46 == Server: Apache/2.4.29 (Ubuntu)
00:03:46 == Content-Length: 13
00:03:46 == Vary: Accept-Encoding
00:03:46 == Keep-Alive: timeout=15, max=100
00:03:46 == Connection: Keep-Alive
00:03:46 == Content-Type: text/plain
00:03:46 ==
00:03:46 == Hello World!
00:03:47 == HTTP/1.1 200 OK
00:03:47 == Date: Tue, 18 Feb 2020 00:03:47 GMT
00:03:47 == Server: Apache/2.4.29 (Ubuntu)
00:03:47 == Content-Length: 13
00:03:47 == Vary: Accept-Encoding
00:03:47 == Keep-Alive: timeout=15, max=99
00:03:47 == Connection: Keep-Alive
00:03:47 == Content-Type: text/plain
00:03:47 ==
00:03:47 == Hello World!
00:03:56 == Connection closed by foreign host.
Reproduce the problem by placing
'sudo systemctl reload apache2'
between the two HTTP requests
(second request hits Error 503,
depending on apache2 MPM module.
mpm_event just closes connection,
mpm_worker/
---
$ lsb_release -cs
bionic
$ dpkg -s libapache2-mod-wsgi | grep ^Version
Version: 4.5.17-1
$ sudo a2dismod mpm_event
$ sudo a2enmod mpm_worker
$ sudo systemctl restart apache2
For reference on socket filename:
$ sudo systemctl restart apache2
$ ls -1 /var/run/
/var/run/
$ (cat http-request; sleep 1; sudo systemctl reload apache2; sleep 5; cat http-request; sleep 9) | telnet 127.0.0.1 80 2>&1 | while read line; do echo "$(date +'%T') == $line"; done
14:46:04 == Trying 127.0.0.1...
14:46:04 == Connected to 127.0.0.1.
14:46:04 == Escape character is '^]'.
14:46:04 == HTTP/1.1 200 OK
14:46:04 == Date: Tue, 18 Feb 2020 14:46:04 GMT
14:46:04 == Server: Apache/2.4.29 (Ubuntu)
14:46:04 == Content-Length: 13
14:46:04 == Vary: Accept-Encoding
14:46:04 == Keep-Alive: timeout=15, max=100
14:46:04 == Connection: Keep-Alive
14:46:04 == Content-Type: text/plain
14:46:04 ==
14:46:04 == Hello World!
14:46:10 == HTTP/1.1 503 Service Unavailable
14:46:10 == Date: Tue, 18 Feb 2020 14:46:10 GMT
14:46:10 == Server: Apache/2.4.29 (Ubuntu)
14:46:10 == Content-Length: 374
14:46:10 == Connection: close
14:46:10 == Content-Type: text/html; charset=iso-8859-1
14:46:10 ==
14:46:10 == <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
14:46:10 == <html><head>
14:46:10 == <title>503 Service Unavailable</title>
14:46:10 == </head><body>
14:46:10 == <h1>Service Unavailable</h1>
14:46:10 == <p>The server is temporarily unable to service your
14:46:10 == request due to maintenance downtime or capacity
14:46:10 == problems. Please try again later.</p>
14:46:10 == <hr>
14:46:10 == <address>
14:46:11 == </body></html>
14:46:11 ...
Uploaded changes to B/X.
There are no other mod-wsgi uploads in the unapproved queue nor waiting in -proposed/pending SRU.
mod-wsgi (source)
4.5.17-1ubuntu1 main python medium ubuntu-server Proposed 56 seconds ago
mod-wsgi (source)
4.3.0-1.1ubuntu1 main python medium ubuntu-server Proposed 53 seconds ago
Hello Dariusz, or anyone else affected,
Accepted mod-wsgi.
Hello Dariusz, or anyone else affected,
Accepted mod-wsgi.
Bionic verification:
After installing libapache2-mod-wsgi 4.5.17-1ubuntu1 and setting
WSGISocketRotation Off
in /etc/apache2/
$ ls -1 /var/run/
/var/run/
$ sudo systemctl reload apache2.service
$ ls -1 /var/run/
/var/run/
Xenial verification:
Similarly to bionic with Xenial version of libapache2-mod-wsgi 4.3.0-1.1ubuntu1 and setting
WSGISocketRotation Off
in /etc/apache2/
$ ls -1 /var/run/
/var/run/
$ sudo systemctl reload apache2.service
$ ls -1 /var/run/
/var/run/
$ apt-cache policy libapache2-mod-wsgi | grep Installed
Installed: 4.3.0-1.1ubuntu1
All autopkgtests for the newly accepted mod-wsgi (4.5.17-1ubuntu1) for bionic have finished running.
The following regressions have been reported in tests triggered by the package:
nova/2:
Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUp
https:/
[1] https:/
Thank you!
The verification of the Stable Release Update for mod-wsgi mod-wsgi - 4.5.17-1ubuntu1
---------------
mod-wsgi (4.5.17-1ubuntu1) bionic; urgency=medium
* d/p/allow-
Add option for disabling daemon socket rotation on restarts (LP: #1863232)
-- Dariusz Gadomski <email address hidden> Fri, 14 Feb 2020 12:08:41 +0100
This bug was fixed in the package mod-wsgi - 4.3.0-1.1ubuntu1
---------------
mod-wsgi (4.3.0-1.1ubuntu1) xenial; urgency=medium
* d/p/allow-
Add option for disabling daemon socket rotation on restarts (LP: #1863232)
-- Dariusz Gadomski <email address hidden> Fri, 14 Feb 2020 12:36:00 +0100
Since it is already present upstream it's fixed for focal and eoan. | https://bugs.launchpad.net/ubuntu/+source/mod-wsgi/+bug/1863232 | CC-MAIN-2021-17 | refinedweb | 2,227 | 57.27 |
In this section, we discuss
issues that emerge in database applications when multiple users
access a database system; some users are inserting, updating, or
deleting data, while others run queries.
To motivate the problems and solutions discussed here, consider an
example. Imagine a user of the winestore wants to buy the last bottle
of an expensive, rare wine that's in stock. The user
browses the database and finds the wine. There is only one
bottle left, and the user quickly adds this to her shopping cart. The shopping
cart is a row in the order table with only one
related row in the items table. Now, the user
decides to finalize the purchase and is presented with a summary of
the shopping cart.
However, while the user fumbles about finding her password to log in,
another user enters the system. This user quickly locates the same
wine, sees that there is only one bottle left, adds it to his
shopping cart, logs in to the system, and purchases the wine. When
our first user finally logs in to finalize the order, all the details
look fine, but the wine has actually been sold. Our database
UPDATE operation to deduct from the inventory
fails since the stock value is already zero, and we end up reporting
an error to our original—now very unhappy and
confused—user.
Consider another example. Imagine that one of our winestore stock
managers wants to order 12 more bottles of a popular wine, but only
if there are less than two dozen bottles currently in stock. The
manager runs a query to sum the total stock for that wine from the
inventory table. The result is that there are 15 bottles, fewer than
two dozen, so he decides to order more. But before he places the order,
a second manager checks the stock of the same wine and runs the same query.
The result of the query is the same: 15 bottles. The second manager
orders a dozen bottles, and updates the inventory to 27, knowing the
bottles will arrive in the afternoon. The problem occurs when the
first manager returns: he doesn't rerun the
query—why should he?—and he too orders 12 bottles and
updates the inventory to 27. Now the system has a record of 27 bottles,
but two dozen will arrive in the afternoon to take the total to 39!
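The interleaving behind this lost update can be sketched as a sequence of SQL statements from the two managers' sessions; the inventory table and column names below are assumptions for illustration only:

```sql
-- Manager A:
SELECT SUM(on_hand) FROM inventory WHERE wine_id = 42;  -- returns 15
-- Manager B (before A acts on his result):
SELECT SUM(on_hand) FROM inventory WHERE wine_id = 42;  -- also returns 15
UPDATE inventory SET on_hand = 27 WHERE wine_id = 42;   -- B records his order
-- Manager A (later, without rerunning the query):
UPDATE inventory SET on_hand = 27 WHERE wine_id = 42;   -- overwrites B's update
```

Manager A's final UPDATE writes a value computed from stale data, so the effect of Manager B's update is lost.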
The first problem is a design issue—as well as an example of an
unrepeatable read—and one that can be
solved with more restrictive system requirements, knowledge of how
the DBMS behaves, and some careful script development. The second is
a classic problem—what textbooks describe as a lost
update—and it requires more understanding of
relational database problems and theory. We cover simple solutions to
fundamental problems like these here, and discuss how MySQL
implements locking for transactions, concurrency, and performance.
This section isn't intended as a substitute for a
relational database text. Most textbooks contain extensive treatment
of transaction and concurrency topics, and most of these are highly
relevant to the state problems of web database applications.
We
have illustrated two examples of the problems users have when they
access a web database at the same time (that is, concurrently).
Allowing uncontrolled interleaving of SQL statements—where each
of the users is reading and writing—can result in several
well-known problems. The management of a group of SQL
statements—we call these
transactions—is
one important area of the theory and practice of relational
databases. Here are four of the more common problems of concurrent
read and write transactions:
Consider an example: a manager decides to add a 3% surcharge to a
particular wine inventory, so she reads and updates the cost of that
wine in the inventory table.
For example, consider a case in the winestore in which one user
updates inventories and another produces a management stock report.
Fortunately, most of these problems can be solved through
locking or careful design of scripts that carry
out database transactions. However, some problems may be deliberately
left unsolved in a particular system, because solving them requires more
restrictive system requirements or adds unnecessary complexity.
next section.
It has been shown that a simple scheme
called locking—actually, two-phase
locking—solves the four transaction
problems identified in the last section.
TIP:
Locking is required only when developing scripts that first read a
value from a database and later write that value to the database.
Locks are never required for self-contained insert, update, or delete
operations such as updating a customer's details,
adding a region to the region table, or
unconditionally deleting an inventory. Locking may not be required
for all parts of a web database application: parts of the application
can still be safely used without violating any locking conditions.
Before a user writes to tables, he must obtain a WRITE
LOCK; before he reads from tables, if he is performing a
transaction susceptible to a concurrency problem, he must obtain a
READ LOCK.
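In MySQL, this pattern is expressed with the LOCK TABLES and UNLOCK TABLES statements. The following sketch shows a read-then-write transaction; the inventory table and its columns are assumptions for illustration:

```sql
-- First phase: obtain every lock needed, before any read or write
LOCK TABLES inventory WRITE;

-- The value read here can't be changed by another user, because the
-- WRITE lock blocks their reads and writes until the lock is released
SELECT on_hand FROM inventory WHERE wine_id = 42;
UPDATE inventory SET on_hand = on_hand - 1 WHERE wine_id = 42;

-- Second phase: release all locks
UNLOCK TABLES;
```

Note that MySQL requires every table used between LOCK TABLES and UNLOCK TABLES to be named in the LOCK TABLES statement.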
TIP:
SELECT, UPDATE,
INSERT, or DELETE operations
that don't use LOCK
TABLES don't proceed if locks are
held that would logically prevent their operation. For example, if a
user holds a WRITE LOCK on a
table, no other user can issue a SELECT,
UPDATE, INSERT,
DELETE, or LOCK operation. This is a design decision in
MySQL that gives priority to database modifications over database
queries.
NOTE:
MySQL is designed to give writing priority over reading. Regardless
of how long a user has been queued in the
READ LOCK queue, any
request in the WRITE LOCK queue receives
priority. This can lead to a problem called
starvation, where a transaction never completes
because it can't obtain the required locks. However,
since most web database applications read from databases much more
than they write, and locks are required in only a few situations,
starvation is very uncommon in practice.[9]
[9]Deadlock is a difficult problem. As recently as Version 3.22.23
of MySQL, there were bug fixes to MySQL to avoid deadlocking problems
in the DBMS.
WARNING:
MySQL has a feature called
INSERT DELAYED for
insertion that is described in the MySQL manual.
Example 6-9 shows a PHP
function, updateDiscount(
), that requires locking to ensure that the
value returned from the SELECT query
can't change before the UPDATE
operation. The script is designed to be run either by the winestore
system administrator—it would then require a
<form> for user input—or as the final
module in the ordering process for users. Another example that
requires locking for winestore ordering is included in Chapter 12.
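The read-then-write pattern that such a function relies on can be sketched as follows; the table, column, and helper names here are assumptions modeled on the other examples in this chapter, not the actual Example 6-9 code:

```php
<?php
// Sketch of a locked read-then-write update. The inventory table,
// its cost and wine_id columns, and showerror( ) are assumptions.
function updateDiscount($connection, $wineId, $percent)
{
   // First phase: obtain all locks before reading
   if (!mysql_query("LOCK TABLES inventory WRITE", $connection))
      showerror( );

   $query = "SELECT cost FROM inventory WHERE wine_id = $wineId";
   if (!($result = mysql_query($query, $connection)))
      showerror( );
   $row = mysql_fetch_array($result);

   // The value just read can't change: the WRITE lock is still held
   $newCost = $row["cost"] * (1 - $percent / 100);
   $query = "UPDATE inventory SET cost = $newCost " .
            "WHERE wine_id = $wineId";
   if (!mysql_query($query, $connection))
      showerror( );

   // Second phase: release the locks
   if (!mysql_query("UNLOCK TABLES", $connection))
      showerror( );
}
?>
```

Because the WRITE lock is held across both the SELECT and the UPDATE, no other user can change the cost between the two statements.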
WARNING:
If a
nonpersistent connection opened with mysql_connect(
) is used, locks are automatically released when the
script finishes. However, it is good practice to include the
UNLOCK TABLES statement.
If values must be shown to a user, consider
adding a summary table for identifiers, or copying rows to a
temporary table. For example, an identifier table can store the next
available identifier for each other table; this can then be
incremented by the script and the value can be used in subsequent
scripts without locking problems and without any clashes in
numbering.
This solution is shown in Example 6-10, using an
auxiliary table named ids that manages the next
available region_id attribute. The use of the
additional table prevents duplicate rows being inserted, and avoids
any problems with locking or updates.
<?php
// This code needs an auxiliary table called "ids"
// that might be created with:
// CREATE TABLE ids (
// region_id int default 0,
// other_id int default 0,
// another_id int default 0
// );
// It has one row, and no primary key is required.
// After creating the table, a row is needed,
// so issue an: INSERT INTO ids VALUES (NULL, NULL, NULL);
// (if it's being added later, use MAX( ) to get the
// correct ID values!)
include 'db.inc';
include 'error.inc';
function getNextRegion ($connection)
{
// A nice way to do it... use an auxiliary table
// Lock the auxiliary table
$query = "LOCK TABLES ids WRITE";
if (!mysql_query($query, $connection))
showerror( );
// Add one to the region_id attribute
$query = "UPDATE ids SET region_id = region_id + 1";
if (!mysql_query($query, $connection))
showerror( );
// Find out the new value of region_id
$query = "SELECT * FROM ids";
if (!($result = mysql_query($query, $connection)))
showerror( );
// Get the row that is returned
$row = mysql_fetch_array($result);
// Unlock the table
$query = "UNLOCK TABLES";
if (!mysql_query($query, $connection))
showerror( );
// Return the region_id
return ($row["region_id"]);
}
// MAIN -----
if (!($connection = @ mysql_connect($hostName,
                                    $username,
                                    $password)))
   die("Could not connect to database");
if (!mysql_select_db($databaseName))
showerror( );
if (empty($regionId))
{
$regionId = getNextRegion($connection);
?>
<!DOCTYPE HTML PUBLIC
"-//W3C//DTD HTML 4.0 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Insert a region</title>
</head>
<body bgcolor="white">
region_id: <?= $regionId ?>
<br>
<form method="post" action="example.6-10.php">
<input type="hidden"
name="regionId" value="<?=$regionId;?>">
<br>region_name:
<br><input type="text" name="regionName" size=80>
<br>description:
<br><textarea name="description" rows=4 cols=80>
</textarea>
<br><input type="submit">
</form>
</body>
</html>
<?php
}
else
{
$regionId = clean($regionId, 3);
$regionName = clean($regionName, 20);
$description = clean($description, 255);
$query = "INSERT INTO region SET " .
"region_id = " . $regionId . ", " .
"region_name = \"" . $regionName . "\", " .
"description = \"" . $description . "\"";
if ((@ mysql_query ($query, $connection))
&& @ mysql_affected_rows( ) == 1)
header("Location:insert_receipt.php?" .
"values=$regionId&status=T");
else
header("Location: insert_receipt.php?status=F");
}
?>
Until recently,
MySQL supported only table locking. Other
DBMSs support locking at other levels, including
locking rows, groups of rows, attributes across all rows in a table,
and disk pages.
A common argument against using MySQL has been that table locking is
too heavy-handed and that it limits concurrency in web database
applications. This isn't really true, except when
there are specific requirements that are uncharacteristic of web
database applications.
Table locking works particularly well in web database applications,
where typically:
DELETE and UPDATE operations
are on specific rows—most often accessed by the primary key
value—and the rows are accessed through an index.
There are many more read operations than write operations.
Operations require locks on whole tables. Examples include
GROUP BY operations, updates of
sets of rows, and reading in most rows in a table.
By default, MySQL uses a type of table called
MyISAM. Up to now, the
MyISAM and heap table types have
supported only table locking. However, three new table types have
recently become supported by MySQL, and these have different locking
paradigms:
The Berkeley Database (BDB) tables have disk
page-level locking; the LOCK
TABLES statement can still be used in
BDB.
The InnoDB tables have row-level locking. They
are designed to support very large volumes of data efficiently, and
the locking mechanisms are designed to have low overheads.
The Gemini tables have both row- and table-level
locking; unlike the other table types that can be used with MySQL,
the Gemini table is covered by a commercial
license and isn't free software.
Support for BDB and InnoDB
tables must be compiled into MySQL during the installation process,
and each requires MySQL 3.23.34 or a later version. The
Gemini table type is a component of the
commercially available NuSphere product range. Configuration of these
table types is outside the scope of this book.
Interestingly, the MyISAM tables permit a
limited form of concurrency that isn't immediately
obvious with the table-locking paradigm. When a mix of select and
write operations occur on a MyISAM table, MySQL
automatically allows write operations to change copies of the data.
Other SELECT statements being run by other users
read the unchanged data and, when they are completed, the modified
copies are written back to the database. This approach is known as
data versioning.
The row-locking paradigm is used in the
InnoDB and
Gemini table types, and is the dominant paradigm
in other DBMSs. The BDB table type offers page locking,
which is similar to locking selected rows.
Row or page locking works well in situations that are infrequently
seen in web database applications, such as:
Transaction environments where a number of steps need to be undone or
rolled back.
Many users are writing to the same tables concurrently.
Locks need to be maintained for long periods of time.
The drawbacks of row and page locking include:
Higher memory requirements to manage an increased number of locks
Poor performance, since there is much more locking and unlocking
activity
Slow locking for operations that require locks on a whole table, such
as GROUP BY operations
There are two significant topics related to transactions and
concurrency that aren't covered in this chapter. We
have omitted these topics because they are less important in web
database applications than in traditional relational systems, and
because this book isn't intended as a substitute for
a good relational database text or the documentation of the MySQL
DBMS.
The first is a more traditional treatment of
transactions from a
commit and rollback perspective. The
InnoDB, BDB, and
Gemini table types support a model where a
statement can be issued to begin a transaction that consists of
several database operations. On completion of the operations, a
commit statement can be issued to write the changes to the database
and verify that these changes have occurred. If, for some reason, the
operations need to be undone—for example, when a user presses
Cancel—a rollback command can be issued to return the database
to its original state.
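For a transaction-capable table type such as InnoDB, the commit/rollback model described above can be sketched as follows (the table and column names are hypothetical):

```sql
BEGIN;                                                        -- start the transaction
UPDATE account SET balance = balance - 100 WHERE acct_id = 1; -- step 1
UPDATE account SET balance = balance + 100 WHERE acct_id = 2; -- step 2
COMMIT;                                                       -- write both changes to the database

-- If the user presses Cancel instead, issue:
-- ROLLBACK;   -- returns the database to its state before BEGIN
```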
Commit and rollback processing is useful, but it can be argued that
it is less interesting in the stateless HTTP environment, in which
operations need to be as independent as possible. For most practical
purposes in web database applications, complex transactional
processing isn't required.
The second topic we have not covered is
recovery. Database
recovery techniques are based on logging, in which database changes
are written to a file that can be used for transaction rollback and
for DBMS system recovery. MySQL does support logging for recovery,
and more details can be found in the MySQL manual. | https://docstore.mik.ua/orelly/webprog/webdb/ch06_02.htm | CC-MAIN-2020-05 | refinedweb | 2,304 | 52.7 |
Note: Article has been updated - see Appendix A - Version 1.1
The Azure Event Grid represents an eventing Pub/Sub model with pre-built publishers of the event interests in the Azure resources. The following picture shows this model with fundamental entities such as Event Sources, Topics, Event Subscriptions and Event Handlers:
Basically, there are event sources with event interests and on the consumer side, there are subscribers that receive these interests. Each subscriber needs to subscribe to the event source for its specific event interest using an event subscription. Based on the subscriptions, the Event Grid service knows what, where and how to deliver the event interest when it occurs. The above picture shows a current version such as 2018-05-01-preview.
2018-05-01-preview
The Azure Event Grid is a loosely decoupled (push) eventing model. It's a PaaS "standalone service" per region as a part of the serverless cloud architecture. It is capable of delivering millions of events per second to the destination endpoints (Event Sinks).
The following picture can simplify this Pub/Sub model, where Publishers are the left side, for emitting events and on the other side are Subscribers for consuming the event messages represented by source event interest, for example: the blob has been created/deleted in the azure storage.
Publishers
Subscribers
From the logical model point of the view, the Topic represents an input endpoint and the Subscription point represents an output point for this routing model. The following picture shows that:
Topic
Subscription
Based on the above description, we can say that each subscription represents a Logical Connectivity between the event interest (topic) and the consumer (subscriber) within the eventing model. The Azure Event Grid limitation for the eventing model is shown in the following screen snippet:
Now that you know the position of the Topic and Subscription in the Azure Event Grid model, let's describe their connectivity patterns.
Basically, there are two patterns such as Fan-In and Fan-Out.
Fan-In
Fan-Out
The following picture shows a Pattern Fan-In, where multiple Topics are aggregated into one Event Sink (subscriber). Note that the Azure Event Grid model supports only a single connectivity model between the Topic and Subscription; in other words, a subscription can have only one input (Topic) and one output (event handler) endpoint.
Pattern Fan-In
The above Pattern Fan-In uses multiple topics but the same Event Sink (event handler).
The second pattern is the Pattern Fan-Out, where a single Topic is distributed to multiple Event Sinks. In other words, the subscriptions share the same input Topic, but each has a different output endpoint for its destination (event handler):
Pattern Fan-Out
Note that the above Fan-Out pattern is used for the concept of this Tester tool, where a Tester Subscription with a "localhost event handler" is added for exploration of the event source topic.
The following picture shows an Azure Event Grid Tester which allows exploring event messages, subscriptions, firing custom topics, etc. from the localhost machine.
Azure Event Grid Tester
That's great. Let's describe its concept and design. I am assuming you have some knowledge of the Azure Event Grid.
The concept of the Azure Event Grid Tester is based on cloning an event subscription for a "localhost destination" event handler. In other words, for each Topic (Subscription) of interest, the Tester will create a new subscription, or clone an existing one, for consuming its event messages.
Basically, there are two ways to receive an event message from the Azure Event Grid in an on-premises local environment: using a tunnel channel or an Azure Hybrid Connection. The Azure Event Grid Tester has built-in support for both ways to subscribe an event subscription with these event handlers. Note that the Azure Hybrid Connection was still in preview at the time of writing this article and has just recently been added.
Azure Event Grid
The following picture shows a scenario using the free and publicly-available tunnel tool called ngrok.
As you can see, between Azure and the Tester on the local machine is the ngrok tunnel, handling secure connectivity via NAT or firewall on the HTTP ports (80, 443). For this tunnel, the ngrok proxy needs to be installed on your local machine to forward requests to your destination application. The ngrok tunnel is accessible via a public internet address, which ngrok generates for you when the tunnel is created.
ngrok
This ngrok address (the https forwarding address shown by ngrok) is used for the WebHook endpoint handler at the Subscription-Tester.
The second way to subscribe the local Tester to the Azure Event Grid topic is using the Azure Relay Hybrid Connection. The Tester has a built-in listener for one selected Hybrid Connection, so via this event handler, we can see any event message from the Azure event sources.
The ngrok and Hybrid Connection channels are fully equivalent in terms of Tester features. Note that the Hybrid Connection is billable.
As I mentioned earlier, there are two Topic-Subscription patterns. From the Tester's point of view, this pattern is Fan-In, where all event interests are targeted into the Tester subscriber.
Fan-In
For an existing event subscription, we can use a Fan-Out pattern with a cloned Subscription-Tester.
Recently, the preview version added a new feature to the event subscription: the deadLetterDestination property. Today, we can select only one endpointType, BlobStorage. If the event message fails on delivery to the event subscriber, the message is delivered as a deadLetter message (blob) into the specified container. Note that this deadLetter container can itself be event driven, so the Tester can also see this eventing.
deadLetterDestination
deadLetter
Note that the limit on Subscriptions per Topic is 500 per region, which is not critical for our Tester; besides, there is an easy way to delete those Tester-Subscriptions after usage.
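As a sketch only (the property names follow the 2018-05-01-preview schema as I understand it; the endpoint URL, resource IDs, and container name are hypothetical), the properties body of an event subscription with a retry policy and dead-lettering might look like:

```json
{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": { "endpointUrl": "https://206e8394.ngrok.io/api/updates" }
    },
    "retryPolicy": {
      "maxDeliveryAttempts": 3,
      "eventTimeToLiveInMinutes": 1440
    },
    "deadLetterDestination": {
      "endpointType": "StorageBlob",
      "properties": {
        "resourceId": "/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Storage/storageAccounts/{account}",
        "blobContainerName": "deadletter"
      }
    }
  }
}
```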
As a part of the concept, this is how we can manage the Azure Event Grid resources. The solution for this part is done using the REST Management APIs, see the following:
The Azure Event Grid Tester communicates with Azure via REST Management API calls. The following screen snippet shows an example of a REST API call to obtain the list of custom topics created in the Azure Event Grid for a specific scope (Azure SubscriptionId).
Every REST call to the Management API requires an Authorization Bearer token header. This Bearer token is obtained from the Azure Active Directory; see more details later. Once we have a valid Bearer token, the Tester can manage Azure Event Grid resources such as querying providers, topics, subscriptions, etc. As a part of the Tester, there is a generic REST-API node in the treeView for REST client requests loaded from a template json file located in the binaries folder. Note that the name of this template json file is the name of the Azure Subscription. You can customize this template json file based on your needs; a sample of this file is included in the package (rk20180618.json).
rk20180618.json
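Under the covers, such a Management API call is a plain HTTP request carrying the Bearer token. A hedged sketch of the topics query (the api-version matches the preview version used in this article):

```
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.EventGrid/topics?api-version=2018-05-01-preview
Authorization: Bearer <token from Azure Active Directory>
```

The response is a JSON list of the custom topics in the given Azure subscription.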
The Tester design is oriented around the ResourceGroups node with drilldown to the requested resource. The end of the selection is the Microsoft.EventGrid provider (blue color node) where the event subscriptions are stored. The event messages received by the Tester subscriber will be displayed below the selected provider, as shown in the following screen snippet:
Microsoft.EventGrid
The above example shows that the Tester subscriber received an event message delivered by changes in the resource group (TESTER-74902843) and six event messages from the custom topic rk20180818topic1 (TESTER-49433140). By clicking on a specific message node, we can see the message content; see the following screen snippet:
rk20180818topic1
The blue color node such as a Microsoft.EventGrid provider has the following context menu. In this node we can manage an Event Subscription:
Microsoft.EventGrid
OK, it's show time. Let's describe what this tiny tool can do for you. I am assuming you have some prior exposure to the Azure Event Grid.
Before you start using the Azure Event Grid Tester, I am assuming you have already read Azure Event Grid MSDN articles and you have an active Azure account.
Before you can use this Tester tool, it is necessary to register it within the Azure Active Directory (AAD). The tool cannot establish a connection to your Azure account if the registration doesn't exist.
Create your default AAD directory, if you don't have it. Then Add (Register) a new Application, see the following picture:
Populate the following properties:
Name: EventGridTester
Application Type: Native
Redirect Uri:
After pressing the Create button, we can get the ApplicationID for our Tester tool; save it for a later step:
ApplicationID
Now, we have to add API access, so go to Settings and select the following API, the Windows Azure Service Management API:
Setting
Windows Azure Service Management API
After the above selection and clicking on Done (next page), we are done with this step. Our Tester has been registered in our AAD with Management API access, so we can get a Bearer token for authorizing calls.
Done
By adding more users to the AAD, the Tester tool can be used with each user's credentials individually.
In this step, when the Tester is launched, the following prompt dialog shows up. This is a warning dialog noting that the ngrok tunnel doesn't exist, so press the OK button to continue the process.
After that, the following form should show up on your screen:
As I mentioned earlier, there are two ways to subscribe the Tester: using a ngrok tunnel or a Hybrid Connection. If your Azure account already has a Hybrid Connection, or you are going to create one, the next step can be skipped and you may continue with Step 3.
If you decided to use a ngrok tunnel for connectivity between Azure and this Tester, the following steps must be completed every time your local machine is powered off or the ngrok proxy has been closed.
First of all, download ngrok for Windows and unzip it into a folder, for instance: c:\util.
Launch the ngrok.exe application as an Administrator, see the following screen snippet:
ngrok.exe
From the File menu, select the Ngrok Tunnel and click on Get cmd line, as shown in the following screen snippet:
Your clipboard now contains a ngrok cmd line, so paste it into the console program as shown in the following picture:
Run this cmd line and the result is the following:
At this moment, the ngrok tunnel between the internet and your local machine has been created. The above screen shows a public internet address of the ngrok tunnel and where the request is forwarded.
In this step, we want to get the ngrok tunnel address into the Tester, so click on Get tunnel address, as shown in the following picture:
After this action, the tunnel host name will be updated with the actual address, in this example 206e8394.ngrok.io.
That's all for ngrok tunnel.
Note: when the Tester is reopened and the ngrok proxy is still running, the Tester will automatically get this ngrok tunnel address, so there is no need to repeat Step 2.
To perform this step, an Azure Hybrid Connection is required beforehand. Let's assume our Azure account has one, so we can continue with the following screen to open it:
Clicking on the Open menu, the following dialog box shows up:
From the Azure portal, copy and paste the Name and ConnectionString of the Hybrid Connection, then select the row and press the OK button.
Name
ConnectionString
At this moment, the listener for the specified Hybrid Connection has been created and opened. The result is logged in the Log panel, and the context menu has changed to the Close action; see the following screen snippet:
As you can see, the step for opening the Hybrid Connection is very simple and straightforward compared to the ngrok tunnel channel.
To continue, we need to make one more step and then the Azure Event Grid Tester is ready for usage. We need to sign in to our Azure account. OK, let's do it.
Note: Prior to this step, and only the first time you log in to your Azure account, go to the binary folder of the Tester assemblies and rename the template (rk20180618) json file to the name of your Azure Subscription.
rk20180618
Go to the File menu and select the Login to Azure Event Grid:
Actually, this signs the Tester tool in to your AAD, which allows it to manage the Event Grid resources, etc.
The following dialog box shows up on your screen:
Add the new row with values:
Name: Name of the Azure Subscription
Id: Azure Subscription ID (guid)
ApplicationKey: Registered Application Id from the AAD
Select the row and press the OK button; you will be asked for the username/password credentials registered in the AAD:
Once your credentials have been accepted (valid for 60 minutes), the Tester tool will show the account under the AzureEventGrid treeView node; see the next step.
AzureEventGrid
After logging the Tester in to your Azure account, the Tester is ready for usage. The Azure account (your Azure Subscription) is represented as a child node of the AzureEventGrid root. Each Azure account in this treeView has three subnodes, as shown in the following picture:
More details about these nodes:
ResourceGroup
Event Subscription
Custom Topic
If you renamed the template json file to the name of your Azure Subscription (in this example, the name is rk20180618), then the REST-API node will look like the following:
This example demonstrates simulating a Custom Topic and receiving an event message. First, select the resource group where the specific custom Topic is located; see the following screen snippet:
Next, find your custom Topic resource (in this example, the name is rk20180618topic1) and click on Select Eventing Resource in the context menu of the resource group selected in the previous action:
rk20180618topic1
Select Eventing Resource
Now, we have a Microsoft.EventGrid provider where we can see all Event Subscriptions. Because this is the end of the drilldown process, the node is colored blue. Also, within this resource, a custom Topic will automatically show a FireTopic node (green color); see the following screen snippet:
FireTopic
The above picture shows details about all subscribed Event Subscriptions for this topic. The details are shown in a datagrid, and for the selected row, we can see the event subscription properties as json formatted text.
Selecting the Fire Topic node, we can see a REST Client with a sample payload for firing a Custom Topic, so press the SEND button in this REST Client:
Fire Topic
Note: To generate guid and/or datetime properties every time the Send button is pressed, use the following substitutions:
[
{
"id": "$Guid.NewGuid",
"mySubject": "/myapp/vehicles/motorcycles",
"myEventType": "recordInserted",
"eventTime": "$DateTime.UtcNow",
"data": {
"make": "Ducati",
"model": "Monster"
}
}
]
What has happened here? The REST call sent a POST request to the custom Topic endpoint (see the url address). This publisher endpoint emits an event message for Event Grid delivery to all subscribers on this topic, based on their subscriptions. The following screen snippet shows a new node, Events, with all messages related to this topic:
That's great. We can see the full eventing pushed by publisher and received by Tester subscriber on this topic.
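For reference, the message delivered to the subscriber is wrapped in the Event Grid event schema. A hedged sketch of a delivered event follows (all field values here are illustrative, not taken from a real run):

```json
[
  {
    "id": "831e1650-001e-001b-66ab-eeb76e069631",
    "topic": "/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.EventGrid/topics/{topic}",
    "subject": "/myapp/vehicles/motorcycles",
    "eventType": "recordInserted",
    "eventTime": "2018-06-18T10:02:19.4556811Z",
    "data": { "make": "Ducati", "model": "Monster" },
    "dataVersion": "",
    "metadataVersion": "1"
  }
]
```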
In this step, I am going to demonstrate Create/Clone/Edit/Delete of a subscription on the selected Topic (Microsoft.EventGrid provider) - the blue color node.
The following screen snippet shows a context menu for this node and selected row of the Event Subscription:
Basically, here are the following choices for Event Subscription:
Create a new Event Subscription for Tester:
Create a clone of selected Event Subscription for Tester:
Edit selected Event Subscription:
As you can see, the above Event Subscription dialogs are the same, with predefined properties and some read-only properties. The Subscriber Type has a special feature for WebHook and Hybrid Connection that allows selecting a ngrok tunnel address or opening a Hybrid Connection.
The response of the REST API call to the Management API is logged into the Log panel.
In addition, the context menu has more actions such as:
This example demonstrates delivery of a custom topic event with a retry policy and the dead-lettering feature. For this example, we need the following:
Let's set up the above requirements.
The following screen snippet shows our needs on the Azure Event Grid Tester:
Clicking on the custom topic event node, we can select, for test purposes, to always return an error with code = 503, so when the tester subscriber receives an event, the response will be HttpStatus.ServiceUnavailable (503):
HttpStatus.ServiceUnavailable
Now, the event subscriber at the custom topic is the following:
As you can see above, maxDeliveryAttempts = 3, so we are expecting the event message to be delivered 3 times; after that, the message is stored in the deadletter container as a dead-letter message.
maxDeliveryAttempts = 3
The regular storage event subscription:
Note that both event handlers are Hybrid Connections, so we have to open them in our Tester to receive events.
Now, clicking on the Fire Topic node, we can simulate a custom topic and see how the retry policy works. After all retry deliveries, you should see the following:
As the above picture shows, the event message was sent 3 times to the Tester Subscriber (#0 0sec, #1 ~8sec, #2 ~38sec). The dead-letter has been sent after ~5 minutes. I think this timeframe does not have to be so long; 5-10 seconds after the last failed delivery would be enough. I am going to ask the Microsoft team why we have to wait 5 minutes for dead-lettering.
OK, and finally the following screen snippet is the picture of the dead-letter message:
Note that the correlation Id (such as an id generated by the custom topic) is part of the message payload. I think it should also be in the blob metadata and/or in the blob pathname.
Anyway, that is all for this example.
The REST-API node allows sending an http(s) request to the url address. The request headers can be inserted simply as name/value pairs with a colon delimiter. For instance, content-type:application/json. Each line represents one header only.
content-type:application/json
There is one special header in the request, the Authorization header. If this header doesn't exist, the runtime client proxy will request a Bearer token from the Azure Active Directory.
Authorization
Using this REST-API node is very straightforward, like any other REST tool: set the Url, headers, and method, and click on the Send button. The request will return a response status and payload.
Send
As I mentioned earlier, the Azure Management APIs are used for handling Azure Event Grid resources, so the Azure Event Grid Tester allows making these REST API calls with the minimum required settings.
Based on the MSDN document Event Grid REST API, each request is described by a simple json object, which is used for creating its tree node in the tester. Each request definition represents one item in the json array. Note that the name of the json file (where the array is stored) must be the name of the Azure Subscription, for instance rk20180618 with a json extension, otherwise the tester will not find it.
json
The structure of this request template is shown in the following code snippet:
[
{
"category": "CustomTopic/EventPublisher",
"name": "RegenerateKey",
"method": "POST",
"url": "/resourceGroups/rk2018-tester/providers/Microsoft.EventGrid/topics/testerTopic/regenerateKey?api-version=2018-05-01-preview",
"headers": "content-type:application/json | myHeader:abcd",
"payload": {
"keyName": "key2"
},
"description": null
},
{
...
}
]
As you can see in the above code, the headers splitter is the pipe character (|), and the url address can contain only the path and query; the protocol with the domain will be prepended by the tester during runtime, such as: https://management.azure.com/subscriptions/{subscriptionId}
If the url address contains a full address (protocol, domain, etc.), the tester will accept it without any modification.
Note: The article download contains a sample of the REST-API templates under the name rk20180618.json, so please rename it based on your Azure Subscription name. When the Tester tool is used with multiple Azure accounts, each Azure Subscription will have its own template json file located in the binaries folder.
This example demonstrates how to create a template for the REST-API node that creates a new Storage account rk2018stg in the resource group rk20180618resgroup. The following item can be added to the array of REST API calls in the json file:
rk2018stg
rk20180618resgroup
{
"category": "Misc",
"name": "CreateStorageAccount",
"method": "PUT",
"url": "/resourceGroups/rk20180618resgroup/providers/Microsoft.Storage/storageAccounts/rk2018stg?api-version=2017-10-01",
"headers": null,
"payload": {
"sku": {
"name": "Standard_LRS",
"tier": "Standard"
},
"kind": "StorageV2",
"location": "westus",
"properties": {
"accessTier": "Cool"
}
},
"description": null
}
To obtain the storage keys, we can create a REST-API template like it is shown in the following code snippet:
{
"category": "Misc",
"name": "ListOfKeysForStorageAccount",
"method": "POST",
"url": "/resourceGroups/rk20180618resgroup/providers/Microsoft.Storage/storageAccounts/rk2018stg/listKeys?api-version=2017-10-01",
"headers": null,
"payload": {
},
"description": null
}
The above advanced example demonstrates how easily we can extend this Tester tool by calling a predefined Http template.
Finally, the following screen snippet shows the above templates in the REST-API node under the category Misc.
REST-API
Misc
First of all, the following are prerequisites:
Visual Studio 2017 Version 15.7.5 and up
Microsoft Azure account
Azure Relay Hybrid Connection
Ngrok
Connectivity to the Internet
Downloading packages (source and/or exe) for this article
I am going to describe a few methods and fragments critical to the concept and design of the Azure Event Grid Tester tool. As I have mentioned earlier, the Tester tool communicates with the Azure Management APIs via REST API calls. Every REST request to the Management API must be authenticated using the Bearer token.
Obtaining this Bearer token from the Azure AD is implemented by the following code snippet, located in the Form1.cs source file:
Bearer
private TokenInfo AccessTokenToARM(string clientID, string subscriptionId)
{
    // NOTE: the three URL string literals below were stripped from the original
    // listing; the values shown are the standard ADAL endpoints for a native app
    // (assumptions - the redirect uri must match the one registered in the AAD).
    string redirectUri = "urn:ietf:wg:oauth:2.0:oob";
    authContext = new AuthenticationContext("https://login.microsoftonline.com/common", TokenCache.DefaultShared);
    authContext.ExtendedLifeTimeEnabled = true;
    var ar = authContext.AcquireTokenAsync("https://management.core.windows.net/", clientID, new Uri(redirectUri), new PlatformParameters(PromptBehavior.SelectAccount)).Result;
    return new TokenInfo() { Token = ar.AccessToken, ExpiresOn = ar.ExpiresOn, ApplicationKey = clientID, SubscriptionId = subscriptionId };
}
The above result is stored in the REST-API node as a tag object TokenInfo:
TokenInfo
[Serializable]
[DataContract(Namespace = "urn:rkiss.eventgrid/tester/2018/04")]
public class TokenInfo
{
[DataMember]
public string Token { get; set; }
[DataMember]
public DateTimeOffset ExpiresOn { get; set; }
[DataMember]
public string ApplicationKey { get; set; }
[DataMember]
public string SubscriptionId { get; set; }
}
Each time the REST client (at any treeview node) is going to call an Azure Management API, the following method is performed:
private TokenInfo AccessTokenToARM(TreeNode node, bool regenerate = false)
{
TokenInfo tokenInfo = null;
var node1 = this.GetRestApiNode(node);
if (node1 != null && node1.Tag != null && node1.Tag is TokenInfo)
{
tokenInfo = node1.Tag as TokenInfo;
if (regenerate || tokenInfo.ExpiresOn < DateTimeOffset.UtcNow - TimeSpan.FromMinutes(1))
{
tokenInfo = AccessTokenToARM(tokenInfo.ApplicationKey, tokenInfo.SubscriptionId);
node1.Tag = tokenInfo;
}
}
return tokenInfo;
}
As the above code snippet shows, the Bearer token is obtained from the TokenCache or retrieved again by asking for the user's credentials.
One more interesting implementation is the listener for the Hybrid Connection. When a Hybrid Connection has been selected from the dialog box, the following task is performed:
ThreadPool.QueueUserWorkItem(delegate (object state)
{
try
{
this.InvokeEx(() => this.openToolStripMenuItem.Enabled = false);
var listener = new HybridConnectionListener(selectedHybridConnectionInfo.ConnectionString);
listener.Connecting += (o, hce) =>
{
this.InvokeEx(() => this.richTextBoxLog.AppendText($"[{DateTime.Now.ToLocalTime().ToString("yyyy-MM-ddTHH:MM:ss.fff")}] HybridConnection: Connecting, listener:{listener.Address}\r\n", Color.Black));
};
listener.Online += (o, hce) =>
{
this.InvokeEx(() =>
{
this.hybridConnectionToolStripMenuItem.Tag = listener.Address;
this.hybridConnectionToolStripMenuItem.ToolTipText = listener.Address.ToString();
this.richTextBoxLog.AppendText($"[{DateTime.Now.ToLocalTime().ToString("yyyy-MM-ddTHH:MM:ss.fff")}] HybridConnection: Online, listener = {listener.Address}\r\n", Color.Green);
this.richTextBoxLog.AppendText($" {sastoken}\r\n", Color.Gray);
this.openToolStripMenuItem.Visible = false;
this.closeToolStripMenuItem.Enabled = true;
this.closeToolStripMenuItem.Visible = true;
});
};
listener.Offline += (o, hce) =>
{
this.InvokeEx(() =>
{
this.hybridConnectionToolStripMenuItem.ToolTipText = "";
this.hybridConnectionToolStripMenuItem.Tag = null;
this.richTextBoxLog.AppendText($"[{DateTime.Now.ToLocalTime().ToString("yyyy-MM-ddTHH:MM:ss.fff")}] HybridConnection: Offline, listener = {listener.Address}\r\n", Color.Red);
this.openToolStripMenuItem.Visible = true;
this.closeToolStripMenuItem.Enabled = false;
});
};
listener.RequestHandler = (context) =>
{
try
{
if (!context.Request.Headers.AllKeys.Contains("Aeg-Event-Type", StringComparer.OrdinalIgnoreCase) || !string.Equals(context.Request.Headers["Aeg-Event-Type"], "Notification", StringComparison.CurrentCultureIgnoreCase))
throw new Exception("Received message is not for EventGrid subscriber");
string jsontext = null;
using (var reader = new StreamReader(context.Request.InputStream))
{
var jtoken = JToken.Parse(reader.ReadToEnd());
if (jtoken is JArray)
jsontext = jtoken.SingleOrDefault<jtoken>().ToString(Newtonsoft.Json.Formatting.Indented);
else if (jtoken is JObject)
jsontext = jtoken.ToString(Newtonsoft.Json.Formatting.Indented);
}
this.InvokeEx(() => this.AddMessageToTreview(JsonConvert.DeserializeObject<eventgridevent>(jsontext), context.Request.Headers, jsontext));
}
catch (Exception ex)
{
this.InvokeEx(() => this.richTextBoxLog.AppendText($"[{DateTime.Now.ToLocalTime().ToString("yyyy-MM-ddTHH:MM:ss.fff")}] HybridConnection: Message processing failed - {ex.InnerMessage()}\r\n", Color.Red));
}
finally
{
context.Response.StatusCode = HttpStatusCode.NoContent;
context.Response.Close();
}
};
this.mre.Reset();
listener.OpenAsync();
this.mre.WaitOne();
listener.CloseAsync();
}
catch (Exception ex)
{
this.InvokeEx(() => this.richTextBoxLog.AppendText($"[{DateTime.Now.ToLocalTime().ToString("yyyy-MM-ddTHH:MM:ss.fff")}] Open HybridConnection failed - {ex.InnerMessage()}\r\n", Color.Red));
}
finally
{
this.InvokeEx(() => this.hybridConnectionToolStripMenuItem.Tag = null);
this.InvokeEx(() => this.openToolStripMenuItem.Enabled = true);
this.InvokeEx(() => this.openToolStripMenuItem.Visible = true);
this.InvokeEx(() => this.closeToolStripMenuItem.Visible = false);
}
});
</eventgridevent></jtoken>
As the above code shows, the background task will create a listener object for Hybrid Connection from its connection string. Then there are three handlers for online/offline and the handler for received request. This handler handles an incoming event message from the Azure Event Grid. The listener is opened until the signal from the ManualResetEvent object such as a Close item on the context menu or closing/exiting the Tester tool.
That's all for implementation.
This article gives you a tiny tester for Azure Event Grid. It can be your helper while evaluating and exploring the event driven resources in the Azure namespaces. This tinny tool is a next tool from my line such as Azure Service Bus Tester and Azure IoT Hub Tester. I hope you will find useful.
This appendix described all new Azure Event Grid version 2018-09-15-preview implemented features in the Azure Event Grid Tester version 1.1.0.0.
I do recommend to download it into separate folder and then manually copy/paste all your configurations (such as AzureSubscriptionsDialog and AzureHybridConnectionsDialog) from the previously version. Also, your subscription REST-API json file must be copied into this new folder and updated based on the sample file rk20180618.json if you want to use new templates related with an version 2018-09-15-preview. Note, that this updating process of the custom REST-API templates node is manual and it will require to re-process it for each new version of the Tester.
AzureSubscriptionsDialog
AzureHybridConnectionsDialog
OK, let's describe what is the new in this version.
The Event Domains is a big new feature of this update version 2018-09-15-preview to manage the flow of event domain topics. The previous version allows to handle each custom topic as an event publisher endpoint in the manner one to one, where the subscriber subscribed for existing custom topic in the tightly coupled manner.
domain topics
custom topic
In the event domains model, we have a different pattern such as one to many. One event domain endpoint can have multiple dynamically topics, see more details in the document Understand event domains for managing Event Grid topics.
The following picture is from that document and showing a model of the Event Domain:
As you can see, the above Event Domain Endpoint is an entry point of the event publisher for distributing event messages within the Event Domain based on the topic property in the event message. The payload of this entry point is an array of the event messages. In the case when the domain topic doesn't exist, the event message is routed to the Event Domain root (no topic), where can be subscribed the domain scope subscriptions. Note, that the above picture doesn't show domain scope subscriptions.
The domain topic is created during its first subscription, that is the major difference to the custom topic. The Event Domain has built-in the capability to route the event message to the Event Domain route if there is no match on topic. This Event Domain Pub/Sub Model enables to subscribe a subscription in loosely coupled manner and dynamically forwarded event message to the specific topic instead of the event domain route. In other words, the event domain root subscriber can easily figured out all existing domain topics.
As I have mentioned, the domain topic is created during its first subscription. In addition, the domain topic is automatically deleted when the last subscription has been deleted. Thanks for this built-in feature for event domain Pub/Sub model, it looks like very useful.
Based on the Event Domain feature, the Tester UI tree node has been extended like is shown on the following screen snippet:
As you can see, the above Event Domain (myDomain) has a node for receiving root event messages and a special resource node Topics. We can select a domain topic from the right datagrid and add it to the tree node for its exploration.
myDomain
Topics
To create a domain topic using this Tester requires to have in prior an Event Domain (for example myDomain) resource (endpoint). Selecting a New Subscription on this node, the following dialog will show up:
New Subscription
As you can see, there is a domainTopic textbox in the above dialog. If this textbox is empty, the subscription will be created for myDomain scope, otherwise the domain topic will be created if this subscription is the first one for this topic.
domainTopic
myDomain
Note: Using the REST-API EventDomain/EventPublisher node with a template CreateOrUpdateDomain in the Tester, we can create/update any Event Domain resource in our resource group.
REST-API EventDomain/EventPublisher
CreateOrUpdateDomain
This new feature allows to define time to live duration for subscription. Expiration time will automatically delete a subscription. If this subscription is for domain topic and it is the last one, then also a domain topic will be deleted from the Event Domain scope. Note, that expirationTimeUtc property is updateable property of the subscription, so it can be updated/removed using an Edit Subscription dialog.
expirationTimeUtc
The advancedFilters property is an array of the filters allows filtering on envelope properties as well as the first layer of the data payload. The following screen snippet shows this property of the subscription dialog. The syntax format for each filter is used the same as Azure CLI 2.0 programming. The filter delimiter is used character '|' like is highlighted in the picture and the delimiter in the array of values is used a character space ' '. Validation of the filters is during the typing in the tooltip textbox.
advancedFilters
In advanced filtering, you specify the:
key - The field in the event data that you're using for filtering. It can be a number, boolean, or string
operator type - The type of comparison
value or values - The value or values to compare to the key
Based on the above description, the advancedFilters format is:
key operatorType value/values [ | other filter ...]
The following operatorTypes, keys, values are supported:
The available operators for numbers are:
The available operator for booleans is:
BoolEquals
The available operators for strings are:
All string comparisons are case-insensitive.
For events in the Event Grid schema, use the following values for the key:
Event Grid schema
For events in Cloud Events schema, use the following values for the key:
Cloud Events schema
For custom input schema, use the event data fields (like Data.key1).
The values can be:
More details about the advanced filtering for Event Grid Subscription can be found here.
This feature has been already built in the Tester version 1.0, here I would like to notice, that after with Microsoft Event Grid team discussion, the deadLetterDestination property can not be removed from the event subscription using a REST PATCH call like for property labels, for instance. In other words, once the deadLetterDestination property has been populated (deadLettering is enabled) we can not turn it off. The enabled deadLettering feature can only be modified. For this issue, we have to make a Clone Subscription and select a None EndpointType in the deadLetterDestination groupbox.
Clone Subscription
None
[0] An introduction to Azure Event Grid
[1] Azure REST API Reference
[2] Concepts in Azure Event Grid
[3] Event Grid Relay listener
[4] Event Grid REST API
[5] Version 2018-09-15. | https://codeproject.freetls.fastly.net/Articles/1254463/Azure-Event-Grid-Tester | CC-MAIN-2021-31 | refinedweb | 5,506 | 51.58 |
OcempGUI 0.2.0-alpha2 has been released.
OcempGUI is a python based toolkit for creating (graphical) user
interfaces using the pygame library. It offers various widgets and base
classes, which make it suitable for a broad range of projects and easily
extensible.
Note:
----
This is a prerelease package, which does not claim to incorporate full
functionality. Parts of it can be unstable or broken, thus it is not
intended for a production use.
Dependencies
------------
* python (>= 2.3)
* pygame (>= 1.7.1)
Optional:
* ATK (>= 1.11.0)
* pkg-config
Features in 0.2.0-alpha2
------------------------
access package:
* Implemented various new interfaces from the AtkUtil namespace.
widgets package:
* VIDEOREISZE events are supported by the Renderer class using the
'support_resize' attribute.
* Additional pygame flags can be passed to the
Renderer.create_screen() method.
Changes in 0.2.0-alpha2
-----------------------
* Documentation updates.
* Fixed a minor installation issue, so that the Style.py file is
recompiled after adjusting the paths.
access package:
* Sanitized method names in the global namespace.
widgets package:
* Fixed several updating bugs in the Renderer class.
* Fixed the removal of widgets from the event system, if
Renderer.remove_widget() is used.
* Fixed ScrollBar behaviour in ScrolledList class.
* Fixed a bug in the Scale classes, which prevented them from
recognizing mouse clicks correctly, when used in Bins or Containers.
* Fixed a bug in the Container class which prevented it from adding
its children to a Indexable.
Download
--------
The package and its signature are available from here:
MD5 sum = 7f7dddbb1115646e495a5b4d6e0d84ea
Special thanks to Bertrand Cachet for his help.
Regards
Marcus | http://sourceforge.net/p/ocemp/mailman/ocemp-devel/thread/20060704151129.GA628@medusa.sysfault.org/ | CC-MAIN-2015-11 | refinedweb | 255 | 61.43 |
Hi,
There's been a few mentions about writing an XSL transformation for
converting node type definitions from the XML format to the more
readable CND format. The equivalent functionality is already available
in terms of the XML and CND reader and writer classes in Jackrabbit,
but an XSL transformation is in many cases a more convenient tool.
Attached is a simple implementation of this transformation. I just
hacked it together for a moment's need, so it for example fails to
properly quote strings and doesn't set up correct namespace mappings.
I'm posting it hoping that someone perhaps finds it useful or that it
could eventually evolve to a point where we could use it to avoid the
duplicate functionality in the node type reader and writer classes in
Jackrabbit.
Simple usage with the xsltproc tool from libxslt:
$ xsltproc xml2cnd.xslt nodetypes.xml
... (outputs the CND node type definitions)
BR,
Jukka Zitting
--
Yukatan - - info@yukatan.fi
Software craftsmanship, JCR consulting, and Java development | http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200609.mbox/%3C510143ac0609051418n9b038b0h720721c876464df0@mail.gmail.com%3E | CC-MAIN-2014-41 | refinedweb | 166 | 51.38 |
Tutorial 2: Setting up OpenGL with WindowsEdit
Using the code from the first lesson (Win32 primer), we are going to set up the Windows program to work with OpenGL. First off, start up the Dev – C++ project that we worked on in the first lesson.
Linking the OpenGL librariesEdit
When you open up your Windows project, go to the top menu bar and click on Project and then Project Options. In the Project Options window, click on the Parameters tab. Under the third window (the “Linker” window) click on the Add Library or Object button. From there navigate to where you installed Dev – C++ (most likely “C:/Dev-Cpp”). When you get there, open up the “lib” folder. In there click once on the libglu32.a file and then hold down the Control key and click on the libopengl32.a file to select them both. Then click the Open button on the bottom of the dialog. Then click the OK button:
Now, going back to your source code, right below the including of the windows header, put these include headers in:
#include <gl/gl.h> #include <gl/glu.h>
These will include the appropriate OpenGL and GLU headers.
ContextsEdit
Now we need to create two variables called Contexts. A context is a structure that performs certain processes. The two contexts that we are going to deal with are the Device Context and the Rendering Context. The Device Context is a Windows specific context used for basic drawing such as lines, shapes, etc… The Device Context is only capable of drawing 2- dimensional objects. The Rendering Context, on the other hand, is an OpenGL specific context used to draw objects in 3D space. The Device Context is declared with HDC and the Rendering Context is declared with HGLRC. Declare the two context variables like this right below the including of the headers:
HDC hDC; //device context HGLRC hglrc; //rendering context
We are now going to initialize both of these contexts so we can in the end draw something OpenGL related.
In the code, go to the message procedure (WinProc). Right now all we have is one message (WM_DESTROY). What we want is to create the contexts when the program is first opened. The windows message for that is WM_CREATE which is processed when the window first opens up:
case WM_CREATE:
Under that message we have to retrieve the current device context. To do that we set the regular Device Context (hDC) equal to the function GetDC() which takes as one parameter the window handle which we declared at the WinProc declaration (hWnd). This function returns the current Device Context:
hDC = GetDC(hWnd);
For now we will leave the message like this. We will get back to this message later. What we need to do know is set up what is called the Pixel Format of the program.
Pixel FormatEdit
The Pixel Format is how, when drawing something, the pixels appear on the window. The structure that holds the pixel data is called the PIXELFORMATDESCRIPTOR:
typedef struct tagPIXELFORMATDESCRIPTOR { // pfd;
There are a lot of fields here. The good thing is we only have to fill in a few fields in order to get this structure working. Lets start setting up the pixel format in a new function.
At the top of the code, add in this function call:
void SetupPixels(HDC hDC) {
The reason for taking as parameter a device context is because when we set the pixel format to be working with the window, we need to pass as a parameter to one of the functions the device context of the window.
Now, within the function we just created created declare a variable of type Integer called “pixelFormat”. This variable will hold an index that references the pixel format we are going to create. After that declare a variable of type PIXELFORMATDESCRIPTOR called “pfd” to hold the actual pixel format data:
int pixelFormat; PIXELFORMATDESCRIPTOR pfd;
Now lets start filling in a few of the fields of the pixel format.
The first field we fill in is the nSize field which is set to the size of the structure itself:
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
The next field we will fill in is the flags field called dwFlags. These set certain properties of the pixel format. We will set this to three flags. The first, PFD_SUPPORT_OPENGL, lets the pixel format to be able to draw in OpenGL. The next one, PFD_DRAW_TO_WINDOW, tells the pixel format to draw everything onto the window we provide. The final one, PFD_DOUBLEBUFFER, allows us to create smooth animation by providing to buffers to draw on, which get switched to make animation smooth:
pfd.dwFlags = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW | PFD_DOUBLEBUFFER;
The next field we will fill in is the version field, nVersion, which is always set to 1:
pfd.nVersion = 1;
The next field will be the pixel type, iPixelType, which sets the type of colors we want to support. For this we will use PFD_TYPE_RGBA so we get a red, green, blue, and alpha color set (don’t worry about the alpha part yet. We will go over that when the need comes):
pfd.iPixelType = PFD_TYPE_RGBA;
The next field, cColorBits, specifies the number of color bits to use. We will set this to 32:
pfd.cColorBits = 32;
The final field we set, cDepthBits, sets the depth buffer bits. For now set it to 24:
pfd.cDepthBits = 24;
After we set the fields of the pixel format, we need to set the device context to match the newly created pixel format. For this we use the ChoosePixelFormat() function, which takes as the first parameter the device context and as the second parameter the address of the PIXELFORMATDESCRIPTOR structure we created earlier (pfd). We set the integer variable we declared at the beginning of the function (pixelFormat) equal to this function:
pixelFormat = ChoosePixelFormat(hDC, &pfd);
Now we set the pixel format to the device context with the SetPixelFormat() function that takes as three parameters the device context, the pixel format integer, and the address of the PIXELFORMATDESCRIPTOR structure. Also, we will check to make sure this function worked. This particular function returns a Boolean value depending on whether it was successful or not. We will check if it was. If it was not successful, then we alert the user with a message box and close the program down:
if(!SetPixelFormat(hDC, pixelFormat, &pfd)) { MessageBox(NULL,"Error setting up Pixel Format","ERROR",MB_OK); PostQuitMessage(0); }
Now we end the function since we are done setting up pixels:
}
Now we go back to the WM_CREATE message in the window procedure, and right under the obtaining of the device context (hDC = GetDC(hWnd)) we call the SetupPixels() function and pass as the parameter the device context:
SetupPixels(hDC);
Now remember that rendering context we declared earlier (hglrc). Well, we will now create it using the wglCreateContext() function which takes as parameter the regular device context:
hglrc = wglCreateContext(hDC);
Now we will make the rendering context the current one we will use throughout the program. For that we use the wglMakeCurrent() function which takes as a first parameter the device context and the second parameter the rendering context:
wglMakeCurrent(hDC, hglrc);
Now we are finished with the WM_CREATE message. Make sure to include the break statement at the end of the message:
break;
Remember that WM_DESTROY message we created in the first lesson which is responsible for the exiting of the program? We need to release the rendering context there so there are no memory leaks. So go back to the WM_DESTROY message and we will add to the code there.
First, before we actually delete the rendering context, we have to make sure it is no longer active. For that we use the wglMakeCurrent() function again. This function, if you remember, takes as parameters the device context and the rendering context. For this we pass the device context, but for the rendering context we put in NULL to indicate we don’t want the rendering context to be current:
wglMakeCurrent(hDC,NULL);
Now we are safe to finally release the rendering context. For this we use the wglDeleteContext() function which takes as a single parameter the rendering context to delete:
wglDeleteContext(hglrc);
Make sure after these two function calls you still have the PostQuitMessage() and the break statement as usual. Here is the whole WM_DESTROY message defined:
case WM_DESTROY: wglMakeCurrent(hDC,NULL); wglDeleteContext(hglrc); PostQuitMessage(0); break;
Window ResizingEdit
Resizing happens when someone expands the width and/or height of the window. If we don’t control this, OpenGL will get confused and start drawing things out of whack. So first we will create a function called Resize() that will handle the resizing of the window. This function will take as two parameters the width and height of the window, which I will discuss how we receive those parameters:
void Resize(int width, int height) {
The first thing we have to do in this function is called setting the viewport. The viewport is the portion of the window we want to see the OpenGL drawing going on. The function to set the viewport is called glViewport():
void glViewport( GLint x, GLint y, GLsizei width, GLsizei height );
The first and second parameter, x and y, are the coordinates of the lower – left corner of the viewport. Since we want to see the drawing on the whole window, we set both of these to 0 indicating the lower –left corner of the window. The third and fourth parameter, width and height, is the width and height in pixels of the viewport. Since we want it to cover the whole window, we set it to the width and height parameters that were passed to the Resize() function. Make sure to also cast the third and fourth parameters to the GLsizei data type just to be safe. So here is the glViewport() function with the parameters filled in:
glViewport(0,0,(GLsizei)width,(GLsizei)height);
Now that we got the viewport set up, we need to set up what is called the projection. Projection is basically how the user views everything. There are two types of projection: Orthographic and Perspective. Orthographic is an unrealistic view. To better explain it, when an object is drawn in an orthographic 3D scene, the objects that are placed farther away from another object look like they are the same size, even with the distance taken into account. Perspective, on the other hand, is more realistic such as objects farther away appear smaller than objects closer to the viewer. Now that you got a better idea of projections, lets create one in code. We will use for this lesson the Perspective projection.
To start editing the projection, we need to select the Projection matrix. To do that we use the glMatrixMode() function which takes as a single parameter the matrix we want to edit. To edit the projection matrix, we give the function the value GL_PROJECTION:
glMatrixMode(GL_PROJECTION);
Before we start editing the projection matrix, we need to make sure that the current matrix is the Identity matrix. To do that we call the glLoadIdentity() function which takes no parameters and simply loads the identity matrix as the current matrix:
glLoadIdentity();
To set the perspective projection, we use the gluPerspective() function:
void gluPerspective( GLdouble fovy, GLdouble aspect, GLdouble zNear, GLdouble zFar );
The first parameter, fovy, is the field of view angle in the y direction in degrees. You can set this to 45 to get a normal angle of view. The second parameter, aspect, is the field of view in the x direction. This is usually set by the ratio of width to height. The third and fourth parameters, zNear and zFar, is depth distance that the viewer can see. We set zNear to 1.0 and zFar to 1000.0 to give the user a view of a lot of depth.
Here is the function with all the parameters filled in:
gluPerspective(45.0f,(GLfloat)width/(GLfloat)height,1.0f,1000.0f);
Now we have to switch the matrix mode to the model view matrix. Add another call to the glMatrixMode() function, but this time with the parameter of GL_MODELVIEW. The model view matrix holds the object information which we will draw. I will get into more detail about it in later lessons:
glMatrixMode(GL_MODELVIEW);
Now we need to reset the model view matrix by calling the glLoadIdentity() function. After that call, we are finished with the Resize() function:
glLoadIdentity(); }
Now we have to put this Resize() function call in the Window Procedure. The message we will put it in is called WM_SIZE, which gets called whenever the window is resized by the user:
case WM_SIZE:
Now we need some way to keep track of the current window width and height. First off, declare two integer variables called “w” for width and “h” for height right before the switch structure for the messages:
int w,h; switch(msg)
Going back to the WM_SIZE message, we need to set the variables we just created to the current width and height. For this we use the lParam parameter that was passed to the Window Procedure function. To get the width of the window, you use the LOWORD() macro function and put in as a single parameter the lParam variable. It will return the current width of the window. To get the current height of the window, use the HIWORD() macro function, which will return the current height of the window. Finally, pass the two integer variables (w,h) to the Resize() function we created and that will be the end of the WM_SIZE message:
case WM_SIZE: w = LOWORD(lParam); h = HIWORD(lParam); Resize(w,h); break;
Drawing something with OpenGLEdit
Now that we got OpenGL set up with our program, lets test it out to make sure that it is set up right.
First create a new function called Render(). This function will be responsible for all the OpenGL drawing done in this program:
void Render() {
First thing we do in this function is called buffer clearing. I will discuss buffers in a later lesson, but make sure before you render anything on the screen that you clear the buffers you are using. For this we use the glClear() function which takes as parameters the buffers we want to clear. We will put in GL_COLOR_BUFFER_BIT for the color buffer and GL_DEPTH_BUFFER_BIT for the depth buffer:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Now we have to load the identity matrix using the glLoadIdentity() function so we can start fresh at the origin:
glLoadIdentity();
Right now up to this point, the view we have is centered about the origin and the whole window is 1 unit wide and 1 unit high. A unit is a type of measurement that OpenGL uses. This is basically a user – defined measurement system. The default window width and height is now 1 and 1 units. To gain more units on the window, we need to move along the z axis in the negative direction, meaning moving away from the viewer. All we want to do is move 4 units in the –z direction so the window is 4 units wide and 4 units high. For this we use the glTranslatef() function:
void glTranslatef( GLfloat x, GLfloat y, GLfloat z );
The parameters (x,y,z) specify what axis to move along. Since we want to move along the z axis negative 4 units, we leave the first two parameters as 0.0 and put in –4.0f for the z parameter:
glTranslatef(0.0f,0.0f,-4.0f);
Now we are finally going to draw something on the screen. First off, when drawing something on screen and you don’t specify a color for the object to draw, then OpenGL automatically makes the color of the object white. To fix this, before you draw any objects, use the glColor3f() function which takes as three parameters the red, green, and blue color values. One thing you have to know is that the color values you can put in go from 0.0 to 1.0, not the usual RGB values which go from 0 to 255. For now we will set the color of the object we are going to draw to blue by setting the red and green values to 0.0 and the blue value to 1.0f:
glColor3f(0.0f,0.0f,1.0f);
Now we will finally get to the part where we actually draw something. How OpenGL drawing works is that first you have to specify what type of object you are going to draw. After that you specify the vertices of the object.
To start drawing objects, we need to use the glBegin() function which takes as one parameter the type of object we are going to draw. The glBegin() function tells OpenGL that the statements after this function call are going to be drawing specific. The parameter we will put for this function for now is GL_POLYGON which tells OpenGL that we are going to draw a polygon:
glBegin(GL_POLYGON);
Now we need to specify the vertices that we want connected to form the polygon. For this example, all we are going to draw is a square so all we need is to specify 4 vertices. To draw a vertex we use the glVertex3f() function which takes as three parameters the x, y, and z location of the vertex. The OpenGL window initially was 1 unit wide and 1 unit high. Earlier in the Render() function we used the glTranslatef() function to move 4 units away. So that means that the viewing window is now 4 units wide by 4 units high. The origin starts at the complete center of the window and acts like a standard coordinate system. We will draw the first vertex close to the upper – right corner with the coordinate of (1.0f,1.0f,0.0f), meaning we place the vertex one unit to the right and one unit up:
glVertex3f(1.0f,1.0f,0.0f);
Now we will set the other four corners of the square just like we did with the first vertex:
glVertex3f(-1.0f,1.0f,0.0f); glVertex3f(-1.0f,-1.0f,0.0f); glVertex3f(1.0f,-1.0f,0.0f);
Now to end the drawing, we use the glEnd() function to tell OpenGL we are done with drawing. That also finishes our Render() function:
glEnd(); }
Controlling the Render loopEdit
Now we have to go back to the WinMain() function and put the Render() function somewhere that it will be called in a loop. First off, under the UpdateWindow() function call in WinMain(), we need to make sure we have the current device context using the GetDC() function we used before:
hDC = GetDC(hWnd);
One other thing we do before we get to the render loop is clear the screen to a certain color. For this we use the glClearColor() function which takes as 4 parameters the red, green, blue and alpha color values. Set these values to all 0 and put the function right under the previous GetDC() function call:
glClearColor(0.0f,0.0f,0.0f,0.0f);
Now we are going to put the Render() function in the WHILE loop we have at the end of WinMain(). Right under the code while(1), put in the Render() function:
while(1) { Render();
The last thing we have to do before we compile this program is swap the buffers. Since we set the pixel format to be double – buffered we use the SwapBuffers() function which takes as a single parameter a device context. Put this function call right after the Render() function call:
SwapBuffers(hDC);
Now we are done writing the setup of OpenGL with Windows. Compile and run the program to get this output: | http://en.m.wikibooks.org/wiki/OpenGL_Programming/GLStart/Tut2 | CC-MAIN-2014-52 | refinedweb | 3,298 | 68.2 |
.
In script1.py place this:
def main():
do something
if __name__ == "__main__":
main()
In script2.py:
import script1
if condition:
script1.main()()
The child process flushes its output buffers on exit but the prints from
the parent are still in the parent's buffer. The solution is to flush the
parent buffers before running the child:
print("Starting script...")
sys.stdout.flush()
build.run()
Thanks to @Joop I was able to come-up with the proper answer.
try:
import zumba
except ImportError:
import pip
pip.main(['install', '--user', 'zumba'])
import zumba
One important remark is that this will work without requiring root access
as it will install the module in user directory.
Not sure if it will work for binary modules or ones that would require
compilation, but it clearly works well for pure-python modules.
Now you can write self contained scripts and not worry about dependencies.
In my project i call the scrapy code inside another python script using
os.system
import os
os.chdir('/home/admin/source/scrapy_test')
command = "scrapy crawl test_spider -s
FEED_URI='' -s
LOG_FILE='/home/admin/scrapy/scrapy_test.log'"
return_code = os.system(command)
print 'done'
print os.path.dirname(sys.executable)
is what you should use.
When you click it it is probably running through python.exe so you are
removing the extra char from the... | http://www.w3hello.com/questions/Modify-configuration-python-script-inside-modification-tool-script-python- | CC-MAIN-2018-17 | refinedweb | 221 | 60.11 |
Hi guys,
I am trying to create a simple tool to copy XY from where i click the mouse button. What i have done already is:
1. create add-in files using Python Add-In Wizard
2. create toolbar and tool inside it
in *.py file i got:
import arcpy import pythonaddins import win32clipboard as clipboard class p_tool(object): """Implementation for python-add-in-proj_addin.tool (Tool)""" def __init__(self): self.enabled = True self.shape = "NONE" def onMouseDownMap(self, x, y, button, shift): button = 1 shift = 2 clipboard.OpenClipboard() clipboard.EmptyClipboard() xy = str(x1)+' '+str(y1) clipboard.SetClipboardData(xy, clipboard.CF_TEXT) clipboard.closeClipboard() message = xy pythonaddins.MessageBox(message, "My Coordinates", 0)
About win32clipboard - i tested in in python windon in ArcMap and I am able to import it and openclipboard() but emptyclipboard() does not work and gives me that error:
File "<string>", line 1, in <module>
error: (1418, 'EmptyClipboard', 'Thread does not have a clipboard open.')
thanks for your help!
You probably don't need to bother with the .EmptyClipboard() call at all, I'd assume SetClipboardData would just overwrite it if needed. Also make sure you correctly case CloseClipboard() in your code, it's closeClipboard() in your source right now. | https://community.esri.com/thread/115585-copying-xy-to-clipboard-using-python-add-in | CC-MAIN-2019-13 | refinedweb | 201 | 59.5 |
simplemail 0.2
An easy way to send emails in Python
## Overview
**simplemail** is an easy way to send emails in Python. It will use a sendmail binary which is almost always available and ready to go.
## Dependencies
- Python 2.6 or 2.7
## Sample usecase
The code has docstrings which explains how to use the library. But here's the sample usecase I just made up for showing you the benefits and simplicity of using it.
Let's assume that you have a general announcement which you would like to send out to your mailing list. The message itself is common for every member of your list, but you want to greet every person by his or her name in the beginning of your message.
So you have a list of your customer in a dictionary.
from simplemail import Simplemail
mailinglist = {"Bob": "bob@domain.tld",
"Alice": "alice@domain.tld"}
Let's fire the default settings for all our emails.
message = Simplemail(sender="Maillist Owner <postmaster@domain.tld>",
subject="Monthly announcement")
Next you are going to write a default message body for everyone.
body = "We are proudly to present our new feature."
Let's compose a personal greeting for every member of your list and fire an email.
for person in mailinglist.keys():
gr = "Hello, %s\n\n%s" % (person, body)
message.send(recipient=mailinglist[person], body=gr)
Now you have a personal greeting for all of your subscribers.
## Logging handler
There is a special logging handler which utilizes the simplemail library. Here's a code sample:
import logging
from simplemail import Simplemail
from simplemail.handlers import SimplemailLogger
# Initializing Simplemail
mail = Simplemail(sender="Application Error <errors@domain.tld>",
recipient=["you@domain.tld"])
# Initializing logger
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.propagate = False
# Sending SimplemailLogger the Simplemail object, application name
# and the treshold
sl = SimplemailLogger(mail, __name__, logging.WARNING)
logger.addHandler(sl)
# Writing to the log
logger.warn("test")
The handler's constructor expects three arguments. One is mandatory: **mailobject** is the Simplemail object itself, which needs to be initialized before. Two others are optional: **app** contains your application name, which will be mentioned in the mail subject; and **level** which is the treshold at which the handler will be triggered.
## Downloads
This library is available at [PyPi]()
- Author: Ilya A. Otyutskiy
- License: MIT
- Package Index Owner: thesharp
- DOAP record: simplemail-0.2.xml | https://pypi.python.org/pypi/simplemail/0.2 | CC-MAIN-2017-39 | refinedweb | 395 | 51.34 |
First of all StreamWriters and StreamReaders are used for writing or reading from some .txt file, later you can do whatever you want from them, put them into loops, or whatever. So let's start.
First thing you will do is to call IO(input/output) library.
using System.IO;Than you can put start using streamwriter or reader in events. In this case I will be using an openFileDialog and saveFileDialog to show you how you can select file you want to read or give the file a path where you want to save it:
private void button1_Click(object sender, EventArgs e) { if (saveFileDialog1.ShowDialog() == DialogResult.OK) { StreamWriter sw = new StreamWriter(saveFileDialog1.FileName); sw.Write(richTextBox1.Text); sw.Close(); } }Now I will explain every line
First of all you see that if you want to save something to a .txt file you must use savefiledialog, and streamWriter (it's not necessary to use save file dialog, you can just put a path whre you want to save it)
So first of all you create new streamwriter by telling him where to save that file( in this case this is the path you select from save file dialog).
Second you use Write() method, to write all text from textbox, you can also use WriteLine() method for writing just a single line.
If you call this:
sw.WriteLine(richtextbox1.Text); sw.WriteLine(richtextbox1.Text);you will get just first 2 lines from that text box.
Okay, so you finished writing to a .txt file, now what you are going to do is to close that writer by calling a method Close();
You need to close reader or writer every time when you finish because if you dont, he will just continue and you will get an error.
In this example I will show just some examples of path usage in streamwriter(the same is used for readers).
StreamWriter sw = new StreamWriter(@"C:\\Documents\YourFolder", "filename.txt"); //example of giving exact path where you want to save something, you can put whatever path you want I'm just giving an example. StreamWriter sw = new StreamWriter("filename.txt"); //It puts your file to the Debug Folder, right next to the .exe file StreamWriter sw = new StreamWriter(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop, "filename.txt"); // It puts file to the desktop, on every computer.Okay now that you know what StreamWriter is, let's go to StreamReader, it is pretty much the same.
In this example I will show how to read to textbox from txt file, calling a streamreader from openFileDialog.
private void button2_Click(object sender, EventArgs e) { if (openFileDialog1.ShowDialog() == DialogResult.OK) { StreamReader sr = new StreamReader(openFileDialog1.FileName); richTextBox1.Text = sr.ReadToEnd(); sr.Close(); } }So basically it is the same, First line of StreamReader is the same as StreamWriter, just for reader you put path where you want it to READ from.
Now i will explain just 2 of the most used methods.
ReadToEnd() - reads all text from .txt file and puts it into a textbox or label or whatever you put.
ReadLine() - reads just one line.
You can also set streamreader an exact path: @"location". So everything I told for StreamWriter stays for StreamReader.
I hope I helped, and that you now understand writer and reader.
Tutorial by Cushpajz.
Ps. If you are putting this in other forums, please put an author. Thanks. | http://forum.codecall.net/topic/67343-streamwriters-and-streamreaders/ | CC-MAIN-2019-51 | refinedweb | 563 | 65.01 |
The Rise of Test Impact Analysis
Test Impact Analysis (TIA) is a technique for determining which subset of tests to run for a given set of changes to production code.
07 August 2017
Various techniques exist to speed up test execution, including running them in parallel over many machines and using test doubles for slow remote services. But this article will focus on reducing the number of tests to run, by identifying those most likely to identify a newly added bug. With a test pyramid structure, we run unit tests more frequently because they usually run faster, are less brittle and give more specific feedback. In particular, we frame a suite of tests that should be run as part of CI: pre-integrate and post-integrate. We then create a deployment pipeline to run slower tests later.
The same problem restated: if tests ran infinitely quickly, we would run all the tests all the time; but they do not, so we need to balance cost versus value when deciding which tests to run.
In this article, I detail an emerging field of testing-related computer science where Microsoft is leading the way, a field that companies with long test automation suites should take note of. You may be able to benefit from Microsoft's advances around "Test Impact Analysis" immediately, if you are in the .NET ecosystem. If you are not doing .NET, you have to be able to engineer something yourself fairly cheaply. A former employer of mine engineered something themselves based on proof of concept work that I share below.
Conventional strategies to shorten test automation
To complete the picture, I will recap the traditional "run a subset of the tests" strategies that remain dominant in the industry, albeit now alongside the newer reality of parallel test execution and service virtualization.
Creation of suites and tags
Major groupings of unit, service and functional UI. Within unit tests, tags for meaningful sub-groupings, including one 'express' that samples a subset of the others.
After 'Shopping Cart' tests were run, all bar one passing.
This approach works with recursive build technologies such as Ant, Maven, MSBuild, Rake, and alike.
Historically, teams would give up on making their tests infinitely fast, and use suites or tags to targeting a subset of tests at any one time. With the creation of suites and tags, a subset of tests can be verbally describable. For example "UI tests for the shopping cart". Tags or suites could allude to business areas of the application, or to technical or tiered groupings. Defining tags and suites requires expert human design creativity. At least in order to push towards optimal groupings. That implied that they could be insufficiently, inexactly, and incorrectly grouped too, which is common enough, even if difficult for humans to determine. Too few and too many tests executed at the same time is a strong possibility for running one suite only - a wasteful use of computing esources and time, that also risks letting a bug slip through. Teams might choose to have CI jobs that use a smaller suite per commit, and then also a nightly build job that runs all tests. Obviously that delays bad news, and defeats the aims of Continuous Integration.
Suites and tags, however, is the way the majority of the software development world has organized its test code-base.
Pre-calculated graphs of source vs tests
276 tests with their notional 'size' designations.
The ones executed for given commits, with two failures resulting. As it happens, some of those turn out to be small, some medium, and some large. I have only depicted a tree/fractal because it helps explain the concepts (it is not really like that).
Importantly, that tooling could point out redundant claims about dependencies. Thus for a given directory/package/namespace, the developer could kick off a subset of the tests quite easily - but just the ones reachable via directed graphs from the BUILD files. The ultimate time saver, both for the developer pre-integrate and for the scaled CI infrastructure ('Forge', later TAP), was the automated subsetting of tests to run per commit based on this baked-in intelligence.
There are a bunch of mind-blowing stats in Taming Google-Scale Continuous Testing. In my opinion this stuff has cost Google tens of millions but made them tens of billions over the years. Perhaps far greater than the earnings:wages ratio.
Test Impact Analysis
Test Impact Analysis (TIA) is a technique that helps determine which subset of tests to run for a given set of changes.
A similar depiction for tests to run for a hypothetical change.
The key to the idea is that not all tests exercise every production source file (or the classes made from that source file). Code coverage or instrumentation, while tests are running, is the mechanism by which that intelligence is gleaned (details below). That intelligence ends up as a map of production sources and the tests that would exercise them, but begins as a map of which production sources a test would exercise.
One test (from many) exercises a subset of the production sources.
One prod source is exercised by a subset of the tests (whether unit, integration or functional)
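Those two views are the same data, inverted. A minimal sketch, with invented test and file names: the raw data is collected per test, then turned into the per-source lookup table used for impact queries:

```python
from collections import defaultdict

# Raw data collected while running tests one at a time:
# each test maps to the production sources it exercised.
test_to_sources = {
    "test_cart_totals": {"cart.py", "money.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_reporting": {"report.py"},
}


def invert(mapping):
    """Turn a test -> sources map into a source -> tests map."""
    by_source = defaultdict(set)
    for test, sources in mapping.items():
        for src in sources:
            by_source[src].add(test)
    return dict(by_source)


source_to_tests = invert(test_to_sources)
```

After the inversion, `source_to_tests["cart.py"]` answers the question TIA cares about: which tests are worth running when cart.py changes.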
So you will note that the stylized diagram of executed tests is the same as for the directed graph build technologies above. It's effectively the same because the curation of the BUILD files over time leads to more or less the same outcome as TIA.
The TIA maps can only really be used for changes versus a reference point. That can be as simple as the work the developer would commit or has committed. It could also be a bunch of commits too. Say everything that was committed today (nightly build), or since the last release.
One realization from using a TIA approach is that you have too many tests covering the same sections of prod code. If those are straight duplicates, then deleting tests after analysis of each test and the paths through prod code that it exercises is a possibility. Often they are not, though, and working out how to focus testing on what you want to test, and not at all on transitive dependencies in the code, is a different focus area that inevitably rests on the established practice of using test doubles and, more recently, Service Virtualization (for integration tests).
The minimal level of creating a list of what has changed is "changed production sources", but the ideal would be to determine what methods/functions have changed, and further subset to only the tests that would exercise those. Right now though, there is one ready-to-go technology from Microsoft that works at the source-file level, and one reusable technique (by me). Read on.
Microsoft's extensive TIA work
Microsoft has put in the longest concerted effort to productionize Test Impact Analysis ideas, and gave the concept that name and its acronym.
They have a current series of blog entries that span March to August of 2017, so far: Accelerated Continuous Testing with Test Impact Analysis - Part 1, Part 2, Part 3, and Part 4.
Their older articles on this go back eight years:
- Test Impact Analysis in Visual Studio 2010 (2009)
- Streamline Testing Process with Test Impact Analysis) (2010)
- Which tests should be run since a previous build? (2010)
- How to: Collect Data to Check Which Tests Should be Run After Code Changes (2010)
- Test Impact Analysis (2011)
Microsoft's Pratap Lakshman detailed the evolution of their implementation. Concerning the current evolution of their TIA tech, Pratap says:[1]
The map of the impacted tests versus production code is recalculated when there is a build that is triggered. The job to do that runs as part of the VSTest task within a VSTS build definition.
Our TIA implementation collects dynamic dependencies of each test method as the test method is executing.
At a high level here is what we do: As the test is executing it will cover various methods – the source file in which those methods reside are the dynamic dependencies we track.
So we end up with a mapping like the following:
Testcasemethod1 <--> a.cs, b.cs, d.cs
Testcasemethod2 <--> a.cs, k.cs, z.cs
and so on …
Now when a commit comes in to, say, a.cs, we run all Testcasemethod(s) that had a.cs as their dynamic dependency. We, of course, take care of newly introduced tests (that might come in as part of the commit) and carry forward previously failing tests as well.
Our TIA implementation does not do test prioritization yet (e.g. most often broken first). That is on our radar, and we will consider it if there is sufficient interest from the community.
The actual TIA map data is stored in TFS, in an SQL Server database. When a commit comes in, TIA uses TFVC/Git APIs to open up the commit and see the files into which the changes are going. Once it knows the files, TIA then consults the mapping to know which tests to run.
Of course, usage of this TIA technology is supported in pull request (PR) and regular CI workflows, as well as pre-integrate on the developer's workstation.
We want our users to embrace the Shift Left and move as many of their tests to earlier in the pipeline. In the past we have seen customers a little concerned about doing that because it would mean running more tests at every commit, thereby making the CI builds take longer. With TIA we want to have our users Shift Left and let TIA take care of running only the relevant tests – thereby taking care of the performance aspect.
Concerning the first few years of TIA inside TFS and Visual Studio, he says:
The TIA technology at that time was quite different in many ways:
- It would only identify impacted tests. it was the left to the user to explicitly run them.
- It used block level code coverage as the approach to generate the test <--> mapping. In the subsequent build, it would do an IL-diff with the earlier build to find out blocks that changed, and then use the mapping to identify and list impacted tests. Note that it would not run them for you.
- The above approach made it slow (compared to the current implementation), and required way more storage for the mapping information (compared to the current implementation)
- The above approach also caused it to be less safe than the current implementation (it would miss identifying impacted tests in some cases).
- It did not support the VSTS build workflow (it was only supported in the older XAML build system)
Test Impact Analysis via conventional code coverage tools and scripting
I had an idea to co-opt modern, off-the-shelf code-coverage tools into the same impact analysis when I worked at HedgeServ. From that idea, I made two proof of concept blog entries (with the associated code on GitHub): one for Maven & Java[2], and one for Python[3]. Of course, like an eejit, I thought I had a novel invention, but I did not know at the time that there was quite a bit of prior art in this space (Microsoft above). The technique I've shown is cheap to develop within your tool chain, even if it may have a cost to run within your CI infrastructure.
A simple implementation of the idea of Test Impact Analysis requires that you have one up front activity:
- Run single test and collect code-coverage for it
- From the prod source files touched for the test, make a temporary map of the prod sources (key) to test path/name (value)
- Update the source files that contain the master map, replacing all previous entries for that test
- Commit those changed map source file to VCS (only the CI job in question should have permission to do this)
- Clear the coverage data (so as to not entangle coverage reports per test)
- Go back to #1 for the next test (most recently changed sources/tests first?)
After running all the tests one at a time you have a comprehensive map connecting prod code to the tests that cover them.
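The loop above can be sketched as follows. Here `run_with_coverage` is a stand-in for whatever coverage tool you plug in (coverage.py, JaCoCo, etc.): it runs one test in isolation and reports the production sources touched. The serialized map is kept ordered so that the VCS diffs stay terse:

```python
def build_tia_map(tests, run_with_coverage):
    """Run each test alone, recording which prod sources it touched.

    run_with_coverage(test) stands in for a real coverage tool: it
    executes one test and returns the set of production source files
    exercised.  Returns a source -> sorted list of tests map.
    """
    impact = {}
    for test in tests:
        touched = run_with_coverage(test)   # step 1: coverage for one test
        for src in touched:                 # steps 2-3: update the map
            impact.setdefault(src, set()).add(test)
        # (a real build would clear the coverage data here, step 5)
    return {src: sorted(ts) for src, ts in impact.items()}


def render(impact):
    """Serialize the map as ordered text, so VCS diffs stay small."""
    return "\n".join(f"{src}: {', '.join(ts)}"
                     for src, ts in sorted(impact.items()))


# Canned coverage results standing in for a real tool (illustrative only):
fake = {"test_a": {"cart.py"}, "test_b": {"cart.py", "money.py"}}
tia = build_tia_map(["test_a", "test_b"], fake.__getitem__)
```

The "one test at a time" shape of the loop is exactly the limitation discussed below; the up-front cost is why the resulting map is committed and updated incrementally rather than rebuilt from scratch.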
Then when you later make changes to some prod code, you can figure out which tests exercised that code, and are thus likely to be informative when run. Any test failures produced are provably the only test failures that could happen from the changes made. The two proofs of concept referred to above contain a small amount of Python code that attempts to:
- Calculate TIA maps ahead of need
- Use TIA maps in a pre-integrate situation (with small modifications it could be used in CI jobs too)
HedgeServ's test-base is comprised of regular speedy unit tests, followed by integration tests that are Microsoft Excel spreadsheets which in turn indirectly invoke Python. There are 12,000 of them. Those tests are many hours of extensive and advanced algorithm tests that would be impossible to do per-integration in CI infrastructure without some "run less of them" strategy. Their DevOps team operationalized the proof of concept as "Test Reducer" (the initial name I gave this tech), and the quickest permutations are now ten minutes. A fantastic improvement. Developers and Test Engineers can run them pre-integrate, and the CI infrastructure can do the same. HedgeServ's Managing Director of Software Development, Kevin Loo, tells me that "developers count on the quicker test runs, and the pace of development has increased because of an increased confidence".
Because generic code coverage tools are being used, the TIA aspect has to be run one test at a time, which has an up-front cost. For that reason, the map that results from the analysis is checked into source control and incrementally updated. It therefore has to be text, and ordered in nature, so that the diffs can be terse and have some meaning. Checking the map into source control also benefits the CI infrastructure and the individual developers looking to run fewer tests prior to integration (and code review).
This TIA design has a limitation because of the nature of code coverage tools: only one test can be run at a time in order for an accurate impact graph to be calculated. In order to use the map data, there is a need to run "git status" (or git show <hash>) and then parse the output to find the 'changed/added/deleted' production code sources. Those are the keys to the impact map, and the lookup results in a list of tests to run. It is only the data-gathering CI job that has the limitation of "one test at a time", which is why you more or less consider it perpetually running.
The testing technology, as you see, can be totally alien to your main language choices for the prod code. In HedgeServ's case, their algorithmic tests were in Microsoft Excel files under source control (that even the BAs contributed to). If that is possible then so are SmartBear's TestComplete, HP's Unified Functional Testing (UFT - formerly QTP), and of course Selenium. The only requirement is that tests can be scripted to run one at a time (while you build the TIA map). You are also going to have to commit to updating the map at some frequency following its initial creation - use your CI infrastructure.
You are then left with a decision as to where to store that map data. You could choose a file share, or a document store, or a relational schema. I chose (and would recommend) text files in a directory that's in the same repo/branch as the prod source itself. That at least allows branching to work (whatever your branching model) and lets branches carry divergent impact maps, perhaps reflecting the divergent nature of the code.
For a client recently, I was looking at a project that uses KDB and Q for its system and trying to advise them on how to bring down test times. There is no code coverage tech for these, so that was the end of that conversation.
VectorCAST/QA - application
Vector Software has made a product called VectorCAST/QA that is a one stop shop application that leverages code-coverage in the same way to run fewer impacted tests (and more). Their technology is mostly sold to the automotive (and related) industry that embeds C, C++ and Ada software. VectorCAST working in this mode of operation also predates my kitchen sink experiments. I have to work on my googling skills!
TIA support in IDEs
Microsoft also has a powerful Live Unit Testing[4] feature in Visual Studio that, if enabled, automatically runs impacted unit tests even as you edit the code. While related to TIA, this is perhaps worth a separate analysis.
Last month I thought I would raise a feature request for JetBrains to equivalent create TIA functionality to their IDEs. During the triage of the ticket I had raised, JetBrains connected it to another one from 2010 on the same topic, and in that one there was a suggestion that some of this functionality is already implemented. I could not get it working when I tried it though ☹️
Definitions
Pre-integrate and post-integrate
Pre-integrate activities are those that developers do on their workstation. That may include local (hopefully short-lived) feature branches, little commits (that may or may not be squashed into one later), and editing/building cycles before a declaration of "done" for the story card in question. It is definitely before the commit/commits are passed into code review, etc.
Post-integrate is the stage where that work (one commit or more) has completed code review and is going back into the trunk/master/mainline. Soon after that, all team mates will be able to pull it to their workstations, and probably should.
Shift left and right
Shift left is a process of taking a step that's part of the value stream of software development and moving it up the timeline. Taking manual testing and replacing it with test automation is an example. Another is defects caught in a product owner's head before being typed into the backlog (a story tracker). In that case the defect is a product idea that would become a feature request in the tracker, one that would ultimately be judged incorrectly specified, not a bug as such. That could be accidental, but if you put in place a new process to do that, it would be part of a shift-left agenda. Barry Boehm's "cost of change curve" speaks to the larger topic: mistakes are cheaper to fix the sooner you catch them, and are most expensive to remediate when found in production. As it happens, 'shift left' is a minor industry cause. Enterprises are using it as an alternate way of describing the quest for cheaper and quicker: the same goals as continuous integration in particular, and Agile and CD generally.
Shift right is when you do un-agile things like moving unit test execution to a "nightly" build (or less frequently) instead of making those tests faster in the per-integration CI build. Sometimes you have done a bunch of shift-left activities that come with some risk, which can only be mitigated by an additional shift-right step.
Historical work in this space
Google-Testar (2006)
Mikhail "Misha" Dmitriev made Testar while at Google in 2006.
Misha's goal: don't run all the tests, while simultaneously claiming "we can empirically prove we do not have to".
Testar uses bytecode instrumentation to record code coverage for each test - that is, which application methods are exercised by each test. This information, along with checksums for classes and methods, is saved into the Test Database (TDB). On subsequent invocations, Testar finds out what classes and methods the developer has changed, and then re-runs only those tests that cover the updated code. It is assumed that other tests, that passed before, will pass again since the code that they exercise hasn't changed. Of course, if any test didn't pass before, or has just been added, Testar will run it unconditionally.
Misha reported average 60..70% time savings due to not running "unaffected" tests. However, this technology is not problem-free. First, savings are inconsistent: for example, if a developer repeatedly changes a method that is used by most tests, the savings will be small. Second, if test outcome depends not only on the code but also on some external input, such as resource files, the user needs to explicitly specify all these files to Testar. The tool cannot automatically determine which tests depend on a given resource file, and can only re-run all tests or those explicitly specified by the user.
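The checksum mechanism Misha describes can be roughly sketched as follows; method bodies are simplified to plain strings here, and all names are invented for illustration:

```python
import hashlib


def checksum(body):
    """Stand-in for Testar's per-method checksum (here: hash of source text)."""
    return hashlib.sha1(body.encode()).hexdigest()


def stale_tests(old_sums, new_bodies, coverage):
    """Tests whose covered methods changed since the last run.

    old_sums:   method -> checksum recorded on the previous run (the TDB)
    new_bodies: method -> current source text
    coverage:   test -> set of methods it exercises
    """
    changed = {m for m, body in new_bodies.items()
               if old_sums.get(m) != checksum(body)}
    return sorted(t for t, methods in coverage.items()
                  if methods & changed)


# Previous run's checksums, current sources, and per-test coverage:
old = {"Cart.total": checksum("return sum(items)"),
       "Cart.add": checksum("items.append(x)")}
new = {"Cart.total": "return sum(items) + tax",   # edited since last run
       "Cart.add": "items.append(x)"}             # unchanged
cov = {"test_total": {"Cart.total"}, "test_add": {"Cart.add"}}
```

The sketch also shows the blind spot Misha mentions: a test whose outcome depends on a resource file, not on any hashed method body, would never show up as stale.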
Misha's reflections here: [5]
At that time the off the shelf libraries for bytecode instrumentation were less sophisticated, and/or there was a lack of flexibility with them. I found quite often that unless some library is top quality, it makes more sense to write a bit more of my own code upfront and then have much more freedom later.
Misha on his experience at Google and other companies:
I eventually concluded that a technology like Testar may not be the best option if a company runs a wide variety of tests, and has enough hardware resources. In this scenario, running all tests with a high degree of parallelization is more reliable. However, fine-grain selective test execution as done by Testar may still work in niche cases. For example, when individual tests take extremely long and cannot be sped up by parallelization or other techniques, or when there are limited hardware resources but there is no problem with resource files and other non-code input for the tests, etc.
It seems to me that Testar could be rejuvenated, if somebody wanted to give it new life. Also, Testar is the first technology to do TIA that I could find, but was based on a paper that was presented at a conference that talked about the concepts. Misha says he can't locate that paper today.
ProTest: brittle tests should fail sooner
In 2007 some ThoughtWorks colleagues Kent Spillner, Dennis Byrne, Naresh Jain and others open sourced ProTest, which sought to run the tests most likely to break first. "Most likely" was a combination of historical breakage stats and whether the prod or test source was currently being changed. Recently changed prod source files had impacted tests, which became candidates for prioritization. Changed tests would be candidates too. The intersection of the historically brittle and the recently changed would be the tests that a custom JUnit test runner would execute first. The pertinent build step would take the same amount of time overall, but the test failure news can be issued sooner. That is because CI technologies do not wait for the end of the build step to communicate failure, as they are listening to the log events on a per-test basis, or scraping the log output. The same is true of the runners that are integrated into IDEs (IntelliJ and Eclipse).
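The prioritization ProTest aimed for can be sketched as a scoring function. The weights below are invented for illustration and are not ProTest's actual algorithm (which was pluggable, as Dennis notes later):

```python
def prioritize(tests, failure_history, changed_sources, impact):
    """Order tests so the most-likely-to-fail run first.

    failure_history: test -> historical failure count (brittleness)
    changed_sources: set of recently edited prod sources
    impact:          test -> prod sources it exercises
    Score = brittleness, boosted when the test touches a recently
    changed source; the boost of 10 is purely illustrative.
    """
    def score(test):
        brittle = failure_history.get(test, 0)
        touches_change = bool(impact.get(test, set()) & changed_sources)
        return brittle + (10 if touches_change else 0)
    return sorted(tests, key=score, reverse=True)


tests = ["test_a", "test_b", "test_c"]
history = {"test_a": 3, "test_b": 1}          # test_a has broken most often
impact = {"test_b": {"cart.py"}, "test_c": {"report.py"}}
order = prioritize(tests, history, {"cart.py"}, impact)
```

With cart.py recently changed, test_b jumps the queue despite test_a's worse history; the total run time is unchanged, but a failure in test_b is reported sooner.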
Naresh remembers:
We built an AST and then used the visitor pattern to walk through the code and collect interesting stats. In terms of steps ProTest did:
- At the launching of tests stage, ProTest processes the bytecode of tests and prod classes to build a map of prod-classes versus tests
- It calculates the changes in the working copy, based on source file time stamps
- From those changes, it works out smallest set of tests it has to run and runs them first
- Test reports issued as normal, and a consequential pass/fail for the set
Dennis adds:
Kent wrote an extension point for any pluggable algorithm you wanted to use based on the Condorcet voting strategy (each voter must rank all candidates). I wrote a bunch of bootstrap algorithms. One of them used ASM to build a dependency graph and then sorted tests based upon how close they are to whatever changed. We never got to version 1.0 though 🤔
Kent sees ProTest like so:
The map wasn't retained between invocations, as it didn't need to be, the calculation of which tests should be run was consistent and very quick. The closest thing I've ever seen to ProTest was Kent Beck's JUnit Max, but that was probably a few years after Dennis came up with the idea for ProTest. I wish we had done more to finish ProTest. To this day I frequently find myself still in situations where I could really use something like it. I know Naresh subsequently recruited some more people to continue working on it, but that effort also stalled. Maybe someday someone else will pick up the mantle and carry-on. That's the beauty of open-source, right?
JTestMe: "Just Test Me"
Also in 2007, some other ThoughtWorks colleagues Josh Graham, and Gianny Damour worked on an open source tech called JTestMe that used AOP to build a repository of production classes versus tests that would exercise them. It was very similar to ProTest in that it was also wanting to prioritize tests more likely to fail.
Josh recalls:
The reasoning behind using runtime call analysis was that, although static analysis was useful for Java, it doesn't reveal everything on a platform with reflection capabilities and one that encourages other languages, including those where static analysis was less straightforward (JRuby, then, for example, and also Clojure nowadays).
The proof-of-concept was simple and fast. It created a point cut on all JUnit test methods (i.e. test cases) during the running of the test suite. That point cut advised whenever a non-private method of any non-test class was called. That advice then recorded the source file name of the class and the test executing at the time.
With this map of source code file names to test cases, any time the source code file was changed, it was known which test cases needed to be run. I called this a "dynamically defined smoke test". On a CI server, the revision information was extracted and used to query the map to ascertain which test cases could be run first. On a developer's local machine, an inotify-based tool notified when a source code file changed and the test cases that touched that class were run in a side process. In an IDE I used the excellent Infinitest with a custom list (being continually updated by a query to JTestMe). The map itself was a simple Java object serialization to a file. It could be committed to source control as it would go on to save the CI redoing it between builds, which could be quite a problem on agent-based builds.
I chose load-time weaving so the test suite could be run with or without instrumentation (in case somehow the AOP tool created false test outcomes).
The obvious drawback with the approach is that when a new test case is created (often in a TDD team), it isn't included as a smoke test candidate until it is instrumented, so there were some straightforward but cumbersome command line changes in a few places in the development pipeline. Standard tooling that was more in tune with this sort of approach would attract more teams to the benefits. Alas, precise specification and verification remain a very undervalued asset in teams swamped with cargo-culting and chasing buzzwords.
As an aside, the team building "Clover" (soon after acquired by Atlassian) did something similar to the ProTest static analysis approach.
Footnotes
1: These extracts are from an email exchange that I had with Pratap.
2: Previous Blog entry: Reducing Test Times by Only Running Impacted Tests - for Maven & Java.
3: Previous Blog entry: Reducing Test Times by Only Running Impacted Tests - Python Edition.
4: Article: Live Unit Testing with Visual Studio 2017 on Microsoft.com.
5: These comments, and the other contributions in this appendix, are from emails with the various contributors.
As well as the correspondents, Patric Fornasier, Mark Taylor, Clare Sudbery, Ramakrishnan "Ramki" Sitaraman, and Graham Brooks reviewed this article before publication, and provided great feedback. | https://martinfowler.com/articles/rise-test-impact-analysis.html | CC-MAIN-2017-34 | refinedweb | 4,758 | 57.91 |
A while back I came a problem:
I needed to build for a client an activity that runs on an unknown number of projects and builds them with DevEnv 2005/2008/2010.
I needed to create an editor that allows the user to select projects to be built in DevEnv and additional properties for each project such as “DevEnv version” and “Configuration to build”.
The problem was that I had very little time to do this.
This is where my good friend Baruch Frei came into the picture; He informed me that when I build a simple dumb object such as:
1: using System.ComponentModel;
2:
3: namespace BuildDemosB
4: {
5: public class DevEnvCompilationItem
6: {
7: [Category("Properties")]
8: public string Project { get; set; }
9:
10: [DisplayName("Compiler Name")]
11: [Category("Properties")]
12: public DevEnvType CompilerName { get; set; }
13:
14: /// <summary>
15: /// Specifies whether Clean and then build the solution or project with the specified configuration. Default is false
16: /// </summary>
17: [Category("Properties")]
18: public bool Rebuild { get; set; }
19:
20: #region Methods
21:
22: public override string ToString()
23: {
24: return string.IsNullOrEmpty(Project) ? "New" : Project + "=>" + CompilerName;
25: }
26:
27: #endregion
28: }
29: }
and add a List<T> of that object to the builds “Process Template” as an argument like this:
What we will see in the “Process” tab of the “Edit Build Definition” window is the Microsoft “out of the box” editor of the List<T> object in TFS build definition.
This editor will hand us the functionality of adding and removing objects of type T to and from the list as well as to edit each of the public properties of the object.
On the left side you see the object in the list, displayed by their ToString() method, and on the right you see a properties grid of the selected object on the left.
What is also nice is that if you have in your object properties such as Boolean or Enum the properties grid will inflict those restrictions in the form of a drop down.
Easy and fast.
In my next post I will show how to use some of Microsoft's internal UITypeEditors with the OOTB List<T> editor.
Thanks for posting this.
Well, what's that CSS code that was pasted? What is it for?
{ .csharpcode, .csharpcode pre ……}
Can you please describe this?
Thanks.
Hi Kumar,
First of all, my pleasure.
If you mean these attributes:
* [DisplayName("Compiler Name")]
* [Category("Properties")]
They determine the way in which the property is displayed in the properties grid
* [DisplayName("")] – The name to display for the property.
* [Category("")] – The section under which the property is displayed.
Under your BuildDemosB code snipit and just before the first image all we see is .csharpcode, .csharpcode pre { font-size: small……}
Looks like another code snipit that the browser can't display. I've IE and Chrome.
I see what you mean.
I was using a bad "code sipped add-in".
Fixed it now.
Thanks!
Thanks.
Waiting for your next post to customize some of UITypeEditors
I'm unable to refer the custom object under Argument Type. DevEnvCompilationItem is mising.
Do I need to add the assembly anywhere else in the template?
Waiting for your early reply.
Hi Kumar,
Yes, in order to refer to the class from the template you must do the following: Take the project's dll/'s and add it/them to the folder in the source control to which the build controller you're using is referring as its assemblies directory.
Hello Oshry,
I'm using a simple List
as a Argument Type and it automatically adds a UI in order to fill those properties of T. It works fine but when editing at "Queue new build" time that list and removing list items or modifying a property of an item doesn't have any effect. It still stores those values of the list that it had at the beginning, the ones that I added editing the build definition. Obviously I know that because I print all list's values on screen and they keep unchanged. Do you have any idea why could be that?
Thanks in advance,
Xabi | http://blogs.microsoft.co.il/oshryhorn/2011/07/14/the-tfs-build-definition-listlttgt-ootb-uitypeeditor/ | CC-MAIN-2013-48 | refinedweb | 694 | 69.62 |
Citation format
van Gent, P. (2016). Emotion Recognition Using Facial Landmarks, Python, DLib and OpenCV. A tech blog about fun things with Python and embedded electronics. Retrieved from:
Introduction and getting started
Using Facial Landmarks is another approach to detecting emotions, more robust and powerful than the earlier used fisherface classifier, but also requiring some more code and modules. Nothing insurmountable though. We need to do a few things:
- Get images from a webcam
- Detect Facial Landmarks
- Train a machine learning algorithm (we will use a linear SVM)
- Predict emotions
Those who followed the two previous posts about emotion recognition will know that the first step is already done.
Also we will be using:
- Python (2.7 or higher is fine, anaconda + jupyter notebook is a nice combo-package)
- OpenCV (I still use 2.4.9……so lazy, grab here)
- SKLearn (if you installed anaconda, it is already there, otherwise get it with pip install sklearn)
- Dlib (a C++ library for extracting the facial landmarks, see below for instructions)
- Visual Studio 2015 (get the community edition here, also select the Python Tools and the Common tools for visual c++ in the installation dialog)
Installing and building the required libraries
I am on Windows, and building libraries on Windows leaves many people with a bad taste in their mouths. I can understand why, but it's not all bad: the problems people run into are usually solved by correctly setting PATH variables, providing the right compiler, or reading the error messages and installing the right dependencies. I will walk you through the process of compiling and installing Dlib.
First install CMake. This should be straightforward, download the windows installer and install. Make sure to select the option “Add CMake to the system PATH” during the install. Choose whether you want this for all users or just for your account.
Download Boost-Python and extract the package. I extracted it into C:\boost but it can be anything. Fire up a command prompt and navigate to the directory. Then do:
#First run the bootstrap.bat file supplied with boost-python
bootstrap.bat

#Once it finishes, invoke the install process of boost-python like this
#(this can take a while, go get a coffee):
b2 install

#Once this finishes, build the python modules like this
#(again, this takes a while, reward yourself and get another coffee):
b2 -a --with-python address-model=64 toolset=msvc runtime-link=static
Once all is done you will find a folder named bin, or bin.v2, or something like this in your boost folder. Now it’s time to build Dlib.
Download Dlib and extract it somewhere. I used C:\Dlib but you can do it anywhere. Go back to your command prompt, or open a new one if you closed it, and navigate to your Dlib folder. Do this sequentially:
# Set two flags so that the CMake compiler knows where to find the boost-python libraries
set BOOST_ROOT=C:\boost                 #Make sure to set this to the path you extracted boost-python to!
set BOOST_LIBRARYDIR=C:\boost\stage\lib #Same as above

# Create and navigate into a directory to build into
mkdir build
cd build

# Build the dlib tools
cmake ..

#Navigate up one level and run the python setup program
cd ..
python setup.py install   #This takes some time as well. GO GET ANOTHER COFFEE TIGER!
Open your Python interpreter and type “import dlib”. If you receive no messages, you’re good to go! Nice.
Testing the landmark detector
Before diving into much of the coding (which probably won’t be much because we’ll be recycling), let’s test the DLib installation on your webcam. For this you can use the following snippet. If you want to learn how this works, be sure to also compare it with the first script under “Detecting your face on the webcam” in the previous post. Much of the same OpenCV code to talk to your webcam, process the image by converting to grayscale, optimising the contrast with an adaptive histogram equalisation and displaying it is something we did there.
#Import required modules
import cv2
import dlib

#Set up some required objects
video_capture = cv2.VideoCapture(0) #Webcam object
detector = dlib.get_frontal_face_detector() #Face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat") #Landmark identifier. Set the filename to whatever you named the downloaded file

while True:
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
    clahe_image = clahe.apply(gray)

    detections = detector(clahe_image, 1) #Detect the faces in the image

    for k,d in enumerate(detections): #For each detected face
        shape = predictor(clahe_image, d) #Get coordinates
        for i in range(1,68): #There are 68 landmark points on each face
            cv2.circle(frame, (shape.part(i).x, shape.part(i).y), 1, (0,0,255), thickness=2) #For each point, draw a red circle with thickness 2 on the original frame

    cv2.imshow("image", frame) #Display the frame

    if cv2.waitKey(1) & 0xFF == ord('q'): #Exit program when the user presses 'q'
        break
This will result in your face with a lot of dots outlining the shape and all the “moveable parts”. The latter is of course important because it is what makes emotional expressions possible.
Note: if you have no webcam and/or would rather try this on a static image, replace line #11 with something like frame = cv2.imread("filename") and comment out line #6 where we define the video_capture object. You will get something like:
my face has dots
people tell me my face has nice dots
experts tell me these are the best dots
I bet I have the best dots
Extracting features from the faces
The first thing to do is find ways to transform these nice dots overlaid on your face into features to feed the classifier. Features are little bits of information that describe the object or object state that we are trying to divide into categories. Is this description a bit abstract? Imagine you are in a room without windows with only a speaker and a microphone. I am outside this room and I need to make you guess whether there is a cat, dog or a horse in front of me. The rule is that I can only use visual characteristics of the animal, no names or comparisons. What do I tell you? Probably whether the animal is big or small, that it has fur, that the fur is long or short, that it has claws or hooves, whether it has a tail made of flesh or just of hair, etcetera. Each bit of information I pass you can be considered a feature, and based on the same feature set for each animal, you would be pretty accurate if I chose the features well.
How you extract features from your source data is actually where a lot of research is, it’s not just about creating better classifying algorithms but also about finding better ways to collect and describe data. The same classifying algorithm might function tremendously well or not at all depending on how well the information we feed it is able to discriminate between different objects or object states. If, for example, we would extract eye colour and number of freckles on each face, feed it to the classifier, and then expect it to be able to predict what emotion is expressed, we would not get far. However, the facial landmarks from the same image material describe the position of all the “moving parts” of the depicted face, the things you use to express an emotion. This is certainly useful information!
To get started, let’s take the code from the example above and change it so that it fits our current needs, like this:
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def get_landmarks(image):
    detections = detector(image, 1)
    for k,d in enumerate(detections): #For all detected face instances individually
        shape = predictor(image, d) #Get facial landmarks with the predictor class
        xlist = []
        ylist = []
        for i in range(1,68): #Store X and Y coordinates in two lists
            xlist.append(float(shape.part(i).x))
            ylist.append(float(shape.part(i).y))

        landmarks = []
        for x, y in zip(xlist, ylist): #Store all landmarks in one list in the format x1,y1,x2,y2,etc.
            landmarks.append(x)
            landmarks.append(y)

    if len(detections) > 0:
        return landmarks
    else: #If no faces are detected, return error message to other function to handle
        landmarks = "error"
        return landmarks
The .dat file mentioned can be found in the DLIB zip file you downloaded, or alternatively on this link.
Here we extract the coordinates of all face landmarks. These coordinates are the first collection of features, and this might be the end of the road. You might also continue and try to derive other measures from this that will tell the classifier more about what is happening on the face. Whether this is necessary or not depends. For now let’s assume it is necessary, and look at ways to extract more information from what we have. Feature generation is always a good thing to try, if only because it brings you closer to the data and might give you ideas or alternative views at it because you’re getting your hands dirty. Later on we’ll see if it was really necessary at a classification level.
To start, look at the coordinates. They may change as my face moves to different parts of the frame. I could be expressing the same emotion in the top left of an image as in the bottom right of another image, but the resulting coordinate matrix would express different numerical ranges. However, the relationships between the coordinates will be similar in both matrices so some information is present in a location invariant form, meaning it is the same no matter where in the picture my face is.
Maybe the most straightforward way to remove numerical differences originating from faces in different places of the image would be normalising the coordinates between 0 and 1. This is easily done by:
x' = (x - min(x)) / (max(x) - min(x)), and likewise for y. Or, to put it in code:
xnorm = [(i-min(xlist))/(max(xlist)-min(xlist)) for i in xlist] ynorm = [(i-min(ylist))/(max(ylist)-min(ylist)) for i in ylist]
However, there is a problem with this approach because it fits the entire face in a square with both axes ranging from 0 to 1. Imagine one face with its eyebrows up high and mouth open, the person could be surprised. Now imagine an angry face with eyebrows down and mouth closed. If we normalise the landmark points on both faces from 0-1 and put them next to each other we might see two very similar faces. Because both distinguishing features lie at the edges of the face, normalising will push both back into a very similar shape. The faces will end up looking very similar. Take a moment to appreciate what we have done; we have thrown away most of the variation that in the first place would have allowed us to tell the two emotions from each other! Probably this will not work. Of course some variation remains from the open mouth, but it would be better not to throw so much away.
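To see the squashing concretely, here is a toy example with invented one-dimensional "landmark height" values (the numbers are made up purely for illustration, not taken from real landmarks):

```python
def minmax(vals):
    """Scale a list of values to the 0-1 range, per face."""
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) for v in vals]

#Invented heights for brow, eye and mouth landmarks on two faces
surprised = [10, 40, 95]  #brows high, mouth wide open: large spread
angry     = [35, 40, 45]  #brows pulled down, mouth closed: small spread

print(minmax(surprised))
print(minmax(angry))
```

Both faces now span the full 0-1 range, so the large absolute differences that told them apart are mostly gone; only the relative position of the middle point still differs.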
A less destructive way could be to calculate the position of all points relative to each other. To do this we calculate the mean of both axes, which results in the point coordinates of the sort-of “centre of gravity” of all face landmarks. We can then get the position of all points relative to this central point. Let me show you what I mean. Here’s my face with landmarks overlaid:
First we add a “centre of gravity”, shown as a blue dot on the image below:
Lastly we draw a line between the centre point and each other facial landmark location:
Note that each line has both a magnitude (distance between both points) and a direction (angle relative to image where horizontal=0°), in other words, a vector.
But, you may ask, why don’t we take for example the tip of the nose as the central point? This would work as well, but would also throw extra variance in the mix due to short, long, high- or low-tipped noses. The “centre point method” also introduces extra variance; the centre of gravity shifts when the head turns away from the camera, but I think this is less than when using the nose-tip method because most faces more or less face the camera in our sets. There are techniques to estimate head pose and then correct for it, but that is beyond this article.
There is one last thing to note. Faces may be tilted, which might confuse the classifier. We can correct for this rotation by assuming that the bridge of the nose in most people is more or less straight, and offset all calculated angles by the angle of the nose bridge. This rotates the entire vector array so that tilted faces become similar to non-tilted faces with the same expression. Below are two images, the left one illustrates what happens in the code when the angles are calculated, the right one shows how we can calculate the face offset correction by taking the tip of the nose and finding the angle the nose makes relative to the image, and thus find the angular offset β we need to apply.
Now let’s look at how to implement what I described above in Python. It’s actually fairly straightforward. We just slightly modify the get_landmarks() function from above.
import math
import numpy as np

data = {} #Make dictionary to pass the extracted features around

def get_landmarks(image):
    detections = detector(image, 1)
    for k,d in enumerate(detections): #For all detected face instances individually
        shape = predictor(image, d) #Get facial landmarks with the predictor class
        xlist = []
        ylist = []
        for i in range(1,68): #Store X and Y coordinates in two lists
            xlist.append(float(shape.part(i).x))
            ylist.append(float(shape.part(i).y))

        xmean = np.mean(xlist) #Find both coordinates of centre of gravity
        ymean = np.mean(ylist)
        xcentral = [(x-xmean) for x in xlist] #Calculate distance centre <-> other points in both axes
        ycentral = [(y-ymean) for y in ylist]

        landmarks_vectorised = []
        for x, y, w, z in zip(xcentral, ycentral, xlist, ylist):
            landmarks_vectorised.append(w) #Keep the raw coordinates as well
            landmarks_vectorised.append(z)
            meannp = np.asarray((ymean,xmean))
            coornp = np.asarray((z,w))
            dist = np.linalg.norm(coornp-meannp) #Vector length: distance between point and centre of gravity
            landmarks_vectorised.append(dist)
            landmarks_vectorised.append((math.atan2(y, x)*360)/(2*math.pi)) #Vector angle in degrees

        data['landmarks_vectorised'] = landmarks_vectorised
    if len(detections) < 1:
        data['landmarks_vectorised'] = "error" #If no face is detected, pass an error to the calling function
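The nose-bridge tilt correction described above is not part of the function itself. A possible sketch in plain Python (not the author's code; the indices 26 and 29 are assumed to be the top and bottom of the nose bridge, i.e. dlib points 27 and 30 given that point 0 is skipped, so verify them against your predictor):

```python
import math

def vectorise_landmarks(xlist, ylist):
    """Turn landmark coordinates into centre-relative (magnitude, angle) pairs,
    offsetting every angle by the nose-bridge angle to correct for head tilt.
    Indices 26/29 (assumed dlib points 27/30, as point 0 is skipped)."""
    xmean = sum(xlist) / len(xlist)   #centre of gravity
    ymean = sum(ylist) / len(ylist)

    #Angle the nose bridge makes with the image, in degrees
    anglenose = math.degrees(math.atan2(ylist[26] - ylist[29],
                                        xlist[26] - xlist[29]))

    features = []
    for x, y in zip(xlist, ylist):
        cx, cy = x - xmean, y - ymean
        features.append(math.hypot(cx, cy))                            #vector magnitude
        features.append(math.degrees(math.atan2(cy, cx)) - anglenose)  #tilt-corrected direction
    return features
```

Because the per-point angles and the nose-bridge offset rotate together, a tilted copy of a face produces the same features as the original.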
That was actually quite manageable, no? Now it’s time to put all of the above together with some stuff from the first post. The goal is to read the existing dataset into a training and prediction set with corresponding labels, train the classifier (we use Support Vector Machines with linear kernel from SKLearn, but feel free to experiment with other available kernels such as polynomial or rbf, or other classifiers!), and evaluate the result. This evaluation will be done in two steps; first we get an overall accuracy after ten different data segmentation, training and prediction runs, second we will evaluate the predictive probabilities.
Déja-Vu All Over Again
The next thing we will be doing is returning to the two datasets from the original post. Let’s see how this approach stacks up.
First let’s write some code. The approach is to first extract facial landmark points from the images, randomly divide 80% of the data into a training set and 20% into a test set, then feed these into the classifier and train it on the training set. Finally we evaluate the resulting model by predicting what is in the test set to see how the model handles the unknown data. Basically a lot of the steps are the same as what we did earlier.
The quick and dirty (I will clean and ‘pythonify’ the code later, when there is time) solution based off of earlier code could be something like:
import cv2
import glob
import random
import math
import numpy as np
import dlib
import itertools
from sklearn.svm import SVC

emotions = ["anger", "contempt", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"] #Emotion list
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat") #Or set this to whatever you named the downloaded file

clf = SVC(kernel='linear', probability=True, tol=1e-3)#, verbose = True) #Set the classifier as a support vector machine with linear kernel

data = {} #Make dictionary for all values

def get_files(emotion): #Get file list, randomly shuffle it and split 80/20
    files = glob.glob("dataset\\%s\\*" %emotion)
    random.shuffle(files)
    training = files[:int(len(files)*0.8)] #get first 80% of file list
    prediction = files[-int(len(files)*0.2):] #get last 20% of file list
    return training, prediction

def get_landmarks(image):
    detections = detector(image, 1)
    for k,d in enumerate(detections): #For all detected face instances individually
        shape = predictor(image, d) #Get facial landmarks with the predictor class
        xlist = []
        ylist = []
        for i in range(1,68): #Store X and Y coordinates in two lists
            xlist.append(float(shape.part(i).x))
            ylist.append(float(shape.part(i).y))

        xmean = np.mean(xlist) #Find both coordinates of centre of gravity
        ymean = np.mean(ylist)
        xcentral = [(x-xmean) for x in xlist] #Calculate distance centre <-> other points in both axes
        ycentral = [(y-ymean) for y in ylist]

        landmarks_vectorised = []
        for x, y, w, z in zip(xcentral, ycentral, xlist, ylist):
            landmarks_vectorised.append(w)
            landmarks_vectorised.append(z)
            meannp = np.asarray((ymean,xmean))
            coornp = np.asarray((z,w))
            dist = np.linalg.norm(coornp-meannp)
            landmarks_vectorised.append(dist)
            landmarks_vectorised.append((math.atan2(y, x)*360)/(2*math.pi))

        data['landmarks_vectorised'] = landmarks_vectorised
    if len(detections) < 1:
        data['landmarks_vectorised'] = "error"

def make_sets():
    training_data = []
    training_labels = []
    prediction_data = []
    prediction_labels = []
    for emotion in emotions:
        print(" working on %s" %emotion)
        training, prediction = get_files(emotion)
        #Append data to training and prediction lists, and generate labels 0-7
        for item in training:
            image = cv2.imread(item) #open image
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert to grayscale
            clahe_image = clahe.apply(gray)
            get_landmarks(clahe_image)
            if data['landmarks_vectorised'] == "error":
                print("no face detected on this one")
            else:
                training_data.append(data['landmarks_vectorised']) #append image array to training data list
                training_labels.append(emotions.index(emotion))

        for item in prediction:
            image = cv2.imread(item)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            clahe_image = clahe.apply(gray)
            get_landmarks(clahe_image)
            if data['landmarks_vectorised'] == "error":
                print("no face detected on this one")
            else:
                prediction_data.append(data['landmarks_vectorised'])
                prediction_labels.append(emotions.index(emotion))

    return training_data, training_labels, prediction_data, prediction_labels

accur_lin = []
for i in range(0,10):
    print("Making sets %s" %i) #Make sets by random sampling 80/20%
    training_data, training_labels, prediction_data, prediction_labels = make_sets()

    npar_train = np.array(training_data) #Turn the training set into a numpy array for the classifier
    npar_trainlabs = np.array(training_labels)
    print("training SVM linear %s" %i) #train SVM
    clf.fit(npar_train, training_labels)

    print("getting accuracies %s" %i) #Use score() function to get accuracy
    npar_pred = np.array(prediction_data)
    pred_lin = clf.score(npar_pred, prediction_labels)
    print("linear: %s" %pred_lin)
    accur_lin.append(pred_lin) #Store accuracy in a list

print("Mean value lin svm: %s" %np.mean(accur_lin)) #Get mean accuracy of the 10 runs
Remember that in the previous post, for the standard set at 8 categories we managed to get 69.3% accuracy with the FisherFace classifier. This approach yields 84.1% on the same data, a lot better!
We then reduced the set to 5 emotions (leaving out contempt, fear and sadness), because the 3 categories had very few images, and got 82.5% correct. This approach gives 92.6%, also much improvement.
After adding the less standardised and more difficult images from google, we got 61.6% correct when predicting 7 emotions (the contempt category remained very small so we left that out). This is now 78.2%, also quite an improvement. This remains the lowest accuracy, showing that for a more diverse dataset the problem is also more difficult. Keep in mind that the dataset I use is still quite small in machine learning terms, containing about 1000 images spread over 8 categories.
Looking at features
We derived different features from the data, but weren't sure whether this was strictly necessary. Was it? It depends! It depends on whether doing so adds more unique variance related to what you're trying to predict, on what classifier you use, etc.
Let’s run different feature combinations as inputs through different classifiers and see what happens. I’ve run all iterations on the same slice of data with 4 emotion categories of comparable size (so that running the same settings again yields the same predictive value).
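The feature subsets below can be produced by slicing the vector that get_landmarks() builds. Since it stores four values per landmark (x, y, vector length, angle), a small helper can pick out the combinations (a sketch, not the author's exact code; it assumes the 4-values-per-landmark layout from the function above):

```python
def select_features(vec, use_xy=True, use_polar=True):
    """Slice a landmarks_vectorised-style list (x, y, dist, angle per point)
    into the requested feature subset."""
    out = []
    for i in range(0, len(vec), 4):
        x, y, dist, angle = vec[i:i+4]
        if use_xy:
            out.extend([x, y])
        if use_polar:
            out.extend([dist, angle])
    return out

#Two landmarks' worth of fake data: (x, y, dist, angle) twice
vec = [1, 2, 3, 4, 5, 6, 7, 8]
print(select_features(vec, use_xy=True,  use_polar=False))  # [1, 2, 5, 6]
print(select_features(vec, use_xy=False, use_polar=True))   # [3, 4, 7, 8]
```

Running the same training script on each subset then gives the accuracy comparisons that follow.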
Using all of the features described so far leads to:
Linear SVM: 93.9%
Polynomial SVM: 83.7%
Random Forest Classifier: 87.8%
Now using just the vector length and angle:
Linear SVM: 87.8%
Polynomial SVM: 87.8%
Random Forest Classifier: 79.6%
Now using just the raw coordinates:
Linear SVM: 91.8%
Polynomial SVM: 89.8%
Random Forest Classifier: 59.2%
Now replacing all training data with zeros:
Linear SVM: 32.7%
Polynomial SVM: 32.7%
Random Forest Classifier: 32.7%
Now this is interesting! First note that there isn’t much difference in the accuracy of the support vector machine classifiers when using the extra features we generate. This type of classifier already preprocesses the data quite extensively. The extra data we generate does not contain much if any extra information to this classifier, so it only marginally improves the performance of the linear kernel, and actually hurts the polynomial kernel because data with a lot of overlapping variance can also make a classification task more difficult (here, it probably results in overfitting the training data). By the way, this is a nice 2D visualisation of what an SVC tries to achieve, complexity escalates when adding one dimension. Now remember that the SVC operates in an N-dimensional space and try to imagine what a set of hyperplanes in 4, 8, 12, 36 or more dimensions would look like. Don’t drive yourself crazy.
Random Forest Classifiers do things a lot differently. Essentially they are a forest of decision trees. Simplified, each tree is a long list of yes/no questions, and answering all questions leads to a conclusion. In the forest the correlation between each tree and the others is kept as low as possible, which ensures every tree brings something unique to the table when explaining patterns in the data. Each tree then votes on what it thinks the answer is, and most votes win. This approach benefits extensively from the new features we generated, jumping from 59.2% to 87.8% accuracy as we combine all derived features with the raw coordinates.
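The voting idea can be caricatured in a few lines of plain Python (the "trees" here are hand-written rules rather than learned ones, with made-up feature names, purely to show the voting mechanism):

```python
from collections import Counter

def forest_predict(trees, sample):
    """Let every 'tree' vote on a label and return the majority vote."""
    votes = [tree(sample) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

#Three toy 'trees', each keying on a different (invented) facial feature
trees = [
    lambda s: "happy" if s["mouth_open"] > 0.5 else "neutral",
    lambda s: "happy" if s["brow_raise"] > 0.3 else "neutral",
    lambda s: "happy" if s["mouth_open"] + s["brow_raise"] > 0.6 else "neutral",
]

print(forest_predict(trees, {"mouth_open": 0.8, "brow_raise": 0.1}))  # happy (2 votes to 1)
```

In a real random forest each tree is learned from a bootstrap sample of the data with a random subset of features, which is what keeps the trees decorrelated.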
So you see, the answer you likely get when you ask any scientist a direct question holds true here as well: it depends. Check your data, think twice and don’t be afraid to try a few things.
The last thing to notice is that, when not adding any data at all and instead presenting the classifiers with a matrix of zeros, they still perform slightly above the expected chance level of 25%. This is because the categories are not identically sized.
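Why unequal categories push a know-nothing classifier above 25% is easy to see with a majority-class baseline (the label counts here are invented for the example):

```python
from collections import Counter

#100 invented labels spread unevenly over four categories
labels = ["happy"] * 40 + ["neutral"] * 30 + ["anger"] * 20 + ["surprise"] * 10

winner, count = Counter(labels).most_common(1)[0]
baseline = count / len(labels)
print(winner, baseline)  #always guessing the biggest class scores 0.4, not the naive 1/4 = 0.25
```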
Looking at mistakes
Lastly, let’s take a look at where the model goes wrong. Often this is where you can learn a lot, for example this is where you might find that a single category just doesn’t work at all, which can lead you to look critically at the training material again.
One advantage of the SVM classifier we use is that it is probabilistic. This means that it assigns probabilities to each category it has been trained for (and you can get these probabilities if you set the 'probability' flag to True). So, for example, a single image might be "happy" with 85% probability, "angry" with 10% probability, etc.
To get the classifier to return these things you can use its predict_proba() function. You give this function either a single data row to predict or feed it your entire dataset. It will return a matrix where each row corresponds to one prediction, and each column represents a category. I wrote these probabilities to a table and included the source image and label. Looking at some mistakes, here are some notable things that were classified incorrectly (note there are only images from my google set, the CK+ set’s terms prohibit me from publishing images for privacy reasons):
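Each block below is one such probability row. A small helper to pair a row with its labels and pull out the top guesses might look like this (the helper name and the example numbers are illustrative; only predict_proba() itself is part of sklearn's API):

```python
emotions = ["anger", "contempt", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

def top_guesses(prob_row, labels=emotions, n=2):
    """Return the n most probable (label, probability) pairs for one prediction row."""
    ranked = sorted(zip(labels, prob_row), key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

#In the full script: prob = clf.predict_proba(npar_pred) gives one such row per image.
row = [0.032, 0.136, 0.012, 0.002, 0.004, 0.756, 0.042, 0.016]  #invented example values
print(top_guesses(row))  # [('neutral', 0.756), ('contempt', 0.136)]
```

Looking at the second guess as well as the first is useful below, because several mistakes turn out to be near misses.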
anger: 0.03239878
contempt: 0.13635423
disgust: 0.0117559
fear: 0.00202098
neutral: 0.7560004
happy: 0.00382895
sadness: 0.04207027
surprise: 0.0155695
The correct answer is contempt. To be honest I would agree with the classifier, because the expression really is subtle. Note that contempt is the second most likely according to the classifier.
anger: 0.0726657
contempt: 0.24655082
disgust: 0.06427896
fear: 0.02427595
neutral: 0.20176133
happy: 0.03169822
sadness: 0.34911036
surprise: 0.00965867
The correct answer is disgust. Again I can definitely understand the mistake the classifier makes here (I might make the same mistake..). Disgust would be my second guess, but not the classifier’s. I have removed this image from the dataset because it can be ambiguous.
anger: 0.00304093
contempt: 0.01715202
disgust: 0.74954754
fear: 0.04916257
neutral: 0.00806644
happy: 0.13546932
sadness: 0.02680473
surprise: 0.01075646
The correct answer is obviously happy. This is a mistake that is less understandable but still the model is quite sure (~75%). There definitely is no hint of disgust in her face. Do note however, that happiness would be the classifier’s second guess. More training material might rectify this situation.
anger: 0.0372873
contempt: 0.08705531
disgust: 0.12282577
fear: 0.16857784
neutral: 0.09523397
happy: 0.26552763
sadness: 0.20521671
surprise: 0.01827547
The correct answer is sadness. Here the classifier is not sure at all (~27%)! Like in the previous image, the second guess (~20%) is the correct answer. This may very well be fixed by having more (and more diverse) training data.
anger: 0.01440529
contempt: 0.15626157
disgust: 0.01007962
fear: 0.00466321
neutral: 0.378776
happy: 0.00554828
sadness: 0.07485257
surprise: 0.35541345
The correct answer is surprise. Again a near miss (~38% vs ~36%)! Also note that this is particularly difficult because there are few baby faces in the dataset. When I said earlier that the extra google images are very challenging for a classifier, I meant it!
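For reference, probability tables like the ones above can be produced with a minimal sketch along these lines (the toy features and labels below are made-up stand-ins for the tutorial’s landmark features; only the predict_proba() mechanics are the point):

```python
import numpy as np
from sklearn.svm import SVC

# Made-up stand-in data: 40 samples, 8 features, 4 emotion categories.
# In the tutorial these would be landmark feature vectors and labels.
rng = np.random.RandomState(42)
X_train = rng.rand(40, 8)
y_train = np.repeat([0, 1, 2, 3], 10)

# probability=True enables probability estimates (Platt scaling),
# which is what makes predict_proba() available
clf = SVC(kernel="linear", probability=True, tol=1e-3)
clf.fit(X_train, y_train)

# One row per prediction, one column per category; each row sums to 1
probs = clf.predict_proba(X_train[:2])
print(probs.shape)
```

Each row can then be written out next to the source image and its label to build the kind of table shown above.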
Upping the game – the ultimate challenge
Although the small Google dataset I put together is more challenging than the lab conditions of the CK/CK+ dataset, it is still somewhat controlled. For example, I filtered out faces that were more sideways than frontal-facing, images where the emotion was very mixed (happily surprised, for example), and images where the emotion was so subtle that even I had trouble identifying it.
A far greater (and more realistic) challenge is the SFEW/AFEW dataset, put together from a large collection of movie scenes. Read more about it here. The set is not publicly available, but the author was generous enough to share it with me so that I could evaluate the approach further.
Guess what: it fails miserably! It attained about 44.2% accuracy on the images when training on 90% and validating on 10% of the set. Although this is on par with what is mentioned in the paper, it shows there is still a long way to go before computers can recognize emotions with high enough accuracy in real-life settings. The set also includes video clips, which we will tackle in a later post together with convolutional neural nets.
This set is particularly difficult because it contains different expressions, facial poses and rotations for similar emotions. This was the authors’ purpose: techniques by now are good enough to recognize emotions on controlled datasets with images taken in lab-like conditions, approaching upper-90% accuracy in many recent works (even our relatively simple approach reached the low 90s). However, these sets do not represent real-life settings very well. One exception may be laptop webcams, because you always more or less face the device and sit at a comparable distance when using a laptop. This means that for applications in marketing and similar fields the technology is already usable, albeit with much room for improvement still available, and requiring some expertise to implement correctly.
Final reflections
Before concluding I want you to take a moment, relax, sit back and think. Take for example the SFEW set with real-life examples, where accurate classification quickly gets terribly difficult. We humans perform this recognition task remarkably well thanks to our highly complex visual system, which has zero problems with object rotation in all planes, different face sizes, different facial characteristics, extreme changes in lighting conditions or even partial occlusion of a face. Your first response might be “but that’s easy, I do it all the time!”, but it’s really, really, really not. Think for a moment about what an enormously complex problem this really is. I can show you a mouth and you would already be quite good at seeing an emotion. I can show you about 5% of a car and you could recognize it as a car easily; I can even warp and destroy the image and your brain would laugh at me and tell me “easy, that’s a car, bro”. This is a task that you solve constantly, in real time, without conscious effort, with virtually 100% accuracy, and while using the equivalent of only ~20 watts for your entire brain (not just the visual system). The average still-not-so-good-at-object-recognition CPU+GPU home computer uses 350-450 watts when computing. Then there are supercomputers like the TaihuLight, which requires about 15,300,000 watts (using in one hour what the average Dutch household uses in 5.1 years). At least at visual tasks, you still outperform these things by quite a large margin with only 0.00013% of their energy budget. Well done, brain!
Anyway, to try and tackle this problem digitally we need another approach. In another post we will look at various forms of neural nets (modeled after your brain) and how these may or may not solve the problem, and also at some other feature extraction techniques.
The CK+ dataset was used for training and validating the classifier in this article.
The SFEW/AFEW dataset used for evaluation is authored by and described in:
- A. Dhall, R. Goecke, S. Lucey and T. Gedeon, “Collecting Large, Richly Annotated Facial-Expression Databases from Movies”, IEEE MultiMedia 19 (2012) 34-41.
- A. Dhall, R. Goecke, J. Joshi, K. Sikka and T. Gedeon, “Emotion Recognition In The Wild Challenge 2014: Baseline, Data and Protocol”, ACM ICMI 2014.
215 Comments
Bas, August 11, 2016
This software is going to be Huuuuggeeee!
Zach, October 6, 2016
This is amazing – thank you for providing a human readable walkthrough! I was not learning much reading the many post-doc borg machine code style walkthroughs Google keeps pointing me to.
Paul van Gent, October 6, 2016
Thanks :)! This is exactly why I decided to work out something myself and share it. Glad it helped.
jettin, October 7, 2016
sir,
I was unable to install boost-python; I am getting an error like: mscv.jam no such file or directory. I have VS2015.
Paul van Gent, October 20, 2016
Hi Jettin,
I’ve never seen this error. mscv.jam should be msvc.jam though. Are you setting your compiler name correctly?
Jone, October 20, 2016
When I run the solution code above, the pred_lin I got was always 1.0 no matter how I changed the ratio of the training and test set. I just used the CK+ dataset, and put them into 8 respective folders. Did I miss something, or something wrong with the code?
Paul van Gent, October 20, 2016
Hi Jone,
If I run it it functions fine. What likely happens is that you have overlap between your testing and training data. For example if I set the size of the training set to 1.0 and leave the testing set at 0.1 it also returns 100% accuracy. This is because the model easily remembers what it has already seen, but then you still have no information about how well it generalizes. Use the settings from the tutorial.
Also, I’ve written the train/test split out because it gives a more clear image of what happens. If you can’t get it to work, look for a simpler approach such as SKLearns train_test_split() function.
Good luck!
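As a sketch of the simpler approach mentioned here, SKLearn’s train_test_split() keeps the training and testing data disjoint (the toy features below are made-up stand-ins for the landmark features):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Made-up stand-in data: 100 samples, 8 features, 8 emotion labels
rng = np.random.RandomState(0)
X = rng.rand(100, 8)
y = rng.randint(0, 8, 100)

# Hold out 10% for testing; train and test never overlap, so the
# reported accuracy reflects generalisation, not memorisation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0)

clf = SVC(kernel="linear")
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on unseen data only
```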
Stanislav, November 9, 2016
Hi all !
My configuration is: Win7 32-bit, Microsoft Visual Studio 15 (Community), Python 3.5, CMake 3.7.0, Boost 1.6.2.0
I tried these instructions and found a small bug in the string
b2 -a --with-python address-model=64 toolset=mscv runtime-link=static
You need to replace toolset=mscv with toolset=msvc !!! It’s not a joke; I found the right string option in bootstrap.bat
For win32 my string is:
b2 -a --with-python address-model=32 toolset=msvc runtime-link=static
work fine !!
Paul van Gent, November 9, 2016
Thanks Stanislav for catching that, seems to be a typo! It should indeed be msvc (microsoft visual c++). I’ve updated it.
Stanislav, November 9, 2016
Thanks.. And I remembered: the file /dlib/cmake_utils/add_python_module in the dlib library caused the error “Not find header for python-py34”.
Replace in the file add_python_module the following:
FIND_PACKAGE(Boost boost-python COMPONENTS python-py34 )
if (NOT Boost_FOUND)
FIND_PACKAGE(Boost boost-python COMPONENTS python-py35)
endif()
if (NOT Boost_FOUND)
FIND_PACKAGE(Boost COMPONENTS python3)
endif()
if (NOT Boost_FOUND)
FIND_PACKAGE(Boost COMPONENTS python)
endif()
to
FIND_PACKAGE(Boost COMPONENTS system)
if (NOT Boost_FOUND)
FIND_PACKAGE(Boost COMPONENTS thread )
endif()
if (NOT Boost_FOUND)
FIND_PACKAGE(Boost COMPONENTS python)
endif()
if (NOT Boost_FOUND)
FIND_PACKAGE(Boost COMPONENTS REQUIRED )
endif()
And everything will work fine! Sorry for my bad English..
Francisco Gonzalez Hernandez, November 24, 2016
Hi Paul, I’ve used your code and obtained some good results; your work is fantastic. By the way, I want to cite you in a scientific paper. Do you have any scientific paper you would like to be cited? Also, could you give me more information about the works used to create this one? I look forward to reading from you soon, and thanks.
Paul van Gent, November 25, 2016
Hi Francisco,
I’m happy to hear this! I’ve sent you a mail with detailed information. I don’t have the papers describing this methodology handy right now, but a quick search for “facial landmark generation” and “emotion recognition using support vector machines” should return a good overview of the field.
Cheers,
Paul
Ying, November 29, 2016
Hi, Paul,
Thanks for your work and your post. I am working on a project of emotion recognition right now and your post is a saver.
I have some doubts though, wondering if you have an answer.
Do you think it is possible to do all this work in Linux (Ubuntu)? Even more in Raspberry Pi (also Ubuntu)?
Thanks,
Ying
Paul van Gent, November 29, 2016
Hi Ying,
The great thing about Python is that it is very cross-platform. Make sure you install the right dependencies. Also, the code for reading from the webcam might be a bit different (in Linux it lives under /dev/video). What are you making?
Ying, December 5, 2016
I am making a facial expression (emotions) recognition algorithm which will associate users’ emotions with some specific pieces of music. Cause I am cooperating with some music artists. They will study the relation between emotions and music.
Ying, December 5, 2016
By the way, I sent my email address to request the dataset of face emotions you used in your previous post, but got no reply.
Kazuno, December 21, 2016
Hi, Paul…
Currently, I am working on a project of emotion recognition through the webcam. I’ve used your code and it really saved my life. Thanks for your work. But I’m a little bit confused with clf.predict: I don’t know how to show the emotion label for each video frame. Please help me out.
Paul van Gent, December 27, 2016
Glad to hear it helped. I’m not sure what you mean. Do you want to overlay the emotion label on a real-time video stream?
Kazuno, January 12, 2017
Owh, sorry. My bad. That’s not what I meant. Regarding your tutorial, you only show the training part and fitting the training set to the classifier, but not how to predict emotion through the webcam based on facial landmarks using clf.predict. I know how to use the fisherface prediction from your previous tutorial, but I just don’t know how to implement clf.predict in this tutorial. Please help me out. Thank you.
Paul van Gent, February 8, 2017
No, I didn’t use the fisherface prediction from the previous tutorial. Here I use Dlib to mark face features and different classifiers (random forest, SVM) to predict emotions.
Once the model is trained as shown in the tutorial, you can feed it images from a webcam just as well as images from a dataset. Based on this and the previous tutorial, you should be able to figure out how to do that :).
You can view the SKLearn documentation to see how the classifiers work. Some use a .predict() function, others a .score().
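As a rough sketch of that prediction step: once Dlib has returned 68 landmark points for a frame, they can be vectorised and passed to the trained classifier. The helper below is a simplified variant of the tutorial’s feature extraction (offsets from the face’s centre of gravity plus distances; the tutorial additionally adds an angle term), and `landmarks` and `clf` are assumed to come from Dlib and the trained model respectively:

```python
import math

def landmarks_to_features(landmarks):
    """Convert (x, y) landmark points into a feature vector of
    offsets from the face's centre of gravity plus distances."""
    xmean = sum(x for x, _ in landmarks) / len(landmarks)
    ymean = sum(y for _, y in landmarks) / len(landmarks)
    features = []
    for x, y in landmarks:
        features.extend([x - xmean, y - ymean,
                         math.hypot(x - xmean, y - ymean)])
    return features

# Hypothetical usage per webcam frame, once `landmarks` (68 points
# from Dlib) and a trained classifier `clf` are available:
# emotion_index = clf.predict([landmarks_to_features(landmarks)])[0]
```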
Shivam, January 28, 2017
Hi, Paul…
Currently, I am working on a project of emotion recognition and I am facing a problem: I’m not able to run the bootstrap.bat file, as it shows an error (there is no file named bootstrap.bat supplied with boost-python):
NameError: name ‘bootstrap’ is not defined
Can you please help me out!
thanks
Paul van Gent, February 8, 2017
Hi Shivam. What OS are you on?
Shivam, February 24, 2017
Sir, I am using Windows 8.1.
Could you please help me out?
Paul van Gent, February 24, 2017
Be sure to download the correct boost file. The most current is here:
If I download and extract the zip, the bootstrap.bat is there.
shivam, April 17, 2017
Sir I have downloaded the correct boost file only from
But the bootstrap.bat file is not available in the boost/1.63.0 folder.
Could you please help me out with any other method to build boost-python so that I can continue with my project?
thanks.
Gaurav, February 7, 2017
Hi Paul
Which version of SKlearn did you use?
Paul van Gent, February 8, 2017
Hi Gaurav. I use 0.18.
Blackwood, February 11, 2017
Hi,Paul
This is an amazing idea. But when I use Visual Studio 2013 to do it, I find the predict result is very bad: the probability doesn’t even reach 20%.
I use libsvm to train the model using “C_SVC, LINEAR”; every sample has 272 (68*4) features, and the model file is about 170 MB.
Is this right?
thank you.
Paul van Gent, February 13, 2017
Hi Blackwood. I don’t know if the model size is right. What kind of dataset do you use to train the classifier? My guess is that either the dataset is too small, doesn’t contain a lot (or too little) variance, or that the feature extraction somewhere goes wrong.
Blackwood, February 13, 2017
Hi, Paul
I use the CK/CK+ dataset, and pick the first and the last picture of each emotion sequence.
The first picture is neutral and the last picture is the emotion, one of the other 7 types.
There are about 650 pictures in training.
Is the dataset too small?
Does each sample have 272 (68*4) features?
What is your dataset size?
Thank you.
Paul van Gent, February 15, 2017
Hi Blackwood,
So strange, my set is also about 650 images (647). Could you send me your full code so I can have a look (info@paulvangent.com)? If I run my code it will still attain upper .8, lower .9 accuracy.
Cheers
Blackwood, February 16, 2017
Thank you Paul.
I checked the code and found the SVM parameter was not right, so I changed it.
Now the predict result is up to 85%. I have emailed the C++ code to your mailbox.
Do you plan to do it using a DNN?
Paul van Gent, February 16, 2017
Hi Blackwood,
Good to hear you found the issue. There is a deep learning tutorial planned for somewhere in the near future yes, but I will need to see when I find the time to make it :). Stay tuned!
Cheers
Sujay Angadi, February 14, 2017
Please provide proper links to download and install CMake and boost-python.
Paul van Gent, February 14, 2017
At the time of writing of the tutorial the links worked. As far as I can see they still do. What exactly do you mean?
Also, google is your friend 🙂
Paul van Gent, February 14, 2017
Ah I see the CMake link pointed to jupyter, strange. I updated it.
Ashwin, February 14, 2017
Hi Paul!
Great tutorial. I’m using your tutorial to detect emotion using an SVM, as it is part of my project.
My Configuration is as follows –
-Windows 10 64-bit
-Visual Studio 2015
-Python 2.7
-Opencv 2.4.9
-Cmake-3.8.0-rc1-win64-x64.msi
When I run the command – python setup.py install. It returns the following error –
libboost_python-vc140-mt-s-1_63.lib(errors.obj) : fatal error LNK1112: module machine type ‘x64’ conflicts with target machine type ‘X86’ [C:\Dlib\tools\python\build\dlib_.vcxproj]
Done Building Project “C:\Dlib\tools\python\build\dlib_.vcxproj” (default targets) — FAILED.
Done Building Project “C:\Dlib\tools\python\build\ALL_BUILD.vcxproj” (default targets) — FAILED.
Done Building Project “C:\Dlib\tools\python\build\install.vcxproj” (default targets) — FAILED.
Build FAILED.
“C:\Dlib\tools\python\build\install.vcxproj” (default target) (1) ->
“C:\Dlib\tools\python\build\ALL_BUILD.vcxproj” (default target) (3) ->
“C:\Dlib\tools\python\build\dlib_.vcxproj” (default target) (5) ->
(Link target) ->
libboost_python-vc140-mt-s-1_63.lib(errors.obj) : fatal error LNK1112: module machine type ‘x64’ conflicts with target machine type ‘X86’ [C:\Dlib\tools\python\build\dlib_.vcxproj]
0 Warning(s)
1 Error(s)
Time Elapsed 00:07:04.04
error: cmake build failed!
So I’m not able to move ahead. I really need your help in this.
Paul van Gent, February 14, 2017
Hi Ashwin,
Thanks, glad it’s helping!
The first bit and last bit of the error is what it’s about:
“libboost_python-vc140-mt-s-1_63.lib(errors.obj) : fatal error LNK1112: module machine type ‘x64’ conflicts with target machine type ‘X86’ [C:\Dlib\tools\python\build\dlib_.vcxproj]”
One of the things in your list seems to be 32-bit. You cannot mix 32 and 64 bit architectures. Verify that Python is 64 bit, Boost is 64 bit, Dlib is 64 bit. Did you build the boost library with the 64-bit flag?
Ashwin, February 15, 2017
Yes, I built the boost library with the 64-bit flag. By the way which version of Boost, Dlib and Cmake did you use in this tutorial?
Paul van Gent, February 15, 2017
Are they all 64-bit? What about your python distro? I used:
– Boost 1.61
– Dlib 19.2
– Cmake 3.7.1
However, I highly doubt that versions matter. The error is quite specific about there being an instruction set incompatibility.
Ashwin, February 16, 2017
Hi Paul. Thanks. I did install all 64 bit modules and it worked. But now when I execute the first code it gives me the following error-
predictor = dlib.shape_predictor(“shape_predictor_68_face_landmarks.dat”) #Landmark identifier. Set the filename to whatever you named the downloaded file
RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat
So do i need to install that particular file ?
Sorry I’m new to python.
Paul van Gent, February 16, 2017
Yes, this file is the trained model for the face detector in Dlib. Without it, it doesn’t know what a face looks like.
I think the download link is in the tutorial.
Good luck :)!
Yuki, March 21, 2017
Hi Paul.
I have the same problem as Ashwin.
I used:
-Python 2.7.13 64bit.
-dlib 19.1 (I don’t know how to check whether it is 64-bit or not)
-Boost 1.63.0 (with the flag “address-model=64”)
-cmake 3.6.1
This error has troubled me for more than a week; I tried a lot of methods and still can’t solve it. QAQ
Tony, February 15, 2017
Hi Paul,
Excellent tutorials. I have a doubt: the accuracy of both the landmark as well as the fisherface approaches is quite low for me, around the low thirties. Any idea why? I am using the same dataset and the same algorithm.
Any idea regarding this ?
Thanks
Paul van Gent, February 15, 2017
Hi Tony,
Unfortunately I have no idea from a distance. Could you send me your full code (info@paulvangent.com)? I’ll have a look.
Cheers
Blackwood, February 17, 2017
Hi Paul:
I hope to use more pictures to train the model, but I am not as lucky as you in getting the SFEW/AFEW dataset. I emailed the AFEW/SFEW download request a few days ago, but no reply has come back.
Can you tell me how and where i can get the dataset else ?
thank you.
Paul van Gent, February 17, 2017
There is no other place to get that one, however you could also try making your own. Extract faces from google image search, movies, art, etc. It’s more work but you have control over how big and how varied you want your dataset to be.
Luis, March 1, 2017
Hello. I followed both tutorials for emotion recognition and everything worked smoothly 🙂 Now I’m looking to implement it by using deep learning and/or neural networks. Could you please recommend me how to start? I mean, what could I use as inputs in a neural network? Could it be 5 images (one emotion each)? What would be the next step? I’m a bit lost here 😛 Thanks!
Paul van Gent, March 13, 2017
Hi Luis,
You could read up on Google’s tensorflow. Theano in Python is another popular deep learning framework. Or you could wait a week or two. I think I’ll have a deep learning emotion recognition tutorial ready by then :).
Cheers
Rachnaa R, February 21, 2018
Dear Sir,
First of all thank you for your efforts! Believe me it helps a lot. I am a beginner to this environment.
Secondly, I coudnt find the deep learning emotion recognition tutorial you were talking about. Do you mind providing me the link of the same?
Regards,
Rachnaa R.
Raaz, April 1, 2017
Hi Paul, I need help please: when running the command cmake .. it gives me the error ‘cmake..’ is not recognized as an internal or external command, operable program or batch file. What should I do? Thanks, great job with your website.
Paul van Gent, April 8, 2017
Hi Raaz. When installing CMake you need to check the box to “add it to your system path”, or manually add it to your system PATH variable.
John, April 2, 2017
Can a trained model be saved and used later?
Paul van Gent, April 8, 2017
Hi John. Sure.
import cv2
fishface = cv2.createFisherFaceRecognizer()
# [train the model]
fishface.save("filename.xml")
Load it later with fishface.load("filename.xml").
John, April 8, 2017
I mean the trained SVM? With fisherface I saved my model, but I want to save a model made with an SVM.
Paul van Gent, April 8, 2017
Check the docs for the module you use for your SVM model. It’s always in the docs.
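For an SKLearn SVM specifically, one common route is Python’s own pickle module; a minimal sketch with made-up stand-in data:

```python
import pickle
import numpy as np
from sklearn.svm import SVC

# Train a toy model (stand-in for the tutorial's landmark features)
rng = np.random.RandomState(1)
X = rng.rand(20, 4)
y = [0, 1] * 10
clf = SVC(kernel="linear").fit(X, y)

# Persist the trained classifier to disk, then load it back
with open("svm_model.pkl", "wb") as f:
    pickle.dump(clf, f)
with open("svm_model.pkl", "rb") as f:
    clf_loaded = pickle.load(f)

# The loaded model predicts identically to the original
print((clf_loaded.predict(X) == clf.predict(X)).all())
```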
Aparna, April 7, 2017
Hello. I keep getting the following error:
CMake Error at C:/dlib-19.4/dlib/cmake_utils/add_python_module:116 (message):
Boost python library not found.
Call Stack (most recent call first):
CMakeLists.txt:6 (include)
— Configuring incomplete, errors occurred!
See also “C:/dlib-19.4/tools/python/build/CMakeFiles/CMakeOutput.log”.
error: cmake configuration failed!
I have tried everything I could find on internet, but haven’t been successful in installing dlib. Any suggestions would be appreciated.
Thanks,
Aparna
Paul van Gent, April 8, 2017
Hi Aparna. It seems boost-python is not correctly installed or located. Did it build and install correctly? Did you set the environment variables to BOOST_ROOT and BOOST_LIBRARYDIR?
Aparna, April 8, 2017
Yes I did. I tried uninstalling and installing it thrice. It gets successfully installed without any errors or warning.
Aparna, April 8, 2017
And Yes, I did setup the BOOST_ROOT and BOOST_LIBRARYDIR path variables. But no luck yet.
Paul van Gent, April 8, 2017
You have me at a loss I’m afraid. What system do you use?
Mani Kumar, May 26, 2017
Hello Aparna,
Even I faced the same issue. After searching for answers on Google
and trying every answer for over a month, finally I found a solution that works for me.
I installed the dlib using its wheel file.
Download the wheel file from this link .
I used the dlib 18.17 and not 19.4 which is the latest version.
If you check the pypi it shows there’s no dlib 19.4 package for python 2.7.
Please check this link .
And make sure you have consistent installations.
All the programs in this tutorial work on my system.
And my system configurations:
OS – Windows 10 64bit.
Python 2.7 (anaconda) – 64bit
OpenCV 2.4 – 64bit
dlib 18.17.100 – cp27, win_amd64bit => should be used with python 2.7 – 64bit.
Regards,
Mani
Paul van Gent, May 26, 2017
Hi Mani. Thanks for linking to the pre-built wheel files, that is indeed also a solution. It seems your link was eaten by my spam protection. I think you mean this one:
Aparna, April 8, 2017
I am using Windows 10 64-Bit. The other software versions are as follows:
Boost_1.61.0
dlib_19.2
Visual Studio 2015
CMAKE_3.72
Anaconda2-4.3.1
All the above mentioned softwares are 64-bit versions.
I did follow all the steps that you have mentioned for installation.
Is there anything I am missing?
Aparna, April 12, 2017
Hey Paul,
Seems like there was some problem with my Windows. I tried installing it on Ubuntu and was successful at the setup and running your code. Great work with the article. Thanks for the awesome work. I get accuracy around 45-48% and I am not sure why. Any help is appreciated.
Thanks
wiem, July 30, 2017
Hi! I’m trying to install it on Ubuntu as well! However, I had some issues when I train the code. Would you help me please?
this is my email grinawiem@gmail.com
Thank you a lot
Paul van Gent, July 30, 2017
Hi Wiem. Please send me a message with the issues you’re facing and the code. The address is info@paulvangent.com
-Paul
Prakhar, April 23, 2017
Hey Paul,
On running bootstrap.bat I keep getting the error that cl is not recognised as an internal or external command. Where am I going wrong?
Paul van Gent, April 25, 2017
Hi Prakhar.
Also, if you didn’t do it yet, install ‘common tools for C++’ in visual studio.
maymuna, May 6, 2017
Hi, I’m facing the same error. I don’t have any Microsoft Visual Studio; how do I solve this error?
Paul van Gent, May 8, 2017
Install visual studio + common tools for visual C++
Rafa, April 25, 2017
Hello Paul I need your help,
I have followed the instructions given to work with dlib in Python. I think everything worked fine until python setup.py install; obviously something is not working well.
I have python 3.6 with conda running in window 7 64 bit
you can see the result of my command prompt here
thanks in advance
Paul van Gent, April 25, 2017
Hi Rafa. It seems your C++ compiler is not found. Did you install Visual Studio? If you did, go to control panel -> programs and features -> right click on the “microsoft visual studio” entry and select “change”. wait for it to initialise and check the “common tools for C++” under “visual C++”. You should be good to go then!
rafa, April 25, 2017
Thanks for your quick response.
You are right, there was a problem with Visual Studio 2017 (it is buggy and won’t compile C++11 code, which is a requirement to use dlib), so I installed Visual Studio 2015 with the C++ packs.
However, now I have another problem:
Do you have any idea how I can solve it?
Paul van Gent, April 25, 2017
Hi Rafa. The “SET” commands only apply to a command prompt session, so each time you close it, it forgets them. Before compiling dlib you need to do the SET BOOST_ROOT and SET BOOST_LIBRARYDIR again.
julio, May 4, 2017
Hi, I tried to install dlib on Windows 8 32-bit (CMake 3.8 win32-x86, Python 2.7, Visual Studio 2012, dlib 19.4) but I get an error like:
the Visual Studio you are using is too old and doesn’t support C++11; you need Visual Studio 2015 or newer.
My question is: should I just update to 2015?
Paul van Gent, May 4, 2017
Hi Julio. That is correct, and why the VS2015 is indicated under “Getting started”. Happy to hear you found the issue :).
joy, May 2, 2017
Hello Paul, I’m new to machine learning and I’m looking to execute this program you’ve written. However, I’m not clear on how we’re reading the dataset. Like in the previous post, do we create two folders ‘source_emotion’ and ‘source_images’? If not, it would be great if you could explain how you’re doing this. Pardon me if it’s a silly question. Thank you.
Paul van Gent, May 4, 2017
Hi Joy. If you’ve followed the previous tutorial you can use the same folder structure. The “source” folders are not necessary, rather the “dataset” folder with the emotion labels as subfolders. The code should pick up the same structure without problem. Please let me know if you run into problems.
Anil, May 3, 2017
Hi Paul, how can I implement the predict_proba() function in the above code to get the emotion label scores?
Paul van Gent, May 8, 2017
Hi Anil,
Make sure that when you define the SVM classifier, you set probability to True. In the tutorial it looks like:
“clf = SVC(kernel=’linear’, probability=True, tol=1e-3)”
Afterwards, “clf” becomes the classifier object. Call it with clf.predict_proba(X), where X is an array of shape (n_samples, n_features).
Also see the docs at:
Cheers,
Paul
Anil, May 27, 2017
After my implementation, it produced the following probabilities in an array: (‘prob score’, array([[ 0.12368627, 0.77254657, 0.01258662, 0.09118054]])). There are four probabilities here but I have five emotions; I think it should include five probabilities. What am I missing, where am I wrong?
Thanks for the reply.
Paul van Gent, May 27, 2017
Hi Anil. I also mailed you, but for the benefit of others my message here as well:
If I had to make a guess, the training data does not contain any examples of the missing fifth emotion. This way, the model will calibrate itself for only four emotions.
– Are all the emotion folders filled with images?
– Are the emotion labels in the list at the top of the code the exact same as the folder names? An empty list (and thus: no training data) will be returned if one of the folder names does not correspond with the emotion names.
Good luck!
Paul
Kowsi1997, February 26, 2018
hi paul,
What are n_samples and n_features, in order to use the predict_proba() function with the above code?
Paul van Gent, February 26, 2018
Hi Kowsi. I refer to a 2D ndarray or list, with shape “all samples, all features”. So, if you have 50 pictures with 20 features on each picture, you feed predict_proba an array of shape (50, 20).
pirouz, May 16, 2017
Hi,
Thanks for your remarkable job and for sharing it with us.
Would you please explain more about installing dlib? For example, what should I put after the cmake command?
I tried with the CMake GUI to configure and generate into the build folder, but at the end (python setup.py install) I get an error.
cheers
pirouz
Paul van Gent, May 18, 2017
Hi Pirouz. You type “cmake ..” when in the build folder, indicating you want to build the source from its parent directory.
What error are you getting? Any help I can offer is very much dependent on that.
Mani Kumar, May 26, 2017
Hi Paul,
I have gone through all your tutorials regarding emotion detection. All the code in your tutorials works on my system. Thank you for the good tutorials.
I am experimenting with my own ideas and methods to extract the information
related to emotion. And I am beginner to machine learning so I am not sure which method
or which library is good for my ideas.
I have a question.
Why are you using sklearn’s svm and not dlib’s svm or opencv’s svm to train and predict?
Reason for the question.
To reduce the dependency on external libraries.
Thank you,
Mani.
Paul van Gent, May 26, 2017
Hi Mani. I use SKLearn because of their scientific integrity for inclusion of algorithms (see:). Additionally: it’s because it is very versatile and contains much more ML algorithms than just support vector machines. I like to work with a package that has all I need, rather than select from different packages.
This is personal taste, you can achieve similar goals with different packages.
If you like to exchange ideas on what algorithm to use for your purposes, send me a mail: info@paulvangent.com.
Mani Kumar, May 29, 2017
Hi Paul,
Thank you for the reply.
I want to know: is SKLearn portable across platforms?
For example, Android.
And is the SKLearn API accessible from C++?
Regards,
Mani
Mjay, June 2, 2017
Hi Paul,
I used your code and hit this problem:
anglerelative = (math.atan((z-ymean)/(w-xmean))*180/math.pi) – anglenose
RuntimeWarning: divide by zero encountered in double_scalars
Where did I make a mistake?
Thanks for the reply; the tutorial is very good 🙂
Paul van Gent, June 12, 2017
It means that there is a division by zero (you can’t divide by zero..). However, numpy should step over it and return a NaN value. You can try catching the error and removing the data entry from the dataset if you so wish. Good luck!
-Paul
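One way to sidestep that division entirely is math.atan2(), which takes the numerator and denominator separately and handles a zero denominator gracefully. A minimal sketch (variable names follow the snippet quoted above; the values are made up):

```python
import math

# atan((z - ymean) / (w - xmean)) divides by zero when w == xmean;
# atan2 computes the same angle without the explicit division
z, ymean = 120.0, 100.0
w, xmean = 80.0, 80.0   # w - xmean == 0: plain atan would fail here
anglenose = 0.0

anglerelative = math.degrees(math.atan2(z - ymean, w - xmean)) - anglenose
print(anglerelative)  # 90.0
```

Note that atan2 returns the full -180 to 180 range rather than atan’s -90 to 90, so some angles come out shifted by 180 degrees; whether that matters depends on how the features are used downstream.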
Juergen, June 11, 2017
Hello Paul,
thank you for this tutorial, this is excellent work!
Just in case anyone was searching for the shape_predictor_68_face_landmarks like I did (probably I am blind), you can find it here:
Paul van Gent, June 12, 2017
It seems I didn’t mention this in the tutorial, thanks for noticing it. I’ll add it
-Paul
simux, June 20, 2017
Hello Paul,
Thank you for your job, it is helpful.
I used your code to extract feature vectors, but I got values with a negative sign and in this format: -1.254179104477611872e+02
Is my work correct?
Thank you
Paul van Gent, June 20, 2017
Hi Simux. That depends on what feature vectors you are extracting. If you’re following the code from the tutorial, from the top of my head they can be negative (going from -180 to 180 for the angles). However, if you’re extracting other features, you need to tell me a bit more about what exactly you’re doing.
-Paul
simux, June 21, 2017
Hi Paul,
Yes, I am following your tutorial, so I used your method of computing the Euclidean distance. The output vector has a dimension of 268. However, in your tutorial you computed the distance between the centre of gravity and each of the 68 points, so it should have dimension 136.
Why am I getting 268?
Thank you
Aniket More, July 5, 2017
Hi Paul. Here too I am getting a mean accuracy of 35%. Maybe the issue is with the version of OpenCV.
Paul van GentJuly 5, 2017
That is strange indeed. This tutorial doesn’t rely on opencv except for accessing your webcam and displaying a video feed. The problem must be in the data then. How many images are in your dataset?
Could you send me your code to info@paulvangent.com? I’ll run it on my set..
Aniket MoreJuly 5, 2017
The data set has 652 images. I am using the same code without any modification still I will mail it to you. Thank you.
KVRanathungaJuly 12, 2017
Sir,
I have got the following error when executing the final code you have posted:
.”
what should i do now..????
KVRanathungaJuly 12, 2017
Sir,
I have got an error as follows when I’m executing the final code you have given.
.”
No idea what to do next. sir please help me…
Paul van GentJuly 14, 2017
The error is trying to tell you that the arrays passed to check_X_y() are empty. Try debugging why this is the case. Are the files correctly read? Is the pixel data correctly stored in an array? Are the data and the label appended correctly to the X and y arrays?
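A few illustrative checks you could drop in just before clf.fit() to pin down which of those questions is failing (the helper name is made up, not part of the tutorial):

```python
# Hypothetical sanity checks to run just before clf.fit(npar_train, training_labels);
# the names follow the tutorial but this helper itself is illustrative.
def check_training_set(data, labels):
    assert len(data) > 0, "no images made it into the training set - check paths/glob"
    assert len(data) == len(labels), "every feature vector needs exactly one label"
    lengths = {len(sample) for sample in data}
    assert len(lengths) == 1, "feature vectors have inconsistent lengths: %s" % lengths
    return True

print(check_training_set([[1.0, 2.0], [3.0, 4.0]], [0, 1]))  # True
```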
wiemJuly 30, 2017
Hi Sir,
I have got an error as follows when I’m executing the final code you have given.
Enter 1 to train and 2 to predict
1
Making sets 0
training SVM linear 0
/usr/local/lib/python3.5/dist-packages/sklearn/utils/validation.py:395:.
DeprecationWarning)
Traceback (most recent call last):
File “faceDetectionDlib.py”, line 168, in
main()
File “faceDetectionDlib.py”, line 135, in main
clf.fit(npar_train, training_labels)
File “/usr/local/lib/python3.5/dist-packages/sklearn/svm/base.py”, line 151, in fit
X, y = check_X_y(X, y, dtype=np.float64, order=’C’, accept_sparse=’csr’)
File “/usr/local/lib/python3.5/dist-packages/sklearn/utils/validation.py”, line 521, in check_X_y
ensure_min_features, warn_on_dtype, estimator)
File “/usr/local/lib/python3.5/dist-packages/sklearn/utils/validation.py”, line 424, in check_array
context))
ValueError: Found array with 0 feature(s) (shape=(1, 0)) while a minimum of 1 is required.
________________________________________________________
Would you please help me and guide me on how I could solve this error and train this code using other datasets????
Paul van GentJuly 30, 2017
As stated in reply to your question on the other posts: the last bit of the error tells you what's going wrong: “ValueError: Found array with 0 feature(s) (shape=(1, 0)) while a minimum of 1 is required.”. Apparently you're feeding an empty object to the classifier. Try debugging why no images are loaded (is the path correct? can it find the files? are permissions set ok?)
-Paul
wiemAugust 12, 2017
Hi Paul! I'm using the CK+ dataset but I am getting a mean accuracy of 43%, and I have no idea why it is so low. Could you tell me where the issue is?
Thank you
Paul van GentAugust 13, 2017
No I cannot tell at a distance. You can check a few things:
– Are you using the correct versions of all packages?
– Where are the mistakes made? Is there a pattern?
– Are all paths correct? Are all images in the correct folders?
– Some users have reported glob.glob to behave differently in python 3, make sure lists are sorted properly when creating the dataset
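On that last point, a small demonstration of the sorting fix (the throwaway folder here is just for illustration):

```python
import glob
import os
import tempfile

# glob.glob makes no ordering guarantee across platforms, so sort explicitly
# before pairing images with labels. Demo with a throwaway folder:
tmp = tempfile.mkdtemp()
for name in ("S010.png", "S001.png", "S005.png"):
    open(os.path.join(tmp, name), "w").close()

files = sorted(glob.glob(os.path.join(tmp, "*.png")))
print([os.path.basename(f) for f in files])  # ['S001.png', 'S005.png', 'S010.png']
```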
wiemAugust 14, 2017
Hi Paul! Thanks a lot for your quick answer. I sent you the code I used, which gave me 43% accuracy !!! Will you please check it and tell me where I went wrong? My email is grinawiem@gmail.com.
Thanks
wiemAugust 14, 2017
Hi Paul! I tried to train the code with only one emotion in one folder, and the accuracy became 50% !!!
I think the main problem is the glob.glob:
————————————————————————————————-
files = glob.glob(“/home/wiem/Bureau/CK+/train/*/*” )
————————————————————————————————-
in the original code is written : files = glob.glob(“dataset/%s/*” %emotion)
however when I use %emotion it gives me this error :
—————————————————————————————————–
Enter 1 to train and 2 to predict
1
Making sets 0
working on anger
Traceback (most recent call last):
File “test8.py”, line 128, in
main()
File “test8.py”, line 103, in main
training_data, training_labels, prediction_data, prediction_labels = make_sets()
File “test8.py”, line 64, in make_sets
training, prediction = get_files(“/home/wiem/Bureau/rafd/Evaluation/train/” )
File “test8.py”, line 22, in get_files
files = glob.glob(“/home/wiem/Bureau/CK+/train/*/*” %emotion)
TypeError: not all arguments converted during string formatting
—————————————————————————————————-
would you tell me please
Paul van GentAugust 14, 2017
Hi Wiem. With only one emotion SKLearn throws an error: you cannot fit a discriminative model on one outcome! I expect something goes wrong there. As I mentioned at your other post: the likely issue is glob.glob sorting behaviour.
Seems you’re running into some basic Python problems such as string formatting. You really need to have a grasp of these things and other python concepts before trying more complex exercises such as this tutorial.
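For reference, a tiny sketch of how the %-formatting is meant to work (paths illustrative): the %-operator needs a matching %s placeholder in the string, which is what the TypeError above is complaining about.

```python
# "some/path/*" % emotion fails with "not all arguments converted during
# string formatting" because there is no %s placeholder to receive emotion.
emotion = "anger"
pattern = "dataset/%s/*" % emotion
print(pattern)  # dataset/anger/*
```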
I recommend you follow some basic tutorials. A good one is
Good luck!
wiemAugust 17, 2017
Hi Sir,
Thank you a lot for your help! I figured out what was wrong. It’s just like you said the path of the data set was wrong. And, now the training works very well. However, would you please explain to me how I could Test and evaluate the training model and get its accuracy ?
Thanks
Neeraj PanchbhaiAugust 17, 2017
Im not able to install dlib successfully please help.
Paul van GentAugust 18, 2017
Never, ever ask for help without
– a detailed description
– what goes wrong
– what you have tried
wiemAugust 21, 2017
Hi Paul! I wonder about the accuracy given after the 10 runs in:
____________________________________________________________________________________________________
print(“Mean value lin svm: %s” %np.mean(accur_lin)) # Get mean accuracy of the 10 runs
____________________________________________________________________________________________________
Is that the training value, or the prediction on the test set of the created model? Because this code gives me an accuracy of 0.96 with the MUG dataset. So will you explain this to me?
Thanks
Paul van GentAugust 21, 2017
Hi Wiem. It's the test accuracy: the accuracy on the training dataset is not particularly useful, since it doesn't tell you anything about model performance, only model retention. The different accuracy with the MUG dataset is most likely because the data is structured differently, and as I mentioned before, there is likely some issue on your system with glob.glob sorting the returned images in the CK+ set.
However, neither accuracies tell you much about how the model will perform: always collect some data that is similar to that your system of application will use when functioning.
As mentioned before, I truly recommend you dig into Python (and machine learning theory) a bit more before attempting these kinds of projects. This will help you find faults easier.
wiemAugust 21, 2017
Thank you very much, Paul, for your explanation and your advice; I appreciate your help. I will follow your suggestions carefully.
Thanks
SergioAugust 25, 2017
Thanks for the guide!! Finally I could install dlib 19.4 for Python on Windows without errors.
ThariAugust 26, 2017
Everything worked fine until “python setup.py install”. I tried this many times but the install process did not go further than “face_recognition.cpp”. I waited more than 2 hours.
My system is windows 10, visual studio 2017, python 2.7 (64-bit), RAM- 8GB
cmake version 3.9.1 Dlib 19.4
Here I attached the command window in a text file for your reference.
nadhirSeptember 20, 2017
Hi Paul, I’ve used your code and I’ve obtained some good results, your work is fantastic. Thank you for sharing it.
I want to ask if you used the original image size in the CK+ dataset or if there is an optimal size to get better results. Then, I want to cite you in a scientific paper as a reference; do you have any scientific paper you would like cited? Also, I don't know if you could give me more information about the works used to create this one. I look forward to reading from you soon, and thanks.
Cordially
Paul van GentSeptember 22, 2017
Hi Nadhir. Great! The citation can be to the article on the website, its format is at the top of the post, it is:
“van Gent, P. (2016). Emotion Recognition Using Facial Landmarks, Python, DLib and OpenCV. A tech blog about fun things with Python and embedded electronics. Retrieved from:”
As far as the size of the images goes, it was 350×350 pixels as stated in the other tutorial where the pre-processing was done. I’m not sure about the absolute optimal size for this, but I’m sure good performance can also be had with smaller images. Of course the larger the image, the smaller the facial movement you can quantify, but for the purposes of these tutorials (archetypical emotional expressions) the size was more than enough.
Good luck. If you want you can share the paper after publication and we can put it up on the article as well.
PhillemonSeptember 24, 2017
This is great, however I am finding it difficult to obtain the ck+ dataset. Can you please send it to my email. phillemonrasekgwalo@gmail.com thank you
Paul van GentSeptember 26, 2017
Hi Phillemon. I’m sorry, the terms of the dataset prohibit me sharing it. You need to obtain it from the original source, or find another one I’m afraid.
Randall TheunsOctober 13, 2017
Hey Paul,
Great work on this. I'm currently busy implementing this for a minor I'm doing. I've prepared the dataset and trained the model, and it seems to give good accuracy (~85% with CK+). Right now I'm trying to add webcam support to allow for semi-real-time emotion recognition, but whenever I use SVC.predict on a vectorized facial detection, I only get either 5 or 7 as predictions. If I use predict_proba instead, I get an array with only 7 probabilities.
Do you have any clue why this happens?
The code is available on github:
In particular, src/webcam.py and src/frames.py matter.
Paul van GentOctober 14, 2017
Hi Randall. Several things might cause the prediction to revert to only two classes:
– Make sure you keep everything standardised. Is your face on the webcam image much larger or much smaller in terms of pixel size than the training data? Resize it before working with landmark coordinates
– What’s happening in the 15%? Are there one or two categories that host most mistakes?
– Are you expressing extreme emotional expressions? The CK+ dataset has extreme expressions. an ML model classifies correctly only the type of data you train it on.
SVC.predict_proba() works just like that: it outputs a list of decision probabilities based on which the classification is made. If you feed SVC.predict_proba() an array of image data, it gives a matrix of predictions back.
You could also try making your own dataset to append to the CK+ one. Maybe you can get 10 of your friends to each have a few photos made for each emotional expression. This might help predictions improve as well, since it trains the model explicitly on data gathered from the same sensor as from which predictions are made (your webcam).
Lastly, please do me a favour and cite my blog in the readme.md. This helps me get more traffic and in turn helps me produce more content.
-Paul
Randall TheunsOctober 14, 2017
Hi Paul,
Thanks for the swift and detailed reply.
There's a good chance that the size of the face in the webcam is the problem. I'll have to look into that. Due to some time constraints and deadlines, I don't have too much time to troubleshoot the 15% (only have a total of ~8 weeks, of which 4 remain to create this prototype, and I still have to visualise it).
The predict_proba ‘issue’ I was talking about was more about the number of probabilities it returns (7, even though it was trained with 8 emotions), but this might have to do with too low probability, or just the same issue as above.
I’ll see if I can increase the dataset a bit.
You were already cited in the readme!
Thanks!
– Randall
Paul van GentOctober 15, 2017
Hi Randall. Are you sure that the model wasn’t trained on 7 emotions? It should return probabilities for the same number of classes as on which it has been trained, no matter the low probabilities..
Don’t put too much thought into the remaining 15%, you will never reach 100% under realistic conditions (an aggregate of 60-80% would already be very good performance in real-world settings).
Thanks for the citation, I must have missed that
-Paul
Randall TheunsOctober 15, 2017
Hey Paul,
Just as a quick reply and perhaps a hint for other people: the code used to sort and prepare the dataset assumed 8 emotions, including *happy*. The code to predict emotions, however, assumes the same 8 emotions but with *happiness*. This means that, to train the happy emotion, the model was looking for a folder called happiness rather than happy.
Fixing this simple issue seemed to have fixed the 7-or-8 predict_proba issue. Another quick note about the above code:
In the landmark pieces of code, range(1, 68) is used, therefore only grabbing 67 of the 68 landmarks.
Thank you for the article and quick replies.
– Randall
JohnnyOctober 16, 2017
Hi Paul. Very nice job.
But I'm struggling with the following thing:
I want to classify one sample from my webcam. I do not know what function to use and what parameter to give.
I mean, after clf.fit (training) I want to predict a frame from the webcam. I used clf.predict_proba but the expected parameter must be equal to the size of the training data (this is the error received).
Do you know how to proceed to classify one frame from webcam ?
Br
JohnnyOctober 16, 2017
Solved with predict_proba()
VíctorOctober 19, 2017
Hello, I have two questions from a line of the code.
1) I do not understand why, to calculate the angle of each landmark point, you do:
landmarks_vectorised.append(int(math.atan((y - ymean) / (x - xmean)) * 360 / math.pi))
This would make sense to me if x and y were the coordinates of each landmark point. However, x is xcentral and y is ycentral, and xcentral is x - xmean. So in the line above, as I understand it, you are subtracting the mean twice.
2) In the same line code from before:
landmarks_vectorised.append(int(math.atan((y - ymean) / (x - xmean)) * 360 / math.pi))
I do not understand why, to convert the angle to degrees, you multiply by 360/pi and not 360/(2*pi), which is what I was expecting.
Paul van GentOctober 23, 2017
Hi Victor. Thanks for catching that, and you’re right of course. I must’ve been asleep when writing that line I guess! I’ve updated it:
landmarks_vectorised.append((math.atan2(y, x)*360)/(2*math.pi))
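A quick sanity check of that corrected line (the standalone function wrapped around it is just for illustration):

```python
import math

def landmark_angle(x, y):
    # The corrected line from above: full-circle angle in degrees, in (-180, 180].
    return (math.atan2(y, x) * 360) / (2 * math.pi)

print(landmark_angle(1, 1))    # 45 degrees: up and to the right of the centre
print(landmark_angle(-1, -1))  # -135 degrees: down and to the left
```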
-Paul
MunOctober 23, 2017
Hi Paul
I followed all steps from ‘’ and also ‘’.
Anyway i wondered how to show the results such as “anger: 0.03239878
contempt: 0.13635423 disgust: 0.0117559 fear: 0.00202098 neutral: 0.7560004
happy: 0.00382895 sadness: 0.04207027 surprise: 0.0155695”
You mentioned to use ‘predict_proba()’ function to show the results about emotions. Do i need to make new script apart from main code? or, just add like
print(“Emotion: \n{}” .format(np.argmax(gbrt.predict_proba(testvalue), axis))), this one ??
Paul van GentOctober 25, 2017
Hi Mun. Yes, predict_proba() is a function of the SKLearn classifier:
clf = SVC(kernel=’linear’, probability=True, tol=1e-3) #Here make sure you set “probability” to true, otherwise the model cannot return the decision weights later on.
#train model here
clf.predict_proba(testvalue)
– Paul
MunNovember 1, 2017
Thank you for you kindness! I will try it 🙂
robNovember 2, 2017
Good day Sir, I am facing some issues installing Boost.Python. I downloaded the latest version and ran bootstrap.bat in my cmd, but I am getting this error:
cl not recognized as an internal or external command
failed to build boost.build engine
RobNovember 3, 2017
Hello sir, I downloaded the latest version of Boost.Python and when trying to build it in the command prompt I got this error:
c:\boost>build.bat
‘build.bat’ is not recognized as an internal or external command,
operable program or batch file.
c:\boost>bootstrap.bat
Building Boost.Build engine
Failed to build Boost.Build engine.
Please consult bootstrap.log for further diagnostics.
You can try to obtain a prebuilt binary from
Also, you can file an issue at
Please attach bootstrap.log in that case.
Please, I need your help on this, Sir. What do you think I am doing wrong? I am using the Visual Studio 2017 developer command prompt.
Paul van GentNovember 8, 2017
Did you install CMake and add it to the PATH variable?
robNovember 8, 2017
I already did that but I'm still having the same error.
OranNovember 8, 2017
hi,
and thanks for your work and publishing it…..
I just wanted to mention that I was able to get the CK+ and CK databases here:
KooperNovember 25, 2017
Hi.
I’m trying to follow this tutorial but I’m stuck at beginning….
I don’t know why I can’t download the CK+ dataset ?!
I end up getting 403 forbidden as response ?
Is there anyone who can help me ?
Paul van GentDecember 4, 2017
Hi Kooper. The availability of the dataset is intermittent. Unfortunately there is not much we can do about this. You could look at alternatives such as the Yale Face set.
NicoJanuary 6, 2018
I have few questions.
1. Why are the distances not scaled? For example, if you have a face near the camera you will get big distances from the centre of gravity. If the face is far away, the distances are smaller.
2. landmarks_vectorised.append(w): what if the face is tilted? The difference between x and xcentral will change. Why is no correction applied? Also, why is it not scaled (question 1) if the face is near the camera or far away?
3. landmarks_vectorised.append((math.atan2(y, x)*360)/(2*math.pi)). Why the correction angle for tilt face is not applied?
4. landmarks_vectorised.append((math.atan2(y, x)*360)/(2*math.pi)) . What should we do around 180 degree? If we have 2 photos of same emotion and lets suppose that we take one same point in both images. in one image the angle calculated will be -178 degree and in the other will be +178. That is a big difference in value but the points are very close each other
(E.g. x differene for first image is 0.002 and x difference for second is -0.0002. Let suppose that y difference is possitive and it is the same for both picture)
Thanks!
I appreciate your work !
Paul van GentJanuary 25, 2018
Hi Nico. Apologies for the late reply, I’ve been at an international conference followed by a holiday. To answer your questions:
1. In the two tutorials here I use standardised images. I agree that this is not really that clear from the text. I definitely recommend that when using non-standardised sources such as a webcam that you detect the face first, then crop the image to a standardised size, before attempting landmark points detection.
2. & 3. It seems the tilt correction has dropped from the code. Some time ago I migrated to another plugin for all the code, since my old one became unstable with newer wordpress versions.. Thanks for noticing this. I’ll upload the full code tomorrow evening when I’m back home.
4. That’s a valid point. Some of these issues are prevented by rotating the point matrix, but some will remain. I’m unsure of a direct fix here, apart from having a larger dataset where both instances occur with reasonable frequency… If you have any ideas let me know!
Cheers,
Paul
NicoJanuary 29, 2018
Ok, thanks for the answers. You can also change the regularization parameter C in the case of a linear kernel for a better score, and gamma for poly or rbf. I obtained 2% better by tuning these (I used optunity).
I have also observed that the emotion sadness is not recognized very well (68%; this emotion drags the rate down for me). Now I'm thinking about how to handle this for a better score on this emotion. For 6 emotions I got 87% for poly and 86% for linear. I tried making one-vs-all models but I cannot improve this rate (and the recognition time increases). Maybe I will try to train a sadness-vs-all model with LBPH or eigenvectors. I also want to try the Gabor wavelet method but I must read more about it.
Paul van GentFebruary 21, 2018
Yes hyperparameter tuning was not applied, so there is definitely a gain to get there! Keep me updated with your progress.
– Paul
LeonardoJanuary 25, 2018
Hi!! this is awesome! congrats and thank you!!
I would like to suggest you something:
You could make a bottleneck folder where you put all your landmark detection files before you start.
Next you run getFiles on these landmark files.
In the end, you have detected all the landmarks only once, and have avoided a lot of computing!!
Thanks!!
Paul van GentJanuary 25, 2018
You’re absolutely right, this would be a great approach when fine-tuning the algorithm :).
– Paul
aliJanuary 26, 2018
I have a problem with dlib.shape_predictor(“shape_predictor_68_face_landmarks.dat”). I saved my test in the same folder as dlib but I don't know where the problem is. The error is: Traceback (most recent call last):
File “C:\Python27\alimcheik\TestinLandmarkDetecting.py”, line 7, in
predictor = dlib.shape_predictor(“shape_predictor_68_face_landmarks.dat”)
RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat
Paul van GentJanuary 26, 2018
Hi Ali. Strange! I would say make sure you have the filename 100% right, and that you have permission to read the file (although I can not really imagine why this would be a problem..).
You can also try calling dlib.shape_predictor() with the full path: dlib.shape_predictor(“C:\\Python27\\alimcheik\\shape_predictor_68_face_landmarks.dat”). Be sure to use double backslashes because a single one is interpreted as a character escape.
– Paul
NoemiFebruary 5, 2018
Hi, I’m learning python and when I run your first code I get an error. How can I fix it?
File “C:\Users\noemi\Documents\Python\LandMarks.py”, line 22
for i in range(1,68): #There are 68 landmark points on each face
^
IndentationError: unexpected indent
[Finished in 0.3s]
Paul van GentFebruary 14, 2018
Hi Noemi. These kinds of errors indicate wrong indentation. See this link for more info.
If the indents look good visually and you still get the error, you’re likely mixing spaces and tabs.
– Paul
KowsalyaFebruary 21, 2018
Hi Paul,
I’m getting following error on running the last code.Also I have given path like this;(i’m using CK dataset)
files = glob.glob(“\\F:\\proj\\emotion\\cohn-kanade\\S010\\%s\\*” %emotion)
or do I need to split the dataset into different emotion folders (happy, sad, neutral..)????
Making sets 0
working on anger
working on contempt
working on disgust
working on fear
working on happiness
working on neutral
working on sadness
working on surprise
training SVM linear 0
Traceback (most recent call last):
File “emo.py”, line 96, in
clf.fit(npar_train, training_labels)
File “G:\ana\envs\keras_env\lib\site-packages\sklearn\svm\base.py”, line 149, in fit
X, y = check_X_y(X, y, dtype=np.float64, order=’C’, accept_sparse=’csr’)
File “G:\ana\envs\keras_env\lib\site-packages\sklearn\utils\validation.py”, line 573, in check_X_y
ensure_min_features, warn_on_dtype, estimator)
File “G:\ana\envs\keras…
Paul van GentFebruary 21, 2018
Hi Kowsalya. From what it seems, you need to pre-process the data. Be sure you follow the data preprocessing steps in the previous post before delving in here.
– Paul
Kowsi1997February 26, 2018
thank you so much paul !!!
Rachnaa RFebruary 23, 2018
“b2 install” is not being executed by CMD
says “b2 is not recognized as an internal or external command”
If anyone knows how to fix this, please help.
NandyMarch 1, 2018
What is meant by 'features'? Can you list the features that are extracted in the above code?
Paul van GentMarch 1, 2018
Hi Nandy. A ‘feature’ is a machine learning term for a particular property in the source data that is expressed numerically. For example if you want to recognise a cat vs a human on a photo, you could have a variable that states whether the thing on the photo has a tail or not. This variable ‘tail’ (yes/no) could be considered a feature.
See also this link for more info.
– Paul
NandyMarch 2, 2018
Thank you paul!!! what are the features in the above code
Paul van GentMarch 4, 2018
The landmarks detected on the face.
AdarshMarch 6, 2018
Hey Paul,
how can we adapt this code to take just a single image as input and return the appropriate emotion label as output?
Paul van GentMarch 6, 2018
After you’ve trained the model (you can save it and re-load later if necessary), you can just use its ‘predict()’ function and pass an image data array.
ShobiMarch 6, 2018
Hello Paul,
I'm using the CK+ dataset; what should n_samples and n_features be in order to use predict()? Please help me out.
Paul van GentMarch 7, 2018
You feed it the features you generate from the data. In the case of this tutorial those are the coordinates of the detected landmarks.
More info on predict() can be found here:
Nitin AgarwalMarch 7, 2018
Hi Paul,
Without cleaning the neutral folder I got an accuracy of around 85%, but after cleaning it the accuracy drops to 75%. Can you explain why?
Paul van GentMarch 21, 2018
In the Neutral folder, before cleaning, there’s a lot of repeating images from the same person. This biases the classifier upwards because the total variance in the set is less (images with the same person with the same expression differ very little from each other).
– Paul
Liam UreApril 1, 2018
Hi,
I am currently attempting to do this for a university project. Unfortunately I missed most of the semester due to illness, so I am on my own now! I am currently pulling my hair out trying to figure out where to go from here. I have everything above working, but what I want to do is use the webcam to detect the user's emotion in real(ish) time. I have looked at the logs and some of your comments above but I am at a complete loss about what to do. Can you advise? If you help me out I'll certainly buy you a beer!
Feel free to contact me via email if it is easier for you, liamure@yahoo.co.uk
Paul van GentApril 1, 2018
Hi Liam. Almost all the things you need are in the tutorial then. Take a look here, here and here. These are some examples of getting frames from a webcam. Put them in some sort of loop to do the real-time-ish detection.
I would really recommend to cache at least 5-10 predictions before having your algorithm make a decision. If you can, an extra dataset cannot hurt either.
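An illustrative way to do that caching (the class and names here are made up, not part of the tutorial): keep a rolling window of frame-level predictions and only report a label once it wins the majority of the window.

```python
from collections import Counter, deque

# Illustrative rolling vote: only report an emotion once it wins a
# majority of the last N frame-level predictions, as suggested above.
class EmotionSmoother:
    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def update(self, prediction):
        self.history.append(prediction)
        label, count = Counter(self.history).most_common(1)[0]
        # Return the majority label, or None while there is no clear winner.
        return label if count > len(self.history) // 2 else None

smoother = EmotionSmoother(window=5)
for frame_prediction in ["happy", "neutral", "happy", "happy", "happy"]:
    decision = smoother.update(frame_prediction)
print(decision)  # happy
```

Each webcam frame's clf.predict() result would be fed to update(), and the UI only changes when update() returns a non-None label.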
– Paul
Liam
Liam UreApril 2, 2018
I'll clarify that I now know how to capture the frame. I do not know how to run the captured frame against the model to get a prediction.
Liam Ure
hassanApril 9, 2018
Please help me.
Why is just one face detected in the live video stream? Although there are many faces in the live video, dlib detects only one face at a time. How can I get rid of that issue? I am following your exact code. Please help me.
Paul van GentApril 11, 2018
Hi Hassan. In my example I detect single faces, since the training and testing data consist of this. If you want to detect more faces in a single frame, you need to detect and crop all faces individually first. You can then pass the crops to the classifier one at a time to classify each face independently. Note that execution time will increase when there are more faces in a frame.
You will also need to handle what to do with the extra face detections.
– Paul
iqraApril 19, 2018
Hi Paul
I face this error when train data using your above code given
Making sets 0
working on contempt
Traceback (most recent call last):
File “C:\coding\traininClassificationSet.py”, line 92, in
training_data, training_labels, prediction_data, prediction_labels = make_sets()
File “C:\coding\traininClassificationSet.py”, line 67, in make_sets
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert to grayscale
cv2.error: C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:11111: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor
Paul van GentApril 23, 2018
Hi Iqra. You get this error most often if no image data is loaded. Make sure the paths point to the correct folders, and don’t use backslashes if you’re on linux.
If you see no problems there, verify that image data exists in the data object.
– Paul
johnJuly 22, 2019
Just change the code a little bit with the extension *.jpg, that's it.
Matias AguirreMay 4, 2018
Hello Paul, first of all thank you for the publication.
I am trying to translate the code to C# with EmguCV, using OpenCV's own Facemark and SVM libraries.
I have a question about a part of the code:
xcentral is an array of each element of xlist with xmean subtracted, so why on lines 45, 46 and 47 is an array created from the values of xlist and ylist, subtracting ymean and xmean again?
Could that subtraction be replaced by the values of xcentral and ycentral?
Maybe it’s an obvious question but I’m too new to Python.
Paul van GentMay 10, 2018
Hi Matias,
It has to do with adding as much information for the classifier as possible:
In line 36-39 I standardise by subtracting the pointcloud’s mean from each coordinate, so that the resulting array describes the facial landmarks relative to their center, rather than relative to the position in the image of the face. These are added to the feature list in line 43,44.
Additionally, in lines 45-46 another feature set is generated by describing the vector norm relative to the center (the resulting xcentral, ycentral arrays are very different from what results from np.linalg.norm(coornp - meannp)).
Finally line 49 describes the vector angles relative to the image plane. I will add code to rotate the face tonight or tomorrow, so that it will instead describe the vector angles relative to the facial orientation.
So in essence the feature set describes:
– the landmark coordinates relative to point cloud center
– the vector length between each coordinate and the point cloud center
– the vector angle relative to the image plane (as said I’ll upload the rotation bit so that it become relative to the facial orientation).
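Putting those three groups together in a toy sketch (pure Python with illustrative coordinates; not the tutorial's exact code, which works on full 68-point landmark arrays):

```python
import math

# Three toy landmark coordinates standing in for the 68 real ones.
xlist = [1.0, 3.0, 5.0]
ylist = [2.0, 2.0, 8.0]
xmean, ymean = sum(xlist) / len(xlist), sum(ylist) / len(ylist)

features = []
for x, y in zip(xlist, ylist):
    xc, yc = x - xmean, y - ymean          # 1) coords relative to point cloud centre
    features.append(xc)
    features.append(yc)
    features.append(math.hypot(xc, yc))    # 2) vector length to the centre
    features.append((math.atan2(yc, xc) * 360) / (2 * math.pi))  # 3) vector angle

print(len(features))  # 12: four features per landmark
```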
Hope this clarifies it for you. Otherwise let me know if I can help
Cheers,
Paul
Mutayyba WaheedMay 27, 2018
Hi Sir.!!
I need help from you. Can you please guide me on how to solve this error?
Traceback (most recent call last):
File “E:\opencv_data\Facial_landmark_detection_test-master\Facial_landmark_detection\test\E.py”, line 8, in
from sklearn.svm import SVC
File “C:\Python27\lib\site-packages\sklearn\__init__.py”, line 134, in
from .base import clone
File “C:\Python27\lib\site-packages\sklearn\base.py”, line 13, in
from .utils.fixes import signature
File “C:\Python27\lib\site-packages\sklearn\utils\__init__.py”, line 11, in
from .validation import (as_float_array,
File “C:\Python27\lib\site-packages\sklearn\utils\validation.py”, line 18, in
from ..utils.fixes import signature
File “C:\Python27\lib\site-packages\sklearn\utils\fixes.py”, line 144, in
from scipy.sparse.linalg import lsqr as sparse_lsqr # noqa
File “C:\Python27\lib\site-packages\scipy\sparse\linalg\__init__.py”, line 114, in
from .isolve import *
File “C:\Python27\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py”, line 6, in
from .iterative import *
File “C:\Python27\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py”, line 7, in
from . import _iterative
ImportError: DLL load failed: %1 is not a valid Win32 application.
Paul van GentMay 30, 2018
From what I can gather there’s several options that might work. See them here:
Cheers
– Paul
AmiraJune 17, 2018
can you please help me to save and load the training model ?
Paul van GentJune 20, 2018
Hi Amira. You can use Pickle to dump a Python object to disk. It also works with classifier models.
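A minimal sketch of that, using a plain dict as a stand-in for the fitted classifier so the example is self-contained (a trained sklearn SVC pickles the same way):

```python
import os
import pickle
import tempfile

# Stand-in for the trained model; with sklearn you would pickle clf instead.
model = {"kernel": "linear", "weights": [0.1, 0.2]}
path = os.path.join(tempfile.gettempdir(), "emotion_model.pkl")

# Save the model to disk...
with open(path, "wb") as f:
    pickle.dump(model, f)

# ...and load it back later, e.g. in the prediction script.
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == model)  # True
```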
– Paul
AmiraJune 18, 2018
when i run the project i keep getting this annoying error please help
Traceback (most recent call last):
File "C:/Users/HOME/PycharmProjects/Emojis/main.py", line 58, in <module>
    prediction = model.predict(faces)
File "C:\Users\HOME\AppData\Local\Programs\Python\Python35\lib\site-packages\sklearn\svm\base.py", line 548, in predict
    y = super(BaseSVC, self).predict(X)
File "C:\Users\HOME\AppData\Local\Programs\Python\Python35\lib\site-packages\sklearn\svm\base.py", line 308, in predict
    X = self._validate_for_predict(X)
File "C:\Users\HOME\AppData\Local\Programs\Python\Python35\lib\site-packages\sklearn\svm\base.py", line 459, in _validate_for_predict
    (n_features, self.shape_fit_[1]))
ValueError: X.shape[1] = 272 should be equal to 268, the number of features at training time
Paul van Gent (June 20, 2018)
You're using input images of different sizes. You need to resize them to the same size as the training data before classifying.
Anh (June 23, 2018)
Hello Paul,
I'm currently stuck on separating the images into the correct folders. I grabbed the CK+ dataset from the main source and put it in the correct folders as you mentioned above.
I tried your code in both a Windows environment and a Linux environment and both showed the same error (I changed \\ into / for Linux). Both returned a "list index out of range" error, either when I tried to take the [-1] index in sourcefile_emotion or the [0] index in sourcefile_neutral.
I tried to print out all the variables and everything worked correctly until the line where you glob the source image.
Any help from you would be great.
Thank you in advance
Paul van Gent (June 25, 2018)
Hi Anh. Both of the index errors you encountered occur because the generated list is empty. When using glob(), ensure the correct path is passed to the function.
Also, if you're on anything but Windows, use sorted() to ensure the list is returned sorted. Windows does so automatically.
– Paul
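A small self-contained illustration of the glob() and sorted() points above (the folder and file names here are hypothetical stand-ins for a CK+ image sequence):

```python
import glob
import os
import tempfile

# Create a throwaway folder with out-of-order file names, standing in
# for one CK+ image sequence.
folder = tempfile.mkdtemp()
for name in ("S010_004.png", "S010_001.png", "S010_002.png"):
    open(os.path.join(folder, name), "w").close()

# glob() does not guarantee ordering on every platform, so sort before
# indexing with [0] (first/neutral frame) or [-1] (last/peak frame).
files = sorted(glob.glob(os.path.join(folder, "*.png")))

print([os.path.basename(f) for f in files])
# → ['S010_001.png', 'S010_002.png', 'S010_004.png']
```

An empty list from glob() means the path pattern is wrong, which is exactly what produces the "list index out of range" error discussed above.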
Ali (August 17, 2018)
Hi Paul. I downloaded the data set and put it beside the script. When I run the code I face this error:
Making sets 0
working on anger
working on contempt
working on disgust
working on fear
working on happiness
working on neutral
working on sadness
working on surprise
training SVM linear 0
Traceback (most recent call last):
File "SVM1.py", line 90, in <module>
    clf.fit(npar_train, training_labels)
File "C:\Users\Ali\Anaconda3\envs\opencv-env\lib\site-packages\sklearn\svm\base.py", line 149, in fit
    X, y = check_X_y(X, y, dtype=np.float64, order='C', accept_sparse='csr')
File "C:\Users\Ali\Anaconda3\envs\opencv-env\lib\site-packages\sklearn\utils\validation.py", line 573, in check_X_y
    ensure_min_features, warn_on_dtype, estimator)
File "C:\Users\Ali\Anaconda3\envs\opencv
palkab (August 18, 2018)
Hi Ali. I think the paths are not correct or the images are not loaded properly (do you have one folder with a single example?).
The error means that the input shapes are not correct. Usually they need to have shape (n, dimensions_x, dimensions_y, n_channels), so check your paths and what you’re feeding the network.
– Paul
Thân Trọng Quý (September 17, 2018)
Big thank for providing this wonderful piece.
I have a question. When I installed Dlib, I got this error: "can't open file 'setup.py': [Errno 2] No such file or directory".
I have navigated to my Dlib folder
Can you please help me with this? Thank you very much
palkab (September 18, 2018)
How did you install dlib?
wiem (September 17, 2018)
Hi Paul, thank you very much for your helpful tutorial. I am now trying to do emotion recognition and face authentication at the same time, so that the system will return the given image with its emotion and the name of the person. I wonder if I may use the same model trained for emotion recognition using landmarks to also do face recognition using landmarks?
Any help from you would be great.
Thank you in advance
palkab (September 18, 2018)
Hi Wiem. Face recognition based on landmarks likely will not be accurate. If you're in a hurry you can use the face recognizer classes from OpenCV.
If you want to do it yourself, generally a deep net is used but there are different approaches. Do you have more details on the situation you want to apply it in?
wiem (September 18, 2018)
Thank you very much, Paul, for your reply. In this tutorial you used facial landmarks to do emotion recognition, and I am trying to do emotion recognition and face identification at the same time. So I am wondering whether it is feasible to train the system with two classifications (classification of emotion and classification of identity) using those landmarks, so that in prediction the system returns both the emotion and the identity of the detected face.
Any help from you would be great.
Thank you in advance
palkab (September 28, 2018)
Hi Wiem. Sorry for the later reply.
You could do that, but the question is if that is the best way to go. Two separate classifiers will probably be more accurate. You could use the face recognition classifier that comes with OpenCV. There’s also various ways of accomplishing this efficiently with convnets. Contact me if you want more info, might be quicker: info@paulvangent.com
Cheers
Daniel (October 22, 2018)
Very nice examples Paul. I am from Brazil, studying Electrical Engineering. I was happy to find your blog and see your work. Continue with this and motivate people. Thanks a lot!
Rohan Sathasivam (November 1, 2018)
Amazing work Paul. For one of my research projects on emotion recognition, I used your method and got the same results as you did. I also experimented by increasing the number of pictures (4 times for every emotion except 'neutral'), and this produced around 95% accuracy.
In another experiment using a CNN, I connected the landmark dots using a white line, extracted this line and placed it over a black background image to reduce the noise. This method yields 97% accuracy (note: I increased the number of images fourfold).
palkab (November 1, 2018)
Hi Rohan, Nice work! You might also try training your own landmark detector, strip the output layer afterwards and append a perceptron to it and train it on the emotion picture dataset. Don’t forget to freeze the weights of the actual CNN layers in the network!
I’m writing a post on building your own landmark detector now. Give it a few days. Afterwards if there’s time (the main limiting factor I’m afraid…) I might do a post on what I mention above.
Cheers,
Paul
Rohan (November 3, 2018)
Thank you very much for your precious time and the notion. It would be great to be able to customize the landmarks.
Thankfully,
Rohan
Bilal Shoaib (November 3, 2018)
I have a problem with the installation of the dlib library. Kindly help me.
palkab (November 4, 2018)
What is the issue?
Maz (November 7, 2018)
Hi Paul,
I was looking to take a one-vs-all approach for classification, which I believe the SVM does behind the scenes in this implementation. Do you know how I could get the probability of a given image being each of the expressions? E.g. happy: 73%, sad: 2%, angry: 62%, etc.
I want to be able to compare the probability of it being each type of emotion, to understand the similarities between classes
Maz (November 7, 2018)
Sorry, I realise you’ve already answered my previous question, I just somehow missed that part of your post. Great job
palkab (November 8, 2018)
No worries, happy coding!
Jay Trivedi (December 31, 2018)
Hi Sir ,
I was looking for face recognition code and suddenly found this; it's very interesting, as face recognition is my study project. Will you help me, sir? I'm facing some errors while executing the code you provided.
palkab (January 1, 2019)
So what’s the error?
Kaushalya Sandamali (January 25, 2019)
Hi sir,
This tutorial is very interesting and very helpful for my FYP. It runs perfectly on the CK+ data set.
When I try this code on testing data (images not in the CK+ dataset), it hardly detects an emotion. However, when I use images from the training data set it detects emotions accurately. What could be the possible reasons for this?
Thanks in advance.
palkab (February 6, 2019)
Hi Kaushalya,
Likely the test set is too different from the training set. I recommend enhancing the training set with a few hundred labeled images representative of the test set, so that the model can learn to differentiate better.
– Paul
Chithira Raj (January 30, 2019)
Sir, are there any transactions or journals regarding this technique? If possible, kindly share.
Thank you
palkab (February 6, 2019)
Yes there are, but I don't have them on this computer. If you take a look on Google Scholar you'll find plenty.
– Paul
Tangela (February 5, 2019)
Regardless of just how much you enjoy the game, there'll always be new things to learn.
Hoai Duc (April 24, 2019)
Sir, I have this problem, how can I solve it: 'ascii' codec can't decode byte 0x81 in position 356: ordinal not in range(128)
Anna Tabalan (June 10, 2019)
Hello sir! I was directed to this tutorial from your past ones that used Fisherface. Is it possible to get the probabilities for each emotion using that algorithm instead of SVM?
Qandeel (June 19, 2019)
Sir, this tutorial is very interesting and very helpful for my FYP, but I have a problem. Please let me know how to take a frame, run it through the model, and then print out the result.
EZZA (July 28, 2019)
I am facing this error, can anyone help me?
model = dlib.shape_predictor(“shape_predictor_68_face_landmarks.dat”)
RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat
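That RuntimeError usually means the .dat file is not where the script expects it. A small standard-library guard like the one below makes the failure easier to diagnose (the path is an assumption, and the dlib call is shown only as a comment since dlib may not be installed):

```python
import os

MODEL_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed location

def check_model(path):
    """Return True if the landmark model file exists and is non-empty."""
    if not os.path.isfile(path):
        return False
    # A zero-byte file usually means a failed or partially extracted download.
    return os.path.getsize(path) > 0

if check_model(MODEL_PATH):
    pass  # model = dlib.shape_predictor(MODEL_PATH)  # safe to load here
else:
    print("Model file not found: download the .bz2 archive from the dlib "
          "site, extract it, and place the .dat next to this script.")
```

The same check also catches the common case where the archive was downloaded but never decompressed.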
Gaurav (September 8, 2019)
Hi Paul sir…
Greeting for the day..
Actually, I am a Ph.D. student.
My research proposal is emotion recognition on static images.
I am a bit confused about how to start the work, as I am not finding a starting point.
Which tool is best, and where should I start? I know the basics of Python, so can you please help me, sir, to start my work on the given proposal?
srikanth (December 4, 2019)
I need the same thing with TensorFlow.
You've probably noticed that a lot of the methods so far have been declared to throw DOMException. This class itself is shown in Example 9.15. This is the generic exception for essentially anything that can go wrong while working with DOM, ranging from logical errors like making an element one of its own children to implementation bugs. It is a runtime exception that does not have to be caught. However, you should always catch it or declare that your method throws it nonetheless. Conceptually, this should be a checked exception, but many of the languages DOM supports, including C++ and Python, don't have checked exceptions, so DOM uses runtime exceptions in order to keep the semantics of the various methods as similar as possible across languages.
Example 9.15. The DOMException class
package org.w3c.dom;

public class DOMException extends RuntimeException {

  public DOMException(short code, String message);

  public short code;

  public static final short INDEX_SIZE_ERR              = 1;
  public static final short DOMSTRING_SIZE_ERR          = 2;
  public static final short HIERARCHY_REQUEST_ERR       = 3;
  public static final short WRONG_DOCUMENT_ERR          = 4;
  public static final short INVALID_CHARACTER_ERR       = 5;
  public static final short NO_DATA_ALLOWED_ERR         = 6;
  public static final short NO_MODIFICATION_ALLOWED_ERR = 7;
  public static final short NOT_FOUND_ERR               = 8;
  public static final short NOT_SUPPORTED_ERR           = 9;
  public static final short INUSE_ATTRIBUTE_ERR         = 10;
  public static final short INVALID_STATE_ERR           = 11;
  public static final short SYNTAX_ERR                  = 12;
  public static final short INVALID_MODIFICATION_ERR    = 13;
  public static final short NAMESPACE_ERR               = 14;
  public static final short INVALID_ACCESS_ERR          = 15;
}
DOMException is the only exception that DOM standard methods throw. DOM methods don’t throw IOException, IllegalArgumentException, SAXException, or any other exceptions you may be familiar with from Java. In a few cases the implementation classes may throw a different exception, especially NullPointerException, and methods in non-DOM support classes like org.apache.xerces.parser.DOMParser or javax.xml.transform.dom.DOMResult can most certainly throw these exceptions. However, the DOM methods themselves don’t throw them.
Not only do DOM methods throw only DOMExceptions; they don't even throw any subclasses of DOMException. Here DOM is following C++ conventions rather than Java's. Whereas Java tends to differentiate related exceptions through many different subclasses (in Java 1.4 there are more than fifty different subclasses of IOException alone), DOM uses named short constants to identify the different problems that can arise. This is also useful for languages like AppleScript where exceptions aren't even classes. The exception code is exposed through DOMException's public code field. The codes have the following meanings:
DOMSTRING_SIZE_ERR: Something tried to put more than two billion characters into one string. This one isn't too likely in Java. (If you're trying to stuff that much text into one string, you're going to have other problems long before DOM complains.) This exception is really meant for other languages with much smaller maximum string sizes.
HIERARCHY_REQUEST_ERR: An attempt was made to add a node where it can't go; e.g. making an element a child of a text node, an attribute a child of an element, adding a second root element to a document, or trying to make an element its own grandpa.
INDEX_SIZE_ERR: A rare exception thrown by the splitText() method of a Text object if you try to split the text before the beginning or after the end of the node.
INUSE_ATTRIBUTE_ERR: You tried to add an existing Attr to a new element without removing it from the old element first.
INVALID_ACCESS_ERR: The class that implements the DOM interface does not support the requested method, even though you'd normally expect it to.
INVALID_CHARACTER_ERR: A Unicode character was used where it isn't allowed; for example, an element name contained a dollar sign or a text node value contained a form feed. Many DOM implementations miss at least some problems that can occur with invalid characters. This exception is not thrown as often as it should be.
INVALID_MODIFICATION_ERR: The class that implements the DOM interface cannot change the object in the requested way, even though you'd normally expect it to; e.g. it ran out of space for more child nodes.
INVALID_STATE_ERR: The implementation class that backs the DOM interface you're using has gotten confused, and cannot perform the requested operation. This would generally indicate a bug in the implementation.
NAMESPACE_ERR: The namespace prefixes or URIs specified are somehow incorrect; e.g. the qualified name contains multiple colons, or the qualified name has a prefix but the namespace URI is null, or the prefix is xml but the namespace URI is not http://www.w3.org/XML/1998/namespace.
NOT_FOUND_ERR: A referenced node is not present in the document; e.g. you tried to remove an attribute the element does not have or tried to insert a node before another node that is no longer in the document.
NOT_SUPPORTED_ERR: The implementation does not support the requested object type. For example, you tried to create a CDATA section node using an HTML document implementation.
NO_DATA_ALLOWED_ERR: You tried to set the value of an element, document, document fragment, document type, entity, entity reference, or notation node. These kinds of nodes always have null values.
NO_MODIFICATION_ALLOWED_ERR: You tried to change a read-only node. The most common reason a node is read-only is because it's a descendant of an entity reference node.
SYNTAX_ERR: You tried to set a value to a string that's illegal in context; e.g. a comment value that contains the double hyphen -- or a CDATA section that contains the CDATA section end delimiter ]]>. In practice, most implementations do not watch for these sorts of syntax errors and do not throw this exception.
WRONG_DOCUMENT_ERR: You tried to insert or add a node into a document other than its owner (the document that originally created the node). DOM does not allow nodes to change documents. Instead you have to use the importNode() method in the new Document to make a copy of the node you want to move.
Although there’s no way DOM can prevent programs from using error codes other than those listed here, the W3C has reserved all possible error codes for their own use. If you need something not listed here, write your own exception class or subclass DOMException. (Just because DOM doesn’t make full use of an object-oriented exception mechanism for reasons of compatibility with less object-oriented languages than Java doesn’t mean you shouldn’t do this in your pure Java code.) | http://www.cafeconleche.org/books/xmljava/chapters/ch09s10.html | CC-MAIN-2019-09 | refinedweb | 1,051 | 53.61 |
ISO Updates C Standard 378
An anonymous reader writes:
First post!! (Score:5, Insightful)
Actually, who cares about that?
Seriously, though, am I the only one who finds it strange that one has to buy copies of the standard?
Re: (Score:2)
Not really, a lot of books cost money. Why would this one be different?
Re:First post!! (Score:5, Informative)
Oh? $300? For a PDF file? Heh.
Re: (Score:2)
Re:First post!! (Score:5, Funny)
Oh? $300? For a PDF file? Heh.
But these limited-edition PDFs are signed and numbered.
Re: (Score:3)
You laugh. But the PDFs we get at work through a subscription to a site that provides various standards are "limited to view for 48 hours". Unfortunately the limited to view bit is simply a javascript code that checks to see if the file was downloaded more than 4 days ago (date imprinted on each PDF when you download) and then covers the screen with a big denied message blocking the text.
DRM at its finest. Pity I have Javascript disabled in Acrobat. Also you can simply print the PDF to CutePDF to strip out
Re:First post!! (Score:5, Insightful)
Re: (Score:3)
I cannot say for the C standard, but in my work, we do some standards development under ISO. None of this work is funded by ISO -- it is either funded by my employer, or government agencies, commercial end-users, or others that have in interest in the technology getting standardized. This process can be quite expensive -- salaries, travel, meetings, but none of that is paid by ISO. It is all paid by the participants (or funding they can acquire from other interested parties.).
ISO basically acts as a mi
Re: (Score:3)
Not yet, the C++ standard is on there though: [thepiratebay.org]
Re:First post!! (Score:5, Insightful)
Not really, a lot of books cost money. Why would this one be different?
First of all, it's not a book. It's a PDF. Second of all, the Netherlands is a member body of ISO, so I have already paid for it through my taxes. I should be able to use the fruits of ISO without additional cost (or maybe some nominal fee). Third of all, an ISO standard has the status of a law: you'd better do it this way, or else. So they're telling me the law has changed, and then charging me 300 euros to find out precisely what the new law is. I believe that's called extortion.
Re:First post!! (Score:5, Funny)
The new standard have been on display for free at the Alpha Centauri planning office for the last fifty years.
Re: (Score:2)
Re: (Score:3)
The free sample in alpha centauri is in a filing cabinet in a dark basement guarded by leopards.
Your time dilation assumes c towards Alpha Centauri, instant deceleration to 0, collect pdf and instant acceleration to c towards earth. Wont work.
Re: (Score:3)
Your time dilation assumes c towards Alpha Centauri, instant deceleration to 0, collect pdf and instant acceleration to c towards earth. Wont work.
Is that c 9x or c 1x? Must be 9x else you wouldn't be going to Alpha Centauri to get the new c spec.
Re: (Score:3)
I'm not really sure, but here are my best three guesses:
1. I am a funny guy (in general)
2. There is a grammatical mistake I didn't catch before clicking Submit
3. It references The Hitchhiker's Guide to the Galaxy
Re: (Score:2)
This, so very much. I've always found it mindboggling to pay for standards like this.
Pay for the work creating the standard, sure. But copies? (digital no less) wtf? Don't they want people to adopt the standard?
I can see if it's some small industry-specific council perhaps, but the ISO?!
Re:First post!! (Score:5, Informative)
Re: (Score:2)
Because it's not a book but a language standard. If you want your standards to be recognized, keep them open and free of charge.
Re: (Score:2)
Re:First post!! (Score:5, Informative)
Grab the original file from here [thepiratebay.org].
Re: (Score:2)
Re: (Score:3, Funny)
Oh what am I saying? Developers won't write standards compliant code even if they do know what the standards are!
Re:First post!! (Score:4, Informative)
Of course, when he's not doing that, he's advocating necrophilia [stallman.org] and "voluntary pedophilia" [stallman.org]. Maybe not the best spokesperson to get behind.
Re: (Score:3)
Here's the quote you refer to:
That sounds like a jest to me.
Re:First post!! (Score:5, Insightful)
Re: (Score:3, Funny)
Do they sell them by the C-shore?
Re:First post!! (Score:5, Funny)
yes, if you have 300 clams.
Re:First post!! (Score:4, Funny)
Let's get C99 right first (Score:2, Informative)
Re:Let's get C99 right first (Score:5, Informative)
Re:Let's get C99 right first (Score:4, Insightful)
COBOL is king, always will be.
Solid and reliable code that works period!
Re: (Score:2)
What OS kernel is written in Cobol again? I seem to have forgotten. Real mission critical stuff at Boeing? NASA? All that stuff then right?
Most critical software is written in COBOL (Score:4)
Real mission critical stuff at Boeing? NASA? All that stuff then right?
Actually their most critical software is probably written in COBOL, their payroll software. Without that COBOL based software nothing gets done.
:-)
Re:Let's get C99 right first (Score:5, Interesting)
Re:Let's get C99 right first (Score:5, Insightful)
Re: (Score:2)
Re:Let's get C99 right first (Score:4, Interesting)
Hi, I'm a Windows developer.
I'll take C# over C any day, and I have 20 years of C experience.
I believe that's kinda the parent poster's point. For a windows developer MS make their proprietary C# language easy, and C hard work. Now for most stuff that's fine, but sometimes a lower level language is needed. Ever tried writing a kernel mode driver in C#?
Re: (Score:3, Informative)
For a windows developer MS make their proprietary C# language easy, and C hard work. Now for most stuff that's fine, but sometimes a lower level language is needed.
Interesting, it's like you've never heard of C++ which MS does fully support [slowly] and is standard. I know pure C is a sacred cow but writing pure procedural code in C++ won't kill you, in fact, it will probably make the code much easier to read since you can't just arbitrarily cast back and forth between void pointers and other types without explicit type brackets.
Ever tried writing a kernel mode driver in C#?
MS has been experimenting with that but it seems more likely that they'll just hoist most drivers into user space services so you can use any
Re:Let's get C99 right first (Score:5, Informative)
Microsoft
Microsoft Research has an interesting project called Singularity - an operating system running (mostly) in managed code. Some initialization routines are done in Assembly/C/C++, but the kernel itself and respective drivers are written entirely in managed code. Check [wikipedia.org].
Re: (Score:2)
Yes and no – it's more that for programming applications, a higher level language is a good idea –not dealing with memory management and every low level detail is exactly what you want there. This is why Apple keeps taking Objective-C more and more away from C too (though it's still way closer - still a strict superset - than most HLLs).
Don't get me wrong – C is a fine language for coding OSes, non-safety-critical embedded systems, etc in. But there's absolutely no denying that C# is bet
Re:Let's get C99 right first (Score:5, Interesting)
Who cares about Microsoft these days? Any damage they cause by lagging behind standards is only to themselves, unlike the bad old days. In the modern world GCC is the bar by which Microsoft is measured, and usually found lacking.
Re:Let's get C99 right first (Score:4, Informative)
Re: (Score:3, Informative)
Re:Let's get C99 right first (Score:5, Interesting)
Not being a C or C++ developer, I'm not sure who to believe - in the Firefox compilation story a few days ago, there were a fair few highly modded-up posts extolling the virtues of the quality and speed of binaries output by the MS C and C++ compiler over GCC.
Any thoughts on that?
Re:Let's get C99 right first (Score:5, Informative)
Simply put, gcc beats VC on standard compliance, and VC beats gcc on optimization quality.
Anyway, VC is primarily a C++ compiler. C support is largely legacy, and hasn't been updated for a long time now.
Re: (Score:2)
VC and GCC are equally shit at standards compliance *Grumbles*
If either is specified to be used in a project, it means extra work to adapt existing portable code that is strict ISO C to perform well when compiled with either of them.
ICC has quite good ISO C compliance, and the whole thing regarding AMD processors not being optimized for was overexaggerated (and there are some technically valid reasons for it: some of the intrinsics handling involves the microcode level, and Intel's and AMD's can look very differe
Re: (Score:2)
move on (Score:5, Insightful)
Many of us gave up waiting on Microsoft for our development tools.
Re: (Score:3)
How is C in anyway dependent on Microsoft VS support? VS as far as I can tell is for writing User level applications on managed code where C is terrible. Even GTK have realized that objects are good for writing most Applications.
The reason c is still around is to write the low level stuff that you can't swap out in windows. If MS could set the programming languages, C would not have been taught for at least 10 years and everything would be full of the lag and bloat that comes with non native code.
Re: (Score:2, Redundant)
The big problem is that you can't compile C programs that make use of GCC extensions using Visual C++. This includes even the most basic stuff, like declaring variables in the middle of your code. It's actually a GCC extension to C, despite being a standard feature of C++.
The only way to compile such programs on Visual Studio is to force the compiler to use C++ mode instead of C mode. Then you get a bunch of compiler errors, because C++ is a different language than C, and gives errors when you assign poi
Re:Let's get C99 right first (Score:4, Informative)
"This includes even the most basic stuff, like declaring variables in the middle of your code. It's actually a GCC extension to C"
No it's not— it's part of ISO C99.
Re:Let's get C99 right first (Score:4, Insightful)
If your program relies on the presence of GCC extensions, you did it wrong in the first place.
Re: (Score:2)
Re: (Score:2)
That is one of the reasons why I find so many FSF supporters to be such hypocrits, they blather on about standards compliance, yet they use and abuse GCC extensions etc. The Linux kernel is horribly tainted in that way.
Linux can be compiled using the Intel C compiler [linuxjournal.com]
See include/linux/compiler*.h in your kernel source
Re: (Score:3)
That's because Intel specifically had to introduce the GCCisms(and early on you also needed to patch serious parts of the kernel sources).
It's still bad though, because you have seriously non-standard stuff. The whole situation is just the same as what people have complained about Microsoft for: Implementing non-standard stuff, but instead of ruthless closed-source for-profit, GCC spread theirs with a draconic ideology and religious zealotry. The GCC project in particular has shown itself to play a serious
The GCC project didn't try to patent IsNot (Score:2)
The whole situation is just the same as what people have complained about Microsoft for: Implementing non-standard stuff
Not necessarily. The GCC project doesn't try to patent [slashdot.org] language extensions so that others can't implement the extension compatibly, such as using the name "IsNot" for a reference identity operator.
Re: (Score:2)
Patenting is not the same thing as shoddy standards-compliance/developing and extending non-standard stuff.
Also, I did point out Microsofts ruthless for-profit mentality AS OPPOSED to FSF's religious zealotry.
Re: (Score:2)
You can't write an OS kernel in standard C anyway. It's in some ways inherently lower level stuff.
What would be the obstacle to writing an OS kernel in C99? What would one need the extensions for?
Re: (Score:3)
Lots of Irritating Superfluous (curly) Parentheses (Score:3)
Declaring variables at the beginning of their scope
But do you really want to start a new scope every time you declare a variable? Then your code would be filled with so many }}}}}}}}} that it'd look like a curlier version of Lisp.
Re: (Score:3)
Declaring variables at the beginning of their scope makes the code more readable and easier to debug.
Not even a little! It does the absolute exact opposite!
Why on earth would it make code more readable to declare a variable away from the place where it is actually used? That makes no sense whatsoever.
Re: (Score:2)
Microsoft Visual Studio doesn't support a lot of things in whatever language.
It's hardly the standard by which to judge programming languages.
Although the fact that it is included in some form basically means the language is imporant enough that Microsoft couldn't replace it with one of their own languages.
Re: (Score:2)
Fortunately, there are alternatives to Visual Studio even on Windows.
Re: (Score:2)
The position on native (read: C++) vs managed has been effectively reversed in Win8. All new APIs are implemented as native, and have direct C++ bindings. Win8 dev preview that you mentioned has VS11 Express pre-beta pre installed that only supports targeting Metro for all languages, not just for C++. That's because the purpose of dev preview is to showcase Metro app development. Full version of VS supports everything supported in past releases, and a prerelease can be separately downloaded and installed on
Re: (Score:2)
The pre-release Visual Studio contained in the Windows 8 technical preview won't even allow you to write non-Metro applications
If you can't get Metro applications anywhere but the Store, then how are you supposed to test Metro applications that you've compiled?
Re:Let's get C99 right first (Score:4, Insightful)
Re: (Score:2)
You should not use Microsoft Visual Studio for writing programs since it is non-free.
So is the BIOS or EFI of every computer sold on the mass market. So is the software running on the microcontroller in your computer's mouse. What workaround do you recommend?
So... (Score:3)
I'm not willing to pony up 300 swiss Francs, so can anybody tell me, basically, how it is different ? Is it just the stuff that has creeped through in the last few years by means of gcc, or is it totally new ?
Re:So... (Score:5, Informative) [wikipedia.org]
Re:So... (Score:5, Informative)
Some of the not-so-nice features include threads.h, which is equivalent to pthreads but with a different function names (and ones that seem quite likely to cause conflicts with existing code).
Static asserts have been around for a long time (Score:2)
Static assertions, so you can do things like _Static_assert(sizeof(int) == 4, "This code path should not be used in ILP64 platforms!"); and get a compile-time error if it is.
There was already a macro for that, involving declaring an array whose size is 1 when true or -1 (and thus invalid in a way that produces a diagnostic) when false.
Re: (Score:2)
If my very cursory reading of threads.h is correct, it's designed to provide better portability between platforms, without having to use a lot of POSIXisms, for example some embedded stuff that have no use for it, but can make use of threading.
Re: (Score:2)
I'd like to see them standardize the interaction between alloca and VLAs.
And are VLAs more than just a type-safe version of alloca?
Draft available for free (Score:5, Informative)
For those interested, the last draft before the official version is available for free here: [open-std.org]
C90 * (Score:2)
I put C90 (ANSI C) on my resume, because it is more marketable. A serious employer wants to know that I know how to write C90, not just vaguely understand the C language. The fact is, if you write ANSI C, it will work with just about any compiler (with the exception of any platform-specific code). Many embedded compilers only support a subset of C99 anyway (usually most of it, but that's the point: it's not all). ISO fussing with a new C revision is laughable.
Re: (Score:2)
Re: (Score:2)
C90 does not contain a standard type that has a 64-bit range. C99 defines long long, which must have a range greater than or equal to a 64-bit binary number. It also defines int64_t and uint64_t, which must have exactly the range of a 64-bit binary number. More usefully, it also defines [u]intptr_t, which must have the same size as void*. There is no C90 integer type that is guaranteed to be the same size as a pointer, and the fact that a lot of people assume that sizeof(long) == sizeof(void*) is one of
Fail-fast pointer types (Score:2)
the fact that a lot of people assume that sizeof(long) == sizeof(void*) is one of the things most likely to be responsible for code not being portable.
The following is technically not portable, as it's possible for an implementation to choose types that make it fail. But at least it fails fast [wikipedia.org], using an idiom for a static assertion that works even on pre-C11 compilers:
I wouldn't hire anyone who wrote C90 these days. There's simply no excuse for it.
Other than a major compiler publisher's tools not fully supporting any standard newer than C90, perhaps?
Re: (Score:2)
There is nothing in C90 that forbids 64-bit integers.
It doesn't forbid them but it doesn't standardise them either. Whether they are provided or not and the mechanism used to provide them is up to the individual implementation and as such any code that relies on them becomes implementation dependent.
Any 64-bit ints in C++? (Score:2)
Re: (Score:3)
Fucking markup. Here's a version you can read.
#include <stdio.h>
#include <stdint.h>
int main(int argc, char *argv[])
{
int64_t a = 50000000000LL;
int64_t b = 100000000000LL;
int64_t c = 0;
c = a + b;
printf("%lld\n", c);
return 0;
}
Looks like story is already dated... (Score:5, Informative)
The standard is known unofficially as C1X
GCC already says: [gnu.org].)
Syntax is everything in C.
Poul-Henning's take on this. (Score:5, Informative)
Re:Poul-Henning's take on this. (Score:5, Insightful)
His complaint about _Noreturn and similar keywords is silly. First, they were there 12 years ago already, in C99 - _Bool, _Complex etc. The reason for this scheme is that if they just made noreturn a keyword, existing valid C programs that use it as identifier would become illegal. On the other hand, underscore followed by capital letter was always reserved for implementations, so no conforming program can use it already. And then you can opt into more traditionally looking keywords, implemented via #define to the underscore versions, by explicitly including the appropriate header.
WTF is "ISO C"? (Score:3, Insightful)
I spent my early years programming K&R C on Unix systems.
When the ANSI standards were ratified, ANSI took over.
But WTF is "ISO C"? With a core language whose goal is portability and efficiency, why would I want the language trying to can platform-specific implementations like threading? C is not a general purpose language -- it's power comes from tying to the kernels and platform libraries of the industry at the lowest levels possible to maximize performance.
If you don't need that maximum performance, you use C++ or another high-level language.
ANSI C is the assembler of the modern computing age, not a general purpose programming language.
Now get off my lawn!
But... C Was Perfect... (Score:3)
Re: (Score:2)
Re: (Score:3)
Why would Dennis Ritchie have anything against it?
Re: (Score:3)
Microsoft-designed "secure function" cancer?
I'm beginning to think we need a new "law" styled somewhat after Godwin's Law - let's call it "93 Escort Wagon's Law". It goes as follows:
"As any online discussion grows longer, the probability of someone mentioning Microsoft in a derogatory manner approaches 1."
It might also make sense to add a "Slashdot Corollary" under which Microsoft and Apple are interchangeable.
Re: (Score:2)
I think we can generalise it a bit better than that.
"As any online discussion grows longer, the probability of someone mentioning anyone or anything in a derogatory manner approaches 1."
And just those two and no others, because no-one ever says mean things about (let's say
Re: (Score:2)
"As any online discussion grows longer, the probability of someone mentioning Microsoft in a derogatory manner approaches 1."
I think we can generalise it a bit better than that.
"As any online discussion grows longer, the probability of someone mentioning anyone or anything in a derogatory manner approaches 1."
It's actually a variant of the typing monkey thing: As any online discussion grows longer (by monkeys typing on their keyboards), the probability of some monkey mentioning *anything* approaches 1.
Re: (Score:2)
No, no, no. This riff only applies to a memoryless process. Long discussion threads are more like star formation. Once you get above a critical mass of Gates, Jobs, Portman, Hitler, Netcraft, emacs, vi, fiat currency, Russian dyslexia, and dcxk intelligent thought can only form at the event horizon, and the fragment emitted is barely visible against the entropic background.
Re: (Score:2)
Re:Can't we please let C die? (Score:5, Insightful)
You have that exactly backwards. It's C+++ that should die.
-jcr
Re: (Score:2)
Bring it on, tough guy.
-jcr
Re: (Score:3)
Re: (Score:2)
It is, because it helps write libraries in C which remain usable from C++.
Re: (Score:2)
If C++ code can't call C code, that's a bug for C++ people to fix. All the competently designed languages deal with C just fine.
-jcr
Re: (Score:3)
No, no, no. That's not the issue. C++ can automatically call any C code using 'extern "C"'. The issue is, how will C++ do *COMPILING* C source in C++ mode. C++ is not a true superset of C, so C is not a true subset of C++. Anything that makes them closer to being a super/sub set pair is a Good Thing.
Re: (Score:3, Funny)
Objective-C, of course. | https://developers.slashdot.org/story/11/12/24/0145238/iso-updates-c-standard | CC-MAIN-2018-17 | refinedweb | 3,944 | 63.7 |
-
- The Future of Programming
- Characters & Strings, Part I
- Characters & Strings, Part II
- Predictions for 2005
- My Ten Favorite C++ Books, Part I
- My Ten Favorite C++ Books, Part 2
- Aspect Oriented Programming
- 64-Bit Technology: Is It Greek To You?
- Agile Methodologies
- Agile Methodologies, Part 2
- Disaster Recovery, Part I
- Disaster Recovery, Part II
- Predictions for 2006
- Reflections on the delete vs. delete[ ] Split
- Predictions for 2007
- Bubble 2.0, Part I
- Bubble 2.0, Part II
- Time for a New new?
- Time for a New new? Part 2
- Are Variables Objects?
- Green Computing
- -> Whither?
- -> Whither? Part II
- Predictions for 2018, Part I
- Predictions for 2018, Part II
- Inventing New Operators
- Predictions for 2009, Part I
- Predictions for 2009, Part II
- Improved Integration with C Arrays and Strings
- Cloud Computing
- Google's Chrome OS -- Is It Really That Good?
- This Has Been 2009
- A Closer Look at bada
- A Second Look at bada
- A Quick Glance at Objective-C
- The C++0x Standardization Process -- Aiming for an FCD?
- The HipHop for PHP Project
- The HipHop Project FAQ
- Follow Me on Twitter
- C++0x Support in Visual Studio 2010
- The Buzzword Hype Dispelling Guide, Part I: Artificial Intelligence
- The Buzzword Hype Dispelling Guide, Part II: Fuzzy Logic
- The Buzzword Hype Dispelling Guide, Part III: Real-Time
- More on Real-Time Programming
- Black Swans and Gray Swans
- Reflections on <tt>new</tt>
- Reflections on <tt>new</tt>, Part II
- Reflections on <tt>new</tt>, Part III
- C++0x Features That I Like: Defaulted and Deleted Functions
- C++0x Features That I Like: Class Members Initializers
- Invention Wish-List for 2011 and Beyond, Part I
- Invention Wish-List for 2011 and Beyond, Part II
- Doing Less with More, Part I
- Doing Less with More, Part II
- Five Technology Myths and Their Lessons
- A Few Pedagogical Insights about C++ Teaching: Constructors and Destructors
- A Few Pedagogical Insights about C++ Teaching: Public Data Members
- When Bad Code Is Good for You
- Speaking Correct C++
- C++ is Back with a Vengeance
- Can Two Functions Have the Same Address?
- In Memory of Dennis Ritchie
- In Memory of Dennis Ritchie, Part II
- In Memory of Dennis Ritchie, Part III
- Time to Say Goodbye
- We Have Mail
-
- The Soapbox
- Numeric Types and Arithmetic
- Careers
- Locales and Internationalization
C++0x Support in Visual Studio 2010
Last updated Jan 1, 2003.
Microsoft's Visual Studio is the most popular C++ IDE among Windows C++ developers. Such popularity enables the C++ community to get valuable feedback from independent developers about new features. In the following sections I will outline the major new C++0x features that VS 2010 currently supports.
Core and Library features
The Visual C++ compiler that's embedded in the VS development suit supports six new C++0x features: lambda expressions, auto, decltype rvalue references, static_assert, and nullptr. Additionally, the Standard Library includes several C++0x enhancements such as new algorithms and better support for move semantics. With respect to performance, Microsoft reports that the Visual C++ compiler generates more efficient code and that the linker has also been improved in several aspects. However, I will focus here mostly on the new C++0x features.
Lambda Expressions
Microsoft is among the first vendors to support this feature. Truth be told, I have mixed emotions about lambda expressions. They are simpler to code than handwriting full-blown function objects. However, C++0x lambdas aren't polymorphic and I suspect that they will be over-used (and even misused) because programmers will consider them nifty and trendy. The syntax isn't Elysian bliss either. However, we'll probably have to live with it because lambdas are in the FCD. It seems rather unlikely this feature will undergo major revisions now.
auto and decltype
auto and decltype are probably everyone's favorites C++0x core feature. auto lets you declare objects without stating their types explicitly. The compiler infers the type of the object from its initializer:
auto p = new int; //p is int*
Similarly, decltype returns the type of its argument (an expression, a literal or an object). You can use decltype to automate the return type of a function template, create typedefs and so on:
typedef decltype (5) XTYPE; //XTYPE is synonymous for int, the type of 5 XTYPE x=0;
Rvalue references
Rvalue references were added to C++0x to facilitate move semantics and perfect forwarding. In the original proposal, rvalue references could bind to lvalues. This however led to ambiguities of overloaded functions, and undesirable implicit conversions. Later, the rules were changed (this is known as "rvalues 2.0"). In the current FCD, rvalues cannot bind to an lvalue. VS 2010 implements rvalues 2.0, which allows you to overload a function based on the type of the reference:
void readval(int&& n); void readval(int& n); int main() { int x; readval(5); //#1 readval(x);//#2 }
static_assert
You've heard of the #error preprocessor directive, which validates its argument during the preprocessing stage. You surely have seen and used the assert() macro, which is evaluated at runtime. The missing link was a compile-time assertion facility, though. The new keyword static_assert that was recently added to C++0x lets you evaluate a certain condition at compile-time. This feature is mostly useful for validation template arguments.
nullptr
The new keyword nullptr is used instead of the archaic and bug-prone NULL macro. nullptr represents a generic null pointer value. You can use it to initialize, assign or compare any pointer type, including pointers to members, pointers to functions and of course, pointers to data types:
int * n= nullptr; if (n==nullptr) {//.. } int (A::*pmf)(char) =&A::f; pmf=nullptr;
Standard Library Enhancements
A significant portion of the Visual Studio 2010 Standard Library has been rewritten to support C++0x features, most notable rvalue references. Standard containers such as vector and list now include move constructors and move assignment operators. Thus, vector reallocations now take advantage of move semantics by picking up move constructors of the vector's element type. Similarly, Standard Library algorithms recognize types that have move constructors and move assignment operators.
The new Standard Library also provides a smart pointer called unique_ptr which superseded the deprecated auto_ptr. Unlike the latter, unique_ptr safely implements move semantics so you can use it with STL containers and algorithms.
Finally, various new algorithms were added to the C++0x Standard Library including is_sorted() and iota() (one of my favorite algorithms!). The Visual Studio 2010 Standard Library implements these new algorithms. Here's an example that uses iota() to populate an array of 10 integers with the values 5,6...14:
#include <numeric> int main() { int x[10]; std::iota (x, x+10, 5);//begin, end, initial value } | http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=480 | CC-MAIN-2014-35 | refinedweb | 1,118 | 51.28 |
Protect Custom Metadata Types and Records
Learning Objectives
- Identify the options for protecting custom metadata types.
- Control the editability of a field.
- Protect custom metadata records.
When you create a custom metadata type, you can determine who can access and change the type.
- Public — Regardless of the type of package (managed or unmanaged), the following have access:
- Apex
- Formulas
- Flows
- API for users with Customize Application or permissions granted through profiles or permission sets. The custom metadata type, fields, and unprotected records are visible in setup.
- Protected — When in a managed package, only Apex code in the same namespace can see the type. The name of the type and the record are visible if they’re referenced in a formula.
- PackageProtected — When in a second-generation managed package, only Apex code in the same managed package can see the type. The name of the type and the record are visible if they’re referenced in a formula.
There are a few more things you need to know about custom metadata types and protection.
- Protected custom metadata types can only be created in developer and scratch orgs.
- By default API read access is restricted, even for types set to Public. This is set through the Schema Setting, Restricted access to custom metadata types. Restricting access is recommended. Access can be granted to users through profiles and permission sets by admins with Customize Application permission. When not enabled, users without the Customize Application permission can read custom metadata types using different Salesforce APIs that are provided by Salesforce.
- Apex code that is run in system mode ignores user permissions and your Apex code is given access to all objects and fields. Functionality that runs in system mode, such as Apex, is not affected by the Restrict access to custom metadata types org preference.
- In user mode, functionality such as Visualforce Components, Visualforce Email templates, and Aura, is run with respect to the user's permissions and sharing of records.
Protect Custom Metadata Records
Instead of protecting an entire metadata type, you can protect individual records within a public type. If a developer releases protected records in a managed package, access to them is limited in specific ways.
- Code that’s in the same namespace as custom metadata records can read the records.
- Code that’s in the same namespace as custom metadata types can read the records that belong to that type.
- Code that’s in a namespace that doesn’t contain either the type or the protected record can’t read the protected records.
- Code that the subscriber creates and code that’s in an unmanaged package can’t read the protected records.
You can also protect custom metadata types, providing the same access protection as protected records. If you change a type from protected to public, its protected records remain protected, and all other records become public. If you use Setup to create a record on a protected type, Protected Component is selected by default.
When a type is public, you can’t convert it to protected. The subscriber can’t create records of a protected type.
Field Manageability
When it comes to protecting fields on custom metadata types, you have three options.
- The package developer can use package upgrades to edit the field after release. Subscriber orgs can’t change the field.
- The subscriber org can edit the field after installing the package. Package upgrades don’t override the subscriber’s changes.
- Neither the package developer nor the subscriber can edit the field after the package is released.
These options seem fairly simple, but let’s take some time to dig into how everything works together.
Put It All Together
Let’s say we’re putting our Support Tier type in a managed package. For legal reasons, we don’t want package subscribers to change the Default Discount field. As the package developer, you still want to be able to change it in later package releases if the legal requirements change. So, you want the Default Discount field to be upgradeable.
The values on the Minimum Spending field vary depending on the org, though. Sometimes orgs change the minimum spending amounts based on local factors. To account for this need, make the Minimum Spending field subscriber editable.
If we look at the same custom metadata type and its associated records in an unmanaged package, it’s easier to see who can edit what.
An unmanaged package gives the subscriber freedom to edit records and fields on custom metadata types.
Resources
- Salesforce Help: Custom Metadata Types Implementation Guide
- Salesforce Help: Protection and Privacy Options for Custom Metadata Types | https://trailhead.salesforce.com/pt-BR/content/learn/modules/custom_metadata_types_adv/cmt_manageability?trail_id=configure-your-app-with-custom-metadata-types | CC-MAIN-2021-21 | refinedweb | 768 | 55.64 |
The QStatusTipEvent class provides an event that is used to show messages in a status bar. More...
#include <QStatusTipEvent>
Inherits QEvent.
The QStatusTipEvent class provides an event that is used to show messages in a status bar.
Status tips can be set on a widget using QWidget::setStatusTip(). They are shown in the status bar when the mouse cursor enters the widget. Status tips can also be set on actions using QAction::setStatusTip(), and they are supported for the item view classes through Qt::StatusTipRole.
See also QStatusBar, QHelpEvent, and QWhatsThisClickedEvent.
Constructs a status tip event with text specified by tip.
See also tip().
Returns the message to show in the status bar.
See also QStatusBar::showMessage(). | http://doc.trolltech.com/4.0/qstatustipevent.html | crawl-001 | refinedweb | 117 | 69.48 |
When I was developing a relational database engine, I was testing some queries in Microsoft SQL Server, and I found that some simple queries were taking a very long time to execute, or I received the message "not enough space on the disk". After some investigation, I found that the problem was in the saving of the query result, or to be more accurate, the time to save the recordset cursor values. So, I tried to overcome this time issue in my design. This article gives a brief overview of recordset cursors and SQL statement execution, then takes a look at a simple query and explains the problem that slows down the performance of most DBMSs. Then, I'll introduce the idea of virtual cursors and apply the idea to more sophisticated queries.
The SQL statement "Select * from orders,employees,customers where orders.employeeid=4" in SQL server takes 55 seconds to execute. I know that SQL Server can search in the field orders.employeeid for the value 4 in no more than 0 ms, but the problem is taking the result and combining it with that from two other tables (employees, customers) and saving the output cursor. That means, if the output result contains a big or a huge number of rows, SQL Server takes a lot of time to save the whole result, which motivated me to think about finding a way to overcome this problem in my company's engine.
I know that this is not a normal query and the cross product of the three tables returns 664,830 rows, but if you look at my execution time for the same query, you can find the difference, and it will motivate you to read the whole article to know the idea behind the virtual cursor. The following figure displays my engine time to execute the same query. It takes less than 1 millisecond.
Actually, SQL Server takes no time to execute the query; the total time depends on the final resultant cursor vector. If it is too large, it takes a long time, or returns an error of "not enough space on the disk"; if it is small, it takes no time to store it. As proof of its speed, if we use the query "Select count(*) from ...", it takes no time. So, the problem is in the saving of the cursor values. For example, for the query:
Select * from Orders, Employees, Customers, Products
WHERE Orders.EmployeeID IN (4, 5, 6)
AND Employees.EmployeeID IN (1, 2, 3)
AND Customers.CustomerID LIKE 'A%'
SQL Server takes 3 seconds to return 244,860 rows (it takes 16 ms to return the same rows in my engine). If you try to delete the condition "Customers.CustomerID LIKE 'A%'", you can go make a sandwich in the time it takes to get a result: about 35 seconds!!! My engine takes 1 ms to get 5,448,135 rows. Anyway, let us start with an overview of the recordset cursor and its internal structure, and then discuss cursor expansion as an introduction to virtual cursors. Please take your time to read each section, as each section is the key to the next.
To interact with any database management system, you execute a SQL statement through a Recordset object. The Recordset object executes the input query and stores the results in a certain data structure to enable the caller to enumerate through the execution result. The enumeration is usually done with the functions:
int GetRecordCount()
bool IsBOF()
bool IsEOF()
void MoveNext()
void MovePrev()
void MoveFirst()
void MoveLast()
Move(int nRows)
MoveTo(int nRow)
The following figure represents the location of the Cursor component in the DCMS components:
Let us have a simple example to try to find a suitable structure to represent the Cursor. I'll assume that I have four tables: T1, T2, T3, and T4. F1 is a foreign key in T1, and a primary key in T2.
For the query "Select * from T1,T2", we have the result of a cross join, as in the following table:
So, we can imagine the result as a view of rows, and the cursor is just a vector of values ranging from 0 to the number of rows-1. The cursor functions are just incremented and decremented: a variable between the min and max values. But, what is the case if we have a condition that limits the result, for example, if we modify the input query to: "Select * from T1,T2 where T2.F1=T1.F1"? In this case, we have the result as:
So, the result is a subset of the cross join of the two tables (the product of the two tables). As in the following figure:
So, the cursor vector just included the items (2, 3, 5, 11, 14), and the rows count is just the length of the vector (5 rows). A simple algorithm to map the cursor values to the fields record values, in its simple form, is:
void CRecordSet::GetFieldValue(CField* pField, VARIANT& var)
{
// get field record from cursor
int nFieldRecord = GetFieldRecord(pField);
// retrieve field value from field
pField->GetRecord(nFieldRecord, var);
}
int CRecordSet::GetFieldRecord(CField* pField, bool bUseCursor)
{
// map cursor position a the global cross product of tables (look figure 1)
int nCursor = bUseCursor ? m_vCursor[m_nCursor] : m_nCursor;
// iterate through recordset tables
int nTableCount = m_vTables.size(), nTableSize, nPrefixSize = 1;
for (int n = 0; n < nTableCount; n++)
{
nTableSize = m_vTables[n]->GetRecordCount();
if(m_vTables[n] == pField->m_pTable)
return (nCursor / nPrefixSize) % nTableSize;
nPrefixSize *= nTableSize;
}
return -1;
}
GetFieldRecord is a simplified version of my original code. It maps the recordset internal cursor (m_nCursor) through the cursor vector m_vCursor which includes the final result as in figure 1. To understand the function, we should first understand the cross product of any number of tables as it is the secret behind this function. The cross product for two tables is just repeating each row from the second table with all the rows of the first table. That is clear in the left side table of figure 1. So, if we execute:
Select * from T3, T4
we get the rows:
Each row of T4 is repeated with all the rows of T3, which is the key here. In the algorithm, you can find a variable nPrefixSize which is initialized to 1. We iterate through the recordset tables (m_vTables) while multiplying nPrefixSize by each table size. For the first table, nPrefixSize equals 1, so each row is repeated once. Then, nPrefixSize is multiplied by the first table size, so for the second table, nPrefixSize equals the first table size, and each of its rows is repeated that many times. This iteration continues until we reach the table of the field:
if(m_vTables[n] == pField->m_pTable)
return (nCursor / nPrefixSize) % nTableSize;
Divide nCursor by nPrefixSize to remove the repetition, then take the remainder of dividing the result by nTableSize. This algorithm can be applied to any number of tables. So, we can map the cursor position to a field row value with a simple and short for loop.
Before digging into my target of recordset cursor construction, let us have a look at the general SELECT statement structure.
SELECT [DISTINCT] field0, field1, ..., fieldn
FROM table0, table1, ..., tablen
WHERE ......
GROUP BY field0, field1, ..., fieldn
HAVING ......
ORDER BY field0, field1, ..., fieldn
The Select section can contain field names or math operations on the fields, or aggregate functions (count, sum, avg, min, max, stdev).
The following activity diagram shows the steps of execution of a SELECT statement. I'll talk here only about the part ExecuteWhere.
The ExecuteWhere uses the "WHERE ......." conditions to filter the cross join of the "FROM tablename(s)" part. The function does Boolean operations between sub-conditions; each condition is a filter of one or more fields of the specified table(s). For example, in the following condition:
"WHERE ......."
"FROM tablename(s)"
(orders.orderdate>'5/10/1996' and employees.employeeID=3) and shipname like 'Toms*'
the ParseCondition operation constructs an execution stack like this:
and
    and
        orders.orderdate > '5/10/1996'
        employees.employeeID = 3
    shipname like 'Toms*'
I'll describe three ways to execute the previous query. The first one clarifies the idea of direct filtering. The second is the key to the virtual cursor (so take time to understand it), but it takes a lot of memory in some cases. The third is the virtual cursor solution, which has the power of the second solution plus less memory usage.
The first step is to assume that the recordset is fully expanded by the cross join of the two tables (Orders, Employees). So, the total number of records is equal to the size of Orders multiplied by the size of Employees. The second step is to loop through this record count and call the function GetFieldRecord(pField, false) for the field that needs to be filtered, and add to the cursor vector only the values that match the condition (e.g.: orders.orderdate>'5/10/1996'). So, after the loop, we have a cursor that can share in boolean operations with other cursors to have the final cursor of the recordset.
So, the function calls Search on the top of the stack, which in turn asks its two operands to search, then takes their two cursors and performs a boolean operation depending on its operator (AND, OR, XOR). As you can see, this is a recursive operation; no need to waste time and space here to discuss it.
First, we need to understand the concept of expanding the cursor. So, I need to take a very simple example, then enlarge the scale with a sophisticated one. If we have a table like "Orders" in the NorthWind database, and want to execute a query like:
Select * from Orders
we get 830 rows (the cursor vector is equal to the table length). But, if we execute a query like:
Select * from orders where freight<0.2
then, we get 5 rows with a cursor vector of (48, 261, 396, 724, 787). These values are a direct mapping to the full table rows, and we can get the table row number as I described in the section "Recordset Cursor". But what happens if we have two or three tables in the query and a filter like where freight<0.2? Should we use the first way, "Direct view filtering", to filter the whole view of the three tables? Imagine the three tables: Orders, Employees, and Customers. The cross join (full view) of the tables will have 664,830 rows. We don't have this view expanded in memory; we just have the cursor vector that includes a serial from 0 to 664,829, and then, using the algorithm of the "Recordset Cursor" section, we can reach the right record number of any field. But in the case of the filter (where freight<0.2), how can we construct the cursor vector without "Direct View Filtering"? That is the question!!!
How can we transfer the vector of the query "Select * from orders where freight<0.2" (48, 261, 396, 724, 787) to the vector of the query "Select * from orders,employees,customers where freight<0.2" (48, 261, 396, 724, 787, 878, 1091, 1226, ......, 664724, 664787) which includes 4005 rows? I think you know now what the meaning of Cursor Expansion is, so we can define it as: "Cursor Expansion is the process of expanding a cursor of one (or more) table(s) (resulting from a search with table indexes) to a cursor over the cross join of this (these) table(s) with many tables".
That expansion depends on many factors:
For three tables, we have three positions for the Orders table: 0, 1, and 2. It depends on its order in the input query. So, we must have an algorithm that considers the whole case.
For example, if we have a condition like "Employees.employeeid=Orders.employeeid" (a relation condition), we have two tables. The expansion depends on the location of the two tables and the distance between them: whether they are adjacent or separated by other tables.
Suppose we have our tables:
and we have the query "SELECT T1.F1,T2.F1,T3.F2 FROM T1,T2,T3 WHERE T1.F1='C'".
The final result is a subset of the cross join of the three tables T1, T2, and T3, with cursor values (1, 4, 6, 9, 11, 14, 16, 19, 21, 24, 26, 29, 31, 34, 36, 39, 41, 44, 46, 49, 51, 54, 56, 59).
And, if we change the order of the three tables in the query like this: "SELECT T1.F1,T2.F1,T3.F2 FROM T2, T1, T3 WHERE T1.F1='C'", it will be (3, 4, 5, 12, 13, 14, 18, 19, 20, 27, 28, 29, 33, 34, 35, 42, 43, 44, 48, 49, 50, 57, 58, 59), and for the order "SELECT T1.F1,T2.F1,T3.F2 FROM T2, T3, T1 WHERE T1.F1='C'", it will be (12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59).
So, the order of tables affects the output cursor values (we get the same data, just the order is different; to get a certain order, we can use Order By). So, we need an algorithm to expand the condition vector to the final cursor depending on the order of the shared tables. One can say, "Instead of that, we can construct a vector for each field containing its records, and use them to retrieve values with the cursor position". But, that will take a lot of space to keep these vectors for all the selected fields, especially if the number of fields is large (over ~ 20 fields). In addition to the wasted space, we need a very sophisticated algorithm to do boolean operations between conditions (T1.F1='C' AND T3.F2=2). But, in our case of saving only cursor values (relative to the cross join of the sharing tables), for each condition, we can do boolean operations simply between conditions' cursor values, using a simple algorithm like set_intersection and merge of the STL library.
Order By
T1.F1='C' AND T3.F2=2
set_intersection
merge
In the whole case, the view (output) size is equal to the condition cursor size multiplied by the other tables' sizes.
view size = condition size * other tables' sizes
Back to the section Recordset Cursor and figures 2, 3. The recordset object contains a stack of conditions saved in a tree structure as in figure 3. Each node represents an object of type WhereItem. The execution of the query starts at the top WhereItem of the stack, which receives a reference to the recordset cursor. The Recordset Cursor is a simple vector of values and a group of functions, like that:
WhereItem
// simplified code for recordset cursor
class CCursor : public vector<__int64>
{
public:
... constructors
public:
vector<CTable*> m_vTables;
// tables that share in the cursor (ordered with recordset tables)
public:
// Boolean operators
CCursor& operator=(const CCursor& input);
CCursor& operator|=(const CCursor& input);
CCursor& operator&=(const CCursor& input);
CCursor& operator^=(const CCursor& input);
CCursor operator|(const CCursor& input);
CCursor operator&(const CCursor& input);
CCursor operator^(const CCursor& input);
// calculate cursor records count
__int64 GetRecordCount(bool bCursor = true);
// retrieve field record corresponding to cursor nRow
int GetRecord(__int64 nRow, CIRField* pField, bool bOrdered = true);
// Filter current cursor with a condition cursor
bool Filter(CCursor& vCursor);
// Expand vCursor (with its tables) to *this (with its tables)
void Expand(CCursor& vCursor);
... order by, group by, and distinct functions
};
Each WhereItem may be an operator (node with two operands), or an operand (leaf - direct condition). So, if the top of the stack is a leaf, it will directly apply the condition to the WhereItem tables (assigned in the parsing process - out of this article's scope). If it is an operator, it will call its two operands' Search function and apply its operator to the output cursors. So, we can imagine a code like that:
Search
// this a simplified code (original include more
// conditions for time optimization and logging)
void WhereItem::Search(CCursor& cursor)
{
if(m_cOperator) // node
{
// copy my cursor (take same tables)
CCursor copy = m_vCursor;
// search in the two operands
m_pOperands[0]->Search(m_vCursor);
m_pOperands[1]->Search(copy);
switch(m_cOperator)
{
case 'a': m_vCursor &= copy; break; // and
case 'o': m_vCursor |= copy; break; // or
case 'x': m_vCursor ^= copy; break; // xor
}
}
else // leaf
{
if(m_vFields.size() == 1)
// search using Field indexes
m_vFields[0]->Search();
else if(!m_bRelation) // relation cursor retrieved at parsing process
m_vCursor.Filter(m_vFields[0], m_vFields[1], m_strQuery, m_op);
}
// filter input cursor (may contain more tables) with my filtered values
cursor.Filter(m_vCursor);
}
Top of the stack takes the recordset cursor which contains all the FROM tables, but the leaf nodes may contain one or more tables. And, each node contains the tables of its operands (ordered as Recordset FROM tables). So, imagine the input query and its node working tables:
SELECT * FROM
Categories,Products,Employees,Customers,Orders
WHERE
Categories.CategoryID = Products.CategoryID AND
Customers.CustomerID = Orders.CustomerID AND
Employees.EmployeeID = Orders.EmployeeID
As you can see from the WhereItem::Search function, it calls cursor.Filter(m_vCursor) to expand the parent cursor (input to the function from the parent). For example, the lower AND of figure 4 calls its operands with the cursor that contains the tables Employees, Customers, and Orders. Operand 0 has a cursor of the tables Customers, Orders. So, it takes the cursor (relation cursor) on tables Customers, Orders, and tries to expand it to the tables Employees, Customers, Orders. And, operand 1 has a cursor of the tables Employees, Orders. So, it takes the cursor (relation cursor) on the tables Employees, Orders, and tries to expand it to the tables Employees, Customers, Orders. After that, the two operands' cursors are expanded with the same tables (with the same order). So, we can apply the boolean operations safely.
WhereItem::Search
cursor.Filter(m_vCursor)
AND
switch(m_cOperator)
{
case 'a': m_vCursor &= copy; break; // and
case 'o': m_vCursor |= copy; break; // or
case 'x': m_vCursor ^= copy; break; // xor
}
The process is done till we expand the cursor of the top stack WhereItem with the recordset cursor, which may include more tables, as each node collects tables from its children. The key here is that each leaf/node is expanded to its parent node tables only, not the recordset tables. So, it accelerates the entire execution. As I mentioned before, the cursor expansion for any condition depends on the condition tables order relative to its parent tables. If there is no condition, the tables are cross joined, but with the condition, we have the cross join of the condition cursor with the other tables. After some examples, I found that I needed to calculate some variables that are shared in the expansion process:
recordset
// size of the cursor after expansion (calculated before expansion)
__int64 m_nViewSize;
// product of tables' sizes before the first table condition
__int64 m_nPrefixSize;
// product of condition tables' sizes located
// before first middle table (see m_nMiddleSize)
__int64 m_nGroupSize;
// product of tables' sizes between condition tables
__int64 m_nMiddleSize;
// tables cross product records count
// (calculated once) without the condition
__int64 m_nRecordCount;
As in the following figure:
m_nViewSize = T1.size * T3.size * T5.size * condition.size;
m_nPrefixSize = T1.size;
m_nGroupSize = T2.size;
m_nMiddleSize = T3.size;
m_nRecordCount = T1.size * T2.size * T3.size * T4.size * T5.size;
The function Filter(CCursor& vCursor) of the cursor takes the cursor of the condition (filter) and expands it to its tables (or from another side, filters the cross join of its tables by the condition cursor).
Filter(CCursor& vCursor)
bool CCursor::Filter(CCursor& vCursor)
{
if(vCursor.m_vTables.size() == m_vTables.size())
{ // if same tables then copy cursor (no need for expansion)
*this = vCursor;
return true;
}
// initialize values
m_nViewSize = vCursor.size(), m_nMiddleSize = 1, m_nPrefixSize = 1;
if(m_nViewSize == 0)
return true;
// seek through cursor tables to find search fields' tables
vector<int> vTableOrder;
for (vector<CTable*>::iterator table = vCursor.m_vTables.begin();
table != vCursor.m_vTables.end(); table++)
vTableOrder.push_back(m_vTables.lfindIndex(*table));
// loop through cursor tables to get relation size
// before expansion and cursor size after expansion
int nTableCount = m_vTables.size(), nTableSize;
for (int nTable = 0; nTable < nTableCount; nTable++)
if(vTableOrder.lfindIndex(nTable) == -1)
{
m_nViewSize *=
(nTableSize = m_vTables[nTable]->GetRecordCount());
for(vector<int>::iterator i =
vTableOrder.begin(); i < vTableOrder.end()-1; i++)
if(nTable > *i && nTable < *(i+1))
m_nMiddleSize *= nTableSize;
}
// seek for search table to expand cursor
m_nGroupSize = 1;
bool bFound = false;
for (int nTable = 0; nTable < nTableCount; nTable++)
{
nTableSize = m_vTables[nTable]->GetRecordCount();
if(vTableOrder.lfind(nTable))
{ // expand vCursor into (*this)
m_nGroupSize *= nTableSize;
bFound = true;
}
else
{
if(bFound)
break;
m_nPrefixSize *= nTableSize;
}
}
m_nSegmentSize = vCursor.size()*m_nMiddleSize*m_nPrefixSize;
Expand(vCursor);
return true;
}
The previous function initializes the variables needed for cursor expansion. It is time now to introduce the Expand function. Remember, the Expand function works as in the definition "Cursor Expansion is the process of expanding a cursor of one (or more) table(s) (resulting from a search with table indexes) to a cursor over the cross join of this (these) table(s) with many tables".
Expand
To find the way to expand a condition filter to the cross join values, I did an example by hand, and investigated the results to find the general expression to calculate the values. The following tables show the trial for the query:
Select * from T1,T2,T3,T4,T5 where T3.F2=T5.F2
Imagine first the result of the query: "Select * from T3,T5 where T3.F2=T5.F2"; then we can expand its cursor to the tables T1, T2, T3, T4, T5.
Select * from T3,T5 where T3.F2=T5.F2
and for the expanded query "Select * from T1,T2,T3,T4,T5 where T3.F2=T5.F2":
Actually, that was not the only trial. I tried many samples, but each trial needed a very large Excel sheet to illustrate the expansion, which is difficult to be included in the article. Finally, I found the following general expression: NC=m*G*P+(C+m)*P+p: where NC is the new cursor value and C is the old one. I implemented the following Expand function to map the old cursor to the new sharing tables:
// not optimized version of the original code (for company privacy)
void CCursor::Expand(CCursor& vCursor)
{
// resize destination cursor with the calculated view size
resizeEx((int)m_nViewSize);
// loop in source cursor items to expand items to recordset view
__int64 nForeignMatch, nViewIncrement = 0,
nSegmentSize = vCursor.size()*m_nMiddleSize*m_nPrefixSize, nCursor;
__int64 nMiddle, nMatch;
vector<__int64>::iterator _first = vCursor.begin(),
_end = vCursor.end(), _dest = begin();
// loop to fill all calculated view size
while(m_nViewSize)
{
__int64 nMaxForeignMatch = m_nGroupSize;
for(vector<__int64>::iterator i = _first;
i != _end; i += nForeignMatch,
nViewIncrement += m_nMiddleSize,
nMaxForeignMatch += m_nGroupSize)
{
// count cursor values that are under the size
// of the foreign table
// (it should be 1 for FroeignPrimary sequence)
for(nForeignMatch = 0; i+nForeignMatch != _end &&
(*(i+nForeignMatch)) < nMaxForeignMatch;
nForeignMatch++);
// repeat cursor items with the size
// of table(s) located between relation tables
for(nMiddle = 0; nMiddle < m_nMiddleSize; nMiddle++)
{
nCursor = (nMiddle + nViewIncrement) *
m_nGroupSize * m_nPrefixSize;
for(nMatch = 0; nMatch < nForeignMatch; nMatch++)
for (__int64 nPrefix = 0, c = nCursor +
(*(i+nMatch)%m_nGroupSize)*m_nPrefixSize;
nPrefix < m_nPrefixSize; nPrefix++)
*(_dest++) = c+nPrefix;
}
}
m_nViewSize -= nSegmentSize;
}
}
The algorithm starts by resizing the cursor with the expected view size. To understand the meaning of Segment Size, we should first understand how we can estimate the output view size. Actually, the view size should be the product of the condition size with the remaining tables' sizes (check figure 5). And, it can be calculated by using another method. It can be:
m_nViewSize = vCursor.size()*m_nMiddleSize*
m_nPrefixSize*(product of tables after last condition table)
// in our case: no tables after last condition table T5, So
m_nViewSize = 5 * 3 * 15 = 225
nSegmentSize is simply equal to vCursor.size()*m_nMiddleSize*m_nPrefixSize. Any way, I hope you understand the idea of Cursor Expansion, no need to fully understand the algorithm now, as I think that any one who knows the idea can extract the algorithm and implement it in his own way.
nSegmentSize
vCursor.size()*m_nMiddleSize*m_nPrefixSize
Let us check the query and have some measurements from the Windows Task Manger:
from orders,employees,customers,orderdetails,shippers,suppliers
where
customers.customerid=orders.customerid and
employees.employeeid=orders.employeeid and
orders.orderid=orderdetails.orderid and
orders.shipvia=shippers.shipperid
The following table includes each condition and its cursor size before and after expansion:
If you notice, the condition customers.customerid=orders.customerid takes 366 MB after its cursor expansion (!!!), while it was 6600 bytes before expansion (825 rows * sizeof(__int64)). The following figures show the Windows Task Manger during the query execution:
customers.customerid=orders.customerid
sizeof(__int64)
To overcome this problem, I thought about a Virtual Cursor which I will demonstrate in the next section.
SQL Server again: I don't think that SQL Server works the same way. Actually, it doesn't take memory during query execution or for the final result. But, if the final cursor is too large, it takes a long time as it saves it to the hard disk, which may fail if there is not enough space on the hard disk, or if it needs more area to be saved. Anyway, there is a problem in the way that SQL Server saves the final cursor if it is huge. So, read the next section to check if it can help solve this problem.
In the previous section, we introduced the idea of Cursor Expansion and how this expansion is useful to do boolean operations between expanded cursors of query conditions. But, we faced the problem of heavy memory usage and the risk of out of memory. So, I'll introduce a new idea for Cursor Expansion.
What if we don't expand cursors at all and leave them unexpanded. You may ask "but we need to expand them for the cross join of shared tables". You are right, but, as you know, we have the algorithm of Cursor Expansion at the Expand() function, so we can keep the same algorithm and expand the cursor values on demand. Yes, that is the idea. We can make a new function GetCursor() that receives the required row number and use the same algorithm to calculate the cursor value (relative to the cross join). So, we only need to keep the variables that share the algorithm to use it in the function. You may say "it will be slow". You are right too. But, if we add some caching and extend the cache window to an optimum number, we can accelerate the process. In addition, I have added some tricks to optimize the speed and jump faster to the required cursor value.
Expand()
GetCursor()
So, the solution now can be summarized in keeping the cursor without expansion and expanding only a small window (e.g.: 10240) of cursor values, and sliding this window as needed foreword or backward depending on the cursor type. So, we replace the size of the expanded cursor with only its original size plus the size of the cache window.
The GetCursor algorithm is the same as the Expand GetCursor, except that it expand only a small window of cursor values. So, we can take the same algorithm with some modifications. To simplify the code, we do these modifications step by step. The initial code is as follows:
GetCursor
// not optimized version of the original code (for company privacy)
__int64 CCursor::GetCursor(__int64 nRow)
{
if(empty() || m_bSerial)
return nRow;
if(m_bVirtual == false)
return (*this)[(int)nRow];
if(m_nCacheRow != -1 && nRow >= m_nCacheRow &&
nRow < m_nCacheRow+m_nCacheIndex)
return m_vCache[nRow-m_nCacheRow];
m_vCache.resizeEx(m_nCachePageSize);
m_nCacheRow = nRow;
m_nCacheIndex = 0;
__int64 nSegment = nRow/m_nSegmentSize, nMiddle,
nMatch, nPrefix, nForeignMatch;
__int64 nViewIncrement = nSegment*m_nPageViewIncrement,
nAdded = nSegment*m_nSegmentSize,
nViewSize = m_nViewSize-nAdded;
while(nViewSize > 0)
{
__int64 nForeignMatchIndex = 0;
// loop in source cursor items to expand items to this
for(iterator i = begin(), _end = end();
i != _end; i += nForeignMatch,
nViewIncrement += m_nMiddleSize)
// repeat cursor items with the size
// of table(s) located between relation
// tables
if(nForeignMatch = m_vForeignMatch[nForeignMatchIndex++])
for(nMiddle = 0; nMiddle < m_nMiddleSize; nMiddle++)
for(nMatch = 0; nMatch < nForeignMatch; nMatch++)
for(nPrefix = 0; nPrefix < m_nPrefixSize; nPrefix++)
if(nAdded++ >= nRow)
{
m_vCache[m_nCacheIndex++] =
(nMiddle + nViewIncrement) *
m_nGroupSize * m_nPrefixSize +
(*(i+nMatch)%m_nGroupSize)*m_nPrefixSize +
nPrefix;
if(m_nCacheIndex == m_nCachePageSize)
return m_vCache[0];
}
nViewSize -= m_nSegmentSize;
}
return m_vCache[0];
}
m_nCachePageSize
m_nCacheIndex
m_nCacheRow
m_vCache
To accelerate the execution of the functions, we can add some checks to the for loops. Look for the nested loops:
for(nMiddle = 0; nMiddle < m_nMiddleSize; nMiddle++)
for(nMatch = 0; nMatch < nForeignMatch; nMatch++)
for(nPrefix = 0; nPrefix < m_nPrefixSize; nPrefix++)
if(nAdded++ >= nRow)
they can be:
if(m_nCacheIndex == 0 && nRow-nAdded > m_nMiddleSize*nForeignMatch*m_nPrefixSize)
nAdded += m_nMiddleSize*nForeignMatch*m_nPrefixSize;
else
for(nMiddle = 0; nMiddle < m_nMiddleSize; nMiddle++)
if(m_nCacheIndex == 0 && nRow-nAdded > nForeignMatch*m_nPrefixSize)
nAdded += nForeignMatch*m_nPrefixSize;
else
for(nMatch = 0; nMatch < nForeignMatch; nMatch++)
if(m_nCacheIndex == 0 && nRow-nAdded > m_nPrefixSize)
nAdded += m_nPrefixSize;
else
for(nPrefix = 0; nPrefix < m_nPrefixSize; nPrefix++)
if(nAdded++ >= nRow)
Each if statement is to avoid doing the loop if there is no need to do it. Anyway, it is not correct to try to explain the function here, as it needs a white board and a marker to breakdown each detail and get an example for each part. I am interested only in the idea and the problem it faces, not its details.
if
Boolean operations with Virtual Cursor: Boolean operations are done between cursors of conditions. What is the case if cursors are not expanded? How can we intersect or merge them? I mentioned before that I am using the simple algorithm set_intersection and merge of the STL library. These algorithms take the two cursors (vectors) and do its work and output the result to a destination vector. But, in the case of the Virtual Cursor, we don't have a vector. We only have a cache window of the vector. Let us first have a look at the STL set_intersection algorithm and see how we can use it to do our task.
template<class _InIt1,
class _InIt2,
class _OutIt> inline _OutIt set_intersection(_InIt1 _First1, _InIt1 _Last1,
_InIt2 _First2, _InIt2 _Last2, _OutIt _Dest)
{ // AND sets (_First1, _Last1) and (_First2, _Last2)
for (; _First1 != _Last1 && _First2 != _Last2; )
if (*_First1 < *_First2)
++_First1;
else if (*_First2 < *_First1)
++_First2;
else
*_Dest++ = *_First1++, ++_First2;
return _Dest;
}
The algorithm iterates through the two vectors linearly, and if two values are equal, it adds it to the destination. I'll use the same idea, but instead of getting the value from the vector, I'll call GetCursor(). And, because of caching and the foreword cursor (default implementation), it will be very fast. Here is a part of the &= operator of the CCursor class:
&=
CCursor
CCursor& CCursor::operator&=(const CCursor& input)
{
if(input.empty())
clear();
else if(m_bVirtual || input.m_bVirtual)
{
vector<__int64> dest;
dest.set_grow_increment(10240);
__int64 nCursor[2] = { GetCursor(0),
((CCursor*)&input)->GetCursor(0) };
__int64 nSize0 = GetCursorCount(),
nSize1 = ((CCursor*)&input)->GetCursorCount();
// STL set_intersection algorithm
for (__int64 n0 = 0, n1 = 0; n0 < nSize0 && n1 < nSize1; )
if (nCursor[0] < nCursor[1])
nCursor[0] = GetCursor(++n0);
else if (nCursor[0] > nCursor[1])
nCursor[1] = ((CCursor*)&input)->GetCursor(++n1);
else
dest.push_backEx(nCursor[0]),
nCursor[0] = GetCursor(++n0),
nCursor[1] = ((CCursor*)&input)->GetCursor(++n1);
assignEx(dest);
m_bVirtual = false;
}
else
// if the two input vectors are sorted use normal intersection
resizeEx(set_intersection(begin(), end(), (iterator)input.begin(),
(iterator)input.end(), begin())-begin());
...
return *this;
}
In the same way, we can implement the other operators (|= , ^=, -=). The output cursor is not virtual, so the variable m_bVirtual is set to false. The output cursor is not virtual, but it can be taken by its composer WhereItem and made virtual relative to the parent of WhereItem. To clarify the previous statement, we should know that the cursor is made virtual if its condition contains tables less than its parent tables. So, we have to expand it or make it virtual. One can ask, what about Order By, Group By, and Distinct? How can we order a virtual cursor? Is that possible? Actually, I tried to do that and thought about it many times, but I failed to find a clear idea about ordering a virtual cursor. But, the good point here is Order By is done over the final cursor of the recordset, which is usually small as it is the result of some filtering conditions. So, to solve this problem, if the query contains Order By, Group By, or Distinct, I expand the final cursor as explained in the section Cursor Expansion.
|=
^=
-=
m_bVirtual
false
Group By
Distinct
bool CRecordset::ExecuteWhere()
{
// construct record set initial view
m_vCursor.m_vTables = m_vFromTables;
// check if there is a where statement to be parsed
if(m_strWhere.IsEmpty() == false)
{
// start search from stack top item
m_vWhereStack[0]->Search(m_vCursor, false);
if(m_vGroupByFields.empty() == false || m_bDistinct ||
m_vOrderFields.empty() == false)
m_vCursor.Expand();
}
else
m_vCursor.m_bSerial = true;
return true;
}
Note the condition:
if(m_vGroupByFields.empty() == false || m_bDistinct ||
m_vOrderFields.empty() == false)
m_vCursor.Expand();
and look at figure 2 to see that this function is called before the Order By or Group By execution.
After applying the technique of virtual cursor with the query:
we can check each condition and its cursor size in the following table:
Note that the maximum cursor size is just 17 KB, while it was 366 MB in the Expansion case.
Thanks to God and thanks to the Harf Company which gave me the chance to develop this interesting project.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
supercat9 wrote:I fully understand that multi-table selections can be very useful in cases where indexed items in one table are compared against indexed items in another. In such cases, a single multi-table query can return a focused result with better performance that could be obtained by either selecting everything from both tables that might be of interest and then filtering, or by iteratively taking items from the first table and querying them in the second.
supercat9 wrote:I fail, however, to see the usefulness of a query which will return substantially more records than the total number of records in the databases being queried. Am I missing something?
Hatem Mostafa wrote:It is the first useful message I have received.
supercat9 wrote:I suspect you got a '2' because it's not really clear from the article what you want to do with the Cartesian product of the tables. Knowing a beautifully-efficient way to perform a certain task is generally only useful if either the task itself is useful or the method can be applied to other useful tasks.
supercat9 wrote:Note that in that example, it would be possible that some of the cross joins will produce more records than existed in the original tables, but the number of records is unlikely to be huge. Is that the sort of situation for which you were envisioning your technique as being useful?
Hatem Mostafa wrote:I hope that our God will punish u one day if u r trying to downgrade my work.
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/25728/Database-Virtual-Cursor?msg=2532255 | CC-MAIN-2014-35 | refinedweb | 5,976 | 52.29 |
#include <DEP_ContextOptionsFwd.h>
Transient Read Handles are simply accessors to the internal data of the original handle. Unlike normal read handles, transient read handles don't result in a tracked reference to the underlying data, and so have lower overhead, but are unsafe to use outside the scope of, or after any changes to, the original handle. Write handles will not be aware of any active transient read handles, so errors in this usage will not trigger assertions. In fact, no write handles to the underlying data should exist during the entire scope of a transient read handle.
You are not allowed to edit the underlying data, so many read handles may be sharing the same underlying data.
Definition at line 16 of file DEP_ContextOptionsFwd.h. | https://www.sidefx.com/docs/hdk/class_u_t___c_o_w_transient_read_handle.html | CC-MAIN-2022-27 | refinedweb | 126 | 50.67 |
.
In Visual Studio 2010, we plan to give customers the ability to use T4 templates to generate their classes, and are also planning deep integration with the Entity Designer and Visual Studio to provide a great end-to-end experience..
As many of you are aware, VS 2010 allows developers to target FX4.0 as well as FX3.5 (and older runtimes). The T4 templates that we ship in the Visual Studio box generate code that works with the Entity Framework in .NET FX 4.0. This means the VS item templates we ship in the box will not be available for projects that target FX3.5.
However, it is certainly possible (and supported) for users to create new T4 templates (and new VS item templates) that generate code from an EDMX file in projects that target FX3.5 and light up in the same experience. In fact, we might release some new templates ourselves later, on CodeGallery perhaps..
Today.DateDiff(“y”, GETDATE(), Person.Birthday) </DefiningExpression> </Function>
Here are some things to notice::
However the existence of the stub allows you to create LINQ expressions that compile correctly, and then at runtime, when used in a LINQ to Entities query, the function call is simply translated by the entity framework into a query that runs in the database..
Unfortunately.
The Entity Framework's provider model makes it possible for it to work over different database's.
The idea being that someone, either the database vendor or a third party, can write a provider that allows the Entity Framework to work with that database.
The Entity Framework asks the provider to convert queries and update operations from a canonical, database agnostic form, into native commands. I.e. T-SQL for SQLServer.
This Provider model however has an interesting side-effect. It makes it possible to write wrapping providers, providers that wrap another provider, layering in additional services.
Examples of possible wrapping providers include: logging providers, auditing providers, security providers, optimizing providers and caching providers. The latter is the subject of the following one-pager put together by Jarek
----
Business applications use various kinds of Reference Data that does not change at all during the lifetime of an application or changes very infrequently. Examples may include: countries, cities, regions, product categories, etc.
Applications that present data (such as ASP.NET applications) tend to run the same or similar queries for reference data very often resulting in a significant database load. Developers typically use ASP.NET cache and/or custom caching techniques to minimize number of queries but implementing caching manually adds additional complexity and maintenance cost to an existing solution..
We are designing a query caching layer for Entity Framework that:
· Will be transparent (existing code will automatically take advantage of caching without modification other than defining caching policy).
· Will cache query results, not entity objects so arbitrary ESQL queries can also benefit from it
· Will be optimized for read-only or mostly-read-only data. Caching of frequently changing data will also be supported, but we are not optimizing for that scenario.
· Will handle cache eviction automatically on updates.
· Will be extensible (it should be easy to use with ASP.NET cache or 3rd party caching solutions including local and distributed caches)
To implement query caching we need to be able to intercept query and update execution.
All queries in Entity Framework (regardless of their origin: Entity SQL queries, Object Query<T>, LINQ queries or internal queries generated by object layer) are processed in the Query Pipeline which at some point passes Canonical Query Tree (CQT) to the provider to get the result set of a query. We will cache query results in such a way that when the same query is used over and over again (as determined by the CQT and parameter values), the results will be assembled from the cache instead of a database.
Updates are also centralized in Entity Framework (Update Pipeline) and handled in a similar way. At some point update commands (Update CQTs) are sent to the provider. We can add cache maintenance routines at this point that ensure that proper cache invalidation happens each time an update occurs.
Query results stored in the cache will be represented by opaque data structures, which are immutable and serializable, so that they can be easily passed over the wire. An example of such structure may be:
[Serializable] public class DbQueryResults { public List<object[]> Rows = new List<object[]>(); }
When caching query results care must be taken to make sure that returned data is not stale. To be able to detect that, we associate a list of dependencies with each query result. Whenever any of the dependencies change, the query results should be evicted from cache. In the proposed approach, dependencies will be simply store-level entity set names (tables or views) that are used in the query.
For example:
· SELECT c.Id FROM Customers AS c is dependent on “Customers” entity sets
· SELECT c.Id, c.Orders FROM Customers as c is dependent on Customers and Orders entity sets
· 1+2 is not dependent on any entity sets
When adding items to the cache, we will be passing a list of dependent entity sets to the cache provider. After EF makes changes to the database, it will notify the cache about list of entity sets that have changed. All query results relying on any of those entity sets have to be removed from the cache. Dependency names will be represented as strings and collections of dependent entity sets will be IEnumerable<string>.
In the first implementation we will likely use query text or some derivative of it (such as cryptographic hash) as a cache key, but cache implementations should not rely upon cache keys being meaningful.
To be able to work with EF, cache must implement the following interface:
public interface ICache { bool TryGetEntryByKey(string key, out object value); void Add(string key, object value, IEnumerable<string> dependentEntitySets); void Invalidate(IEnumerable<string> entitySets); }
As you can see, the values are passed as objects instead of DbQueryResults.
· TryGetEntryByKey(key,out value) tries to get cache entry for a given key. If the entry is found it is returned in queryResults and the function returns true. If the entry is not found, the entry returns false and value of queryResults is not determined.
· Add (key, value, dependentEntitySets) adds the specified query results to the cache with and sets up dependencies on given entity sets.
· Invalidate(sets) – will be invoked after changes have been committed to the database. “sets” is a list of sets that have been modified. The cache should evict ALL queries whose dependencies include ANY of the modified sets.
Cache providers will typically define some specific retention policies, limits and automatic eviction policies (using LRU, LFU or some other criteria). ICache interface does not define that. Based on the user feedback we may want to extend ICache to specify parameters such as retention timeout for each item or add a new interface for configuring cache behavior in an abstract way.
Using the interface defined above, the pseudo-C#-code implementation of query execution operation may look like this:
DbQueryResults GetResultsFromCache(DbCommandDefinition query) { if (CanBeCached(query)) { // calculate cache key string cacheKey = GetCacheKey(query); DbQueryResults results; // try to look up the results in the cache if (!Cache.TryGetEntryByKey(cacheKey, out results)) { // results not found in the cache - perform a database query results = GetResultsFromDatabase(query); // add results to the cache Cache.Add(cacheKey, results, GetDependentEntitySets(query)); } return results; } else { return GetResultsFromDatabase(query); } }
As always we are keen to hear your comments. | http://blogs.msdn.com/efdesign/ | crawl-002 | refinedweb | 1,260 | 51.68 |
OSM Map On Magellan/Format/raimadb
Raima DB
The Magellan map file uses a Raima DB version 3.0 database for storing text information. This is not meant to be a complete specification of the database binary format; it only lists what needs to be known about it in order to generate a compatible map.
The Raima DB contains several tables, each stored in a single file, plus a dictionary file (db00.dbd). When the database is used in compression mode, there is also one file per table containing compression data/indexes.
db00.dbd
The dictionary lists all files, tables and fields belonging to the DB. The file starts with a 22-byte header, followed by the file table, then the list of tables, and finally all fields within the tables.
The PageSize is used in the table files. The data in the tables is organized in pages; this value specifies how big they are.
File Descriptor
Each file belonging to the database is described in the list of file descriptors. Each file descriptor is 60 bytes long. The first 49 bytes are reserved for the file name; the rest contains some flags and parameters belonging to the file.
Each record inside a table must start on an even byte position. When the record length is odd, the aligned record length is the real record length plus 1.
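As a sketch, the alignment rule above amounts to rounding an odd record length up by one byte (the function name is mine, not part of the format):

```cpp
#include <cstddef>

// Records must start on even byte positions, so an odd record length
// is padded with one extra byte.
std::size_t alignedRecordLength(std::size_t recordLength) {
    return recordLength + (recordLength % 2);
}
```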
Here is the file descriptor for a compression table:
Compression data files have the same file name extension as the corresponding data file, and the name is nearly the same; the difference is a trailing 'c' in the name of the compression data file.
Data File 00cn.dat => Compression data File 00cnc.dat
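The naming rule can be sketched as a small helper (the function name is mine, not part of the format):

```cpp
#include <string>

// Build the compression-data file name by appending 'c' to the base
// name while keeping the extension: "00cn.dat" -> "00cnc.dat".
std::string compressionFileName(const std::string& dataFile) {
    std::string::size_type dot = dataFile.rfind('.');
    if (dot == std::string::npos)
        return dataFile + "c";
    return dataFile.substr(0, dot) + "c" + dataFile.substr(dot);
}
```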
Table Descriptor
Next follows the list of tables. Each entry in this list is 12 bytes long. The compression tables are not listed here, only the tables containing data.
List of Fields
Each table may have multiple fields per row. All fields of all tables are listed next. In the table descriptor there is an index ("Index of first Field in Field list") pointing to the first field belonging to a table. The following types of fields are used within Magellan database files:
The file ends with the list of table names and field names, each separated by a newline character ('\n').
Each row in a table has 6 starting bytes containing table and row number.
Tables
The first two tables in the database are a ZIP code table and a city name table. These two tables are typically empty.
This is followed by three tables per group, containing the names of elements in the layer files and linkage information. ([GroupNr.] is a placeholder for the numerical number of the group the three tables belong to.)
This table can contain several texts/names in a single row. When this is the case, all texts are separated by a 0x00 byte. It is also possible that a text exceeds a single row; in this case the last byte of the string is not 0x00 but some other character. This tells the interpreter that the text is continued in the next row. In all other cases the last byte in the array must always be 0x00.
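A sketch of how a reader might split one row's byte array into texts under these rules (the helper names are mine, not part of the format):

```cpp
#include <string>
#include <vector>

// Texts within a row are separated by 0x00 bytes; a row whose last byte
// is not 0x00 continues its final text in the next row.
std::vector<std::string> completeTexts(const std::string& row) {
    std::vector<std::string> texts;
    std::string current;
    for (std::string::size_type i = 0; i < row.size(); ++i) {
        if (row[i] == '\0') {
            texts.push_back(current);   // a 0x00 byte terminates a text
            current.clear();
        } else {
            current += row[i];
        }
    }
    // any trailing fragment (no terminating 0x00) belongs to the next row
    return texts;
}

// The fragment that spills into the next row ("" when the row is self-contained).
std::string carriedFragment(const std::string& row) {
    std::string::size_type last = row.rfind('\0');
    return (last == std::string::npos) ? row : row.substr(last + 1);
}
```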
This table links between the elements in layer files.
- 'NAME_REF' references a text entry in the Text Table. It is calculated as NAME_REF = ((offset of the text in the row) << 24) + (row number)
- 'CELL_NUM' is the number of the cell in the layer file.
- 'N_IN_C' is the number of the referenced element within the cell
- 'OBJ_TYPE' is, in the end, nothing other than the layer itself.
(All types listed in the table description are field types from the chapter before)
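A sketch of the packing: the shift must happen before the addition, and the row number is assumed to fit in 24 bits (the function names are mine):

```cpp
#include <cstdint>

// NAME_REF packs the text's offset within its row into the top byte
// and the row number into the low 24 bits.
std::uint32_t makeNameRef(std::uint32_t offsetInRow, std::uint32_t rowNumber) {
    return (offsetInRow << 24) + (rowNumber & 0xFFFFFF);
}

std::uint32_t nameRefOffset(std::uint32_t nameRef) { return nameRef >> 24; }
std::uint32_t nameRefRow(std::uint32_t nameRef)    { return nameRef & 0xFFFFFF; }
```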
This table can remain empty. (Purpose is unclear)
Example
Here is an example of a db00.dbd file for a database with a single group. All files in the database are compressed. The following files belong to the database:
- 00cn.dat City Name References
- 00cnc.dat Compression data for 00cn.dat
- 00z.dat Zip code table
- 00zc.dat Corresponding Compression data
- 00gr0.aux Text Table
- 00gr0c.aux Compression data
- 00gr0.ext Link table
- 00gr0c.ext Compression data
- 00gr0.clp CLP Table
- 00gr0c.clp Compression data
- db00.dbd Database dictionary | https://wiki.openstreetmap.org/wiki/OSM_Map_On_Magellan/Format/raimadb | CC-MAIN-2020-45 | refinedweb | 705 | 67.96 |
STL provides a template-based set of collection classes and methods for working on those collections. The collection classes give the developer access to fast and efficient collections, while the methods, which are known as the algorithms, provide template-based collection-manipulation functions.
The benefits of STL include
If you are already familiar with templates then skip to the next section; otherwise read this section for a brief tutorial on templates. A template can be thought of as a macro with type checking. For example, to declare a template we would do the following:
template < class T >
class Value
{
    T _value;
public:
    Value () {}                          // default constructor, needed for arrays of Value
    Value ( T value ) { _value = value; }
    T getValue ();
    void setValue ( T value );
};

template < class T >
T Value<T>::getValue ()
{
    return _value;
}

template < class T >
void Value<T>::setValue ( T value )
{
    _value = value;
}
This example declares a class Value, which stores a parameterized value, _value, of type T. After the keyword template, in the angled brackets, is a list of parameters. The list tells the template what types will be used in the template. A good analogy for the template parameter list is the parameter list for a class constructor. Like a constructor, the number of arguments for the template can be from one to many.
Methods for a template that are declared outside the class definition require the template keyword, as shown above. To use the Value class to declare an array of floats we would do:
Value<float> values[10]; // array of values of type float
This declares an array of values; the angled brackets tell us that Value will store its value as a float.
If we wanted to declare a list to work with our template based Value class we could do the following:
template < class T >
class ValueList
{
    Value<T> * _nodes;
public:
    ValueList ( int noElements ) { _nodes = new Value<T>[noElements]; }
    virtual ~ValueList () { delete [] _nodes; }
};
Here we have declared a template-based class that stores a variable sized list of values.
Each STL collection type has its own template parameters, which will be discussed later. What type of collection you use is up to your needs and tastes. From past experience, the vector and map classes are the most useful. The vector class is ideal for simple and complex collection types, while the map class is used when an associative type of collection is needed. The deque collection is excellent for use in systems that have queue-based processing, such as a message-based system.
- vector
- A collection of elements of type T.
To include the class definition use: #include <vector>
- list
- A collection of elements of type T. The collection is stored as a bi-directional linked list of elements, each containing a member of type T.
To include the class definition use:#include <list>
- deque
- A collection of varying length of elements of type T. The sequence is represented in a way that permits insertion and removal of an element at either end with a single element copy, and supports insertion and removal anywhere in the sequence, but a sequence copy follows each operation.
To include the class definition use:#include <deque>
- map
- A collection for a varying length sequence of elements of type pair<const Key, T>. The first element of each pair is the sort key and the second is its associated value. The sequence is represented in a way that permits lookup, insertion, and removal of an arbitrary element. The key can be a simple as a number or string or as complex a key class. The key class must support normal comparison operations so that the collection can be sorted or searched.
To include the class definition use:#include <map>
- set
- A collection that controls a varying length sequence of elements of type const Key. Each element serves as both a sort key and a value. The sequence is represented in a way that permits lookup, insertion, and removal of an arbitrary element.
To include the class definition use:#include <set>
- multimap
- A collection of a varying length sequence of elements of type pair<const Key, T>. The first element of each pair is the sort key and the second is its associated value. The sequence is represented in a way that permits lookup, insertion, and removal of an arbitrary element.
To include the class definition use:#include <map>
- multiset
- A collection of a varying-length sequence of elements of type const Key. Each element serves as both a sort key and a value. The sequence is represented in a way that permits lookup, insertion, and removal of an arbitrary element.
To include the class definition use:#include <set>
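Neither set nor multiset appears in the examples later in this article, so here is a minimal sketch of the difference between them (the function names are mine):

```cpp
#include <cstddef>
#include <set>
#include <string>

// The same three insertions: set collapses the duplicate key,
// multiset keeps every insertion.
std::size_t setCount() {
    std::set<std::string> s;
    s.insert("ann");
    s.insert("ann");     // duplicate - ignored by set
    s.insert("bob");
    return s.size();
}

std::size_t multisetCount() {
    std::multiset<std::string> ms;
    ms.insert("ann");
    ms.insert("ann");    // duplicate - kept by multiset
    ms.insert("bob");
    return ms.size();
}
```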
STL strings support both ascii and unicode character strings.
- string
- A string is a collection of ascii characters that supports both insertion and removal.
To include the string class definitions use:#include <string>
- wstring
- A wstring is a collection of wide characters that supports both insertion and removal. In MFC the string class is CString, which provides a Format and other methods to manipulate the string. CString has the advantage of providing methods such as Format, TrimLeft, TrimRight and LoadString. It is easy to provide a string-based class that contains these methods.
To include the wstring class definitions use:#include <xstring>
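As a sketch of how easy such helpers are to provide, here is a TrimLeft in the spirit of CString, written against std::string (the function name is mine):

```cpp
#include <string>

// Strip leading whitespace, as CString::TrimLeft does.
std::string trimLeft(const std::string& s) {
    std::string::size_type i = s.find_first_not_of(" \t\r\n");
    return (i == std::string::npos) ? std::string() : s.substr(i);
}
```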
Streams provide the developer with classes that can output various types of stream elements into a container.
- stringstream
- A string stream that supports insertion of elements; elements are inserted via the overloaded operator <<. The method str() returns the underlying string, and c_str() on that string can be used to get a constant pointer to the string buffer.
- wstringstream
- A wstring stream that supports insertion of elements; elements are inserted via the overloaded operator <<. The method str() returns the underlying wide string, and c_str() on that string can be used to get a constant pointer to the string buffer.
To use a string stream we would do the following:

stringstream strStr;
for ( long i=0; i<10; i++ )
    strStr << "Element " << i << endl;
To include the string stream class definitions use:

#include <sstream>
- empty
- Determines if the collection is empty
- size
- Determines the number of elements in the collection
- begin
- Returns a forward iterator pointing to the start of the collection. It is commonly used to iterate through a collection.
- end
- Returns a forward iterator pointing to one past the end of the collection. It is commonly used to test if an iterator is valid or in looping over a collection.
- rbegin
- Returns a backward iterator pointing to the end of the collection. It is commonly used to iterate backward through a collection.
- rend
- Returns a backward iterator pointing to one before the start of the collection. It is commonly used to test if an iterator is valid or in looping over a collection.
- clear
- Erases all elements in a collection. If your collection contains pointers the elements must be deleted manually.
- erase
- Erase an element or a range of elements from a collection. To erase, simply call erase with an iterator pointing to the element, or with a pair of iterators giving the range of elements to erase. Also, because vector iterators are random access, you can erase at an array index with erase(begin() + index).
STL also includes classes for printing to the standard output streams. Like standard C++ the classes are cout and wcout. To use them in a console application include the file iostream. As an example:

#include <iostream>

using namespace std;

int main ()
{
    char ch;
    cin >> ch;
    cout << "This is the output terminal for STL" << endl;
    return 0;
}
We want to look briefly at adding/removing elements from the vector and deque collections. These collections are represented as arrays; to add an element we use the push methods, with back or front depending on whether we are adding at the front (start) or the back (end) of the array.
The general methods are:

- push_back – adds an element at the end (vector and deque)
- push_front – adds an element at the front (deque only)
- pop_back – removes the last element
- pop_front – removes the first element (deque only)
As an example, suppose we want to build a message processing system based on a message class:
class Msg
{
    int _type;
    int _priority;
    string _message;
public:
    Msg ( int type, int priority, string & msg )
    {
        _type = type;
        _priority = priority;
        _message = msg;
    }
    Msg ( int type, int priority, char * msg )
    {
        _type = type;
        _priority = priority;
        _message = msg;
    }
    int getType () { return _type; }
    int getPriority () { return _priority; }
    string & getMsg () { return _message; }
};
To store the messages we would need a first-in, first-out collection, such as deque:

typedef deque<Msg> MsgList;
To send a message we might do the following:
MsgList msgList;
Msg msg( 0, 0, "My Message" );
msgList.push_back( msg );
And to process messages we would do the following:
void process_msgs ()
{
    bool done = false;
    while ( !done )
    {
        // if no messages stop
        if ( msgList.size() == 0 )
        {
            done = true;
            continue;
        }

        // get msg and process
        Msg & msg = msgList.front();
        switch ( msg.getType() )
        {
            // process messages
        }

        // remove msg from queue
        msgList.pop_front();
    }
}
With just a few lines of code we have created a general messaging system. If we wanted an entire system we could create a simple COM server that exposes a mail interface and stores the messages using a message list.
For vector, map, deque, string and wstring collections, elements are normally added using:
operator []
Access an element at a position; for map, operator [] also inserts the element if the key is not already present.
A simple example of using this operator would be to declare a list using map:
typedef map<int, string> StringList;

StringList strings;
for ( long i=0; i<10; i++ )
{
    stringstream strStr;
    strStr << "String " << i;
    strings[i] = strStr.str();
}

for ( long i=0; i<10; i++ )
{
    string str = strings[i];
    cout << str.c_str() << endl;
}
We have created a map, whose key is an integer, and that stores strings.
Iterators support the access of elements in a collection. They are used throughout the STL to access and list elements in a container. The iterators for a particular collection are defined in the collection class definition. Below we list three types of iterators, iterator, reverse_iterator, and random access. Random access iterators are simply iterators that can go both forward and backward through a collection with any step value. For example using vector we could do the following:
vector<int> myVec;
vector<int>::iterator first, last;

for ( long i=0; i<10; i++ )
    myVec.push_back(i);

first = myVec.begin();
last = myVec.begin() + 5;
if ( last >= myVec.end() )
    return;
myVec.erase( first, last );
This code will erase the first five elements of the vector. Note that we are setting the last iterator to one past the last element of interest, and we test this iterator against the return value of end (which gives an iterator one past the last valid item in a collection). Always remember, when using STL, to mark the end of an operation with an iterator that points to the next element after the last valid element in the operation.
The three types of iterators are:
- iterator (forward iterator through collection)
- Allows a collection to be traversed in the forward direction. To use the iterator:

for ( iterator element = begin(); element != end(); element++ )
    t = (*element);
Forward iterators support the following operations: a++, ++a, *a, a = b, a == b, a != b
- reverse_iterator (reverse iterator through collection)
- Allows a collection to be traversed in the reverse direction. As an example:

for ( reverse_iterator element = rbegin(); element != rend(); element++ )
    t = (*element);
All of the collections support forward iterators. Reverse iterators support the following operations: a++, ++a, *a, a = b, a == b, a != b
- random access ( used by vector declared as forward and reverse_iterator)
- Allows a collection to be traversed in either direction, and with any step value. An example would be:

for ( iterator element = begin(); element < end(); element += 2 )
    t = (*element);

The vector collection supports random access iterators. Iterators are the most used type of access to the collections of STL, and they are also used to remove elements from collections. Look at the following:

iterator element = begin();
erase(element);
This will set an iterator to the first element of the collection and then remove it from the collection. If we were using a vector we could do the following:

iterator firstElement = begin();
iterator lastElement = begin() + 5;
erase( firstElement, lastElement );
to remove the first five elements of a collection. Random access iterators support the following: a++, ++a, a--, --a, a += n, a -= n, a - n, a + n, *a, a[n], a = b, a == b, a != b, a < b, a <= b, a > b, a >= b
It is important to remember that when you get an iterator to a collection, you must not modify the collection and then expect to use the iterator. Once a collection has been modified, an iterator in most cases will become invalid.
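One common pitfall this rule covers is erasing while iterating; the sketch below shows the safe pattern of continuing from the iterator that erase returns (the function is mine, not part of STL):

```cpp
#include <cstddef>
#include <list>

// Remove the even numbers from the list 1..n and return how many
// elements survive. erase() invalidates the erased iterator, so we
// always continue from the iterator it returns.
std::size_t oddCountAfterErase(int n) {
    std::list<int> xs;
    for (int i = 1; i <= n; ++i)
        xs.push_back(i);

    for (std::list<int>::iterator it = xs.begin(); it != xs.end(); ) {
        if (*it % 2 == 0)
            it = xs.erase(it);   // erase returns the next valid iterator
        else
            ++it;
    }
    return xs.size();
}
```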
Each collection uses its template parameters to determine what elements the collection will store. Shown below is a list of the collections we are discussing, each with its template parameters. In the parameters, T denotes the element type to store in the collection, A denotes the allocator (which allocates elements), Key denotes the key for the element, and Pred denotes how the collection will be sorted.
template < class T, class A = allocator<T> >
class vector

template < class T, class A = allocator<T> >
class list

template < class T, class A = allocator<T> >
class deque

template <class Key, class T, class Pred = less<Key>, class A = allocator<T> >
class map

template <class Key, class Pred = less<Key>, class A = allocator<Key> >
class set

template <class Key, class T, class Pred = less<Key>, class A = allocator<T> >
class multimap

template <class Key, class Pred = less<Key>, class A = allocator<Key> >
class multiset
This list looks somewhat daunting but it provides a quick reference for the collections. In most cases you will use the default arguments and your only concern will be what you are storing and how it is stored. T refers to what you will store, and for collections that support a key; Key shows how the elements will be associated.
From previous experience the vector, map and deque classes are the most often used so we can use them as an example for declaring a collection:
Using typedef to declare the collection:
typedef vector<int> myVector;
typedef map< string, int > myMap;
typedef deque< string > myQue;
The first declaration declares a vector of integers, the second declares a collection of integers, which have a key of type string, and the last declares a queue (or stack) of strings.
Another way to declare a collection is to derive a collection from an STL collection as in the following:
class myVector : public vector<int> {};
Either method is useful; it's a matter of preference. Another important consideration is declaring the iterators supported by the collection as separate types. If we use the above example we would declare:
typedef myVector::iterator vectorIterator;
typedef myVector::reverse_iterator revVectorIterator;
This gives the user of the collection direct access to the iterator without being forced to use the following syntax:
myVector coll;
for ( myVector::iterator element = coll.begin(); element < coll.end(); element++ )
The scope resolution syntax can be cumbersome.
Up to this point we have discussed how to use STL at a bare minimum; now we need to delve into the most important part of the collections: how do we manipulate a collection? For example, if we had a list of strings, what would we need to sort the list in alphabetical order, or to search a collection for a set of elements that match a given criterion? This is where the STL algorithms are used. In your Visual Studio installation, under the include directory, you will find an include file, algorithm, in which a set of template-based functions is declared. These functions can be used to manipulate STL collections, and they can be categorized as follows: sequence, sorting and numeric. Using these categories, we can list all of the methods of the algorithms:
- Sequence
-
count, count_if, find, find_if, adjacent_find, for_each, mismatch, equal, search, copy, copy_backward, swap, iter_swap, swap_ranges, fill, fill_n, generate, generate_n, replace, replace_if, transform, remove, remove_if, remove_copy, remove_copy_if, unique, unique_copy, reverse, reverse_copy, rotate, rotate_copy, random_shuffle, partition, stable_partition
- Sorting
-
sort, stable_sort, partial_sort, partial_sort_copy, nth_element, binary_search, lower_bound, upper_bound, equal_range, merge, inplace_merge, includes, set_union, set_intersection, set_difference, set_symmetric_difference, make_heap, push_heap, pop_heap, sort_heap, min, max, min_element, max_element, lexicographical_compare, next_permutation, prev_permutation
- Numeric
-
accumulate, inner_product, partial_sum, adjacent_difference
Since this is an extensive list, we will only examine a few of the methods in the algorithms. It is very important to note that the methods here are templated, so we are not required to use the STL containers to use them. For example, we could have a list of ints, and to sort this list we could do:
#include <vector>
#include <algorithm>
#include <iterator>
#include <iostream>

using namespace std;

vector<int> myVec;
ostream_iterator<int> out(cout, " ");

// generate array
for ( long i=0; i<10; i++ )
    myVec.push_back(i);

// shuffle the array
random_shuffle( myVec.begin(), myVec.end() );
copy( myVec.begin(), myVec.end(), out );

// sort the array in ascending order
sort( myVec.begin(), myVec.end() );
copy( myVec.begin(), myVec.end(), out );
This example shows how to declare the vector and then sort it, using STL containers. We could do the same without using containers:
ostream_iterator<int> out(cout, " ");

// generate array (note: one extra element; end in STL is one element past the last valid)
int myVec[11];
for ( long i=0; i<10; i++ )
    myVec[i] = i;

int * begin = &myVec[0];
int * end = &myVec[10];

// shuffle the array
random_shuffle( begin, end );
copy( begin, end, out );

// sort the array in ascending order
sort( begin, end );
copy( begin, end, out );
How you use the algorithms is largely up to you, but they provide a rich set of methods for manipulating containers.
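The Numeric category received no example above; here is a minimal sketch of accumulate, which sums a range starting from the value given as its third argument (the function name is mine):

```cpp
#include <numeric>
#include <vector>

// Sum the integers 1..n with std::accumulate.
int sumUpTo(int n) {
    std::vector<int> v;
    for (int i = 1; i <= n; ++i)
        v.push_back(i);
    return std::accumulate(v.begin(), v.end(), 0);
}
```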
STL is not thread-protected, so you must provide locks on your collections if they will be used in a multithreaded environment. The standard locking mechanisms of mutexes, semaphores and critical sections can be used. One simple mechanism for providing locking is to declare a lock class in which the constructor creates the lock and the destructor destroys it, and then provide lock() and unlock() methods. For example:
class Lock
{
public:
    HANDLE _hMutex;   // used to lock/unlock object

    Lock() : _hMutex(NULL)
    {
        _hMutex = ::CreateMutex( NULL, false, NULL );
    }
    virtual ~Lock()
    {
        ::CloseHandle( _hMutex );
    }
    bool lock ()
    {
        if ( _hMutex == NULL )
            return false;
        WaitForSingleObject( _hMutex, INFINITE );
        return true;
    }
    void unlock ()
    {
        ReleaseMutex( _hMutex );
    }
};
Then declare a class that is derived from one of the STL collections, and in the class override the access methods to the collection that might cause an insertion or deletion of an element. For example a general vector class would be:
template <class T>
class LockVector : public vector<T>, public Lock
{
public:
    LockVector () : vector<T>(), Lock() {}
    virtual ~LockVector () {}

    void insert ( T & obj )
    {
        if ( !lock() )
            return;
        vector<T>::push_back( obj );
        unlock();
    }
};
Hopefully I've given you a good tutorial on how to use STL. If not, then please try some of the web sites listed below, or drop by your local bookstore or amazon.com and purchase one of the many books on the subject. I believe STL can provide many benefits, and I hope you will too.
STL for C++ Programmers, by Leen Ammeraal.
John Wiley, 1996. ISBN 0-471-97181-2.
Designing Components with the C++ STL, by Ulrich Breymann.
Addison Wesley Longman, 1998. ISBN 0-201-17816-8.
Integration of REST API into site urls, and using piston resources to serve standard views
The way I would like to use piston differs from the recommendations in one big way: I don't place my API under /api/, and instead, I scatter the API across my whole site to correspond with the standard HTML views that my site produces. In such a way:
/people # Standard HTML list of people, with fancy CSS, JS, etc. /people.xml # Structured XML list of people /people.json # Structured JSON list of people
My logic behind this is that a standard views, templates and everything, is not really anything more than an Emitter that is specialised to be lovely and human-readable.
My ideal way of accomplishing this would be to format my urls.py like this:
{{{
!python
url(u'^people(\.(?P<emitter_format>.+))?$', PeopleResource),
}}}
Thus if we visit /people.xml, we get an XML list; if we visit /people.json we get a JSON list, etc.
We can also chop off the extension and just visit /people. Unfortunately, this again produces a JSON list. This is because in resource.py, in determine_emitter, we have:
{{{
!python
if not em:
    em = request.GET.get('format', 'json')
}}}
Let's comment that out, so the function signals that no known emitter was requested by returning None.
Unfortunately, Resource.__call__ expects to be given a valid emitter, and if it isn't, it attempts to call None(). Instead, what I would like it to do if it finds no emitter is to use a function defined by me, and what this function would essentially be is a 'view': a standard python function that returns an HttpResponse. So let's let the Resource know what view we would like to use:
{{{
!python
class Resource(object):
    ...
    def __init__(self, handler, authentication=None, view=None):
        ...
        self.view = view
}}}
And finally teach it to use the view as an alternative to the emitter, so in __call__:
{{{
!python
try:
    emitter, ct = Emitter.get(em_format)
except ValueError:
    # Either the format is unrecognised, or not given
    if self.view:
        return self.view(request, result, typemapper, handler, fields, anonymous)
    else:
        raise NotImplementedError("No explicit emitter format was given, and no default view exists.")
else:
    # Use the requested emitter
    srl = emitter(result, typemapper, handler, fields, anonymous)
    try:
        """
        Decide whether or not we want a generator here, or we just want
        to buffer up the entire result before sending it to the client.
        Won't matter for smaller datasets, but larger will have an impact.
        """
        if self.stream:
            stream = srl.stream_render(request)
        else:
            stream = srl.render(request)

        if not isinstance(stream, HttpResponse):
            resp = HttpResponse(stream, mimetype=ct)
        else:
            resp = stream

        resp.streaming = self.stream
        return resp
    except HttpStatusCode, e:
        return e.response
}}}
So now, to implement my desired way of using Piston, I would first define a view (the only difference from normal views is that it takes the result (queryset) directly): {{{
!python
# views.py
def people_view(request, result, *args, **kwargs):
    return render_to_response(
        'people.html',
        { 'people': result }
    )
}}}
And let the Resource know about my view: {{{
!python
PeopleResource = Resource(PeopleHandler, view=people_view)
}}}
Now, personally I'm pretty happy with the url schema I can quickly define with that. What I'm wondering is - does this contravene any established principles? Is it unusual or confusing to provide my JSON list of people just by appending ".json" to the standard url?
It's an unusual way to use piston, but I see where you're going. There's certainly a use for it, for people like you.
You know the drill; fork, test, pull request. Also, if I can get you to write a test for this use case, I'll be very happy.
Using .json for the emitter is not unusual; emitter_format is there for a reason, and you can also override it like so:
Perhaps a custom emitter that calls the view would be possible? I dunno. Your way is cleaner. Just a thought that wouldn't involve any code change.
I'll fork in a bit. But first I'm writing up another issue that kinda relates to this and I think comes before it logically: handlers for collections and for members. I think this is my final biggish quibble :).
Bump!
New EntityInstance via SessionBean — nimo stephan, Aug 27, 2008 2:19 PM
Hello,
I have an EJB project.
In WEB-INF/classes, I have a POJO called Pojo.java.
In my EJB, I have a session bean called SessionBean and an entity bean called Entity.
Now, I want to create a new instance of my Entity within my Pojo. Everything works with injection via @In:
public class Pojo {
// to retrieve the methods of my EJB-Bean
@In
SessionBean sessionBean;
@In
Entity entity;
...
}
Everything works, but I have doubts about it:
Should I create a new instance of my Entity via the interface of my SessionBean, or can I create the instance via the Seam-managed entity? Which is better?
1. Re: New EntityInstance via SessionBeannimo stephan Aug 27, 2008 2:22 PM (in response to nimo stephan)
When I reference my SessionBean via the @Local or the @Remote annotation, I do not know whether the injection of my EntityBean (which is definitely not a session bean) works.
Any suggestions?
2. Re: New EntityInstance via SessionBeanJoshua Jackson Aug 27, 2008 6:37 PM (in response to nimo stephan)
What do you mean by creating a new instance? Do you mean instantiating it with new Entity()?
Of course it is better that Seam provides your entity by injecting it with @In. :-)
3. Re: New EntityInstance via SessionBeannimo stephan Aug 28, 2008 8:36 AM (in response to nimo stephan)
Yes, I mean instantiating.
What about distributed systems and EJB? In EJB I have (one or more) interfaces between my session beans and my view beans. My view beans have access to the methods of my session beans via the interface and via @EJB injection. So, when I use @In, I do not need any interfaces? I do not have an interface between my entity bean and my view bean! I can, however, inject a new instance of my entity bean into my view bean via @In, but this is not the idea of what EJB stands for: method invocation via interfaces.
However, I can instantiate my entity bean in my session bean and have it instantiated indirectly by injecting my session bean into my view bean via @EJB. Is this the clean way?
What do you think about that? | https://developer.jboss.org/thread/183671 | CC-MAIN-2018-39 | refinedweb | 361 | 63.09 |
Simple Introduction to Generics in C#
Generics in C# give you the ability to write type-independent code rather than allowing a class or method to work with only a specific type.
Let’s consider a scenario where we have to consume a REST service and Deserialize the response to a defined object. The code would be something similar to below.
public static MovieData GetMovieData(string url)
{
    // fetching shown with WebClient for illustration; the original post elides this step
    string responseData = new WebClient().DownloadString(url);
    return JsonConvert.DeserializeObject<MovieData>(responseData);
}
The above code fulfills the purpose, but assume we need to consume another service method that is going to return "CastData" instead of "MovieData". So are we going to write another "GetCastData" method? Of course we could write another method, but deep down in your heart you know that there should be a better way to do this.
That’s where generics comes into play. Generics is a way of telling your class or method,
“Yo Bro, You don’t worry about the Type you are going to deal with. When I call you, I’ll let you know that information. Cool?”.
Notice that the above "GetMovieData" method deserializes the object as "MovieData" and returns "MovieData". We need to change those two places to be type independent using generics.
This is how we can achieve this in C#.
public class ServiceConsumer<T>
{
    public static T GetData(string url)
    {
        // fetching shown with WebClient for illustration; the original post elides this step
        string responseData = new WebClient().DownloadString(url);
        return JsonConvert.DeserializeObject<T>(responseData);
    }
}
The “T” denotes the type. So this class will deal with the type that we specify when we create the object.
MovieData mData = ServiceConsumer<MovieData>.GetData("Movie URL as String"); //T => MovieData CastData cData = ServiceConsumer<CastData>.GetData("Cast URL as String"); //T => CastData
The above two lines of code specify the objects that the class is going to deal with at the point of creation. So when GetData is called it's going to deserialize the data to the specified object and return an object of that type. Freaking Awesome Right?
by Jim Bray
(editor's note: this article refers to software aimed at the
Canadian market)
It's tax time again.
Preparing your tax return is an annual ordeal that forces you to
calculate how many kilos of flesh the various levels of government are going to
extract from you this time. And with more than 80 changes to the regulations
since last year even the initiated can expect some new wrinkles.
The good news is that computer software can ride to your rescue,
turning hours of poring over arcane sheets and tables into a relatively
painless experience.
The two main contenders for your after-tax dollars are Intuit's
QuickTax family (starting at $24.95) and Taxamatic's TAXWIZ collection
(starting at $9.95). Both offer a range of "less taxing" products including
online versions that let you fill out and file your return using a Web Browser
and the Internet.
Both products let you complete your return by filling in the
virtual forms directly but, to make life easier for people who'd rather be torn
apart by wild dogs than do their own taxes, both also offer fairly easy and
reasonably "bozo proof" step-by-step methods that ease much of the pain.
TAXWIZ Deluxe (Windows CD or downloadable from),
starts with a splash screen offering you choices that range from a Browser-like
info source for first time users to the "interview" preparation method for
beginners and other options for the more tax-savvy user. It also claims to
import last year's QuickTax data.
The "newcomer" route begins with a little background advice and an
admonishment to collect all your info together (T4's, receipts, RRSP
contributions etc.) before sallying forth into the realm of the H and R bloc.
You can screw up your courage with a rather dry video outlining all this stuff,
or click "Taxes" and leap right into the fray.
Entering the data is fill-in-the-blanks easy and help is a click
away. The Help system would be easier if every time the form says "check the
guide" you could click there and link to the appropriate Guide section, but
such isn't the case.
On the other hand, TAXWIZ Deluxe highlights many topics on the
forms ("rental income" or "moving expenses," for instance) and if you click on
them you're whisked to the appropriate help area. The program also uses those
little yellow "tool tips" balloons periodically to offer advice.
And there's a "Whatif?" planner that uses sliding bars to estimate
how the tax damage would change if you made more money, dumped more into
RRSP's, etc. The booklet "Tax Tips for Canadians for Dummies" is included in
the box, too.
The company also offers the $9.95 "TAXWIZ To Go" prepaid identity
card available at such retailers as Esso, London Drugs and Future Shop. It's a
way to complete up to two returns online without having to worry about your
credit card's security.
QuickTax Deluxe ($34.95, Windows CD ROM) is littered with
commercials for related features and products (including Live - but not free -
Tax Advice: "Now you can speak to an accounting professional any time you have
a tax question..."), but they're easy to click through quickly to get to the
program's real meat.
The interface isn't quite as slick as TAXWIZ's, with its Help
links, but it does include QuickTax' usual big red circles and arrows to point
you to the right section of the screen, and many steps include their own video
window offering dry advice for a particular topic.
QuickTax lets you import data from Quicken or QuickBooks, which is
handy if you use those packages, and it offers some interesting advice along
the way. For instance, it suggests you might benefit more from making a lump
sum payment on your mortgage than from RRSP contributions; there's also an RRSP
Wizard that helps you calculate your optimum contribution if you go that
route.
The Deluxe version also comes with extras aimed at helping you
ensure the tax person's reach doesn't exceed his grab, including
RESP/retirement/loan planners and a capital gains analyzer.
QuickTax Deluxe also throws in "value added" optional features
like "QuickRefund," which gets your money back to you (for a percentage of the
refund) quicker'n you can say "Billion Dollar Boondoggle".
And there's an Internet-based version called QuickTax Web, where a
single return can be prepared and filed for $19.95 (couples can do their
returns for $24.95). There's also a $89.95 QuickTax version aimed at
incorporated businesses.
On the whole, the CD-based QuickTax' new interface is inferior to
its old one; it's still easy to use, but TAXWIZ has managed to close the gap
between the two programs. Either one of these products will do the job just
fine, however.
The CD-ROM-based TAXWIZ is less expensive than QuickTax, and you
can do as many returns as necessary (QuickTax only allows five returns). On the
other hand, QuickTax comes with extra extras and offers a Mac version.
Either package can let you do a straightforward tax return in
little more than half an hour if you have your financial ducks collected into a
row before starting. More complicated returns require more work, and more
conscious thought, but the software does a good job of walking you through the
ins and outs.
Both programs let you print out the return or Netfile it directly
to that black hole in Ottawa. | http://www.technofile.com/articles/tax_software_2002.html | crawl-002 | refinedweb | 931 | 58.01 |
Abstract base class for objects that can be drawn to a render target.
#include <Drawable.hpp>
sf::Drawable is a very simple base class that allows objects of derived classes to be drawn to a sf::RenderTarget.
All you have to do in your derived class is to override the draw virtual function.
Note that inheriting from sf::Drawable is not mandatory, but it allows this nice syntax "window.draw(object)" rather than "object.draw(window)", which is more consistent with other SFML classes.
Example:
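A minimal sketch of such a derived class, assuming the drawable wraps a single sprite (the class and member names are illustrative):

```cpp
class MyEntity : public sf::Drawable
{
private:
    virtual void draw(sf::RenderTarget& target, sf::RenderStates states) const
    {
        // draw the entity's internals here, forwarding the render states
        target.draw(m_sprite, states);
    }

    sf::Sprite m_sprite;
};

// usage, with the "nice syntax" mentioned above:
// MyEntity entity;
// window.draw(entity);
```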
Definition at line 44 of file Drawable.hpp.
Virtual destructor.
Definition at line 52 of file Drawable.hpp.
Draw the object to a render target.
This is a pure virtual function that has to be implemented by the derived class to define how the drawable should be drawn. | https://www.sfml-dev.org/documentation/2.3/classsf_1_1Drawable.php | CC-MAIN-2019-39 | refinedweb | 147 | 67.25 |
I've created a notification bar on a form and basically have it exactly how I need it... except that when I add a fade-in feature, the taskbar icon doesn't show up. The taskbar icon is necessary to my application, because it flashes orange when a notification is shown. I've checked the obvious, such as my ShowInTaskbar property.
Imports System.Data.SqlClient
Imports System.Runtime.InteropServices

Public Class Form10

    ' hwnd is declared As IntPtr so Me.Handle can be passed directly
    <DllImport("user32.dll", EntryPoint:="FlashWindow")>
    Public Shared Function FlashWindow(ByVal hwnd As IntPtr, ByVal bInvert As Integer) As Integer
    End Function

    Private Sub Form10_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        ' sets form to bottom right of page
        Me.Location = New Point(Screen.PrimaryScreen.WorkingArea.Width - 381, Screen.PrimaryScreen.WorkingArea.Height - 131)
        Me.Opacity = 0.1
        With Timer1
            .Interval = 300
            .Enabled = True
            .Start()
        End With
        Timer2.Start()
    End Sub

    Private Sub Timer1_Tick(sender As Object, e As EventArgs) Handles Timer1.Tick
        FlashWindow(Me.Handle, 1)
    End Sub

    Private Sub Timer2_Tick(sender As Object, e As EventArgs) Handles Timer2.Tick
        If Me.Opacity < 1.0 Then
            Me.Opacity = Me.Opacity + 0.1
        End If
    End Sub

End Class
I've fixed it by stopping the timer and adding this code to the Private sub Timer2_Tick
If Me.Opacity = 1 Then
    Timer2.Stop()
End If
Also, taking off the TopMost property in the designer and adding Me.TopMost = True to the Load sub fully corrected my issue.
For the purposes of this exercise, I shall be borrowing heavily from Imar Spaanjaar's N-Layer design article series presented here: Building Layered Web Applications with Microsoft ASP.NET 2.0. I would strongly recommend that you read the series of articles, or at least the first two to familiarise yourself with the basics of an N-Layer approach to ASP.NET application design. 3 key layers from that application: Business Objects, Business Logic and Data Access will be included in the following MVC application with little or no change at all, and the article series provides excellent detail on how the layers are constructed. This article will look at what role they play, but will not delve into their code in any real detail.
First we take a look at the application as presented by Imar. It's a simple one that typifies CRUD operations. It allows the user to manage Contacts, along with their addresses, telephone numbers and email addresses. It features the ability to Create, Read, Update and Delete any of the entities.
The Entities associated with the application are ContactPersons, PhoneNumbers, Addresses and EmailAddresses. They all live in the Business Objects (BO) layer of the application. Each of these classes contain public properties with getter and setter methods in the original sample. They do not feature any behaviour, which is housed within the Business Logic Layer (BLL) in entitynameManager classes. There's a one-to-one mapping between an Entity and its associated Manager class. Each of the manager classes contain methods that retrieve an instance of an entity, a collection of entities, to save an entity (update or add) and to delete an entity. This area of the application can also be used to apply validation, security etc, but that is not included in the sample so that it isn't too cluttered. If you would like to see validation etc in action within the BLL, have a look at Imar's 6 part series that features updates to the application including moving it to version 3.5 of the ASP.NET framework.
The final layer is the Data Access Layer (DAL). This layer also features classes that have a one to one mapping with the Manager classes in the BLL. The methods within the BLL actually call related methods in the DAL. The DAL methods are the only ones in the application that know what mechanism is used to persist (store) entities. In this case, it's a SQL Server Express database, so this set of classes makes use of ADO.NET and the SqlClient classes. The idea behind this approach is that if you need to swap your persistence mechanism (to use XML or Oracle, for example, or to call Web Services, or even to use Linq To SQL or another ORM), you only need to replace the DAL. So long as the new DAL exposes methods containing the same signature as those called from the BLL, everything should continue to work without having to change other areas of the application. Ensuring that any new DAL adheres to the existing method signatures can be achieved through the use of Interfaces, but that would be the topic of a future article... perhaps.
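As a sketch of that idea, a replaceable DAL could be pinned down with a contract like the following. This interface is not part of Imar's sample; the member signatures are inferred from the Manager methods described above:

```csharp
// Hypothetical contract a swappable DAL class could implement; the BLL
// would then depend on this interface rather than on a concrete
// SqlClient-based class, so a new persistence mechanism only has to
// supply another implementation.
public interface IContactPersonDB
{
    ContactPerson GetItem(int id);
    ContactPersonList GetList();
    int Save(ContactPerson contactPerson);
    void Delete(int id);
}
```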
MVC Architecture
There are plenty of good articles that discuss the architecture of MVC applications, so this one will not go into serious detail. For a deeper look, I recommend visiting the Learn section of Microsoft's official ASP.NET MVC site. Briefly, though, the M is for Model, which is where the BO, BLL and DAL will live. The V is for View, which is essentially any UI-related development - or what the user sees, and the C is for Controller. Controllers co-ordinate the applications responses to requests made by users. If a user clicks a button that points to a specific URL, that request is mapped to a Controller Action (class method) that will be responsible for handle any logic required to service the request, and returning a response - typically a new View, or an update to the existing View.
Having created a new MVC application within Visual Studio and removed the default Views, and Controllers, the first thing I did was to copy the BO, BLL and DAL files from Imar's application to the Model area of my new app. I also copied the Sql Server database file from App_Data in the original site, into the same place within the MVC application, and did the same with the Style.css file, which was copied into the MVC Content folder.
I made a few other modifications. The database connection string needed to be added to the MVC application's Web.Config file. In addition, I made couple of changes to the Namespaces within the copied class files, and updated some of the DAL code to C# 3.0, although neither of these amendments are strictly necessary. Once I had done that, I hit Ctrl + Shift + F5 to ensure that the project compiled. I will not need to revisit these files at all from now on, except for some DAL methods and their associated BLL methods, as will be covered later.
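The web.config entry looks something like the following sketch; the connection string name, database file name, and exact options are illustrative, so use whatever the original sample defines:

```xml
<connectionStrings>
  <add name="ConnectionString"
       connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\ContactManager.mdf;Integrated Security=True;User Instance=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```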
Controllers
I added four controllers (having removed the default ones provided by Visual Studio) - one for each Entity. These are ContactController, PhoneController, AddressController and EmailController.
Each of the controllers will be responsible for coordinating 4 actions - List, Add, Edit and Delete. So the first thing I will do is register the Route for these actions within Global.asax:
public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Contact", action = "List", id = "" }
    );
}
The default view for the application will be listing the contacts. Contact data is obtained by the existing GetList() method within the ContactPersonManager class in the BLL, so the code for the List() action is as follows:
public ActionResult List()
{
    var model = ContactPersonManager.GetList();
    return View(model);
}
Strongly Typed Views
I'm going to use strongly typed Views throughout the application because they provide for intellisense in the Views and don't rely on ViewData indexing by string, which is prone to errors. To tidy things up as I go on, I have added some namespaces (everything after the four default entries) to the web.config <namespaces> section. These are the ones I used with Imar's code:
<namespaces>
  <add namespace="System.Web.Mvc"/>
  <add namespace="System.Web.Mvc.Ajax"/>
  <add namespace="System.Web.Mvc.Html"/>
  <add namespace="System.Web.Routing"/>
  <add namespace="System.Linq"/>
  <add namespace="System.Collections.Generic"/>
  <add namespace="ContactManagerMVC.Views.ViewModels"/>
  <add namespace="ContactManagerMVC.Models.BusinessObject"/>
  <add namespace="ContactManagerMVC.Models.BusinessObject.Collections"/>
  <add namespace="ContactManagerMVC.Models.BusinessLogic"/>
  <add namespace="ContactManagerMVC.Models.DataAccess"/>
  <add namespace="ContactManagerMVC.Models.Enums"/>
</namespaces>
This means they are available to all of the application and I don't need to fully qualify types within the Views. The type returned by the GetList() method is a ContactPersonList, which is set up in the Collections folder within the BO layer. It is simply a collection of ContactPerson objects. The Page declaration at the top of the List view is as follows:
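For illustration, such a collection can be as simple as a class deriving from a generic list; the exact base class used in the sample's Collections folder may differ:

```csharp
// a strongly typed collection of ContactPerson objects
public class ContactPersonList : List<ContactPerson>
{
}
```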
<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<ContactPersonList>" %>
You will also notice that I have included a MasterPage. Within the MasterPage, I have referenced the css file from Imar's sample code. The markup that manages the display of ContactPerson objects is as follows:
<table class="table">
    <tr>
        <th scope="col">Id</th>
        <th scope="col">Full Name</th>
        <th scope="col">Date of Birth</th>
        <th scope="col">Type</th>
        <th scope="col"> </th>
        <th scope="col"> </th>
        <th scope="col"> </th>
        <th scope="col"> </th>
        <th scope="col"> </th>
    </tr>
    <% if (Model != null)
       {
           foreach (var person in Model)
           {%>
    <tr>
        <td><%= person.Id %></td>
        <td><%= person.FullName %></td>
        <td><%= person.DateOfBirth.ToString("d") %></td>
        <td><%= person.Type %></td>
        <td title="address/list" class="link">Addresses</td>
        <td title="email/list" class="link">Email</td>
        <td title="phone/list" class="link">Phone Numbers</td>
        <td title="contact/edit" class="link">Edit</td>
        <td title="contact/delete" class="link">Delete</td>
    </tr>
    <% }
       }
       else
       {%>
    <tr>
        <td colspan="9">No Contacts Yet</td>
    </tr>
    <% }%>
</table>
You can immediately see the benefit of strong typing in the view. The Model is of type ContactPersonList, so each item is a ContactPerson and their properties are available without having to cast the Model to ContactPersonList. Incorrect casting is only detected at runtime, which can be inconvenient to say the least.
I cheated a little with the html. I could have selected "List" from the View Content option when generating the View, which would have provided some default templated html. I didn't. I wanted something that would work with Imar's css more easily, so I ran his application on my machine and once it was in the browser, Viewed Source and copied the rendered html from that. Imar uses GridViews in his web forms application, so a bit of css etc is automatically embedded within the source when they are rendered. I cleaned that up and moved the table styling to a css class called table over to the css file. I also added some additional styles for the <th> and <td> elements. You can see them in the accompanying download.
I also added some title attributes to some of the table cells. These are the ones that contain the links to other actions within the original application. I decided that I did not want the whole page to post back when addresses or phone numbers are viewed, or when the user wants to edit or delete existing records. I wanted the site to be Ajax-enabled. These title attributes play a key role in the Ajax (provided by jQuery) that feature in my version. The "link" css class makes text behave like a hyperlink, in that it is underlined and the cursor turns to a pointer (hand) onmouseover.
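The link class needs nothing more than a couple of rules along these lines (a sketch; the exact declarations in the accompanying download may differ):

```css
.link
{
    text-decoration: underline;
    cursor: pointer;
}
```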
jQuery AJAX
Before looking at the main bulk of the script that manages Ajax functionality, here are three more lines of html that are added to the bottom of the List view:
<input type="button" id="addContact" name="addContact" value="Add Contact" />
<div id="details"></div>
<div id="dialog" title="Confirmation Required">Are you sure about this?</div>
The first is the button that allows users to add new contacts. The second is an empty div which is waiting for content and the third is part of the jQuery modal confirmation box that prompts users to confirm that they want to delete a record.
Three script files are referenced in the Master page. One is the main jQuery file, and the other two are from the jQuery UI library. These are used for a modal dialog and for a date picker:
<script src="../../Scripts/jquery-1.3.2.min.js" type="text/javascript"></script>
<script src="../../Scripts/ui.core.min.js" type="text/javascript"></script>
<script src="../../Scripts/jquery-ui.min.js" type="text/javascript"></script>
Here's the complete jQuery for the List view:
<script type="text/javascript">
    $(function() {
        // row colours
        $('tr:even').css('background-color', '#EFF3FB');
        $('tr:odd').css('background-color', '#FFFFFF');
        // selected row management
        $('tr').click(function() {
            $('tr').each(function() {
                $(this).removeClass('SelectedRowStyle');
            });
            $(this).addClass('SelectedRowStyle');
        });
        // hide the dialog div
        $('#dialog').hide();
        // set up ajax to prevent caching of results in IE
        $.ajaxSetup({ cache: false });
        // add an onclick handler to items with the "link" css class
        $('.link').live('click', function(event) {
            var id = $.trim($('td:first', $(this).parents('tr')).text());
            var loc = $(this).attr('title');
            // check to ensure the link is not a delete link
            if (loc.lastIndexOf('delete') == -1) {
                $.get(loc + '/' + id, function(data) {
                    $('#details').html(data);
                });
            // if it is, show the modal dialog
            } else {
                $('#dialog').dialog({
                    buttons: {
                        'Confirm': function() {
                            window.location.href = loc + '/' + id;
                        },
                        'Cancel': function() {
                            $(this).dialog('close');
                        }
                    }
                });
                $('#dialog').dialog('open');
            }
        });
        // add an onclick event handler to the add contact button
        $('#addContact').click(function() {
            $.get('Contact/Add', function(data) {
                $('#details').html(data);
            });
        });
    });
</script>
This may look daunting, but as with all things jQuery, it is simple, and I have broken down the various parts through the use of comments to make it easier to follow. The first thing that the script does is to replace the AlternatingRowColor management that web forms provides on the server to data presentation controls. It applies css styling to rows once the table has been rendered. Then I have added some additional code to cater for the fact that Imar's original sample includes a way of highlighting the currently selected row. Then I have added one line to prevent IE caching results. If you don't do this, you will scratch your head wondering why edits and deletes do not show up in the browser while the records have clearly been changed in the database.
The next bit is interesting. It makes use of the .live() method, which ensures that event handlers are attached to all matching elements, whether they exist at the moment or not. When the user clicks a Phone Numbers link, for example, the result is another table listing the related phone numbers:
You can see that the table contains Edit and Delete links. If I didn't use .live(), these would not have event handlers attached. The event handler is attached to the click event of the cells decorated with the "link" class. It first obtains a value for the id of the record. In the case of the Contact Person table, that will be the ContactPersonId (it's the content of the first cell in the table row). In sub forms, it will be the id of the phone number or email address. These are needed when passed to the controller action responsible for editing, deleting or displaying. More of that later.
You should also now see why I have added title attributes to the table cells. They contain the route that needs to be invoked, and the full url is constructed by appending the id to the route. Next, a check is made to see if the route contains the word "delete". If not, a request is fired and the result is displayed in the details div. If it is a delete link, the modal confirmation box is invoked before the record is deleted, giving the user the option to change their mind. See? I said it was simple!
Finally an event handler is attached to the click event of the Add Contact button. We will have a look at this part next.
Adding Contacts and Custom View Models
When adding a record within an ASP.NET application, the customary practice is to present the user with a form containing a series of inputs via which they can supply values. Most of the inputs for a ContactPerson object are straightforward: first name, last name, date of birth. One of them is not so straightforward - Type. This value must be drawn from the enumeration defined within PersonType.cs (Friend, Colleague etc) in the enums folder within the Model. This means that we must present the user with a restricted selection of valid values. A DropDownList will do the job. However, a series of optional values forms no part of the existing ContactPerson object, so we need to present a customised version of the ContactPerson to the view to accommodate this. This is where customised View Models come in.
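The enumeration itself is a plain C# enum along these lines; Friend and Colleague are mentioned in the article, while the remaining members are illustrative:

```csharp
public enum PersonType
{
    Friend,
    Colleague,
    Family,
    Other
}
```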
I've read some debate on where View Models should be placed within an application. Some people feel they are part of the model. My opinion is that they are related to views. They are only really relevant to MVC applications, and are not really reusable, so they should not go into the model. For that reason, I choose to create a ViewModels folder and add that under the Views folder. To that, I add a ContactPersonViewModel.cs file with the following code:
using System;
using System.Collections.Generic;
using System.Web.Mvc;

namespace ContactManagerMVC.Views.ViewModels
{
    public class ContactPersonViewModel
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string MiddleName { get; set; }
        public string LastName { get; set; }
        public DateTime DateOfBirth { get; set; }
        public IEnumerable<SelectListItem> Type { get; set; }
    }
}
Looking at the last property, you can see that I have made Type a collection of IEnumerable<SelectListItem>. This will be bound to the Html.DropDownList in the view.
There are two actions for adding within the controller. The first is decorated with the AcceptVerbs(HttpVerbs.Get) attribute while the second is marked with the AcceptVerbs(HttpVerbs.Post) attribute. The first is responsible for passing back a data entry form and the second handles the values that are posted when the form is submitted:
[AcceptVerbs(HttpVerbs.Get)]
public ActionResult Add()
{
    var personTypes = Enum.GetValues(typeof(PersonType))
                          .Cast<PersonType>()
                          .Select(p => new { ID = p, Name = p.ToString() });
    var model = new ContactPersonViewModel
    {
        Type = new SelectList(personTypes, "ID", "Name")
    };
    return PartialView(model);
}

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Add(ContactPerson person)
{
    ContactPersonManager.Save(person);
    return RedirectToAction("List");
}
The first few lines in the first action are responsible for transferring the values in the PersonType enumeration to an array, each item of which is then projected to an anonymous object having an ID property and a Name property. The ID is the enumeration value, and the Name is the constant value that goes with the enumeration. A ContactPersonViewModel object is instantiated, and the Type property has the IEnumerable of anonymous objects passed in to a SelectList object, along with the data value field and the data text field. I used a Partial View for adding a contact, selecting the strongly-typed option again and choosing ContactPersonViewModel for the type. The code for the partial follows:
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<ContactPersonViewModel>" %>
<script type="text/javascript">
    $(function() {
        $('#DateOfBirth').datepicker({ dateFormat: 'yy/mm/dd' });
    });
</script>
<% using (Html.BeginForm("Add", "Contact", FormMethod.Post)) {%>
<table>
    <tr>
        <td class="LabelCell">Name</td>
        <td><%= Html.TextBox("FirstName") %></td>
    </tr>
    <tr>
        <td class="LabelCell">Middle Name</td>
        <td><%= Html.TextBox("MiddleName") %></td>
    </tr>
    <tr>
        <td class="LabelCell">Last Name</td>
        <td><%= Html.TextBox("LastName") %></td>
    </tr>
    <tr>
        <td class="LabelCell">Date of Birth</td>
        <td><%= Html.TextBox("DateOfBirth", String.Empty)%></td>
    </tr>
    <tr>
        <td class="LabelCell">Type</td>
        <td><%= Html.DropDownList("Type")%></td>
    </tr>
    <tr>
        <td class="LabelCell"></td>
        <td><input type="submit" name="submit" id="submit" value="Save" /></td>
    </tr>
</table>
<% } %>
The jQuery at the top attaches a jQuery UI date picker (calendar) to the DateOfBirth textbox. The second argument passed in to the Html Helper for the DateOfBirth text box ensures that, by default, the value is empty. Otherwise, all the inputs are given the same name as the corresponding ContactPerson property we want to populate. This is to ensure that the default model binder works for us without any unnecessary messing around. The PersonType enum is also bound automatically for us by MVC.
The method that responds to POST requests is then able to match the incoming Request.Form values to properties on a ContactPerson object, and then call the associated BLL Save() method with minimum code. This is followed by the page refreshing in response to a RedirectToAction call to the List action.
Editing a ContactPerson
Again, there are two actions on the controller for editing: one for the initial GET request and another for the POST submission:
[AcceptVerbs(HttpVerbs.Get)]
public ActionResult Edit(int id)
{
    var personTypes = Enum.GetValues(typeof(PersonType))
                          .Cast<PersonType>()
                          .Select(p => new { ID = p, Name = p.ToString() });
    var contactPerson = ContactPersonManager.GetItem(id);
    var model = new ContactPersonViewModel
    {
        Id = id,
        FirstName = contactPerson.FirstName,
        MiddleName = contactPerson.MiddleName,
        LastName = contactPerson.LastName,
        DateOfBirth = contactPerson.DateOfBirth,
        Type = new SelectList(personTypes, "ID", "Name", contactPerson.Type)
    };
    return PartialView(model);
}

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Edit(ContactPerson person)
{
    ContactPersonManager.Save(person);
    return RedirectToAction("List");
}
We've already seen how jQuery is used to invoke the Edit action, passing the ID of the person to be edited. That id is used to retrieve the person details from the database through the now familiar route of BLL calling DAL methods. The ContactPersonViewModel is constructed from the returned data, with the addition of the SelectList as in the Add method. This time, though, the SelectList constructor features a third parameter, which is the actual Type for this person. The third argument in the constructor is the selected value.
The partial view code is again almost identical to the Add view:
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<ContactPersonViewModel>" %>
<script type="text/javascript">
    $(function() {
        $('#DateOfBirth').datepicker({ dateFormat: 'yy/mm/dd' });
    });
</script>
<% using (Html.BeginForm("Edit", "Contact", FormMethod.Post)) {%>
<table>
    <tr>
        <td class="LabelCell">Name</td>
        <td><%= Html.TextBox("FirstName") %></td>
    </tr>
    <tr>
        <td class="LabelCell">Middle Name</td>
        <td><%= Html.TextBox("MiddleName") %></td>
    </tr>
    <tr>
        <td class="LabelCell">Last Name</td>
        <td><%= Html.TextBox("LastName") %></td>
    </tr>
    <tr>
        <td class="LabelCell">Date of Birth</td>
        <td><%= Html.TextBox("DateOfBirth", Model.DateOfBirth.ToString("yyyy/MM/dd")) %></td>
    </tr>
    <tr>
        <td class="LabelCell">Type</td>
        <td><%= Html.DropDownList("Type")%></td>
    </tr>
    <tr>
        <td class="LabelCell"><%= Html.Hidden("Id") %></td>
        <td><input type="submit" name="submit" id="submit" value="Save" /></td>
    </tr>
</table>
<% } %>
The key differences are that the DateOfBirth field now contains a format string for displaying the date in a friendly way. Added to that is the Html.Hidden() helper near the submit button, which has the value of the person being edited. Oh, and of course the form is targeted at a different action to the Add form. There is an argument possibly for combining the Add and Edit forms and action, and using a token to detect the mode that the form is in (Add or Edit) to conditionally set some of the mark up in a combined partial view, and to control the code within the combined action. It would reduce a fair amount of repetition. I have separated them out in this sample for the sake of clarity primarily.
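A combined version might look like the following sketch, using a nullable id as the mode token. The Save action name and the BuildViewModel helper are hypothetical, not part of the sample:

```csharp
[AcceptVerbs(HttpVerbs.Get)]
public ActionResult Save(int? id)
{
    // a null id means Add mode; a value means Edit mode
    var person = id.HasValue
        ? ContactPersonManager.GetItem(id.Value)
        : new ContactPerson();
    return PartialView("Save", BuildViewModel(person));
}

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save(ContactPerson person)
{
    // the BLL Save method already copes with both inserts and updates
    ContactPersonManager.Save(person);
    return RedirectToAction("List");
}
```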
Deleting a ContactPerson
The delete action is straightforward, and there is no View associated with it. There doesn't need to be. Just going back to the List action will refresh the data:
public ActionResult Delete(int id)
{
    ContactPersonManager.Delete(id);
    return RedirectToAction("List");
}
This is where I chose to make a change to the original BLL and DAL. The original ContactPersonManager.Delete() method takes an instance of the person to be deleted. Within the DAL Delete() method, only the id of the person is referenced (or needed). I see no point in passing in whole objects when you are referencing them by their unique id, so I amended the methods to accept an int. It also makes the code easier, as I would otherwise have to instantiate the ContactPerson object just to get rid of it.
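The amended methods reduce to something like the following sketch; the stored procedure name and the AppConfiguration helper are illustrative stand-ins for whatever the sample's DAL actually uses:

```csharp
// BLL - ContactPersonManager
public static void Delete(int id)
{
    ContactPersonDB.Delete(id);
}

// DAL - ContactPersonDB; only the id is needed for the DELETE
public static void Delete(int id)
{
    using (var connection = new SqlConnection(AppConfiguration.ConnectionString))
    {
        var command = new SqlCommand("sprocContactPersonDeleteSingleItem", connection);
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@id", id);
        connection.Open();
        command.ExecuteNonQuery();
    }
}
```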
The jQuery invokes the confirmation modal dialog when a Delete link is clicked.
If the Cancel button is clicked, nothing happens (apart from the modal closing). If the Confirm button is clicked, the url that was constructed from the jQuery is invoked pointing to the delete action.
Managing the Collections
All of the collections - PhoneNumberList, EmailAddressList and AddressList - are managed in exactly the same way as each other. Consequently, I shall just pick one (EmailAddressList) to illustrate the methodology. You can examine the download to see exactly how the others work.
First, we will look at displaying the Email Addresses associated with the selected ContactPerson. This involves a List action on the controller:
public ActionResult List(int id)
{
    var model = new EmailAddressListViewModel
    {
        EmailAddresses = EmailAddressManager.GetList(id),
        ContactPersonId = id
    };
    return PartialView(model);
}
The method takes the id of the contact (from the first cell in the row that's been clicked, if you remember) and returns another custom view model - EmailAddressListViewModel. This is so that the Id of the contact person can be passed in to the View:
<%@ Control $(function() { $('#add').click(function() { $.get('Email/Add/<%= Model.ContactPersonId %>', function(data) { $('#details').html(data); }); }); }); </script> <table class="table"> <tr> <th scope="col">Contact Person Id</th> <th scope="col">Email</th> <th scope="col">Type</th> <th scope="col"> </th> <th scope="col"> </th> </tr> <%if(Model.EmailAddresses != null) {foreach (var email in Model.EmailAddresses) {%> <tr> <td><%= email.Id %></td> <td><%= email.Email %></td> <td><%= email.Type %></td> <td title="email/edit" class="link">Edit</td> <td title="email/delete" class="link">Delete</td> </tr> <%} }else {%> <tr> <td colspan="9">No email addresses for this contact</td> </tr> <%}%> </table> <input type="button" name="add" value="Add Email" id="add" />
You can see that the ContactPersonId is needed for the Add method. We need to make sure we are adding a new item to the correct contact's collection. Otherwise the Edit and Delete methods work in exactly the same was as for the ContactPerson itself - the id of the item to be amended or deleted is passed in via the URL, and the table cells are set up with a title attribute so that they can take advantage of the .live() method that was deployed earlier in the List view for the contacts.
[AcceptVerbs(HttpVerbs.Get)] public ActionResult Add(int id) { var contactTypes = Enum.GetValues(typeof(ContactType)) .Cast<ContactType>() .Select(c => new { Id = c, Name = c.ToString() }); var model = new EmailAddressViewModel { ContactPersonId = id, Type = new SelectList(contactTypes, "ID", "Name") }; return PartialView("Add", model); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Add(EmailAddress emailAddress) { emailAddress.Id = -1; EmailAddressManager.Save(emailAddress); return RedirectToAction("List", new {id = emailAddress.ContactPersonId}); }
A custom View Model has been created for this purpose of displaying existing EmailAddress objects for editing and adding. This includes the same kind of property for binding an IEnumerable<SelectListItem> collection for the Type dropdown. Where these methods differ from their ContactController counterparts is in what they return. The first returns the Add form as a partial, while the second Redirects to the List action on the same controller, which results in the revised collection being updated (and is also the reason why the no-cache option was set in the original Ajax).
In the case of the collection items, each one has its Id set to -1 within the action before it is saved. This is to ensure that the correct flow of code takes control in the "Upsert" stored procedure. By default at the moment, a value is picked up from the RouteData id parameter (which is that belonging to the ContactPerson) so if it is not set to -1, any EmailAddress object with the same id as the current contact will be updated instead. This is because Imar employed the "Upsert" approach to adding and editing data within the one procedure. More information can be found on this by reviewing his articles. In the meantime, here is the Partial View for adding an email address:
<%@ Control $(function() { $('#save').click(function() { $.ajax({ type: "POST", url: $("#AddEmail").attr('action'), data: $("#AddEmail").serialize(), dataType: "text/plain", success: function(response) { $("#details").html(response); } }); }); }); </script> <% using(Html.BeginForm("Add", "Email", FormMethod.Post, new { <tr> <td>Email:</td> <td><%= Html.TextBox("Email")%></td> </tr> <tr> <td>Type:</td> <td><%= Html.DropDownList("Type") %></td> </tr> <tr> <td><%= Html.Hidden("ContactPersonId") %></td> <td><input type="button" name="save" id="save" value="Save" /></td> </tr> </table> <% } %>
The jQuery in this particular instance takes care of submitting the form via AJAX. It is attached to an html button (not an input type="submit", you notice). It serializes the contents of the form fields and effects a POST request to the Add() action decorated by the appropriate AcceptVerbs attribute.
Editing and Deleting EmailAddress objects
Editing EmailAddress obejcts involves actions and views that are very similar to the ones that have been shown before. There are again two actions on the controller - one for GET and one for POST:
[AcceptVerbs(HttpVerbs.Get)] public ActionResult Edit(int id) { var emailAddress = EmailAddressManager.GetItem(id); var contactTypes = Enum.GetValues(typeof(ContactType)) .Cast<ContactType>() .Select(c => new { Id = c, Name = c.ToString() }); var model = new EmailAddressViewModel { Type = new SelectList(contactTypes, "ID", "Name", emailAddress.Type), Email = emailAddress.Email, ContactPersonId = emailAddress.ContactPersonId, Id = emailAddress.Id }; return View(model); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(EmailAddress emailAddress) { EmailAddressManager.Save(emailAddress); return RedirectToAction("List", "Email", new { id = emailAddress.ContactPersonId }); }
And the Partial View for the edit should also be quite familiar by now:
<%@ Control $(function() { $('#save').click(function() { $.ajax({ type: "POST", url: $("#EditEmail").attr('action'), data: $("#EditEmail").serialize(), dataType: "text/plain", success: function(response) { $("#details").html(response); } }); }); }); </script> <% using(Html.BeginForm("Edit", "Email", FormMethod.Post, new { <tr> <td>Email:</td> <td><%= Html.TextBox("Email")%></td> </tr> <tr> <td>Type:</td> <td><%= Html.DropDownList("Type") %></td> </tr> <tr> <td><%= Html.Hidden("ContactPersonId") %><%= Html.Hidden("Id") %></td> <td><input type="button" name="save" id="save" value="Save" /></td> </tr> </table> <% } %>
This is again almost identical to the Add view, except for the addition of a hidden field for the actual EmailAddress.Id value which ensures that the right email address gets updated. the Delete action requires no real explanation:
public ActionResult Delete(int id) { EmailAddressManager.Delete(id); return RedirectToAction("List", "Contact"); }
Summary
The object of this exercise was to demonstrate that MVC applications are perfectly possible without Linq To SQL or the Entity Framework. I took an existing ASP.NET 2.0 web forms application that had been nicely layered, and re-used the Business Objects, Business Logic and the Data Access Layers with little or no amendments. The DAL still uses ADO.NET and invokes stored procedures against the SQL Server database.
Along the way, I have shown how to use strongly typed View Models and a bit of natty jQuery to smooth the UI experience for the user. It's not a perfect application by any means, and is most definitely not ready for the real world. There is much room for improvement in terms of refactoring the views and actions to combine add and edit operations. The application lacks any kind of validation. Delete operations all result in showing the opening page again, whereas it would be nicer when deleting items in the phone, email or address collections to redisplay the revised sub form instead. This requires passing in the ContactPersonId to the action and should be relatively easy to achieve. | http://www.mikesdotnetting.com/Article/132/ASP.NET-MVC-is-not-all-about-Linq-to-SQL | CC-MAIN-2019-47 | refinedweb | 5,002 | 57.06 |
On 17/11/2010 21:22, Georg Brandl wrote: > Am 17.11.2010 22:16, schrieb Éric Araujo: >>> Excluding a builtin name from __all__ sounds like a perfectly sensible >>> idea, so even if it wasn't deliberate, I'd say it qualifies as >>> fortuitous :) >> But then, a tool that looks into __all__ to find for example what >> objects to document will miss open. I’d put open in __all__. > So it comes down again to what we'd like __all__ to mean foremost: > public API, or just a list for "import *"? Well, as noted earlier in this discussion - the language reference *states* that __all__ defines the module level public API. From: "If the list of identifiers is replaced by a star ('*'), all public names defined in the module are bound in the local namespace of the import statement." ... "The public names defined by a module are determined by checking the module’s namespace for a variable named __all__" If we decide that __all__ is purely for "import *" we should refine the use of the word public on this page. All the best, Michael Foord > Georg > > _______________________________________________ >. | https://mail.python.org/pipermail/python-dev/2010-November/105705.html | CC-MAIN-2014-10 | refinedweb | 186 | 66.17 |
Most social projects need a notification system. We need to inform users when a comment has been made on a post they have authored or if someone has commented on a post they have commented on. We also use notifications to tell users when admins/managers upload new files.
Traditionally this is accomplished through a simple email. Lately we looked into the possibility to "push" notifications directly to the client. We did some initial research and found Firebase, a simple JSON-storage with realtime functionality.
I'll go through some simple steps and thoughts regarding desktop notifications.
First of all we need to setup some django-signals. These should be specific for your project, so I'll just take a simple case: "Whenever a NEW Blog Post is created, we need to notify ALL users." Simple enough!
Our Model:
from django.contrib.auth.models import User from django.db import models from django.db.models.signals import post_save class Post(TimeStampedModel): title = models.CharField(max_length=512) content = models.TextField() author = models.ForeignKey(User) def get_subscribers(self): users = User.objects.all() users = users.exclude(pk=self.author.pk) return users
Notice that I've imported User, models and post_save.
- User because we need to connect our Post to a User.
- models for obvious reasons :)
- post_save because we want to trigger a function whenever a Blog Post has been saved.
- I've also added a method called get_subscribers(). We will call this function whenever we need a list of users that should be notified when we create a notification for Blog Posts. We also exclude the author.
So now we have a simple model with one method.
That's all for part 1. Lets take a look at our post_save method in part 2
Take me to part 2 | https://tech.willandskill.se/real-time-notifications-with-firebase-django-and-backbone/ | CC-MAIN-2018-39 | refinedweb | 297 | 61.02 |
Today we're going to go over how to make your application do a "fade-in". One common place that Windows users see this is with Microsoft Outlook's email notification. It fades in and then back out. wxPython provides a way to set the alpha transparency of any top window, which affects the widgets that are placed on the top-level widget.
In this example, I will use a frame object as the top level object and a timer to change the alpha transparency by a unit of 5 every second. The timer's event handler will cause the frame to fade into view and then back out again. The range of values is 0 - 255 with 0 being completely transparent and 255 being completely opaque.
The code is below:
import wx class Fader(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, title='Test') self.amount = 5 self.delta = 5 panel = wx.Panel(self, wx.ID_ANY) self.SetTransparent(self.amount) ## ------- Fader Timer -------- ## self.timer = wx.Timer(self, wx.ID_ANY) self.timer.Start(60) self.Bind(wx.EVT_TIMER, self.AlphaCycle) ## ---------------------------- ## def AlphaCycle(self, evt): self.amount += self.delta if self.amount >= 255: self.delta = -self.delta self.amount = 255 if self.amount <= 0: self.amount = 0 self.SetTransparent(self.amount) if __name__ == '__main__': app = wx.App(False) frm = Fader() frm.Show() app.MainLoop()
As you can see, all you need to do to change the transparency of the top-level widget is to call the SetTransparent() method of that widget and pass it the amount to set. I actually use this method in an application of my own that fades in a dialog to alert me to new mail in my Zimbra email account.
For more information, check out the following resources:
Timers
Transparent Frames
Code tested on the following:
OS: Windows XP
Python: 2.5.2
wxPython: 2.8.8.1 and 2.8.9.1 | https://www.blog.pythonlibrary.org/2008/04/14/doing-a-fade-in-with-wxpython/ | CC-MAIN-2022-27 | refinedweb | 319 | 69.68 |
pair_style python command
Syntax
pair_style python cutoff
cutoff = global cutoff for interactions in python potential classes
Examples
pair_style python 2.5 pair_coeff * * py_pot.LJCutMelt lj pair_style hybrid/overlay coul/long 12.0 python 12.0 pair_coeff * * coul/long pair_coeff * * python py_pot.LJCutSPCE OW NULL
Description
The python pair style provides a way to define pairwise additive potential functions as python script code that is loaded into LAMMPS from a python file which must contain specific python class definitions. This allows to rapidly evaluate different potential functions without having to modify and recompile LAMMPS. Due to python being an interpreted language, however, the performance of this pair style is going to be significantly slower (often between 20x and 100x) than corresponding compiled code. This penalty can be significantly reduced through generating tabulations from the python code through the pair_write command, which is supported by this style.
Only a single pair_coeff command is used with the python pair style which specifies a python class inside a python module or file that LAMMPS will look up in the current directory, the folder pointed to by the LAMMPS_POTENTIALS environment variable or somewhere in your python path. A single python module can hold multiple python pair class definitions. The class definitions itself have to follow specific rules that are explained below.
Atom types in the python class are specified through symbolic constants, typically strings. These are mapped to LAMMPS atom types by specifying N additional arguments after the class name in the pair_coeff command, where N must be the number of currently defined atom types:
As an example, imagine a file py_pot.py has a python potential class names LJCutMelt with parameters and potential functions for a two Lennard-Jones atom types labeled as ‘LJ1’ and ‘LJ2’. In your LAMMPS input and you would have defined 3 atom types, out of which the first two are supposed to be using the ‘LJ1’ parameters and the third the ‘LJ2’ parameters, then you would use the following pair_coeff command:
pair_coeff * * py_pot.LJCutMelt LJ1 LJ1 LJ2
The first two arguments must be * * so as to span all LAMMPS atom types. The first two LJ1 arguments map LAMMPS atom types 1 and 2 to the LJ1 atom type in the LJCutMelt class of the py_pot.py file. The final LJ2 argument maps LAMMPS atom type 3 to the LJ2 atom type the python file. If a mapping value is specified as NULL, the mapping is not performed, any pair interaction with this atom type will be skipped. This can be used when a python potential is used as part of the hybrid or hybrid/overlay pair style. The NULL values are then placeholders for atom types that will be used with other potentials.
The python potential file has to start with the following code:
from __future__ import print_function # class LAMMPSPairPotential(object): def __init__(self): self.pmap=dict() self.units='lj' def map_coeff(self,name,ltype): self.pmap[ltype]=name def check_units(self,units): if (units != self.units): raise Exception("Conflicting units: %s vs. %s" % (self.units,units))
Any classes with definitions of specific potentials have to be derived from this class and should be initialize in a similar fashion to the example given below.
Note
The class constructor has to set up a data structure containing the potential parameters supported by this class. It should also define a variable self.units containing a string matching one of the options of LAMMPS’ units command, which is used to verify, that the potential definition in the python class and in the LAMMPS input match.
Here is an example for a single type Lennard-Jones potential class LJCutMelt in reducted units, which defines an atom type lj for which the parameters epsilon and sigma are both 1.0:
class LJCutMelt(LAMMPSPairPotential): def __init__(self): super(LJCutMelt,self).__init__() # set coeffs: 48*eps*sig**12, 24*eps*sig**6, # 4*eps*sig**12, 4*eps*sig**6 self.units = 'lj' self.coeff = {'lj' : {'lj' : (48.0,24.0,4.0,4.0)}}
The class also has to provide two methods for the computation of the potential energy and forces, which have be named compute_force, and compute_energy, which both take 3 numerical arguments:
- rsq = the square of the distance between a pair of atoms (float)
- itype = the (numerical) type of the first atom
- jtype = the (numerical) type of the second atom
This functions need to compute the force and the energy, respectively, and use the result as return value. The functions need to use the pmap dictionary to convert the LAMMPS atom type number to the symbolic value of the internal potential parameter data structure. Following the LJCutMelt example, here are the two functions:
def compute_force(self,rsq,itype,jtype): coeff = self.coeff[self.pmap[itype]][self.pmap[jtype]] r2inv = 1.0/rsq r6inv = r2inv*r2inv*r2inv lj1 = coeff[0] lj2 = coeff[1] return (r6inv * (lj1*r6inv - lj2))*r2inv def compute_energy(self,rsq,itype,jtype): coeff = self.coeff[self.pmap[itype]][self.pmap[jtype]] r2inv = 1.0/rsq r6inv = r2inv*r2inv*r2inv lj3 = coeff[2] lj4 = coeff[3] return (r6inv * (lj3*r6inv - lj4))
Note
for consistency with the C++ pair styles in LAMMPS, the compute_force function follows the conventions of the Pair::single() methods and does not return the full force, but the force scaled by the distance between the two atoms, so this value only needs to be multiplied by delta x, delta y, and delta z to conveniently obtain the three components of the force vector between these two atoms.
Note
The evaluation of scripted python code will slow down the computation pair-wise interactions quite significantly. However, this can be largely worked around through using the python pair style not for the actual simulation, but to generate tabulated potentials on the fly using the pair_write command. Please see below for an example LAMMPS input of how to build a table file:
pair_style python 2.5 pair_coeff * * py_pot.LJCutMelt lj shell rm -f melt.table pair_write 1 1 2000 rsq 0.01 2.5 lj1_lj2.table lj
Note that it is strongly recommended to try to delete the potential table file before generating it. Since the pair_write command will always append to a table file, while pair style table will use the first match. Thus when changing the potential function in the python class, the table pair style will still read the old variant unless the table file is first deleted.
After switching the pair style to table, the potential tables need to be assigned to the LAMMPS atom types like this:
pair_style table linear 2000 pair_coeff 1 1 melt.table lj
This can also be done for more complex systems. Please see the examples/python folders for a few more examples.
Mixing, shift, table, tail correction, restart, rRESPA info:
Mixing of potential parameters has to be handled inside the provided python module. The python pair style simply assumes that force and energy computation can be correctly performed for all pairs of atom types as they are mapped to the atom type labels inside the python potential class. PYTHON package. It is only enabled if LAMMPS was built with that package. See the Making LAMMPS section for more info. | https://lammps.sandia.gov/doc/pair_python.html | CC-MAIN-2018-30 | refinedweb | 1,197 | 52.39 |
On Mar 13, 2013, at 08:52, Bjoern Drabeck wrote: > Btw I think there are still a couple more issues with the configure going wrong sometimes, depending what options I choose. Bit later when I got more time I can create a list of options and outcomes.. Will try to see if I can get a debug build to work which allows me to step into the code, with the feedback from John > The easiest way to get the best possible debugging experience would of course be to change av_get_cpu_flags as below. This removes the dependency on dead-code stripping, and is all that ought to be required if indeed the only linking errors you get are about the 3 worker functions called. (And frankly, is this really so unreadable that it is preferable to leave the stripping to the compiler rather than doing it explicitly?) int av_get_cpu_flags(void) { if (checked) return flags; #ifdef ARCH_ARM if (ARCH_ARM) flags = ff_get_cpu_flags_arm(); #endif #ifdef ARCH_PPC if (ARCH_PPC) flags = ff_get_cpu_flags_ppc(); #endif #ifdef ARCH_X86 if (ARCH_X86) flags = ff_get_cpu_flags_x86(); #endif checked = 1; return flags; } | http://ffmpeg.org/pipermail/ffmpeg-devel/2013-March/140442.html | CC-MAIN-2022-05 | refinedweb | 178 | 67.22 |
This
Good to hear you were finally able to get it working! I should perhaps have made it clearer that you need to enter the entire namespace rather than just the number.
I trust everything is working OK for you now??.
Having said that, you could use Bluetooth/WiFi transceivers (as mentioned by natasha_93 - thanks!) to mimic a wired connection using a wireless setup. As long as your Arduino is accessible using a serial port API (i.e. virtual COM port on Windows machines) you can transfer data back and forth as if it were a simple USB connection.
You should be able to use the existing Arduino sketch to control a pan-only webcam. Basically, you can just ignore the fact that there is no tilt motor attached to the Arduino. Everything else should work as normal. Just make sure to connect your pan motor to the correct output pin on the Arduino board.
From the perspective of the webpage, moving the tilt slider will thus have no effect but won't interfere with the operation of the pan motor. So you can just use as normal.
The alternative would be to remove the JavaScript code and the Arduino sketch code dealing with the tilt sensor but there probably wouldn't be much point because leaving them in gives you the flexibility to add a tilt motor at any time in the future.
Hope that helps!
Best
hao
Please confirm that you have correctly downloaded the jQuery and jQueryUI libraries and placed them in the appropriate directory. Step 6 shows the required directory/file structure. From the image you posted, it looks like the jQueryUI theme may be missing.
In terms of creating the sliders and connecting to SensorMonkey, you just need to copy and paste the code listed in Step 6 into a file called Webcam.html which you then need to edit; you need to replace each instance of YOUR_NAMESPACE, YOUR_PRIVATE_KEY and YOUR_CHANNEL with those specific to you. It looks like you have the Justin.tv stream working ok, so you're almost there.
One final thing; you should host the files on a webserver, either installed locally on your machine or available over the Internet. So don't double-click the file to open it in your web-browser (i.e. using file:// protocol) - that probably won't work.
Hope this helps!
"':"
how do I place it in the same directory as the webpage(where is this webpage coming from, is it already created in the library or do i have to create a new html file??)
Please teach me how to do step 6, I am so close to finishing this!
Thank you very much
You need to create a new HTML file called 'Webcam.html'. Then, you must copy & paste the code from Step 6 into this file. You need to replace each instance of YOUR_NAMESPACE, YOUR_PRIVATE_KEY and YOUR_CHANNEL in this HTML file with those assigned to you.
Save the HTML file into a directory. Then, download the production version of the jQuery library and place the 'jquery-x.y.z.min.js' file into this directory (where x, y and z are version numbers for the latest jQuery release).
Now, download the stable version of the jQuery UI library, unzip it, navigate to the 'js' subfolder and copy the 'jquery-ui-x.y.z.min.js' file into the new directory you just created.
Assuming you chose the UI Lightness theme, navigate into the 'css/ui-lightness' subfolder and copy the entire contents of this folder into the new directory you just created.
If you follow these steps, you should have a file/directory structure identical to the screenshot in Step 6 (with the exception that the version numbers for your jQuery and jQuery UI installations will probably be different).
Finally, you need to edit the version numbers of these files in your 'Webcam.html' file to match the particular versions that you downloaded.
You must then upload the contents of your directory to a webserver so you can access it using a web-browser. You can either install a local copy of Apache, place the contents of your directory into the root public folder and access using '', or upload the contents to a hosted webserver so you can access over the Internet.
If you need further help, please do not hesitate to contact us.
To send commands to your Arduino from a webpage, you must use the following procedure:
var client = new SensorMonkey.Client( "" );
client.on( "connect", function() {
client.subscribeToStream( "/private/My Sensor", function( e ) {
client.deliverToStreamPublisher( "/private/My Sensor", "This is a string" );
If you precede the string with a "#", it will be interpreted by SensorMonkey as a series of hexadecimal character pairs and the data will be sent to your sensor in binary form (rather than a UTF8-encoded string).
For more information, you can see the SensorMonkey JavaScript API for descriptions of all the functions.
If you need further help, please do not hesitate to contact us.
"Anything you can do, she can do wirelessly..." | http://www.instructables.com/id/Remote-controlled-webcam-using-Arduino-SensorMonk/?comments=all | CC-MAIN-2015-18 | refinedweb | 846 | 63.59 |
By Shruthi T on Oct 14, 2016 5:39:36 AM
Whenever you instantiate an object in your .net application, some memory is allocated to store that object. But at some point that object may no longer be needed by your application. If you want to reuse that memory in your application, we have to de-allocate that memory first. And this process of freeing up or de-allocating memory which is no longer needed by the application is called Garbage collection.
Garbage collection is an automatic process, when object is created then it will be placed in the Generation 0. The garbage collection uses an algorithm which checks the objects in the generation, the objects life time get over then it will be removed from the memory. The two kinds of objects. One is Live Objects and Dead Objects. The Garbage collection algorithm collects all unused objects that are dead objects in the generation. If the live objects running for long time then based on that life time it will be moved to next generation.
The object cleaning in the generation will not take place exactly after the life time over of the particular objects. It takes own time to implement the sweeping algorithm to free the spaces to the process.
When a garbage collection starts, it looks at a set of references called the ‘roots’. These are memory locations that are designated to be always reachable for some reason, and which contain references to objects created by the program. It marks these objects as ‘live’ and then looks at any objects that they reference; it marks these as being ‘live’ too. It continues in this manner, iterating through all of the objects it knows are ‘live’. It marks anything that they reference as also being used until it can find no further objects. Once all of these live objects are known, any remaining objects can be discarded and those spaces can be re-used for new objects and it squashes the memory so that the free memory is always located at the end of a heap so that allocation of new objects will be faster.
How Often Does the Garbage Collector Perform a Garbage Collection?.
Garbage collection can also be triggered by calling the GC.Collect method.
C# program that uses GC.Collect
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace GarbageCollection
{
public class Work
{
public int salary;
public int months;
public int CalculateApproxCTC()
{
salary = 1025454564;
months = 12;
int CTC = salary * months;
return CTC;
}
}
class Program
{
static void Main()
{
long mem1 = GC.GetTotalMemory(false);
{
// Allocate an array and make it unreachable.
int[] allocatespace = new int[5000000];
int[] values = new int[50000]; // Generation 2 Example
GC.GetGeneration(values);
Console.WriteLine(GC.GetGeneration(values));
GC.GetGeneration(allocatespace);
Console.WriteLine(GC.GetGeneration(allocatespace));
// just create an object of class and do not use - will be cleared by Garbage Collection
Work wrk = new Work();
Console.WriteLine(GC.GetGeneration(wrk)); // Generation 0 example
values = null;
}
long mem2 = GC.GetTotalMemory(false);
{
// Forces an immediate garbage collection of all generations.
GC.Collect();
}
long mem3 = GC.GetTotalMemory(false);
{
Console.WriteLine(mem1);
Console.WriteLine(mem2);
Console.WriteLine(mem3);
Console.ReadLine();
}
}
}
}
Output
| https://blog.trigent.com/how-net-garbage-collection-works | CC-MAIN-2018-51 | refinedweb | 529 | 56.05 |
OK. This is my first post on this site, and I consider myself quite a beginner in C++. I want to know how to open up a
document using C++. Ive tried many times, but it just comes up with a blank console which goes away when I press
enter. I suspect the problem is that I am making a cionsole application instead of some other application that I sgould be using. (I am using Miscrosoft Visual C++ 2010) I really do not know why this is not working. Is it because I am
trying to open a document that is not allowed to be opened???? (Im an adminstator on my computer so that is
proobably not it.) Anyway here is the code that I wrote:
Thanks!Thanks!Code:
#include <iostream>
#include <fstream>
using namespace std;
int main()
{
ofstream wordDocument;
wordDocument.open("C:\\Users\\Haziqu\\Documents\\STUDYING STUFF\\BUSINESS\\adding value #1.txt.txt");
cin.get();
return 0;
} | http://forums.codeguru.com/printthread.php?t=531863&pp=15&page=1 | CC-MAIN-2015-18 | refinedweb | 158 | 66.54 |
Perl module - Part of CPAN distribution Inline-Java 0.30.
Inline::Java - Write Perl classes in Java.
use Inline Java => <<'END_OF_JAVA_CODE';
   class alu {
      public alu(){
      }

      public int add(int i, int j){
         return i + j ;
      }

      public int subtract(int i, int j){
         return i - j ;
      }
   }
END_OF_JAVA_CODE

my $alu = new alu() ;

print "9 + 16 = ", $alu->add(9, 16), "\n";
print "9 - 16 = ", $alu->subtract(9, 16), "\n";
THIS IS ALPHA SOFTWARE. It is incomplete and possibly unreliable.
It is also possible that some elements of the interface (API) will
change in future releases.
C<Inline::Java> is driven by the same idea as the other Inline
language modules, such as C<Inline::C> and C<Inline::CPP>.
This section will explain the different ways to use Inline::Java.
For more details on Inline, see 'perldoc Inline'.
The Java source code can also be specified as a filename, a subroutine
reference (the subroutine should return source code), or an array
reference (the array contains lines of source code). This information
is detailed in 'perldoc Inline'.
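As a sketch of the run-time alternative (the class name here is made up), the source can be built in a variable and bound with Inline's bind() method instead of at compile time:

```perl
use Inline;

# Build the Java source at run time:
my $source = <<'END';
class Echo {
   public Echo(){
   }

   public String shout(String s){
      return s.toUpperCase() ;
   }
}
END

# bind() compiles and loads the code at run time,
# unlike 'use Inline Java => ...' which runs at compile time.
Inline->bind(Java => $source);

my $e = new Echo() ;
print $e->shout("hi"), "\n" ;   # HI
```

This is convenient when the Java source is generated or read from disk while the program is running.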
In order for Inline::Java to function properly, it needs to know
where to find the Java compiler (javac) and the Java Runtime (java)
on your machine. This is done using one of the following techniques:
- set the BIN configuration option to the correct directory
- set the PERL_INLINE_JAVA_BIN environment variable to the correct
directory
- put the correct directory in your PATH environment variable
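For example (the path below is hypothetical; substitute your own JDK location), the BIN option can be passed along with the source:

```perl
# Hypothetical JDK location - adjust for your machine:
use Inline Java => <<'END', BIN => '/usr/local/jdk1.3/bin';
   class Hello {
      public Hello(){
      }

      public String greet(String name){
         return "Hello, " + name + "!" ;
      }
   }
END

my $h = new Hello() ;
print $h->greet("Perl"), "\n" ;
```

Alternatively, setting the PERL_INLINE_JAVA_BIN environment variable to the same directory before running the script has the same effect, without touching the code.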
There are a number of configuration options that dictate the
behavior of Inline::Java:
BIN:
Specifies the path to your Java binaries.
Ex: BIN => 'my/java/bin/path'
Note: This configuration option only has an effect on the first
'use Inline Java' call inside a Perl script, since all other calls
make use of the same JVM. However, you can use it if you want to
change which 'javac' executable is used for subsequent calls.
PORT:
Specifies the starting port number for the server. If many
C<Inline::Java> blocks are declared, the port number is
incremented each time.
Default is 7890.
Ex: PORT => 4567
Note: This configuration option only has an effect on the first
'use Inline Java' call inside a Perl script, since all other calls
make use of the same JVM.
STARTUP_DELAY:
Specifies the maximum number of seconds that the Perl script
will try to connect to the Java server.
CLASSPATH:
Adds the specified CLASSPATH to the environment CLASSPATH.
Ex: CLASSPATH => '/my/other/java/classes'
Note: This configuration option only has an effect on the first
'use Inline Java' call inside a Perl script, since all other calls
make use of the same JVM.
JNI:
Toggles the use of the JNI (Java Native Interface) extension instead of
the client/server mode (see the JNI section below).
Ex: JNI => 1
JNI can also be set globally by setting the PERL_INLINE_JAVA_JNI
environment variable to 1.
Note: This configuration option only has an effect on the first
'use Inline Java' call inside a Perl script, since all other calls
make use of the same JVM.
SHARED_JVM:
This mode enables multiple processes to share the same JVM. It was
created mainly in order to be able to use C<Inline::Java> under mod_perl.
Ex: SHARED_JVM => 1
Note: This configuration option only has an effect on the first
'use Inline Java' call inside a Perl script, since all other calls
make use of the same JVM.
DEBUG:
Enables debugging info
Ex: DEBUG => 1
WARN_METHOD_SELECT:
Throws a warning when C<Inline::Java> has to 'choose' between
different method signatures. The warning states the possible
choices and the signature chosen.
Ex: WARN_METHOD_SELECT => 1
STUDY:
Takes an array of Java classes that you wish to have C<Inline::Java>
study.
Ex: STUDY => ['java.util.HashMap', 'my.class'] ;
AUTOSTUDY:
Makes C<Inline::Java> automatically study unknown classes it
encounters.
Ex: AUTOSTUDY => 1
Because Java is object oriented, any interface between Perl and Java
needs to support Java classes adequately.
Example:
use Inline Java => <<'END';
class Foo {
String data = "data" ;
static String sdata = "static data" ;
public Foo() {
System.out.println("new Foo object being created") ;
}
public String get_data(){
return data ;
}
public static String get_sdata(){
return sdata ;
}
public void set_data(String d){
data = d ;
}
}
END
my $obj = new Foo ;
print $obj->get_data() . "\n" ;
$obj->set_data("new data") ;
print $obj->get_data() . "\n" ;
The output from this program is:
new Foo object being created
data
new data
Inline::Java created a new namespace called main::Foo and
created the following functions:
main::Foo
sub main::Foo::new { ... }
sub main::Foo::Foo { ... }
sub main::Foo::get_data { ... }
sub main::Foo::get_sdata { ... }
sub main::Foo::set_data { ... }
sub main::Foo::DESTROY { ... }
Note that only the public methods are exported to Perl.
Note also that the class itself is not public. With
Inline::Java you cannot create public classes because Java
requires that they be defined in a .java file of the same
name (Inline::Java can't work this way).
Inner classes are also supported; you simply need to supply a reference
to an outer class object as the first parameter of the constructor:
use Inline Java => <<'END';
class Foo {
public Foo() {
}
public class Bar {
public Bar() {
}
}
}
END
my $obj = new Foo() ;
my $obj2 = new Bar($obj) ;
In the previous example we have seen how to call a method. You can also
call static methods in the following manner:
print Foo->get_sdata() . "\n" ;
# or
my $obj = new Foo() ;
print $obj->get_sdata() . "\n" ;
both of these will print:
static data
You can pass any kind of Perl scalar or any Java object to a method. It
will be automatically converted to the correct type:
use Inline Java => <<'END';
class Foo2 {
public Foo2(int i, String j, Foo k) {
...
}
}
END
my $obj = new Foo() ;
my $obj2 = new Foo2(5, "toto", $obj) ;
will work fine. These objects can be of any type, even if these types
are not known to Inline::Java. This is also true for return types:
use Inline Java => <<'END';
import java.util.* ;
class Foo3 {
public Foo3() {
}
public HashMap get_hash(){
return new HashMap() ;
}
public void do_stuff_to_hash(HashMap h){
...
}
}
END
my $obj = new Foo3() ;
my $h = $obj->get_hash() ;
$obj->do_stuff_to_hash($h) ;
Objects of types unknown to Perl can exist in the Perl space; you just
can't call any of their methods.
You can also access all public member variables (static or not) from Perl.
As with method arguments, the types of these variables do not need to
be known to Perl:
use Inline Java => <<'END';
class Foo4 {
public int i ;
public static HashMap hm ;
public Foo4() {
}
}
END
my $obj = new Foo4() ;
$obj->{i} = 2 ;
my $hm1 = $obj->{hm} ; # instance way
my $hm2 = $Foo4::hm ; # static way
Note: Watch out for typos when accessing members in the static fashion,
'use strict' will not catch them since they have a package name...
You can also send and receive arrays. This is done by using Perl lists:
use Inline Java => <<'END';
class Foo5 {
public int i[] = {5, 6, 7} ;
public Foo5() {
}
public String [] f(String a[]){
return a ;
}
public String [][] f(String a[][]){
return a ;
}
}
END
my $obj = new Foo5() ;
my $i_2 = $obj->{i}->[2] ; # 7
my $a1 = $obj->f(["a", "b", "c"]) ; # String []
my $a2 = $obj->f([
["00", "01"],
["10", "11"],
]) ; # String [][]
print $a2->[1]->[0] ; # "10"
Sometimes when a class has many signatures for the same method,
Inline::Java will have to select one of the signatures based on
the arguments that are passed:
use Inline Java => <<'END';
class Foo6 {
public Foo6() {
}
public void f(int i){
}
public void f(char c){
}
}
END
my $obj = new Foo6() ;
$obj->f('5') ;
In this case, Inline::Java will call f(int i), because '5' is an integer.
But '5' is a valid char as well. So to force the call of f(char c), do the
following:
use Inline::Java qw(cast) ;
$obj->f(cast('char', '5')) ;
# or
$obj->f(Inline::Java::cast('char', '5')) ;
The cast function forces the selection of the matching signature. Note that
the cast must match the argument type exactly. Casting to a class that
extends the argument type will not work.
Another case where type casting is needed is when one wants to pass an array
as a java.lang.Object:
use Inline Java => <<'END';
class Foo7 {
public Object o ;
public int a[] = {1, 2, 3} ;
public Foo7() {
}
}
END
my $obj = new Foo7() ;
You need to use the 3-parameter version of the cast function to do this:
$obj->{o} = Inline::Java::cast(
"java.lang.Object",
[1, 2, 3],
"[Ljava.lang.String;") ;
This tells Inline::Java to validate your Perl list as a String [], and
then cast it as an Object. Note that an existing Java array object can
be assigned directly:
$obj->{o} = $obj->{a} ;
Inline::Java can also 'study' existing Java classes so that they can be
used from Perl:
use Inline (
Java => 'STUDY',
STUDY => ['java.util.HashMap'],
) ;
my $hm = new java::util::HashMap() ;
$hm->put("key", "value") ;
my $v = $hm->get("key") ;
With the AUTOSTUDY option, Inline::Java studies unknown classes as it
encounters them. For example:
use Inline (
Java => 'DATA',
AUTOSTUDY => 1,
) ;
my $obj = new Foo8() ;
my $hm = $obj->get_hm() ;
$hm->put("key", "value") ;
my $v = $hm->get("key") ;
__END__
__Java__
import java.util.* ;
class Foo8 {
public Foo8() {
}
public HashMap get_hm(){
HashMap hm = new HashMap() ;
return hm ;
}
}
If you wish to use more than one Inline::Java section in your Perl script,
you will need to use the Inline NAME option to name your modules. You can then
use a special syntax in your CLASSPATH (either the environment variable or the
configuration option) to tell what Inline::Java modules it will need to load
at runtime:
package Foo ;
use Inline (
Java => <<'END',
class Foo {
public Foo() {
}
}
END
NAME => "Foo",
CLASSPATH => "[PERL_INLINE_JAVA=Foo, Bar]",
) ;
package Bar ;
use Inline (
Java => <<'END',
class Bar {
public Bar() {
}
}
END
NAME => "Bar",
) ;
package main ;
my $f = new Foo() ;
my $b = new Bar() ;
If you set the CLASSPATH via the configuration option, remember to do so in the
first Inline::Java section. Also remember that you can't use Java packages with
Inline::Java. Your classes must be in the unnamed package.
Starting in version 0.20, it is possible to use the JNI (Java Native
Interface) extension. This enables Inline::Java to load the Java virtual
machine as a shared object instead of running it as a stand-alone server.
This brings an improvement in performance.
However, the JNI extension is not available on all platforms (see README and
README.JNI for more information). For that reason, if you have built the
JNI extension, you must enable it explicitly by doing one of the following:
- set the JNI configuration option to 1
- set the PERL_INLINE_JAVA_JNI environment variable to 1
Note: Inline::Java only creates one virtual machine instance. Therefore
you can't use JNI for some sections and client/server for others. The first
section determines the execution mode.
Starting with version 0.30, the Inline::Java JVM can now be shared between
multiple processes. The first process to start the JVM is considered the JVM
owner and will shutdown the JVM on exit. All other processes can connect as
needed to the JVM. If any of these other processes were created by forking
the parent process, the Inline::Java->reconnect_JVM() function must be called
in the child to get a fresh connection to the JVM. Ex:
use Inline (
Java => '<<END',
import java.util.* ;
class t {
public t(){
}
}
END
SHARED_JVM => 1,
) ;
my $t = new t() ;
my $nb = 5 ;
for (my $i = 0 ; $i < $nb ; $i++){
if (! fork()){
Inline::Java::reconnect_JVM() ;
$t = new t() ;
}
}
Once this code has run, each of the 6 processes will have created a different
instance of the t class. Data can be shared between the processes by using
static members in the Java code.
If processes not forked off the parent are connecting to the shared JVM, the
parent's CLASSPATH must be set properly or else the parent will not see these
classes. See USING MULTIPLE SECTIONS for more details.
Please note that the SHARED_JVM feature does not work in JNI mode.
This is an ALPHA release of Inline::Java. Further testing and
expanded support for other operating systems and platforms will be a
focus for future releases. It has been tested on:
- Solaris 2.5.1 + Perl 5.6 + Java SDK 1.2.2
- Solaris 2.8 + Perl 5.6 + Java SDK 1.3.0
- Linux Redhat 6.2 Alpha + Perl 5.6 + Java SDK 1.3.1
- Windows 2000 + Perl 5.6 + Java SDK 1.3.0
- Windows 95 + Perl 5.6 + Java SDK 1.2.2 (use 'fix' option)
It likely will work with many other configurations.
This is how Inline::Java works. Once the user's code is compiled by the
javac binary, Inline::Java's own Java code is compiled. This code
implements a server (or not if you use the JNI mode) that receives requests
from Perl to create objects, call methods, destroy objects, etc. It is also
capable of analyzing Java code to extract the public symbols. Once this
code is compiled, it is executed to extract the symbols from the Java code.
Once this is done, the user's code information is fetched and is bound to
Perl namespaces. Then Inline::Java's code is run to launch the server. The
Perl script then connects to the server using a TCP socket (or not if you use
the JNI mode). Then each object creation or method invocation on "Java
objects" sends requests to the server, which processes them and returns
object ids to Perl, which keeps them to reference the objects in the future.
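The object-id bookkeeping described here can be sketched in miniature. This is an illustrative Python toy of the proxy idea (real objects live on a "server", the client holds only integer ids), not Inline::Java's actual wire protocol:

```python
# Toy proxy scheme: the server owns real objects; callers hold only ids.
class Server:
    def __init__(self):
        self.objects = {}
        self.next_id = 0

    def create(self, cls, *args):
        oid = self.next_id
        self.next_id += 1
        self.objects[oid] = cls(*args)
        return oid  # the client only ever sees this id

    def call(self, oid, method, *args):
        return getattr(self.objects[oid], method)(*args)

server = Server()
oid = server.create(dict)
server.call(oid, "__setitem__", "key", "value")
print(server.call(oid, "__getitem__", "key"))  # prints "value"
```

In the real module the `create` and `call` requests travel over the TCP socket (or the JNI boundary), but the id-for-object exchange is the same idea.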
For information about using Inline, see Inline.
For information about other Inline languages, see Inline-Support.
Inline::Java.
Here are some things to watch out for:
Patrick LeBoutillier <patl@cpan.org>
Brian Ingerson <INGY@cpan.org> is the author of Inline.
All Rights Reserved. This module is free software. It may be used,
redistributed and/or modified under the terms of the Perl Artistic
License.
(see)
Now it's all up.. thanks!
Hi. I just tried to use fixed ips by setting use_floating_ips=false and it now goes on, and stops again with Heat problem.. ha..
Thanks for your answer. I was following the guide, so I'm not sure which part of configuration is telling to use which Heat stack. Where do I see which Heat stack it's trying to use to make a cluster now..?
Thanks for suggestions. I'll try all, and update with results soon!
Or is there any way to set config to use compute-node2 as a default instead of compute-node?
I updated my question.. sorry for the delay. So on controller, nova seems to think that they're connected. So if compute-node's resources are not enough to handle an instance I'm trying to launch, it should automatically launch the instance on compute-node2, right?
Hi. I'm trying to introduce Sahara to my cloud to utilize Hadoop, and it's not going well. I tried to follow Openstack Documents but it didn't really help me. Now I'm trying to add sahara to my dashboard by command "pip install sahara-dashboard".
Sahara Dashboard is located : /usr/local/lib/python2.7/dist-packages/saharadashboard
original Dashboard is located : /usr/share/openstack-dashboard/openstack-dashboard, and I added
INSTALLED_APPS = [
'openstack_dashboard',
'saharadashboard',
...
]
this to /usr/share/openstack-dashboard/openstack-dashboard/settings.py.
and in : /usr/share/openstack-dashboard/openstack-dashboard/local/local_settings.py , I added
SAHARA_URL=''
OPENSTACK_API_VERSIONS = {
"data-processing": 1.1,
"identity": 3,
"volume": 2,
"image": 2,
}
"data-processing": 1.1
SAHARA_USE_NEUTRON=True
I can see the Sahara management interface on the Dashboard, but I'm getting this error when I try to register an image in the Image Registry tab of the Dashboard. Hope you don't mind the Korean in the image. I ought to tell you other things are working fine in my cloud. I searched through all logs related to Sahara, and nothing comes up.
I suspect that these parts of the code are showing me the error, but I don't know how to fix this issue. Please help!
glance = importutils.import_any('openstack_dashboard.api.glance',
'horizon.api.glance')
def _get_images(self, request, filter):
try:
images, _more = glance.image_list_detailed(request, filters=filter)
except Exception:
images = []
exceptions.handle(request,
_("Unable to retrieve images with filter %s.") %
filter)
return images
def _get_public_images(self, request):
filter = {"is_public": True,
"status": "active"}
return self._get_images(request, filter)
def _get_tenant_images(self, request):
filter = {"owner": request.user.tenant_id,
"status": "active"}
return self._get_images(request, filter)
glance image-list on controller
+--------------------------------------+------------------------------+
| ID | Name |
+--------------------------------------+------------------------------+
| 28747d2b-c113-4dd3-ad44-908141461e6d | cirros |
| ecb9ac84-7459-4b3b-a832-59329ae1e0ea | github-enterprise-2.6.5 |
| 39ce8087-f95b-4204-bcee-0f084735cba9 | manila-service-image |
| f9a678a8-492f-481e-8c82-5d0c84f69675 | mysqlTest |
| 5ae10b0d-c732-481a-944f-ca3a5a5f4915 | sahara-vanilla-latest-ubuntu |
| f9ea4193-1a92-434d-b247-27b748feb4a1 | Ubuntu Server 14.04 LTS |
+--------------------------------------+------------------------------+
Thank you for your answer. Unfortunately, I have the same config as yours.. any other suggestions?
Hi. I'm trying to introduce Sahara to my cloud to utilize Hadoop, and it's not going well. I searched the glance log and the sahara log, and they don't seem to show any errors (I'll provide them if needed). I tried to follow the OpenStack documents but they didn't really help me. Now I'm trying to add Sahara to my dashboard with the command "pip install sahara-dashboard".
OpenStack is a trademark of OpenStack Foundation. This site is powered by Askbot. (GPLv3 or later; source). Content on this site is licensed under a CC-BY 3.0 license. | https://ask.openstack.org/en/users/18182/openstackstarter/?sort=recent | CC-MAIN-2020-34 | refinedweb | 596 | 52.56 |
learning vs. classifying modules
A place to ask questions about methods in Orange and how they are used and other general support.
learning vs. classifying modules
I know that Orange has a learning module for each classifier, but does it have a classification module that can be used to apply the learned model to new examples?
Re: learning vs. classifying modules
The trained classifiers can be applied to new examples. If you want serialization, simply use python's pickle module.
Code:
import Orange, cPickle
data = Orange.data.Table("train.tab")
c = Orange.classification.bayes.NaiveLearner(data)
cPickle.dump(c, open("bayes.pck", "wb"))
And later in a new process:
Code:
import Orange, cPickle
c = cPickle.load(open("bayes.pck", "rb"))
data = Orange.data.Table("new.tab")
c(data[0])
Before we learn how to create functions, let’s go over some built-in functions…
C++ comes chock-full of functions that are already created as part of the standard library. But how do we access this hidden hoard of helpful functions? We gain access to various functions by including headers like <cmath> or <string>.
In fact, you may already have used a couple functions without even knowing it! With the following header:
#include <cmath>
We gain the power to call sqrt() to find the square root of any number.
Wait, "call" sqrt()?
Calling a function is how we get a function to take action. To call a basic function, we just need the function name followed by a pair of parentheses like sqrt(9). For example:
std::cout << sqrt(9) << "\n"; // This would output 3
Instructions
Inside of main(), call rand() with the modulo operator to generate a random number between 0 and your favorite number. For example, rand() % 29 would output a random number between 0 and 28.
Assign the resulting value to a new int variable called the_amazing_random_number.
Print the_amazing_random_number to the terminal.
API.
This information puts into stark light some of the falsehoods told by Microsoft when Windows 8 was announced. In the //Build keynotes it was said that most Silverlight code could be upgraded to WinRT with minimal changes such as modifying namespaces. But if you look at the list of controls between Silverlight 5 and Windows 8 you quickly see that’s not the case. Commonly used controls such as AutoCompleteBox, ChildWindow, DataGrid, Pivot, and WebBrowser are simply not available. Alternatives from Microsoft and others do exist, but the conversion is not necessarily straight forward.
Some interesting statistics:
.NET 3.5 had a total of 8,497 classes, structures, and interfaces. .NET 4.0 increased that by nearly a third to 12,677. Last year’s release, .NET 4.5, was quite small in comparison with less than a thousand new types.
By comparison, Windows 8 and Windows Phone 8 have 2,851 and 2,266 respectively. That puts it between Silverlight (2,210) and Java Standard Edition 7 (3,977) in terms of raw size. Of course many of these types are inconsequential DTOs such as CalendarDateChangedEventArgs.
The “mockability” of .NET remains quite low. For every 100 classes in .NET, there are roughly 8 1/2 interfaces. This is actually down from .NET 3.5 where there were 8 1/2 interfaces per 100 classes. While many of these classes are simple DTOs that don’t need to be mocked, others such as DirectoryInfo still don’t offer a good option.
First Floor Software is best known for the debugging tool XAML Spy, which was previously known as Silverlight Spy.
Interview: Amr ElAdawy, JSPX Java Web Framework Developer
Amr ElAdawy is a software engineer at Etisalat Egypt in Cairo. Etisalat is an Egyptian Mobile operator, where Amr develops Java enterprise applications for telecommunications & sales.
He is a member of a small community that is developing and supporting a web framework called "JSPX" . (Its homepage is here.)
Yet another web framework?
Looking into the existing frameworks for developing Java web applications, we found that each requires a lot of knowledge and time before one can become productive with it. We wanted something that is easy to learn, productive, and (most of all) developer friendly. Hence we developed JSPX.
JSPX's main target is to be a "developer friendly" framework. Since JSPX is based on standard HTML tags and simple Java POJOs:
- JSPX is easy to learn. We already involved some fresh developers with basic knowledge of HTML and Java and no knowledge whatsoever of the framework... and they managed to start being productive in a remarkably short time.
- The out-of-the-box components that implement common tasks, such as DataTable, ListTable, Validators, and Captcha, are very powerful.
- Utilization of declarative code with full controllability, that is, the ability to interact with the declared HTML controls through Java APIs, is a fundamental concept in this framework.
How does declarative code make JSPX different?
JSPX is smart enough to know what you need it to do without your needing to tell it how to do so. You only need to declare some attributes in your HTML pages to change the behavior of the results. For example, setting the "AutoBind" attribute to "True" in a DataTable component will make the data table connect to a database automatically, without the need for any Java code at all.
Here are some of the DataTable tags:
To give us a visceral idea about what JSPX entails, can you give us a "Hello World" scenario?
Hello World with JSPX is very simple. Only three steps will get you on the road:
- Configure web.xml file. Just register two servlets and choose your URL pattern:
<servlet>
<display-name>JspxHandler</display-name>
<servlet-name>JspxHandler</servlet-name>
<servlet-class>eg.java.net.web.jspx.engine.RequestHandler</servlet-class>
</servlet>
<servlet>
<display-name>ResourceHandler</display-name>
<servlet-name>ResourceHandler</servlet-name>
<servlet-class>eg.java.net.web.jspx.engine.ResourceHandler</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>JspxHandler</servlet-name>
<url-pattern>*.jspx</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>ResourceHandler</servlet-name>
<url-pattern>/jspxEmbededResources/*</url-pattern>
</servlet-mapping>
- Create HTML file with the extension you chose in the URL pattern. For example FirstPage.jspx in the Webroot folder:
<page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
controller="jspx.demo.web.controller.FirstPage">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>jspx demo</title>
</head>
<body>
<form method="post" enctype="multipart/form-data" >
<label id="resultLabel"></label>
</form>
</body>
</html>
</page>
- Create a Java class. It must have the same name as defined in the controller attribute in the HTML page's tag "jspx.demo.web.controller.FirstPage":
public class FirstPage extends Page {
protected void pageLoaded() {
resultLabel.setValue("Hello Web in JSPX");
}
public Label resultLabel = new Label();
public Label getResultLabel() {
return resultLabel;
}
public void setResultLabel(Label result) {
this.resultLabel = result;
}
}
Now you can start the application and try to access this URL.
Now please tell us exactly how an HTML file in JSPX is different from normal HTML files.
Looking at the above example, we can see that the page contains standard HTML tags, except for the root element which is <page>. That is one of the most important features of JSPX: the ability to port an already-designed HTML page into JSPX pages. Just wrap the HTML within <page> tags! JSPX is built on top of all HTML standard tags. However, when going into some advanced business cases, such as database searches, one would require some non-standard tags that are specific to JSPX (as shown in the screenshot earlier in this interview).
What exactly is defined in the POJOs and how are they hooked into the HTML files?
The controller code, which is a simple POJO class, is a representation of the HTML declarative code on the JSPX page. In the page, you see the attribute "Controller" in the Page node. This defines the name of the controller class. In this controller, you can define web controls that have the same name as the value of the ID attribute in the HTML page. So, you will be able to interact with them. Also, through this controller you are exposed to a set of inherited methods to control all the phases of JSPX. In addition to that, there are advanced binding techniques, via the JspxBean control, which are almost the same as JSF backing beans.
What about configuration files, like struts-config.xml?
The approach taken in configuration is one of the most important advantages of JSPX. Our main goal from the start has been to eliminate the headache of configuration files. Unlike JSF and Struts, JSPX does not require any configuration files, other than the standard web.xml file. Hence, JSPX can be considered a "Zero Configurations Framework".
Are there any disadvantages to using this framework?
Using relatively new frameworks is considered a risk for some people. In JSPX, we looked into the other frameworks and we covered almost everything that is needed and missing. Also, we provided the ability to use JSPX with already developed projects that were made in different technologies like JSF and JSP. Also, we support including already-made JSPs into JSPX pages.
Are you using the framework in production code and what are the results of doing so?
Over the past 5 months, since the first announcement of JSPX, it has been used in at least five of our enterprise projects. Some of them have been totally migrated to JSPX. Others were already created in different technologies and JSPX was used to implement new requirements. In all of these cases, JSPX provided outstanding productivity. Top management has been very pleased about our time-to-market. In fact, we were able to deliver requirements that were planned to take days in just an hour!
What are the future plans of this framework?
The first JSPX release was announced on the 1st of January 2009... and the framework will not stop there! There is a plan for monthly releases, aiming to include bug fixes and new features. We are planning to provide support for AJAX, which is scheduled in the next build. Also, a plugin for NetBeans IDE will improve the productivity of users of this framework.
Mainly we are counting on user feedback to drive this framework, as it is strongly characterized as a highly dynamic and business-case driven framework.
How to start using it?
Visiting the project website will give you a comprehensive starting point for JSPX. A demo project is also provided, demonstrating some use cases of the framework. And, of course, we are more than pleased to support any request at the JSPX support email: support DOT jspx AT gmail DOT com.
How do I determine the username of the currently logged in user from a python script? Elsewhere we are using scripted auth and that python script has several methods that Splunk calls and passes in the username; each method makes a HTTP POST to a REST API running on one of our servers. We need to use a similar approach to what we do in scripted auth's getUserInfo method, but have it be invoked from a custom command (defined in commands.conf), which means that the username won't be passed in. I assume that there is some way to get the current username, just haven't been able to find it yet. Thanks for any pointers,
Tom
BTW, we are currently on Splunk 4.1.4 in case that changes things
Did you try the cherrypy session object?
import cherrypy
user = cherrypy.session['user'].get('name')
I tried your method, but received an error. Any ideas on the following?
AttributeError: 'module' object has no attribute 'session'
You can extract it from the auth token.
First, in the definition of your search command in
commands.conf, set
[yourcommand]
filename = yourcommand.py
passauth = true
Your script will then receive a token that looks like:
<auth>
  <userId>admin</userId>
  <username>admin</username>
  <authToken>cbd900f3b28014a1e233679d05dcd805</authToken>
</auth>
(Note: The auth token will actually be in a single line with no whitespace. The above formatting is only for readability.)
Once you have that, it's just a matter of extracting the username from the string. For example, if you're using InterSplunk:
import splunk.Intersplunk as si

results, dummyresults, settings = si.getOrganizedResults()
authString = settings.get("authString", None)
if authString != None:
    start = authString.find('<userId>') + 8
    stop = authString.find('</userId>')
    user = authString[start:stop]
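If the string slicing feels fragile, the same userId can be pulled out with the standard library's XML parser instead. This assumes the token really is the well-formed XML shown above:

```python
import xml.etree.ElementTree as ET

# Sample token in the shape documented above.
token = ("<auth><userId>admin</userId><username>admin</username>"
         "<authToken>cbd900f3b28014a1e233679d05dcd805</authToken></auth>")
user = ET.fromstring(token).findtext("userId")
print(user)  # prints "admin"
```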
Hi,
Is there any pre-req in order to use the above script? I inserted it into my .py and it returns error code 1.
It looks like settings["owner"] will directly give the user ID.
import splunk.Intersplunk
results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()
splunk.Intersplunk.outputResults([{"user": settings["owner"]}])
Tutorial 12: Getting Started with DotStars (APA102)
DotStars, also known as the APA102, are individually addressable LEDs. With them you can create persistence of vision (POV) displays, achieve a wide range of colors, and build patterns relevant to your project.
In this tutorial I show you how you can connect a strip of DotStars to an Arduino Uno, how to install the library needed to make our lives easier, and how to upload your first sketch to drive them!
DIFFICULTY: EASY
CIRCUITRY KNOWLEDGE: NONE
C++ PROGRAMMING: LITTLE
You can copy / paste the code below if you’re having issues with typos or want a shortcut. However I recommend that you follow along in the tutorial to understand what is going on!
// Simple strand test for Adafruit Dot Star RGB LED strip.
// This is a basic diagnostic tool, NOT a graphics demo...helps confirm
// correct wiring and tests each pixel's ability to display red, green
// and blue and to forward data down the line. By limiting the number
// and color of LEDs, it's reasonably safe to power a couple meters off
// the Arduino's 5V pin. DON'T try that with other code!

#include <Adafruit_DotStar.h>
// Because conditional #includes don't work w/Arduino sketches...
#include <SPI.h>         // COMMENT OUT THIS LINE FOR GEMMA OR TRINKET
//#include <avr/power.h> // ENABLE THIS LINE FOR GEMMA OR TRINKET

#define NUMPIXELS 30 // Number of LEDs in strip

// Here's how to control the LEDs from any two pins:
#define DATAPIN  4
#define CLOCKPIN 5
//Adafruit_DotStar strip = Adafruit_DotStar(
//  NUMPIXELS, DATAPIN, CLOCKPIN, DOTSTAR_BRG);
// The last parameter is optional -- this is the color data order of the
// DotStar strip, which has changed over time in different production runs.
// Your code just uses R,G,B colors, the library then reassigns as needed.
// Default is DOTSTAR_BRG, so change this if you have an earlier strip.

// Hardware SPI is a little faster, but must be wired to specific pins
// (Arduino Uno = pin 11 for data, 13 for clock, other boards are different).
Adafruit_DotStar strip = Adafruit_DotStar(NUMPIXELS, DOTSTAR_BRG);

void setup() {
#if defined(__AVR_ATtiny85__) && (F_CPU == 16000000L)
  clock_prescale_set(clock_div_1); // Enable 16 MHz on Trinket
#endif
  strip.begin(); // Initialize pins for output
  strip.show();  // Turn all LEDs off ASAP
}

// Runs 10 LEDs at a time along strip, cycling through red, green and blue.
// This requires about 200 mA for all the 'on' pixels + 1 mA per 'off' pixel.

int      head  = 0, tail = -10; // Index of first 'on' and 'off' pixels
uint32_t color = 0xFF0000;      // 'On' color (starts red)

void loop() {
  strip.setPixelColor(head, color); // 'On' pixel at head
  strip.setPixelColor(tail, 0);     // 'Off' pixel at tail
  strip.show();                     // Refresh strip
  delay(20);                        // Pause 20 milliseconds (~50 FPS)

  if(++head >= NUMPIXELS) {         // Increment head index.  Off end of strip?
    head = 0;                       //  Yes, reset head index to start
    if((color >>= 8) == 0)          //  Next color (R->G->B) ... past blue now?
      color = 0xFF0000;             //   Yes, reset to red
  }
  if(++tail >= NUMPIXELS) tail = 0; // Increment, reset tail index
}
Everything you need should be included in the tutorial!
OK, here is my C++ program that I made with "Visual C++ 2005 Express Edition"...
#include <cstdlib>
#include <ctime>
#include <iostream>
using namespace std;

int main()
{
    cout << "Welcome To Danny's Game!.\n";
    cout << "See if you can beat me.\n";
    cout << "The First number below is your number.\n";
    srand((unsigned)time(0));
    int random_integerplayer;
    for(int index=0; index<1; index++)
    {
        random_integerplayer = (rand()%10)+1;
        cout << random_integerplayer << endl;
        cout << "\n";
        cout << "This is my number.\n";
        srand((unsigned)time(0));
        int random_integercomputer;
        for(int index=0; index<1; index++)
        {
            random_integercomputer = (rand()%10)+1;
            cout << random_integercomputer << endl;
            cout << "\n";
            if( random_integercomputer>=random_integerplayer )
                cout << "You Lose. ";
            else
                cout << "You Win! ";
        }
    }
}
OK, that was my code, but something won't work. Whenever I use it, it always comes up with two numbers (computer and player) that are both exactly the same!!! Please help me make it so that when it generates the random numbers, they are both different...
thxz alot! =) | https://www.daniweb.com/programming/software-development/threads/68188/help-my-program | CC-MAIN-2016-50 | refinedweb | 161 | 57.98 |
Manage models
Databricks provides a hosted version of MLflow Model Registry to help you to manage the full lifecycle of MLflow Models. Model Registry provides chronological model lineage (which MLflow experiment and run produced the model at a given time), model versioning, stage transitions (for example, from staging to production or archived), and email notifications of model events. You can also create and view model descriptions and leave comments.
This article covers how to use Model Registry as part of your machine learning workflow and includes instructions for both the Model Registry UI and the Model Registry API.
For an overview of Model Registry concepts, see MLflow guide.
In this section:
Requirements
To use the Model Registry UI:
- Databricks Runtime 6.4 or above with mlflow>=1.7.0 installed
- Databricks Runtime 6.4 ML with mlflow>=1.7.0 installed
- Databricks Runtime 6.5 ML or above
To use the Model Registry API:
- Databricks Runtime 6.5 or above
- Databricks Runtime 6.5 ML or above
- Databricks Runtime 6.4 or below with mlflow>=1.7.0 installed
- Databricks Runtime 6.4 ML or below with mlflow>=1.7.0 installed
Create or register a model
In this section:
Create or register a model using the UI
There are two ways to register a model in the Model Registry. You can register an existing model that has been logged to MLflow, or you can create and register a new, empty model and then assign a previously logged model to it.
Register an existing logged model from a notebook
In the Workspace, identify the MLflow run containing the model you want to register.
Click the Experiment icon
in the notebook tool bar.
In the Experiment Runs sidebar, click the
icon next to the date of the run. The MLflow Run page displays. This page shows details of the run including parameters, metrics, tags, and list of artifacts.
In the Artifacts section, click the directory named xxx-model.
Click the Register Model button at the far right.
In the dialog, click in the Model box and do one of the following:
- Select Create New Model from the drop-down menu. The Model Name field appears. Enter a model name, for example
scikit-learn-power-forecasting.
- Select an existing model from the drop-down menu.
Click Register.
- If you selected Create New Model, this registers a model named
scikit-learn-power-forecasting, copies the model into a secure location managed by the MLflow Model Registry, and creates a new version of the model.
- If you selected an existing model, this registers a new version of the selected model.
After a few moments, the Register Model button changes to a link to the new registered model version.
Click the link to open the new model version in the Model Registry UI. You can also find the model in the Model Registry by clicking
Models in the sidebar.
Create a new registered model and assign a logged model to it
You can use the Create Model button on the registered models page to create a new, empty model and then assign a logged model to it. Follow these steps:
On the registered models page, click Create Model. Enter a name for the model and click Create.
Follow Steps 1 through 3 in Register an existing logged model from a notebook.
In the Register Model dialog, select the name of the model you created in Step 1 and click Register. This registers a model with the name you created, copies the model into a secure location managed by the MLflow Model Registry, and creates a model version:
Version 1.
After a few moments, the MLflow Run UI replaces the Register Model button with a link to the new registered model version. You can now select the model from the Model drop-down list in the Register Model dialog on the Experiment Runs page. You can also register new versions of the model by specifying its name in API commands like Create ModelVersion.
Register a model using the API
There are three programmatic ways to register a model in the Model Registry. All methods copy the model into a secure location managed by the MLflow Model Registry.
To log a model and register it with the specified name during an MLflow experiment, use the
mlflow.<model-flavor>.log_model(...) method. If a registered model with the name doesn't exist, the method registers a new model, creates Version 1, and returns a ModelVersion MLflow object. If a registered model with the name exists already, the method creates a new model version and returns the version object.
with mlflow.start_run(run_name=<run-name>) as run:
    ...
    mlflow.<model-flavor>.log_model(<model-flavor>=<model>,
                                    artifact_path="<model-path>",
                                    registered_model_name="<model-name>")
To register a model with the specified name after all your experiment runs complete and you have decided which model is most suitable to add to the registry, use the
mlflow.register_model() method. For this method, you need the run ID for the runs: URI argument. If a registered model with the name doesn't exist, the method registers a new model, creates Version 1, and returns a ModelVersion MLflow object. If a registered model with the name exists already, the method creates a new model version and returns the version object.
result = mlflow.register_model("runs:/<run-id>/<model-path>", "<model-name>")
To create a new registered model with the specified name, use the MLflow Client API
create_registered_model() method. If the model name exists, this method throws an MLflowException.
client = MlflowClient()
result = client.create_registered_model("<model-name>")
Control access to models
To learn how to control access to models registered in Model Registry, see MLflow Model permissions.
Transition a model stage
A model version has one of the following stages: None, Staging, Production, or Archived. The Staging stage is meant for model testing and validating, while the Production stage is for model versions that have completed the testing or review processes and have been deployed to applications for live scoring. An Archived model version is assumed to be inactive, at which point you can consider deleting it. Different versions of a model can be in different stages.
A user with appropriate permission can transition a model version between stages. If you have permission to transition a model version to a particular stage, you can make the transition directly. If you do not have permission, you can request a stage transition and a user that has permission to transition model versions can approve, reject, or cancel the request.
Transition a model stage using the UI
Follow these instructions to transition a model’s stage.
To display the list of available model stages and your available options, in a model version page, click the Stage: <Stage> button and request or select a transition to another stage.
Enter an optional comment and click OK.
Transition a model version to the Production stage
After testing and validation, you can transition or request a transition to the Production stage.
Model Registry allows more than one version of the registered model in each stage. If you want to have only one version in Production, you can transition all versions of the model currently in Production to Archived by checking Transition existing Production model versions to Archived.
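The semantics of that checkbox can be sketched in a few lines of Python. This is a hypothetical model of the behavior, not Databricks code: moving one version to Production archives any other version currently in Production.

```python
def transition_with_archive(stages, target_version, new_stage="Production"):
    """Move target_version to new_stage; archive any other Production version.

    stages is a dict mapping version number -> current stage name.
    """
    if new_stage == "Production":
        for version, stage in stages.items():
            if stage == "Production" and version != target_version:
                stages[version] = "Archived"  # demote the old Production version
    stages[target_version] = new_stage
    return stages
```

For example, promoting version 2 while version 1 is in Production leaves version 1 Archived and version 2 as the sole Production version.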
Approve, reject, or cancel a model version stage transition request
A user without stage transition permission can request a stage transition. The request appears in the Pending Requests section in the model version page:
To approve, reject, or cancel a stage transition request, click the Approve, Reject, or Cancel link.
The creator of a transition request can also cancel the request.
Transition a model stage using the API
Users with appropriate permissions can transition a model version to a new stage.
To update a model version stage to a new stage, use the MLflow Client API
transition_model_version_stage() method:
client = MlflowClient()
client.transition_model_version_stage(
    name="<model-name>",
    version=<model-version>,
    stage="<stage>",
    description="<description>"
)
The accepted values for <stage> are: "Staging" | "staging", "Archived" | "archived", "Production" | "production", "None" | "none".
Use model for inference
Preview
This feature is in Public Preview.
After a model is registered in Model Registry, you can automatically generate a notebook to use the model for batch inference or create an endpoint to use the model for real-time serving.
In the upper right corner of the registered model page or the model version page, click the inference button. The Configure model inference dialog displays, allowing you to configure batch or real-time inference.
Configure batch inference
When you follow these steps to create a batch inference notebook, the notebook is saved in your user folder under the
Batch-Inference folder in a folder with the model’s name. You can edit the notebook as needed.
Click the Batch inference tab.
From the Model version drop-down, select the model version to use. The first two items in the drop-down are the current Production and Staging version of the model (if they exist). When you select one of these options, the notebook automatically uses the Production or Staging version as of the time it is run. You do not need to update the notebook as you continue to develop the model.
Click the Browse button next to Input table. The Select input data dialog displays. If necessary, you can change the cluster in the Compute drop-down.
Select the database and table containing the input data for the model, and click Select. The generated notebook automatically imports this data and sends it to the model. You can edit the generated notebook if the data requires any transformations before it is input to the model.
Predictions are saved in a folder in the directory
dbfs:/FileStore/batch-inference. By default, predictions are saved in a folder with the same name as the model. Each run of the generated notebook writes a new file to this directory with the timestamp appended to the name. You can also choose not to include the timestamp and to overwrite the file with subsequent runs of the notebook; instructions are provided in the generated notebook.
You can change the folder where the predictions are saved by typing a new folder name into the Output table location field or by clicking the folder icon to browse the directory and select a different folder.
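The timestamped naming scheme described above can be sketched as a small helper. This is hypothetical code; the generated notebook's actual logic may differ, but the idea is the same: with a timestamp each run writes a new file, without one each run overwrites the same file.

```python
from datetime import datetime

def prediction_path(model_name, ts=None, base="dbfs:/FileStore/batch-inference"):
    """Build the output location for one batch-inference run.

    ts: a datetime to append to the file name; None means overwrite
    the same file on every run.
    """
    suffix = ts.strftime("-%Y%m%d%H%M%S") if ts else ""
    return f"{base}/{model_name}/{model_name}{suffix}"
```

Calling it with a timestamp yields a unique file per run, e.g. prediction_path("forecast", some_datetime); calling it without one always returns the same path.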
Configure real-time inference
In the Configure model inference dialog, click the Real-time tab.
If serving is not enabled for the model, click Enable serving. The Serving tab of the registered model page appears, with Status shown as Pending. After a few minutes, Status changes to Ready.
If serving is already enabled, click View existing real-time inference to display the Serving tab.
Control notification preferences
You can configure Model Registry to notify you by email about activity on registered models and model versions that you specify.
On the registered model page, the Notify me about menu shows three options:
- All new activity: Send email notifications about all activity on all model versions of this model. If you created the registered model, this setting is the default.
- Activity on versions I follow: Send email notifications only about model versions you follow. With this selection, you receive notifications for all model versions that you follow; you cannot turn off notifications for a specific model version.
- Mute notifications: Do not send email notifications about activity on this registered model.
The following events trigger an email notification:
- Creation of a new model version
- Request for a stage transition
- Stage transition
- New comments
You are automatically subscribed to model notifications when you do any of the following:
- Comment on that model version
- Transition a model version’s stage
- Make a transition request for the model’s stage
To see if you are following a model version, look at the Follow Status field on the model version page, or at the table of model versions on the registered model page.
Turn off all email notifications
You can turn off email notifications in the Model Registry Settings tab of the User Settings menu:
- Click
Settings in the lower left corner of your Databricks workspace.
- Click User Settings.
- Go to the Model Registry Settings tab.
- Deselect Turn on model registry email notifications.
An admin can turn off email notifications for the entire organization in the Admin Console.
Maximum number of emails sent
Model Registry limits the number of emails sent to each user per day per activity. For example, if you receive 20 emails in one day about new model versions created for a registered model, Model Registry sends an email noting that the daily limit has been reached, and no additional emails about that event are sent until the next day.
To increase the limit of the number of emails allowed, contact your Databricks representative.
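The capping behavior can be modeled like this. The code below is a hypothetical sketch of the limit described above (the cap of 20 comes from the example, not from documented configuration): the email that hits the limit carries the "daily limit reached" notice, and everything after it is suppressed until the next day.

```python
from collections import defaultdict

DAILY_LIMIT = 20  # assumed cap, taken from the example above

class Notifier:
    """Per-user, per-day, per-activity email cap."""

    def __init__(self):
        self.sent = defaultdict(int)  # (user, day, event) -> emails sent

    def notify(self, user, day, event):
        key = (user, day, event)
        if self.sent[key] >= DAILY_LIMIT:
            return "suppressed"            # over the cap: nothing is sent
        self.sent[key] += 1
        if self.sent[key] == DAILY_LIMIT:
            return "sent+limit-notice"     # final email notes the cap was hit
        return "sent"
```

The counter resets naturally on the next day because the day is part of the key.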
Annotate a model or model version
You can provide information about a model or model version by annotating it. For example, you may want to include an overview of the problem or information about the methodology and algorithm used.
Annotate a model or model version using the UI
You can annotate a model in two ways: using a description or using comments. Descriptions are available for models and model versions; comments are only available for model versions. Descriptions are intended to provide information about the model. Comments provide a way to maintain an ongoing discussion about activities on a model version.
Add or update the description for a model or model version
From the registered model or model version page, click the Description
icon. An edit window displays.
Enter or edit the description in the edit window.
Click Save.
If you entered a description of a model version, the description appears in the Description column in the table on the registered model page. The column displays a maximum of 32 characters or one line of text, whichever is shorter.
Add comments for a model version
- Scroll down the model version page and click the down arrow next to Activities.
- Type your comment in the edit window and click Add Comment.
Rename a model (API only)
To rename a registered model, use the MLflow Client API
rename_registered_model() method:
client = MlflowClient()
client.rename_registered_model("<model-name>", "<new-model-name>")
Note
You can rename a registered model only if it has no versions, or all versions are in the None or Archived stage.
All registered models live in the MLflow Model Registry. You can search for models using the UI or the API.
To display all registered models, click
Models in the sidebar.
To search for a specific model, type in the model name in the search box.
You can also filter models based on tags. Click Filter, and enter tags in the format:
tags.<key>=<value>. To filter based on multiple tags, use the
AND operator.
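A tiny helper for composing such filter strings might look like the following. This is hypothetical convenience code, not part of the MLflow API; it just assembles the tags.<key>=<value> clauses joined by AND, in the format the search box expects.

```python
def tag_filter(**tags):
    """Build a Model Registry tag filter string, e.g. tags.env=prod AND tags.team=ops."""
    # Sort keys so the output is deterministic regardless of call order.
    return " AND ".join(f"tags.{key}={value}" for key, value in sorted(tags.items()))
```

For example, tag_filter(team="ops", env="prod") produces a two-clause filter you can paste into the search box.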
To retrieve a list of all registered models, use the MLflow Client API list_registered_models() method.
You can also search for a specific model name and list its version details using MLflow Client API
search_model_versions() method:
from pprint import pprint

client = MlflowClient()
[pprint(mv) for mv in client.search_model_versions("name='<model-name>'")]
Delete a model or model version
You can delete a model using the UI or the API.
Delete a model version or model using the UI.
To delete a model version:
- Click
Models in the sidebar.
- Click a model name.
- Click a model version.
- Click the menu at the upper right corner of the screen and select Delete from the drop-down menu.
To delete a model:
- Click
Models in the sidebar.
- Click a model name.
- Click the menu at the upper right corner of the screen and select Delete from the drop-down menu.
Delete a model version or model using the API.
Delete a model version
To delete a model version, use the MLflow Client API
delete_model_version() method:
# Delete versions 1, 2, and 3 of the model
client = MlflowClient()
versions = [1, 2, 3]
for version in versions:
    client.delete_model_version(name="<model-name>", version=version)
Example
This example illustrates how to use the Model Registry to build a machine learning application: MLflow Model Registry example. | https://docs.databricks.com/applications/machine-learning/manage-model-lifecycle/index.html | CC-MAIN-2021-43 | refinedweb | 2,665 | 54.52 |
Today I released an OSS library named Gapi4net on CodePlex. It is a wrapper for some of Google's APIs: Web, Local, Video, Blog, News, Book, Image, and Patent search, plus language translation. In the past I also saw some wrappers for the Google API, notably google-api-for-dotnet and gapidotnet. But as I realized, they are hard to use, and when you use them you must remember many input parameters. For example, if you use google-api-for-dotnet, you must write:
GwebSearchClient client = new GwebSearchClient(/* Enter the URL of your site here */);
IList<IWebResult> results = client.Search("Google API for .NET", 32);
foreach (IWebResult result in results)
{
    Console.WriteLine("[{0}] {1} => {2}", result.Title, result.Content, result.Url);
}
With Gapi4net, the same search looks like this:

var result = Gapi4NetFactory<Web>
    .Init()
    .With(x => x.Version, "1.0")
    .With(x => x.Query, searchText)
    .With(x => x.Start, start)
    .Create();
I wrapped it and exposed it with a fluent interface and lambda expressions, so the input parameters are more meaningful than in google-api-for-dotnet. The input parameters are strongly named now, so you do not need many comments on each parameter. And even if you comment every parameter, are you sure the end user will read it? The trend now is to avoid comments in code. Simply put: if you comment your code today and tomorrow you get a change request for that code, you may forget to update the comment, and the comment goes out of date. So why keep comments that are not meaningful? I usually try to make my code as clear as possible and keep few comments, letting the code speak for itself about what it does. In this sample above, I use lambda expressions to make the input parameters strongly named and meaningful. I am very keen on this.
After saying so much about Gapi4net, I hope you will enjoy using it in practice. Now I will explain the architecture used in this library. The first thing is the domain model; next I will explain some technical details; and finally there is an example written in ASP.NET MVC 3 Preview 1 to prove the solution.
Now let's jump into Gapi4net's domain model. You should see this link for the APIs from Google. After seeing it, you will realize there are many things in it. Don't worry about that; just browse it for fun, because I wrapped all of it in Gapi4net. The main things I wrapped are Web, Local, Video, Blog, News, Book, Image, and Patent search, plus language translation. I worked on these nine kinds of Google APIs over two weeks, and in the end everything went well. Now I will show all of that work in this post.
+ Domain Model's Web Search:
+ Domain Model's Local Search:
+ Domain Model's Video Search:
+ Domain Model's Blog Search:
+ Domain Model's News Search:
+ Domain Model's Book Search:
+ Domain Model's Image Search:
+ Domain Model's Patent Search:
+ Domain Model's Language Translation:
I used the Json.NET library to convert the JSON data to my entities. Please see my previous post about the conversion technique in Json.NET. In this post I only show a few lines of code for Web search, to save space; everything else is available here. This is the code for web search:
[JsonObject]
public class WebSearchResult : ContractBase
{
    [JsonProperty("responseData")]
    [JsonConverter(typeof(CustomObjectCreationConverter<IResponseDataResult, ResponseDataResult>))]
    public IResponseDataResult ResponseData { get; set; }
}

[JsonObject]
public class ResponseDataResult : IResponseDataResult
{
    [JsonProperty("cursor")]
    [JsonConverter(typeof(CustomObjectCreationConverter<ICursorResult, CursorResult>))]
    public ICursorResult Cursor { get; set; }

    [JsonProperty("results")]
    [JsonConverter(typeof(CustomArrayCreationConverter<IMainResult, MainResult>))]
    public IMainResult[] MainResult { get; set; }
}

[JsonObject]
public class MainResult : IMainResult
{
    [JsonProperty("GsearchResultClass")]
    public string GsearchResultClass { get; set; }

    [JsonProperty("cacheUrl")]
    public string CacheUrl { get; set; }

    [JsonProperty("content")]
    public string Content { get; set; }

    [JsonProperty("title")]
    public string Title { get; set; }

    [JsonProperty("titleNoFormatting")]
    public string TitleNoFormatting { get; set; }

    [JsonProperty("unescapedUrl")]
    public string UnescapedUrl { get; set; }

    [JsonProperty("url")]
    public string Url { get; set; }

    [JsonProperty("visibleUrl")]
    public string VisibleUrl { get; set; }
}
In the next post I will continue analyzing some of the techniques I used in Gapi4net, along with an example written in ASP.NET MVC 3 Preview 1.
Good bye and see you next time!
Hey cool! Tell me is it available / compilable into Silverlight and will you add support for Google Reader api ?
Cheers,
ian
Visual inheritance is a concept that most people seem to be familiar with. However, when programmers are asked whether they're using visual inheritance, most say that they don't take advantage of it. The next section will show you what visual inheritance is, and how to take advantage of it quickly and easily within your applications. By the time you're done with this section, you will have become a convert and will use visual inheritance in all your Windows Forms applications in the future.
When you create a class that inherits from another class, the newly created class inherits those members and methods that the child class is allowed to inherit. The same is true for an inherited form. When you look at it from the lowest level, forms are just classes. A form that inherits from another form is still just one class inheriting from another class. The trick to remember is that a form renders its GUI through inheritable members, properties, and methods. By virtue of this inheritance, child forms inherit their GUI look and feel from their parents.
Just as with standard child classes, child forms are free to reuse any behavior or property of a parent. In addition, they can choose to override and provide their own implementation for any property or method where it is applicable.
You have already seen how to create reusable controls and you have seen their benefits. With a reusable control, you have a self-contained piece of functionality and user interface that you can reuse across multiple forms. With an inherited form, you create some piece of the user interface and functionality that will be provided free of charge to all child forms inheriting from the same parent. This gives you the ability to rapidly create a consistent look and feel throughout your Windows Forms application as well as giving you the ability to easily change that look and feel.
To truly see the power of visual inheritance, consider a scenario in which not using it could be disastrous. As an example, assume that you have created a large Windows Forms application that has at least 50 forms in it. On each of those forms, you have placed the company logo and an area within the logo that you can click to bring up the company's home page. Now assume that your boss has just told you to replace all 50 of those forms with the new company logo, and that the link should no longer open the company website; it should instead open a Word document that will be installed with the application.
If you haven't used visual inheritance to create the application, you will be stuck making the change manually to all 50 forms. In addition, you will have to retest every one of the forms because the code is duplicated on each and a mass-change operation like this is highly prone to errors and typos.
However, if you had used visual inheritance, you could have simply made the change to the topmost ancestor in the hierarchy of inherited forms. All the child forms would be aware of the change automatically, and you would not have to rewrite a single line of code in any of them.
This section will show you an example of creating a system that uses visual inheritance. This example illustrates a sample company that wants to create a few standard forms that will be used to create a consistent look and feel for all of their applications.
To build this sample yourself, first create a library that contains a hierarchy of inherited forms. The sample will be finished by creating an application that takes advantage of the forms that use visual inheritance.
Open Visual Studio .NET 2003 and create a new solution called VisualInheritance. In that solution, add a new Windows Application called FormsLibrary. FormsLibrary will become a class library eventually. It was started as a Windows Application because doing so enables a developer to right-click the project and choose Add Windows Form, which can't be done with a standard class library.
Start coding with the form at the top of the hierarchy of company forms, CompanyDefault. Add a new Windows Form to the FormsLibrary project and call it CompanyDefault. To that form, add a panel that docks on top. You can decorate this any way you like, but at some point, add a label to the top panel and call it lblApplicationName. Next, add a new property to the form like the following:
protected string ApplicationName
{
    get { return lblApplicationName.Text; }
    set { lblApplicationName.Text = value; }
}
This property will be used by child forms that want to set the application name. This gives the child forms the capability to change the display of something that was created by the parent form. Although the code could have exposed the label itself as protected, doing so violates some of the guidelines of encapsulation that are always good to follow and also gives a cleaner, more maintainable interface. The protected keyword was used so that only forms that inherit from this form, but not unrelated forms, can set this property.
Now right-click the FormsLibrary project and choose Add Inherited Form. Do not add a regular form; make sure that you choose Inherited. From there, navigate down to the user interface node and choose Inherited Form. Call the form CompanyOKCancel and when Visual Studio .NET asks you for the parent form, make sure that you choose CompanyDefault.
When the form first appears, you will see that it looks just like the CompanyDefault form. The difference between this form and the parent form is that the inherited user interface elements are locked in position and are read-only. This is an important distinction. What you see in the designer is a partial render. The user interface elements inherited from the parent are not actually part of your design surface; they are drawn in place so that you can see what your form will look like at runtime to help you position new controls.
As its name might suggest, you are going to add an OK button and a Cancel button to this form. To do so, create another panel and set the Dock property to the bottom of the form. I set the background to white to make it stand out against the content area of the form. To keep your code in sync with the example, name the OK button btnOK and the Cancel button btnCancel. Figure 17.1 shows the CompanyOKCancel form from inside the Visual Studio .NET designer.
Before moving on to writing the code that will inherit from these two forms, some event handling must be added. The OK and Cancel buttons are no good to the user if forms that inherit from CompanyOKCancel cannot make use of them. To allow child forms to use these buttons and still maintain proper encapsulation, some events must be created for the child form to consume and respond to. Listing 17.1 shows the complete code-behind for the CompanyOKCancel form.
using System;
using System.Collections;
using System.ComponentModel;
using System.Drawing;
using System.Windows.Forms;

namespace SAMS.Evolution.CSharpUnleashed.VisualInheritance.Library
{
    public class CompanyOKCancel :
        SAMS.Evolution.CSharpUnleashed.VisualInheritance.Library.CompanyDefault
    {
        private System.Windows.Forms.Panel panel2;
        private System.Windows.Forms.Button btnCancel;
        private System.Windows.Forms.Button btnOK;
        private System.ComponentModel.IContainer components = null;

        protected delegate void ButtonClickDelegate(object sender, System.EventArgs e);
        protected event ButtonClickDelegate OKClicked;
        protected event ButtonClickDelegate CancelClicked;

        public CompanyOKCancel()
        {
            // This call is required by the Windows Form Designer.
            InitializeComponent();
        }

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                if (components != null)
                {
                    components.Dispose();
                }
            }
            base.Dispose(disposing);
        }

        // *** Designer Generated Code Hidden from Listing

        private void btnOK_Click(object sender, System.EventArgs e)
        {
            if (OKClicked != null)
            {
                OKClicked(sender, e);
            }
        }

        private void btnCancel_Click(object sender, System.EventArgs e)
        {
            if (CancelClicked != null)
            {
                CancelClicked(sender, e);
            }
        }
    }
}
The important parts of this listing are the delegates and the events. The delegates dictate the type of the event, and the events are exposed as protected members. This allows child forms that inherit from this form to consume these events, but no other forms can do so.
As you can see with the event handlers for the btnOK and btnCancel Click events, if a child form has subscribed to listen to the appropriate events, it will be notified. This method of deferring events to a child from the parent is an excellent way of maintaining encapsulation and good object-oriented design while still maintaining flexibility and functionality.
Continue this example by creating some forms that inherit from the forms in the base library. The first step is to change the FormsLibrary Windows application to a class library. You can do that by changing the output type to Class Library in the Project Properties dialog.
Next, add a Windows application to the solution called Visual Inheritance. Add a reference from the new Windows application to the class library project that we just finished building. Add a new inherited form called InheritedForm, and choose CompanyDefault as the parent form. Then add another inherited form called InheritedOKCancel and choose CompanyOKCancel as the parent form.
To add some event handlers for the CompanyOKCancel form's events to the InheritedOKCancel form, set the code for the InheritedOKCancel form to the code shown in Listing 17.2.
using System;
using System.Collections;
using System.ComponentModel;
using System.Drawing;
using System.Windows.Forms;

namespace SAMS.Evolution.CSharpUnleashed.VisualInheritance
{
    public class InheritedOKCancel :
        SAMS.Evolution.CSharpUnleashed.VisualInheritance.Library.CompanyOKCancel
    {
        private System.Windows.Forms.Label label1;
        private System.ComponentModel.IContainer components = null;

        public InheritedOKCancel()
        {
            // This call is required by the Windows Form Designer.
            InitializeComponent();
            this.CancelClicked += new ButtonClickDelegate(InheritedOKCancel_CancelClicked);
            this.OKClicked += new ButtonClickDelegate(InheritedOKCancel_OKClicked);
        }

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        protected override void Dispose( bool disposing )
        {
            if( disposing )
            {
                if (components != null)
                {
                    components.Dispose();
                }
            }
            base.Dispose( disposing );
        }

        // Windows forms designer code hidden for listing

        private void InheritedOKCancel_CancelClicked(object sender, EventArgs e)
        {
            MessageBox.Show(this, "You clicked Cancel.");
        }

        private void InheritedOKCancel_OKClicked(object sender, EventArgs e)
        {
            MessageBox.Show(this, "You clicked OK.");
        }
    }
}
The code in Listing 17.2 sets up the event handlers in the child form to respond to events fired from controls in the parent form. This particular fact is extremely important to remember because it will help you create powerful hierarchies of reusable form templates for your applications in the future.
Now that you have created your inherited forms, create a few test controls on the main form. Create two buttons on the main form. The first button will launch the first form you created, and the second button will launch the second form.
Figure 17.2 shows the first form after it has been launched. Figure 17.3 shows the second form after it has been launched. Click the OK button on the InheritedOKCancel form to test the event-handling setup.
As you saw in the previous code sample, a separate library was created that contained the forms from which the code would inherit. This is a design pattern that I highly recommend you adopt. When creating a hierarchy of reusable forms, you want to be able to use those forms across multiple projects to provide your developers with a consistent look and feel, and a minimum set of functionality. If your forms are in a separate library that is designed to stand on its own, but that can be easily integrated by child forms, you will find that the whole experience of creating a Windows Forms application might become easier and less frustrating.
Another design pattern that you should follow is that of maintaining encapsulation. As you saw, the code ensured that properties, methods, and events to be used by child forms were set as protected, and it didn't allow the controls used by parent forms to be accessed directly by child forms. Forcing this kind of encapsulation also forces you to think more closely about the interaction model between the hierarchy of forms.
Inherited forms and visual inheritance can be extremely powerful tools, and if you spend a little extra design time up front, the rewards will more than pay for the time spent. | https://flylib.com/books/en/1.238.1.121/1/ | CC-MAIN-2022-05 | refinedweb | 2,034 | 55.84 |
0
Hello everyone...
For now i have build a Human class and i want to test it now but on 2 lines there is an error and both errors states "void expected".
I have a main function in my class but later i gonna remove it because i want this Human class later to be a Superclass and a warrior, mage and so on will be a subclass that inherits everything what i got so far in my Human class.
Errors are on line 13 and 32
Here are the codes.
import java.util.*;

public class Human {

    public static void main(String[] arguements) {
        Scanner CharSet = new Scanner(System.in);
        String name;
        int strength, health, intelligence, agility;

        public void MyChar() { //On this line is a error() { //On this line is a error
            System.out.println("I am " + name + ". ");
            System.out.println("My Strength is " + strength + " my Health is " + health);
            System.out.println("My Intelligence is " + intelligence + " and my Agility is " + agility);
        }
    }
}
Hope you can help me with that...
Edited 6 Years Ago by HelloMe: making more understandable | https://www.daniweb.com/programming/software-development/threads/267623/void-expected-error | CC-MAIN-2016-50 | refinedweb | 176 | 63.39 |
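The "void expected" errors come from declaring a method inside main, which Java does not allow. A minimal corrected sketch, with the method moved out of main to class level (the sample values in main are made up, and the Scanner input is omitted for brevity):

```java
public class Human {

    String name;
    int strength, health, intelligence, agility;

    // Declared at class level, outside main, so it compiles.
    public void myChar() {
        System.out.println("I am " + name + ". ");
        System.out.println("My Strength is " + strength + " my Health is " + health);
        System.out.println("My Intelligence is " + intelligence + " and my Agility is " + agility);
    }

    public static void main(String[] args) {
        Human hero = new Human();   // made-up sample values below
        hero.name = "Conan";
        hero.strength = 18;
        hero.health = 100;
        hero.intelligence = 7;
        hero.agility = 12;
        hero.myChar();
    }
}
```

Since the fields are now instance fields, a Warrior or Mage subclass can later inherit them directly, which matches the stated plan for the class.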
4 years, 5 months ago.
USBHostSerial buffers and functions - talking to FTDI chip
Hi,
I am trying to use an MBED to control a GPIB instrument, by using this GPIB-USB converter () (which uses an FDTI chip) and then using the serial host library to send SCPI commands.
So far, I am able to talk to the prologix device, get the model identifier, set the GPIB address and get data from a *IDN? query on the instrument, but I'm unable to get any data. Longer SCPI commands just don't work at all. My output data also appears embedded in '1'1'1'1'1'1'1'1 arrays, and sometimes doesn't appear on the first few calls of the loop. I assume that something is up with the way I'm using the buffers, but I can't see a better way of doing it, and there doesn't seem to be much documentation of the functions for the latest version - I would like to use an event handler really, but I'm not sure if this is possible?
Here is my (very crude) code:
USB serial host code
#include "mbed.h" #include "USBHostSerial.h" DigitalOut led(LED1); Serial pc(USBTX, USBRX); int i; int main() { pc.printf("\nMain started \r\n"); USBHostSerial serial; char str[1]; int len; while(1) { wait(1); pc.printf("Start\r\n"); // try to connect a serial device while(!serial.connect()) pc.printf("Trying to connect \r\n"); wait(1); pc.printf("Connected \r\n"); // in a loop, print all characters received // if the device is disconnected, we try to connect it again while (1) { serial.printf("*IDN?\r\n"); // send SCPI command led=!led; // if device disconnected, try to connect it again if (!serial.connected()) break; i = 0; // print characters received while (serial.available()) { pc.printf("%c", serial.getc()); i++; } pc.printf(" (%d)\r\n\n", i); wait(0.3); } } }
Any help would be very much appreciated :-)
Cheers,
Tom
1 Answer
4 years, 5 months ago.
So, I have made some progress, but there are still a few things that puzzle me. I've kept more or less the same structure, but clear the buffer at some strategic points, and put in some delays here and there - but these were done more by trial and error rather than by design... I also had missed putting the two pull-down resistors on the data lines, which have helped matters. However, I still get the '1'1'1'1'1'1' 'noise' coming through (this doesn't happen at all when I use the USB device normally on a PC). My data comes amongst this noise and I can't always filter it out. I've also noticed that it gets worse when I have high currents running on a test rig nearby, so am of course wondering if it is noise-related, but I don't understand why it doesn't do it on the PC at all. I wondered if it might be some sort of FTDI timeout thingy?
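One pattern that often helps with the read side of this kind of problem (sketched here in Java, since the idea is not mbed-specific): instead of draining only whatever happens to be buffered at the instant `available()` is polled, keep reading until a line terminator or a timeout. The `BytePort` interface below is a made-up stand-in for the serial device, not a real library API.

```java
// A made-up stand-in for the serial port: returns the next byte,
// or -1 when nothing is buffered right now.
interface BytePort {
    int read();
}

class LineReader {
    // Read until '\n' or until timeoutMs elapses, whichever comes first.
    // Unlike "while (available())", this keeps waiting for bytes that
    // are still in flight from the instrument.
    static String readLine(BytePort port, long timeoutMs) {
        StringBuilder sb = new StringBuilder();
        long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
        while (System.nanoTime() < deadline) {
            int c = port.read();
            if (c < 0) {
                try {
                    Thread.sleep(1);          // nothing buffered yet: wait a bit
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
                continue;
            }
            if (c == '\n') {
                return sb.toString();         // terminator reached
            }
            if (c != '\r') {
                sb.append((char) c);          // skip CR so "\r\n" replies are clean
            }
        }
        return sb.toString();                 // timeout: return what arrived
    }
}
```

Applied to the mbed loop, the same shape (poll, sleep briefly when nothing is available, stop on '\n' or timeout) gives each *IDN? reply time to arrive instead of printing whatever fragment is already in the FIFO.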
Posted by Tom Hillman, 11 May 2015
Andrew Watson
As you can tell I am just starting out, but I have been told I will probably be good at it
public class HelloWorld {
public static void main(string[] args) {
system.out.println("Hello world") ;
}
}
the compiler says
"HelloWorld.java:3: error: cannot find symbol
public static void main(string[] args) {
^
symbol: class string
location: class HelloWorld
HelloWorld.java:5: error: package system does not exist
system.out.println("Hello world") ;
^
2 errors"
will somebody please explain to me what is wrong
Re: I have started with java but compiler gives error
Capitalization is important in this context. The class is String and I believe you need System like
public static void main(String[] args) {
System.out.println("Hello world") ;
} | https://www.mindstick.com/forum/12566/i-have-started-with-java-but-compiler-gives-erroer | CC-MAIN-2018-09 | refinedweb | 129 | 53 |
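Putting both fixes together, the complete corrected program is:

```java
public class HelloWorld {
    public static void main(String[] args) {
        // "String" and "System" are class names; Java is case-sensitive,
        // so "string" and "system" do not resolve to them.
        System.out.println("Hello world");
    }
}
```

The same case-sensitivity rule explains both compiler messages: `cannot find symbol: class string` and `package system does not exist`.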
Problem ending conversation in RichFaces MenuItem - Susanne Jarl, Aug 22, 2008 11:36 AM
The following code for this works. What happens is that a new conversation is started and the user is redirected to the chooseSubject page defined by the transition in the jpdl.
<rich:menuItem
<s:link
</rich:menuItem>
Action method in handout component:
@Begin(pageflow = "handout")
public void personHandout() {
log.debug("Person handout");
}
JPDL:
<start-state
<transition name="next" to="chooseSubject"/>
</start-state>
But now, the problem is that I have to click on the link in the menuItem for this to work. I want to be able to click on the whole menuItem area.
So I tried the following:
<rich:menuItem
<s:conversationPropagation
</rich:menuItem>
the problem is that it does not work when another conversation is present. Then my action method never gets called and I end up in no-conversation-page of the previous conversation with an error message saying "The conversation ended.". But if no other conversation is present it works fine.
Any ideas of how to make this work? Since I can't have multiple start pages in a pageflow I need to start the pageflow from an action method in a bean and for usability reasons I need to be able to click on the whole menuItem area and not just the link.
1. Re: Problem ending conversation in RichFaces MenuItem - Susanne Jarl, Aug 22, 2008 2:26 PM (in response to Susanne Jarl)
I discovered that none of the above code works. Both get me the conversation ended error message.
I use
conversation-required=true
for these other conversations. My question is how I can end them and in on action method or one s:link also start a new conversation and/or pageflow.
Is that possible or do the user have to click two buttons to first end the conversation and then start a new one?
I have a menu where I always want to end the previous conversation and start a new one when I click on a menu item.
Any ideas?
2. Re: Problem ending conversation in RichFaces MenuItem - Tamer Gür, Aug 26, 2008 10:11 PM (in response to Susanne Jarl)
Hi try this,
in your menu item
<rich:menuItem
and add an util function somewhere like this
@End(beforeRedirect=true)
public String globalRedirect(String pageName) {
    return pageName;
}
3. Re: Problem ending conversation in RichFaces MenuItem - Susanne Jarl, Aug 26, 2008 10:25 PM (in response to Susanne Jarl)
Thanks for your answer, but I guess your solution only ends the conversation and does not start a new conversation.
I want to be able to click on menuItem and both end and begin a new conversation. For example if this would be possible:
<rich:menuItem
    <s:conversationPropagation
    <s:conversationPropagation
</rich:menuItem>
Where the conversation is immediately ended and a new one begins, and the ending of a conversation does not give you the message "Conversation Ended" and a redirect to the no-conversation-page.
Also I wonder how to both end and start conversations in the beans.
For example would this be possible:
@End(beforeRedirect=true)
@Begin
public void someMethod() {
    // some code
}
or like this:
@End(beforeRedirect=true)
public void someMethod() {
    someOtherMethod();
}

@Begin
public void someOtherMethod() {
    // some code
}
Thanks in advance for your answer!
4. Re: Problem ending conversation in RichFaces MenuItem - Tamer Gür, Aug 27, 2008 8:36 AM (in response to Susanne Jarl)
Also add this to your chooseSubject.page.xhtml
<begin-conversation
5. Re: Problem ending conversation in RichFaces MenuItem - Vincent Crepin, Sep 25, 2008 6:01 AM (in response to Susanne Jarl)
You can end a conversation and start a new one like this:
<rich:menuItem
    <f:facet
        <h:graphicImage
    </f:facet>
</rich:menuItem>
The action is on the backing bean of the current page.
Here is the code of the action:
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
@End
public String gotoCarManagementList() {
    return "carList";
}
carList is the logical name of the other page you want to go to, and it needs a new conversation. Here is its description in pages.xml:
<page view-
    <description>The month list page of the application</description>
    <navigation>
        <rule if-
            <redirect view-
            </redirect>
        </rule>
    </navigation>
</page>
<page view-
    <description>The car list page of the application</description>
    <navigation>
        <rule if-
            <redirect view-
                <param name="conversationPropagation" value="end" />
            </redirect>
        </rule>
    </navigation>
</page>
You see that the navigation rule of the page you quit calls an entry method on the bean of the new page you want to go to. Here is the code:
@Begin
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public String enterList() {
    this.com_servier_hr_session_uievents_carList = null;
    queryObjects();
    return "carList";
}
Voila, it ends a conversation and starts a new one.
Cheers.
6. Re: Problem ending conversation in RichFaces MenuItem - Susanne Jarl, Sep 30, 2008 11:26 PM (in response to Susanne Jarl)
Thanks for all your answers! But I really must say that the solutions are very complicated and require a lot of code that is only there to solve this problem and therefore looks ugly. I don't understand why there is no simpler way to do this? I would like it to be as in my examples. Maybe I should add this in a feature request?
7. Re: Problem ending conversation in RichFaces MenuItem - Francisco Jose Peredo Noguez, Oct 1, 2008 1:03 AM (in response to Susanne Jarl)
Well, with some help from Guillaume Jeudy, I almost found the way to discard the current conversation and start a new one (but I am not using JPDL). In fact, it is not a good idea to end the previous conversation: remember that users love tabbed browsing, and they will want to start a different conversation in each tab (they will get really frustrated if starting a new tab ends the conversation in the previous one). If you do not want that, then ScopeType.CONVERSATION is not the answer; ScopeType.SESSION is.
But I am stuck now because I can not find a nice way to stop parameter propagation. Any recommendations?
8. Re: Problem ending conversation in RichFaces MenuItem - Vladimir Kovalyuk, Oct 14, 2008 5:30 PM (in response to Susanne Jarl)
I'd like to raise up this topic because I'm not happy with Richfaces menus and how they work with Seam.
Richfaces menus don't care about the case when there is an active long-running conversation. And that in itself is not so bad; they actually should not know about Seam. It is Seam's responsibility to leave the conversation on the action, but Seam does not offer an annotation or a pages.xml tag or attribute to accomplish that. As a result, when the user is in a long-running conversation and presses a menu item, he stays in the same conversation. That's the big problem.
I figured out a way around it: I introduced a redirector class which provides several redirect methods, as in the following excerpt:
void goto(String viewId, Map<String, Object> params) {
    viewId = getPrefix() + viewId;
    Redirect.instance().setViewId(viewId);
    Redirect.instance().setConversationPropagationEnabled(false);
    if (params != null)
        for (Map.Entry<String, Object> entry : params.entrySet())
            Redirect.instance().setParameter(entry.getKey(), entry.getValue());
    Redirect.instance().execute();
}
From the other hand the problem can be easily fixed by introducing
<redirect propagation="none">
in pages.xml
I don't understand why pages.xml does not provide such a capability.
9. Re: Problem ending conversation in RichFaces MenuItem - Adrien FERIAL, Mar 24, 2009 12:32 PM (in response to Susanne Jarl)
My solution to this problem:
MenuBean.class:
@Name("menuBean") @Scope(ScopeType.SESSION) public class MenuBean { public String clickAndKillLastConversation(final String viewId) { log.debug("clickAndKillLastConversation : clicked on " + viewId); Manager.instance().endRootConversation(true); return viewId; }
menu.xhtml:
<rich:menuItem
    <f:facet
        <h:graphicImage
    </f:facet>
</rich:menuItem>
manageSubjects.page.xml:
<?xml version="1.0" encoding="UTF-8"?> <page> <begin-conversation </page> | https://developer.jboss.org/thread/183615 | CC-MAIN-2019-18 | refinedweb | 1,275 | 53.41 |