It's taken some time to study all possible ways of detecting Event Dispatch Thread rule violations, and now I feel this topic is about to be closed. But let me start from the beginning: I was really surprised when I learned about the smart solution invented by Scott Delap, who created a RepaintManager which finds EDT mistakes. I started to play with it and noticed that this solution has some clear advantages, but on the other hand I thought about a few problems as well. Might I suggest creating a Java.net project for checking Swing threading with tools using each of the techniques you mention. It would also provide a central place for ThreadCheckingRepaintManager additions/fixes. I'm happy to donate my code as a starting point. Posted by: scottdelap on February 16, 2006 at 02:57 PM Hello Scott Thank you for your support. I created the swinghelper java.net project. It is pending approval, I'll let you know when it is approved and work begins. Thanks Alex Posted by: alexfromsun on February 17, 2006 at 02:07 AM "Debugging Swing, the final summary" sounds a lot like "The Last Word in Swing Threads". Have we learned nothing from Sean Connery? As Ben Galbraith demonstrates so clearly here (sorry, members only), you must also not block the EDT. Here's a technique for debugging that problem. Blocking the EDT for long enough that the application seems sluggish is probably a much more common problem than actually producing a deadlock. Posted by: coxcu on February 17, 2006 at 07:14 AM Hi Alex, this is Stepan Rutz writing. Please go ahead and do whatever you want with the JVMTI-related code I sent you. I thought I had enclosed a reference to the WTFPL licence. It's just code from a night of playing around. Please consider that code to be released under the WTFPL licence. Basically WTFPL means free as in really free. Thanks for hearing my suggestions. And I agree with your thoughts. A Java-based instrumentation is certainly cleaner and nicer to maintain. 
Regards from stormy Cologne. Stepan Posted by: roots on February 17, 2006 at 07:34 AM Alex, "Don't block the EDT" is one of my Event Dispatch Thread rules. Thanks for doing all of this work to help stamp out EDT problems. Will the new project try to help with both problems? Posted by: coxcu on February 17, 2006 at 07:59 AM Hi coxcu You are absolutely right - "Don't block the EDT" is definitely one of the main rules. That project is supposed to collect all Swing-related debugging techniques, and detecting EDT blocking is definitely one of them. The blog's title might be a bit misleading, sorry - I named it this way because of the two previous blogs. Alex Posted by: alexfromsun on February 17, 2006 at 08:10 AM Another useful tool for finding EDT hang-ups in applications is to find out where in the code long-running tasks are executed on the EDT instead of a separate Thread. link This debug solution makes finding those tasks a one-liner: main() { EventDispatchThreadHangMonitor.initMonitoring(); ... } then check for exceptions being printed. The code referenced is GPL and the jar file can be found at link Posted by: jorgenrapp on February 17, 2006 at 12:39 PM Can you use AspectJ with the javac compiler? Or does it still need its own compiler? (That would be the showstopper for me) Kees. Posted by: keeskuip on February 17, 2006 at 03:56 PM Hello keeskuip You definitely need their compiler to compile aspects and their Java runner to run your application with compiled aspects. But you don't need to recompile your applications if you want to try it. So, check it out! Alex Posted by: alexfromsun on February 19, 2006 at 05:49 AM Alex, At the beginning of this blog you say that: "If you call e.g. JTextField.setText() from EDT and at the same time JTextField.getText() from another thread the result is unpredictable". On the other hand the JavaDoc for the JTextComponent.setText method says: "This method is thread safe, although most Swing methods are not". Is there a mistake in one of the two? 
Thanks, Maxim Posted by: maxz1 on February 23, 2006 at 08:08 AM Hello Maxim The Javadoc indeed declares JTextField.setText() as thread safe, that's correct, but JTextField.getText() is not thread safe. So when you invoke getText() from the wrong thread you may get an unpredictable result. Posted by: alexfromsun on February 28, 2006 at 02:09 AM Hi Alexander, I've been using your repaint manager and it's been very helpful -- thanks! I wanted to amend the utility to handle the thread-safe call of JTextComponent.setText in addition to repaint. I propose a change like the following: boolean threadSafeMethod = false; boolean fromSwing = false; StackTraceElement[] stackTrace = new Exception().getStackTrace(); for (int i = 0; i < stackTrace.length; i++) { if (threadSafeMethod && stackTrace[i].getClassName().startsWith("javax.swing.")) { fromSwing = true; } if ("repaint".equals(stackTrace[i].getMethodName()) || "setText".equals(stackTrace[i].getMethodName())) { threadSafeMethod = true; fromSwing = false; } } if (threadSafeMethod && !fromSwing) { // Calling a thread safe method; no problem return; } What do you think? Posted by: jaredmac on August 18, 2006 at 08:54 AM Hello jaredmac I looked at the JTextComponent methods which are marked as thread safe, and changed my mind - I suppose they are not really thread safe. Just have a look at JTextComponent.setText(), for example. I do think that if I call setText() e.g. from the main thread and getText() from the EDT I will likely have problems. I am going to file a bug to update the javadoc. Thanks alexp Posted by: alexfromsun on August 23, 2006 at 08:07 AM So does this exception report indicate an error in my code or within Swing? PageLoader seems to be a Thread, so it looks like Swing is not following the rules - perhaps this is another example where this is OK? 
java.lang.Exception at CheckThreadViolationRepaintManager.checkThreadViolations(CheckThreadViolationRepaintManager.java:43) at CheckThreadViolationRepaintManager.addDirtyRegion(CheckThreadViolationRepaintManager.java:37) at javax.swing.JComponent.repaint(JComponent.java:4518) at java.awt.Component.repaint(Component.java:2774) at javax.swing.text.FlowView$FlowStrategy.removeUpdate(FlowView.java:353) at javax.swing.text.FlowView.removeUpdate(FlowView.java:253) at javax.swing.plaf.basic.BasicTextUI$RootView.removeUpdate(BasicTextUI.java:1520) at javax.swing.plaf.basic.BasicTextUI$UpdateHandler.removeUpdate(BasicTextUI.java:1764) at javax.swing.text.AbstractDocument.fireRemoveUpdate(AbstractDocument.java:242) at javax.swing.text.html.HTMLDocument.access$500(HTMLDocument.java:74) at javax.swing.text.html.HTMLDocument$HTMLReader.adjustEndElement(HTMLDocument.java:2061) at javax.swing.text.html.HTMLDocument$HTMLReader.flush(HTMLDocument.java:2100) at javax.swing.text.html.HTMLEditorKit.read(HTMLEditorKit.java:231) at javax.swing.JEditorPane.read(JEditorPane.java:519) at javax.swing.JEditorPane.read(JEditorPane.java:537) at javax.swing.JEditorPane$PageLoader.run(JEditorPane.java:566) Posted by: johnpm on November 27, 2006 at 01:08 AM Hello johnpm It is very interesting. Could you publish the test case which gives that stacktrace? 
Thanks alexp Posted by: alexfromsun on November 27, 2006 at 02:27 AM Here you go, seems to do the trick: /** * EditorPaneEDTViolation.java * Created on 28 November 2006, 17:10 **/ import java.awt.BorderLayout; import java.awt.EventQueue; import java.io.IOException; import java.net.URL; import javax.swing.BoxLayout; import javax.swing.JComponent; import javax.swing.JEditorPane; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.RepaintManager; import javax.swing.SwingUtilities; /** @author JohnM */ public class EditorPaneEDTViolation extends JFrame { static JEditorPane msgArea; public EditorPaneEDTViolation() { setTitle("EDT Violations R Us"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setLayout(new BorderLayout()); msgArea = new JEditorPane(); JPanel p = new JPanel(); p.setLayout(new BoxLayout(p, BoxLayout.Y_AXIS)); p.add(msgArea); add(p, BorderLayout.CENTER); setBounds(100, 100, 800, 600); } public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { public void run() { RepaintManager.setCurrentManager(new CheckThreadViolationRepaintManager()); final EditorPaneEDTViolation epv = new EditorPaneEDTViolation(); epv.setVisible(true); try { msgArea.setPage(new URL("")); } catch (IOException ex) { ex.printStackTrace(); } } }); } } Posted by: johnpm on November 28, 2006 at 09:36 AM Hello John Well done! You have found a problem in the text package. We should fire all notifications on the EDT, and your test case reveals the problem. I just filed a bug #6502558 (it takes some time for it to get visible) Thank you John, you helped us to make Swing better! alexp Posted by: alexfromsun on December 08, 2006 at 10:33 AM Alex, the following test case demonstrates an EDT violation by calling #repaint(Rectangle r). Because repaints are thread safe I'm wondering if this is really an EDT violation. 
public class RepaintEDTViolationDemo extends JFrame { public static void main(String[] args) throws Exception { SwingUtilities.invokeLater(new Runnable() { public void run() { RepaintManager.setCurrentManager(new CheckThreadViolationRepaintManager(true)); new RepaintEDTViolationDemo(); } }); } public RepaintEDTViolationDemo() { super(); add(new JButton(new StartAction("Start")), BorderLayout.NORTH); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setSize(300, 200); setLocationRelativeTo(null); setVisible(true); } static class StartAction extends AbstractAction { public StartAction(String name) { super(name); } public void actionPerformed(ActionEvent evt) { final JButton button = (JButton)evt.getSource(); new Thread() { public void run() { Rectangle rect = button.getBounds(); //causes EDT thread violation button.repaint(rect); //works fine //button.repaint(0, rect.x, rect.y, rect.width, rect.height); } }.start(); } } } -Wolfgang Posted by: wzberger on July 14, 2007 at 06:22 AM Hello Wolfgang Thanks for finding this problem! I just fixed it and updated the sources on SwingHelper. Your comments are welcome. Thanks again alexp Posted by: alexfromsun on July 17, 2007 at 02:49 AM This is really nice. It found a problem in my code in about 2 minutes - one that I didn't know I had. However, I am seeing an error condition being raised by a call from NetBeans code - I wonder if this is a NetBeans error or if this case should be ignored by the CheckThreadViolationRepaintManager code. 
2007-09-21 14:51:28,562 [main] ERROR com.acinion.sms.util.ui.CheckThreadViolationRepaintManager - EDT violation detected: javax.swing.JPanel[,0,0,0x0,invalid,layout=java.awt.FlowLayout,alignmentX=0.0,alignmentY=0.0,border=,flags=9,maximumSize=,minimumSize=,preferredSize=] at org.jdesktop.swinghelper.debug.CheckThreadViolationRepaintManager.checkThreadViolations(CheckThreadViolationRepaintManager.java:111) at org.jdesktop.swinghelper.debug.CheckThreadViolationRepaintManager.addDirtyRegion(CheckThreadViolationRepaintManager.java:69) at javax.swing.JComponent.repaint(JComponent.java:4714) at java.awt.Component.repaint(Component.java:2924) at javax.swing.JComponent.setFont(JComponent.java:2720) at javax.swing.LookAndFeel.installColorsAndFont(LookAndFeel.java:190) at javax.swing.plaf.basic.BasicPanelUI.installDefaults(BasicPanelUI.java:49) at javax.swing.plaf.basic.BasicPanelUI.installUI(BasicPanelUI.java:39) at javax.swing.JComponent.setUI(JComponent.java:668) at org.netbeans.TopSecurityManager.makeSwingUseSpecialClipboard(TopSecurityManager.java:477) at org.netbeans.core.NonGui.run(NonGui.java:132) at org.netbeans.core.startup.Main.start(Main.java:401) at org.netbeans.core.startup.TopThreadGroup.run(TopThreadGroup.java:96) at java.lang.Thread.run(Thread.java:619) I worked around it by adding the following check: if ("javax.swing.JComponent".equals(st.getClassName()) && "setUI".equals(st.getMethodName())) return; Posted by: dave_nedde on September 21, 2007 at 12:48 PM Hello Dave It looks like NetBeans calls updateUI outside the EDT, which is suspicious - setUI() has to be called on the EDT as well. Could you file a bug on NetBeans providing your test case? Thanks alexp Posted by: alexfromsun on September 24, 2007 at 12:55 AM You say: "Initially there was a rule that it is safe to create and use Swing components until they are realized but this rule is not valid any more, and now it is recommended to interact with Swing from EDT only" This was surprising to me. 
After some googling, I am slowly realizing there has been a kind of "underadvertized" policy change regarding this. May I ask, why is this the case? I note that here, in the same Sun documentation that you link to above, ImageIcons are created, unrealized, out of the EDT, and later used in the done method inside the EDT. Taken on its face, I guess from your statement that approximately 99% of the world's Swing programs are unsafe? Refactoring to support this kind of restriction is non-trivial. So, what caused this restriction? And is it truly uniform, or does it apply only to, e.g., creating Frames? Posted by: obsidian01 on January 25, 2008 at 08:54 AM Hello Obsidian01 The old policy was just incorrect and that was the main reason why it was changed. If you create a component on the main thread it fires its listeners, which are notified on the EDT, so it is impossible to keep a component in a valid state if it is accessed from multiple threads at the same time, because Swing is not thread safe (like almost all other modern GUI libraries). It works somehow (probably most of the time) if you follow the old rule, but we don't guarantee that you won't see a strange exception sometimes when you run your application on a multi-core machine or on another operating system. Work with Swing on the EDT (including the creation of JFrames) and you will be on the safe side. As for ImageIcons - they are just another Icon implementation; when I say "interact with Swing" I mean components and parts of the components (like models), so ImageIcon is an exception. Thanks alexp Posted by: alexfromsun on January 27, 2008 at 03:15 PM Very interesting. When you said "components and parts of the components (like models)," are you saying that this restriction normally applies not only to Swing Components, but to model objects as well, so combo box models and so forth? Even if they, too, are "unrealized" - not attached to a Component? It would help to understand more of the reason behind this. 
If I create a new JPanel, and it sits alone, un-added and un-painted, what events have been fired, and to what listeners? Posted by: obsidian01 on January 28, 2008 at 09:02 PM Hello Obsidian01 While it is certainly possible to find some exceptions, like creating an unrealized model on another thread, it doesn't change the main rule about Swing only on the EDT. It might be OK to create an empty, un-added and un-painted JPanel (if such a component is useful for your application). I don't have a test case which always fails when you incorrectly work with Swing, because if you work with a non-thread-safe object from multiple threads you get unstable results - it may seem to work on one machine 10 times in a row but fail on the next start. Thanks alexp Posted by: alexfromsun on January 29, 2008 at 05:56 AM You're saying "if you create a component on main thread it fires its listeners, which are notified on EDT, so it is impossible to keep a component in a valid state". Sure, this will cause problems with threading. But what kind of events will components which are not yet showing get from the EDT? Container events, hierarchy events? Posted by: ivan_p on February 14, 2008 at 09:18 AM Hello Ivan Yes, container events, hierarchy events, model change events, property change events etc... 
alexp Posted by: alexfromsun on February 14, 2008 at 10:17 AM I've got an EDT violation when testing with my app that uses animated GIFs: java.lang.Exception at com.emo.CheckThreadViolationRepaintManager.checkThreadViolations(CheckThreadViolationRepaintManager.java:42) at com.emo.CheckThreadViolationRepaintManager.addDirtyRegion(CheckThreadViolationRepaintManager.java:34) at javax.swing.JComponent.repaint(JComponent.java:4714) at java.awt.Component.imageUpdate(Component.java:3140) at javax.swing.AbstractButton.imageUpdate(AbstractButton.java:2201) at sun.awt.image.ImageWatched$WeakLink.newInfo(ImageWatched.java:114) at sun.awt.image.ImageWatched.newInfo(ImageWatched.java:151) at sun.awt.image.ImageRepresentation.setPixels(ImageRepresentation.java:466) at sun.awt.image.ImageDecoder.setPixels(ImageDecoder.java:108) at sun.awt.image.GifImageDecoder.sendPixels(GifImageDecoder.java:430) at sun.awt.image.GifImageDecoder.parseImage(Native Method) at sun.awt.image.GifImageDecoder.readImage(GifImageDecoder.java:572) Posted by: pk_thoo on February 19, 2008 at 11:25 PM Hello Pk_thoo Did you test the latest CheckThreadViolationRepaintManager from the SwingHelper project? Thanks alexp Posted by: alexfromsun on February 21, 2008 at 02:31 AM Hello Alex, I've used the CheckThreadViolationRepaintManager from the first code block at the top. When you referred to the SwingHelper project, I visited it and came back here when I clicked on the CheckThreadViolationRepaintManager link. So, the answer to your question is yes. I am hoping someone else can confirm that they have encountered EDT violations (when tested with CheckThreadViolationRepaintManager) in their Swing apps when using animated GIFs. Thanks Alex. Posted by: pk_thoo on February 25, 2008 at 06:20 PM I, too, use animated GIFs and also experienced lengthy EDT violation logs due to them. 
I added a getter and setter for a new variable named ignoreThisComponent and then modified the checkThread method as follows: private void checkThread(JComponent c) { if (c == getIgnoreThisComponent()) { return; } if (!SwingUtilities.isEventDispatchThread() && checkIsShowing(c)) { System.out.println("----------Wrong Thread START"); System.out.println(getStracktraceAsString(new Exception("EDT Violation"))); dumpComponentTree(c); System.out.println("----------Wrong Thread END"); } } Posted by: forcers on March 04, 2008 at 12:14 PM Hello guys The latest version of the CheckThreadViolationRepaintManager is available on SwingHelper under the "Debugging and testing" title; just in case, here are the direct pointers - debug.jar and debug-src.zip - and you can also browse the CVS repository. I fixed some problems with erroneous detections of EDT violations, so please try the latest version of the CheckThreadViolationRepaintManager Posted by: alexfromsun on March 05, 2008 at 03:42 AM I tried the aspect, but it doesn't detect constructors for subclasses of JFrame. Is it on purpose? Posted by: utilisateur_768 on April 21, 2008 at 02:12 AM Here again with the aspect. Some code is detected as Swing while it is not: The "Object Map.get(Object)" is advised by EdtRuleChecker.before(JComponent): anySwingMethods(BindingTypePattern(javax.swing.JComponent, 0)).. Posted by: utilisateur_768 on April 21, 2008 at 02:36 AM Hello utilisateur Constructors for JFrame are a good catch! I should add them to the aspect. Could you send me an example of the code which is erroneously detected as Swing code? 
Thanks alexp Posted by: alexfromsun on April 21, 2008 at 04:55 AM The 2 lines with the "//here" comment are marked (in Eclipse): public class Test { private static Map map = new HashMap(); public void foo() throws IOException { if ("const".equals(map.get("nom"))) { // here String s = (String)map.get("nom-sauvegarde-projet"); s += ""; } JarFile fichierJar = new JarFile("foo.jar"); Enumeration enu = fichierJar.entries(); while (enu.hasMoreElements()) { // here JarEntry je = enu.nextElement(); je.clone(); } } } Posted by: utilisateur_768 on April 22, 2008 at 04:24 AM Hello utilisateur I might be missing something, but I don't see any connection between your test case and Swing EDT issues. Do you mean that the Eclipse IDE marks that code as suspicious, or that my aspect did it? Thanks alexp Posted by: alexfromsun on April 22, 2008 at 04:42 AM In my (big) Java application, some lines get detected by the Eclipse AOP plugin with your aspect. I have put these lines in the small test case above. Eclipse (with the plugin) just puts markers on the lines concerned by your aspect, and some of them are just wrong. Posted by: utilisateur_768 on April 22, 2008 at 07:00 AM Maybe with the plugin output it will be simpler: Here are the method calls which are detected by your aspect (with Eclipse AspectJ - AJDT). I think they should not be detected as Swing method calls. Please tell me if you know why these lines are wrongly detected as Swing method calls. foo() method-call(java.lang.Object java.util.Map.get(java.lang.Object)) advised by EdtRuleChecker.before(JComponent): anySwingMethods(BindingTypePattern(javax.swing.JComponent, 0)).. method-call(java.lang.Object java.util.Map.get(java.lang.Object)) advised by EdtRuleChecker.before(JComponent): anySwingMethods(BindingTypePattern(javax.swing.JComponent, 0)).. method-call(boolean java.util.Enumeration.hasMoreElements()) advised by EdtRuleChecker.before(JComponent): anySwingMethods(BindingTypePattern(javax.swing.JComponent, 0)).. 
method-call(java.lang.Object java.util.Enumeration.nextElement()) advised by EdtRuleChecker.before(JComponent): anySwingMethods(BindingTypePattern(javax.swing.JComponent, 0)).. As a last resort, it could be a bug in AspectJ or in the AspectJ Eclipse plugin. Posted by: utilisateur_768 on April 23, 2008 at 06:13 AM
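The thread above keeps returning to one core technique: a custom RepaintManager that reports repaints scheduled off the EDT. Below is a minimal sketch of that idea written for this summary; it is not the actual SwingHelper source, and the class name and the decision to flag only showing components are my own reading of the comments above.

```java
import javax.swing.JComponent;
import javax.swing.RepaintManager;
import javax.swing.SwingUtilities;

// Minimal sketch (not the SwingHelper code) of a RepaintManager that
// reports repaints and invalidations requested from outside the EDT.
public class ThreadCheckingRepaintManager extends RepaintManager {

    @Override
    public synchronized void addInvalidComponent(JComponent c) {
        checkThreadViolation(c);
        super.addInvalidComponent(c);
    }

    @Override
    public void addDirtyRegion(JComponent c, int x, int y, int w, int h) {
        checkThreadViolation(c);
        super.addDirtyRegion(c, x, y, w, h);
    }

    private void checkThreadViolation(JComponent c) {
        // Only showing components touched off the EDT are treated as real
        // violations; this mirrors the refinements discussed in the thread.
        if (!SwingUtilities.isEventDispatchThread() && c != null && c.isShowing()) {
            new Exception("EDT violation detected for " + c.getClass().getName())
                    .printStackTrace();
        }
    }

    // Convenience installer: call this once, early in main().
    public static void install() {
        RepaintManager.setCurrentManager(new ThreadCheckingRepaintManager());
    }

    public static void main(String[] args) {
        install();
        System.out.println("installed: "
                + (RepaintManager.currentManager(null)
                        instanceof ThreadCheckingRepaintManager));
    }
}
```

Installed early, any later repaint triggered from a non-EDT thread prints a stack trace that points at the offending call site, which is exactly how the test cases in the comments above were produced.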
http://weblogs.java.net/blog/alexfromsun/archive/2006/02/debugging_swing.html
Blocks for Box Don Box: The underlying language feature (along with its cousin, anonymous methods) do in fact rock though. Let's explore further. How do you call a function in C? name(parameters) Now, how do you call a method in C++? object.method(parameters) The way the latter works under the covers is that an additional parameter is typically passed, and bound to the name this. Hold that thought. Now, let's look at a few of the control structures in C#, as they are typically used: while (parameter) {block} if (parameter) {block} for (parameters) {block} foreach (parameters) {block} switch (parameter) {block} lock (parameter) {block} I've taken a few liberties with how I have named things to emphasize similarities, but my key point is that from a simple syntactic point of view, each of these statements looks like a procedure call statement, albeit with a funny "extra" block parameter. And, among these examples, the lock example feels a bit strange. It feels like a bit of application-specific functionality creeping down into the language. Hold that thought too. Now, one final example to feed into this mix. I'll give a very simple C# example: public class test { public static void Main(string[] args) { int[] list = new int[] {1,2,3,4,5,6,7,8,9,10}; int sum = 0; foreach (int i in list) { sum = sum + i; } System.Console.WriteLine(sum); } } Now, I realize that this problem of solving the sum of a series of consecutive integers can be reduced to a simple equation, but dang it, remember that this is meant as an example. Bear with me. Now, let's show how this can be implemented in Ruby: list = [1,2,3,4,5,6,7,8,9,10] sum = 0 list.each() {|i| sum = sum + i} puts sum Now, if you ignore the people who will tell you that in Ruby the parentheses are not required, or whisper in your ear inject, inject, inject, and if you ignore the extra pro-forma baggage that C# requires; then you see that there are the same core four statements involved in both implementations. 
The third of which looks suspiciously like a procedure/method call like thing — with an extra parameter. Which, in Ruby, is exactly what it is. In object oriented systems, there is a bit of mental judo going on whereby you convert a system from imperative statements like "print x" to a more message-oriented "to: x; message: go print yourself". The same thing is going on here. In Ruby, you are calling one method, and passing it a block that it can choose to call as many, or as few, times as it likes. The method can even choose to stash away the block for later use. The block itself can even be parameterized, as it is above. In this example, the block expects one parameter, which it names i. One more interesting thing to note is that the block captures its scope, i.e., it behaves like a closure. Again referencing this example, the block directly accesses and modifies a value named sum, something which is not visible to the definition of the Array.each method. How can this be useful? The lock example above is a specific example whereby the language designers had enough foresight to build a similar feature into the language. In Ruby, such features need not be a part of the language, as you can build your own. Here are a few examples, but you can build more.
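As a concrete instance of that closing point, here is a small sketch (the with_lock name is mine, not from the post) that rebuilds C#'s built-in lock statement as an ordinary Ruby method taking a block:

```ruby
# A home-grown control structure: C#'s `lock (x) { ... }` becomes a plain
# method that receives a block, locks around it, and always unlocks.
def with_lock(mutex)
  mutex.lock
  begin
    yield            # run the block the caller passed in
  ensure
    mutex.unlock     # released even if the block raises
  end
end

counter = 0
m = Mutex.new
threads = 10.times.map do
  Thread.new do
    1000.times { with_lock(m) { counter += 1 } }
  end
end
threads.each(&:join)
puts counter   # => 10000
```

The call site reads almost exactly like the built-in C# statement, yet it is just a method call with a funny "extra" block parameter, which is the whole point of the post.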
http://www.intertwingly.net/blog/2005/04/18/Blocks-for-Box
Definition GraphWin combines the two types graph and window and forms a bridge between the graph data types and algorithms and the graphics interface of LEDA. GraphWin can easily be used in LEDA programs for constructing, displaying and manipulating graphs and for animating and debugging graph algorithms. There are also methods for modifying existing graphs (e.g. by removing or adding a certain set of edges) to fit in one of these categories and for testing whether a given graph is planar, connected, bipartite ... For every node and edge of the graph GraphWin maintains a set of parameters. With every node is associated the following list of parameters. Note that for every parameter there are corresponding set and get operations (gw.set_param() and gw.get_param()) where param has to be replaced by the corresponding parameter name. With every edge is associated the following list of parameters. The corresponding types are: gw_node_shape = { circle_node, ellipse_node, square_node, rectangle_node } gw_edge_shape = { poly_edge, circle_edge, bezier_edge, spline_edge } gw_position = { central_pos, northwest_pos, north_pos, northeast_pos, east_pos, southeast_pos, south_pos, southwest_pos, west_pos } gw_label_type = { no_label, user_label, data_label, index_label } gw_edge_style = { solid_edge, dashed_edge, dotted_edge, dashed_dotted_edge } gw_edge_dir = { undirected_edge, directed_edge, bidirected_edge, rdirected_edge }; #include < LEDA/graphics/graphwin.h > Creation Operations a) Window Operations b) Graph Operations c) Node Parameters Node parameters can be retrieved or changed by a collection of get- and set- operations. We use param_type for the type and param for the value of the corresponding parameter. 
Individual Parameters Default Parameters d) Edge Parameters Individual Parameters Default Parameters e) Global Options Animation and Zooming f) Node and Edge Selections g) Layout Operations h) Zooming i) Operations in Edit-mode Before entering edit mode ... j) Menus The default menu ... Extending menus by new buttons and sub-menus ... k) Input/Output l) Miscellaneous
http://www.algorithmic-solutions.info/leda_manual/GraphWin.html
Before we go into the details of the classes belonging to this module we want to give an overview of the different components and how they interact. We start with an example. Suppose you want to write a string to a compressed file named ``foo'' and read it back from the file. Then you can use the following program: #include <LEDA/basics/string.h> #include <LEDA/coding/compress.h> // contains all compression classes using namespace leda; typedef HuffmanCoder Coder; int main() { string str = "Hello World"; encoding_ofstream<Coder> out("foo"); out << str << "\n"; out.close(); if (out.fail()) std::cout << "error writing foo" << "\n"; decoding_ifstream<Coder> in("foo"); str.read_line(in); in.close(); if (in.fail()) std::cout << "error reading foo" << "\n"; std::cout << "decoded string: " << str << "\n"; return 0; } In the example above we used the classes encoding_ofstream and decoding_ifstream with LEDA data types only. We want to emphasize that they work together with user-defined types as well. All operations and operators (<< and >>) defined for C++ streams can be applied to them, too. Assume that you want to send the file ``foo'' to a friend over the internet and you want to make sure that its contents do not get corrupted. Then you can easily add a checksum to your file. All you have to do is to replace the coder in the typedef-statement by CoderPipe2<MD5SumCoder, HuffmanCoder>. The class CoderPipe2 combines the two LEDA coders MD5SumCoder (the checksummer) and HuffmanCoder into a single coder. If the pipe is used for encoding, then the MD5SumCoder is used first and the HuffmanCoder is applied to its output. In decoding mode the situation is reversed. The standard behaviour of a checksummer like MD5SumCoder is as follows: In encoding mode it reads the input stream and computes a checksum; the output data basically consists of the input data with the checksum appended. In decoding mode the checksum is stripped from the input data and verified. 
If the input is corrupted the failure flag of the coder is set to signal this. Suppose further that your friend has received the encoded file ``foo'' and wants to decode it but he does not know which combination of coders you have used for encoding. This is not a problem because LEDA provides a class called AutoDecoder which can be used to decode any stream that has been encoded by LEDA. The complete code for this extended example is depicted below: #include <LEDA/basics/string.h> #include <LEDA/coding/compress.h> using namespace leda; typedef CoderPipe2<MD5SumCoder, HuffmanCoder> Coder; int main() { string str = "Hello World"; // your code ... encoding_ofstream<Coder> out("foo"); out << str << "\n"; out.close(); if (out.fail()) std::cout << "error writing foo" << "\n"; // your friend's code ... autodecoding_ifstream in("foo"); // autodecoding_ifstream = decoding_istream<AutoDecoder> str.read_line(in); in.finish(); // read till the end before closing (-> verify checksum) if (in.fail()) std::cout << "decoding error, foo corrupted" << "\n"; std::cout << "decoded string: " << str << "\n"; return 0; } This example shows how easy it is to add compression to existing applications: You include the header ``LEDA/coding/compress.h'', which makes all classes in the compression module available. Then you simply replace every occurrence of ofstream by encoding_ofstream<Coder> and every occurrence of ifstream by autodecoding_ifstream. Of course, you can also use the LEDA coders in file mode. This means you can encode a file ``foo'' into a file ``bar'' and decode ``bar'' again. The example below shows how. We also demonstrate a nice feature of the AutoDecoder: If you query a description after the decoding, the object tells you which combination has been used for encoding the input. 
#include <LEDA/coding/compress.h> using namespace leda; typedef CoderPipe2<MD5SumCoder, HuffmanCoder> Coder; int main() { Coder coder("foo", "bar"); coder.encode(); if (coder.fail()) std::cout << "error encoding foo" << "\n"; AutoDecoder decoder("bar", "foo"); decoder.decode(); if (decoder.fail()) std::cout << "error decoding bar" << "\n"; std::cout << "Decoding info: " << decoder.get_description() << "\n"; return 0; } More examples can be found in $LEDAROOT/test/compression. There we show in particular how the user can build a LEDA compliant coder which integrates seamlessly with the AutoDecoder. Below we give a few suggestions about when to use which coder:
http://www.algorithmic-solutions.info/leda_manual/Lossless_Compression.html
#include <RemoteViz/Rendering/Client.h> Represents a client application instance using RemoteViz. In the case of HTML5 applications, a client represents a single instance of a web browser for a single domain name. (See RemoteVizRenderArea) In the case of the SoRemoteVizClient node, a client represents a single instance of the application using the node. A client has one or multiple connections. Disconnects the client. A KICKED disconnect message will be sent to all connections of the client, then they will be disconnected. HTML5 client: Returns the domain name of the web host to which the client is connected. SoRemoteVizClient node: Returns the string "SoRemoteVizClient". Gets a Connection of the client. Gets a Connection of the client. HTML5 client: Returns the value of the user-agent header sent by the client web browser. SoRemoteVizClient node: Returns the operating system (OS) running the client application. Gets the id of the client. The client id is a Globally Unique Identifier. Gets the number of client connections. Gets the client settings. Returns whether the client supports image streaming. Returns whether the client supports video streaming. Sends a binary message to all the connections of the client. Sends a text message to all the connections of the client.
https://developer.openinventor.com/refmans/latest/RefManCpp/class_remote_viz_1_1_rendering_1_1_client.html
CC-MAIN-2020-16
en
refinedweb
import java.util.Scanner; //needed to access the console and client input
import java.util.Random;  //needed to generate the random number needed for students

public class projectArrays
{
    public static void main (String[] args)
    {
        Scanner cin = new Scanner(System.in); //initiate a Scanner for client inputs
        students =(Math.random)*50)+1; //this will generate a random number between 0-50 for students

        //**bonus attempt** giving the user the option to decline continuing the program
        System.out.println("You have :"+students+"in your class,do you wish to continue? y/n ");
        if (cin.next().startsWith("y")||cin.next().startsWith("Y"))
        {
            //initiating the array to hold the grades according to the number of students
            int[] grades = new int[students];
            System.out.println("You have a total of"+grades.length+"students");
            System.out.println("Enter the numerical grades for all students.");
            System.out.println("Press enter after each entry.");

            //making a statement to allow the client to enter values for the array
            for (int i=0;i<students+1;i++);
            {
                grades[i] = cin.nextInt();

                //creating an ERROR statement if the client enters a "bad" entry
                if(grades.length != students|| 0>cin.nextInt()>100)
                {
                    System.out.println("ERROR INCORRECT ENTRY!");
                }
                //continue with coding as if the client has performed the right actions
                else
                {
                    //printing the contents of the array
                    System.out.println("you have entered");
                    for(int i=0;i<grades.length;i++)
                    {
                        System.out.print(grades[i] + " ");
                    }

                    //averaging the elements within the array and displaying them
                    int sum = 0;
                    for(int i=0;i<grades.length;i++);
                    {
                        sum= sum+grades[i];
                        double average= sum / grades.length;
                        System.out.println("the Average of all grades is"+sum+" ");
                    }

                    //this section is for counting the amount of zeros entered into the array
                    int countZ = 0 ;
                    for (int i=0;i<grades.length;i++)
                    {
                        if( grades[i]==0) countZ++;
                    }

                    //this section now counts the amount of hundreds entered into the array
                    int countH = 0;
                    for (int i=0;i<grades.length;i++)
                    {
                        if(grades[i]==100) countH++;
                    }

                    //now to display both amounts of zeros & hundreds found
                    System.out.println("There are"+countZ+"zeros.");
                    System.out.println("there are"+countH+"hundreds.");
                }
            }
        }
        //else statement to run in case the client denies to use program
        else
        {
            System.out.println("you have chosen to conclude your session, you may now exit.");
        }
    }
}

help with arrays

Page 1 of 1

Beginner with java coding and weak with arrays
1 Replies - 1256 Views - Last Post: 11 October 2010 - 04:51 AM

#1 help with arrays
Posted 11 October 2010 - 04:34 AM

I am in the middle of writing code for my class and I am a beginner at Java. I feel I am heading in the right direction as far as getting the code right, but I get a "line 15 ';' expected" error, and I think it prevents the compiler from telling me whether I've written the rest of the code correctly. If someone could take a look and help me out, it would be much appreciated. If it helps, I use JCreator.

Replies To: help with arrays

#2 Re: help with arrays
Posted 11 October 2010 - 04:51 AM

You have not specified the variable type on students. It should be:

int students = (int) ((Math.random())*50)+1;

Also you need to get rid of the semicolon here:

for (int i=0;i<students+1;i++)

Also, this is invalid syntax: 0>cin.nextInt()>100
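Putting the reply's fixes together, a corrected version of the program might look like the sketch below. The class name, the helper methods, and the per-entry validation loop are my own restructuring, not the original poster's code; the original's range check and loop bounds are replaced with valid Java.

```java
import java.util.Scanner;

class GradeStats {

    // Average of all grades (assumes a non-empty array).
    static double average(int[] grades) {
        int sum = 0;
        for (int g : grades) {
            sum += g;
        }
        return (double) sum / grades.length;
    }

    // How many entries equal the given value (used for zeros and hundreds).
    static int countValue(int[] grades, int value) {
        int count = 0;
        for (int g : grades) {
            if (g == value) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Scanner cin = new Scanner(System.in);
        // Declare the type and cast: Math.random() returns a double.
        int students = (int) (Math.random() * 50) + 1;
        System.out.println("You have " + students + " in your class, do you wish to continue? y/n");
        if (!cin.hasNext()) {
            return; // no console input available
        }
        String answer = cin.next(); // read the answer once, not twice
        if (answer.startsWith("y") || answer.startsWith("Y")) {
            int[] grades = new int[students];
            System.out.println("Enter the numerical grades for all students.");
            for (int i = 0; i < students; i++) { // no trailing ';', bound is students
                int g = cin.nextInt();
                while (g < 0 || g > 100) { // range check, one comparison at a time
                    System.out.println("ERROR INCORRECT ENTRY!");
                    g = cin.nextInt();
                }
                grades[i] = g;
            }
            System.out.println("The average of all grades is " + average(grades));
            System.out.println("There are " + countValue(grades, 0) + " zeros.");
            System.out.println("There are " + countValue(grades, 100) + " hundreds.");
        } else {
            System.out.println("You have chosen to conclude your session, you may now exit.");
        }
    }
}
```

Note that the averaging is done once after all grades are read, instead of inside the entry loop as in the original.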
https://www.dreamincode.net/forums/topic/194444-help-with-arrays/
CC-MAIN-2020-16
en
refinedweb
If you want to import or export spreadsheets and databases for use in the Python interpreter, you can rely on the csv module, which handles the Comma-Separated Values format. CSV is a plain-text format, so you don't need any particular language to work with these files, but obviously we're working with Python here. The text inside a CSV file is laid out in rows, and each of those has columns, all separated by commas. Every line in the file is a row in the spreadsheet, while the commas are used to define and separate cells.

Working with the CSV Module

To pull information from CSV files by hand, you would use loops and split methods to get the data from individual columns. The csv module explicitly exists to handle this task, making it much easier to deal with CSV formatted files. This becomes especially important when you are working with data that's been exported from actual spreadsheets and databases to text files. This information can be tough to read on its own. Unfortunately, there is no standard, so the csv module uses "dialects" to support parsing with different parameters. Along with a generic reader and writer, the module includes a dialect for working with Microsoft Excel and related files.

CSV Functions

The csv module includes all the necessary functions built in. They are:

- csv.reader
- csv.writer
- csv.register_dialect
- csv.unregister_dialect
- csv.get_dialect
- csv.list_dialects
- csv.field_size_limit

In this guide we are only going to focus on the reader and writer functions, which allow you to edit, modify, and manipulate the data stored in a CSV file.

Reading CSV Files

Reading a CSV file with the module is straightforward. To prove it, let's take a look at an example.

import csv

with open('some.csv', 'rb') as f:
    reader = csv.reader(f)
    for row in reader:
        print row

Notice how the first command is used to import the csv module? Let's look at another example.
import csv
import sys

f = open(sys.argv[1], 'rb')
reader = csv.reader(f)
for row in reader:
    print row
f.close()

In the first two lines, we are importing the csv and sys modules. Then, we open the CSV file we want to pull information from. Next, we create the reader object, iterate the rows of the file, and then print them. Finally, we close out the operation.

CSV Sample File

We're going to take a look at an example CSV file. Pay attention to how the information is stored and presented.

Reading CSV Files Example

We're going to start with a basic CSV file that has 3 columns, whose headers are "A", "B", and "C D" (the third header contains an embedded line break, which is why it is quoted).

$ cat test.csv
A,B,"C
D"
1,2,"3
4"
5,6,7

Then, we'll use the following Python program to read and display the contents of the above CSV file.

import csv

ifile = open('test.csv', "rb")
reader = csv.reader(ifile)

rownum = 0
for row in reader:
    # Save header row.
    if rownum == 0:
        header = row
    else:
        colnum = 0
        for col in row:
            print '%-8s: %s' % (header[colnum], col)
            colnum += 1
    rownum += 1

ifile.close()

When we execute this program in Python, the output will look like this:

$ python csv1.py
A       : 1
B       : 2
C
D       : 3
4
A       : 5
B       : 6
C
D       : 7

Writing to CSV Files

When you have a set of data that you would like to store inside a CSV file, it's time to do the opposite and use the write function. Believe it or not, this is just as easy to accomplish as reading them. The writer() function will create an object suitable for writing. To iterate the data over the rows, you will need to use the writerow() function. Here's an example.

The following Python program converts a file called "test.csv" to a CSV file that uses tabs as a value separator, with all values quoted. The delimiter character and the quote character, as well as how/when to quote, are specified when the writer is created. These same options are available when creating reader objects.
import csv

ifile = open('test.csv', "rb")
reader = csv.reader(ifile)
ofile = open('ttest.csv', "wb")
writer = csv.writer(ofile, delimiter='\t', quotechar='"', quoting=csv.QUOTE_ALL)

for row in reader:
    writer.writerow(row)

ifile.close()
ofile.close()

When you execute this program, the output will be:

$ python csv2.py
$ cat ttest.csv
"A"     "B"     "C
D"
"1"     "2"     "3
4"
"5"     "6"     "7"

Quoting CSV Files

With the csv module, you can also perform a variety of quoting functions. They are:

- csv.QUOTE_ALL - Quote everything, regardless of type.
- csv.QUOTE_MINIMAL - Quote fields with special characters
- csv.QUOTE_NONNUMERIC - Quote all fields that are not integers or floats
- csv.QUOTE_NONE - Do not quote anything on output

More Python Reading
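As a quick check of the quoting constants, here is the same tab-separated, fully quoted round trip in Python 3 (the file modes differ from the Python 2 examples above, so an in-memory buffer is used to keep it self-contained):

```python
import csv
import io

rows = [["A", "B", "C\nD"], ["1", "2", "3\n4"], ["5", "6", "7"]]

# Write tab-separated values, quoting every field (csv.QUOTE_ALL).
buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", quotechar='"', quoting=csv.QUOTE_ALL)
writer.writerows(rows)

# Read them back with the same delimiter; quoted embedded newlines survive.
buf.seek(0)
data = list(csv.reader(buf, delimiter="\t"))

print(data == rows)  # True
```

The embedded newline in "C\nD" survives the round trip precisely because QUOTE_ALL forces the field to be quoted.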
https://www.pythonforbeginners.com/systems-programming/using-the-csv-module-in-python/
CC-MAIN-2020-16
en
refinedweb
Creating your own rows for Eureka - Part 2 Some days ago, we released the introduction to creating custom rows for Eureka; now we are going to go deeper and see how to build a complex row. Note: for those who are starting, I strongly recommend reading Creating your own rows for Eureka first. The row we are going to build will allow the user to create strong passwords. The developer will be in charge of defining (and implementing) the concept of password strength, and also the reactive view that provides feedback about it. Motivation We will support the following features: - Show password strength while typing. - Provide hints in order to help the user create a valid password. - Provide a button to hide/show the password. Final product The result will be a row (as you can see below) which has a strength function and a way to let the user know how strong their password is. Architecture Before we start coding, since we want our row to be as reusable as possible, we should think about the architecture that will empower this… As you may have learned from our previous post, any custom row in Eureka will have a Row and Cell as primary components; in this case, GenericPasswordRow and GenericPasswordCell will map to those respectively. For the customizable UI side, we will be adding a custom nib file (which you can replace or use our default one). On the other hand, we will have a PasswordValidator component that will be responsible for the password validation process; here is where you (as a programmer) can be as creative as you want.
PasswordValidator

Let's take a look at the first component of this row:

protocol PasswordValidator {
    var maxStrength: Double { get }
    var strengthLevels: [Double: UIColor] { get }

    func strengthForPassword(password: String) -> Double
    func hintForPassword(password: String) -> String?
    func isPasswordValid(password: String) -> Bool
}

This protocol should specify everything needed to determine the UI state for any given password. Most of these methods are self-explanatory, so we won't go further there, but don't worry, we'll cover them through the following example. So, let's get started with the actual implementation! Let's create our custom password validator called MyPasswordValidator with the following rules:

- At least a lowercase letter
- At least a number
- At least an uppercase letter
- At least 6 characters

Pretty basic, don't you think? We could check if you are using part of your email in your password, but maybe some other time… ¯\(ツ)/¯

First of all, we need to somehow model a certain rule, and Swift's structs are great for this (don't you like getting a memberwise initializer for free?):

public struct PasswordRule {
    let hint: String
    let test: (String) -> Bool
}

It will hold a test closure to specify whether or not the password satisfies that rule, and an associated hint in order to guide the user. Then, we could implement our validator like this:

public class MyPasswordValidator: PasswordValidator {

    // For any given password, the strength will be in the [0.0, 4.0] range.
    public let maxStrength = 4.0

    // This property should hold key points of strength and their associated colors.
    // It should be read as follows:
    // - "From 0.0 to <1.0 strength values" -> color A
    // - "From 1.0 to <2.0 strength values" -> color B
    // - "From 2.0 to <3.0 strength values" -> color C
    // - "From 3.0 to 4.0 strength values"  -> color D
    public let strengthLevels: [Double: UIColor] = [
        0: UIColor(red: 244 / 255, green: 67 / 255, blue: 54 / 255, alpha: 1),  // A
        1: UIColor(red: 255 / 255, green: 193 / 255, blue: 7 / 255, alpha: 1),  // B
        2: UIColor(red: 3 / 255, green: 169 / 255, blue: 244 / 255, alpha: 1),  // C
        3: UIColor(red: 139 / 255, green: 195 / 255, blue: 74 / 255, alpha: 1)  // D
    ]

    // Our rules
    let rules: [PasswordRule] = [
        PasswordRule(hint: "Please enter a lowercase letter") { $0.satisfiesRegexp("[a-z]") },
        PasswordRule(hint: "Please enter a number") { $0.satisfiesRegexp("[0-9]") },
        PasswordRule(hint: "Please enter an uppercase letter") { $0.satisfiesRegexp("[A-Z]") },
        PasswordRule(hint: "At least 6 characters") { $0.characters.count > 5 }
    ]

    // In this example, each passing rule adds 1 to the password strength.
    public func strengthForPassword(password: String) -> Double {
        return rules.reduce(0) { $0 + ($1.test(password) ? 1 : 0) }
    }

    // Here, we return the first failing rule's hint.
    public func hintForPassword(password: String) -> String? {
        return rules.filter { !$0.test(password) }.map { $0.hint }.first
    }

    // The password will be valid only when reaching the `maxStrength` value.
    public func isPasswordValid(password: String) -> Bool {
        return strengthForPassword(password) == maxStrength
    }
}

GenericPasswordRow

Our row will just hold a PasswordValidator and a placeholder for our password textfield. Since the Rows in Eureka must be final classes, we always try to encapsulate all the logic in another class (like _GenericPasswordRow) and then create our final class, which conforms to RowType, by inheriting from there. This allows us to create another row by subclassing the _GenericPasswordRow class.
public class _GenericPasswordRow: Row<String, GenericPasswordCell> {

    public var passwordValidator: PasswordValidator = DefaultPasswordValidator()
    public var placeholder: String? = "Password"

    public required init(tag: String?) {
        super.init(tag: tag)
        displayValueFor = nil
        // The cellProvider is what we use to "connect" the nib file with our custom design.
        // GenericPasswordCell must be the class of the UITableViewCell contained
        // in the GenericPasswordCell.xib file.
        cellProvider = CellProvider<GenericPasswordCell>(nibName: "GenericPasswordCell")
    }

    public func isPasswordValid() -> Bool {
        return value.map { passwordValidator.isPasswordValid($0) } ?? false
    }
}

public final class GenericPasswordRow: _GenericPasswordRow, RowType { }

Design

As we mentioned earlier, we will also provide our custom design in GenericPasswordCell.xib. Using IB we will create a UITableViewCell with the following views:

- textField: UITextField
- visibility button: UIButton
- password strength view: subclass of PasswordStrengthView
- hint label: UILabel

Thinking ahead a little bit, we need to design this cell keeping in mind that the hintLabel will toggle its hidden state depending on the return values of func hintForPassword(password: String) -> String? (whether nil or not) on the passwordValidator instance of our GenericPasswordRow.

Again, trying to be as generic as possible, we introduce the PasswordStrengthView here. The idea is that you can easily replace this strength view by subclassing this class:

class PasswordStrengthView: UIView {

    func setPasswordValidator(validator: PasswordValidator) { }

    func updateStrength(password password: String, animated: Bool = true) { }
}

and implementing your desired behavior in those methods. In this implementation we'll be using the DefaultPasswordStrengthView. We are not going to explain the details of this implementation but basically, once the setPasswordValidator(...)
function gets called, we are adding subviews to generate the "empty state" drawing the strength ranges and their colors (since we know the validator to access this information): and then, every time updateStrength(...) function gets called (while the user is typing) we use the validator to calculate the actual strength and then update the strength view.

GenericPasswordCell

Finally we need to create our cell class, and set it as the class of the UITableViewCell in the GenericPasswordCell.xib file. This is probably the biggest piece of code of this blog, so take it slow. Try to read carefully the code comments in order to follow the general idea and be able to have a better understanding of the implementation.

import Foundation
import Eureka

public class GenericPasswordCell: Cell<String>, CellType {

    // Outlets to be connected with our nib file views.
    @IBOutlet weak var textField: UITextField!
    @IBOutlet weak var visibilityButton: UIButton?
    @IBOutlet weak var passwordStrengthView: PasswordStrengthView?
    @IBOutlet weak var hintLabel: UILabel?

    // Computed property in order to access to the properties of our Row
    var genericPasswordRow: _GenericPasswordRow {
        return row as! _GenericPasswordRow
    }

    // Tuple holding the images to be used on the visibilityButton
    public var visibilityImage: (on: UIImage?, off: UIImage?) {
        didSet {
            setVisibilityButtonImage()
        }
    }

    // Since we will be updating the cell height depending on the hidden
    // state of the hintLabel, we need to have two height values available
    // and set the `height` closure of Eureka's cells with those.
    public var dynamicHeight = (collapsed: UITableViewAutomaticDimension, expanded: UITableViewAutomaticDimension) {
        didSet {
            let value = dynamicHeight
            height = { [weak self] in
                self?.hintLabel?.hidden == true ? value.collapsed : value.expanded
            }
        }
    }

    // Cell's constructor
    public required init(style: UITableViewCellStyle, reuseIdentifier: String?) {
        super.init(style: style, reuseIdentifier: reuseIdentifier)
    }

    // MARK: - Overrides

    // Here we will setup our cell's behavior and style.
    public override func setup() {
        super.setup()

        // custom dynamic height for the design at GenericPasswordCell.xib
        dynamicHeight = (collapsed: 48, expanded: 64)

        // ....

        // set the validator to the strength view in order to
        // give it the chance to layout itself accordingly
        passwordStrengthView?.setPasswordValidator(genericPasswordRow.passwordValidator)
    }

    override public func update() {
        super.update()
        // in this override we need to map our row value
        // to the actual view that holds it.
        textField.text = genericPasswordRow.value
        textField.placeholder = genericPasswordRow.placeholder
    }

    // MARK: - Callbacks

    public func togglePasswordVisibility() {
        textField.secureTextEntry = !textField.secureTextEntry
        setVisibilityButtonImage()
        // workaround to update cursor position
        let tmpString = textField.text
        textField.text = nil
        textField.text = tmpString
    }

    public func textFieldDidChange(textField: UITextField) {
        // every time the textfield changes we
        // need to update our row value.
        genericPasswordRow.value = textField.text

        // update strength
        updatePasswordStrengthIfNeeded()

        formViewController()?.tableView?.beginUpdates()
        // this updates the height of the cell.
        // In fact, it calls the 'height' closure.
        formViewController()?.tableView?.endUpdates()

        // with a little delay in order to wait for the
        // height change to take place, we animate the alpha
        // of the hintLabel to appear smoothly.
        UIView.animateWithDuration(0.3, delay: 0.2, options: [], animations: { [weak self] in
            guard let me = self else { return }
            me.hintLabel?.alpha = me.hintLabel?.hidden == true ? 0 : 1
        }, completion: nil)

        // just in case that the cell gets partially covered
        // by the keyboard when it 'expands' we need to perform
        // the minimum scroll movement to make it full visible.
        if let indexPath = row?.indexPath() {
            UIView.animateWithDuration(0.3, delay: 0, options: .AllowUserInteraction, animations: { [weak self] in
                self?.formViewController()?.tableView?.scrollToRowAtIndexPath(indexPath, atScrollPosition: .None, animated: false)
            }, completion: nil)
        }
    }

    // MARK: - Helpers

    private func setVisibilityButtonImage() {
        visibilityButton?.setImage(textField.secureTextEntry ? visibilityImage.on : visibilityImage.off, forState: .Normal)
    }

    public func updatePasswordStrengthIfNeeded(animated animated: Bool = true) {
        guard let password = textField.text else { return }
        // notify to the strength view the current password
        passwordStrengthView?.updateStrength(password: password, animated: animated)

        // update hint label using the validator
        let hint = genericPasswordRow.passwordValidator.hintForPassword(password)
        hintLabel?.text = hint
        hintLabel?.hidden = hint == nil || password.isEmpty
    }
}

Usage

You can use this row like any other in your FormViewController:

import UIKit
import GenericPasswordRow
import Eureka

class ViewController: FormViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        form +++ Section()
            <<< GenericPasswordRow() {
                $0.passwordValidator = // your custom validator
                $0.placeholder = "Create your password"
            }
    }
}

Again, you can see all the source files in the GenericPasswordRow github and run a simple example from there. The code is in Swift 2 syntax so it will break if you are on Swift 3.

Customization

There are several ways to change this custom row. You should be able to reuse all of the implemented logic and just replace the design (with another nib file). Also, providing a custom implementation of the PasswordValidator protocol should be pretty much straightforward (not being annoying to the user is the hard part). What follows should be used as a guideline to customize as much as you want or even extend the current functionality.

Providing a custom password validator:
form +++ Section()
    <<< GenericPasswordRow() {
        $0.passwordValidator = // your custom validator
    }

Creating another nib file to provide a different design component and connecting the outlets to the GenericPasswordCell class. This is a very important feature: you can literally change the entire design of this row while keeping the same functionality (for free!). Check these instructions to get it done.

GenericPasswordRow.defaultRowInitializer = {
    $0.cellProvider = CellProvider<GenericPasswordCell>(nibName: "MyPasswordCell")
}

or

final class MyPasswordRow: _GenericPasswordRow, RowType {

    required init(tag: String?) {
        super.init(tag: tag)
        cellProvider = CellProvider<GenericPasswordCell>(nibName: "MyPasswordCell")
    }
}

- Subclassing DefaultPasswordStrengthView or PasswordStrengthView to provide a different strength view bar.
- Subclassing _GenericPasswordRow or GenericPasswordCell:

public final class MyGenericPasswordRow: _GenericPasswordRow, RowType {
    // add properties, methods, override, etc.
}

or

public class MyGenericPasswordCell: GenericPasswordCell {
    // add properties, methods, override, etc.
}

Where to go from here

I hope it has served as a good example of the potential of Eureka's rows. You can get as complex as you want while being flexible and scalable at the same time. Also, there are lots of customization points on this row, so we can't wait to see what you can create! We've been publishing reusable custom rows.
https://blog.xmartlabs.com/2016/09/23/Eureka-custom-row-tutorial-2/
CC-MAIN-2020-16
en
refinedweb
Dialog that allows a user to vote on an addon. More...

#include <vote_dialog.hpp>

Dialog that allows a user to vote on an addon.

Called every frame. Checks if any of the pending requests are finished. Reimplemented from GUIEngine::ModalDialog.

Callback when a user event is triggered. Reimplemented from GUIEngine::ModalDialog.

A request to the server, to perform a vote on an addon.

A vote request. The callback will update the addon manager with the new average. The VoteDialog polls this request till it is finished, to inform the user about the new average.

Stores the id of the addon being voted on.

Pointer to the cancel button.

The request to fetch the current vote, which is submitted immediately when this dialog is opened.

Pointer to the info widget of this dialog.

Pointer to the options widget, which contains the cancel button.

The request to perform a vote.

True if the dialog should be removed (which needs to be done in the update call each frame).
https://doxygen.supertuxkart.net/classVoteDialog.html
CC-MAIN-2020-16
en
refinedweb
3. Library calls (functions within program libraries)

FREAD
Section: Linux Programmer's Manual (3)
Updated: 2015-07-23
Index | Return to Main Contents

NAME
fread, fwrite - binary stream input/output

SYNOPSIS
#include <stdio.h>

size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);

size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);

DESCRIPTION
The function fread() reads nmemb items of data, each size bytes long, from the stream pointed to by stream, storing them at the location given by ptr. The function fwrite() writes nmemb items of data, each size bytes long, to the stream pointed to by stream, obtaining them from the location given by ptr.

RETURN VALUE
On success, fread() and fwrite() return the number of items read or written. This number equals the number of bytes transferred only when size is 1. If an error occurs, or the end of the file is reached, the return value is a short item count (or zero).

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
POSIX.1-2001, POSIX.1-2008, C89.

SEE ALSO
read(2), write(2), feof(3), ferror(3), unlocked_stdio(3)

COLOPHON
This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://eandata.com/linux/?chap=3&cmd=fread
CC-MAIN-2020-16
en
refinedweb
In this program, you'll learn how to calculate power using recursion in C++, along with how the functions used in this program work. First, let's understand how to compute the power manually; below is the calculation that we have to implement in the program.

Let's say x = 2 and y = 10:

x^y = 2^10 = 1024

Now we'll do the same in the program.

Program to Calculate Power Using Recursion

#include <iostream>
using namespace std;

// function declaration
double Power(double base, int exponent);

int main()
{
    double base, power;
    int exponent;

    // Inputting base and exponent from user
    cout << "Enter base: ";
    cin >> base;
    cout << "Enter exponent: ";
    cin >> exponent;

    // Call the Power function
    power = Power(base, exponent);
    cout << base << "^" << exponent << " = " << power;

    return 0;
}

/* Calculating power of any number. Returns base ^ exponent */
double Power(double base, int exponent)
{
    // Base condition
    if (exponent == 0)
        return 1;
    else if (exponent > 0)
        return base * Power(base, exponent - 1);
    else
        return 1 / Power(base, -exponent);
}

Output

Enter base: 5
Enter exponent: 3
5^3 = 125

In the above program, the function Power() is a recursive function. If the exponent is zero, the function returns 1, because any number raised to the power 0 is 1. If the exponent is positive, the function recursively calls itself, multiplying the base by Power(base, exponent - 1); a negative exponent is handled by taking the reciprocal. In the main() function, Power() is called once and the result is displayed.

Related Programs
- C++ Programs To Create a Pyramid and Pattern
- C++ Program to Check Whether a Number is Prime or Not
- C++ Program to display Armstrong Number between Two intervals.
- C++ Program to create a Pyramid and Pattern.
- C++ Program to make a simple calculator using switch…case.
- C++ Program to Calculate Power of a Number
- C++ Program to Check Whether a Number is Palindrome or Not

Ask your questions and clarify your or others' doubts about how to calculate power using recursion by commenting.

Documentation

Please write to us at [email protected] to contribute, to report an issue with the above content, or for feedback.
https://coderforevers.com/cpp/cpp-program/calculate-power-using-recursion/?utm_source=rss&utm_medium=rss&utm_campaign=calculate-power-using-recursion
CC-MAIN-2020-16
en
refinedweb
(in_features)

Derived Output

Code sample

AddXY example 1 (Python window)
The following Python window script demonstrates how to use the AddXY function in immediate mode.

import arcpy
arcpy.env.workspace = "C:/data"
arcpy.Copy_management("climate.shp", "climateXYpts.shp")
arcpy.AddXY_management("climateXYpts.shp")

AddXY example 2 (stand-alone script)
The following Python script demonstrates how to use the AddXY function in a stand-alone script.

# Name: AddXY_Example2.py
# Description: Adding XY points to the climate dataset
#)

Environments

Licensing information
- Basic: Yes
- Standard: Yes
- Advanced: Yes
https://desktop.arcgis.com/en/arcmap/latest/tools/data-management-toolbox/add-xy-coordinates.htm
CC-MAIN-2020-16
en
refinedweb
Namespace: DevExpress.Web.Mvc Assembly: DevExpress.Web.Mvc5.v19.2.dll public class SchedulerExtension : ExtensionBase Public Class SchedulerExtension Inherits ExtensionBase Methods that return SchedulerExtension instances: To declare the Scheduler in a View, invoke the ExtensionsFactory.Scheduler helper method. This method returns the Scheduler extension that is implemented by the SchedulerExtension class. To configure the Scheduler extension, pass the SchedulerSettings object to the ExtensionsFactory.Scheduler helper method as a parameter. The SchedulerSettings object contains all the Scheduler extension settings. Refer to the Scheduler Overview topic to learn how to add the Scheduler extension to your project.
https://docs.devexpress.com/AspNet/DevExpress.Web.Mvc.SchedulerExtension
CC-MAIN-2020-16
en
refinedweb
Two-way data binding in Angular using the ngModel directive

In this tutorial, we are going to learn about two-way data binding in Angular using the ngModel directive.

What is two-way data binding?

Two-way data binding means data flows in both directions: when the data in the view changes, it updates the model, and when the data in the model changes, it updates the view.

ngModel directive

Angular provides us the ngModel directive, with which we can sync the data in both directions.

Example

To use the ngModel directive inside our components, we first need to import the FormsModule in the app.module.ts file and add it to the imports array.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, FormsModule],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule {}

Let's use the ngModel directive.

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  myname = 'Angular';
}

<div>
  <input [(ngModel)]="myname" />
  <h1>{{myname}}</h1>
</div>

In the above code, we have added [(ngModel)]="myname", which means we are binding the input element in both directions to the myname property.

As you can see in the image above, two-way data binding works successfully.
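Under the hood, [(ngModel)] is shorthand for a property binding plus an event binding: [ngModel]="myname" (ngModelChange)="myname = $event". The following plain TypeScript sketch (not Angular code; FakeInput is an invented stand-in for an input element) simulates that loop:

```typescript
// Stand-in for an <input> element: a value plus a change event.
class FakeInput {
  value = '';
  private listeners: Array<(v: string) => void> = [];

  onInput(fn: (v: string) => void): void {
    this.listeners.push(fn);
  }

  // Simulates the user typing into the input.
  type(v: string): void {
    this.value = v;
    this.listeners.forEach(fn => fn(v));
  }
}

class Component {
  myname = 'Angular';
}

const comp = new Component();
const input = new FakeInput();

// model -> view: the property binding writes the model into the input.
input.value = comp.myname;

// view -> model: the event binding writes user input back into the model.
input.onInput(v => { comp.myname = v; });

input.type('Hello');
console.log(comp.myname); // "Hello"
```

Angular's FormsModule wires up both halves of this loop for you whenever you use the banana-in-a-box syntax.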
https://reactgo.com/angular-two-way-databinding/
CC-MAIN-2020-16
en
refinedweb
Overview

This is a simple Python script to check which external IP address you have. First we are importing the urllib and re modules.

Check your IP Address

The URL that we will use to check the IP address is:

import urllib
import re

print "we will try to open this url, in order to get IP Address"

url = ""
print url

request = urllib.urlopen(url).read()

theIP = re.findall(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", request)

print "your IP Address is: ", theIP

Happy scripting
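The interesting part of the script is the regular expression, and it can be exercised without any network access. The sketch below (Python 3; the helper name and the sample HTML are my own, not part of the original script) factors the extraction into a testable function:

```python
import re

# Matches dotted-quad IPv4 addresses such as 203.0.113.7.
IP_PATTERN = re.compile(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}")

def extract_ips(text):
    """Return all IPv4-looking strings found in a response body."""
    return IP_PATTERN.findall(text)

# Example: the kind of HTML an IP-echo service might return.
sample = "<html><body>Current IP Address: 203.0.113.7</body></html>"
print(extract_ips(sample))  # ['203.0.113.7']
```

In the real script you would pass the downloaded page body to extract_ips() instead of the hard-coded sample string.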
https://www.pythonforbeginners.com/code-snippets-source-code/check-your-external-ip-address
CC-MAIN-2020-16
en
refinedweb
Signal Framework for Java ME

Introduction

Signal Framework is an open-source IoC, AOP and MVC framework for Java ME (J2ME) based on Spring. The framework has been designed to overcome the limitations of the CLDC API that prevent IoC containers implemented for Java SE from running on J2ME implementations. Signal Framework uses regular Spring XML configuration files, allowing developers to leverage existing tools and skill sets while coping with the limitations of J2ME. The following diagram illustrates the architecture of the framework:

The name of the framework was inspired by Antenna and, obviously, the Spring Framework.

The IoC container

The reflection support in the CLDC API is very limited compared to Java SE. The API only allows objects to be constructed with default (no-arg) constructors. It is not possible to pass arguments to constructors, invoke methods, access fields or create dynamic proxies. To overcome those limitations, the IoC framework reads context configuration files when an application is compiled and generates Java code responsible for instantiating a context at runtime. When a J2ME application is started, it executes the generated code instead of loading any configuration files. In effect, an application context is created at runtime without relying on XML parsing or advanced reflection features. The footprint of the generated code is very small, because the IoC runtime only consists of approximately 10 classes. The generated code does not depend on any Spring libraries.

Even though the framework is geared towards the J2ME platform, the IoC container can be used in any Java application that needs to take advantage of Spring without relying on reflection or XML parsing (e.g. on the Android or GWT platforms).
Signal Framework supports the following features of the Spring IoC container:

- Spring XML configuration files
- singleton beans
- dependency injection through constructor arguments and properties
- autowiring
- lazy initialization
- bean post-processors (com.aurorasoftworks.signal.runtime.core.context.IBeanProcessor, an equivalent of org.springframework.beans.factory.config.BeanPostProcessor)
- lightweight AOP based on auto-generated proxy objects
- com.aurorasoftworks.signal.runtime.core.context.IInitializingBean, an equivalent of org.springframework.beans.factory.InitializingBean
- com.aurorasoftworks.signal.runtime.core.context.IContextAware, an equivalent of org.springframework.beans.factory.BeanFactoryAware

The code generator is normally invoked by a Maven plugin that requires two parameters: a name of a Spring configuration file and a name of an output Java class. The generator does support the <import resource="..."> tag so it is possible to process multiple context configuration files with a single invocation of the plugin. The following example demonstrates a context configuration file and a corresponding generated Java class:

```xml
<beans xmlns="http://www.springframework.org/schema/beans">
    <bean id="greeter" class="com.aurorasoftworks.signal.examples.context.core.Greeter">
        <constructor-arg ref="msgSource" />
    </bean>
    <bean id="msgSource" class="com.aurorasoftworks.signal.examples.context.core.MessageSource">
        <constructor-arg>
            <map>
                <entry key="greeting" value="Hello!" />
            </map>
        </constructor-arg>
    </bean>
</beans>
```

```java
public class ApplicationContext extends com.aurorasoftworks.signal.runtime.core.context.Context {

    public ApplicationContext() throws Exception {
        /* Begin msgSource */
        ApplicationContext.this.registerBean("msgSource",
            new com.aurorasoftworks.signal.examples.context.core.MessageSource(
                new java.util.Hashtable() {{ put("greeting", "Hello!"); }}
            ));
        /* End msgSource */

        /* Begin greeter */
        ApplicationContext.this.registerBean("greeter",
            new com.aurorasoftworks.signal.examples.context.core.Greeter(
                ((com.aurorasoftworks.signal.examples.context.core.MessageSource)
                    ApplicationContext.this.getBean("msgSource"))
            ));
        /* End greeter */
    }
}
```

The framework does not currently support Ant, but Ant tasks could be easily implemented as thin wrappers for the existing generator.

The AOP framework

In addition to making IoC tricky, the limitations of the CLDC API prevent existing AOP frameworks from running on Java ME devices. The AOP Alliance API and popular AOP libraries like AspectJ and JBoss AOP depend on java.lang.reflect.* types that are not present in Java ME. Moreover, AOP implementations often rely on custom class loaders and/or dynamic proxies, neither of which are supported by CLDC. The AOP implementation provided by the Signal Framework is designed to work on Java ME devices: it has a relatively small footprint, it only relies on CLDC classes and it allocates as few temporary objects at runtime as possible. Those features come at a price, however: the framework is not as feature-rich as its desktop/enterprise counterparts and it only supports interception of interface method calls. The AOP framework overcomes the limitations of Java ME by relying on code generation. When an application context is created at build time, the framework identifies bean classes that implement the com.aurorasoftworks.signal.runtime.core.context.proxy.IProxyTarget interface and generates proxies for them.
Those proxies are in turn used to intercept calls and execute method interceptors. This concept is similar to dynamic proxies supported by Java SE. Most classes included in the AOP framework have corresponding types in Java SE, as shown below: Table 1. Equivalents of Signal AOP types Proxies are normally created by the ProxyFactory class. The most common way of creating a proxy is passing a list of interceptors to the IProxyFactory#createProxy(IProxyTarget target, IMethodInterceptor [] interceptors) method. An object returned by that method implements the same interfaces as the passed target object and can be safely cast to those interfaces. The interception works in the following way: Due to its simplicity the framework does have some limitations. Proxies created by the AOP framework reuse array instances when passing arguments to an invocation handler so that temporary objects do not need to be created. This, however, implies that proxies need to be synchronized (this is handled by the code generator). In a multithreaded application this can lead to performance issues. The proxy code is synchronized on a proxy instance so multiple proxies of the same object can be used concurrently without blocking any threads. When primitive values are passed to or returned from a proxy method they need to be wrapped with object types like java.lang.Integer before being passed. This is the only scenario that requires allocation of temporary objects. Apart from wrapping primitives the AOP framework does not need to allocate any temporary objects and can be used safely regardless of the quality of a garbage collector. The MVC framework The MVC framework is based on the IoC and AOP capabilities described in the previous sections. The framework supports both MIDP and LWUIT APIs and it is easy to add support for other view technologies if needed. The framework does not impose any restrictions on the domain model or views, as long as either LWUIT or MIDP is used.
Instead, it is designed to make implementation of controllers easy. The most important features implemented in the controller layer are lazy initialization of controllers (and corresponding views, if any) and declarative navigation rules defined in an IoC configuration file. A controller is a simple bean defined in an IoC application context. All controllers need to implement the com.aurorasoftworks.signal.runtime.ui.mvc.IController interface, or subtypes thereof. A controller manages part of an application workflow, which could be a single view or a complex wizard consisting of multiple steps. The most common type of a controller is a view controller. View controllers for MIDP and LWUIT views need to implement com.aurorasoftworks.signal.runtime.ui.mvc.midp.IViewController and com.aurorasoftworks.signal.runtime.ui.mvc.lwuit.IViewController, respectively. The framework automatically forwards commands ( javax.microedition.lcdui.Command and com.sun.lwuit.Command) to a controller of the view that dispatched them. Multiple view controllers can point to the same view. The second type of a controller is a flow controller that orchestrates a reusable process, typically a wizard consisting of multiple screens. Upon completion a flow returns a result to its caller, much like a method invocation. Flow controllers need to implement the com.aurorasoftworks.signal.runtime.ui.mvc.IFlowController interface. Flows can be chained together, i.e. a flow can start other flows, including instances of the same flow class. Dependencies between controllers are defined as bean dependencies, so that controllers can "fire" events by invoking interface methods. This results in loose coupling of controllers, declarative definition of navigation rules and type safety. Interface method calls are intercepted by the framework to transparently perform required processing like displaying a correct view, registering command handlers and starting or stopping a flow.
In most cases it is desirable to have controllers and views lazily initialized. If so, an application context needs to be configured to do so:

```xml
<beans xmlns="http://www.springframework.org/schema/beans">
    <!-- ... -->
</beans>
```

The most important bean in an MVC application is a dispatcher. The dispatcher is a framework object that intercepts method calls and performs needed processing like displaying a correct view or registering a command listener:

```xml
<bean id="dispatcher" class="com.aurorasoftworks.signal.runtime.ui.mvc.midp.Dispatcher" />
```

There are two implementations of the dispatcher concept: com.aurorasoftworks.signal.runtime.ui.mvc.midp.Dispatcher and com.aurorasoftworks.signal.runtime.ui.mvc.lwuit.Dispatcher, used in MIDP and LWUIT applications, respectively. A definition of a dispatcher is followed by controller and view definitions, as shown below. Dependencies between controllers are defined as bean dependencies.

```xml
<bean id="accountListViewCtl" class="com.aurorasoftworks.signal.examples.ui.mvc.midp.AccountListViewController">
    <constructor-arg
    <constructor-arg
    <property name="newAccountEvent" ref="newAccountCtl" />
    <property name="editAccountEvent" ref="editAccountCtl" />
</bean>
```

In the example above a controller named accountListViewCtl supports two types of events: newAccountEvent and editAccountEvent, which are wired to two other controllers as bean properties. Those events are simply references to the interfaces shown below. The controller fires events by invoking interface methods which are in turn intercepted by the dispatcher.

```java
public interface INewAccountEvent {
    void onNewAccount();
}

public interface IEditAccountEvent {
    void onEditAccount(IAccount account);
}
```

The following diagram illustrates the sequence of calls that are executed to handle user input in the scenario described above: A command selected by a user is sent to the dispatcher ( CommandListener.commandAction), which in turn forwards it to the active controller ( ICommandHandler.handleCommand).
AccountListViewController reacts to the command by firing the INewAccountEvent.onNewAccount event that gets intercepted by the dispatcher. The dispatcher deactivates AccountListViewController, changes the current view to the one associated with NewAccountViewController and activates it. The transition from one controller to another is correctly reflected in the state of the application: NewAccountViewController is the active controller and its view is displayed on the screen. The source code distribution of the Signal Framework contains a sample GUI application implemented in both MIDP and LWUIT technologies that demonstrates the usage of MVC concepts.

Summary

The framework has been designed to balance two conflicting requirements: reuse as much of the Spring code as possible and make it easy to add support for other IoC containers in the future, if needed. The functionality described in this article was implemented with a reasonably small effort: approximately 10 000 LOC, excluding sample applications included in the source code distribution. Some tradeoffs had to be made because of the limitations of the Java ME platform; most notably the framework does not support annotations and instead requires application classes to implement framework interfaces in some cases. For the same reasons, generics are not used in the framework API. After several beta releases the framework is now reasonably mature and contains all major features that have been planned. A production release should be available by January 2010. Additional information on the framework can be found at the following location:
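The interface-call interception described in the AOP section can be sketched in a few lines. This is a Python stand-in for the generated proxies, not Signal's actual API; all names here are illustrative.

```python
class MethodInvocation:
    """One intercepted call: walks the interceptor chain, then hits the target."""
    def __init__(self, target, method_name, args, interceptors):
        self.target = target
        self.method_name = method_name
        self.args = args
        self._chain = list(interceptors)   # copy, so the proxy can be reused

    def proceed(self):
        if self._chain:
            return self._chain.pop(0)(self)
        return getattr(self.target, self.method_name)(*self.args)

class Proxy:
    """Wraps a target object; every method call goes through the interceptors."""
    def __init__(self, target, interceptors):
        self._target = target
        self._interceptors = interceptors

    def __getattr__(self, name):
        def call(*args):
            return MethodInvocation(self._target, name, args,
                                    self._interceptors).proceed()
        return call

calls = []
def logging_interceptor(invocation):
    calls.append(invocation.method_name)  # record the intercepted call...
    return invocation.proceed()           # ...then continue the chain

class Greeter:
    def greet(self, who):
        return "Hello, " + who + "!"

proxy = Proxy(Greeter(), [logging_interceptor])
print(proxy.greet("ME"))  # Hello, ME!
```

Each call builds one invocation object that walks the interceptor chain before reaching the target, which is essentially what the generated proxy classes do on the device (minus the array reuse and synchronization described above).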
http://www.javaexpress.pl/article/show/Signal_Framework_for_Java_ME
Single-thread Code

```cpp
#include <cmath>
#include <iostream>
#include <random>
#include <chrono>
#include <fstream>

int main(int argc, char* argv[]) {
    size_t N;
    double learn_rate;
    size_t epoches;
    double correct_b = 1.0;
    double correct_m = 0.5;

    // N, learn_rate and epoches are read elsewhere, and x[i], y[i] hold N
    // sample points near y = correct_m * x + correct_b (generation elided)

    std::ofstream ofs("points.csv");
    ofs << "x,y" << std::endl;
    for (size_t i = 0; i < N; i++) {
        ofs << x[i] << "," << y[i] << '\n';
    }
    ofs.close();

    ofs.open("prediction.csv");
    ofs << "m,b" << std::endl;

    // estimated b, m
    double b = 0;
    double m = 0;

    // gradient descent
    for (size_t i = 0; i < epoches * N; i++) {
        int idx = i % N;
        double p = b + m * x[idx];
        double err = p - y[idx];
        b = b - learn_rate * err;
        m = m - learn_rate * err * x[idx];
        ofs << m << "," << b << '\n';
    }
}
```

Optimized with DAAL

Code

Performance

Multi-thread

This code takes the single-threaded version above and applies TBB to leverage the power of threading to increase performance.
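Since the sample-generation part of the listing above did not survive intact, here is the same per-sample gradient descent loop re-sketched in Python on noiseless points from y = 0.5 * x + 1.0 (the script's correct_m and correct_b), just to confirm that the update rule recovers the slope and intercept:

```python
# Fit y = m*x + b by per-sample gradient descent, as in the C++ loop above.
N = 100
x = [i / N for i in range(N)]
y = [1.0 + 0.5 * xi for xi in x]      # correct_b = 1.0, correct_m = 0.5

learn_rate, epochs = 0.05, 2000
b = 0.0
m = 0.0
for i in range(epochs * N):
    idx = i % N
    err = (b + m * x[idx]) - y[idx]   # prediction error on one point
    b -= learn_rate * err             # step the intercept
    m -= learn_rate * err * x[idx]    # step the slope

print(round(m, 2), round(b, 2))       # 0.5 1.0
```

With consistent (noiseless) data and a small learning rate, the cyclic updates contract toward the exact fit, so the estimates converge to the true parameters.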
https://wiki.cdot.senecacollege.ca/w/index.php?title=DPS921/Franky&oldid=136169
Application architecture was neglected by official guidelines for Android for a very long time. In spite of that, interest in architecture among members of the Android community has been growing steadily over the years. Official guidelines and tools slowly caught up with the community interests, culminating in the recent announcement of a set of libraries called Android Architecture Components by Google. While the motivation behind Android Architecture Components is clear, the relationship between these libraries and application architecture is not so evident. In this session Vasiliy Zukanov helps us understand what software architecture is and what it isn't, and discusses several potential pitfalls associated with Android Architecture Components.

Introduction

My name is Vasiliy, and I work as an independent software consultant and freelance developer. I'm here to talk about architecture components and show you ways in which those components might harm your architecture.

What is Architecture?

Before we talk about architecture components, we need to understand what architecture is, in particular, what is considered an architecture decision. Could it be:

- Whether the application is expected to function online or offline?
- When packaging an application, how should you distribute top-level packages in the codebase?
- Dependency injection.
- Unit testing.

Approaches to Architecture

This led to the emergence of several approaches to application architecture. One is to architect everything - this was very popular in the 90s and early 2000s. The other extreme is to leave no time for architecture at all. Another approach is to architect some things.

The first two approaches are not good because they are extreme. What we probably want is something in between: architecting some things in our applications, not over-architecting and not under-architecting. But how do we define this "something"? How do we find what we need to architect?
Architecture in Relation to Change

Architecture is our attempt to manage risks involved in changing requirements. As with any other risk-management activity, we cannot be prepared for all the possible risks that can come into play. We need to choose a subset of possible future changes that we are optimizing our application for, and this subset will constitute the application architecture.

Architecture and Libraries

How do the different libraries we might use in our applications fit in this context? Can libraries become part of our application's architecture? The answer is yes, if the libraries that we integrate into our applications address the core domain problems that we are going to solve. An example of libraries that address core concepts of our applications is third-party login libraries. Once we decide that we want to support Google login, we need to make changes in the clients, we need to make changes on the server, and we cannot easily reverse this decision. In most cases, however, libraries should not become part of an application's architecture.

Android Architecture Components

Are Android Architecture Components part of an application's architecture? Android Architecture Components is a set of libraries which provide very general functionality but are not related to any specific business domain. Therefore, these libraries should not become part of your application's architecture.

LiveData

In order to understand what LiveData is, we need to get back to our usual and standard Observer design pattern. We have three main entities in the Observer design pattern: the Client, which is interested in some events that happen in the Observable; the Observable; and the Observer interface, which the Client implements. The Client registers itself with the Observable, and the Observable notifies the Client through this interface when the events that the Client was interested in happen. This is probably one of the most used design patterns in the world. What's LiveData, then?
We start with the same three classes, but we add another one, called LiveData. The Client still implements the Observer interface, but instead of directly registering itself with the Observable, the Client gets a reference to LiveData from the Observable and then registers itself with the LiveData. The Observable then pushes data changes into the LiveData and the LiveData is the one responsible for notifying the Client. The Observer interface is no longer under our control. Before, when we implemented our custom Observer design patterns, we decided how the Observer interface looks. However, if we go the LiveData way, the Observer interface is a closed interface provided by the framework. We do not have control over this interface anymore.

Custom Observer Interface

When we implement a custom Observer pattern and want to get just data from the Observable, the interface can look like this:

```java
public interface ReverseGeocodeObserver {
    boolean onReverseGeocodeLocationChanged(String location, long ageMs);
    void onReverseGeocodeError(int errorStatus);
}
```

If we use LiveData instead of a regular Observer pattern, we will write less code; the framework provides a single, closed observer interface:

```java
public interface Observer<T> {
    void onChanged(@Nullable T t);
}
```

This scheme lowers the risk of memory leaks: LiveData will take care of unregistering itself when the Observer dies. But it comes at a cost: the scheme becomes much more complicated in regards to the Lifecycle itself. LiveData promises to release us from understanding and handling the Lifecycle of activities and fragments. However, in practice, I would argue that it does not achieve that and only complicates the matter. For example, if you look at the API of LiveData, you will notice that it has several methods for registration. One method takes LifecycleOwner as a parameter and the other method, observeForever, takes no LifecycleOwner as a parameter. LiveData also means loss of control over the Observer interface. When we implemented the custom Observer pattern, we had total control over the Observer interface.
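The arrangement described above (the Observable pushes into a value holder, and the Client observes the holder) can be sketched in a few lines. This is a conceptual Python model of the idea, not the Android API:

```python
class LiveData:
    """A value holder that notifies registered observers on each change."""
    def __init__(self):
        self._value = None
        self._observers = []

    def observe(self, on_changed):
        self._observers.append(on_changed)
        if self._value is not None:        # replay the last value to late observers
            on_changed(self._value)

    def set_value(self, value):
        self._value = value
        for on_changed in self._observers:
            on_changed(value)

class LocationSource:                      # the Observable
    def __init__(self):
        self.location = LiveData()

    def update(self, name):
        self.location.set_value(name)      # push changes into the holder

seen = []
source = LocationSource()
source.location.observe(seen.append)       # the client registers with LiveData
source.update("Grand Rapids")
print(seen)  # ['Grand Rapids']

late = []
source.location.observe(late.append)       # a late observer gets the last value
print(late)  # ['Grand Rapids']
```

The replay to late observers is the value-holder behavior that distinguishes LiveData from a plain Observable.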
LiveData sits at the interface of your Observable. If you want to refactor it to the usual Observer design pattern, you will need to change your tests and refactor the codebase, and take care of bugs that slip in between.

ViewModel

ViewModel has nothing in common with the MVVM architectural pattern. It is misleadingly named and does not describe its real intent: a ViewModel is kept upon configuration change, so it is an object which is logically scoped to an activity or fragment.

Configuration change without ViewModel

This is how Android developers handled configuration changes before ViewModel:

```java
public class MyActivity extends Activity {

    private String mUserId;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            mUserId = savedInstanceState.getString("user_id");
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putString("user_id", mUserId);
    }
}
```

Questioning Motivation Behind ViewModel

Before ViewModel, configuration change was handled by saving and restoring state, and that is arguably a simpler solution. ViewModel comes as an additional solution to the previous mechanism, as opposed to a replacement. The number of applications that need to handle screen rotation is also smaller than you think. Here is an exercise I like to ask developers to do: Start with your home screen and swipe through the screens. Count the number of applications that you routinely interact with in landscape mode, and divide it by the total number of applications that you have on these screens. The resulting percentage is the fraction of applications that need to take care of rotation for you as a user. Optimizing for rotation is generally a waste of time, and I will wait for some analytics from my users to decide if this is necessary.

ViewModel Pros and Cons

ViewModel allows us to handle configuration change more easily.
But the disadvantage is the additional effort and the increased chance of getting a save-and-restore bug. Suppose there is a reference to a context object inside our ViewModel: that's a memory leak. That is something we didn't need to deal with beforehand. And the reason why we have this memory leak is that ViewModel does not simplify the lifecycle. ViewModel takes the lifecycle of the activity, adds on top of it the lifecycle of the ViewModel itself, and forces you to think about two lifecycles. If you do not, you will end up having a memory leak.

Summary

Android Architecture Components are not architecture components. They are design components at best. If you use them, you may increase the complexity of your codebase, for a questionable gain. There's really no evident reason to use them. In my opinion, they address an almost non-existent problem. And surprisingly, if you use them, they will make the future changes that you might want to make in response to changing requirements harder.

Lastly, check my blog if you want to read some additional interesting and, in many cases, controversial ideas, including a related post on Android Architecture Components.

About the content

This talk was delivered live in October 2017 at Mobilization. The video was transcribed by Realm and is published here with the permission of the conference organizers and speakers.
https://academy.realm.io/posts/android-architecture-components-considered-harmful-mobilization/
QTreeView doesn't show any content

- Joel Bodenmann

Hello folks,

I'm currently experiencing an issue with a QTreeView and a custom subclass of QAbstractItemModel: The model is working well and returns the data it's supposed to return (I confirmed that with qDebug() everywhere) but the QTreeView never shows any data. I experienced that issue multiple times in the past when I first started to work with Qt models and that was always due to missing calls to beginInsertRows() and endInsertRows() in my model subclass, but that's not the problem here. I also tried using other views such as QTableView and QListView. It's always the same: The model appears to operate properly internally but the view doesn't show any content. Interesting is the fact that when I move the mouse cursor inside the view I see that the data() method gets triggered all the time and according to the qDebug() it also returns the correct data (a valid non-empty QVariant object). I'd appreciate any help on this. Here are my sources:

Additional info:
- Qt 5.8.0 on Windows 10 using MinGW 5.3.0 32-bit
- The FindWidget::resultReady() method contains a qDebug() on the model's rowCount() which returns the expected non-zero value equal to the number of results that have been added to the model.

- Chris Kawa Moderators

I just skimmed through the code, but at a first glance there are at least a couple of bugs there:
- The reinterpret_cast usage is completely wrong. That's definitely not what you meant. reinterpret_cast is a "Stay away compiler, I totally know what I'm doing" cast i.e. it will always succeed no matter if the types are related in any way or not. Needless to say you are often casting things assuming you'll get null if the thing is not the thing, but what you actually get is a garbage pointer.
- You are not checking the role parameter in ResultsModel::data which means you return a string for everything, including things like background color, font or size hint.
You should only return that string if role == Qt::DisplayRole.
- Your implementation of rowCount creates a tree of infinite depth because you're only checking for the top level and returning the number of files. For the second level you return the number of lines. For the third level you return the number of lines of something... and so on.
- In the implementation of ResultsModel::addResult you should call index instead of createIndex. The only methods using createIndex() directly should probably be index() and parent().

I didn't have time to go through index() and parent() in detail, but by the sheer amount of for loops for a simple two-level tree I'm betting there's something wrong there too. Keep in mind that index() and parent() are the most often called methods. It can easily go into hundreds at a time in a medium size data set. They should be as simple as possible and I have yet to see a case where a loop over everything would be needed.

- Joel Bodenmann

Hello Chris, glad to see that you're still around! :)
- I agree with you that the use of reinterpret_cast is not proper here. This was meant to be just a very quick proof-of-concept implementation of a file search tool. I always find it somewhat inconvenient to pass pointers via QVariant so being a bit in a hurry I decided to go the good old "I know what I'm doing" way.
- The QTreeView is showing the expected data now that I added a check for role != Qt::DisplayRole in my data() method. I'm not sure how I could miss that. After all, it's not the first time I'm dealing with that. Probably one shouldn't do just quick proof-of-concept implementations of models :p
- Regarding the rowCount() implementation: In my opinion this is correct. After all, this model only supports two levels of depth. It's just a list of files that contain multiple sub-results in the form of line numbers. Or am I missing something? This is the first time I'm implementing a somewhat "makeshift" model. I regret it already.
- Good catch on the createIndex() instead of index() call in addResult(). I missed that one!

Thank you a lot for your help. Much appreciated!

- Joel Bodenmann

Err... I just realized what you were actually referring to regarding the issue with reinterpret_cast. Thanks Chris, much appreciated, as always! :)

- Chris Kawa Moderators

After all, this model only supports two levels of depth.

Well yeah, you and I know that. But how is the tree view supposed to know that? It's not like it understands your intent ;) The view discovers the number of levels by calling rowCount() until it returns 0, and you never do that (if there are any lines added) so it will let you expand and expand until it probably crashes at some point if you stumble upon an out-of-range index.

- Joel Bodenmann

Chris, thank you a lot for your time to look into this. I appreciate it a lot. I'll rewrite this properly. There's no point in going quick proof-of-concept on something like this.
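The rowCount() contract Chris describes is easy to model outside Qt. Here is a small Python sketch (hypothetical data shapes, not Qt's API) of a two-level model where the third level correctly reports zero rows, so the view stops descending:

```python
# results: {file_name: [line_numbers]}. An "index" is modeled as a
# (file_name, line) pair, with line = None for top-level (file) items.

def row_count(results, parent=None):
    if parent is None:                  # the invisible root: number of files
        return len(results)
    file_name, line = parent
    if line is None:                    # a file item: number of matching lines
        return len(results[file_name])
    return 0                            # a line item has no children: stop here

results = {"main.cpp": [3, 17], "util.cpp": [42]}
print(row_count(results))                       # 2
print(row_count(results, ("main.cpp", None)))   # 2
print(row_count(results, ("main.cpp", 3)))      # 0
```

Without the final `return 0`, the view would keep asking for children forever, which is exactly the infinite-depth bug from the thread.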
https://forum.qt.io/topic/84189/qtreeview-doesn-t-show-any-content
Anyone who knows me very well could probably tell you that I'm a pretty big fan of Particle, a provider of hardware and software components for building internet-connected products (IoT). I love their product suite because they have abstracted the common functions of IoT products into easy-to-use components while still allowing access to all the nitty-gritty details for those of us who need to get down to that level. Recently, I was working on a project where I needed to add a display to a device that I had built. I wanted to use the Particle Photon to present some status to the user, so I picked up a cheap OLED display on Amazon, wired it up, and within no time, I was rendering text and graphics on the display. Below, I will outline the process I followed to get it working.

Choosing a Display

Finding a display was the first step. I searched Amazon for "SPI OLED Display" and quickly found the product shown below. The one I bought was from a seller called HILetGo, but there were actually many sellers offering essentially the same thing. If you'd rather not go through Amazon, you can buy a similar device from Adafruit or SparkFun. These types of displays can be controlled by either I2C or SPI. Personally, I prefer working with SPI because it's easier to implement, and it's usually faster for transferring data. The resolution of this display is 128 x 64. Each pixel is either on or off; there is no color variation, but you can vary the brightness. You can also find displays with different color options. This display, and the others like it, all use the same display driver chip, the SSD1306. This chip accepts commands via I2C, SPI, or parallel interfaces. Open-source libraries are readily available for talking to them.

Wiring It Up

To communicate with this chip via an SPI interface, you will need five micro-controller pins. Three are the typical SPI signals for Clock, MOSI, and Chip Select. The other two are the DC (Data/Command) and Reset signals.
Additionally, the board requires power and ground as usual. This particular display can be powered from 3 to 5VDC, but not all of them accept that wide of a range, so check the datasheet. The labeling on these boards doesn't make it super-obvious which pins are for what purpose. That's because the pins are used for different purposes depending on which interface you are utilizing. When using the SPI interface, the signals map to the pin labels as follows:

SPI Clock: "D0"
SPI MOSI: "D1"
Data/Command: "DC"
Chip Select: "CS"
Reset: "RES"

Next, you'll need to figure out which pins to connect these to on your Particle device. The DC, CS, and Reset pins can be connected to any GPIO pins that you have available. The other two require the use of the SPI peripheral, so you'll need to check the Particle reference document to determine which pins connect to the SPI Clock and MOSI. For the Photon, those pins are A3 and A5, respectively. If for some reason those pins are unavailable, there is a second SPI peripheral available on the Photon.

Adafruit Libraries

Now that you have the display wired up to your Particle board, the next step is to pull in a library so that you can talk to it. Conveniently, Adafruit has written an open-source library called Adafruit_SSD1306 which is specifically designed for talking to this display driver. The Adafruit_SSD1306 is really just a thin wrapper on top of another library, Adafruit_GFX, which does most of the heavy lifting of rendering lines, shapes and fonts. The Adafruit_SSD1306 library handles the SPI/I2C communication and the formatting of the commands and data to send to the driver. If you're building your project with Particle's web-based IDE, then adding the library is as simple as clicking on the Libraries icon, searching for "Adafruit_SSD1306", and clicking Add. If you are using the Particle CLI to build your project, you can add the library by executing the following command: particle library add Adafruit_SSD1306.
After that, you’re ready to start writing code and printing things to your display. Sample Code To initialize the display, all you have to do is create an instance of the Adafruit_SSD1306 class and call the begin() function on it. The code below initializes the display driver and shows the Adafruit splash screen for two seconds. #include "Adafruit_SSD1306.h" #define OLED_DC A1 #define OLED_CS A2 #define OLED_RESET A0 static Adafruit_SSD1306 display(OLED_DC, OLED_RESET, OLED_CS); void display_init() { display.begin(SSD1306_SWITCHCAPVCC); display.display(); // show the splash screen delay(2000); display.clearDisplay(); // clear the screen buffer display.display(); } The Adafruit_GFX library can render text with various sizes and can even invert the pixels to produce a highlighted effect. The following code sets the font size and cursor position, then renders a few lines of text to the display. display.clearDisplay(); display.setTextSize(2); display.setTextColor(WHITE); display.setCursor(0,0); display.println("Atomic"); display.println("Object!"); display.println("SSD1306"); display.display(); Sorry for the poor images. My display is covered by a piece of plexiglass that has a couple small scratches on it! Rendering bitmap images is really easy, too. Start out with a bitmap image with a resolution that’s less than or equal to your display. Using a program like Adobe Photoshop, adjust the image so that every pixel is either white or black. Then, use a program like Image2Code to convert the image to an array of bytes. Finally, add the byte array to your program and call display.drawBitmap(...) to render the image. uint8_t const ao_logo[] = { ... output from Image2Code here ... }; display.clearDisplay(); display.drawBitmap(0, 0, ao_logo, 128, 64, WHITE); display.display(); There is a ton more that you can do with this library, including drawing lines, curves, importing different fonts, etc. Hopefully this is enough to get you started. Now go make pretty things! 
Resources

- Display on Adafruit
- SSD1306 Datasheet
- Tutorial on Adafruit
- Adafruit_SSD1306 Source Code
- Adafruit_GFX Source Code
https://spin.atomicobject.com/2017/10/14/add-oled-particle-device/
Unit Four
Savings and Investments: Your Money at Work

"It's possible to have your money work for you in two ways..."

In Unit 2, we talked a lot about how you earn money working at a job. In this unit, we're going to turn the tables: we'll focus on how you can make your money work for you! But where is this money going to come from? It's the money you save. It can be the money from the P.Y.F. ("pay yourself first") system you learned in Unit 3.

It's possible for money to work for you in two significant ways: through savings and investments. Many people think saving and investing are the same thing, but as you read on in this unit you'll discover there's a world of difference. If you really want to learn to maximize your dollars, you'll need to know when to save and when to choose investing.

Can You Believe?

With a partner, fill in the following:

- ___% of teenagers pay themselves first when they receive an allowance or get paid for work.
- ___% of teenagers usually put savings into a checking or savings account, while ___% put savings into a certificate of deposit, mutual fund, or stocks.
- ___% of teenagers have sought advice (usually from a parent) on how or where to best save or invest their money.
- Of all young people who are familiar with investing in the stock market, ___% are teenage guys and ___% are teenage girls.
- ___% of teenagers are saving money for college, and ___% are saving for a car.

Some of the questions you will be able to answer by the end of this unit are:

- How can you develop the habit of saving?
- What is the difference between saving and investing?
- What does "time value of money" mean, and why is it such a neat thing for you to know now?
- What's a quick way to find out how long it will take your money to double?
- What's the difference between a stock and a bond?
- How do mutual funds work?
Answer key: 49, 79, 5, 33, 51, 29, 42, 30

Overview: Savings? Investments? What's the Difference?

In Unit 3, you learned the technique of P.Y.F. Now you need to put that money somewhere. You could keep it in your pocket, or stash it under your mattress, or keep it in a jar or locked box in your room. But are these really good choices? Where are some of the best places to put the money you save? Depending on your goals and the amount of money you have, you could put it in a savings account, or you could begin to invest it. Understanding when to save and when to invest is an essential step.

The big difference between savings and investments is time. Savings is usually money you set aside for short-term goals. One reason you might save money now is so you have some money to invest later. Money deposited into a savings account is usually very safe and probably earns a small amount of money. Another neat thing about a savings account is that you can get your money out of the account whenever you want.

When you invest, you set your money aside for future income, benefit, or profit to meet long-term goals. When you invest your money, there is no guarantee that your money will grow or increase. The earnings or losses from investments are usually more than what you would make or lose in a savings account. Investors recognize that it usually takes a long time to earn the big bucks, so most of the time they are in it for the long haul.

Saving or Savings?

When you pay yourself first (P.Y.F.), that's saving. You can deposit your P.Y.F. money in savings, which is usually just a short-term parking place for your cash.

Why is Money Important?

Go ahead, do some blue-sky thinking about what you really want. What are some of the goals you have been writing and thinking about? Then, think of money as the tool to help you achieve those goals. Maybe what you would like is a new computer, ballet lessons, or a horse to ride. How about ski passes, golf lessons, or some fantastic jewelry? And yes, probably some day, a house of your very own. Connecting the money you have to the things you want can be done with good choices, time, a plan that balances risk, and your commitment to yourself. Do you know that many adults never reach their goals because they don't know what you are learning right now? They don't know how to make a plan and stick to it. The rest of this unit will help you learn what you'll need to know about saving and investing. Why? So you can make your dreams and goals a reality!

Time Value of Money

Let's take a look at how saving and investing work. Time value of money is the relationship between time, money, and rate of return (interest), and their effect on earnings growth. Let's check out an example of this idea. A dollar you receive in the future may be worth more or less than a dollar in your hand today. That dollar will be worth more if you invest it and it grows in value. For example, $1 today might increase in value to $1.15 in two years. That 15 cents is your earned interest. Earned interest is the payment you receive for allowing a financial institution or corporation to use your money.

On the other hand, if you leave that dollar in the locked box in your room for the next two years, it may not be worth as much at the end of those two years. Why? Because the value of that one dollar today might fall to just 90 cents in the next two years. That's because prices for goods and services usually increase over time. You know the lecture you get from your grandfather when he reminisces about the day when he paid ten cents for a pack of gum? Well, that's what economists refer to as inflation. The point is, the value of a dollar will change over time. Time is money.

Saving and investing are very important parts of your financial plan. How much your savings and investments earn over a period of time can be a significant factor in helping you achieve your goals. Three factors determine how much money will be available to meet your specific financial goals. These three important factors are time, money, and rate of interest.

1. The more time you have to save, the more money you will have at the end of the time period.
2. The more money you have to save, the more money you will have at the end of the time period.
3. The higher the rate of interest you can earn, the more money you will have at the end of the time period.

Pretty cool, huh? But the really neat thing is you have the dynamics of time and compound interest on your side, because you have more time to save and invest!

Compounding

When your money is working for you, it grows in value, or compounds. Compounding, or compound interest, is the idea of earning interest on interest. This is one of the greatest aspects of personal finance, so you should probably listen up and read on.

Assume you have $100 in an account earning 10% interest per year. At the end of that one year, you have $110 in your account. In year two, your account also earns 10%. How much do you have at the end of the second year? $120? No; as it turns out, you have $121. Where did that extra dollar come from? Compounding: the interest you earned in year one itself earns interest in year two, which is how you wind up with the $121. Albert Einstein was so impressed with this concept that he called compounding the 8th wonder of the world.

You don't have to be a genius to figure this compounding stuff out, and the super thing is you can begin to take advantage of it right now. Just include it as part of your financial plan. The important thing to remember is that the amount you save is not as important as establishing the savings habit now. Savings that earn interest can grow into an investment fund. Your money will be working for you by earning interest. Individuals who learn to pay themselves first generally have money when they need it.
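The $100-at-10% example works out the same way in code. Here is a small Python sketch (an illustration added to this text, not part of the original workbook) that grows a balance one year at a time:

```python
def compound(principal, rate, years):
    """Grow `principal` at `rate` (e.g. 0.10 for 10%), compounded once per year."""
    for _ in range(years):
        principal += principal * rate  # interest is earned on prior interest, too
    return principal

value = compound(100, 0.10, 2)
print(round(value, 2))  # the $100 grows to $121, not $120
```

The extra dollar over the simple-interest answer of $120 is exactly the interest earned on year one's $10 of interest.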
Now let's test your understanding of compound interest by completing Assignment 4.1.

Assignment 4.1: The Power of Compounding

Assume you have $10 today. Using the various interest rates listed in the table below, fill in the compound value of $10 for each of the time periods listed. For example, $10 growing at 4% is worth $10.40 after one year. For the second year, multiply $10.40 by 4% and add the result to $10.40, for a total of $10.82.

Value of $10   1 Yr     2 Yrs    4 Yrs    6 Yrs
4%             $10.40   $10.82
5%
6%
8%
10%

The Rule of 72

Mathematicians have come up with a simple rule based on this concept of compounding. It's called the Rule of 72, and it tells you how long it takes your money to double in value. This is an incredible way to earn more money without lifting a finger. Well, you may have to lift a finger to move some of your earnings from your savings account to an investment, but you get the drift.

Here's how it works. You divide 72 by an interest rate to determine the number of years it will take your money to double. For example, assume you can earn 6% on your money. How long will it take $100 to grow to $200?

72 / 6% interest = 12 years

That's right. At 6%, your money will double in value in 12 years. On the other hand, say you have a set time period in mind. You can figure out what interest rate you need to earn to double your money. If you have $200 today and need a total of $400 in eight years, what interest rate do you need to earn?

72 / 8 years = 9% interest

With eight years to invest, your money will double if you can earn 9%. Along with compounding and the Rule of 72, there are some other concepts you can use to your advantage with your savings and investments.

Key Investment Principles

Time

We touched on the importance of time earlier when we discussed the time value of money. But let's expand on that key point a little more. The fact is, the more time you have to reach your savings goal, the more money you will have at the end of that time.
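Before going further, it is worth noting that the Rule of 72 is an approximation of exact compound growth. This short Python sketch (an illustration added to this text, not part of the workbook) checks the 6% example against the exact answer:

```python
import math

def years_to_double_rule72(rate_percent):
    """Rule of 72 estimate of how long money takes to double."""
    return 72 / rate_percent

def years_to_double_exact(rate_percent):
    """Exact doubling time from compound growth: solve (1 + r)^n = 2 for n."""
    return math.log(2) / math.log(1 + rate_percent / 100)

print(years_to_double_rule72(6))           # 12.0 years, as in the example
print(round(years_to_double_exact(6), 1))  # about 11.9 years, so the rule is close
```

For everyday interest rates the rule lands within a few months of the exact answer, which is why it is such a handy mental shortcut.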
For example, assume you're 16 years old and decide to invest $1,000 a year (money you earned from your summer jobs) in an account that grows by 9% per year. You faithfully set aside the money each year for 10 years, but you decide to stop at age 25. At that time, you finally convince your best friends, who are also now 25, to start setting aside some money for their future. They begin now to put away $1,000 every year, and they also are able to earn 9% on their money. After 25 years, you all get together at your 50th birthday party and compare notes. Who has the most money? Figure 4.1 shows the surprising result. Even though your friends invested more than twice as much as you did, you end up with over $46,000 more. Why? You took advantage of time, by starting to save earlier.

Figure 4.1: The Advantage of Starting Early (The Impact of Time on the Value of Money)

You (saving early):          $1,000 a year from age 16 through 25 (a total of $10,000 invested)
Your friends (saving later): $1,000 a year from age 25 through 50 (a total of $25,000 invested)

Amount Available at Age 50:         You: $131,050    Your friends: $84,701
Difference Due to Starting Early:   $46,349

Risk and Return

When many people hear the word investment, they think of the stock market and the risks that go with it. Investments in the stock market do have risks, and you can certainly lose money on stocks, or any other investment, for that matter. But with higher risk comes the potential to earn higher rewards, or returns, on your investments. This risk-to-return relationship is another key investment principle. The more risk you take with your money, the greater the potential return you receive. However, the reverse is also true: less risk, less return on your money. Figure 4.2 (Risk & Return) shows this principle for some common categories.

Rate of return is how fast your money grows. It is a critical factor in the savings and investment world. Interest rate and rate of return are synonymous, so if you hear savvy investors referring to the interest rate, don't fear, because the NEFE High School Financial Planning Program has helped equip you to understand and talk with the best of them!

You read earlier that the more time you have, the less money you need to reach your goal. In a similar way, the higher your rate of return, the less money you need to reach a goal. Let's see how the Rule of 72 applies to higher rates of return by completing Assignment 4.2.

Assignment 4.2: The Impact of Higher Returns on Savings and Investments

Assume you have $100 to invest right now. Using the interest rates provided and your knowledge of the Rule of 72, determine how long it will take to double your money and write that amount in the appropriate column.

Rule of 72: The Approximate Frequency with which $100 Doubles at Specific Interest Rates

Interest Rate   6 Yrs   9 Yrs   12 Yrs   18 Yrs   24 Yrs
3%                                               $200
4%
6%
8%
12%

Let's look at another example. You challenge a friend to an investment contest. You each have $100 to invest, and there are two choices. Savings Account A grows at 4% per year for 10 years, while Investment B grows at 8% per year for the same period. Say you wisely choose Investment B. Who will have more money in 10 years? You or your friend? Look at Figure 4.3. Even though you both started with the same amount of money, at the same time, and let it grow for the same time period, Investment B is worth $68 more. Why? Because it grew at a higher rate of return.

Rather than trying to win a lot of money through games of chance, you can make your own luck through smart investment planning. The graph below ("On the Road to $1 Million") shows how long it will take to save $1 million at different rates of return, assuming you invest $2,000 per year. If you were to invest more, you'd reach your goal that much faster!
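The head start in Figure 4.1 can be verified with a short calculation. The Python sketch below is an illustration added to this text, not part of the workbook; the exact totals depend on when during the year each $1,000 deposit is made, so with end-of-year deposits assumed it reproduces the workbook's figures only approximately:

```python
def future_value(contribution, rate, contribute_years, total_years):
    """Deposit `contribution` at the end of each of the first `contribute_years`,
    then let the balance compound untouched until `total_years` have passed."""
    balance = 0.0
    for year in range(total_years):
        balance *= 1 + rate                 # a year of 9% growth
        if year < contribute_years:
            balance += contribution         # this year's $1,000 deposit
    return balance

you     = future_value(1000, 0.09, contribute_years=10, total_years=35)  # ages 16 to 50
friends = future_value(1000, 0.09, contribute_years=25, total_years=25)  # ages 25 to 50
print(round(you), round(friends))  # roughly $131,000 vs. $84,700
```

Even with the rounding differences, the gap of more than $46,000 in favor of the early starter comes straight out of the arithmetic.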
On the Road to $1 Million (graph: years needed to reach $1 million at various rates of return, investing $2,000 per year)

Diversification

People have different ideas about how much risk they should take with their money. Some are conservative and want to keep it some place safe, like a savings account. Others are more aggressive and are willing to invest it some place riskier, like the stock market. Regardless of your investment style, every wise investor knows that diversification is a critical element of any investment plan.

It's pretty simple, really. Diversification is the reduction of investment risk by spreading your invested dollars among several different investments. It's simply spreading your money around among different choices. When you divide your money up among different types of savings and investments, you reduce the chance that any one of them will really hurt you financially with a drop in value.

For instance, let's go back to Savings Account A and Investment B. Rather than put all $100 in just one investment, you decide to put $50 in each. Five years later, Investment B is worthless, due to a bad turn of events. Now, instead of losing all your money, you've just lost about $50. That's not good, but it's better than it would have been if all of your money had been in Investment B and you had lost the entire $100. That's an example of how diversification can help lower the risk for your investments.

There are two other possible challenges that your money will face, so stay tuned. In the next section you'll learn how to avoid a few monetary disasters!

Inflation and Taxes

We saw earlier how compounding can make money for you by earning interest on your earnings. Unfortunately, compounding can also work against you. When this unfortunate occurrence happens, it's called inflation. Inflation occurs when the price of goods and services rises. We've had inflation in our economy for decades, ranging from barely 0.5% to over 18% in a year. Fortunately, inflation usually averages between 3% and 4% per year. What does that mean, and why should you care?
It means that you're going to be paying more in the future for the same pair of tennis shoes, skateboard, purse, and chicken wings. Because we usually do have inflation, even in small amounts year after year, that becomes a problem. A dollar in the future won't buy as much as a dollar today. Think about the cost of a first-class postage stamp. In 1971, a first-class stamp cost 8 cents. How much is that stamp today? One of the reasons the stamp costs more is because of inflation. The same trend will probably hold true in the future for other goods, like food and clothing.

Here's an example of how compounding works for you (a positive rate of return, earning money) and against you (inflation, losing money) at the same time. Today, a large soft drink at your favorite fast-food place costs $1. You buy the soft drink but also decide to save some money for the future as well. So you put a dollar in your savings account, where it earns 5%. Next year, the dollar in your savings account is worth $1.05. You take your savings out and visit your favorite fast-food place again. You're ready to buy your favorite soft drink, until you realize the price has gone up to $1.10. Inflation has gone up faster than your earnings. Can it happen? You bet. It has, and it will, depending on the type of savings or investment you choose. The point is that inflation can work against your money, so you need to protect yourself against that risk whenever you can. Learn to invest wisely, follow the rate of inflation, and make sure your investment rates are higher than the rate of inflation.

Taxes are the other drain on your savings and investments. When you have income from a job, the federal and state governments collect their share of taxes from you, just like we discussed in Unit 3. When you have income from your savings and investments, the federal and state governments tax those earnings as well.
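The soft-drink example comes down to whether your rate of return beats the inflation rate. This Python sketch (an illustration added to this text, not from the workbook) computes what grown savings are worth in today's dollars:

```python
def purchasing_power(savings, return_rate, inflation_rate, years):
    """Value of the grown savings measured in today's dollars after inflation."""
    grown = savings * (1 + return_rate) ** years        # what your account shows
    price_level = (1 + inflation_rate) ** years         # how much prices rose
    return grown / price_level

# $1 saved at 5% while prices rise 10%: you fall behind
print(round(purchasing_power(1.00, 0.05, 0.10, 1), 3))  # about 0.955, less than the dollar you started with
```

Whenever the return rate exceeds the inflation rate, the result comes out above the amount you put in; whenever it falls short, as in the soft-drink story, your money buys less even though the account balance grew.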
When you earn interest in a savings account, the government taxes those earnings just like income from a job. If you buy and sell investments, like stocks, for instance, the government taxes you on any gains or profits you make. The rules vary depending on how long you've owned an investment, but the bottom line remains the same: you'll pay taxes on any earnings from your money. Are there types of savings and investments that can overcome inflation and minimize the impact of taxes? Yes there are, and that's the focus of the next section.

Savings and Investment Choices

There are many ways to put your money to work for you. The information that follows provides a broad spectrum of the choices available. All of these options have pros and cons, and some work better in certain situations. In general terms, people put their money to work for two reasons: income or growth. Income means they get paid in cash for holding certain types of investments. Growth means they hold an investment with the hope that it will increase in value over time.

In terms of months or even a few years, income investments tend to provide more reliable returns. They are less risky than growth investments in the short term, and their returns are lower accordingly. Over longer periods of time, such as several years or decades, growth investments offer higher returns. But the returns come with a price (higher risk), and there are no guaranteed returns.

Owner or Lender?

If you are a lender, you lend your money to a business or the government and receive interest. If you are an owner, you actually buy a piece of a business and hope the business goes up in value. Lenders typically take less risk than owners, so owners tend to get paid more, but there's no guarantee.

Let's look briefly at some specific examples. Within each broad category, we'll list typical savings or investment choices, ranging from low risk to higher risk. As you may recall from the earlier discussion on risk and return, that also means we'll be going from low return to potentially higher return.

Income Investments (Lenders)

Savings Accounts. Banks and credit unions pay investors to loan their money to these financial institutions. These payments are called interest, which can be defined simply as the price of borrowing money. Because the federal government guarantees the safety of these accounts, they have very low risk and thus tend to pay low interest rates. You can usually take your money out at any time with no restrictions.

U.S. Savings Bonds. The federal government pays cash to investors in the form of interest to loan it money, just like the banks and credit unions. But bonds are different than savings accounts. First of all, a bond is a formal agreement between a lender and a borrower covering a set time period. In exchange for loaning it money for months or years, the U.S. government agrees to pay you cash interest. If you cash in these bonds within five years of purchase, it will cost you a penalty. Usually the penalty is three months of lost interest. These bonds typically pay higher rates of interest than savings accounts.

Certificates of Deposit (CDs). Not to be outdone by the government, banks and credit unions have their own versions of bonds, called certificates of deposit. In exchange for loaning them your money for a set period of time (such as 3 months, 6 months, 1 year, 2 years, etc.), banks and credit unions pay you interest. The longer the term, the higher the rate of interest paid on the CD. CDs usually pay a slightly higher rate of interest than savings bonds. But like those bonds, you will lose a few months of interest if you cash in your CD early.

Money Market Accounts. Many banks, credit unions, and mutual fund companies offer money market accounts. These work like checking accounts and pay you a higher rate of interest than savings accounts.
Unlike CDs and savings bonds, you can take your money out whenever you want, and usually with no penalty. Some financial institutions may limit the number of checks you can write per month or may require a higher deposit into the account to open it.

Savings usually means checking accounts, savings accounts, U.S. savings bonds, CDs, or money market accounts: some type of account where your money is very safe and easily accessible.

Corporate and Government Bonds. Out of all the income investments, these bonds typically pay the highest interest rates. Government bonds tend to be safer than company bonds, so corporate bonds usually pay higher rates of interest. The time periods for these bonds can be 2 to 30 years. In general, the longer the time period, the higher the interest rate paid by the bond.

Growth Investments (Owners)

Stocks. Stocks are investments that represent ownership in a company. When a company first issues stock, it does so to raise money for itself. The company then puts that money to work to produce its product or service. It might buy some new equipment, or it may hire some new employees. The investors who buy the stock actually own a part of the company. The stock itself sells for a price. A stock buyer wants the price of the stock to increase over time. Eventually, the buyer will sell the stock. Ideally, the process goes from buying low to selling high. The difference between the purchase price and the selling price is the investor's earnings, which are also called a capital gain.

For example, assume an investor buys Stock XYZ for $10 a share. In 2001, the investor sells XYZ for $25 per share. The difference in price, $15, is the investor's profit, or capital gain. Over longer periods of time, such as 5 or 10 years or longer, stocks tend to generate higher rates of return than income investments. But because stocks can also lose value, they are considered to be riskier than income investments.
On the plus side, growth investments like stocks have historically earned rates of return that consistently exceed the rate of inflation.

Real Estate. Investors buy pieces of property, such as land or a building, in hopes of generating a profit. If you live in a house owned by a family member, your family owns real estate. There are many other forms of real estate investments, such as malls, apartment complexes, undeveloped land, commercial buildings, and farmland.

Collectibles. Collectibles are usually unique items that are relatively rare in number. Examples include paintings, sculptures, and other works of art. If you own a collection of baseball trading cards, you have collectibles. Just like stocks or real estate, collectors buy items they hope will go up in value over time. When they later sell the item at a higher price, they lock in their earnings in the form of capital gains. Because not many people trade collectibles, investors view them as very high in risk. What is popular and in demand one year may be out of vogue the next. Prices for collectibles can change quickly and dramatically. It helps if you really have expert knowledge about a particular collectible, such as classic cars, so you know if you're getting a good deal or not.

Mutual Funds

When it comes to making investment decisions, investors always have a choice. They can do it on their own, or they can hire a professional to make their money management choices for them. Investors who want professional management turn to mutual funds. A mutual fund pools money from several investors and uses the money to buy a particular type of investment, such as stocks. A fund manager, who is an investment expert, makes all of the buy and sell decisions for the investments in the fund. Because mutual funds own a variety of investments, investors enjoy the benefits of diversification, which we discussed earlier. For these and other reasons, mutual funds can be a great choice for investing.
Mutual funds are created for several different purposes or objectives. Some are designed to produce income and invest in bonds, CDs, and other income-producing items. Some are designed for growth and invest in stocks or real estate. Mutual funds invest in almost any area of the business world. If you can imagine it, there is probably a fund in existence that specializes in a particular type of business or product. There are funds that invest in technology, in food and agriculture, in government bonds, in foreign countries, in energy, and even in gold or other precious metals. So if you have a particular interest in a product or service and you want to invest in the companies in that industry, you can do so easily by hiring a professional money manager and investing in a mutual fund.

What You've Learned in Unit Four

- how you can develop the habit of saving;
- the difference between saving and investing;
- what time value of money means, and why it's of great value to you now;
- what compounding is;
- what the difference is between a stock and a bond;
- how to use the Rule of 72; and,
- how mutual funds work.

Action Steps: Get Started With Your Savings Habit Now!

Think about these three ideas, which could help you get started with your savings habit NOW. Check the one(s) you think will be the best for you, and write the date you are going to begin.

- Pay Yourself First: From each of your paychecks or allowance, deposit a set dollar amount, or percentage, into a savings account before using any of your money for anything else. You could save 5% or 10%, or you could save a set dollar amount such as $5, $10, or more.
- Catch Your Coins: At the end of each day, put all your loose change into a savings container. Once a month, deposit the coins from your container into a savings account.
- Bank Your Surprises: Whenever you receive unexpected money, such as a cash gift, put a portion of it into a savings account.
So, if you get your mortgage at a local bank, What to Do With a Windfall Episode # 511 What to Do With a Windfall Episode # 511 LESSON LEVEL Grades 6-8 Key topics Decision Making Investing Personal Financial Plan Entrepreneurs & Stories LA Sparks WNBA Basketball Candace Layla West Dance A KIDS GUIDE TO STOCKS AND OTHER INVESTMENTS A KIDS GUIDE TO STOCKS AND OTHER INVESTMENTS Recommended for students ages nine through 12 ou can do many things with the money you will earn and save during your lifetime. For example, you can put it Creating a Personal Financial Plan Creating a Personal Financial Plan Overview Setting goals are important and often used to measure success. However, simply setting goals does not ensure you will someday accomplish them. Achieving goals Lesson 6: Inheritance and Investing What s Your Story? Lesson 6: Inheritance and Investing What s Your Story? All the materials and information included in this presentation is provided for educational and illustrative purposes only and is presented with the Planning for Your Retirement STUDENT MODULE 6.1 RETIREMENT PLANNING PAGE 1 Standard 6: The student will explain and evaluate the importance of planning for retirement. Planning for Your Retirement Lindzi and Lezli will attend retirement Standard 6: The student will explain and evaluate the importance of planning for retirement. TEACHER GUIDE 6.1 RETIREMENT PLANNING PAGE 1 Standard 6: The student will explain and evaluate the importance of planning for retirement. Planning for Your Retirement Priority Academic Student Skills Personal Saving Power = Spending Power Saving Power = Spending Power By Greg Glandon Federal Reserve Bank of Atlanta Lesson Plan of the Year Contest, 2009 Third Place LESSON DESCRIPTION This lesson focuses on the value of saving money and on FPA 8. 2003 Financial Planning Association WHY IS INVESTING IMPORTANT? There is an important difference between saving and investing. 
You should save for short-term goals, but you need to invest for long-term goals. Saving is basically a form, Let s Get to Know Spread Bets Let s Get to Know Spread Bets Spread betting is pretty cool. Here are three reasons why. Even if you ve never traded before, you probably know how the financial market works buy in and hope it goes 10 Steps to Financial Freedom in Your Twenties and Thirties 1 Steps to Financial Freedom in Your Twenties and Thirties On our journey to obtain independence and achieve financial success, we usually prioritize having good educational experiences, a sound resume What are the risks of choosing and using credit cards? Credit Cards 2 Money Matters The BIG Idea What are the risks of choosing and using credit cards? AGENDA Approx. 45 minutes I. Warm Up: A Credit Card You Can t Pass Up? (10 minutes) II. Credit Card Advantages Personal. Savings & investments. Simple ways to get steady returns. Financial solutions. For life. Personal Savings & investments Simple ways to get steady returns Financial solutions. For life. Important information Please refer to the last page of this brochure for important information you should Conservative Investment Strategies and Financial Instruments Last update: May 14, 2014 Summary Conservative Investment Strategies and Financial Instruments Last update: May 14, 2014 Most retirees should hold a significant portion in many cases, 100% of their savings in conservative financial Part 5: Saving and Investing Money Part 5: Saving and Investing Money CHAPTER 13: Putting Your Money to Work Saving and Investing Let s discuss... $ Basic banking $ Saving money $ Investing money When it comes to saving and investing, Remember the Interest STUDENT MODULE 7.1 BORROWING MONEY PAGE 1 Standard 7: The student will identify the procedures and analyze the responsibilities of borrowing money. Remember the Interest Mom, it is not fair. 
If Bill can Retirement Planning EMPLOYER PLANS CALCULATING YOUR NEEDS INVESTMENTS DECISIONS GOALS What You Should Know About... Retirement Planning EMPLOYER PLANS CALCULATING YOUR NEEDS INVESTMENTS DECISIONS YourMoneyCounts No matter who you are or how much money you have, you re probably hoping Sell Your House in DAYS Instead of Months Sell Your House in DAYS Instead of Months No Agents No Fees No Commissions No Hassle Learn the secret of selling your house in days instead of months If you re trying to sell your house, you may not have Chapter 1 Personal Financial Planning The Money Plan Chapter 1 Personal Financial Planning The Money Plan Q: I am a high school student. I do not have money for investments or buying property. So what difference does it make how I spend my money Accounts. Checking. Name Block Date Name Block Date Checking Accounts Tasks Checking Accounts Notes... page 5 Debit Card Questions... page 7 Writing a Check Activity... page 8 Checking Simulation #1... pages 9-19 Checking Simulation #2... Income For Life Introduction Income For Life Introduction If you are like most Americans, you ve worked hard your entire life to create a standard of living for yourself that you wish to maintain for years to come. You have NAME: CLASS PERIOD: An Introduction to Stocks and Bonds 22.1 An Introduction to Stocks and Bonds There are many different ways to invest your money. Each of them has different levels of risk and potential return. Stocks and bonds are two common types of What Money Can Really Do What Money Can Really Do This program was designed with children in mind, specifically 1st and 2nd graders. It is meant to introduce children to money and how it works. In the lesson, we will talk about:» Learner Outcomes. Target Audience. Materials. Timing. Want more background and training tips? Invest Well Stocks and Bonds. Teens. Learner Outcomes Outcome #1: Participants will be able to identify what a stock is. 
Outcome #2: Participants will be able to describe what affects the value of a stock. Outcome #3: Participants will be Saving and Investing Teacher's Guide $ Lesson Ten Saving and Investing 01/11 saving and investing websites web sites for savings and investing The internet is probably the most extensive and dynamic source of information in Can You Afford to Take Investment Risks? Last update: October 13. 2011 Summary Can You Afford to Take Investment Risks? Last update: October 13. 2011 There are several important reasons why you should take significantly less investment risk during retirement than before retirement. After completing this chapter, you will be able to: Savings Accounts Chapter 30 Savings Accounts Section 30.1 Savings Account Basics Discuss the three reasons people save money. Describe compound interest. After completing this chapter, you will be able to: Section 30 Seven Things You Must Know Before Hiring a Real Estate Agent Seven Things You Must Know Before Hiring a Real Estate Agent 1 Introduction Selling a home can be one of the most stressful situations of your life. Whether you re upsizing, downsizing, moving across 3-2: Open a Savings Account MODULE 3: SAVINGS/SPENDING PLAN 3-2: Open a Savings Account Cast List Darryl Terri Millie Dunn, consumer financial expert with Lifetime Savings and Loan; 60-65, motherly Caucasian woman Synopsis Terri Making and Living Within a Budget NORTH DAKOTA PERSONAL FINANCE EDUCATION Making and Living Within a Budget Leader Guide Learner Objectives Students will: Understand how to organize personal fi nances and use a budget to manage cash fl By Steven Peeris, Research Analyst 1/12/2013 Personal Finance By Steven Peeris, Research Analyst NUS Students Investment Society NATIONAL UNIVERSITY OF SINGAPORE TIME IS MONEY TRADING TIME FOR DOLLARS What are we getting paid for when we Guide to Trading GUIDE TO TRADING GUIDE TO TRADING 1 Table of contents THE GUIDE...3 INTRODUCTION...4 GETTING STARTED...8 HOW TO TRADE... 
12 LADDER OPTION...20 ABOUT US...24 BASIC GLOSSARY...25 2 The Guide Dear client/investor We welcome
http://docplayer.net/8519006-U-n-i-t-f-o-u-r-savings-and-investments-your-money-at-work-it-s-possible-to-have-your-money-work-for-you-in-two-ways.html
See Section 10.1.3, “MessageChannel Metric Features” and Section 10.1.4, “MessageHandler Metric Features” below. This causes the automatic registration of the IntegrationManagementConfigurer bean in the application context. Only one such bean can exist in the context and it must have the bean name integrationManagementConfigurer if registered manually via a <bean/> definition. This bean applies its configuration to beans after all beans in the context have been instantiated. When JMX is enabled (see Section 10.2, “JMX Support”), these metrics are also exposed by the IntegrationMBeanExporter. Starting with version 5.0.2, the framework automatically detects whether there is a single MetricsFactory bean in the application context and, if so, uses it instead of the default metrics factory. Starting with version 5.0.3, the presence of a Micrometer MeterRegistry in the application context triggers support for Micrometer metrics in addition to the inbuilt metrics (the inbuilt metrics will be removed in a future release). Simply add a MeterRegistry bean of your choice to the application context. If the IntegrationManagementConfigurer detects exactly one MeterRegistry bean, it configures a MicrometerMetricsCaptor bean named integrationMicrometerMetricsCaptor. For each MessageHandler and MessageChannel, timers are registered. For each MessageSource, a counter is registered. This only applies to objects that extend AbstractMessageHandler, AbstractMessageChannel and AbstractMessageSource respectively (which is the case for most framework components). With Micrometer metrics, the statsEnabled flag has no effect, since statistics capture is delegated to Micrometer. The countsEnabled flag controls whether the Micrometer Meter s are updated when processing each message.
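Every registered meter is identified by a name plus a set of tags. Purely as an illustration (the helper below is hypothetical and is not part of Spring Integration or Micrometer), a channel send timer's id can be modeled as plain data:

```python
# Hypothetical helper modeling a meter id (name + tags) for a channel send
# timer; illustration only, not part of Spring Integration or Micrometer.
def send_timer_id(component_name, success, exception=None):
    return {
        "name": "spring.integration.send",
        "tags": {
            "type": "channel",
            "name": component_name,
            "result": "success" if success else "failure",
            "exception": "none" if exception is None else exception,
        },
    }

# A failed send on "inputChannel" attributed to a MessageDeliveryException:
meter = send_timer_id("inputChannel", False, "MessageDeliveryException")
```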
The Timer Meters for send operations on message channels have the following name/tags:

name: spring.integration.send
tag: type:channel
tag: name:<componentName>
tag: result:(success|failure)
tag: exception:(none|exception simple class name)
description: Send processing time

(A failure result with a none exception means the channel send() operation returned false.)

The Counter Meters for receive operations on pollable message channels have the following names/tags:

name: spring.integration.receive
tag: type:channel
tag: name:<componentName>
tag: result:(success|failure)
tag: exception:(none|exception simple class name)
description: Messages received

The Timer Meters for operations on message handlers follow the same convention, with the tag type:handler. In addition, gauges report component counts: spring.integration.channels - the number of MessageChannels in the application; spring.integration.handlers - the number of MessageHandlers in the application; spring.integration.sources - the number of MessageSources in the application. These legacy metrics will be removed in a future release; see Section 10.1.2, “Micrometer Integration”. The statistics maintained for message handlers include simple counters (message count and error count) and an estimate of the average send duration. A strategy interface MetricsFactory has been introduced, allowing you to provide custom channel metrics for your MessageChannel s and MessageHandler s. By default, a DefaultMetricsFactory provides the default implementations of MessageChannelMetrics and MessageHandlerMetrics described above. Also see Section 10.1.2, “Micrometer Integration”. Several components, such as the Delayer, QueueChannel and PriorityChannel, are backed by a message group store:
This defaults to a SimpleMessageGroupFactory which produces SimpleMessageGroup s based on the GroupType.HASH_SET ( LinkedHashSet) internal collection. Other possible options are SYNCHRONISED_SET and BLOCKING_QUEUE, where the last one can be used to reinstate the previous SimpleMessageGroup behavior. Also the PERSISTENT option is available. See the next section for more information. Starting with version 5.0.1, the LIST option is also available for use-cases where the order and uniqueness of messages in the group don't matter. Starting with version 4.3, all persistent MessageGroupStore s retrieve MessageGroup s and their messages from the store in a lazy-load manner. In most cases this is useful for the correlation MessageHandler s (Section 6.4, “Aggregator” and Section 6.5, “Resequencer”), where it would be an overhead to load the entire MessageGroup from the store on each correlation operation. To switch off the lazy-load behavior, the AbstractMessageGroupStore.setLazyLoadMessageGroups(false) option can be used in the configuration. Our performance tests for lazy-load on the MongoDB MessageStore (Section 23.3, “MongoDB Message Store”) and an <aggregator> (Section 6.4, “Aggregator”) with a custom release-strategy demonstrate these results for 1000 simple messages:

StopWatch 'Lazy-Load Performance': running time (millis) = 38918
-----------------------------------------
ms     %     Task name
-----------------------------------------
02652  007%  Lazy-Load
36266  093%  Eager
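A quick sanity check on the StopWatch report above: the reported task times correspond to roughly a 13x gain for lazy-load over eager loading.

```python
# Figures taken from the StopWatch report above (milliseconds).
total_ms = 38918
lazy_ms = 2652
eager_ms = 36266

# Percentages as printed in the report (rounded to whole percent).
lazy_pct = round(100 * lazy_ms / total_ms)    # 7
eager_pct = round(100 * eager_ms / total_ms)  # 93

# Relative gain of lazy-load over eager loading.
speedup = eager_ms / lazy_ms                  # about 13.7
```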
With Java and Annotations, the Control Bus can be configured as follows:

@Bean
@ServiceActivator(inputChannel = "operationChannel")
public ExpressionControlBusFactoryBean controlBus() {
    return new ExpressionControlBusFactoryBean();
}

Or, when using Java DSL flow definitions:

@Bean
public IntegrationFlow controlBusFlow() {
    return IntegrationFlows.from("controlBus")
            .controlBus()
            .get();
}

Or, if you prefer Lambda style with automatic DirectChannel creation:

@Bean
public IntegrationFlow controlBus() {
    return IntegrationFlowDefinition::controlBus;
}

In this case, the channel is named controlBus.input. As described in Section 10, an orderly shutdown first stops all MessageSource s. The fourth step stops all inbound MessageProducer s. Starting with version 4.3, Spring Integration provides access to an application's runtime object model which can, optionally, include component metrics. It is exposed as a graph, which may be used to visualize the current state of the integration application. The o.s.i.support.management.graph package contains all the required classes to collect, build and render the runtime state of Spring Integration components as a single tree-like Graph object. The IntegrationGraphServer should be declared as a bean to build, retrieve and refresh the Graph object. The resulting Graph object can be serialized to any format, although JSON is flexible and convenient to parse and represent on the client side. A simple Spring Integration application with only the default components would expose a graph like the following (abbreviated; most of the contentDescriptor and nodes elements are elided):

{
  "contentDescriptor" : { ... },
  "nodes" : [
    ...,
    {
      ...,
      "stats" : {
        "errorCount" : 0,
        "standardDeviationDuration" : 0.0,
        "countsEnabled" : true,
        "statsEnabled" : true,
        "loggingEnabled" : false,
        "handleCount" : 0,
        "meanDuration" : 0.0,
        "maxDuration" : 0.0,
        "minDuration" : 0.0,
        "activeCount" : 0
      },
      "componentType" : "logging-channel-adapter",
      "output" : null,
      "input" : "errorChannel"
    }
  ],
  "links" : [
    { "from" : 2, "to" : 3, "type" : "input" }
  ]
}

As you can see, the graph consists of three top-level elements.
The contentDescriptor graph element is pretty straightforward and contains general information about the application providing the data. The name can be customized on the IntegrationGraphServer bean or via the spring.application.name application context environment property. Other properties are provided by the framework and allow you to distinguish a similar model from other sources. The links graph element represents connections between nodes from the nodes graph element and, therefore, between integration components in the source Spring Integration application. For example from a MessageChannel to an EventDrivenConsumer with some MessageHandler; or from an AbstractReplyProducingMessageHandler to a MessageChannel. For convenience, and to allow determining a link's purpose, the model is supplied with the type attribute. The possible types are:

- input: from a MessageChannel to the endpoint, via an inputChannel or requestChannel property;
- output: from a MessageHandler, MessageProducer or SourcePollingChannelAdapter to the MessageChannel, via an outputChannel or replyChannel property;
- error: from a MessageHandler on a PollingConsumer, or from a MessageProducer or SourcePollingChannelAdapter, to the MessageChannel, via an errorChannel property;
- discard: from a DiscardingMessageHandler (e.g. MessageFilter) to the MessageChannel, via a discardChannel property;
- route: from an AbstractMappingMessageRouter (e.g. HeaderValueRouter) to the MessageChannel. Similar to output but determined at run-time: may be a configured channel mapping, or a dynamically resolved channel. Routers will typically only retain up to 100 dynamic routes for this purpose, but this can be modified using the dynamicChannelLimit property.

The information from this element can be used by a visualizing tool to render connections between nodes from the nodes graph element, where the from and to numbers represent the value from the nodeId property of the linked nodes.
For example the link type can be used to determine the proper port on the target node:

              +---(discard)
              |
         +----o----+
         |         |
         |         |
         |         |
(input)--o         o---(output)
         |         |
         |         |
         |         |
         +----o----+
              |
              +---(error)

The nodes graph element is perhaps the most interesting because its elements contain not only the runtime components with their componentType s and name s, but can also optionally contain metrics exposed by the component. Node elements contain various properties which are generally self-explanatory. For example, expression-based components include the expression property containing the primary expression string for the component. To enable the metrics, add an @EnableIntegrationManagement annotation to some @Configuration class or add an <int:management/> element to your XML configuration. You can control exactly which components in the framework collect statistics. See Section 10.1, “Metrics and Management” for complete information. See the stats attribute from the _org.springframework.integration.errorLogger component in the JSON example above. The nullChannel and errorChannel don't provide statistics information in this case, because the configuration for this example was:

@Configuration
@EnableIntegration
@EnableIntegrationManagement(
        statsEnabled = "_org.springframework.integration.errorLogger.handler",
        countsEnabled = "!*",
        defaultLoggingEnabled = "false")
public class ManagementConfiguration {

    @Bean
    public IntegrationGraphServer integrationGraphServer() {
        return new IntegrationGraphServer();
    }

}

The nodeId represents a unique incremental identifier to distinguish one component from another. It is also used in the links element to represent a relationship (connection) of this component to others, if any. The input and output attributes are for the inputChannel and outputChannel properties of the AbstractEndpoint, MessageHandler, SourcePollingChannelAdapter or MessageProducerSupport. See the next paragraph for more information.
Spring Integration components have various levels of complexity. For example, any polled MessageSource also has a SourcePollingChannelAdapter and a MessageChannel to which to send messages from the source data periodically. Other components might be middleware request-reply components, e.g. JmsOutboundGateway, with a consuming AbstractEndpoint to subscribe to (or poll) the requestChannel ( input) for messages, and a replyChannel ( output) to produce a reply message to send downstream. Meanwhile, any MessageProducerSupport implementation (e.g. ApplicationEventListeningMessageProducer) simply wraps some source protocol listening logic and sends messages to the outputChannel. Within the graph, Spring Integration components are represented using the IntegrationNode class hierarchy, which you can find in the o.s.i.support.management.graph package. For example the ErrorCapableDiscardingMessageHandlerNode could be used for the AggregatingMessageHandler (because it has a discardChannel option) and can produce errors when consuming from a PollableChannel using a PollingConsumer. Another sample is CompositeMessageHandlerNode - for a MessageHandlerChain when subscribed to a SubscribableChannel, using an EventDrivenConsumer. 
@MessagingGateway(defaultRequestChannel = "four")
public interface Gate {

    void foo(String foo);

    void foo(Integer foo);

    void bar(String bar);

}

produces nodes like:

{
  "nodeId" : 10,
  "name" : "gate.bar(class java.lang.String)",
  "stats" : null,
  "componentType" : "gateway",
  "output" : "four",
  "errors" : null
},
{
  "nodeId" : 11,
  "name" : "gate.foo(class java.lang.String)",
  "stats" : null,
  "componentType" : "gateway",
  "output" : "four",
  "errors" : null
},
{
  "nodeId" : 12,
  "name" : "gate.foo(class java.lang.Integer)",
  "stats" : null,
  "componentType" : "gateway",
  "output" : "four",
  "errors" : null
}

This IntegrationNode hierarchy can be used for parsing the graph model on the client side, as well as for understanding the general Spring Integration runtime behavior. See also Section 3.8, “Programming Tips and Tricks” for more information. If your application is web-based (or built on top of Spring Boot using an embedded web container) and the Spring Integration HTTP or WebFlux module (see Chapter 18, HTTP Support and Chapter 34, WebFlux Support) is present on the classpath, you can use an IntegrationGraphController to expose the IntegrationGraphServer functionality as a REST service. For this purpose, the @EnableIntegrationGraphController @Configuration class annotation and the <int-http:graph-controller/> XML element are available in the HTTP module. Together with the @EnableWebMvc annotation (or <mvc:annotation-driven/> for XML definitions), this configuration registers an IntegrationGraphController @RestController where its @RequestMapping.path can be configured on the @EnableIntegrationGraphController annotation or <int-http:graph-controller/> element. The default path is /integration. The IntegrationGraphController @RestController provides these services:

- @GetMapping(name = "getGraph") - to retrieve the state of the Spring Integration components since the last IntegrationGraphServer refresh.
The o.s.i.support.management.graph.Graph is returned as a @ResponseBody of the REST service;

- @GetMapping(path = "/refresh", name = "refreshGraph") - to refresh the current Graph for the actual runtime state and return it as a REST response.

It is not necessary to refresh the graph for metrics; they are provided in real-time when the graph is retrieved. Refresh can be called if the application context has been modified since the graph was last retrieved, in which case the graph is completely rebuilt. Any Security and Cross Origin restrictions for the IntegrationGraphController can be achieved with the standard configuration options and components provided by the Spring Security and Spring MVC projects. A simple example of that follows:

<mvc:annotation-driven />

<mvc:cors>
    <mvc:mapping ... />
</mvc:cors>

<security:http>
    <security:intercept-url ... />
</security:http>

<int-http:graph-controller ... />

The Java & Annotation Configuration variant follows; note that, for convenience, the annotation provides an allowedOrigins attribute; this just provides GET access to the path. For more sophistication, you can configure the CORS mappings using standard Spring MVC mechanisms.

@Configuration
@EnableWebMvc // or @EnableWebFlux
@EnableWebSecurity // or @EnableWebFluxSecurity
@EnableIntegration
@EnableIntegrationGraphController(path = "/testIntegration", allowedOrigins="")
public class IntegrationConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
            .antMatchers("/testIntegration/**").hasRole("ADMIN")
            // ...
            .formLogin();
    }

    //...

}
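Since the controller returns the graph as JSON, a client needs nothing more than a JSON parser to work with it. The sketch below is illustrative only: the payload is a hypothetical, abbreviated response in the shape described above (nodes with nodeId/name/componentType, links with from/to/type), not output captured from a real server.

```python
import json

# Hypothetical, abbreviated /integration response in the documented shape.
payload = """
{
  "nodes": [
    {"nodeId": 1, "name": "nullChannel", "componentType": "channel"},
    {"nodeId": 2, "name": "errorChannel", "componentType": "publish-subscribe-channel"},
    {"nodeId": 3, "name": "_org.springframework.integration.errorLogger",
     "componentType": "logging-channel-adapter"}
  ],
  "links": [
    {"from": 2, "to": 3, "type": "input"}
  ]
}
"""

graph = json.loads(payload)

# Index node names by nodeId, then resolve each link's endpoints to names.
names = {node["nodeId"]: node["name"] for node in graph["nodes"]}
edges = [(names[l["from"]], names[l["to"]], l["type"]) for l in graph["links"]]
```

A visualizer would do essentially this resolution step before rendering arrows between components.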
https://docs.spring.io/spring-integration/reference/html/system-management-chapter.html
A package for analysis of rare particle decays with machine-learning algorithms

Project description

raredecay

This package consists of several tools for the event selection of particle decays, mostly built on machine learning techniques. It contains:

- a data-container holding data, weights, labels and more, with implemented root-to-python data conversion as well as plots and KFold data splitting
- reweighting tools from the hep_ml repository, wrapped in a KFolding structure and with metrics to evaluate the reweighting quality
- classifier optimization tools for hyper-parameters as well as feature selection involving a backward elimination
- an output handler which makes it easy to add text as well as figures into your code and automatically save them to a file
- ... and more

HowTo examples

To get an idea of the package, have a look at the howto notebooks: HTML version or the IPython Notebooks

Minimal example

Want to test whether your reweighting did overfit? Use train_similar:

import raredecay as rd

mc_data = rd.data.HEPDataStorage(df, weights=*pd.Series weights*, target=0)
real_data = rd.data.HEPDataStorage(df, weights=*pd.Series weights*, target=1)

score = rd.score.train_similar(mc_data, real_data, old_mc_weights=1 *or whatever weights the mc had before*)

Getting started right now

If you want it the easy, fast way, have a look at the Ready-to-use scripts. All you need to do is to have a look at every "TODO" task and probably change them. Then you can run the script without the need of coding at all.

Documentation and API

The API as well as the documentation: Documentation

Setup and installation

Anaconda

Easiest way: use conda to install everything (except the REP, which has to be upgraded with pip for some functionalities)

conda install raredecay -c mayou36

PyPI

The package with all extras requires root_numpy as well as rootpy (and therefore a ROOT installation with python-bindings) to be installed on your system.
If that is not the case, some functions won't work. If you want to install all the extras, first install the very newest version of REP (this may also be needed with a conda install); the -U can be omitted, but is recommended to have the newest dependencies:

pip install -U

Then, install the raredecay package (without ROOT-support) via

pip install raredecay

To make sure you can convert ROOT-NTuples, use

pip install raredecay[root]  # use raredecay\[root\] in a zsh-console

or, instead of root (or additionally, comma separated), reweight for the specific functionalities. In order to have all functionalities, use

pip install raredecay[all]

As it is a young package still under development, it may receive regular updates and improvements and it is probably a good idea to regularly download the newest package.
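The KFold data splitting mentioned in the feature list is a generic idea that can be sketched with the standard library alone. The function below illustrates the concept only; it is not raredecay's actual implementation or API.

```python
def kfold_indices(n_samples, n_folds):
    """Yield (train, test) index lists; each sample lands in exactly one test fold."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, n_folds)
    start = 0
    for fold in range(n_folds):
        # Spread the remainder over the first few folds, one extra sample each.
        stop = start + fold_size + (1 if fold < remainder else 0)
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, test
        start = stop

folds = list(kfold_indices(10, 3))
```

Training a reweighter on each train part and evaluating on the held-out part is what keeps the quality metrics honest.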
https://pypi.org/project/raredecay/
React's new Context API and Actions

Steven Washington
Mar 31 Updated on Apr 02, 2018

Photo: Daniel Watson

Edit: 4/2/2018 - It was pointed out to me that the example in this post had a performance issue, where render was called on Consumers unnecessarily. I've updated the article, examples, and the CodeSandbox to rectify this.

The new React Context API (~~coming soon~~ now here! in React 16.3) is a massive update of the old concept of context in React, which allowed components to share data outside of the parent > child relationship. There are many examples and tutorials out there that show how to read from the state provided by context, but you can also pass functions that modify that state so consumers can respond to user interactions with state updates!

Why Context?

The context API is a solution to help with a number of problems that come with a complex state that is meant to be shared with many components in an app:

- It provides a single source of truth for data that can be directly accessed by components that are interested, which means:
- It avoids the "prop-drilling" problem, where components receive data only to pass it on to their children, making it hard to reason about where changes to state are (or aren't) happening.

B-but Redux!

Redux is a fantastic tool that solves these problems as well. However Redux also brings a lot of other features to the table (mostly around enforcement of the purity of state and reducers) along with required boilerplate that may be cumbersome depending on what is needed. For perspective, Redux uses the (old) context API. Check out this article by Dan the Man himself: You Might Not Need Redux

What's Context do?

There are plenty of articles on this (I particularly like this one), so I don't want to go into too many details about how this works. You've seen the examples so far, and they're mostly missing something: how to update the state in the provider.
That state is sitting there, and everyone can read it, but how do we write to it?

Simple Context Example

In many of these examples we make a custom provider to wrap around React's, which has its own state that is passed in as the value. Like so:

context.js

import React from "react";

const Context = React.createContext();

export class DuckifyProvider extends React.Component {
  state = { isADuck: false };
  render() {
    const { children } = this.props;
    return (
      <Context.Provider value={this.state}>
        {children}
      </Context.Provider>
    );
  }
}

export const DuckifyConsumer = Context.Consumer;

Seems simple enough. Now we can use the DuckifyConsumer to read that state:

DuckDeterminer.js

import React from "react";
import { DuckifyConsumer } from "./context";

class DuckDeterminer extends React.Component {
  render() {
    return (
      <DuckifyConsumer>
        {({ isADuck }) => (
          <div>
            <div>{isADuck ? "quack" : "...silence..."}</div>
          </div>
        )}
      </DuckifyConsumer>
    );
  }
}

export default DuckDeterminer;

Passing Functions

Now, what if we wanted to emulate a witch turning something into a duck (stay with me here)? We need to set isADuck to true, but how? We pass a function. In JavaScript, functions are known as "first-class", meaning we can treat them as objects and pass them around, even in state and in the Provider's value prop. It wouldn't surprise me if the reason why the maintainers chose value and not state for that prop is to allow this separation of concepts. value can be anything, though likely based on state.

In this case, we can add a dispatch function to the DuckifyProvider state. dispatch will take in an action (defined as a simple object) and call a reducer function (see below) to update the Provider's state. (I saw this method of implementing a redux-like reducer without redux somewhere, but I'm not sure where. If you know where, let me know so I can properly credit the source!)
We pass the state into the value for the Provider, so the consumer will have access to that dispatch function as well. Here's how that can look:

context.js

import React from "react";

const Context = React.createContext();

const reducer = (state, action) => {
  if (action.type === "TOGGLE") {
    return { ...state, isADuck: !state.isADuck };
  }
};

export class DuckifyProvider extends React.Component {
  state = {
    isADuck: false,
    dispatch: action => {
      this.setState(state => reducer(state, action));
    }
  };
  render() {
    const { state, props: { children } } = this;
    return <Context.Provider value={state}>{children}</Context.Provider>;
  }
}

export const DuckifyConsumer = Context.Consumer;

Note that we have dispatch in our state, which we pass into value. This is due to a caveat in how the need to re-render a consumer is determined (thanks, Dan, for pointing that out!). As long as the reference to this.state stays pointed to the same object, any updates that make the Provider re-render, but don't actually change the Provider's state, won't trigger re-renders in the consumers.

Now, in DuckDeterminer, we can create an action ({ type: "TOGGLE" }) that is dispatched in the button's onClick. (We can also enforce certain action types with an enum object that we export from the DuckifyContext file. You'll see this when you check out the CodeSandbox for this.)

DuckDeterminer.js

import React from "react";
import { DuckifyConsumer } from "./DuckContext";

class DuckDeterminer extends React.Component {
  render() {
    return (
      <DuckifyConsumer>
        {({ isADuck, dispatch }) => {
          return (
            <div>
              <div>{isADuck ? "🦆 quack" : "...silence..."}</div>
              <button onClick={e => dispatch({ type: "TOGGLE" })}>
                Change!
              </button>
            </div>
          );
        }}
      </DuckifyConsumer>
    );
  }
}

export default DuckDeterminer;

The secret sauce here is the dispatch function. Since we can pass it around like any other object, we can pass it into our render prop function and call it there!
At that point the state of our Context store is updated, and the view inside the Consumer updates, toggling on and off whether the duck truly exists.

Extra credit

You can (read: I like to) also add a helpers field alongside state and dispatch, as a set of functions that "help" you sift through the data. If state is a massive array, perhaps you can write a getLargest or getSmallest or getById function to help you traverse the list without having to split the implementation details of accessing various items in a list in your consumer components.

Conclusion

Used responsibly, the new Context API can be very powerful, and will only grow as more and more awesome patterns are discovered. But every new pattern (including this one, even) should be used with care and knowledge of the tradeoffs/benefits, else you're dipping your toes in dreaded antipattern territory.

React's new context API is incredibly flexible in what you can pass through it. Typically you will want to pass your state into the value prop to be available to consumers, but passing along functions to modify state is possible as well, and can make interacting with the new API a breeze.

Try it out

The DuckDeterminer component is available to play with on CodeSandbox, right now!

I'm not too familiar with Redux so I have a few questions about this method.

1) How would I fire off multiple actions at once? For instance if my component needs to change 2-3 state values? Can I just pass multiple actions? What does that look like?

2) What if my state/action doesn't take a true/false value but instead changes a string? For example:

state = {
  msg: 'This message can change',
  dispatch: action => {
    this.setState(state => reducer(state, action));
  }
};

How would I pass in a new msg to the action at this point to change the msg? Thanks so much for the guidance!
1) I would say that you could fire multiple calls to dispatch for each of your actions, or (if possible) create a new action type (a so-called "super-action") that's actually a collection of other actions.

2) Keep in mind that your action can be any sort of object. So in addition to type (which exists so you know what action you are working with), you can add in other data. An action can look like:

{ type: "CHANGE_MSG", newString: "A new message" }

And then the reducer would update the state by using action.newString

Note your example will re-render consumers more often than necessary. reactjs.org/docs/context.html#caveats

Thanks for pointing that out! I made some updates to that example that should address this. Always important to RTFM; I messed up in this case. 😳 Always learning!

Worth noting that I've got an open proof-of-concept PR that updates React-Redux to use the new context API.

So React now is more than just a view library?

React has always had state management built-in. This is just another way of updating the front-end state; no assumptions are made about any back-end architecture. In theory, instead of your action directly modifying the context state, the action could kick off a request to your backend to update its data, which then responds with updated (and server-validated) data that you can use to update your front-end store. In that sense, React is still just a view layer, showing the data that you pass into its state and responding to user events. It's up to you to decide what sort of data updates that triggers, and React will come along for the ride. 😀

Hey, thx for your post, I was wondering how to do something similar to this; right now I'm just thinking about how to manage multiple reducers and import them at once
https://dev.to/washingtonsteven/reacts-new-context-api-and-actions-446o
2733113-v4\SYDDMS
Legal Alert, 3 MAY 2016

2016-17 Federal Budget: Multinationals subject to round two

In the 2015-16 Federal Budget, the Australian Government announced the introduction of the multinational anti-avoidance law (MAAL), the doubling of penalties for large taxpayers engaging in profit shifting schemes (subsequently enacted by the Tax Laws Amendment (Combating Multinational Tax Avoidance) Act 2015) and implemented the first tranche of the OECD's recommendations as set out in the Base Erosion and Profit Shifting (BEPS) Action Plans. The ramifications of last year's changes, and in particular the MAAL, are still being absorbed. One year on from this original announcement, taxpayers across the board are still engaged with the Australian Tax Office (ATO), spending a significant amount of time and effort discussing potential restructures to their Australian supply chains so as to be MAAL compliant.

Tonight, the Federal Treasurer Scott Morrison delivered the first Federal Budget of the Turnbull Coalition Government. Set in the context of "international headwinds and fragility", as well as an impending federal election, the Australian Government has announced more fundamental changes to our tax system, again under the guise of "ensuring multinationals are paying their fair share of tax". This significant package of measures, focused on taxpayers still exhausted from last year's changes, includes:

- an Australian diverted profits tax (DPT);
- applying GST to low value goods imported by consumers;
- anti-avoidance rules focused on eliminating hybrid mismatch arrangements;
- implementing the OECD's recently updated Transfer Pricing Guidelines; and
- establishing a new Tax Avoidance Taskforce within the ATO to enhance its audit activity for large corporates and high wealth individuals.
In addition, there are a variety of technical changes to the consolidation regime and taxation of financial arrangements (TOFA) regime and fundamental reform to our superannuation system. But it's not all bad news. The corporate tax rate is proposed to reduce to 25%, albeit progressively over the next 10 years.

The DPT

According to the Consultation Papers, the Australian DPT is "designed to ensure entities operating in Australia cannot avoid Australian tax by transferring profits, assets or risks offshore through related party transactions that lack economic substance, and to discourage multinationals from delaying the resolution of transfer pricing disputes". This is described as the Australian equivalent to the second limb of the UK DPT. The DPT will apply to income years commencing on or after 1 July 2017, whether or not a relevant transaction was entered into before that date.

Significant global entities

Consistent with the MAAL, it will only apply to significant global entities with annual global income of A$1 billion or more (on a consolidated accounting basis). As an improvement on the MAAL, a de-minimis threshold will exempt entities with Australian turnover of less than A$25 million (unless revenue is artificially booked offshore rather than in Australia). This aligns with the exemption for small taxpayers applying simplified transfer pricing record-keeping requirements (and hopefully will be extended to the MAAL soon).

When will it apply?

An arrangement with a related party may be subject to the DPT if:

- the transaction has given rise to an effective tax mismatch; and
- the transaction has insufficient economic substance.

If the related party arrangement gives rise to an effective tax mismatch, and has insufficient economic substance, the ATO may issue a DPT assessment. The assessment will be calculated by reference to the total of the Diverted Profits Amount multiplied by the DPT rate.

What is an effective tax mismatch?
An effective tax mismatch will exist where an Australian taxpayer has a cross-border transaction with a related offshore party and, as a result, the increased income tax liability offshore attributable to the transaction is less than 80% of the corresponding reduction in Australia. The following example (Example 1) is provided.

Company A has a A$100 reduction to its Australian tax liability as a result of a deductible payment, but due to the lower tax rate in Company B's jurisdiction, Company B only has a A$60 increase in its tax liability from the corresponding receipt. As the increase in tax liability for Company B is less than A$80 (80% of the A$100 reduction), an effective tax mismatch will arise.

Available losses to the offshore related party will not be included in the effective tax mismatch calculation. This could arise where, as an example, a foreign resident provides marketing and administrative services to an Australian company for a fee. As Australia's tax rate is 30%, this covers all payments made to countries with a corporate tax rate of less than 24%.

Insufficient substance test

Determination of whether there is insufficient economic substance will be based upon whether it is reasonable to conclude, based on the information available at the time to the ATO, that the transaction(s) was designed to secure the tax reduction. This is a vague test in an Australian context. Further guidance is provided such that "Where the non-tax financial benefits of the arrangement exceed the financial benefit of the tax reduction, the arrangement will be taken to have sufficient economic substance". As an example, fees in excess of an arm's length amount would be characterised as falling within this test.

Diverted Profits Amounts

For the purposes of determining the DPT assessment, where the deduction claimed is considered to exceed an arm's length amount ("inflated expenditure" cases), the provisional Diverted Profits Amount will be 30% of the transaction expense.
However, diverted profits are not limited to excessive expenses. For any situations where profits have been diverted offshore, the provisional Diverted Profits Amount will be based on the best estimate of the diverted taxable profit that can reasonably be made by the ATO at the time. (Examples of how this reconstruction power could apply are discussed in more detail in the two examples below.)

(For completeness we note that, consistent with our current transfer pricing rules, where the debt levels of a significant global entity fall within the thin capitalisation safe harbour, only the pricing of the debt (and not the amount of the debt) will be taken into account in determining any DPT liability.)

DPT rate

The DPT rate will be 40% of the Diverted Profits Amount. Interest will be charged from the date tax would have otherwise been payable if the scheme had not been entered into. The DPT will not be deductible or creditable in Australia. In calculating the DPT, an offset will be allowed for any Australian taxes paid on the diverted profits (e.g. Australian withholding taxes and Australian tax paid under the Controlled Foreign Company regime). However, there are no credits for foreign tax.

Process and review

The payment of tax has been accelerated compared with an ordinary review process for a risk review or audit. Initially, a provisional DPT assessment may be issued by the ATO on review of a tax return (it is not a self-assessment regime). That assessment will be issued as soon as practicable after the end of an income year and no later than seven years after the taxpayer has lodged its income tax return for the relevant year. The taxpayer will then have only 60 days to make representations to correct factual matters set out in the provisional DPT assessment (but not on transfer pricing matters).
Following this, the ATO will issue a final DPT assessment within 30 days, and the taxpayer will have 21 days to pay the assessment, with no right of appeal against the final DPT assessment at this stage. That is, the tax payment is accelerated prior to the complete review. The ATO will have 12 months to review the final DPT assessment, during which time the taxpayer may provide information to the ATO to support an amendment to the DPT assessment, which may include an adjustment on transfer pricing grounds. During this period, if the ATO considers the amount of DPT charged to be insufficient, the ATO may issue a supplementary DPT assessment up to 30 days prior to the end of the review period to impose an additional charge of DPT. At the completion of this 12 month review, the taxpayer has 30 days to lodge an appeal through the courts.

Other examples

Importantly, the DPT is not limited to excessive deductions. Two other examples in the Consultation Papers show the potential breadth of the DPT to reconstruct transactions.

Example 2

Australia Co, Parent Co and Foreign Co are related parties. Parent Co (a foreign resident) injects A$300 million equity funding into Foreign Co (also a foreign resident). Foreign Co uses the funds to purchase an asset, which it then leases to Australia Co. Australia Co pays A$30 million lease payments per annum to Foreign Co for use of the asset. Foreign Co has no other activities.

The ATO considers that the arrangement is artificial and contrived and that the relevant alternative scenario would have been that Parent Co would have provided equity funds to Australia Co to purchase the asset for its own use. The ATO considers issuing a DPT assessment. The Diverted Profits Amount is calculated as the actual lease payment less a deemed hypothetical alternative depreciation expense. The DPT assessment is 40% of the Diverted Profits Amount plus interest.
Example 3

Australia Co, Foreign Co A and Foreign Co B are related parties. Australia Co contractually transfers an intellectual property (IP) asset it has developed to Foreign Co A for a nominal amount. Australia Co continues to develop and maintain the IP. Foreign Co A only pays a small amount for this service and does not contribute in any other meaningful way to the further development or maintenance. Foreign Co A charges Foreign Co B A$50 million royalties per annum for the right to use the IP.

The ATO considers that there is insufficient economic substance to the transfer and that the relevant alternative scenario would have been for Australia Co to remain the owner of the asset. The ATO considers issuing a DPT assessment. The Diverted Profits Amount is the understated income of A$50 million. The DPT assessment is 40% of the Diverted Profits Amount plus interest.

This reconstruction power, and the DPT more generally, is a significant change in law that will need to be worked through by all significant global entities (unfortunately, potentially, replicating the last 12 months of compliance activity).

GST on imports of low value goods

From 1 July 2017, overseas suppliers with an Australian turnover of A$75,000 or more will be required to register for, collect and remit GST for low value goods supplied to consumers in Australia, using a vendor registration model. Previously announced changes to tax imports of digital goods effective 1 July 2017 are set out in the Tax and Superannuation Laws Amendment (2016 Measures No. 1) Bill 2016, which is currently before Parliament.

Tax Avoidance Taskforce

The Australian Government will establish a new Tax Avoidance Taskforce to enable the ATO to undertake enhanced compliance activities targeting multinationals, large public and private groups and high wealth individuals. This measure provides the ATO with a 55% increase in funding for compliance programs targeting multinationals and high wealth individuals.
It specifically provides that it includes "a 43% increase in resources devoted to tackling multinationals (including ramping up to an additional 390 average staffing level per year)". This measure is estimated to have a gain to revenue of A$3.7 billion over the forward estimates period. No doubt two of the targeted areas of compliance will centre on the MAAL and DPT.

Enhanced transfer pricing rules

The Budget will amend the transfer pricing rules to ensure that the latest OECD Transfer Pricing Guidelines will apply in Australia. The OECD has substantially revised its transfer pricing guidance, particularly in relation to the recognition and pricing of intellectual property and other intangible assets. This new guidance enhances the ATO's ability to challenge transfer pricing arrangements which ascribe substantial value to intangible assets held outside of Australia. Even where such arrangements fall outside of the MAAL and proposed DPT, taxpayers need to closely review and monitor the transfer pricing position of their Australian operations to ensure that the principles of the new guidance are fully reflected in their intercompany relationships.

Increased penalties

On a similar theme, penalties for non-disclosure will be increased. Historically, companies have been subject to surprisingly low penalties for failure to comply with tax disclosure obligations. Penalties relating to the lodgement of tax documents with the ATO will be increased by a factor of 100 for companies with global revenue of A$1 billion or more, from 1 July 2017. This will raise the maximum penalty from A$4,500 to A$450,000. Penalties relating to making statements to the ATO will be doubled, to increase the penalties imposed on multinational companies that are being reckless or careless in their tax affairs.

Protecting whistleblowers

The Australian Government will introduce new arrangements to better protect individuals who disclose information to the ATO on tax avoidance behaviour and other tax issues.
This measure will take effect from 1 July 2018 and is estimated to have an unquantifiable gain to revenue over the forward estimates period. Under the new arrangements, individuals, including employees, former employees and advisers, disclosing information to the ATO will be better protected under the law, in relation to identity protection and protection from loss of employment and legal consequences. It is notable that the Australian Government has resisted taking the step of providing a financial incentive for employees to disclose confidential business information to government revenue authorities.

OECD hybrid mismatch rules

While the Australian Government has confirmed it will implement the OECD's recommendations to eliminate hybrid mismatch arrangements, it is low on detail. The Australian Government has asked the Board of Taxation to undertake further work on how best to implement the recommendations set out in the OECD's Base Erosion and Profit Shifting Action 2 Final Report of 5 October 2015; aside from this, the Consultation Papers give little detail on the proposed new rules. The new rules are to apply from the later of 1 January 2018 or six months following Royal Assent of the enabling legislation.

Advisor disclosure

The Australian Government has also announced its intention to consult on a new set of rules requiring tax advisors to report aggressive tax structuring to the ATO. These rules are likely to be strongly contested by the industry, particularly insofar as they clash with Australia's well-established laws regarding legal professional privilege.
Changes to the consolidation rules

A variety of technical changes to the consolidation rules have been announced, including:

- extending the application of the 2014-15 Budget's consolidation integrity measures for securitised assets to non-financial institutions with securitisation arrangements, which is intended to ensure consistent application to liabilities arising from securitisation arrangements within both financial and non-financial institutions for arrangements that commence on or after 3 May 2016; and
- removing adjustments relating to deferred tax liabilities from the consolidation entry and exit tax cost setting rules, to address the commercial/tax mismatch under these rules.

Interestingly, the proposed change to amend the consolidation regime's double counting of deductible liabilities when a consolidated group acquires a joining entity has been deferred from the original start date of 14 May 2013 to 1 July 2016.

TOFA rules

The Australian Government will reform the TOFA rules to reduce their scope, decrease compliance costs and increase certainty through a redesign of the TOFA framework. This will be through the following four key reforms:

- simplifying the accruals and realisation rules to reduce the number of taxpayers caught by the TOFA rules and reducing the number of arrangements where the spreading of gains and losses is required;
- aligning the TOFA rules more closely to accounting;
- simplifying the rules for the taxation of gains and losses on foreign currency; and
- amending the tax hedging rules so they are easier to access, cover a wider range of risk management arrangements and remove the direct link to financial accounting.

The new simplified rules will apply to income years commencing on or after 1 January 2018. We are yet to see the details of the new rules.
Tax transparency code

Alongside the Budget announcements, the Board of Taxation today released its final report on a potential tax transparency code, which was commissioned by the Australian Government in the 2015 Budget. The Board proposes that a voluntary code be released that broadly encourages significant businesses to publish a range of additional tax-related information (including effective tax rates and summaries of cross-border dealings with related parties).

Company tax rate to be cut to 25% by 2026-27

The Budget promises a staged reduction in the corporate tax rate from the current 30% (28.5% for small businesses) to 25% over the next 10 years, summarised in the table below. Franking credits are to be distributed at the rate of tax paid by the company.

FINANCIAL YEAR   ANNUAL AGGREGATED TURNOVER LESS THAN   COMPANY TAX RATE (%)
2016-17          $10 million                            27.5
2017-18          $25 million                            27.5
2018-19          $50 million                            27.5
2019-20          $100 million                           27.5
2020-21          $250 million                           27.5
2021-22          $500 million                           27.5
2022-23          $1 billion                             27.5
2023-24          All companies                          27.5
2024-25          All companies                          27
2025-26          All companies                          26
2026-27          All companies                          25

Superannuation

The Australian Government has also announced a range of major new measures to change superannuation, which is articulated as being "to provide income in retirement to substitute or supplement the Age Pension" (an objective which it is intended will be enshrined in the legislation) rather than for tax minimisation and estate planning purposes. These changes are outside the scope of this Legal Alert.

Collective Investment Vehicles

Two new collective investment vehicles (CIVs) will be introduced: a corporate CIV from 1 July 2017 and a limited partnership CIV from 1 July 2018. The CIVs will be taxed on a flow-through basis and will be required to meet similar eligibility criteria as managed investment trusts, such as being widely held and engaging in primarily passive investment. Again, detail in the Budget is low.
These reforms are intended to enhance the international competitiveness of the Australian managed funds industry by allowing fund managers to offer investment products using vehicles that are used overseas. That is, this change is about bringing Australia into line with international norms and weaning us away from trusts as the investment vehicle of choice.

Private companies

Australian law contains a range of measures designed to prevent tax avoidance activity between private companies and their shareholders and associates. The Australian Government proposes to amend these rules (in particular Division 7A in the 1936 Tax Act) to improve clarity of operation. Although information remains thin, these amendments appear to include a self-correction mechanism (regarding inadvertent breaches), the introduction of a safe-harbour rule (to provide greater certainty for taxpayers, particularly in identifying arm's length pricing) and the establishment of a simpler set of rules regarding when related-party loans need to be treated as dividends.

For more information

Amrit MacIntyre, Partner, amrit.macintyre@bakermckenzie.com
Dixon Hearder, Partner, dixon.hearder@bakermckenzie.com
John Walker, Partner, john.walker
https://www.lexology.com/library/detail.aspx?g=cd25c999-4566-4272-a917-9eacccfb56fb
should be:

if (front >= data.size())

changed to:

if ((front >= data.size()) || (data.size() == 0))

Is there any reason why you're not simply returning the first element in the Vector, then deleting it? eg.

public int getFront() {
    int result = -1;
    if (data.size() > 0) {
        result = ((Integer)data.get(0)).intValue();
        data.remove(0);
    } else {
        throw new NoSuchElementException("Queue is empty");
    }
    return (result);
}

With this method, you don't need to store (or manage) the "front" attribute.

Second test is redundant.

> Is there any reason why you're not simply returning the first element in the Vector, then deleting it?

this was discussed in a previous q.

Ah.

>> Second test is redundant.

If data.size() == 0, then the data.get(front) will throw an exception. Specifically, the ArrayIndexOutOfBoundsException! (Or was this discussed in the unreferenced question too ;-) ?)

then the first test will be true.
https://www.experts-exchange.com/questions/20816888/array-out-of-bound-problem.html
This is my code so far:

#include <fstream>
#include <iostream>
#include <cstdlib>
#include <stdlib.h>

using namespace std;

const int MAX_GENERATED = 100;

void fillArray(int a[], int size, int& numberUsed); //prototype

int main()
{
    int array[MAX_GENERATED], numberUsed;
    fillArray(array, MAX_GENERATED, numberUsed);
    getchar();
    getchar();
    return 0;
}

void fillArray(int array[], int size, int& numberUsed)
{
    ifstream inStream;
    string fileName;
    cout << "What file would you like to work with?" << endl;
    cout << "Remember to include the file extension." << endl;
    cin >> fileName;
    inStream.open(fileName.c_str()); //to connect with file

    int next, index = 0;
    inStream >> next;
    while (index < size) //loop fills array
    {
        array[index] = next;
        cout << array[index] << endl;
        index++;
        inStream >> next;
    } //end while loop
    numberUsed = index; //maybe can do something with this
    //might need 2d array??
}

It reads the input from a text file with one number per line, with a maximum of 100 numbers in the file. These are numbers from -20 to 20 that were randomly generated. I now have to display each number and the number of times it appears in the input file. It has to look something like this:

--------------
  N    COUNT
--------------
-12      4
  3      3
  4      2
  1      4
 -1      1
  2      2
--------------
Total = 16
--------------

The problem with the code as it is: there are 20 numbers in the file, the last one being a randomly generated 15. When the program displays the numbers it does fine, but it then displays the number "15" eighty times. I want to know how to fix this, if possible. Also, how may I go about counting the number of times each number appears? I know I can write an if statement that increments a counter when the number is -20, and do that for every value up to 20. However, this method is tedious and I don't think it is what's desired here. I appreciate any help at all on this; I hope I've given enough information.
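The counting task described above is commonly handled with a frequency array indexed by value + 20, which avoids writing 41 separate if statements. The sketch below is not part of the original post; the function name countValues is made up for illustration. It also reads inside the loop condition, which is the usual fix for the "last number repeated" symptom:

```cpp
#include <cassert>
#include <istream>
#include <sstream>

// Tally how many times each value in [-20, 20] occurs in the stream.
// counts must have 41 slots; counts[v + 20] holds the tally for value v.
// Returns the total number of in-range values read.
int countValues(std::istream& in, int counts[41])
{
    for (int i = 0; i < 41; ++i) counts[i] = 0;

    int next, total = 0;
    // Reading inside the loop condition stops cleanly at end-of-file,
    // instead of re-using the last value read (the "15 printed 80
    // times" symptom in the original loop).
    while (in >> next)
    {
        if (next >= -20 && next <= 20)
        {
            ++counts[next + 20];
            ++total;
        }
    }
    return total;
}
```

Called with an std::ifstream opened on the input file (or an std::istringstream for testing), the table can then be printed by looping v from -20 to 20 and showing v and counts[v + 20] whenever the count is non-zero, with total giving the final "Total =" line.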
https://www.daniweb.com/programming/software-development/threads/153877/how-to-display-numbers-from-an-input-file-and-count-each-one-s-occurence
Hey, I was just mucking around using rand() and noticed that every time it comes out with an output of 41?

#include <iostream>
using namespace std;

int main()
{
    unsigned int randomNumber;
    unsigned int guess;
    unsigned int guesses;

    randomNumber = rand();
    cout << randomNumber << endl;
    system("PAUSE > nul");
}

Should I use srand() or something?
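rand() produces the same sequence on every run unless the generator is seeded first; without a call to srand(), it behaves as if srand(1) had been called, so the first value (41 on many implementations) never changes. A common fix, sketched below and not part of the original post (the helper names seedOnce and randomInRange are made up for illustration), is to seed from the clock once at startup:

```cpp
#include <cassert>
#include <cstdlib>
#include <ctime>

// Seed rand() from the clock; call exactly once at program start.
// Seeding repeatedly in a tight loop would restart the sequence
// from the same point (time() only changes once per second).
void seedOnce()
{
    std::srand(static_cast<unsigned>(std::time(nullptr)));
}

// A value in [low, high] drawn from the seeded generator.
int randomInRange(int low, int high)
{
    return low + std::rand() % (high - low + 1);
}
```

In the original program, calling seedOnce() at the top of main() before the first rand() call would make randomNumber differ between runs. (Note that the same seed always reproduces the same sequence, which is handy for debugging.)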
https://www.daniweb.com/programming/software-development/threads/119615/rand-always-s-41
Natural language processing—a technology that allows software applications to process human language—has become fairly ubiquitous over the last few years. Google search is increasingly capable of answering natural-sounding questions, Apple's Siri is able to understand a wide variety of questions, and more and more companies are using (reasonably) intelligent chat and phone bots to communicate with customers. But how does this seemingly "smart" software really work?

In this article, you will learn about the technology that makes these applications tick, and you will learn how to develop natural language processing software of your own. The article will walk you through the example process of building a news relevance analyzer. Imagine you have a stock portfolio, and you would like an app to automatically crawl through popular news websites and identify articles that are relevant to your portfolio. For example, if your stock portfolio includes companies like Microsoft, BlackStone, and Luxottica, you would want to see articles that mention these three companies.

Getting Started with the Stanford NLP Library

Natural language processing apps, like any other machine learning apps, are built on a number of relatively small, simple, intuitive algorithms working in tandem. It often makes sense to use an external library where all of these algorithms are already implemented and integrated. For our example, we will use the Stanford NLP library, a powerful Java-based natural language processing library that comes with support for many languages.

One particular algorithm from this library that we are interested in is the part-of-speech (POS) tagger. A POS tagger is used to automatically assign parts of speech to every word in a piece of text. This POS tagger classifies words in text based on lexical features and analyzes them in relation to other words around them.
The exact mechanics of the POS tagger algorithm are beyond the scope of this article, but you can learn more about it here. To begin, we’ll create a new Java project (you can use your favorite IDE) and add the Stanford NLP library to the list of dependencies. If you are using Maven, simply add it to your pom.xml file:

<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.6.0</version>
</dependency>
<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.6.0</version>
    <classifier>models</classifier>
</dependency>

Since the app will need to automatically extract the content of an article from a web page, you will need to specify the following two dependencies as well:

<dependency>
    <groupId>de.l3s.boilerpipe</groupId>
    <artifactId>boilerpipe</artifactId>
    <version>1.1.0</version>
</dependency>
<dependency>
    <groupId>net.sourceforge.nekohtml</groupId>
    <artifactId>nekohtml</artifactId>
    <version>1.9.22</version>
</dependency>

With these dependencies added, you are ready to move forward.

Scraping and Cleaning Articles

The first part of our analyzer will involve retrieving articles and extracting their content from web pages. When retrieving articles from news sources, the pages are usually riddled with extraneous information (embedded videos, outbound links, advertisements, etc.) that is irrelevant to the article itself. This is where Boilerpipe comes into play. Boilerpipe is an extremely robust and efficient algorithm for removing “clutter” that identifies the main content of a news article by analyzing different content blocks using features like length of an average sentence, types of tags used in content blocks, and density of links. The boilerpipe algorithm has proven to be competitive with other much more computationally expensive algorithms, such as those based on machine vision. You can learn more at its project site.
The Boilerpipe library comes with built-in support for scraping web pages. It can fetch the HTML from the web, extract text from HTML, and clean the extracted text. You can define a function, extractFromUrl, that will take a URL and use Boilerpipe to return the most relevant text as a string, using ArticleExtractor for this task:

import java.net.URL;

import de.l3s.boilerpipe.document.TextDocument;
import de.l3s.boilerpipe.extractors.CommonExtractors;
import de.l3s.boilerpipe.sax.BoilerpipeSAXInput;
import de.l3s.boilerpipe.sax.HTMLDocument;
import de.l3s.boilerpipe.sax.HTMLFetcher;

public class BoilerPipeExtractor {
    public static String extractFromUrl(String userUrl)
            throws java.io.IOException, org.xml.sax.SAXException,
                   de.l3s.boilerpipe.BoilerpipeProcessingException {
        final HTMLDocument htmlDoc = HTMLFetcher.fetch(new URL(userUrl));
        final TextDocument doc = new BoilerpipeSAXInput(htmlDoc.toInputSource()).getTextDocument();
        return CommonExtractors.ARTICLE_EXTRACTOR.getText(doc);
    }
}

The Boilerpipe library provides different extractors based on the boilerpipe algorithm, with ArticleExtractor being specifically optimized for HTML-formatted news articles. ArticleExtractor focuses specifically on HTML tags used in each content block and outbound link density. This is better suited to our task than the faster-but-simpler DefaultExtractor. The built-in functions take care of everything for us:

- HTMLFetcher.fetch gets the HTML document
- getTextDocument extracts the text document
- CommonExtractors.ARTICLE_EXTRACTOR.getText extracts the relevant text from the article using the boilerpipe algorithm

Now you can try it out with an example article regarding the merger of optical giants Essilor and Luxottica, which you can find here. You can feed this URL to the function and see what comes out.
Add the following code to your main function:

public class App {
    public static void main( String[] args )
            throws java.io.IOException, org.xml.sax.SAXException,
                   de.l3s.boilerpipe.BoilerpipeProcessingException {
        String urlString = "";
        String text = BoilerPipeExtractor.extractFromUrl(urlString);
        System.out.println(text);
    }
}

You should see in your output the main body of the article, without the ads, HTML tags, and outbound links. Here is the beginning snippet from what I got when I ran this:

MILAN/PARIS Italy's Luxottica (LUX.MI) and France's Essilor (ESSI.PA) have agreed a 46 billion euro ($49 billion) merger to create a global eyewear powerhouse with annual revenue of more than 15 billion euros. The all-share deal is one of Europe's largest cross-border tie-ups and brings together Luxottica, the world's top spectacles maker with brands such as Ray-Ban and Oakley, with leading lens manufacturer Essilor. "Finally ... two products which are naturally complementary -- namely frames and lenses -- will be designed, manufactured and distributed under the same roof," Luxottica's 81-year-old founder Leonardo Del Vecchio said in a statement on Monday. Shares in Luxottica were up by 8.6 percent at 53.80 euros by 1405 GMT (9:05 a.m. ET), with Essilor up 12.2 percent at 114.60 euros. The merger between the top players in the 95 billion eyewear market is aimed at helping the businesses to take full advantage of expected strong demand for prescription spectacles and sunglasses due to an aging global population and increasing awareness about eye care. Jefferies analysts estimate that the market is growing at between...

And that is indeed the main body of the article. Hard to imagine this being much simpler to implement.

Tagging Parts of Speech

Now that you have successfully extracted the main article body, you can work on determining if the article mentions companies that are of interest to the user.
You may be tempted to simply do a string or regular expression search, but there are several disadvantages to this approach. First of all, a string search may be prone to false positives. An article that mentions Microsoft Excel may be tagged as mentioning Microsoft, for instance. Secondly, depending on the construction of the regular expression, a regular expression search can lead to false negatives. For example, an article that contains the phrase “Luxottica’s quarterly earnings exceeded expectations” may be missed by a regular expression search that searches for “Luxottica” surrounded by white spaces. Finally, if you are interested in a large number of companies and are processing a large number of articles, searching through the entire body of the text for every company in the user’s portfolio may prove extremely time-consuming, yielding unacceptable performance. Stanford’s CoreNLP library has many powerful features and provides a way to solve all three of these problems. For our analyzer, we will use the Parts-of-Speech (POS) tagger. In particular, we can use the POS tagger to find all the proper nouns in the article and compare them to our portfolio of interesting stocks. By incorporating NLP technology, we not only improve the accuracy of our tagger and minimize false positives and negatives mentioned above, but we also dramatically minimize the amount of text we need to compare to our portfolio of stocks, since proper nouns only comprise a small subset of the full text of the article. By pre-processing our portfolio into a data structure that has low membership query cost, we can dramatically reduce the time needed to analyze an article. Stanford CoreNLP provides a very convenient tagger called MaxentTagger that can provide POS Tagging in just a few lines of code. 
Here is a simple implementation: public class PortfolioNewsAnalyzer { private HashSet<String> portfolio; private static final String modelPath = "edu\\stanford\\nlp\\models\\pos-tagger\\english-left3words\\english-left3words-distsim.tagger"; private MaxentTagger tagger; public PortfolioNewsAnalyzer() { tagger = new MaxentTagger(modelPath); } public String tagPos(String input) { return tagger.tagString(input); } The tagger function, tagPos, takes a string as an input and outputs a string that contains the words in the original string along with the corresponding part of speech. In your main function, instantiate a PortfolioNewsAnalyzer and feed the output of the scraper into the tagger function and you should see something like this: MILAN/PARIS_NN Italy_NNP 's_POS Luxottica_NNP -LRB-_-LRB- LUX.MI_NNP -RRB-_-RRB- and_CC France_NNP 's_POS Essilor_NNP -LRB-_-LRB- ESSI.PA_NNP -RRB-_-RRB- have_VBP agreed_VBN a_DT 46_CD billion_CD euro_NN -LRB-_-LRB- $_$ 49_CD billion_CD -RRB-_-RRB- merger_NN to_TO create_VB a_DT global_JJ eyewear_NN powerhouse_NN with_IN annual_JJ revenue_NN of_IN more_JJR than_IN 15_CD billion_CD euros_NNS ._. The_DT all-share_JJ deal_NN is_VBZ one_CD of_IN Europe_NNP 's_POS largest_JJS cross-border_JJ tie-ups_NNS and_CC brings_VBZ together_RB Luxottica_NNP ,_, the_DT world_NN 's_POS top_JJ spectacles_NNS maker_NN with_IN brands_NNS such_JJ as_IN Ray-Ban_NNP and_CC Oakley_NNP ,_, with_IN leading_VBG lens_NN manufacturer_NN Essilor_NNP ._. ``_`` Finally_RB ..._: two_CD products_NNS which_WDT are_VBP naturally_RB complementary_JJ --_: namely_RB frames_NNS and_CC lenses_NNS --_: will_MD be_VB designed_VBN ,_, manufactured_VBN and_CC distributed_VBN under_IN the_DT same_JJ roof_NN ,_, ''_'' Luxottica_NNP 's_POS 81-year-old_JJ founder_NN Leonardo_NNP Del_NNP Vecchio_NNP said_VBD in_IN a_DT statement_NN on_IN Monday_NNP ._. 
Shares_NNS in_IN Luxottica_NNP were_VBD up_RB by_IN 8.6_CD percent_NN at_IN 53.80_CD euros_NNS by_IN 1405_CD GMT_NNP -LRB-_-LRB- 9:05_CD a.m._NN ET_NNP -RRB-_-RRB- ,_, with_IN Essilor_NNP up_IN 12.2_CD percent_NN at_IN 114.60_CD euros_NNS ._. The_DT merger_NN between_IN the_DT top_JJ players_NNS in_IN the_DT 95_CD billion_CD eyewear_NN market_NN is_VBZ aimed_VBN at_IN helping_VBG the_DT businesses_NNS to_TO take_VB full_JJ advantage_NN of_IN expected_VBN strong_JJ demand_NN for_IN prescription_NN spectacles_NNS and_CC sunglasses_NNS due_JJ to_TO an_DT aging_NN global_JJ population_NN and_CC increasing_VBG awareness_NN about_IN...

Processing the Tagged Output into a Set

So far, we’ve built functions to download, clean, and tag a news article. But we still need to determine if the article mentions any of the companies of interest to the user. To do this, we need to collect all the proper nouns and check if stocks from our portfolio are included in those proper nouns. To find all the proper nouns, we will first want to split the tagged string output into tokens (using spaces as the delimiters), then split each of the tokens on the underscore (_) and check if the part of speech is a proper noun. Once we have all the proper nouns, we will want to store them in a data structure that is better optimized for our purpose. For our example, we’ll use a HashSet. In exchange for disallowing duplicate entries and not keeping track of the order of the entries, HashSet allows very fast membership queries. Since we are only interested in querying for membership, the HashSet is perfect for our purposes. Below is the function that implements the splitting and storing of proper nouns.
Place this function in your PortfolioNewsAnalyzer class:

public static HashSet<String> extractProperNouns(String taggedOutput) {
    HashSet<String> propNounSet = new HashSet<String>();
    String[] split = taggedOutput.split(" ");
    for (String token : split) {
        String[] splitTokens = token.split("_");
        if (splitTokens[1].equals("NNP")) {
            propNounSet.add(splitTokens[0]);
        }
    }
    return propNounSet;
}

There is an issue with this implementation though. If a company’s name consists of multiple words (e.g., Carl Zeiss in the Luxottica example), this implementation will be unable to catch it. In the example of Carl Zeiss, “Carl” and “Zeiss” will be inserted into the set separately, and the set will therefore never contain the single string “Carl Zeiss.” To solve this problem, we can collect all the consecutive proper nouns and join them with spaces. Here is the updated implementation that accomplishes this (it additionally needs imports for java.util.List, java.util.ArrayList, and a StringUtils.join implementation such as Apache Commons Lang’s org.apache.commons.lang3.StringUtils):

public static HashSet<String> extractProperNouns(String taggedOutput) {
    HashSet<String> propNounSet = new HashSet<String>();
    String[] split = taggedOutput.split(" ");
    List<String> propNounList = new ArrayList<String>();
    for (String token : split) {
        String[] splitTokens = token.split("_");
        if (splitTokens[1].equals("NNP")) {
            propNounList.add(splitTokens[0]);
        } else {
            if (!propNounList.isEmpty()) {
                propNounSet.add(StringUtils.join(propNounList, " "));
                propNounList.clear();
            }
        }
    }
    if (!propNounList.isEmpty()) {
        propNounSet.add(StringUtils.join(propNounList, " "));
        propNounList.clear();
    }
    return propNounSet;
}

Now the function should return a set with the individual proper nouns and the consecutive proper nouns (i.e., joined by spaces). If you print the propNounSet, you should see something like the following:

[... Monday, Gianluca Semeraro, David Goodman, Delfin, North America, Luxottica, Latin America, Rossi/File Photo, Rome, Safilo Group, SFLG.MI, Friday, Valentina Za, Del Vecchio, CEO Hubert Sagnieres, Oakley, Sagnieres, Jefferies, Ray Ban, ...]
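If you want to experiment with the grouping logic without pulling in Apache Commons, here is a self-contained sketch that uses the JDK's String.join instead of StringUtils.join. Same behavior, standard library only; the class name and sample input are made up for illustration.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ProperNounDemo {
    // Groups consecutive NNP tokens from "word_TAG" output and joins each run
    // with spaces, so multi-word names like "Carl Zeiss" survive as one entry.
    public static Set<String> extractProperNouns(String taggedOutput) {
        Set<String> propNounSet = new HashSet<>();
        List<String> run = new ArrayList<>();
        for (String token : taggedOutput.split(" ")) {
            String[] parts = token.split("_");
            if (parts.length == 2 && parts[1].equals("NNP")) {
                run.add(parts[0]);            // extend the current run of proper nouns
            } else if (!run.isEmpty()) {
                propNounSet.add(String.join(" ", run));
                run.clear();                   // run ended; flush it to the set
            }
        }
        if (!run.isEmpty()) {                  // flush a run that ends the string
            propNounSet.add(String.join(" ", run));
        }
        return propNounSet;
    }

    public static void main(String[] args) {
        String tagged = "Carl_NNP Zeiss_NNP makes_VBZ lenses_NNS for_IN Luxottica_NNP ._.";
        System.out.println(extractProperNouns(tagged));
    }
}
```

The set printed by main contains "Carl Zeiss" and "Luxottica" (HashSet order is unspecified).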
Comparing the Portfolio against the PropNouns Set

We are almost done! In the previous sections, we built a scraper that can download and extract the body of an article, a tagger that can parse the article body and identify proper nouns, and a processor that takes the tagged output and collects the proper nouns into a HashSet. Now all that’s left to do is to take the HashSet and compare it with the list of companies that we’re interested in. The implementation is very simple. Add the following code in your PortfolioNewsAnalyzer class (the portfolio field was already declared earlier; extend the existing constructor to initialize it):

public PortfolioNewsAnalyzer() {
    tagger = new MaxentTagger(modelPath);
    portfolio = new HashSet<String>();
}

public void addPortfolioCompany(String company) {
    portfolio.add(company);
}

public boolean arePortfolioCompaniesMentioned(HashSet<String> articleProperNouns) {
    return !Collections.disjoint(articleProperNouns, portfolio);
}

Putting it All Together

Now we can run the entire application—the scraping, cleaning, tagging, collecting, and comparing. Here is the function that runs through the entire application. Add this function to your PortfolioNewsAnalyzer class:

public boolean analyzeArticle(String urlString)
        throws IOException, SAXException, BoilerpipeProcessingException {
    String articleText = extractFromUrl(urlString);
    String tagged = tagPos(articleText);
    HashSet<String> properNounsSet = extractProperNouns(tagged);
    return arePortfolioCompaniesMentioned(properNounsSet);
}

Finally, we can use the app!
Here is an example using the same article as above and Luxottica as the portfolio company:

public static void main( String[] args )
        throws IOException, SAXException, BoilerpipeProcessingException {
    PortfolioNewsAnalyzer analyzer = new PortfolioNewsAnalyzer();
    analyzer.addPortfolioCompany("Luxottica");
    boolean mentioned = analyzer.analyzeArticle("");
    if (mentioned) {
        System.out.println("Article mentions portfolio companies");
    } else {
        System.out.println("Article does not mention portfolio companies");
    }
}

Run this, and the app should print “Article mentions portfolio companies.” Change the portfolio company from Luxottica to a company not mentioned in the article (such as “Microsoft”), and the app should print “Article does not mention portfolio companies.”

Building an NLP App Doesn’t Need to Be Hard

In this article, we stepped through the process of building an application that downloads an article from a URL, cleans it using Boilerpipe, processes it using Stanford NLP, and checks if the article makes specific references of interest (in our case, companies in our portfolio). As demonstrated, leveraging this array of technologies makes what would otherwise be a daunting task into one that is relatively straightforward. I hope this article introduced you to useful concepts and techniques in natural language processing and that it inspired you to write natural language applications of your own. [Note: You can find a copy of the code referenced in this article here.]
https://www.toptal.com/algorithms/how-to-build-a-natural-language-processing-app
Basic features of FormFlow

Dialogs are very powerful and flexible, but handling a guided conversation such as ordering a sandwich can require a lot of effort. At each point in the conversation, there are many possibilities of what will happen next. For example, you may need to clarify an ambiguity, provide help, go back, or show progress. By using FormFlow within the Bot Builder SDK for .NET, you can greatly simplify the process of managing a guided conversation like this. FormFlow automatically generates the dialogs that are necessary to manage a guided conversation, based upon guidelines that you specify. Although using FormFlow sacrifices some of the flexibility that you might otherwise get by creating and managing dialogs on your own, designing a guided conversation using FormFlow can significantly reduce the time it takes to develop your bot. Additionally, you may construct your bot using a combination of FormFlow-generated dialogs and other types of dialogs. For example, a FormFlow dialog may guide the user through the process of completing a form, while a LuisDialog may evaluate user input to determine intent. This article describes how to create a bot that uses the basic features of FormFlow to collect information from a user.

Forms and fields

To create a bot using FormFlow, you must specify the information that the bot needs to collect from the user. For example, if the bot's objective is to obtain a user's sandwich order, then you must define a form that contains fields for the data that the bot needs to fulfill the order. You can define the form by creating a C# class that contains one or more public properties to represent the data that the bot will collect from the user.
Each property must be one of these data types:

- Integral (sbyte, byte, short, ushort, int, uint, long, ulong)
- Floating point (float, double)
- String
- DateTime
- Enumeration
- List of enumerations

Any of the data types may be nullable, which you can use to model that the field does not have a value. If a form field is based on an enumeration property that is not nullable, the value 0 in the enumeration represents null (i.e., indicates that the field does not have a value), and you should start your enumeration values at 1. FormFlow ignores all other property types and methods. For complex objects, you must create a form for the top-level C# class and another form for the complex object. You can compose the forms together by using typical dialog semantics. It is also possible to define a form directly by implementing Advanced.IField or using Advanced.Field and populating the dictionaries within it.

Note: You can define a form by using either a C# class or JSON schema. This article describes how to define a form using a C# class. For more information about using JSON schema, see Define a form using JSON schema.

Simple sandwich bot

Consider this example of a simple sandwich bot that is designed to obtain a user's sandwich order.

Create the form

The SandwichOrder class defines the form and the enumerations define the options for building a sandwich. The class also includes the static BuildForm method that uses FormBuilder to create the form and define a simple welcome message. To use FormFlow, you must first import the Microsoft.Bot.Builder.FormFlow namespace.

using Microsoft.Bot.Builder.FormFlow;
using System;
using System.Collections.Generic;

// The SandwichOrder class represents the form that you want to complete
// using information that is collected from the user.
// It must be serializable so the bot can be stateless.
// The order of fields defines the default sequence in which the user is asked questions.
// The enumerations define the valid options for each field in SandwichOrder, and the order
// of the values represents the sequence in which they are presented to the user in a conversation.
namespace Microsoft.Bot.Sample.SimpleSandwichBot
{
    ...
}

Connect the form to the framework

To connect the form to the framework, you must add it to the controller. In this example, the Conversation.SendAsync method calls the static MakeRootDialog method, which, in turn, calls the FormDialog.FromForm method to create the SandwichOrder form.

internal static IDialog<SandwichOrder> MakeRootDialog()
{
    return Chain.From(() => FormDialog.FromForm(SandwichOrder.BuildForm));
}

[ResponseType(typeof(void))]
public virtual async Task<HttpResponseMessage> Post([FromBody] Activity activity)
{
    if (activity != null)
    {
        switch (activity.GetActivityType())
        {
            case ActivityTypes.Message:
                await Conversation.SendAsync(activity, MakeRootDialog);
                break;
            case ActivityTypes.ConversationUpdate:
            case ActivityTypes.ContactRelationUpdate:
            case ActivityTypes.Typing:
            case ActivityTypes.DeleteUserData:
            default:
                Trace.TraceError($"Unknown activity type ignored: {activity.GetActivityType()}");
                break;
        }
    }
    ...
}

See it in action

By simply defining the form with a C# class and connecting it to the framework, you have enabled FormFlow to automatically manage the conversation between bot and user. The example interactions shown below demonstrate the capabilities of a bot that is created by using the basic features of FormFlow. In each interaction, a > symbol indicates the point at which the user enters a response.

Display the first prompt

This form populates the SandwichOrder.Sandwich property. The form automatically generates the prompt, "Please select a sandwich", where the word "sandwich" in the prompt derives from the property name Sandwich.
The SandwichOptions enumeration defines the choices that are presented to the user, with each enumeration value being automatically broken into words based upon changes in case and underscores. > Provide guidance The user can enter "help" at any point in the conversation to get guidance with filling out the form. For example, if the user enters "help" at the sandwich prompt, the bot will respond with this guidance. > help * You are filling in the sandwich field. Possible responses: * You can enter a number 1-15 or words from the descriptions. (BLT, Black Forest Ham, Buffalo Chicken, Chicken And Bacon Ranch Melt, Cold Cut Combo, Meatball Marinara, Oven Roasted Chicken, Roast Beef, Rotisserie Style Chicken, Spicy Italian, Steak And Cheese, Sweet Onion Teriyaki, Tuna, Turkey Breast, and Veggie) * Back: Go back to the previous question. * Help: Show the kinds of responses you can enter. * Quit: Quit the form without completing it. * Reset: Start over filling in the form. (With defaults from your previous entries.) * Status: Show your progress in filling in the form so far. * You can switch to another field by entering its name. (Sandwich, Length, Bread, Cheese, Toppings, and Sauce). Advance to the next prompt If the user enters "2" in response to the initial sandwich prompt, the bot then displays a prompt for the next property that is defined by the form: SandwichOrder.Length. > 2 Please select a length (1. Six Inch, 2. Foot Long) > Return to the previous prompt If the user enters "back" at this point in the conversation, the bot will return the previous prompt. The prompt shows the user's current choice ("Black Forest Ham"); the user may change that selection by entering a different number or confirm that selection by entering "c". > back Please select a sandwich(current choice: Black Forest Ham) > c Please select a length (1. Six Inch, 2. 
Foot Long) > Clarify user input If the user responds with text (instead of a number) to indicate a choice, the bot will automatically ask for clarification if user input matches more than one choice. Please select a bread 1. Nine Grain Wheat 2. Nine Grain Honey Oat 3. Italian 4. Italian Herbs And Cheese 5. Flatbread > nine grain By "nine grain" bread did you mean (1. Nine Grain Honey Oat, 2. Nine Grain Wheat) > 1 If user input does not directly match any of the valid choices, the bot will automatically prompt the user for clarification. Please select a cheese (1. American, 2. Monterey Cheddar, 3. Pepperjack) > amercan "amercan" is not a cheese option. > american smoked For cheese I understood American. "smoked" is not an option. If user input specifies multiple choices for a property and the bot does not understand any of the specified choices, it will automatically prompt the user for clarification. Please select one or more toppings 1. Banana Peppers 2. Cucumbers 3. Green Bell Peppers 4. Jalapenos 5. Lettuce 6. Olives 7. Pickles 8. Red Onion 9. Spinach 10. Tomatoes > peppers, lettuce and tomato By "peppers" toppings did you mean (1. Green Bell Peppers, 2. Banana Peppers) > 1 Show current status If the user enters "status" at any point in the order, the bot's response will indicate which values have already been specified and which values remain to be specified. Please select one or more sauce 1. Honey Mustard 2. Light Mayonnaise 3. Regular Mayonnaise 4. Mustard 5. Oil 6. Pepper 7. Ranch 8. Sweet Onion 9. Vinegar > status * Sandwich: Black Forest Ham * Length: Six Inch * Bread: Nine Grain Honey Oat * Cheese: American * Toppings: Lettuce, Tomatoes, and Green Bell Peppers * Sauce: Unspecified Confirm selections When the user completes the form, the bot will ask the user to confirm their selections. Please select one or more sauce 1. Honey Mustard 2. Light Mayonnaise 3. Regular Mayonnaise 4. Mustard 5. Oil 6. Pepper 7. Ranch 8. Sweet Onion 9. 
Vinegar > 1 Is this your selection? * Sandwich: Black Forest Ham * Length: Six Inch * Bread: Nine Grain Honey Oat * Cheese: American * Toppings: Lettuce, Tomatoes, and Green Bell Peppers * Sauce: Honey Mustard > If the user responds by entering "no", the bot allows the user to update any of the prior selections. If the user responds by entering "yes", the form has been completed and control is returned to the calling dialog. Is this your selection? * Sandwich: Black Forest Ham * Length: Six Inch * Bread: Nine Grain Honey Oat * Cheese: American * Toppings: Lettuce, Tomatoes, and Green Bell Peppers * Sauce: Honey Mustard > no What do you want to change? 1. Sandwich(Black Forest Ham) 2. Length(Six Inch) 3. Bread(Nine Grain Honey Oat) 4. Cheese(American) 5. Toppings(Lettuce, Tomatoes, and Green Bell Peppers) 6. Sauce(Honey Mustard) > 2 Please select a length (current choice: Six Inch) (1. Six Inch, 2. Foot Long) > 2 Is this your selection? * Sandwich: Black Forest Ham * Length: Foot Long * Bread: Nine Grain Honey Oat * Cheese: American * Toppings: Lettuce, Tomatoes, and Green Bell Peppers * Sauce: Honey Mustard > y Handling quit and exceptions If the user enters "quit" in the form or an exception occurs at some point in the conversation, your bot will need to know the step in which the event occurred, the state of the form when the event occurred, and which steps of the form were successfully completed prior to the event. The form returns this information via the FormCanceledException<T> class. This code example shows how to catch the exception and display a message according to the event that occurred. internal static IDialog<SandwichOrder> MakeRootDialog() { return Chain.From(() => FormDialog.FromForm(SandwichOrder.BuildLocalizedForm)) .Do(async (context, order) => { try { var completed = await order; // Actually process the sandwich order... 
await context.PostAsync("Processed your order!"); } catch (FormCanceledException<SandwichOrder> e) { string reply; if (e.InnerException == null) { reply = $"You quit on {e.Last} -- maybe you can finish next time!"; } else { reply = "Sorry, I've had a short circuit. Please try again."; } await context.PostAsync(reply); } }); } Summary This article has described how to use the basic features of FormFlow to create a bot that can: - Automatically generate and manage the conversation - Provide clear guidance and help - Understand both numbers and textual entries - Provide feedback to the user regarding what is understood and what is not - Ask clarifying questions when necessary - Allow the user to navigate between steps Although basic FormFlow functionality is sufficient in some cases, you should consider the potential benefits of incorporating some of the more advanced features of FormFlow into your bot. For more information, see Advanced features of FormFlow and Customize a form using FormBuilder. Sample code For complete samples that show how to implement FormFlow using the Bot Builder SDK for .NET, see the Multi-Dialog Bot sample and the Contoso Flowers Bot sample in GitHub. Next steps FormFlow simplifies dialog development. The advanced features of FormFlow let you customize how a FormFlow object behaves.
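FormFlow's core loop (prompt for each field in order, while honoring navigation commands such as "back" and "status") can be sketched as a tiny state machine. The sketch below is illustrative Java, not the C# Bot Builder SDK; every name in it is made up for the example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GuidedFormDemo {
    // Walks an ordered list of fields, consuming one scripted user input per
    // step. "back" moves to the previous question, "status" prints progress,
    // and any other input is recorded as the answer for the current field.
    public static Map<String, String> runForm(List<String> fields, List<String> inputs) {
        Map<String, String> answers = new LinkedHashMap<>();
        int field = 0;
        int cursor = 0;
        while (field < fields.size() && cursor < inputs.size()) {
            String input = inputs.get(cursor++);
            if (input.equals("back")) {
                if (field > 0) field--;       // revisit the previous question
            } else if (input.equals("status")) {
                System.out.println(answers);  // show what has been filled in so far
            } else {
                answers.put(fields.get(field), input);
                field++;                      // advance to the next prompt
            }
        }
        return answers;
    }

    public static void main(String[] args) {
        List<String> fields = Arrays.asList("Sandwich", "Length", "Bread");
        List<String> inputs = new ArrayList<>(Arrays.asList(
                "Black Forest Ham", "back", "BLT", "Foot Long", "status", "Italian"));
        System.out.println(runForm(fields, inputs));
    }
}
```

Running main yields {Sandwich=BLT, Length=Foot Long, Bread=Italian}: the "back" command lets the scripted user overwrite the first answer, just as in the sandwich transcript above. The real FormFlow adds clarification, help text, and confirmation on top of this basic loop.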
https://docs.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-formflow?view=azure-bot-service-3.0
Cluster in stop-writes but UDFs continue writing

Problem Description

In Aerospike releases prior to 3.8.0, a node has hit stop-writes, yet avail-pct continues to decrease. Analysis of the logs shows that UDFs are still executing and that the writes are being accepted by the database.

Explanation

There is a bug in Aerospike releases prior to 3.8.0 whereby UDF writes will go through even when a database is in stop-writes. The aerospike.log will show:

aerospike.log-20160405:Apr 05 2016 06:10:47 GMT: WARNING (namespace): (namespace.c::457) {problem_namespace} hwm_breached false, stop_writes true (disk avail pct), memory sz:20890709632 (20890709632 + 0) hwm:34789236736 sw:52183851008, disk sz:777470015488 hwm:896183828480

Using either asadm in analysis mode or asloglatency we can see that UDFs are still executing.

Admin> loglatency -h udf
~~~~~~~~~~~~~~~~~~~~~~~~~~~~cluster~~~~~~~~~~~~~~~~~~~~~~~~~~~~
NODE                 : s1 . . . | .
                     : % >1ms % >8ms % >64ms ops/sec |
Apr 05 2016 06:10:17 : 4.38 0.06 0.00 360.8 |
Apr 05 2016 06:10:27 : 5.92 0.06 0.00 319.0 |
Apr 05 2016 06:10:37 : 5.53 0.12 0.00 403.6 |
Apr 05 2016 06:10:47 : 3.48 0.02 0.00 448.6 |
Apr 05 2016 06:10:57 : 5.93 0.06 0.00 330.7 |
Apr 05 2016 06:11:07 : 6.44 0.20 0.00 295.1 |
Apr 05 2016 06:11:17 : 5.06 0.00 0.00 385.6 |
Apr 05 2016 06:11:27 : 8.98 0.07 0.00 405.4 |
Apr 05 2016 06:11:37 : 6.66 0.14 0.00 437.2 |
Apr 05 2016 06:11:47 : 5.11 0.05 0.03 381.7 |
Apr 05 2016 06:11:57 : 6.67 0.00 0.00 316.2 |
Apr 05 2016 06:12:07 : 5.71 0.04 0.00 276.8 |

Log messages indicate that these UDFs are writing data, as the w-tot count for the devices on the namespace is increasing.
aerospike.log-20160405:Apr 05 2016 06:16:50 GMT: INFO (drv_ssd): (drv_ssd.c::2088) device /dev/xvdb: used 388800305152, contig-free 38406M (76813 wblocks), swb-free 16, w-q 0 w-tot 124912121 (17.0/s), defrag-q 0 defrag-tot 124321385 (16.6/s) defrag-w-tot 57987003 (7.9/s)
aerospike.log-20160405:Apr 05 2016 06:17:00 GMT: INFO (drv_ssd): (drv_ssd.c::2088) device /dev/xvdc: used 388722930048, contig-free 37535M (75071 wblocks), swb-free 16, w-q 0 w-tot 126356071 (15.6/s), defrag-q 0 defrag-tot 125706130 (14.6/s) defrag-w-tot 58184806 (6.9/s)
aerospike.log-20160405:Apr 05 2016 06:17:10 GMT: INFO (drv_ssd): (drv_ssd.c::2088) device /dev/xvdb: used 388801634048, contig-free 38376M (76752 wblocks), swb-free 16, w-q 0 w-tot 124912419 (14.9/s), defrag-q 0 defrag-tot 124321650 (13.2/s) defrag-w-tot 57987129 (6.3/s)
aerospike.log-20160405:Apr 05 2016 06:17:20 GMT: INFO (drv_ssd): (drv_ssd.c::2088) device /dev/xvdc: used 388724107264, contig-free 37511M (75023 wbl

Solution

This is a bug within Aerospike and has been resolved in Aerospike releases 3.8.x and higher. To resolve this issue, the product should be upgraded. As a temporary measure, shutting down all UDFs should be included in the operational process if a stop-write condition occurs.

Notes

- This is tracked under JIRA AER-4907
- The asadm tool includes rich functionality for log and latency analysis in addition to a comprehensive command line administrative interface. It can be cloned from the following Git repository.

Keywords

UDF STOP-WRITES

Timestamp

5/6/16
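To confirm from the logs that writes are still being accepted, the cumulative w-tot counters in the drv_ssd lines above can be pulled out with a small parsing sketch. This is a hypothetical helper written in Java, not part of Aerospike's tooling; only the log-line format shown above is assumed.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WTotParser {
    // Matches "device <name>: ... w-tot <counter>" within a single log line.
    private static final Pattern LINE = Pattern.compile("device (\\S+): .*? w-tot (\\d+)");

    // Scans a chunk of log text and keeps the last (most recent) w-tot value
    // seen per device; if the counter keeps climbing while the node reports
    // stop_writes true, writes are still being accepted.
    public static Map<String, Long> latestWTot(String log) {
        Map<String, Long> latest = new LinkedHashMap<>();
        Matcher m = LINE.matcher(log);
        while (m.find()) {
            latest.put(m.group(1), Long.parseLong(m.group(2)));
        }
        return latest;
    }

    public static void main(String[] args) {
        String log =
            "device /dev/xvdb: used 388800305152, contig-free 38406M (76813 wblocks), swb-free 16, w-q 0 w-tot 124912121 (17.0/s)\n"
          + "device /dev/xvdb: used 388801634048, contig-free 38376M (76752 wblocks), swb-free 16, w-q 0 w-tot 124912419 (14.9/s)\n";
        System.out.println(latestWTot(log)); // {/dev/xvdb=124912419}
    }
}
```

Comparing the map across two log snapshots (or two timestamps in the same file) shows whether each device's write total is still increasing.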
https://discuss.aerospike.com/t/cluster-in-stop-writes-but-udfs-continue-writing/2949
Helper to implement Sentry with Angular.

import "package:angular/angular.dart";
import "package:angular_sentry/angular_sentry.dart";

main() {
  bootstrap(MyApp, [
    provide(SENTRY_DSN, useValue: "MY_SENTRY_DSN"),
    provide(ExceptionHandler, useClass: AngularSentry)
  ]);
}

main() {
  bootstrap(MyApp, [
    AppSentry
  ]);
}

@Injectable()
class AppSentry extends AngularSentry {
  AppSentry(Injector injector) : super(injector, "MY_SENTRY_DSN");

  SentryUser get user => new SentryUser();

  String get environment => "production";

  String get release => "1.0.0";

  Map<String, String> get extra => {"location_url": window.location.href};

  void onCatch(dynamic exception, Trace trace, [String reason]) {
    if (exception is ClientException) {
      log("Network error");
    } else {
      super.onCatch(exception, trace, reason);
    }
  }
}

Uses the sentry package instead of sentry_client.

Add this to your package's pubspec.yaml file:

dependencies:
  angular_sentry: "^0.0.4"

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:angular_sentry/angular_sentry.dart';

We analyzed this package on Jul 13, 2018, and provided a score, details, and suggestions below. Analysis was completed with status completed using:

Detected platforms: web

Primary library: package:angular_sentry/angular_sentry.dart

Fix analysis and formatting issues. Analysis or formatting checks reported 1 hint. Strong-mode analysis of lib/angular_sentry.dart gave the following hint: line: 32 col: 39 'ZERO' is deprecated and shouldn't be used.
https://pub.dartlang.org/packages/angular_sentry
CC-MAIN-2018-30
en
refinedweb
2.0.2.BUILD-SNAPSHOT

Table of Contents

Spring Boot 2.0.2.BUILD-SNAPSHOT requires Java 8 or 9 and Spring Framework 5.0.6.RELEASE or above. Explicit build support is provided for Maven 3.2+ and Gradle 4. The following example shows a typical build.gradle file:

buildscript {
    repositories {
        jcenter()
        maven { url '' }
        maven { url '' }
    }
    dependencies {
        classpath 'org.springframework.boot:spring-boot-gradle-plugin:2.0.2.BUILD-SNAPSHOT'
    }
}

If you develop features for the CLI and want easy access to the version you built, use the following commands:

$ sdk install springboot dev /path/to/spring-boot/spring-boot-cli/target/spring-boot-cli-2.0.2.BUILD-SNAPSHOT-bin/spring-2.0.2.BUILD-SNAPSHOT/
$ sdk default springboot dev
$ spring --version
Spring CLI v2.0.2.BUILD-SNAPSHOT

Multiple profiles can be specified with a comma-separated list.

# You can also restrict that feature to known extensions only
# spring.mvc.pathmatch.use-registered-suffix-pattern=true

public class MyConfiguration {

    @Bean
    public WebMvcConfigurer corsConfigurer() {
        return new WebMvcConfigurer() {
            @Override
            public void addCorsMappings(CorsRegistry registry) {
                // ...
            }
        };
    }
}

FilterRegistrationBean. Dedicated variants exist for Tomcat, Jetty, and Undertow.

An error.jsp page does not override the default view for error handling. Custom error pages should be used instead. There is a JSP sample so that you can see how to set things up.

spring.security.oauth2.client.provider.my-oauth-provider.jwk-set-uri=
spring.security.oauth2.client.provider.my-oauth-provider.user-name-attribute=name

By default, the actuators are secured by Spring Boot auto-configuration. If you define a custom WebSecurityConfigurerAdapter, Spring Boot auto-configuration backs off.

"Starter". You can inject an auto-configured Neo4jSession, Session, or Neo4jOperations instance as you would any other Spring Bean.
The following example shows an interface definition for a Neo4j repository.

This sets the common prop.one Kafka property to first (applies to producers, consumers and admins), the prop.two admin property to second, the prop.three consumer property to third, and the prop.four producer property to fourth.

public class MyService {

    private final WebClient webClient;

    public MyService(WebClient.Builder webClientBuilder) {
        this.webClient = webClientBuilder.baseUrl("").build();
    }

    public Mono<Details> someRestCall(String name) {
        return this.webClient.get().url("/{name}/details", name)
                .retrieve().bodyToMono(Details.class);
    }
}

Quartz Scheduler configuration can be customized by using Quartz configuration properties (spring.quartz.properties.*) and SchedulerFactoryBeanCustomizer beans, which allow programmatic SchedulerFactoryBean customization.

By default, Spring Boot creates an MBeanServer bean with an ID of mbeanServer and exposes any of your beans that are annotated with Spring JMX annotations (@ManagedResource, @ManagedAttribute, or @ManagedOperation).

private WebTestClient webClient;

@Test
public void exampleTest() {
    this.webClient.get().uri("/").exchange().expectStatus().isOk()
            .expectBody(String.class).isEqualTo("Hello World");
}

A list of the auto-configuration settings that are enabled by @WebMvcTest can be found in the appendix.

private WebTestClient webClient;

@MockBean
private UserVehicleService userVehicleService;

@JdbcTest is for pure JDBC-related tests. (See also "Auto-configured Data JPA Tests".) A list of the auto-configuration that is enabled by @JdbcTest can be found in the appendix.

(See "Using jOOQ", earlier in this chapter.) A list of the auto-configuration that is enabled by @JooqTest can be found in the appendix.

A list of the auto-configuration settings that are enabled by @DataMongoTest can be found in the appendix.
A list of the auto-configuration settings that are enabled by @DataNeo4jTest can be found in the appendix.

public class ExampleDataRedisTests {

    @Autowired
    private YourRepository repository;

    // ...
}

A list of the auto-configuration settings that are enabled by @DataRedisTest can be found in the appendix.

public class UserDocumentationTests {

    @Autowired
    private MockMvc mvc;

    // ...
}

public class UserDocumentationTests {

    @LocalServerPort
    private int port;

    @Autowired
    private RequestSpecification documentationSpec;

    // ...
}

public class BatchConfiguration { ... }

public class MyAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean
    // ...
}

Create a project via start.spring.io. Feel free to join the #spring channel of Kotlin Slack or ask a question.

WARN: Generic type arguments, varargs and array elements nullability are not yet supported. See SPR-15942 for up-to-date information. Also be aware that Spring Boot's own API is not yet annotated.

public class ActuatorSecurity extends WebSecurityConfigurerAdapter {

    @Override
    protected // ...
}

By default, endpoints are exposed over HTTP under the /actuator path by using the ID of the endpoint. For example, the beans endpoint is exposed under /actuator/beans. If you want to map endpoints to a different path, you can use the management.endpoints.web.path-mapping property. Also, if you want to change the base path, you can use management.endpoints.web.base-path. The following example remaps /actuator/health to /healthcheck:

application.properties:

management.endpoints.web.base-path=/
management.endpoints.web.path-mapping.health=healthcheck

An operation on a web endpoint or a web-specific endpoint extension can receive the current java.security.Principal or org.springframework.boot.actuate.endpoint.SecurityContext as a method parameter.
The former is typically used in conjunction with @Nullable to provide different behaviour for authenticated and unauthenticated users. The latter is typically used to perform authorization checks using its isUserInRole(String) method.

A Servlet can be exposed as an endpoint by implementing a class annotated with @ServletEndpoint that also implements Supplier<EndpointServlet>. Servlet endpoints provide deeper integration with the Servlet container but at the expense of portability. They are intended to be used to expose an existing Servlet as an endpoint. For new endpoints, the @Endpoint and @WebEndpoint annotations should be preferred whenever possible.

You can use health information to check the status of your running application. It is often used by monitoring software to alert someone when a production system goes down. The information exposed by the health endpoint depends on the management.endpoint.health.show-details property. The severity order of statuses can be changed by using the management.health.status.order configuration property. For example, assume a new Status with code FATAL is being used in one of your HealthIndicator implementations. To map it to an HTTP status code, add the following property to your application properties: management.health.status.http-mapping.FATAL=503

The following table shows the default status mappings for the built-in statuses:

For reactive applications, such as those using Spring WebFlux, ReactiveHealthIndicator provides a non-blocking contract for getting application health. Similar to a traditional HealthIndicator, health information is collected from all ReactiveHealthIndicator beans defined in your ApplicationContext. Regular HealthIndicator beans that do not check against a reactive API are included.

Application information exposes various information collected from all InfoContributor beans defined in your ApplicationContext. Spring Boot includes a number of auto-configured InfoContributor beans, and you can write your own.
The following InfoContributor beans are auto-configured by Spring Boot, when appropriate:

If a BuildProperties bean is available, the info endpoint can also publish information about your build. This happens if a META-INF/build-info.properties file is available in the classpath.

Spring Boot Actuator provides dependency management and auto-configuration for Micrometer, an application metrics facade that supports numerous monitoring systems.

Micrometer provides a hierarchical mapping to JMX, primarily as a cheap and portable way to view metrics locally.

management.metrics.export.wavefront.uri=proxy://localhost:2878

You can also change the interval at which metrics are sent to Wavefront:

management.metrics.export.wavefront.step=30s

Spring Boot registers the following core metrics when applicable: JVM metrics, which report utilization of:

Auto-configuration enables the instrumentation of requests handled by Spring MVC. When management.metrics.web.server.auto-time-requests is true, this instrumentation occurs for all requests. Alternatively, when set to false, you can enable instrumentation by adding @Timed to a request-handling method:

@RestController
@Timed
public class MyController {

    @GetMapping("/api/people")
    @Timed(extraTags = { "region", "us-east-1" })
    @Timed(value = "all.people", longTask = true)
    public List<Person> listPeople() { ... }
}

By default, metrics are generated with the name http.server.requests. The name can be customized by setting the management.metrics.web.server.requests-metric-name property. Spring MVC-related tags can be customized by providing a @Bean that implements WebMvcTagsProvider.

Auto-configuration enables the instrumentation of all requests handled by WebFlux controllers and functional handlers. By default, metrics are generated with the name http.server.requests.
You can customize the name by setting the management.metrics.web.server.requests-metric-name property. WebFlux-related tags can be customized by providing a @Bean that implements WebFluxTagsProvider.

The instrumentation of any RestTemplate created using the auto-configured RestTemplateBuilder is enabled. It is also possible to apply MetricsRestTemplateCustomizer manually. By default, metrics are generated with the name http.client.requests. The name can be customized by setting the management.metrics.web.client.requests-metric-name property. By default, metrics generated by an instrumented RestTemplate are tagged with the following information:

method, the request's method (for example, GET or POST).
uri, the request's URI template prior to variable substitution, if possible (for example, /api/person/{id}).
status, the response's HTTP status code (for example, 200 or 500).
clientName, the host portion of the URI.

To customize the tags, provide a @Bean that implements RestTemplateExchangeTagsProvider. There are convenience static functions in RestTemplateExchangeTags.

Auto-configuration enables the instrumentation of all available Caches on startup with metrics prefixed with cache. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available. The following cache libraries are supported:

Metrics are tagged by the name of the cache and by the name of the CacheManager that is derived from the bean name.
https://docs.spring.io/spring-boot/docs/2.0.2.BUILD-SNAPSHOT/reference/htmlsingle/
CC-MAIN-2018-30
en
refinedweb
On Thu, 2007-06-28 at 00:43 +0200, Jacob Rief wrote: >. Advertising No objection, although it would be nice if we could find something nicer to rename "using" to than "using_". What about "using_clause" or "using_list"? You also changed "namespace" => "name_space" in builtins.h; is that necessary? > I wrote a patch which applies cleanly onto version 8.2.4 Patches should be submitted against the CVS HEAD code: we wouldn't want to apply a patch like this to the REL8_2_STABLE branch, in any case. BTW, I notice the patch also adds 'extern "C" { ... }' statements to a few random header files. Can't client programs do this before including the headers, if necessary? -Neil ---------------------------(end of broadcast)--------------------------- TIP 4: Have you searched our list archives?
https://www.mail-archive.com/pgsql-patches@postgresql.org/msg17226.html
CC-MAIN-2018-30
en
refinedweb
The context structure. More...

#include <context.h>

The context structure. Contains two pipes for the async service:

qq : write queries to the async service pid/tid.
rr : read results from the async service pid/tid.

The context has been finalized. This is after config, when the first resolve is done. The modules are inited (module-init()) and shared caches created.

List of alloc-cache-id points per threadnum for notinuse threads. Simply the entire struct alloc_cache with the 'super' member used to link a simply linked list. Reset super member to the superalloc before use.

Tree of outstanding queries. Indexed by querynum. Used when results come in for async to lookup. Used when cancel is done for lookup (and delete). Used to see if querynum is free for use. Content of type ctx_query.

Referenced by add_bg_result().
http://www.unbound.net/documentation/doxygen/structub__ctx.html
CC-MAIN-2018-30
en
refinedweb
Free allocated memory

#include <malloc.h>

int cfree( void *ptr );

libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

The cfree() function deallocates the memory block specified by ptr, which was previously returned by a call to calloc(), malloc() or realloc().

Caveat: calling cfree() on a pointer already deallocated by a call to cfree(), free(), or realloc() could corrupt the memory allocator's data structures.
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/c/cfree.html
CC-MAIN-2018-43
en
refinedweb
Change NumberAnimation while running

I wonder how to change a property of a NumberAnimation (say "loops" or "to") while it's running. Of course I tried to use a variable, but it has no effect. Example:

SequentialAnimation {
    id: anim
    // change rotation centers
    PropertyAction { target: rotSpin; property: "origin.x"; value: xCenterNeedle }
    PropertyAction { target: rotSpin; property: "origin.y"; value: yCenterNeedle }
    NumberAnimation {
        id: numAnim
        target: rotSpin
        property: "angle"
        easing.type: Easing.Linear
        to: _degree
        duration: _time
    }
    onStopped: { _isSpinning = false; }
}

I start it:

function startSpin() {
    _degree += 360;
    _time = _timeForSector * 12;
    anim.loops = Animation.Infinite
    _isSpinning = true;
    anim.start()
}

and now I want to change something:

function stopSpin() {
    if (_isSpinning) {
        var angle = rotSpin.angle;               // current angle
        var currSector = Math.floor(angle / 30); // current sector
        numAnim.to = (currSector + 1) * 30;      // angle of the next sector
        anim.loops = 1;                          // stop when the requested angle is reached
    }
}

but nothing happens. Where's my error? By the way, the specific goal is to stop a spinning needle at a precise angle (x * 30) when requested.

I don't know the answer to your more-than-one-rotation question... but I do know rotations are easier with the RotationAnimation QML type - this way you can cross that 0-360 degree boundary without a full opposite wind. I have things like:

onValueChanged: RotationAnimation {
    target: root; property: "rotation"
    easing.type: Easing.InOutSine
    to: root.value
    direction: RotationAnimator.Shortest
    duration: 300
}

Thanks, but I don't need the shortest path. I only need to rotate an Image until a signal fires. At that moment, let's say the angle is 53 degrees, I need to stop the animation when it reaches 60 degrees.

@Mark81 said in Change NumberAnimation while running:

NumberAnimation

Well, I'm not trying to sell you!
I just know that NumberAnimation will not spin but start counting down from the number that is in your angle just before it crosses the threshold.

Your "rotate an image until a signal fires" won't work very well when, for example, it's winding up clockwise close to a full rotation - at which point the value crosses and, if the intent is to keep spinning, that single animation will instead unwind counterclockwise, then the next update continues to the next target clockwise. It just looks very wrong - you do what you reckon - maybe you know the value never reaches a full rotation - but "spin" to me indicated you wish to perform one or more rotations.

I'll leave you to figure out your target angle - that math is basic - or... maybe you could just use states. Dunno, just trying to give you leads to get moving again...

states: [
    State {
        name: "SectorOne"; when: (value >= 0 && value <= 30)
        // change rotation centers
        PropertyAction { target: rotSpin; property: "origin.x"; value: xCenterNeedle }
        PropertyAction { target: rotSpin; property: "origin.y"; value: yCenterNeedle }
        PropertyAction { target: rotSpin; property: "angle"; value: 30 }
        NumberAnimation { id: numAnim; target: rotSpin; property: "angle"; easing.type: Easing.Linear; to: _degree; duration: _time }
    },
    State {
        name: "SectorTwo"; when: (value > 30 && value <= 60)
        // change rotation centers
        PropertyAction { target: rotSpin; property: "origin.x"; value: xCenterNeedle }
        PropertyAction { target: rotSpin; property: "origin.y"; value: yCenterNeedle }
        PropertyAction { target: rotSpin; property: "angle"; value: 60 }
        NumberAnimation { id: numAnim; target: rotSpin; property: "angle"; easing.type: Easing.Linear; to: _degree; duration: _time }
    }
]
But when the user decides to stop it (i.e. the stopSpin() event is fired) the needle cannot just stop where it is: it has to reach the very next "hour". I.e. if the event fired when the needle was pointing at 85 degrees, it should stop ONLY when it has reached 90 degrees.

Right now I ended up with a workaround. In the stopSpin() function I stop the animation, and in the onStopped event I read the current angle of my needle. Then I start the same animation again, setting the new target and duration fields to reach the desired angle. It works (as you said, it's a matter of basic maths) but sometimes the needle stops a while before moving again. Instead, the animation should be flawless.

I think you'd need to interrupt your animation. I'd be looking for a solution where you just update your target and indeed get flawless animation. I haven't had to deal with interrupted animations - I have known baud rates and controls that are reactive (not responsive to user input), so it's easy to set up animation speeds to always complete slightly faster than my configured baud rate. Maybe you'll find is better for you?

@6thC said in Change NumberAnimation while running:

Maybe you'll find is better for you?

I will give it a try, but unfortunately I cannot use any Easing type but the Linear one. Anyway, perhaps I might find a way to use it. It's not clear to me what the maximumEasingTime property means. I'm going to play a little with it.

I have wanted to do similar things before but eventually decided I was expecting too much from the QML Animation types by suddenly trying to get them to do something else part way through an animation. I decided to just take more control instead...
here's an example minute-hand clock (runs in Qt 5.9.1's qmlscene); click the window to start and stop the clock movement:

import QtQuick 2.7

Rectangle {
    id: main
    width: 512
    height: 512

    Rectangle {
        x: 0.5*parent.width
        y: 0.5*parent.height
        width: 10
        height: 200
        radius: 5
        color: 'red'
        transformOrigin: Item.Top
        rotation: (((control.spinning ? clock.time : control.stopTime)-control.lostTime)/60.0)%360.0
    }

    Timer {
        id: clock
        interval: 1000.0/60.0
        repeat: true
        running: true
        triggeredOnStart: true
        onTriggered: time=(new Date()).getTime()
        property real time
    }

    MouseArea {
        id: control
        anchors.fill: parent
        onClicked: {
            if (spinning) {
                stopTime=clock.time
                spinning=false
            } else {
                startTime=clock.time
                lostTime+=(startTime-stopTime)
                spinning=true
            }
        }
        property bool spinning: true
        property real stopTime: 0.0
        property real startTime: 0.0
        property real lostTime: 0.0
    }
}

I was actually a little paranoid about the resolution of the QML/javascript-world timers, so in an actual app I replace that QML Timer with a QTimer with setTimerType(Qt::PreciseTimer) in the hosting C++ QQuickView. Not sure it makes any difference though.

Thanks for the hint. Here are the changes I've made to allow stopping at a defined position:

rotation: {
    if (control.spinning || clock.time < control.stopTime) {
        return ((clock.time - control.lostTime) / 60.0) % 360.0;
    } else {
        return ((control.stopTime - control.lostTime) / 60.0) % 360.0;
    }
}

and in the stop event:

if (spinning) {
    stopTime=clock.time + 500 // set the stop position
    spinning=false
}

...
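Stripped of the animation machinery, the "stop at the next sector" target this thread keeps computing is plain ceiling arithmetic - the same (currSector + 1) * 30 expression from the original stopSpin(). A quick standalone sketch of just that math (Python here purely for illustration; in QML it is the identical one-liner in JavaScript):

```python
import math

SECTOR = 30  # degrees per sector, as in the examples above

def next_stop_angle(current_angle, sector=SECTOR):
    """Smallest multiple of `sector` strictly greater than current_angle,
    so a needle at 85 stops at 90, and one exactly on 60 moves on to 90."""
    return (math.floor(current_angle / sector) + 1) * sector

print(next_stop_angle(85))  # 90
print(next_stop_angle(53))  # 60
```

Because the multiple is strictly greater than the current angle, a needle sitting exactly on a sector boundary always advances one full sector rather than stopping in place.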
https://forum.qt.io/topic/85693/change-numberanimation-while-running
CC-MAIN-2018-43
en
refinedweb
I am trying to filter low-quality reads in a BAM file using Python's pysam. I have used the code from here. I have modified this code a little, and the whole code is shown below, but the code is not producing any BAM file:

import argparse, pysam, re, sys

def FilterReads(in_file, out_file):

    def read_ok(read):
        """
        read_ok - reject reads with a low quality (<30) base call
        read - a PySam AlignedRead object
        returns: True if the read is ok
        """
        if any([ord(c) - 33 < _BASE_QUAL_CUTOFF for c in list(read.qual)]):
            return False
        else:
            return True

    _BASE_QUAL_CUTOFF = 30
    bam_in = pysam.Samfile(in_file, 'rb')
    bam_out = pysam.Samfile(out_file, 'wb', template=bam_in)
    out_count = 0
    # Note: fetch() without a region requires a BAM index; for an
    # unindexed BAM, use bam_in.fetch(until_eof=True) instead.
    for read in bam_in.fetch():
        if read_ok(read):
            bam_out.write(read)
            out_count += 1
    print 'reads_written =', out_count
    bam_out.close()
    bam_in.close()

def GetArgs():
    """
    GetArgs - read the command line
    returns - an input bam file name and the output filtered bam file
    """

    def ParseArgs(parser):
        class Parser(argparse.ArgumentParser):
            def error(self, message):
                sys.stderr.write('error: %s\n' % message)
                self.print_help()
                sys.exit(2)

        parser = Parser(description='Calculate PhiX Context Specific Error Rates.')
        parser.add_argument('-b', '--bam_file', type=str, required=True,
                            help='Input Bam file.')
        parser.add_argument('-o', '--output_file', type=str, required=True,
                            help='Output Bam file.')
        return parser.parse_args()

    parser = argparse.ArgumentParser()
    args = ParseArgs(parser)
    return args.bam_file, args.output_file

def Main():
    bam_file, output_file = GetArgs()
    FilterReads(bam_file, output_file)

if __name__ == '__main__':
    Main()

I think you need to explain in more detail what you are trying to accomplish. Are you trying to reject any read with any base that has a quality score under 30? If so, I suggest you rethink your approach, because I can't imagine a scenario where that's a good idea.

Thanks Bushnell. Yes, I want to reject reads having a quality score < 30. I want to do it using Python's pysam.
If you have a better approach, please share it.

No, I don't have a better approach, because I think removing any read with any base under Q30 is a terrible idea and will lead to extreme sequence-specific coverage bias. I have written tools to remove reads with average quality (based on expected error rates from the quality scores) below a certain level, but that should be used conservatively to avoid sequence-specific bias. I cannot imagine a scenario where it would be a good idea to remove every read that has a single base below a certain quality score, so I won't suggest a way to do it unless you can explain why you want to do so.

I suggest you quality-trim the reads and reject reads with length under a specified value prior to mapping. You can do so like this, with the BBMap package (assuming interleaved reads):

bbduk.sh in=reads.fq out=trimmed.fq qtrim=r trimq=12 minlen=125

You can set trimq to 30 if you want, but again, I cannot imagine a situation where that would be a good idea. For this command, minlen should be set to some number less than your read length; e.g. for 150bp reads, maybe set it to 100.

I am developing a variant caller for detecting heteroplasmy in NGS datasets which will take a BAM file as input, according to the workflow. As the link is not working, I have added an image; see the Galaxy Naive Variant Caller portion in that figure. For calculating heteroplasmy, they filtered bases having a score <30.

It might be a terrible idea in some cases, but when working with ancient DNA, which might have sequencing errors due to oxidative and hydrolytic damage as well as contamination, you might want to trim bases below a certain quality (usually <30). Anyway, I just write this in order to enrich the discussion; I don't know what kind of data ammarsabir is (was) working with.
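For reference, the "average quality based on expected error rates" criterion Brian mentions can be computed directly from the Phred+33 quality string (which is what pysam's read.qual returns), with no external library. A hedged sketch - the 0.01 cutoff below is an arbitrary illustration, not a recommended value:

```python
def mean_expected_error(qual_string, offset=33):
    """Average per-base error probability for a Phred+33 quality string.
    A Phred score Q corresponds to an error probability of 10**(-Q/10)."""
    probs = [10 ** (-(ord(c) - offset) / 10.0) for c in qual_string]
    return sum(probs) / len(probs)

def read_ok(qual_string, max_mean_error=0.01):
    """Keep a read if its mean expected error rate is below the cutoff.
    The 0.01 cutoff is only an example threshold."""
    return mean_expected_error(qual_string) < max_mean_error

print(read_ok("IIIIIIII"))  # all Q40 bases -> True (kept)
print(read_ok("########"))  # all Q2 bases  -> False (rejected)
```

Unlike the any-base-below-Q30 rule in the original script, averaging error probabilities tolerates a single poor base in an otherwise good read, which is why it introduces far less sequence-specific bias.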
If in further data handling you have a VCF step, it might be easier to just keep all the data in the BAM file and then trim the reads you want using vcftools, like the following: there's no need to put any extension in the last file name, since the tool writes a default extension. Cheers.
https://www.biostars.org/p/226167/
CC-MAIN-2018-43
en
refinedweb
Download the PCG Library

Minimal C Implementation

If you just want a good RNG with the least amount of code possible, you may be fine with the minimal C implementation. In fact, if you want to "see the code", here's a complete PCG generator:

// *Really* minimal PCG32 code / (c) 2014 M.E. O'Neill / pcg-random.org
// Licensed under Apache License 2.0 (NO WARRANTY, etc. see website)

typedef struct { uint64_t state;  uint64_t inc; } pcg32_random_t;

uint32_t pcg32_random_r(pcg32_random_t* rng)
{
    uint64_t oldstate = rng->state;
    // Advance internal state
    rng->state = oldstate * 6364136223846793005ULL + (rng->inc|1);
    // Calculate output function (XSH RR), uses old state for max ILP
    uint32_t xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
    uint32_t rot = oldstate >> 59u;
    return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}

Although you could just copy and paste this code, the actual downloadable version of the minimal library handles proper seeding and generating numbers within a set range, and provides some demo programs to show it in use. See the documentation for the minimal library for more details on how to use this library. You can also find the code on GitHub.

PCG Random, Minimal C Implementation, 0.9

C++ Implementation

This version of the code is a header-only C++ library, modeled after the C++11 random-number facility. If you can use C++ in your project, you should probably use this version of the library. See the documentation for the C++ library for more details on how to use this library. You can also find the code on GitHub.

PCG Random, C++ Implementation, 0.98

C Implementation

This version of the library provides many features of the C++ library to programmers working in C. See the documentation for the full C library for more details on how to use this library. You can also find the code on GitHub.

PCG Random, C Implementation, 0.94

Haskell Implementation

Christopher Chalmers has ported the basic C implementation to Haskell.
His code is available on GitHub. At a glance, it should be installable via Cabal. Thanks Christopher!

Java Implementation

Coming later... (you could help!)

Other Languages

Coming later... (you could help!)
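Until more official ports land, the minimal C generator above is small enough to transcribe directly. Here is an unofficial Python sketch (not part of the PCG downloads) with the 64- and 32-bit wrap-around made explicit via masks; the state is seeded directly for brevity, whereas the real library has a proper seeding routine:

```python
MASK64 = (1 << 64) - 1
MASK32 = (1 << 32) - 1
MULT = 6364136223846793005  # same LCG multiplier as the C code

class PCG32:
    def __init__(self, state, inc):
        self.state = state & MASK64
        self.inc = inc & MASK64

    def next(self):
        oldstate = self.state
        # Advance internal state (mod 2**64, like the C version's uint64_t)
        self.state = (oldstate * MULT + (self.inc | 1)) & MASK64
        # Output function XSH RR: xorshift the high bits, then random rotate
        xorshifted = (((oldstate >> 18) ^ oldstate) >> 27) & MASK32
        rot = oldstate >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & MASK32

rng = PCG32(state=42, inc=54)
print([rng.next() for _ in range(3)])  # three 32-bit outputs
```

The masking mirrors C's unsigned overflow: Python integers are unbounded, so every multiply, add, and shift must be truncated back to the word size the C code gets for free.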
http://www.pcg-random.org/download.html
CC-MAIN-2018-43
en
refinedweb
I'm trying to use a class, "DevIO", from a Microchip library, "MCP2210-M-dotNet2.dll". When I downloaded the library, it had a managed and an unmanaged folder, but Unity is reading both of them as native plugins. Supposedly, DllImport is used to access methods from a native plugin, but how do I access a class?

UPDATE: Previously, I did have "MCP2210-M-dotNet2.dll" in a Plugins folder. I added it to the Assets folder as suggested. I then had to manually edit the project references in MonoDevelop and browse to add it as a .NET assembly. Now "using MCP2210" is recognized, and there are no errors in MonoDevelop, but Unity still has errors. I read more into how to use the library, and it requires the Microsoft Visual C++ 2010 Redistributable Package or the inclusion of "msvcp100.dll" and "msvcr100.dll". However, Unity says they aren't valid .NET assemblies, so is there a way I can include the Microsoft Visual C++ Redistributable Package in Unity?

Answer by ThePunisher · Jul 26, 2017 at 07:01 PM

It looks like dealing with plugins is a Pro-only feature.

Now that I know more about the problem: you should be able to pull in the DLLs from the managed directory under the "MCP2210DLL-M-dotNet2" directory. You will need all of the DLLs under this directory inside your Plugins directory. The "MCP2210DLL-M-dotNet2.dll" and "msvcm90.dll" are both managed DLLs written in C#. They will expose all the functionality from the other two DLLs (msvcp90.dll and msvcr90.dll) to you.

There's example documentation within the managed directory. Just open up the "MCP2210DLL-M_CSExampleCode.sln" solution. Within this solution is Program.cs, which contains examples of how to use the code. All the code made available to you is under the MCP2210 namespace, so your scripts will need the using MCP2210; directive. You won't need to add the DLLs as references to your Unity solution, since Unity will do this step automatically (assuming you have placed them in your Plugins directory).
Edit: Once you've done everything above, the following code in a C# script compiled just fine.

using UnityEngine;
using System.Collections;
using MCP2210;

public class MicrochipTest : MonoBehaviour {

    // Use this for initialization
    void Start () {
        // Variables
        const uint MCP2210_VID = 0x04D8; // VID for Microchip Technology Inc.
        const uint MCP2210_PID = 0x00DE; // PID for MCP2210
        bool isConnected = false;        // Connection status variable for MCP2210
        MCP2210.DevIO UsbSpi = new DevIO(MCP2210_VID, MCP2210_PID);
    }

    // Update is called once per frame
    void Update () {
    }
}

As far as I understand it, he does not want to use a native plugin, but he wants to access a .NET assembly from Unity. This can be done simply by putting the .DLL in the Assets folder of your project. Unity will take care of linking automatically, and it should work as long as the DLL is compiled for .NET 3.5 or 4.6 (experimental in Unity).

Compiling C# code into a DLL and bringing it into Unity's Assets directory allows you to reference that code (classes, types and all). But it sounds like what he's been given is a native plugin - something probably written in C, maybe C++. This is what the Unity manual has to say about that: "Unity."

If what he has is a native plugin, then he may have to build the Unity script which exposes the functionality built into the plugin.

If you read his post again, you'll see that it is not a native plugin he's talking about: "MCP2210-M-dotNet2.dll". Unity probably thinks it's a native plugin because he's putting the file in a Plugins folder.
https://answers.unity.com/questions/1384925/how-do-you-use-a-class-from-a-dll-file.html
add brain-dead simple unit test to toolkit/mozapps/update/src RESOLVED FIXED

Status () People (Reporter: davel, Assigned: davel) Tracking Firefox Tracking Flags (Not tracked) Details Attachments (2 attachments, 1 obsolete attachment)

at the agile2006 conference, I worked with some attendees and wrote a very simple unit test for UpdatePatch() I'd like to land the test on trunk

Created attachment 231443 [details] [diff] [review] brain-dead simple unit test

I'm planning on factoring out the common harness bits to mozilla/testing. Part of that factoring will be ensuring lines are <80 chars.

Attachment #231443 - Flags: superreview?(benjamin)
Attachment #231443 - Flags: review?(darin)

Comment on attachment 231443 [details] [diff] [review] brain-dead simple unit test

>Index: testnsUpdateService.js
>+function testUpdatePatchThrowsExceptionForZeroSizePatch(updateString) {
>+
>+  var parser = Components.classes["@mozilla.org/xmlextras/domparser;1"]
>+    .createInstance(Components.interfaces.nsIDOMParser);

nit, indent continuation by one indent

>+  var result = "fail"

missing semicolon

>+  for (var i = 0; i < updateCount; ++i) {

nit, this block is indented differently than the previous statements

URL=""

definitely do not want to hardcode osuosl.org URLs.
Please use releases.mozilla.org or

Attachment #231443 - Flags: superreview?(benjamin) → superreview-

Created attachment 231452 [details] [diff] [review] unit test with benjamin's suggested changes

Attachment #231443 - Attachment is obsolete: true
Attachment #231452 - Flags: superreview?(benjamin)
Attachment #231452 - Flags: review?(darin)
Attachment #231443 - Flags: review?(darin)

Comment on attachment 231452 [details] [diff] [review] unit test with benjamin's suggested changes

>+check:: nsUpdateService.js
>+	$(CYGWIN_WRAPPER) $(RUN_TEST_PROGRAM) $(DIST)/bin/xpcshell$(BIN_SUFFIX) -f nsUpdateService.js $(srcdir)/testnsUpdateService.js

It feels like there should be more of a framework for this sort of test. I imagine that there is a lot of output from this test generated by xpcshell. It would be nice if that output were logged to a file, and then the output of the command invocation was a simple PASS or FAIL.

>Index: testnsUpdateService.js
>+function assertThrows(assertion, command) {
...
>+  if (e && e == assertion) {
>+    return true;
>+  } else {
>+    return false;
>+  }

This could be simplified to: return (e && e == assertion);

>+testUpdatePatchThrowsExceptionForZeroSizePatch('<?xml version="1.0"?><updates><update type="minor" version="1.7" extensionVersion="1.7" buildID="2005102512"><patch type="complete" URL="DUMMY" hashFunction="MD5" hashValue="9a76b75088671550245428af2194e083" size="0"/><patch type="partial" URL="DUMMY" hashFunction="MD5" hashValue="71000cd10efc7371402d38774edf3b2c" size="652"/></update></updates>');

It might be nice to create an array of strings, and then loop over that array calling your test function for each string in the array instead of calling the function explicitly N times.

Attachment #231452 - Flags: review?(darin) → review+

Darin, Thanks for your comments. I agree that more framework and harness "stuff" should exist. I find it easier to add such stuff once a running test is in place.
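The two refactorings the reviewer suggests can be sketched in plain JavaScript. This is an illustrative sketch only, not the actual patch; the update-string parsing is simulated, and the names mirror the review quotes:

```javascript
// Suggestion 1: collapse the if/else into a single boolean expression.
function assertThrows(expected, fn) {
  var caught = null;
  try {
    fn();
  } catch (e) {
    caught = e;
  }
  // instead of: if (caught && caught == expected) return true; else return false;
  return (caught && caught == expected);
}

// Stand-in for the real test function: the real one parses the update XML
// and expects UpdatePatch() to throw on a zero-size patch. Here we only
// simulate that behavior so the looping pattern can be shown.
function testUpdatePatchThrowsExceptionForZeroSizePatch(updateString) {
  if (updateString.indexOf('size="0"') != -1)
    throw "zero-size patch";
}

// Suggestion 2: loop over an array of update strings instead of calling
// the test function explicitly N times.
var updateStrings = [
  '<updates><update><patch type="complete" URL="DUMMY" size="0"/></update></updates>',
  '<updates><update><patch type="partial" URL="DUMMY" size="0"/></update></updates>'
];

var results = updateStrings.map(function (s) {
  return assertThrows("zero-size patch", function () {
    testUpdatePatchThrowsExceptionForZeroSizePatch(s);
  });
});
```

The same loop scales to any number of test inputs without duplicating the call site, which is the point of the review comment.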
checked in on trunk Checking in Makefile.in; /cvsroot/mozilla/toolkit/mozapps/update/src/Makefile.in,v <-- Makefile.in new revision: 1.10; previous revision: 1.9 done RCS file: /cvsroot/mozilla/toolkit/mozapps/update/src/testnsUpdateService.js,v done Checking in testnsUpdateService.js; /cvsroot/mozilla/toolkit/mozapps/update/src/testnsUpdateService.js,v <-- testnsUpdateService.js initial revision: 1.1 Status: NEW → RESOLVED Last Resolved: 12 years ago Resolution: --- → FIXED It seems like what was checked in didn't take into account Darin's review comments, was that just an oversight? (In reply to comment #7) > It seems like what was checked in didn't take into account Darin's review > comments, was that just an oversight? I did it somewhat intentionally. This is not the final state of this code, as I will be factoring out the harness and framework functionality to other files. I also did not interpret the comments as changes required before check-in, since the review was granted. Darin, was this not the case? How do you suggest I proceed? I can check in updated code now, or incorporate the recommended changes with my next set of harness/framework changes. Granting review with comments usually means "make these changes, but I don't need to see the patch again", as opposed to "please make these changes and request review again", although it depends on the reviewer and the patch. Either way, it's not a big deal, I just thought I'd mention it in case it was a mistake. Yes, I granted review with the intention that you would make the suggested changes before checking in. No worries ;-) I also think it might be nice if the test files lived in a separate directory. I think it clutters the "src" directory to have these tests live in there. Same goes for protocol/http. (In reply to comment #10) > I also think it might be nice if the test files lived in a separate directory. > I think it clutters the "src" directory to have these tests live in there. 
> Same goes for protocol/http. toolkit/mozapps/update/tests rolls off my fingers rather nicely. ;-) (In reply to comment #11) > toolkit/mozapps/update/tests rolls off my fingers rather nicely. ;-) That looks reasonable, where public, src, and tests are sibling directories. Let's move this discussion to mozilla.dev.quality for greater visibility. I've got a new version of this test that copies a bunch of jsunit code (jsunit is tri-licensed under the same terms as mozilla code). I'd like to drop that version in place here, then move it to a separate tests directory as a separate step. patch forthcoming Status: RESOLVED → REOPENED Resolution: FIXED → --- Created attachment 236311 [details] unit test for one aspect of update client code uploading the whole file, since most of the original version has been replaced Attachment #236311 - Flags: review?(darin) Checking in testnsUpdateService.js; /cvsroot/mozilla/toolkit/mozapps/update/src/testnsUpdateService.js,v <-- testnsUpdateService.js new revision: 1.2; previous revision: 1.1 done Status: REOPENED → RESOLVED Last Resolved: 12 years ago → 12 years ago Resolution: --- → FIXED Product: Firefox → Toolkit
https://bugzilla.mozilla.org/show_bug.cgi?id=346706
Joomla registration form size: take xml from 1 website. But those sizes are for retail, and I will sell wholesale. So can you change this? Example: 38 size -2 40 size-3 42 size -5 44 size-0 It has to write 38-40-42 2 serial.

I am looking for a new traders list, individual persons only, including name, address, email and phone numbers, registration date. Require: blue based. 4. Do not have too much color. 5. Concise macro from excel to website to pre fill in required information. The info will be emailed.

I need you to fill in a spreadsheet with data. Need you to develop some software for me. I would like this software to be developed for Windows using Javascript. [login to view URL].

Import search filter large json file. Search edit export data in many ways xls pdf etc. Will provide sample of small file and example data inside it once we discuss. Winner of this project who places best bid. Thanks ...

Details will be shared with winning bidder. Please bid. Details will be shared with winning bidder. Please bid.
https://www.fi.freelancer.com/work/joomla-registration-form-size/
Docker Experimental Features in Red Hat Enterprise Linux

The following features included in the docker component (docker-1.4 and higher) are not yet upstream, and are considered experimental. The user interface for these features may change in future releases. The features are fully supported as part of a standard Red Hat Enterprise Linux or Red Hat Enterprise Linux Atomic host deployment.

Registry Access Control: The docker daemon includes two options to manage which registries it can connect to. These can be defined in /etc/sysconfig/docker with the ADD_REGISTRY and BLOCK_REGISTRY options.

Each registry in ADD_REGISTRY is searched in order of appearance in the configuration file. Red Hat's default includes an entry for the Red Hat maintained registry (registry.access.redhat.com) on the Red Hat Customer Portal for images from Red Hat product families.

    # If you want to block registries from being used, uncomment the BLOCK_REGISTRY
    # option and give it a set of registries, each prepended with --block-registry
    # flag. For example adding docker.io will stop users from downloading images
    # from docker.io
    # BLOCK_REGISTRY='--block-registry'

The default configuration includes access to the Docker Hub as a built-in feature of the docker component. To prevent access to the Docker Hub, a special keyword, "public", should be used in the BLOCK_REGISTRY option.

Network Namespace Creation: In addition to docker automatically creating the namespaces needed to start a container instance, users can now specify an externally created network namespace. This facilitates the use of external network interface creation tools in a docker container context.

    --net=netns:PATH

The PATH is replaced with the network namespace path entry in /var/run/netns/. Tools such as ip netns can be used to create and manage independent network namespaces and interfaces.
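Putting the two registry options together, a /etc/sysconfig/docker fragment that searches only the Red Hat registry and blocks the Docker Hub might look like the following. This is an illustrative sketch, not the file Red Hat ships:

```shell
# /etc/sysconfig/docker -- illustrative fragment, not the shipped default.

# Registries searched, in order of appearance, when pulling an image.
# Red Hat's default includes registry.access.redhat.com.
ADD_REGISTRY='--add-registry registry.access.redhat.com'

# Block the Docker Hub entirely; "public" is the special keyword for it.
BLOCK_REGISTRY='--block-registry public'
```

After editing the file, the docker daemon must be restarted for the new registry list to take effect.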
https://access.redhat.com/articles/1354823
Resizing image size animated gif jobs

Logo intros, editing, montage, promotional films, advertising films, social media videos, GIFs, effect videos, everything related to video :)

Take xml from 1 website. But those sizes are for retail, and I will sell wholesale. So can you change this? Example: 38 size -2 40 size-3 42 size -5 44 size-0 It has to write 38-40-42 2 serial.

Hello, I want a developer to increase the space size between two elements in a shopify store. Simple task, but I want it now. Thanks.

Sizes shown at [login to view URL]. I need someone to be able to make 1; I already have done the right size to print, and other files I need mocked up exactly as I have them designed, redone and ready for print.

Formatting a word document of 400 pages. This includes fixing the headings of content, resizing tables and fixing fonts.

Import search filter large json file. Search edit export data in many ways xls pdf etc. Will provide sample of small file and example data inside it once we discuss. Winner of this project who places best bid. Thanks.

Hi there, I need this logo resized to suit Facebook profile image size.

I need an image or video exactly like this [login to view URL].

We need to add a button in our footer with a small animation. The button is for our TV page (Video); we will post many videos on this page. The button needs to be very nice, something like Fashion TV. I have attached a copy of our footer so you can have an idea. The button is called: CELINE TV. The image has to be very luxury.

Quick job: I need a graphic designer to convert this PDF logo in Illustrator, and save it in the suitable formats ready for laser cutting, printing and resizing.
https://www.tr.freelancer.com/job-search/resizing-image-size-animated-gif/
AdarpDomain

Since: BlackBerry 10.3.1

#include <bb/platform/AdarpDomain>

To link against this class, add the following line to your .pro file:

    LIBS += -lbbplatform

A class that provides status and command control for AdarpDomain. Adarp, or Advanced Data At Rest Protection, is a feature that locks an enterprise work space when a device has been idle for a specified time period. The AdarpDomain class provides functions to monitor status and to request triggering DataLock. All AdarpDomain objects provide access to a single instance of the underlying interface. This means that, for example, any AdarpDomain object requesting DataLock will trigger DataLock for all other AdarpDomain objects, if any.

Properties

bb::platform::DataLockState::Type: The current DataLock state. (BlackBerry 10.3.1)

QDateTime: The next DataLock time. The nextDataLockTime is the timestamp when the DataLock state will switch from LockPending to DataLocked. If the DataLock state is not LockPending, this will be a null (invalid) timestamp. nextDataLockTime is only valid during the LockPending state, but the reverse is not necessarily true. It is possible that the DataLockState may change to LockPending before the nextDataLockTime is available. To ensure that nextDataLockTime is valid, applications should wait for the nextDataLockTimeChanged signal while in the LockPending state. (BlackBerry 10.3.1)

Public Functions

Constructs an AdarpDomain object. Information about the AdarpDomain status may be retrieved from the AdarpDomain object. (BlackBerry 10.3.1)

virtual: Destructor. (BlackBerry 10.3.1)

bb::platform::DataLockState::Type: Gets the current DataLock state. If DataLock is not enabled, this function returns NotLocked. Returns: DataLock state. (BlackBerry 10.3.1)

Public Slots

int: Sends a request to extend the DataLock time. This function is called to request extending the time before dataLockState switches to the LockPending state and then subsequently to the DataLocked state (that is, to delay the transition to the DataLocked state). The caller must have the _sys_allow_extend_data_lock permission to extend the DataLock time. Returns 0 if the request was sent successfully, -1 otherwise. (BlackBerry 10.3.1)

int: Sends a request to set DataLock. This function is called to set the DataLock state to DataLocked. The caller must have the allow_request_lock permission to set DataLock. Returns 0 if the request was sent successfully, -1 otherwise. (BlackBerry 10.3.1)

Signals

void: Emitted when the DataLock state has changed. (BlackBerry 10.3.1)

void: Emitted when nextDataLockTime has changed. (BlackBerry 10.3.1)
https://developer.blackberry.com/native/reference/cascades/bb__platform__adarpdomain.html
How can I get a compiler for C++ like Dev-Cpp to support chinese character and pinyin input? Like if I wanted to make a chinese program that would allow chinese input and output in the console window... Example:

    #include <cstdlib>
    #include <iostream>
    #include <string>
    using namespace std;

    int main()
    {
        string input;
        char loop = 'a';
        while (loop == 'a') {
            system("CLS");
            cout << "Input Chinese Characters or pinyin to translate to english: ";
            cin >> input;
            if (input == "你") cout << "nǐ = you" << endl;
            if (input == "您") cout << "nín = you (with respect)" << endl;
            if (input == "我") cout << "wǒ = I; me; my; (anything that has to do with yourself)" << endl;
            if (input == "星") cout << "xīng = star;" << endl;
            if (input == "期") cout << "qī = period of time;" << endl;
            if (input == "子") cout << "zi = word; character;" << endl;
            if (input == "好") cout << "hǎo = to be good/well;" << endl;
            if (input == "姓") cout << "xìng = family name; last name;" << endl;
            if (input == "今") cout << "jīn = now;" << endl;
            if (input == "天") cout << "tiān = today; sky;" << endl;
        }
        return 0;
    }

etc. So pinyin is like 'xìng' and the characters are like '姓'. So somehow I'm wondering how I could create a program that would be like a dictionary type thing.

BONUS: 你叫什么名字 = nǐ jiào shén me míng zi? = what's your full name? 您好 = nín hǎo! = hello! (Respectfully). 今天星期二 = jīn tiān xīng qī èr = today is tuesday. 星期 = xīng qī = week.
https://www.daniweb.com/programming/software-development/threads/151192/how-to-get-dev-cpp-to-support-chinese-characters-and-pinyin
PyQt5 kernel getting dead while launching a simple programme

Hiiii everyone, I am new to the PyQt5 library. I'm wondering why my computer crashes when I try this code:

    from PyQt5 import QtGui
    from PyQt5.QtWidgets import QApplication, QMainWindow
    import sys

    class Window(QMainWindow):
        def __init__(self):
            super().__init__()
            self.title = "PyQt5 Window"
            self.top = 100
            self.left = 100
            self.width = 680
            self.height = 500
            self.InitWindow()

        def InitWindow(self):
            self.setWindowTitle(self.title)
            self.setGeometry(self.top, self.left, self.width, self.height)
            self.show()

    App = QApplication(sys.argv)
    window = Window()
    sys.exit(App.exec())

the kernel is getting dead...

- SGaist Lifetime Qt Champion: Hi and welcome to devnet. What version of PyQt5 are you using? What version of Qt is it built on? How did you install both of them? What Linux flavour are you running? What graphics card do you have? What driver are you using for it? What do your kernel logs tell you?
https://forum.qt.io/topic/101440/pyqt5-kernel-getting-dead-while-launching-a-simple-programme
Data size limit in Android socket transmission?

Hi! I am developing a very basic server-client implementation using Qt. On the server side, I have a Linux server. This is the main part of the code:

    Server::Server() : QObject()
    {
        server = new QTcpServer(this);
        connect(server, SIGNAL(newConnection()), this, SLOT(sendFile()));

        if (!server->listen(QHostAddress::Any, 8080))
            qDebug() << "Server could not start!";
        else
            qDebug() << "Server started!";
    }

    void Server::sendFile()
    {
        QTcpSocket *socket = server->nextPendingConnection();
        QString path = "/home/data/file.bin";
        QFile file(path);
        if (file.open(QIODevice::ReadOnly)) {
            QByteArray data = file.readAll();
            qint64 fileSize = file.size();
            int length = 128;
            int index = 0;
            while (index < fileSize) {
                QByteArray segment = data.mid(index, length);
                qint64 result = socket->write(segment);
                if (result == -1) {
                    qDebug() << "Error while writing data";
                } else {
                    if ((index + length) == fileSize) {
                        break;
                    } else {
                        index += length;
                        if ((index + length) > fileSize)
                            length = fileSize - index;
                    }
                }
            }
        } else {
            qDebug() << "Couldn't read file -> " << path;
        }
        socket->disconnectFromHost();
        file.close();
    }

On the (Android) client side, this is the main part of the code:

    Client::Client() : QObject()
    {
        socket = new QTcpSocket;
        socket->setSocketOption(QAbstractSocket::LowDelayOption, 1);
        socket->connectToHost("192.168.1.1", 8080, QIODevice::ReadWrite);
        connect(socket, SIGNAL(connected()), this, SLOT(initFile()));
        connect(socket, SIGNAL(readyRead()), this, SLOT(readFileFromServer()));
        connect(socket, SIGNAL(disconnected()), this, SLOT(closeFile()));
    }

    void Client::initFile()
    {
        QString filename = "/path/to/file.bin";
        file = new QFile(filename);
        if (!file->open(QIODevice::WriteOnly)) {
    #ifdef TUP_DEBUG
            qDebug() << "Insufficient permissions to create file -> " << file.fileName();
    #endif
        }
    }

    void Client::readFileFromServer()
    {
        while (true) {
            QByteArray data = socket->read(64);
            if (data.isEmpty())
                break;
            file->write(data);
        }
    }

    void Client::closeFile()
    {
        file->close();
        qDebug() << "File saved!";
    }

Initially I ran some tests between two computers using Linux. The code works like a charm and I can transmit any file through the network without any issue. Now, when I tried the same test but running the client from my cellphone running Android, I detected a strange issue: if the file I am transmitting is larger than 14480 bytes, the file is limited to that size and the rest of the data is lost. To confirm the constraint, I sent several files with different sizes and the constant value is evident in every case: 14480 bytes. Looking for an answer I started to google for that number and I found that this limitation is related to some TCP/IP parameter.

Anyway, if I want to transmit files bigger than 14480 bytes to Android devices, what should I do? Any suggestion? Thanks!

- aha_1980 Qt Champions 2018: How did you connect the devices? Is there a router between linux and android? If I were you, I'd start Wireshark on the Linux machine and compare the communication to the second Linux vs. Android. Don't know if you can run a packet sniffer on Android too? That might give some more insight. Btw: why don't you use an established protocol like ftp or http for file transfer?

You are right. I am going to take a look at Qt WebSockets to enhance my implementation.

PS: Talking to Android developers, they told me about a native class called AsyncTask, commonly used by them to do actions like downloading a file from a given URL. You will find more info about it in this link:

The key to dealing with network requirements from Android using Qt seems to be working with threads. I wonder if using QThread I can make a work-around for this problem. Doing a little experimentation, I decided to mix this example I found in the Qt documentation:

With this article about threads:

So, basically I took an http request and executed it from a thread. The result is really interesting: I could transmit files bigger than 14480 bytes. In most cases the size of the transmitted files is the same, including the md5 hash. In a few others, the downloaded file is a little bigger than the original, but just by a few bytes; I don't know why. Anyway, my new approach is a lot better than the first one.

Definitive solution? I think I need to run more tests ;)
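Independently of the Android specifics discussed above, a general property of TCP is worth keeping in mind here: a single read never guarantees the whole payload, since data arrives in arbitrarily sized chunks, and the receiver must keep reading until the peer closes the connection (EOF). A small plain-socket Python sketch, separate from the Qt code above and with illustrative names, shows the receive-until-EOF pattern:

```python
import socket
import threading

PAYLOAD = bytes(range(256)) * 400  # 102400 bytes, well beyond 14480

def server(listener: socket.socket) -> None:
    """Accept one client, send the whole payload, then close (signals EOF)."""
    conn, _ = listener.accept()
    conn.sendall(PAYLOAD)  # sendall loops until every byte reaches the kernel
    conn.close()

def receive_all(host: str, port: int) -> bytes:
    """Read until the peer closes; each recv may return any amount of data."""
    chunks = []
    with socket.create_connection((host, port)) as sock:
        while True:
            chunk = sock.recv(4096)  # may be anything from 1 byte to 4096
            if not chunk:            # b"" means the peer closed: real EOF
                break
            chunks.append(chunk)
    return b"".join(chunks)

if __name__ == "__main__":
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    listener.listen(1)
    port = listener.getsockname()[1]
    t = threading.Thread(target=server, args=(listener,))
    t.start()
    data = receive_all("127.0.0.1", port)
    t.join()
    listener.close()
    print(len(data))  # 102400: nothing lost, because we read until EOF
```

The Qt equivalent of this pattern is to keep consuming data on every readyRead signal and to treat the disconnected signal, not any single read, as the end of the transfer.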
https://forum.qt.io/topic/85406/data-size-limit-in-android-socket-transmission/1
Hopefully this is a simple question to respond to. Can you do Delegated Authentication for SOAP web service calls? I ask as I am not seeing this work as expected. I have this authentication turned on and enabled for the CSP Web Application, yet I keep getting a "Security Token could not be Authenticated" error, and a global I was setting to capture some of the available data is not being loaded.

Continue Reading Post by Rich Taylor 30 January 2017 Last answer 17 February 2017

Imagine that your .NET project uses the Caché DBMS and you need a fully-functional and reliable authorization system. Writing such a system from scratch would not make much sense, and you will clearly want to use something that already exists in .NET, e.g. ASP.NET Identity. By default, however, this framework supports only its native DBMS – MS SQL. Our task was to create an adaptor that would let us quickly and easily port Identity to the InterSystems Caché DBMS. This work resulted in creation of the ASP.NET Identity Caché Provider.

Continue Reading Post by Maxim Yerokhin 21 September 2016 Last comment 25 January 2017

Hi - I know that when specifying Caché password rules (i.e. what constitutes a valid password definition) the "Pattern Matching" logic is what is getting leveraged under the covers to enforce the "A Password Must conform to X" rule. I was hoping that people could share some more sophisticated pattern matching rules. (In particular, I was wondering about a rule that would require a non-repeating mixture of letters, numbers, & punctuation of an overall minimal size.)

Continue Reading Post by Chip Gore 23 November 2016 Last answer 24 November 2016 Last comment 28 November 2016

Continue Reading Post by Kevin Mayfield 6 September 2016 Last answer 11 October 2016

If a user simply closes a tab (running a web application), is there any good way to ensure that the license is released AND the login cookie is destroyed? I found that if the tab is simply closed without first logging out of the application, then 1) the license hangs around forever, and 2) if the user then opens a tab, he is already logged in.

Continue Reading Post by Laura Cavanaugh 6 September 2016 Last answer 6 September 2016 Last comment 7 September 2016

Continue Reading Post by Laura Cavanaugh 1 September 2016 Last answer 1 September 2016 Last comment 1 September

Continue Reading Post by David Clifte da... 26 August 2016 Last answer 26 August 2016 Last comment 29 August 2016

Hi, We are trying to implement a client side data provider as a component (ZEN) that will use jQuery to do REST calls to a desired URL, in this case a %CSP.Rest service implemented by ourselves. This component will be used within our application, which is authenticated with a correct user configured on the Caché management portal and therefore uses one license unit. As we are using an Ajax call from the client side, this connection creates a new session that will use a new license.

Continue Reading Post by Jose Antonio Ca... 18 August 2016 Last answer 18 August 2016 Last comment 23 August 2016

I'm VERY novice on all things "OpenAM", and beyond knowing that Caché supports working with OpenAM, I have nothing else to go on. The documentation doesn't seem to be very deep on the nature of how this works beyond a single paragraph saying it's supported for Single Sign On (SSO).

Continue Reading Post by Chip Gore 18 August 2016

Hello community, I have productions running in several different namespaces. They all use a common credentials ID for sending email, which is set up in only one of the namespaces. The documentation says that credentials are entered by namespace. When I ran a production in a second namespace, the error log said that credentials were not found (expected), but later attempts to send a file through the production did successfully send an email. I'm wondering if Ensemble is able to look in other namespaces for the same credentials ID?

Continue Reading Post by Laura Cavanaugh 9 August 2016 Last answer 10 August 2016 Last comment 11 August 2016

Hi! I am trying to create a %Installer script and I noticed from our documentation that %Installer's <CSPAuthentication> will only accept:

<CSPApplication> Optional; within <Namespace>. Defines one or more CSP applications; the supported authentication flags are 4 (Kerberos), 32 (Password), and 64 (Unauthenticated).

Is "Delegated" authentication supported? What is its code? Kind regards, Amir Samary

Continue Reading Post by Amir Samary 24 May 2016 Last answer 24 May 2016 Last comment 24 May 2016

Presenter: Saurav Gupta Task: Provide customized authentication support for biometrics, smart cards, etc. Approach: Provide code samples and concept examples to illustrate various custom authentication mechanisms Description: In this session we will discuss customized ways to solve various authentication mechanisms and showcase some sample code. Problem: Using a custom authentication mechanism to support devices like biometrics or smart cards, or to create an authentication front end for existing applications. Solution: Code samples and concept examples. Content related to this session, including slides, video and additional learning content can be found here.

Continue Reading Post by Saurav Gupta 8 April 2016

I need to perform additional checks before Cache user logins (let's say in a terminal for simplicity) and allow access only to those who passed them. How do I do it? After reading about delegated authentication in docs I created this ZAUTHENTICATE routine:

Continue Reading Post by Eduard Lebedyuk 24 February 2016 Last comment 10 March 2016

In preparation for a presentation I need a real-world LDAP schema that has been customized a bit beyond the basics. Preferably this would be based on an OpenLDAP system, which would make it easier to merge into this presentation. If you have such a schema you would be willing to share, please respond or contact me directly at Rich.Taylor@InterSystems.com Thanks in advance. Rich Taylor

Continue Reading Post by Rich Taylor 8 February 2016 Last comment 17 February 2016

I'm interested in different approaches on how to store user data in Caché. I'm assuming that the application uses Caché security/Caché users and not a self-made authentication system. Several approaches I'm familiar with:

Continue Reading Post by Eduard Lebedyuk 7 February 2016 Last comment 12 February 2016
https://community.intersystems.com/tags/authentication?page=1
A powerful Scala idiom is to use the Option class when returning a value from a function that can be null. Simply stated, instead of returning one object when a function succeeds and null when it fails, your function should instead return an instance of an Option, where the instance is either:

- An instance of the Scala Some class
- An instance of the Scala None class

Because Some and None are both children of Option, your function signature just declares that you're returning an Option that contains some type (such as the Int type shown below). At the very least, this has the tremendous benefit of letting the user of your function know what's going on.

A simple Scala Option example

Here's an example of how to use the Scala Option idiom. This source code example is copied from the original version of the book, Beginning Scala:

    def toInt(in: String): Option[Int] = {
        try {
            Some(Integer.parseInt(in.trim))
        } catch {
            case e: NumberFormatException => None
        }
    }

Here's how this toInt function works:

- It takes a String as a parameter.
- If it can convert the String to an Int, it does so, returning it as Some(Int).
- If the String can't be converted to an Int, it returns None.

If you're a consumer of this toInt function, your code will look something like this:

    toInt(someString) match {
        case Some(i) => println(i)
        case None => println("That didn't work.")
    }

Why Option is better than null

This may not look any better than working with null in Java, but to see the value, you have to put yourself in the shoes of the consumer of the toInt function, and imagine you didn't write that function. In Java, if you didn't write the toInt method, you'd have to depend on the Javadoc of the toInt method.

(1) In the first case, if you didn't look at the Javadoc for the Java toInt method, you might not know that toInt could return a null, and your code could potentially throw a NullPointerException.
(2) In the second case, if you did happen to read the Javadoc, and did see that the code could return a null, you might handle it like this:

    Integer i = toInt(someString);
    if (i == null) {
        System.out.println("That didn't work.");
    } else {
        System.out.println(i);
    }

That code isn't any worse than the Scala Option and match approach, but you did have to read the Javadoc to know this was needed.

(3) In the third case, the programmer of the toInt function could handle the NumberFormatException differently, and return some other value besides null, in this case, perhaps zero or some other meaningless number.

Even better!

If you're still not sold on using the Option/Some/None pattern instead of using null values -- I know I wasn't completely sold at this point -- let's see where Scala shines. Let's assume you want to get the sum of a List that contains a bunch of String values, and some of those strings can be converted to Int values, and some can't:

    val bag = List("1", "2", "foo", "3", "bar")

Seems like the code should look really ugly, right? Wrong. In Scala we can write a "sum" expression in just one line of easy to read code:

    val sum = bag.flatMap(toInt).sum

Because (a) we've written toInt to return either a Some[Int] or None value, and (b) flatMap knows how to handle those values, writing this line of code is a piece of cake, and again, it's easy to read and understand. Now that shows how the Option/Some/None pattern really shines!

Java null versus Scala Option

If you've been through this scenario when using someone else's code, you know that at the very least (a) you need to read their Javadoc, and (b) if you don't, your code might throw a NullPointerException. Compare this to the Scala Option idiom:

- You can see from the Scala function signature that the code returns Option[Int], and knowing the Option/Some/None idiom, you know how to handle this situation.
- You can see this signature from your IDE -- there's no need to go look at the Javadoc for the function.

Getting exception information with Scala's Either, Left, and Right

One weakness of the Option approach is that you can't tell why something failed; you can't get access to the error/exception information. A solution to this is to use the Either/Left/Right classes, which are like Option/Some/None, except you can get access to the exception information. See my Scala Either, Left, and Right tutorial to see how to use Either/Left/Right, or the Try/Success/Failure classes introduced in Scala 2.10.

Summary: Scala Option, Some, and None

Even with this simple example, I hope you can see that the Scala Option/Some/None idiom can save you a lot of grief. Although in a simple example like this it doesn't seem to save you any code, I believe your code will be much more readable, and you'll avoid all the problems you can encounter when potentially dealing with null values being returned from functions.

Comment: Seems some error: "where the Option object is either". Either it should be "your function should instead return an instance of an Option, where the INSTANCE is either: ..." or it should be some other way. Isn't it?
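As a quick illustration of the Either/Left/Right approach mentioned in the article, toInt can be rewritten so that the failure information travels back to the caller. This is a sketch in the spirit of the article, not code taken from it:

```scala
// Sketch: an Either-based toInt that preserves the failure information.
// By convention, Left carries the error and Right carries the success value.
def toInt(in: String): Either[String, Int] = {
    try {
        Right(Integer.parseInt(in.trim))
    } catch {
        case e: NumberFormatException => Left(s"'$in' is not a number")
    }
}

// The consumer matches on Left/Right instead of Some/None:
toInt("42") match {
    case Right(i)  => println(i)
    case Left(msg) => println(s"That didn't work: $msg")
}
```

The shape of the code is the same as the Option version; the only difference is that the failure branch now has something to say.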
https://alvinalexander.com/comment/11769
CC-MAIN-2019-30
en
refinedweb
Package project for Business Finance apps.

For help getting started with Flutter, view our online documentation. For help on editing package code, view the documentation.

Add this to your package's pubspec.yaml file:

dependencies:
  businesslibrary: ^0.0.179

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:businesslibrary/businesslibrary.dart';
https://pub.dev/packages/businesslibrary
Provided by: daxctl_63-1.3_amd64

NAME
daxctl - Provides enumeration and provisioning commands for the Linux kernel Device-DAX facility

SYNOPSIS
daxctl [--version] [--help] COMMAND [ARGS]

OPTIONS
-v, --version
        Display the daxctl version.
-h, --help
        Run the daxctl help command.

DESCRIPTION.

SEE ALSO
ndctl-create-namespace(1), ndctl-list(1)
http://manpages.ubuntu.com/manpages/disco/man1/daxctl.1.html
Provided by: alliance_5.1.1-3_amd64

NAME
savephfig - save a physical figure on disk

SYNOPSIS
#include "mpu.h"
void savephfig(ptfig)
phfig_list *ptfig;

PARAMETER
ptfig      Pointer to the phfig to write on disk

DESCRIPTION
savephfig writes on disk the contents of the figure pointed to by ptfig. All the figure lists are run through, and the appropriate objects written, independently of the figure mode. The savephfig function in fact performs a call to a driver, chosen by the MBK_OUT_PH(1) environment variable. The directory in which the file is to be written is the one set by MBK_WORK_LIB(1). See MBK_OUT_PH(1), MBK_WORK_LIB(1) and mbkenv(3) for details.

ERRORS
"*** mbk error *** not supported physical output format 'xxx'"
        The environment variable MBK_OUT_PH is not set to a legal physical format.
"*** mbk error *** savephfig : could not open file figname.ext"
        Either the directory or the file is write protected, so it is not possible to open figname.ext, where ext is the file format extension, for writing.

EXAMPLE
#include "mpu.h"
void save_na2_y()
{
    savephfig(getphfig("na2_y"));
}

SEE ALSO
mbk(1), mbkenv(3), phfig(3), addphfig(3), getphfig(3), delphfig(3), loadphfig(3), flattenphfig(3), rflattenphfig(3), MBK_OUT_PH(1), MBK_WORK_LIB(1).
http://manpages.ubuntu.com/manpages/eoan/man3/savephfig.3.html
Provided by: openmpi-doc_1.10.2-8ubuntu1_all

NAME
MPI_Get_count - Gets the number of top-level elements received.

SYNTAX

C Syntax
#include <mpi.h>
int MPI_Get_count(const MPI_Status *status, MPI_Datatype datatype, int *count)

Fortran Syntax
INCLUDE 'mpif.h'
MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR

C++ Syntax
#include <mpi.h>
int Status::Get_count(const Datatype& datatype) const

INPUT PARAMETERS
status      Return status of receive operation (status).
datatype    Datatype of each receive buffer element (handle).

OUTPUT PARAMETERS
count       Number of received elements (integer).
IERROR      Fortran only: Error status (integer).

DESCRIPTION
performance. A message might be received without counting the number of elements it contains, and the count value is often not needed. Also, this allows the same function to be used after a call to MPI_Probe.

NOTES

ERRORS
If the value to be returned is larger than can fit into the count parameter, an MPI_ERR_TRUNCATE exception is invoked.

SEE ALSO
MPI_Get_elements
http://manpages.ubuntu.com/manpages/xenial/man3/MPI_Get_count.openmpi.3.html
unique because it discusses graph algorithms in terms of generic programming, and because it presents a concrete, usable library that embodies those algorithms.”—Matthew H. Austern, AT&T Labs-Research Groundbreaking in its scope, this book offers the key to unlocking the power of the BGL for the C++ programmer looking to extend the reach of generic programming beyond the Standard Template Library. Internet Packet Routing with the Boost Graph Library A Boost Graph Library Tutorial Sample chapter for this title: siekch03.pdf Download the sample pages (includes Chapter 3 and Index)

Foreword. Preface. I User Guide. 1. Introduction. Some Graph Terminology. Graph Concepts. Vertex and Edge Descriptors. Property Maps. Graph Traversal. Graph Construction and Modification. Algorithm Visitors. Graph Classes and Adaptors. Graph Classes. Graph Adaptors. Generic Graph Algorithms. The Topological Sort Generic Algorithm. The Depth-First Search Generic Algorithm. Introduction. Polymorphism in Object-Oriented Programming. Polymorphism in Generic Programming. Comparison of GP and OOP. Generic Programming and the STL. Concepts and Models. Sets of Requirements. Example: InputIterator. Associated Types and Traits Classes. Associated Types Needed in Function Template. Typedefs Nested in Classes. Definition of a Traits Class. Partial Specialization. Tag Dispatching. Concept Checking. Concept-Checking Classes. Concept Archetypes. The Boost Namespace. Classes. Koenig Lookup. Named Function Parameters. File Dependencies. Graph Setup. Compilation Order. Topological Sort via DFS. Marking Vertices Using External Properties. Accessing Adjacent Vertices. Traversing All the Vertices. Cyclic Dependencies. Toward a Generic DFS: Visitors. Graph Setup: Internal Properties. Compilation Time. A Generic Topological Sort and DFS. Parallel Compilation Time. Summary. Breadth-First Search. Definitions. Six Degrees of Kevin Bacon. Depth-First Search. Definitions.
Finding Loops in Program-Control-Flow Graphs. Definitions. Internet Routing. Bellman-Ford and Distance Vector Routing. Dijkstra and Link-State Routing. Definitions. Telephone Network Planning. Kruskal's Algorithm. Prim's Algorithm. Definitions. Connected Components and Internet Connectivity. Strongly Connected Components and Web Page Links. Definitions. Edge Connectivity. Knight's Jumps as a Graph. Backtracking Graph Search. Warnsdorff's Heuristic. Using BGL Topological Sort with a LEDA Graph. Using BGL Topological Sort with a SGB Graph. Implementing Graph Adaptors. Graph Class Comparisons. The Results and Discussion. Conclusion. II Reference Manual. Graph Traversal Concepts. Undirected Graphs. Graph. IncidenceGraph. BidirectionalGraph. AdjacencyGraph. VertexListGraph. EdgeListGraph. AdjacencyMatrix. Graph Modification Concepts. VertexMutableGraph. EdgeMutableGraph. MutableIncidenceGraph. MutableBidirectionalGraph. MutableEdgeListGraph. PropertyGraph. VertexMutablePropertyGraph. EdgeMutablePropertyGraph. Visitor Concepts. BFSVisitor. DFSVisitor. DijkstraVisitor. BellmanFordVisitor. Overview. Basic Algorithms. breadth_first_search. breadth_first_visit. depth_first_search. depth_first_visit. topological_sort. Shortest-Path Algorithms. dijkstra_shortest_paths. bellman_ford_shortest_paths. johnson_all_pairs_shortest_paths. Minimum-Spanning-Tree Algorithms. kruskal_minimum_spanning_tree. prim_minimum_spanning_tree. Static Connected Components. connected_components. strong_components. Incremental Connected Components. initialize_incremental_components. incremental_components. same_component. component_index. Maximum-Flow Algorithms. edmonds_karp_max_flow. push_relabel_max_flow. Graph Classes. adjacency_list. adjacency_matrix. Auxiliary Classes. graph_traits. adjacency_list_traits. adjacency_matrix_traits. property_map. property. Graph Adaptors. edge_list. reverse_graph. filtered_graph. SGB GraphPointer. LEDA GRAPH<V,E>. std::vector<EdgeList>. Property Map Concepts.
ReadablePropertyMap. WritablePropertyMap. ReadWritePropertyMap. LvaluePropertyMap. Property Map Classes. property_traits. iterator_property_map. Property Tags. Creating Your Own Property Maps. Property Maps for Stanford GraphBase. A Property Map Implemented with std::map. Buffer. ColorValue. MultiPassInputIterator. Monoid. mutable_queue. Disjoint Sets. disjoint_sets. find_with_path_halving. find_with_full_path_compression. tie. graph_property_iter_range. The graph abstraction is a powerful problem-solving tool used to describe relationships between discrete objects. Many practical problems can be modeled in their essential form by graphs. Such problems appear in many domains: Internet packet routing, telephone network design, software build systems, Web search engines, molecular biology, automated road-trip planning, scientific computing, and so on. The power of the graph abstraction arises from the fact that the solution to a graph-theoretic problem can be used to solve problems in a wide variety of domains. For example, the problem of solving a maze and the problem of finding groups of Web pages that are mutually reachable can both be solved using depth-first search, an important concept from graph theory. By concentrating on the essence of these problems—the graph model describing discrete objects and the relationships between them—graph theoreticians have created solutions to not just a handful of particular problems, but to entire families of problems. Now a question arises. If graph theory is generally and broadly applicable to arbitrary problem domains, should not the software that implements graph algorithms be just as broadly applicable? Graph theory would seem to be an ideal area for software reuse. However, up until now the potential for reuse has been far from realized. Graph problems do not typically occur in a pure graph-theoretic form, but rather are embedded in larger domain-specific problems.
As a result, the data to be modeled as a graph are often not explicitly represented as a graph but are instead encoded in some application-specific data structure. Even in the case where the application data are explicitly represented as a graph, the particular graph representation chosen by the programmer might not match the representation expected by a library that the programmer wants to use. Moreover, different applications may place different time and space requirements on the graph data structure. This implies a serious problem for the graph library writer who wants to provide reusable software, for it is impossible to anticipate every possible data structure that might be needed and to write a different version of the graph algorithm specifically for each one. The current state of affairs is that graph algorithms are written in terms of whatever data structure is most convenient for the algorithm and users must convert their data structures to that format in order to use the algorithm. This is an inefficient undertaking, consuming programmer time and computational resources. Often, the cost is perceived not to be worthwhile, and the programmer instead chooses to rewrite the algorithm in terms of his or her own data structure. This approach is also time consuming and error prone, and will tend to lead to sub-optimal solutions since the application programmer may not be a graph algorithms expert. Generic programming lends itself well to solving the reusability problem for graph libraries. With generic programming, graph algorithms can be made much more flexible, allowing them to be easily used in a wide variety of applications. Each graph algorithm is written not in terms of a specific data structure, but instead to a graph abstraction that can be easily implemented by many different data structures.
Writing generic graph algorithms has the additional advantage of being more natural; the abstraction inherent in the pseudo-code description of an algorithm is retained in the generic function. The Boost Graph Library (BGL) is the first C++ graph library to apply the notions of generic programming to the construction of graph algorithms. Soon after the Standard Template Library was released, work began at the LSC (the Lab for Scientific Computing) to apply generic programming to scientific computing. The Matrix Template Library (MTL) was one of the first projects. Many of the lessons learned during construction of the MTL were applied to the design and implementation of the GGCL (the Generic Graph Component Library). The LSC has since evolved into the Open Systems Laboratory (OSL); although the name and location have changed, the research agenda remains the same. An important class of linear algebra computations in scientific computing is that of sparse matrix computations, an area where graph algorithms play an important role. As the LSC was developing the sparse matrix capabilities of the MTL, the need for high-performance reusable (and generic) graph algorithms became apparent. However, none of the graph libraries available at the time (LEDA, GTL, Stanford GraphBase) were written using the generic programming style of the MTL and the STL, and hence did not fulfill the flexibility and high-performance requirements of the LSC. Other researchers were also expressing interest in a generic C++ graph library. During a meeting with Bjarne Stroustrup, we were introduced to several individuals at AT&T who needed such a library. Other early work in the area of generic graph algorithms included some codes written by Alexander Stepanov, as well as Dietmar Kühl’s master’s thesis. With this in mind, and motivated by homework assignments in his algorithms class, Jeremy Siek began prototyping an interface and some graph classes in the spring of 1998. Lie-Quan Lee then developed the first version of the GGCL, which became his master’s thesis project.
During the following year, the authors began collaborating with Alexander Stepanov and Matthew Austern. During this time, Stepanov’s disjoint-sets-based connected components implementation was added to the GGCL, and work began on providing concept documentation for the GGCL, similar to Austern’s STL documentation. During this year the authors also became aware of Boost and were excited to find an organization interested in creating high-quality, open source C++ libraries. Boost included several people interested in generic graph algorithms, most notably Dietmar Kühl. Some discussions about generic interfaces for graph structures resulted in a revision of the GGCL that closely resembles the current Boost Graph Library interface. On September 4, 2000, the GGCL passed the Boost formal review (managed by David Abrahams) and became the Boost Graph Library. The first release of the BGL was September 27, 2000. The BGL is not a “frozen” library. It continues to grow as new algorithms are contributed, and it continues to evolve to meet users’ needs. We encourage readers to participate in the Boost group and help with extensions to the BGL. The zip archive of the Boost library collection can be unzipped by using WinZip or other similar tools. The UNIX “tar ball” can be expanded using the following command:

gunzip -cd boost_all.tar.gz | tar xvf -

Extracting the archive creates a directory whose name consists of the word boost and a version number. For example, extracting the Boost release 1.25.1 creates a directory boost_1_25_1. Under this top directory are two principal subdirectories: boost and libs. The subdirectory boost contains the header files for all the libraries in the collection. The subdirectory libs contains a separate subdirectory for each library in the collection. These subdirectories contain library-specific source and documentation files. You can point your Web browser to boost_1_25_1/index.htm and navigate the whole Boost library collection.
All of the BGL header files are in the directory boost/graph/. However, other Boost header files are needed since BGL uses other Boost components. The HTML documentation is in libs/graph/doc/ and the source code for the examples is in libs/graph/example/. Regression tests for BGL are in libs/graph/test/. The source files in libs/graph/src/ implement the Graphviz file parsers and printers. Except as described next, there are no compilation and build steps necessary to use BGL. All that is required is that the Boost header file directory be added to your compiler’s include path. For example, using Windows 2000, if you have unzipped release 1.25.1 from boost_all.zip into the top level directory of your C drive, for Borland, GCC, and Metrowerks compilers add -Ic:/boost_1_25_1 to the compiler command line, and for the Microsoft Visual C++ compiler add /I "c:/boost_1_25_1". For IDEs, add c:/boost_1_25_1 (or whatever you have renamed it to) to the include search paths using the appropriate dialog. Before using the BGL interface to LEDA or Stanford GraphBase, LEDA or GraphBase must be installed according to their installation instructions. To use the read_graphviz() functions (for reading AT&T Graphviz files), you must build and link to an additional library under boost_1_25_1/libs/graph/src. The Boost Graph Library is written in ISO/IEC Standard C++ and compiles with most C++ compilers. For an up-to-date summary of the compatibility with a particular compiler, see the “Compiler Status” page (compiler_status.html) at the Boost Web site. The third partner to the user guide and reference manual is the BGL code itself. The BGL code is not simply academic and instructional. It is intended to be used. For students learning about graph algorithms and data structures, BGL provides a comprehensive graph algorithm framework.
The student can concentrate on learning the important theory behind graph algorithms without becoming bogged down and distracted in too many implementation details. For practicing programmers, BGL provides high-quality implementations of graph data structures and algorithms. Programmers will realize significant time savings from this reliability. Time that would have otherwise been spent developing (and debugging) complicated graph data structures and algorithms can now be spent in more productive pursuits. Moreover, the flexible interface to the BGL will allow programmers to apply graph algorithms in settings where a graph may only exist implicitly. For the graph theoretician, this book makes a persuasive case for the use of generic programming for implementing graph-theoretic algorithms. Algorithms written using the BGL interface will have broad applicability and will be able to be reused in numerous settings. We assume that the reader has a good grasp of C++. Since there are many sources where the reader can learn about C++, we do not try to teach it here (see the references at the end of the book—The C++ Programming Language, Special ed., by Stroustrup and C++ Primer, 3rd ed., by Josée Lajoie and Stanley B. Lippman are our recommendations). We also assume some familiarity with the STL (see STL Tutorial and Reference Guide by David R. Musser, Gillmer J. Derge, and Atul Saini, and Generic Programming and the STL by Matthew Austern). We do, however, present some of the more advanced C++ features used to implement generic libraries in general and the BGL in particular. Some necessary graph theory concepts are introduced here, but not in great detail. For a detailed discussion of elementary graph theory see Introduction to Algorithms by T. H. Cormen, C. E. Leiserson, and R. L. Rivest.
Dave Abrahams, Jens Maurer, Dietmar Kühl, Beman Dawes, Gary Powell, Greg Colvin, and the rest of the group at Boost provided valuable input to the BGL interface, numerous suggestions for improvement, and proofreads of this book. We also thank the following BGL users whose questions helped to motivate and improve BGL (as well as this book): Gordon Woodhull, Dave Longhorn, Joel Phillips, Edward Luke, and Stephen North. Thanks to a number of individuals who reviewed the book during its development: Jan Christiaan van Winkel, David Musser, Beman Dawes, and Jeffrey Squyres. A great thanks to our editor Deborah Lafferty; Kim Arney Mulcahy, Cheryl Ferguson, and Marcy Barnes, the production coordinator; and the rest of the team at Addison-Wesley. It was a pleasure to work with them. Our original work on the BGL was supported in part by NSF grant ACI-9982205. Parts of the BGL were completed while the third author was on sabbatical at Lawrence Berkeley National Laboratory (where the first two authors were occasional guests). All of the graph drawings in this book were produced using the dot program from the Graphviz package. The BGL may be used freely for both commercial and noncommercial use. The main restriction on BGL is that modified source code can only be redistributed if it is clearly marked as a nonstandard version of BGL. The preferred method for the distribution of BGL, and for submitting changes, is through the Boost Web site.

, (comma), 40
. (period), 40
; (semicolon), 73

A
abstract data types (ADTs), 19 accumulate function, 26-27 Adaptor(s) basic description of, 13-14 implementing, 123-126 pattern, 119 add_edge function, 9, 17, 43, 84, 121, 152-153, 226 EdgeMutablePropertyGraph concept and, 157 performance guidelines and, 128 undirected graphs and, 141 AdditiveAbelianGroup class, 20-21 add_vertex function, 9, 43, 120, 152, 157, 128, 225 AdjacencyGraph concept, 46-47, 114, 115, 146-149 adjacency_graph_tag function, 124 adjacency_iterator function, 47, 146 adjacency_list class, 11-12, 13 Bacon numbers and, 63, 65 basic description of, 43 boost namespace and, 37-39 compilation order and, 37, 46 implicit graphs and, 114 interfacing with other graph libraries and, 119 internal properties and, 52-53 maximum flow and, 107 minimum-spanning-tree problem and, 92 performance guidelines and, 127, 128, 130, 132 shortest-path problems and, 84 template parameters, 52 using topological sort with, 17-18 adjacency_list.hpp, 17, 216, 246 adjacency_matrix class, 11-12, 43 associated types, 238-239 basic description of, 235-242 member functions, 239-240 nonmember functions, 240-242 template parameters, 238 type requirements, 238 adjacency_matrix.hpp, 237 adjacency_vertices function, 46-47, 146 implicit graphs and, 114, 115 maximum flow and, 109 adjacent iterators, 7 ADTs (abstract data types), 19 advance_dispatch function, 33 advance function, 33 Algorithms. See also Algorithms (listed by name) basic description of, 13-18, 61-74 generic, 13-18 Koenig lookup and, 39 Algorithms (listed by name). See also Algorithms bellman_ford_shortest_paths algorithm, 40, 76-82, 162, 182-186 breadth_first_search algorithm, 11, 39, 61-67, 158-159, 165-169 depth_first_search algorithm, 13, 18, 44-46, 57, 67-75, 98, 160-161, 170-175 Dijkstra’s shortest-path algorithm, 76, 81-88, 161, 179-181, 277 Edmonds-Karp algorithm, 105, 109 Ford-Fulkerson algorithm, 105 Kruskal’s algorithm, 90-93, 95, 189-192 Prim’s algorithm, 89, 90, 94-96 push-relabel algorithm, 105 ANSI C, 262 archetype class, 36 array_traits class, 31-33 array traits, for pointer types, 32-33 Array type, 30 Assignable concept, 37, 28-29, 143 Associated types, 28-34, 143, 205 adjacency_list class, 216-217 adjacency_matrix class, 238-239 edge_list class, 251-252 filtered_graph class, 259-260 graph_property_iter_range class, 298 iterator_property_map class, 284 LEDA Graph class, 268-269 property_map class, 249 reverse_graph class, 253-255 associative_property_map adaptor, 103 Austern, Matthew, 28

B
back_edge function, 50, 67, 160 back_edge_recorder class, 70 back_edges vector, 71 backward edge, 106 Bacon, Kevin, 62-67 bacon_number array, 66 bacon_number_recorder, 66 Bacon numbers basic description of, 61-67 graph setup and, 63-65 input files and, 63-65 bar.o, 54 Base classes, 20, 21 parameter, 37 basic block, 69 BCCL (Boost Concept Checking Library), 35, 36 bellman_ford.cpp, 185-186 bellman_ford_shortest_paths algorithm, 40, 162 basic description of, 76-82, 182-186 named parameters, 184 parameters, 183 time complexity and, 185 BellmanFordVisitor concept, 161-162 BFS (breadth-first search), 11, 39. See also breadth_first_search algorithm Bacon numbers and, 65-67 basic description of, 61-67 visitor concepts and, 158-159 bfs_name_printer class, 11 BFSVisitor interface, 66, 158-159 bgl_named_params class, 40 BidirectionalGraph concept, 69, 72, 124, 145-146 bidirectional_graph_tag class, 124 Binary method problem, 23-24 boost::array, 78 Boost Concept Checking Library (BCCL), 35, 36 boost::forward_iterator_helper, 114 BOOST_INSTALL_PROPERTY, 52 Boost namespace adjacency_list class and, 37-38 basic description of, 37-39 boost:: prefix, 37-38 Boost Property Map Library, 55-56, 79, 80 Boost Tokenizer Library, 63 breadth-first search (BFS), 11, 39. See also breadth_first_search algorithm Bacon numbers and, 65-67 basic description of, 61-67 visitor concepts and, 158-159 breadth_first_search algorithm, 11, 39. See also breadth-first search (BFS) Bacon numbers and, 65-67 basic description of, 61-67, 165-169 named parameters, 167 parameters, 166 preconditions, 167 visitor concepts and, 158-159 Breadth-first tree, 61

C
C++ (high-level language) associated types and, 28 binary method problem and, 23-24 code bloat and, 23 concept checking and, 34-37 expressions, 28 generic programming in, 19-59 GNU, 127, 128 Koenig lookup and, 38-39 Monoid concept and, 291 named parameters and, 39-40 object-oriented programming and, 22-25 Standard, 22, 28, 55, 125 valid expressions and, 28 capacity_map parameter, 206 cap object, 108 cc-internet.dot, 98 Chessboard, Knight's tour problem for, 113-118 Class(es). See also Classes (listed by name) abstract, 20 archetype, 36 auxiliary, 242-251, 289-298 basic description of, 13-14, 213-275 base, 20, 21 comparisons, 127-132 concept-checking, 35-36 nesting, 52 selecting, 43-44 typedefs nested in, 30-31 Classes (listed by name). See also adjacency_list class AdditiveAbelianGroup class, 20-21 adjacency_matrix class, 11-12, 43, 234-242 adjacency_vertices class, 109, 114, 115
http://www.informit.com/store/boost-graph-library-user-guide-and-reference-manual-9780201729146?w_ptgrevartcl=The+Boost+Graph+Library_25777
Overview One of the most important capabilities of any platform in today’s service-driven, pay-as-you-go economy is metering and showback. Without a solid understanding of costs, organizations are in fact unable to provide services. With containers, metering and showback become more challenging. If we think about containers simply being processes, then we basically need to meter and perform showback at that level of granularity. In addition, since OpenShift uses Kubernetes for container orchestration, there are additional new concepts. For example, one or more containers run together in what Kubernetes refers to as a Pod. Pods are also extremely dynamic, and their lifetimes can be very short. All of this makes metering and showback anything but straightforward. Thankfully OpenShift and CloudForms have the solution. OpenShift comes with Hawkular, a highly scalable, flexible, performant metric storage system based on Cassandra. Hawkular runs on OpenShift (as a container) and collects CPU, Memory, Disk and Network utilization for all Pods running in the environment. CloudForms is a multi-cloud management platform that integrates with many platforms including OpenShift. CloudForms is able to read the data from Hawkular, apply user-defined rates, and report on costs for containers running in Pods on OpenShift. By default CloudForms is able to show these reports per project. Since an application or business will have multiple projects, showback should ideally be done based on a group of projects. In this article we will look at how to create showback reports from a group of interrelated projects in OpenShift using CloudForms. Pre-requisites In order to set up showback reports you will need an OpenShift environment and of course CloudForms. I tested with OpenShift 3.5 and CloudForms 4.5.
- Deploy CloudForms on Red Hat Enterprise Virtualization
- Deploy CloudForms on Azure
- Deploy OpenShift and Configure CloudForms

Label Projects in OpenShift In OpenShift, since orchestration is Kubernetes, everything can be labeled. In this case we want to label our OpenShift projects (Kubernetes namespaces) so projects that belong together can be easily identified. For simplicity, we will call a group of OpenShift projects that have the same cost center a box. Each box has a number, so it can be identified. Ideally the name of the project should also include the box for quick identification. In the real world you will most likely not want to allow developers to create projects directly in OpenShift. Instead you can use CloudForms and provide a service. Here you can ask the end user for a cost center or something to identify themselves and then ensure all projects that are created have that identifier in the name (optional) and label (required). You can also enforce quotas, and of course you would want to do that to ensure end users only get what they pay for. List all projects that belong together. Here it is easy, since they all have Box02 in the display name; again, this is why you would optionally want to add the identifier to the project name.

[ktenzer@master1 ~]$ oc get projects |grep -i box
greetme-poc           Box02: greetme-poc           Active
greetme-norman-dev    Box02: greetme-norman-dev    Active
greetme-prod          Box02: greetme-prod          Active
greetme-workshop      Box02: GreetMe-Workshop      Active
greetme-dev           Box02: greetme-dev           Active
greetme-norman-prod   Box02: greetme-norman-prod   Active

Label all the projects that belong together. In this case we are labeling box02 projects.
[ktenzer@master1 ~]$ oc label namespace greetme-poc box=box02
namespace "greetme-poc" labeled
[ktenzer@master1 ~]$ oc label namespace greetme-norman-dev box=box02
namespace "greetme-norman-dev" labeled
[ktenzer@master1 ~]$ oc label namespace greetme-prod box=box02
namespace "greetme-prod" labeled
[ktenzer@master1 ~]$ oc label namespace greetme-workshop box=box02
namespace "greetme-workshop" labeled
[ktenzer@master1 ~]$ oc label namespace greetme-dev box=box02
namespace "greetme-dev" labeled
[ktenzer@master1 ~]$ oc label namespace greetme-norman-prod box=box02
namespace "greetme-norman-prod" labeled

Map Labels to Tags in CloudForms Once projects have been labeled in OpenShift we need to map those labels to tags in CloudForms. Refresh CloudForms Data Go to the OpenShift provider in CloudForms and refresh relationships. View Labels OpenShift project labels should now be visible in CloudForms. In this case box is the label, and on our greetme-prod project it is set to the box02 label. Map label to tag Under user->Configuration you can map labels to tags. We will create a new mapping and simply map the OpenShift label box to a tag in CloudForms also called box. The names don’t, of course, need to match. Refresh CloudForms Data Go to the OpenShift provider in CloudForms and refresh relationships. CloudForms now needs to discover the tags. View Tags Under the project's smart management tab we can now see that the tag box: box02 has been applied. Create Showback Reports in CloudForms Once the labels exist in OpenShift and are mapped to tags, we can start to generate reports and actually sort by tag. Here we will create two reports: one that shows showback costs for all boxes (tags), and another for a specific tag like box02. Create Showback Rates Under "Cloud Intel->Chargeback->Rates" you can create your own rates. CloudForms lets you choose the currency and rates. You can configure either fixed or variable rates for CPU, Memory, Network I/O and Disk I/O.
Finally, CloudForms also lets you set ranges. You could have the first few CPU cores cost X and the rest cost Y, for example. Both reports will use the same showback rates. Create Showback Report for all "box" tags This report will show all boxes for our tag box and their cost. Under "Cloud Intel->Reports->Custom" add a new report. Name the report and select the fields that should be part of the report. Here we have selected the tag plus CPU / Memory used and cost. Under the "filter" tab of the report we need to define the filters. Select all projects and group by the tag box. To run the report, select the report and click queue. You can also generate the report under the preview tab. Below is the output. This report shows cost according to box (a group of OpenShift projects). It is a monthly report that shows total cost per day. In this case we only have box02. Notice the <empty> box. These are projects that are not associated with a box. Create Showback Report for tag box02 In this report we want to see only projects that have the tag box02. Follow the same steps as above, only this time change Show Costs to tag and select the tag box02. Grouping should be done by project. To run the report, select the report and click queue. You can also generate the report under the preview tab. Below is the output. This report shows cost according to box02. We see all the projects that are part of box02 and their costs. It is a monthly report that shows total cost per day per project.
Hopefully this article provides some ideas, and you have seen the possibilities that exist today for metering and providing showback of costs in OpenShift using CloudForms.

Happy OpenShifting!

(c) 2017 Keith Tenzer

Thank you very much for your guide. So I understand the reporting module and could create reports. Unfortunately, no values are displayed to me, only empty fields. Hawkular works and I can also see metrics elsewhere in CloudForms, for example the OpenShift cluster load. Do you have to enable import of the metrics for reporting? Do you have a tip for me? Best regards

Hi Holger, this I believe is because you selected all OpenShift providers. Try again with just the specific OpenShift provider selected. At least I ran into this and it was a bug. Let me know? Also make sure chargeback rates are assigned... I need to add that to the blog post; a colleague reminded me it was missing.

This was the problem. I forgot to assign the rate to a provider. Now it is working like a dream. Thank you for the guide and the hint. Best regards, Holger

Hi, first of all thanks for spending time to help us! Could you show how chargeback rates are assigned? In my case I have some rates assigned to instance flavors and I don't know how to make assignments using containers...

Under rates. Unfortunately I don't have access to CloudForms at the moment to make a screenshot, but basically you can create your own chargeback rate and assign it to everything or to the name of the OpenShift provider (whatever you called it).

OK, I get what you mean, but the question is: am I able to assign rates using VM tags and a container provider at the same time? In my tests, whenever I change compute rate assignments, the rate assignments to tags are empty.
https://keithtenzer.com/2017/07/17/openshift-showback-reporting-using-cloudforms/?replytocom=2242
A Flutter plugin for interacting with the YouTube API to fetch data. Supports iOS and Android.

To use this plugin, add youtube_api as a dependency in your pubspec.yaml file:

youtube_api: ^0.7.2

static String key = 'YOUR_API_KEY';

YoutubeAPI ytApi = new YoutubeAPI(key);
List<YT_API> ytResult = [];

String query = "Flutter";
ytResult = await ytApi.Search(query);
// the data available in ytResult is shown below

These data are stored in ytResult:

[
  {
    "kind": "video",
    "id": "9vzd289Eedk",
    "channelTitle": "Java",
    "title": "WEBINAR - Programmatic Trading in Indian Markets using Python with Kite Connect API",
    "description": "For traders today, Python is the most preferred programming language for trading, as it provides great flexibility in terms of building and executing strategies.",
    "publishedAt": "2016-10-18T14:41:14.000Z",
    "channelId": "UC8kXgHG13XdgsigIPRmrIyA",
    "thumbnails": {
      "default": { "url": "", "width": 120, "height": 90 },
      "medium": { "url": "", "width": 320, "height": 180 },
      "high": { "url": "", "width": 480, "height": 360 }
    },
    "channelurl": "",
    "url": ""
  },
  {
    "kind": "video"
    // next result here
  },
  {
    // next result here
  },
  {
    // next result here
    "url": ""
  }
]

The default per-page result count is 10. If you want to search for a specific kind of result, i.e. a video, playlist, or channel:

For channels only, specify > Type: "channel"
For videos only, specify > Type: "video"
For playlists only, specify > Type: "playlist"

maxResults (int) can be 1 - 50.

int max = 25;
String type = "channel";
YoutubeAPI ytApi = new YoutubeAPI(key, maxResults: max, Type: type);

Note: This plugin is still under development, and some APIs might not be available yet. Feedback and pull requests are most welcome!

You can install packages from the command line with Flutter:

$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:

import 'package:youtube_api/youtube_api.dart';

We analyzed this package on Jul 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed.

Detected platforms: Flutter — references Flutter, and has no conflicting libraries.

Fix lib/youtube_api.dart. (-1 points)
Analysis of lib/youtube_api.dart reported 2 hints:
line 9 col 30: The value of the field '_channel' isn't used.
line 153 col 7: Name types using UpperCamelCase.

Fix lib/generated/i18n.dart. (-0.50 points)
Analysis of lib/generated/i18n.dart reported 1 hint:
line 29 col 7: Name types using UpperCamelCase.
https://pub.dev/packages/youtube_api
React Dazzle

Dashboards made easy in React JS.

Features
- Grid based layout
- Add/Remove widgets
- Drag and drop widget re-ordering
- UI framework agnostic
- Simple yet flexible
- Well documented (It’s a feature! Don’t you think?)

Installation

$ npm install react-dazzle --save

Dazzle me

Here is a demo. The widgets show fake data, but they look so damn cool (at least to me). The repository of the demo is available here.

Usage

import React, { Component } from 'react';
import Dashboard from 'react-dazzle';

// Your widget. Just another React component.
import CounterWidget from './widgets/CounterWidget';

// Default styles.
import 'react-dazzle/lib/style/style.css';

class App extends Component {
  constructor() {
    super();
    this.state = {
      widgets: {
        WordCounter: {
          type: CounterWidget,
          title: 'Counter widget',
        },
      },
      layout: {
        rows: [{
          columns: [{
            className: 'col-md-12',
            widgets: [{ key: 'WordCounter' }],
          }],
        }],
      },
    };
  }

  render() {
    return <Dashboard widgets={this.state.widgets} layout={this.state.layout} />;
  }
}

API

Providing widgets

The widgets prop of the dashboard component takes an object. A sample widgets object would look like below. This object holds all the widgets that could be used in the dashboard.

{
  HelloWorldWidget: {
    type: HelloWorld,
    title: 'Hello World Title'
  },
  AnotherWidget: {
    type: AnotherWidget,
    title: 'Another Widget Title'
  }
}

type property – Should be a React component function or class.
title property – Title of the widget that should be displayed on top.

Editable mode

Setting the editable prop to true will make the dashboard editable.

Add new widget

When the user tries to add a new widget, the onAdd callback will be called. More info here on how to handle widget addition.

Remove

- Improve drag and drop experience (#1)

License

MIT © Raathigesh
http://www.shellsec.com/news/13751.html
Hello guys,

This is my O(n) Java solution that traverses the array only once and requires (almost) no additional space.

public int canCompleteCircuit(int[] gas, int[] cost) {
    if (gas.length == 0) return -1;
    int startStation = -1;
    int prevMinDiff = 0;
    int totalDiff = 0;
    for (int i = 0; i < gas.length; i++) {
        int diff = gas[i] - cost[i];
        if (diff >= 0 && startStation == -1) {
            startStation = i;
            prevMinDiff = totalDiff;
            totalDiff += diff;
        } else {
            totalDiff += diff;
            if (totalDiff < prevMinDiff) startStation = -1;
        }
    }
    return (totalDiff < 0) ? -1 : startStation;
}
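For anyone who prefers Python, the same single-pass idea can be sketched with the common greedy variant (not the exact bookkeeping above): if the total surplus over the whole loop is non-negative an answer exists, and any candidate start whose running tank goes negative can be abandoned in favor of the next station.

```python
def can_complete_circuit(gas, cost):
    """Single-pass greedy for the gas-station problem."""
    total = 0   # overall surplus/deficit for the whole loop
    tank = 0    # surplus since the current candidate start
    start = 0
    for i in range(len(gas)):
        diff = gas[i] - cost[i]
        total += diff
        tank += diff
        if tank < 0:        # candidate start fails before station i+1
            start = i + 1
            tank = 0
    return start if total >= 0 else -1

print(can_complete_circuit([1, 2, 3, 4, 5], [3, 4, 5, 1, 2]))  # 3
print(can_complete_circuit([2, 3, 4], [3, 4, 3]))              # -1
```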
https://discuss.leetcode.com/topic/15876/my-o-n-java-solution-272ms
Webster's Collegiate Dictionary defines an event as "a noteworthy happening." This general definition, in addition to conveying the meaning of the word in its social sense, is also remarkably precise when it comes to describing the fairly abstract concept referred to as an event in computer parlance. For software developers an event means exactly this: a "noteworthy happening" within the realm of a particular software system. Event-driven programming is a very popular type of software development model that is widely employed throughout the industry. As a result, nearly every GUI development toolkit in existence today is structured around some kind of an event loop, where custom event handling code is invoked automatically by the toolkit in response to some external actions, such as mouse movements or keystrokes. Although, GUI-based systems are the most natural candidates for applying event-driven programming techniques due to the highly irregular patterns of human interaction with computers, the event-driven model is highly beneficial even when it comes to building noninteractive, batch-mode software. Consider, for instance, a messaging system. A conventional message-processing program would have to continuously monitor the input queue, checking for newly received messages. An event-driven program, on the other hand, would install a message handling callback and rely on the messaging infrastructure to raise an event upon the arrival of a new message. Clearly, an event-driven approach is far superior because it relieves the developer from the duty of writing often convoluted polling code, and it also increases runtime efficiency and potentially lessens resource utilization. In the arena of system management, event-enabled architecture is often a critical success factor. While old-fashioned monitoring techniques that are based on periodic polling may be adequate for a standalone, simple computer system, today's enterprise installations rarely fit such a profile. 
Instead, a typical modern system may easily include dozens of geographically distributed computing nodes and hundreds of individually managed elements, thus rendering conventional monitoring methods completely impractical. Therefore, an ability to disseminate the management information via events is no longer a luxury; it is now a requirement for a robust management system. Because it is the leading enterprise management framework, WMI features extensive support for event-driven programming. As you can imagine, the WMI eventing mechanism is fairly complex and requires an extensive infrastructure to function in a reliable and efficient fashion. Fortunately, the event-related types of the System.Management namespace hide a great deal of complexities associated with handling management events in a distributed environment. This chapter is dedicated to explaining the basic principles of WMI eventing and illustrating the most fundamental techniques for building powerful event-driven monitoring tools with System.Management types. Just like anything else in WMI, an event is represented by an instance of a WMI class. Unlike other WMI objects, instances of event classes are transient—they are created dynamically by an event provider or the WMI itself and only exist while they are being consumed by a client application. Ultimately, all event classes are derived from a single root—a system class called __Event. This is an abstract class, intended to serve as a basis for defining more specific event types, and as such, it does not have any nonsystem properties. All event classes derived from __Event are categorized as either intrinsic, extrinsic, or timer events. Intrinsic events are the most interesting and generic category of events that are supported by WMI. The idea behind intrinsic events is simple, yet elegant and powerful; it revolves around the assumption that WMI object instances depict the current state of the associated managed elements. 
An event, therefore, can be viewed as a change of state that is undergone by a particular WMI object. For instance, creation of an operating system process can be modeled as creation of an instance of the Win32_Process class, while the death of the process can be represented by the deletion of a corresponding Win32_Process object. Besides the intrinsic events that represent the changes of state of an arbitrary WMI object, there are events that reflect the changes that are undergone by the CIM Repository, such as the addition and deletion of classes or namespaces. Thus, to model the intrinsic events, __Event class has three subclasses: __InstanceOperationEvent, __ClassOperationEvent, and __NamespaceOperationEvent. These subclasses are designed to represent state transitions for WMI instances, classes, and namespaces respectively. The __InstanceOperationEvent class encompasses all kinds of state changes that a single instance of a WMI class may go through, such as creation, modification, and deletion. It has a single nonsystem property, TargetInstance, which refers to a WMI object that is affected by the event. Thus, whenever a new instance is created, TargetInstance points to a newly created object; when an object is changed, this property is set to the new, modified version of the object; and finally, when an instance is deleted, TargetInstance refers to the deleted object. The __ClassOperationEvent class is an umbrella for all class-related operations. Just like its instance counterpart, it has a single property called TargetClass, which identifies a WMI class that is affected by the event. This property points to a newly created class, a modified class, or a deleted class for the creation, modification, and deletion of events correspondingly. Finally, __NamespaceOperationEvent is a class that embodies all events that affect WMI namespaces. 
Again, it has a single, nonsystem property, TargetNamespace, which, depending on the event, refers to a newly created, modified, or deleted namespace. Note that this property does not just contain the name of the affected namespace, but a namespace object. As you may remember, WMI namespaces are modeled as instances of the WMI class __Namespace, thus, the data type of the TargetNamespace property is __Namespace. Although more specific than their __Event superclass, __InstanceOperationEvent, __ClassOperationEvent and __NamespaceOperationEvent are much too general to be effective. These classes simply do not convey enough information to unambiguously differentiate between specific types of events, such as creation, deletion, or modification. Such design is intentional because none of these three classes are intended to have instances—in fact, all of them are just superclasses for more specialized event classes that zero in on the particulars of the operation that triggers the event. Table 4-1 presents a complete list of the intrinsic events that are supported by WMI. The first three classes listed in Table 4-1—__ClassCreationEvent, __ClassDeletionEvent and __ClassModificationEvent—are used to communicate to the consumer the details of an operation that affect an arbitrary WMI class definition. __ClassCreationEvent and __ClassDeletionEvent are exact copies of their superclass, __ClassOperationEvent, and they do not define any additional nonsystem properties. __ClassModificationEvent, on the other hand, adds a property, called PreviousClass, which refers to the copy of the class definition prior to the modification. Thus, a consumer application that handles __ClassModificationEvent can compare the new and the original class definitions that are pointed to by the TargetClass and PreviousClass properties and determine the exact nature of the modification. 
Similarly, __InstanceCreationEvent and __InstanceDeletionEvent do not have any additional nonsystem properties, with the exception of TargetInstance, which is inherited from their superclass __InstanceOperationEvent. __InstanceModificationEvent defines an additional property, called PreviousInstance, which refers to a copy of WMI object that reflects its state prior to the modification operation. The original and the modified instances pointed to by PreviousInstance and TargetInstance properties respectively, can be compared to obtain the details of the modification. Subclasses of __NamespaceOperationEvent also follow the same pattern so that only __NamespaceModificationEvent defines a new property, PreviousNamespace, in addition to TargetNamespace, which it inherited from the superclass. The PreviousNamespace property points to a copy of the namespace object prior to its modification. Since the __Namespace WMI class has a single nonsystem property, Name, you may assume that __NamespaceModificationEvent is generated whenever a namespace is renamed—every time its Name property is modified. However, this is not actually the case because the Name property is a key and, therefore, it is immutable. In fact, the only way to rename a namespace is to delete the original namespace and create a new one; this will trigger __NamespaceDeletionEvent followed by __NamespaceCreationEvent, rather than a single __NamespaceModificationEvent. You may also expect __NamespaceModificationEvent to be raised whenever any of the namespace contents are changed in some way. After all, a namespace is a container for WMI classes and objects; therefore, a change of state that is undergone by a WMI instance or a class definition constitutes a modification to a namespace that houses such an instance or a class. But this does not happen either; instead, changes to the WMI entities within a namespace are reflected by the appropriate subclasses of __ClassOperationEvent or __InstanceOperationEvent. 
Thus, __NamespaceModificationEvent may seem rather useless until you recall that a WMI class may have numerous class and property qualifiers. Thus, __NamespaceModificationEvent will be triggered every time a qualifier is added, deleted, or modified. Furthermore, since WMI namespaces are represented by instances of the __Namespace class, adding or deleting a namespace is, in fact, equivalent to adding or deleting a WMI instance. Thus, in theory, adding or deleting a namespace should trigger __InstanceCreationEvent or __InstanceDeletionEvent respectively. This, however, does not happen, and __NamespaceCreationEvent or __NamespaceDeletionEvent is raised instead. As with most WMI system classes, the intrinsic event classes that are listed in Table 4-1 cannot be extended. Thus, an event provider that wishes to support an intrinsic event must use predefined classes. Certain management events do not easily lend themselves to being modeled as changes of state that are undergone by arbitrary WMI objects. An example of such an event is a power shutdown. You may argue that such an event can also be represented by deleting some hypothetical WMI object that embodies a running computer system, but such an approach seems a bit awkward. Moreover, there are cases when an event corresponds to some actions taking place outside of the managed environment that render the intrinsic event mechanism unusable. Finally, WMI extension developers may find intrinsic events too rigid and not flexible enough for accomplishing their goals. To provide a flexible and generic solution for the potential problems mentioned above, WMI offers an alternative approach to modeling events. Another category of events, referred to as extrinsic events, exists for the sole purpose of building user-defined event classes that may represent just about any kind of action that occurs within, or outside of, the boundaries of the managed environment. 
The basis for deriving all user-defined event classes is an abstract class __ExtrinsicEvent, which is also a subclass of __Event. By default, only a few subclasses of __ExtrinsicEvent are loaded into CIM Repository during WMI installation. One of these subclasses is another abstract class, called __SystemEvent, which encompasses several events that are raised by WMI itself. These events, modeled using __EventDroppedEvent class and its subclasses __EventQueueOverflowEvent and __EventConsumerFailureEvent, have to do with WMI failing to deliver some other event to an event consumer. Yet another subclass of __ExtrinsicEvent is Win32_PowerManagementEvent, which represents power management events that result from power state changes. Such state changes are associated with either the Advanced Power Management (APM) protocol or the Advanced Configuration and Power Interface (ACPI) protocol. This class has two properties that detail the specifics of the power management event: EventType and OEMEventCode. EventType describes the type of the power state change, and it may take values such as "Entering Suspend," "Resume Suspend," "OEM Event," and so on. Whenever the EventType property is set to "OEM Event," the OEMEventCode fields contains the original equipment manufacturer event code. WMI distribution comes with a number of event providers that are capable of raising other extrinsic events, although these providers may not be installed by default. One such example is registry event provider that triggers various events associated with changes to the system registry. Installing this provider is trivial: you just compile and load the regevent.mof file—which contains the definitions for registry-extrinsic events as well as provider registration classes—and register the provider DLL stdprov.dll. Both the MOF file and the DLL can be found in the %SystemRoot%WBEM directory. 
Once installed, the registry event provider offers three extrinsic event classes that are derived from __ExtrinsicEvent: RegistryKeyChangeEvent, RegistryTreeChangeEvent, and RegistryValueChangeEvent. These classes let you monitor the changes to a single registry key, a hierarchy of registry keys, or a single value respectively. All three of these classes have the Hive property, which specifies the hierarchy of keys to be monitored, such as HKEY_LOCAL_MACHINE. The KeyPath property of RegistryKeyChangeEvent, the RootPath property of RegistryTreeChangeEvent, as well as the KeyPath and ValueName properties of RegistryValueChangeEvent all identify the specific registry key, tree, or value to be monitored. Extrinsic events afford unlimited flexibility to extension developers, and while you are exploring WMI, you will certainly come across many useful extrinsic event providers, or perhaps, even try to roll your own. One thing you should keep in mind is that there are some restrictions when it comes to defining the extrinsic event classes. Naturally, in your attempt to follow the best practices of object-oriented design, you may wish to organize your event classes into a hierarchy similar to that of the registry event provider. Although it is possible to derive an extrinsic event class via multilevel inheritance, only the lowest level classes—classes with no subclasses—are allowed to have instances. For example, the base class of all registry event classes, RegistryEvent, is abstract and cannot be instantiated directly in order to be delivered to the consumer. Instead, one of its subclasses (RegistryKeyChangeEvent, RegistryTreeChangeEvent, or RegistryValueChangeEvent) must be provided. Furthermore, once a provider is registered for an extrinsic event class, this class may not be used as a superclass. Thus, you cannot derive your own event class from, say, RegistryKeyChangeEvent. Timer events are simply notifications that are generated as a result of a timer interrupt.
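Once the registry event provider is installed, subscribing to its extrinsic events comes down to supplying a WQL filter that names the hive, key path, and value. The helper below is merely an illustrative sketch in Python (the key path and value name are made up); the resulting string is what a consumer would hand to WMI when registering the subscription. Note that backslashes in registry paths must be doubled inside WQL string literals:

```python
def registry_value_change_query(hive, key_path, value_name):
    """Build a WQL filter for the RegistryValueChangeEvent extrinsic
    event. Backslashes in the key path are doubled, as WQL string
    literals require."""
    escaped = key_path.replace("\\", "\\\\")
    return ("SELECT * FROM RegistryValueChangeEvent "
            f"WHERE Hive = '{hive}' "
            f"AND KeyPath = '{escaped}' "
            f"AND ValueName = '{value_name}'")

# Hypothetical key and value, for illustration only.
print(registry_value_change_query(
    "HKEY_LOCAL_MACHINE", r"SOFTWARE\MyApp", "Timeout"))
```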
These events are very straightforward and are modeled via a single class __TimerEvent that descends directly from __Event class. __TimerEvent has two properties: TimerId and NumFirings. TimerId is simply a string that uniquely identifies an instance of the __TimerInstruction subclass that caused the timer to fire. NumFirings is a counter that indicates how many times a timer interrupt took place before the notification was delivered to the consumer. Normally NumFirings is always set to 1, however, if a notification cannot be delivered to the consumer for some time, WMI will automatically merge multiple timer events into one and increment NumFirings to reflect that. This may happen if, for instance, a timer interval is small, which causes the timer to fire at the rate that cannot be sustained by the consumer, or when a consumer is down and unreachable for a period of time. For timer events to be useful, there has to be a way to set off the timer. This is achieved by creating an instance of the __TimerInstruction subclass. __TimerInstruction is an abstract superclass used as a base for specifying how the timer interrupts should be generated. The class has two properties: TimerId and SkipIfPassed. TimerId is a string that uniquely identifies an instance of __TimerInstruction subclass. SkipIfPassed is a Boolean flag that indicates if a timer interrupt should be suppressed in case WMI is unable to generate it at the appropriate time or if a consumer is unreachable. The default setting is FALSE, which causes the WMI to buffer the timer events, if it is unable to deliver them, until the delivery is possible. The TRUE setting will result in the event being suppressed. Conceptually, there can be two types of timer interrupts: those that occur once at a predefined time during the day, and those that take place repeatedly at certain fixed intervals. Both types can be set up using the __AbsoluteTimerInstruction or __IntervalTimerInstruction subclasses of __TimerInstruction. 
The former defines one property, called EventDateTime, in addition to the properties inherited from __TimerInstruction. EventDateTime is a string that specifies an absolute time when the event should fire. As is true for all dates and times in WMI, this string must adhere to the Distributed Management Task Force (DMTF) date-time format: [1] yyyymmddHHMMSS.mmmmmmsUUU, where yyyy is the four-digit year, mm is the month, dd is the day of the month, HH, MM, and SS are the hours, minutes, and seconds, mmmmmm is the number of microseconds, and sUUU is a sign (+ or -) followed by the three-digit offset of the local time zone from UTC in minutes. In order to generate an absolute timer event, a client application must create an instance of the __AbsoluteTimerInstruction class and set its EventDateTime property appropriately. The event will be generated once when the indicated time of day is reached. __IntervalTimerInstruction is another subclass of __TimerInstruction; it is used to generate periodic timer events based on a predefined time interval. In addition to properties inherited from __TimerInstruction, this class defines the IntervalBetweenEvents property, which is the number of milliseconds between individual timer interrupts. To set up an interval timer, a client application must create an instance of the __IntervalTimerInstruction class and set its IntervalBetweenEvents property accordingly. One thing to keep in mind when you are creating interval timers is that the interval setting should be sufficiently large. On some platforms, WMI may not be able to generate interval timer events if an interval is too small. Additionally, although the timer interval can be controlled with millisecond precision, there is no guarantee that WMI will be able to deliver timer events to consumers at the exact intervals specified by the IntervalBetweenEvents property. Due to some platform's limitations, system load, and other conditions, event delivery may be delayed. Lastly, just like most of the system classes, neither the __TimerEvent nor the subclasses of the __TimerInstruction can be extended. However, this is not a severe restriction because the timer classes provide enough flexibility and do not really warrant extension under any circumstances.
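As an illustration of the DMTF date-time format described above, the following small Python sketch (not part of WMI or the FCL — it stands in for what ManagementDateTimeConverter does in .NET) parses such a timestamp into a timezone-aware datetime:

```python
from datetime import datetime, timedelta, timezone

def parse_dmtf(stamp):
    """Parse a DMTF date-time string (yyyymmddHHMMSS.mmmmmmsUUU)
    into a timezone-aware datetime object."""
    date_part, tail = stamp.split(".")
    base = datetime.strptime(date_part, "%Y%m%d%H%M%S")
    microseconds = int(tail[:6])
    utc_offset_minutes = int(tail[6:])   # e.g. "+060" -> 60, "-300" -> -300
    tz = timezone(timedelta(minutes=utc_offset_minutes))
    return base.replace(microsecond=microseconds, tzinfo=tz)

dt = parse_dmtf("20231105143000.000000+060")
print(dt.isoformat())  # 2023-11-05T14:30:00+01:00
```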
[1] The latest version of FCL distributed with .NET Framework and Visual Studio .NET code-named "Everett" includes the ManagementDateTimeConverter type, which allows for converting between DMTF-formatted time strings and .NET date and time types. In order to start receiving event notifications, a client application must initiate an event subscription, or, in other words, it must register with WMI as an event consumer. An event subscription is essentially a contract between WMI and an event consumer that specifies two things: in what types of events a consumer is interested, and what actions WMI is being requested to perform on behalf of the client when an event of interest takes place. When initiating an event subscription, it is the responsibility of the event consumer to supply the filtering criteria for events as well as the code, to be executed by WMI upon the arrival of an event. You can specify the event filtering criteria with a WQL query, which unambiguously identifies types and even specific properties of those events that a client application intends to handle. In fact, all that you need is a familiar WQL SELECT statement, which may occasionally utilize one or two clauses designed specifically to facilitate event filtering. The details of how WQL is used for event filtering will be covered later in this chapter. Instructing WMI which actions to take when the event of interest arrives is a bit more involved. Conventionally, event-handling applications can register the address of a custom event handler routine, or callback, with a server so that the server can invoke that callback whenever an event of interest is triggered. This is certainly possible with WMI. Client applications that wish to receive asynchronous event notifications may simply implement the IWbemObjectSink interface and then pass the interface pointer to WMI while registering for event processing.
Then, whenever an event of interest is triggered, WMI calls the IWbemObjectSink::Indicate method, thus executing the custom event handling code. The apparent downside of such approach is that management events are only being processed while the consumer application is active. Although this may be satisfactory under certain circumstances, in those environments where round-the-clock monitoring is required, constantly running a consumer program may impose an unnecessarily heavy load on the managed computer systems. Additionally, it is an issue of reliability since the consumer may simply crash and not be restarted fast enough, which could cause some events to be dropped. An ideal solution would involve relying on WMI itself to correctly take appropriate actions when certain events arrive, regardless of whether a consumer application is active or not. This is exactly what is achieved via the WMI permanent event consumer framework. Essentially, this framework makes it possible to configure WMI to carry out certain predefined actions, such as sending an email, or executing an application program when some event of interest occurs. While an out-of-the-box WMI installation is equipped with only a handful of such predefined actions referred to as event consumer providers, the framework can easily be extended. Thus, if you are familiar with the fundamentals of COM programming, you can build custom event consumer providers that are capable of doing just about anything. Although WMI permanent event consumer architecture has little to do with the primary focus of this book, for the sake of completeness, I will provide more details on this subject later in this chapter. Consuming events is just one piece of the puzzle. Events have to originate somewhere; in other words, something has to act as an event source. Normally, this role is reserved for event providers. 
In the spirit of extensibility, WMI event providers are simply COM servers that are responsible for monitoring the underlying managed environment, detecting changes to the managed entities, and providing event notifications to WMI correspondingly. Besides implementing the provider-standard IWbemProviderInit initialization interface, such providers must implement the IWbemEventProvider interface with its single method ProvideEvents. When the provider initialization is complete, WMI calls the ProvideEvents method to request that a provider starts providing event notifications. One of the arguments of this method is a pointer to the IWbemObjectSink interface that is used by the provider to forward its events to a consumer. In essence, WMI registers its event handling code (represented by the IWbemObjectSink pointer) with a provider, so that the provider calls the IWbemObjectSink::Indicate method each time an event is triggered. Note that when sending its event notifications to WMI, the provider does not perform any filtering; in fact, all events are forwarded to WMI regardless of whether there is an interested event consumer. Thus, it is the responsibility of WMI to analyze its outstanding event subscriptions and forward appropriate events to registered consumers. Despite its simplicity, such an approach is often inefficient due to a potentially large number of unneeded event notifications generated by the provider. Thus, to reduce the traffic of event notifications and increase the overall performance, event providers have the option of implementing an additional interface, IWbemEventProviderQuerySink. Using this interface, WMI can notify a provider of all active event filters so that a provider can generate its notifications selectively—only if an interested event consumer is available. Event providers are the primary, but not the only source of management events. 
Depending on the type of event and the availability of an event provider, WMI may assume the responsibility of generating event notifications. Perhaps the most obvious example of WMI acting as an event source is when it generates timer events. Once an instance of a __TimerInstruction subclass is created and saved into the CIM Repository, it is the responsibility of WMI to monitor the system clock and trigger the appropriate event notification. Timer events are not an exception; as a matter of fact, it is WMI that raises events based on changes to any kind of static data stored in the CIM Repository. For instance, all namespace operation events represented by subclasses of __NamespaceOperationEvent are monitored for and triggered by WMI, since each WMI namespace is represented by a static copy of the __Namespace object stored in the CIM Repository. The same is true for most of the class operation events, as long as the class definitions are static, and even certain instance operation events, given that the instances are stored in the CIM Repository as opposed to being generated dynamically by an instance provider. One notable exception is extrinsic events. These events are, essentially, user-defined, and as such, they are undetectable by WMI. Therefore, all extrinsic events must be backed by an event provider. Even the intrinsic instance operation events for dynamic WMI class instances may originate from WMI rather than from an event provider. Whenever an event provider for intrinsic events is not available, WMI employs a polling mechanism to detect changes to managed elements and to raise appropriate events. Polling assumes that the dynamic classes or instances are periodically enumerated and their current state is compared to previously saved state information in order to sense changes that warrant triggering events. Obviously, polling may be prohibitively expensive and, therefore, it is not initiated by WMI automatically in response to an event subscription request.
Instead, an event consumer must request polling explicitly using a special WQL syntax, which will be covered later in this chapter. Despite its versatility, polling can amount to a performance nightmare, especially in a widely distributed environment that is interconnected by a slow or congested network. Therefore, it is a good idea to stay away from polling altogether, or at least exercise caution when you are forced to resort to it. One approach you can use to monitor management events is to build a custom consumer application that initiates a subscription for certain events of interest on startup and remains listening for event notifications until shutdown. Such an event consumer, which only handles events as long as it is active, is referred to as a temporary event consumer. A typical example of a temporary consumer would be a graphical application that is only interested in receiving WMI events as long as there is a user interacting with the program. Temporary event consumers may register to receive events in either a synchronous or asynchronous fashion. Synchronous event notification is, perhaps, the simplest event-processing technique offered by WMI. Event consumers register for synchronous event delivery by calling the IWbemServices::ExecNotificationQuery method, which, among other parameters, takes a WQL query string and returns a pointer to the IEnumWbemClassObject interface. This interface is an enumerator, used to iterate through the events of interest. Contrary to what you may assume, ExecNotificationQuery does not block until an appropriate event notification arrives. Instead, this method returns immediately, letting the consumer poll for events via the pointer to the returned IEnumWbemClassObject interface. Thus, whenever a consumer attempts to invoke the Next method of IEnumWbemClassObject, the call may block, waiting for events to become available.
Releasing the IEnumWbemClassObject interface pointer cancels the query and deregisters the event consumer. As simple as it is, synchronous event notification has its share of problems, the most significant of which is the performance penalty incurred as a result of polling for notification status. As a result, it is better if event consumers use the asynchronous event delivery mechanism, which eliminates the need to continuously poll WMI for events through the Next method of the IEnumWbemClassObject interface. Asynchronous event notification is initiated by calling IWbemServices::ExecNotificationQueryAsync. Similar to its synchronous counterpart, this method takes a WQL query string parameter that specifies the event filtering criteria. However, rather than returning an enumerator, the method takes an additional input parameter—the IWbemObjectSink interface pointer. This interface, which must be implemented by an event consumer in order to engage in asynchronous event processing, allows WMI to forward event notifications to the client as they arrive. IWbemObjectSink has two methods: SetStatus, which informs the client on the progress of an asynchronous method call and signals its completion; and Indicate, which provides the actual event notifications to the consumer. Since asynchronous event subscriptions are endless—they do not terminate until explicitly cancelled—WMI never calls SetStatus while delivering events. Thus, the Indicate method is the main workhorse of asynchronous event processing; WMI calls it each time an event of interest is raised. This method takes two arguments: an array of IWbemClassObject interface pointers, and a count that reflects the number of elements in the array. Normally the array contains just one element, a single event, returned by WMI; however, there is a provision for delivering multiple notifications in a single invocation of the Indicate method.
An asynchronous event subscription remains active until it is explicitly cancelled via the IWbemServices::CancelAsyncCall method call. In addition to being relatively simple to implement, temporary event consumers offer you nearly unlimited flexibility. After all, it is completely up to you to implement whatever event-handling logic you think you need. However, you pay a price because such consumers are only capable of listening to event notifications while they are active. Thus, if you need to monitor events round-the-clock, you may find that using temporary event consumers is not an adequate solution. Every once in a while, you may want certain management events handled continuously, regardless of whether a specific management application is active. You can accomplish this using the WMI permanent event consumer framework. Unlike temporary consumer registrations, where the event subscription information is stored in memory, permanent consumer registrations are persisted in the CIM Repository, and therefore, they survive system reboots. The centerpiece of the WMI permanent consumer architecture is a component called the event consumer provider. This is a regular COM server that implements a standard provider initialization interface, IWbemProviderInit, as well as two other interfaces specific to event consumer providers: IWbemEventConsumerProvider and IWbemUnboundObjectSink. The former is used by WMI to locate an appropriate consumer for a given event and to retrieve a pointer to its IWbemUnboundObjectSink interface. Once a consumer is identified, WMI invokes its IWbemUnboundObjectSink::IndicateToConsumer method every time an event of interest is raised. Although you can develop a custom event consumer provider, the WMI SDK ships with a number of useful consumer provider components, which typically satisfy most of your event monitoring requirements.
One such ready-to-use component is the Event Viewer consumer provider, which is implemented as a COM EXE server—wbemeventviewer.exe. As its name implies, the Event Viewer provider is designed to display management events using a graphical interface. The program can be started manually via the WMI Event Viewer shortcut in the WMI SDK program group, although this is not necessary—once an event subscription is set up, WMI automatically launches Event Viewer when qualifying events arrive. The Event Viewer graphical interface is shown in Figure 4-1. Figure 4-1: Event Viewer graphical interface Another example of a permanent event consumer provided as part of the WMI SDK is the Active Script Event Consumer, which is implemented as the COM EXE server scrcons.exe. This component lets you invoke a custom script when an arbitrary event arrives. A script can be written in any scripting language that can be consumed by the Microsoft Scripting Engine, including VBScript and JScript. The scripting code has access to the instance of the respective event class through the predefined TargetEvent object that the consumer makes available to the script. Yet another example of a permanent consumer is smtpcons.dll. The SMTP event consumer provider is capable of generating and sending out email messages when event notifications are received from WMI. This component is fairly flexible because it lets you assemble an email message using standard string templates. The templates utilize a notation, similar to the one used for specifying Windows environment variables, to refer to the properties of an event. Thus, for handling process creation events, for instance, the body of the email message may include the string %TargetInstance.ExecutablePath%. Since the TargetInstance property of the __InstanceCreationEvent object always points to a newly created WMI object, this string template will be expanded to the actual path of the process executable.
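To make this concrete, here is a sketch of a logical Active Script consumer instance in MOF. The consumer name, the script body, and the log file path are purely illustrative, and the registration would still need to be bound to an event filter (permanent registrations are discussed shortly) before any events are delivered:

    instance of ActiveScriptEventConsumer
    {
        Name = "Process Creation Logger";
        ScriptingEngine = "VBScript";
        ScriptText =
            "Set fso = CreateObject(\"Scripting.FileSystemObject\")\n"
            "Set log = fso.OpenTextFile(\"C:\\events.log\", 8, True)\n"
            "log.WriteLine TargetEvent.TargetInstance.ExecutablePath\n"
            "log.Close";
    };

Here the VBScript code simply appends the executable path of the event's target instance to a text file, but any script the Microsoft Scripting Engine can run would do.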
A few more standard consumer providers are available, such as the Command Line Event Consumer, which launches an arbitrary process upon the arrival of an event; or the NT Log Event Consumer, which writes an entry into the Windows NT Event Log as the result of a management event. In general, there is rarely a need to create a new consumer provider, since most of the event-handling functionality is already embedded in the standard consumer providers. Although the purpose of event consumer providers should be fairly obvious at this point, one thing may still remain unclear: how does WMI locate an appropriate consumer for a given event? Just like all other COM components, consumer providers are registered in the system registry; however, such registration has little relevance as far as WMI is concerned. It turns out that all event consumer providers are also registered as such in the CIM Repository. You can register an event consumer provider by creating instances of two system classes: __Win32Provider and __EventConsumerProviderRegistration. You will need the __Win32Provider object to register any kind of WMI provider; it mainly contains basic information, such as the provider identification and CLSID. The __EventConsumerProviderRegistration class, on the other hand, is specific to event consumer providers, and it is used to link the physical provider implemented by the COM server with the logical event registration. This class has two properties: Provider, which points to an instance of the __Win32Provider class; and ConsumerClassNames, which is an array of logical consumer class names supported by the provider. A consumer class is a derivative of the system class __EventConsumer that is specific to an individual event consumer provider and, as a result, may contain properties that facilitate arbitrary event handling activities.
For instance, the SMTP Event Consumer Provider comes with the SMTPEventConsumer class, which is defined as follows:

    class SMTPEventConsumer : __EventConsumer
    {
        [key, Description("A unique name identifying this instance of the SMTPEventConsumer.")]
        string Name;
        [Description("Local SMTP Server")]
        string SMTPServer;
        [Description("The subject of the email message.")]
        string Subject;
        [Template, Description("From line for the email message. "
            "If NULL, a from line will be constructed "
            "of the form WinMgmt@MachineName")]
        string FromLine;
        [Template, Description("Reply-To line for the email message. "
            "If NULL, no Reply-To field will be used.")]
        string ReplyToLine;
        [Template, Description("The body of the email message.")]
        string Message;
        [Template, Description("The email addresses of those persons to be "
            "included on the TO: line. Addresses must be "
            "separated by commas or semicolons.")]
        string ToLine;
        [Template, Description("The email addresses of those persons to be "
            "included on the CC: line.")]
        string CcLine;
        [Template, Description("The email addresses of those persons to be "
            "included on the BCC: line.")]
        string BccLine;
        [Description("The header fields will be inserted into the "
            "SMTP email header without interpretation.")]
        string HeaderFields[];
    };

As you can see, the individual properties of this class correspond to the elements that constitute a typical email message. Note that most of these properties are marked with the Template qualifier, indicating that their contents may contain standard string templates to be expanded by the event processor. Each instance of such a consumer class, often referred to as a logical consumer, represents an individual event subscription, or, more precisely, describes a particular action to be taken when an event arrives. Although logical consumers provide enough information to handle an event, there is one more piece of information that WMI requires in order to dispatch the notifications to appropriate consumers.
More specifically, there has to be a specification as to what types of events are handled by a given consumer; in other words, there has to be an event filter. Such a specification is provided by instances of the __EventFilter system class. This class has three properties: Name, which is nothing more than a unique instance identifier; Query, which is an event query string; and QueryLanguage, which, as its name implies, identifies the query language. At the time of this writing, WQL is the only query language available, so the Query property should always contain a valid WQL event query. These two pieces of information—the logical consumer and the event filter—are all you need to route and handle any event. However, you may still be unclear on how they are linked, so let me try to clarify this. In the true spirit of CIM object modeling, the logical consumer and event filter classes are connected via an association class, __FilterToConsumerBinding. Two properties of this class, Filter and Consumer, refer to the __EventFilter object and a derivative of the __EventConsumer class, respectively. An instance of __FilterToConsumerBinding, which binds valid instances of __EventFilter and a subclass of __EventConsumer, constitutes a valid and complete event subscription. Events will be delivered to the consumer as long as such an instance exists, and the only way to deactivate the subscription is to physically delete the instance. You can create a permanent event registration in a few different ways. The most straightforward technique requires you to build a MOF file that defines all the necessary classes and instances. You then compile the file using mofcomp.exe and load it into the CIM Repository, thus creating the required event registration. Yet another way to create a permanent event registration is to build the instances of the required classes programmatically using one of the WMI APIs.
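Putting the pieces together, the following MOF fragment sketches a complete permanent subscription that emails an administrator whenever a new process is started. The SMTP server name and the email addresses are, of course, placeholders:

    instance of __EventFilter as $filter
    {
        Name = "Process Creation Filter";
        Query = "SELECT * FROM __InstanceCreationEvent WITHIN 10 "
                "WHERE TargetInstance ISA 'Win32_Process'";
        QueryLanguage = "WQL";
    };

    instance of SMTPEventConsumer as $consumer
    {
        Name = "Process Creation Mailer";
        SMTPServer = "smtp.mycompany.com";
        Subject = "New process started";
        ToLine = "admin@mycompany.com";
        Message = "Process launched: %TargetInstance.ExecutablePath%";
    };

    instance of __FilterToConsumerBinding
    {
        Filter = $filter;
        Consumer = $consumer;
    };

Once this file is compiled with mofcomp.exe, the subscription becomes active and remains so until the instances are deleted.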
Both of these techniques, although effective, are a bit tedious and error-prone. Fortunately, you can accomplish the task without writing a single line of code; just use the WMI SDK Event Registration Tool. Figure 4-2 shows its graphical interface. Figure 4-2: Event Registration graphical interface This tool is extremely easy to use because it lets you create the necessary instances of event filters and consumer classes with just a few keystrokes and mouse clicks. These instances can subsequently be linked together to create a __FilterToConsumerBinding object, thus completing the event subscription. While the event monitoring mechanism nearly tops the list of the most exciting features of WMI, you may not fully appreciate its versatility until you start managing a large-scale distributed system. Just imagine an environment with dozens of computing nodes that need to be monitored on a regular basis. Setting up event consumers locally on each of the computers is a tedious and thankless job, which gets worse and worse as the environment grows larger. An ideal management system should allow you to intercept the events taking place on remote computing nodes and carry out the appropriate actions in a somewhat centralized fashion. Generally, you can handle management events that occur on remote computer systems in a few different ways. For instance, temporary event consumers may explicitly initiate subscriptions to events that are fired on a particular remote machine. This is the easiest approach, although it requires a consumer to manage multiple concurrent event registrations—one for each remote computer of interest. Listening to remote events is also possible with permanent event consumers.
The definition for the system class __EventConsumer, which all permanent consumers are based upon, has the following form:

    class __EventConsumer : __IndicationRelated
    {
        string MachineName;
        uint32 MaximumQueueSize;
        uint8  CreatorSID[];
    };

The first thing to notice here is the MachineName property, which is usually left blank for locally handled events. However, it is possible to route the event notifications to a remote computer by setting the MachineName property to the name of a machine designated for handling events. After all, an event consumer provider is just a COM server that can be activated remotely using nothing but the conventional DCOM infrastructure. There are essentially two things required for remote activation: the CLSID of the object to be created, and the name of the machine on which to instantiate the object. As you may remember, the CLSID of a consumer COM server is specified by the CLSID property of the instance of the __Win32Provider class, which describes the event consumer provider. Thus, by combining the value of the CLSID property of the __Win32Provider object with the value of the MachineName property of the instance of the __EventConsumer class, WMI is capable of connecting to an appropriate event consumer on a designated remote machine and forwarding the event notifications there. Although such a setup is fairly straightforward, it lacks some flexibility. First, you must be able to configure the appropriate permanent event consumer registrations on all monitored machines, which may be quite a nuisance, especially in a sufficiently large managed environment. Second, forwarding event notifications to multiple remote computers at the same time significantly complicates the configuration. As you just saw, an individual instance of __EventConsumer can only send events to a single remote computer.
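For illustration, the following sketch routes SMTP consumer notifications to a machine named MONITORSRV (the machine name and the remaining property values are hypothetical):

    instance of SMTPEventConsumer
    {
        Name = "Remote Process Mailer";
        MachineName = "MONITORSRV";
        SMTPServer = "smtp.mycompany.com";
        Subject = "Remote event notification";
        ToLine = "admin@mycompany.com";
    };

Provided the SMTP event consumer provider is registered on MONITORSRV, WMI will activate it there via DCOM and deliver the qualifying events to it.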
Thus, in order to enable WMI to route events to more than one machine, you must create multiple __EventConsumer objects—one for each receiving computer. Finally, event forwarding is achieved at the consumer provider level. In other words, WMI's ability to route notifications to remote computers depends on the availability of the appropriate event consumer providers on those machines. Moreover, if you ever need to change the way events are handled (by switching from the Event Viewer consumer to the Active Script consumer, for example), you have to reconfigure each monitored machine. Fortunately, WMI provides a better way to set up permanent event forwarding. The solution lies in the forwarding event consumer provider, which intercepts events raised locally and routes them to a remote computer, where they are raised again. Just like any other event consumer, the forwarding consumer needs a valid event registration to work correctly. Therefore, you still need to carry out certain configuration tasks on each of the machines you want monitored. The configuration, though, is cleaner, since it has built-in provisions for routing the notifications to multiple remote computers simultaneously and does not enforce a particular event-handling technique. To better understand how the forwarding consumer operates, consider a simple example.
The following MOF definition shows the elements that make up a registration for a forwarding consumer that sends process creation event notifications to several remote machines:

    instance of __EventFilter as $procfilter
    {
        Name = "Process Filter";
        Query = "SELECT * FROM __InstanceCreationEvent WITHIN 10 "
                "WHERE TargetInstance ISA 'Win32_Process'";
        QueryLanguage = "WQL";
        EventNamespace = "root\\CIMV2";
    };

    instance of MSFT_ForwardingConsumer as $procconsumer
    {
        Name = "Forwarding process consumer";
        Authenticate = TRUE;
        Targets = { "MACHINE1", "MACHINE2", "MACHINE3" };
    };

    instance of __FilterToConsumerBinding
    {
        Consumer = $procconsumer;
        Filter = $procfilter;
    };

The instances of the __EventFilter and __FilterToConsumerBinding classes, which are only shown here for completeness, are identical to those you would set up for local event registrations. The logical consumer definition, however, is quite different from that of a local consumer. An instance of the MSFT_ForwardingConsumer class is used by a physical forwarding consumer provider to route the notifications to appropriate remote computers. Similar to the rest of the consumer classes, MSFT_ForwardingConsumer is a derivative of the __EventConsumer system class and, as such, it has a MachineName property. However, this property does not have to be set in order to enable event forwarding. Instead, this class defines a new property, Targets, which is an array of the machine names or addresses that receive the event notifications. Thus, in the preceding example, all process creation events will be forwarded to MACHINE1, MACHINE2, and MACHINE3. As I mentioned earlier, such an event registration has to be created on each of the machines that are to forward events, which may be a bit tedious. However, once created, these registrations should rarely change, since, as you will see in a moment, the particulars of event handling can be controlled from the monitoring machines that are receiving the events.
What really sets the forwarding consumer apart from the rest of the consumer providers is its ability to reraise the events on a remote machine so that the event can be handled just like any other WMI event that is local to that machine. The only catch is that the event will be represented via an instance of a special MSFT_ForwardedEvent class rather than a regular intrinsic or extrinsic event class. MSFT_ForwardedEvent has the following definition:

    class MSFT_ForwardedEvent : MSFT_ForwardedMessageEvent
    {
        uint8    Account[];
        boolean  Authenticated;
        string   Consumer;
        __Event  Event;
        string   Machine;
        string   Namespace;
        datetime Time;
    };

Although all properties of MSFT_ForwardedEvent provide a wealth of useful information regarding the origins of a forwarded event, its Event property is the most interesting. Event points to an embedded event object that corresponds to the original WMI event intercepted by the forwarding consumer. Therefore, if the consumer is set up to forward all process creation events, the Event property of the MSFT_ForwardedEvent object will contain an __InstanceCreationEvent with its TargetInstance property set to an instance of Win32_Process. By registering for MSFT_ForwardedEvent notifications and filtering based on the properties of the associated event objects, a remote consumer may carry out just about any monitoring task. Event forwarding is a very powerful feature, much superior to all other remote notification techniques. Unfortunately, the forwarding consumer provider is only available under Windows XP and later, so users of older systems may have to resort to other event forwarding solutions. The WMI event filtering mechanism is based on WQL queries. Queries are used by both parties engaged in event processing: event consumers and event providers. Event providers utilize WQL queries to publish their capabilities; in other words, they use them to specify what types of events they support.
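For instance, a consumer on the central monitoring machine could restrict itself to the events forwarded by one particular node with a query along these lines (the machine name is hypothetical):

    SELECT * FROM MSFT_ForwardedEvent WHERE Machine = 'MACHINE1'

Further filtering on the properties of the embedded Event object is possible as well, along the same lines as the filtering on TargetInstance shown earlier in this chapter.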
Thus, the EventQueryList property of the __EventProviderRegistration class, which is used to define event providers in WMI, houses an array of WQL queries that are supported by the provider. Event consumers, on the other hand, use WQL queries to initiate an event subscription and to indicate to WMI which types of events they are interested in. There is no difference in the syntax of WQL queries used by providers and consumers, although there is a conceptual distinction, since such queries are used for different purposes. Since provider development has little to do with .NET and System.Management, the remainder of this chapter will look at WQL event queries from the perspective of an event consumer. Rather than inventing a brand new syntax specifically for the purpose of handling event queries, WQL offers a specialized form of the SELECT statement. (For a formal definition of WQL SELECT, refer to Listing 3-1 in the last chapter.) The simplest form of the WQL event query is as follows:

    SELECT * FROM <event class name>

As you can see, the only thing that sets this query apart from a regular data query is the event class name in the FROM clause. A data query that specifies an event class name in its FROM clause is meaningless because instances of event classes are transient and only exist while the corresponding event is being delivered to a consumer. Thus, a query that uses an event class name in its FROM clause is automatically recognized as an event query. Let us look at a very simple example:

    SELECT * FROM __TimerEvent

This is certainly an event query, since it features the name of an event class, __TimerEvent, in the FROM clause. In fact, this query can be used to initiate a subscription to timer events generated by any timer instruction that exists in the CIM Repository.
Just like data queries, event queries do not have to use the * placeholder; it is perfectly valid to specify a property list instead:

    SELECT TimerId FROM __TimerEvent

Interestingly, with respect to returning system properties, this query behaves similarly to the way a comparable data query would. If the '*' placeholder is used as a property list in a SELECT statement, each event object received by the event consumer will have most of its system properties properly populated, with the exception of __RELPATH and __PATH. The latter two properties are always set to NULL due to the inherently transient nature of event objects. However, if a SELECT statement contains a list of nonsystem properties, like the query above, the only system properties populated by WMI for each output event are __CLASS, __DERIVATION, __GENUS, and __PROPERTY_COUNT. Note that __PROPERTY_COUNT will reflect the actual number of properties selected rather than the total number of properties in the class definition. The previous query is not very useful because it is very likely that there is more than just one outstanding timer instruction active at any given time, which means that timer events may come from several different sources simultaneously. Since the query does not have selection criteria, it will pick up all timer interrupts, regardless of their source. You can easily solve this problem by rewriting the query as follows:

    SELECT * FROM __TimerEvent WHERE TimerId = 'Timer1'

Here, the WHERE clause will ensure that only events that originate from the timer instruction identified by the TimerId of 'Timer1' are received by the consumer. In fact, using the WHERE clause to limit the scope of event subscriptions is strongly recommended because it greatly reduces the number of unneeded event notifications that are forwarded to event consumers. The syntax of the WHERE clause for event queries is exactly the same as that of the regular WQL SELECT queries, and the same restrictions apply.
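For completeness, here is what the timer instruction behind these events might look like. This sketch assumes __IntervalTimerInstruction, a subclass of __TimerInstruction whose IntervalBetweenEvents property is expressed in milliseconds, so the instance below causes WMI to fire a __TimerEvent with a TimerId of 'Timer1' once a minute:

    instance of __IntervalTimerInstruction
    {
        TimerId = "Timer1";
        IntervalBetweenEvents = 60000;
    };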
Subscribing to events other than timer events is very similar, although there are a few special considerations. Receiving extrinsic events is almost as trivial as getting the timer event notification. For example, the following query can be used to initiate a subscription to RegistryValueChangeEvent, which is raised by the registry event provider every time a certain registry value is changed:

    SELECT * FROM RegistryValueChangeEvent
    WHERE Hive = 'HKEY_LOCAL_MACHINE'
      AND KeyPath = 'SOFTWARE\MICROSOFT\WBEM\CIMOM'
      AND ValueName = 'Backup Interval Threshold'

Note that in the case of the registry event provider, the WHERE clause is mandatory rather than optional. Thus, the provider will reject the following query with an error code of WBEMESS_E_REGISTRATION_TOO_BROAD (0x80042001):

    SELECT * FROM RegistryValueChangeEvent

In fact, even if there is a WHERE clause, but its search criteria do not reference all of the event class properties, the query will still be rejected:

    SELECT * FROM RegistryValueChangeEvent
    WHERE Hive = 'HKEY_LOCAL_MACHINE'
      AND KeyPath = 'SOFTWARE\MICROSOFT\WBEM\CIMOM'

If a value for a given event class property is not explicitly supplied, the provider cannot deduce the finite set of registry entries to be monitored, and therefore, it rejects the query. Thus, when subscribing to an arbitrary registry event, the query should provide search arguments for each of the event class properties. Although this may seem like a rather severe restriction, there is an easy workaround. The registry event provider offers a choice of three events: RegistryTreeChangeEvent, RegistryKeyChangeEvent, and RegistryValueChangeEvent.
Therefore, the preceding query, which essentially attempts to monitor changes to a particular key rather than to a value, can be rewritten using RegistryKeyChangeEvent as follows:

    SELECT * FROM RegistryKeyChangeEvent
    WHERE Hive = 'HKEY_LOCAL_MACHINE'
      AND KeyPath = 'SOFTWARE\MICROSOFT\WBEM\CIMOM'

This query will result in a subscription for all events generated as a result of a change to any of the values under the HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\WBEM\CIMOM key. The only downside is that the RegistryKeyChangeEvent class does not have the ValueName property; thus, the event objects received by the consumer cannot be used to identify the value whose change triggered the event. Assuming that you can identify a finite set of values to be monitored, you can easily solve this problem:

    SELECT * FROM RegistryValueChangeEvent
    WHERE Hive = 'HKEY_LOCAL_MACHINE'
      AND KeyPath = 'SOFTWARE\MICROSOFT\WBEM\CIMOM'
      AND ( ValueName = 'Backup Interval Threshold' OR ValueName = 'Logging' )

Using the same technique of combining multiple search criteria, you can also monitor changes to registry values that reside under different keys or hives. Subscribing to intrinsic events is a bit trickier. Although the same WQL SELECT statement is used, there are a couple of things you should watch out for. Generally, if an event of interest is backed by an event provider, the event query remains essentially the same as the one used to register for extrinsic or timer events. Take a look at the intrinsic events supplied by the Windows Event Log provider. This provider, when acting in the capacity of an event provider, triggers intrinsic instance operation events whenever changes that affect the Windows Event Log take place.
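Finally, when the identity of the changed value or key is unimportant, the coarsest of the three events, RegistryTreeChangeEvent, may be used to watch an entire subtree. A sketch, assuming the WBEM branch of the registry is the area of interest:

    SELECT * FROM RegistryTreeChangeEvent
    WHERE Hive = 'HKEY_LOCAL_MACHINE'
      AND RootPath = 'SOFTWARE\MICROSOFT\WBEM'

This subscription fires whenever anything under the given root path changes, at the cost of providing even less detail about the source of the change.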
An arbitrary Windows Event Log record is represented by an instance of the Win32_NTLogEvent class, shown here:

    class Win32_NTLogEvent
    {
        [key] uint32 RecordNumber;
        [key] string Logfile;
        uint32 EventIdentifier;
        uint16 EventCode;
        string SourceName;
        [ValueMap{"1", "2", "4", "8", "16"}]
        string Type;
        uint16 Category;
        string CategoryString;
        datetime TimeGenerated;
        datetime TimeWritten;
        string ComputerName;
        string User;
        string Message;
        string InsertionStrings[];
        uint8 Data[];
    };

Note that this class represents a Windows event that is recorded into the system event log; it is not associated in any way with WMI event classes. Instead, a management event, raised as a consequence of a Windows event, is modeled as a subclass of the __InstanceOperationEvent system class. For instance, whenever an event record is written into the Windows Event Log, the Event Log provider raises __InstanceCreationEvent so that the TargetInstance property of the event object refers to an associated instance of the Win32_NTLogEvent class. Thus, all you need in order to register to receive such events is a SELECT query that features __InstanceCreationEvent in its FROM clause and has WHERE criteria that narrow the scope of the registration down to Windows Event Log events. The latter can be achieved by examining the TargetInstance property of the __InstanceCreationEvent class. Theoretically, the following query should do the job:

    SELECT * FROM __InstanceCreationEvent WHERE TargetInstance = 'Win32_NTLogEvent'

In fact, this query is almost correct, but not quite. Since the idea here is to limit the scope of the query based on the class of the embedded object that is pointed to by the TargetInstance property, the = operator will not work.
Instead, the ISA operator, which is specifically designed for dealing with embedded objects, should be used:

SELECT * FROM __InstanceCreationEvent
WHERE TargetInstance ISA 'Win32_NTLogEvent'

Although this query is syntactically correct, an attempt to execute it will most likely result in WMI rejecting it with the error code WBEM_E_ACCESS_DENIED (0x80041003). Such behavior is puzzling at best, but there is a perfectly logical explanation. Using a query such as this one, a consumer essentially requests to be notified whenever any event is recorded into any of the Windows Event Logs, including the System, Application, and Security logs. The Security log is protected, and any user wishing to access it must enable SeSecurityPrivilege (which gives the user the right to manage the audit and security log). In fact, this privilege must first be granted to a user and then enabled on a per-process basis. Later in this chapter I will demonstrate how to ensure that a privilege is enabled; meanwhile, there is a simple workaround. If you are only interested in Application events, the scope of the query can be narrowed down even further:

SELECT * FROM __InstanceCreationEvent
WHERE TargetInstance ISA 'Win32_NTLogEvent'
AND TargetInstance.Logfile = 'Application'

Since this query only requests events from the Windows Application Log, access control is no longer an issue, and no special privileges need to be enabled. To summarize, there is nothing special about handling intrinsic events backed by event providers, except for the few idiosyncrasies that a given provider may exhibit. Thus, you should study the documentation that comes with a provider before you attempt to set up event registrations. Events that are generated by WMI itself, based on changes to static data residing in the CIM Repository, are also fairly straightforward.
This typically applies to all namespace operation events as well as class operation events, unless the class definitions are dynamic. For instance, the following query may be used to subscribe to any modification of a namespace called 'default':

SELECT * FROM __NamespaceModificationEvent
WHERE TargetNamespace.Name = 'default'

Handling changes to a WMI class definition is no different. The following query will allow you to monitor all changes to the definition of the WMI class Win32_Process:

SELECT * FROM __ClassModificationEvent
WHERE TargetClass ISA 'Win32_Process'

The situation is a bit more complicated when it comes to handling instance operation events for dynamic objects that are not backed by event providers. For instance, consider an instance creation event for a Win32_Process object. First, it is a dynamic object, so it is not stored in the CIM Repository. Second, there is no event provider capable of raising the instance creation event whenever a process is launched. It sounds as if subscribing to such an event is not even possible, but luckily, WMI comes to the rescue. As I already mentioned, WMI is capable of periodically polling the instance providers to detect changes and to raise the appropriate intrinsic events. Polling is an expensive procedure and, therefore, is not done automatically. In fact, WMI has to be explicitly instructed to poll instance providers at certain time intervals. You can do this by including a WITHIN clause in the event query, so that the SELECT statement takes the following form:

SELECT * FROM <event class> WITHIN <polling interval> WHERE <condition>

The WITHIN clause must be placed immediately before the WHERE clause and has to specify an appropriate value for the polling interval. The polling interval is specified in units of seconds; because it is a floating-point rather than an integer number, it is possible to give it a value of a fraction of a second.
Due to the extremely resource-intensive nature of polling, make sure that you use sufficiently large polling intervals. In fact, if the interval value is too small, WMI may reject the query as invalid. For example, the following query initiates a subscription to instance creation events for objects of the Win32_Process class and instructs WMI to poll the instance provider at 10-second intervals:

SELECT * FROM __InstanceCreationEvent WITHIN 10
WHERE TargetInstance ISA 'Win32_Process'

It is not always clear which events are backed by providers and which are not. As a result, determining whether you need a WITHIN clause in your query is an empirical process. WMI can provide assistance in this matter: if a query does not contain a WITHIN clause where one is required, WMI will reject it with the WBEMESS_E_REGISTRATION_TOO_PRECISE (0x80042002) error code. This error code has a corresponding error message that reads "A WITHIN clause was not used in this query". However, if a WITHIN clause is specified where it is not required, the query will work just fine. Therefore, it is a good idea to always attempt to execute event queries without a WITHIN clause first, and add one only if necessary. Sometimes it helps to be able to monitor all three instance operation events (__InstanceCreationEvent, __InstanceModificationEvent, and __InstanceDeletionEvent) using a single event query. This can easily be achieved by substituting the superclass __InstanceOperationEvent for the name of a specific subclass in the FROM clause:

SELECT * FROM __InstanceOperationEvent WITHIN 10
WHERE TargetInstance ISA 'Win32_Process'

Of course, the events returned by such a query will belong to one of the three subclasses of __InstanceOperationEvent, because the latter is an abstract class and, therefore, cannot have instances. The actual class of these events can easily be determined by examining the __CLASS system property of each received event object.
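The polling technique works for any dynamic class, not just Win32_Process. As one more hedged sketch (not an example from the text), the query below watches for Windows services entering the Stopped state; the delivered __InstanceModificationEvent also carries a PreviousInstance property, so the consumer can inspect what the service state was before the change:

```sql
SELECT * FROM __InstanceModificationEvent WITHIN 10
WHERE TargetInstance ISA 'Win32_Service'
  AND TargetInstance.State = 'Stopped'
```

Note that the WHERE condition compares a property to a constant, which is the form of comparison WQL supports; the before/after comparison itself is done by the consumer, using PreviousInstance and TargetInstance.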
Interestingly, if there is an event provider for intrinsic events, such a generalized query may not work. For instance, the following legitimate-looking query fails:

SELECT * FROM __InstanceOperationEvent
WHERE TargetInstance ISA 'Win32_NTLogEvent'

The reason for the failure is simple: the Event Log Event provider only supports __InstanceCreationEvent, not __InstanceModificationEvent or __InstanceDeletionEvent. This becomes obvious if you look at the instance of the __EventProviderRegistration class for the Event Log Event provider:

instance of __Win32Provider as $EventProv
{
  Name = "MS_NT_EVENTLOG_EVENT_PROVIDER";
  ClsId = "{F55C5B4C-517D-11d1-AB57-00C04FD9159E}";
};

instance of __EventProviderRegistration
{
  Provider = $EventProv;
  EventQueryList = {
    "select * from __InstanceCreationEvent where TargetInstance isa \"Win32_NTLogEvent\""
  };
};

Here, the EventQueryList property of the provider registration object clearly shows the supported query types. This seems like a rather severe limitation, because __InstanceDeletionEvent for Windows log events is quite useful for detecting when the event log is cleared. This is where WMI comes to the rescue again. The error code returned by the query above is the already-familiar WBEMESS_E_REGISTRATION_TOO_PRECISE (0x80042002), which implies that adding a WITHIN clause to the query may solve the problem:

SELECT * FROM __InstanceOperationEvent WITHIN 10
WHERE TargetInstance ISA 'Win32_NTLogEvent'

Indeed, this query now works perfectly. In this case, WMI and the event provider work together, so that __InstanceCreationEvent is supplied by the provider, while the two remaining events are generated by WMI using the polling technique. Specifying the polling interval is not the only use for the WITHIN clause. It also works in concert with the GROUP BY clause to indicate the grouping interval. The idea behind the GROUP BY clause revolves around WMI's ability to generate a single notification that represents a group of events.
In its simplest form, the GROUP BY clause can be used as follows:

SELECT * FROM __InstanceCreationEvent WITHIN 10
WHERE TargetInstance ISA 'Win32_Process'
GROUP WITHIN 10

This particular query instructs WMI to batch together all process-creation events that occur within 10-second intervals that start as soon as the first event is triggered. Note that this query contains two WITHIN clauses: one that requests polling at 10-second intervals, and another that specifies event grouping within 10-second intervals. These clauses and their respective interval values are completely independent. The GROUP clause can be used together with an optional BY clause, which allows for finer control over the grouping of event notifications. For instance, the following query batches events together based on the value of the ExecutablePath property of the Win32_Process object that is associated with the __InstanceCreationEvent object:

SELECT * FROM __InstanceCreationEvent WITHIN 10
WHERE TargetInstance ISA 'Win32_Process'
GROUP WITHIN 10 BY TargetInstance.ExecutablePath

Thus, no matter how many actual process-creation events take place within a 10-second interval, the number of event notifications sent to the consumer will always equal the number of distinct executables used to launch processes within that interval. Finally, an optional HAVING clause offers even more control over the process of event delivery. To better understand the mechanics of the HAVING clause, first look at how aggregated events are delivered to consumers. Contrary to what you may think, the preceding query, or any query that features the GROUP BY clause, will not result in __InstanceCreationEvent objects being forwarded to the client. Instead, WMI will assemble an instance of the __AggregateEvent class that is representative of all instance creation events that took place within the grouping interval.
The __AggregateEvent class has the following definition:

class __AggregateEvent : __IndicationRelated
{
  uint32 NumberOfEvents;
  object Representative;
};

The NumberOfEvents property contains the total number of underlying intrinsic or extrinsic events that were combined to produce a given __AggregateEvent object. Representative is a property that refers to an embedded event object, which is a copy of one of the underlying events used in the aggregation. Thus, for the query just listed, the Representative property will contain one of the __InstanceCreationEvent objects that contributed to the resulting __AggregateEvent object. Note that there is no guarantee that a particular underlying event object will be linked to the aggregate event. For instance, given the following query, it is not possible to predict whether the Representative property of __AggregateEvent will contain an __InstanceCreationEvent, __InstanceModificationEvent, or __InstanceDeletionEvent object:

SELECT * FROM __InstanceOperationEvent WITHIN 10
WHERE TargetInstance ISA 'Win32_Process'
GROUP WITHIN 10

All of these query examples, even those that result in event aggregation, use the WHERE clause to specify search criteria for the underlying event objects. The query above, for example, references the TargetInstance property of the __InstanceOperationEvent class, even though the event consumer receives __AggregateEvent objects rather than instance operation events. It is, however, possible to supply a search condition based on the properties of the resulting __AggregateEvent object, and this is where the HAVING clause comes in handy. Consider the following query example:

SELECT * FROM __InstanceCreationEvent
WHERE TargetInstance ISA 'Win32_NTLogEvent'
GROUP WITHIN 10
HAVING NumberOfEvents > 5

This query requests only those __AggregateEvent objects that are composed of more than five underlying instance creation events.
In other words, an aggregate event will only be delivered to the consumer if more than five Windows log events take place within a 10-second time interval. The majority of event monitoring scenarios can easily be covered using the WMI permanent event consumer framework. This approach is clearly superior under many circumstances because it alleviates many concerns typically associated with a custom monitoring tool. It is simple from both the conceptual and the implementation perspective. It is also reliable because the monitoring task is done by WMI rather than by an application program. Finally, it is extremely cost-effective because little or no development effort is required. Although you may argue that a custom event consumer provider may have to be developed to satisfy certain monitoring needs, the standard consumer providers, which are distributed as part of the WMI SDK, are usually adequate. Thus, building an event-handling utility is rarely required because there are very few compelling reasons to engage in such activity. Nevertheless, the System.Management namespace comes well equipped for handling management events. The functionality afforded by the System.Management event-handling types, however, is intended to support temporary, rather than permanent, event consumers. Although it is theoretically possible to implement an event consumer provider using the FCL and the .NET languages, System.Management does not include any facilities specifically designed to address consumer provider development. Therefore, the rest of this chapter concentrates on explaining the mechanics of event handling from the perspective of temporary event consumers. The entire event-handling mechanism is packaged as a single System.Management type called ManagementEventWatcher. This type is solely responsible for handling all types of WMI events in both synchronous and asynchronous fashion.
Just like most FCL types, ManagementEventWatcher is fairly straightforward and self-explanatory. Perhaps the simplest thing that can be achieved with ManagementEventWatcher is handling management events in synchronous mode. For example, the following code snippet initiates a subscription for all process creation events and then polls for notifications in a synchronous fashion:

ManagementEventWatcher ew = new ManagementEventWatcher(
  @"SELECT * FROM __InstanceCreationEvent WITHIN 10
    WHERE TargetInstance ISA 'Win32_Process'");
while(true) {
  ManagementBaseObject mo = ew.WaitForNextEvent();
  Console.WriteLine("Event arrived: {0}", mo["__CLASS"]);
  mo = (ManagementBaseObject)mo["TargetInstance"];
  Console.WriteLine("Process handle: {0}. Executable path: {1}",
    mo["Handle"], mo["ExecutablePath"]);
}

Here, the instance of ManagementEventWatcher is created using a constructor that takes a single query string parameter. The code then enters an endless loop and starts polling for events using the WaitForNextEvent method. This method is built around the IWbemServices::ExecNotificationQuery method, which is what WMI uses to initiate synchronous event subscriptions. If you remember the discussion of synchronous query processing, you may assume that WaitForNextEvent essentially mimics the functionality of IWbemServices::ExecNotificationQuery by returning an instance of ManagementObjectCollection immediately after it is invoked. If this were true, the consumer would iterate through the collection so that each request for the next collection element would block until an event notification arrived. This, however, is not the case. Instead, WaitForNextEvent blocks until an appropriate event is triggered, and then it returns a single instance of the ManagementBaseObject type, which represents the delivered event. Such an approach, while certainly simpler, lacks some flexibility because events are always delivered one by one. The COM API IWbemServices::ExecNotificationQuery method, on the other hand, leaves enough room for delivering events in blocks, which may contribute to some performance gains. Be careful when you are examining the delivered event. For the code above, the returned ManagementBaseObject embodies an instance of __InstanceCreationEvent.
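A detail worth knowing, though not shown in the text: the watcher's Options property (an EventWatcherOptions instance, discussed later in this chapter) includes a Timeout value that bounds how long WaitForNextEvent blocks; when the timeout expires without an event, a ManagementException is thrown. A minimal sketch, assuming a Windows host with the System.Management assembly available:

```csharp
using System;
using System.Management;

class TimedWait
{
    static void Main()
    {
        ManagementEventWatcher ew = new ManagementEventWatcher(
            @"SELECT * FROM __InstanceCreationEvent WITHIN 10
              WHERE TargetInstance ISA 'Win32_Process'");
        // Bound the synchronous wait so the consumer can give up cleanly.
        ew.Options.Timeout = TimeSpan.FromSeconds(30);
        try
        {
            ManagementBaseObject mo = ew.WaitForNextEvent();
            Console.WriteLine("Event arrived: {0}", mo["__CLASS"]);
        }
        catch (ManagementException)
        {
            // No process creation was observed within the timeout.
            Console.WriteLine("No event arrived within 30 seconds.");
        }
        finally
        {
            ew.Stop();
        }
    }
}
```

The 30-second value is an arbitrary choice for this sketch; the default Timeout is effectively infinite, which reproduces the blocking behavior described above.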
As you may remember, the TargetInstance property of the event object refers to the embedded object that triggered the event (in this case, an instance of the Win32_Process class). This instance can be retrieved by accessing the TargetInstance property through the ManagementBaseObject indexer or its Properties collection and casting the result back to ManagementBaseObject. If you bother to compile and run the code above, it may produce output similar to the following, assuming you launch a notepad.exe process:

Event arrived: __InstanceCreationEvent
Process handle: 160. Executable path: C:\WINNT\System32\notepad.exe

The preceding code example sets up the event registration for events that occur on the local computer. It is, however, entirely possible to initiate a subscription to events that take place on a remote machine. All that you need to do to listen for remote events is set up an instance of ManagementEventWatcher that is bound to a remote computer. This can be achieved by using an alternative version of its constructor which, in addition to the query string, takes a scope string that identifies the target machine and namespace:

ManagementEventWatcher ew = new ManagementEventWatcher(
  @"\\BCK_OFFICE\root\CIMV2",
  @"SELECT * FROM __InstanceCreationEvent WITHIN 10
    WHERE TargetInstance ISA 'Win32_Process'");
while(true) {
  ManagementBaseObject mo = ew.WaitForNextEvent();
  Console.WriteLine("Originating Machine: {0}", mo["__SERVER"]);
}

The code above registers to receive the events that take place on the remote machine BCK_OFFICE. Note that the origin of an event can be traced by interrogating the __SERVER system property of the received event object. As I mentioned earlier, there are certain security implications involved when you subscribe to specific categories of management events. For instance, in order to receive Windows log events, SeSecurityPrivilege must be granted and enabled. Granting the privilege is a task that should be carried out using an appropriate user management tool.
Enabling the privileges, however, should be done on a per-process basis and, therefore, your management code should include enough provisions to get the security issues out of the way. Assuming that all the right privileges are granted, clearing WMI security is remarkably easy. Thus, the following code snippet successfully sets up a subscription for Windows log events, assuming that the user is granted SeSecurityPrivilege:

ManagementEventWatcher ew = new ManagementEventWatcher(
  @"SELECT * FROM __InstanceCreationEvent
    WHERE TargetInstance ISA 'Win32_NTLogEvent'");
ew.Scope.Options.Impersonation = ImpersonationLevel.Impersonate;
ew.Scope.Options.EnablePrivileges = true;
while(true) {
  ManagementBaseObject mo = ew.WaitForNextEvent();
  Console.WriteLine("Event arrived: {0}", mo["__CLASS"]);
}

The only difference here is the two lines of code that follow the construction of the ManagementEventWatcher object. It turns out that all security-related settings are packaged into an instance of the ConnectionOptions class. The ConnectionOptions object, which controls the security context of the WMI connection, is contained in the instance of the ManagementScope class that is associated with the ManagementEventWatcher object. The code above simply sets two properties of the ConnectionOptions object, Impersonation and EnablePrivileges, which control the COM impersonation level and security privileges, respectively. Once these two properties are set correctly, the code is granted the required access level. Although a detailed overview of WMI security will not be presented until Chapter 8, the technique just demonstrated should allow you to get around most of the security-related issues that you may encounter. Although synchronous mode is definitely the simplest event-processing option available to the developers of management applications, it is not very flexible and not all that efficient.
Its main drawback is the need to continuously poll WMI using the WaitForNextEvent method. A much better approach is to register for events once and then handle the notifications as they arrive. This is where asynchronous mode proves to be very helpful, although setting up an asynchronous event subscription may require just a bit more coding. The following code snippet duplicates the functionality of the previous example, but this time in asynchronous mode:

class Monitor {
  bool stopped = true;
  public bool IsStopped {
    get { return stopped; }
    set { stopped = value; }
  }
  public void OnEventArrived(object sender, EventArrivedEventArgs e) {
    ManagementBaseObject mo = e.NewEvent;
    Console.WriteLine("Event arrived: {0}", mo["__CLASS"]);
    mo = (ManagementBaseObject)mo["TargetInstance"];
    Console.WriteLine("Process handle: {0}. Executable path: {1}",
      mo["Handle"], mo["ExecutablePath"]);
  }
  public void OnStopped(object sender, StoppedEventArgs e) {
    stopped = true;
  }
  public static void Main(string[] args) {
    Monitor mon = new Monitor();
    ManagementEventWatcher ew = new ManagementEventWatcher(
      @"SELECT * FROM __InstanceCreationEvent WITHIN 10
        WHERE TargetInstance ISA 'Win32_Process'");
    ew.EventArrived += new EventArrivedEventHandler(mon.OnEventArrived);
    ew.Stopped += new StoppedEventHandler(mon.OnStopped);
    ew.Start();
    mon.IsStopped = false;
    while( true ) {
      // do something useful..
      System.Threading.Thread.Sleep(10000);
    }
  }
}

This code is fairly straightforward and should remind you of the techniques used to perform asynchronous operations with the ManagementOperationObserver type. Essentially, the ManagementEventWatcher type exposes two events: EventArrived, which is raised whenever a management event notification is received from WMI, and Stopped, which is triggered when a given instance of ManagementEventWatcher stops listening for management events. Thus, setting up an asynchronous event subscription comes down to hooking up an event handler for at least the EventArrived event.
The EventArrivedEventArgs object, passed as an argument to the event handler method, has one property, NewEvent, which points to an instance of the ManagementBaseObject class that represents the management event. An asynchronous event subscription is initiated by calling the Start method of the ManagementEventWatcher type. This is the method that internally invokes IWbemServices::ExecNotificationQueryAsync, which registers a consumer for asynchronous event delivery. Once started, ManagementEventWatcher continues listening for management events until it is stopped, either explicitly or implicitly. To explicitly terminate an event registration, consumers may call the Stop method, which internally invokes the IWbemServices::CancelAsyncCall method. Implicit termination may occur for a variety of reasons. Perhaps the most obvious one is the ManagementEventWatcher variable going out of scope. This may happen as a result of premature program termination or a function return, or simply because a programmer forgot to explicitly call Stop. Another reason is any kind of error condition detected by WMI, such as an invalid event query or an internal error. In order to cleanly shut down any outstanding event subscriptions, ManagementEventWatcher is equipped with a destructor method, Finalize, which is invoked automatically by the .NET runtime. Although destructors are not very popular when it comes to garbage-collecting architectures, in this particular case having one is a necessity. After all, leaving dangling event registrations around is not a very good idea. For obvious reasons, Finalize invokes the same Stop method, which in turn fires the Stopped event. Thus, it is pretty much guaranteed that a Stopped event will be raised regardless of whether the subscription is terminated explicitly or implicitly. The event carries enough useful information to diagnose a problem, if there is one.
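Since relying on Finalize leaves the exact moment of deregistration to the garbage collector, a consumer can instead make the explicit Stop call structural. The following is a minimal sketch (assuming a Windows host with System.Management; the fixed one-minute run time is an arbitrary choice) that guarantees the subscription is torn down even if the intervening work throws:

```csharp
using System;
using System.Management;

class CleanShutdown
{
    static void OnEventArrived(object sender, EventArrivedEventArgs e)
    {
        Console.WriteLine("Event arrived: {0}", e.NewEvent["__CLASS"]);
    }

    static void Main()
    {
        ManagementEventWatcher ew = new ManagementEventWatcher(
            @"SELECT * FROM __InstanceCreationEvent WITHIN 10
              WHERE TargetInstance ISA 'Win32_Process'");
        ew.EventArrived += new EventArrivedEventHandler(OnEventArrived);
        ew.Start();
        try
        {
            // Do something useful for a while.
            System.Threading.Thread.Sleep(60000);
        }
        finally
        {
            // Explicitly cancels the registration (and fires Stopped)
            // rather than waiting for Finalize to do it.
            ew.Stop();
        }
    }
}
```

The try/finally pattern is ordinary C#, not something specific to WMI; it simply ensures that the Stop path described above always runs.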
The StoppedEventArgs object, passed as a parameter to the handler for the Stopped event, has a single property, Status, of type ManagementStatus. This is an enumeration that contains all currently defined WMI error codes. To illustrate how this works, I will change the event handler for the Stopped event so that it prints out the value of the Status property:

public void OnStopped(object sender, StoppedEventArgs e) {
  Console.WriteLine("Stopped with status {0}", e.Status.ToString());
  stopped = true;
}

Assuming that the ManagementEventWatcher object is created with an event query that references a nonexistent event class in its FROM clause, the code will produce the following output:

Stopped with status NotFound

The string "NotFound" is the textual description associated with the ManagementStatus.NotFound enumeration member, which in turn corresponds to the WBEM_E_NOT_FOUND (0x80041002) WMI error code. In this case, a ManagementException is thrown as soon as the Start method is invoked, but the Stopped event is still triggered. Just as is the case with synchronous event processing, asynchronous events are always delivered one by one. This is a bit less efficient than the native IWbemServices::ExecNotificationQueryAsync model, which allows several events to be received at once. Curiously, there is a separate type, called EventWatcherOptions, which, like all other options types, is designed to control various aspects of event processing. Besides the Context and Timeout properties inherited from its superclass ManagementOptions, EventWatcherOptions has the BlockSize property, which seems to be designed for batching events together. However, this property is not used by any code in the System.Management namespace and appears to have no effect on event handling. Moreover, the design of the ManagementEventWatcher type does not really support receiving multiple events at once, which makes the BlockSize option fairly useless.
An event query does not have to be represented by a plain string. There is a special type, called EventQuery, that is dedicated to handling event queries. However, unlike the other query classes described in Chapter 3, EventQuery is neither sophisticated nor very useful. In fact, it is just a container for the query string and, as such, it does not provide for query building or parsing. In addition to a default parameterless constructor, the EventQuery type has two parameterized constructor methods: one that takes a query string, and another that takes a language identifier and a query string. Thus, a simple query object can be created as follows:

EventQuery q = new EventQuery(
  @"SELECT * FROM __InstanceCreationEvent WITHIN 10
    WHERE TargetInstance ISA 'Win32_Process'");

While the first constructor automatically assumes that the language of the query is WQL, the second one allows the language to be set explicitly:

EventQuery q = new EventQuery("WQL",
  @"SELECT * FROM __InstanceCreationEvent WITHIN 10
    WHERE TargetInstance ISA 'Win32_Process'");

The problem is that WQL is the only language supported at this time. So, if you attempt to create a query with a language string of, say, "XYZ", and then feed it into ManagementEventWatcher, an exception will be thrown. Using the EventQuery type with ManagementEventWatcher is also very straightforward. The latter offers a constructor method that takes an object of the EventQuery type rather than a query string:

EventQuery q = new EventQuery(
  @"SELECT * FROM __InstanceCreationEvent WITHIN 10
    WHERE TargetInstance ISA 'Win32_Process'");
ManagementEventWatcher ew = new ManagementEventWatcher(q);

A query can also be explicitly associated with an instance of ManagementEventWatcher by setting its Query property.
Thus, the following code is equivalent to the previous example:

EventQuery q = new EventQuery(
  @"SELECT * FROM __InstanceCreationEvent WITHIN 10
    WHERE TargetInstance ISA 'Win32_Process'");
ManagementEventWatcher ew = new ManagementEventWatcher();
ew.Query = q;

Besides the properties inherited from its base type ManagementQuery, such as QueryString and QueryLanguage, the EventQuery type does not offer any additional functionality, and therefore, it is not very useful. The WMI eventing mechanism is extremely powerful and, on occasion, simply indispensable, since in a well-oiled managed environment the entire chore of system monitoring can be reduced to watching for management events. Fortunately, the functionality WMI offers to facilitate the tasks of event-based monitoring is rich and versatile enough to satisfy just about any taste. This chapter provided a fairly complete and accurate description of the capabilities and inner workings of the event-handling machinery. Having studied the text carefully, you should be in a position to put these event-handling facilities to work in your own monitoring applications.

This chapter effectively concludes the overview of the functionality WMI provides to the developers of management client applications. The rest of this book will concentrate on the issue of extending the capabilities of WMI to account for unique management scenarios.
http://flylib.com/books/en/2.568.1/handling_wmi_events.html
The documentation of the BeamCorrelationAnalysis class is designed to plot the angle in the lab frame of an outgoing particle w.r.t. an incoming particle to test the spin correlations.

#include <BeamCorrelationAnalysis.h>

Definition at line 23 of file BeamCorrelationAnalysis.h.
http://herwig.hepforge.org/doxygen/classHerwig_1_1BeamCorrelationAnalysis.html
Nicolas Liochon commented on HBASE-9335:
----------------------------------------

I don't think we should. We're supposed to be alone. If there is another test running in parallel, it's strange and it's worth knowing, as it can take some resources we need / expect.

> Zombie test detection should filter out non-HBase tests
> -------------------------------------------------------
>
> Key: HBASE-9335
> URL:
> Project: HBase
> Issue Type: Test
> Reporter: Ted Yu
>
> Zombie test detection in test-patch.sh sometimes picks up tests from other TLP.
> e.g. from:
> {code}
> "main" prio=10 tid=0x091b4800 nid=0x7634 waiting on condition [0xf69b1000]
>    java.lang.Thread.State: TIMED_WAITING (sleeping)
>         at java.lang.Thread.sleep(Native Method)
>         at org.apache.hadoop.hdfs.server.namenode.ha.TestFailoverWithBlockTokensEnabled.TestFailoverAfterAccessKeyUpdate(TestFailoverWithBlockTokensEnabled.java:159)
> {code}
> When the zombie test doesn't belong to org.apache.hadoop.hbase namespace, it shouldn't be listed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/hbase-issues/201308.mbox/%3CJIRA.12665419.1377441101690.29802.1377443512103@arcas%3E
# Rotate nums to the right by k steps, in place.
def rotate(self, nums, k):
    if k > len(nums):
        k = k % len(nums)
    if k == 0:
        return
    nums[:] = nums[-k:] + nums[:len(nums)-k]
https://discuss.leetcode.com/topic/55011/3-line-python
Pop Searches: photoshop office 2007 PC Security You are here: Brothersoft.com > Windows > Security > Encryption Software > Advertisement Advertisement Advertisement keep your data | file hider | mandala 3d screensaver | data entry practice | full nero pc | free flash player | sis software of | vbhtml namespace models | dell data safe | free flash player | microsoft works word | photo safe | opera mini flash | password safe Please be aware that Brothersoft do not supply any crack, patches, serial numbers or keygen for Protectorion Data Safe,and please consult directly with program authors for any problem with Protectorion Data Safe. free cell phone flash software | Internet safe | one safe | dell data safe online | microsoft works task launcher | edius 4.6 | data entry practice test | safe room | password safe sourceforge | n70 nokia pc suite | software kyocera kpc650 | safe search | cpp to idx converter | china mobile pc suite | safari delicious sidebar | keep safe | samsung pc suite | safe access | youtube flash player | microsoft works calendar | card driver sound xwave | gtunes for pc
http://www.brothersoft.com/protectorion-data-safe-461769.html
Path: senator-bedfellow.mit.edu!dreaderd!not-for-mail
Message-ID: <joel-furr/faq_1083761618@rtfm.mit.edu>
Supersedes: <joel-furr/faq_1082292761@rtfm.mit.edu>
Expires: 5 Jun 2004 12:53:38 GMT
X-Last-Updated: 2000/05/11
Newsgroups: alt.fan.joel-furr,alt.bonehead.joel-furr,alt.answers,news.answers
Approved: news-answers-request@mit.edu
Subject: Joel Furr FAQ
Followup-To: alt.fan.joel-furr
Summary:
Organization: Carole and Jay Furr
From: jfurr@furrs.org (Joel K. 'Jay' Furr)
Originator: faqserv@penguin-lust.MIT.EDU
Date: 05 May 2004 12:54:50 GMT
Lines: 2457
NNTP-Posting-Host: penguin-lust.mit.edu
X-Trace: 1083761690 senator-bedfellow.mit.edu 556 18.181.0.29
Xref: senator-bedfellow.mit.edu alt.fan.joel-furr:12378 alt.bonehead.joel-furr:2779 alt.answers:72769 news.answers:270955

Joel Furr FAQ

This is the Joel Furr FAQ. It is not provided out of a sense of personal vanity but rather for the purpose its name states: to answer some of the Frequently Asked Questions about me, such as "how'd he get three newsgroups named after him" and such. Many of these questions are sent to me in electronic mail, usually as a result of someone looking for the answers to their questions in alt.fan.joel-furr and not finding them. It would be a good idea to read this FAQ before posting to alt.fan.joel-furr.

This FAQ is copyright 1998 by Joel Furr (me) and may not be reprinted in any commercial medium without my explicit and unambiguous permission. This means that you may not re-print it in a magazine, book, newspaper, or multimedia disk, nor re-print it in an online magazine or other

-- Joel Furr

Frequently Asked Questions

(1) Who is Joel Furr?
(2) Why does he have three newsgroups named after him?
(3) Who appointed Joel Furr ruler of alt.*?
(4) What is it about Joel and lemurs?
(5) Was Joel really elected Kibo, or is that just a myth?
(6) What happened between Joel and those "Green Card" lawyers in Arizona?
(7) What newsgroups is Joel Furr a moderator of?
(8) Does Joel Furr sell t-shirts and stuff?
(9) Does Joel spend all his time logged in, or what?
(10) Is Joel likely to reply if I write to him?
(11) What does Joel look like?
(12) What's the deal with those funny black floor lamps that point up at the ceiling with the little knobs on the side about halfway up that you turn back and forth to adjust the brightness? Everyone seems to have them these days.
(13) Hey, where are the seatbelts?
(14) What's the 'soup du jour' today?
(15) Is cotton candy a solid or liquid or crystal or what?
(16) What's the 800 number for the North Carolina ferry system?
(17) Where is Paradise?
(18) Hey, what about those French?
(19) Is it true that if I jump up off the ground, I'm technically in low earth orbit for as long as I'm in the air?
(20) Who's in charge of the weather?
(21) What is it with cats? How do they make their legs disappear when they perch on the arm of a sofa, looking content?
(22) Does Joel Furr like fish?
(23) How 'bout them Dawgs?
(24) Is Joel a Democrat, Republican, Libertarian, or what?
(25) What's in those bottles in the back of Joel Furr's refrigerator?
(26) Where do bad people go when they die?
(27) When's the best time to go to an amusement park?
(28) What's wrong with Joel Furr's blood?
(29) What is that thing at the bottom of that big glass jar full of water?
(30) Will seagulls eat small chunks of pork barbecue?
(31) St. Patrick's Day is a festive, cheery holiday wherein we celebrate our Irish heritage, affecting bad Irish accents and wearing green. How does Joel Furr celebrate the holiday?
(32) Is it true that Joel Furr's car has a guardian spirit?
(33) Hey, isn't that song "YMCA" that they play at baseball games really cool?
(34) What's that chunk of powdery concrete atop Joel Furr's bookcase?
(35) Does Joel have a wife?
(36) What's the greatest cinematographic achievement of all time?
(37) What is a Hokie?
(38) What instrument did Joel Furr play in the Blacksburg High School band?
(39) What does Joel typically say when someone asks him, rhetorically, how he is?
(40) What's the best sort of implement to use when eating ice cream?
(41) What clubs and organizations does Joel Furr belong to?
(42) What's Joel Furr's ethnic and socioeconomic background?
(43) Define "good eatins."
(44) Joel Furr visited Las Vegas in July 1995 for the better part of a day. How much money did he gamble? How much did he lose?
(45) What were the schools in Blacksburg, Virginia like when Joel Furr was growing up there?
(46) Where does Carole, Joel Furr's girlfriend, come from?
(47) Who is the Official Stooge of alt.fan.joel-furr?
(48) What exactly is "hungus?"
(49) What is the name of the night manager at the International House of Pancakes franchise on Baxter Street in Athens, Georgia?
(50) What is Joel Furr's best category in Trivial Pursuit?
(51) Who is Wally?
(52) Where can you go in Durham, North Carolina, to get "spaghetti and salmon cakes?"
(53) What is Joel Furr's favorite soft drink?
(54) How many fingers am I holding up?
(55) Do we need more plastic cups?
(56) What color should mayonnaise be?
(57) What is Joel Furr's astrological sign?
(58) What is Joel Furr's Myers-Briggs type?
(59) Where are your videos?
(60) How is "Furr" pronounced?
(61) What is the law?
(62) Where do the keys go?
(63) What are some of the nicknames that Joel Furr has gone by over the years?
(64) What happens when you put a real, formerly alive, ocean-bred sponge back in water?
(65) What kind of underwear does Joel Furr wear?
(66) Who is the greatest cat of all time?
(67) How can I embarrass myself in front of eight thousand people?
(68) Why does Joel Furr have so many strange and pointless pictures of himself and his friends on his Web page?
(69) What's special about the Duke University parking deck at the corner of Fulton and NC 147 in Durham, North Carolina?
(70) What fortune cookie does Joel Furr always get?
(71) What is "The Mother of All Rivers?"
(72) So, what was it like attending Georgia Tech?
(73) What book is Joel Furr currently working on?
(74) Who the hell is "Yalin Ekici?"
(75) What is the ultimate slow dancing song?
(76) Who was President of Joel Furr's high school Science Club?
(77) What is the secret of making great Bisquick pancakes?
(78) Why didn't Joel Furr wind up in the military?
(79) What was the most embarrassing thing that ever happened to Joel Furr?
(80) When did Joel Furr learn to read?
(81) What is Joel Furr's ultimate ambition in life?
(82) Aren't you cold?
(83) What restaurant are Joel and Carole Furr going to open soon?
(84) What collectible novelty does Joel Furr have in store for us?
(85) What did Wally the gopherlike being do at the 1997 North Carolina State Fair?
(86) What does Joel Furr think of the invention known as "the third mouse button"?
(87) If Joel Furr were a fruit, which one would he be?
(88) Did Joel Furr inhale?
(89) Does Joel Furr say "toe-MAY-toe" or "toe-MAH-toe?"
(90) Why?
(91) Why not?
(92) What did Joel's supervisors and co-workers at Glaxo Pharmaceuticals give him on his last day of work, as a going-away present?
(93) What boutique are Joel and Carole Furr going to open next door to their new restaurant?

The Frequently Questioned Answers

(1) Who is Joel Furr?

Joel Furr is a writer and trainer who lives in Essex Junction, Vermont. He was born in Roanoke, Virginia on September 20, 1967 and grew up in the nearby college town of Blacksburg, where his father was an engineering professor at Virginia Tech. After graduating from high school in 1985, he attended the University of Georgia in Athens, Georgia from 1985 to 1988, earning a bachelor of arts degree in English.
Inasmuch as an English degree from a notorious football school hardly qualified him for rapid advancement through the ranks of the American industrial elite, Joel went on to graduate school at Virginia Tech, where he earned a Master of Public Administration degree in about a year and a half and then wasted the next two and a half years pursuing a Ph.D. in the same subject before finally quitting, utterly burned out, in the fall of 1992. During his graduate school years, he spent a lot of time goofing around on Usenet and a few MUD systems, since his graduate assistantship position with the Virginia Tech Department of Public Safety, Health, and Transportation wasn't exactly demanding of his time and since he was expected to spend at least four hours per day in his office -- which happened to have a fast net connection.

After dropping out of his Ph.D. program in Public Administration at the end of 1992, he tried and failed to find meaningful work in western Virginia, an economically depressed area with few good-paying jobs. In late 1993, he gave up looking for work in Virginia and moved to Durham, North Carolina, where he had friends and a few relatives. In fairly short order, he got work, got an apartment, and resumed fooling around on the Internet. After a few years of working in relatively dead-end jobs, he met the woman who became his wife, got a real job doing software training, and a whole new chapter in his life, a chapter that did not revolve solely around the Internet, began. In May of 1995, he moved with his wife, Carole, to the Burlington, Vermont area, where he works for a software corporation doing training.

(2) Why does he have three newsgroups named after him?

The newsgroups, alt.fan.joel-furr, alt.bonehead.joel-furr, and alt.joel-furr.die.die.die, were not created by Joel Furr or by anyone acting on his behalf. Each was created as an act of satire and/or criticism by people who did not like Furr.
Alt.fan.joel-furr exists because Joel Furr once created a newsgroup called alt.fan.serdar-argic, angering the infamous Ahmet Cosar, a.k.a. "Serdar Argic." Cosar's infamous alter-ego was responsible for ruining many history-related and culture-related newsgroups such as soc.history and soc.culture.turkish; Cosar liked to post lengthy rants about one of his pet delusions, namely, that in 1914, Armenians had killed all the Turks in northeastern Turkey and in Russian Armenia. This is, of course, the direct opposite of what actually happened, but Cosar, an apologist for the Turkish genocide, was certain that he could convince the world otherwise if he posted megabyte-long rants to dozens of newsgroups per day, lowering the signal-to-noise ratio so far that many posters would desert the newsgroups and leave the field to Cosar and his allies.

Furr created alt.fan.serdar-argic to give people who were sick of Cosar's childish pranks a place to comment and discuss what to do about Cosar. Within 24 hours, Cosar had newgrouped alt.fan.joel-furr. Oddly enough, and no doubt to the immense surprise of Cosar, the newsgroup has actually seen considerable use from time to time. (See also question #74, "Who the hell is 'Yalin Ekici?'")

Alt.bonehead.joel-furr exists for a similar reason. A user named Paul Hendry once spent a solid two months posting hundreds of messages to alt.config trying to convince the alt.config regulars that the world of Usenet direly needed a newsgroup for fans of lampreys (jawless parasitical fish) to chat. However, he failed utterly because a simple grep of the newsspool showed that the only lamprey-related traffic in existence was on alt.config itself. Hendry, as it turned out later, had been trying to trick alt.config's regulars into rubber-stamping an unnecessary newsgroup. Why he thought this would be amusing is anyone's guess. Hendry finally exhausted Joel Furr's patience, and Furr newgrouped alt.bonehead.paul-hendry.
Hendry, in a masturbatory act of excess, then turned around and newgrouped alt.animals.lampreys, alt.animals.paul-hendry, and alt.bonehead.joel-furr. None of the four newsgroups gets any traffic to speak of. Both sides in the affair, in the final analysis, acted childishly.

The third group, alt.joel-furr.die.die.die, is not carried much of anywhere and isn't really considered a real newsgroup. It was created by a pseudonymous Netcom user without any evident provocation -- it just "showed up" one day without any obvious justification. Fewer than 10% of sites carry the newsgroup on their system, and the sites that do are generally those sites which have their newgrouping and rmgrouping set on "autopilot," accepting all newsgroups that are created anywhere by anyone.

(3) Who appointed Joel Furr ruler of alt.*?

No one. In fact, references to "King Joel of alt.*" are showing up a lot less frequently because Joel no longer gives much of a damn what happens in alt.* - so many garbage newsgroups have been created that the alt.* namespace is a hopeless mess and there's nothing that can be done about it. He used to spend a half hour to an hour each day trying to explain to the endless legions of clueless newbies why we didn't need to have sixteen newsgroups on the same subject, or why a newsgroup with a confusing, meaningless name would get zero traffic. It never made a dent in the hordes of stupid-ass bozos who showed up day after day begging for newsgroups only they cared about, so Joel eventually found better uses for his time.

(4) What is it about Joel and lemurs?

Joel and some friends started telling each other jokes about lemurs on one of the bulletin board systems (the late, lamented vtcosy.cns.vt.edu conferencing system) at Virginia Tech back in 1991. Neither Joel nor his friends knew anything about lemurs except that they were from Madagascar and had big eyes.
When Joel and company found out there was a research center dedicated to lemurs just a few hours away in Durham, North Carolina, they promptly went down and visited. The Duke University Primate Center turned out to be a really cool place with woods full of lemurs on the hoof and Joel fell in love with the furry little varmints, especially since they were (and still are) gravely endangered in their native habitat and needed help so badly. Joel started campaigning online for donations to DUPC and continued this activity when he moved down to Durham.

If you would like to know more about lemurs, you can visit the DUPC home page at and/or discuss lemurs with fellow lemur fans on the Usenet newsgroup alt.fan.lemurs.

(5) Was Joel really elected Kibo, or is that just a myth?

In January of 1994, James "Kibo" Parry disappeared from Usenet for a long time, over a month. No one knew where he had gone or what he was up to. Some people cared, some people didn't. Finally, Andrew Bulhak, an Australian net.user, called for an election to replace Parry in the role of Kibo. Bulhak accepted any nomination that came his way, then published a list of candidates and held an open vote via e-mail. When the voting period was up, Joel Furr had won with a solid plurality and almost a majority, with 81 votes; the nearest runner-up was Parry himself, with around 30 votes.

Parry had returned from whatever it was he'd been off doing halfway through the voting period, but had known better than to denounce the vote for fear of inspiring people to gleefully vote against him. However, once the vote was over, Parry started whining very loudly about it and actually threatened Joel Furr with legal action over Joel's frivolous use of the title "Kibo" in a few Usenet posts.
According to Parry, his nickname "Kibo" had actually won him a few endorsement contracts in Boston (primarily for computer stores, apparently with tongue lodged solidly in cheek) and if someone else were also using the term, it would damage his marketability. Inasmuch as Joel had only signed two or three messages with "Kibo," having had better things to do than engage in the sort of idiocy practiced regularly on alt.religion.kibology, he had little use for Parry's whining. It was not as though Joel had actually set out to replace Parry as Kibo in the minds of Internet users - nor would Joel have had the slightest interest in attaining Kibo-like notoriety, since being Kibo is sort of like being the biggest rat in the garbage heap. Nonetheless, Parry was so whiny about it that Joel stopped using the nickname in disgust. As Joel said at the time, "It's ironic that Usenet's biggest jokester cannot take a joke himself."

(6) What happened between Joel and those "Green Card" lawyers in Arizona?

In 1994, Laurence Canter and Martha Siegel, the so-called "Green Card Lawyers," were probably the most disliked people on Usenet. Their actions -- spamming repeatedly and then managing to convince the mainstream media that they were the wronged parties when their messages were erased -- made them extremely unpopular. Consequently, Joel Furr was asked by many people to make a t-shirt satirizing them. (Furr had previously made and sold about 150 copies of a t-shirt satirizing Ahmet "Serdar Argic" Cosar.) When he designed and began taking orders for a "Green Card Lawyers: Spamming the Globe" t-shirt, Canter and Siegel got wind of it and threatened Joel with "severe" legal action unless he removed the term "Green Card Lawyers" from the shirts.
Canter and Siegel based their threats on two claims, both legally without a shred of foundation:

Claim #1: They had exclusive trademark over the term "Green Card Lawyers," a term they had never used in trade and which in fact they had no rights to whatsoever. Legally, if you want to be able to assert a common-law trademark over a term, you must have used that term in trade. Canter and Siegel had never used that term as part of their business, so they had no rights to it whatsoever.

Claim #2: They had exclusive rights to produce or license the rights to produce a t-shirt based on their exploits, and that "several large companies" were already interested in marketing C&S-based shirts. Needless to say, no companies ever produced such a shirt - and in any case, they certainly had no right to prevent someone else from exercising their freedom of speech by producing t-shirts satirizing them.

During an exchange of email over the matter, Canter and Siegel betrayed a complete lack of knowledge of the law - or, if you want to ascribe to malice what others ascribed to stupidity, were engaged in barratry, the use of legal threats for harassment reasons. Canter and Siegel said that the concept of "public figures" being considered legally vulnerable to satire was complete nonsense, and they repeatedly asserted their trademark claim over a term they had never filed for trademark over and which they couldn't even claim common law trademark over since they had never used the term in trade. It was easy to see, after a short round of discussions with them, why they'd had to sue to be permitted to resign from the Florida Bar several years ago in an effort to avoid actual disbarment.

Furr was panicked after receiving their threats, because although he knew that their claims were absolute garbage, he also knew that he didn't have the financial resources to deal with a lawsuit brought by two lawyers in a state two thousand miles from his home.
He considered taking the term "Green Card Lawyers" off the shirts, but first, asked for suggestions and comments from the readers of newsgroups like comp.org.eff.talk and misc.legal. Two days of absolute pandemonium followed. Joel began getting hundreds of offers of free legal help and donations to a Joel Furr Defense Fund. Thankfully, Mike Godwin, Chief Legal Counsel of the Electronic Frontier Foundation, also heard of the matter and offered the EFF's services in the case to defend Furr in any legal matters that did develop. Heartened, Joel publicly said "To hell with the lawyers, the shirts are going forward with the original design, let them sue."

Canter and Siegel promptly began claiming that they had never made any threats whatsoever and that it was all a fiction invented by Joel Furr. In later months, after the "Green Card Lawyers" shirts had sold like hotcakes (the result of Canter and Siegel's effort to prevent their sale altogether), Canter and Siegel went around claiming that Furr had actually contacted them first and asked for permission to make the shirts and that they'd just told him to go away and not talked to him again. Since Furr had kept all the email they'd sent him and had it handy to show anyone who asked, this absurd claim was easily disproven.

Canter and Siegel went on to publish a book about the Internet entitled "How To Make a Fortune on the Information Superhighway" which, from all accounts, was a pedestrian and rather lame ghost-written Net guide with a sad little chapter or two at the end declaring the authors champions of spamming. They then tried to run a spam-for-hire service which collapsed when no one would sell them net access, and after a few notable fiascoes which introduced the Net to the concept of "disposable accounts" (dial-up shell accounts used for spamming with the full knowledge that the provider would angrily delete the account once the spamming had taken place), Canter and Siegel more or less vanished from sight.
What a pity.

(7) What newsgroups is Joel Furr a moderator of?

None, at the present time. In the past, he was sole moderator or co-moderator of the following newsgroups: comp.society.folklore, alt.folklore.suburban, alt.humor.best-of-usenet, triangle.singles.announce, soc.history.war.world-war-ii, and news.admin.net-abuse.announce (since superseded by news.admin.net-abuse.bulletins). He has relinquished all his moderation duties because married life and his demanding career as a lemur rancher don't allow much time for endless Usenet activities.

(8) Does Joel Furr sell t-shirts and stuff?

Yes and no. He used to do that a lot, but has more or less stopped now that he has a salaried job that requires a commute and now that he has a wife. When he has time (read: not often, these days) Joel designs various shirts and mugs and stuff and gets a local screen printing firm to make them for him once he's accumulated orders from various people around the world. People read about the shirts and stuff on the Internet and send orders and payment via ordinary postal mail. Joel collects the orders, deposits the checks, and then orders the shirts in the requested sizes and colors from the screen printer.

This sometimes takes a few months from the time orders are first collected to the time the last shirt is in someone's hands -- sometimes it takes quite a while to generate enough orders to make ordering a particular shirt cost-effective, and other times, so many orders come in (for example, for the Perl/RSA t-shirt) that it takes a hell of a long time to open and enter all the orders in a spreadsheet so the actual shirts can be ordered. Joel does not charge a profit on the shirts; he prefers that the shirt business remain more or less a hobby and not an actual business. If he were to charge a profit, people would expect a lot prompter service and it'd probably stop being fun.
Besides, if a profit is charged, he cannot post notices in related Usenet newsgroups (people resent advertising for profit in discussion-based newsgroups) and sometimes, a few notices to a few newsgroups are necessary to get the ball rolling. However, all that is mostly academic now that Joel has largely retired from doing shirts.

(9) Does Joel spend all his time logged in, or what?

No. Despite the insults from losers who, when losing an argument in a Usenet newsgroup, say "Hey, get out from in front of your monitor once in a while, bub!" Joel actually spends little time logged in. Having a wife and a demanding job will do that for you. Joel does have a real life, a life that consists of spending time with his wife, reading, going to minor league baseball games, driving, traveling, going to movies, hanging out with friends, and working on his writing. He used to spend a lot of time logged in, back when he was in graduate school (he had a do-nothing graduate assistant position), so people assume this is still the case.

(10) Is Joel likely to reply if I write to him?

If you write to him and ask stupid, clueless questions like "how do I set up my newsreader? I'm on a Mac," he'll cheerfully ignore you. If you have half a clue and need help, or just want to talk, he can usually find time. If you like talking about maps, travel in the USA, the South, minor league baseball, non-fiction books, and so forth, please write. He's often up late at night and may be around, but idle, when you send email. His preferred email address is jfurr@furrs.org.

(11) What does Joel look like?

Joel Furr is a 6'2", 210-pound Caucasian male with dark brown hair, a fairly bushy dark brown beard, and brown eyes. He has a very faint Y-shaped scar on his left cheek from a childhood accident. He typically does not have much of a tan because he spends most of his time indoors. When he's not at work, he tends to wear t-shirts or polo shirts, corduroy shorts, and sneakers.
He prefers dark colors, such as navy or purple, but rarely wears black shirts because he doesn't want people to come up and start talking to him about "cyber space." He tends to wear his hair in what's called a "professional haircut" -- not too short, but definitely not very long. He prefers to wear his hair fairly short because he tends to perspire heavily in summertime and that makes long hair impractical.

(12) What's the deal with those funny black floor lamps that point up at the ceiling with the little knobs on the side about halfway up that you turn back and forth to adjust the brightness? Everyone seems to have them these days.

They like it if you have one. In fact, They like it if you have more than one. (If you don't know who we mean by They, sorry; we can't tell you more than we already have.)

(13) Hey, where are the seatbelts?

There aren't any seatbelts on this ride.

(14) What's the 'soup du jour' today?

Cream of broccoli.

(15) Is cotton candy a solid or liquid or crystal or what?

Cotton candy is, technically, one big molecule -- one very long-chain molecule, nonetheless, but one molecule. If you unraveled a cotton candy molecule of typical size and stretched it out straight, it'd stretch from Durham, North Carolina to Key West, Florida.

(16) What's the 800 number for the North Carolina ferry system?

1-800-BY-FERRY.

(17) Where is Paradise?

Paradise can be found in the men's room of the Mardi Gras Bowling Lanes, located on NC 54 between Durham and Chapel Hill, North Carolina.

(18) Hey, what about those French?

For the purposes of the game, the French are goobers.

(19) Is it true that if I jump up off the ground, I'm technically in low earth orbit for as long as I'm in the air?

Yes. Technically, anytime you leave the surface of the Earth, you're in low earth orbit and the Earth will rotate slightly underneath you.
The distance the Earth travels beneath you while you're in the air is too slight to be noticed, but there is a small but calculable orbital effect.

(20) Who's in charge of the weather?

The current Planetary Weather Supervisor is Mr. James L. Cambias of New Orleans, Louisiana (currently dwelling in Ithaca, New York). You can complain to him when it rains all day with no end in sight, but he rarely acts in a responsive fashion. He has his own agenda and until his demands are met (he insists that the residents of Chapel Hill learn to drive like sane people), he's not going to do anything about the weather.

(21) What is it with cats? How do they make their legs disappear when they perch on the arm of a sofa, looking content?

They've got little tubes up inside their body that their legs retract into. No one's figured out exactly why they evolved this trait, but the best guess anyone's come up with is that they did it so they could look cool when they perch on the arm of a sofa.

(22) Does Joel Furr like fish?

No. He hates fish. When he was a kid, he used to eat Fish Filet sandwiches from McDonald's with great satisfaction. This all changed when he had two bad encounters with fish which forever traumatized him.

First, at the age of six or so, he happened one summer to be at the house of relatives in Florida who served up a big batch of fried mullet for dinner one night. It looked fairly nasty -- big platters of fried fish with bones and stuff sticking out -- and smelled worse. Joel didn't want to eat any, but nothing else had been cooked for dinner. Squeamishly, Joel ate a few bites, then decided hunger was preferable to eating mullet. Unfortunately, even the few bites he ate were a few bites too many. Joel developed debilitating nausea and a king-hell case of the hives which lasted for a week or so, the result of massive and previously unknown food allergies to mullet. It turned him off on eating fish in general.
Second, while visiting relatives in North Carolina a year or two later, he went fishing with an uncle and promptly caught a little orange sunfish, which, in its gasping and wriggling and bulging of eyes and so forth so shocked and startled the young Furr that he dropped his pole and sprinted off, leaving his uncle to release the fish from the hook and put it back into the water. For some reason, this encounter left Furr with a lifelong aversion to fish -- he's not afraid of them but can't stand the thought of touching them, much less eating them -- and the allergy to mullet helps justify his dislike of fish to people who, annoyingly, insist that he'd really like fish if he just tried it. It's a phobia. No, it doesn't make sense. That's what makes it a phobia.

(23) How 'bout them Dawgs?

Gooooooooooooo Dawgs! Sic 'em! Woof woof woof woof woof!

(24) Is Joel a Democrat, Republican, Libertarian, or what?

Joel is a registered Democrat; this is not to say that he's of a particularly liberal bent, but rather, that he supports the broad goals of the Democratic Party and opposes the "morals-based" legislative agenda of the Republicans. Joel was 13 in 1981 when President Reagan took office. He spent his high school years watching Reagan's insane lies and deranged, senile babblings on the news each night during dinner and, as a result, developed a lifelong antipathy to the twisted Newspeak of the Republican Party.

He's not real fond of the Libertarians either, though, because most Libertarians he's known have been so selfish and "it's MY money why should I pay ONE RED CENT to help the POOR"-oriented that he's learned to ignore them. Furr worked for a little over two years in a public library and learned the importance of basic governmental services such as libraries.
Libertarians would have you believe that we should ban such services and let for-profit libraries come into being -- never mind the fact that a lot of residents of Furr's hometown in Appalachia couldn't afford basic telephone service much less "luxuries" like a for-profit library. What would happen to the poor in a world where the Libertarian Party has closed down all the libraries (i.e., the creation of an illiterate, ignorant underclass) does not seem to matter to the Libertarians. As someone put it recently, you don't see a lot of poor Libertarians. People only become Libertarians when they decide "hmm, okay, I've made a lot of money, it's time to change the rules so I don't have to share it with anyone or pay for any government services."

But anyway, in the end Furr is like many other people in this day and age in that he tends to vote against candidates rather than for them. The Republicans being such odious walking piles of garbage and the Libertarians being so completely out in left field, this means that he typically votes Democratic.

(25) What's in those bottles in the back of Joel Furr's refrigerator?

The Coca-Cola bottle and the Cobb Mountain Natural Spring Water bottle are full of salt water from the Pacific Ocean off San Francisco, California, collected from the surf near Seal Rock during Joel's vacation to California in July of 1995.

The bottle marked "Cuzcatlan" which appears to contain cloudy, stagnant water is actually a bottle of Cuzcatlan "soursop" soda which Joel picked up at a Mexican grocery in Durham out of curiosity and which he decided he might be better off not drinking when he noticed that the ingredients consisted solely of "water, propylene glycol, vegetable gum, and glyceryl abietate."

The bottle of Shasta tonic water with about one gin-and-tonic's worth of tonic missing is just that, a partially consumed bottle of Shasta tonic water. It dates from the summer of 1988 and has been with Joel through five apartments and one house.
(26) Where do bad people go when they die? Gatlinburg, Tennessee. (27) When's the best time to go to an amusement park? Well, if you ask in terms of when will you find short lines and so forth, experience shows that it's best to go during a full-fledged tropical storm. One day in 1995, Joel Furr went to the "Carowinds" amusement park in Charlotte, North Carolina on a day when Tropical Storm Jerry was approaching and torrential rains were already falling. Joel had come to town to see Warren Zevon in concert that night and had decided to drive down early to visit Carowinds as well. It was raining when the park opened at 10:00 a.m., it rained hard most of the day, and it was still raining when Joel left at 8:00 p.m. The park stayed open throughout the day and there were actually some minor lines around 1 p.m., but most of the day, the lines on the coasters were so short that you could just stay on the coasters and ride continuously for hours. Joel went on something like 30 or 40 coaster rides in one day, then left, soaking wet and chafed all over, to see Warren Zevon. It was not until the next day that Joel and friends (who'd driven down and met him at the concert) read in the newspaper about how Tropical Storm Jerry had brought extensive property damage, flooding, and a few drowning deaths to the Charlotte vicinity. "Oh," Joel said. "That explains why it was raining all day." (28) What's wrong with Joel Furr's blood? Joel has a rare blood trait known as "thalassemia trait" (or, in some references, "thalassemia minor" or "beta thalassemia"), an asymptomatic condition marked primarily by smaller, less mature red blood cells and a different type of hemoglobin from that of normal blood. It is theorized that this condition in some way aids survival in malarial regions, inasmuch as the trait is found primarily in people who live in or trace their ancestry to southern Europe (Italy, Greece, Cyprus, etc.) 
and south Asia, as well as certain other warm regions plagued by malaria. The trait is only dangerous to those whose parents both had it -- a child whose parents both had thalassemia trait would have a 25% chance of having thalassemia major, a condition that is usually fatal within the first few years of life. Note: this is not the same thing as sickle cell anemia. Joel's father, brother, and sister all have this trait -- but were misdiagnosed for years as having a similar trait known as "hemoglobin C." Medical science of the period 1950-1980 didn't know to look for thalassemia minor unless you specifically told them to, evidently, because Joel, his father, and his sister were all misdiagnosed. The unfortunate side of this is that Joel was treated as a child with iron supplements, which won't do a single thing to help with thalassemia minor, until someone finally noticed they weren't changing anything... and worse, for years he was told that he wasn't permitted to donate blood. Given the zealous manner in which many blood drive volunteers waylay passersby and demand a contribution to the cause, it made life annoying at times to carry a rare blood trait which was on the American Red Cross's banned list. When Joel finally found out, in the mid-1990's, that he did not in fact have hemoglobin C and instead had thalassemia minor, he checked with the Red Cross and was told "yeah, we can accept donors with that condition." Joel has wasted little time since this news -- he's given blood as often as possible since Christmas of 1996, when he first donated. Joel plans to make up for lost time. Interestingly, it turns out that Joel's blood, for all that it has small and slightly different red blood cells, is nonetheless quite desirable to the blood bank people. When Joel got his first donor card in the mail, his blood type was printed on it as well -- and to Joel's surprise, his blood type was the most desirable of all: O Negative... the so-called "Universal Donor" type. 
(29) What is that thing at the bottom of that big glass jar full of water? A small plastic rubber octopus. It likes it there. (30) Will seagulls eat small chunks of pork barbecue? Apparently not. They ate everything else Joel threw to them, up to and including gravel, but they spit out the pork barbecue. Ingrates. (31) St. Patrick's Day is a festive, cheery holiday wherein we celebrate our Irish heritage, affecting bad Irish accents and wearing green. How does Joel Furr celebrate the holiday? He wears orange. Every year, without fail. Orange. Orange, for your information, is the Irish Protestant color, and while Joel isn't a practicing Protestant either, he figures that an obstinate insistence on wearing orange is a good symbolic protest against the St. Patrick's Day holiday. If anyone has an explanation for why one ethnic group has managed to wangle themselves what amounts to a national holiday for their patron saint, celebrating alcoholism and leading zillions of idiots without a drop of Irish blood in their body to wander around saying "Aye and begorra" one day each year, Joel would like to hear it. And in any case, drunken driving doesn't become okay because it's St. Patrick's Day. Take a damn cab home, or don't go out in the first place. (32) Is it true that Joel Furr's car has a guardian spirit? Actually, yes. A small lemur statue, "Bondo" by name, sits on the rear shelf and theoretically keeps the car and all its passengers safe from harm. (33) Hey, isn't that song "YMCA" that they play at baseball games really cool? 
Or, to put it another way, isn't it really cool the way minor league baseball teams have taken to playing "YMCA" over the public-address system at every game, leading thousands of idiots who wouldn't know a fielder's choice or a suicide squeeze if it came along and bit them to turn out in large numbers night after night for no other reason than to stand up in the sixth inning and sing a lousy, annoying song that should have been left in the 1970's, in a stomach-turning display of human futility that rivals Catholic family planning efforts for utter stupidity? The answer: "Um, well, no. But at least it does help us identify those who'll be first in line for the public executions when the revolution comes." (34) What's that chunk of powdery concrete atop Joel Furr's bookcase? It's a big piece of the Berlin Wall that Julia Youngman, one of Joel Furr's older sisters, chopped out of the Wall in November or December of 1989 during the big feeding frenzy as the Wall fell. At least, that's what Julia says it is. She came back from Army duty in Europe with a suitcase full of concrete, but for all any of the recipients know, she chipped those chunks off a concourse pillar at Dulles International on her arrival in the USA. No Communists have shown up asking for the chunk back yet, but you never know. (35) Does Joel have a wife? Fortunately, yes. Her name is Carole and he met her in real life at a convention of sorts in suburban Maryland around the end of October 1995, then spent a solid month and a half exchanging email with her before they decided to arrange another meeting to determine whether or not relationship potential was present. Joel visited Carole at her home in northern Virginia in mid-December and spent part of a cold, windy Sunday afternoon strolling around the Mall in Washington, DC. 
As Carole and Joel were strolling past the Washington Monument, they were accosted by an ABC-TV news crew which was there interviewing tourists about the federal budget crisis which had caused all the monuments to be closed to the public that day. Carole and Joel were happy to mutter darkly about Congressional Republicans for the camera and then went on their merry way, not really expecting to make the evening news that night or anything like that. Wrong-o. Carole and Joel did make "World News Tonight" that night - one of only two tourist interviews from that afternoon that made it onto the air (the other, which came immediately before Carole and Joel's interview, was of a cranky old guy who likewise blamed the idiots in Congress as being responsible for the shutdown). Fifteen seconds of irritated grumbling, tops, but how many other couples can truthfully claim that their first date wound up being nationally televised? Joel and Carole were married in September of 1997; the ceremony was held on Saturday, September 13 in the Sarah P. Duke Gardens at Duke University in Durham, North Carolina. They honeymooned in south Florida and are now making their home in Vermont. (36) What's the greatest cinematographic achievement of all time? That would be "Repo Man," starring Emilio Estevez and Harry Dean Stanton. The life of a repo man is always intense. (37) What is a Hokie? The term "Hokie" has been applied for over a hundred years to members of the athletics teams at Virginia Tech (Virginia Polytechnic Institute and State University, located in the mountains of southwest Virginia), informally for much of that time and formally since the mid-1980's. Virginia Tech, a former military school, originally played under the name "Cadets" and then, later on, switched to the nickname "Fighting Gobblers" because, believe it or not, the members of the football team tended to have prodigious appetites. 
"Fighting Gobblers" is not exactly the sort of team nickname which strikes fear into the hearts of opponents, so "Hokies" was often used as an informal substitute. In the mid-1980's, under the tenure of head football coach and athletic director Bill Dooley, "Hokies" became the official team name, replacing "Fighting Gobblers," which nonetheless remained plastered across the outside of Lane Stadium ("HOME OF THE FIGHTING GOBBLERS"). Which brings us once again to the question, "What is a Hokie?" We now understand that the term refers to a Virginia Tech athlete, but we have yet to determine where the term came from. It's simple: it's a nonsense word which a student in the 1890's, one O.M. Stull, included in a cheer he submitted to a contest which was being held to pick a new school cheer. Said cheer went something like this:

"Hokie, Hokie, Hokie, Hi
Tech, Tech, VPI
Solarex, solarah
Polytech Virginia
Ray, rah, VPI,
TEAM TEAM TEAM"

Okay, so it's a fairly lame cheer, but in the old days, things like that were all the rage. "Hokie" didn't mean anything -- it was simply filler to stretch out the first line so it could end in a word that would rhyme with the "I" in "VPI." Now, Wahoos (the hopeless, hapless denizens of the University of Virginia, a sort of technical and vocational school located in Charlottesville, Virginia) will tell you that "Hokie" means "a castrated turkey." Since you can't really castrate turkeys, you'd think the Wahoos would realize that their retroactive definition makes no sense, but sadly, asking a Wahoo to make sense is usually asking for more intellectual capacity than he or she has got. (38) What instrument did Joel Furr play in the Blacksburg High School band? Alto saxophone. And damned badly, too. (39) What does Joel typically say when someone asks him, rhetorically, how he is? "Paralyzed by fear. You?" (40) What's the best sort of implement to use when eating ice cream? 
Tiny little wooden spoons, the sort that look like they were cut en masse out of some thin piece of wood. You can get them in large quantities at Francesca's on Ninth Street in Durham, North Carolina. They're fun to eat ice cream with and they're environmentally friendly. (41) What clubs and organizations does Joel Furr belong to? Joel has never been much of a joiner in the sense of signing up for clubs and organizations; he prefers to have his free time to himself rather than having to head out to some meeting each night of the week. He belonged to the Demosthenian Society when he was a student at the University of Georgia and belonged briefly to two professional associations when he was in graduate school but never attended any events or conferences. Joel dislikes the petty politics that plague many organizations and prefers to remain aloof from the madding crowds who use their officership in various organizations as some sort of ego fix. That being said, he has belonged to Toastmasters International, the world's largest public-speaking education organization, since July 1, 1989, and has served in several District Officer positions, including two terms as a Division Governor and one partial term as Lieutenant Governor Marketing in District 66 (central, eastern, and western Virginia) and one term as Public Relations Officer for District 37 (North Carolina). He earned his DTM (Distinguished Toastmaster) award in 1993 after four years of membership and has also received the ATM (Able Toastmaster) Bronze speaking certification. Joel has served as a sponsor for three new Toastmasters clubs (CELCO Toastmasters, #8108-66, ISE Toastmasters, #8976-66, and Bull City Toastmasters, #9891-37) and has served two terms as a Club President (one term with Christiansburg Toastmasters, #3715-66 and one term with Bull City Toastmasters, #9891-37). 
Toastmasters is the only organization he's ever taken very seriously and that was mainly the result of boredom and ennui during graduate school -- serving as a Toastmasters officer gave him something to do that brought him into contact with people. The organization is worthwhile and has helped many people become better communicators but, sadly, the organization at the state level is often plagued by the same sort of petty politics and infighting that Joel prefers to avoid at all costs. Joel is relatively inactive in Toastmasters these days. (42) What's Joel Furr's ethnic and socioeconomic background? Joel is, to be blunt, highly educated white trash -- the scion of generations of poor crackers in rural North Carolina and Florida. He does not come from any clear-cut European ancestral background -- he's your basic American mongrel, not precisely what you'd call Anglo-Saxon and not precisely derived from the British Isles. At least one great-grandmother was still speaking Dutch most of her life and he does have a blood trait which is predominantly found in peoples of Mediterranean descent. His family, on both sides, has been resident in the rural South for so many years that the country of origin of any branch of the family is mostly guesswork. His earliest known ancestor, one Henry Furr (or, perhaps, Heinrich Furrer), is recorded as having arrived in the Carolinas in 1742, having come from Zurich in Switzerland. However, there are currently no Furrs listed in the Swiss telephone directory so it's anyone's guess as to whether Henry Furr was actually Swiss or whether he had just traveled there from elsewhere before journeying onward to America. In any case, Furr's ancestors, once they reached America, made their homes in the South and generally avoided those states north of the Potomac and Ohio. Furr has, as far as anyone can determine, exactly zero blood relatives who originate north of the Mason-Dixon line. 
His mother and father grew up in the Depression-era South: his father's father was a textile mill foreman in rural North Carolina and his mother's father was a mostly-unemployed jack-of-all-trades and farmer in a rural area on Florida's Gulf Coast. Both parents came from families where no one had ever gone to college yet both parents not only strove and toiled and studied and made it to college, but did so well that they each received master's degrees (father, in nuclear physics; mother, in botany). Both parents went on to Duke University to work on doctorates -- and that's where they met, in a required language class the morning after Furr's father had put in an all-night shift working on the campus Van de Graaff generator. His father asked his mother, a total stranger at the time, if she wanted to get a cup of coffee. When class ended that day, she followed him out of class and when he got done being confused at the fact that she'd actually followed him, a relationship was born. Unfortunately, only Furr's father finished his Ph.D. -- his mother worked on hers for years but stopped just short. Furr's father earned his doctorate in nuclear physics from Duke and was offered a tenure-track position at Virginia Tech, but Tech made it clear that their anti-nepotism policy would prevent them from offering Furr's mother any position at all even if she finished her Ph.D. in plant physiology. Lacking the motivation to finish a Ph.D. that she would not get to use in any meaningful way, Furr's mother never finished her studies. Furr's father, a full professor, worked for many years at the nuclear reactor at Virginia Tech and, when that program was slated for downscaling and eventual closure, moved to the new Safety department to head up Virginia Tech's occupational safety efforts. Furr's mother, on the other hand, spent several years as a bored housewife, taking part in university events as a professor's wife until children finally started to arrive in the mid-1960's. 
After years spent raising kids and being a housewife, she finally took a job at the local public library -- and, by the mid-1980's, was running the place. Furr's parents both retired in 1995. They did all right for ignorant crackers from the rural South. Furr was born in September 1967 in Roanoke, Virginia (Blacksburg, home of Virginia Tech, had no hospital at the time), but grew up in the college town of Blacksburg, located in the Blue Ridge Mountains of southwestern Virginia. Blacksburg is home to Virginia's largest university but is surrounded by extremely rural parts of Appalachia to the north, south, and west -- the sort of places that have only one stoplight in the entire county. Montgomery County, where Blacksburg is located, was only somewhat less rural, and that was entirely the result of Virginia Tech. You can still go a few miles north or south from Virginia Tech and be right in the midst of darkest Appalachia. Furr does not speak with much of an accent despite growing up in Appalachia, a relatively accent-laden part of the country. This was largely the result of the averaging effect a college town has on the accents the students, faculty, and staff bring with them. With so many competing accents, everyone tends to wind up speaking Standard American before too long. On the other hand, when he wants to, when he's especially tired, or when he's talking to someone with an Appalachian or Southern accent, a muted but nonetheless bona fide cornball Suth'n accent does sneak out. Furr is very proud of growing up in Appalachia in much the same way that residents of Hell's Kitchen have convinced themselves that it's a fine thing to have grown up surrounded by squalor and ignorance. Furr's parents were well-to-do and Furr had ready access to all the books he wanted so he wasn't exactly wading in squalor or ignorance, but he saw both every time he drove out of Blacksburg and into the surrounding countryside. 
Even so, there are worse places to grow up in than the Appalachian Mountains. The countryside around Blacksburg is rolling and mountainous and beautiful and the Jefferson National Forest starts only two miles or so north of town. Furr feels awkward and out of sorts when he's visiting any part of the country that's especially flat and that doesn't have lots of trees. Trees are important. So in conclusion, it's fairly hard to say what Joel's ethnic group is or say "Joel's a ________." "White trash from Appalachia" is as good a term as any to describe him. He's not a WASP by any means: he's white, but not precisely Anglo-Saxon (though many of his forebears did come from England and Scotland), and he's never been a member of any church congregation at all, much less a practicing Protestant. Thus, he's never invited to join the good country clubs or included on the right mailing lists. C'est la vie. (43) Define "good eatins." "Good eatins" is a term often used in the South to refer to especially tasty, filling food: "Man, them's good eatins" or "Good eatins on that there hog." Good eatins can refer to a tasty cauldron of Brunswick stew, an expertly-barbecued pig, a fried chicken dinner with all the trimmings, or even so prosaic a meal as a bowl of pinto beans with onion on top and a piece of cornbread on the side. One thing that Southerners understand is that food need not be heavily seasoned or cost a lot to be filling and worthy of the term "good eatins." Simple food is often the best kind of food. Case in point: Joel Furr traveled to the mountain town of Galax, Virginia to attend the Galax Old Fiddlers' Convention one August when he was in graduate school, not being a fiddler himself but mainly just wanting to listen to an evening's worth of bluegrass and mountain music. Some friends from graduate school, all Utahns or otherwise Mormons who didn't know much about Appalachia, came along as well. 
Upon arriving at Felt Park in Galax, the traveling party from Blacksburg hit the midway for food. The Mormons cringed at some of the things being passed off as food by the locals and settled on "fajitas" -- which turned out to be ground beef and Cheez Whiz served hot in a pita pocket -- while Joel Furr instinctively headed for the "Beans" stand. This stand had the longest line at the midway and every man jack in that line was there to get a bowl of pinto beans with diced onion sprinkled on top and a piece of cornbread on the side. Joel toddled away from the stand when he'd received his food and immediately came in for astounded looks of confusion from his friends who could not conceive of anyone waiting in line for a bowl of beans with cornbread. "Them's good eatins," Joel explained, gesturing at the beans with his piece of cornbread. "Uh huh," his friends said, disbelievingly. Joel shrugged and tucked into his beans, enjoying his meal and feeling happy and content when done -- while his friends ate their "fajitas," faces wrinkled with disgust. Bright yellow cheese goo on ground beef, apparently, was not quite the haute cuisine that his friends had expected it to be -- while beans are pretty damned hard to mess up. Evidently, the concept of "good eatins" is unknown among the Latter-Day Saints -- while the rednecks from Appalachia know a good thing when they see it. (44) Joel Furr visited Las Vegas in July 1995 for the better part of a day. How much money did he gamble? How much did he lose? Not one red cent. Knowing that the odds were overwhelmingly in favor of his losing and that it's hard to stop after just one slot machine pull, Joel cleverly left the slot machines and gaming tables completely alone. His time in Las Vegas was spent wandering around the Strip eyeing the other tourists, looking at the lights, sipping a giant Margarita, and finally, going to see a Rockettes show at the Flamingo. 
Sadly, the 200-foot-tall video screen at the Circus-Circus which Hunter S. Thompson made famous in Fear and Loathing in Las Vegas was not there anymore. The Flamingo didn't have Neutrogena soap in the rooms either. Apparently Thompson got it all 25 years ago and they never restocked. (45) What were the schools in Blacksburg, Virginia like when Joel Furr was growing up there? Montgomery County, Virginia is a very rural county in the sticks of Appalachia which, for reasons best explained elsewhere, happens to be home to Virginia's largest university, Virginia Tech. The local schools, therefore, had a very split personality. Most of the schools in the county were geared toward the kids of the locals, few of whom had any plans at all to attend college and who wanted vocational and business classes and lots of 'em. The schools in Blacksburg proper, on the other hand, had student bodies that were about half locals and half kids of the Virginia Tech professors, with a small additional population of the kids of the local orthodontists and doctors stuck in the middle and usually identifying with the professors' kids. You might think that the local school system, faced with a large minority population of very bright children, would take some steps to make sure that all the kids got good educations, making sure that each child was presented with challenges and material appropriate for his intellectual level. You might even think that they'd try to put all the really bright kids in some sort of gifted and talented program. You'd be wrong, though -- because Montgomery County intentionally tried to slow the bright kids down so they couldn't be accused of elitism (gifted and talented programs being considered elitist, you see) and so the teachers could teach at the level of the lowest common denominator. 
To cite but one example, Joel Furr was reading at a second grade level before he entered kindergarten and had advanced so far by the time he entered first grade that he read his entire "Your First Reader" -- which had been intended to last him all year -- on the first day of school. The teachers and administrators at his school, not wanting to have to deal with a child who was four or five grade levels beyond what they were trying to teach the other kids, simply stuck Joel off in a second-grade reading group in order to "challenge" him. Joel's parents were pleased that their son had been moved up to a second grade reading group, but what they didn't know was that the group in question was actually made up of the kids who were considered so stupid and unteachable that they didn't actually do any reading during the reading period but instead were taken down to the gymnasium to play dodgeball (which the local kids called "bombardment") for two hours each day. Joel, not knowing any better, simply played dodgeball some days and other days snuck off to the school library and read on his own. By the time Joel Furr reached high school, the school system had developed three "tracks" for the kids in grades 9-12. You could be in the "vocational" track, the "college-bound" track, or the "honors" track. The college-bound and honors tracks were a lot alike except that the kids in the honors classes were actually presented with less work in an apparent attempt, once again, to slow them down. It came as a surprise to the honors students to find that the college-bound English classes were reading more books and writing more papers than they were. Joel Furr took all the honors classes Blacksburg High School offered -- social studies and English classes, mainly -- and even though he was so painfully bored by school that he rarely if ever took homework seriously (assuming he did it at all), he was always stuck in the honors classes again the next year. 
Why, you ask, was he placed in the honors classes year after year if he had lousy grades? Simple: to keep him away from the "normal" kids in the college-bound track. That's why all the bright kids were in the honors program -- to keep them from disrupting the "college-bound" classes. At least, that's the conclusion all the bright kids tended to come to, especially after they found out that the "college-bound" classes were in many ways tougher. The honors students tended to get classes where the teacher discussed "fire imagery" in Arthur Miller's The Crucible for days on end. Wheee! In addition to creating the so-called "Honors ghetto," the schools also created a Gifted and Talented program by the early 1980's -- and Joel Furr was, of course, in said program. This meant that he was bused along with all the other bright kids to the high school in the county seat, Christiansburg, one day per year to be shown a day's worth of art films, short films, and films like "The Wizard of Speed and Time." That was it. That was the "Gifted and Talented" program. Uh huh. Gifted and Talented program, my ass. Quite a few of Joel's peers did get decent educations despite the school system and made it into universities like Brown and Duke and the University of Chicago, but Joel simply hadn't cared enough to jump through the hoops necessary to get decent grades. Classwork had been so utterly boring and full of busy-work assignments that he spent most of high school with his nose in a book. He wound up attending the University of Georgia. Thank Heaven for high SAT scores -- with his grades alone, he would have been lucky to get into a community college. (46) Where does Carole, Joel Furr's wife, come from? Carole claims to have been born on the coast of California, near Monterey, in the town of Pebble Beach. After moving from the town of Pacific Grove at age 5, she spent the rest of her childhood just outside Dayton, Ohio, in a town called Oakwood. 
Since graduating from high school, she has lived in Cambridge (Massachusetts), Baltimore, and northern Virginia. She now lives in Durham, North Carolina. This is the version of events made available for public consumption, however -- the real truth is far stranger yet. In actuality, Carole is a California sea otter in human form. Her people (the otters), curious about the game of golf which was regularly played by the humans in Pebble Beach, selected her to be sent among the humans to learn this strange game and bring back its secrets. She was left, clutching a putter in her tiny little otter paw, on the thirteenth green at the Pebble Beach golf course in hopes that golfers would discover her and take her among them to learn the secrets of golf. Unfortunately, two humans who were simply touring the golf course happened to stumble upon the little otter girl and took her back to live with them. Over time, she came to resemble the humans she lived with more and more until you can hardly tell by looking at her that she's a sea otter at all. (47) Who is the Official Stooge of alt.fan.joel-furr? That would be Joe Littrell of Amherst, Massachusetts. When Joel Furr was courting Carole in December of 1995, he wanted to send her an East Carolina University sweatshirt anonymously to try to hint to her that she should consider moving to North Carolina. Joel asked for a stooge on alt.fan.joel-furr; Joe Littrell volunteered; Joel sent the shirt to Joe to send to Carole and Joe graciously complied. Unfortunately, when Carole got the package, she instantly guessed who the true sender of the shirt was and never even looked at the return address or postmark until after she'd told Joel "thanks for the sweatshirt" and got asked "didn't the postmark fool you?" Sigh. Okay, so it didn't exactly come off as planned, but Joe Littrell nonetheless earned the title of "Official Stooge." All hail the Stooge; long may he reign. (48) What exactly is "hungus?" No one knows. 
At least one theory exists that it has to do with the substances crusted on and life forms found growing on Joe Cochrane's bathroom floor, but since all scientists who have attempted to analyze said substances and life forms have gone instantly mad, it seems doubtful that a descriptive term having to do with said substances and life forms would have entered the scientific jargon. At present, therefore, "hungus" must remain undefined. (49) What is the name of the night manager at the International House of Pancakes franchise on Baxter Street in Athens, Georgia? Hector. (50) What is Joel Furr's best category in Trivial Pursuit? Geography. Joel's a serious map junkie; he loves to pore over maps for hours and hours. One of his favorite hobbies is asking people where they're from and then, regardless of what they answer, somehow managing to ask a question that implies extreme familiarity with the locale cited. Given that he's managed, purely by accident, to absorb the names and general locations of hundreds if not thousands of towns and localities around the world as a result of his map-browsing, he can often startle people with this trick. (N.B.: it doesn't work very well when the person in question is expecting it.) It's not really a trick, though -- he really does know a lot about places around the globe and especially about the United States of America. It just seems like a trick to some people who tell him they're from, oh, Brooklyn, and then get asked "Which neighborhood? Flatbush?" The normal assumption is that Joel has been to said locality and knows it well -- when in fact, he generally only knows a few things about each locality and certainly hasn't been to every city in the USA and every country on the planet. Yet. Technically speaking, this "trick" is a form of the carnival skill called "cold reading," practiced by mediums, palmists, and so forth. 
By acting authoritative, speaking in ominous generalities, and making maximum use of any information the "mark" supplies, they can appear to have supernatural powers. Some "cold readers" are uncannily good. That doesn't mean they have psychic powers. (Neither does Joel.) (51) Who is Wally? Wally is a small gopherlike being who lives under Joel Furr's bed. Neither Joel nor his wife Carole is entirely sure how Wally came to dwell under the bed. Joel and Carole were doing some shopping for home furnishings in January of 1996 and happened to be at K-Mart loading up on paper goods, shelving, various chemicals, and so forth, when it occurred to them that what the apartment really needed was a small gopherlike being. Unfortunately, none of the employees of that particular K-Mart admitted knowing where the "Small Gopherlike Beings" section might be found. Joel and Carole were forced to return home, lacking the small gopherlike being they'd set their hearts on. As it happened, however, a small gopherlike being was found living under the bed a couple of days later, sitting in a small (gopher-sized) La-Z Boy armchair reading a copy of "No Exit" by Jean-Paul Sartre and chuckling to itself. This being answers to the name of Wally and seems hell-bent on gathering all the shoes in the apartment together under the bed where they can be used for purposes unknown. Wally competed on behalf of the Gopherlike Beings team in the 1996 Summer Olympics in Atlanta, Georgia. His event was solo kayaking -- this being one of the few events in which beings only one foot tall would not be at a large competitive disadvantage against the humans. He had wanted to compete in gymnastics -- Wally is very fond of brachiating -- but the oversized pommel horses and rings and such were just too large to make that practicable. 
Wally spent a week or so competing on the Ocoee River in southeastern Tennessee against his much larger opposition and, while he did not bring home a medal, nonetheless made his fellow gopherlike beings proud. Wally's e-mail address is wallyglb@furrs.org. (52) Where can you go in Durham, North Carolina, to get "spaghetti and salmon cakes?" That would be the Pan-Pan Diner, located just off I-85 at the Hillandale Road exit. For reasons unknown, virtually every category of food on the Pan-Pan Diner's menu offers the option of salmon cakes on the side. The menu lists, for example, "pancakes and sausage," "pancakes and bacon," and "pancakes and salmon cakes." Salmon cakes are available as an option on dozens of items, up to and including "spaghetti and salmon cakes." No one knows why. (53) What is Joel Furr's favorite soft drink? Coca-Cola. He always preferred Coca-Cola to Pepsi-Cola when he was a child -- partly because he preferred the taste of Coke to Pepsi and partially because Pepsi's negative attack ads (which attempted to convince people that only squares and idiots drank Coca-Cola) irritated the living hell out of him. This preference for Coca-Cola was reinforced when he was in college at the University of Georgia. Coke machines were everywhere on campus and there wasn't a Pepsi machine to be seen. Coca-Cola's stockholders and founders and such had been very good to the University over the years and accordingly, no one at the university had much inclination to supplant Coke with a competing soda. The Athens community at large seemed to share this sentiment -- it was not unusual to walk into a convenience store and see two- liter jugs of Coca-Cola, stored at room temperature in the middle of the floor, outselling refrigerated two-liter jugs of Pepsi on sale at half the price. Coca-Cola would usually sell out entirely at any given convenience store before any great dents would be made in the Pepsi supply at all. 
Things reached the point of ultimate absurdity when, in 1986, the Coca-Cola company celebrated its centennial and, to remind us all which side our bread was buttered on, sponsored a special halftime celebration at a UGA football game which featured dancing Coca-Cola cans. Parenthetically, one of the dancing cans of Coca-Cola deflated spontaneously during the show and the person inside went on dancing merrily, apparently unable to tell that the inflated cylinder he or she was wearing was now hanging on him or her like a bright red shroud. Joel finished college a confirmed Coca-Cola addict, sadly, and only through great effort was able to switch to drinking Diet Coke in graduate school. Had he not succeeded in this effort, his two-liter-per-day Coke habit would probably have caused him to balloon to 300 pounds. Thank God for Diet Coke -- Joel remains a healthy 6'2" 210-pounder. (54) How many fingers am I holding up? Six. (55) Do we need more plastic cups? You bet. (56) What color should mayonnaise be? Yellow. Real mayonnaise, e.g. Duke's Mayonnaise, is yellow. (57) What is Joel Furr's astrological sign? If you believe in astrology, Joel Furr would be a Virgo, as he was born on September 20, 1967 at about 4:30 in the afternoon. If you have half a clue, however, you'll know that astrology is a bunch of utter bunkum, a pseudoscience not worthy of the billions of column-inches dedicated to it in magazines and newspapers each year. For one thing, the astrological tables developed millennia ago (to make it possible to generate horoscopes even on cloudy nights) contained errors which, over time, have accumulated to the point that the calculations of which planet is in which constellation are totally off. Evidently, actually going outside and looking at the sky to demonstrate that Venus is not in Aries at the current moment, despite what your friendly local astrologer might say, is too complicated for most people. 
Furthermore, a moment's consideration of the laws of physics should make it obvious that the obstetrician or midwife has a greater gravitational influence on a newborn child than any planet other than Earth. Finally, actually looking at horoscopes in the newspaper or the more detailed horoscopes you can purchase at supermarket checkout counters will make it obvious that the horoscopes are recycled from month to month and can't possibly begin to predict what will happen to 1/12 of the world's population on any given day. Needless to say, those who are ardent believers in astrology will retroactively interpret the way events actually take place to the benefit of the astrologers: "Well, my horoscope said I would meet a tall dark stranger who would bring me good fortune, and there was that guy who pulled up behind me at the light at the corner of Smyth Avenue and Winderly Street... and if he hadn't come to a stop behind me, he'd have totalled my car, so I guess he brought me good fortune. Wow, my horoscope was right!!!!" Uh huh. (58) What is Joel Furr's Myers-Briggs type? The last time he took the test, he got an ESTP result. The first time he took the test, he got an ENFP result. The E and the P are pretty certain: E for Extroversion and P for Perceiving (how one uses time, etc.), but the other two are indefinite. If you go by the actual personality descriptions in the various books that explain the Myers-Briggs test, the ESTP sounds more like Joel than does the ENFP. If you're not familiar with the test or the books that explain it, look in your local college library. Books include "Type Talk" and "Please Understand Me" but more may have come out since Joel was in graduate school and routinely being subjected to the scrutiny of Myers-Briggs aficionados. Joel can see that the Myers-Briggs has some validity, but still dislikes the emphasis some employers and administrators place on it. 
Dividing the human race up into sixteen basic personality types smacks of astrological mumbo-jumbo, even if there's somewhat more of a scientific basis to the Myers-Briggs than to astrology. Joel once worked for a man who was so into the Myers-Briggs that he had posted his own Myers-Briggs personality type on an engraved plastic sign on his office door: "You Are Now Entering 'INTJ' Zone." The "INTJ" was in big letters. Really. Once Joel grudgingly informed his boss what his Myers-Briggs type was, it was brought up over and over again for the rest of the two years that Joel worked for that office. A lot of the assignments Joel was given were prefaced by "You're an ESTP, so you'll love this." If an assignment turned out to be something Joel hated to do, he was told, with a big, cheerful smile, "No, you just don't understand it yet. This is exactly the sort of work you ESTP's love to do." Of course, this same boss once turned out all the lights in his office, sat in the dark wearing a hardhat, and muttered darkly to himself about all the North Vietnamese he had napalmed when he was a fighter pilot in Vietnam. Apparently INTJ's are good at napalming people, but they don't like it. (59) Where are your videos? Glassy smile. "I'm sorry, sir. We don't have any videos." Okay, okay, an explanation: when Joel Furr worked for the Montgomery-Floyd Regional Library system in southwestern Virginia, his job was to work the circulation desk, check books out and in, and answer reference questions that patrons brought to the desk or phoned in. Sadly, the clientele of the library were not exactly a bunch of rocket scientists and Joel and his co-workers wound up accumulating a lengthy list of utterly stupid questions that were asked over and over again by various of the local white trash. The most annoying of these was "Where are your videos?" 
For some strange reason, many of the patrons of the library had gotten the odd idea that a library was supposed to double as a video store and came up to the desk on occasion to ask where the videotapes were kept. You might be thinking "Well, sure, some libraries have educational videotapes and nature videotapes, so what's the big deal?" The big deal was that people weren't asking for educational videotapes or nature videotapes -- they were asking for recent-run movies that had only just come out on videotape in the stores, and when they were told "We don't have any videos," they'd gawk disbelievingly and then ask again to make sure they hadn't heard the librarians wrong. To be completely truthful, the library did in fact have two videos, both training videos the local Cub Scout troops had prevailed on the library system to keep under the desk for any Cub leaders who came by, but other than that, the place had no videos and had no plans to acquire any. With a limited budget, dollars had to be allotted between the bestseller books everyone wanted to read, children's books, books on tape, magazines, newspapers, and then general collection development. There was no money left over for luxuries such as videotapes, much less an extensive collection such as most of the patrons seemed to take for granted that the library must have hidden somewhere. On more than one occasion, conversations similar to this took place at the circulation desk: Patron: "Hi" Librarian: "Hi. Can I help you?" Patron: "Yes, where are your videotapes?" Librarian: "I'm sorry, we don't have any videotapes." Patron: "Oh, so you just have donated videotapes, educational tapes, nature tapes, and stuff like that?" Librarian: "No, we don't have any videotapes at all." Patron: "Oh, right. Well, could you show me where the instructional videos are kept?" Librarian: "We don't have any. We don't have any videotapes at all." Patron: "You mean you don't have any videotapes?" Librarian: "That's right, sir. 
We don't have any." Patron: "And you call yourself a library?" Grrrrr. (60) How is "Furr" pronounced? Some people pronounce "Furr" as though it was spelled "Fyure" or "Foor" or even more unlikely pronunciations. The name is actually pronounced exactly as though it had only one "r" -- in other words, like the word "fur" which we English speakers use to refer to the pelt of an animal. (61) What is the law? Not to spill blood. Are we not men? (62) Where do the keys go? The keys go under the sofa. Silly humans! (63) What are some of the nicknames that Joel Furr has gone by over the years? For some reason, Joel Furr has never had a great deal of luck getting people to call him by various nicknames. Joel has managed to get the people at his office to call him "Jay" (which, for some odd reason, sometimes means that he gets called "Jaybird") but none of his friends seem able to make the switch from Joel to Jay. To his family and friends, "Joel" it is and "Joel" it appears it will always be. The only exceptions to this general rule came while Joel worked at the Hardee's on South Main Street in Blacksburg, Virginia from 1984 to 1985 during his senior year of high school and the summer that came after. Having found where the manager of the store kept the label-maker that made the label tape that went on the "Hardee's" nametags all the employees wore, Joel made himself a nametag that said "FLUFFY" and, when that one got old, another that said "STRUDEL." No one to speak of ever noticed, though he minced around as "STRUDEL" for months. Joel is trying to get people to call him "Jay" and is having slow success. You can call him whatever you like; he'll answer to either version of his name. (64) What happens when you put a real, formerly alive, ocean-bred sponge back in water? It comes back to life and devours you. Be warned. (65) What kind of underwear does Joel Furr wear?
Until recently, Joel had been wearing plain white briefs -- had been wearing this style of undergarment his whole life, in fact -- but someone whom he feels is worth listening to has convinced him to begin wearing colored Jockey briefs. Any guesses who this might be? His new habiliments have resulted in occasional startled yelps ("Yah!") when he steps up to a bathroom fixture and sees crimson-colored fabric within when he opens his fly. Overcoming the habits of 29 years is not something that can be done overnight. (66) Who is the greatest cat of all time? The greatest cat of all time is Nubbins the Cat, a.k.a. Miss Kitty, Maximum Cat, Cuddles, Cat Nubbins, etc. etc. ad infinitum. Nubbins is a Jellicle cat and is therefore black and white in color. She is a somewhat rascally cat, but she means well. Nubbins is well-known among cats for her furriness, said furriness being of extremely high quality. Furthermore, while many cats are furry, and in fact cats in general are known for being furry, Nubbins takes furriness one step further. She keeps some of her furriness in reserve against the day when, due to emergency conditions or shortages elsewhere, it may be needed. Nubbins is a public-spirited cat. Caution should be exercised when petting Nubbins the Cat. While Nubbins loves everyone and is full of warmth and good cheer, she does at times chastise those who presume too much and pet her when she is not in a pettable mood. Such chastisement rarely leaves permanent scars or crippling injuries, however; Nubbins is a high quality cat. Joel Furr assumes no responsibility for the activities of Nubbins the Cat. Joel Furr accepts no liability for any property damage, personal injury, and/or breaches of national security which may take place as a result of her actions. Caution should be taken when approaching Nubbins the Cat when she is aboard her flying saucer; said saucer is capable of speeds well in excess of Mach 10, but Nubbins is at best an indifferent driver. 
(67) How can I embarrass myself in front of eight thousand people? If you attend a Durham Bulls baseball game, you can easily embarrass yourself in front of eight thousand people! During the sixth inning of each game, a lucky fan is selected and escorted out onto the field to try to throw a baseball through a hole in a large wooden target held up by two Bulls employees. The fan gets three tries. Most fans miss all three times... the hole is not much larger than a softball, say, and the target itself is held some distance away (usually about fifteen feet). If you get one ball through, you get a free Coke. If you get two balls through, you get a free Bulls cap. If you get all three balls through, you win a television or something. Rest assured that they don't give out a lot of televisions. Since you're actually on the playing surface, just to the right of the first base line in the foul area, you're easily visible from every seat in the ballpark -- and since the toss is done between half innings when the players are off the field, you're the main attraction for as long as it takes to get it over with. The fans boo or groan loudly with each miss and the contestant usually trots off the field, head hung low and feeling really stupid, night after night. If this sounds like something you would like to experience, it's easy to get yourself chosen as the lucky fan. All you have to do is be the first fan through the gates when the ballpark opens at six p.m. on game nights and go straight up to the Bulls employee selling scorecards and programs. The program is the same night after night -- it's a big color tabloid called "Bulls Illustrated." A new edition only comes out three times each season so it's not exactly a hot seller for the average fan. If you're the first or one of the first fans in the gate, you'll be assured of first dibs at the program stack that night -- and to make sure you're the lucky fan, all you have to do is buy four copies of the program. 
They're a dollar each, so it doesn't cost a lot. Smile broadly and walk away carrying your programs. When you're out of sight of the program stand, flip through the programs and find the signature of the Bulls' radio announcer, Steve Barnes, on the Mutual Drug ad somewhere in the program. Since they want to make sure that they have a "lucky fan" each night, they make sure and stick the signed copy somewhere in the top of the stack, usually no lower than the fourth copy down. By buying the top four copies in the stack, you've assured yourself of having the copy with Barnes' signature. This means that you are the "lucky fan" and can sheepishly report to the Fan Assistance Center during the middle of the fifth inning when they ask everyone to open their copies of "Bulls Illustrated" and look on page X for the Mutual Drug ad. The Bulls would probably be annoyed if you did this night after night, but so far no one has abused the opportunity. Anyone can be the "lucky fan" if you arrange things right. If you're going with friends to a game, get there before they do, buy the necessary number of copies of the program, toss all but two of the copies (the one with the signature and one other), and when your friends arrive, say "I bought a program but they gave me two by accident. Here, you can have the other copy" and give the intended victim the signed copy. Wait with concealed glee for the middle of the fifth inning when they ask everyone to pull out their copies, look in yours with feigned innocence, and then clap your friend on the back when he or she finds the signature in his or her copy. Hours of family fun -- and it only costs the price of a game ticket ($4.50) plus $4.00 for your four programs. (68) Why does Joel Furr have so many strange and pointless pictures of himself and his friends on his Web page? Because people LOOK at them, that's why.
There's no picture too pointless and boring that people won't look at it -- and besides, there actually are people who wonder what Joel looks like. The only people who complain are people who do web searches for "pictures," hoping to find pornography for viewing and downloading and instead find pictures of Joel Furr riding roller coasters and Joel Furr playing miniature golf. (69) What's special about the Duke University parking deck at the corner of Fulton Street and NC 147 in Durham, North Carolina? Due to its gargantuan size and excessive lighting, it's visible from orbit. The wattage alone used to illuminate the structure each night would suffice to power the Energizer Bunny for the next six and a half million years. (70) What fortune cookie does Joel Furr always get? "DO NOT LEAVE THIS RESTAURANT. PERIL AWAITS!" (71) What is "The Mother of All Rivers?" Oddly, "The Mother of All Rivers" is a term that has come to be applied to the James River of Virginia. Joel Furr once took a Government Administration class in graduate school; the class was mainly full of students from Furr's public administration department but there were also two students from the forestry and wildlife department, including one guy named John Stanovic. John had spent the previous summer working on a fisheries project on the headwaters of the James River near the West Virginia border. Accordingly, EVERY SINGLE TIME he was called upon to do a paper presenting some proposal, it'd be based on fisheries management in the upper James headwaters. And EVERY SINGLE TIME he mentioned the James in his classroom reports, he wouldn't just say "the James." He'd say "when I was working on the James..." pause for dramatic effect, then continue, "the mother of ALL rivers," and then go on. Every single freaking time. As far as any of the other students could tell, he barely even noticed himself doing this. Listening to this over and over again for the entire duration of a semester will take a toll on you. 
Consequently, even to the present date, eight years later, Joel Furr cannot help appending "the Mother of All Rivers" to any mention of the James River. John Stanovic, wherever you are, you're going to pay. (72) So, what was it like attending Georgia Tech? Joel Furr didn't attend Georgia Tech, you low-lives. The University of Georgia is the large land-grant comprehensive university located about an hour's drive northeast of Atlanta in the small city of Athens. It's home to programs in liberal arts, sciences, agriculture, human resources, business, law, veterinary medicine, and so on. It's probably best known as the alma mater of Heisman Trophy running back Herschel Walker, but it's also one of the oldest state universities and has a beautiful campus and many distinguished alumni. The University of Georgia is NOT the same institution as Georgia Tech, a.k.a. "The Georgia Institute of Technology". Georgia Tech, a.k.a. "Calculator Maggot University," is a substantially inferior school located in the middle of Atlanta, known mainly for its engineering programs and for being the site of the 1996 Olympics' athlete housing. Georgia Tech students do not bathe, use utensils at meals, or speak in coherent English much of the time. While it is of course rude to mock and make sport of the many inadequacies of Georgia Tech students, care should be taken to note the many ways in which the average Georgia Tech student falls short of the physical, mental, and social perfection exemplified by the average University of Georgia student in order to better distinguish the two schools' students and alumni. (73) What book is Joel Furr currently working on? "The Big Book of Hellish Vengeance." It'll be a coffee-table book suitable for holiday giving. Keep an eye out for it in your favorite bookstore. (74) Who the hell is "Yalin Ekici?" 
"Yalin Ekici," the loon who fills alt.fan.joel-furr with megabytes of drivel about the so-called Armenian genocide of Turks in 1914, is believed by many to be none other than Ahmet Cosar, the infamous "Serdar Argic" of soc.history and soc.culture.turkish fame. Cosar lost his access at the University of Minnesota in the spring of 1995 (apparently as a result of failing to register for classes two semesters in a row) and was absent from Usenet for a while. He returned with a vengeance later in the year under a new pseudonym, "Yalin Ekici," posting from ephesus@netcom.com. Since this new userid makes frequent reference to "Dr. Argic" and is recycling the old Argic library of propaganda, most people feel that this is none other than our old friend Cosar, back to his usual tricks. Netcom claims to have told "Ekici" to calm the hell down and stop spamming dozens of newsgroups with his idiotic drivel about the evil Armenians, and in fact, Cosar was quiet for a few days after Netcom said they'd reprimanded him. However, the period of quietude did not last long and Cosar returned to posting his idiocy with a vengeance and Netcom has remained mute to all requests for information on what, if anything, they are doing about the situation. Not for nothing is Netcom considered by many to be a less than exemplary member of the Internet community. In addition to posting under the pseudonym of "Yalin Ekici," Cosar also posts under the pseudonyms of "Arif Kiziltug" and "Murat Kutan," apparently in hopes of convincing the world at large that he's not a lone kook. Important safety tip: if you feel compelled to flame him, don't reply to his messages directly. The algorithm Cosar uses to locate articles to follow up to apparently searches for references to the message-id's of his old articles. In other words, he looks for responses to his articles and follows up to these responses with random attacks out of his library of inane propaganda. 
It seems odd to many that Cosar would go to such incredible lengths for such a bad cause. No one other than him, apparently, sincerely believes that Armenians committed genocide against Turks in 1914. It seems to be a continuing source of frustration to Cosar that, despite his best efforts, we all still go around believing that it was the Turks who did their utmost to wipe out every Armenian village they could find. Even though Cosar's claims are roughly analogous to someone claiming that Jews herded Germans by the millions into the gas chambers in 1939-1945, he goes right on posting, secure in his sick delusions. (75) What is the ultimate slow dancing song? "Nights in White Satin," by the Moody Blues. The meaning of the song is completely irrelevant -- the song was made to slow dance to. "Wonderful Tonight" by Eric Clapton is also a fine slow dancing song, but when you actually listen to the words, it's about a guy who gets drunk at a party and has to be shoveled into bed by his wife -- hardly the stuff of great romance. Carole, beacon of Joel Furr's existence and the radiant star who guides him through the day, feels that the ultimate slow dancing song is "Lady in Red," by Chris DeBurgh (better known as the guy who sang "Don't Pay the Ferryman"). She may have a point. (76) Who was President of Joel Furr's high school Science Club? Jimmy Page. Yes, the Jimmy Page. Joel Furr's high school, Blacksburg High School of Blacksburg, Virginia, encouraged membership in the various school clubs by setting aside one morning per month (or thereabouts) for club meetings to be held in lieu of classes. Attendance at clubs was essentially mandatory; if you didn't choose some club to go to, you had to spend all morning being watched like a hawk in a study hall run by one of the more irritable teachers. Consequently, everyone found at least one club they could endure and attended its meetings each month.
Those students who were either not eligible for or not interested in membership in clubs like the Leo Club, the Key Club, or the Fellowship of Christian Athletes had various clubs like the Spanish Club, the French Club, or, yes, the Science Club available to them. Since it was well-known that members of the Science Club got to see Dr. Wightman set himself on fire one day each year, the Science Club was the most popular club in the entire school most years and could count on raking in the lion's share of those students who were not otherwise inclined toward some of the more specialized clubs. The Science Club could be counted on to accomplish precisely nothing all year since each month's program consisted of someone's father (usually a physics or chemistry or biology professor from Virginia Tech) speaking on whatever it was he did for a living ... surface chemistry, nuclear physics, iguanas, whatever. Sitting boredly in the back of the room while someone's dad set himself on fire was as good a way as any to spend a morning but it wasn't the sort of thing that led people to take the club and its mission to encourage the study of science very seriously. Needless to say, it was no great honor to be elected President of the Blacksburg High School Science Club. That's how Jimmy Page got elected President of the Science Club. The first meeting of the year was always the meeting at which club officers were elected, and one year someone nominated Jimmy Page. Since the teacher who was the official sponsor of the Science Club didn't have a clue who Jimmy Page was, she wrote the nomination on the board with all the others and, after his nearly unanimous election, dutifully noted "James Page" down on the officers form that she had to turn in to the school office after the first meeting. Page never seemed to make it to meetings, oddly, so the club vice president always had to call meetings to order. (77) What is the secret of making great Bisquick pancakes? 
Damned if Joel Furr, or for that matter, ANY of the Scouts of Boy Scout Troop 44, could tell you. Without exception, during the years Joel Furr was a Scout, his troop always took a big box of Bisquick pancake mix along on camping trips (in addition to the other food). This was the case for two reasons. First, one of the Troop's Assistant Scoutmasters was Arthur "Torchy" Walrath, author of the official Boy Scout Cookbook. Torchy could do amazing things with Bisquick and the Scouts always made sure to have the raw materials close at hand, just in case Torchy came along on any given camping trip. Second, Bisquick was sort of a last-ditch emergency ration, just in case something bad happened to the other food that had been brought along. Without fail, something would happen to the bulk of the food -- often, the reason was simple: it was all eaten on the first night in a fit of orgiastic gluttony -- and by the final morning of the camping trip, the Scouts would be reduced to eyeing the box of Bisquick hungrily. Eventually, one would say "Well, this time we know what to do to get the pancakes to come out right" and the Great Experiment would resume. Bisquick, used by calm, sane cooks who are not crazed from smoke, cold, and exhaustion, can be used to make tasty pancakes and biscuits and so forth. On the other hand, the Scouts of Troop 44 were indifferent cooks at best (the freeze-dried food they took along was usually eaten cold and uncooked, with a cup or two of water poured into the foil packets in a futile attempt at effective re-hydration), hardly qualified as "sane" under the best of circumstances... and were usually so enervated by the exertions of the trip that they would double every measure called for on the back of the box and halve the cooking time. If it was necessary to leave out the eggs or oil or whatever because the Scouts didn't have any, then hey, so be it. Strict adherence to instructions was not a skill the Troop 44 Scouts had much familiarity with.
The inevitable result was something unworthy of the name of "pancake" -- which consequently became known among the Scouts as a "fritter." Your average fritter weighed in at a pound and a half and was sufficiently dense that fritters became widely feared as weapons; a thrown fritter was dense and solid enough to knock down most anything it struck -- AND keep its shape after impact. Eating an entire fritter was out of the question -- it would have been like trying to put yourself outside an entire sack of Quikrete. A few bites were enough to rid a boy of the pangs of hunger and leave him feeling as though he'd mistaken a sandbag for a Pop-Tart. It was little wonder that the Scouts of Troop 44 were invariably running into each other at McDonald's immediately after returning to Blacksburg and being picked up by their parents at the church; without doubt, the parents of the troop knew without having to be told that their sons would go through the refrigerator like a threshing machine if other food were not found first. (78) Why didn't Joel Furr wind up in the military? Joel wanted to enter the military after graduating from Blacksburg High School in 1985; he even went so far as to apply for and interview for a Naval ROTC scholarship, knowing full well that if he was accepted into the program this would require him to complete a full term of military service after graduation. His older sister, Julia, had already entered Duke University on an Army ROTC scholarship, so Joel was hardly ignorant of what the program required of applicants or what the program would require Joel to do after graduation. It seemed like an excellent opportunity: get his education paid for, graduate, and get to see the world as a member of an honorable profession, serving the United States. Unfortunately, there was this problem... It happened like this: In November of 1984, Joel went down to be interviewed and evaluated by a Naval officer in Richmond, Virginia. 
The officer interviewed Joel and evaluated whether or not he'd make a good Naval ROTC cadet. Things were going pretty well during the interview -- well enough, in fact, that Joel's father was told, after it was over, that he could 'bet [his] paycheck on Joel getting a scholarship.' Evidently the interviewer thought highly of Joel. Unfortunately, Mr. Furr was being told this as he was half-helping, half-carrying Joel out to the car. Joel had felt more or less okay during the drive down from Blacksburg but had begun to feel feverish during lunch and had started feeling really bad during the interview. About halfway through the scheduled length of the interview, the room started to swim and Joel passed out. The interview was cut short, needless to say, but the interviewer assured Mr. Furr that it wouldn't negatively affect the report on Joel and that Joel was a sure thing as far as a Navy ROTC scholarship went. When the Furrs made it back to Blacksburg, three and a half hours away by car, Joel was running a high fever and was babbling deliriously. He was diagnosed the next day with a full-fledged case of pneumonia. That's right, pneumonia. As diseases go, there may well be worse ones to have, but Joel can't recommend lying on one's back for a solid month, too weak to move, as an exciting laugh-fest. When he was X-rayed the next day at the hospital, he was diagnosed as having one lung more or less entirely full of green goo and the other lung about halfway full. His parents didn't tell him until he was completely recovered that the doctors had thought there was a decent chance that he'd die. Joel recovered in a month or so, spending a solid four weeks in bed unable to do much more than roll over now and then and occasionally swallow whatever liquids his parents thrust at him. It wasn't fun. When he did finally make it back onto his feet and make it back to school, he wasn't exactly in good shape, muscle-wise. 
Consequently, when the time came a month or so later to take the ROTC physical fitness test, Joel performed somewhere around the fifth percentile of applicants. Spending a month in bed without moving isn't exactly going to tone one's body up to the levels desired by the United States military. Let's put it this way: Joel did not get the scholarship. Good thing Mr. Furr didn't bet his paycheck, eh?

(79) What was the most embarrassing thing that ever happened to Joel Furr?

People who know Joel well might think that the most embarrassing thing that ever happened to him was the time he vomited his guts out on the Monument to the Confederate War Dead in the middle of Athens, Georgia's main street, Broad Street, at 5:00 p.m. on a bright, sunny Friday afternoon. And admittedly, that was an embarrassing moment, but it fails to qualify as the most embarrassing moment inasmuch as Joel felt far too ill at the time of the incident to really care if he was embarrassing himself or not. Drinking six beers and six shots of tequila in the space of about seventy-five minutes will do that to you. No, the single most embarrassing thing that ever happened to Joel Furr has to be what happened early one summer morning during the summer of 1988. Joel had graduated from the University of Georgia in June of 1988 and was spending the summer in his home town of Blacksburg, Virginia waiting for his graduate school classes to start up that fall. He tried to find a job that summer but had little luck since no one much wanted to hire a recent college graduate whose main skill was that he could write a ten page English paper in about two hours on the day the paper was due, without having read the book the paper was supposed to be about, and still get an A. Accordingly, he spent the summer lounging about, not doing much of anything.
Some days he'd drive up into the Jefferson National Forest, just north of Blacksburg, and float about on the calm and tranquil waters of eight-acre Pandapas Pond, out in the woods of the Forest. He had a black-and-yellow "two-man inflatable raft (not for life-saving purposes)" he'd picked up one summer at Cape Hatteras that served fairly well for one man if that one man happened to be six feet, two inches tall and was fond of lying on his back with a book held open on his chest. Most days, no one much came to the Pond except to walk around the circum-Pond trail and then leave again forty-five minutes later. Once in a while, someone would arrive with a canoe on top of their Wagoneer and spend a few hours paddling around the Pond while Furr floated on his back, ignoring them and reading whatever book he happened to have along. Then came the Day of the Girl Scouts. Joel was lying in the boat, half-drowsing, about nine-thirty one weekday morning when he heard a tumult from the parking lot and, a few minutes later, saw a platoon of Girl Scouts, probably Juniors, with a harried troop leader in tow, portaging silver canoes down to the Pond. The Scouts paired off and launched their canoes, voyaging out over the still waters of the Pond and chattering amiably as they paddled. This was not exactly the sort of thing Joel had wanted or expected when he'd decided to go down to the Pond that morning. It tended to break the mood something fierce; imagine Thoreau feeling as he did about Walden Pond if some idiots in canoes had routinely showed up and paddled about on days he was feeling philosophical. Joel was not entirely awake, nor entirely in a good mood, and thus he can't entirely be blamed for not foreseeing what happened next.
Joel decided he would stand up in his boat and count how many Girl Scouts there were in all -- and if there were more than ten, he reasoned, the Pond could be considered "too crowded" and he would have a legitimate excuse to give up and go home. Standing up in the boat was no problem; the boat was like a big oval doughnut with a flat bottom and Joel could actually stand up in it fairly well and see around the Pond and count, "two, four, six, eight, ten, twelve, thirteen, and one troop leader, yeah, time to split." Just as he was making this decision, the bottom of the boat came free of the side of the boat and he plunged straight down, through the boat, and into the water of the Pond. From an observer's point of view, it must have looked as though Joel suddenly vanished, sucked down into the Pond by something lurking underneath the surface of the water. One moment, he was there; the next, he was gone. His boat began a somewhat slower collapse, its hull integrity destroyed when the "deck" ripped free of the sides. Joel doesn't like fish. Pandapas Pond has fish in it. With no warning at all, Joel was down where the fish lived, and he didn't like it at all. Much in the same fashion that cartoon characters run on thin air, Joel rose up out of the water and moved like a Jet-Ski for the nearest land, which happened to be the nearby Pandapas Pond island, smack in the middle of the Pond. "Ignominy" doesn't begin to describe it. Here was Joel, soaked from head to toe, hunkered down on an island approximately the same size as a postage stamp like some sort of primeval amphibian gazing darkly over the Carboniferous swamps. There were the Girl Scouts, happily learning the ins and outs of canoe navigation and peering curiously at the spectacle on the island. What was Joel to do? Swim ashore and risk touching Pond fish?
Sit there and hope the Scouts or their leader would discreetly come over and give him a lift to shore without asking too many embarrassing questions about what he'd thought he was doing when he stood up in a cheap plastic inflatable boat? As it happened, his bacon was saved when the troop leader noticed his dilemma and paddled the canoe she shared with one Scout over to the island and asked him if he needed any help. "Um," Joel said. History does not record how Joel explained what had transpired nor the manner in which he requested a lift to shore; presumably he managed somehow because in due order he was delivered onto the shore, ruined boat and all, and wished a good day by the over-cheery Girl Scouts. Suffice it to say that when Joel purchased a replacement boat for future nautical endeavors, he concluded that it would be best for all involved if he remained safely seated or supine when aboard and left standing and walking for when he had returned to dry land.

(80) When did Joel Furr learn to read?

Around age 3, or maybe a little earlier. Joel's younger brother Robin was born in late July 1970, two years and ten months after Joel's birth. Needless to say, the newborn required much care and attention and Joel's parents did not have the time necessary to closely supervise their other son. Consequently, they did anything they could to keep him occupied -- reading him a book and then handing him the book to page through while they attended to Robin. Joel would pore over the books for hours, looking at the pictures -- and, as it turns out, the words. It caught them by surprise when they realized one day that Joel was studying the pages with rather more concentration than one would expect of a child not quite yet three years old who didn't know how to read. "Read that to me," his mother ordered, pointing at a page. Joel did so. He read them the whole book.
He had figured out how to read all by himself, based on comparisons between what his parents read to him and the corresponding marks on the pages. This caused Joel some problems later in life when he was light-years ahead of the other kids in kindergarten and first grade -- especially in kindergarten, where he'd already read all the books in the kindergarten library and had very little interest in sitting through storytime just to hear them read through again. It led to some fairly immediate problems when Joel was still a pre-schooler as well. Mrs. Furr did not always watch Joel closely when she was tending to Robin, assuming that Joel would keep busy with one of the dozens of easy-reading books in the house for a few minutes. Joel would read quietly and stay out of trouble -- but, as it happened, there's such a thing as too quiet. On occasion, when Mrs. Furr had heard nary a peep out of Joel for some hours, a feeling of "uh-oh" would come over her and she'd go in search. On one occasion, she searched the house without spotting Joel until she finally chanced to look down under the dining table and found that Joel had used up an entire box of margarine sticks greasing the entire dining room floor. He looked up at his flabbergasted mother and patted the floor proudly. On another occasion, he was found standing with the refrigerator door open, happily dropping one egg after another onto the floor. He had the last surviving egg in his hand when Mrs. Furr discovered him standing above a heap of eggshells and runny goo, beaming happily at his work. "Joel," she said cautiously, "Give me the egg." Smiling agreeably, Joel hurled the egg in her general direction, missing by a few feet. Scratch the remaining egg. On still other occasions, Joel was found standing in front of an open commode, flushing repeatedly and waving "bye-bye" at whatever he had flushed down the toilet this time. 
Is it any wonder his parents adopted a practice of shoving books under his nose any time they saw him otherwise unoccupied?

(81) What is Joel Furr's ultimate ambition in life?

Joel's ambition is a simple one: a front porch overlooking a large, grassy lawn, a rocking chair on the porch... and a cane to angrily wave at any neighborhood kids who come trespassing on the lawn. To make the situation perfect, the kids will dutifully shriek "Run, run, it's Old Man Furr!" and vamoose.

(82) Aren't you cold?

No. Joel Furr routinely goes out in cold weather wearing shorts, a sweatshirt, and maybe a jacket, and feels quite comfortable, thank you... even at temperatures below freezing, and certainly on "cool days" where other people have broken out the sweatshirts and long pants. People in supermarkets and such get cold just looking at him, and inevitably ask "Aren't you cold?" Joel grins amiably and says "Nope. Perfectly toasty. You?" Incidentally, now that Joel lives in Vermont (NORTHERN Vermont, at that), this habit may undergo a little modification come winter. Bare legs aren't liable to constitute a "survival trait" in temperatures below 0 Fahrenheit or in chest-deep snow.

(83) What restaurant are Joel and Carole Furr going to open soon?

A steak restaurant, to be named "Yesterday's Cow."

(84) What collectible novelty does Joel Furr have in store for us?

Soft-sculpture crucified Jesus dolls. They'll go over big in the redneck market.

(85) What did Wally the gopherlike being do at the 1997 North Carolina State Fair?

He helped out at the Colonic Irrigation demo booth. If you were at the fair, you may have seen him. The booth was on the back side of Dorton Arena, near the Highway Safety pavilion. Wally was the small furry mammal who was waving cheerfully at passersby and holding up a rubber tube in what he apparently hoped was an inviting sort of way.

(86) What does Joel Furr think of the invention known as "the third mouse button"?

Joel Furr hates it a lot.
Joel Furr makes his living as a software trainer, teaching Microsoft certified classes in things like Windows NT 4.0 and such, but also training end-user applications like, oh, Microsoft Word and Lotus 1-2-3. Consequently, he sees all ranges of computer users, from those who could practically be teaching the courses in question to those who hold their mouse in the air and wave it around like a television remote control. If there's one thing he's learned as a trainer, it's that you want to avoid overloading users, especially the truly inexperienced novice users who suffer from extreme computer phobia, with unnecessary detail. In other words, there are Things End Users Were Not Meant To Know. When you're training an end user in basic mouse skills, you're doing well to convince them that it's actually vaguely useful to be able to use the right mouse button for certain things. Most of the time, you're only able to get them to realize that one button does one thing and one button does another, and if they've begun to be able to recall which is which, that's about all you can typically expect. It's not that they're dumb - just scared. It's natural to be scared of new things - if humans hadn't acquired that trait, our ancestors would have wound up lunch for the first sabertooth tiger that happened along. Therefore, given how nervous people who've never used a mouse are when they first realize that they've absolutely got to learn how to use one if they want to use their new computer without embarrassment, you don't want to drive them over the edge into shrieking hysteria by informing them that, some of the time, they're going to be expected to understand the function and uses of a THIRD mouse button, especially one that not all mice have and that not all programs even recognize. Anything that confuses the students for no good reason is a Bad Thing. Third mouse buttons are a Bad Thing. Joel has spoken.

(87) If Joel Furr were a fruit, which one would he be?
If Joel Furr were a fruit, he'd probably be one of those weird "ugly fruits" they sell in the produce department, the ones that no one ever seems to buy and which get progressively marked down each week until they're so old and so cheap that the fruits are practically jumping off the counters and accosting shoppers, begging to be bought. It's not that Joel is necessarily desperate - he has a wife, after all. It's more that if he were a fruit, he'd almost certainly be the kind of fruit no one really knows how to use or what it's for.

(88) Did Joel Furr inhale?

No. Joel has been in the direct presence of marijuana once in his life and, as it happens, didn't want to be there anyway. Someone who had asthma symptoms off and on throughout his childhood and even on into adulthood generally learns in a hurry not to breathe strange things into his lungs. Consequently, Joel has never smoked a cigarette of any kind, tobacco OR marijuana.

(89) Does Joel Furr say "toe-MAY-toe" or "toe-MAH-toe?"

"toe-MAY-toe." Except when he's around some of his more countrified Southern relatives, when Joel may occasionally be heard to say "toe-MAY-ter." Or even "ter-MAY-ter." It depends on how redneck he's feeling that day.

(90) Why?

Because he had too much free time in grad school, and the devil makes work for idle hands.

(91) Why not?

One supposes he was just too shy.

(92) What did Joel's supervisors and co-workers at Glaxo Pharmaceuticals give him on his last day of work, as a going-away present?

A case of Budweiser. No idea why. Joel was leaving that last Friday and one of his co-workers, never mind which one, said "Oh, by the way, Joel, we have a gift for you, it's in my car." Joel bemusedly followed her out to her car in the parking deck and stood there, wondering why whatever the gift was hadn't been brought inside and given to him at his desk, with everyone standing around snacking on the farewell brownies...
only to find out why in short order, as said co-worker reached into her car and fished out one of those cardboard 24-pack "suitcases" of Budweiser. "Here," she said. "We'll miss you. Take care."

(93) What boutique are Joel and Carole Furr going to open next door to their new restaurant?

A trendy little place, specializing in all things radioactive: power plant parts, low-level and high-level radioactive waste, irradiated rutabagas, maybe a little bootleg U-235. They're going to call it "If It's Nuclear." Stop on by.

---

This document may be found on the World Wide Web in a completely HTML-ized format, at the following address:
http://www.faqs.org/faqs/joel-furr/faq/
The break statement is used in the following two scenarios:

a) Use a break statement to come out of a loop instantly. Whenever a break statement is encountered inside a loop, control comes out of the loop immediately, terminating it. Inside a loop it is used along with an if statement (see the example below) so that the break occurs only for a particular condition.

b) It is used in the switch-case control structure after the case blocks. Generally all cases in a switch-case are followed by a break statement to avoid execution of the subsequent cases (see the example below). Whenever it is encountered in a switch-case block, control comes out of the switch-case body.

Syntax of break statement:

break;

[Figure: break statement flow diagram]

Example – Use of break statement in a while loop

In the example below, we have a while loop running from 10 to 200, but since we have a break statement that gets encountered when the loop counter variable reaches 12, the loop gets terminated and control jumps to the statement after the loop body.
#include <iostream>
using namespace std;

int main() {
    int num = 10;
    while (num <= 200) {
        cout << "Value of num is: " << num << endl;
        if (num == 12) {
            break;
        }
        num++;
    }
    cout << "Hey, I'm out of the loop";
    return 0;
}

Output:

Value of num is: 10
Value of num is: 11
Value of num is: 12
Hey, I'm out of the loop

Example: break statement in for loop

#include <iostream>
using namespace std;

int main() {
    int var;
    for (var = 200; var >= 10; var--) {
        cout << "var: " << var << endl;
        if (var == 197) {
            break;
        }
    }
    cout << "Hey, I'm out of the loop";
    return 0;
}

Output:

var: 200
var: 199
var: 198
var: 197
Hey, I'm out of the loop

Example: break statement in Switch Case

#include <iostream>
using namespace std;

int main() {
    int num = 2;
    switch (num) {
        case 1:
            cout << "Case 1 " << endl;
            break;
        case 2:
            cout << "Case 2 " << endl;
            break;
        case 3:
            cout << "Case 3 " << endl;
            break;
        default:
            cout << "Default " << endl;
    }
    cout << "Hey, I'm out of the switch case";
    return 0;
}

Output:

Case 2
Hey, I'm out of the switch case

In this example, we have a break statement after each case block; if we didn't, the subsequent case blocks would also execute (a "fall-through"). The output of the same program without the break statements would be:

Case 2
Case 3
Default
Hey, I'm out of the switch case
https://beginnersbook.com/2017/08/cpp-break-statement/
Hi All, I am a total Java novice and I am having a real problem running the following code:

import java.util.*;

public class Methods {
    public static void main(String[] args) {
        Date independenceDay = new Date(7, 4, 1776);
        int independenceMonth = independenceDay.getMonth();
        System.out.println("Independence day is in month " + independenceMonth);

        Date graduationDate = new Date(5, 15, 2008);
        System.out.println("The current day for graduation is " + graduationDate.getDay());

        graduationDate.setDay(12);
        System.out.println("The revised day for graduation is " + graduationDate.getDay());
    }
}

This is actually from a book I am working through, and if it isn't working I was wondering if there was something wrong with the code or my PC. The error message reads:

1 error found:
File: C:\Documents and Settings\Steven\Desktop\Methods.java [line: 16]
Error: cannot find symbol
symbol  : method setDay(int)
location: class java.util.Date

Please help.
Thanks
Sarah x
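The compiler error is accurate: java.util.Date (pulled in by `import java.util.*;`) has no `setDay(int)` method, and its deprecated constructors don't take arguments in (month, day, year) order either. The book almost certainly defines its own `Date` class earlier in the chapter, which this example is meant to use. A hypothetical sketch of what such a class would have to look like, inferred purely from the calls in the posted listing (the field names are my own assumption):

```java
// Hypothetical Date class matching the methods the book's example calls.
// This is NOT java.util.Date -- that class has no setDay(int) method,
// which is exactly what the compiler is reporting.
class Date {
    private int month;
    private int day;
    private int year;

    Date(int month, int day, int year) {
        this.month = month;
        this.day = day;
        this.year = year;
    }

    int getMonth() { return month; }

    int getDay() { return day; }

    void setDay(int day) { this.day = day; }
}
```

With a class like this compiled in the same (default) package, it takes precedence over the on-demand `import java.util.*;`, so the posted `main` would then compile and print day 15, then 12.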
https://www.daniweb.com/programming/software-development/threads/59478/help-java-novice
It is occasionally useful to allocate small data structures on the stack that have a variable size defined at construction, which would otherwise only be possible by using heap allocation. Examples: The C language provides variable-length arrays (VLAs) for that purpose, with the syntax

void f(int n) { int a[n]; }

Most C++ compilers implement this facility also in C++ mode. This paper proposes to standardize a subset of the C-language VLA facility and a simple C++ class (similar to std::initializer_list) for the purpose of stack allocation of runtime-sized data structures. For more detailed motivation and introductory explanations, please see the papers cited in the next section. Standardizing the basic C VLA syntax (but not all the ancillary features of C VLAs) was the subject of "N3639 Runtime-sized arrays with automatic storage duration (revision 5)" (by Jens Maurer), which was voted into the C++ working paper in Bristol, April 2013. A companion facility with a C++-style library interface called std::dynarray was specified in "N3662 C++ Dynamic Arrays (dynarray)" (by Lawrence Crowl and Matt Austern), which was also voted into the C++ working paper in Bristol, April 2013. Both facilities were removed from C++14 and instead put into a separate Technical Specification in Chicago, September 2013. Authority for doing so came from National Body comment CH 2 on the C++14 Committee Draft ballot, asking for removal of any feature degrading the quality of the upcoming standard. The technical grounds for removal were the usability of std::dynarray for objects with any kind of storage duration, which made it look like a construction-sized std::vector with optional compiler optimizations, instead of a specialized stack allocation facility with a C++ look and feel that some members of the committee prefer. The working draft of the Technical Specification was published as "N3820 Working Draft, Technical Specification - Array Extensions" (editor: Lawrence Crowl).
After the Chicago meeting, there was additional exploration of the design space in the papers. The Issaquah 2014 meeting saw a lengthy discussion of the design approaches, but little actual progress. The Array TS project was canceled in Jacksonville, March 2016. The C++ standard is silent on concepts such as "stack" or "heap". Instead, objects have different kinds of storage duration, depending on how they came to exist. Therefore, it would be new grounds to normatively specify that certain kinds of objects must be allocated on the stack (and not on the heap). As a quality-of-implementation concern, most implementations already do the obvious. However, implementations for special target audiences may diverge from that de-facto consensus. For example, when stack space is severely limited for environmental reasons (e.g. in kernel or embedded code), it seems plausible for a compiler to transparently allocate (and deallocate) large objects from some other pool of memory and only store a simple pointer on the stack. Therefore, the normative specification can only make sure that stack allocation is actually possible and, at best, encourage it non-normatively. Construction of an object that is a runtime-sized data structure (in short, an "object of runtime size") requires a three-step approach: determine size, allocate storage, invoke the constructor. These are exactly the same three steps that are required for the expression "new T[n]". There are two conflicting opinions on the general design approach for a C++-style library interface (or building blocks enabling such), which cannot be reconciled (the two camps are exemplified by std::dynarray and std::bs_array, respectively). This proposal re-proposes "N3639 Runtime-sized arrays with automatic storage duration (revision 5)", by Jens Maurer, together with a simple C++ class similar to bs_array proposed in "N3810 Alternatives for Array Extensions", by Bjarne Stroustrup. Please see these papers for detailed motivation and discussion.

stack_array
Change in 8.1.5.2 [expr.prim.lambda.capture] paragraph 11:
An lvalue or rvalue of type "array of N T" or "array of unknown bound of T" can be converted to a prvalue of type "pointer to T". The result is a pointer to the first element of the array. [...]

Change in 8.1.5.2 [expr.prim.lambda.capture] paragraph 12:
[ ... ] A bit-field or a member of an anonymous union shall not be captured by reference.

Insert a new paragraph before 8.2.8 [expr.typeid] paragraph 2:
When typeid is applied to a glvalue expression whose type is a polymorphic class type (13.3), ...

Change in 8.3.1 [expr.unary.op] paragraph 3:
The result of the unary & operator is a pointer to its operand. The operand shall be an lvalue or a qualified-id. ...

Change in 8.3.3 [expr.sizeof] paragraph 1:
... The sizeof operator shall not be applied to an expression that has function or incomplete type, to the parenthesized name of such types, or to a glvalue that designates a bit-field. ...

Drafting note: 8.3.7 [expr.unary.noexcept] does not need to be changed, because the declaration of an array of runtime bound cannot be lexically part of the operand of a noexcept; see also 8.1.5 [expr.prim.lambda] paragraph 2.

No change to 9.5.4 [stmt.ranged] paragraph 1, supporting runtime-sized arrays implicitly.

Insert a new paragraph before 10.1.3 [dcl.typedef] paragraph 3:
In a given non-class scope, a typedef specifier can be used to redefine the name of any type declared in that scope to refer to the type to which it already refers. [ Example: ... ]

Change in 10.1.7.2 [dcl.type.simple] paragraph 5:
For an expression e, the type denoted by decltype(e) is defined as follows:
- if e is an unparenthesized id-expression naming a structured binding (11.5), ...

Change in 11 [dcl.decl] paragraph 4:
noptr-declarator:
    declarator-id attribute-specifier-seqopt
    noptr-declarator parameters-and-qualifiers
    noptr-declarator [ constant-expressionopt ] attribute-specifier-seqopt
    ( ptr-declarator )

Drafting note: Section 11.1 [dcl.name] defining the grammar term type-id is intentionally unchanged. Thus, constructing an array of runtime bound in a type-id is ill-formed, because the grammar continues to require all constant-expressions in array bounds.

Change in 11.3.1 [dcl.ptr] paragraph 1:
... Similarly, the optional attribute-specifier-seq (7.6.1) appertains to the pointer and not to the object pointed to.

Change in 11.3.2 [dcl.ref] paragraph 5:
There shall be no references to references, no arrays of references, and no pointers to references. ...

Change in 11.3.4 [dcl.array] paragraph 1:
In a declaration T D where D has the form ... If the constant-expression (8.20 [expr.const]) is present, it shall be a converted constant expression of type std::size_t ...

Change in 11.3.4 [dcl.array] paragraph 4:
...; only the first of the constant expressions that specify the bounds of the arrays may be omitted. In addition to ... An array of runtime bound shall only be used as the type of a local object with automatic storage duration. If the size of the array exceeds the size of the memory available for objects with automatic storage duration, the behavior is undefined. [ Footnote: Implementations that detect this case should throw an exception that would match a handler (18.3 [except.handle]) of type std::bad_array_length (21.6.3.2a [array.badlength]). ] It is unspecified whether a global allocation function on a side stack. ]

Change in 11.3.5 [dcl.fct] paragraph 11:
Functions shall not have a return type of type array or function, although they may have a return type of type pointer or reference to such things.
Change in 11.6.1 [dcl.init.aggr] paragraph 10:
An initializer-list is ill-formed if the number of initializer-clauses exceeds the number of members or elements to initialize. [ Example: ... ]

Change in 11.6.2 [dcl.init.string] paragraph 2:
There shall not be more initializers than there are array elements. [ Example:
char cv[4] = "asdf"; // error
is ill-formed since there is no space for the implied trailing '\0'. -- end example ]

Change in 12.2 [class.mem] paragraph 14:
Non-static (12 ...)

Change in 17.1 [temp.param] paragraph 7: ...

Drafting note: 17.9.2 [temp.deduct] paragraph 8 says "If a substitution results in an invalid type or expression, type deduction fails. An invalid type or expression is one that would be ill-formed if written using the substituted arguments." 11.3.5 [dcl.fct] paragraph 11 (and other places) establishing restrictions on forming types are thus sufficient.

Change in 18.1 [except.throw] paragraph 1:
Throwing an exception transfers control to a handler. [ Note: An exception can be thrown from one of the following contexts: throw-expressions (8.17 [expr.throw]), allocation functions (6.7.4.1 [basic.stc.dynamic.allocation]), dynamic_cast (8.2.7 [expr.dynamic.cast]), typeid (8.2.8 [expr.typeid]), new-expressions (8.3.4 [expr.new]), and standard library functions (20.4.1.4 [structure.specifications]). -- end note ] An object is passed and the type of that object determines which handlers can catch it. [ Example: ... ]

Add a new section immediately before 21.6.3.2 [new.badlength]:

21.6.3.2a Class bad_array_length [array.badlength]

namespace std {
  class bad_array_length : public bad_alloc {
  public:
    bad_array_length() noexcept;
    const char* what() const noexcept override;
  };
}

The class bad_array_length defines the type of objects thrown as exceptions by the implementation to report an attempt to allocate an array of runtime bound with a size less than or equal to zero or greater than an implementation-defined limit (11.3.4 [dcl.array]).

bad_array_length() noexcept;

Effects: Constructs an object of class bad_array_length.

const char* what() const noexcept override;

Returns: An implementation-defined NTBS.
Remarks: The message may be a null-terminated multibyte string (20.4.2.1.5.2 [multibyte.strings]), suitable for conversion and display as a wstring (24.3 [string.classes], 25.4.1.4 [locale.codecvt]).

Add a new section immediately after 21.9 [support.initlist]:

21.10 Stack-based arrays [support.stack_array]

The header <stack_array> defines a class template for an array type which can only be used for variables of automatic storage duration and whose size is defined at construction. A stack_array is a contiguous container (26.2.1 [container.requirements.general]). All functions specified in this subclause are signal-safe (21.10.4 [support.signal]).

A stack_array satisfies all of the requirements of a container (26.2 [container.requirements]), except that default construction and swap are not supported. A stack_array satisfies some of the requirements of a sequence container (26.2.3 [sequence.reqmts]). Descriptions are provided here only for operations on stack_array that are not described in one of these tables and for operations where there is additional semantic information.
21.10.1 Header <stack_array>synopsis [stack_array.syn]template<class T> class stack_array { using value_type = T; using pointer = T*; using const_pointer = const T*; using reference = T&; using const_reference = const T&; using size_type = size_t; using difference_type = ptrdiff_t; using iterator = implementation-defined ; using const_iterator = implementation-defined ; explicit stack_array(size_type n); stack_array(size_type n, const T& value); stack_array(const stack_array&); stack_array& operator=(const stack_array&); // iterators: constexpr iterator begin() noexcept; constexpr const_iterator begin() const noexcept; constexpr iterator end() noexcept; constexpr const_iterator end() const noexcept; constexpr const_iterator cbegin() const noexcept; constexpr const_iterator cend() const noexcept; // capacity: constexpr bool empty() const noexcept; constexpr size_type size() const noexcept; constexpr size_type max_size() const noexcept; // element access: constexpr reference operator[](size_type n); constexpr const_reference operator[](size_type n) const; constexpr reference at(size_type n); constexpr const_reference at(size_type n) const; constexpr reference front(); constexpr const_reference front() const; constexpr reference back(); constexpr const_reference back() const; constexpr T * data() noexcept; constexpr const T * data() const noexcept; }; 21.10.2 Constructors [stack_array.cons] Declaring or creating an object of type stack_arrayis ill-formed unless that object is a variable with automatic storage duration. The destructor of a stack_arrayshall not be invoked explicitly (8.2.2 [expr.call]), but only implicitly (9.6 [stmt.jump]).stack_array(size_type n);Effects: Constructs a stack_arraywith n default-constructed elements. Requires: n is greater than 0. T shall be DefaultConstructible. Complexity: Linear in n.stack_array(size_type n, const T& value);Effects: Constructs a stack_arraywith n copies of value. Requires: n is greater than 0. 
T shall be CopyConstructible. Complexity: Linear in n. 21.10.3 Member functions [stack_array.mem]constexpr size_type size() const noexcept;Returns: The number of elements nused for construction.constexpr T* data() noexcept; constexpr const T* data() const noexcept;Returns: A pointer such that data() == addressof(front()), and [ data(), data() + size()) is a valid range.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0785r0.html
System.Runtime namespaces

System.Runtime and its child namespaces (System.Runtime.CompilerServices, System.Runtime.ExceptionServices, System.Runtime.InteropServices, System.Runtime.InteropServices.ComTypes, System.Runtime.InteropServices.WindowsRuntime, System.Runtime.Serialization, System.Runtime.Serialization.Json, and System.Runtime.Versioning) contain types that support an application's interaction with the common language runtime, and types that enable features such as advanced exception handling, COM interop, serialization and deserialization, and versioning.

This topic displays the types in the System.Runtime namespaces that are included in .NET for Windows 8.x Store apps. Note that .NET for Windows 8.x Store apps does not include all the members of each type. For information about individual types, see the linked topics. The documentation for a type indicates which members are included in .NET for Windows 8.x Store apps.
https://msdn.microsoft.com/en-us/library/windows/apps/hh454059
Problem Statement

Issues that Need to be Addressed in this Rework

One: OperatorPlan has far too many operations. It has 29 public methods. This needs to be pared down to a minimal set of operators that are well defined.

Two: Currently, relational operators (Join, Sort, etc.) and expression operators (add, equals, etc.) are both LogicalOperators. Operators such as Cogroup that contain expressions have OperatorPlans that contain these expressions. This was done for two reasons:

- To make it easier for visitors to visit both types of operators (that is, visitors didn't have to have separate logic to handle expressions).
- To better handle the ambiguous nature of inner plans in Foreach.

However, it has led to visitors and graphs that are hard to understand. Both of the above concerns can be handled while breaking this binding so that relational and expression operators are separate types.

Three: Related to the issue of relational and expression operators sharing a type is that inner plans have connections to outer plans. Take for example a script like:

A = load 'file1' as (x, y);
B = load 'file2' as (u, v);
C = cogroup A by x, B by u;
D = filter C by A.x > 0;

In this case the cogroup will have two inner plans, one of which will be a project of A.x and the other a project of B.u. The !LOProject objects representing these projections will hold actual references to the !LOLoad operators for A and B. This makes disconnecting and rearranging nodes in the plan much more difficult. Consider if the optimizer wants to move the filter in D above C. Now it has to not only change connections in the outer plan between load, cogroup, and filter; it also has to change connections in the first inner plan of C, because this now needs to point to the !LOFilter for D rather than the !LOLoad for A.

Four: The work done on Operator and OperatorPlan to support the original rules for the optimizer had two main problems:

- The set of primitives chosen were not the correct ones.
- The operations chosen were put on the generic super classes (Operator) rather than further down on the specific classes that would know how to implement them.

Five: At a number of points efforts were made to keep the logical plan close to the physical plan. For example, !LOProject represents all of the same operations that !POProject does. While this is convenient in translation, it is not convenient when trying to optimize the plan. The LogicalPlan needs to focus on representing the logic of the script in a way that is easy for semantic checkers (such as TypeChecker) and the optimizer to work with.

Six: The rule of one operation per operator was violated. !LOProject handles three separate roles (converting from a relational to an expression operator, actually projecting, and converting from an expression to a relational operator). This makes coding much more complex for the optimizer because when it encounters an !LOProject it must first determine which of these three roles it is playing before it can understand how to work with it.

The following proposal will address all of these issues.

Proposed Methodology

Fixing these issues will require extensive changes, including a complete rewrite of Operator, OperatorPlan, PlanVisitor, LogicalOperator, LogicalPlan, LogicalPlanVisitor, every current subclass of LogicalOperator, and all existing optimizer rules. It will also require extensive changes, though not complete rewrites, in existing subclasses of LogicalTransformer. To avoid destabilizing the entire codebase during this operation, this will be done in a new set of packages as a totally separate set of classes. Linkage code will be written to translate the current LogicalPlan to the new experimental LogicalPlan class. A new LogicalToPhysicalTranslator will also be written to translate this new LogicalPlan to a PhysicalPlan.
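The minimal plan contract the rework aims for (add, remove, connect, disconnect, plus simple graph queries) can be illustrated with a toy in-memory sketch. This is purely illustrative and not Pig code: the class name and the use of plain strings as operators are invented here for brevity.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical adjacency-list plan exposing only the pared-down primitives.
public class SimplePlan {
    private final Map<String, List<String>> succ = new LinkedHashMap<>();
    private final Map<String, List<String>> pred = new LinkedHashMap<>();

    public void add(String op) {                  // new node, initially unconnected
        succ.putIfAbsent(op, new ArrayList<>());
        pred.putIfAbsent(op, new ArrayList<>());
    }

    public void connect(String from, String to) { // directed edge from -> to
        succ.get(from).add(to);
        pred.get(to).add(from);
    }

    public void disconnect(String from, String to) {
        succ.get(from).remove(to);
        pred.get(to).remove(from);
    }

    // Mirrors the contract that removing a still-connected operator is an
    // error (an IOException in the proposal; unchecked here for brevity).
    public void remove(String op) {
        if (!succ.get(op).isEmpty() || !pred.get(op).isEmpty()) {
            throw new IllegalStateException(op + " is still connected");
        }
        succ.remove(op);
        pred.remove(op);
    }

    public List<String> getSuccessors(String op)   { return succ.get(op); }
    public List<String> getPredecessors(String op) { return pred.get(op); }

    public List<String> getRoots() {              // nodes with no predecessors
        List<String> roots = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : pred.entrySet()) {
            if (e.getValue().isEmpty()) roots.add(e.getKey());
        }
        return roots;
    }
}
```

Everything richer (splitting filters, patching schemas, and so on) is then layered on top of these few primitives by subclasses and listeners, rather than crammed into the plan itself.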
This code path will only be taken if some type of command line switch or property is set, thus insulating current developers and users from this work. This has the added advantage that it is easy to build a prototype first. Given that our first implementation now needs rewriting, prototyping first will help us explore whether we solved the problem correctly this time.

The Actual Proposal

Changes to Plans

In general, the top level plan classes will be changing in a couple of important ways:

One, they will be made much simpler. The goal will be to find a minimal set of operations that will enable all desired plan features.

Two, they will no longer be generics. While nice in theory, this led to several observed issues. First, each class had so many parameters that in practice developers have worked around rather than used the features of the generics. That is, parameterizing the classes seemed to get in developers' way, rather than help them. Second, since we propose to break relational and expression operators into different types, it will no longer be possible for a single visitor to span both types. But we do not wish to prohibit this in all cases.

In the following code, major members and methods of each class are shown. Getters and setters are not shown unless they include functionality beyond simply getting and setting a value.

New Operator class. Note that the function which was previously called visit has been renamed accept to avoid confusion with the visit method in PlanVisitor.

    package org.apache.pig.experimental.plan;

    public abstract class Operator {
        protected String name;
        protected OperatorPlan plan; // plan that contains this operator
        protected Map<String, Object> annotations;

        public Operator(String n, OperatorPlan p) { ... }

        /**
         * Accept a visitor at this node in the graph.
         * @param v Visitor to accept.
         * @throws IOException
         */
        public abstract void accept(PlanVisitor v) throws IOException;

        /**
         * Add an annotation to a node in the plan.
         * @param key string name of this annotation
         * @param val value, as an Object
         */
        public void annotate(String key, Object val) { ... }

        /**
         * Look to see if a node is annotated.
         * @param key string name of annotation to look for
         * @return value of the annotation, as an Object, or null if the key is
         * not present in the map.
         */
        public Object getAnnotation(String key) { ... }
    }

New OperatorPlan interface. It has been made an interface so that different implementations can be used for representing actual plans and sub-graphs of plans inside the optimizer. Note the severe paring down of the number of operations: only simple add, remove, connect, and disconnect. All other operations are left to subclasses to implement what makes sense for their plans.

    package org.apache.pig.experimental.plan;

    public interface OperatorPlan {
        /**
         * Get number of nodes in the plan.
         */
        public int size();

        /**
         * Get all operators in the plan that have no predecessors.
         * @return all operators in the plan that have no predecessors, or
         * an empty list if the plan is empty.
         */
        public List<Operator> getRoots();

        /**
         * Get all operators in the plan that have no successors.
         * @return all operators in the plan that have no successors, or
         * an empty list if the plan is empty.
         */
        public List<Operator> getLeaves();

        /**
         * For a given operator, get all operators immediately before it in the
         * plan.
         * @param op operator to fetch predecessors of
         * @return list of all operators immediately before op, or an empty list
         * if op is a root.
         * @throws IOException if op is not in the plan.
         */
        public List<Operator> getPredecessors(Operator op) throws IOException;

        /**
         * For a given operator, get all operators immediately after it.
         * @param op operator to fetch successors of
         * @return list of all operators immediately after op, or an empty list
         * if op is a leaf.
         * @throws IOException if op is not in the plan.
         */
        public List<Operator> getSuccessors(Operator op) throws IOException;

        /**
         * Add a new operator to the plan. It will not be connected to any
         * existing operators.
         * @param op operator to add
         */
        public void add(Operator op);

        /**
         * Remove an operator from the plan.
         * @param op Operator to be removed
         * @throws IOException if the remove operation attempts to
         * remove an operator that is still connected to other operators.
         */
        public void remove(Operator op) throws IOException;

        /**
         * Connect two operators in the plan, controlling which position in the
         * edge lists that the from and to edges are placed.
         * @param from Operator edge will come from
         * @param fromPos Position in the array for the from edge
         * @param to Operator edge will go to
         * @param toPos Position in the array for the to edge
         */
        public void connect(Operator from, int fromPos, Operator to, int toPos);

        /**
         * Connect two operators in the plan.
         * @param from Operator edge will come from
         * @param to Operator edge will go to
         */
        public void connect(Operator from, Operator to);

        /**
         * Disconnect two operators in the plan.
         * @param from Operator edge is coming from
         * @param to Operator edge is going to
         * @return pair of positions, indicating the position in the from and
         * to arrays.
         * @throws IOException if the two operators aren't connected.
         */
        public Pair<Integer, Integer> disconnect(Operator from, Operator to) throws IOException;

        /**
         * Get an iterator of all operators in this plan
         * @return an iterator of all operators in this plan
         */
        public Iterator<Operator> getOperators();
    }

There are no significant changes to PlanVisitor and PlanWalker other than the removal of generics. With the change that only LOForeach now has inner plans, it isn't clear whether the pushWalker and popWalker methods in PlanVisitor are still useful. These may be removed.

Changes to Logical Operators

There will be a number of important changes to LogicalOperators. First, as mentioned above, relational operators will be split into two disparate groups: relational and expression.
Second, inner plans and expression plans will no longer hold any explicit references to outer plans. At most, they will reference the operator which contains the inner plan.

Third, operators will represent exactly one operation.

Fourth, only LOForeach will have inner plans. All other relational operators will only have expressions.

Fifth, a new operator, LOInnerLoad, will be introduced. The sole purpose of this operator will be to act as a root in foreach's inner plans. So, given a script like:

A = load 'input';
B = group A by $0;
C = foreach B {
    C1 = B.$1;
    C2 = distinct C1;
    generate group, COUNT(C2);
}

the foreach's inner plan will have two LOInnerLoad operators, one for $0 and one for $1. This will allow the C1 !LOGenerate operator to connect to another relational operator.

Sixth, in the past expression operators were used at places in inner plans, such as the C1 = B.$1; above. Relational operators will always be used in these places now. LOGenerate (or perhaps a new operator if necessary) will be used in the place of these assignment operations instead.

Seventh, in the past !LOForeach had multiple inner plans, one for each of its outputs. That will no longer be the case. !LOForeach will always have exactly one inner plan, which must terminate with a !LOGenerate. That !LOGenerate will have expressions for each of its outputs.

All logical operators will have a schema. This schema represents the format of the output for that operator. The schema can be null, which indicates that the format of the output for that operator is unknown. In general the notion of unknownness in a schema will be contagious. Take for example:

A = load 'file1' as (x: int, y: float);
B = load 'file2';
C = cogroup A by x, B by $0;
D = foreach C generate flatten(A), flatten(B);

A will have a schema, since one is specified for it. B will not have a schema, since one is not specified. C will have a schema, because the schema of (co)group is always known.
Note however that in C's schema, the bag A will have a schema, and the bag B will not. This means that D will not have a schema, because the output of flatten(B) is not known. If D is changed to be

D = foreach C generate flatten(A);

then D will have a schema, since the format of flatten(A) is known.

LogicalPlan will contain add and removeLogical operations specifically designed for manipulating logical plans. These will be the only operations supported on the plan.

    package org.apache.pig.experimental.logical.relational;

    /**
     * LogicalPlan is the logical view of relational operations Pig will execute
     * for a given script. Note that it contains only relational operations.
     * All expressions will be contained in LogicalExpressionPlans inside
     * each relational operator. LogicalPlan provides operations for
     * removing and adding LogicalRelationalOperators. These will handle doing
     * all of the necessary add, remove, connect, and disconnect calls in
     * OperatorPlan. They will not handle patching up individual relational
     * operators. That will be handled by the various Patchers.
     */
    public class LogicalPlan extends BaseOperatorPlan {

        /**
         * Add a relational operator to the plan.
         * @param before operators that will be before the new operator. These
         * operators should already be in the plan.
         * @param newOper new operator to add. This operator should not already
         * be in the plan.
         * @param after operators that will be after the new operator. These
         * operators should already be in the plan.
         * @throws IOException if newOper is already in the plan, or before or
         * after are not in the plan.
         */
        public void add(LogicalRelationalOperator before,
                        LogicalRelationalOperator newOper,
                        LogicalRelationalOperator[] after) throws IOException { ... }

        /**
         * Add a relational operator to the plan when the caller wants to
         * control how the nodes are connected in the graph.
         * @param before operator that will be before the new operator. This
         * operator should already be in the plan. If before is null,
         * the new operator will be a root.
         * @param beforeToPos Position in before's edges to connect newOper at.
         * @param beforeFromPos Position in newOper's edges to connect before at.
         * @param newOper new operator to add. This operator should not already
         * be in the plan.
         * @param afterToPos Position in after's edges to connect newOper at.
         * @param afterFromPos Position in newOper's edges to connect after at.
         */
        public void add(LogicalRelationalOperator before, int beforeToPos,
                        int beforeFromPos, LogicalRelationalOperator newOper,
                        int afterToPos, int afterFromPos,
                        LogicalRelationalOperator after) throws IOException { ... }

        /**
         * Remove an operator from the logical plan. This call will take care
         * of disconnecting the operator, connecting the predecessor(s) and
         * successor(s) and patching up the plan.
         * @param op operator to be removed.
         * @throws IOException If the operator is not in the plan.
         */
        public void removeLogical(LogicalRelationalOperator op) throws IOException { ... }
    }

A LogicalRelationalOperator will be the logical representation of a relational operator (join, sort, etc.).

    package org.apache.pig.experimental.logical.relational;

    /**
     * Logical representation of relational operators. Relational operators have
     * a schema.
     */
    abstract public class LogicalRelationalOperator extends Operator {

        protected LogicalSchema schema;
        protected int requestedParallelism;
        protected String alias; // needed only for error messages, not used in optimizer
        protected int lineNum;  // needed only for error messages, not used in optimizer

        /**
         * @param name of this operator
         * @param plan this operator is in
         */
        public LogicalRelationalOperator(String name, OperatorPlan plan) { ... }

        /**
         * @param name of this operator
         * @param plan this operator is in
         * @param rp requested parallelism
         */
        public LogicalRelationalOperator(String name, OperatorPlan plan, int rp) { ... }

        /**
         * Get the schema for the output of this relational operator. This does
         * not merely return the schema variable. If schema is not yet set, this
         * will attempt to construct it. Therefore it is abstract since each
         * operator will need to construct its schema differently.
         * @return the schema
         */
        abstract public LogicalSchema getSchema();

        /**
         * Reset the schema to null so that the next time getSchema is called
         * the schema will be regenerated from scratch.
         */
        public void resetSchema() { ... }
    }

LogicalSchema will be based on the existing Schema class. It is hoped that this class can be greatly simplified.

LogicalExpressionPlan will extend OperatorPlan and contain LogicalExpressionOperators. Often expression trees are built with the expressions themselves containing references to the next expression in the tree. For example, a common implementation would be something like:

    abstract class BinaryExpression {
        Expression leftHandSide;
        Expression rightHandSide;
    }

    class Plus extends BinaryExpression { ... }

Since we already have a plan structure, we will instead have a LogicalExpressionPlan. This has the advantage that PlanVisitors will work with expression trees and we do not need to invent a separate visitor hierarchy.

LogicalExpressionOperators will have a data type (the type they return) and a unique identifier (uid). The point of the uid is to allow the optimizer to track how expressions flow through the tree. So projection expressions will have the same uid as the expression they are projecting. All other expressions will create a new uid, since they are changing the value of the expression. (Cast probably does not conform to this statement. In general casts are movable like projects. We need to think further about how casts fit into this system.)

    package org.apache.pig.experimental.logical.expression;

    /**
     * Logical representation of expression operators. Expression operators have
     * a data type and a uid. Uid is a unique id for each expression.
     */
    public abstract class LogicalExpression extends Operator {

        protected byte type;
        protected long uid = -1;

        static public long getNextUid() { ... }

        /**
         * @param name of the operator
         * @param plan LogicalExpressionPlan this is part of
         * @param b datatype of this expression
         */
        public LogicalExpression(String name, OperatorPlan plan, byte b) { ... }

        /**
         * Set the uid. For most expressions this will get a new uid.
         * ProjectExpression needs to override this and find its uid from its
         * predecessor.
         * @param currentOp Current LogicalRelationalOperator that this
         * expression operator is attached to. Passed so that projection
         * operators can determine their uid.
         * @throws IOException
         */
        public void setUid(LogicalRelationalOperator currentOp) throws IOException { ... }
    }

Consider the following example:

A = load 'file1' as (x:int, y:int);
B = filter A by x > 0 and y > 0;
C = foreach A generate x + y;

In this case x and y will be assigned uids in load, where they first enter the script. The output of filter will maintain these same uids since filter does not alter the format of its input. But the output of foreach will have a different uid, since this creates a new value in the script.

Hopefully an example will make all of this somewhat clearer. Consider the following script:

A = load 'input1' as (x: int, y: chararray);
B = load 'input2' as (u: int, v: float);
C = filter A by x is not null;
D = filter B by u is not null and v > 0.0;
E = join C on x, D on u;
F = group E on x;
G = foreach F {
    H = E.y;
    I = distinct H;
    J = order E by v;
    generate group, COUNT(I), CUMULATIVE(J);
}
store G into 'output';

That script will produce a logical plan that looks like the following: [logical plan diagram omitted]

The !LOFilter for D will have an expression plan that looks like: [expression plan diagram omitted]

Changes to the Optimizer

The following changes will be made to the optimizer:

- Currently all rules are handed to the optimizer at once, and it iterates over them until none of the rules trigger or it reaches the maximum number of iterations. This will be changed so that rules are collected into sets.
The optimizer will then iterate over rules in each set until none of the rules trigger or it reaches the maximum number of iterations. The reason for this change will be made clear below.

- Currently the plan itself has the knowledge of how to patch itself up after it is rearranged. (For example, how to reconstruct schemas after a plan is changed.) This will be changed so that instead the optimizer can register a number of listeners on the plan. These listeners will then be invoked after each rule that modifies the plan. In this way the plans themselves need not understand how to patch up changes made by an optimization rule. Also, as we expand the plans and they record more information, it is easy to add new listeners without having to interact with existing functionality.

- The Rule class will be merged with the existing RuleMatcher class so that Rule takes on the functionality of matching. This match routine will be written once in Rule, and extensions of Rule need not re-implement it.

The rewritten Rule class:

    package org.apache.pig.experimental.plan.optimizer;

    /**
     * Rules describe a pattern of operators. They also reference a Transformer.
     * If the pattern of operators is found one or more times in the provided
     * plan, then the optimizer will use the associated Transformer to transform
     * the plan.
     */
    public abstract class Rule {

        protected String name = null;
        protected OperatorPlan pattern;
        transient protected OperatorPlan currentPlan;

        /**
         * Create this rule by using the default pattern that this rule provided
         * @param n Name of this rule
         */
        public Rule(String n) { ... }

        /**
         * @param n Name of this rule
         * @param p Pattern to look for.
         */
        public Rule(String n, OperatorPlan p) { ... }

        /**
         * Build the pattern that this rule will look for
         * @return the pattern to look for by this rule
         */
        abstract protected OperatorPlan buildPattern();

        /**
         * Get the transformer for this rule. Abstract because the rule
         * may want to choose how to instantiate the transformer.
         * This should never return a cached transformer; it should
         * always return a fresh one with no state.
         * @return Transformer to use with this rule
         */
        abstract public Transformer getNewTransformer();

        /**
         * Search for all the sub-plans that match the pattern
         * defined by this rule.
         * @return A list of all matched sub-plans. The returned plans are
         * partial views of the original OperatorPlan. Each is a
         * sub-set of the original plan and represents the same
         * topology as the pattern, but operators in the returned plan
         * are the same objects as the original plan. Therefore,
         * a call to getPlan() from any node in the returned plan would
         * return the original plan.
         * @param plan the OperatorPlan to look for matches to the pattern
         */
        public List<OperatorPlan> match(OperatorPlan plan) { ... }
    }

The mostly unchanged Transformer class:

    package org.apache.pig.experimental.plan.optimizer;

    public abstract class Transformer {

        /**
         * Check if the transform should be done. If this is being called then
         * the pattern matches, but there may be other criteria that must be
         * met as well.
         * @param matched the sub-set of the plan that matches the pattern. This
         * subset has the same graph as the pattern, but the operators
         * point to the same objects as the plan to be matched.
         * @return true if the transform should be done.
         * @throws IOException
         */
        public abstract boolean check(OperatorPlan matched) throws IOException;

        /**
         * Transform the tree
         * @param matched the sub-set of the plan that matches the pattern. This
         * subset has the same graph as the pattern, but the operators
         * point to the same objects as the plan to be matched.
         * @throws IOException
         */
        public abstract void transform(OperatorPlan matched) throws IOException;

        /**
         * Report what parts of the tree were transformed. This is so that
         * listeners can know which part of the tree to visit and modify
         * schemas, annotations, etc. Any nodes that were removed will not be
         * in this plan; only nodes that were added or moved.
         * @return OperatorPlan that describes just the changed nodes.
         */
        public abstract OperatorPlan reportChanges();
    }

The new PlanTransformListener interface:

    package org.apache.pig.experimental.plan.optimizer;

    /**
     * A listener class that patches up plans after they have been transformed.
     */
    public interface PlanTransformListener {
        /**
         * The listener that is notified after a plan is transformed.
         * @param fp the full plan that has been transformed
         * @param tp a plan containing only the operators that have been transformed
         * @throws IOException
         */
        public void transformed(OperatorPlan fp, OperatorPlan tp) throws IOException;
    }

The goal in the above changes is to radically simplify writing optimizer rules. Consider a rule to push a filter above a join:

A = load 'file1' as (x, y);
B = load 'file2' as (u, v);
C = join A by x, B by u;
D = filter C by (y > 0 or v > 0) and x > 0 and u > 0 and y > v;

In the current design, to push this filter, a rule must know how to split filters, how to push the parts that are pushable, and how to reconstruct the filters of the parts that are not. In the new proposal we can instead create three rules. Rule 1 will only know how to split filters. Rule 2 will only know how to push them. And Rule 3 will only know how to reconstitute them. These rules can then be placed in separate sets, so that they do not interfere with each other. So in this example, after Rule 1 has run, the script will conceptually look like:

A = load 'file1' as (x, y);
B = load 'file2' as (u, v);
C = join A by x, B by u;
D1 = filter C by (y > 0 or v > 0);
D2 = filter D1 by x > 0;
D3 = filter D2 by u > 0;
D = filter D3 by y > v;

Since Rule 1 will be run repeatedly, it need not manage entirely splitting the filter. It can be written to simply split one and, allowing the next iteration to split any subsequent ands.
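Rule 1's split-one-and-per-iteration behavior can be sketched on a toy expression tree. This is an illustration only, not the actual Pig rule; every class name here is invented.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of Rule 1: a filter predicate is a binary tree whose AND nodes
// are split into chained filters, one AND per pass.
public class FilterSplitter {

    interface Expr {}

    static final class And implements Expr {
        final Expr left, right;
        And(Expr left, Expr right) { this.left = left; this.right = right; }
    }

    static final class Leaf implements Expr {
        final String text;
        Leaf(String text) { this.text = text; }
    }

    // One firing of the rule: every filter whose root is an AND becomes
    // two chained filters; anything else passes through unchanged.
    static List<Expr> splitOnce(List<Expr> filters) {
        List<Expr> out = new ArrayList<>();
        for (Expr e : filters) {
            if (e instanceof And) {
                And a = (And) e;
                out.add(a.left);
                out.add(a.right);
            } else {
                out.add(e);
            }
        }
        return out;
    }

    // The optimizer's iteration: keep applying the rule until it stops firing.
    static List<String> fullySplit(Expr predicate) {
        List<Expr> filters = new ArrayList<>();
        filters.add(predicate);
        List<Expr> next;
        while (!(next = splitOnce(filters)).equals(filters)) {
            filters = next;
        }
        List<String> texts = new ArrayList<>();
        for (Expr e : filters) texts.add(((Leaf) e).text);
        return texts;
    }
}
```

Applied to D's predicate, repeated passes peel off one conjunct at a time and produce the chain of four filters D1 through D in order.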
After Rule 2, the script will look like:

A = load 'file1' as (x, y);
D2 = filter A by x > 0;
B = load 'file2' as (u, v);
D3 = filter B by u > 0;
C = join D2 by x, D3 by u;
D1 = filter C by (y > 0 or v > 0);
D = filter D1 by y > v;

And finally, after Rule 3 has run, the script will look like:

A = load 'file1' as (x, y);
D2 = filter A by x > 0;
B = load 'file2' as (u, v);
D3 = filter B by u > 0;
C = join D2 by x, D3 by u;
D = filter C by (y > 0 or v > 0) and y > v;

Writing each of these rules will be much simpler than writing one large rule that must handle all three cases.

After each of these rules modifies the tree, listeners will be notified that the tree has changed. The currently known listeners are one to reconstruct schemas based on the changes and one to reconnect projections to the proper field in their predecessor. After each run of a rule and the operations of the attached listeners, the plan will be in a functionally correct state.

We have identified three operations we would like to prototype. The first is pushing filters past joins. The second is pushing filters after foreach with a flatten. These were chosen because both involve schema-altering operations where the rules have to decide when they can and cannot push, and where the listeners have real work to do to rewrite schemas and projections after a rule is run. The third operation is pruning unnecessary fields from the load. That is, if five fields are loaded but only three are used, the other two will be pruned out. This was chosen because it proved to be particularly difficult in the current framework and we wish to investigate whether it is doable in the proposed framework.
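The rule-set iteration described above (run each set until nothing fires or a cap is hit, then move on to the next set) reduces to a small driver loop. The following is a hypothetical sketch, with each rule collapsed to a callback that reports whether it changed the plan; the class and method names are invented.

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Hypothetical optimizer driver: iterates over each rule set in order,
// re-running a set until none of its rules fire or maxIterations is
// reached, before moving on to the next set.
public class RuleSetDriver {

    /** Returns the total number of passes made, across all sets. */
    public static int run(List<List<BooleanSupplier>> ruleSets, int maxIterations) {
        int totalPasses = 0;
        for (List<BooleanSupplier> set : ruleSets) {
            for (int i = 0; i < maxIterations; i++) {
                boolean fired = false;
                for (BooleanSupplier rule : set) {
                    fired |= rule.getAsBoolean(); // true if the rule changed the plan
                }
                totalPasses++;
                if (!fired) break;                // set is stable; go to next set
            }
        }
        return totalPasses;
    }
}
```

Because sets run to a fixed point independently, the split, push, and reconstitute rules from the filter example never see each other's intermediate states within a pass.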
https://wiki.apache.org/pig/PigLogicalPlanOptimizerRewrite
Chapter 9: T-Bond and T-Note Futures
©David Dubofsky and Thomas W. Miller, Jr.

Futures contracts on U.S. Treasury securities have been immensely successful. But the outlook for Treasury bond futures contracts is bleak, as the government has not issued any new 30-year bonds since October 2001.

The T-bond Futures Contract

The underlying asset is $100,000 (face value) in deliverable T-bonds. Futures prices are reported in the same way as are spot T-bonds, in "points and 32nds of 100%" of face value. A T-bond futures price of 112-15 equals 112 and 15/32% of face value, or $112,468.75. A change of one tick, say to 112-14, results in a change in value of $31.25.

T-Note Futures Prices

For T-bonds, a tick is 1/32nd; the resulting quote of 112-15 equals 112 and 15/32. But for 5- and 10-year T-notes, a tick is 1/2 of a 32nd, or $15.625 per tick. The resulting quote, say, of 98.095 equals 98 and 9.5/32. For CBOT futures prices for T-bonds and T-notes, see: [link omitted].

What Determines T-bond and T-note Futures Prices, Basically?

In a very simple sense, the futures price is the forward price of a Treasury bond, such that it has a forward yield fr(t1, t1+30) consistent with:

    [1 + r(0, t1+30)]^(t1+30) = [1 + r(0, t1)]^t1 * [1 + fr(t1, t1+30)]^30

where t1 is the time until delivery, in years (timeline: 0 --- t1 --- t1+30).

What is Deliverable? And When?

Which T-bond will the Short naively Choose to Deliver?

If the short delivers a low-priced bond, the short will receive less (low conversion factor). If the short delivers a high-priced bond, the short will receive more (high conversion factor).
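The quoting arithmetic on these slides is easy to verify in code. The following is an illustrative sketch; the class and method names are invented.

```java
// Dollar arithmetic behind the quote conventions above: T-bond futures are
// quoted in points and 32nds of 100% of a $100,000 face value, and 5- and
// 10-year T-note futures tick in halves of a 32nd.
public class TreasuryQuotes {

    static final double FACE = 100_000.0;

    /** e.g. points = 112, thirtySeconds = 15 for a quote of "112-15". */
    public static double bondQuoteToDollars(int points, double thirtySeconds) {
        return FACE * (points + thirtySeconds / 32.0) / 100.0;
    }

    /** Value of one full 32nd on a T-bond contract. */
    public static double bondTickValue() {
        return FACE * (1.0 / 32.0) / 100.0;
    }

    /** T-notes tick in halves of a 32nd. */
    public static double noteTickValue() {
        return bondTickValue() / 2.0;
    }
}
```

bondQuoteToDollars(112, 15) reproduces the $112,468.75 contract value from the slide, and moving one tick down to 112-14 changes the value by exactly bondTickValue(), $31.25.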
8 (9-7) The Invoice Amount and Conversion Factors
A conversion factor for a given T-bond is its price if it had a $1 face value, and was priced to yield 6%. For a file of conversion factors, see:. So a cheap, low-coupon bond will have a small conversion factor. Therefore, the short will receive less.

9 (9-8) So NOW, Which T-bond Will the Short Choose to Deliver?
The one that costs the least… and at the same time gets the short the most money upon delivery (i.e., the highest invoice amount). That is, the Max [invoice amount – quoted spot price]. (Accrued interest is ignored because it is included in both the invoice amount and the gross cash bond price.)

Max [(CF)(F) – (S)]

During the delivery month, the amount in brackets will always be negative, for every deliverable bond… WHY?

10 (9-9) Another Method to Identify the Cheapest to Deliver T-bond
Before the delivery month, find the T-bond with the highest Implied Repo Rate. This is given by:. Can be used to identify the "most likely to be delivered" T-bond. When coupons will be paid between today and the delivery day, include them, and the interest earned on them, in the carry return (see eqn. 9.5b).

11 (9-10) A Good Concept Check
Identify three T-bonds that are deliverable into the nearby T-bond futures contract. Which of these three bonds is the cheapest to deliver? Be able to show your work, and be able to explain why one of the T-bonds is more likely to be delivered than the other two.

12 (9-11) Questions
The implied repo rate for every deliverable T-bond must be less than interest rates available in the market (WHY?). Reverse cash and carry arbitrage will (almost never) be possible (WHY?).

13 The Options Held by the Short
Quality option: can deliver any eligible bond.
Timing option: can deliver on any day of the delivery month.
Wild card option: futures cease trading at 2 PM, but the short can announce intent to deliver as late as 8 PM.
End of month option: futures cease trading 8 business days before the end of the delivery month, but the short can deliver on any day of the month.
(9-12)

14 (9-13) Theoretical T-bond Futures Price
Once the C-T-D T-bond has been identified,

F = S + CC - CR (- value of shorts' options)

15 (9-14) Using T-bond and T-note Futures to Hedge Interest Rate Risk
Buy T-bond or T-note futures to hedge against falling interest rates. Sell them to hedge against rising interest rates. (Remember that when interest rates fall, bond prices rise, and when interest rates rise, bond prices fall.) Use T-bond futures to hedge against changes in long-term (15+ years) rates. Use 10-year T-note futures to hedge against changes in 8-10 year rates.
(Figure: profit versus change in rates for the inherent risk exposure and for long and short T-bond futures positions.)

16 (9-15) Dollar Equivalency
Estimate the loss in value if the spot YTM adversely changes by one basis point, denoted ΔV_S. Estimate the change in the futures price, ΔF per $100 face value, if the CTD's price changes by ΔS_CTD. It can be shown that: the profit, ΔV_F, is then $1000 ΔF. Compute the number of futures contracts to trade, N, so that N ΔV_F = ΔV_S.

17 (9-16) Bond Pricing, I
U.S. Treasury bonds and notes are coupon bonds. Their values are computed using:

B = Σ_{t=1..N} C/(1+Y)^t + F/(1+Y)^N

C is the semiannual coupon payment. F is the face value. Y is the unannualized, or periodic, 6-month yield. N is the number of 6-month periods to maturity. This assumes that the first coupon payment is 6 months hence.

18 (9-17)
Bond Pricing, II
To calculate the value of a bond, one must discount each cash flow at the appropriate zero rate. The bond's value is:

B = Σ_t CF_t / (1 + z_t)^t

19 (9-18) Yield to Maturity (YTM)
The YTM is the discount rate that makes the present value of the cash flows on the bond equal to the market price of the bond. Input PV = -1033.098, FV = 1000, PMT = 25 (semiannually), N = 4 (this is four semiannual periods) => CPT I/Y = 1.63838% (per six-month period). (1.63838%)(2) = 3.277% = YTM. With Excel, use =YIELD("9/15/02","9/15/04",0.05,103.3098,100,2,0). With FinCAD, use aaLCB_y.

20 (9-19) Duration
Duration is the weighted average of the times at which cash flows are received from a bond. Example: verified with Excel: =DURATION("6/25/2001","6/25/2003",0.06,0.068755,2,0). We will see that Modified Duration is handy for hedging purposes. Modified Duration: D/(1 + YTM/2) = 1.9135 / (1.03438) = 1.85.

21 (9-20) U.S. Treasury Bond Price Quotes
U.S. T-bond and T-note prices are in percent and 32nds of face value. For example (see fig. 9.1), on 12/01/00, the bid price of the 6 3/8% of Sep 01 T-note was 100 and 4/32% of face value. If the face value of the note is $1000, then the bid price is $1001.25. The asked price of this note is $1001.875. N.B. These prices are based on transactions of $1 million or more. In other words, a trader could buy $1 million face value of these notes for about $1,001,875 from a government securities dealer. These prices are quoted flat; i.e., without any accrued interest. Cash price = Quoted Price + Accrued Interest.

22 (9-21)
On December 1, 2000, the 6 3/8% of September 2001 was quoted to yield 6.12%. You can verify by using the YIELD function in Excel: =YIELD("12/01/00","9/30/01",0.06375,100.1875,100,2,1).

23 (9-22)
Some Extra Slides on this Material
Note: In some chapters, we try to include some extra slides in an effort to allow for a deeper (or different) treatment of the material in the chapter. If you have created some slides that you would like to share with the community of educators that use our book, please send them to us!

24 (9-23) Hedging With T-Bond Futures: Changing the Duration of a Portfolio
Hedging decisions are essentially decisions to alter a portfolio's duration. By buying or selling futures, managers can lengthen or shorten the duration of an individual security or portfolio without disrupting the underlying securities (an "overlay"). That is, adding (buying) T-bond or T-note futures to a portfolio increases its interest rate sensitivity, while selling futures decreases the interest rate sensitivity of the portfolio. A portfolio manager will want to decrease (increase) the duration of the portfolio if the manager expects interest rates to increase (decrease). A completely hedged portfolio lowers the duration to the duration of a short-term riskless Treasury bill.

25 (9-24) The Key Concept: Basis Point Value (BPV)
The bond portfolio manager can change the duration of the existing portfolio to the duration of a target portfolio. This "immunizes" the portfolio against a change in interest rates. That is, if the portfolio manager knows:
- how the current and target portfolios respond to interest rate changes.
- how T-bond (or T-note) futures contracts respond to interest rate changes.
Fortunately, if interest rates change by a small amount, say one basis point, the value of the portfolio will change predictably.

26 (9-25) BPV, II.
Using the bond pricing formula, the duration formula, and some algebra, the change in the value of a bond or a portfolio of bonds when interest rates change can be written:

dB = -[D / (1 + y)] B dy

When dy = 0.0001 (1 basis point), dB is called the basis point value (BPV).

27 (9-26) BPV, III.
If y is defined to be one-half of the bond's annual yield to maturity (YTM), then for a bond, or a portfolio of bonds:

BPV = [D / (1 + YTM/2)] B (0.0001)

28 (9-27) BPV, IV.
The portfolio manager chooses a target duration so that it will have a particular BPV; i.e., a targeted change in value if interest rates change by one basis point. Assuming that the CTD bond and the bond portfolio will both experience a one basis point change in yield, the goal is to choose to buy or sell N_F futures contracts so that

BPV(target) = N_F BPV(futures) + BPV(existing)

Thus, the BPV of the existing portfolio, the target portfolio, and the futures contract must be computed.

29 (9-28) BPV, V.
To determine the BPV for either a T-bond or T-note futures contract, the cheapest-to-deliver (CTD) security must first be identified. The futures price generally tracks the CTD security. The BPV of the futures price is generally written as a present value, BPV(futures)/[1 + h(0,T)], where BPV(futures) is the BPV of the cheapest-to-deliver instrument divided by the CTD's conversion factor. To solve for the appropriate number of futures contracts needed to change the duration of an existing portfolio to a target duration:

N_F = [BPV(target) - BPV(existing)] / BPV(futures)

30 (9-29) Example Using BPV.

31 (9-30)
Inputs:
- Existing Portfolio Duration: 5.7
- Target Duration: 12.0
- March T-Bond Futures Price: 102-03
- Portfolio Value: $100,000,000
- Portfolio Yield to Maturity: 6.27%

Solution:
1. Find the BPV of the existing portfolio and the target portfolio:
BPV(existing) = (5.7 / (1 + 0.0627/2)) * $100,000,000 * 0.0001 = $55,267.36
BPV(target) = (12 / (1 + 0.0627/2)) * $100,000,000 * 0.0001 = $116,352.35

32 (9-31)

33 (9-32) Finally,
3. Determine the number of contracts required to achieve the desired portfolio duration:
N_F = [BPV(target) - BPV(existing)] / BPV(futures)
($116,352.35 - $55,267.36) / $74.156 = 823.736
The bond portfolio manager should buy 824 March T-Bond futures contracts in order to increase the portfolio's duration to 12 years. Note that the portfolio manager can choose any target duration.

34 (9-33) Finally, (Really)
Suppose the manager chooses to have a target duration of zero. This makes the BPV of the target equal zero. Then,
N_F = [BPV(target) - BPV(existing)] / BPV(futures)
(0 - $55,267.36) / $74.156 = -745.285
The bond portfolio manager should sell 745 March T-Bond futures contracts in order to decrease the portfolio's duration to 0 years.

35 (9-34) Reading Treasury Bond Futures Prices
Delivery dates exist every 3 months. Delivery months are March, June, September, and December. On December 1, 2000, the Dec T-bond settle price is 102-02. This equals 102 and 2/32% of face value, or $102,062.50. The December 2000 futures price was down 17 ticks, or 17/32. This means that on December 14, 2000, the December contract settled at 102-19. A price change of one tick (1/32) will result in a daily resettlement cash flow of $31.25.

36 (9-35) Treasury Bond Futures, Delivery
The Delivery Process is Complicated.
But, in sum: any Treasury bond that has fifteen or more years to first call, or at least 15 years to maturity (whichever comes first), as of the first day of the delivery month. The seller of the futures contract, i.e., the short, has the option of choosing which bond to deliver. Delivery can take place on any day in the delivery month… the short chooses. Cash received by the short = (Quoted futures price × Conversion factor) + Accrued interest. Conversion Factor? Wha?

37 (9-36) The Necessity for the Conversion Factor
At its website, the CBOT lists deliverable T-bonds and T-notes, by delivery date. For example, as of November 29, 2000, there were 34 T-bonds deliverable into nearby T-bond futures contracts. By allowing several possible bonds to be delivered, the CBOT creates a large supply of the deliverable asset. This makes it practically impossible for a group of individuals who are long many T-bond futures contracts to "corner the market" by owning so many T-bonds in the cash market that the shorts cannot fulfill their delivery obligation.

38 (9-37)
Recall that the invoice price equals the futures settlement price times a conversion factor plus accrued interest. The conversion factor is the price of the delivered bond ($1 par value) priced to yield 6 percent. The purpose of applying a conversion factor is to try to equalize the deliverability of all of the bonds. If there were no adjustments made, the short would merely choose to deliver the cheapest (lowest priced) bond available. In theory, if the term structure of interest rates is flat at a yield of 6%, then, by applying conversion factor adjustments, all bonds would be equally deliverable. In practice, however, there is a "cheapest to deliver" or CTD T-bond. This is the T-bond used to price futures contracts on T-bonds.
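Two of the deck's calculations can be verified in a few lines of Python. The function names are mine, the deck itself contains no code. The conversion factor is the price of the delivered bond per $1 face value at a 6% semiannual yield, so a 6% coupon bond prices exactly to par (factor 1) and a cheaper low-coupon bond gets a smaller factor, as the invoice-amount slide says. The BPV functions reproduce the worked example: $55,267.36 existing, $116,352.35 target, buy 824 or sell 745 contracts.

```python
def conversion_factor(annual_coupon, years):
    """Price per $1 of face value at a 6% yield, compounded semiannually."""
    c, n, y = annual_coupon / 2, years * 2, 0.03
    return sum(c / (1 + y) ** t for t in range(1, n + 1)) + 1 / (1 + y) ** n

def bpv(duration, ytm, value):
    """Basis point value: modified duration x value x 0.0001."""
    return duration / (1 + ytm / 2) * value * 0.0001

# The worked example from the "Example Using BPV" slides:
bpv_existing = bpv(5.7, 0.0627, 100_000_000)   # ~ $55,267
bpv_target = bpv(12.0, 0.0627, 100_000_000)    # ~ $116,352
BPV_FUTURES = 74.156                           # given in the deck's solution
n_buy = (bpv_target - bpv_existing) / BPV_FUTURES   # ~ 823.7 -> buy 824
n_sell = (0 - bpv_existing) / BPV_FUTURES           # ~ -745.3 -> sell 745
```

The N_F formula here is exactly the deck's N_F = [BPV(target) - BPV(existing)] / BPV(futures); only the rounding convention (up for the buy, down for the sell) is read off the slides' stated answers.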
http://slideplayer.com/slide/4177461/
import java.text.ParseException;

i.e., "Event: foo; id=1234" would match "Event: foo; param=abcd; id=1234", but not "Event: Foo; id=1234". There MUST be exactly one event type listed per event header. Multiple events per message are disallowed, i.e., subscribers MUST include exactly one "Event" header in SUBSCRIBE requests, indicating to which event or class of events they are subscribing. The "Event" header will contain a token which indicates the type of state for which a subscription is being requested. This token will correspond to an event package which further describes the semantics of the event or event class. The "Event" header MAY also contain an "id" parameter. When a subscription is created in the notifier, it stores the event package name and the "Event" header "id" parameter (if present) as part of the subscription information. This "id" parameter, if present:

public interface EventHeader extends Parameters, Header {

    // eventType - the new string defining the eventType supported in this
    // EventHeader. Throws java.text.ParseException, which signals that an
    // error has been reached unexpectedly while parsing the eventType value.
    public void setEventType(String eventType) throws ParseException;

    public String getEventType();

    // eventId - the new string defining the eventId of this EventHeader.
    // Throws java.text.ParseException, which signals that an error has been
    // reached unexpectedly while parsing the eventId value.
    public void setEventId(String eventId) throws ParseException;

    public String getEventId();
}
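The matching rule in the example above (the event type compares case-sensitively, the "id" parameter must match if present, and other parameters are ignored) can be sketched directly. This is my illustrative reading of the quoted example, not code from JAIN SIP:

```python
def parse_event_header(header):
    """Parse 'Event: type; p1=v1; ...' into (event_type, id_or_None)."""
    value = header.split(':', 1)[1].strip()
    parts = [p.strip() for p in value.split(';')]
    params = dict(p.split('=', 1) for p in parts[1:])
    return parts[0], params.get('id')

def event_headers_match(a, b):
    # Event type is case-sensitive; only the "id" parameter matters
    # for matching -- any other parameters are ignored.
    return parse_event_header(a) == parse_event_header(b)
```

This reproduces the javadoc's example: "Event: foo; id=1234" matches "Event: foo; param=abcd; id=1234" but not "Event: Foo; id=1234".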
http://grepcode.com/file/repository.jboss.org$nexus$content$repositories$releases@javax.sip$jain-sip-ri@1.2.148.6@javax$sip$header$EventHeader.java
On Tue, 20 Apr 2004 11:53:29 -0400, Itamar Shtull-Trauring <itamar at itamarst.org> wrote:
> Three suggestions so far:
>
> 1. Separate namespaces for each project.
>
>    twisted.internet
>    conch

Advantages:
  Easy on developers
  No distutils tricks required
Disadvantages:
  Litters the top-level namespace
  Less uniquely named packages must be renamed

> 2. Keep all projects and core under twisted.
>
>    twisted.internet
>    twisted.conch

Advantages:
  Easy on users.
  Keeps the top-level namespace clean.
  Keeps Twisted projects conceptually tied together.
Disadvantages:
  Distutils tricks required
  Missing projects confuse users ("importing twisted.conch failed? But I have Twisted installed :(")

> 3. Separate namespace for projects, e.g. 't' or 'tmlabs' (Zope3 was
> considering using 'z', though it looks like it won't happen in the end)
>
>    twisted.internet
>    t.conch or tmlabs.conch

Advantages:
  Keeps the top-level namespace clean.
  Keeps Twisted projects conceptually tied together.
Disadvantages:
  May require distutils tricks.
  Missing projects confuse users, but probably less than in #2

Amendments to the above advantage/disadvantage lists welcome. I mentioned "distutils tricks" a couple of times. So far I have heard both that distutils can and cannot do this. I suspect that it can, but I would like to hear details on how this would work. In particular, I would like?). I am not sure which of these I support yet, but I think I am leaning towards #3.

Jp
http://twistedmatrix.com/pipermail/twisted-python/2004-April/007597.html
I need to get the top 5 movies as ordered by average rating, ignoring any movies with fewer than 50 ratings. Can I do that using count()?

The code:

import numpy as np
import pandas as pd

r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
ratings = pd.read_csv('', sep='\t', names=r_cols)
ratings.head()

grouped_data = ratings['rating'].groupby(ratings['movie_id'])

## average and combine
average_ratings = grouped_data.mean()
print("Average ratings:")
print(average_ratings.head())

Many ways to skin a cat, as often in pandas; here are a couple:

1. Apply several functions to the groupby

Apply both mean and count to the groupby:

In [1]:
df = ratings['rating'].groupby(ratings['movie_id']).agg(['mean', 'count'])
df.head(3)
Out[1]:
              mean  count
movie_id
1         3.878319    452
2         3.206107    131
3         3.033333     90

Then you can filter it and return the 5 largest:

In [2]:
df.ix[(df['count'] >= 50), 'mean'].nlargest(5)
Out[2]:
movie_id
408    4.491071
318    4.466443
169    4.466102
483    4.456790
114    4.447761
Name: mean, dtype: float64

2. Use boolean indexing after the fact

This assumes you have executed the entire code of your question, so that average_ratings already exists:

movie_count = ratings.movie_id.value_counts()
higher_than_50_votes = movie_count.index[movie_count > 50]
# Apply that to your average_ratings, sort, and return
average_ratings.ix[higher_than_50_votes].sort_values(ascending=False).head(5)

3. Using groupby.filter

ratings.groupby('movie_id').filter(lambda x: len(x) > 50).groupby('movie_id')['rating'].mean().sort_values(ascending=False).head(5)
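To see the mean-and-count logic without downloading the ratings file, here is a dependency-free mirror of approach 1: group by movie, compute the mean and the vote count, drop groups under the threshold, and take the top results by mean. The data below is made up for illustration.

```python
from collections import defaultdict
from statistics import mean

def top_rated(ratings, min_count=50, n=5):
    """ratings: iterable of (movie_id, rating) pairs. Return the n
    highest average ratings among movies with >= min_count votes,
    as (movie_id, average) pairs sorted descending by average."""
    by_movie = defaultdict(list)
    for movie_id, rating in ratings:
        by_movie[movie_id].append(rating)
    eligible = {m: mean(rs) for m, rs in by_movie.items()
                if len(rs) >= min_count}
    return sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Movie 3 has a perfect average but only 10 votes, so it is excluded.
data = [(1, 5)] * 60 + [(2, 3)] * 60 + [(3, 5)] * 10
```

The `agg(['mean', 'count'])` idiom in the answer does the same thing in one pandas call; this version just makes the count-threshold-then-rank steps explicit.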
https://codedump.io/share/s0ZgLTdphSWf/1/using-count-in-python
Can't @Deployment methods be optional?
Mousavi Jahan Abadi S. M., Aug 29, 2011 9:30 PM

I know that static @Deployment methods are necessary for Arquillian to find the following information:
- Archive file type declaration (JAR? WAR? ...)
- Archive file name
- Archive file contents (classes, property files, ...)
- Deployment setting definition (multi-server support, ...)
- Deployment order definition
... and so on.

And, currently, test case classes are REQUIRED to have such static @Deployment methods. But I want to ask you to re-think whether it might be possible to make these methods optional.

Please consider the typical test situation. Basically, with regard to testing, there are two categories of code:
- Code to be tested (the target web application to be tested).
- Test code.

Since Arquillian is an "in-container test tool", in the typical situation we can consider that the "code to be tested" is already deployed/running inside the web container. So, typically, Arquillian doesn't need to archive the "code to be tested"; it only needs to archive/deploy/undeploy the "test code".

In normal test cases, each test case is a single self-contained test class; it doesn't have any relation with other test classes. The result of this is that for many situations only one JAR file (with any default name) that contains the test class itself is enough.

In my Java projects, I have many test cases using Arquillian. However, the result is that in all of them I am copying the following useless code again and again:

@Deployment
public static JavaArchive createTestArchive(){
    return ShrinkWrap.create(JavaArchive.class, "test.jar");
}

So, my suggestion is that Arquillian be smarter: make such static @Deployment methods optional, and have a default behavior whenever the @Deployment method doesn't exist (instead of making it an error).

Please don't tell me that defining one superclass and adding the above method to the superclass can solve it.
For sure, I know that solution, and users of Arquillian can use this approach. But for Arquillian as a test framework that wants to require minimal Arquillian-related code, I think it is not a good design pattern.

1. Re: Can't @Deployment methods be optional?
Aslak Knutsen, Aug 30, 2011 6:04 AM (in response to Mousavi Jahan Abadi S. M.)
1 of 1 people found this helpful

From 1.0.0.CR2 or so, Arquillian does not require a @Deployment method defined on the test class, but this forces client mode and assumes you're testing an existing deployment as a client (no in-container). But you can add in-container behavior yourself by writing an extension and creating an archive as you want outside of your test class.

Implement a LoadableExtension, register it in META-INF/services/org.jboss.arquillian.core.spi.LoadableExtension, then create an impl of org.jboss.arquillian.container.test.spi.client.deployment.DeploymentScenarioGenerator and register it in the LoadableExtension. Something in the lines of:

/*
 * Register in META-INF/services/org.jboss.arquillian.core.spi.LoadableExtension,
 * containing the fully qualified name of AutoInContainerTestExtension.
 */
public class AutoInContainerTestExtension implements LoadableExtension {
    public void register(ExtensionBuilder builder) {
        builder.service(DeploymentScenarioGenerator.class, AutoRegisterDeployment.class);
    }
}

public class AutoRegisterDeployment implements DeploymentScenarioGenerator {
    public List<DeploymentDescription> generate(TestClass testClass) {
        return Arrays.asList(
            new DeploymentDescription("AUTO-GEN",
                ShrinkWrap.create(JavaArchive.class)
                    .addClass(testClass.getJavaClass())));
    }
}

2. Re: Can't @Deployment methods be optional?
Mousavi Jahan Abadi S. M., Aug 31, 2011 9:01 PM (in response to Aslak Knutsen)

Aslak, thanks for your reply. Now I understand that @Deployment became optional in 1.0.0.RC2, and that we can add in-container behavior using the SPI extensions.
But I would like to make one suggestion (as a user of the framework):
- As you have mentioned, make the @Deployment methods optional.
- If a @Deployment method doesn't exist, the default behavior should be IN-CONTAINER deployment/testing of the current test case class (exactly the same as the implementation of the "AutoRegisterDeployment" class you mentioned in your reply).
- If a @Deployment method doesn't exist, ONLY methods with the "@RunAsClient" annotation should be run as tests in client mode.

The reason for the above suggestion is that, as the title logo of this page (Arquillian) says, "Test In-Container": the in-container test should be the default behavior. And, as a user who wants to use Arquillian for unit testing, it is too much/too difficult to add an SPI extension to support in-container behavior. Just a suggestion to make Arquillian more user-friendly.

3. Re: Can't @Deployment methods be optional?
Fekete Kamosh, Oct 7, 2011 11:45 AM (in response to Mousavi Jahan Abadi S. M.)

Hi, I am afraid I still do not follow the solution described here. Could you please give me a clue how to perform in-container tests of already deployed EJBs? Suppose this situation:

1) SLSB GreetingManagerBean (example from) is deployed onto JBoss 6.
2) The server is started and running.

I would like to make use of in-container injection and perform a test like this:

@EJB
private GreetingManager greetingManager;

@Test
public void shouldBeAbleToInjectEJB() throws Exception {
    String userName = "Earthlings";
    Assert.assertEquals("Hello " + userName, greetingManager.greet(userName));
}

How to do it without redeploying GreetingManagerBean?

Thank you, Fekete

4. Re: Can't @Deployment methods be optional?
Aslak Knutsen, Oct 14, 2011 11:35 AM (in response to Fekete Kamosh)

You can't if GreeterManager is a Local bean. Local beans can normally only be accessed within their own module, and you can't change an already deployed module.
If GreeterManager is a Remote bean, on the other hand, you can call it as a client from the client side, or deploy a 'dummy' deployment to move the test case to the container side and invoke it from there. But it won't be in the same module as GreeterBean.
https://developer.jboss.org/message/623654
Kerim Borchaev <warkid at hotbox.ru> writes:

> Hello c++-sig,
>
> I'm splitting my bpl code of an extension into multiple files and getting
> this linker error message:
>
> '''
> error LNK2005: "struct swallow_assign::boost::detail::swallow_assign
> boost::tuples::`anonymous namespace'::ignore"
> (?ignore@?A0xdb06aef3 at tuples@boost@@3Uswallow_assign at detail@23 at A)
> already defined in ...
> '''
>
> Alas, I am not able to reproduce it for simple examples.
> Any ideas why it could happen?

Yeah. Please try checking out the latest boost/tuple/detail/tuple_basic_no_partial_spec.hpp from CVS and see if that fixes your problem.

--
David Abrahams
dave at boost-consulting.com
* Boost support, enhancements, training, and commercial distribution
https://mail.python.org/pipermail/cplusplus-sig/2002-November/002374.html
Hi!

On Sun, 2016-01-31 at 14:43:08 +0100, Jérémy Bobbio wrote:
> Guillem Jover:
> > > How about naming the field "Environment-Variables"?
> >
> > Hmm, or Environment, or Build-Environment, which reminds me that I've
> > found the usage of Build-Environment (as the list of transitively
> > required packages) slightly confusing, precisely because the first
> > thing that comes to mind with environment is the variable space.
> >
> > Perhaps we should consider renaming that one? Say Build-Packages (but
> > that might be confusing), Build-Depends-Used, or something else? We
> > also already have a Built-Using field too (although for source
> > packages not binary ones, with a name I've also found slightly
> > confusing as being too generic).
>
> Ok. What about "Environment" for the variables,

I'm not sure if it'd be better to be explicit about this being a build thing, and not just a random environment. Are you worried about confusion with the previous usage of the field with the same name?

> and "Installed-Build-Depends" for the list of packages?

I asked for more suggestions on #debian-dpkg, and Johannes Schauer suggested Transitive-Build-Depends, which is something I had in mind too (that or «Recursive-»), but kind of softly discarded in trying to have a consistently namespaced «Build-» field name. :)

Some of the reasons Johannes put forward are that this name is better because it clearly describes the exact purpose of the field, and gives no room for misinterpretation. And if we had to change the algorithm we could just use a new name. All of which I concur with.

(BTW I also realized that I don't think we are including «Essential:yes» packages in that set, and we should.)

Thanks,
Guillem

_______________________________________________
Reproducible-builds mailing list
Reproducible-builds@lists.alioth.debian.org
https://www.mail-archive.com/reproducible-builds@lists.alioth.debian.org/msg04140.html
stm32plus: ILI9327 TFT driver

The TFT panel

The ILI9327 is a driver IC for 432×240 (WQVGA) panels. The panels are typically found in mobile phones; LG went through a phase of producing lots of phones with resolutions close to this, as did several other manufacturers. I got one of these panels on ebay. It came attached to a handy breakout board, though I have seen others that come with just the FPC tail if you're feeling adventurous. This particular board seems to have the tail soldered directly to it somewhere underneath the panel.

The front of the board
The back of the board

There's also an ADS7843-compatible touch screen driver and an SD card cage. This is a configuration we often see on development boards sourced from China.

Pinout

The seller included the pinout for the display. It's a familiar 16-bit 8080 interface that is easily connected to the FSMC of the STM32 microcontroller. There's no sign of a step-up DC-DC converter on the board, so the white LEDs that make up the backlight must be in a parallel configuration.

The pinout

The touch screen is compatible with the ADS7843 controller and can be hooked up to my stm32plus driver. I've had varying luck with the touch screens attached to these cheap boards. The ADS7843 is an A-D converter, which means that the board should be carefully designed to minimise noise, and it seems that not all of them are that well thought out.

Panel details

This panel is slightly unusual in that its resolution is 400×240, which is less than the full 432×240 supported by the ILI9327. How does that manifest itself? Well, the co-ordinates from 0-31 are not visible. That is, you can write to them but nothing will appear on the screen. Therefore we have to make allowances for that in the stm32plus driver, and you will see how in the demo code.
My demo sequence exercises some of the common functions used in graphics operations, such as rectangle, line and ellipse drawing, as well as text rendering and hardware scrolling when the panel supports it (the ILI9327 does support hardware scrolling in portrait and landscape modes).

The datasheet for the ILI9327.

In my example code for this panel I'm using SRAM bank 1 and A16 for RS (register select). This configuration is compatible with the 100 and 144 pin STM32F103 devices.

stm32plus driver

stm32plus 2.0.0 comes with an updated ILI9327 demo application. Here's an extract from the source code that shows how to set it up:

#include "config/stm32plus.h"
#include "config/display/tft.h"

using namespace stm32plus;
using namespace stm32plus::display;

class ILI9327Test {
  protected:]);

    // declare a panel
    _gl=new LcdPanel(*_accessMode);

    // apply gamma settings
    ILI9327Gamma gamma(0,0x10,0,0,3,0,0,1,7,5,5,0x25,

18 bit colour (262K). If you take a look at TftInterfaces.h you will see that the following modes are available:

ILI9327_400x240_Portrait_64K
ILI9327_400x240_Landscape_64K
ILI9327_400x240_Portrait_262K
ILI9327_400x240_Landscape_262K

The predefined drivers are just C++ typedefs that bring together the necessary combination of template instantiations to create a coherent graphics library. If you have an ILI9327 panel that is not 400×240 then you can support it by supplying your own ILI9327400x240PanelTraits class. See the code for details.

Gamma correction

// apply gamma settings
ILI9327Gamma gamma(0,0x10,0,0,3,0,0,1,7,5,5,0x25

Updated source code is in stm32plus 2.0.0 (or later), available from my downloads page.

Watch the videos

I've uploaded a pair of short videos that show the demo code in action. Firstly, here it is on the STM32F103. Secondly, here it is on the STM32F4 Discovery board.
http://andybrown.me.uk/2012/07/18/stm32plus-ili9327-tft-driver/
On the Web server, the .NET Framework maintains a pool of threads that are used to service ASP.NET requests. When a request arrives, a thread from the pool is dispatched to process that request. If the request is processed synchronously, the thread that processes the request is blocked while the request is being processed, and that thread cannot service another request. If you have multiple long-running requests, all available threads might be blocked and the Web server rejects any additional request with an HTTP 503 status (Server Too Busy).

To solve this issue in ASP.NET MVC, actions can be processed asynchronously by deriving your controllers from the AsyncController class. For example, let's convert this code sample into a more efficient asynchronous alternative:

public class HomeController : Controller
{
    public ActionResult LongRunningAction()
    {
        DoLengthyOperation();
        return View();
    }

    private void DoLengthyOperation()
    {
        Thread.Sleep(5000);
    }
}

To convert it, the LongRunningAction method has been turned into two methods: LongRunningActionAsync and LongRunningActionCompleted. The LongRunningActionAsync method returns void. The LongRunningActionCompleted method returns an ActionResult instance. Although the action consists of two methods, it is accessed using the same URL as for a synchronous action method (for example, Home/LongRunningAction). Methods such as RedirectToAction and RenderAction will also refer to the action method as LongRunningAction and not LongRunningActionAsync. The parameters that are passed to LongRunningActionAsync use the normal parameter binding mechanisms. The parameters that are passed to LongRunningActionCompleted use the Parameters dictionary. Replace the synchronous call in the original ActionResult method with an asynchronous call in the asynchronous action method.
public class HomeController : AsyncController
{
    public void LongRunningActionAsync()
    {
        AsyncManager.OutstandingOperations.Increment();
        Task.Factory.StartNew(() => DoLengthyOperation());
    }

    private void DoLengthyOperation()
    {
        Thread.Sleep(5000);
        AsyncManager.Parameters["message"] = "hello world";
        AsyncManager.OutstandingOperations.Decrement();
    }

    public ActionResult LongRunningActionCompleted(string message)
    {
        return View();
    }
}

1 comment: That's a great post ... but how to handle incoming values in the action and how to handle invalid ModelState or exceptions raised by the long process ... thanks
http://bartwullems.blogspot.com/2010/01/using-asynccontroller-in-aspnet-mvc-2.html
Up to Design Issues. See the main Notation3 page.

Tokenizing is not explicitly specified, in that white space is not in the BNF, for simplicity here. White space must be inserted whenever the following token could start with a character which could extend the preceding token. All URIs are quoted with angle brackets. Qualified names have colons, so unquoted alphanumerics are all keywords, unless the @keywords directive is given, in which case the keywords given are keywords and anything else is a localname in the default namespace. Any keyword may be given, even if not in the keyword list, by prefacing it with "@".

Non-terminal productions are defined first, terminals after.

statementlist: statement period statementlist
statement: directive | universal | existential | subject property-list

verb:
  prop (same as: has xxx)
  has prop (same as: xxx)
  is prop of (inverse direction)
  a (same as: has rdf:type)
  = (same as: has daml:equivalent)
  => (same as: log:implies)
  <= (same as: is log:implies of)

node:
  anonnode | variable | number | string
  this (identifies the current formula; deprecated from 2002/08)
  [ property-list ] (a blank node, read as "something which ...")
  { statementlist } (a formula, the statementlist itself as a literal resource)
  ( itemlist ) (short for e.g. [ rdf:first node1; rdf:rest [ rdf:first node2; rdf:rest rdf:nil ]])

universal: @forAll uriref2list period
existential: @forSome urirefslist period
uriref2list: uriref2 uriref2list

prefix: alphanumeric | _ (special ntuples hack: everything in the _ namespace is implicitly existentially quantified at the current scope; must not be used in more than one scope)

property-list: verb objectlist | verb objectlist ; propertylist
objectlist: object | object , objectlist

path:
  node of path (not implemented, just an idea)
  path traverse node (same as [ is node of path ])
  path uparrow node (same as [ node path ])
itemlist: item itemlist

uriref2: < URI-Reference > | localname (??? allow omitting the colon when the prefix is void; keyword clash)
directive: @keywords keywordlist period
localname: alphanumeric
keywordlist: localname keywordlist
number: rational | real | integer
digitstring: digit digitstring
rational: digitstring . digitstring (a rational number expressed as a decimal)

@prefix @keywords a this = => <= < > : are all terminals which are just the strings themselves. The following are other terminal symbols: any digit 0-9, and _ (the underscore).
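The tokenizing note above (white space must be inserted wherever the next token could start with a character that could extend the previous one) is easy to experiment with. Below is an illustrative sketch, not the W3C implementation: a tiny Java tokenizer covering only a few of the terminals listed (angle-bracket URI references, qualified names, "@" keywords, and punctuation); the class and pattern names are my own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of an N3 tokenizer for a small subset of the terminals above.
public class N3Tokens {
    private static final Pattern TOKEN = Pattern.compile(
        "<[^>]*>"                          // URI reference in angle brackets
      + "|@[A-Za-z]+"                      // explicit keyword, e.g. @prefix, @forAll
      + "|[A-Za-z0-9_]*:[A-Za-z0-9_]*"     // qualified name (prefix:localname)
      + "|[A-Za-z0-9_]+"                   // bare alphanumeric (keyword or localname)
      + "|[.;,{}()\\[\\]=]"                // punctuation terminals
    );

    public static List<String> tokenize(String line) {
        List<String> out = new ArrayList<>();
        Matcher m = TOKEN.matcher(line);
        while (m.find()) out.add(m.group());  // whitespace between tokens is skipped
        return out;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("@prefix log: <http://www.w3.org/2000/10/swap/log#> ."));
    }
}
```

Because the URI alternative is tried first, characters inside angle brackets never extend a neighbouring qualified name, which is exactly the ambiguity the white-space rule exists to prevent.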
http://www.w3.org/DesignIssues/OldNotation3Grammar.html
WinJS contains several useful classes which are unfortunately hidden (their names start with '_'). In this post we will look at one of them, WinJS._Signal. A common use of this class is to manage a promise: to complete, cancel, or fail it. The promise itself is a simple concept, but sometimes it's awkward to complete one. Let's see an example of how to create and manage a promise: As you can see, a promise can be created when it's necessary: - to wrap an asynchronous operation and complete/fail it on the operation's behalf, shown in Example #1 - to use a promise as a synchronization construct (e.g. a promise that is completed based on some event), as Example #2 shows. Usage #2 is very awkward. That's the reason the WinJS._Signal class exists. Synchronization problem It's quite common to need to synchronize two code paths where one of them depends on an event. It would be possible to write the code into the event handler and do the work there, but then the logic ends up split all over the code base. Wouldn't it be better to keep the code in one place? Let's see how it's possible to do it with WinJS._Signal: We could enhance the previous sample and create a similar signal/event couple for when the internet connection is established, join the loaded and internet-connection promises, and then call xhr(uri). The main advantage is that you have one construct (_Signal) through which it's possible to complete/cancel/fail the promise. I hope a future WinJS release will expose it as a public class.
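For comparison, the same "signal" idea exists outside WinJS: a promise object whose completion is driven from the outside, so the waiting code and the event-handling code can live in different places. A minimal sketch in plain Java using CompletableFuture (unrelated to WinJS itself; the names are mine):

```java
import java.util.concurrent.CompletableFuture;

// A "signal" is just a promise completed from outside: hand the future to the
// waiting code, keep the completion side for the event handler.
public class SignalDemo {
    public static void main(String[] args) {
        CompletableFuture<String> signal = new CompletableFuture<>();

        // The "waiting" code path: the continuation lives in one place.
        CompletableFuture<String> done =
            signal.thenApply(msg -> "handled: " + msg);

        // The "event handler" code path: complete the signal when the
        // event fires (here, immediately, for the demo).
        signal.complete("loaded");

        System.out.println(done.join()); // prints "handled: loaded"
    }
}
```

As with _Signal, the consumer never sees the completion methods, only the promise side, which keeps the synchronization logic in one place.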
https://blogs.msdn.microsoft.com/fkaduk/2013/11/18/winjs-internals-winjs-_signal/
Java Database Connectivity (JDBC) is a standard application programming interface (API) specification that is implemented by different database vendors to allow Java programs to access their database management systems. The JDBC API consists of a set of interfaces and classes written in the Java programming language. Interestingly, JDBC is not an acronym and thus officially stands for nothing, although it is popularly read as Java Database Connectivity. According to Sun (now merged into Oracle), JDBC is a trademarked term and not an acronym; it was named to be redolent of ODBC. A JDBC implementation comes in the form of a database driver for a particular DBMS vendor. A Java program that uses the JDBC API loads the specified driver for a particular DBMS before it actually connects to a database. JDBC comes bundled with Java SE as a set of APIs that facilitate Java programs accessing data stored in a database management system, particularly in a relational database. Several design goals were kept in mind when JDBC was designed. To start using JDBC you would first need a database system, and then a JDBC driver for your database. In this article JDBC will be explained with MySQL, but you can use it with any database system (such as Microsoft SQL Server, Oracle, IBM DB2, PostgreSQL), provided that you have a JDBC driver for that database system. Suppose you have MySQL and a functional Java environment installed on your system, where you write and execute Java programs. For MySQL, download MySQL Connector/J, the official JDBC driver for MySQL. You will get a JDBC driver mysql-connector-java-***-bin.jar in the zipped driver bundle. Place this JAR on Java's classpath, because this JAR is the JDBC driver for MySQL and will be used throughout.
After having the environment ready, to experiment with JDBC you need to create a database and at least one table in the database containing a few records. We name the example database EXPDB, and the table inside EXPDB is EXPTABLE. I create this database with root privileges; if you do so it's fine, but if you create the database as a different user then ensure that you have permissions to create, update and drop tables in it (these operations are performed via JDBC later). Now, let's create the EXPDB database, a table EXPTABLE in EXPDB, and insert a few records as follows. We execute the following steps manually; later we will connect to this database with the help of JDBC. To experiment with JDBC you have to create a database and connect to it. On successful connection you get the MySQL command prompt mysql> as follows:

C:\> mysql -h localhost -u root
Enter password: *****
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.1.46-community

To create a database you supply the CREATE DATABASE command followed by the database name and then a semicolon.

mysql> CREATE DATABASE EXPDB;
Query OK, 1 row affected (0.08 sec)

mysql>

Once you have created the database you have to select it for use in order to perform operations on it. The command USE <DATABASE-NAME> begins a mysql (the MySQL command-line tool) session and lets you perform database operations. Note that you need to create the database only once, but have to select it each time you start a mysql session.
mysql> USE EXPDB;
Database changed

mysql>

EXPTABLE, the example table used to demonstrate JDBC, is created by issuing the CREATE TABLE command as shown below:

mysql> CREATE TABLE EXPTABLE (
    -> ID INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    -> NAME VARCHAR (50)
    -> );
Query OK, 0 rows affected (0.20 sec)

mysql>

Just for illustration, two records are inserted into EXPTABLE; you can insert more if you like. Later we will perform select and edit operations on these records using JDBC.

mysql> INSERT INTO EXPTABLE (NAME) VALUES ("ANUSHKA K");
Query OK, 1 row affected (0.09 sec)

mysql> INSERT INTO EXPTABLE (NAME) VALUES ("GARVITA K");
Query OK, 1 row affected (0.00 sec)

mysql>

Select the records from EXPTABLE to see that both records were inserted correctly in the previous step.

mysql> SELECT * FROM EXPTABLE;
+----+-----------+
| ID | NAME      |
+----+-----------+
|  1 | ANUSHKA K |
|  2 | GARVITA K |
+----+-----------+
2 rows in set (0.03 sec)

mysql>

Up to now you have created a database (EXPDB) and a table (EXPTABLE) within it, inserted two records into the table, and listed those records to be sure that everything went fine so far. Now we will access EXPDB through a Java program and JDBC, and insert and list records of EXPTABLE from within the program. So far we have gone through basic JDBC concepts and created a trivial MySQL database to connect to through JDBC from within a Java program. To access the database through a Java program and JDBC, you would require the following items in addition. The database URL for a JDBC connection, or JDBC connection string, follows more or less similar syntax to that of ordinary URLs. It tells the protocol used to connect to the database, the subprotocol, the location of the database, the port number on which the database listens for client requests, and the database name. The example syntax may look like jdbc:mysql://localhost:3306/EXPDB.
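To make the structure of the connection string concrete, here is a small illustrative sketch that pulls the example URL apart with plain string handling. This is not part of the JDBC API, just a demonstration of the pieces named above:

```java
// Illustrative only: split a JDBC URL of the form
// jdbc:<subprotocol>://<host>:<port>/<database> into its parts.
public class JdbcUrlParts {
    public static void main(String[] args) {
        String url = "jdbc:mysql://localhost:3306/EXPDB";

        String[] bySlash = url.split("/");   // ["jdbc:mysql:", "", "localhost:3306", "EXPDB"]
        String[] byColon = url.split(":");   // ["jdbc", "mysql", "//localhost", "3306/EXPDB"]

        System.out.println("protocol    = " + byColon[0]);                  // jdbc
        System.out.println("subprotocol = " + byColon[1]);                  // mysql
        System.out.println("host        = " + bySlash[2].split(":")[0]);    // localhost
        System.out.println("port        = " + bySlash[2].split(":")[1]);    // 3306
        System.out.println("database    = " + bySlash[3]);                  // EXPDB
    }
}
```

The DriverManager uses exactly the subprotocol part (mysql here) to select a matching registered driver, which is why the driver JAR must be on the classpath.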
The aforementioned URL specifies a MySQL database named EXPDB running on localhost on port 3306. We have obtained the JDBC driver in the form of a JAR file (mysql-connector-java-***-bin.jar) in which the driver for the MySQL database is located. This driver needs to be registered in order to access EXPDB. The driver class name for MySQL is com.mysql.jdbc.Driver. This class has to be loaded into memory before you connect to the database, or else you will run into a java.sql.SQLException: No suitable driver exception. To get a JDBC connection to the database you would require the username and password, the same username and password we used while connecting to MySQL. Now we will start writing a Java program to connect to EXPDB (our example database) through JDBC and perform INSERT and SELECT operations for demonstration. To illustrate this piece of work we will take the following steps, and finally collect all pieces of code to assemble the complete program. Registering the JDBC driver class with the DriverManager means loading the JDBC driver class into memory. You can load the JDBC driver class in two ways. One way to load the JDBC driver class in a Java program is as follows:

try {
    // loads com.mysql.jdbc.Driver into memory
    Class.forName("com.mysql.jdbc.Driver");
} catch (ClassNotFoundException cnf) {
    System.out.println("Driver could not be loaded: " + cnf);
}

If you look at the above piece of code you will see that Class.forName takes a string, the fully qualified class name of the JDBC driver class, as an argument and loads the corresponding class into memory. It throws a ClassNotFoundException if it fails to locate the driver class; that's the reason it is surrounded by a try block. Alternatively, you can load the JDBC driver class by setting the jdbc.drivers property.
Then at run time you specify the property with a command-line argument as follows:

C:\>java -Djdbc.drivers=com.mysql.jdbc.Driver <Program Name>

You can also set the system property from within Java code. In order to connect to the example database EXPDB you need to open a database connection in the Java program, which you can do as follows using the JDBC driver:

// jdbc driver connection string, db username and password
private String connectionUrl = "jdbc:mysql://localhost:3306/EXPDB";
private String dbUser = "root";
private String dbPwd = "mysql";
private Connection conn;

try {
    conn = DriverManager.getConnection(connectionUrl, dbUser, dbPwd);
} catch (SQLException sqle) {
    System.out.println("SQL Exception thrown: " + sqle);
}

The above piece of code, using DriverManager, gets you a connection to the database specified by connectionUrl. When the code fragment is executed, the DriverManager iterates through the registered JDBC drivers to find a driver that can handle the subprotocol specified in the connectionUrl. Don't forget to surround the getConnection() code with a try block, because it can throw an SQLException. Now that you have a JDBC Connection object conn, you would like to execute SQL statements through it. A connection in JDBC is a session with a specific database, where SQL statements are executed and results are returned within the context of that connection. To execute SQL statements you need a Statement object, which you acquire by invoking the createStatement() method on conn as illustrated below.

try {
    Statement stmt = conn.createStatement();
} catch (SQLException sqle) {
    System.out.println("SQL Exception thrown: " + sqle);
}

You get a Statement object stmt by executing the above piece of code. By definition, createStatement() throws an SQLException, so the line Statement stmt = conn.createStatement(); should either be surrounded by a try block or throw the exception further.
On successful creation of stmt you can send SQL queries to your database with the help of the executeQuery() and executeUpdate() methods; Statement has many more useful methods as well. Next, form a query that you would like to execute on the database. For instance, we will select all records from EXPDB, our example database. Method executeQuery() gets you a ResultSet object that contains the query results. You can think of a ResultSet object as a two-dimensional array, where each row represents one record. All rows have an identical number of columns; some columns may contain null values. It all depends on what is stored in the database.

String queryString = "SELECT * FROM EXPTABLE";

try {
    ResultSet rs = stmt.executeQuery(queryString);
} catch (SQLException sqle) {
    System.out.println("SQL Exception thrown: " + sqle);
}

Here again, method executeQuery() throws an SQLException, so it should either be surrounded by a try block or further throw the exception. After getting a ResultSet object rs you may like to process the records for further use. Here, for illustration, we will print them on screen.

System.out.println("ID \tNAME");
System.out.println("============");
while (rs.next()) {
    System.out.print(rs.getInt("id") + "\t" + rs.getString("name"));
    System.out.println();
}

In the above code snippet we process one record, that is one row, at a time. ResultSet's next() method moves one record forward per iteration; it returns true until it reaches the last record, and false when there are no more records to process. We access the ResultSet's columns by supplying column headers to the getXxx() methods as they appear in EXPTABLE, e.g. rs.getInt("id"). You can also access those columns by supplying column indices to the getXxx() methods; e.g. rs.getInt(1) will return the first column of the current row pointed to by rs. Note that the first column in a ResultSet row has index 1, not zero as is usual in Java.
As mentioned earlier, a JDBC connection to a database is a session; as soon as you close the session you are no longer connected to the database and therefore cannot perform any operation on it. Closing the connection must be the very last step when you are done with all database operations. You can close a connection to the database as follows:

try {
    if (conn != null) {
        conn.close();
        conn = null;
    }
} catch (SQLException sqle) {
    System.out.println("SQL Exception thrown: " + sqle);
}

Method close(), too, throws an SQLException, as the other methods that perform database operations do, so it should also be surrounded by a try block. Let's assemble the JDBC code fragments explained in steps 1 (Registering the JDBC Driver Class) through 4 (Closing the JDBC Connection) into a working program.

/* JDBC_Connection_Demo.java */

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JDBC_Connection_Demo {

    /* static block is executed when a class is loaded into memory;
     * this block loads MySQL's JDBC driver */
    static {
        try {
            // loads com.mysql.jdbc.Driver into memory
            Class.forName("com.mysql.jdbc.Driver");
        } catch (ClassNotFoundException cnf) {
            System.out.println("Driver could not be loaded: " + cnf);
        }
    }

    public static void main(String[] args) {
        String connectionUrl = "jdbc:mysql://localhost:3306/EXPDB";
        String dbUser = "root";
        String dbPwd = "mysql";
        Connection conn;
        ResultSet rs;
        String queryString = "SELECT ID, NAME FROM EXPTABLE";

        try {
            conn = DriverManager.getConnection(connectionUrl, dbUser, dbPwd);
            Statement stmt = conn.createStatement();

            // INSERT A RECORD
            stmt.executeUpdate("INSERT INTO EXPTABLE (NAME) VALUES (\"TINU K\")");

            // SELECT ALL RECORDS FROM EXPTABLE
            rs = stmt.executeQuery(queryString);
            System.out.println("ID \tNAME");
            System.out.println("============");
            while (rs.next()) {
                System.out.print(rs.getInt("id") + ".\t" + rs.getString("name"));
                System.out.println();
            }

            if (conn != null) {
                conn.close();
                conn = null;
            }
        } catch (SQLException sqle) {
            System.out.println("SQL Exception thrown: " + sqle);
        }
    }
} // JDBC_Connection_Demo ends here

--------------------------------------- OUTPUT ------
ID      NAME
============
1.      ANUSHKA K
2.      GARVITA K
3.      TINU K

In order to execute the above program you have to include the MySQL JDBC driver JAR on Java's classpath. We used the mysql-connector-java-5.1.13-bin.jar JDBC driver; it may be a different version in your case. This tutorial explained JDBC and MySQL connection steps through an example.
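The program above closes the connection by hand. On Java 7 or later you can let try-with-resources do the closing, because Connection, Statement and ResultSet all implement AutoCloseable. Here is a sketch of the same SELECT flow under that style; it assumes the same hypothetical EXPDB/EXPTABLE setup, and when no MySQL server or driver is available it simply reports the SQLException:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcTryWithResources {
    public static void main(String[] args) {
        String url = "jdbc:mysql://localhost:3306/EXPDB";
        // The three resources are closed automatically, in reverse order,
        // when the try block exits, whether normally or via an exception.
        try (Connection conn = DriverManager.getConnection(url, "root", "mysql");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT ID, NAME FROM EXPTABLE")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + "\t" + rs.getString("name"));
            }
        } catch (SQLException sqle) {
            System.out.println("SQL Exception thrown: " + sqle);
        }
    }
}
```

This removes the manual close()/null bookkeeping entirely and guarantees cleanup even when executeQuery() throws mid-way.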
http://cs-fundamentals.com/java-programming/java-jdbc-connection-tutorial.php
namespace LightSwitchApplication
{
    public partial class ReportPreviewScreen
    {
        partial void ReportPreviewScreen_Activated()
        {
            // Assign the name of the report which you want to preview in this screen.
            this.ReportTypeName = "LightSwitchApplication.XtraReport1";
        }
    }
}

In this blog post I will explain another small feature of XtraReports for LightSwitch: the ability to show your reports in a separate screen. DevExpress has added LightSwitch support to their award-winning reporting solution, XtraReports.

I have to insert more than 500,000 rows into my reports, but this takes much time. Please, how can I accelerate it? I did not find anything which helps.
https://community.devexpress.com/blogs/seth/archive/2011/07/14/lightswitch-reporting-showing-your-reports-in-a-separate-screen.aspx
Introduction

This article is intended to illustrate how to implement callback operations in Windows Communication Foundation through a common business scenario in which the service needs to notify the client that some event has happened. During a callback, in many respects the tables are turned: the service becomes the client, and the client becomes the server. So, we need to develop an application supporting this solution.

Implementing the solution

The first thing to know before implementing this approach is that not all bindings support callback operations; only bidirectional-capable bindings can be used. Due to the connectionless nature of HTTP, this transport protocol cannot be used for callbacks; that is, you cannot use the BasicHttpBinding or WSHttpBinding for this purpose. In order to support callbacks in your application, WCF provides the WSDualHttpBinding, which actually sets up two HTTP channels: one for the calls from the client to the server, and one for the calls from the server to the client. Now, let's create a console application in Visual Studio .NET to host the service and add a reference to the System.ServiceModel assembly.

Defining the callback contract

Callback operations are part of the service contract, and a service contract can have at most one callback contract. Once it is defined, the clients are required to support the callback and to provide the callback endpoint to the service in every call. In our example, we have a service which performs a long-running math calculation. We want to be notified when the calculation operation begins.
The ServiceContract attribute annotates our contract and offers a CallbackContract property of type Type for setting up our callback contract, as shown in Listing 1.

using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel;

namespace CallbackApp
{
    public interface IMathCalculationCallback
    {
        [OperationContract]
        void OnCalculating();
    }

    [ServiceContract(CallbackContract = typeof(IMathCalculationCallback))]
    public interface IMathCalculation
    {
        [OperationContract]
        int DoLongCalculation(int nParam1, int nParam2);
    }
}

Listing 1. Definition of the service contract and the associated callback contract.

The service implementation

In order to invoke the client callback from the service, we need a reference to the callback object. When the client invokes the service operations, it supplies a callback channel for the communication with the server through the callback. This channel can be referenced from the server by calling the GetCallbackChannel operation on the global OperationContext instance, as shown in Listing 2. The DoLongCalculation operation does some long initialization and then notifies the client that a complex operation begins now (in our case this "complex" operation is just nParam1 + nParam2, a bit of a joke, padded with a System.Threading.Thread.Sleep(10000) delay).

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
public class MathCalculationService : IMathCalculation
{
    public int DoLongCalculation(int nParam1, int nParam2)
    {
        System.Threading.Thread.Sleep(10000);
        IMathCalculationCallback objCallback =
            OperationContext.Current.GetCallbackChannel<IMathCalculationCallback>();
        if (objCallback != null)
        {
            objCallback.OnCalculating();
        }
        System.Threading.Thread.Sleep(10000);
        return nParam1 + nParam2;
    }
}

Listing 2. The MathCalculationService implementation.

You may notice that the service invokes the callback reference while executing the operation DoLongCalculation.
By default the service class is configured for single-threaded access: the service instance is associated with a lock, and only one thread at a time can own the lock and access the instance. When the service invokes the callback reference while executing one of its operations, the service thread blocks, because the thread processing the reply message from the client (once the callback returns a response message) requires ownership of the same lock, so a deadlock occurs. There are several ways to avoid this situation; the one used here is to mark the service as reentrant, which is what the ServiceBehavior attribute in Listing 2 does.

The application's main workflow is shown in Listing 3.

class Program
{
    static void Main(string[] args)
    {
        MathCalculationServiceHost.StartService();
        System.Console.WriteLine("Please, press any key to finish ...");
        System.Console.Read();
    }
}

Listing 3. The service application's main workflow.

And finally the configuration file is shown in Listing 4.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="CallbackApp.MathCalculationService" behaviorConfiguration="MathCalculationServiceBeh">
        <endpoint contract="CallbackApp.IMathCalculation" binding="wsDualHttpBinding"/>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="MathCalculationServiceBeh">
          <serviceDebug includeExceptionDetailInFaults="true" />
          <serviceMetadata httpGetEnabled="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Listing 4. The service application's configuration file.

Developing the client side

Now, add another application to the solution and name it CallbackClientApp, and also add a reference to the System.ServiceModel assembly. In order to generate the proxy class, you need to open a command window, change to the directory of the client application, and run the svcutil command shown in Listing 5. As you can see, the generated proxy inherits from the class System.ServiceModel.DuplexClientBase.

svcutil

Listing 5.
The generated proxy is shown in Listing 6.

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version: 2.0.50727.42
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace CallbackClientApp
{
    [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "3.0.0.0")]
    [System.ServiceModel.ServiceContractAttribute(ConfigurationName = "IMathCalculation",
        CallbackContract = typeof(IMathCalculationCallback))]
    public interface IMathCalculation
    {
        [System.ServiceModel.OperationContractAttribute(Action = "", ReplyAction = "")]
        int DoLongCalculation(int nParam1, int nParam2);
    }

    public interface IMathCalculationCallback
    {
        [System.ServiceModel.OperationContractAttribute(Action = "", ReplyAction = "")]
        void OnCalculating();
    }

    public interface IMathCalculationChannel : IMathCalculation, System.ServiceModel.IClientChannel
    {
    }

    [System.Diagnostics.DebuggerStepThroughAttribute()]
    public partial class MathCalculationClient :
        System.ServiceModel.DuplexClientBase<IMathCalculation>, IMathCalculation
    {
        public MathCalculationClient(System.ServiceModel.InstanceContext callbackInstance)
            : base(callbackInstance)
        {
        }

        public MathCalculationClient(System.ServiceModel.InstanceContext callbackInstance,
            string endpointConfigurationName)
            : base(callbackInstance, endpointConfigurationName)
        {
        }

        public MathCalculationClient(System.ServiceModel.InstanceContext callbackInstance,
            string endpointConfigurationName, string remoteAddress)
            : base(callbackInstance, endpointConfigurationName, remoteAddress)
        {
        }

        public MathCalculationClient(System.ServiceModel.InstanceContext callbackInstance,
            string endpointConfigurationName, System.ServiceModel.EndpointAddress remoteAddress)
            : base(callbackInstance, endpointConfigurationName, remoteAddress)
        {
        }

        public MathCalculationClient(System.ServiceModel.InstanceContext callbackInstance,
            System.ServiceModel.Channels.Binding binding,
            System.ServiceModel.EndpointAddress remoteAddress)
            : base(callbackInstance, binding, remoteAddress)
        {
        }

        public int DoLongCalculation(int nParam1, int nParam2)
        {
            return base.Channel.DoLongCalculation(nParam1, nParam2);
        }
    }
}

Listing 6. The generated proxy.

In order to use the callback capabilities, the client application needs to create an instance of a class which implements the callback logic, host it in an InstanceContext, create the proxy, and call the service passing the callback instance's reference. Let's define the callback class as shown in Listing 7.

class MathCalculationCallback : IMathCalculationCallback
{
    #region IMathCalculationCallback Members

    public void OnCalculating()
    {
        System.Console.WriteLine("The server begins to calculate, please wait for a moment ...");
    }

    #endregion
}

Listing 7. The callback class definition.

Then, we define the client's main workflow as shown in Listing 8.

class Program
{
    static void Main(string[] args)
    {
        IMathCalculationCallback objCallback = new MathCalculationCallback();
        InstanceContext objContext = new InstanceContext(objCallback);
        int nResult = 0;
        using (MathCalculationClient objProxy = new MathCalculationClient(objContext))
        {
            nResult = objProxy.DoLongCalculation(1, 2);
        }
        System.Console.WriteLine("The result is {0}", nResult);
        System.Console.WriteLine("Press any key to finish ...");
    }
}

Listing 8. The client's main workflow.

And the configuration file is shown in Listing 9.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.serviceModel>
    <bindings>
      <wsDualHttpBinding>
        <binding name="WSDualHttpBinding_IMathCalculation">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <reliableSession ordered="true" inactivityTimeout="00:10:00" />
          <security mode="Message">
            <message clientCredentialType="Windows" negotiateServiceCredential="true"
                     algorithmSuite="Default" />
          </security>
        </binding>
      </wsDualHttpBinding>
    </bindings>
    <client>
      <endpoint address="" binding="wsDualHttpBinding"
                bindingConfiguration="WSDualHttpBinding_IMathCalculation"
                contract="IMathCalculation" name="WSDualHttpBinding_IMathCalculation">
      </endpoint>
    </client>
  </system.serviceModel>
</configuration>

Listing 9. The configuration file.
Let's see the client application's output in Figure 1.

Figure 1: The client application's output.

Conclusion

In this article, I covered the main concepts and strategies to develop a WCF service which notifies events to the client using callback operations, and how the client must implement the logic associated with the event handling.
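Stripped of the WCF plumbing, the essence of the pattern is just a service invoking a client-supplied interface in mid-operation. A language-neutral sketch in plain Java, with hypothetical names mirroring the article's (no channels, locks, or delays):

```java
// The callback contract the client must implement.
interface MathCalculationCallback {
    void onCalculating();
}

// The "service": it receives the callback reference with the call and
// invokes it before doing the actual work.
class MathCalculationService {
    int doLongCalculation(int a, int b, MathCalculationCallback cb) {
        cb.onCalculating();   // notify the client before the long step
        return a + b;         // the "long" calculation itself
    }
}

public class CallbackSketch {
    public static void main(String[] args) {
        MathCalculationService svc = new MathCalculationService();
        // The client supplies its callback implementation (here a lambda).
        int result = svc.doLongCalculation(1, 2,
            () -> System.out.println("The server begins to calculate ..."));
        System.out.println("The result is " + result);
    }
}
```

What WCF adds on top of this shape is the transport: the callback reference travels over the duplex channel, and the threading concerns (the reentrancy issue discussed above) come from both directions sharing the same service lock.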
http://www.c-sharpcorner.com/article/windows-communication-foundation-callback/
AWS Developer Blog

In this blog post, we'll walk through how to create a sample API, generate a Java SDK from that API, and explore various features of the generated SDK. This post assumes you have some familiarity with API Gateway concepts.

Create an Example API

To start, let's create a sample API by using the API Gateway console. Navigate to the API Gateway console and select your preferred region. Choose Create API, and then choose the Example API option. Choose Import to create the example API. The example API is pretty simple. It consists of four.

Deploy the API

Next, you'll deploy the API to a stage. Under Actions, choose Deploy API, name the stage test, and then choose Deploy. After you deploy the API, on the SDK Generation tab, choose Java as the platform. For Service Name, type PetStore. For Java Package Name, type com.petstore.client. Leave the other fields empty. Choose Generate SDK, and then download and unzip the SDK package. There are several configuration options available for the Java platform. Before proceeding, let's go over them.

Service Name – Used to name the Java interface you'll use to make calls to your API.
Java Package Name – The name of the package your generated SDK code will be placed under. This name is typically based on your organization.

The following optional parameters are used when publishing the SDK to a remote repository, like Maven Central.

Java Build System – The build system to configure for the generated SDK, either maven or gradle. The default is maven.
Java Group ID – Typically identifies your organization. Defaults to Java Package Name if not provided.
Java Artifact ID – Identifies the library or product. Defaults to Service Name if not provided.
Java Artifact Version – Version identifier for the published SDK. Defaults to 1.0-SNAPSHOT if not provided.

Compile Client

Navigate to the location where you unzipped the SDK package. If you've been following the example, the package will be set up as a Maven project.
Ensure Maven and a JDK have been installed correctly, and run the following command to install the client package into your local Maven repository. This makes it available for other local projects to use.

mvn install

Set Up an Application

Next, you'll set up an application that depends on the client package you previously installed. Because the client requires Java 8 or later, any application that depends on the client must also be built with Java 8. Here, you'll use a simple Maven archetype to generate an empty Java 8 project.

mvn archetype:generate -B -DarchetypeGroupId=pl.org.miki -DarchetypeArtifactId=java8-quickstart-archetype -DarchetypeVersion=1.0.0 \
    -DgroupId=com.petstore.app \
    -DartifactId=petstore-app \
    -Dversion=1.0 \
    -Dpackage=com.petstore.app

Navigate to the newly created project and open the pom.xml file. Add the following snippet to the <dependencies>…</dependencies> section of the XML file. If you changed any of the SDK export parameters in the console, use those values instead.

<dependency>
    <groupId>com.petstore.client</groupId>
    <artifactId>PetStore</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Create a file src/main/java/com/petstore/app/AppMain.java with the following contents.

package com.petstore.app;

import com.petstore.client.*;
import com.petstore.client.model.*;
import com.amazonaws.opensdk.*;
import com.amazonaws.opensdk.config.*;

public class AppMain {
    public static void main(String[] args) {
    }
}

Build the application to ensure everything is configured correctly.

mvn install

To run the application, you can use the following Maven command. (As you make changes, be sure to rerun mvn install before running the application.)

mvn exec:java -Dexec.mainClass="com.petstore.app.AppMain"

Exploring the SDK

Creating the Client

The first thing you need to do is construct an instance of the client. You can use the client builder obtained from a static factory method on the client interface.
All configuration methods on the builder are optional (except for authorization-related configuration). In the following code, you obtain an instance of the builder, override some of the configuration, and construct a client. The following settings are for demonstration only, and are not necessarily the recommended settings for creating service clients. PetStore client = PetStore.builder() .timeoutConfiguration(new TimeoutConfiguration() .httpRequestTimeout(20_000) .totalExecutionTimeout(30_000)) .connectionConfiguration(new ConnectionConfiguration() .maxConnections(100) .connectionMaxIdleMillis(120)) .build(); The builder exposes a ton of useful configuration methods for timeouts, connection management, proxy settings, custom endpoints, and authorization. Consult the Javadocs for full details on what is configurable. Making API Calls Once you’ve built a client, you’re ready to make an API call. Call the GET /pets API to list the current pets. The following code prints out each pet to STDOUT. For each API in the service, a method is generated on the client interface. That method’s name will be based on a combination of the HTTP method and resource path, although this can be overridden (more on that later in this post). client.getPets(new GetPetsRequest()) .getPets() .forEach(p -> System.out.printf("Pet: %s\n", p)); The GET /pets operation exposes a query parameter named type that can be used to filter the pets that are returned. You can set modeled query parameters and headers on the request object. client.getPets(new GetPetsRequest().type("dog")) .getPets() .forEach(p -> System.out.printf("Dog: %s\n", p)); Let’s try creating a Pet and inspecting the result from the service. Here you call the POST /pets operation, supplying information about the new Pet. The CreatePetResult contains the unmarshalled service response (as modeled in the Method Response) and additional HTTP-level metadata that’s available via the sdkResponseMetadata() method. 
final CreatePetResult result = client.createPet( new CreatePetRequest().newPet(new NewPet() .type(PetType.Bird) .price(123.45))); System.out.printf("Response message: %s \n", result.getNewPetResponse().getMessage()); System.out.println(result.sdkResponseMetadata().header("Content-Type")); System.out.println(result.sdkResponseMetadata().requestId()); System.out.println(result.sdkResponseMetadata().httpStatusCode()); The GET /pets/{petId} operation uses a path placeholder to get a specific Pet, identified by its ID. When making a call with the SDK, all you need to do is supply the ID. The SDK handles the rest. GetPetResult pet = client.getPet(new GetPetRequest().petId("1")); System.out.printf("Pet by ID: %s\n", pet); Overriding Configuration at the Request Level In addition to the client-level configuration you supply when creating the client (by using the client builder), you can also override certain configurations at the request level. This “request config” is scoped only to calls made with that request object, and takes precedence over any configuration in the client. client.getPets(new GetPetsRequest() .sdkRequestConfig(SdkRequestConfig.builder() .httpRequestTimeout(1000).build())) .getPets() .forEach(p -> System.out.printf("Pet: %s\n", p)); You can also set custom headers or query parameters via the request config. This is useful for adding headers or query parameters that are not modeled by your API. The parameters are scoped to calls made with that request object. client.getPets(new GetPetsRequest() .sdkRequestConfig(SdkRequestConfig.builder() .customHeader("x-my-custom-header", "foo") .customQueryParam("MyCustomQueryParam", "bar") .build())) .getPets() .forEach(p -> System.out.printf("Pet: %s\n", p)); Naming Operations It’s possible to override the default names given to operations through the API Gateway console or during an import from a Swagger file. Let’s rename the GetPet operation (GET /pets/{petId}) to GetPetById by using the console. 
First, navigate to the GET method on the /pets/{petId} resource. Choose Method Request, and then expand the SDK Settings section. Edit the Operation Name field and enter GetPetById. Save the change and deploy the API to the stage you created previously. Regenerate a Java SDK, and it should have the updated naming for that operation. GetPetByIdResult pet = client.getPetById(new GetPetByIdRequest().petId("1")); System.out.printf("Pet by ID: %s\n", pet); If you are importing an API from a Swagger file, you can customize the operation name by using the operationId field. The following snippet is from the example API, and shows how the operationId field is used. ... "/pets/{petId}": { "get": { "tags": [ "pets" ], "summary": "Info for a specific pet", "operationId": "GetPet", "produces": [ "application/json" ], ... Final Thoughts This post highlights how to generate the Java SDK of an API in API Gateway, and how to call the API using the SDK in an application. For more information about how to build the SDK package, initiate a client with other configuration properties, make raw requests, configure authorization, handle exceptions, and configure retry behavior, see the README.html file in the uncompressed SDK project folder.
https://aws.amazon.com/blogs/developer/api-gateway-java-sdk/
Using ANTS for your everyday grid How I use ANTS to give me the data environment I want The tutorials show the features of ANTS, but creating 5 namespaces attached to two systems with live CDs is probably not the daily computing environment you want. I'll explain how I use ANTS to make a standard Plan 9 setup work better for me. The core of my grid is a "canonical" Plan 9 setup - Venti server, Fossil server, tcp boot CPU server, terminal. The venti, fossil, and tcp cpu all run the 9pcram kernel. The terminal is a 9front machine, but sometimes Drawterm is used for a terminal, or another Bell Labs system. In addition to this main leg of the grid, a linux box runs a p9p venti and hosts qemu vms. The p9p venti and qemu vms duplicate the native machines, and create an independent active copy of my main root fs. Using ANTS allows me to administer my venti and fileservers much more easily, replicate data between the two independent legs of the grid, and change my working root on the fly into any of the namespaces. My basic user environment seems like any other Plan 9 userspace - as a user, I don't have to do anything different. However, the underlying architecture of the namespace is created in a different way than in standard Plan 9, and I have access to independent "service namespaces" on each node. If I need to reboot the venti, I don't lose the ability to control the fossil server and tcp cpu server and use my terminal. I can shift my active root to my other chain of machines, and then redial the services on the native machines when the venti comes back online. I also have 2 remote 9 cloud nodes, one running the 9pcram kernel and Bell Labs distribution, one running 9front. The labs node is the controller and runs hubfs, which links the remote nodes to the local grid. All interaction and control of the remote nodes is done via the persistent hubfs. 
ANTS allows me to think of "uptime" in a totally different way - not uptime on a box, but uptime of my current active working fs. As long as I can keep working with my data and connecting to the namespaces I need, rebooting a box doesn't matter. I have had a continuous active connection to my main root fs for the past two months, since bringing the first version of the full ants toolkit together. During that time each node has been rebooted many times, usually just to upgrade to the latest kernel version, but rerooting and multiple available root fses, combined with no imposed chains of "reboot dependencies" between nodes, allow me to keep working with my current data without disruption. Other ways to use ANTS You don't need a big grid to get use from these tools. A single box can get a lot of benefit from ANTS. You can use the service namespace to let you fshalt your main file server and do things while it is stopped. You can set up a "be your own cpu server" environment where you boot as a terminal, but keep an independent environment available to cpu into. One reason to do this is to use one version of Plan 9 as your main environment, and keep an alternate distro available to cpu into. This is another major application of ANTS. Per process and independent namespaces are an easy way to work with multiple forms of Plan 9 at once. Some Plan 9 users have been concerned about the growth of diverging Plan 9 distributions - ANTS offers tools which allow multiple forms of Plan 9 to work independently side by side, but share resources as needed. The design of Plan 9 means that we have an architecture which was designed from the ground up to work with different file trees and namespaces at the same time. ANTS is based on Plan 9 from Bell Labs but I use other forms of Plan 9 and want to be able to use everything at once, and combine them freely. ANTS helps me do this. 
When installing Plan 9 to a new system, attaching to the root fs and integrating with the network sometimes require a bit of work and setup, and it is very useful to have a working environment available at bootup with no dependency other than the kernel itself. Some of the earliest work that led to ANTS was motivated simply by reliability and control of bootup - it is frustrating when a machine has a problem finding a root fs and reboots with no chance to fix the problem. With ANTS, if there is a problem attaching to the normal root fs, you have a working environment to debug the problem or find an alternate root. I have tested using ANTS in a lot of ways, and I am still finding new applications for the different pieces of the toolkit. I am sure other users will find many more if they explore the possibilities.
http://doc.9gridchan.org/antfarm/RealLifeAnts
Scott Zhong wrote: > do you suggest we do this to valarray? > > cat t.cpp && gcc -v && gcc t.cpp && ./a.out > class _slice std::slice et al are required by the standard. I was referring to the get_slice() member function, which isn't. A program that does #define slice !ERROR! #include <valarray> is ill-formed, but a program that does #define get_slice !ERROR! #include <valarray> is not. > { > public: > _slice () : a(0), b(0), c(0) { } > _slice (int _a, int _b, int _c) Names with a single leading underscore followed by a lowercase letter are reserved for use as global names, so while that alone prevents programs from #defining macros that look like that, it doesn't make such names reserved to the implementation. I don't think there's a way to detect local variables that are in violation of this requirement. Nevertheless, the convention for local variables is two leading underscores, i.e., __a, __b, and __c above. Martin
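Applied to the constructor quoted above, the suggested convention would look something like the following sketch (illustrative only, not actual stdcxx source):

```cpp
#include <cassert>

// Sketch of the naming convention discussed above: constructor parameters
// use two leading underscores (__a), names reserved to the implementation,
// so user-defined macros cannot collide with them.
class _slice
{
public:
    _slice() : a(0), b(0), c(0) { }
    _slice(int __a, int __b, int __c) : a(__a), b(__b), c(__c) { }
    int a, b, c;  // members made public here only to keep the sketch testable
};
```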
http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200703.mbox/%3C45FD9907.50600@roguewave.com%3E
A Proxy is usually associated with a client-server paradigm. The Server and the Client classes implement the same interface. However, the Client only implements stubs of methods, whereas the Server provides the actual implementations for the methods specified in the interface. For example:

public interface SomeIfc {
    public void aMethod();
}

public class Client implements SomeIfc {
    Server s = new Server();
    // Other attributes

    public void aMethod() {
        s.aMethod();
    }

    // Other methods
}

public class Server implements SomeIfc {
    // Class attributes

    public void aMethod() {
        // Actual method implementation.
    }

    // Other methods
}

Any call made to aMethod() on the Client is "delegated" to the Server. An application using the Client class is shielded from the knowledge that there is a Server in the picture. The Client acts as a Proxy to the Server. All access to the Server is only available via the Client.
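A minimal runnable sketch of the delegation described above (class names follow the example; the printed message is illustrative):

```java
// Demonstrates the delegation described above: callers hold a Client
// (the proxy) and never touch the Server directly.
interface SomeIfc {
    void aMethod();
}

class Server implements SomeIfc {
    public void aMethod() {
        System.out.println("Server.aMethod executed");
    }
}

class Client implements SomeIfc {
    private final Server s = new Server();

    public void aMethod() {
        s.aMethod(); // delegate the call to the real Server
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        SomeIfc ifc = new Client(); // application sees only the interface
        ifc.aMethod();              // prints "Server.aMethod executed"
    }
}
```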
http://www.devx.com/tips/Tip/5274
TIFFWriteEncodedTile.3tiff man page TIFFWriteEncodedTile — compress and write a tile of data to an open TIFF file Synopsis #include <tiffio.h> tsize_t TIFFWriteEncodedTile(TIFF *tif, ttile_t tile, tdata_t buf, tsize_t size) Description Compress size bytes of raw data from buf and append the result to the end of the specified tile. Note that the value of tile is a “raw tile number.” That is, the caller must take into account whether or not the data are organized in separate planes (PlanarConfiguration=2). Diagnostics All error messages are directed to the TIFFError(3TIFF) routine. %s: File not open for writing. The file was opened for reading, not writing. Can not write tiles to a stripped image. The image is assumed to be organized in strips because neither of the TileWidth or TileLength tags has been set. %s: No space for tile arrays. There was not enough space for the arrays that hold tile offsets and byte counts. See Also TIFFOpen(3TIFF), TIFFWriteTile(3TIFF), TIFFWriteRawTile(3TIFF), libtiff(3TIFF) Libtiff library home page:
https://www.mankier.com/3/TIFFWriteEncodedTile.3tiff
WPF is Microsoft’s latest technology for developing Windows-based rich client applications. WPF stands for Windows Presentation Foundation and it’s considered an advanced alternative to traditional .NET Windows Forms applications. WPF marks a revolution in Microsoft’s approach towards building desktop-based applications. WPF uses the hardware graphics card and DirectX technology for rendering graphical user interfaces rather than the traditional Windows pixel-based approach. The output window and the child controls rendered by WPF applications are resolution independent. WPF instead uses an advanced technique for setting the size of controls via device-independent points, where the application adjusts itself automatically and yields uniform output on screens with different pixel resolutions. To learn more about WPF, take a course at Udemy.com. Developing a Basic Calculator with WPF This article is a basic WPF tutorial for absolute beginners who do not have any prior experience with developing WPF applications. In this article, a basic Calculator application will be developed that performs addition, subtraction, multiplication and division of two numbers passed in text boxes and then displays the result in another text box. Following are the steps to develop the application: 1- Open Visual Studio (2010 or above) and create a New Project. 2- From the templates window on the left, choose Visual C#. 3- From the application types that appear on the middle panel of the screen, choose WPF Application. 4- Rename the application to WPFCalculator and click the OK button. The aforementioned steps have been displayed in the following figure: By default, the new WPF project opens in split mode view: on the top, the WYSIWYG (what you see is what you get) designer window appears, and at the bottom, the XAML markup behind the designer is displayed. In the solution explorer of the newly created WPFCalculator application, it is extremely important to understand the contents of two files. 
By default, a MainWindow.xaml file is added, which contains the basic XAML markup for the layout. The WYSIWYG window is based on the contents of this file. Simply put, this MainWindow.xaml file is the front-end file. If MainWindow.xaml is explored further by clicking the small triangle to the left of it, another file, MainWindow.xaml.cs, is displayed. This is the code-behind file and contains all the programming logic and event-handling mechanisms for the front-end MainWindow.xaml. Both of these files in the WPFCalculator project solution have been shown in the following figure: MainWindow.xaml At the moment, the contents of the MainWindow.xaml file are as follows:

<Window x:Class="WPFCalculator.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Grid>
    </Grid>
</Window>

Every element in a WPF output window is represented as an element in the corresponding XAML file. An element nested inside another element is rendered in the output window as a control or layout inside its parent control. In the above example, there is a top-level element, Window. The first attribute of the Window element is x:Class, which refers to the code-behind class that will handle the events of this window. The next two lines are the namespaces that specify the XML standard and the set of controls used in the application. The attributes with the prefix x: refer to the controls specified by the namespace prefixed with x:, which is the second namespace in the above case. Finally, the Title, Height and Width of the Window element have been set. There is a Grid element added in MainWindow.xaml by default. This is because Grid is probably the most widely used layout and hence has been added by default. It is absolutely okay to remove or modify it, as will be done later. The closing </Window> tag specifies that the Window element declaration has been completed. MainWindow.xaml.cs At the moment, the MainWindow.xaml.cs (code-behind) file contains the following content:

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;

namespace WPFCalculator
{
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }
    }
}

The file contains several namespace declarations. Notice the name of the class is MainWindow. 
This was the name that was specified in the x:Class attribute of the Window element in the MainWindow.xaml file. This x:Class attribute serves as a connection between the front end and the code-behind file. The constructor of the MainWindow class contains an InitializeComponent method which is used to create the GUI and tie it to the code-behind file. For more interesting C# tutorials, browse some C# courses at Udemy.com. Making Changes to build a Calculator Changes in MainWindow.xaml Make the following changes in the XAML code of the MainWindow.xaml file. The file should look exactly like this:

<Window x:Class="WPFCalculator.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="WPF Calculator" Height="350" Width="525">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition></RowDefinition>
            <RowDefinition></RowDefinition>
            <RowDefinition></RowDefinition>
            <RowDefinition></RowDefinition>
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="150"></ColumnDefinition>
            <ColumnDefinition></ColumnDefinition>
        </Grid.ColumnDefinitions>
        <Label FontSize="30" Grid.Row="0" Grid.Column="0">Number 1:</Label>
        <Label FontSize="30" Grid.Row="1" Grid.Column="0">Number 2:</Label>
        <Label FontSize="30" Grid.Row="2" Grid.Column="0">Operation:</Label>
        <Label HorizontalAlignment="Right" FontSize="30" Grid.Row="3" Grid.Column="0">Result:</Label>
        <TextBox FontSize="30" Grid.Row="0" Grid.Column="1" Name="Number1"></TextBox>
        <TextBox FontSize="30" Grid.Row="1" Grid.Column="1" Name="Number2"></TextBox>
        <TextBox FontSize="30" Grid.Row="3" Grid.Column="1" Name="Result"></TextBox>
        <Grid Grid.Row="2" Grid.Column="1">
            <Grid.ColumnDefinitions>
                <ColumnDefinition></ColumnDefinition>
                <ColumnDefinition></ColumnDefinition>
                <ColumnDefinition></ColumnDefinition>
                <ColumnDefinition></ColumnDefinition>
            </Grid.ColumnDefinitions>
            <Button FontSize="30" Name="Add" Grid.Column="0" Click="Add_Click">+</Button>
            <Button FontSize="30" Name="Subtract" Grid.Column="1" Click="Subtract_Click">-</Button>
            <Button FontSize="30" Name="Multiply" Grid.Column="2" Click="Multiply_Click">X</Button>
            <Button FontSize="30" Name="Divide" Grid.Column="3" Click="Divide_Click">%</Button>
        </Grid>
    </Grid>
</Window>

Changes in MainWindow.xaml.cs In the code-behind MainWindow.xaml.cs file, make changes so that it looks exactly like this:

using System;
using System.Windows;

namespace WPFCalculator
{
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        private void Add_Click(object sender, RoutedEventArgs e)
        {
            int output = Int32.Parse(Number1.Text) + Int32.Parse(Number2.Text);
            Result.Text = output.ToString();
        }

        private void Subtract_Click(object sender, RoutedEventArgs e)
        {
            int output = Int32.Parse(Number1.Text) - Int32.Parse(Number2.Text);
            Result.Text = output.ToString();
        }

        private void Multiply_Click(object sender, RoutedEventArgs e)
        {
            int output = Int32.Parse(Number1.Text) * Int32.Parse(Number2.Text);
            Result.Text = output.ToString();
        }

        private void Divide_Click(object sender, RoutedEventArgs e)
        {
            int output = Int32.Parse(Number1.Text) / Int32.Parse(Number2.Text);
            Result.Text = output.ToString();
        }
    }
}

The output of the WPFCalculator application after making the above changes would look like this: To learn details of C# and WPF, check out a course at Udemy.com.
https://blog.udemy.com/wpf-tutorial/
Resources¶ Resources are at the heart of the TBone framework. They provide the foundation for the application’s communication with its consumers and facilitate its API. Resources are designed to implement a RESTful abstraction layer over HTTP and Websockets protocols and assist in the creation of your application’s design and infrastructure. Overview¶ Resources are class-based. A single resource class implements all the methods required to communicate with your API over HTTP or Websocket, using HTTP-like verbs such as GET and POST. In addition it implements resource events which translate to application events sent over websockets to the consumer. A Resource subclass must implement all the methods it expects to respond to. The following table lists the HTTP verbs and their respective member methods: Sanic and AioHttp¶ TBone includes two mixin classes to adapt your resources to the underlying web-server of your application. Those are SanicResource and AioHttpResource . Every resource class in your application must include one of those mixin classes, respective to your application’s HTTP and Websockets infrastructure. These mixin classes implement the specifics pertaining to their respective libraries and leave the developer with the work on implementing the application’s domain functionality. If your application is based on Sanic your resources will be defined like so: class MyResource(SanicResource, Resource): ... If your application is based on AioHttp your resources will be defined like so: class MyResource(AioHttpResource, Resource): ... Note Adapting a resource class is done with mixins rather than with single inheritance. The reason is so developers can bind the correct resource adapter to a Resource derived class or classes that are derived from other base resources such as MongoResource . It obviously makes no sense to have resources mixed with both SanicResource and AioHttpResource in the same project. 
Resource Options¶ Every resource has a ResourceOptions class associated with it, which provides the default options related to the resource. Such options can be overridden using the Meta class within the resource class itself, like so:

from tbone.resources import Resource

class MyResource(Resource):
    class Meta:
        allowed_detail = ['get', 'post']  # In this example, only GET and POST methods are allowed

Resource options are essential to resources which wish to override built-in functionality such as:
- Serialization
- Authentication
- Allowed methods
For a full list of resource options see the API Reference. Formatters¶ Formatters are classes which help to convert Python dict objects to text (or binary), and back, using a certain transport protocol. In TBone terminology, formatting turns a native Python object into another representation, such as JSON or XML. Parsing is turning JSON or XML into a native Python object. Formatters are used by resource objects to convert data into a format which can be wired over the net. When using the HTTP protocol, APIs generally expose data in a text-based format. By default, TBone formats and parses objects to and from a JSON representation. However, developers can override this behavior by writing additional Formatter classes to suit their needs. Authentication¶ TBone provides an authentication mechanism which is wired into the resource’s flow. All requests made on a resource are routed through a central dispatch method. Before the request is executed, an authentication mechanism is activated to determine whether the request is allowed to be processed. Therefore, every resource has an Authentication object associated with it. This is done using the Meta class of the resource, like so:

class BookResource(Resource):
    class Meta:
        authentication = Authentication()

By default, all resources are associated with a NoAuthentication class, which does not check for any authentication whatsoever. 
Developers need to subclass NoAuthentication to add their own authentication mechanism. Authentication classes implement a single method, is_authenticated, which is passed the request object. Normally, developers would use the request headers to check for authentication and return True or False based on the content of the request. HATEOAS¶ HATEOAS (Hypermedia as the Engine of Application State) is part of the REST specification. TBone supports basic HATEOAS directives and allows for extending this support in resource subclasses. By default, all TBone resources include a _links key in their serialized form, which contains a unique href to the resource itself, like so:

{
    "first_name": "Ron",
    "last_name": "Burgundy",
    "_links": {
        "self": {
            "href": "/api/person/1/"
        }
    }
}

Disabling HATEOAS support is done per resource, by setting the hypermedia flag in the ResourceOptions class to False, like so:

class NoHypermediaPersonResource(Resource):
    class Meta:
        hypermedia = False
    ...

Adding additional links to the resource is done by overriding add_hypermedia on the resource subclass. Nested Resources¶ Nested resources are a technique to extend a resource’s endpoints beyond basic CRUD. Every resource automatically exposes the HTTP verbs (GET, POST, PUT, PATCH, DELETE) with their respective methods, adhering to REST principles. However, it is sometimes necessary to extend a resource’s functionality by implementing additional endpoints. These can be described by two categories: resources which expose nested resource classes, and resources which expose additional non-REST endpoints serving specific functionality. Let’s look at some examples:

# model representing a user's blog comment. 
class Comment(Model):
    user = StringField()
    content = StringField()

# model representing a single blog post, includes a list of comments
class Blog(Model):
    title = StringField()
    content = StringField()
    comments = ListField(ModelField(Comment))

class CommentResource(ModelResource):
    class Meta:
        object_class = Comment

class BlogResource(ModelResource):
    class Meta:
        object_class = Blog

    @classmethod
    def nested_routes(cls, base_url):
        return [
            Route(
                path=base_url + '%s/comments/add/' % (cls.route_param('pk')),
                handler=cls.add_comment,
                methods=cls.route_methods(),
                name='blog_add_comment')
        ]

    @classmethod
    async def add_comment(cls, request, **kwargs):
        ...

MongoDB Resources¶ The MongoResource class provides out-of-the-box CRUD functionality over your MongoDB collections with as little as three lines of code, like so:

from tbone.resources.mongo import MongoResource

class BookResource(AioHttpResource, MongoResource):
    class Meta:
        object_class = Book

Important TBone is not aware of how you manage your application’s global infrastructure. Therefore Resources and Models are not aware of your database’s handle. Because of that, TBone makes the assumption that your global app object is attached to every request object, which both Sanic and AioHttp do by default. It also assumes that the database handler is assigned to the global app object, which you must handle yourself, like so: app.db = connect(...) See the TBone examples for more details. CRUD¶ The MongoResource class provides out-of-the-box CRUD operations on your data models. As mentioned in the Persistency section, models are mapped to MongoDB collections. This allows HTTP verbs to be mapped directly to a MongoDB collection’s core functionality. The following table lists the way HTTP verbs are mapped to MongoDB collections. Filtering¶ The MongoResource provides a mapping mechanism between url parameters and MongoDB query parameters. 
Therefore, the url: /api/v1/movies/?genre=drama will be mapped to: coll.find(query={"genre": "drama"}) Passing additional parameters in the url will add additional parameters to the query. In addition, it is possible to also add the query operator to the url parameters. Operators are added to the url parameters using a double underscore __ like so: /api/v1/movies/?rating__gt=4 which will be mapped to: coll.find(query={"rating": {"$gt": 4}}) Sorting¶ Sorting works very similarly to filtering, by passing url parameters which are mapped to the sort parameter like so: /api/v1/member/?order_by=age which will be mapped to: coll.find(sort={'age': 1}) # pymongo.ASCENDING Placing the - sign before the sorted field’s name will sort the collection in descending order like so: /api/v1/member/?order_by=-age which will be mapped to: coll.find(sort={'age': -1}) # pymongo.DESCENDING Full Text Search¶ The MongoResource class provides an easy hook between url parameters and a full-text-search query. However, full text search is not available on a collection by default. In order to utilize MongoDB’s FTS functionality, the proper indices must be configured within the collection. Please consult the MongoDB documentation on using text indices, as well as TBone’s documentation on defining indices as part of a Model. FTS (full text search) is provided out-of-the-box on all MongoResource classes, provided the relevant indices are in place. FTS can be used with query parameters like so: /api/books/?q=history This will execute an FTS query on all fields that were indexed with the text index. FTS takes precedence over standard filters, which means that if the url parameters include both FTS and filters, FTS will be executed. The default operator for accessing FTS is q. 
However, this can be overridden in the Meta class by overriding the option fts_operator like so:

class BookResource(SanicResource, MongoResource):
    class Meta:
        object_class = Book
        fts_operator = 'fts'

This will result in usage like so: /api/books/?fts=history Hooking up to the application’s router¶ Once a resource has been implemented, it needs to be hooked up to the application’s router. With any web application, such as Sanic or AioHttp, adding handlers to the application involves matching a uri to a specific handler method. The Resource class implements two methods, as_list and as_detail, which create list handlers and detail handlers, respectively, for the application router, like so:

app.add_route('GET', '/books', BookResource.as_list())
app.add_route('GET', '/books/{id}', BookResource.as_detail())

The syntax varies a little, depending on the web server used. Sanic Example¶

from sanic import Sanic
from tbone.resources import Resource
from tbone.resources.sanic import SanicResource

class TestResource(SanicResource, Resource):
    async def list(self, **kwargs):
        return {
            'meta': {},
            'objects': [
                {'text': 'hello world'}
            ]
        }

app = Sanic()
app.add_route(methods=['GET'], uri='/', handler=TestResource.as_list())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

AioHttp Example¶

from aiohttp import web
from tbone.resources import Resource
from tbone.resources.aiohttp import AioHttpResource

class TestResource(AioHttpResource, Resource):
    async def list(self, **kwargs):
        return {
            'meta': {},
            'objects': [
                {'text': 'hello world'}
            ]
        }

app = web.Application()
app.router.add_get('/', TestResource.as_list())

if __name__ == "__main__":
    web.run_app(app, host='127.0.0.1', port=8000)

The examples above demonstrate how to manually add resources to the application router. This can become tedious when the app has multiple resources which expose list and detail endpoints, as well as some nested resources. An alternative way is to use a Router, described below. 
Routers¶ Routers are optional components which help to bind resources to the application’s url router. Whether you’re using Sanic or AioHttp, every application must have its url routes defined. The fact that AioHttp uses a centralized system of defining routes, similar to Django, while Sanic uses a de-centralized system of defining routes, in the form of decorators, bears no difference. Resources are registered with routers. A router may have one or more resources registered with it. An application can have one or more routers defined. Note For small applications a single router for all your resources may be good enough. Larger applications may want to use multiple routers in order to separate the application’s components, similar to the way a Django project may contain multiple apps. It is up to the developers to decide how many routers are needed in their projects. A router may have an optional path variable which the router prepends to all resources. Resources are registered with a router like so:

class AccountResource(AioHttpResource, Resource):
    ...

class PublicUserResource(AioHttpResource, Resource):
    ...

router = Router(name='api/user')  # api/user is the url prefix of all resources under this router
router.register(AccountResource, 'account')  # the full url would be api/user/account/
router.register(PublicUserResource, 'public_user')  # the full url would be api/user/public_user/

Once the router is created, the urls need to be added to the application’s urls. With AioHttp it looks like this:

app = web.Application()
...
for route in router.urls():
    app.router.add_route(
        method=route.methods,
        path=route.path,
        handler=route.handler,
        name=route.name
    )

With Sanic it looks like this:

app = Sanic()
...
for route in router.urls():
    app.add_route(
        methods=route.methods,
        uri=route.path,
        handler=route.handler
    )
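As an aside, the URL-parameter mapping described in the Filtering and Sorting sections above can be sketched in a few lines of plain Python. This is an illustration of the described behavior, not TBone's actual implementation:

```python
def build_query(params):
    """Map URL query parameters to MongoDB-style query/sort dicts,
    mirroring the mapping described above.  Illustrative only."""
    query, sort = {}, {}
    for key, value in params.items():
        if key == 'order_by':
            # "-age" sorts descending, "age" ascending
            field = value.lstrip('-')
            sort[field] = -1 if value.startswith('-') else 1
        elif '__' in key:
            # "rating__gt=4" becomes {"rating": {"$gt": "4"}}
            # (values arrive from the URL as strings)
            field, op = key.split('__', 1)
            query[field] = {'$' + op: value}
        else:
            query[key] = value
    return query, sort
```

For example, `build_query({'genre': 'drama', 'order_by': '-age'})` yields the query `{'genre': 'drama'}` together with the sort spec `{'age': -1}`.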
https://tbone.readthedocs.io/en/latest/source/resources.html
CC-MAIN-2019-09
en
refinedweb
PsiAPI Tutorial: Using Psi4 as a Python Module¶

transcribed by D. A. Sirianni

Psi4 can be run in two modes. In Psithon mode, you write an input file whose commands carry no psi4. in front, then submit it to the executable psi4, which processes the Psithon into pure Python and runs it internally. In PsiAPI mode, you write a pure Python script with import psi4 at the top and commands behind the psi4. namespace, then submit it to the python interpreter. Both modes are equally powerful. This tutorial covers the PsiAPI mode.

Warning: Although the developers have been using PsiAPI mode stably for months before the 1.1 release, and while we believe we’ve gotten everything nicely arranged within the psi4. namespace, the API should not be considered completely stable. Most importantly, as we someday deprecate the last of the global variables, options will be added to the method calls (e.g., energy('scf', molecule=mol, options=opt)).

Note: Consult How to run Psi4 as Python module after compilation or How to run Psi4 as a Python module from conda installation for assistance in setting up Psi4.

Unlike in the past, where Psi4 was executable software which could only be called via input files like input.dat, it is now interactive, able to be loaded directly as a Python module. Here, we will explore the basics of using Psi4 in this new style by reproducing the section A Psi4 Tutorial from the Psi4 manual in an interactive Jupyter Notebook.

Note: If the newest version of Psi4 (v.1.1a2dev42 or newer) is in your path, feel free to execute each cell as you read along by pressing Shift+Enter when the cell is selected.

I.
Basic Input Structure¶

Psi4 is now a Python module; so, we need to import it into our Python environment:

    [1]:
    try:
        import os, sys
        sys.path.insert(1, os.path.abspath('/scratch/psilocaluser/conda-builds/psi4-docs-multiout_1550214125402/work/build/stage//scratch/psilocaluser/conda-builds/psi4-docs-multiout_1550214125402/_h_env_placehold_placehold_place/lib/python3.6/site-packages'))
    except ImportError:
        pass

    import psi4

Psi4 is now able to be controlled directly from Python. By default, Psi4 will print any output to the screen; this can be changed by giving a file name (with path, if not in the current working directory) to the psi4.core.set_output_file() API function, as a string:

    [2]: psi4.core.set_output_file('output.dat', False)

Additionally, output may be suppressed entirely by instead calling the psi4.core.be_quiet() API.

II. Running a Basic Hartree-Fock Calculation¶

In our first example, we will consider a Hartree-Fock SCF computation for the water molecule using a cc-pVDZ basis set. First, we will set the available memory for Psi4 to use with the psi4.set_memory() API function, which takes either a string like '30 GB' (with units!) or an integer number of bytes of memory as its argument. Next, our molecular geometry is passed as a string into psi4.geometry() API. We may input this geometry in either Z-matrix or Cartesian format; to allow the string to break over multiple lines, use Python’s triple-quote """string""" syntax. Finally, we will compute the Hartree-Fock SCF energy with the cc-pVDZ basis set by passing the method/basis set as a string ('scf/cc-pvdz') into the psi4.energy() API function:

    [3]:
    #! Sample HF/cc-pVDZ H2O Computation
    psi4.set_memory('500 MB')

    h2o = psi4.geometry("""
    O
    H 1 0.96
    H 1 0.96 2 104.5
    """)

    psi4.energy('scf/cc-pvdz')

    [3]: -76.02663273488399

If everything goes well, the computation should complete and should report a final restricted Hartree-Fock energy in the output file output.dat, in a section like this:

Energy converged.
    @DF-RHF Final Energy:   -76.02663273486682

(see the main Psi4 manual section Compiling and Installing from Source). This very simple input is sufficient to run the requested computation. Notice we didn’t tell the program some otherwise useful information, like the charge on the molecule or whether the electrons are paired. For example, let’s run a computation on methylene (\(\text{CH}_2\)), with the bond length and angle first stored in Python variables and then inserted into the geometry specification using Python 3 string formatting.

    [4]:
    #! Sample UHF/6-31G** CH2 Computation

    R = 1.075
    A = 133.93

    ch2 = psi4.geometry("""
    0 3
    C
    H 1 {0}
    H 1 {0} 2 {1}
    """.format(R, A)
    )

    psi4.set_options({'reference': 'uhf'})
    psi4.energy('scf/6-31g**')

    [4]: -38.925334628859886

Executing this cell should yield the final energy as

    @DF-UHF Final Energy:   -38.92533462887677

Notice the new command, psi4.set_options() API, in the input. This function takes a Python dictionary as its argument: a set of key-value pairs which associates a Psi4 keyword with its user-defined value.

III. Geometry Optimization and Vibrational Frequency Analysis¶

The above examples were simple single-point energy computations (as specified by the psi4.energy() API function). Of course there are other kinds of computations to perform, such as geometry optimizations and vibrational frequency computations. These can be specified by replacing psi4.energy() API with psi4.optimize() API or psi4.frequency() API, respectively. Let’s take a look at an example of optimizing the H\(_2\)O molecule using Hartree-Fock with a cc-pVDZ basis set. Now, here comes the real beauty of running Psi4 interactively: above, when we computed the energy of H\(_2\)O with HF/cc-pVDZ, we defined the Psi4 molecule object h2o.
Since we’re still in the Python shell, as long as you executed that block of code, we can reuse the h2o molecule object in our optimization without redefining it, by adding the molecule=h2o argument to the psi4.optimize() API function:

    [5]:
    psi4.set_options({'reference': 'rhf'})
    psi4.optimize('scf/cc-pvdz', molecule=h2o)

    Optimizer: Optimization complete!

    [5]: -76.02703272937504

This should perform a series of gradient computations. The gradient points which way is downhill in energy, and the optimizer then modifies the geometry to follow the gradient. After a few cycles, the geometry should converge with a message like Optimizer: Optimization complete!, and a summary of the optimization steps appears in the output:

    ---------------------------------------------------------------------------------------------------------------
      Step     Total Energy        Delta E            MAX Force    RMS Force     MAX Disp      RMS Disp
    ---------------------------------------------------------------------------------------------------------------
        1     -76.026632734908   -76.026632734908    0.01523518    0.01245755    0.02742222    0.02277530
        2     -76.027022666011    -0.000389931104    0.00178779    0.00142946    0.01008137    0.00594928
        3     -76.027032729374    -0.000010063363    0.00014019    0.00008488    0.00077463    0.00044738
    ---------------------------------------------------------------------------------------------------------------

To get harmonic vibrational frequencies, it’s important to keep in mind that the values of the vibrational frequencies are a function of the molecular geometry. Therefore, it’s important to obtain the vibrational frequencies AT THE OPTIMIZED GEOMETRY. Luckily, Psi4 updates the molecule with the optimized geometry as it is being optimized. So, the optimized geometry for H\(_2\)O is stored inside the h2o molecule object, which we can access! To compute the frequencies, all we need to do is to again pass the molecule=h2o argument, this time to the psi4.frequency() API function:

    [6]:
    scf_e, scf_wfn = psi4.frequency('scf/cc-pvdz', molecule=h2o, return_wfn=True, dertype=1)

    6 displacements needed.
    1 2 3 4 5 6

Executing this cell will prompt Psi4 to compute the Hessian (second derivative matrix) of the electronic energy with respect to nuclear displacements. From this, it can obtain the harmonic vibrational frequencies, given below (roundoff errors of around \(0.1\) cm\(^{-1}\) may exist):

      Irrep      Harmonic Frequency (cm-1)
    -----------------------------------------------
        A1              1775.6478
        A1              4113.3795
        B2              4212.1814
    -----------------------------------------------

Notice the symmetry type of the normal modes is specified (A1, A1, B2). The program also prints out the normal modes in terms of Cartesian coordinates of each atom. For example, the normal mode at \(1776\) cm\(^{-1}\) is:

    Frequency:      1775.65
    Force constant: 0.1193
              X       Y       Z           mass
    O     0.000   0.000  -0.068     15.994915
    H     0.000   0.416   0.536      1.007825
    H     0.000  -0.416   0.536      1.007825

These tables can be found in the output file output.dat. The vibrational frequencies are sufficient to obtain vibrational contributions to enthalpy (H), entropy (S), and Gibbs free energy (G). Similarly, the molecular geometry is used to obtain rotational constants, which are then used to obtain rotational contributions to H, S, and G.

Note: Psi4 has several synonyms for the functions called in this example. For instance, psi4.frequency() API will compute molecular vibrational frequencies, and psi4.optimize() API will perform a geometry optimization.

IV.

    [7]:
    # Example SAPT computation for ethene*ethyne (i.e., ethylene*acetylene).
    # Test case 16 from S22 Database

    dimer = psi4.geometry("""
    0 1
    C   0.000000  -0.667578  -2.124659
    C   0.000000   0.667578  -2.124659
    H   0.923621  -1.232253  -2.126185
    H  -0.923621  -1.232253  -2.126185
    H  -0.923621   1.232253  -2.126185
    H   0.923621   1.232253  -2.126185
    --
    0 1
    C   0.000000   0.000000   2.900503
    C   0.000000   0.000000   1.693240
    H   0.000000   0.000000   0.627352
    H   0.000000   0.000000   3.963929
    units angstrom
    """)

Here’s the second half of the input, where we specify the computation options:

    [8]:
    psi4.set_options({'scf_type': 'df',
                      'freeze_core': 'true'})

    psi4.energy('sapt0/jun-cc-pvdz', molecule=dimer)

    [8]: -0.0022355825227244703

All of the options we have currently set using psi4.set_options() API are “global” options (meaning that they are visible to all parts of the program). Most common Psi4 options can be set like this. If an option needs to be visible only to one part of the program (e.g., we only want to increase the energy convergence in the SCF code, but not the rest of the code), it can be set with the psi4.set_module_options() API function, e.g., psi4.set_module_options('scf', {'e_convergence': '1e-8'}).

Note: The arguments to the functions we’ve used so far, like psi4.set_options() API, psi4.set_module_options() API, psi4.energy() API, psi4.optimize() API, psi4.frequency() API, etc., are case-insensitive.

In this input, density fitting for the SCF step is requested by adding 'scf_type': 'df' to the dictionary passed to psi4.set_options(), and the core orbitals are frozen by adding 'freeze_core': 'true' to the dictionary passed to psi4.set_options(). The SAPT procedure is invoked by psi4.energy('sapt0/jun-cc-pvdz', molecule=dimer). In the SAPT results section of the output, the most attractive contribution is the electrostatics term Elst10,r (where the 1 indicates the first-order perturbation theory result with respect to the intermolecular interaction, and the 0 indicates zeroth-order with respect to the intramolecular electron correlation).
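The value returned by psi4.energy() above is in Hartree. It can be converted to kcal/mol with plain Python (no Psi4 needed); the literal conversion factor below is an approximation standing in for psi4.constants.hartree2kcalmol:

```python
# Convert the SAPT0 total interaction energy returned above from Hartree
# to kcal/mol. The literal factor is an approximation standing in for
# psi4.constants.hartree2kcalmol.
HARTREE_TO_KCALMOL = 627.509

e_int_hartree = -0.0022355825227244703  # value returned by cell [8]
e_int_kcalmol = e_int_hartree * HARTREE_TO_KCALMOL

print(round(e_int_kcalmol, 2))  # roughly -1.4 kcal/mol
```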
The next most attractive contribution is the Disp20 term (second-order intermolecular dispersion, which looks like MP2 in which one excitation is placed on each monomer), contributing an attraction of \(-1.21\) kcal/mol.

V. Potential Surface Scans and Counterpoise Correction Made Easy¶

Finally, let’s consider an example which highlights the advantages of being able to interact with Psi4 directly with Python. Suppose you want to do a limited potential energy surface scan, such as computing the interaction energy between two neon atoms at various interatomic distances. One simple but unappealing way to do this is to generate separate geometries for each distance to be studied. Instead, we can leverage Python loops and string formatting to make our lives simpler. Additionally, let’s counterpoise-correct the interaction energies; recall that a double dash (--) in the psi4.geometry() string can be used to separate monomers. So, we’re going to do counterpoise-corrected CCSD(T) energies for Ne\(_2\) at a series of different interatomic distances. And let’s print out a table of the interatomic distances we’ve considered, and the CP-corrected CCSD(T) interaction energies (in kcal/mol) at each geometry:

    [9]: #!
    Example potential energy surface scan and CP-correction for Ne2

    ne2_geometry = """
    Ne
    --
    Ne 1 {0}
    """

    Rvals = [2.5, 3.0, 4.0]

    psi4.set_options({'freeze_core': 'true'})

    # Initialize a blank dictionary of counterpoise corrected energies
    # (Need this for the syntax below to work)
    ecp = {}

    for R in Rvals:
        ne2 = psi4.geometry(ne2_geometry.format(R))
        ecp[R] = psi4.energy('ccsd(t)/aug-cc-pvdz', bsse_type='cp', molecule=ne2)

    # Prints to screen
    print("CP-corrected CCSD(T)/aug-cc-pVDZ Interaction Energies\n\n")
    print("          R [Ang]         E_int [kcal/mol]       ")
    print("---------------------------------------------------------")
    for R in Rvals:
        e = ecp[R] * psi4.constants.hartree2kcalmol
        print("            {:3.1f}            {:1.6f}".format(R, e))

    # Prints to output.dat
    psi4.core.print_out("CP-corrected CCSD(T)/aug-cc-pVDZ Interaction Energies\n\n")
    psi4.core.print_out("          R [Ang]         E_int [kcal/mol]       \n")
    psi4.core.print_out("---------------------------------------------------------\n")
    for R in Rvals:
        e = ecp[R] * psi4.constants.hartree2kcalmol
        psi4.core.print_out("            {:3.1f}            {:1.6f}\n".format(R, e))

    CP-corrected CCSD(T)/aug-cc-pVDZ Interaction Energies

              R [Ang]         E_int [kcal/mol]
    ---------------------------------------------------------
                2.5             0.758605
                3.0             0.015968
                4.0            -0.016215

First, you can see the geometry string ne2_geometry has two dashes to separate the monomers from each other. Also note we’ve used a Z-matrix to specify the geometry, and we’ve used a variable (R) as the interatomic distance. We have not specified the value of R in the ne2_geometry string like we normally would. That’s because we are going to vary it during the scan across the potential energy surface, by using a Python loop over the list of interatomic distances Rvals. Before we are able to pass our molecule to Psi4, we need to do two things. First, we must set the value of the intermolecular separation in our Z-matrix (by using Python 3 string formatting) to the particular value of R.
Second, we need to turn the Z-matrix string into a Psi4 molecule, by passing it to psi4.geometry(). The argument bsse_type='cp' tells Psi4 to perform counterpoise (CP) correction on the dimer to compute the CCSD(T)/aug-cc-pVDZ interaction energy, which is stored in our ecp dictionary at each iteration of our Python loop. Note that we didn’t need to specify ghost atoms, and we didn’t need to call the monomer and dimer computations separately. Psi4 does it all for us, automatically. Near the very end of the output file output.dat, the individual n-body computations are summarized:

    N-Body: Computing complex (1/2) with fragments (2,) in the basis of fragments (1, 2).
    ...
    N-Body: Complex Energy (fragments = (2,), basis = (1, 2): -128.70932405488924)
    ...
    N-Body: Computing complex (2/2) with fragments (1,) in the basis of fragments (1, 2).
    ...
    N-Body: Complex Energy (fragments = (1,), basis = (1, 2): -128.70932405488935)
    ...
    N-Body: Computing complex (1/1) with fragments (1, 2) in the basis of fragments (1, 2).
    ...
    N-Body: Complex Energy (fragments = (1, 2), basis = (1, 2): -257.41867403127321)
    ...
    ==> N-Body: Counterpoise Corrected (CP) energies <==

       n-Body     Total Energy [Eh]       I.E. [kcal/mol]      Delta [kcal/mol]
            1     -257.418648109779        0.000000000000        0.000000000000
            2     -257.418674031273       -0.016265984132       -0.016265984132

And that’s it! The only remaining part of the example is a little table of the different R values and the CP-corrected CCSD(T) energies, converted from atomic units (Hartree) to kcal mol\(^{-1}\) by multiplying by the automatically-defined conversion factor psi4.constants.hartree2kcalmol. Psi4 provides several built-in physical constants and conversion factors, as described in the Psi4 manual section Physical Constants. The table can be printed either to the screen, by using standard Python print() syntax, or to the designated output file output.dat using Psi4’s built-in function psi4.core.print_out() API (C-style printing).
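The Python-side pattern in this last example (a template geometry string, a loop over Rvals, a dictionary of results, and a unit conversion) is independent of Psi4 itself. Below is a runnable sketch of that pattern, where placeholder_energy() is a hypothetical stand-in for psi4.energy() and the literal conversion factor approximates psi4.constants.hartree2kcalmol:

```python
# Sketch of the scan pattern: template string + loop + results dict.
# placeholder_energy() is a hypothetical stand-in for psi4.energy();
# its toy curve is NOT a real Ne2 potential.
HARTREE_TO_KCALMOL = 627.509  # approximate conversion factor

ne2_template = """
Ne
--
Ne 1 {0}
"""

def placeholder_energy(geometry_string):
    # Parse R back out of the Z-matrix and return a toy
    # repulsive/attractive curve in Hartree, just so the loop
    # below has something to tabulate.
    R = float(geometry_string.split()[-1])
    return (2.76 / R) ** 12 - (2.76 / R) ** 6

Rvals = [2.5, 3.0, 4.0]
ecp = {}
for R in Rvals:
    geom = ne2_template.format(R)   # substitute the current distance
    ecp[R] = placeholder_energy(geom)

print("  R [Ang]   E_int [kcal/mol]")
print("--------------------------------")
for R in Rvals:
    print("   {:3.1f}      {: .6f}".format(R, ecp[R] * HARTREE_TO_KCALMOL))
```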
As we’ve seen so far, the combination of Psi4 and Python creates a unique, interactive approach to quantum chemistry. The next section will explore this synergistic relationship in greater detail, describing how even very complex tasks can be done very easily with Psi4.
http://psicode.org/psi4manual/master/psiapi.html
CC-MAIN-2019-09
en
refinedweb
Re: [systemd-devel] pid namespaces, systemd and signals on reboot(2)

On 28.05.2017 at 20:43, Mike Gilbert wrote: > On Sat, May 27, 2017 at 2:51 PM, Michał Zegan > wrote: >> Hello. >> >> I came across the following: >> The manpage reboot(2) says, that inside of a pid namespace, a reboot >> call that normally would trigger restart

[systemd-devel] Group of temporary but related units?

Hey list, what would be a good way to manage temporary development environments with systemd? For example, if I quickly want to spawn up an environment where my service + perhaps some db or a queue or some other services are running. It would be nice to reuse systemd's service management

[systemd-devel] About stable network interface names

Hello, I have some doubts related to predictable network interface names. If I understand correctly the reference document about this topic is: Take this paragraph from that document: - We believe it is a good

[systemd-devel] Systemd crash when trying to boot Angstrom image with systemd enabled

Hi, I get the following kernel panic when trying to use the system-console image from the Angstrom distribution on an imx28evk board [9.506682] UBIFS (ubi0:0): FS size: 236302336 bytes (225 MiB, 1861 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs) [9.517600] UBIFS (ubi0:0): reserved for

Re: [systemd-devel] Systemd -- Using Environment variable in a Unit file from Environment File

Guys, any suggestions about this kind of problem? -- Regards, Raghavendra. H. R (Raghu) On Fri, May 26, 2017 at 3:06 PM, Raghavendra. H. R wrote: > Hi All, > > I'm in the situation where path of my server changes due to version > change. I don't want to modify my

Re: [systemd-devel] About stable network interface names

On Mon, May 29, 2017 at 02:35:12AM +0200, Cesare Leonardi wrote: > I ask because I've done several tests, with different motherboards, adding > and removing PCI-express cards and that expectation was not satisfied in > many cases.
> > For example, in one of those tests I initially had this setup:
https://www.mail-archive.com/search?l=systemd-devel@lists.freedesktop.org&q=date:20170528
CC-MAIN-2022-05
en
refinedweb
A text input for React that resizes itself to the current content.

Live demo: jedwatson.github.io/react-input-autosize

To run the examples locally, run:

```
npm install
npm start
```

Then open localhost:8000 in a browser.

The easiest way to use React-Input-Autosize is to install it from NPM and include it in your own React build process (using Browserify, rollup, webpack, etc). You can also use the umd build by including dist/AutosizeInput.js in your page. If you use this, make sure you have already included a umd React build.

```
npm install react-input-autosize --save
```

React-Input-Autosize generates an input field, wrapped in a `<div>` tag so it can detect the size of its value. Otherwise it behaves very similarly to a standard React input.

```jsx
import AutosizeInput from 'react-input-autosize';

<AutosizeInput
	name="form-field-name"
	value={inputValue}
	onChange={function(event) {
		// event.target.value contains the new value
	}}
/>
```

The styles applied to the input are only copied when the component mounts. Because of this, subsequent changes to the stylesheet may cause size to be detected incorrectly. To work around this, either re-mount the input (e.g. by providing a different `key` prop) or call the `copyInputStyles()` method after the styles change.

The input will automatically inject a stylesheet that hides IE/Edge's "clear" indicator, which otherwise breaks the UI. This has the downside of being incompatible with some CSP policies. To work around this, you can pass the `injectStyles={false}` prop, but if you do this I strongly recommend targeting the input element in your own stylesheet with the following rule:

```css
input::-ms-clear {display: none;}
```

If your input uses custom font sizes, you will need to provide the custom size to AutosizeInput.

```jsx
<AutosizeInput
	name="form-field-name"
	value={inputValue}
	inputStyle={{ fontSize: 36 }}
	onChange={function(event) {
		// event.target.value contains the new value
	}}
/>
```

AutosizeInput is a controlled input and depends on the `value` prop to work as intended.
It does not support being used as an uncontrolled input.
https://codeawesome.io/react.js/autosize-input-textarea/react-input-autosize
CC-MAIN-2022-05
en
refinedweb
Experience Report for Feature Request

Update: 7/1/21 - to comply with Feature Request Template:

**What you wanted to do**

1.) Update nested nodes from the parent Update Mutation.
2.) Delete nested nodes from the parent Delete Mutation.
3.) Choose which nodes allow this Cascade Delete and Cascade Update, as not all parent / children relationships should have this ability.

**What you actually did**

1.) This can only be done by creating a new update mutation for EVERY SINGLE child individually. If I am updating 10 children nodes, I need 11 mutations when including the parent.
2.) This is currently impossible, even with multiple mutations, unless you query every single ID.
3.) Obviously I don’t want to delete some nested nodes like country, language, etc. Currently, all nested updates just update the connection, not the data.

**Why that wasn’t great, with examples**

1.) How to Update a Book’s Chapters

```graphql
mutation {
  updateChapter0: updateChapter(input: {
    filter: { id: "0xfffd8d6aa985abef" },
    set: { ... changed info here }
  }) {
    numUids
  }
  updateChapter1: updateChapter(input: {
    filter: { id: "0xfffd8d6aa985abf1" },
    set: { ... changed info here }
  }) {
    numUids
  }
  updateBook(input: {
    filter: { id: "0xfffd8d6aa985abee" },
    set: { ... changed info here }
  }) {
    chapter {
      id
      name
      slug
      description
    }
    numUids
  }
}
```

For every single chapter you’re updating, you need to manually query the id, and create a separate mutation with a separate namespace. This should be done in one mutation, like on add.

2.) How to Delete a Book (with its chapters):

- First, get all Chapter Ids:

```graphql
query {
  queryBook(filter: { id: "0xfffd8d6aa985abdf" }) {
    chapters {
      id
      ...
    }
  }
}
```

- Use the returned Ids to delete the chapters one by one, then delete the book (no other way without querying first):

```graphql
mutation {
  deleteChapter(filter: { id: ["0xfffd8d6aa985abde", "0xfffd8d6aa985abe0"] }) {
    chapter {
      id
      ...
    }
    numUids
    msg
  }
  deleteBook(filter: { id: "0xfffd8d6aa985abdf" }) {
    book {
      id
      ...
    }
    numUids
    msg
  }
}
```

This isn’t even one step, but several steps, and several mutations. Nested Mutations would make this one step, but it will still be several mutations.

**Any external references to support your case**

**Original Post**

There needs to be a way to update deep fields, and delete deep fields. While it is clear why this is not default behavior, not having it as an option is also a grave problem. I realize there are many many posts on this, but in summary:

**Deleting**

Right now it is 100% impossible to delete nested fields without using DQL. I theoretically can query to get every single ID to do this manually, but it is not even advisable if I could, since there could be thousands. I also can’t flip the node, since it would be a nested field as well. So, I end up with ghost nodes. Again, it is currently IMPOSSIBLE to avoid this in graphql. I understand nested filters are on the way eventually:

But, the best we can hope for if you are a cloud user like myself is November at the earliest. This again, does not guarantee (or almost guarantee, since dgraph graphql is not perfect) a lack of ghost nodes. So we need to start thinking about this now.

**Updating**

If I want to update one record with nested fields (say an array of data), I currently have to create a mutation for that node, plus a mutation for every single node in the array I need to update. If that array (nested field) is 10 items, I need to create 11 different mutations. Part of that problem is the lack of multiple sets in update mutations, but I can save that for a different post.

**Solving the Problem**

The best way to solve this problem is what @amaster507 said:

While this is way too complicated IMHO:

So we do something like this:

```graphql
type Student {
  id: ID!
  name: String
  classes: Class @hasInverse(field: students)
  ...
}

type Book {
  id: ID!
  name: String
  ...
}

type Class {
  id: ID!
  ...
  students: [Student] @reference(onUpdate: null, onDelete: restrict)
  books: [Book] @reference(onUpdate: cascade, onDelete: cascade)
}
```

And just like MySQL, there are four options and two parameters (onUpdate and onDelete):

options: [restrict, cascade, null, nothing]

- nothing - short for doNothing or noAction; would be the default behavior for onUpdate, to be backwards compatible
- cascade - delete or update
- restrict - throws an error if trying to delete or update
- null - removes the connection, does not delete; would be the default behavior for onDelete, to be backwards compatible

(Note: Default should be a fifth option when Dgraph implements Default values)

We should be able to do this; it is simple, makes sense, and keeps Dgraph consistent.

J
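The semantics of the proposed onDelete policies can be modeled with a small toy graph store. This is purely an illustration of the requested behavior, not Dgraph code; the class names and policy strings are hypothetical:

```python
# Toy in-memory model of the proposed onDelete reference policies
# (cascade / restrict / null). Illustrative only -- NOT Dgraph code.
class RefPolicyError(Exception):
    pass

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []  # list of (child, on_delete) pairs

    def link(self, child, on_delete='null'):
        self.children.append((child, on_delete))

def delete(node, store):
    for child, policy in node.children:
        if policy == 'restrict':
            # refuse to delete a parent with a restricted reference
            raise RefPolicyError(f'{node.name} -> {child.name} is restricted')
        elif policy == 'cascade':
            delete(child, store)  # deletion propagates to the child
        # 'null' / 'nothing': drop only the connection, keep the child
    node.children = []
    store.discard(node)

book = Node('book')
ch1, ch2, country = Node('ch1'), Node('ch2'), Node('country')
book.link(ch1, 'cascade')
book.link(ch2, 'cascade')
book.link(country, 'null')

store = {book, ch1, ch2, country}
delete(book, store)
print(sorted(n.name for n in store))  # ['country'] -- chapters cascade away
```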
https://discuss.dgraph.io/t/feature-request-cascade-delete-deep-mutations-by-reference-directive/14658/3
CC-MAIN-2022-05
en
refinedweb
Overview of Dapr configuration options

Sidecar configuration

Setup sidecar configuration

Self-hosted sidecar

In self-hosted mode the Dapr configuration is a configuration file, for example config.yaml. By default the Dapr sidecar looks in the default Dapr folder for the runtime configuration, e.g. $HOME/.dapr/config.yaml on Linux/macOS and %USERPROFILE%\.dapr\config.yaml on Windows. A Dapr sidecar can also apply a configuration by passing the --config flag with the file path to the dapr run CLI command.

Kubernetes sidecar

In Kubernetes mode the Dapr configuration is a Configuration CRD that is applied to the cluster. For example:

    kubectl apply -f myappconfig.yaml

You can use the Dapr CLI to list the Configuration CRDs:

    dapr configurations -k

A Dapr sidecar can apply a specific configuration by using a dapr.io/config annotation. For example:

    annotations:
      dapr.io/enabled: "true"
      dapr.io/app-id: "nodeapp"
      dapr.io/app-port: "3000"
      dapr.io/config: "myappconfig"

Note: There are more Kubernetes annotations available to configure the Dapr sidecar on activation by the sidecar Injector system service.

Sidecar configuration settings

The following configuration settings can be applied to Dapr application sidecars:

- Tracing
- Metrics
- Middleware
- Scoping secrets for secret stores
- Access control allow lists for service invocation
- Example application sidecar configuration

Tracing

Tracing configuration turns on tracing for an application. The tracing section under the Configuration spec contains the following properties:

    tracing:
      samplingRate: "1"
      zipkin:
        endpointAddress: ""

samplingRate is used to enable or disable the tracing. To disable sampling, set samplingRate: "0" in the configuration. The valid range of samplingRate is between 0 and 1 inclusive. The sampling rate determines whether a trace span should be sampled or not based on this value; samplingRate: "1" samples all traces.
By default, the sampling rate is 0.0001, or 1 in 10,000 traces. See Observability distributed tracing for more information.

Metrics

The metrics section can be used to enable or disable metrics for an application. The metrics section under the Configuration spec contains the following properties:

    metrics:
      enabled: true

See the metrics documentation for more information.

Middleware

Middleware configuration sets named HTTP pipeline middleware handlers. The httpPipeline section under the Configuration spec contains the following properties:

    httpPipeline:
      handlers:
        - name: oauth2
          type: middleware.http.oauth2
        - name: uppercase
          type: middleware.http.uppercase

See Middleware pipelines for more information.

Scope secret store access

See the Scoping secrets guide for information and examples on how to scope secrets to an application.

Access Control allow lists for building block APIs

See the selectively enable Dapr APIs on the Dapr sidecar guide for information and examples on how to set ACLs on the building block APIs lists.

Access Control allow lists for service invocation API

See the Allow lists for service invocation guide for information and examples on how to set allow lists with ACLs using the service invocation API.

Turning on preview features

See the preview features guide for information and examples on how to opt in to preview features for a release. Preview features enable new capabilities to be added that still need more time before they become generally available (GA) in the runtime.

Example sidecar configuration

The following yaml shows an example configuration file that can be applied to an application’s Dapr sidecar.
    apiVersion: dapr.io/v1alpha1
    kind: Configuration
    metadata:
      name: myappconfig
      namespace: default
    spec:
      tracing:
        samplingRate: "1"
      httpPipeline:
        handlers:
          - name: oauth2
            type: middleware.http.oauth2
      secrets:
        scopes:
          - storeName: localstore
            defaultAccess: allow
            deniedSecrets: ["redis-password"]
      accessControl:
        defaultAction: deny
        trustDomain: "public"
        policies:
          - appId: app1
            defaultAction: deny
            trustDomain: 'public'
            namespace: "default"
            operations:
              - name: /op1
                httpVerb: ['POST', 'GET']
                action: deny
              - name: /op2/*
                httpVerb: ["*"]
                action: allow

Control-plane configuration

There is a single configuration file called daprsystem installed with the Dapr control plane system services that applies global settings. This is only set up when Dapr is deployed to Kubernetes.

Control-plane configuration settings

A Dapr control plane configuration can configure the following settings (see the Mutual TLS HowTo and security concepts for more information).

Example control plane configuration

    apiVersion: dapr.io/v1alpha1
    kind: Configuration
    metadata:
      name: default
      namespace: default
    spec:
      mtls:
        enabled: true
        allowedClockSkew: 15m
        workloadCertTTL: 24h
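The samplingRate behavior described in the tracing section (a probability between 0 and 1, where "1" samples every trace and "0" disables sampling) can be sketched with a tiny hypothetical sampler. This only illustrates the semantics; it is not Dapr's actual implementation:

```python
import random

def should_sample(sampling_rate: float) -> bool:
    # Probabilistic head-sampling: a rate of 1.0 keeps every trace,
    # 0.0 keeps none, and 0.0001 keeps roughly 1 in 10,000.
    if not 0.0 <= sampling_rate <= 1.0:
        raise ValueError("samplingRate must be between 0 and 1 inclusive")
    return random.random() < sampling_rate

print(all(should_sample(1.0) for _ in range(1000)))  # True: rate 1 samples all
print(any(should_sample(0.0) for _ in range(1000)))  # False: rate 0 samples none
```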
https://docs.dapr.io/operations/configuration/configuration-overview/
CC-MAIN-2022-05
en
refinedweb
Mupen64Plus has undergone substantial changes since the last stable release (1.5). This includes no native GUI, a modular build system, new build options/dependencies, lots of fixes/updates that required patching in the previous version, etc. This is a largely rewritten ebuild to address most of those changes, as well as optionally pull in compatible 3rd party plugins. I made it pull the live sources simply because the last dev snapshot, 1.99.4, is almost a year old. I don't expect this to enter the tree, but since most of this work will need to be done anyway once version 2.0 is released I figured I'd get a head start on it now (plus, I want to test the latest version). I'm pretty happy with the ebuild, except for one thing: because the application is split into multiple modules in their own repositories with no 'master' repository, AND because the mercurial eclass only supports pulling from a single repo (best I can tell), I have to directly pull the code within the ebuild. While this works fine, it means that the full source tree needs to be pulled EVERY time the package is installed; it writes the data to $WORKDIR, which, of course, gets wiped out after installation. I couldn't find any way to write to $DISTDIR/hg (or whatever) to leverage the caching that's usually done with live builds. If anyone knows how to work around this, suggestions/patches welcome. I started a discussion on the subject here, but didn't get any responses: I also found bug 381655 when posting this, but that seems to apply to the 1.5 release. The live version should already include most, if not all, of the suggested patches.

Reproducible: Always

Created attachment 288503 [details] games-emulation/mupen64plus-9999.ebuild

Created attachment 288623 [details] games-emulation/mupen64plus-9999.ebuild

Minor update: removed ftbfs patch as it apparently doesn't need to be applied to 1.99, and moved man path fix to src_prepare().
Created attachment 288813 [details] games-emulation/mupen64plus-9999.ebuild

Updated based on feedback from the Debian package maintainer:
1. Removed old/unneeded dependencies
2. Added undocumented dependencies for plugins (if USE flag enabled)
3. Dropped sed man dir fix in favor of MANDIR build option

Created attachment 288877 [details, diff] games-emulation/mupen64plus/files/mupen64plus-minizip.patch

thanks a lot for your ebuild, Jared! Would be nice to see it in the gamerlay one day (same for the dolphin ebuild) ;) the api in >=sys-libs/zlib-1.2.5.1-r1 changed a bit and a minizip useflag was introduced. using zlib-1.2.5.1-r2 here, I need to apply the attached patch to make it work. with USE="+lirc" I get some undefined references to some lirc functions. probably an upstream bug forgetting an #ifdef or something similar. please keep up the great work (pcsx2 hint hint ;) )

(In reply to comment #4)
> thanks a lot for your ebuild, Jared!
> Would be nice to see it in the gamerlay one day (same for the dolphin ebuild)

You're very welcome, Marcel. Someone else will need to add it to an overlay, though - I don't use them, so I'm more interested in getting my ebuilds merged into portage.

> the api in >=sys-libs/zlib-1.2.5.1-r1 changed a bit and a minizip useflag was
> introduced. using zlib-1.2.5.1-r2 here I need to apply attached patch to work.

Yeah, someone else mentioned this as well, but I run stable so I haven't encountered this, and after reading bug 383351 and doing a couple quick searches I still can't figure out what the actual problem was, why it seems to only affect gentoo, or what the proper fix is. Do you have any suggestions? It builds and runs fine on my system.

> with USE="+lirc" I get some undefined reference to some lirc function.
> probably an upstream bug forgetting an #ifdef or something similar.
> please keep up the great work (pcsx2 hint hint ;) )

I don't use lirc so I haven't encountered this, but I'll take a look and see if I can figure something out.
Thanks for the report. As for pcsx2, I actually spent quite a while trying to get something going there, but with no wxWidgets support in the emul-linux-x86 libraries it just isn't going to happen - not unless someone far smarter than I gets involved, anyway. I actually spent a while building a 32-bit chroot just so I could test out the newer version, but even when I did manage to get it built and running it seemed quite unstable, and had some seriously buggy plugins. I opened a bug report or two about it, and was pretty much told that Linux support doesn't get much love anymore, so I just gave up on it at that point.

Yeah, no idea on the lirc thing. I opened an upstream bug report about it:

Hi Jared, thank you for reporting the lirc bug upstream. I haven't tried the changeset yet, but it looks good :} The reason the zlib breakage only happens on Gentoo is that some Gentoo maintainers decided to rename some badly chosen macro definitions (ON/OF). Snip from zlib-1.2.5-r2:
---
sed_macros() {
	# clean up namespace a little #383179
	# we do it here so we only have to tweak 2 files
	sed -i -r 's:\<(O[FN])\>:_Z_\1:g' "$@" || die
}
---
I didn't want to go too offtopic with pcsx2 here. Unfortunately I don't have the time to write a proper ebuild (argh, all those emulators should at least have some standalone snapshot tarballs for their plugins :} ), but it works quite fine on my ~x86 machine. If you don't mind, I'll commit your dolphin and mupen64plus ebuilds to gamerlay soon.
Friendly, Marcel

Created attachment 289139 [details] games-emulation/mupen64plus-9999.ebuild (meta)
Created attachment 289141 [details] games-emulation/mupen64plus-audio-sdl-9999.ebuild
Created attachment 289143 [details] games-emulation/mupen64plus-core-9999.ebuild
Created attachment 289145 [details] games-emulation/mupen64plus-input-sdl-9999.ebuild
Created attachment 289147 [details] games-emulation/mupen64plus-rsp-hle-9999.ebuild
Created attachment 289149 [details] games-emulation/mupen64plus-rsp-z64-9999.ebuild
Created attachment 289151 [details] games-emulation/mupen64plus-ui-console-9999.ebuild
Created attachment 289153 [details] games-emulation/mupen64plus-video-arachnoid-9999.ebuild
Created attachment 289155 [details] games-emulation/mupen64plus-video-glide64-9999.ebuild
Created attachment 289157 [details] games-emulation/mupen64plus-video-rice-9999.ebuild
Created attachment 289159 [details] games-emulation/mupen64plus-video-z64-9999.ebuild

Thanks Jared and Franz, added with some minor works-for-me changes to gamerlay.
Created attachment 289225 [details] games-emulation/mupen64plus-9999.ebuild (meta)
Created attachment 289227 [details] games-emulation/mupen64plus-audio-sdl-9999.ebuild
Created attachment 289229 [details] games-emulation/mupen64plus-9999.ebuild (meta)
Created attachment 289231 [details] games-emulation/mupen64plus-audio-sdl-9999.ebuild
Created attachment 289233 [details] games-emulation/mupen64plus-core-9999.ebuild
Created attachment 289235 [details] games-emulation/mupen64plus-input-sdl-9999.ebuild
Created attachment 289237 [details] games-emulation/mupen64plus-rsp-hle-9999.ebuild
Created attachment 289239 [details] games-emulation/mupen64plus-rsp-z64-9999.ebuild
Created attachment 289241 [details] games-emulation/mupen64plus-ui-console-9999.ebuild
Created attachment 289243 [details] games-emulation/mupen64plus-video-arachnoid-9999.ebuild
Created attachment 289245 [details] games-emulation/mupen64plus-video-glide64-9999.ebuild
Created attachment 289247 [details] games-emulation/mupen64plus-video-rice-9999.ebuild
Created attachment 289249 [details] games-emulation/mupen64plus-video-z64-9999.ebuild

@Marcel Unbehaun: I tested gamerlay and it just fails with the following (btw., my version worked):

 * Failed Patch: mupen64plus-core-minizip.patch !
 * ( /var/lib/layman/gamerlay/games-emulation/mupen64plus-core/files/mupen64plus-core-minizip.patch )
 *
 * Include in your bugreport the contents of:
 *
 *   /var/tmp/portage/games-emulation/mupen64plus-core-9999/temp/mupen64plus-core-minizip.patch.out
 *
 * ERROR: games-emulation/mupen64plus-core-9999 failed (prepare phase):
 *   Failed Patch: mupen64plus-core-minizip.patch!
 *
 * Call stack:
 *   ebuild.sh, line 91: Called src_prepare
 *   environment, line 2577: Called epatch '/var/lib/layman/gamerlay/games-emulation/mupen64plus-core/files/mupen64plus-core-minizip.patch'
 *   environment, line 1244: Called die
 * The specific snippet of code:
 *   die "Failed Patch: ${patchname}!";
 *
 * If you need support, post the output of 'emerge --info =games-emulation/mupen64plus-core-9999',
 * the complete build log and the output of 'emerge -pqv =games-emulation/mupen64plus-core-9999'.
 * This ebuild is from an overlay named 'gamerlay-stable': '/var/lib/layman/gamerlay/'
 * The complete build log is located at '/var/tmp/portage/games-emulation/mupen64plus-core-9999/temp/build.log'.
 * The ebuild environment file is located at '/var/tmp/portage/games-emulation/mupen64plus-core-9999/temp/environment'.
 * S: '/var/tmp/portage/games-emulation/mupen64plus-core-9999/work/mupen64plus-core-9999'

Franz, "my version" worked fine this morning, but you're right: now it doesn't build here anymore either. Assuming some upstream changes. I'll probably look into this next week or simply commit your latest ebuild :}

Comment on attachment 288877 [details, diff] games-emulation/mupen64plus/files/mupen64plus-minizip.patch: fixed upstream; removed obsolete patch in gamerlay.

Created attachment 289269 [details] games-emulation/mupen64plus-9999.ebuild

Another minor revision:
1. Changed the zlib dependency to the following, so it should work with both stable and unstable versions:
   || ( <sys-libs/zlib-1.2.5.1-r1 >=sys-libs/zlib-1.2.5.1-r2[minizip] )
2. Removed the dependency on virtual/glu, as it appears to be fully redundant with virtual/opengl.
3. Updated the description to be a bit more useful and informative (knowing that it's a fork of a dead project isn't very helpful :-) )

Marcel, since I don't use unstable zlib, can you confirm that this should now work without any additional patches? That appears to be the case after the upstream changes you mentioned, but I just want to be sure.
Also, the LIRC bug has been fixed upstream, so that should no longer be a problem after a fresh build. Thanks.

Hi Jared, I can confirm that both the lirc and the minizip bugs were fixed upstream. One commit fixed lirc and another fixed zlib/minizip (cmake checks which version is installed).

Created attachment 289313 [details] games-emulation/mupen64plus-core-9999.ebuild
Created attachment 289315 [details] games-emulation/mupen64plus-9999.ebuild (meta)
Created attachment 289317 [details] games-emulation/mupen64plus-audio-sdl-9999.ebuild
Created attachment 289319 [details] games-emulation/mupen64plus-core-9999.ebuild
Created attachment 289321 [details] games-emulation/mupen64plus-input-sdl-9999.ebuild
Created attachment 289323 [details] games-emulation/mupen64plus-rsp-hle-9999.ebuild
Created attachment 289325 [details] games-emulation/mupen64plus-rsp-z64-9999.ebuild
Created attachment 289327 [details] games-emulation/mupen64plus-ui-console-9999.ebuild
Created attachment 289329 [details] games-emulation/mupen64plus-video-arachnoid-9999.ebuild
Created attachment 289331 [details] games-emulation/mupen64plus-video-glide64-9999.ebuild
Created attachment 289333 [details] games-emulation/mupen64plus-video-rice-9999.ebuild
Created attachment 289335 [details] games-emulation/mupen64plus-video-z64-9999.ebuild

Created attachment 304585 [details] games-emulation/mupen64plus-9999.ebuild

Not sure how I missed this before, but I had the shared data directory set directly to ${GAMES_DATADIR}/ rather than ${GAMES_DATADIR}/${PN}/. This small update corrects that problem.

Ebuild for the new frontend is here.

Created attachment 350058 [details] Updated mupen64plus-9999 ebuild

I modified Jared's latest ebuild:
- Use mercurial_fetch from the mercurial eclass instead of calling hg clone manually.
- Default USE=plugins on.
- Disable internal CFLAGS by setting an empty OPTFLAGS.
(see comment in ebuild for details)

Created attachment 350088 [details] games-emulation/mupen64plus-9999.ebuild

Another ebuild update:
- Utilize USE_EXPAND for optional plugins. This allows enabling individual plugins. See below for a few known issues.
- Remove verbose compiler output I accidentally left in when debugging module builds.
- Check that the file plugindir/RELEASE exists before attempting to install it.

NOTES:
* As the comment in the ebuild says, you need to add USE_EXPAND="MUPEN64_PLUGINS_RSP MUPEN64_PLUGINS_VIDEO" to your make.conf for the USE flags to display properly.
* The USE_EXPAND plugins all assume the repositories have the same base URI, in the form PLUGIN_BASE_URI/mupen64plus-PLUGINTYPE-PLUGINNAME. This may not be true for all future plugins, but is true for all current ones.
* I assume the glide64 plugin is the only one that depends on GLEW; maybe someone can correct me if this is wrong.

Created attachment 350376 [details] games-emulation/mupen64plus-9999.ebuild

Another update:
- Always run make with DEBUG=1. This option only adds -g to CFLAGS and doesn't strip binaries. Portage handles stripping for us.
- Fix disabling compiling against libsamplerate. The correct option is NO_SRC=1.
- Add an optional speex dependency. The speex resampler is used by the SDL audio plugin.
- Unconditionally disable OSS support.
- Use local variables in src_compile and src_install instead of depending on accidental globals in src_install.
- Fix a minor copy-paste bug on my part in src_install.

Created attachment 354052 [details] games-emulation/mupen64plus-9999.ebuild

- Add a glew dependency for the z64 video plugin. It doesn't compile with glew-1.10.0 because of bug 477948; 1.10.0-r1 is in the tree and fixes this issue.

I will work on this in a few days' time.

Since 2.0 is out, we'll add that instead, following bug #482614. If you really feel like needing live ebuilds, please reopen.
https://bugs.gentoo.org/show_bug.cgi?id=385297
CC-MAIN-2022-05
en
refinedweb
Thread Abort Error in C#

Hello, I am receiving the following error in my C# Ice server:

!! 8/07/2020 19:09:37:480 error: exception in endpoint host resolver thread Ice.HostResolver:
System.Threading.ThreadAbortException: Thread was being aborted.
   at System.Threading.Monitor.ObjWait(Boolean exitContext, Int32 millisecondsTimeout, Object obj)
   at System.Threading.Monitor.Wait(Object obj, Int32 millisecondsTimeout, Boolean exitContext)
   at IceInternal.EndpointHostResolver.run() in C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142\csharp\src\Ice\EndpointHostResolver.cs:line 102
   at IceInternal.EndpointHostResolver.HelperThread.Run() in C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142\csharp\src\Ice\EndpointHostResolver.cs:line 253
!! 8/07/2020 19:09:37:488 error: exception in `Ice.ThreadPool.Client' thread Ice.ThreadPool.Client-0:

I searched the forums and found this:

However, I do not know what 'abort a thread while it is inside the run time' means. I guessed that it means aborting a thread inside an 'I' class or something that it calls. Therefore, as a debugging step, I simplified my 'I' class to be:

public class ScummWebServerI : ScummWebServerDisp_
{
    ...
    public override void Init(string gameName, string gameId, string signalRConnectionId, Dictionary<string, byte[]> saveStorage, Current current = null)
    {
        int x = 4;
    }
    ...
}

My logic here is that simply setting x to 4 will not abort any threads. I called 'Init' from two different clients and got the same behaviour: the client call to 'Init' returned after a LONG time, but 'Init' was never called and the above error was logged. Because calling from either client, a JS client and a C# client, causes the same behaviour, I feel it is probably an issue with the server. Please let me know what else you need.
Hi Andrew,

Can you try running your server with the debugger attached and getting the stacks of all threads? This will give us a clue to figure out why Abort is being called.

Cheers,
Jose

I am now withdrawing this question and apologise for wasting your time. The issue was caused by a simple 'port is already in use' error which was hidden somewhere else. Please resolve or remove this question. Thanks
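For anyone who hits the same symptom, a hypothetical pre-flight check like the one below can surface a hidden "address already in use" condition before the Ice runtime's resolver/thread-pool errors obscure it. The class and method names here are illustrative, not part of the ZeroC Ice API:

```csharp
using System.Net;
using System.Net.Sockets;

// Hypothetical helper: try to bind the port the Ice object adapter
// will listen on. If another process already holds it, this fails
// with AddressAlreadyInUse immediately, instead of the failure being
// buried behind later resolver/thread-pool log noise.
static class PortCheck
{
    public static bool IsFree(int port)
    {
        try
        {
            var listener = new TcpListener(IPAddress.Any, port);
            listener.Start();
            listener.Stop();
            return true;
        }
        catch (SocketException e) when (e.SocketErrorCode == SocketError.AddressAlreadyInUse)
        {
            return false;
        }
    }
}
```

Calling something like `PortCheck.IsFree(10000)` before initializing the communicator would have pointed straight at the real problem in this thread.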
https://forums.zeroc.com/discussion/46739/thread-abort-error-in-c
import "github.com/iotexproject/iotex-core/action/protocol/vote/candidatesutil"

CandidatesPrefix is the prefix of the key of candidateList.

ConstructKey constructs a key for candidates storage.

func GetMostRecentCandidateMap(sm protocol.StateManager, blkHeight uint64) (map[hash.Hash160]*state.Candidate, error)
GetMostRecentCandidateMap gets the most recent candidateMap from the trie.

LoadAndAddCandidates loads candidates from the trie and adds a new candidate.

LoadAndDeleteCandidates loads candidates from the trie and deletes a candidate if it exists.

func LoadAndUpdateCandidates(sm protocol.StateManager, blkHeight uint64, addr string, votingWeight *big.Int) error
LoadAndUpdateCandidates loads candidates from the trie and updates an existing candidate.

Package candidatesutil imports 8 packages and is imported by 2 packages. Updated 2019-08-20.
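The names above (a fixed CandidatesPrefix plus a per-height key constructor) suggest a height-versioned key scheme common in key-value state stores. A minimal stand-alone sketch of that pattern follows; the prefix value and function shape are illustrative assumptions, not the package's actual implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// candidatesPrefix is an illustrative stand-in for CandidatesPrefix:
// a fixed namespace prefix for candidate-list keys.
const candidatesPrefix = "Candidates."

// constructKey sketches a per-height storage key: the prefix followed
// by the block height in big-endian order, so keys for consecutive
// heights sort lexicographically. This is an analog of ConstructKey,
// not its actual implementation.
func constructKey(height uint64) []byte {
	key := make([]byte, 0, len(candidatesPrefix)+8)
	key = append(key, candidatesPrefix...)
	var h [8]byte
	binary.BigEndian.PutUint64(h[:], height)
	return append(key, h[:]...)
}

func main() {
	fmt.Printf("key for height 100: %x\n", constructKey(100))
}
```

Big-endian height encoding is the usual choice here because it lets a range scan over the store return candidate lists in height order, which is what a "most recent candidate map" lookup relies on.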
https://godoc.org/github.com/iotexproject/iotex-core/action/protocol/vote/candidatesutil
CC-MAIN-2019-51
find Korea Postal address

Getting Started

You should ensure that you add the router as a dependency in your flutter project.

dependencies:
  kopo: "^0.1.1"

Setup iOS

Opt in to the embedded views preview by adding a boolean property to the app's Info.plist file, with the key io.flutter.embedded_views_preview and the value YES.

<key>io.flutter.embedded_views_preview</key>
<true/>

Example

import 'package:kopo/kopo.dart';

MaterialButton(
  child: Text('find Korea Postal address'),
  onPressed: () async {
    KopoModel model = await Navigator.push(
      context,
      CupertinoPageRoute(
        builder: (context) => Kopo(),
      ),
    );
  },
),
https://pub.dev/documentation/kopo/latest/